Feature selection provides a useful method for reducing the size of large data sets while preserving their informative content, thereby improving the accuracy of neural networks and other classifiers. However, running multiple feature selection models and their accompanying classifiers can make the results difficult to interpret. To this end, we present a data-driven methodology called Meta-Best that not only returns a single feature set related to a classification target, but also determines an optimal feature-set size and ranks the features by importance within the set. The proposed methodology is tested on six distinct targets from the well-known REGARDS dataset: Deceased, Self-Reported Diabetes, Light Alcohol Abuse Risk, Regular NSAID Use, Current Smoker, and Self-Reported Stroke. The methodology is shown to improve the classification performance of neural networks by 0.056 in ROC Area Under Curve (AUC) compared to a control test with no feature selection.
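The general workflow the abstract describes can be illustrated with a small sketch. Note this is a hypothetical stand-in, not the authors' Meta-Best algorithm: it aggregates feature rankings from two common selectors (mutual information and random-forest importances), keeps a top-k subset, and compares a neural network's ROC AUC against a no-selection control. The dataset, the choice of selectors, and the fixed k are all illustrative assumptions.

```python
# Hypothetical meta-style feature selection sketch (NOT the authors' Meta-Best
# method): average feature ranks from several selectors, keep the top-k
# features, and compare neural-network ROC AUC against a no-selection control.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a tabular medical dataset.
X, y = make_classification(n_samples=1000, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank features under two selectors; lower rank = more important.
mi_rank = np.argsort(np.argsort(-mutual_info_classif(X_tr, y_tr, random_state=0)))
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(np.argsort(-rf.feature_importances_))
mean_rank = (mi_rank + rf_rank) / 2.0

k = 10  # illustrative fixed size; Meta-Best instead chooses the size data-drivenly
top_features = np.argsort(mean_rank)[:k]

def auc_of(train, test):
    """Train a small neural network and score it by ROC AUC on the test split."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(train, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(test)[:, 1])

auc_control = auc_of(X_tr, X_te)                                    # all 40 features
auc_selected = auc_of(X_tr[:, top_features], X_te[:, top_features])  # top-k only
print(f"control AUC={auc_control:.3f}, selected AUC={auc_selected:.3f}")
```

On real high-dimensional data, discarding uninformative features often raises AUC; on this synthetic example the direction of the gap is not guaranteed, which is why the paper's 0.056 improvement is reported as an average over its six REGARDS targets.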
M. Chaplin et al., "Improved Classification of Medical Data Using Meta-Best Feature Selection," Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), pp. 5602-5605, article no. 9175289, Institute of Electrical and Electronics Engineers, Jul 2020.
The definitive version is available at https://doi.org/10.1109/EMBC44109.2020.9175289
Electrical and Computer Engineering
Article - Conference proceedings
© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.