Abstract

Feature selection provides a useful method for reducing the size of large data sets while maintaining data integrity, thereby improving the accuracy of neural networks and other classifiers. However, running multiple feature selection models and their accompanying classifiers can make results difficult to interpret. To this end, we present a data-driven methodology called Meta-Best that not only returns a single feature set related to a classification target, but also returns an optimal size and ranks the features by importance within the set. This proposed methodology is tested on six distinct targets from the well-known REGARDS dataset: Deceased, Self-Reported Diabetes, Light Alcohol Abuse Risk, Regular NSAID Use, Current Smoker, and Self-Reported Stroke. The methodology is shown to improve the classification performance of neural networks by 0.056 on the ROC Area Under the Curve metric compared to a control test with no feature selection.
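The comparison described in the abstract, training a neural network on a ranked, size-limited feature subset versus all features and scoring both with ROC AUC, can be sketched generically. This is not the paper's Meta-Best method; the correlation-based ranking, the subset size `k`, and the synthetic data below are illustrative assumptions only.

```python
# Hedged sketch of filter-style feature selection evaluated with ROC AUC.
# NOT the Meta-Best methodology: ranking criterion, k, and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a large tabular data set with a binary target.
X, y = make_classification(n_samples=1000, n_features=50, n_informative=8,
                           n_redundant=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank features by absolute correlation with the target (most informative first).
scores = np.abs([np.corrcoef(X_tr[:, j], y_tr)[0, 1] for j in range(X_tr.shape[1])])
ranking = np.argsort(scores)[::-1]
k = 10                      # illustrative subset size, not an optimized one
top_k = ranking[:k]

def auc_with(cols):
    """Train a small neural network on the given columns; return test ROC AUC."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X_tr[:, cols], y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te[:, cols])[:, 1])

auc_all = auc_with(np.arange(X.shape[1]))   # control: no feature selection
auc_sel = auc_with(top_k)                   # top-k selected features
print(f"AUC all features: {auc_all:.3f}, AUC top-{k}: {auc_sel:.3f}")
```

The difference `auc_sel - auc_all` is the same kind of quantity as the 0.056 improvement reported in the abstract, though the actual paper derives its subset size and ranking from multiple feature selection models rather than a single correlation filter.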

Department(s)

Electrical and Computer Engineering

International Standard Book Number (ISBN)

978-172811990-8

International Standard Serial Number (ISSN)

1557-170X

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jul 2020
