DBSDA: Lowering the Bound of Misclassification Rate for Sparse Linear Discriminant Analysis via Model Debiasing


Linear discriminant analysis (LDA) is a well-known technique for linear classification, feature extraction, and dimension reduction. To improve the accuracy of LDA under high-dimension, low-sample-size (HDLSS) settings, shrunken estimators such as the Graphical Lasso can be used to strike a balance between bias and variance. Although an estimator with induced sparsity attains a faster convergence rate, the bias it introduces may also degrade performance. In this paper, we theoretically analyze how the sparsity and the convergence rate of the precision matrix (inverse covariance matrix) estimator affect classification accuracy by proposing an analytic model of the upper bound on the LDA misclassification rate. Guided by this model, we propose a novel classifier, DBSDA, which improves classification accuracy through debiasing. Theoretical analysis shows that DBSDA possesses a lower upper bound on the misclassification rate and better asymptotic properties than sparse LDA (SDA). We conduct experiments on both synthetic and real-world datasets to confirm the correctness of our theoretical analysis and to demonstrate the superiority of DBSDA over LDA, SDA, and other downstream competitors under HDLSS settings.
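As a rough illustration of the pipeline the abstract describes — estimating a sparse precision matrix with the Graphical Lasso and plugging it into the LDA decision rule — the sketch below uses scikit-learn on hypothetical synthetic data. The toy dimensions, regularization strength, and variable names are assumptions for illustration only; this is plain sparse LDA (SDA), with no debiasing step, and is not the paper's DBSDA method.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Hypothetical two-class Gaussian data with more features than
# samples per class, mimicking an HDLSS-style regime.
p, n = 15, 10
mu0, mu1 = np.zeros(p), np.full(p, 0.6)
X0 = rng.normal(mu0, 1.0, size=(n, p))
X1 = rng.normal(mu1, 1.0, size=(n, p))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Center each class, then fit a sparse (shrunken, hence biased)
# precision matrix estimate via the Graphical Lasso.
Xc = np.vstack([X0 - X0.mean(axis=0), X1 - X1.mean(axis=0)])
gl = GraphicalLasso(alpha=0.5, max_iter=200).fit(Xc)
Omega = gl.precision_  # estimated inverse covariance matrix

# LDA rule with equal priors: assign x to class 1 when
# (x - (m0 + m1)/2)^T Omega (m1 - m0) > 0.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
w = Omega @ (m1 - m0)
scores = (X - (m0 + m1) / 2) @ w
pred = (scores > 0).astype(int)
acc = (pred == y).mean()
```

Replacing `Omega` here with a debiased estimate is, per the abstract, what lowers the upper bound on the misclassification rate.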


Mathematics and Statistics

Second Department

Engineering Management and Systems Engineering

Third Department

Computer Science

Research Center/Lab(s)

Intelligent Systems Center

Keywords and Phrases

Analytical models; Classification; Convergence; Covariance matrices; Debiasing; Estimation; Linear discriminant analysis; Sociology; Sparsity; Upper bound

International Standard Serial Number (ISSN)


Document Type

Article - Journal

Document Version


File Type





© 2018 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Mar 2019