Distributed Min-Max Learning Scheme for Neural Networks with Applications to High-Dimensional Classification

Abstract

In this article, a novel learning methodology is introduced for the problem of classification of high-dimensional data. In particular, the challenges posed by high-dimensional data sets are addressed by formulating an L1-regularized zero-sum game in which optimal sparsity is estimated through a two-player game between the penalty coefficients (sparsity parameters) and the deep neural network weights. To solve this game, a distributed learning methodology is proposed in which additional variables are utilized to derive layerwise cost functions. Finally, an alternating minimization approach is developed to solve the problem, where the Nash solution provides optimal sparsity and compensation through the classifier. The proposed learning approach is implemented in a parallel and distributed environment through a novel computational algorithm. The efficiency of the approach is demonstrated both theoretically and empirically on nine data sets.
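The two-player structure described above can be illustrated with a minimal sketch: one player (the classifier weights) descends a classification loss plus an L1 penalty, while the other player (the sparsity parameter) ascends the penalty, with a damping term keeping it bounded. This is an illustrative toy on a linear model, not the paper's distributed layerwise algorithm; the data, step sizes, and damping coefficient `mu` are all assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data; a linear model stands in for the network.
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.1 * rng.normal(size=100))

w = np.zeros(20)            # player 1: classifier weights (minimizer)
lam = 0.1                   # player 2: L1 penalty coefficient (maximizer)
eta_w, eta_lam, mu = 0.01, 0.001, 0.5

for _ in range(500):
    # Player 1: subgradient step on hinge loss + lam * ||w||_1.
    margins = y * (X @ w)
    grad = -(X.T @ (y * (margins < 1))) / len(y) + lam * np.sign(w)
    w -= eta_w * grad

    # Player 2: ascent on the penalty, damped so lam stays finite;
    # the fixed point balances sparsity against classification accuracy.
    lam += eta_lam * (np.abs(w).sum() - mu * lam)
    lam = max(lam, 0.0)
```

In the alternating scheme, neither player can unilaterally improve at the fixed point, which is the Nash-equilibrium intuition behind the min-max formulation.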

Department(s)

Mathematics and Statistics

Comments

National Science Foundation, Grant IIP 1134721

Keywords and Phrases

Distributed Optimization; Machine Learning; Neural Networks

International Standard Serial Number (ISSN)

2162-2388; 2162-237X

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2021 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Oct 2021

PubMed ID

32941155
