In this paper, the generalization error of classifiers trained under traditional learning regimes is demonstrated to increase in the presence of big-data challenges such as noise and heterogeneity. To reduce this error while mitigating vanishing gradients, a deep neural network (NN)-based framework with a direct error-driven learning scheme is proposed. To reduce the impact of heterogeneity, an overall cost comprising the learning error and an approximate generalization error is defined, and two NNs are utilized to estimate these costs, respectively. To mitigate the issue of vanishing gradients, a direct error-driven learning regime is proposed in which the error is utilized directly for learning rather than propagated backward through the layers. The proposed approach is demonstrated to improve accuracy by 7% over traditional learning regimes, mitigating the vanishing-gradient problem and improving generalization by 6%.
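The record does not specify the authors' exact update rule, but the general idea of a direct error-driven regime, where the output error drives every layer's update instead of being backpropagated through the chain rule, can be sketched as follows. This is a minimal illustration in the spirit of direct-feedback-style learning on hypothetical synthetic data, not the paper's algorithm; the network sizes, feedback matrix `B1`, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical synthetic binary-classification data (for illustration only).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# A small 4-8-1 network.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
# Fixed random feedback matrix: carries the output error directly to the
# hidden layer, so no derivative chain through W2 is needed.
B1 = rng.normal(scale=0.5, size=(1, 8))

lr = 0.1
losses = []
for epoch in range(200):
    h = sigmoid(X @ W1)           # hidden activations
    out = sigmoid(h @ W2)         # network output
    e = out - y                   # output error
    losses.append(float(np.mean(e ** 2)))

    delta_out = e * out * (1 - out)            # error at the output layer
    dW2 = h.T @ delta_out / len(X)
    # Direct error-driven step: project the output error straight to the
    # hidden layer through the fixed matrix B1.
    dW1 = X.T @ ((delta_out @ B1) * h * (1 - h)) / len(X)

    W2 -= lr * dW2
    W1 -= lr * dW1

print("initial MSE:", losses[0], "final MSE:", losses[-1])
```

Because each layer receives the raw output error through a fixed projection, the update avoids the long product of layer derivatives that causes gradients to vanish in deep backpropagation.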

Meeting Name

3rd International Neural Network Society Conference on Big Data and Deep Learning, INNS BDDL 2018 (2018: Apr. 17-19, Bali, Indonesia)


Department

Electrical and Computer Engineering

Second Department

Mathematics and Statistics

Research Center/Lab(s)

Intelligent Systems Center

Second Research Center/Lab

Center for High Performance Computing Research


This research was supported in part by NSF I/UCRC award IIP 1134721 and by the Intelligent Systems Center.

Keywords and Phrases

Big data; Cost benefit analysis; Errors; Error-driven learning; Generalization error; Heterogeneity; Learning error; Noise; Overall costs; Traditional learning; Vanishing gradients; Deep neural networks

Document Type

Article - Conference proceedings

Document Version

Final Version


© 2018 The Authors, All rights reserved.

Creative Commons Licensing

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Publication Date

01 Apr 2018