TROP-ELM: A Double-Regularized ELM Using LARS and Tikhonov Regularization

Abstract

In this paper, an improvement of the optimally pruned extreme learning machine (OP-ELM), in the form of an L2 regularization penalty applied within the OP-ELM, is proposed. The OP-ELM originally proposes a wrapper methodology around the extreme learning machine (ELM), meant to reduce the sensitivity of the ELM to irrelevant variables and to obtain more parsimonious models through neuron pruning. The proposed modification of the OP-ELM uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (the regression between the hidden layer and the output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods such as support vector machines and Gaussian processes, as well as the original ELM and OP-ELM, on 11 different data sets; it systematically outperforms the OP-ELM (on average 27% better mean square error) and provides more reliable results, in terms of the standard deviation of the results, while always remaining less than one order of magnitude slower than the OP-ELM. © 2011 Elsevier B.V.
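
The abstract describes the cascade only in prose; the sketch below is a minimal, hypothetical illustration of that two-step idea, using scikit-learn's Lars and Ridge estimators as stand-ins for the LARS ranking and Tikhonov steps. The random hidden-layer construction, the fixed ridge penalty, and the prefix search over the neuron ranking are all assumptions made here for illustration; the paper's own implementation and selection criterion may differ.

```python
# Illustrative sketch of an L1-then-L2 cascade over ELM hidden neurons.
# Not the authors' implementation; names and parameters are assumptions.
import numpy as np
from sklearn.linear_model import Lars, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy regression data (synthetic, for illustration only).
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Basic ELM hidden layer: random input weights, sigmoid-like activation.
n_neurons = 50
W = rng.standard_normal((X.shape[1], n_neurons))
b = rng.standard_normal(n_neurons)
H = np.tanh(X @ W + b)  # hidden-layer output matrix

# Step 1 (L1): rank hidden neurons with LARS. With plain LARS, variables
# enter the active set one at a time and never leave, so `active_` lists
# the neurons in order of entry, i.e. a ranking.
lars = Lars(n_nonzero_coefs=n_neurons).fit(H, y)
ranking = list(lars.active_)

# Step 2 (L2): for each prefix of the ranking, refit the output weights
# with Tikhonov (ridge) regularization and keep the prefix with the best
# cross-validated MSE. The fixed alpha is an arbitrary placeholder.
best_k, best_mse = None, np.inf
for k in range(1, len(ranking) + 1):
    cols = ranking[:k]
    mse = -cross_val_score(Ridge(alpha=1e-2), H[:, cols], y,
                           scoring="neg_mean_squared_error", cv=5).mean()
    if mse < best_mse:
        best_k, best_mse = k, mse

# Final model: ridge regression on the selected (pruned) neurons.
model = Ridge(alpha=1e-2).fit(H[:, ranking[:best_k]], y)
print(f"kept {best_k} of {n_neurons} neurons, CV MSE {best_mse:.4f}")
```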

Department(s)

Engineering Management and Systems Engineering

Keywords and Phrases

ELM; LARS; OP-ELM; Regularization; Ridge regression; Tikhonov regularization

International Standard Serial Number (ISSN)

1872-8286; 0925-2312

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Elsevier. All rights reserved.

Publication Date

01 Sep 2011
