GPU-Accelerated and Parallelized ELM Ensembles for Large-Scale Regression
Abstract
The paper presents an approach for performing regression on large data sets in reasonable time, using an ensemble of Extreme Learning Machines (ELMs). The main purpose and contribution of this paper are to explore how the evaluation of this ensemble of ELMs can be accelerated in three distinct ways: (1) training and model structure selection of the individual ELMs are accelerated by performing these steps on the graphics processing unit (GPU), instead of the processor (CPU); (2) the training of ELM is performed in such a way that computed results can be reused in the model structure selection, making training plus model structure selection more efficient; (3) the modularity of the ensemble model is exploited and the process of model training and model structure selection is parallelized across multiple GPU and CPU cores, such that multiple models can be built at the same time. The experiments show that competitive performance is obtained on the regression tasks, and that the GPU-accelerated and parallelized ELM ensemble achieves attractive speedups over using a single CPU. Furthermore, the proposed approach is not limited to a specific type of ELM and can be employed for a large variety of ELMs. © 2011.
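To make the modularity exploited in point (3) concrete, the following is a minimal NumPy sketch of an ELM ensemble for regression. It is not the paper's implementation: the function names, the use of `tanh` activations, the hidden-layer size, and the averaging combination rule are all illustrative assumptions. Each model's training is an independent least-squares fit, which is what makes the scheme map naturally onto multiple CPU/GPU workers.

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    """Train one ELM: random hidden layer, least-squares output weights.
    (Illustrative sketch, not the paper's code.)"""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights via least squares
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Ensemble: several independently trained ELMs, predictions averaged.
# Each train_elm call is independent of the others, so the loop could be
# distributed across CPU cores or GPUs without synchronization.
rng = np.random.default_rng(0)
X = np.linspace(0, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
models = [train_elm(X, y, n_hidden=50, rng=rng) for _ in range(5)]
y_hat = np.mean([predict_elm(m, X) for m in models], axis=0)
mse = np.mean((y_hat - y) ** 2)
```

Averaging several ELMs damps the variance introduced by each model's random hidden layer, which is one reason ensembles of ELMs remain competitive on regression tasks.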
Recommended Citation
M. van Heeswijk et al., "GPU-Accelerated and Parallelized ELM Ensembles for Large-Scale Regression," Neurocomputing, vol. 74, no. 16, pp. 2430–2437, Elsevier, Sep. 2011.
The definitive version is available at https://doi.org/10.1016/j.neucom.2010.11.034
Department(s)
Engineering Management and Systems Engineering
Keywords and Phrases
ELM; Ensemble methods; GPU; High-performance computing; Parallelization
International Standard Serial Number (ISSN)
1872-8286; 0925-2312
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2024 Elsevier, All rights reserved.
Publication Date
01 Sep 2011