Per-Sample Prediction Intervals for Extreme Learning Machines
Abstract
Prediction intervals in supervised machine learning bound the region where the true outputs of new samples may fall. They are necessary for separating reliable predictions of a trained model from near-random guesses, for minimizing the rate of false positives, and for other problem-specific tasks in applied machine learning. Many real problems have heteroscedastic stochastic outputs, which explains the need for input-dependent prediction intervals. This paper proposes to estimate the input-dependent prediction intervals with a separate Extreme Learning Machine model, using the variance of its predictions as a correction term accounting for model uncertainty. The variance is estimated from the model's linear output layer with a weighted jackknife method. The methodology is very fast, robust to heteroscedastic outputs, and handles both extremely large datasets and scarce training data.
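The core idea in the abstract can be illustrated with a minimal sketch: train one ELM for the mean prediction, then a second ELM on the residual magnitudes to obtain an input-dependent interval half-width. This is an assumption-laden simplification using only NumPy; the paper's weighted-jackknife correction for model variance is omitted here, and the `ELM` class, data, and the Gaussian z-multiplier are all illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal Extreme Learning Machine: random hidden layer + least-squares linear readout."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random, untrained hidden-layer weights (the defining trait of an ELM)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Only the linear output layer is solved, by ordinary least squares
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Heteroscedastic toy data: noise standard deviation grows with |x|
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.05 + 0.2 * np.abs(X[:, 0]))

mean_model = ELM(n_hidden=50, seed=1).fit(X, y)
residuals = np.abs(y - mean_model.predict(X))

# Second ELM predicts the local error magnitude -> input-dependent interval width
spread_model = ELM(n_hidden=50, seed=2).fit(X, residuals)

z = 1.96  # ~95% multiplier under a rough Gaussian-noise assumption
y_hat = mean_model.predict(X)
half_width = z * np.maximum(spread_model.predict(X), 1e-6)
lower, upper = y_hat - half_width, y_hat + half_width

coverage = np.mean((y >= lower) & (y <= upper))
print(f"empirical coverage: {coverage:.2f}")
```

Because the interval width is itself a prediction, it widens where the data are noisy and narrows where they are not, which is exactly the heteroscedastic behavior the abstract motivates.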
Recommended Citation
A. Akusok et al., "Per-Sample Prediction Intervals for Extreme Learning Machines," International Journal of Machine Learning and Cybernetics, vol. 10, no. 5, pp. 991–1001, Springer, May 2019.
The definitive version is available at https://doi.org/10.1007/s13042-017-0777-2
Department(s)
Engineering Management and Systems Engineering
Keywords and Phrases
Confidence interval; Coverage; ELM; False positives; Heteroscedastic; Prediction interval; Variance estimation
International Standard Serial Number (ISSN)
1868-808X; 1868-8071
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2024 Springer. All rights reserved.
Publication Date
01 May 2019