Applying Mutual Information for Prototype or Instance Selection in Regression Problems

Abstract

The problem of selecting the patterns to be learned by a model is usually not considered at the time of designing the model itself but is treated as a preprocessing step. Information theory provides a robust theoretical framework for performing input variable selection thanks to the concept of mutual information. Since the computation of mutual information for regression tasks has recently been proposed, this paper presents a new application of the concept of mutual information: not to select variables, but to decide which prototypes should belong to the training data set in regression problems. The proposed methodology consists in deciding whether a prototype should belong to the training set, using the estimation of the mutual information between the variables as the criterion. The novelty of the approach is its focus on prototype selection for regression problems instead of classification, as the majority of the literature deals only with the latter. Another element that distinguishes this work from others is that it is not proposed as an outlier identifier but as an algorithm that determines the best subset of input vectors at the time of building a model to approximate the underlying function. As the experiment section shows, the new method is able to identify a high percentage of the real data set when applied to highly distorted data sets.
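
The abstract does not spell out the exact inclusion criterion, so the following is only a minimal sketch of one plausible reading: a leave-one-out heuristic in which a prototype is discarded if removing it increases a mutual-information estimate between the inputs and the target. The MI estimate here is approximated with scikit-learn's mutual_info_regression (a k-NN based, per-feature estimator, summed as a rough proxy); the function name select_prototypes, the threshold parameter, and the summation are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression


def select_prototypes(X, y, threshold=0.0, n_neighbors=3, random_state=0):
    """Leave-one-out MI heuristic (illustrative, not the paper's exact algorithm).

    A prototype is kept when removing it does not raise the estimated mutual
    information between inputs and target, i.e. the point looks consistent
    with the underlying function rather than with noise.
    """
    n = len(X)
    # MI estimate on the full data set (sum over per-feature MI as a proxy).
    mi_full = mutual_info_regression(
        X, y, n_neighbors=n_neighbors, random_state=random_state
    ).sum()

    keep = np.ones(n, dtype=bool)
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        # MI estimate with prototype i left out.
        mi_loo = mutual_info_regression(
            X[mask], y[mask], n_neighbors=n_neighbors, random_state=random_state
        ).sum()
        # If dropping point i raises the MI estimate by more than `threshold`,
        # the point behaves like a distorted/noisy sample and is excluded.
        if mi_loo - mi_full > threshold:
            keep[i] = False
    return keep


if __name__ == "__main__":
    # Toy usage: a noisy sine with a few heavily distorted samples.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 2 * np.pi, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
    y[:10] += rng.normal(scale=3.0, size=10)  # inject distortion
    mask = select_prototypes(X, y, threshold=0.01)
    print(f"kept {mask.sum()} of {len(mask)} prototypes")
```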

Department(s)

Engineering Management and Systems Engineering

International Standard Book Number (ISBN)

978-2-930307-09-1

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 European Symposium on Artificial Neural Networks, All rights reserved.

Publication Date

01 Dec 2009
