Abstract

In most domain adaption approaches, all features are used for domain adaption. However, not every feature is beneficial for domain adaption; in such cases, incorrectly involving all features may degrade performance. In other words, to make a model trained on the source domain work well on the target domain, it is desirable to find invariant features for domain adaption rather than using all features. However, invariant features across domains may lie in a higher-order space instead of the original feature space. Moreover, the discriminative ability of some invariant features, such as shared background information, is weak, and such features need to be further filtered. Therefore, in this paper, we propose a novel domain adaption algorithm based on an explicit feature map and feature selection. The data are first represented by a kernel-induced explicit feature map, so that high-order invariant features can be revealed. Then, by minimizing the marginal distribution difference, the conditional distribution difference, and the model error, the invariant discriminative features are effectively selected. This problem is NP-hard to solve, so we propose to relax it and solve it with a cutting plane algorithm. Experimental results on six real-world benchmarks demonstrate the effectiveness and efficiency of the proposed algorithm, which outperforms many state-of-the-art domain adaption approaches.
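The abstract's pipeline — lift the data with a kernel-induced explicit feature map, then keep the lifted features whose distributions match across domains — can be illustrated with a minimal sketch. This is not the authors' method: the degree-2 polynomial map, the per-feature mean-difference criterion (a linear-kernel marginal MMD per dimension), and all function names below are simplifying assumptions standing in for the paper's full objective, which also includes the conditional distribution difference and the model error.

```python
import numpy as np

def poly2_feature_map(X):
    """Explicit feature map of a degree-2 polynomial kernel:
    each row [x_1..x_d] becomes [x_1..x_d, x_i * x_j for all i, j],
    so second-order (higher-order) features appear explicitly."""
    n, d = X.shape
    cross = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)
    return np.hstack([X, cross])

def marginal_gap_per_feature(Zs, Zt):
    """Squared difference of source/target means, per lifted feature.
    A stand-in for the marginal distribution difference in the abstract."""
    return (Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2

def select_invariant_features(Xs, Xt, k):
    """Lift both domains, then keep the k lifted features with the
    smallest cross-domain marginal gap (the 'invariant' ones)."""
    Zs, Zt = poly2_feature_map(Xs), poly2_feature_map(Xt)
    gap = marginal_gap_per_feature(Zs, Zt)
    idx = np.argsort(gap)[:k]  # indices of the k most invariant features
    return idx, Zs[:, idx], Zt[:, idx]
```

In the paper, feature selection is instead posed as a single combinatorial optimization (hence NP-hardness and the cutting plane relaxation); the greedy ranking above only conveys the intuition of discarding domain-variant features.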

Department(s)

Engineering Management and Systems Engineering

Comments

Appalachian Regional Commission, Grant 1121720013

Keywords and Phrases

Distribution distance; domain adaption; feature selection; transfer learning

International Standard Serial Number (ISSN)

2162-2388; 2162-237X

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Apr 2019

PubMed ID

30176608
