Abstract

This paper addresses the problem of multi-task learning (MTL) in settings where the task assignment is not known. We propose two mechanisms for inferring a task's parameters without task specification: a parameter adaptation method and a parameter selection method. In parameter adaptation, the model's parameters are iteratively updated by a recurrent neural network (RNN) learner, which serves as the mechanism for adapting to different tasks. In the parameter selection method, a parameter matrix is learned beforehand with the tasks known a priori; during testing, a bandit algorithm determines the appropriate parameter vector for the model on the fly. We explore two scenarios of MTL without task specification: continuous learning and reset learning. In continuous learning, the model must adjust its parameters continuously across a number of different tasks without knowing when the task changes, whereas in reset learning the parameters are reset to an initial value to aid the transition between tasks. Results on three real benchmark datasets demonstrate the comparative performance of both models with respect to multiple RNN configurations, MTL algorithms, and bandit selection policies.
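As a rough illustration of the parameter-selection idea only (not the paper's exact algorithm or reward design), the sketch below uses an epsilon-greedy bandit to pick one row of a pre-learned parameter matrix on the fly; the `EpsilonGreedySelector` class, the synthetic parameter matrix, and the reward proxy are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)


class EpsilonGreedySelector:
    """Epsilon-greedy bandit over K candidate parameter vectors (illustrative only)."""

    def __init__(self, num_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = np.zeros(num_arms)
        self.values = np.zeros(num_arms)  # running mean reward per arm

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best arm so far.
        if rng.random() < self.epsilon:
            return int(rng.integers(len(self.values)))
        return int(np.argmax(self.values))

    def update(self, arm, reward):
        # Incremental update of the chosen arm's mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Synthetic demo: one parameter vector per task, learned beforehand.
param_matrix = rng.standard_normal((3, 8))  # 3 tasks, 8-dimensional parameters
true_task_params = param_matrix[1]          # the currently active (unknown) task

selector = EpsilonGreedySelector(num_arms=3, epsilon=0.1)
for step in range(500):
    arm = selector.select()
    # Reward proxy: negative squared distance to the active task's parameters,
    # standing in for the negative prediction loss on incoming data.
    reward = -np.sum((param_matrix[arm] - true_task_params) ** 2)
    selector.update(arm, reward)

print("selected parameter row:", int(np.argmax(selector.values)))  # expect 1
```

In this toy setting the selector converges to the parameter row of the active task because that arm yields the highest (least negative) reward; in practice the reward would come from the model's online performance rather than a known distance.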

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

bandit algorithm; multi-task learning; recurrent network

International Standard Book Number (ISBN)

978-1-7281-1985-4

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jul 2019
