There has been recent interest in meta-learning systems, i.e., networks that are trained to learn across multiple tasks. This paper focuses on the optimization and generalization of a meta-learning system based on recurrent networks. The optimization study investigates the influence of diverse structures and parameters on performance. We demonstrate the generalization (robustness) of our meta-learning system in learning across multiple tasks, including tasks unseen during the meta-training phase. We introduce a meta-cost function, the Mean Squared Fair Error, that enhances the performance of the system by not penalizing it during transitions to learning a new task. Evaluation results are presented for Boolean and quadratic function datasets. The best performance is obtained using a Long Short-Term Memory (LSTM) topology without a forget gate and with a clipped memory cell. The results demonstrate i) the impact of different LSTM architectures, parameters, and error functions on the meta-learning process; ii) that the Mean Squared Fair Error function improves best-case learning performance; and iii) the robustness of our meta-learning framework, which generalizes well when tested on tasks unseen during meta-training. A comparison between the No-Forget-Gate LSTM and the Gated Recurrent Unit also suggests that the absence of a memory cell tends to degrade performance.
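The core idea behind the Mean Squared Fair Error, that errors incurred immediately after a task switch should not count against the learner, can be sketched as a masked squared-error loss. The following is an illustrative reconstruction only: the function name, the boolean transition mask, and the rule of simply excluding masked steps from the average are assumptions, not the paper's exact definition.

```python
import numpy as np

def mean_squared_fair_error(pred, target, transition_mask):
    """Squared error averaged only over non-transition steps.

    transition_mask[t] is True for steps right after a task switch,
    where the learner cannot yet know the new task; those steps are
    excluded from the penalty ("fair" in the sense of the abstract).
    Hypothetical sketch -- not the paper's published formula.
    """
    keep = ~np.asarray(transition_mask)          # steps that count
    err = (np.asarray(pred) - np.asarray(target)) ** 2
    return err[keep].mean()                      # mean over kept steps

# Example: the large error on the first (transition) step is ignored.
pred = np.array([1.0, 0.0, 0.5])
target = np.array([0.0, 0.0, 1.5])
mask = np.array([True, False, False])
print(mean_squared_fair_error(pred, target, mask))
```

Under this sketch, the ordinary mean squared error over all three steps would be 2/3, while the fair variant averages only the two post-transition steps.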
T. Nguyen et al., "Meta-Learning Related Tasks With Recurrent Networks: Optimization And Generalization," Proceedings of the International Joint Conference on Neural Networks, article no. 8489583, Institute of Electrical and Electronics Engineers, Oct 2018.
The definitive version is available at https://doi.org/10.1109/IJCNN.2018.8489583
Electrical and Computer Engineering
Keywords and Phrases
long short-term memory; meta-learning; performance optimization; recurrent networks
Article - Conference proceedings
© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.
10 Oct 2018