Effects of Learning Rate on the Performance of the Population Based Incremental Learning Algorithm

Ganesh K. Venayagamoorthy, Missouri University of Science and Technology
K. A. Folly


Abstract

The effect of the learning rate (LR) on the performance of a newly introduced evolutionary algorithm called population-based incremental learning (PBIL) is investigated in this paper. PBIL is a technique that combines a simple genetic algorithm (GA) with competitive learning (CL). Although CL is usually studied in the context of artificial neural networks (ANNs), it plays a vital role in PBIL: the idea of creating a prototype vector, as in learning vector quantization (LVQ), is central to the algorithm. In PBIL, the crossover operator of GAs is abstracted away and the role of the population is redefined. PBIL maintains a real-valued probability vector (PV), or prototype vector, from which solutions are generated. The PV governs the random bitstrings generated by PBIL and is updated through learning to create new individuals. The setting of the LR can greatly affect the performance of PBIL; however, its effect is not yet fully understood. In this paper, PBIL is used to design power system stabilizers (PSSs) for a multi-machine power system. Four case studies with different learning rate patterns are investigated: fixed LR; purely adaptive LR; fixed LR followed by adaptive LR; and adaptive LR followed by fixed LR. It is shown that a smaller learning rate leads to more exploration by the algorithm, which introduces more diversity in the population at the cost of slower convergence. A higher learning rate, on the other hand, means more exploitation by the algorithm and can therefore lead to premature convergence in the case of a fixed LR. Hence, in setting the LR, a trade-off is needed between exploration and exploitation.
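
To illustrate the role of the learning rate described above, the following is a minimal Python sketch of a generic PBIL generation step (not the paper's implementation): the probability vector is sampled to produce a population of bitstrings, the fittest bitstring is selected, and the PV is shifted toward it by a fraction LR. The one-max fitness function, population size, and vector length are hypothetical stand-ins; the paper's PSS design uses its own problem-specific encoding, fitness function, and LR schedules.

import numpy as np

def pbil_step(pv, fitness_fn, pop_size=50, lr=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Sample a population of random bitstrings governed by the probability vector.
    pop = (rng.random((pop_size, pv.size)) < pv).astype(int)
    # Select the best individual under the fitness function.
    best = pop[np.argmax([fitness_fn(ind) for ind in pop])]
    # Competitive-learning-style update: move the PV toward the best
    # individual by a fraction LR (larger LR means more exploitation,
    # smaller LR means more exploration and slower convergence).
    return pv * (1.0 - lr) + lr * best

# Toy usage with a hypothetical one-max fitness (not the PSS objective):
pv = np.full(16, 0.5)                                  # maximum-uncertainty start
for gen in range(100):
    pv = pbil_step(pv, fitness_fn=lambda b: b.sum(), lr=0.05)

In this sketch a fixed LR is passed to every generation; the adaptive-LR cases studied in the paper would instead vary the lr argument over generations.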