TD Methods Applied to Mixture of Experts for Learning 9 X 9 Go Evaluation Function
The temporal difference (TD) method is applied to a committee of neural network experts to learn a board evaluation function for the Oriental board game Go. The game has simple rules but requires complex strategies to play well, and conventional tree-search algorithms for computer games make poor Go programs. The game of Go is therefore an ideal problem domain for exploring machine learning algorithms. Here, the neural networks learned a board evaluation function for Go played on a 9 x 9 board. Two learning algorithms, namely hybrid mixture of experts (HME) and Meta-Pi, are used to train the neural network experts. Both algorithms learned good Go evaluation functions, and the neural-network-based Go engines were able to defeat a public-domain rule-based program more than 50% of the time. The performance of the mixture networks is compared with that of a single feedforward network trained similarly.
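The combination the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the experts here are simple linear models over hypothetical board features, a softmax gate blends their scalar value estimates, and a TD(0)-style update moves the gated prediction toward the bootstrapped target. The feature size, expert count, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 81   # one feature per point of a 9 x 9 board (assumption)
N_EXPERTS = 3     # illustrative; the paper's expert count may differ
ALPHA = 0.005     # learning rate (assumption)
GAMMA = 1.0       # no discounting within a single game

W_experts = rng.normal(scale=0.1, size=(N_EXPERTS, N_FEATURES))
W_gate = rng.normal(scale=0.1, size=(N_EXPERTS, N_FEATURES))

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def value(x):
    """Gated mixture output: V(x) = sum_i g_i(x) * v_i(x)."""
    g = softmax(W_gate @ x)    # gating coefficients, sum to 1
    v = W_experts @ x          # each expert's scalar value estimate
    return float(g @ v), g, v

def td_update(x, x_next, reward, terminal):
    """TD(0): move V(x) toward reward + gamma * V(x_next)."""
    v_cur, g, v_each = value(x)
    v_next = 0.0 if terminal else value(x_next)[0]
    delta = reward + GAMMA * v_next - v_cur
    for i in range(N_EXPERTS):
        # Expert gradient: d V / d W_experts[i] = g_i * x.
        W_experts[i] += ALPHA * delta * g[i] * x
        # Gate gradient via the softmax Jacobian: g_i * (v_i - V) * x.
        W_gate[i] += ALPHA * delta * g[i] * (v_each[i] - v_cur) * x
    return delta

# Toy check: repeated updates on a fixed terminal position with reward +1
# (a won game) drive its value estimate toward 1.
x = rng.normal(size=N_FEATURES)
for _ in range(1000):
    td_update(x, x, reward=1.0, terminal=True)
```

In self-play training the reward would come only from the game outcome, with intermediate positions bootstrapping off the network's own next-state estimate, which is the essential property of the TD method.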
R. Zaman and D. C. Wunsch, "TD Methods Applied to Mixture of Experts for Learning 9 X 9 Go Evaluation Function," Proceedings of the International Joint Conference on Neural Networks, vol. 6, pp. 3734-3739, Institute of Electrical and Electronics Engineers (IEEE), Jul 1999.
The definitive version is available at https://doi.org/10.1109/IJCNN.1999.830746
International Joint Conference on Neural Networks (IJCNN'99) (1999: Jul. 10-16, Washington, DC)
Electrical and Computer Engineering
Article - Conference proceedings
© 1999 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.