This paper examines the performance of an HDP-type adaptive critic design (ACD) on the game of Go. Go is an ideal problem domain for exploring machine learning: it has simple rules but requires complex strategies to play well. All current commercial Go programs are knowledge-based implementations; they rely on input features and pattern matching along with minimax-type search techniques. However, the extremely high branching factor limits their capabilities, and they remain very weak compared with programs for other games such as chess. The Go-playing ACD presented here consists of a critic network and an action network. The HDP-type critic network learns from training games to predict the cumulative utility function of the current board position, and the action network chooses the next move that maximizes the critic's next-step cost-to-go. After about 6000 different training games against a public-domain program, WALLY, the network (playing White) began to win some of the games and showed slow but steady improvement on test games.
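The critic/action interplay described in the abstract can be sketched as a temporal-difference-style update: the critic is trained toward the current utility plus the discounted value of the next position, and the action step picks the legal move whose resulting position the critic values most. The linear critic, 9x9 feature vector, discount factor, and learning rate below are all illustrative assumptions; the paper itself uses neural networks for both the critic and the action component.

```python
import numpy as np

BOARD = 9 * 9    # hypothetical 9x9 board, flattened into a feature vector
GAMMA = 0.95     # discount factor (assumed; not stated in the abstract)
LR = 0.01        # critic learning rate (assumed)

# Linear critic J(s) = w . phi(s); a stand-in for the paper's neural network.
w = np.zeros(BOARD)

def critic(features):
    """Predicted cumulative utility (cost-to-go) of a board position."""
    return float(w @ features)

def hdp_critic_update(features, utility, next_features):
    """One HDP/TD-style step: move J(s) toward U(s) + gamma * J(s')."""
    global w
    target = utility + GAMMA * critic(next_features)
    error = target - critic(features)
    w += LR * error * features   # gradient step for the linear critic
    return error

def choose_move(features, legal_moves, apply_move):
    """Action step: pick the legal move whose resulting position
    maximizes the critic's next-step cost-to-go."""
    return max(legal_moves, key=lambda m: critic(apply_move(features, m)))
```

With repeated updates on the same transition, the prediction error shrinks geometrically, which is the "slow but steady improvement" one would expect from such a scheme; the real system instead learns from thousands of full games against WALLY.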
R. Zaman et al., "Adaptive Critic Design in Learning to Play Game of Go," Proceedings of the International Conference on Neural Networks (ICNN), IEEE, Jan. 1997.
The definitive version is available at http://dx.doi.org/10.1109/ICNN.1997.611623
Keywords and Phrases
Action Network; Adaptive Critic Design; Critic Network; Cumulative Utility Function; Game of Go; Games of Skill; Learning; Learning (Artificial Intelligence); Neural Net Architecture
© 1997 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.