Neural Networks and Markov Models for the Iterated Prisoner's Dilemma
The study of strategic interaction among a society of agents is often handled using the machinery of game theory. This research examines how a Markov Decision Process (MDP) model may be applied to an important element of repeated game theory: the iterated prisoner's dilemma. Our study uses a Markovian approach to represent the game in a computer simulation environment. We first apply a pure Markov approach to a simplified version of the iterated game, and then formulate the general game as a partially observable Markov decision process (POMDP). Finally, we use a cellular structure as an environment in which players compete and adapt, applying both a simple replacement strategy and a cellular neural network to the environment.
J. E. Seiffertt et al., "Neural Networks and Markov Models for the Iterated Prisoner's Dilemma," Proceedings of the International Joint Conference on Neural Networks, pp. 2860-2866, Institute of Electrical and Electronics Engineers (IEEE), Jan 2009.
The definitive version is available at https://doi.org/10.1109/IJCNN.2009.5178800
2009 International Joint Conference on Neural Networks, IJCNN '09 (2009: Jun. 14-19, Atlanta, GA)
Electrical and Computer Engineering
Article - Conference proceedings
© 2009 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
01 Jan 2009