An Adaptive Strategy via Reinforcement Learning for the Prisoner's Dilemma Game
Abstract
The iterated prisoner's dilemma (IPD) is an ideal model for analyzing interactions between agents in complex networks. Since the success of tit-for-tat in Axelrod's tournaments, it has attracted wide interest in the development of novel strategies. This paper studies a new adaptive strategy for the IPD on different complex networks, where agents learn and adapt their strategies through a reinforcement learning method. A temporal difference learning method is applied to design the adaptive strategy and optimize the agents' decision-making process. Previous studies indicated that mutual cooperation is hard to achieve in the IPD. Therefore, three examples based on a square lattice network and a scale-free network are provided to show two features of the adaptive strategy. First, mutual cooperation can be achieved by a group of adaptive agents on the scale-free network, and once the evolution has converged to mutual cooperation, it is unlikely to shift. Second, the adaptive strategy earns a better payoff than other strategies on the square lattice network. Analytical properties are discussed to verify the evolutionary stability of the adaptive strategy.
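The paper's exact learning rule and parameters are not given in this record; as a minimal sketch of the general idea, the following shows a tabular temporal-difference (Q-learning) agent whose state is the pair of moves from the previous round, playing the IPD against tit-for-tat under the standard payoff values (T=5, R=3, P=1, S=0). The agent class, opponent, and parameter choices here are illustrative assumptions, not the authors' implementation.

```python
import random

# Standard IPD payoffs: (row player's payoff, column player's payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

class TDAgent:
    """Illustrative tabular TD (Q-learning) agent for the IPD.

    The state is the (own move, opponent move) pair from the previous
    round; None denotes the opening round."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # maps (state, action) -> estimated value

    def act(self, state):
        # Epsilon-greedy action selection over the two IPD moves.
        if random.random() < self.epsilon:
            return random.choice(('C', 'D'))
        return max(('C', 'D'), key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # One-step TD update toward reward + discounted best next value.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ('C', 'D'))
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def tit_for_tat(state):
    # Cooperates first, then repeats the agent's previous move.
    return state[0] if state else 'C'

def play(rounds=5000, seed=0):
    """Run one IPD match and return the agent's average payoff per round."""
    random.seed(seed)
    agent, state, total = TDAgent(), None, 0
    for _ in range(rounds):
        a, b = agent.act(state), tit_for_tat(state)
        reward, _ = PAYOFF[(a, b)]
        next_state = (a, b)
        agent.update(state, a, reward, next_state)
        state, total = next_state, total + reward
    return total / rounds

avg = play()
```

With a discount factor this high, repeated cooperation (3 per round) outvalues the one-shot temptation payoff against a retaliating opponent, which is the intuition behind cooperation emerging among adaptive agents; the paper studies this on networked populations rather than a single fixed pairing.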
Recommended Citation
L. Xue et al., "An Adaptive Strategy via Reinforcement Learning for the Prisoner's Dilemma Game," IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 1, pp. 301-310, Institute of Electrical and Electronics Engineers (IEEE), Jan 2018.
The definitive version is available at https://doi.org/10.1109/JAS.2017.7510466
Department(s)
Electrical and Computer Engineering
Research Center/Lab(s)
Intelligent Systems Center
Second Research Center/Lab
Center for High Performance Computing Research
Keywords and Phrases
Adaptive systems; Decision making; Game theory; Learning systems; Reinforcement learning; Adaptation models; Decision making process; Games; Iterated Prisoner's dilemma; Learning (artificial intelligence); Prisoner's dilemma game; Reinforcement learning method; Temporal difference learning; Complex networks
International Standard Serial Number (ISSN)
2329-9266
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2018 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
Publication Date
01 Jan 2018
Comments
This work was supported by the National Natural Science Foundation (NNSF) of China (61603196, 61503079, 61520106009, 61533008), the Natural Science Foundation of Jiangsu Province of China (BK20150851), China Postdoctoral Science Foundation (2015M581842), Jiangsu Postdoctoral Science Foundation (1601259C), Nanjing University of Posts and Telecommunications Science Foundation (NUPTSF) (NY215011), Priority Academic Program Development of Jiangsu Higher Education Institutions, the open fund of Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education (MCCSE2015B02), and the Research Innovation Program for College Graduates of Jiangsu Province (CXLX1309).