Keywords and Phrases
Finance; Investment; Machine learning; Recurrent neural networks (RNN); Reinforcement learning; Trading
"Multiple recurrent reinforcement learners were implemented to make trading decisions based on real and freely available macro-economic data. The learning algorithm and different reinforcement functions (the Differential Sharpe Ratio, Differential Downside Deviation Ratio, and Returns) were revised, and their performances were compared while transaction costs were taken into account. (This is important for practical implementations even though many publications ignore this consideration.) It was assumed that the traders make long-short decisions in the S&P500 with complementary 3-month treasury bill investments. Leveraged positions in the S&P500 were disallowed. Notably, the Differential Sharpe Ratio and the Differential Downside Deviation Ratio are risk-adjusted and are therefore expected to yield more stable and less risky strategies. Interestingly, the return-traders performed the most consistently. Explanations for these findings were explored. The strong performance of the return-based traders, even based on few and readily available macro-economic time series, showed the power and practical relevance of the simpler algorithm."--Abstract, page iv.
Samaranayake, V. A.
Wunsch, Donald C.
Mathematics and Statistics
M.S. in Applied Mathematics
Missouri University of Science and Technology
x, 46 pages
© 2019 Louis Kurt Bernhard Steinmeister, All rights reserved.
Thesis - Open Access
Electronic OCLC #
Steinmeister, Louis Kurt Bernhard, "Less is more: Beating the market with recurrent reinforcement learning" (2019). Masters Theses. 7909.