Solving Markov Decision Processes with Downside Risk Adjustment

Abstract

Markov decision processes (MDPs) and their variants are widely studied in the theory of control for stochastic discrete-event systems driven by Markov chains. Much of the literature focuses on the risk-neutral criterion, in which the expected rewards, either average or discounted, are maximized. Some literature on MDPs does take risk into account; much of it addresses the exponential utility (EU) function and mechanisms that penalize various forms of variance of the rewards. EU functions suffer from numerical deficiencies, while variance measures variability both above and below the mean reward; variability above the mean is usually beneficial and should not be penalized or avoided. Hence, risk metrics that account for pre-specified reward targets (thresholds) have been considered in the literature, where the goal is to penalize the risk of revenues falling below those targets. Existing target-based work on MDPs seeks to minimize risks of this nature, but pure risk minimization can lead to poor solutions in which the risk is zero or near zero while the average reward is also rather low. In this paper, therefore, we study a risk-averse criterion based on the so-called downside risk, which equals the probability of the revenues falling below a given target; in contrast to minimizing this risk outright, we merely reduce it at the cost of slightly lowered average rewards. A solution where the risk is low and the average reward is quite high, although not at its maximum attainable value, is very attractive in practice. More specifically, in our formulation the objective function is the expected value of the rewards minus a scalar times the downside risk. In this setting, we analyze the infinite horizon MDP, the finite horizon MDP, and the infinite horizon semi-MDP (SMDP). We develop dynamic programming and reinforcement learning algorithms for the finite and infinite horizon cases. The algorithms are tested in numerical studies and show encouraging performance.
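
To make the criterion concrete, here is a minimal sketch in our own notation; the symbols J, R, tau, and lambda are not fixed by the abstract and are assumed here purely for illustration. With R the (average or discounted) reward under policy pi, tau the pre-specified target, and lambda >= 0 a risk-aversion scalar, the risk-adjusted objective described above can be written as

\max_{\pi} \; J(\pi) \;=\; \mathbb{E}_{\pi}[R] \;-\; \lambda \, \Pr_{\pi}(R < \tau),

where \Pr_{\pi}(R < \tau) is the downside risk, i.e., the probability that the reward falls below the target under policy pi. Setting lambda = 0 recovers the risk-neutral criterion, while letting lambda grow large pushes the solution toward pure risk minimization, which, as noted above, can depress the average reward.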

Department(s)

Engineering Management and Systems Engineering

Research Center/Lab(s)

Intelligent Systems Center

Keywords and Phrases

Decision theory; Dynamic programming; Learning algorithms; Markov processes; Reinforcement learning; Stochastic systems; Targets; Discrete events; Downside risks; Expected values; Exponential utility; Infinite horizons; Markov Decision Processes; Objective functions; Thresholds; Risks

International Standard Serial Number (ISSN)

1476-8186; 1751-8520

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2016 Chinese Academy of Sciences, All rights reserved.

Publication Date

01 Jun 2016
