"The semi-Markov decision process (SMDP) is a variant of the Markov decision process (MDP). This dissertation focuses on the application of SMDPs to disaster response management and to maintenance management. Average reward and discounted reward are two popular performance metrics for MDPs/SMDPs. While both dynamic programming (DP) methods, i.e., value iteration and policy iteration, are commonly used to solve MDPs/SMDPs, value iteration is easier to apply than policy iteration. The existing value iteration algorithms for average-reward SMDPs have some noteworthy limitations, which this work seeks to overcome. Reinforcement learning (RL) techniques, which are also studied in this work, are used when DP methods break down due to the curse of dimensionality. The work in this dissertation is divided into two essays.
The first essay is on disaster response management. A comprehensive risk-based emergency model for a post-earthquake scenario, which includes domino-effect phenomena and is based on SMDPs, is developed. The goal is to minimize the rate of risk posed to the people affected after an earthquake. A value iteration algorithm for SMDPs, based on the stochastic shortest path approach, is developed as a solution technique. The proposed algorithm overcomes the limitations of the existing value iteration algorithms. Numerical results generated by the proposed algorithm are very encouraging. Convergence of the algorithm has also been established.
In the second essay, a new DP algorithm based on value iteration and two new RL algorithms (i-SMART and a model-building adaptive critic) are proposed. The new algorithms are used to solve a variety of preventive maintenance (PM) problems and generate encouraging computational results. Choosing the time interval for PM is crucial in a total productive maintenance program. Further, the proposed DP algorithm overcomes the limitations of the existing value iteration algorithms"--Abstract, page iii.
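The abstract's dissertation-specific SMDP algorithms are not reproduced here, but the value iteration method it builds on can be illustrated with a minimal sketch. The following is a hypothetical example, not the dissertation's algorithm: standard value iteration for a small discounted-reward MDP with made-up transition probabilities and rewards, iterating the Bellman optimality backup until the value function stops changing.

```python
# Illustrative sketch only: classic value iteration for a tiny
# discounted-reward MDP. All numbers below are invented for the example.
import numpy as np

# Two states, two actions.
# P[a, s, s'] = probability of moving from state s to s' under action a.
P = np.array([[[0.7, 0.3],
               [0.4, 0.6]],
              [[0.9, 0.1],
               [0.2, 0.8]]])
# R[a, s] = expected immediate reward for taking action a in state s.
R = np.array([[6.0, -3.0],
              [10.0, 17.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)  # initial value estimates
for _ in range(10_000):
    # Bellman optimality backup: Q[a, s] = R[a, s] + gamma * sum_s' P[a,s,s'] V[s']
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-8:  # sup-norm stopping test
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy action in each state
```

The stopping rule here is a simple sup-norm test; average-reward variants (as studied in the dissertation) typically replace it with a span-seminorm test and a relative value function.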
Murray, Susan L.
Le, Vy Khoi
Engineering Management and Systems Engineering
Ph. D. in Engineering Management
Missouri University of Science and Technology
xi, 101 pages
© 2013 Shuva Ghosh, All rights reserved.
Dissertation - Restricted Access
Emergency management -- Mathematical models
Maintenance -- Mathematical models
Link to Catalog Record
Electronic access to the full text of this document is restricted to Missouri S&T users. Otherwise, request this publication directly from the Missouri S&T Library or contact your local library. http://merlin.lib.umsystem.edu/record=b11034832~S5
Ghosh, Shuva, "Two essays on dynamic programming and reinforcement learning" (2013). Doctoral Dissertations. 2431.