Abstract

In this article, we address two key challenges in the deep reinforcement learning (DRL) setting, sample inefficiency and slow learning, with a dual-neural network (NN)-driven learning approach. In the proposed approach, we use two deep NNs with independent initialization to robustly approximate the action-value function in the presence of image inputs. In particular, we develop a temporal difference (TD) error-driven learning (EDL) approach, where we introduce a set of linear transformations of the TD error to directly update the parameters of each layer in the deep NN. We demonstrate theoretically that the cost minimized by the EDL regime is an approximation of the empirical cost, and the approximation error reduces as learning progresses, irrespective of the size of the network. Using simulation analysis, we show that the proposed methods enable faster learning and convergence and require a reduced buffer size (thereby increasing the sample efficiency).
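
The dual-network, TD-error-driven update described in the abstract can be illustrated with a minimal sketch. The Python/NumPy snippet below is an assumed toy construction, not the article's implementation: the layer sizes, the fixed random matrices B standing in for the per-layer linear transformations of the TD error, and the single toy transition are all hypothetical choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def init_net(sizes):
        # Independent random initialization for each Q-network (assumed scheme)
        return [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(net, x):
        # Return the activations of every layer; ReLU hidden layers, linear output
        acts = [x]
        for i, W in enumerate(net):
            x = x @ W
            if i < len(net) - 1:
                x = np.maximum(x, 0.0)
            acts.append(x)
        return acts

    sizes = [8, 16, 4]                                   # toy state dim and action count
    q_net, target_net = init_net(sizes), init_net(sizes) # two independently initialized deep NNs

    # Fixed random projections playing the role of the per-layer linear
    # transformations of the TD error (assumed form, not the paper's exact maps)
    B = [rng.normal(scale=0.1, size=(sizes[-1], W.shape[1])) for W in q_net]

    gamma, lr = 0.99, 1e-2
    s  = rng.normal(size=(1, sizes[0]))                  # current state (toy data)
    s2 = rng.normal(size=(1, sizes[0]))                  # next state (toy data)
    a, r = 2, 1.0                                        # action taken and observed reward

    acts = forward(q_net, s)
    q_sa = acts[-1][0, a]
    q_next = forward(target_net, s2)[-1].max()           # second NN provides the bootstrap target

    # Temporal-difference error for the taken action
    td = np.zeros((1, sizes[-1]))
    td[0, a] = r + gamma * q_next - q_sa

    # EDL-style update: each layer's parameters are driven directly by a linear
    # map of the TD error and that layer's input activation, rather than by
    # end-to-end backpropagation
    for l, W in enumerate(q_net):
        q_net[l] = W + lr * acts[l].T @ (td @ B[l])

In this sketch the second network supplies the bootstrap target, and each layer receives its own linearly transformed copy of the TD error, which is the structural idea the abstract describes.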

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Artificial neural networks; Convergence; Costs; Deep Q-learning (DQN); Deep learning; Games; Images; Q-learning; Task analysis

International Standard Serial Number (ISSN)

2162-2388; 2162-237X

Document Type

Article - Journal

Document Version

Final Version

File Type

text

Language(s)

English

Rights

© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jan 2023
