In this paper, a novel reinforcement-learning neural network (NN)-based controller, referred to as an adaptive critic controller, is proposed for affine nonlinear discrete-time systems with applications to nanomanipulation. In the online NN reinforcement learning method, one NN is designated as the critic NN, which approximates the long-term cost function under the assumption that the states of the nonlinear system are available for measurement. An action NN is employed to derive an optimal control signal that tracks a desired system trajectory while minimizing the cost function. Online weight-tuning schemes for these two NNs are also derived. Using the Lyapunov approach, the uniform ultimate boundedness (UUB) of the tracking error and the weight estimates is shown. Nanomanipulation means manipulating objects of nanometer size; even a simple task in the nanoscale world can take several hours. To accomplish such tasks automatically, the proposed online learning control design is evaluated for a nanomanipulation task and verified in a simulation environment.
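The critic/action scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's controller: it assumes a simple scalar affine system x_{k+1} = f(x_k) + g(x_k)u_k, replaces the two NNs with linear-in-feature approximators sharing a hand-picked basis, and uses plain gradient steps in place of the paper's weight-tuning laws. The dynamics `f`, `g`, the feature map `phi`, and all learning rates are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of an online adaptive-critic (critic + action network)
# controller for a scalar affine discrete-time system
#   x_{k+1} = f(x_k) + g(x_k) * u_k.
# All choices below are assumptions for demonstration, not the paper's design.

def f(x):      # drift term (assumed)
    return 0.8 * np.sin(x)

def g(x):      # input gain (assumed constant)
    return 1.0

def phi(x):    # basis features shared by the critic and action approximators
    return np.array([x, x**3, np.sin(x)])

def dphi(x):   # feature gradient, used in the action-network update
    return np.array([1.0, 3 * x**2, np.cos(x)])

def train(episodes=50, steps=30, gamma=0.9, ac=0.05, aa=0.01):
    wc = np.zeros(3)   # critic weights: long-term cost J(x) ~ wc . phi(x)
    wa = np.zeros(3)   # action weights: control u(x) = wa . phi(x)
    for _ in range(episodes):
        x = 1.0        # reset to the same initial state each episode
        for _ in range(steps):
            u = wa @ phi(x)
            xn = f(x) + g(x) * u
            r = x**2 + u**2                         # one-step utility (cost)
            # critic: temporal-difference-style update toward r + gamma*J(x')
            td = r + gamma * (wc @ phi(xn)) - wc @ phi(x)
            wc += ac * td * phi(x)
            # action network: descend d(r + gamma*J(x'))/du through g(x)
            dJu = 2 * u + gamma * (wc @ dphi(xn)) * g(x)
            wa -= aa * dJu * phi(x)
            x = xn
    return wc, wa

wc, wa = train()
```

After training, the learned action weights define a state-feedback policy `u = wa @ phi(x)` that can be rolled out on the assumed dynamics; the critic weights `wc` give the approximate cost-to-go used to shape that policy online.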

Meeting Name

2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning (2007: Apr. 1-5, Honolulu, HI)


Department

Electrical and Computer Engineering

Second Department

Computer Science


Sponsor(s)

National Science Foundation (U.S.)

Keywords and Phrases

Adaptive Control; Discrete Time Systems; Engines; Gradient Methods; Learning (Artificial Intelligence); Lyapunov Method; Neurocontrollers; Nonlinear Control Systems; Optimal Control

Document Type

Article - Conference proceedings

Document Version

Final Version

File Type

© 2007 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jan 2007