Data-Driven Human-Robot Interaction Without Velocity Measurement using Off-Policy Reinforcement Learning

Abstract

In this paper, we present a novel data-driven design method for human-robot interaction (HRI) systems, in which a given task is achieved through cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance-optimization design and a plant-oriented impedance-controller design. The task-oriented design minimizes human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the need for end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in the task space. Simulations and experiments on a robot manipulator are conducted to verify the efficacy of the presented HRI design framework.
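To make the two-loop structure described in the abstract concrete, the sketch below illustrates in Python (i) a first-order velocity-free filter that estimates end-effector velocity from position measurements alone, and (ii) an off-policy least-squares policy-iteration routine that learns LQR-like impedance gains for the outer loop from a single batch of exploratory data. This is only an illustrative reading of the abstract, not the authors' algorithm or code: the linear error model, cost weights, filter gain, and all names (`VelocityFreeFilter`, `offpolicy_lqr`, `A`, `B`, `Q`, `R`, `K0`, `lam`) are assumptions introduced here for exposition.

```python
# Minimal sketch only; NOT the authors' implementation. Assumes a linear
# outer-loop error model, a quadratic cost, and a single interaction axis.
import numpy as np


class VelocityFreeFilter:
    """First-order 'dirty derivative': estimate velocity from position only.

    v_hat = z + lam * x, with z_dot = -lam * z - lam**2 * x, so v_hat is a
    low-pass-filtered derivative of x and no velocity sensor is required.
    """

    def __init__(self, lam=20.0, dt=1e-3):
        self.lam, self.dt, self.z = lam, dt, 0.0

    def update(self, x):
        v_hat = self.z + self.lam * x
        self.z += self.dt * (-self.lam * self.z - self.lam**2 * x)
        return v_hat


def offpolicy_lqr(A, B, Q, R, K0, n_iters=20, n_samples=400, seed=0):
    """Off-policy least-squares policy iteration for a discrete-time LQR.

    One batch of data is collected with an exploratory behavior policy; each
    candidate gain K is then evaluated and improved from that same batch,
    without re-running the plant (the 'off-policy' aspect).
    """
    rng = np.random.default_rng(seed)
    n, m = B.shape

    # Collect one batch of exploratory data (behavior policy = K0 + noise).
    xs, us = [], []
    x = rng.normal(size=n)
    for _ in range(n_samples):
        u = -K0 @ x + 0.5 * rng.normal(size=m)
        xs.append(x.copy())
        us.append(u.copy())
        x = A @ x + B @ u

    K = K0.copy()
    dim = n + m
    for _ in range(n_iters):
        # Policy evaluation: fit Q_K(x,u) = [x;u]^T H [x;u] by least squares
        # from the Bellman equation Q_K(x,u) = x^T Q x + u^T R u + Q_K(x', -K x').
        Phi, y = [], []
        for x_k, u_k in zip(xs, us):
            x_next = A @ x_k + B @ u_k
            z, z_next = np.r_[x_k, u_k], np.r_[x_next, -K @ x_next]
            Phi.append(np.kron(z, z) - np.kron(z_next, z_next))
            y.append(x_k @ Q @ x_k + u_k @ R @ u_k)
        h = np.linalg.lstsq(np.asarray(Phi), np.asarray(y), rcond=None)[0]
        H = h.reshape(dim, dim)
        H = 0.5 * (H + H.T)
        # Policy improvement: u = -inv(Huu) @ Hux @ x.
        K = np.linalg.solve(H[n:, n:], H[n:, :n])
    return K


if __name__ == "__main__":
    dt = 1e-3
    # Illustrative double-integrator model of the outer-loop tracking error.
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    Q, R = np.diag([100.0, 1.0]), np.array([[0.1]])
    K = offpolicy_lqr(A, B, Q, R, K0=np.array([[1.0, 1.0]]))
    print("learned impedance-like gains (stiffness, damping):", K)

    # Inner-loop velocity estimate from position samples only.
    vf = VelocityFreeFilter(lam=50.0, dt=dt)
    v_hat = [vf.update(np.sin(2 * np.pi * ti)) for ti in np.arange(0, 1, dt)]
    print("velocity estimate near t=1 s:", v_hat[-1], "(true derivative ~", 2 * np.pi, ")")
```

The off-policy flavor here is that a single exploratory data set is reused to evaluate and improve successive candidate gain matrices, mirroring the data-driven outer-loop idea, while the filter shows one standard way velocity information can be recovered in the inner loop without a velocity sensor.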

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Adaptive Impedance Control; Data-Driven Method; Human-Robot Interaction (HRI); Reinforcement Learning; Velocity-Free

International Standard Serial Number (ISSN)

2329-9274; 2329-9266

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2021 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jan 2022
