Location
Havener Center, Miner Lounge / Wiese Atrium, 1:30pm-3:30pm
Start Date
4-2-2026 1:30 PM
End Date
4-2-2026 3:30 PM
Presentation Date
April 2, 2026; 1:30pm-3:30pm
Description
Advances in robotics and artificial intelligence have raised expectations for interactive robots in eldercare and assistive applications. A key challenge in creating safe and effective systems is accurately recognizing human intent and translating it into meaningful commands for robots to follow. Traditional physics-based models often fail to fully capture human force interactions because of their dynamic nature, while neural networks offer a promising alternative for predicting force-movement intentions. Multi-layer perceptrons (MLPs) show potential; however, they struggle with temporal dependencies and generalization. Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs) help address these limitations. This study compares these architectures in a leader-follower experiment and proposes a hybrid CNN-LSTM model that improves prediction accuracy from 69% to 86%, enabling better adaptability, precision, and responsiveness in physical human-robot interactions.
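The hybrid architecture described in the abstract can be sketched in broad strokes: a 1D convolutional stage extracts local temporal patterns from a window of interaction signals, and an LSTM stage aggregates those patterns over time before a classification layer predicts the intended movement. The following is a minimal NumPy forward-pass sketch under assumed, purely illustrative dimensions and random weights; none of the settings (window length, feature count, class count) are taken from the study itself.

```python
# Hypothetical forward pass of a hybrid CNN-LSTM intent classifier (sketch only).
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    # x: (T, F) time steps x input features; w: (k, F, C); b: (C,)
    T, _ = x.shape
    k, _, C = w.shape
    out = np.zeros((T - k + 1, C))
    for t in range(T - k + 1):
        # Correlate the kernel with a length-k slice of the signal window.
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)  # ReLU activation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_hidden(x, Wx, Wh, b, H):
    # Standard LSTM cell unrolled over time; gates ordered [input, forget, cell, output].
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h  # final hidden state summarizes the window

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Assumed dimensions (illustrative, not the authors' settings):
T, F, k, C, H, n_classes = 50, 6, 5, 8, 16, 4
x = rng.standard_normal((T, F))              # one window of force/motion signals
w_conv = rng.standard_normal((k, F, C)) * 0.1
b_conv = np.zeros(C)
Wx = rng.standard_normal((C, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b_lstm = np.zeros(4 * H)
W_out = rng.standard_normal((H, n_classes)) * 0.1
b_out = np.zeros(n_classes)

feats = conv1d_relu(x, w_conv, b_conv)               # CNN stage: local temporal patterns
h = lstm_last_hidden(feats, Wx, Wh, b_lstm, H)       # LSTM stage: longer-range dependencies
probs = softmax(h @ W_out + b_out)                   # predicted intent class probabilities
```

The design choice this illustrates is the division of labor the abstract motivates: the convolutional front end handles short-range structure that an MLP would flatten away, while the recurrent stage carries the temporal dependencies an MLP cannot model.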
Biography
Khosro Ghorbani Zadeh is currently a Ph.D. student in mechanical engineering at Missouri University of Science and Technology, where he is a Kummer Innovation and Entrepreneurial Fellow. He received his B.S. degree in robotic engineering from Hamedan University of Technology and his M.S. degree in mechanical engineering with a focus on dynamics and control from Isfahan University of Technology, in 2018 and 2021, respectively. His research interests include robotics, human-robot interaction, haptics, and nonlinear control.
Meeting Name
2026 - Miners Solving for Tomorrow Research Conference
Department(s)
Mechanical and Aerospace Engineering
Document Type
Poster
Document Version
Final Version
File Type
event
Language(s)
English
Rights
© 2026 The Authors. All rights reserved.
Included in
Human Intention Prediction using CNN and LSTM Networks in Physical Human–Robot Interactions
Comments
Advisor: Yun Seong Song, songyun@mst.edu