
Dr. Haitham El-Hussieny :: Publications:

Incremental learning of reach-to-grasp behavior: A PSO-based Inverse optimal control approach
Authors: H El-Hussieny; SFM Assal; AA Abouelsoud; SM Megahed; T Ogasawara
Year: 2015
Keywords: Not Available
Journal: 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR), 2015
Volume: Not Available
Issue: Not Available
Pages: Not Available
Publisher: IEEE
Local/International: International
Paper Link: Not Available
Full paper: Not Available
Supplementary materials: Not Available

In recent years, there has been increasing interest in modeling natural human movements. The central question is: what optimality criterion do humans optimize to achieve a given movement? One of the most actively discussed topics is the modeling of the reach-to-grasp movements that humans naturally perform when approaching an object to grasp it. Recent advances in Inverse Reinforcement Learning (IRL) have made it possible to investigate reach-to-grasp movements in terms of optimal control theory: IRL aims to learn the cost function that best describes demonstrated human reach-to-grasp movements. Thus far, gradient-based techniques have been used to obtain the parameters of the underlying cost function. Such approaches, however, can fail to find globally optimal parameters, since they locate only local optima. In this work, learning the cost function for reach-to-grasp movements is formulated as an Inverse Linear Quadratic Regulator (ILQR) problem, in which linear dynamics and a quadratic cost are assumed. An efficient evolutionary optimization technique, Particle Swarm Optimization (PSO), is used to obtain the unknown cost for the reach-to-grasp movements under consideration. Moreover, an incremental ILQR algorithm is proposed that adjusts the learned cost when new, previously unseen demonstrations become available, mitigating over-fitting. The results obtained are encouraging and agree with findings in the neuroscience literature.
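To illustrate the general idea of the ILQR formulation described above (not the authors' actual implementation), the following is a minimal sketch. It assumes a simple discrete double-integrator system and a diagonal state-cost matrix Q; a standard PSO loop searches for the diagonal weights whose LQR rollout best matches a demonstrated trajectory. All dynamics, dimensions, and PSO hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of PSO-based inverse LQR (illustrative only).
# Assumes discrete double-integrator dynamics and a diagonal quadratic
# state cost; PSO searches the cost weights reproducing a demonstration.
import numpy as np

def simulate_lqr(q_weights, x0, A, B, R=np.eye(1), horizon=30):
    """Finite-horizon LQR via backward Riccati recursion, then rollout."""
    Q = np.diag(q_weights)
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()
    x, traj = x0.copy(), [x0.copy()]
    for K in gains:
        x = (A - B @ K) @ x          # closed-loop step under optimal gain
        traj.append(x.copy())
    return np.array(traj)

def pso_ilqr(demo, x0, A, B, n_particles=20, iters=60, seed=0):
    """PSO over diagonal Q weights, minimizing trajectory-matching error."""
    rng = np.random.default_rng(seed)
    dim = A.shape[0]
    pos = rng.uniform(0.1, 10.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)

    def cost(w):
        return np.sum((simulate_lqr(w, x0, A, B) - demo) ** 2)

    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1e-3, None)   # keep weights positive
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy check: recover weights from a demo generated with known Q = diag(4, 1).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])
demo = simulate_lqr(np.array([4.0, 1.0]), x0, A, B)
w = pso_ilqr(demo, x0, A, B)
err = np.sum((simulate_lqr(w, x0, A, B) - demo) ** 2)
```

The incremental variant proposed in the paper would, in this spirit, warm-start the swarm from the previously learned weights when new demonstrations arrive, rather than restarting the search from scratch.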
