Imitating Human Search Strategies for Assembly

We present a Learning from Demonstration method for teaching robots to perform search strategies imitated from humans in scenarios where alignment tasks fail due to position uncertainty. The method utilizes human demonstrations to learn both a state-invariant dynamics model and an exploration distribution that captures the search area covered by the demonstrator. We present two alternative algorithms for computing a search trajectory from the exploration distribution, one based on sampling and another based on deterministic ergodic control. We augment the search trajectory with forces learnt through the dynamics model to enable searching in both the force and position domains. An impedance controller with superposed forces is used for reproducing the learnt strategy. We experimentally evaluate the method on a KUKA LWR4+ performing a 2D peg-in-hole and a 3D electricity socket task. Results show that the proposed method can, with only a few human demonstrations, learn to complete the search task.
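To illustrate the sampling-based variant described above, the following is a minimal sketch, not the authors' implementation: it assumes the exploration distribution is modeled as an axis-aligned Gaussian fitted to demonstrated 2D end-effector positions, from which search waypoints are drawn. All function names and parameters here are hypothetical.

```python
import math
import random

def fit_gaussian(points):
    """Estimate the mean and per-axis standard deviation of
    demonstrated 2D positions (a stand-in for the learnt
    exploration distribution)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sx = math.sqrt(sum((p[0] - mx) ** 2 for p in points) / n)
    sy = math.sqrt(sum((p[1] - my) ** 2 for p in points) / n)
    return (mx, my), (sx, sy)

def sample_search_trajectory(points, n_waypoints, seed=0):
    """Draw waypoints from the fitted exploration distribution;
    a trajectory through these waypoints covers the search area
    demonstrated by the human."""
    rng = random.Random(seed)
    (mx, my), (sx, sy) = fit_gaussian(points)
    return [(rng.gauss(mx, sx), rng.gauss(my, sy))
            for _ in range(n_waypoints)]

demos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
trajectory = sample_search_trajectory(demos, n_waypoints=10)
```

In the paper the waypoints would be tracked by an impedance controller with superposed forces; the deterministic ergodic-control alternative would instead optimize coverage of the same distribution rather than sampling from it.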

Dennis Ehlers, Markku Suomalainen, Jens Lundell, Ville Kyrki

A4 Article in conference proceedings

2019 International Conference on Robotics and Automation (ICRA). 20-24 May 2019, Montreal, Canada

D. Ehlers, M. Suomalainen, J. Lundell and V. Kyrki, "Imitating Human Search Strategies for Assembly," 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 7821-7827. doi: 10.1109/ICRA.2019.8793780

https://doi.org/10.1109/ICRA.2019.8793780
http://urn.fi/urn:nbn:fi-fe2019081924648