JOURNAL ARTICLE

Long Time Sequential Task Learning From Unstructured Demonstrations

Huiwen Zhang, Yuwang Liu, Weijia Zhou

Year: 2019 Journal:   IEEE Access Vol: 7 Pages: 96240-96252   Publisher: Institute of Electrical and Electronics Engineers

Abstract

Learning from demonstration (LfD) provides a natural way to transfer skills to robots and has been extensively researched for decades, and a wealth of methods and applications have been developed for learning individual or low-level tasks. Nevertheless, learning long sequential tasks remains difficult, as it involves task segmentation and sub-task clustering under extremely large demonstration variance. In addition, the representation problem must be considered during segmentation. This paper presents a new unified framework that solves the segmentation, clustering, and representation problems in a sequential task. The segmentation algorithm segments unstructured demonstrations into movement primitives (MPs). The MPs are then automatically clustered and labeled so that they can be reused in other tasks. Finally, the representation model encodes and generalizes the learned MPs in new contexts. For the first goal, a change-point detection algorithm based on Bayesian inference is used; it can segment unstructured demonstrations online with minimal prior knowledge. Following the Gaussian assumption in the segmentation model, MPs are encoded by Gaussians or Gaussian mixture models, so the clustering of MPs is formulated as a clustering over cluster (CoC) problem. The Kullback-Leibler divergence is used to measure similarity between MPs, and MPs with smaller divergence are clustered into the same group. To replay and generalize the task in novel contexts, we use task-parameterized regression models such as Gaussian mixture regression. We implemented our framework on a sequential open-and-place task. The experiments demonstrate that the segmentation accuracy of our framework reaches 94.3% and the recognition accuracy reaches 97.1%. Comparisons with state-of-the-art algorithms also indicate that our framework is superior or comparable to theirs.
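The KL-divergence clustering step described in the abstract can be sketched as follows. This is a minimal illustration, assuming each MP is encoded as a single Gaussian (mean, covariance); the function names (`gaussian_kl`, `symmetric_kl`, `cluster_mps`) and the simple greedy threshold clustering are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    # Closed-form KL(N0 || N1) between two multivariate Gaussians.
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def symmetric_kl(mu0, cov0, mu1, cov1):
    # KL divergence is asymmetric; a symmetrized version serves as a
    # distance-like measure between two MPs.
    return 0.5 * (gaussian_kl(mu0, cov0, mu1, cov1)
                  + gaussian_kl(mu1, cov1, mu0, cov0))

def cluster_mps(mps, threshold):
    # Greedy sketch (not the paper's CoC algorithm): assign each MP
    # (mu, cov) to the first cluster whose representative lies within
    # `threshold` symmetric-KL distance; otherwise start a new cluster.
    clusters = []  # list of lists of MP indices
    reps = []      # one representative (mu, cov) per cluster
    for i, (mu, cov) in enumerate(mps):
        for members, (rmu, rcov) in zip(clusters, reps):
            if symmetric_kl(mu, cov, rmu, rcov) < threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
            reps.append((mu, cov))
    return clusters
```

Two MPs with nearly identical Gaussians have near-zero symmetric KL and land in the same cluster, while an MP with a distant mean starts a new one.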

Keywords:
Computer science, Cluster analysis, Artificial intelligence, Task (project management), Segmentation, Programming by demonstration, Machine learning, Multi-task learning, Inference, Mixture model, Pattern recognition (psychology), Robot

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.17
References: 47
Citation Normalized Percentile: 0.47
Topics

Robot Manipulation and Learning
Physical Sciences →  Engineering →  Control and Systems Engineering
Reinforcement Learning in Robotics
Physical Sciences →  Computer Science →  Artificial Intelligence
Prosthetics and Rehabilitation Robotics
Physical Sciences →  Engineering →  Biomedical Engineering

Related Documents

JOURNAL ARTICLE

Complex Task Learning from Unstructured Demonstrations

Scott Niekum

Journal: Proceedings of the AAAI Conference on Artificial Intelligence Year: 2021 Vol: 26 (1) Pages: 2402-2403
JOURNAL ARTICLE

Learning Task Specifications from Demonstrations

Marcell Vazquez-Chanlatte, Susmit Jha, Ashish Tiwari, Mark K. Ho, Sanjit A. Seshia

Journal:   arXiv (Cornell University) Year: 2017 Vol: 31 Pages: 5367-5377
JOURNAL ARTICLE

Learning Task Priorities from Demonstrations

João Silvério, Sylvain Calinon, Leonel Rozo, Darwin G. Caldwell

Journal: IEEE Transactions on Robotics Year: 2018 Vol: 35 (1) Pages: 78-94
JOURNAL ARTICLE

Learning grounded finite-state representations from unstructured demonstrations

Scott Niekum, Sarah Osentoski, George Konidaris, Sachin Chitta, Bhaskara Marthi, Andrew G. Barto

Journal: The International Journal of Robotics Research Year: 2014 Vol: 34 (2) Pages: 131-157
BOOK-CHAPTER

Learning Temporal Task Specifications From Demonstrations

Mattijs Baert, Sam Leroux, Pieter Simoens

Journal: Lecture Notes in Computer Science Year: 2024 Pages: 81-98