We investigate incremental word learning in a Hidden Markov Model (HMM) framework suited to human-robot interaction. In interactive learning, tutoring time is a crucial factor; our goal is therefore to use as few training samples as possible while maintaining a good level of performance. To adapt the states of the HMMs, we propose different large-margin discriminative training strategies that increase the separability of the classes. We also present a novel estimate of the variance floor for the case where very little training data is available. Finally, our approach is successfully evaluated on isolated digits taken from the TIDIGITS database.
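Variance flooring prevents Gaussian state variances from collapsing when estimated from very few samples. As a point of reference for the standard technique (not the novel estimate proposed in the abstract), a common scheme clamps each state variance to a fraction of the global variance of the training data; the `factor` value below is an illustrative assumption.

```python
import numpy as np

def apply_variance_floor(state_vars, global_var, factor=0.01):
    """Clamp per-state diagonal variances to a floor derived from the
    global data variance (a conventional flooring scheme, shown here
    only as background for the approach described in the abstract).

    state_vars : (num_states, dim) per-state diagonal variances
    global_var : (dim,) variance of the whole training set
    factor     : fraction of the global variance used as the floor
    """
    floor = factor * global_var
    # Elementwise maximum: any variance below the floor is raised to it.
    return np.maximum(state_vars, floor)

# Example: with factor=0.5 and global variance [1.0, 4.0], the floor is
# [0.5, 2.0]; an under-estimated variance of 0.1 is raised to 0.5.
floored = apply_variance_floor(
    np.array([[0.1, 5.0]]), np.array([1.0, 4.0]), factor=0.5
)
```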
Irene Ayllón Clemente, Martin Heckmann, Britta Wrede
Reda Jourani, Khalid Daoudi, Régine André-Obrecht, Driss Aboutajdine