A number of models based on Bayesian networks have recently been proposed and shown to be biologically plausible enough to explain various phenomena in the visual cortex. The present work studies how far the same approach extends to motor learning, in particular in combination with reinforcement learning, with the aim of suggesting a possible cooperation mechanism between the cerebral cortex and the basal ganglia. The basis of our model is BESOM, a biologically grounded model of the cerebral cortex proposed by Ichisugi, which we extend with a reinforcement learning capability. We show how reinforcement learning can benefit from Bayesian network computations with unsupervised learning, in particular for approximate representation of a large state-action space and for detection of a goal state. Through a simulation of a reaching task with a concrete BESOM network, whose hierarchy is inspired by the anatomically known cortical hierarchy, we demonstrate our model's stable and robust motor learning ability.
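To make the core idea concrete, the following is a minimal sketch, not the BESOM model itself: tabular Q-learning run on top of a coarse learned state abstraction, illustrating how an unsupervised representation (here a simple set of cluster centers standing in for the Bayesian network's learned units) can compress a large continuous state space for reinforcement learning. All function names, dynamics, and parameters are illustrative assumptions.

```python
import random

def discretize(x, centers):
    """Map a continuous 1-D state to the index of the nearest learned center.

    The centers stand in for units learned by unsupervised learning."""
    return min(range(len(centers)), key=lambda i: abs(x - centers[i]))

def q_learning(centers, n_actions=2, episodes=500, alpha=0.1, gamma=0.9,
               eps=0.1, seed=0):
    """Tabular Q-learning over the abstracted (discretized) state space."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in centers]
    for _ in range(episodes):
        x = rng.uniform(0.0, 1.0)  # random start state in [0, 1]
        for _ in range(20):
            s = discretize(x, centers)
            # epsilon-greedy action selection: 0 = move left, 1 = move right
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda j: q[s][j])
            x2 = min(1.0, max(0.0, x + (0.1 if a == 1 else -0.1)))
            r = 1.0 if x2 > 0.9 else 0.0  # goal: reach the right end
            s2 = discretize(x2, centers)
            # standard Q-learning update on the abstract states
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            x = x2
            if r > 0:
                break  # episode ends at the goal state
    return q

# A handful of centers replaces a fine discretization of the state space.
centers = [0.1, 0.3, 0.5, 0.7, 0.9]
q = q_learning(centers)
```

The design point this illustrates is that the value table is indexed by a small number of learned abstract states rather than by raw states, which is the role the abstract attributes to the Bayesian network's unsupervised representation.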