JOURNAL ARTICLE

A motor learning neural model based on Bayesian network and reinforcement learning

Abstract

Several models based on Bayesian networks have recently been proposed and shown to be biologically plausible enough to explain various phenomena in the visual cortex. The present work studies how far the same approach extends to motor learning, in particular in combination with reinforcement learning, with the aim of suggesting a possible cooperation mechanism between the cerebral cortex and the basal ganglia. The basis of our model is BESOM, a biologically grounded model of the cerebral cortex proposed by Ichisugi, extended here with a reinforcement learning capability. We show how reinforcement learning can benefit from Bayesian network computations with unsupervised learning, in particular for the approximate representation of a large state-action space and for the detection of a goal state. Through a simulation of a reaching task with a concrete BESOM network whose hierarchy is inspired by known cortical anatomy, we demonstrate the model's stable and robust motor learning.
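The division of labor described above — unsupervised learning compressing a large state space into a compact representation that reinforcement learning can then operate on — can be illustrated with a minimal sketch. This is not the paper's BESOM model: the fixed prototype grid below is a hypothetical stand-in for the competitive, unsupervised state abstraction, and tabular Q-learning stands in for the basal-ganglia-style reinforcement learner. All names and parameters here are illustrative assumptions.

```python
import random

random.seed(0)

# Stand-in for the unsupervised abstraction: a fixed 5x5 grid of
# prototype vectors over the continuous 2D "hand position" space.
PROTOTYPES = [(x / 4.0, y / 4.0) for x in range(5) for y in range(5)]
ACTIONS = [(-0.25, 0.0), (0.25, 0.0), (0.0, -0.25), (0.0, 0.25)]
GOAL = (1.0, 1.0)  # target of the reaching task


def discretize(pos):
    """Map a continuous position to its nearest prototype index."""
    return min(range(len(PROTOTYPES)),
               key=lambda i: (PROTOTYPES[i][0] - pos[0]) ** 2 +
                             (PROTOTYPES[i][1] - pos[1]) ** 2)


def step(pos, action):
    """Apply a motor action; small step cost, reward on reaching the goal."""
    nx = min(max(pos[0] + action[0], 0.0), 1.0)
    ny = min(max(pos[1] + action[1], 0.0), 1.0)
    done = abs(nx - GOAL[0]) < 1e-6 and abs(ny - GOAL[1]) < 1e-6
    return (nx, ny), (1.0 if done else -0.01), done


# Tabular Q-values over the *abstracted* states, not raw positions.
Q = [[0.0] * len(ACTIONS) for _ in PROTOTYPES]


def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1):
    for _ in range(episodes):
        pos = (0.0, 0.0)
        for _ in range(50):
            s = discretize(pos)
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            pos, r, done = step(pos, ACTIONS[a])
            s2 = discretize(pos)
            # Standard Q-learning update on the compressed state space.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
            if done:
                break


def greedy_steps():
    """Follow the learned greedy policy; return steps to goal, or None."""
    pos, n = (0.0, 0.0), 0
    while n < 50:
        s = discretize(pos)
        a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        pos, _, done = step(pos, ACTIONS[a])
        n += 1
        if done:
            return n
    return None


train()
print(greedy_steps())  # shortest possible path is 8 steps (4 right, 4 up)
```

In the paper's setting the abstraction is learned jointly with behavior rather than fixed, but the sketch shows why compression matters: the learner maintains values over 25 prototypes instead of an unbounded set of continuous positions.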

Keywords:
Reinforcement learning, Computer science, Artificial intelligence, Machine learning, Unsupervised learning, Action selection, Competitive learning, Motor learning, Artificial neural network, Hierarchy, Neuroscience, Psychology

Metrics

- Cited by: 4
- FWCI (Field-Weighted Citation Impact): 0.59
- References: 33
- Citation Normalized Percentile: 0.68

Topics

- Neural dynamics and brain function (Life Sciences → Neuroscience → Cognitive Neuroscience)
- Visual perception and processing mechanisms (Life Sciences → Neuroscience → Cognitive Neuroscience)
- Functional Brain Connectivity Studies (Life Sciences → Neuroscience → Cognitive Neuroscience)