JOURNAL ARTICLE

Model-Based Reinforcement Learning with Multinomial Logistic Function Approximation

Taehyun Hwang, Min-hwan Oh

Year: 2023 · Journal: Proceedings of the AAAI Conference on Artificial Intelligence · Vol: 37 (7) · Pages: 7971-7979 · Publisher: Association for the Advancement of Artificial Intelligence

Abstract

We study model-based reinforcement learning (RL) for episodic Markov decision processes (MDPs) whose transition probability is parametrized by an unknown transition core with features of state and action. Despite much recent progress in analyzing algorithms in the linear MDP setting, the understanding of more general transition models remains limited. In this paper, we propose a provably efficient RL algorithm for MDPs whose state transition is given by a multinomial logistic model. We show that our proposed algorithm, based on upper confidence bounds, achieves an O(d√(H^3 T)) regret bound, where d is the dimension of the transition core, H is the horizon, and T is the total number of steps. To the best of our knowledge, this is the first model-based RL algorithm with multinomial logistic function approximation with provable guarantees. We also comprehensively evaluate our proposed algorithm numerically and show that it consistently outperforms existing methods, achieving both provable efficiency and superior practical performance.
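The transition model described in the abstract can be illustrated with a minimal sketch. This assumes the standard softmax (multinomial logistic) parametrization, in which the probability of each candidate next state is proportional to the exponentiated inner product of a feature vector with the unknown transition core; the function name, shapes, and example values below are illustrative, not taken from the paper.

```python
import numpy as np

def mnl_transition_probs(features, theta):
    """Multinomial logistic (softmax) transition model.

    features: (S, d) array -- feature vector phi(s, a, s') for each of the
              S candidate next states s' under the current (s, a) pair.
    theta:    (d,) array -- the transition core (unknown to the learner).

    Returns the (S,) vector of transition probabilities
    p(s' | s, a) proportional to exp(phi(s, a, s')^T theta).
    """
    logits = features @ theta
    logits -= logits.max()        # subtract max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Example: 4 candidate next states with a d = 3 dimensional transition core.
rng = np.random.default_rng(0)
phi = rng.normal(size=(4, 3))
theta = rng.normal(size=3)
p = mnl_transition_probs(phi, theta)
```

By construction, the returned vector is a valid probability distribution over next states: every entry is positive and the entries sum to one.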

Keywords:
Reinforcement learning · Markov decision process · Multinomial logistic regression · Function approximation · Regret · Machine learning

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 80
Citation Normalized Percentile: 0.12

Topics

Reinforcement Learning in Robotics
Physical Sciences →  Computer Science →  Artificial Intelligence
Data Stream Mining Techniques
Physical Sciences →  Computer Science →  Artificial Intelligence