DISSERTATION

Multi-objective Path Finding Using Reinforcement Learning

Abstract

Path finding is a widely studied subject in computer science. The path-finding problem is defined as the discovery and plotting of an optimal route between two points on a plane. Most existing algorithms that solve this problem are static, rely heavily on prior knowledge of the environment, and require the environment to be deterministic. In real-world applications, however, the environment is often unknown in advance and stochastic, with several conflicting objectives; in such cases the aforementioned algorithms fail to produce effective results. In this project, we study and apply a reinforcement learning approach to the multi-objective path-finding problem: Voting Q-Learning (VoQL), a model-free, on-policy learning algorithm. A set of optimal policies is determined with the help of VoQL, which uses voting methods borrowed from the field of social choice theory for action selection. In addition to working with the existing voting methods for VoQL, the performance of additional voting methods is studied and evaluated for the first time.
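The abstract does not spell out how a social-choice voting rule drives action selection, so the following is an illustrative sketch only, not the dissertation's actual VoQL implementation. It assumes one Q-table per objective and uses the Borda count, a standard social-choice rule, to aggregate each objective's ranking of the available actions; the function name `borda_action` and the per-objective Q-table layout are assumptions for this example.

```python
import numpy as np

def borda_action(q_tables, state):
    """Pick an action by Borda count over per-objective Q-value rankings.

    q_tables: list of arrays of shape (n_states, n_actions), one per
    objective. Each objective "votes" by ranking the actions according
    to its Q-values; ranks are summed across objectives and the action
    with the highest total Borda score is selected.
    """
    n_actions = q_tables[0].shape[1]
    scores = np.zeros(n_actions)
    for q in q_tables:
        # argsort of argsort yields each action's rank (0 = worst,
        # n_actions - 1 = best) under this objective's Q-values
        ranks = np.argsort(np.argsort(q[state]))
        scores += ranks
    return int(np.argmax(scores))

# Toy example: one state, three actions, two conflicting objectives.
q_time = np.array([[1.0, 0.5, 0.0]])  # objective 1 prefers action 0
q_risk = np.array([[0.0, 1.0, 0.5]])  # objective 2 prefers action 1
chosen = borda_action([q_time, q_risk], state=0)
```

Here action 1 wins: it is ranked first by the second objective and second by the first, giving it the highest combined Borda score, even though neither objective alone would make it the greedy choice. Other voting rules (plurality, approval, Copeland) could be slotted into the same aggregation step.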

Keywords:
Reinforcement learning, Computer science, Voting, Path (computing), Set (abstract data type), Field (mathematics), Mathematical optimization, Selection (genetic algorithm), Artificial intelligence, Action selection, Machine learning, Theoretical computer science, Mathematics

Metrics

Cited by: 7
FWCI (Field-Weighted Citation Impact): 0.00
References: 21

Topics

Transportation and Mobility Innovations (Physical Sciences → Engineering → Automotive Engineering)
Optimization and Search Problems (Physical Sciences → Computer Science → Computer Networks and Communications)
Auction Theory and Applications (Social Sciences → Decision Sciences → Management Science and Operations Research)