JOURNAL ARTICLE

Optimizing Waterflooding in Multi-Agent Systems through Decentralized Actor-Centralized Critic Reinforcement Learning

Abstract

Waterflooding optimization is crucial for maximizing oil recovery in mature fields, where fixed injection rates may be suboptimal due to reservoir complexity. This study introduces a Multi-Agent Physics-Informed Reinforcement Learning (MAPIRL) framework for waterflooding optimization. The problem is formulated as a Markov decision process in which multiple RL agents interact with a reservoir simulation model and learn optimal injection strategies through an actor-critic architecture. Evaluation based on net present value (NPV) improvement demonstrates MAPIRL's effectiveness, and a comparison with Multi-Objective Particle Swarm Optimization (MOPSO) shows MAPIRL achieving higher NPV. In conclusion, MAPIRL offers an accurate and efficient approach for optimizing waterflooding in mature oil fields, reducing water consumption and costs while maximizing economic benefit. Its capacity to handle complex optimization problems positions it as a promising tool for the energy industry.
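The article itself does not include code, but the decentralized-actor/centralized-critic layout named in the title can be sketched in a few lines. The sketch below is illustrative only: each injection well gets its own actor that maps a local observation to an injection rate, while a single critic, used only during training, scores the joint observation-action vector. All class names, dimensions, and the rate bound are hypothetical, and the linear actors/critic stand in for the neural networks the paper would use.

```python
import numpy as np

rng = np.random.default_rng(0)

class DecentralizedActor:
    """One actor per injection well: local observation -> injection rate."""
    def __init__(self, obs_dim, rate_max=500.0):
        # linear policy weights; rate_max is an illustrative bound (e.g. bbl/day)
        self.w = rng.normal(scale=0.1, size=obs_dim)
        self.rate_max = rate_max

    def act(self, obs):
        # sigmoid squash keeps the injection rate in [0, rate_max]
        return self.rate_max / (1.0 + np.exp(-self.w @ obs))

class CentralizedCritic:
    """Sees every agent's observation and action (training-time only)."""
    def __init__(self, joint_dim):
        self.w = rng.normal(scale=0.1, size=joint_dim)

    def q_value(self, joint_obs, joint_actions):
        x = np.concatenate([joint_obs, joint_actions])
        return float(self.w @ x)

n_agents, obs_dim = 3, 4
actors = [DecentralizedActor(obs_dim) for _ in range(n_agents)]
critic = CentralizedCritic(n_agents * obs_dim + n_agents)

# toy per-well observations (e.g. local pressures/saturations from the simulator)
obs = rng.normal(size=(n_agents, obs_dim))
actions = np.array([actor.act(o) for actor, o in zip(actors, obs)])
q = critic.q_value(obs.ravel(), actions)
```

At execution time only the per-well actors are needed, so each well can act on local measurements alone; the centralized critic exists purely to stabilize training, which is the usual motivation for this architecture in multi-agent RL.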

Keywords:
Reinforcement learning; Computer science; Multi-agent system; Distributed computing; Artificial intelligence

Metrics

- Cited By: 4
- FWCI (Field Weighted Citation Impact): 2.56
- Refs: 0
- Citation Normalized Percentile: 0.85

Topics

Reinforcement Learning in Robotics
Physical Sciences → Computer Science → Artificial Intelligence
