Summary

Waterflooding optimization is crucial for maximizing oil recovery in mature fields, where fixed injection rates are often suboptimal given reservoir complexity. This study introduces a Multi-Agent Physics-Informed Reinforcement Learning (MAPIRL) framework for waterflooding optimization. The problem is formulated as a Markov decision process in which multiple RL agents interact with a reservoir simulation model and learn optimal injection strategies through an actor-critic architecture. Evaluation based on net present value (NPV) improvement demonstrates MAPIRL's effectiveness, and a comparison with Multi-Objective Particle Swarm Optimization (MOPSO) shows that MAPIRL achieves a higher NPV. In conclusion, MAPIRL offers a scientifically grounded and efficient approach to optimizing waterflooding in mature oil fields, reducing water consumption and costs while maximizing economic benefit. Its capacity to handle complex optimization challenges positions it as a promising tool for the energy industry, with potential applications to other intricate problems.
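To make the formulation concrete, the sketch below shows the kind of actor-critic loop the abstract describes: an agent chooses injection rates, receives an incremental-NPV reward (oil revenue minus water cost), and updates softmax policy preferences with a TD-error critic. The reservoir dynamics, price constants, and state/action discretization here are toy assumptions for illustration only, not the paper's simulator or economics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reservoir MDP: states are coarse depletion levels, actions are
# discrete injection rates. Dynamics and prices are illustrative stand-ins.
N_STATES, N_ACTIONS = 4, 3
RATES = np.array([0.0, 0.5, 1.0])      # normalized injection rates
OIL_PRICE, WATER_COST = 60.0, 10.0     # hypothetical $/unit

def step(state, action):
    """One simulator step: returns (next_state, reward).
    Reward is an incremental NPV term: oil revenue minus water cost."""
    rate = RATES[action]
    # More injection props up production but depletes the reservoir faster.
    oil = (1.0 - 0.2 * state) * (0.3 + 0.7 * rate)
    reward = OIL_PRICE * oil - WATER_COST * rate
    next_state = min(N_STATES - 1, state + (1 if rate > 0.5 else 0))
    return next_state, reward

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def train(episodes=200, horizon=10, gamma=0.95, lr=0.05):
    theta = np.zeros((N_STATES, N_ACTIONS))  # actor: softmax preferences
    v = np.zeros(N_STATES)                   # critic: state values
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            probs = softmax(theta[s])
            a = rng.choice(N_ACTIONS, p=probs)
            s2, r = step(s, a)
            delta = r + gamma * v[s2] - v[s]   # TD error
            v[s] += lr * delta                 # critic update
            grad = -probs
            grad[a] += 1.0                     # grad of log pi(a|s)
            theta[s] += lr * delta * grad      # actor update
            s = s2
    return theta, v

theta, v = train()
```

In the multi-agent setting of the paper, each injector would run such an update against the shared reservoir model; this single-agent sketch only illustrates the actor-critic mechanics.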