The decentralisation and unpredictability of new renewable energy sources require rethinking our energy system. Data-driven approaches, such as reinforcement learning (RL), have emerged as new control strategies for operating these systems, but they have not yet been applied to system design. This paper aims to bridge this gap by studying the use of an RL-based method for the joint design and control of a real-world PV and battery system. The design problem is first formulated as a mixed-integer linear programming (MILP) problem. The optimal MILP solution is then used to evaluate the performance of an RL agent trained in a surrogate environment designed for applying an existing data-driven algorithm. The main difference between the two models lies in their optimization approaches: while MILP finds a solution that minimizes the total cost of a one-year operation given deterministic historical data, RL is a stochastic method that searches for a strategy that is optimal over one week of data, in expectation over all weeks in the historical dataset. Both methods were applied to a toy example using one week of data and to a case study using one year of data. In both cases, the models were found to converge to similar control solutions, but their investment decisions differed. Overall, these outcomes are an initial step in illustrating the benefits and challenges of using RL for the joint design and control of energy systems.
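
For illustration, the following is a minimal sketch of how a joint design-and-control problem of this kind can be written as a MILP, here using the open-source PuLP library. The horizon, load profile, PV yield, cost factors, and the simplified battery model are assumptions made for this example only; they are not the formulation used in the study.

# Minimal illustrative sketch of a joint design-and-control MILP for a
# PV-battery system, written with PuLP. All numbers below are assumed
# placeholder values, not parameters from the paper.
import pulp

T = 24                                             # toy horizon: one day, hourly steps
load = [2.0] * T                                   # assumed load per step [kWh]
pv_per_kwp = [0.0] * 6 + [0.5] * 12 + [0.0] * 6    # assumed PV yield per installed kWp
grid_price, pv_capex, batt_capex, batt_fixed = 0.25, 1.0, 0.4, 5.0  # assumed cost factors

m = pulp.LpProblem("joint_design_and_control", pulp.LpMinimize)

# Design (investment) variables: installed PV capacity, battery capacity, and a
# binary decision on whether to install a battery at all (this integer variable
# is what makes the problem a MILP rather than a plain LP).
pv_cap = pulp.LpVariable("pv_kwp", lowBound=0)
batt_cap = pulp.LpVariable("batt_kwh", lowBound=0)
build_batt = pulp.LpVariable("build_battery", cat="Binary")

# Control (operation) variables: grid import and battery state of charge.
grid = [pulp.LpVariable(f"grid_{t}", lowBound=0) for t in range(T)]
soc = [pulp.LpVariable(f"soc_{t}", lowBound=0) for t in range(T)]

# Objective: investment costs plus grid electricity cost over the horizon.
m += (pv_capex * pv_cap + batt_capex * batt_cap + batt_fixed * build_batt
      + grid_price * pulp.lpSum(grid))

m += batt_cap <= 100 * build_batt                  # capacity only if the battery is built

for t in range(T):
    prev = soc[t - 1] if t > 0 else 0
    # Energy balance: previous storage + PV + grid import covers the load;
    # any surplus is carried over as stored energy.
    m += prev + pv_per_kwp[t] * pv_cap + grid[t] - load[t] == soc[t]
    m += soc[t] <= batt_cap                        # storage limited by installed capacity

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("PV size [kWp]:", pv_cap.value(), "| battery size [kWh]:", batt_cap.value())

In contrast to this deterministic formulation, the RL approach described in the abstract would optimise the same investment and operation decisions by interacting with a surrogate environment and averaging performance over many sampled weeks of data.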