Heating, Ventilation, and Air Conditioning (HVAC) systems consume a large share of energy, accounting for roughly 40% of total building energy use. Indoor temperatures are commonly held within narrow limits, which increases energy use. Measurements from office buildings in Switzerland showed that the air temperature was relatively steady for most hours and higher than that prescribed by Swiss building standards. Designing energy-efficient building thermal control policies that reduce HVAC energy use while maintaining a dynamic indoor environment is therefore essential; studies also suggest that such an environment may be healthier for the human body. Implementing such a control policy is challenging, however, because it must balance the energy efficiency of the HVAC system, a dynamic indoor environment, and occupants' thermal acceptability. Optimizing the dynamic and stochastic energy demand with conventional control techniques is difficult, and the problem becomes even harder once the requirements on the dynamic indoor environment and thermal acceptability of occupants are introduced.

A Reinforcement Learning (RL)-based framework is proposed to tackle this complexity for energy optimization and healthy thermal environment control in buildings. Deep deterministic policy gradient (DDPG), a deep RL algorithm with experience replay, has performed excellently in many continuous control tasks. In building control, the predominant control variables, temperature, humidity, and air speed, are all continuous, so DDPG is well suited to this problem. The goal is to create an intelligent thermostat that can accurately control the indoor temperature based on data recorded from the indoor environment and the human body. To this end, a model that predicts human-body energy expenditure from more easily measurable physiological and environmental parameters has been developed.
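The two DDPG ingredients named above, experience replay and continuous deterministic actions, can be sketched schematically. This is a minimal illustration, not the authors' implementation: the state layout, dimensions, and linear "networks" standing in for the deep actor and its target are all assumptions made for brevity.

```python
import random
from collections import deque

import numpy as np

# Hypothetical state: [indoor temperature, humidity, air speed];
# action: one continuous control signal (e.g. a setpoint adjustment).
STATE_DIM, ACTION_DIM = 3, 1
TAU = 0.005  # soft-update rate for the target network

rng = np.random.default_rng(0)

# Linear stand-ins for the actor and its target network (DDPG uses deep nets).
actor_weights = rng.normal(size=(ACTION_DIM, STATE_DIM))
target_weights = actor_weights.copy()

replay_buffer = deque(maxlen=10_000)  # experience replay: (s, a, r, s') tuples

def act(state, weights, noise_scale=0.1):
    """Deterministic policy plus exploration noise, as DDPG perturbs actions."""
    return weights @ state + noise_scale * rng.normal(size=ACTION_DIM)

def soft_update(target, online, tau=TAU):
    """Polyak averaging: target <- tau * online + (1 - tau) * target."""
    return tau * online + (1 - tau) * target

# One interaction step with a toy environment transition (random, for illustration).
state = rng.normal(size=STATE_DIM)
action = act(state, actor_weights)
reward, next_state = -abs(float(action.sum())), rng.normal(size=STATE_DIM)
replay_buffer.append((state, action, reward, next_state))

# A real learning step would sample a minibatch and update actor/critic by gradient
# descent; afterwards the target network slowly tracks the online one:
batch = random.sample(replay_buffer, k=min(64, len(replay_buffer)))
target_weights = soft_update(target_weights, actor_weights)
```

Because the policy is deterministic, exploration comes only from the added action noise, and the slow target update keeps the (omitted) critic's learning targets stable.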
The prediction models are based on long short-term memory (LSTM) networks. The results show that they provide a good level of prediction accuracy during both low- and medium-intensity activities, with the MAPE mostly lying in the range of 5-20%. Building on these algorithms, the DIET Controller, implemented in Python, was first evaluated in a simulation environment coupled to EnergyPlus through the Functional Mock-up Unit (FMU) interface. Co-simulation results show that the DIET Controller can reduce HVAC energy use by about 40% compared to a conventional rule-based controller while creating a dynamic indoor environment.

The DIET Controller was subsequently tested in real operation in the ICE climate chamber. The experimental setup consisted of a single zone equipped with multiple environmental sensors providing real-time input to the DIET Controller, and experiments typically ran over 24 hours with thermal dummies emulating the heat gains from occupants. Integrating the DRL-based control framework with the existing HVAC system was challenging; the BACnet protocol facilitated this integration. In real operation, the DIET Controller reduced energy use by 28-64% compared to a rule-based control and created a dynamic indoor environment for 96% of occupied hours. These results provide evidence that the DIET Controller can be an effective method for controlling HVAC systems in real-world operation.
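The reported accuracy metric, mean absolute percentage error (MAPE), is straightforward to compute. The sketch below uses made-up toy values, not the study's data, purely to show the calculation behind a figure like "MAPE in the 5-20% range".

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent; y_true must be nonzero."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Toy energy-expenditure series in watts (illustrative values only).
measured  = [95.0, 110.0, 120.0, 130.0]
predicted = [100.0, 104.0, 126.0, 122.0]

error = mape(measured, predicted)  # about 5.5% for these toy numbers
```

A per-activity breakdown (low vs. medium intensity) would simply apply the same function to each subset of the series.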