Modeling The Environment In Deep Reinforcement Learning: The Case Of Energy Harvesting Base Stations
Nicola Piovesan, Paolo Dini, Marco Miozzo
In this paper, we focus on the design of energy self-sustainable mobile networks by enabling intelligent energy management that allows the base stations to operate mostly off-grid using renewable energy. We propose a centralized control algorithm based on Deep Reinforcement Learning. The single agent learns how to efficiently balance the energy inflow and expenditure among base stations by observing the environment and interacting with it. In particular, we study the performance achieved by this approach under different representations of the environment. Numerical results demonstrate that choosing the representation variables at a suitable level of abstraction may enable a proper mapping of environment observations into actions, so as to maximize the numerical reward.
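To make the centralized agent-environment loop concrete, the sketch below shows a minimal deep Q-learning setup in which a single agent observes a state vector built from per-base-station quantities and selects a joint on-grid/off-grid configuration. The toy environment, the choice of state variables (battery level, harvested energy, traffic load), the action space, and the reward shaping are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a centralized DRL loop for energy-harvesting
# base stations. The environment model and reward below are assumptions.
import random
import numpy as np
import torch
import torch.nn as nn

N_BS = 3                      # number of energy-harvesting base stations (assumed)
STATE_DIM = 3 * N_BS          # per BS: battery level, harvested energy, traffic load
N_ACTIONS = 2 ** N_BS         # each BS kept on-grid or off-grid (toy action space)

class ToyEHNetwork:
    """Toy stand-in for the energy-harvesting network environment."""
    def reset(self):
        self.state = np.random.rand(STATE_DIM).astype(np.float32)
        return self.state

    def step(self, action):
        # Reward grows with the number of BSs kept off-grid (set bits of `action`),
        # penalized when battery levels (first N_BS state entries) are low.
        off_grid = bin(action).count("1")
        reward = off_grid - 2.0 * np.sum(self.state[:N_BS] < 0.2)
        self.state = np.random.rand(STATE_DIM).astype(np.float32)
        return self.state, float(reward)

# Q-network mapping the chosen environment representation to action values.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
env, gamma, eps = ToyEHNetwork(), 0.95, 0.1

state = env.reset()
for step in range(1000):
    # Epsilon-greedy action selection by the single centralized agent.
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    next_state, reward = env.step(action)

    # One-step temporal-difference update of the Q-network.
    q_pred = q_net(torch.from_numpy(state))[action]
    with torch.no_grad():
        q_target = reward + gamma * q_net(torch.from_numpy(next_state)).max()
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state
```

Changing what goes into the state vector (e.g., raw traces versus aggregated or normalized quantities) is the kind of representation choice whose impact on the learned policy the paper investigates.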