Energy-Efficient Ultra-Dense Network using Deep Reinforcement Learning
Hyungyu Ju, Seungnyun Kim, YoungJoon Kim, Hyojin Lee, Byonghyo Shim
With the explosive growth in mobile data traffic, pursuing energy efficiency has become one of the key challenges for next-generation communication systems.
In recent years, an approach that reduces the energy consumption of base stations (BSs) by selectively turning them off, referred to as the sleep mode technique, has been suggested. However, due to macro-cell-oriented network operation and the computational overhead, this approach has not been very successful in the past.
In this paper, we propose an approach to determine the BS active/sleep modes of an ultra-dense network (UDN) using deep reinforcement learning (DRL).
A key ingredient of the proposed scheme is an action elimination network that reduces the large action space of active/sleep mode selection.
Numerical results show that the proposed scheme significantly reduces the energy consumption of the UDN while satisfying the quality of service (QoS) requirement of the network.
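
As a rough illustration of the action elimination idea described in the abstract, the sketch below (ours, not the authors' code) pairs a Q-network with an auxiliary elimination network that predicts, for each active/sleep configuration, the probability of a QoS violation; configurations flagged as infeasible are masked out before the greedy action selection. All names, dimensions, and thresholds here are illustrative assumptions.

# Hypothetical sketch (not the paper's implementation): action elimination
# for BS active/sleep control. With N BSs, the raw action space has 2^N
# on/off configurations; an auxiliary elimination network predicts which
# configurations would violate the QoS constraint and masks them out
# before the Q-network's argmax.
import torch
import torch.nn as nn

N_BS = 8                       # assumed number of small-cell BSs
N_ACTIONS = 2 ** N_BS          # every active/sleep configuration
STATE_DIM = 3 * N_BS           # e.g. per-BS load, rate, channel quality

q_net = nn.Sequential(         # Q-network: state -> Q-value per action
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_ACTIONS),
)
elim_net = nn.Sequential(      # elimination network: state -> probability
    nn.Linear(STATE_DIM, 256), nn.ReLU(),  # that each action violates QoS
    nn.Linear(256, N_ACTIONS), nn.Sigmoid(),
)

def select_action(state: torch.Tensor, threshold: float = 0.5) -> int:
    """Greedy action over the QoS-feasible subset of the action space."""
    with torch.no_grad():
        q = q_net(state)                          # shape: (N_ACTIONS,)
        p_violate = elim_net(state)               # shape: (N_ACTIONS,)
        q = q.masked_fill(p_violate > threshold, float("-inf"))
        if torch.isinf(q).all():                  # fall back to unmasked
            q = q_net(state)                      # Q-values if everything
        return int(q.argmax())                    # was eliminated

state = torch.randn(STATE_DIM)                    # toy network observation
action = select_action(state)
active = [(action >> i) & 1 for i in range(N_BS)] # decode per-BS mode bits
print(f"action {action}: active/sleep pattern {active}")

In a setup like this, pruning infeasible configurations before the argmax is what keeps the exponential action space tractable as the BS density grows; the Q-network only has to discriminate among actions that already satisfy the QoS constraint.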