
Hierarchical Caching Via Deep Reinforcement Learning

Gang Wang, Georgios B. Giannakis, Alireza Sadeghi

Length: 15:29
04 May 2020

Next-generation wireless and wireline networks, including the Internet, cellular, and content delivery networks, are expected to serve user file requests proactively. To this end, by storing anticipated popular contents during off-peak periods and fetching them to end users during on-peak instances, these networks smooth out load fluctuations on the back-haul links. In this context, many practical networks contain a parent caching node connected to multiple leaf nodes that serve user file requests. To model the two-way interactive influence between caching decisions at the parent and leaf nodes, a reinforcement learning framework is put forth in this work. Furthermore, to obtain a scalable algorithm that can effectively handle the curse of dimensionality, a deep reinforcement learning approach is developed. Our novel caching policy relies on a deep Q-network that endows the parent node with the ability to learn and adapt to the unknown policies of the leaf nodes as well as to the spatio-temporal dynamics of file requests; this results in remarkable caching performance, as corroborated through numerical tests.
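As a rough illustration of the kind of policy the abstract describes, the sketch below implements a generic deep Q-network agent in PyTorch. It is not the authors' implementation: the state encoding (recent per-file request counts seen at the parent node), the action space (choosing one file to place in the parent cache), the reward signal, and all names and hyperparameters (CachingDQN, td_update, layer widths) are illustrative assumptions.

```python
# Minimal sketch of a DQN-based parent-node caching policy.
# ASSUMPTIONS (not from the paper): state = vector of recent per-file
# request counts at the parent node; action = index of one file to cache;
# reward = e.g., cache hits minus back-haul fetching cost.
import random

import torch
import torch.nn as nn

NUM_FILES = 50   # catalog size (placeholder)
GAMMA = 0.99     # discount factor (placeholder)


class CachingDQN(nn.Module):
    """Q-network mapping observed request statistics to per-file cache values."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_action(q_net: CachingDQN, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice of which file the parent node caches next."""
    if random.random() < epsilon:
        return random.randrange(NUM_FILES)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def td_update(q_net, target_net, optimizer,
              states, actions, rewards, next_states) -> float:
    """One temporal-difference step on a replay batch (s, a, r, s')."""
    # Q(s, a) for the actions actually taken.
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target from a slowly updated target network.
    with torch.no_grad():
        target = rewards + GAMMA * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a hierarchical setting, the leaf nodes' (unknown) caching decisions shape the request stream the parent node observes, so their influence enters this sketch only implicitly through the state; the learn-and-adapt behavior the abstract highlights comes from the Q-network tracking those non-stationary request dynamics.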
