  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:16:45
10 Jun 2021

Wireless edge caching is an important strategy for meeting the demands of next-generation wireless systems. Recent studies have indicated that, in a network of small base stations (SBSs), joint content placement via reinforcement learning improves cache-hit performance, since content requests are correlated across SBSs and files. In this paper, we investigate multi-agent reinforcement learning (MARL) and identify four cooperation scenarios: full cooperation (S1), episodic cooperation (S2), distributed cooperation (S3), and independent operation (no cooperation). MARL algorithms are presented for each scenario. Simulation results for averaged normalized cache hits show that cooperation with a single neighbor (S3) brings performance significantly closer to that of full cooperation (S1). Scenario S2 demonstrates the importance of frequent cooperation when the level of cooperation is high, which in turn depends on the number of SBSs.
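To make the caching setup concrete, the following is a minimal toy sketch of the independent-operation (no-cooperation) baseline, not the paper's MARL algorithms: each SBS runs its own epsilon-greedy agent that tracks an exponentially weighted estimate of file popularity and caches the top-ranked files. All parameters (number of SBSs, catalogue size, cache capacity, Zipf-like popularity) are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

N_SBS = 4        # number of small base stations (assumed)
N_FILES = 20     # catalogue size (assumed)
CACHE_SIZE = 4   # cache capacity per SBS, in files (assumed)
EPS = 0.1        # exploration rate
ALPHA = 0.1      # learning rate for the popularity estimate
T = 5000         # number of time slots

# Zipf-like popularity, identical across SBSs (requests are correlated)
weights = [1.0 / (r + 1) for r in range(N_FILES)]

def draw_request():
    """Sample one file request from the popularity distribution."""
    return random.choices(range(N_FILES), weights=weights)[0]

# Per-SBS popularity estimate for each file (independent agents)
Q = [[0.0] * N_FILES for _ in range(N_SBS)]

hits = 0
for t in range(T):
    for s in range(N_SBS):
        # Epsilon-greedy placement: usually cache the files currently
        # estimated most popular, occasionally explore a random cache.
        if random.random() < EPS:
            cache = random.sample(range(N_FILES), CACHE_SIZE)
        else:
            ranked = sorted(range(N_FILES), key=lambda f: Q[s][f], reverse=True)
            cache = ranked[:CACHE_SIZE]

        req = draw_request()
        if req in cache:
            hits += 1

        # EWMA update of the request indicator: the requested file moves
        # toward 1, all other files decay toward 0.
        for f in range(N_FILES):
            target = 1.0 if f == req else 0.0
            Q[s][f] += ALPHA * (target - Q[s][f])

# Averaged normalized cache hits over all SBSs and time slots
avg_hit = hits / (T * N_SBS)
print(f"average cache-hit rate: {avg_hit:.3f}")
```

Each agent here learns only from its own local requests; the cooperative scenarios (S1–S3) would additionally share observations or value estimates across SBSs, which is precisely the benefit the paper quantifies.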

Chairs:
Chang Yoo
