11 May 2022

To address the anomaly detection problem in the presence of noisy observations, and to tackle the tuning and efficient-exploration challenges that arise in deep reinforcement learning algorithms, we propose in this paper a soft actor-critic deep reinforcement learning framework. We evaluate the proposed framework in terms of detection accuracy, stopping time, and the total number of samples needed for detection. Via simulation results, we demonstrate the performance of the soft actor-critic algorithm and identify the impact of key parameters, such as the sensing cost, on that performance. Throughout, we compare the performance of the proposed soft actor-critic algorithm with that of the conventional actor-critic algorithm.
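To make the approach concrete, the sketch below shows one update step of a discrete-action soft actor-critic agent as it might be applied to sequential anomaly detection, where the agent either keeps sensing (paying a sensing cost through the reward) or stops and declares a decision. This is a minimal illustration under assumed choices, not the authors' implementation: the observation dimension, network sizes, the three-action space {continue sensing, declare nominal, declare anomaly}, and all hyperparameters are hypothetical.

```python
# Minimal sketch of a discrete-action soft actor-critic (SAC) update for a
# sequential anomaly-detection agent. All names, network sizes, and the
# three-action space are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 8, 3            # assumed observation/action dimensions
GAMMA, ALPHA, LR = 0.99, 0.2, 3e-4   # discount, entropy weight, learning rate

def mlp(out_dim):
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, out_dim))

actor = mlp(N_ACTIONS)                     # outputs action logits
q1, q2 = mlp(N_ACTIONS), mlp(N_ACTIONS)    # twin critics: one Q-value per action
q1_targ = mlp(N_ACTIONS); q1_targ.load_state_dict(q1.state_dict())
q2_targ = mlp(N_ACTIONS); q2_targ.load_state_dict(q2.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=LR)
critic_opt = torch.optim.Adam(list(q1.parameters()) + list(q2.parameters()), lr=LR)

def sac_update(obs, act, rew, next_obs, done):
    """One SAC gradient step on a batch of transitions.
    `rew` is assumed to already include the per-sample sensing cost as a
    negative term, plus a terminal reward/penalty for the final declaration."""
    # --- critic update: soft Bellman backup ---
    with torch.no_grad():
        next_logp = F.log_softmax(actor(next_obs), dim=-1)
        next_pi = next_logp.exp()
        q_next = torch.min(q1_targ(next_obs), q2_targ(next_obs))
        # soft state value: E_pi[Q - alpha * log pi]
        v_next = (next_pi * (q_next - ALPHA * next_logp)).sum(dim=-1)
        target = rew + GAMMA * (1.0 - done) * v_next
    q1_pred = q1(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    q2_pred = q2(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    critic_loss = F.mse_loss(q1_pred, target) + F.mse_loss(q2_pred, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # --- actor update: maximize E_pi[Q - alpha * log pi] ---
    logp = F.log_softmax(actor(obs), dim=-1)
    pi = logp.exp()
    q_min = torch.min(q1(obs), q2(obs)).detach()
    actor_loss = (pi * (ALPHA * logp - q_min)).sum(dim=-1).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # --- Polyak averaging of the target critics ---
    for targ, src in ((q1_targ, q1), (q2_targ, q2)):
        for p_t, p in zip(targ.parameters(), src.parameters()):
            p_t.data.mul_(0.995).add_(0.005 * p.data)
```

The entropy term weighted by ALPHA is what distinguishes the soft actor-critic from the conventional actor-critic baseline mentioned in the abstract: it encourages exploration and reduces sensitivity to tuning, while the sensing cost folded into the reward trades detection delay against the number of samples used.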
