Training LSTM for Unsupervised Anomaly Detection Without A Priori Knowledge
Yann Cherdo, Paul de Kerret, Renaud Pawlak
Unsupervised anomaly detection on time series is widespread in industry and an active research topic. Recently, impressive results have been obtained by leveraging the progress of deep learning, in particular through the use of Long Short-Term Memory (LSTM) neural networks. Yet, the latest state-of-the-art unsupervised LSTM-based solutions still require a priori knowledge about normality, as they need to train the model on time series free of anomalies. In contrast, we propose a novel anomaly detector, coined LSTM-Decomposed (LSTM-D), that does not require this normality knowledge. More specifically, we pre-process the time series with a spectral-based information reduction such that the LSTM-based detector receiving the time series becomes less likely to learn the anomaly, and hence miss its detection. We motivate our intuitions through simple examples and verify the performance improvement with respect to state-of-the-art solutions on a reference, publicly available data set.
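To make the pipeline described in the abstract concrete, here is a minimal sketch of the idea: reduce the spectral information of the time series (assumed here to be a simple low-pass FFT truncation), train an LSTM forecaster on the reduced signal without any anomaly labels, and score the original series by its prediction error. The specific reduction, the `keep_ratio` parameter, and the detector architecture are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the LSTM-D idea from the abstract; the exact spectral
# reduction and detector details are assumptions, not the paper's code.
import numpy as np
import torch
import torch.nn as nn


def spectral_reduce(series: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Assumed information reduction: keep only the lowest-frequency FFT
    coefficients so that short, rare anomalies are largely removed from
    the signal the LSTM is trained on."""
    spectrum = np.fft.rfft(series)
    cutoff = max(1, int(len(spectrum) * keep_ratio))
    spectrum[cutoff:] = 0.0
    return np.fft.irfft(spectrum, n=len(series))


class LSTMForecaster(nn.Module):
    """One-step-ahead LSTM predictor used as the anomaly detector."""

    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out)              # predicted next values


def anomaly_scores(series: np.ndarray, epochs: int = 20) -> np.ndarray:
    """Train on the spectrally reduced series (no normality labels needed),
    then score the original series by its one-step prediction error."""
    reduced = spectral_reduce(series)
    model = LSTMForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    x = torch.tensor(reduced[:-1], dtype=torch.float32).view(1, -1, 1)
    y = torch.tensor(reduced[1:], dtype=torch.float32).view(1, -1, 1)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    with torch.no_grad():
        full = torch.tensor(series[:-1], dtype=torch.float32).view(1, -1, 1)
        pred = model(full).squeeze().numpy()
    return np.abs(series[1:] - pred)        # large error -> likely anomaly
```

Because the detector never fits the anomalous high-frequency content during training, a point anomaly in the original series produces a large prediction error and can be flagged by a simple threshold on the returned scores.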