Paper Title
TranAD: Deep Transformer Networks for Anomaly Detection in Multivariate Time Series Data
Paper Authors
Paper Abstract
Efficient anomaly detection and diagnosis in multivariate time-series data is of great importance for modern industrial applications. However, building a system that is able to quickly and accurately pinpoint anomalous observations is a challenging problem. This is due to the lack of anomaly labels, high data volatility, and the demand for ultra-low inference times in modern applications. Despite recent developments in deep learning approaches for anomaly detection, only a few of them can address all of these challenges. In this paper, we propose TranAD, a deep transformer network-based anomaly detection and diagnosis model which uses attention-based sequence encoders to swiftly perform inference with the knowledge of the broader temporal trends in the data. TranAD uses focus score-based self-conditioning to enable robust multi-modal feature extraction and adversarial training to gain stability. Additionally, model-agnostic meta-learning (MAML) allows us to train the model using limited data. Extensive empirical studies on six publicly available datasets demonstrate that TranAD can outperform state-of-the-art baseline methods in detection and diagnosis performance with data- and time-efficient training. Specifically, TranAD increases F1 scores by up to 17% and reduces training times by up to 99% compared to the baselines.
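To make the core idea concrete, the following is a minimal, self-contained sketch, not the paper's actual TranAD implementation: it uses a single (unlearned) scaled dot-product self-attention pass to reconstruct each time step of a multivariate window from the other steps, then flags steps whose reconstruction error is large. All names and the synthetic data here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_reconstruct(window):
    """Reconstruct each time step from the OTHER steps in the window
    via scaled dot-product self-attention (Q = K = V = window; no
    learned projections -- a toy stand-in for a transformer encoder)."""
    T, d = window.shape
    scores = window @ window.T / np.sqrt(d)
    # Mask the diagonal so a step cannot attend to itself; otherwise
    # an anomalous step would trivially reconstruct itself.
    np.fill_diagonal(scores, -np.inf)
    attn = softmax(scores, axis=-1)
    return attn @ window

def anomaly_scores(window):
    """Per-step anomaly score = L2 reconstruction error."""
    recon = attention_reconstruct(window)
    return np.linalg.norm(window - recon, axis=-1)

# Synthetic example: 50 time steps, 3 variables, one injected anomaly.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(50, 3))
W[30] += 5.0  # point anomaly at step 30

scores = anomaly_scores(W)
print(int(np.argmax(scores)))  # index of the injected anomaly
```

A deviating step cannot be explained by its attention-weighted neighbors, so its reconstruction error spikes. The actual model adds learned encoder/decoder weights, focus score-based self-conditioning, adversarial training, and MAML-based meta-learning on top of this reconstruction principle.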