LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection (EncDec-AD)
1. The main contribution applies to sensor data from mechanical equipment: an LSTM encoder-decoder model is trained on normal time-series data to reconstruct the input sequence. When anomalous data are fed in at test time, the model produces a high reconstruction error, indicating that the time series is anomalous. Note: the encoder-decoder model includes a scoring mechanism, where a higher anomaly score means the point is more likely to be anomalous.
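The scoring mechanism mentioned above can be sketched as follows. In the EncDec-AD paper, reconstruction-error vectors on normal validation data are fit with a Gaussian N(μ, Σ), and the anomaly score of a new error vector e is the Mahalanobis-style distance a = (e − μ)ᵀΣ⁻¹(e − μ). The function names and the synthetic error data below are illustrative, not from the paper:

```python
import numpy as np

def fit_error_model(errors):
    """Fit a Gaussian N(mu, Sigma) to reconstruction-error vectors
    computed on *normal* validation data (one error vector per step)."""
    mu = errors.mean(axis=0)
    cov = np.cov(errors, rowvar=False)
    # Small ridge keeps the covariance invertible
    return mu, np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def anomaly_score(e, mu, cov_inv):
    """EncDec-AD style score: a = (e - mu)^T Sigma^-1 (e - mu).
    Higher scores mean the window is more likely anomalous."""
    d = e - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(0)
# Synthetic stand-in for reconstruction errors on normal data: small and centred
normal_errors = rng.normal(0.0, 0.1, size=(500, 3))
mu, cov_inv = fit_error_model(normal_errors)

low = anomaly_score(np.array([0.05, 0.0, -0.05]), mu, cov_inv)   # typical error
high = anomaly_score(np.array([1.0, 1.2, 0.9]), mu, cov_inv)     # large error
assert high > low   # large reconstruction error -> higher anomaly score
```

In practice a threshold on the score is chosen on a validation set containing a few labelled anomalies.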
We present multi-step prediction results for gas concentration time series based on the ARMA model, the CHAOS model and the Encoder-Decoder model (single-sensor and multi-sensor) and compare these results. The Encoder-Decoder model provides high robustness in a multi-st...
The most important difference is related to the architecture of both networks: the seq-to-seq model [54] uses a multilayered LSTM, namely four layers for both the encoder and the decoder. In contrast, the proposed LSTM-AE model uses a shallow LSTM, namely one LSTM layer for...
A deep learning model [22] is used to predict traffic flow using an LSTM encoder-decoder architecture, considering historical traffic flow data and weather conditions as inputs to predict future traffic flow. The model comprises an LSTM encoder to capture temporal dependencies in the input sequence...
The LSTM-AE model consists of two main components: the Encoder and the Decoder. The Encoder captures the temporal dependencies and features of the input sequence through multiple LSTM layers, while the Decoder is responsible for reconstructing the original time series data from the hidden ...
Therefore, the hidden state of the last Bi-LSTM cell is the output vector generated by the encoder. Subsequently, this vector is repeated along the time dimension (a repeat vector) to match the original sequence length and fed into the first Bi-LSTM-based hidden layer of the decoder. The layer uses this vector as its first hidden...
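The encoder/decoder mechanics described above can be sketched end to end. The block below is a minimal, untrained NumPy implementation (a unidirectional LSTM cell rather than the Bi-LSTM of the snippet): the encoder's final hidden state is the code, the code is repeated at every decoder step (the repeat-vector pattern), and a linear output layer projects each decoder state back to the input dimension. All weights and data here are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM cell with randomly initialised weights (untrained sketch)."""
    def __init__(self, n_in, n_hidden, rng):
        # One stacked matrix for the input, forget, cell and output gates
        self.W = rng.normal(0, 0.1, size=(4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def encode(cell, seq):
    """Run the encoder over the window; the final hidden state is the code."""
    h = c = np.zeros(cell.n_hidden)
    for x in seq:
        h, c = cell.step(x, h, c)
    return h

def decode(cell, code, T, W_out):
    """Repeat the code T times (repeat vector) and unroll the decoder,
    projecting each hidden state back to the input dimension."""
    h = c = np.zeros(cell.n_hidden)
    out = []
    for _ in range(T):
        h, c = cell.step(code, h, c)  # repeated code is the input at every step
        out.append(W_out @ h)
    return np.stack(out)

rng = np.random.default_rng(0)
T, n_in, n_hidden = 10, 3, 8
enc = LSTMCell(n_in, n_hidden, rng)
dec = LSTMCell(n_hidden, n_hidden, rng)
W_out = rng.normal(0, 0.1, size=(n_in, n_hidden))

seq = rng.normal(size=(T, n_in))          # one window of sensor readings
recon = decode(dec, encode(enc, seq), T, W_out)
assert recon.shape == seq.shape           # decoder reconstructs the full window
```

Training would minimise the reconstruction error ‖seq − recon‖ on normal windows only; the sketch omits backpropagation.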
This work proposes a multimodal fusion system based on a single RGB camera and six inertial sensors for effective human pose estimation. For the unimodal data analysis, two state-of-the-art methods have been adopted; namely the Transformer Inertial Poser network for the inertial sensor data ...
This paper proposes an effective deep learning model for solar power forecasting based on temporal correlation and meteorological knowledge. The model adopts an encoder-decoder architecture with multi-level attention mechanisms and long short-term memory units. The encoder is designed to dynamically extract...
So that Y ≈ N(E(Y)), the LSTM autoencoder is split into two parts, an encoder E(·) and a decoder N(·). To put it another way, the RSSI-based LSTM autoencoder learns an encoding that accurately captures the layout of the input data, as shown in Fig. 4, and a decoding ...
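The composition Y ≈ N(E(Y)) and the idea of "capturing the layout of the input data" can be illustrated with a linear (PCA-style) stand-in for the LSTM pair: E projects onto the leading principal component of the training data and N maps the code back. Points that follow the training layout reconstruct well; points off that layout do not, which is exactly the signal an autoencoder-based detector exploits. The data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data lying close to a 1-D line in 2-D (the "layout" to capture)
t = rng.normal(size=(200, 1))
Y = np.hstack([t, 2 * t]) + rng.normal(0, 0.05, size=(200, 2))
mean = Y.mean(axis=0)

# Linear encoder E(.) / decoder N(.) built from the top principal component
_, _, Vt = np.linalg.svd(Y - mean, full_matrices=False)
E = lambda y: (y - mean) @ Vt[:1].T   # encode: project onto 1-D code
N = lambda z: z @ Vt[:1] + mean       # decode: map code back to 2-D

err = lambda y: np.linalg.norm(y - N(E(y)))   # reconstruction error ||y - N(E(y))||

on_manifold = np.array([1.0, 2.0])    # follows the training layout
off_manifold = np.array([2.0, -1.0])  # violates it -> high reconstruction error
assert err(on_manifold) < err(off_manifold)
```

The LSTM version plays the same role for sequences: E(·) compresses a window into a code, N(·) expands it back, and the reconstruction error flags inputs that break the learned structure.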