"Thesechallenges stem from a lack of explainability, leading to c ompromised accuracy and diminished trustworthiness.To address this issue, this paper proposes an explainable neural network model, the Attention-SP-LSTM-FIG, s pecifically designed for productivity prediction in aircraft final assembly ...
Fig. 1 Receptive field of Node A. The basic principle of GCN is to learn the representations of each node by aggregating the features of its first-order neighbours through a parameterized learning mechanism. The receptive field of each node includes the immediate one-hop neighbours...
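As an illustration of this aggregation principle, here is a minimal PyTorch sketch of a single graph-convolution layer in which each node combines its own features with those of its one-hop neighbours through a normalized adjacency matrix and a learned weight matrix; the layer name, shapes, and the toy graph are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: each node aggregates its 1-hop neighbours."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # x:   (N, in_dim) node features
        # adj: (N, N) binary adjacency matrix without self-loops
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)   # add self-loops
        deg = a_hat.sum(dim=1)                                    # node degrees
        d_inv_sqrt = torch.diag(deg.pow(-0.5))                    # D^{-1/2}
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt                # symmetric normalization
        return torch.relu(norm_adj @ self.weight(x))              # aggregate, then transform

# toy usage: 4 nodes, 3 input features, 2 output features
x = torch.randn(4, 3)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
out = GCNLayer(3, 2)(x, adj)   # out has shape (4, 2)
```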
There is a clear accuracy gap between the LSTM encoder and the BLSTM encoder because the latter uses information from the whole utterance to generate the encoder output. To narrow this gap, a natural idea is to use future context frames to generate more informative encoder output with ...
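A minimal sketch of this idea, assuming frame-level acoustic features and a fixed lookahead window: each frame is spliced with a few future frames before being fed to a unidirectional LSTM encoder, so the encoder output carries limited right context without waiting for the whole utterance. The function name, window size, and feature dimensions are illustrative assumptions.

```python
import torch

def add_future_context(frames, context=3):
    """Concatenate each frame with its next `context` frames (zero-padded at the end),
    so a unidirectional LSTM encoder sees limited future information."""
    # frames: (T, D) acoustic feature sequence
    T, D = frames.shape
    padded = torch.cat([frames, frames.new_zeros(context, D)], dim=0)   # (T+context, D)
    # stack frame t with frames t+1 .. t+context -> (T, (context+1)*D)
    stacked = torch.cat([padded[i:i + T] for i in range(context + 1)], dim=1)
    return stacked

frames = torch.randn(100, 40)           # 100 frames of 40-dim features
inputs = add_future_context(frames, 3)  # (100, 160): each step carries 3 future frames
encoder = torch.nn.LSTM(input_size=160, hidden_size=256)
out, _ = encoder(inputs.unsqueeze(1))   # (100, 1, 256) unidirectional encoder output
```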
Then, we vary the number of graph convolution steps K from 1 to 4 on the three datasets. The results are shown in Fig. 6. We can see that the performance first improves and then degrades as the graph convolution step increases, with the best performance at K = 2 on all datasets. The cause of the above is that a low...
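For illustration, a hedged sketch of what varying K corresponds to in code: stacking K propagation steps so that each node's receptive field grows to its K-hop neighbourhood. The class name, the simple mean aggregation, and the toy graph are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class KStepGCN(nn.Module):
    """Stack K propagation steps; node receptive fields grow to K hops."""
    def __init__(self, dim, k_steps):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(k_steps))

    def forward(self, x, adj):
        # row-normalized adjacency with self-loops: a simple mean over neighbours
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        prop = a_hat / a_hat.sum(dim=1, keepdim=True)
        for layer in self.layers:            # K propagation steps
            x = torch.relu(layer(prop @ x))
        return x

x, adj = torch.randn(5, 16), (torch.rand(5, 5) > 0.6).float()
for k in range(1, 5):                        # vary K from 1 to 4, as in the experiment
    out = KStepGCN(16, k)(x, adj)            # larger K = wider receptive field
```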
The CNN-LSTM-Attention model was compared with the LSTM model (Fig. 5); the meteorological data and baseflow data on and before the Nth day were used as inputs to simulate and predict the runoff on the Nth day. The results show that the simulation and prediction...
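A minimal sketch, assuming daily arrays of meteorological features, baseflow, and runoff, of how inputs covering the Nth day and the preceding days can be assembled into sliding windows for such a model; the lookback length and variable names are illustrative assumptions.

```python
import numpy as np

def build_samples(met, baseflow, runoff, lookback=7):
    """For each day N, use meteorological and baseflow data from days
    N-lookback+1 .. N as input and the day-N runoff as the target."""
    # met: (days, n_met_features), baseflow: (days,), runoff: (days,)
    features = np.column_stack([met, baseflow])          # (days, n_met_features + 1)
    X, y = [], []
    for n in range(lookback - 1, len(runoff)):
        X.append(features[n - lookback + 1: n + 1])      # window ending on day N
        y.append(runoff[n])                              # runoff on day N
    return np.stack(X), np.array(y)                      # X: (samples, lookback, n_features)

met = np.random.rand(365, 4)        # e.g. precipitation, temperature, humidity, wind
baseflow = np.random.rand(365)
runoff = np.random.rand(365)
X, y = build_samples(met, baseflow, runoff, lookback=7)  # X: (359, 7, 5), y: (359,)
```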
... a more clear version of the eye region into account, as Fig. 1 illustrates. We define the global reward for reinforcement learning by the overall performance of the super-resolved face, ... 2. Related Work. Face Hallucination and Image Super-Resolution. The face hallucination problem is a special case of image super-
As illustrated in Fig. 1, we first need to conduct grid-based map segmentation to generate the crowd-flow image used as the input of the overall network. For TaxiBJ, we used the same settings as in [1], including the definition of the two kinds of crowd flows and the crowd flow mat...
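As a hedged illustration of grid-based map segmentation, the sketch below bins origin–destination point pairs into an I×J grid and counts the two kinds of flows (inflow and outflow) per cell; the grid resolution, region bounds, and trip-record format are assumptions rather than the settings of [1].

```python
import numpy as np

def crowd_flow_matrices(trips, bounds, grid=(32, 32)):
    """Count inflow and outflow per grid cell from (origin, destination) point pairs.
    trips:  array of shape (n, 4) with rows (lon_o, lat_o, lon_d, lat_d)
    bounds: (lon_min, lon_max, lat_min, lat_max) of the study region
    """
    lon_min, lon_max, lat_min, lat_max = bounds
    I, J = grid

    def to_cell(lon, lat):
        i = np.clip(((lat - lat_min) / (lat_max - lat_min) * I).astype(int), 0, I - 1)
        j = np.clip(((lon - lon_min) / (lon_max - lon_min) * J).astype(int), 0, J - 1)
        return i, j

    inflow, outflow = np.zeros(grid), np.zeros(grid)
    oi, oj = to_cell(trips[:, 0], trips[:, 1])     # origin cells
    di, dj = to_cell(trips[:, 2], trips[:, 3])     # destination cells
    np.add.at(outflow, (oi, oj), 1)                # trips leaving each cell
    np.add.at(inflow, (di, dj), 1)                 # trips entering each cell
    return np.stack([inflow, outflow])             # (2, I, J) crowd-flow image

trips = np.random.rand(1000, 4) * [0.4, 0.3, 0.4, 0.3] + [116.2, 39.8, 116.2, 39.8]
flows = crowd_flow_matrices(trips, bounds=(116.2, 116.6, 39.8, 40.1))   # (2, 32, 32)
```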
Recent studies have shown that deep learning methods, especially long short-term memory (LSTM) models, achieve good results in short-term traffic flow prediction. Furthermore, the attention mechanism can assign weights that distinguish the importance of different steps in the traffic time sequence, thereby further ...
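A minimal PyTorch sketch of this combination, assuming a multivariate traffic sequence: an LSTM encodes the past intervals and a small attention layer assigns a weight to each time step before the prediction head; the layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """LSTM encoder with temporal attention that weights each time step
    of the traffic-flow sequence before the final prediction."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)      # one attention score per time step
        self.head = nn.Linear(hidden, 1)       # regression head for the next flow value

    def forward(self, x):
        # x: (batch, time, n_features)
        h, _ = self.lstm(x)                            # (batch, time, hidden)
        weights = torch.softmax(self.score(h), dim=1)  # (batch, time, 1), sums to 1 over time
        context = (weights * h).sum(dim=1)             # attention-weighted summary
        return self.head(context).squeeze(-1)          # predicted traffic flow

model = AttentionLSTM(n_features=3)
flow = model(torch.randn(8, 12, 3))   # 8 sequences of 12 past intervals, 3 features each
```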
The GLLA model consists of three main modules, as depicted in Fig. 1. The first module includes a hyperbolic embedding layer, collaborative graph learning layer, and bi-LSTM layer. These components extract hidden features from medical codes, patients’ diagnoses, and admission durations. The secon...
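As a rough, simplified sketch of how the first module's components might be composed (not the GLLA implementation): medical-code embeddings, a simple graph-aggregation layer over a code co-occurrence graph, and a bi-LSTM over the admission sequence with durations appended. Ordinary Euclidean embeddings stand in for the hyperbolic embedding layer, and all names, shapes, and toy inputs are assumptions.

```python
import torch
import torch.nn as nn

class VisitEncoder(nn.Module):
    """Simplified stand-in for the first module: code embeddings, a graph
    aggregation layer over a code co-occurrence graph, and a bi-LSTM over
    the sequence of admissions (Euclidean, not hyperbolic)."""
    def __init__(self, n_codes, emb_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_codes, emb_dim)            # medical-code embeddings
        self.graph_proj = nn.Linear(emb_dim, emb_dim)          # simplified graph learning layer
        self.bilstm = nn.LSTM(emb_dim + 1, hidden, batch_first=True, bidirectional=True)

    def forward(self, codes, adj, durations):
        # codes:     (batch, visits, codes_per_visit) integer code ids
        # adj:       (n_codes, n_codes) code co-occurrence adjacency
        # durations: (batch, visits) admission durations
        e = self.embed.weight                                   # (n_codes, emb_dim)
        prop = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalized aggregation
        e = torch.relu(self.graph_proj(prop @ e))               # neighbour-informed code vectors
        visit = e[codes].mean(dim=2)                            # (batch, visits, emb_dim) per-visit summary
        x = torch.cat([visit, durations.unsqueeze(-1)], dim=-1) # append admission duration
        h, _ = self.bilstm(x)                                   # (batch, visits, 2*hidden)
        return h

enc = VisitEncoder(n_codes=100)
codes = torch.randint(0, 100, (4, 5, 8))                        # 4 patients, 5 visits, 8 codes each
adj = (torch.rand(100, 100) > 0.9).float()
h = enc(codes, adj, durations=torch.rand(4, 5))                 # (4, 5, 128)
```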