Seen this way, the code really is well written; the blocks are highly reusable. The decoder is initialized as in the Informer reference implementation:

```python
# Decoder initialization
# Note: in each DecoderLayer, the first attention is prob attention with
# mix == True, while the second is full attention with mix == False
self.decoder = Decoder(
    [
        DecoderLayer(
            AttentionLayer(Attn(True, factor, attention_dropout=dropout, output_attention=False),
                           d_model, n_heads, mix=mix),
            AttentionLayer(FullAttention(False, factor, attention_dropout=dropout, output_attention=False),
                           d_model, n_heads, mix=False),
            d_model,
            d_ff,
            dropout=dropout,
            activation=activation,
        )
        for l in range(d_layers)
    ],
    norm_layer=torch.nn.LayerNorm(d_model)
)
```
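For intuition, the decoder can be exercised on its own. A minimal sketch, assuming the module paths of the Informer2020 repo and made-up shapes (batch 32, decoder length 72, encoder length 96):

```python
import torch
from models.attn import FullAttention, ProbAttention, AttentionLayer
from models.decoder import Decoder, DecoderLayer

d_model, n_heads, d_ff, d_layers, factor, dropout = 512, 8, 2048, 2, 5, 0.05

decoder = Decoder(
    [DecoderLayer(
        # masked ProbSparse self-attention over the decoder input (mix=True)
        AttentionLayer(ProbAttention(True, factor, attention_dropout=dropout),
                       d_model, n_heads, mix=True),
        # unmasked full cross-attention against the encoder output (mix=False)
        AttentionLayer(FullAttention(False, factor, attention_dropout=dropout),
                       d_model, n_heads, mix=False),
        d_model, d_ff, dropout=dropout, activation='gelu')
     for _ in range(d_layers)],
    norm_layer=torch.nn.LayerNorm(d_model))

dec_in = torch.randn(32, 72, d_model)    # embedded decoder input
enc_out = torch.randn(32, 96, d_model)   # encoder output
print(decoder(dec_in, enc_out).shape)    # torch.Size([32, 72, 512])
```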
Building on the DQN model, we propose a multi-factor stock trading strategy that combines DQN with Multi-BiGRU and multi-head ProbSparse self-attention. The main contributions are threefold: (1) To characterize stock prices from multiple perspectives, a new multi-factor strategy, including fi...
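The excerpt gives no code for this architecture. A minimal sketch of how such a Q-network might be wired, with standard nn.MultiheadAttention standing in for the ProbSparse variant, and with all layer sizes, the class name, and the action set (buy/hold/sell) being assumptions:

```python
import torch
import torch.nn as nn

class MultiBiGRUQNet(nn.Module):
    """Hypothetical sketch: Multi-BiGRU encoder + multi-head self-attention + DQN head.
    Standard nn.MultiheadAttention stands in for ProbSparse self-attention."""
    def __init__(self, n_factors, hidden=64, gru_layers=2, n_heads=4, n_actions=3):
        super().__init__()
        # stacked bidirectional GRU over the multi-factor time series
        self.bigru = nn.GRU(n_factors, hidden, num_layers=gru_layers,
                            batch_first=True, bidirectional=True)
        # multi-head self-attention over the BiGRU outputs
        # (a ProbSparse variant would subsample the queries here)
        self.attn = nn.MultiheadAttention(2 * hidden, n_heads, batch_first=True)
        # Q-value head: one value per action (assumed buy / hold / sell)
        self.head = nn.Linear(2 * hidden, n_actions)

    def forward(self, x):                 # x: (batch, time, n_factors)
        h, _ = self.bigru(x)              # (batch, time, 2*hidden)
        a, _ = self.attn(h, h, h)         # self-attention across time steps
        return self.head(a[:, -1])        # Q-values from the last time step

q_net = MultiBiGRUQNet(n_factors=10)
q_values = q_net(torch.randn(8, 30, 10))  # shape (8, 3): one Q-value per action
```

In a full DQN setup, this network would be duplicated into online and target copies and trained on replayed (state, action, reward, next-state) transitions.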