      encoder_decoder_attention_bias: Bias and mask weights for encoder-decoder
        attention. [batch_size, input_length]

    Raises:
      ValueError: If encoder type not found.
    """
    inputs = common_layers.flatten4d3d(inputs)
    encoder_input, self_attention_bias, encoder_decoder_attention_bias = (
        transformer.t...
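The encoder-decoder attention bias documented above is typically derived from the input's padding pattern: padded positions receive a large negative value so that softmax attention gives them effectively zero weight. Below is a minimal NumPy sketch of that idea (not the actual library implementation; the function name and the rule "a position is padding if its embedding is all zeros" are illustrative assumptions). It returns a bias with the documented shape [batch_size, input_length].

```python
import numpy as np

def attention_bias_ignore_padding(inputs):
    """Build an attention bias of shape [batch_size, input_length].

    Positions whose embedding vector is entirely zero are treated as
    padding and assigned a large negative bias (-1e9); real positions
    get a bias of 0. Adding this bias to attention logits before the
    softmax drives the attention weights on padding toward zero.
    """
    # inputs: [batch_size, input_length, hidden_depth]
    padding = np.all(inputs == 0, axis=-1).astype(np.float32)
    return padding * -1e9  # [batch_size, input_length]

# Usage: batch of 2 sequences of length 4, depth 3;
# the last position of the first sequence is padding.
x = np.ones((2, 4, 3), dtype=np.float32)
x[0, 3] = 0.0
bias = attention_bias_ignore_padding(x)
print(bias.shape)   # (2, 4)
print(bias[0])      # [ 0.e+00  0.e+00  0.e+00 -1.e+09]
```

In practice such a bias is broadcast against the attention-logits tensor (e.g. reshaped to [batch_size, 1, 1, input_length]) so every decoder query sees the same mask over encoder positions.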