The self-attention mechanism can be used effectively to encode a variable-length sequence into a fixed-length embedding. Inspired by the structured self-attention mechanism proposed in [18] for sentence embedding, we adapt it to improve the speaker embeddings in the x-vector baseline system shown in Fi...
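
As a rough sketch of how such a structured self-attentive pooling layer might look when placed on top of the frame-level layers, consider the snippet below. It follows the formulation of [18] (A = softmax(W2 tanh(W1 H^T)), E = A H); the module name, the attention dimension `d_a`, and the number of attention heads `r` are illustrative assumptions rather than the exact configuration used in our system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredSelfAttentionPooling(nn.Module):
    """Structured self-attentive pooling in the spirit of [18] (sketch).

    Maps a variable-length sequence of frame-level features
    H in R^{T x d_h} to a fixed-length embedding of size r * d_h.
    """

    def __init__(self, d_h, d_a=128, r=1):
        super().__init__()
        self.W1 = nn.Linear(d_h, d_a, bias=False)  # projects frames to the attention space
        self.W2 = nn.Linear(d_a, r, bias=False)    # one score per frame and attention head

    def forward(self, H):
        # H: (batch, T, d_h) frame-level outputs of the preceding layers
        scores = self.W2(torch.tanh(self.W1(H)))        # (batch, T, r)
        A = F.softmax(scores, dim=1)                    # normalize over the time axis
        E = torch.bmm(A.transpose(1, 2), H)             # (batch, r, d_h) weighted means
        return E.flatten(1)                             # fixed-length embedding, (batch, r * d_h)
```

In this sketch the attention-weighted sums over time replace the plain average of a statistics-pooling layer; the resulting fixed-length vector is then fed to the segment-level layers that produce the speaker embedding.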