MultiHeadAttention should support two types of masks:

1. A padding mask, which hides padded key positions so they are ignored when attention weights are computed.
2. An attention mask, which prevents certain positions from attending to others (e.g. a causal mask in a decoder).

The current implementation seems to support only the first option, the padding mask.
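For illustration, here is a minimal sketch of how the two mask types differ and can be applied together. It uses PyTorch's `nn.MultiheadAttention` purely as a reference API, since that layer exposes both a `key_padding_mask` (option 1) and an `attn_mask` (option 2); the implementation discussed here may use different names or shapes.

```python
import torch
import torch.nn as nn

# Reference sketch using PyTorch's nn.MultiheadAttention,
# which accepts both mask types in its forward() call.
batch, seq_len, d_model, n_heads = 2, 5, 16, 4
mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

x = torch.randn(batch, seq_len, d_model)

# 1) Padding mask: True marks padded key positions to ignore.
#    Here the last two tokens of the second sequence are padding.
key_padding_mask = torch.zeros(batch, seq_len, dtype=torch.bool)
key_padding_mask[1, 3:] = True

# 2) Attention mask: a causal mask that blocks position i from
#    attending to future positions j > i (True = blocked).
attn_mask = torch.triu(
    torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1
)

out, weights = mha(
    x, x, x,
    key_padding_mask=key_padding_mask,  # padding mask (option 1)
    attn_mask=attn_mask,                # attention mask (option 2)
)
print(out.shape)  # torch.Size([2, 5, 16])
```

Note that the two masks serve different purposes and are typically combined: the padding mask varies per example in the batch, while the attention mask (such as the causal mask above) is usually the same for every example.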