
Attention Mechanism Theory

10 questions · Difficulty 4-9 · 8 intermediate, 2 advanced
Intermediate (4/10) · conceptual
In the transformer self-attention mechanism, why are the attention scores divided by the square root of the key dimension before applying softmax?
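The answer turns on a variance argument: if query and key components have unit variance, a dot product over d_k dimensions has variance roughly d_k, so unscaled logits grow with dimension and push softmax into a near one-hot, vanishing-gradient regime. A minimal NumPy sketch (illustrative, not from the source) demonstrates this empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
d_k = 64        # key dimension
n = 10_000      # number of query/key pairs to sample

# Queries and keys with unit-variance components: each dot product
# q . k is a sum of d_k independent terms, so its variance is ~d_k.
q = rng.standard_normal((n, d_k))
k = rng.standard_normal((n, d_k))
scores = np.einsum("id,id->i", q, k)

print(np.var(scores))                  # grows with d_k (~64 here)
print(np.var(scores / np.sqrt(d_k)))   # ~1 after scaling

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Large-magnitude logits saturate softmax toward a one-hot vector,
# which makes its gradients vanish; scaling keeps logits O(1).
unscaled_weights = softmax(scores[:8])
scaled_weights = softmax(scores[:8] / np.sqrt(d_k))
print(unscaled_weights.max())  # typically close to 1 (saturated)
print(scaled_weights.max())    # more diffuse distribution
```

Dividing by sqrt(d_k) normalizes the logit variance back to about 1 regardless of the key dimension, which is exactly the point the question probes.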