
Transformer Architecture

Intermediate · 9 questions (8 intermediate, 1 advanced) · Difficulty 4-7
Intermediate (4/10) · Conceptual
In the transformer self-attention mechanism, why are the attention scores divided by the square root of the key dimension before applying softmax?
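
For context on what the question is probing: for query and key vectors with i.i.d. zero-mean, unit-variance components, the dot product q·k has variance d_k, so unscaled logits grow in magnitude with √d_k and push the softmax toward a near one-hot distribution with vanishing gradients. A minimal NumPy sketch of scaled dot-product attention that demonstrates this effect (illustrative only; the function name and the dimensions chosen are assumptions, not part of the quiz):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the standard scaled dot-product attention."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # scale logits before softmax
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Effect of the scaling: raw dot products of unit-variance vectors
# have standard deviation sqrt(d_k); dividing by sqrt(d_k) keeps
# the softmax inputs O(1) regardless of the key dimension.
rng = np.random.default_rng(0)
d_k = 512
q, k = rng.standard_normal(d_k), rng.standard_normal(d_k)
print(q @ k)                 # typically on the order of sqrt(512) ~ 22.6
print(q @ k / np.sqrt(d_k))  # rescaled to order 1
```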