
Attention in Transformers

The notebook attention-in-transformers.ipynb implements the attention mechanisms covered in Josh Starmer's (StatQuest) DeepLearning.AI lesson, Attention in Transformers: Concepts and Code in PyTorch.

Topics covered include (an illustrative sketch follows the list):

  1. Positional encodings.
  2. Self-attention.
  3. Masked self-attention.
  4. Encoder-decoder attention.
  5. Multi-head attention.

