Generating interactive explainers on interesting topics using AI, inspired by the beautiful explainers.blog. Because you don't really understand something until you can play with it.
Your language model is secretly a reward model. Explore how Direct Preference Optimization (DPO) eliminates reinforcement learning from alignment with a single elegant equation.
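A minimal sketch of that one equation, in pure Python (the function name and toy log-probabilities are illustrative, not from any library):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair.

    logp_w / logp_l       : policy log-probs of the chosen / rejected response
    ref_logp_w / ref_logp_l: reference-model log-probs of the same responses
    The implicit reward is beta * (log pi_theta - log pi_ref); no separate
    reward model and no RL loop are needed.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# When policy and reference agree, the margin is 0 and the loss is ln 2.
loss = dpo_loss(-10.0, -12.0, -10.0, -12.0)
```

Gradient descent on this loss pushes the policy's margin for preferred responses up directly, which is the sense in which the policy "is" its own reward model.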
A visual, interactive guide to the Transformer paper that replaced recurrence with attention and changed modern AI.
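The core operation that replaced recurrence can be sketched in a few lines of pure Python (single query, lists instead of tensors; illustrative only):

```python
import math

def attention(q, keys, values):
    """Scaled dot-product attention for one query vector.

    scores = q.k / sqrt(d), softmax over keys, weighted sum of values --
    every position attends to every other in one parallel step, with no
    recurrence over the sequence.
    """
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]
```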
See how masked language modeling and bidirectional context reshaped NLP benchmarks overnight.
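A simplified sketch of the masked-LM corruption step (function name and the in-sequence random-token substitution are simplifications, not BERT's exact pipeline):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", rate=0.15, seed=0):
    """BERT-style masked-LM corruption, simplified.

    ~15% of positions become prediction targets; of those, 80% are replaced
    with [MASK], 10% with a random token, and 10% are left unchanged.
    labels[i] holds the original token at target positions, else None.
    """
    rng = random.Random(seed)
    labels = [None] * len(tokens)
    corrupted = list(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            labels[i] = tok
            r = rng.random()
            if r < 0.8:
                corrupted[i] = mask_token
            elif r < 0.9:
                corrupted[i] = rng.choice(tokens)  # random token (simplified: drawn from this sequence)
            # else: keep the token but still predict it
    return corrupted, labels
```

Because targets can sit anywhere in the sequence, the model must use context from both sides, which is what makes the encoder bidirectional.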
Explore the power laws that predict performance from model size, data, and compute.
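The shape of those laws fits in a few lines; the constants below are illustrative values close to the paper's reported fits, not exact:

```python
# Power laws: loss falls as a power of parameter count N and of dataset tokens D.
ALPHA_N, N_C = 0.076, 8.8e13   # L(N) = (N_C / N) ** ALPHA_N
ALPHA_D, D_C = 0.095, 5.4e13   # L(D) = (D_C / D) ** ALPHA_D

def loss_from_params(n):
    return (N_C / n) ** ALPHA_N

def loss_from_tokens(d):
    return (D_C / d) ** ALPHA_D

# Doubling model size shrinks loss by the same constant factor 2**-ALPHA_N
# (about 5%) no matter where you start -- that is what makes it a power law.
ratio = loss_from_params(2e9) / loss_from_params(1e9)
```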
Understand how rotary embeddings encode position through elegant geometric rotations.
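The rotation itself is tiny; a pure-Python sketch for one query/key vector (illustrative, not a library implementation):

```python
import math

def rope(vec, pos, base=10000.0):
    """Apply rotary position embedding to one vector of even dimension.

    Each consecutive pair of dimensions (2i, 2i+1) is rotated by the angle
    pos * base**(-2i/d), so position is encoded purely as a rotation.
    """
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        x, y = vec[i], vec[i + 1]
        out.extend([x * math.cos(theta) - y * math.sin(theta),
                    x * math.sin(theta) + y * math.cos(theta)])
    return out
```

The geometric payoff: rotations preserve norms, and the dot product between a rotated query and key depends only on their relative offset, not their absolute positions.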
Learn why training smaller models on more tokens can beat larger under-trained models.
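The compute-optimal recipe reduces to two rules of thumb, sketched here (the "20 tokens per parameter" heuristic and the C ≈ 6ND FLOP estimate are the commonly cited approximations, not exact fits):

```python
def chinchilla_optimal(compute_flops):
    """Split a FLOP budget compute-optimally.

    Assumes training compute C ~ 6 * N * D and the rule of thumb
    D ~ 20 * N (train on roughly 20 tokens per parameter).
    Then C ~ 120 * N**2, so N = sqrt(C / 120).
    """
    n_params = (compute_flops / 120) ** 0.5
    n_tokens = 20 * n_params
    return n_params, n_tokens

# For C ~ 5.9e23 FLOPs this lands near 70B params and 1.4T tokens --
# a much smaller model, trained on far more data, than earlier practice.
```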
Dive into the design decisions that made LLaMA efficient, open, and highly influential.
Explore how scale unlocked in-context learning and changed the direction of language models.