haotianblog
LLM Visualizer

From tokens to next-token generation

This teaching lab runs entirely on browser-side simulated data, so you can inspect tokenization, embeddings, self-attention, sampling, and the KV cache without loading a real model.

Input

Choose an example or type a short prompt

For explainability, v1 keeps the first 16 tokens and uses deterministic teaching weights.
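To make the "deterministic teaching weights" idea concrete, here is a minimal sketch of how such a browser-side simulation could work. This is an illustrative assumption, not the lab's actual code: `tokenize`, `embed`, and `attentionRow` are hypothetical helpers, the whitespace tokenizer and character-code embeddings are stand-ins, and only the 16-token cap comes from the description above.

```javascript
// 1. Tokenization: naive whitespace split, capped at 16 tokens (as in v1).
function tokenize(prompt) {
  return prompt.trim().split(/\s+/).slice(0, 16);
}

// 2. Embedding: a deterministic pseudo-embedding derived from character
//    codes, so the same token always maps to the same vector.
function embed(token, dim = 4) {
  const vec = new Array(dim).fill(0);
  for (let i = 0; i < token.length; i++) {
    vec[i % dim] += token.charCodeAt(i) / 100;
  }
  return vec;
}

// 3. Self-attention scores for one query position: dot products against all
//    key vectors, normalized with a numerically stable softmax.
function attentionRow(queryVec, keyVecs) {
  const scores = keyVecs.map(k =>
    k.reduce((sum, v, i) => sum + v * queryVec[i], 0)
  );
  const max = Math.max(...scores);
  const exps = scores.map(s => Math.exp(s - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / total);
}

const tokens = tokenize("the cat sat on the mat");
const embeddings = tokens.map(t => embed(t));
// Attention weights of the last token over the full context; they sum to 1.
const weights = attentionRow(embeddings[embeddings.length - 1], embeddings);
```

Because every stage is a pure function of the input text, stepping backward and forward through the pipeline always reproduces the same intermediate values, which is what makes the walkthrough inspectable.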

Mechanism walkthrough

Watch one LLM forward pass like a debugger

Step through each stage of the LLM pipeline; the matching panel opens automatically.