LLM in the Browser
A tiny AI playground running entirely on your machine. The models are small, the results are unpredictable, and that's the whole point. Pick a model, download it once, and see what language models actually do up close.
Fair warning: the 135M model thinks 10+10 equals twelve, and will cheerfully reply "you're absolutely right!" when you correct it.
This page is a technical demonstration only. Unwrite is not responsible for any content generated by these models. Outputs may be inaccurate, nonsensical, or inappropriate.
Device note: Larger models need more RAM and a faster CPU/GPU. Models over ~1 GB may crash mobile browsers — iOS Safari limits tabs to roughly 1–3 GB even on 8 GB iPhones, and Android Chrome caps around 2–4 GB on flagships. Older or budget devices will struggle with anything above the smallest models.
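The memory caps above can be turned into a rough feasibility check before offering a large model. This is a minimal sketch, assuming a single tab can safely use about a quarter of device RAM and that a model needs roughly 1.5x its download size once loaded; both figures are illustrative assumptions, not measured limits:

```javascript
// Rough heuristic: will a model of this size likely fit in a browser tab?
// The 1/4-of-RAM budget and 1.5x runtime overhead are illustrative
// assumptions, loosely matching the ~1-3 GB (iOS) and ~2-4 GB (Android)
// per-tab caps mentioned above.
function canLikelyRun(modelSizeGB, deviceMemoryGB) {
  // Browsers let one tab use only a fraction of device RAM.
  const tabBudgetGB = deviceMemoryGB / 4;
  // Weights plus runtime overhead: assume ~1.5x the download size.
  return modelSizeGB * 1.5 <= tabBudgetGB;
}

// In a browser you might feed in navigator.deviceMemory (where supported):
// const mem = navigator.deviceMemory ?? 4;
console.log(canLikelyRun(0.135, 8)); // 135M-class model on an 8 GB phone: true
console.log(canLikelyRun(2, 8));     // ~2 GB model on the same phone: false
```

In practice a check like this can only warn, not guarantee: actual limits vary by browser, OS version, and what else the device is doing.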
Welcome! I can chat with you using AI models that run entirely in your browser. Nothing you type ever leaves your device.
Pick a model to get started
Downloads once and caches in your browser. Smaller = faster.
How does in-browser AI work?
Everything on this page runs locally in your browser. These panels explain the technology behind it.
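At its core, every model on this page does one simple thing over and over: given the text so far, score every possible next token and pick one. A toy sketch of that loop, using a hard-coded lookup table in place of a real neural network (the table and probabilities below are made up purely for illustration):

```javascript
// Toy next-token loop. A real model replaces this lookup table with a
// neural network scoring tens of thousands of tokens, but the
// generation loop has the same shape.
const bigram = {
  "the": { "cat": 0.6, "dog": 0.4 },
  "cat": { "sat": 0.7, "ran": 0.3 },
  "sat": { "down": 1.0 },
};

function generate(start, maxTokens) {
  const tokens = [start];
  for (let i = 0; i < maxTokens; i++) {
    const scores = bigram[tokens[tokens.length - 1]];
    if (!scores) break; // no known continuation
    // Greedy decoding: always take the highest-scoring next token.
    const next = Object.keys(scores).reduce((a, b) =>
      scores[a] >= scores[b] ? a : b);
    tokens.push(next);
  }
  return tokens.join(" ");
}

console.log(generate("the", 3)); // "the cat sat down"
```

Real chat models sample from the scores instead of always taking the top one, which is part of why the same prompt can give different answers each time.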