toast is composable AI for the terminal. Pipe text in, get intelligence out. It talks to cloud models, local models, or both. It reads a .persona file for its system prompt and a .tools file to know which commands it's allowed to run. In chat mode, it's a collaborator. In pipe mode, it's a one-shot transform.
ito is intent-first version control. Instead of commits, you have moments. Instead of commit messages, you have why — the intent behind the change. Every moment captures the full tree, content-addressed by SHA-256. You can undo, branch, search by intent, and sync between machines over rsync.
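Content addressing is nothing exotic. A minimal sketch of how a short moment id could be derived from file contents, using the system shasum (ito's actual object format isn't shown here):

```shell
#!/bin/sh
# Hash a file's contents with SHA-256 and keep a short prefix,
# the way the seven-character moment ids below are displayed.
printf 'hello\n' > note.txt
id=$(shasum -a 256 note.txt | cut -c1-7)
echo "moment $id"
```

Same bytes in, same id out — which is what makes moments safe to sync and dedupe.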
jam is a shell rebuilt for AI. No quoting nightmares, no $ expansion. It has two constructs that matter here: times runs a command N times, while runs until the command signals completion. These turn toast from a tool you use into an agent that works on its own.
$ ito log "tighten the intro — too much throat clearing"
logged a3f912c tighten the intro — too much throat clearing
✓ saved — you can always get back to this
None of them knows the others exist at the code level. The coupling lives in the .persona — a plain text file that teaches the AI a workflow. Swap the persona, swap the workflow. The tools stay the same.
Pipe anything into toast — diffs, source files, history, prose, configs — and get intelligence back. One command, one answer. No conversation, no state, just Unix pipes.
$ ito changes | toast "review this diff"
$ cat draft.md | toast "what's the weakest paragraph?"
$ cat nginx.conf | toast "any security issues?"
$ cat scene.py | toast "will this render correctly in Blender?"
Run toast with no arguments and it enters chat mode. It reads .persona and knows what kind of work it's doing. It reads .tools and knows which commands it can run — ito, cat, grep, and whatever else you allow.
$ toast
> help me rewrite this README for a developer audience
Toast runs ito status and ito history to orient itself. It reads .crumbs to check for prior context. It tells you what it plans to do. It makes edits, checks them, logs intent via ito log. You ask for changes. It makes them, logs them.
Every action is reversible. If toast breaks something, ito undo takes you back.
This is jam's times construct:
🍞 7 times toast server.c "harden this server"
This calls toast up to seven times in a loop. Each invocation is stateless — toast has no memory of the last call. But the files have memory. And ito has memory. And .crumbs has memory.
Each round, toast:
- reads .crumbs to see what past rounds learned.
- makes one change and logs it via ito log with intent.
- updates .crumbs with what it learned or what's left to do.
- outputs DONE to stop the loop.

If you squint, this is gradient descent. Each round minimizes one deficiency. The loss function is implicit in the prompt. The learning rate is one change per step. It's a loose analogy — there are no partial derivatives — but the shape is the same: iterative refinement toward a goal, with memory of where you've been.
The insight is that none of this is specific to code. The loop doesn't care what's in the file. Text is text. Refinement is refinement. The prompt is the loss function, the file is the state, and .crumbs is the gradient history.
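Stripped of jam, the whole mechanism fits in a few lines of POSIX sh. A sketch, with a stub `agent` function standing in for a toast invocation — it reads .crumbs for memory, appends one note per round, and signals DONE after three:

```shell
#!/bin/sh
# Stub agent: reads .crumbs, makes "one change", prints DONE
# once nothing is left to improve.
agent() {
  rounds=0
  [ -f .crumbs ] && rounds=$(grep -c '' .crumbs)
  if [ "$rounds" -ge 3 ]; then
    echo DONE
  else
    echo "round $((rounds + 1)): improved one thing" >> .crumbs
    echo changed
  fi
}

rm -f .crumbs
i=0
while [ "$i" -lt 7 ]; do          # 7 times toast <file> "<prompt>"
  out=$(agent)
  [ "$out" = DONE ] && break      # the agent, not the loop, decides
  i=$((i + 1))
done
echo "stopped after $i rounds"
```

The loop knows nothing about the work; the stopping condition lives entirely in the agent's output, which is why the same loop drives every example below.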
🍞 10 times toast server.c "improve — make it correct, fast, and safe"
After six rounds, ito history reads like a dev journal:
a3f912c handle client timeouts so slow readers don't block accept loop
b7e44d1 validate Content-Length to prevent buffer overflow on malformed requests
c912fa3 add graceful shutdown on SIGTERM so in-flight responses complete
d0a8b72 switch listen backlog from 5 to SOMAXCONN for burst traffic
e4c1190 log peer address on connect for debugging production issues
f882d03 null-check malloc returns in request parser
DONE after 6 rounds
Each change is small. Each is tested. Each is reversible. The AI didn't try to rewrite the server in one pass — it made one improvement per round, verified it worked, and moved on.
Writing is iterative refinement. Every good writer knows this. The difference is that now the loop runs itself.
# .persona
You are an editor. You read prose carefully and improve it
one change at a time. Focus on: clarity, rhythm, removing
throat-clearing, tightening sentences, killing adverbs.
Read .crumbs at the start of every round. One edit per round.
If the prose is clean, output DONE.
# .tools
ito
cat
wc
🍞 8 times toast draft.md "edit — tighter, clearer, no filler"
After six rounds:
a1b2c3d cut the first two paragraphs — the piece starts better at paragraph three
b3c4d5e replace passive voice in section 2 with direct statements
c5d6e7f merge the two paragraphs about pricing into one — they said the same thing
d7e8f9a kill "very", "really", "just", "actually" — 14 instances removed
e9f0a1b rewrite the conclusion to echo the opening image
f1a2b3c tighten sentence lengths in the technical section — avg was 31 words, now 18
DONE after 6 rounds
You get a revision history that reads like editor's notes. Every change is logged with intent. Don't like the conclusion rewrite? ito undo. Want to see what it looked like three rounds ago? ito at c5d6e7f.
This works for any kind of writing — blog posts, documentation, READMEs, marketing copy, academic papers. Change the .persona, change the sensibility. An editor persona tightens prose. A fact-checker persona flags unsupported claims. A translator persona converts idiom by idiom.
# Documentation
🍞 10 times toast api-docs.md "improve — accurate, complete, no jargon without definition"
# Marketing copy
🍞 5 times toast landing.md "sharpen — every sentence should make someone want to try it"
# Academic paper
🍞 8 times toast paper.md "edit for journal submission — precision, citations, no hedging"
Configs are text. Security posture is refinable. Compliance is a loss function.
# .persona
You are a systems hardening specialist. You review config
files and make one security or performance improvement per
round. Validate changes where possible. Be conservative —
don't break running services. Read .crumbs. One change per
round. DONE when the config meets production standards.
# .tools
ito
cat
nginx
grep
curl
🍞 8 times toast nginx.conf "harden — security, performance, best practices"
a1d2e3f add security headers — X-Frame-Options, X-Content-Type-Options, CSP
b3e4f5a disable server tokens — don't advertise nginx version
c5f6a7b enable OCSP stapling for faster TLS handshakes
d7a8b9c tune worker_connections and keepalive for expected load
e9b0c1d add rate limiting on /api/ endpoints — 10r/s with burst of 20
f1c2d3e redirect HTTP to HTTPS — no plaintext allowed
93d4e5f set up gzip for text/html, application/json — min-length 256
DONE after 7 rounds
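The persona's "validate changes where possible" line maps onto a guard each round can run before logging. A sketch in plain sh, with a stub `validate` standing in for `nginx -t` (the real check, since nginx is on the allowlist above):

```shell
#!/bin/sh
# Accept a round's edit only if the validator passes; in the real
# setup a failure would be followed by ito undo.
validate() {
  # Stub for `nginx -t -c "$1"`: every line must be blank, a
  # comment, or look like a "directive value;" pair.
  ! grep -vE '^[[:space:]]*(#.*)?$|^[[:space:]]*[a-z_]+[[:space:]]+[^;]+;' "$1"
}

printf 'server_tokens off;\n' > nginx.conf
if validate nginx.conf; then
  echo "keep the change"
else
  echo "roll back"
fi
```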
Works on Dockerfiles, Terraform configs, CI pipelines, Kubernetes manifests. Anything that lives in a text file and has a notion of "better."
# Docker
🍞 6 times toast Dockerfile "optimize — smaller image, fewer layers, no root"
# Terraform
🍞 8 times toast main.tf "review — security, cost, redundancy"
# CI pipeline
🍞 5 times toast .github/workflows/ci.yml "improve — faster, more reliable, better caching"
Legal language is iteratively refined by nature. Lawyers already do gradient descent — they just charge by the hour.
# .persona
You are a contract reviewer. You tighten language, close
loopholes, improve clarity, and flag ambiguity. One change
per round. Do not alter commercial terms — only language
precision and protective clauses. Read .crumbs. DONE when
the contract is tight.
# .tools
ito
cat
wc
🍞 8 times toast services-agreement.md "tighten — close loopholes, remove ambiguity"
a2c3d4e define "Deliverables" in section 1 — was used but never defined
b4d5e6f add limitation of liability cap tied to contract value
c6e7f8a replace "reasonable efforts" with "commercially reasonable efforts" — legal standard
d8f9a0b add mutual indemnification clause — was one-sided favoring client
e0a1b2c specify governing law and dispute resolution venue
f2b3c4d clarify IP assignment — distinguish pre-existing IP from work product
94c5d6e add termination for convenience with 30-day notice and payment for work completed
DONE after 7 rounds
You don't need to be a lawyer to use this. You need a lawyer to verify it. But the AI gets you 80% of the way, and every change is logged with intent so the lawyer can review the trail, not just the final draft.
SQL queries, data pipelines, analysis scripts — all text, all refinable.
# .persona
You are a data engineer. You optimize queries for correctness,
performance, and readability. Check for edge cases, null handling,
and index usage. One improvement per round. Read .crumbs. DONE
when the query is production-grade.
# .tools
ito
cat
psql
🍞 6 times toast query.sql "optimize — correctness, performance, edge cases"
a3b4c5d add COALESCE for nullable join columns — was silently dropping rows
b5c6d7e replace correlated subquery with CTE — 40x faster on test data
c7d8e9f add index hint comment for the created_at range scan
d9e0f1a handle timezone conversion — was comparing UTC to local naively
e1f2a3b add row-level security filter — query was bypassing tenant isolation
DONE after 5 rounds
Three layers, all plain files on disk:
.crumbs — toast's scratchpad. "Section 3 still needs a concrete example." Read every round. Appended as toast learns.
ito history — the full chain of moments with intent. Toast reads this to avoid redoing past work. Searchable with ito search.
The file itself — the actual content. Read fresh every round. No embeddings. No vector database. Just cat.
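What lands in the scratchpad is free-form appended text. A hypothetical .crumbs partway through the server run above:

```
round 1: added timeout handling; parser still trusts Content-Length
round 2: validated Content-Length; shutdown path still untested
next: graceful shutdown on SIGTERM, then re-check the accept loop
```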
Toast doesn't need to know about version control. It just needs to know it can run ito log. ito doesn't need to know about AI. It just stores moments. The .persona teaches the workflow; the tools stay decoupled.
The .tools file is the intent boundary — toast only runs commands listed in it. For enforcement, toast runs inside firejail — seccomp filters, filesystem namespaces, network restrictions at the OS level. The allowlist tells toast what to try; firejail decides what actually executes.
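The allowlist half of that boundary is a one-line check, assuming .tools holds one bare command name per line as in the examples above (firejail's OS-level enforcement sits below this and isn't sketched):

```shell
#!/bin/sh
# Exact full-line match against .tools: "cat" passes, "curl" doesn't.
printf 'ito\ncat\ngrep\n' > .tools

allowed() {
  grep -qxF "$1" .tools
}

allowed cat && echo "cat: allowed"
allowed curl || echo "curl: blocked"
```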
No orchestration layer. No state machine. No DAG. Just a shell loop, a text file for memory, and a VCS that makes everything reversible.
It's always the same:
🍞 N times toast <file> "improve <criteria>"
The file is the state. The prompt is the loss function. .crumbs is the gradient history. ito makes every step reversible. The content — code, prose, configs, scenes, contracts, queries — is irrelevant to the mechanism. It's all just text being iteratively refined toward a goal.
The only requirement is that the artifact can be represented as text, or as a script that produces the artifact. That covers almost everything.
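The Blender scene from the pipe-mode examples is exactly this edge case: the .py file is the text under refinement, and the render it produces is the artifact. A hypothetical run:

```
🍞 6 times toast scene.py "improve — cleaner geometry, better lighting"
```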
Two config files in a project directory:
# .persona (abridged)
You are a [role]. You [do what]. Read .crumbs at the start
of every session. In autonomous mode: one change per round.
If nothing left to improve, output DONE.
# .tools — one command per line, bare names
ito
cat
ls
grep
# ... add whatever the domain needs
# Install toast
$ curl -sSL linuxtoaster.com/install | sh
# Start a project
$ mkdir myproject && cd myproject && ito init
# Pair work:
$ toast
> help me write a technical brief on our auth system
# Or let it rip:
🍞 10 times toast brief.md "improve — clearer, tighter, no jargon"
$ ito history
Then read the trail of intent it left behind.
All three tools are written in C. ito is ~1,400 lines with zero dependencies. toast is ~3,200 lines (json-c, libsodium). jam is the shell that ties them together. macOS and Linux. linuxtoaster.com