Unix Reimagined | toast
AI tooling for the people who keep the lights on.
FREE Install toast on Mac or Linux
curl -sSL linuxtoaster.com/install | sh
Click to copy
On a MacBook? Use Apple Intelligence as your inference provider through appled. Completely free. No account, no API key, no cost.
Need more? Run pkill appled and toast switches to $20 Pay and Go. Top off as needed. Subscribe when ready.
We only look at anonymized usage data (model, token count) to improve our service — never prompts.
BYOK support: OpenAI · Anthropic · Google · Mistral · Groq · Cerebras · Perplexity · xAI · OpenRouter · Together
Local support: appled · Ollama · MLX · LM Studio · KoboldCpp · llama.cpp · vLLM · LocalAI · Jan
toast — AI power in your terminal
AI the Unix way. Composable.
Understand anything
Legacy code. Config files. Cryptic logs. Get explanations.
Get the command you need
Describe what you want in plain English. Get the exact command.
Diagnose your system
Not sure which tab is burning the CPU? Ask.
PID 75517 — Safari WebContent, 45.3% CPU. Kill it: kill 75517
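If you'd rather hand toast the evidence yourself, one sketch (column order of ps varies slightly between macOS and Linux):

```shell
# Pipe the top CPU consumers into toast and ask in plain English.
# sort -nrk 3 sorts numerically, descending, on the %CPU column.
ps aux | sort -nrk 3 | head -5 | toast "which of these is safe to kill?"
```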
Edit files at scale
toast reads files, writes patches, and works with any format.
Scrape the web
Pipe the web through Unix tools, let toast do the thinking. Schedule with cron for daily briefings.
curl -s https://example.com/news | \
sed 's/<[^>]*>//g' | \
toast "top 5 articles by novelty"
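For the daily briefing mentioned above, a sketch of a crontab entry (the URL and output path are placeholders; point them wherever you like):

```shell
# Run at 08:00 daily: fetch, strip HTML tags, summarize, save.
0 8 * * * curl -s https://example.com/news | sed 's/<[^>]*>//g' | toast "top 5 articles by novelty" > "$HOME/briefing.txt"
```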
Track any repo
Combine curl, jq, and toast to understand what's happening in any repo.
curl -s https://api.github.com/repos/torvalds/linux/commits | \
jq '.[].commit.message' | \
toast "summarize recent kernel changes"
Terminal Chat
When you need a back-and-forth. Pull files into context with @.
> @models.py explain this
sure, the file contains...
> what does function...
Toast on Telegram
Talk to toast from your phone. Link your account, then message the bot.
Bring your own key
Use your own API keys — zero cost from us. Supports OpenAI, Anthropic, Google, Mistral, Groq, Cerebras, xAI, and more. Or run fully offline with Ollama, MLX, LM Studio, and others.
Unix reimagined: the full stack
$49/mo adds an ever-growing list of Unix-reimagined tools. AI-native shell, version control, local inference, and automation — all composable with toast.
jam — A shell AI can actually use
No quoting, no $ surprises. Built-in loops, RPN math, and UDP multicast for multi-machine coordination.
ito — Version control, 15 commands
Record intent, derive diffs. No staging area, no detached HEAD. Single C file, ~1,100 lines. Search by why, not what.
toasted — Local inference, zero cost
From-scratch inference daemon for Apple Silicon. ~100 tok/s generation, 0.6s to first word. Written in C++, no Python. Your code never leaves the machine.
email & imessage — Bots in one line
Build AI bots for email and iMessage. Personality lives in your .persona file. One line to deploy.
Pricing
Unix is deterministic — predictable, composable, reliable. AI is non-deterministic — creative, adaptive, surprising. The hard part is the boundary work of making them work together. That's what we're building: Unix reimagined for both AI and people. Your membership funds that work.
Free to explore on MacBooks with appled, which uses Apple Intelligence as the inference provider. $20 Pay and Go inference when you need more: just pkill appled. $49/mo Member for the full stack. Founding Partner for teams.
Free
- toast + appled (Apple Intelligence)
- Installed automatically on MacBooks
- No account, no API key, no cost
- Runs entirely on-device
- Great for getting started
PayGo
- toast
- $20 in AI credits included — top off any time
- More powerful models than appled
- Anonymized usage data only (model, token count — never prompts)
- Custom personas via .persona files
- BYOK & local models free
- All updates
- Community Support
Member
- Everything in PayGo
- toasted local inference daemon
- Qwen3-Next-Coder on M4 at ~100 tok/s — zero cost per token. Requires Apple Silicon Mac · 64 GB (4-bit) or 128 GB (4/6/8-bit)
- Ever-growing list of Unix-reimagined tools: jam, ito, email, imessage
- UDP networking for agents
- Priority Support
- Includes UnixClaw — your own Mac Mini AI assistant
- Includes Gradient Descent for Anything — autonomous refinement
Hosted
- Everything in Member
- We rack and plug your Mac Mini into network and power
- UnixClaw + toasted pre-configured
- Qwen3-Next-Coder on M4 at ~100 tok/s — zero cost per token. Requires Apple Silicon Mac · 64 GB (4-bit) or 128 GB (4/6/8-bit)
- Ship us your Mac Mini — or we source one at cost + 15%
- Multiple Minis? Thunderbolt cluster networking available
Founding Partner
- Everything in Member
- Fund the rewrite of Unix.
- You tell us what is missing. We implement. You get the credit.
- Priority & dedicated support
- Consulting, seminars & FDE options
FAQ
How does it work?
Lightweight toast talks to local toastd, which keeps an HTTP/2 connection pool to linuxtoaster.com. Written in C to minimize latency. With BYOK, toastd connects directly to your provider—your traffic never touches our servers.
What's BYOK?
Got a PROVIDER_API_KEY set for Anthropic, Cerebras, Google Gemini, Groq, OpenAI, OpenRouter, Together, Mistral, Perplexity, and/or xAI? Use toast -p provider. Zero config.
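A sketch, assuming the lowercase provider name passed to -p matches the env-var prefix (Anthropic shown; the key value is a placeholder):

```shell
export ANTHROPIC_API_KEY="sk-ant-..."              # your real key here
toast -p anthropic "explain this build failure" < build.log
```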
What is a Founding Partner?
Companies funding the rewrite of Unix. Your team gets a software license, priority support, and consulting options. You're funding tools that make software simpler for all LinuxToaster users. Talk to us.
What's appled?
On MacBooks, the installer downloads appled — a local inference provider that uses Apple Intelligence. It's completely free: no account, no API key, no cost per token. Your prompts never leave your machine. When you outgrow what Apple Intelligence can offer, just run pkill appled — toast will automatically bring up a Stripe page for $20 PayGo with more powerful models.
Can I run it fully offline?
Yes. Use any local backend: appled, toasted, Ollama, MLX, LM Studio, KoboldCpp, llama.cpp, vLLM, LocalAI, or Jan. No internet, no API keys, full privacy.
What's jam?
A shell rebuilt for AI. No quoting, no expansion, no $ syntax. Strings just work. Unrecognized input goes to the AI. Includes set/get for env vars, while/times for loops, RPN math, and a UDP multicast basket for multi-machine coordination.
What's toasted?
A from-scratch local inference daemon for Apple Silicon (Member tier). Unlike appled, which runs the smaller Apple Intelligence model for free, toasted loads Qwen3-Next-Coder — a 30B-parameter coding model — via C++ against Apple's MLX API. ~100 tok/s generation, ~400 tok/s prefill, 0.6s time-to-first-token with session caching. 128 GB supports 8/6/4-bit quantization, 64 GB supports 4-bit. Toasted is part of the $49/mo subscription.
Where's my data stored?
Locally. Context in .crumbs, conversations in .chat. Version them, grep them, delete them. Your machine, your files.
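Since everything is a local file, ordinary Unix tools apply. A sketch, assuming .crumbs and .chat are plain files in the working directory (use recursive variants if they turn out to be directories):

```shell
grep -i "deploy" .chat       # search past conversations
cp .crumbs .crumbs.bak       # snapshot context before a risky session
rm .chat                     # wipe conversation history
```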
macOS? Windows?
macOS and Linux today.
What about consulting?
Consulting is available for teams that want hands-on help with deployment, integration, or training. Enterprise accounts have a Forward Deployed Engineering option.
How does billing work?
On MacBooks, toast is free with appled (Apple Intelligence) — no account needed. $20 PayGo gets you a membership and $20 in AI credits — top off anytime. AI inference is charged based on use. BYOK or local inference carries no cost. We collect anonymized usage data (which model, token count) but never your prompts. You may choose to pay for consulting, or the monthly cost of an FDE.