"plain English" | toast
Your terminal speaks English.
Install toast (Mac or Linux)
curl -sSL linuxtoaster.com/install | sh
Prepaid $20 inference. Top off anytime. BYOK supported. Zero commitment. Plans from $9/mo.
Slices: leverage built-in personas (Coder, Sys, Writer) or create your own.
BYOK: OpenAI · Anthropic · Google · Mistral · Groq · Cerebras · Perplexity · xAI · OpenRouter · Together · Ollama · MLX
Use Cases
Everything you'd ask Google or ChatGPT about the terminal—but faster, and right where you need it.
Understand anything
Legacy code. Config files. Cryptic logs. Get explanations.
Get the command you need
Describe what you want in plain English. Get the exact command.
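For instance, a plain-English request might look like this (the prompt is hypothetical; the `toast "..."` syntax matches the examples elsewhere on this page):

```shell
# Ask in plain English; toast replies with the exact command to run
toast "find the 20 largest files in my home directory"
```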
Fix errors instantly
Pipe your error message. Get the fix.
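As a sketch, piping a failing build straight into toast (hypothetical source file; `2>&1` folds stderr into the pipe so toast sees the error text):

```shell
# Send the compiler's error output to toast for a fix
gcc main.c 2>&1 | toast "fix this error"
```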
Diagnose your system
Not sure what's eating your RAM? Ask.
Terminal Chat
When you need a back-and-forth conversation.
>hi, can you explain @models.py
sure, the file contains...
>what does function...
Telegram Chat
Link once, chat anywhere. Text any Slice from your phone.
# Send /link ABC123 to @linuxtoasterbot
Example: A shell that teaches you
Errors auto-explain themselves. Learn as you go, never get stuck.
🍞 ~> gcc main.c
main.c:42: error: expected ';'
🍞 Missing semicolon on line 41.
🍞 ~> curl api.local
Connection refused
🍞 Server isn't running. Try: docker-compose up
Power Users
Simple for beginners. Deep for experts. The toaster grows with you.
Slices
Specialized AI personas. No prompt engineering—the name is the interface.
Pipe chains
Compose like Unix. Chain multiple transforms.
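A minimal sketch of chaining Slices as Unix filters, reusing the Slice names from this page (filenames are hypothetical; bare Slice-as-command usage follows the pre-commit hook example):

```shell
# draft -> Writer rewrite -> Reviewer critique, all in one pipeline
cat draft.md | Writer | Reviewer > notes.md
```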
Project context
Drop a .crumbs file. AI knows your stack.
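The `.crumbs` format isn't documented on this page; purely as a hypothetical sketch, a project context file might carry notes like:

```
# .crumbs (hypothetical contents; the actual format may differ)
# Stack: Python 3.12, FastAPI, Postgres 16
# Style: type hints everywhere, pytest for tests
# Deploy: Docker on Fly.io
```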
Edit a book
Iterative refinement. Each pass reads, learns, decides, refines. Gradient descent for prose.
Codebase refactoring
Batch operations across every file. Safe to Ctrl-C and resume.
@file injection
In chat mode, pull files into context on the fly. Multi-file supported.
Any model
One interface, many providers. Compare models without changing your workflow.
BYOK
Bring your own API keys. With BYOK your files never touch our servers.
Run local
MLX, Ollama. Full privacy, no internet required.
Git hooks, log monitoring, CI/CD
# Pre-commit code review
git diff --cached | Reviewer || exit 1
# Real-time error diagnosis
tail -f app.log | grep ERROR | toast "diagnose"
# Auto-generate docs
find . -name "*.py" | xargs cat | toast "generate API docs" > API.md
Pricing
Prepaid $20 inference. BYOK supported. No API key required. Zero commitment.
Subscribe for managed AI with unified billing when ready.
Creator
- All Slices
- Create custom Slices
- BYOK supported
- Chat and Learning
- Community Support
Pro
- Higher Usage Limits
- Fast inference (up to 3000 T/sec)
- All providers, unified billing
- Automate workflows
- Priority support
Max
- Everything in Pro
- SSH access to dedicated Ubuntu VM
- Host your own website or API
- Local MLX inference on Apple Silicon
- Full privacy
- Expert Help Available
Enterprise
- On-premise deployment
- Integration & audit logs
- Custom fine-tuning
- Dedicated support
- Seminars and Training
FAQ
How does it work?
Lightweight toast talks to local toastd, which keeps an HTTP/2 connection pool to linuxtoaster.com. Written in C to minimize latency. With BYOK, toastd connects directly to your provider—your traffic never touches our servers.
What's BYOK?
Bring Your Own Key. Have a PROVIDER_API_KEY environment variable set for Anthropic, Cerebras, Google Gemini, Groq, OpenAI, OpenRouter, Together, Mistral, Perplexity, or xAI? Run toast -p provider. Zero config.
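For example, with Groq (assuming the PROVIDER_API_KEY naming convention described above; the key value is elided):

```shell
# Point toast at your own Groq key, no other setup
export GROQ_API_KEY=...
toast -p groq "explain this cron expression: */5 * * * *"
```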
What does "request" mean?
One toast command = one request. Piping counts as one request. Chat mode makes one request per message.
Can I run it fully offline?
Yes. Use toast -p mlx or toast -p ollama with a local model. No internet, no API keys, full privacy.
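A sketch of an offline session (assumes an Ollama model is already pulled and serving locally; the prompt is hypothetical):

```shell
# Everything stays on your machine: no internet, no API keys
toast -p ollama "write a bash loop that gzips every .log file"
```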
What's a Slice?
A specialized AI persona, a slice through the latent space, a perspective. Coder knows code. Sys knows Unix. Writer writes docs. Or create your own with a .persona file.
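The `.persona` schema isn't shown on this page; as a purely hypothetical sketch, a custom Slice definition might look like:

```
# Translator.persona (hypothetical fields; the real schema may differ)
name: Translator
prompt: You translate text into idiomatic French. Output only the translation.
```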
Where's my data stored?
Locally. Context in .crumbs, conversations in .chat. Version them, grep them, delete them. Your machine, your files.
macOS? Windows?
macOS and Linux today. On Windows, WSL works.
What's included in Max?
SSH access to a dedicated Ubuntu VM on multi-tenant Mac Metal. Pre-configured for local MLX inference. No network latency, full privacy.