"plain English" | toast

Your terminal speaks English.

Terminal
$ ls -al | toast "roast my directory"
Oh wow, 47 node_modules folders? Your disk called, it's crying. And .DS_Store everywhere—you know those do nothing, right?
$

Install toast (Mac or Linux)

curl -sSL linuxtoaster.com/install | sh

Prepaid $20 inference. Top off anytime. BYOK supported. Zero commitment. Plans from $9/mo.

Slices: leverage built-in personas (Coder, Sys, Writer) or create your own.

BYOK: OpenAI · Anthropic · Google · Mistral · Groq · Cerebras · Perplexity · xAI · OpenRouter · Together · Ollama · MLX

Use Cases

Everything you'd ask Google or ChatGPT about the terminal—but faster, and right where you need it.

Understand anything

Legacy code. Config files. Cryptic logs. Get explanations.

cat /etc/nginx/nginx.conf | toast "explain in detail"

Get the command you need

Describe what you want in plain English. Get the exact command.

toast "how do I delete all .log files older than 7 days"

Fix errors instantly

Pipe your error message. Get the fix.

cat error.log | toast script.py "fix and output updated file" >newscript.py

Diagnose your system

Not sure what's eating your RAM? Ask.

ps aux | toast "what's using so much memory?"

Terminal Chat

When you need a back-and-forth conversation.

toast
>hi, can you explain @models.py
sure, the file contains...
>what does function...

Telegram Chat

Link once, chat anywhere. Text any Slice from your phone.

toast --telegram
# Send /link ABC123 to @linuxtoasterbot

Example: A shell that teaches you

Errors auto-explain themselves. Learn as you go, never get stuck.

toast-shell
🍞 ~> gcc main.c
main.c:42: error: expected ';'
🍞 Missing semicolon on line 41.

🍞 ~> curl api.local
Connection refused
🍞 Server isn't running. Try: docker-compose up
Download toast-shell (bash)

Power Users

Simple for beginners. Deep for experts. The toaster grows with you.

Slices

Specialized AI personas. No prompt engineering—the name is the interface.

cat api.py | Coder "write tests"

Pipe chains

Compose like Unix. Chain multiple transforms.

curl site.com | toast "summarize" | toast "translate to Spanish"

Project context

Drop a .crumbs file. AI knows your stack.

echo "Python 3.11, FastAPI" > .crumbs

Edit a book

Iterative refinement. Each pass reads, learns, decides, refines. Gradient descent for prose.

repeat 20 Editor draft.md "tighten prose, cut filler, add lessons learned to .context"
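The `repeat` loop above is not POSIX sh (zsh has a `repeat` builtin); a portable stand-in for it, as a sketch, could look like this. The `repeat_n` name is illustrative, not part of toast:

```shell
# Sketch: a portable sh stand-in for `repeat N CMD ...`.
# repeat_n is a hypothetical helper name, not a toast feature.
repeat_n() {
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" || return 1   # stop early if a pass fails
    i=$((i + 1))
  done
}

# repeat_n 20 Editor draft.md "tighten prose, cut filler"
```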

Codebase refactoring

Batch operations across every file. Safe to Ctrl-C and resume.

find . -name "*.py" -exec toast {} "add type hints" \;
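The one-liner above reruns toast on every file after an interruption. One way to make a batch resumable in the shell itself is to record completed files in a progress log; this is a sketch, and the `.toast-done` file and `run_once` helper are illustrative, not toast features:

```shell
# Sketch: resumable batch processing. DONE_LOG and run_once are
# hypothetical helpers, not part of toast itself.
DONE_LOG=.toast-done
run_once() {
  cmd=$1; shift
  touch "$DONE_LOG"
  for f in "$@"; do
    grep -qxF "$f" "$DONE_LOG" && continue      # already done: skip on rerun
    "$cmd" "$f" && printf '%s\n' "$f" >> "$DONE_LOG"
  done
}

# annotate() { toast "$1" "add type hints"; }
# run_once annotate $(find . -name "*.py")
```

After a Ctrl-C, rerunning the same command skips every file already in the log.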

@file injection

In chat mode, pull files into context on the fly. Multi-file supported.

> @schema.sql @models.py are these in sync?

Any model

One interface, many providers. Compare models without changing your workflow.

toast -p anthropic -m claude-opus-4-5 "explain"

BYOK

Bring your own API keys. With BYOK your files never touch our servers.

export OPENAI_API_KEY=sk-...

Run local

MLX, Ollama. Full privacy, no internet required.

toast -p mlx -m llama3 "explain"

Git hooks, log monitoring, CI/CD

# Pre-commit code review
git diff --cached | Reviewer || exit 1

# Real-time error diagnosis
tail -f app.log | grep ERROR | toast "diagnose"

# Auto-generate docs
find . -name "*.py" | xargs cat | toast "generate API docs" > API.md

Pricing

Prepaid $20 inference. BYOK supported. No API key required. Zero commitment.
Subscribe for managed AI with unified billing when you're ready.

Creator

$9/mo
  • All Slices
  • Create custom Slices
  • BYOK supported
  • Chat and Learning
  • Community Support

Max

$119+/mo
  • Everything in Creator
  • SSH access to dedicated Ubuntu VM
  • Host your own website or API
  • Local MLX inference on Apple Silicon
  • Full privacy
  • Expert Help Available

Enterprise

Custom
  • On-premise deployment
  • Integration & audit logs
  • Custom fine-tuning
  • Dedicated support
  • Seminars and Training
Contact Sales

FAQ

How does it work?

Lightweight toast talks to local toastd, which keeps an HTTP/2 connection pool to linuxtoaster.com. Written in C to minimize latency. With BYOK, toastd connects directly to your provider—your traffic never touches our servers.

What's BYOK?

Have a PROVIDER_API_KEY environment variable set for Anthropic, Cerebras, Google Gemini, Groq, OpenAI, OpenRouter, Together, Mistral, Perplexity, or xAI? Just run toast -p <provider>. Zero config.

What does "request" mean?

One toast command = one request. Piping input into toast still counts as one request; a chain of two toast invocations makes two. Chat mode makes one request per message.

Can I run it fully offline?

Yes. Use toast -p mlx or toast -p ollama with a local model. No internet, no API keys, full privacy.

What's a Slice?

A specialized AI persona, a slice through the latent space, a perspective. Coder knows code. Sys knows Unix. Writer writes docs. Or create your own with a .persona file.

Where's my data stored?

Locally. Context in .crumbs, conversations in .chat. Version them, grep them, delete them. Your machine, your files.
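Because these are plain files, ordinary Unix tooling works on them directly. A quick sketch (the `.crumbs.bak` backup name is just illustrative):

```shell
# .crumbs and .chat are plain text files (per the FAQ), so standard
# tools apply. .crumbs.bak is an illustrative backup name.
printf 'Python 3.11, FastAPI\n' > .crumbs   # write project context
grep -q 'FastAPI' .crumbs                   # grep it
cp .crumbs .crumbs.bak                      # snapshot it (or git add it)
```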

macOS? Windows?

macOS and Linux today. Windows works via WSL.

What's included in Max?

SSH access to a dedicated Ubuntu VM on multi-tenant Mac Metal. Pre-configured for local MLX inference. No network latency, full privacy.