LinuxToaster Documentation

Your AI-powered terminal assistant. Talk to your computer in plain English.

Table of Contents

Introduction • Installation • Quick Start • Core Features • Advanced Usage • Configuration • Troubleshooting • FAQ

Introduction

LinuxToaster brings the power of AI directly to your terminal. No more memorizing commands, searching Stack Overflow, or switching contexts. Just ask in plain English and get instant answers.

Key Features: Natural language commands • Specialized AI personas • Local model support • Project awareness • Git integration

Installation

Install LinuxToaster with a single command:

curl -sSL linuxtoaster.com/install | sh

This installs the toast CLI and its background daemon, toastd.

Manual Installation

If you prefer manual installation:

  1. Download the latest release for your platform from linuxtoaster.com
  2. Run chmod +x toast toastd
  3. Move toast and toastd to a directory on your PATH
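
For example, a minimal sketch of those steps on Linux. The release URL and archive name below are illustrative placeholders, not the actual download path; check linuxtoaster.com for the real release assets:

# Hypothetical release URL and archive name - replace with the real ones from linuxtoaster.com
$ curl -LO https://linuxtoaster.com/releases/linuxtoaster-linux-amd64.tar.gz
$ tar -xzf linuxtoaster-linux-amd64.tar.gz

# Make the binaries executable and move them onto your PATH
$ chmod +x toast toastd
$ sudo mv toast toastd /usr/local/bin/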

Quick Start

Start using LinuxToaster immediately:

# Ask for a command
$ toast "how do I find files larger than 1GB"
find . -size +1G -type f

# Get explanations
$ cat config.yml | toast "explain this configuration"

# Enter chat mode
$ toast
> Help me debug this Python script
[AI responds with debugging help]
Note: First-time users need to create an account; run toast 'hi' to get started.

Core Features

Basic Usage

LinuxToaster works with standard Unix patterns:

# Direct queries
$ toast "list all running docker containers"

# Pipe data
$ ps aux | toast "which process uses the most memory?"

# Process files
$ toast app.py "add error handling to this function"

Slices (Personas)

Slices are specialized AI personas for different tasks:

# Use built-in personas
$ Coder "write unit tests for this function"
$ Writer "summarize these meeting notes"
$ Sys "why is my system so slow?"

# Create custom personas
$ toast --add SecurityExpert
$ SecurityExpert "review this code for vulnerabilities"

BYOK (Bring Your Own Key)

Use your own API keys for any supported provider:

# Set environment variables
$ export OPENAI_API_KEY=sk-...
$ export ANTHROPIC_API_KEY=...

# Switch providers
$ toast -p openai "explain this"
$ toast -p anthropic "explain this"

Supported providers: OpenAI, Anthropic, Google Gemini, Groq, Cerebras, Mistral, Perplexity, xAI, OpenRouter, Together

Local Models

Run completely offline with local models:

# Ollama
$ toast -p ollama -m GLM-4.7 "explain this"

# MLX (Apple Silicon)
$ toast -p mlx -m GLM-4.7 "explain this"
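
If a model is not yet available locally, pull it into Ollama first. The model tag below just mirrors the example above; substitute whichever model you actually run:

# Download the model into Ollama's local store (model tag is illustrative)
$ ollama pull GLM-4.7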

Project Context

Give LinuxToaster context about your project:

# .crumbs file - project metadata
$ echo "Python 3.11, FastAPI, PostgreSQL" > .crumbs

# .persona file - default behavior
$ echo "You are a Python expert. Be concise." > .persona

# .context file - session context
$ echo "Working on authentication module" > .context

Chat Mode

For interactive conversations:

$ toast
> Help me optimize this SQL query
[AI provides optimization]
> Can you explain why that works?
[AI explains the reasoning]
> /exit

Chat history is saved in .chat files.

Pipe Chains & Composition

Chain multiple AI operations:

# Multi-step processing
$ cat report.md | toast "summarize" | toast "translate to Spanish" | toast "format as email"

Advanced Usage

Git Hooks

Automate code review and commit messages:

# .git/hooks/pre-commit
#!/bin/sh
git diff --cached | toast -p anthropic "review for bugs" || exit 1

# .git/hooks/prepare-commit-msg
#!/bin/sh
git diff --cached | toast "write a commit message" > "$1"
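
Git only runs hooks that are executable, so remember to mark both files as executable after creating them:

$ chmod +x .git/hooks/pre-commit .git/hooks/prepare-commit-msg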

Log Monitoring

Real-time log analysis:

# Monitor errors in real-time
$ tail -f app.log | grep ERROR | toast "diagnose and suggest fixes"
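
One caveat: when grep writes to a pipe instead of a terminal, it block-buffers its output, which can delay matches from reaching toast. With GNU grep you can force line-by-line output:

# --line-buffered flushes each matching line immediately
$ tail -f app.log | grep --line-buffered ERROR | toast "diagnose and suggest fixes"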

CI/CD Integration

Use in CI/CD pipelines:

# GitHub Actions
- name: Review PR
  run: |
    git diff origin/main | toast "review changes" || exit 1

# Generate documentation
- name: Generate Docs
  run: |
    find src -name "*.py" | xargs cat | toast "generate API docs" > docs/API.md
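
These steps assume toast is already available on the runner. If it is not, add an install step first; a minimal sketch using the one-line installer from the Installation section (the step name is arbitrary):

# Install LinuxToaster on the CI runner before any toast steps
- name: Install LinuxToaster
  run: curl -sSL linuxtoaster.com/install | sh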

Configuration

LinuxToaster can be configured through environment variables and per-project files such as .crumbs, .persona, and .context.

Common Environment Variables

Variable            Purpose
PROVIDER_API_KEY    API key for your BYOK AI provider (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY)

Troubleshooting

Common Issues

Daemon not running: LinuxToaster automatically starts toastd. If issues occur, restart with pkill toastd && toast
Authentication errors: Verify API keys are set correctly and have sufficient permissions
Model not found: Check provider documentation for available models

Debug Mode

Enable debug output:

$ toast -d "test command"

FAQ

Q: Is my data private?

A: With BYOK or local models, your data never touches our servers. Managed options process data securely with enterprise-grade encryption.

Q: Can I use LinuxToaster offline?

A: Yes, with local models via Ollama or MLX. No internet connection required.

Q: How does billing work?

A: Pay-as-you-go with prepaid credits, or subscribe for monthly credits. Check balance with toast --balance.

Q: Can I use custom models?

A: Yes, through Ollama or MLX. Any compatible model can be used.

Q: Is there an API?

A: Yes. LinuxToaster.com offers an API for using Slices in your own software.