NoClaw - Mac Mini Assistant the Unix Way

The problem with OpenClaw

What happens when you give an opaque system of 400,000+ lines of code your credentials and broad permissions while hoping it interprets English the way you meant it? A lot of people just ran that experiment with OpenClaw.

One user asked his OpenClaw to "let people know the newsletter will be late" — it messaged all 500 of his contacts individually. A journalist's agent reorganized her entire filesystem, rewrote draft articles, emailed editors without approval, and deleted completed work it decided was "redundant." These aren't edge cases; they're the expected result of overwhelming complexity.

The security picture is worse than the rogue agents suggest. OpenClaw runs as a persistent daemon with a gateway process, a heartbeat scheduler, session management across a dozen messaging platforms, a plugin SDK, native apps for iOS and Android, and a community skill registry called ClawHub.

That registry has already been weaponized. Security researchers found over 1,100 malicious skills on ClawHub — roughly one in five packages. Cisco called it a security nightmare. CrowdStrike published a blog on how to detect it as a threat on corporate endpoints. 1Password told people to treat any installation on a work machine as a potential incident. Over 135,000 OpenClaw instances were found exposed to the public internet with unsafe defaults, leaking API keys, chat histories, and credentials to anyone who looked.

Then there's the cost. OpenClaw re-injects the full tool list, skills metadata, workspace files, and entire conversation history with every single API call. Context accumulation accounts for 40–50% of token usage. One tech blogger reported burning 1.8 million tokens in a month — a $3,600 bill. Heavy users routinely hit $800–1,500/month. One developer measured 40 million input tokens in four days. An entire cottage industry of blog posts and optimization guides has sprung up around cutting OpenClaw's API spend.

Having an AI on your Mac Mini that handles your business is genuinely useful. But it doesn't require hundreds of thousands of lines of code. Nor does it require thousands of dollars per month in inference.

NoClaw

NoClaw implements an AI agent out of LinuxToaster's small, single-purpose C tools, connected by Unix pipes. Each one can be understood and tested individually. No plugin registry. No orchestration layer. Every tool does one thing, doesn't know the others exist, and can be killed with ctrl-c. The entire permission model is a plain text file you can read in ten seconds.

Toast is stateless in pipe mode — each invocation gets the persona, crumbs, and the conversation history for that one chat thread. No session bloat, no accumulated context, no runaway costs. And with toasted — our local inference daemon for Apple Silicon — toast runs inference on your Mac Mini at ~100 tokens/second. Zero cost per token. Your data never leaves the machine.

Or bring your own API keys and mix and match, one model for this, another for that — Anthropic, OpenAI, Cerebras, and more — and pick your own cost structure.

There's no skill registry to poison in NoClaw because there are no skills. The AI's capabilities come from the .tools file and the commands on your machine. The tools don't even know about each other — imessage doesn't import toast, toast doesn't link against kal. They connect through pipes, the same way grep connects to sort. If one misbehaves, you ctrl-c it. The blast radius of any single tool is exactly its own process.
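That pipe model is the ordinary Unix one. Each stage is a separate process that only sees bytes on stdin and stdout:

```shell
# Three independent processes, composed only through pipes
printf 'banana\napple\ncherry\n' | grep -v cherry | sort
```

This prints apple, then banana. Kill any stage and only that process dies; the others never knew it existed.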

OpenClaw's own documentation admits: "There is no 'perfectly secure' setup." That's true of any system. But there's a difference between a system where security requires hardening guides, audit commands, sandbox configurations, proxy authentication, and a dedicated security practice guide — and a system where security comes from the fact that you can read all of it in an afternoon.

Our Mac Mini assistant is built with a couple of lines of shell and a handful of text files. It handles email, calendar, contacts, notes, and todos. Every incoming message gets piped through toast, and the reply goes back automatically. No Python. No Node. No daemon. No API layer.

imessage bot 'toast "You are the executive assistant"'
email bot 'toast "You are the executive assistant, triage emails"'

How we got here: Let toast answer texts

About 150 lines of Python later we had something that worked. JSON-RPC over stdin/stdout. Named FIFOs. Subscription management. A notification loop. Most people stop here. For us it felt like too much ceremony for something simple: read a message, think about it, send a reply.

We tried to use imsg by OpenClaw as the iMessage interface, piped through toast, and sent replies back. Six lines of bash replaced most of the Python script:

script -q /dev/null imsg watch --json |
  jq --unbuffered -r 'select(.is_from_me==false) | "\(.chat_id)\t\(.text)"' |
  while IFS=$'\t' read -r cid text; do
    reply=$(echo "$text" | toast)
    imsg send --chat-id "$cid" --text "$reply"
  done

But it still felt clumsy. Too many flags. --json, --unbuffered, --chat-id, --text. Every flag is a decision someone has to make. So we built our own imessage CLI in C:

command                                   behavior
imessage                                  list recent chats (or show current thread if one is set)
imessage chats                            list chats (always)
imessage from Catherine                   chats matching name or identifier
imessage search "pizza"                   chats containing message text
imessage 42                               set current chat to 42, show thread
echo "hi" | imessage                      send to current chat
echo "hi" | imessage user@icloud.com      send to address
echo "hi" | imessage +14155551234         send to phone number
imessage map cmd                          on incoming message, run cmd with history on stdin
imessage bot cmd                          on incoming message, auto-reply with cmd's stdout

When it's unambiguous, imessage figures out what it can from context — whether there's a current chat, whether stdin has data, and whether you passed an address or a command.

It uses kqueue to watch the Messages SQLite WAL file for new messages. History output uses them: and you: prefixes, so it works directly as LLM context. Sending uses AppleScript via popen().
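The history format is plain line-oriented text. A sample of what that looks like (this transcript is invented):

```shell
cat <<'EOF'
them: are you free thursday?
you: let me check
them: lunch would be great
EOF
```

Pipe that into any LLM CLI and it reads as a conversation, no parsing step required.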

State lives in ~/.toast/imessage/current. In bot/map mode, each child gets IMESSAGE_CID, IMESSAGE_IDENT, and IMESSAGE_SENDER in the environment.

Different chats run in parallel; the same chat runs serially, so replies stay in order.
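That scheduling policy is easy to picture with plain shell jobs. A rough simulation, where handle_chat is an invented stand-in for the per-chat worker:

```shell
# One worker per chat: chats are concurrent, messages within a chat are serial.
handle_chat() {   # $1 = chat id; reads that chat's messages from stdin
  while IFS= read -r msg; do
    echo "chat $1: replied to '$msg'"
  done
}

printf 'hi\nare you there?\n' | handle_chat A &   # chat A, two messages in order
printf 'lunch?\n'             | handle_chat B &   # chat B, concurrently
wait
```

Chat A's two replies always come out in order; A and B interleave freely.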

A one-line assistant that handles your business:

imessage doesn't know about AI. toast doesn't know about iMessage. Both meet in a pipe.

# Assistant bot
imessage bot 'toast "reply as executive assistant"'

Composability means you can replace toast with anything:

# Echo bot
imessage bot 'tail -1'

# Fortune cookie bot
imessage bot 'fortune'

# Translator bot
imessage bot 'toast "translate to Spanish"'

One line each. No Python. No frameworks.

Adding working memory: .crumbs

Toast automatically reads a .crumbs file as context. The lookup is directory-specific: toast walks up the tree from the current directory until it finds one. Our first assistant used that to keep notes:

# Calendar
2026-02-08 12:00-13:00 Lunch with Bob
2026-02-08 15:00-16:00 Dentist
2026-02-10 09:00-10:00 Team standup
2026-02-12 18:00-19:30 Dinner with Sarah

# Contacts
Bob: +16505551234, coworker, prefers morning meetings
Sarah: +14155559876, friend, vegetarian
Mom: +16505554321, call on Sundays

# Todo
- Fix the garage door
- Renew passport (expires March)
- Buy birthday gift for Sarah (Feb 20)

# Notes
Gym: Mon/Wed/Fri 7am
Car in shop until Feb 10, taking the EUC
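The walk-up lookup mentioned above fits in a few lines of shell. A hypothetical sketch; toast's actual resolution logic may differ:

```shell
# Walk up from the current directory until a .crumbs file is found.
find_crumbs() {
  dir=$PWD
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/.crumbs" ]; then
      echo "$dir/.crumbs"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  [ -f /.crumbs ] && echo /.crumbs
}
```

So a .crumbs in your home directory acts as a default, and a project directory can carry its own.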

Someone texts "are you free Thursday?" — toast checks .crumbs, sees the dentist, suggests another time. "What's Bob's number?" — it knows. "Remind him about my birthday" — it appends to the todo list.

Toast can also write to .crumbs. So when someone texts "let's do lunch Friday at noon," the assistant adds it to the calendar. No API. No integration. Just a text file.
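Appending to that calendar is ordinary shell redirection; the entry below is invented:

```shell
# Add an event to the plain-text calendar, then read that day back
echo '2026-02-13 12:00-13:00 Lunch with Alex' >> .crumbs
grep '^2026-02-13' .crumbs
```

Anything that can write a line of text — the AI, a cron job, vim — can update the schedule.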

.tools tells toast which commands it's allowed to run. This isn't a suggestion — toast enforces it. One command per line, bare names:

# .tools
kal
rem
contacts
notes
emails
imessage

Want the AI to stop sending emails? Remove emails from the list. Want to audit what it can do? cat .tools.

.persona is the third file:

# .persona
You are the CEO's executive assistant. You respond to texts.
Tone: brief, direct, helpful. Text message style.

The file .crumbs contains the schedule, contacts, notes, and todos.

When asked to schedule something, add it to .crumbs.
If a time conflicts, suggest the nearest open slot.
If unsure about anything, say "Let me check."

The entire AI assistant — calendar, contacts, notes, todos, scheduling — is three text files and one line of bash:

imessage bot 'toast'

No database. No app. No subscription. Edit your schedule with vim.

The problem with text files

Your real calendar lives in Calendar.app. Your real reminders live in Reminders.app. Maintaining a parallel text file means everything drifts out of sync the moment you add an event from your phone or Siri creates a reminder.

We needed the assistant to talk to the actual macOS apps.

Talking to native macOS apps from the command line

So we wrote small CLI tools that bridge Unix pipes to macOS native apps — Calendar.app, Reminders.app, Contacts.app, and Notes.app — using the same philosophy as imessage: no flags when possible, behavior from context.

kal — reads and writes to Calendar.app:

# List today's events
kal

# List events for a date range
kal 2026-03-18 2026-03-20

# Create an event
kal add 'Lunch with Bob 2026-03-19 at noon.'

# Pipe your schedule into the AI
kal | toast "what does my day look like"

rem — reads and writes to Reminders.app.

contacts — reads and writes to Contacts.app.

notes — reads and writes to Notes.app.

emails — reads and writes to Apple's (or anyone's) IMAP and SMTP servers.

Under the hood these use EventKit and the native macOS frameworks — the same ones Calendar.app, Reminders.app, Contacts.app, and Notes.app use. They read and write real data. If Siri adds a reminder on your phone, rem sees it. If the AI creates a calendar event, it shows up on your watch. When you get an email your assistant sees it too.

The new pipeline

Toast can use these tools directly — no shell gymnastics needed:

imessage bot 'toast "You are the executive assistant"'

Same one-liner as before. But now when someone texts "am I free Thursday afternoon?" — toast calls kal and checks your real Calendar.app. "Remind me to call the dentist" — it calls rem and creates an actual Reminder that syncs to all your devices. "Schedule lunch with Bob tomorrow at noon" — a real calendar event appears. "What's Bob's number?" — it checks Contacts.app.

Toast knows these tools are available and calls them when it needs to. The AI decides when to check the calendar, when to create a reminder, when to look up a contact. The text file was the prototype. The native integrations are the product. Same Unix philosophy, same composability, but now it talks to the apps you already use.

No database. No cloud. No subscription. Your Mac is the backend.

The silent iMessage assistant

bot auto-replies. But maybe that's too much: maybe you want a silent assistant and don't want to worry about what the AI says to other people.

map runs a command on every incoming message — no reply, no interruption. That turns every conversation into a structured data source:

imessage map 'toast "extract any dates, commitments, or action items. call kal to add events, rem to add reminders. if nothing found, output nothing."'

Someone texts "let's do dinner Friday at 7" — a calendar event appears on your watch. "Can you send me the report by Monday?" — a reminder shows up. No reply, no confirmation, no interruption. It just organizes your life in the background.
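For contrast, here's what a non-AI version of that extraction step looks like: a crude grep for day-of-week phrases. It shows the shape of map's job, but only a model handles free-form language:

```shell
echo "let's do dinner Friday at 7" |
  grep -oiE '(mon|tues|wednes|thurs|fri|satur|sun)day( at [0-9]{1,2})?'
```

This prints "Friday at 7" and nothing else. grep can't handle "next weekend" or "same time as last week" — which is exactly why the extraction goes through toast.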

That's what a real executive assistant does. They sit in the room, listen, and things just get handled. You find out when your calendar is already right.

All of our Unix re-imagined CLI tools, including toasted, are available to our members at linuxtoaster.com as part of their monthly subscription.