Almost every AI product you can buy in 2026 is a chat box. The chat box is in the corner of your IDE. The chat box is the center of your coding assistant. The chat box is the search interface, the documentation reader, the customer support tool, the agent front-end. When a company decides to add AI to its product, the default move is to put a chat box in a pane.
This is a catastrophic design choice, and the industry has made it so uniformly that it no longer feels like a choice at all. It feels like the medium.
But the chat box is not the medium. The chat box is a UI pattern borrowed from consumer messaging apps and stapled onto tools that have nothing to do with messaging. It was chosen because OpenAI's first product was a chat box and everyone copied it. That is the entire reason. There was no study showing that chat is the right interface for code editing, or data analysis, or system administration, or document editing. There was a precedent, and the industry defaulted to the precedent.
The chat box costs you three things, and none of them are recoverable once the pattern is established.
First, it costs you composition. A chat box is a dead-end interface. Text goes in, text comes out, and the text that comes out lives in the chat box. To do anything with the output — save it, transform it, pipe it to another tool, feed it back as input — you have to copy it out by hand, or use some awkward sidebar feature the chat box grew to compensate for its own inadequacy. The terminal, by contrast, is composable by construction. Output is data. Data flows.
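The composability claim can be made concrete with nothing but standard tools. Note that no model is involved in this sketch; that is the point — each stage is a plain text filter, and a model dropped into the middle of such a pipeline would be just another filter:

```shell
# Each stage is a text filter: it neither knows nor cares what
# produced its input or what will consume its output.
printf 'error: disk full\ninfo: ok\nerror: timeout\n' \
  | grep '^error' \
  | wc -l
# counts the error lines: 2
```

Every stage here is replaceable, including, in principle, by a model.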
Second, it costs you history. Everything you do in a chat box lives inside the chat box. Your conversations are trapped in a product's database, in a product's format, behind a product's API — if it has one at all. You cannot grep six months of chats for the conversation where you figured out how to debug the auth bug. Or if you can, the search is a fuzzy text search over opaque conversation objects, not a real query over real data. Compare: a shell history file is just lines in a file, and history | grep auth works because of forty years of Unix convention.
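The point about history being real data can be shown directly. The file below is a stand-in for a real ~/.bash_history, used here only so the example is self-contained:

```shell
# A history file is just lines of text, so any text tool can query it.
# /tmp/sample_history stands in for a real ~/.bash_history here.
cat > /tmp/sample_history <<'EOF'
git log --oneline
journalctl -u authd --since yesterday
curl -v https://example.com/login
EOF

grep -i auth /tmp/sample_history
# → journalctl -u authd --since yesterday
```

No product, no API, no export step: the query works because the data was never trapped in the first place.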
Third, it costs you workflow. A chat box assumes a conversational rhythm — you ask, it answers, you ask again, it answers again. That is a fine rhythm for genuinely conversational tasks, like talking through an idea. It is a terrible rhythm for getting work done. Most work is not a conversation. Most work is: take this input, run this transformation, produce this output. The chat box forces everything into a dialog shape, and a dialog shape is wrong for most of what people are actually doing.
The chat box happened because it is the easiest thing to build. A chat interface ships in a week. It requires no thought about the user's existing workflow. It requires no integration with other tools. It requires no opinions about how output should be shaped. You dump the model behind a text input, you render the tokens as they stream, you call it a product.
It is also the easiest thing to demo. A chat box looks magical the first time, because the user types a question and the model answers. The whole value proposition fits in a screenshot. The fact that the interaction pattern is a terrible fit for most real work does not matter in the demo, because the demo is the first ten seconds of the interaction. The badness shows up an hour in, when the user realizes they cannot actually do anything with what the chat box gave them.
The industry has been optimizing for the demo. Investors see demos. Journalists see demos. Twitter sees demos. The user who has to use the thing for a year is, as always, the last person whose opinion shapes the product.
The alternative is not complicated. It is just: put the model where the other tools are. If the user's work happens in a terminal, put the model in the terminal — not as a chat overlay, but as something pipeable. If the user's work happens in an editor, put the model behind a key binding that transforms selected text in place. If the user's work happens in a spreadsheet, put the model in a cell as a function. If the user's work happens in email, put the model in the mail filter.
The general rule is: identify the composition surface the user already has, and make the model work through it. Do not invent a new surface. Do not ask the user to leave what they are doing. Do not build a castle around the model.
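The interface contract a pipe-shaped tool has to honor is small: read stdin, write stdout. The sketch below is not the real toast — where a real tool would send the prompt and the piped input to a model, this stub just uppercases, purely to keep the plumbing visible:

```shell
# A minimal sketch of the pipe shape. 'toastish' is a stand-in, not
# the real toast: where a real tool would send prompt + input to a
# model, this stub just uppercases the input.
toastish() {
  prompt="$1"          # the instruction, e.g. "summarize this"
  input="$(cat)"       # whatever the pipe delivered on stdin
  # real version: send "$prompt" and "$input" to a model, print reply
  printf '%s\n' "$input" | tr 'a-z' 'A-Z'
}

echo "hello pipes" | toastish "shout this"   # → HELLO PIPES
```

That stdin-in, stdout-out contract is the whole interface, and it is what lets ps aux, grep, or any other tool sit on either side of the model without the model needing to know about them.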
A concrete example:
ps aux | toast "which process is burning the cpu"
This works because ps aux is a tool that produces text, pipes are a composition mechanism that accepts text, and toast is a tool that takes text and produces text. No chat box was involved. No session state was created. No product was opened. The user stayed where they were, typed one additional line, got the answer, moved on. The model was available exactly when it was needed and disappeared exactly when it was not.
Compare this to opening a chat box, typing "can you tell me which process is burning the cpu on my machine," getting back a paragraph explaining how to use ps, copying the command, pasting it into a terminal, running it, copying the output, pasting it back into the chat box, getting an answer. Every step in that second flow is context-switching friction that the pipe eliminated.
The chat box is not useless. There are genuinely conversational tasks. Rubber-ducking a design problem. Exploring a new topic. Writing dialogue. In these cases, the dialog shape is what you want, because the work itself is a dialog.
But these are a small fraction of what AI tooling is being used for. Most of what people want is transformation: turn this draft into a polished version, turn this log into an alert, turn this code into a refactored version, turn this file into a summary. Transformation is not a dialog. Transformation is a function call. The chat box is the wrong wrapper.
When you do need a chat — and sometimes you do — you should be able to reach for it as a specific tool. toast has an interactive mode for exactly this. You type toast with no arguments and you get a back-and-forth. But it is a mode of a tool, not the tool itself. The default interaction is the pipe. The chat is the exception.
The chat box is not a mistake about interface design. It is a mistake about what AI tools are for. The chat box design implies that the user will come to the model. It implies the model is the destination. The model is not the destination. The model is a component. The destination is whatever the user was doing — writing code, reading logs, answering email, analyzing data. The model's job is to make that destination easier to reach.
Interfaces that recognize this will win in the long run, because they are aligned with what the user was actually doing in the first place. Interfaces that do not recognize this will age the way IRC-inspired support widgets aged: they will look like artifacts of a specific moment, and users will migrate to whatever lets them stay where they already are.
The chat box feels inevitable right now because everyone is building one. It is not inevitable. It is a local maximum in a design space the industry has barely explored, and we think it is a shallow one. The interesting designs are the ones that put the model at the composition surface the user already uses — the shell, the editor, the spreadsheet, the inbox — and let the existing workflow absorb the new capability without disruption.
The chat box asks the user to come to the model. The right design brings the model to the user.
The difference sounds small. It is not. It determines whether the tool becomes part of the user's life or remains a thing they visit occasionally and eventually stop visiting at all.
This essay is part of LinuxToaster — Unix re-imagined for the era of AI.