
Your AI agent already knows how to use a terminal. Why CLIs beat MCP servers

Mike Miller

March 13, 2026


Table of contents

MCP got the vision right

Think about where the knowledge lives

What this looks like in practice

MCP still matters, but not for what you'd think

The practical takeaway

Here's something worth paying attention to. Give an AI agent shell access and point it at a CLI (command-line interface) tool it's never seen before. It runs --help, reads the output, and starts using the tool correctly. No instructions. No schema. It figures it out.

This shouldn't be surprising, but it is if you've been following the MCP conversation. We build and maintain both an MCP server and a CLI at Courier, and the difference in how agents use them is striking. It also tells you something important about where tool knowledge actually lives inside these models.

MCP got the vision right

Before MCP, every agent-to-tool integration was a one-off. Model Context Protocol fixed that. Servers expose tools with typed schemas; agents discover and call them through a standard interface. Thousands of servers, every major AI lab on board, Linux Foundation governance. A universal protocol for agent tooling is clearly the right idea.

The problem is the implementation pattern. MCP loads all your tool definitions into context before the agent can think. Connect a few servers and most of your context window is tool menus. Eric Holmes captured the frustration well: MCP servers are flaky, need constant re-auth, and add moving parts. CLI binaries just run.

But I think the overhead is a symptom of a deeper problem.

Think about where the knowledge lives

This is the key insight. LLMs were trained on the internet, and the internet is full of terminals. Millions of man pages, READMEs, Stack Overflow answers, shell scripts, blog tutorials. These models have seen git and curl and docker used in every possible context. That knowledge is in the weights. It's baked in. It's free at inference time.

MCP launched in late 2024. There is approximately zero MCP usage in any model's training data. So every MCP interaction depends entirely on schemas you load into context at runtime. You're paying for something the model has no prior understanding of.

With a CLI, the model already gets it. You show it --help and it fills in the rest from what it already knows. With MCP, you start from scratch every session.
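That discovery loop can be made concrete with a stand-in tool. Everything below is hypothetical (the mytool name, its subcommands, and its output are invented for illustration); the point is that an agent's first move against any unfamiliar CLI is the same: ask for --help, then act on what comes back.

```shell
#!/bin/sh
# "mytool" stands in for any CLI the agent has never seen before.
# Its name, verbs, and output shapes are invented for this sketch.
mytool() {
  case "$1" in
    --help) echo "usage: mytool <list|get> [--format json]" ;;
    list)   echo '[{"id":"1-abc123","status":"SENT"}]' ;;
  esac
}

mytool --help              # step 1: read the synopsis, learn the verbs
mytool list --format json  # step 2: use what the help text taught
```

No schema was loaded anywhere; the help text is the schema, and the model's prior exposure to thousands of CLIs fills in the rest.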

The terminal is a 50-year-old technology that accidentally became the best interface for AI agents, precisely because it's been documented so extensively for so long.

What this looks like in practice

We rebuilt the Courier CLI with agents in mind. Consistent courier [resource] <command> [flags] patterns, --format json on every command, --transform for filtering, --help everywhere.

Watch what an agent does when a teammate reports a user didn't get a notification:

  1. courier messages list --format json --transform "results.#(recipient==user-123)"
  2. courier messages retrieve --message-id "1-abc123" --format json
  3. courier profiles retrieve --user-id "user-123" --format pretty
  4. Finds no email on file. Reports the root cause. Offers to fix it.

Four commands, each output feeding the next decision. That's the natural loop: observe, decide, act, repeat. No tool catalog loaded upfront. The agent only touches what it needs.
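The same chain can be scripted directly, since each step is just a command whose output the next command consumes. The sketch below mocks the courier binary with a shell function so it runs anywhere; the flags (--format, --transform, --message-id) are the ones named in this post, but the output shapes and field values are invented placeholders.

```shell
#!/bin/sh
# Mock "courier" so the chaining pattern runs without the real binary.
# Outputs here are invented; a real run would return live API data.
courier() {
  case "$1 $2" in
    "messages list")     echo '1-abc123' ;;  # transform already applied
    "messages retrieve") echo '{"id":"1-abc123","status":"UNDELIVERABLE"}' ;;
  esac
}

# Step 1 feeds step 2: find the message, then pull its full record.
msg_id=$(courier messages list --format json \
  --transform "results.#(recipient==user-123)")
courier messages retrieve --message-id "$msg_id" --format json
```

An agent does exactly this, except the "pipe" between steps is its own reasoning loop rather than a shell variable.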

MCP still matters, but not for what you'd think

Simon Willison made a good observation: MCP's real value is distribution, not invocation. You can change your MCP server anytime and every connected agent picks up the new tools instantly. No SDK updates, no versioning. That's genuinely useful.

MCP also shines when you're calling an LLM from your own code and want it to have tool access. OpenAI's Responses API, for instance, lets you pass an MCP server as a tool provider in a few lines. No shell, no agent loop, just an SDK call with structured tool access baked in. That's a real use case a CLI can't touch.
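For reference, the request body looks roughly like this. The server_url is a placeholder, and the field names follow OpenAI's published Responses API at the time of writing, so check the current docs before relying on them:

```json
{
  "model": "gpt-4.1",
  "input": "Find out why user-123 didn't get their notification",
  "tools": [
    {
      "type": "mcp",
      "server_label": "courier",
      "server_url": "https://example.com/mcp",
      "require_approval": "never"
    }
  ]
}
```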

And for browser agents, chatbots, and sandboxed enterprise environments where there's no terminal at all, MCP is the right call.

The trajectory is becoming clear, though. MCP is settling into distribution, discovery, and programmatic SDK access. The actual agent-in-a-terminal workflow, where most developer tool interaction happens today, is CLI territory. Anthropic seems to see this too. They've proposed converting MCP tools into lightweight code functions rather than full schemas. That's MCP evolving toward how CLI already works.

The practical takeaway

If you're building developer tools and want agents to use them, build a good CLI. --help on every command. --format json for structured output. Consistent, composable patterns. That's what works today, and the reasons it works (training data, composability, transparency) aren't going away.

Keep MCP for distribution, SDK-level tool access, and environments without a shell. But CLI is the primary path for developer agents.

The terminal won this round because it's been around long enough to be deeply embedded in how these models understand the world. That's a funny kind of advantage, but it's a real one.
