# When CLI Tools Are Enough

> One year after MCP's launch, Anthropic published a solution to context overhead: filesystem-based tool discovery. The pattern looked familiar.

URL: https://kumak.dev/when-cli-tools-are-enough/
Published: 2025-11-15
Category: opinion

A year ago, Anthropic launched the Model Context Protocol. The community built thousands of MCP servers. Two weeks ago, Anthropic published [thoughts on making MCP more efficient](https://www.anthropic.com/engineering/code-execution-with-mcp): specifically, how to reduce token overhead when agents connect to many tools.

Their solution proposes generating a filesystem structure where each MCP tool becomes a TypeScript file. The agent explores this filesystem to discover tools, reads only the definitions it needs, and writes code to orchestrate them. This reduced their example from 150,000 tokens to 2,000 tokens.

What caught my attention: they're building a system where agents discover executable tools by exploring a filesystem structure. That's what `/usr/bin` has been since the invention of Unix.

## The Wrapper Pattern

Many popular MCP servers wrap CLI tools that already output structured data. The pattern is consistent: receive an MCP request, parse it, construct a CLI command, execute it, parse the output, and serialise it back into MCP format.

SQLite's command-line tool has supported JSON output since version 3.33.0 in 2020:

```bash
sqlite3 mydb.db 'SELECT * FROM users WHERE active = true' -json
```

The GitHub CLI has had native JSON output since 2021:

```bash
gh pr list --json number,title,author,state
gh issue create --title "Bug report" --body "Description"
```

Multiple MCP servers explicitly describe themselves as "wrappers around the glab CLI tool" or similar. They add a serialisation layer to something that already outputs structured data.

This isn't theoretical overhead.
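For contrast, here is what the direct route looks like without any wrapper — a sketch, assuming a hypothetical `mydb.db` with a `users` table:

```bash
# Query as JSON, then filter with jq: the same two steps an MCP wrapper
# performs internally, minus the server process and the extra serialisation.
sqlite3 -json mydb.db 'SELECT id, name FROM users WHERE active = 1' \
  | jq -r '.[].name'
```

An agent can emit that one-liner itself. There is no server to launch and nothing to serialise twice.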
Twilio's [performance testing](https://www.twilio.com/en-us/blog/developers/twilio-alpha-mcp-server-real-world-performance) of their MCP server showed concrete results: they reduced API calls by 21.6% but increased cache reads by 28.5% and cache writes by 53.7%. **Overall cost per task increased by 23.5%.** They wrapped an API they already had access to, added an MCP layer to optimise it, and ended up paying more in tokens.

The Gemini CLI team [documented](https://github.com/google-gemini/gemini-cli/issues/4544) "a significant delay of approximately 8 to 12 seconds before the tool becomes responsive" every time an MCP server launches. Not a one-time setup cost. Every session. CLI tools are instant.

## When MCP Actually Makes Sense

MCP isn't unnecessary. There are legitimate use cases:

**Proprietary internal systems** without CLIs, where adding a CLI isn't straightforward. Your company's custom ticketing system with complex state management and no external API designed for programmatic access benefits from an MCP server.

**Complex OAuth flows** requiring browser-based authentication with multiple redirects and token management. Some services make programmatic access deliberately difficult.

**Enterprise security requirements** that need audit logging, fine-grained permissions, and session management beyond standard Unix permissions.

**Remote services** with SDK requirements, stateful connections, or client-side processing that can't be done with simple HTTP requests.

These are real problems MCP solves well. But they're not the typical case. Most database access, version control operations, and file operations already have excellent CLI tooling.

> **Warning:** Before letting any agent run database commands, make sure you have a backup. Stories of agents wiping production databases are not urban legends. Use read-only connections where possible, sandbox test data, and never give write access to anything you can't restore.
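The read-only advice is easy to follow at the CLI level. SQLite's shell has a `-readonly` flag — a minimal sketch, again assuming a hypothetical `mydb.db`:

```bash
# Open the database read-only: SELECT works as normal, but any
# INSERT/UPDATE/DELETE the agent attempts fails with an error
# instead of mutating data you can't restore.
sqlite3 -readonly -json mydb.db 'SELECT count(*) AS n FROM users'
```

The same idea applies elsewhere: a read-only database role, or a filesystem mounted read-only, enforces the boundary below the agent rather than trusting it to behave.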
## Check the CLI First

Anthropic's conclusion is telling: they note that "many of the problems here feel novel (context management, tool composition, state persistence)" but that "they have known solutions from software engineering." They're right. Shell scripts, pipes, redirects, and the filesystem have solved them for decades.

Twilio concluded that "many builders may start with MCP for convenience, but later transition to custom tool integrations once they know exactly what their agent needs." That transition often means realising the CLI already does what you need.

Before adding an MCP server, check whether a CLI tool exists. If it does, try it. Agents like Claude are good at using command-line tools. They understand bash, construct pipelines, and parse structured output. These capabilities have been reliable since launch.

Choose the simplest tool that solves your problem. Sometimes that's MCP. Often, it's the CLI you already have installed.
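As a closing illustration, the kind of pipeline an agent can construct on its own, using the GitHub CLI's JSON output from earlier:

```bash
# One pipeline, no server process: structured output from the CLI,
# reshaped by jq into a readable summary of open pull requests.
gh pr list --json number,title,author \
  | jq -r '.[] | "#\(.number) \(.title) (\(.author.login))"'
```

Everything an MCP wrapper would do here — call the tool, parse JSON, format a response — fits in two lines of shell.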