mcp-cli

A powerful, feature-rich command-line interface for interacting with Model Context Protocol servers. This client communicates with LLMs through the CHUK-MCP protocol library, a Pyodide-compatible, pure-Python implementation of the Model Context Protocol, and supports tool usage, conversation management, and multiple operational modes.

🔄 Protocol Implementation

The core protocol implementation has been moved to a separate package at: https://github.com/chrishayuk/chuk-mcp

This CLI is built on top of the protocol library, focusing on providing a rich user experience while the protocol library handles the communication layer.

🌟 Features

  • Multiple Operational Modes:
    • Chat Mode: Conversational interface with direct LLM interaction and automated tool usage
    • Interactive Mode: Command-driven interface for direct server operations
    • Command Mode: Unix-friendly mode for scriptable automation and pipelines
    • Direct Commands: Run individual commands without entering interactive mode
  • Multi-Provider Support:
    • OpenAI integration (gpt-4o-mini, gpt-4o, gpt-4-turbo, etc.)
    • Ollama integration (llama3.2, qwen2.5-coder, etc.)
    • Anthropic integration (claude-3-opus, claude-3-sonnet, etc.)
    • Extensible architecture for additional providers
  • Provider and Model Management:
    • Configure multiple LLM providers (API keys, endpoints, default models)
    • Switch between providers and models during sessions
    • Command-line arguments for provider/model selection
    • Interactive commands for provider configuration
  • Robust Tool System:
    • Automatic discovery of server-provided tools
    • Server-aware tool execution
    • Tool call history tracking and analysis
    • Support for complex, multi-step tool chains
  • Advanced Conversation Management:
    • Complete conversation history tracking
    • Filtering and viewing specific message ranges
    • JSON export capabilities for debugging or analysis
    • Conversation compaction for reduced token usage
  • Rich User Experience:
    • Command completion with context-aware suggestions
    • Colorful, formatted console output
    • Progress indicators for long-running operations
    • Detailed help and documentation
  • Resilient Resource Management:
    • Proper cleanup of asyncio resources
    • Graceful error handling
    • Clean terminal restoration
    • Support for multiple simultaneous server connections

📋 Prerequisites

  • Python 3.11 or higher
  • For OpenAI: Valid API key in the OPENAI_API_KEY environment variable (see the export example below)
  • For Anthropic: Valid API key in the ANTHROPIC_API_KEY environment variable
  • For Ollama: Local Ollama installation
  • Server configuration file (default: server_config.json)
  • CHUK-MCP protocol library
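
For the hosted providers, export the relevant API key before launching the CLI (shell example; substitute your own keys):

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-..."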

🚀 Installation

Install from Source

  1. Clone the repository:
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
  2. Install the package with development dependencies:
pip install -e ".[cli,dev]"
  3. Run the CLI:
mcp-cli --help

Using UV (Alternative Installation)

If you prefer using UV for dependency management:

pip install uv

uv sync --reinstall

uv run mcp-cli --help

🧰 Global Command-line Arguments

Global options available for all modes and commands:

  • --server: Specify the server(s) to connect to (comma-separated for multiple)
  • --config-file: Path to server configuration file (default: server_config.json)
  • --provider: LLM provider to use (openai, anthropic, ollama, default: openai)
  • --model: Specific model to use (provider-dependent defaults)
  • --disable-filesystem: Disable filesystem access (default: true)
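
These options can be combined on any subcommand. For example, an illustrative invocation (assuming the sqlite server defined in your server_config.json, as shown in the Server Configuration section below):

mcp-cli chat --server sqlite --config-file server_config.json --provider anthropic --model claude-3-sonnet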

CLI Argument Format Issue

You might encounter a "Missing argument 'KWARGS'" error when running various commands. This is due to how the CLI parser is configured. To resolve this, use one of these approaches:

  1. Use the equals sign format for all arguments:
    mcp-cli tools call --server=sqlite
    mcp-cli chat --server=sqlite --provider=ollama --model=llama3.2
  2. Add a double-dash (--) after the command and before arguments:
    mcp-cli tools call -- --server sqlite
    mcp-cli chat -- --server sqlite --provider ollama --model llama3.2
  3. When using uv with multiple extra parameters, use the second approach and append an empty string argument at the end:
    uv run mcp-cli chat -- --server sqlite --provider ollama --model llama3.2 ""

These format issues apply to all commands (chat, interactive, tools, etc.) and are due to how the argument parser interprets positional vs. named arguments.

๐ŸŒ Available Modes

1. Chat Mode

Chat mode provides a natural language interface for interacting with LLMs, where the model can automatically use available tools:

uv run mcp-cli

uv run mcp-cli chat --server sqlite

uv run mcp-cli chat --server sqlite --provider openai --model gpt-4o

2. Interactive Mode

Interactive mode provides a command-driven shell interface for direct server operations:

uv run mcp-cli interactive --server sqlite

3. Command Mode (Cmd)

Command mode provides a Unix-friendly interface for automation and pipeline integration:

uv run mcp-cli cmd --server sqlite [options]

4. Direct Commands

Run individual commands without entering an interactive mode:

uv run mcp-cli tools list {} --server sqlite

uv run mcp-cli tools call {} --server sqlite

🤖 Using Chat Mode

Chat mode provides a conversational interface with the LLM, automatically using available tools when needed.

Starting Chat Mode

uv run mcp-cli --server sqlite

uv run mcp-cli chat --server sqlite

uv run mcp-cli chat --server sqlite --provider openai --model gpt-4o
uv run mcp-cli chat --server sqlite --provider ollama --model llama3.2

uv run mcp-cli chat --server=sqlite --provider=ollama --model=llama3.2

Chat Commands

In chat mode, use these slash commands:

General Commands

  • /help: Show available commands
  • /help <command>: Show detailed help for a specific command
  • /quickhelp or /qh: Display a quick reference of common commands
  • exit or quit: Exit chat mode

Provider and Model Commands

  • /provider or /p: Display or manage LLM providers
    • /provider: Show current provider and model
    • /provider list: List all configured providers
    • /provider config: Show detailed provider configuration
    • /provider set <name> <key> <value>: Set a provider configuration value
    • /provider <name>: Switch to a different provider
  • /model or /m: Display or change the current model
    • /model: Show current model
    • /model <name>: Switch to a different model

Tool Commands

  • /tools: Display all available tools with their server information
    • /tools --all: Show detailed tool information including parameters
    • /tools --raw: Show raw tool definitions
  • /toolhistory or /th: Show history of tool calls in the current session
    • /th <N>: Show details for a specific tool call
    • /th -n 5: Show only the last 5 tool calls
    • /th --json: Show tool calls in JSON format

Conversation Commands

  • /conversation or /ch: Show the conversation history
    • /ch <N>: Show a specific message from history
    • /ch -n 5: Show only the last 5 messages
    • /ch <N> --json: Show a specific message in JSON format
    • /ch --json: View the entire conversation history in raw JSON format
  • /save <filename>: Save conversation history to a JSON file
  • /compact: Condense conversation history into a summary

Display Commands

  • /cls: Clear the screen while keeping conversation history
  • /clear: Clear both the screen and conversation history
  • /verbose or /v: Toggle between verbose and compact tool display modes

Control Commands

  • /interrupt, /stop, or /cancel: Interrupt running tool execution
  • /servers: List connected servers and their status

🖥️ Using Interactive Mode

Interactive mode provides a command-driven shell interface for direct server interaction.

Starting Interactive Mode

mcp-cli interactive {} --server sqlite

Interactive Commands

In interactive mode, use these commands:

  • help: Show available commands
  • exit or quit or q: Exit interactive mode
  • clear or cls: Clear the terminal screen
  • servers or srv: List connected servers with their status
  • provider or p: Manage LLM providers
    • provider: Show current provider and model
    • provider list: List all configured providers
    • provider config: Show detailed provider configuration
    • provider set <name> <key> <value>: Set a provider configuration value
    • provider <name>: Switch to a different provider
  • model or m: Display or change the current model
    • model: Show current model
    • model <name>: Switch to a different model
  • tools or t: List available tools or call one interactively
    • tools --all: Show detailed tool information
    • tools --raw: Show raw JSON definitions
    • tools call: Launch the interactive tool-call UI
  • resources or res: List available resources from all servers
  • prompts or p: List available prompts from all servers
  • ping: Ping connected servers (optionally filter by index/name)
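
A short interactive session might proceed as follows (commands only; the prompt character is illustrative, and output rendering depends on your servers and configuration):

> servers
> tools --all
> provider ollama
> model llama3.2
> exit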

📄 Using Command Mode (Cmd)

Command mode provides a Unix-friendly interface for automation and pipeline integration.

Starting Command Mode

mcp-cli cmd {} --server sqlite [options]

Command Mode Options

  • --input: Input file path (use - for stdin)
  • --output: Output file path (use - for stdout, default)
  • --prompt: Prompt template (use {{input}} as placeholder for input)
  • --raw: Output raw text without formatting
  • --tool: Directly call a specific tool
  • --tool-args: JSON arguments for tool call
  • --system-prompt: Custom system prompt
  • --verbose: Enable verbose logging
  • --provider: Specify LLM provider
  • --model: Specify model to use

Command Mode Examples

Process content with LLM:

uv run mcp-cli cmd --server sqlite --input document.md --prompt "Summarize this: {{input}}" --output summary.md

cat document.md | mcp-cli cmd {} --server sqlite --input - --prompt "Extract key points: {{input}}"

uv run mcp-cli cmd {} --server sqlite --input document.md --prompt "Summarize: {{input}}" --provider anthropic --model claude-3-opus

Call tools directly:

uv run mcp-cli cmd {} --server sqlite --tool list_tables --raw

uv run mcp-cli cmd {} --server sqlite --tool read_query --tool-args '{"query": "SELECT COUNT(*) FROM users"}'

Batch processing:

ls *.md | parallel mcp-cli cmd --server sqlite --input {} --output {}.summary.md --prompt "Summarize: {{input}}"

🔧 Direct CLI Commands

Run individual commands without entering interactive mode:

Provider Commands

mcp-cli provider show

mcp-cli provider list

mcp-cli provider config

mcp-cli provider set <provider_name> <key> <value>

Tools Commands

uv run mcp-cli tools list {} --server sqlite

uv run mcp-cli tools list {} --server sqlite --all

uv run mcp-cli tools list {} --server sqlite --raw

uv run mcp-cli tools call {} --server sqlite

Resources and Prompts Commands

uv run mcp-cli resources list {} --server sqlite

uv run mcp-cli prompts list {} --server sqlite

Server Commands

uv run mcp-cli ping {} --server sqlite

uv run mcp-cli ping {} --server sqlite,another-server

📂 Server Configuration

Create a server_config.json file with your server configurations:

{
  "mcpServers": {
    "sqlite": {
      "command": "python",
      "args": ["-m", "mcp_server.sqlite_server"],
      "env": {
        "DATABASE_PATH": "your_database.db"
      }
    },
    "another-server": {
      "command": "python",
      "args": ["-m", "another_server_module"],
      "env": {}
    }
  }
}
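
With this file in place, point the CLI at a server by name; the --server value should match a key under mcpServers. For example:

mcp-cli chat --server another-server --config-file server_config.json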

๐Ÿ” Provider Configuration

Provider configurations are stored with these key settings:

  • api_key: API key for authentication
  • api_base: Base URL for API requests
  • default_model: Default model to use with this provider
  • Other provider-specific settings
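
Each of these values can be set from the command line using the provider set command described under Direct CLI Commands. For example:

mcp-cli provider set openai default_model gpt-4o-mini
mcp-cli provider set ollama api_base http://localhost:11434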

Environment Variables

You can also set the default provider and model using environment variables:

export LLM_PROVIDER=openai
export LLM_MODEL=gpt-4o-mini

Configuration Example

The provider configuration is typically stored in a JSON file and looks like:

{
  "openai": {
    "api_key": "sk-...",
    "api_base": "https://api.openai.com/v1",
    "default_model": "gpt-4o-mini"
  },
  "anthropic": {
    "api_key": "sk-...",
    "api_base": "https://api.anthropic.com",
    "default_model": "claude-3-opus"
  },
  "ollama": {
    "api_base": "http://localhost:11434",
    "default_model": "llama3.2"
  }
}

📈 Advanced Usage Examples

Provider and Model Selection

You can change providers or models during a session:

> /provider
Current provider: openai
Current model: gpt-4o-mini
To change provider: /provider <provider_name>

> /provider list
Available Providers
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
โ”ƒ Provider  โ”ƒ Default Model  โ”ƒ API Base                        โ”ƒ
โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
โ”‚ openai    โ”‚ gpt-4o-mini    โ”‚ https://api.openai.com/v1       โ”‚
โ”‚ anthropic โ”‚ claude-3-opus  โ”‚ https://api.anthropic.com       โ”‚
โ”‚ ollama    โ”‚ llama3.2       โ”‚ http://localhost:11434          โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

> /provider anthropic
Switched to provider: anthropic with model: claude-3-opus
LLM client updated successfully

> /model claude-3-sonnet
Switched to model: claude-3-sonnet

Working with Tools in Chat Mode

In chat mode, simply ask questions that require tool usage, and the LLM will automatically call the appropriate tools:

You: What tables are available in the database?
[Tool Call: list_tables]
Assistant: There's one table in the database named products. How would you like to proceed?

You: Select top 10 products ordered by price in descending order
[Tool Call: read_query]
Assistant: Here are the top 10 products ordered by price in descending order:
  1 Mini Drone - $299.99
  2 Smart Watch - $199.99
  3 Portable SSD - $179.99
  ...

Using Conversation Management

The MCP CLI provides powerful conversation history management:

> /conversation
Conversation History (12 messages)
1 | system    | You are an intelligent assistant capable of using t...
2 | user      | What tables are available in the database?
3 | assistant | Let me check for you.
...

> /save conversation.json
Conversation saved to conversation.json

> /compact
Conversation history compacted with summary.

🛠️ Implementation Details

The provider configuration is managed by the ProviderConfig class, which:

  • Loads/saves configuration from a local file
  • Manages active provider and model settings
  • Provides helper methods for retrieving configuration values

The LLM client is created using the get_llm_client function, which instantiates the appropriate client based on the provider and model settings.

📦 Dependencies

The CLI is organized with optional dependency groups:

  • cli: Rich terminal UI, command completion, and provider integrations
  • dev: Development tools and testing utilities
  • wasm: (Reserved for future WebAssembly support)
  • chuk-mcp: Protocol implementation library (core dependency)

Install with specific extras using:

pip install "mcp-cli[cli]"     # Basic CLI features
pip install "mcp-cli[cli,dev]" # CLI with development tools

๐Ÿค Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments
