
ADD ChatMCP - Multi-Provider LLM Chat Client with Tool Integration #765

Open · wants to merge 1 commit into base: main
Conversation

@GongRzhe (Contributor) commented Mar 9, 2025

ChatMCP is a powerful command-line chat interface that connects to multiple LLM providers (OpenAI, Anthropic, Groq, etc.) and extends their capabilities with tools using the Model Context Protocol (MCP).

Description

This PR implements the initial version of ChatMCP, a command-line interface that allows users to interact with multiple LLM providers while extending their capabilities through MCP tools. The implementation includes provider switching, token tracking, performance metrics, and comprehensive logging features.

Server Details

  • Server: All MCP servers supported via servers_config.json
  • Changes to: Initial implementation of server integration, tool execution, and response handling
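The `servers_config.json` mentioned above is not shown in this description; the sketch below follows the common MCP client configuration convention (a `mcpServers` map of command, args, and env per server). Server names, commands, and the environment variable are illustrative placeholders, not taken from the PR:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": {}
    },
    "example-api": {
      "command": "python",
      "args": ["example_server.py"],
      "env": {"EXAMPLE_API_KEY": "..."}
    }
  }
}
```

Each entry describes how the client should launch one MCP server process; the client then discovers and exposes that server's tools to the active LLM provider.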

Motivation and Context

This project solves several key problems in the LLM space:

  1. Provides a unified interface to interact with multiple LLM providers
  2. Extends LLM capabilities with tool integrations through MCP
  3. Offers resilient design with circuit breakers and automatic retries
  4. Enables monitoring of token usage and performance metrics
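Item 3 above refers to the circuit-breaker pattern: after repeated failures against a provider, further calls are rejected for a cooldown period instead of piling up. A minimal sketch of that idea, assuming a consecutive-failure threshold and a reset timeout (class and method names here are illustrative, not the PR's actual API):

```python
import time


class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive failures and
    reject calls until `reset_timeout` seconds have elapsed."""

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    def allow(self) -> bool:
        if self.opened_at is None:
            return True  # circuit closed: calls pass through
        # Half-open: permit one trial call once the cooldown has elapsed
        return time.monotonic() - self.opened_at >= self.reset_timeout

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()
```

A retry wrapper would call `allow()` before each provider request, and `record_success()`/`record_failure()` afterward, so one unhealthy provider cannot stall the whole session.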

How Has This Been Tested?

The implementation has been tested with the following scenarios:

  • Basic conversation with each supported provider (OpenAI, Anthropic, Groq, Gemini, Ollama, OpenRoute)
  • Tool usage across different providers
  • Connection resilience with simulated failures
  • Token tracking accuracy against provider-reported usage
  • Performance under extended conversations

Breaking Changes

N/A - This is the initial implementation.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the [MCP Protocol Documentation](https://modelcontextprotocol.io)
  • My changes follow MCP security best practices
  • I have updated the server's README accordingly
  • I have tested this with an LLM client
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have documented all environment variables and configuration options

Additional context

  • The implementation uses an asyncio-based architecture for high performance and reliability
  • Circuit breaker pattern is implemented to prevent cascading failures
  • Environment variables are used for configuration with reasonable defaults
  • Token usage is tracked across different providers and sessions
  • Command system provides extensibility for future features
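Per-provider token tracking as described above could be as simple as accumulating prompt and completion counts keyed by provider. A minimal sketch, with the caveat that the class and method names are hypothetical and not the PR's actual implementation:

```python
from collections import defaultdict


class TokenTracker:
    """Accumulate prompt/completion token counts per provider for a session."""

    def __init__(self):
        # provider name -> {"prompt": int, "completion": int}
        self.usage = defaultdict(lambda: {"prompt": 0, "completion": 0})

    def record(self, provider: str, prompt_tokens: int, completion_tokens: int) -> None:
        self.usage[provider]["prompt"] += prompt_tokens
        self.usage[provider]["completion"] += completion_tokens

    def total(self, provider: str) -> int:
        u = self.usage[provider]
        return u["prompt"] + u["completion"]
```

After each response, the client would feed the provider's reported usage into `record()`, which is also what makes the "token tracking accuracy" test scenario above possible to verify.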
