MCP is an open protocol that standardizes how applications provide context to LLMs - think of it like USB-C for AI applications. It enables a seamless connection between AI models and a wide range of data sources and tools.
MCP helps build agents and complex workflows on top of LLMs by providing:
- Pre-built integrations for your LLM to plug into
- Flexibility to switch between LLM providers
- Secure data handling best practices
- Standardized interface for AI applications
```mermaid
flowchart LR
    A[MCP Host] --> B[MCP Client]
    B --> C[Terminal]
    B --> D[Filesystem]
    B --> E[Memory]
    C --> F[Local Data]
    D --> G[Local Files]
    E --> H[Remote APIs]
```
- MCP Hosts: Applications (like Claude Desktop, IDEs) that need AI context
- MCP Clients: Protocol handlers that manage server connections
- MCP Servers: Lightweight programs exposing specific capabilities:
  - Terminal Server: Execute commands
  - Filesystem Server: Access local files
  - Memory Server: Persistent data storage
- Data Sources:
  - Local: Files and databases on your machine
  - Remote: Web APIs and cloud services
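To make the host/client/server split concrete, the sketch below shows the client side launching a server over stdio and listing its tools. It assumes the official `mcp` Python SDK (`pip install mcp`); the server command and path are illustrative.

```python
# Minimal sketch: launch an MCP server over stdio and list its tools.
# Assumes the official `mcp` Python SDK; the server path is illustrative.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(
        command="uv",
        args=["run", "servers/terminal_server/terminal_server.py"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # capabilities exposed by the server
            print([tool.name for tool in tools.tools])

if __name__ == "__main__":
    asyncio.run(main())
```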
```mermaid
flowchart LR
    User --> Client
    Client --> AI[AI Processing]
    Client --> Terminal[Terminal]
    Client --> Filesystem[Filesystem]
    Client --> Memory[Memory]
```
Core Components:
- AI Processing: Google Gemini + LangChain for natural language understanding
- Terminal Server: Executes system commands in isolated workspace
- Filesystem Server: Manages file operations
- Memory Server: Stores and retrieves persistent data
Key Features:
- Automatic server startup as needed
- Secure workspace isolation
- Flexible configuration
- Extensible architecture
```mermaid
flowchart TD
    A[mcp] --> B[clients]
    A --> C[servers]
    A --> D[workspace]
    B --> E[mcp-client]
    E --> F[main.py]
    E --> G[client.py]
    E --> H[config.json]
    E --> I[.env]
    C --> J[terminal]
    J --> K[server.py]
    D --> L[memory.json]
    D --> M[notes.txt]
```
Key Files:
- `clients/mcp-client/main.py`: Main client entry point
- `clients/mcp-client/langchain_mcp_client_wconfig.py`: AI integration
- `clients/mcp-client/theailanguage_config.json`: Server configurations
- `clients/mcp-client/.env`: Environment variables
- `servers/terminal_server/terminal_server.py`: Terminal server
- `workspace/memory.json`: Persistent memory storage
- `workspace/notes.txt`: System notes
File Type Breakdown:
- Python Files (60%):
  - Core application logic and business rules
  - Server implementations and client applications
  - Includes both synchronous and asynchronous code
  - Follows PEP 8 style guidelines
- JSON Files (20%):
  - Configuration files for servers and services
  - API request/response schemas
  - Persistent data storage format
  - Strict schema validation enforced
- Text Files (15%):
  - System documentation (READMEs, guides)
  - Developer notes and annotations
  - Temporary data storage
  - Plaintext logs and outputs
- Other Formats (5%):
  - Environment files (.env)
  - Git ignore patterns
  - License information
  - Build configuration files
```mermaid
flowchart TD
    A[User Input] --> B[Client]
    B --> C{Type?}
    C -->|Command| D[Terminal]
    C -->|File| E[Filesystem]
    C -->|Memory| F[Storage]
    C -->|AI| G[Gemini]
    D --> H[Response]
    E --> H
    F --> H
    G --> H
    H --> I[Output]
```
- `langchain_mcp_client_wconfig.py`: Main client application
- `theailanguage_config.json`: Server configurations
- `.env`: Environment variables
Key Features:
- Manages multiple MCP servers
- Integrates Google Gemini for natural language processing
- Handles dynamic response generation
- Processes LangChain objects
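The exact wiring lives in `langchain_mcp_client_wconfig.py`; the sketch below only illustrates the general pattern of exposing MCP tools to a Gemini-backed agent, assuming the `langchain-google-genai`, `langchain-mcp-adapters`, and `langgraph` packages. The model name and server path are placeholders, not values from this repository.

```python
# General pattern (assumed packages, not repo code): give a Gemini-backed
# LangChain agent access to the tools of one MCP server.
import asyncio

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def ask(prompt: str) -> str:
    params = StdioServerParameters(
        command="uv", args=["run", "servers/terminal_server/terminal_server.py"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)        # MCP tools -> LangChain tools
            llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # placeholder model
            agent = create_react_agent(llm, tools)       # tool-calling agent
            result = await agent.ainvoke({"messages": [("user", prompt)]})
            return result["messages"][-1].content

print(asyncio.run(ask("List files in the current directory")))
```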
Configuration:
- `theailanguage_config.json`:

```json
{
  "mcpServers": {
    "terminal_server": {
      "command": "uv",
      "args": ["run", "../../servers/terminal_server/terminal_server.py"]
    },
    "memory": {
      "command": "npx.cmd",
      "args": ["@modelcontextprotocol/server-memory"],
      "env": {"MEMORY_FILE_PATH": "workspace/memory.json"}
    }
  }
}
```
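At startup the client has to turn each `mcpServers` entry into something it can launch. A hedged sketch of that step is below; the helper name is hypothetical and not a function from this repository.

```python
# Hypothetical helper: map mcpServers entries to stdio launch parameters.
import json
from pathlib import Path

from mcp import StdioServerParameters

def load_server_params(config_path: str) -> dict[str, StdioServerParameters]:
    config = json.loads(Path(config_path).read_text())
    servers = {}
    for name, spec in config["mcpServers"].items():
        servers[name] = StdioServerParameters(
            command=spec["command"],
            args=spec.get("args", []),
            env=spec.get("env"),  # optional per-server environment overrides
        )
    return servers

params = load_server_params("clients/mcp-client/theailanguage_config.json")
print(list(params))  # e.g. ['terminal_server', 'memory']
```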
- `.env` setup:

```env
GOOGLE_API_KEY=your_api_key_here
THEAILANGUAGE_CONFIG=clients/mcp-client/theailanguage_config.json
```
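The client is expected to read these values at startup. A minimal sketch using `python-dotenv` is shown below; the package choice is an assumption for illustration, not necessarily what the repository uses.

```python
# Minimal sketch: load .env values before starting the client
# (python-dotenv is an assumed dependency for illustration).
import os

from dotenv import load_dotenv

load_dotenv("clients/mcp-client/.env")

api_key = os.environ["GOOGLE_API_KEY"]  # fail fast if the key is missing
config_path = os.getenv(
    "THEAILANGUAGE_CONFIG",
    "clients/mcp-client/theailanguage_config.json",  # fallback default
)
```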
Setup Steps:
- Create a `.env` file in `clients/mcp-client/`
- Add the required variables
- Restart the client after changes
```mermaid
classDiagram
    class TerminalServer {
        +path: String
        +run()
        +validate()
        +execute()
    }
    TerminalServer --|> FastMCP
    class FastMCP {
        +decorate()
        +transport()
    }
```
- Purpose: Executes system commands in isolated workspace
- Key Features:
- Fast command execution
- Secure workspace isolation
- Comprehensive logging
- Technical Details:
  - Uses `FastMCP` for transport
  - Validates commands before execution
  - Captures and returns output
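For orientation, a minimal sketch of a FastMCP-based terminal server is shown below. The tool name, workspace path, and timeout are illustrative and not copied from `terminal_server.py`.

```python
# Illustrative FastMCP terminal server (not the repo's exact implementation).
import subprocess
from pathlib import Path

from mcp.server.fastmcp import FastMCP

WORKSPACE = Path("workspace").resolve()  # commands run only inside this directory
mcp = FastMCP("terminal")

@mcp.tool()
def run_command(command: str) -> str:
    """Run a shell command inside the workspace and return its output."""
    result = subprocess.run(
        command, shell=True, cwd=WORKSPACE,
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    mcp.run(transport="stdio")  # serve over stdio so the client can spawn it
```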
- Purpose: Persistent data storage
- Operations:
- Store/update/read data
- Query specific information
- Example Structure:

```json
{
  "user_preferences": {
    "favorite_color": "blue",
    "interests": ["science fiction"]
  },
  "system_state": {
    "last_commands": ["git status", "ls"]
  }
}
```
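Because the store is plain JSON, it can also be inspected or patched directly. The snippet below is a small illustration of that; the helper is hypothetical and separate from the memory server's own API.

```python
# Hypothetical helper: read and update workspace/memory.json directly.
import json
from pathlib import Path

MEMORY_FILE = Path("workspace/memory.json")

def remember(section: str, key: str, value) -> None:
    data = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    data.setdefault(section, {})[key] = value
    MEMORY_FILE.write_text(json.dumps(data, indent=2))

remember("user_preferences", "favorite_color", "blue")
print(json.loads(MEMORY_FILE.read_text())["user_preferences"])
```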
- Purpose: System documentation and notes
- Content Types:
- User documentation (40%)
- System notes (30%)
- Temporary data (20%)
- Other (10%)
- Python 3.9+
- Node.js 16+
- Google API Key
- UV Package Manager
- Clone the repository:

```bash
git clone https://github.com/Techiral/mcp.git
cd mcp
```

- Set up the Python environment:

```bash
python -m venv venv
source venv/bin/activate    # Linux/Mac
venv\Scripts\activate       # Windows
pip install -r requirements.txt
```

- Configure environment variables:

```bash
echo "GOOGLE_API_KEY=your_key_here" > clients/mcp-client/.env
echo "THEAILANGUAGE_CONFIG=clients/mcp-client/theailanguage_config.json" >> clients/mcp-client/.env
```

- Install the Node.js servers:

```bash
npm install -g @modelcontextprotocol/server-memory @modelcontextprotocol/server-filesystem
```
Verification Checklist:
- Repository cloned
- Python virtual environment created and activated
- Python dependencies installed
- .env file configured
- Node.js servers installed
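The checklist can be automated with a short script like the one below. It only checks paths and tools named earlier in this guide and is illustrative rather than part of the repository.

```python
# Illustrative setup check (not a repo script): verify files and tools exist.
import shutil
import sys
from pathlib import Path

checks = {
    ".env present": Path("clients/mcp-client/.env").exists(),
    "config present": Path("clients/mcp-client/theailanguage_config.json").exists(),
    "uv on PATH": shutil.which("uv") is not None,
    "npx on PATH": shutil.which("npx") is not None,
}
for name, ok in checks.items():
    print(f"{'OK     ' if ok else 'MISSING'} {name}")
sys.exit(0 if all(checks.values()) else 1)
```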
- Start the client:

```bash
python clients/mcp-client/langchain_mcp_client_wconfig.py
```

- Type natural language requests and receive responses
File Operations:
- Create a file named example.txt
- Search for "function" in all Python files
- Count lines in main.py

Web Content:
- Summarize https://example.com
- Extract headlines from news site

System Commands:
- List files in current directory
- Check Python version
- Run git status

Memory Operations:
- Remember my favorite color is blue
- What preferences did I set?
- Show recent commands
Key Configuration Files:
- `theailanguage_config.json`: Main server configurations
- `.env`: Environment variables
Example Server Configs:
```json
{
  "terminal_server": {
    "command": "uv",
    "args": ["run", "servers/terminal_server/terminal_server.py"]
  },
  "memory": {
    "command": "npx.cmd",
    "args": ["@modelcontextprotocol/server-memory"],
    "env": {"MEMORY_FILE_PATH": "workspace/memory.json"}
  }
}
```
Configuration Tips:
- Use absolute paths for reliability
- Set environment variables for sensitive data
- Restart servers after configuration changes
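One way to follow the absolute-path tip without hand-editing the file is to resolve paths when the config is loaded. The sketch below assumes the `mcpServers` layout shown earlier; the resolution rule (treat `.py` arguments as paths) is a simplification for illustration.

```python
# Illustrative: resolve relative server script paths to absolute ones at load time.
import json
from pathlib import Path

config_path = Path("clients/mcp-client/theailanguage_config.json").resolve()
config = json.loads(config_path.read_text())

for spec in config["mcpServers"].values():
    spec["args"] = [
        str((config_path.parent / arg).resolve()) if arg.endswith(".py") else arg
        for arg in spec.get("args", [])
    ]
print(json.dumps(config, indent=2))
```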
Common Issues & Solutions:
- Authentication Problems:
  - Verify the Google API key in `.env`
  - Check that the key has the proper permissions
  - Regenerate the key if needed
- File Operations Failing:

```bash
# Check permissions
ls -la workspace/
# Restart the filesystem server
npx @modelcontextprotocol/inspector uvx mcp-server-filesystem
```

- Memory Operations Failing:

```bash
# Verify memory.json exists
ls workspace/memory.json
# Restart the memory server
npx @modelcontextprotocol/server-memory
```
Debugging Tools:
- Enable verbose logging:

```bash
echo "LOG_LEVEL=DEBUG" >> clients/mcp-client/.env
```

- List running servers:

```bash
npx @modelcontextprotocol/inspector list
```
Getting Started:
- Fork and clone the repository
- Set up development environment (see Local Setup Guide)
Development Workflow:
```bash
# Create a feature branch
git checkout -b feature/your-feature

# Make changes following:
# - Python: PEP 8 style
# - JavaScript: StandardJS style
# - Document all new functions

# Run tests
python -m pytest tests/

# Push changes
git push origin feature/your-feature
```
Pull Requests:
- Reference related issues
- Describe changes clearly
- Include test results
- Squash commits before merging
Code Review:
- Reviews typically within 48 hours
- Address all feedback before merging
Recommended Setup:
- VSCode with Python/JS extensions
- Docker for testing
- Pre-commit hooks