A comprehensive Model Context Protocol (MCP) server that provides tools for interacting with the Comet ML API. This server enables seamless integration with Comet ML's experiment tracking platform through a standardized protocol.
- 🔧 MCP Server: Full Model Context Protocol implementation for tool integration
- 📊 Experiment Management: List, search, and analyze experiments with detailed metrics
- 📁 Project Management: Organize and explore projects across workspaces
- 🔍 Advanced Search: Search experiments by name, description, and project
- 📈 Session Management: Singleton `comet_ml.API()` instance with robust error handling
- Python 3.8 or higher
- Comet ML account and API key
```bash
pip install comet-mcp --upgrade
```

You can run the Comet MCP server using Docker to avoid installing Python dependencies on your system.

- Build the Docker image:

  ```bash
  docker build -t comet-mcp .
  ```

- Configure your MCP client (see Usage section below for configuration examples)
The server uses standard comet_ml configuration:
- Using `comet init`; or
- Using environment variables
Example:
```bash
export COMET_API_KEY=your_comet_api_key_here

# Optional: Set default workspace (if not provided, uses your default)
export COMET_WORKSPACE=your_workspace_name
```

Available tools:

- `list_experiments(workspace, project_name)` - List recent experiments with optional filtering
- `get_experiment_details(experiment_id)` - Get comprehensive experiment information including metrics and parameters
- `get_experiment_code(experiment_id)` - Retrieve source code from experiments
- `get_experiment_metric_data(experiment_ids, metric_names, x_axis)` - Get metric data for multiple experiments
- `get_default_workspace()` - Get the default workspace name for the current user
- `list_projects(workspace)` - List all projects in a workspace
- `list_project_experiments(project_name, workspace)` - List experiments within a specific project
- `count_project_experiments(project_name, workspace)` - Count and analyze experiments in a project
- `get_session_info()` - Get current session status and connection information
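Under the hood, an MCP client invokes these tools with JSON-RPC 2.0 `tools/call` messages. As a rough sketch (the workspace and project values below are placeholders, not real identifiers), a request for `list_experiments` looks like:

```python
import json

# Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
# The workspace/project argument values are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_experiments",
        "arguments": {"workspace": "my-workspace", "project_name": "my-project"},
    },
}
print(json.dumps(request, indent=2))
```

In practice your MCP client library builds and sends this message for you; the sketch only shows the wire shape.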
- Structured Data: All tools return properly typed data structures
- Error Handling: Graceful handling of API failures and missing data
- Flexible Filtering: Filter by workspace, project, or search terms
- Rich Metadata: Includes timestamps, descriptions, and status information
- File Resources: Some tools (like `experiment_spreadsheet`) create CSV files that are available as MCP resources
The server provides access to generated files (like CSV exports) through the MCP resources API. When a tool creates a file, it returns a resource URI that can be accessed using the MCP read_resource method.
Accessing Resources:
- Tools that create files will return a `resource_uri` in their response
- Use the MCP `read_resource` method with the URI to read the file content
- Resources are stored on the server and can be accessed without processing all content through the LLM
Example:
```python
# After calling experiment_spreadsheet, you'll get a resource_uri
# Access it using:
read_resource(uri="file://comet-mcp/experiment_spreadsheet_20251206_103508.csv")
```

Most MCP clients (like Claude Desktop, Cursor, etc.) will automatically handle resource access when you reference the resource URI in your conversation.
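Once the CSV content is read back, it is plain text you can parse with the standard library. A minimal sketch, assuming a hypothetical response body (the column names and rows below are made up for illustration):

```python
import csv
import io

# Hypothetical text content returned by read_resource for a CSV resource;
# the columns and values are invented for this example.
response_text = "experiment_id,metric,value\nabc123,accuracy,0.91\n"

reader = csv.DictReader(io.StringIO(response_text))
rows = list(reader)
print(rows[0]["metric"])  # accuracy
```

This keeps large exports out of the LLM context: the client fetches and processes the file locally instead of pasting it into the conversation.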
Run the server to provide tools to MCP clients:
```bash
# Start the MCP server
comet-mcp
```

The server will:
- Initialize Comet ML session
- Register all available tools
- Listen for MCP client connections via stdio
Create a configuration for your AI system. For example:
Local Installation:
```json
{
  "servers": [
    {
      "name": "comet-mcp",
      "description": "Comet ML MCP server for experiment management",
      "command": "comet-mcp",
      "env": {
        "COMET_API_KEY": "${COMET_API_KEY}"
      }
    }
  ]
}
```

Docker Installation (Alternative):
```json
{
  "mcpServers": {
    "comet-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "COMET_API_KEY",
        "-e",
        "COMET_WORKSPACE",
        "comet-mcp",
        "comet-mcp",
        "--transport",
        "stdio"
      ],
      "env": {
        "COMET_API_KEY": "your_api_key_here",
        "COMET_WORKSPACE": "your_workspace_name"
      }
    }
  }
}
```

comet-mcp supports "stdio" and "sse" transport modes.
```
usage: comet-mcp [-h] [--transport {stdio,sse}] [--host HOST] [--port PORT]

Comet ML MCP Server

options:
  -h, --help            show this help message and exit
  --transport {stdio,sse}
                        Transport method to use (default: stdio)
  --host HOST           Host for SSE transport (default: localhost)
  --port PORT           Port for SSE transport (default: 8000)
```
The Comet MCP server includes built-in OpenTelemetry instrumentation for distributed tracing and structured logging. This provides visibility into server operations, tool calls, and Comet ML API interactions.
- Distributed Tracing: Track requests across server operations, tool calls, and API interactions
- Structured Logging: Capture detailed log events with context
- Dual Export: Export telemetry data to both files and Opik (Comet's observability platform)
- Low Overhead: Minimal performance impact with async-friendly instrumentation
Telemetry is enabled by default but can be configured via environment variables.
```bash
# Enable/disable telemetry (default: true)
export OTEL_ENABLED=true

# Service name (default: comet-mcp)
export OTEL_SERVICE_NAME=comet-mcp

# Service version (default: 1.2.0)
export OTEL_SERVICE_VERSION=1.2.0
```

Export traces and logs to local files in JSON Lines format:
```bash
# Path for trace export file (default: traces.jsonl, empty to disable)
export OTEL_TRACES_FILE=traces.jsonl

# Path for log export file (default: logs.jsonl, empty to disable)
export OTEL_LOGS_FILE=logs.jsonl
```

File Format:
- Traces: OTLP JSON format, one span per line
- Logs: Structured JSON format, one log record per line
- Files are append-only and can be rotated externally
Example: Reading trace files:
```python
import json

with open("traces.jsonl", "r") as f:
    for line in f:
        span = json.loads(line)
        duration_ns = span["end_time_unix_nano"] - span["start_time_unix_nano"]
        print(f"Span: {span['name']}, Duration: {duration_ns} ns")
```

Export traces and logs to Opik (Comet's observability platform) for cloud-based observability.
Option 1: Using OTLP Environment Variables

```bash
# Opik endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT="https://www.comet.com/opik/api/v1/private/otel"

# Headers (comma-separated key=value pairs)
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=your-api-key,projectName=your-project,Comet-Workspace=your-workspace"
```

Option 2: Using Individual Variables
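If you assemble that header string programmatically, it is just comma-joined `key=value` pairs. A minimal sketch (all values are placeholders):

```python
# Minimal sketch: building the comma-separated key=value string that
# OTEL_EXPORTER_OTLP_HEADERS expects. All values below are placeholders.
headers = {
    "Authorization": "your-api-key",
    "projectName": "your-project",
    "Comet-Workspace": "your-workspace",
}
header_str = ",".join(f"{key}={value}" for key, value in headers.items())
print(header_str)
```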
```bash
# Opik endpoint (defaults to Comet Cloud if not set)
export OPIK_ENDPOINT="https://www.comet.com/opik/api/v1/private/otel"

# Opik API key
export OPIK_API_KEY=your-api-key

# Opik project name
export OPIK_PROJECT_NAME=your-project

# Comet workspace name
export OPIK_WORKSPACE=your-workspace
```

For Self-Hosted Opik:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:5173/api/v1/private/otel"
```

For Enterprise Deployment:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<comet-deployment-url>/opik/api/v1/private/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=your-api-key,projectName=your-project,Comet-Workspace=your-workspace"
```

File-only export:
```bash
export OTEL_TRACES_FILE=traces.jsonl
export OTEL_LOGS_FILE=logs.jsonl
# Opik export disabled (no endpoint configured)
```

Opik-only export:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://www.comet.com/opik/api/v1/private/otel"
export OPIK_API_KEY=your-api-key
export OPIK_PROJECT_NAME=your-project
export OPIK_WORKSPACE=your-workspace

# File export disabled (empty file paths)
export OTEL_TRACES_FILE=""
export OTEL_LOGS_FILE=""
```

Both file and Opik export:
```bash
# File export
export OTEL_TRACES_FILE=traces.jsonl
export OTEL_LOGS_FILE=logs.jsonl

# Opik export
export OTEL_EXPORTER_OTLP_ENDPOINT="https://www.comet.com/opik/api/v1/private/otel"
export OPIK_API_KEY=your-api-key
export OPIK_PROJECT_NAME=your-project
export OPIK_WORKSPACE=your-workspace
```

The following operations are automatically instrumented:
- Server Lifecycle: Startup, shutdown, session initialization
- Tool Operations: All MCP tool calls (`list_tools`, `call_tool`, `list_resources`, `read_resource`)
- Comet ML API Calls: All tool functions that interact with the Comet ML API
- Cache Operations: Cache hits, misses, and writes
- Session Management: Session initialization and API access
In Opik:
- Navigate to your Opik project
- Open the Traces view
- Filter by service name: `comet-mcp`
- Explore trace spans and their relationships
From Files:
- Use tools like `jq` to parse JSON Lines files: `cat traces.jsonl | jq '.name, .attributes'`
- Import into analysis tools that support OTLP JSON format
- Use log aggregation tools for log files
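The JSON Lines layout is also easy to post-process in Python. A sketch that tallies spans by name (the records below are synthetic stand-ins; real spans carry full OTLP fields such as timestamps, attributes, and trace/span IDs):

```python
import json
from collections import Counter

# Synthetic stand-ins for lines read from traces.jsonl; real spans
# include timestamps, attributes, and IDs in OTLP JSON format.
lines = [
    '{"name": "call_tool"}',
    '{"name": "call_tool"}',
    '{"name": "list_tools"}',
]

counts = Counter(json.loads(line)["name"] for line in lines)
print(counts.most_common())  # [('call_tool', 2), ('list_tools', 1)]
```

Replace the `lines` list with `open("traces.jsonl")` to run this against a real export.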
Telemetry not appearing:
- Check that `OTEL_ENABLED=true` (or not set; it defaults to true)
- Verify file paths are writable (for file export)
- Check network connectivity (for Opik export)
- Review server logs for telemetry initialization messages
Opik export errors:
- Verify API key and endpoint are correct
- Check that project name and workspace match your Opik configuration
- Ensure you're using the HTTP endpoint (not gRPC)
- Network errors are logged but don't crash the server
For more information about Opik, see the Opik OpenTelemetry documentation.
For complete details on testing this (or any MCP server) see examples/README.
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: GitHub Repository
- Issues: GitHub Issues
- Comet ML: Comet ML Documentation