The OpenMLE CLI is an open source AI Engineer Agent that runs in your terminal, built on top of LangChain's DeepAgents and DeepAgents_cli.
Key Features:
- Built-in Skills: Ships with a comprehensive set of built-in AI skills (problem framing, data engineering, ML, DL, agents, evaluation, MLOps, safety governance)
- Cross-Platform: Runs on Linux, macOS, and Windows
- Built-in Tools: File operations (read, write, edit, glob, grep), shell commands, web search, and subagent delegation
- Customizable Skills: Add domain-specific capabilities through a progressive disclosure skill system
- Persistent Memory: Agent remembers your preferences, coding style, and project context across sessions
- Project-Aware: Automatically detects project roots and loads project-specific configurations
open-mle is a Python package that can be installed via pip or uv.
Install via pip:
pip install open-mle # It is recommended to install inside a virtual environment

NOTE: Some issues were noticed with conda-based virtual environments: API keys set in a .env file were not recognized. If you face similar issues, use a plain venv or the uv installation instead.
Or using uv (recommended):
# Create a virtual environment
uv venv
# Install the package
uv pip install open-mle

Run the agent in your terminal:
open-mle

Get help:
open-mle help

Common options:
# Use a specific agent configuration
open-mle --agent mybot
# Auto-approve tool usage (skip human-in-the-loop prompts)
open-mle --auto-approve
# Execute code in a remote sandbox
open-mle --sandbox modal # or runloop, daytona
open-mle --sandbox-id dbx_123 # reuse existing sandbox

Type naturally as you would in a chat interface. The agent will use its built-in tools, skills, and memory to help you with tasks.
The agent comes with the following built-in tools (always available without configuration):
| Tool | Description |
|---|---|
| ls | List files and directories |
| read_file | Read contents of a file |
| write_file | Create or overwrite a file |
| edit_file | Make targeted edits to existing files |
| glob | Find files matching a pattern (e.g., **/*.py) |
| grep | Search for text patterns across files |
| shell | Execute shell commands (local mode) |
| execute | Execute commands in remote sandbox (sandbox mode) |
| web_search | Search the web using Tavily API |
| fetch_url | Fetch and convert web pages to markdown |
| task | Delegate work to subagents for parallel execution |
| write_todos | Create and manage task lists for complex work |
Warning
Human-in-the-Loop (HITL) Approval Required
Potentially destructive operations require user approval before execution:
- File operations: write_file, edit_file
- Command execution: shell, execute
- External requests: web_search, fetch_url
- Delegation: task (subagents)
Each operation will prompt for approval showing the action details. Use --auto-approve to skip prompts:
open-mle --auto-approve

Each agent has its own configuration directory at ~/.openmle/<agent_name>/; the default agent is named agent.
# List all configured agents
open-mle list
# Create a new agent
open-mle create <agent_name>

Set these in your .env file or export them:
GOOGLE_API_KEY = 'your-api-key'
OPENAI_API_KEY = 'your-api-key'
ANTHROPIC_API_KEY = 'your-api-key'
TAVILY_API_KEY = 'your-api-key'

If you want to use a specific model (e.g., gpt-5-mini), set OPENAI_API_KEY along with the OPENAI_MODEL name.
As an add-on, you can also use your Azure deployment models by setting:
AZURE_END_POINT='your-azure-endpoint' # Example: https://server-dev.azure.com
AZURE_API_KEY='your-azure-api-key' # Example: AVrp5.....
AZURE_VERSION="your-version" # Example: 2025-12-01-preview
AZURE_DEPLOYMENT="your-model-name" # Example: gpt-4.1-mini

The CLI supports separate LangSmith project configuration for agent tracing vs user code tracing:
Agent Tracing - Traces openMLE operations (tool calls, agent decisions):
export DEEPAGENTS_LANGSMITH_PROJECT="my-agent-project"

User Code Tracing - Traces code executed via shell commands:
export LANGSMITH_PROJECT="my-user-code-project"

Complete Setup Example:
# Enable LangSmith tracing
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="your-api-key"
# Configure separate projects
export DEEPAGENTS_LANGSMITH_PROJECT="agent-traces"
export LANGSMITH_PROJECT="user-code-traces"
# Run open-mle
open-mle

When both are configured, the CLI displays:
✓ LangSmith tracing enabled: OpenMLE → 'agent-traces'
  User code (shell) → 'user-code-traces'
Why separate projects?
- Keep agent operations separate from your application code traces
- Easier debugging by isolating agent vs user code behavior
- Different retention policies or access controls per project
Backwards Compatibility:
If DEEPAGENTS_LANGSMITH_PROJECT is not set, both agent and user code trace to the same project specified by LANGSMITH_PROJECT.
NOTE: Built-in skills can NOT be overwritten.
There are two primary ways to customize any agent: memory and skills.
Each agent has its own global configuration directory at ~/.openmle/<agent_name>/:
~/.openmle/<agent_name>/
├── agent.md              # Auto-loaded global personality/style
└── skills/               # Auto-loaded agent-specific skills
    ├── web-research/
    │   └── SKILL.md
    └── langgraph-docs/
        └── SKILL.md
Projects can extend the global configuration with project-specific instructions and skills:
my-project/
├── .git/
└── .openmle/
    ├── agent.md          # Project-specific instructions
    └── skills/           # Project-specific skills
        └── custom-tool/
            └── SKILL.md
The CLI automatically detects project roots (via .git) and loads:
- Project-specific agent.md from [project-root]/.openmle/agent.md
- Project-specific skills from [project-root]/.openmle/skills/
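The project layout above can also be scaffolded by hand. A minimal sketch (paths as documented; the agent.md contents are purely illustrative):

```shell
# Scaffold a project-level .openmle directory by hand.
# The agent.md contents below are only an example.
mkdir -p .openmle/skills
cat > .openmle/agent.md <<'EOF'
# Project instructions
- This service uses FastAPI; prefer async endpoints.
- Run tests with `pytest -q` before proposing changes.
EOF
```

Commit this directory to version control to share the configuration with your team.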
Both global and project configurations are loaded together, allowing you to:
- Keep general coding style/preferences in global agent.md
- Add project-specific context, conventions, or guidelines in project agent.md
- Share project-specific skills with your team (committed to version control)
- Override global skills with project-specific versions (when skill names match)
NOTE: Global and project skills cannot share names with built-in skills.
agent.md files provide persistent memory that is always loaded at session start. Both global and project-level agent.md files are loaded together and injected into the system prompt.
Global agent.md (~/.openmle/agent/agent.md)
- Your personality, style, and universal coding preferences
- General tone and communication style
- Universal coding preferences (formatting, type hints, etc.)
- Tool usage patterns that apply everywhere
- Workflows and methodologies that don't change per-project
Project agent.md (.openmle/agent.md in project root)
- Project-specific context and conventions
- Project architecture and design patterns
- Coding conventions specific to this codebase
- Testing strategies and deployment processes
- Team guidelines and project structure
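As a sketch, a global agent.md capturing universal preferences might look like the following (the path is from the docs; the preferences listed are made up):

```shell
# Illustrative global agent.md for the default agent ("agent").
# The listed preferences are hypothetical examples.
mkdir -p ~/.openmle/agent
cat > ~/.openmle/agent/agent.md <<'EOF'
# Preferences
- Be concise; show diffs instead of full files.
- Python: use type hints and f-strings.
- Always propose a test alongside a fix.
EOF
```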
How it works (AgentMemoryMiddleware):
- Loads both files at startup and injects them into the system prompt as <user_memory> and <project_memory>
- Appends memory management instructions on when/how to update memory files
When the agent updates memory:
- IMMEDIATELY when you describe how it should behave
- IMMEDIATELY when you give feedback on its work
- When you explicitly ask it to remember something
- When patterns or preferences emerge from your interactions
The agent uses edit_file to update memories when learning preferences or receiving feedback.
Beyond agent.md, you can create additional memory files in .openmle/ for structured project knowledge. These work similarly to Anthropic's Memory Tool. The agent receives detailed instructions on when to read and update these files.
How it works:
- Create markdown files in [project-root]/.openmle/ (e.g., api-design.md, architecture.md, deployment.md)
- The agent checks these files when relevant to a task (not auto-loaded into every prompt)
- The agent uses write_file or edit_file to create/update memory files when learning project patterns
Example workflow:
# Agent discovers deployment pattern and saves it
.openmle/
├── agent.md         # Always loaded (personality + conventions)
├── architecture.md  # Loaded on-demand (system design)
└── deployment.md    # Loaded on-demand (deploy procedures)

When the agent reads memory files:
- At the start of new sessions (checks what files exist)
- Before answering questions about project-specific topics
- When you reference past work or patterns
- When performing tasks that match saved knowledge domains
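For instance, you (or the agent, via write_file) might seed an on-demand memory file such as deployment.md; the deployment details below are hypothetical:

```shell
# Seed an on-demand memory file (filename from the example tree above;
# the deployment details are hypothetical).
mkdir -p .openmle
cat > .openmle/deployment.md <<'EOF'
# Deployment
- Staging deploys run automatically on merge to main.
- Production deploys require manual approval.
EOF
```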
Benefits:
- Persistent learning: Agent remembers project patterns across sessions
- Team collaboration: Share project knowledge through version control
- Contextual retrieval: Load only relevant memory when needed (reduces token usage)
- Structured knowledge: Organize information by domain (APIs, architecture, deployment, etc.)
Skills are reusable agent capabilities that provide specialized workflows and domain knowledge. Example skills are provided in the examples/skills/ directory:
- web-research - Structured web research workflow with planning, parallel delegation, and synthesis
- langgraph-docs - LangGraph documentation lookup and guidance
To use an example skill globally with the default agent, copy it to the agent's global or project-level skills directory:
mkdir -p ~/.openmle/agent/skills
cp -r examples/skills/web-research ~/.openmle/agent/skills/

To manage skills:
# List all skills (built-in + user + project)
open-mle skills list
# Create a new global(user) skill from template
open-mle skills create my-skill
# Create a new project skill (requires .git to be present)
open-mle skills create my-tool --project
# List only project skills
open-mle skills list --project
# View detailed information about a skill
open-mle skills info web-research
# View info for a project skill only
open-mle skills info my-tool --project

If you try to run:
open-mle skills create web-research

It will error out, because web-research is a built-in skill and built-in skills cannot be overwritten.
To use a skill (e.g., problem-framing), just type a request relevant to it and the skill will be applied automatically.
$ open-mle
$ "Analyze the data and frame an ML problem"

Skills follow Anthropic's progressive disclosure pattern - the agent knows skills exist but only reads full instructions when needed.
- At startup - SkillsMiddleware scans the ~/.openmle/agent/skills/ and .openmle/skills/ directories
- Parse metadata - Extracts YAML frontmatter (name + description) from each SKILL.md file
- Inject into prompt - Adds the skill list with descriptions to the system prompt: "Available Skills: web-research - Use for web research tasks..."
- Progressive loading - Agent reads full SKILL.md content with read_file only when a task matches the skill's description
- Execute workflow - Agent follows the step-by-step instructions in the skill file
Access the initial test report Here
Test by yourself and Report Issues
Discussions Tab is open for Ideas and QA
# From libs/openmle-cli directory
uv run open-mle
# Or install in editable mode
uv pip install -e .
open-mle

- UI changes → Edit ui.py or input.py
- Add new tools → Edit tools.py
- Change execution flow → Edit execution.py
- Add commands → Edit commands.py
- Agent configuration → Edit agent.py
- Skills system → Edit skills/ modules
- Constants/colors → Edit config.py
- FastAPI - Critical for ML serving
- Hugging Face Transformers - Essential for modern NLP
- MLflow - Industry standard for tracking
- OpenSearch - RAG implementation
- Optuna - Hyperparameter optimization
- SHAP - Model explainability
- XGBoost - Most popular boosting library
- Docker - Containerization basics
- Apache Airflow - Workflow orchestration (data pipelines, retraining)
- DVC - Data versioning and pipeline management