
ChandraMohanBusam/ai-deployment-agent


AI Deployment Agent

A production-grade AI deployment agent that replaces a 14-step, 20-30 minute manual deployment process with a single natural language command completing in under 2 minutes.

Use this as a template to build your own AI-powered deployment pipeline. The architecture is generic and adaptable to any server infrastructure, build system, and notification stack.

/deploy latest
/deploy 2026.3.0.71
/deploy yesterday's build
/deploy the build from Monday

What This Does

The agent accepts natural language deployment commands from two interfaces: a Slack slash command for team use, and a Claude MCP interface for conversational deployment directly from claude.ai. Both interfaces share the same underlying deployment logic.

A 3-layer hybrid architecture resolves commands to exact build versions before passing them to a LangChain ReAct agent that executes the deployment steps in sequence across all configured servers.


Architecture

Dual Interface Design

Slack /deploy command          Claude (claude.ai)
        |                              |
        v                              v
main.py (FastAPI, port 8000)   mcp_server.py (FastAPI, port 8001)
        |                              |
        +----------+  +----------------+
                   |  |
                   v  v
              master_handler.py
              (3-layer version resolver + deploy orchestrator)
                   |
         +---------+---------+
         |         |         |
    Layer 1    Layer 2    Layer 3
    Python     GPT-4o    LangChain
    regex      NLP       ReAct Agent
    (free)   (1-2c)    (3-8c)
                   |
         +---------+---------+
         |         |         |
   DownloadBuild  Deploy  RestartAll
   (Azure Blob)  (SSH)    (Paramiko)

3-Layer Version Resolution

Each layer only activates when the layer above cannot resolve the input. This keeps LLM cost near zero for common inputs.

Layer                     Handles                                                    Cost
Layer 1: Python regex     deploy 2026.3.0.71, deploy latest, deploy yesterday        Free
Layer 2: GPT-4o NLP       deploy Monday's build, deploy the April 19th build         ~1-2 cents
Layer 3: LangChain ReAct  Deployment execution: download, transfer, deploy, restart  ~3-8 cents

MCP Tool Set (6 Tools)

Tool                   Purpose
resolve_build_version  Resolves natural language to an exact version
deploy_application     Executes deployment after user confirmation
get_latest_build       Queries Azure DevOps for the current latest build
check_service_status   Checks service status on all configured servers
get_deployment_log     Reads the audit log to answer history questions
send_deployment_email  Sends a notification email after user confirmation

Tech Stack

Component                  Technology
Agent framework            LangChain (ReAct agent)
Alternative orchestration  LangGraph (StateGraph; reference file included)
LLM                        GPT-4o (temperature=0)
MCP server                 FastMCP (mcp>=1.25)
Slack interface            FastAPI + Slack Bolt
SSH / SCP                  Paramiko
Build pipeline             Azure DevOps REST API
Build storage              Azure Blob Storage
Notifications              Slack, Microsoft Teams, SMTP email

Project Structure

ai-deployment-agent/
  config.py                 Centralized environment variable config
  agent_config.json         Policy config: human-in-loop, email, notifications
  agent_config.py           Config reader with safe defaults

  layer1_python.py          Pure Python fast path (regex + keywords)
  layer2_gpt_parser.py      GPT-4o NLP parser for complex date inputs
  ado_api.py                Azure DevOps REST API helpers
  deployment_agent.py       LangChain ReAct agent with 4 deployment tools
  master_handler.py         3-layer dispatcher and deploy orchestrator
  notify.py                 Slack, Teams, and email notifications

  main.py                   FastAPI + Slack Bolt (port 8000)
  mcp_server.py             FastMCP server with 6 tools (port 8001)

  langgraph_deployment.py   LangGraph reference alternative

  requirements.txt
  Dockerfile
  .env.example
  logs/                     Audit trail of all deployments

Quick Start

1. Clone and install

git clone https://github.com/your-username/ai-deployment-agent
cd ai-deployment-agent
pip install -r requirements.txt

2. Configure environment

cp .env.example .env

Edit .env with your values. The infrastructure variables are fully configurable:

APP_SERVERS=app-server-01,app-server-02,app-server-03
APP_SERVICE_NAME=app-platform.service
INSTALL_PATH=/sites/app/installs
BUILD_PREFIX=your-build-prefix
BLOB_CONTAINER=builds

3. Configure agent behavior

Edit agent_config.json to control human-in-the-loop gates and email behavior:

{
  "human_in_loop": {
    "confirm_before_deploy": true,
    "confirm_before_email": true
  },
  "email": {
    "auto_send_on_success": false,
    "auto_send_on_failure": true,
    "default_recipients": "team@your-company.com",
    "send_from_slack": true,
    "send_from_claude": true
  }
}
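agent_config.py is described as a "config reader with safe defaults." A plausible sketch of that pattern, reading the JSON above (the function name and merge strategy are assumptions, not the actual implementation):

```python
import json
from pathlib import Path

# Safe defaults mirroring the shipped agent_config.json
DEFAULTS = {
    "human_in_loop": {"confirm_before_deploy": True, "confirm_before_email": True},
    "email": {"auto_send_on_success": False, "auto_send_on_failure": True},
}

def load_agent_config(path: str = "agent_config.json") -> dict:
    """Merge agent_config.json over safe defaults; a missing file or key falls back."""
    cfg = {k: dict(v) for k, v in DEFAULTS.items()}      # copy defaults per section
    p = Path(path)
    if p.exists():
        try:
            user = json.loads(p.read_text())
        except json.JSONDecodeError:
            return cfg                                    # malformed file: keep defaults
        for section, values in user.items():
            cfg.setdefault(section, {}).update(values)
    return cfg
```

With this shape, a partial config file that only sets `"confirm_before_deploy": false` still gets the default email policy, so the agent never crashes on a sparse config.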

4. Run

# Slack interface (port 8000)
uvicorn main:api --host 0.0.0.0 --port 8000

# Claude MCP interface (port 8001)
uvicorn mcp_server:mcp --host 0.0.0.0 --port 8001

For development, expose both ports publicly using ngrok:

ngrok http 8000   # for Slack
ngrok http 8001   # for Claude MCP

Slack Interface

Setup

  1. Create a Slack app at api.slack.com/apps
  2. Add a /deploy slash command pointing to https://your-domain.com/slack/events
  3. Add bot scopes: chat:write, commands
  4. Copy Bot Token and Signing Secret to your .env
  5. Set ALLOWED_SLACK_USERS to the Slack user IDs authorized to deploy

Usage

/deploy latest
/deploy 2026.3.0.71
/deploy yesterday
/deploy the build from Monday

Flow

User types /deploy latest
     |
     v
main.py acknowledges within 3 seconds (Slack requirement)
     |
     v
Background thread: resolve_build_version("latest")
     |
     v
Layer 1: keyword match "latest" -- calls ADO API
     |
     v
ADO returns 2026.3.0.71
     |
     v
LangChain ReAct agent runs:
  DownloadBuild(2026.3.0.71)
  TransferToServer(2026.3.0.71)
  DeployOnServer(2026.3.0.71)
  RestartServices()
     |
     v
Slack: "Deployment complete. Version 2026.3.0.71 is live. Took 91s."
Teams: webhook notification
Email: auto-send if auto_send_on_success=true in agent_config.json
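The "acknowledges within 3 seconds" step in the flow above is the standard Slack slash-command pattern: reply immediately, then do the slow work on a background thread. A generic stdlib sketch of that pattern (hypothetical names; the real main.py uses FastAPI + Slack Bolt, whose ack mechanism differs in detail):

```python
import threading

def handle_deploy_command(text: str, respond, run_deployment) -> None:
    """Ack Slack within its 3-second window, then deploy on a background thread."""
    respond(f"Resolving '{text}'... I'll post results here when done.")  # immediate ack

    def worker():
        result = run_deployment(text)   # slow path: resolve, download, deploy, restart
        respond(result)                 # posted later via response_url / chat.postMessage

    threading.Thread(target=worker, daemon=True).start()
```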

Claude MCP Interface

Setup

  1. Start the MCP server on port 8001
  2. Make it publicly accessible (ngrok for dev, any cloud host for prod)
  3. In claude.ai: Settings > Integrations > Add MCP server
  4. Name: AI Deployment Agent, URL: https://your-domain.com/sse

Usage

Type naturally in claude.ai:

"deploy the latest build"
"deploy 2026.3.0.71"
"what was deployed last Tuesday?"
"is the service running on all servers?"
"send an email to the team about tonight's deployment"

Conversation Flow (default config)

You:     "deploy the latest build"

Claude:  [calls resolve_build_version]
         "I found build 2026.3.0.71. Ready to deploy to all servers. Confirm?"

You:     "yes go ahead"

Claude:  [calls deploy_application("2026.3.0.71")]
         "Deployment complete. 2026.3.0.71 is live. Took 91 seconds.
          Should I send an email notification?"

You:     "yes, CC the QA lead at qa@company.com"

Claude:  [calls send_deployment_email(...)]
         "Email sent to team@company.com, CC qa@company.com."

Human-in-the-Loop

The confirmation gate is controlled by agent_config.json, not hardcoded. Set confirm_before_deploy: false to skip the confirmation step (useful for dev environments or CI/CD pipelines).

confirm_before_deploy  confirm_before_email  Claude behavior
true                   true                  Asks before deploy, asks before email (default)
true                   false                 Asks before deploy, emails automatically
false                  true                  Deploys immediately, asks before email
false                  false                 Fully automatic, no confirmations

Email Notifications

Email behavior differs between the two interfaces:

From Claude (MCP): Controlled by confirm_before_email. When true, Claude asks before sending. When false, sends automatically after deployment.

From Slack: Fully automatic, controlled by auto_send_on_success and auto_send_on_failure in agent_config.json. No interactive confirmation is possible in the Slack flow.

Failure emails always send by default (auto_send_on_failure: true). Failures always need attention.
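The SMTP path in notify.py can be sketched with the standard library. Only message construction is shown; sending is a standard `smtplib.SMTP.send_message()` call. The subject and body formats here are illustrative assumptions, as is the sender address:

```python
from email.message import EmailMessage

def build_deployment_email(version: str, success: bool, elapsed_s: float,
                           recipients: str,
                           sender: str = "deploy-agent@example.com") -> EmailMessage:
    """Build the deployment notification; the caller sends it via smtplib."""
    status = "SUCCESS" if success else "FAILED"
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipients                  # e.g. "team@your-company.com"
    msg["Subject"] = f"[Deploy {status}] version {version}"
    msg.set_content(f"Deployment of {version} finished: {status} in {elapsed_s:.1f}s.")
    return msg
```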


Deployment Audit Log

All deployments are logged to logs/deployments.log with source tagging:

2026-04-24 10:15:32 [INFO] [SLACK]  DEPLOY_SUCCESS | version=2026.3.0.64 | elapsed=91.2s | requested_by=U0123456789
2026-04-24 14:22:08 [INFO] [CLAUDE] DEPLOY_SUCCESS | version=2026.3.0.71 | elapsed=94.1s
2026-04-24 15:01:44 [ERROR][SLACK]  DEPLOY_FAILED  | version=2026.3.0.72 | error=SSH timeout

Claude can query this log via the get_deployment_log tool to answer natural language history questions.


LangGraph Alternative

langgraph_deployment.py contains a reference implementation using LangGraph StateGraph instead of LangChain ReAct. It is not wired into the main flow but demonstrates the architectural alternative.

Use LangGraph when step order is fixed and you want guaranteed sequential execution without LLM reasoning overhead. Use LangChain ReAct (current approach) when you are learning agent concepts or when tool selection benefits from LLM reasoning.


Architectural Evolution: Standalone Email MCP Server

The current implementation includes email as one of the six tools in mcp_server.py (Approach A). This is the correct design for a single-agent system.

As the system grows to multiple agents (deployment agent, incident response agent, monitoring alert agent), email becomes a shared concern. The correct evolution is a dedicated Email MCP Server that all agents connect to as a dependency. This is the MCP composition pattern: multiple single-responsibility servers orchestrated by the agent layer.

Deployment Agent  ----+
Incident Agent    ----+----> Email MCP Server (shared service)
Monitoring Agent  ----+

Approach A (email as an in-process tool) is implemented here; Approach B (a standalone Email MCP Server) is the designed next step.


Docker

docker build -t ai-deployment-agent .
docker run -d \
  --env-file .env \
  -v $(pwd)/agent_config.json:/app/agent_config.json \
  -v $(pwd)/logs:/app/logs \
  -p 8000:8000 \
  -p 8001:8001 \
  ai-deployment-agent

Adapting to Your Infrastructure

All infrastructure details are environment variables in .env. To adapt this to your environment:

  1. Set APP_SERVERS to your server hostnames
  2. Set APP_SERVICE_NAME to your systemd service name
  3. Set INSTALL_PATH, OPTS_PATH, SITE_ROOT, APP_LINK to your deployment paths
  4. Set BUILD_PREFIX to match your build artifact naming convention
  5. Set BLOB_CONTAINER to your Azure Blob container name
  6. Configure ADO variables to point to your pipeline

No code changes required for infrastructure customization.
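config.py's env-driven approach can be sketched like this. The variable names come from the .env example above; the function name and defaults are hypothetical:

```python
import os

def load_infra_config() -> dict:
    """Read infrastructure settings from the environment; APP_SERVERS is comma-separated."""
    return {
        "servers": [s.strip() for s in os.environ.get("APP_SERVERS", "").split(",") if s.strip()],
        "service_name": os.environ.get("APP_SERVICE_NAME", "app-platform.service"),
        "install_path": os.environ.get("INSTALL_PATH", "/sites/app/installs"),
        "build_prefix": os.environ.get("BUILD_PREFIX", ""),
        "blob_container": os.environ.get("BLOB_CONTAINER", "builds"),
    }
```

Splitting APP_SERVERS into a list at load time is what lets every tool (deploy, restart, status check) iterate over servers without hardcoding hostnames anywhere.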


License

MIT License. Use freely, adapt to your needs, contributions welcome.
