elizaOS A2A (Agent-to-Agent) Server Examples

This directory contains A2A (Agent-to-Agent) server implementations that expose an elizaOS agent as an HTTP server. This enables agent-to-agent communication, webhooks, and integration with other AI systems.

These examples use the real elizaOS runtime:

  • If OPENAI_API_KEY is set, the server will use an OpenAI-backed model (and SQL where supported).
  • If OPENAI_API_KEY is not set, the server runs in a deterministic “ELIZA classic” mode (no API keys required), backed by @elizaos/plugin-inmemorydb for ephemeral multi-turn state.

Available Examples

Framework    Language     Directory
Express.js   TypeScript   .

What is A2A?

A2A (Agent-to-Agent) is a pattern where AI agents communicate with each other over HTTP. Each agent exposes a simple API that allows:

  • Sending messages to the agent
  • Receiving responses
  • Querying agent capabilities
  • Multi-turn conversations with session management

Common API Endpoints

All implementations expose the same REST API:

GET /

Returns information about the agent.

curl http://localhost:3000/

Response:

{
  "name": "Eliza",
  "bio": "A helpful AI assistant",
  "version": "2.0.0-beta.0",
  "capabilities": ["chat", "reasoning", "tool-use"],
  "powered_by": "elizaOS"
}

GET /health

Health check endpoint.

curl http://localhost:3000/health

POST /chat

Send a message to the agent and receive a response.

curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?", "sessionId": "user-123"}'

Response:

{
  "response": "Hello! I'm doing well, thank you for asking. How can I help you today?",
  "agentId": "eliza-agent-id",
  "sessionId": "user-123",
  "timestamp": "2024-01-10T12:00:00Z"
}
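
The /chat request and response shapes above can be wrapped in a small TypeScript client. This is a minimal sketch: the endpoint and field names come from the examples in this README, while the helper names (buildChatRequest, chat) and the ChatResponse interface are illustrative, not part of a published A2A client API.

```typescript
interface ChatRequest {
  message: string;
  sessionId: string;
}

// Field names mirror the sample response above; not a formal schema.
interface ChatResponse {
  response: string;
  agentId: string;
  sessionId: string;
  timestamp: string;
}

// Builds the JSON body for POST /chat. Reusing one sessionId across
// calls keeps the conversation's multi-turn state on the server.
function buildChatRequest(message: string, sessionId: string): ChatRequest {
  return { message, sessionId };
}

// Sends the request; assumes a server from this directory is listening
// at baseUrl (e.g. http://localhost:3000).
async function chat(baseUrl: string, req: ChatRequest): Promise<ChatResponse> {
  const res = await fetch(`${baseUrl}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return (await res.json()) as ChatResponse;
}
```

Keeping the body-building step separate from the network call makes the client easy to test without a running server.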

POST /chat/stream

Stream a response from the agent as Server-Sent Events (SSE).

curl -X POST http://localhost:3000/chat/stream \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"message": "Tell me a story"}'
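
On the consumer side, the stream arrives as standard SSE `data:` lines. A small parser can extract the payloads from a raw chunk; this sketch assumes the standard SSE wire format, and treating `[DONE]` as an end-of-stream sentinel is an assumption, not something this README specifies.

```typescript
// Extracts the payloads of "data:" lines from a raw SSE chunk.
// Blank lines delimit events; "[DONE]" is assumed to be a sentinel.
function parseSseData(raw: string): string[] {
  return raw
    .split(/\r?\n/)
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim())
    .filter((data) => data.length > 0 && data !== "[DONE]");
}
```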

Quick Start

cd examples/a2a
bun install
bun run start

Configuration

Optional: OpenAI API key (enables OpenAI-backed responses):

export OPENAI_API_KEY=your-key

Optional configuration:

  • PORT - Server port (default: 3000)
  • OPENAI_BASE_URL - Custom OpenAI-compatible endpoint
  • OPENAI_SMALL_MODEL - Model for quick responses
  • OPENAI_LARGE_MODEL - Model for complex responses
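
A full environment setup might look like the following; all values here are illustrative, and only the variable names come from the list above.

```shell
export PORT=8080
export OPENAI_API_KEY=your-key
export OPENAI_BASE_URL=https://api.openai.com/v1
export OPENAI_SMALL_MODEL=gpt-4o-mini
export OPENAI_LARGE_MODEL=gpt-4o
```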

Use Cases

  1. Multi-Agent Systems: Have multiple agents communicate with each other
  2. Webhooks: Receive events from external services and have the agent respond
  3. Chatbots: Deploy agents as backend services for chat applications
  4. API Integration: Integrate agents into existing API-based workflows
  5. Testing: Test agent behavior programmatically
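
The webhook use case boils down to translating an incoming external event into a /chat request. This sketch shows one way to do that; the WebhookEvent shape and the webhookToChatRequest helper are hypothetical, and only the message/sessionId body fields come from the API above.

```typescript
// Hypothetical shape of an incoming webhook event.
interface WebhookEvent {
  source: string;
  type: string;
  payload: Record<string, unknown>;
}

// Turns a webhook event into a /chat request body. Deriving the
// sessionId from the source gives each service its own conversation.
function webhookToChatRequest(event: WebhookEvent): { message: string; sessionId: string } {
  return {
    message: `Event ${event.type} from ${event.source}: ${JSON.stringify(event.payload)}`,
    sessionId: `webhook-${event.source}`,
  };
}
```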

Testing

cd examples/a2a
bun run test

Agent-to-Agent Communication Example

# Agent A calls Agent B
curl -X POST http://agent-b:3000/chat \
  -H "Content-Type: application/json" \
  -H "X-Agent-Id: agent-a" \
  -d '{
    "message": "Can you help me analyze this data?",
    "context": {"source": "agent-a", "task": "data-analysis"}
  }'
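
The curl call above can be sketched in TypeScript as well. The X-Agent-Id header and the message/context body fields mirror the example; buildAgentCall and the A2AMessage interface are illustrative names, and whether the server does anything with X-Agent-Id depends on the implementation.

```typescript
interface A2AMessage {
  message: string;
  context?: Record<string, string>;
}

// Builds the request options Agent A would use to POST to Agent B's
// /chat endpoint, identifying itself via the X-Agent-Id header.
function buildAgentCall(
  fromAgentId: string,
  msg: A2AMessage,
): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Agent-Id": fromAgentId,
    },
    body: JSON.stringify(msg),
  };
}
```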