# elizaOS Cloudflare Workers

Deploy elizaOS agents as serverless functions on Cloudflare Workers.

## Available Workers

| Worker     | Language   | Full Runtime | Notes                                   |
| ---------- | ---------- | ------------ | --------------------------------------- |
| TypeScript | TypeScript | Yes          | Uses full elizaOS runtime (recommended) |

## TypeScript Worker

The TypeScript worker uses the canonical elizaOS implementation pattern:

```typescript
import { AgentRuntime } from "@elizaos/core";
import { openaiPlugin } from "@elizaos/plugin-openai";

// Create the runtime with the character definition and model plugin
const runtime = new AgentRuntime({
  character,
  plugins: [openaiPlugin],
});
await runtime.initialize();

// Process messages through the message service
await runtime.messageService?.handleMessage(runtime, messageMemory, callback);
```
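The `callback` passed to `handleMessage` is how the agent's reply reaches your HTTP response. A minimal sketch of a reply-collecting callback, assuming only that the callback receives a content object with a `text` field (the exact elizaOS `Content` shape is an assumption here):

```typescript
// Hedged sketch: a callback that collects the agent's replies. The
// AgentContent type is an assumption standing in for elizaOS's Content.
type AgentContent = { text?: string };

function makeCollector() {
  const replies: string[] = [];
  // elizaOS-style callbacks are async; this one just records the reply text.
  const callback = async (content: AgentContent) => {
    if (content.text) replies.push(content.text);
  };
  return { replies, callback };
}
```

Inside a `/chat` handler, `replies.join("\n")` would become the response body once `handleMessage` resolves.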

## Deployment

```sh
# Install dependencies
bun install

# Configure environment
cp wrangler.toml.example wrangler.toml
# Edit wrangler.toml and add your OPENAI_API_KEY

# Deploy
wrangler deploy
```

## Local Development

```sh
wrangler dev
```

## API Endpoints

All workers expose the same REST API:

### GET /

Returns information about the agent.

### GET /health

Health check endpoint.

### POST /chat

Send a message and receive a response.

```sh
curl -X POST https://your-worker.workers.dev/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!"}'
```

### POST /chat/stream (TypeScript only)

Send a message and receive a streaming response.
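Taken together, the endpoints above amount to a small routing layer. A minimal sketch of that layer follows, with the elizaOS runtime stubbed out as a hypothetical `generateReply` helper (in the real worker, that call goes through the message service):

```typescript
// Hedged sketch of the Worker's routing layer. The real handlers call into
// the elizaOS runtime; here it is stubbed as a hypothetical generateReply.
async function generateReply(message: string): Promise<string> {
  return `You said: ${message}`; // placeholder for the message service
}

const json = (data: unknown) =>
  new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  });

export async function handleRequest(req: Request): Promise<Response> {
  const url = new URL(req.url);

  if (req.method === "GET" && url.pathname === "/") {
    return json({ agent: "Eliza", endpoints: ["/health", "/chat", "/chat/stream"] });
  }
  if (req.method === "GET" && url.pathname === "/health") {
    return json({ status: "ok" });
  }
  if (req.method === "POST" && url.pathname === "/chat") {
    const { message } = (await req.json()) as { message: string };
    return json({ response: await generateReply(message) });
  }
  if (req.method === "POST" && url.pathname === "/chat/stream") {
    const { message } = (await req.json()) as { message: string };
    const encoder = new TextEncoder();
    // Stream the reply word by word as a plain-text chunked response.
    const stream = new ReadableStream({
      async start(controller) {
        for (const word of (await generateReply(message)).split(" ")) {
          controller.enqueue(encoder.encode(word + " "));
        }
        controller.close();
      },
    });
    return new Response(stream, { headers: { "Content-Type": "text/plain" } });
  }
  return new Response("Not found", { status: 404 });
}
```

In a Worker this would be wired up as `export default { fetch: handleRequest }`.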

## Environment Variables

Configure these in your wrangler.toml:

```toml
[vars]
CHARACTER_NAME = "Eliza"
CHARACTER_BIO = "A helpful AI assistant"
```

Secrets are not stored in `wrangler.toml` (it has no `[[secrets]]` table). Set `OPENAI_API_KEY` with:

```sh
wrangler secret put OPENAI_API_KEY
```
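Both vars and secrets arrive on the Worker's `env` binding at request time. A minimal sketch of reading them (the `Env` interface and `buildCharacter` helper are illustrative, not part of the actual worker code):

```typescript
// Hedged sketch: reading configuration from the Worker's env binding.
// Env mirrors the vars/secrets configured above; fields may be absent.
interface Env {
  CHARACTER_NAME?: string;
  CHARACTER_BIO?: string;
  OPENAI_API_KEY?: string;
}

// Hypothetical helper that builds a character object with fallbacks.
function buildCharacter(env: Env) {
  return {
    name: env.CHARACTER_NAME ?? "Eliza",
    bio: env.CHARACTER_BIO ?? "A helpful AI assistant",
  };
}
```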

## Limitations

- No persistent storage (PGLite is not available in Workers)
- The runtime is initialized per request
- For persistent state, use Cloudflare Durable Objects
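As a sketch of the Durable Objects route, the binding would be declared in `wrangler.toml` (the `CONVERSATIONS` name and `ConversationState` class here are hypothetical):

```toml
# Hypothetical Durable Object binding for conversation state
[[durable_objects.bindings]]
name = "CONVERSATIONS"
class_name = "ConversationState"

[[migrations]]
tag = "v1"
new_classes = ["ConversationState"]
```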

## Production Recommendations

For production deployments:

  1. Use the TypeScript worker for the best elizaOS integration
  2. Use Cloudflare KV or Durable Objects for conversation state
  3. Set proper rate limits in your wrangler.toml
  4. Monitor with Cloudflare Analytics
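For recommendation 2, conversation state in KV amounts to reading a history, appending a turn, and writing it back. A minimal sketch, with the KV binding's `get`/`put` subset modeled by a `KVLike` interface and simulated with a `Map` so the example is self-contained (key names and shapes are illustrative):

```typescript
// Hedged sketch: conversation history in a KV-style store.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// In-memory stand-in for a Cloudflare KV namespace binding.
function memoryKV(): KVLike {
  const store = new Map<string, string>();
  return {
    async get(key) { return store.get(key) ?? null; },
    async put(key, value) { store.set(key, value); },
  };
}

// Append one turn to a room's history and return the updated history.
async function appendTurn(kv: KVLike, roomId: string, role: string, text: string) {
  const history: { role: string; text: string }[] =
    JSON.parse((await kv.get(`history:${roomId}`)) ?? "[]");
  history.push({ role, text });
  await kv.put(`history:${roomId}`, JSON.stringify(history));
  return history;
}
```

In production the `KVLike` parameter would be the actual KV namespace binding from `env`; note that KV is eventually consistent, which is why Durable Objects are the better fit for strictly ordered conversation state.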

## The Canonical Pattern

All workers should follow this pattern (where the full runtime is available):

```typescript
import {
  AgentRuntime,
  ChannelType,
  createMessageMemory,
} from "@elizaos/core";
import { openaiPlugin } from "@elizaos/plugin-openai";
import { v4 as uuidv4 } from "uuid";

// 1. Create runtime with plugins
const runtime = new AgentRuntime({
  character,
  plugins: [openaiPlugin],
});

// 2. Initialize
await runtime.initialize();

// 3. Ensure the user, room, and world exist before handling the message
await runtime.ensureConnection({
  entityId: userId,
  roomId,
  worldId,
  userName: "User",
  source: "cloudflare",
  channelId: "worker-chat",
  type: ChannelType.API,
});

// 4. Create message memory
const messageMemory = createMessageMemory({
  id: uuidv4(),
  entityId: userId,
  roomId,
  content: { text: message, source: "cloudflare_worker" },
});

// 5. Process through the message service
await runtime.messageService?.handleMessage(runtime, messageMemory, callback);
```