Deploy AI chat agents as serverless AWS Lambda functions. These examples show how to run an elizaOS agent as a stateless worker that processes chat messages via HTTP.
All handlers use the full elizaOS runtime with OpenAI as the LLM provider, providing the same capabilities as the chat demo examples.
```
┌──────────────┐     ┌─────────────────┐     ┌────────────────┐
│ Test Client  │────▶│   API Gateway   │────▶│     Lambda     │
│ (curl/node)  │◀────│   (HTTP API)    │◀────│   (elizaOS)    │
└──────────────┘     └─────────────────┘     └────────────────┘
                                                     │
                                                     ▼
                                             ┌────────────────┐
                                             │   OpenAI API   │
                                             └────────────────┘
```
Prerequisites:

- AWS CLI configured with credentials
- AWS SAM CLI
- Bun or Node.js 20+
- OpenAI API key
Set the required environment variables:

```bash
export OPENAI_API_KEY="your-openai-api-key"
export AWS_REGION="us-east-1"  # or your preferred region
```

To run and test locally:

```bash
cd examples/aws
bun install
bun run test # Automated tests
bun run start  # Local HTTP server on port 3000
```

To deploy with SAM:

```bash
cd examples/aws
bun install
sam build
sam deploy --guided --parameter-overrides OpenAIApiKey=$OPENAI_API_KEY
```

After deployment, SAM outputs your API endpoint URL. Test it:

```bash
curl -X POST https://YOUR_API_ID.execute-api.us-east-1.amazonaws.com/prod/chat \
-H 'Content-Type: application/json' \
-d '{"message": "Hello, Eliza!"}'
```

To exercise the deployed endpoint, you can also use the included test client:

```bash
cd examples/aws
bun install
bun run test-client.ts --endpoint https://YOUR_API_ID.execute-api.us-east-1.amazonaws.com/prod/chat
```
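Under the hood, the test client does something along these lines (a sketch; the real test-client.ts may differ — Node 18+ or Bun provides global `fetch`):

```typescript
// Replace with your deployed endpoint URL.
const ENDPOINT = "https://YOUR_API_ID.execute-api.us-east-1.amazonaws.com/prod/chat";

const res = await fetch(ENDPOINT, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hello, Eliza!" }),
});
const { response, conversationId } = await res.json();
console.log(`[${conversationId}] ${response}`);
```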
Project layout:

```
examples/aws/
├── README.md
├── template.yaml    # AWS SAM template
├── handler.ts       # Lambda handler (elizaOS runtime)
├── server-local.ts  # Local HTTP server (port 3000)
├── test-local.ts    # Automated local tests
├── test-client.ts   # HTTP test client
├── package.json
├── tsconfig.json
├── events/          # Sample invocation events
└── scripts/
```
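server-local.ts wraps the same Lambda handler in a plain HTTP server for local development. A rough sketch of the idea, assuming Bun and the handler shape sketched earlier (the event adaptation is simplified for illustration):

```typescript
// Rough sketch of the server-local.ts idea: serve the Lambda handler on
// localhost:3000. Not the shipped implementation.
import { handler } from "./handler";

Bun.serve({
  port: 3000,
  async fetch(req: Request): Promise<Response> {
    // Wrap the incoming request body in a minimal API Gateway-style event.
    const event = { body: await req.text() } as any;
    const result: any = await handler(event);
    return new Response(result.body ?? "", {
      status: result.statusCode ?? 200,
      headers: { "Content-Type": "application/json" },
    });
  },
});

console.log("elizaOS local server on http://localhost:3000");
```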
POST /chat sends a message to the elizaOS agent.

Request:

```json
{
"message": "Hello, how are you?",
"userId": "optional-user-id",
"conversationId": "optional-conversation-id"
}
```

Response:

```json
{
"response": "I'm doing well, thank you for asking!",
"conversationId": "uuid-for-conversation-tracking",
"timestamp": "2025-01-10T12:00:00.000Z"
}
```

A health check endpoint is also available.

Response:

```json
{
"status": "healthy",
"runtime": "elizaos-typescript",
"version": "2.0.0-beta.0"
}
```

Deployment commands:

```bash
# First-time deployment with guided prompts
sam deploy --guided
# Subsequent deployments
sam deploy
```

Alternatively, deploy with plain CloudFormation:

```bash
aws cloudformation deploy \
--template-file template.yaml \
--stack-name eliza-worker \
--parameter-overrides OpenAIApiKey=$OPENAI_API_KEY \
  --capabilities CAPABILITY_IAM
```

Configuration is driven by environment variables:

| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | - | Your OpenAI API key |
| `OPENAI_SMALL_MODEL` | No | `gpt-5-mini` | Small model to use |
| `OPENAI_LARGE_MODEL` | No | `gpt-5` | Large model to use |
| `CHARACTER_NAME` | No | `Eliza` | Agent's name |
| `CHARACTER_BIO` | No | `A helpful AI assistant.` | Agent's bio |
| `CHARACTER_SYSTEM` | No | (default) | System prompt |
| `LOG_LEVEL` | No | `INFO` | Logging level |
You can customize the agent's personality by setting environment variables or by modifying the character definition in the handler:

```typescript
const character: Character = {
name: process.env.CHARACTER_NAME ?? "Eliza",
bio: process.env.CHARACTER_BIO ?? "A helpful AI assistant.",
system: process.env.CHARACTER_SYSTEM ?? "You are helpful and concise.",
};
```

Lambda cold starts can take 2-5 seconds due to runtime initialization. To minimize them:

- Provisioned Concurrency: keep instances warm

  ```yaml
  ProvisionedConcurrencyConfig:
    ProvisionedConcurrentExecutions: 1
  ```

- SnapStart (Java only): not applicable for these runtimes
- Smaller Package: use tree-shaking and minimal dependencies
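A further mitigation is initializing the runtime at module scope so warm invocations reuse it. A sketch of the pattern, where `createRuntime()` is a hypothetical elizaOS bootstrap helper:

```typescript
// Sketch: module-scope initialization runs once per cold start and is
// reused by every warm invocation of the same instance. createRuntime()
// is a hypothetical helper, not a real elizaOS export.
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

declare function createRuntime(): Promise<{ reply(message: string): Promise<string> }>;

const runtimePromise = createRuntime(); // kicks off during cold start

export async function handler(event: APIGatewayProxyEventV2): Promise<APIGatewayProxyResultV2> {
  const runtime = await runtimePromise; // already resolved on warm invocations
  const { message } = JSON.parse(event.body ?? "{}");
  const response = await runtime.reply(message);
  return { statusCode: 200, body: JSON.stringify({ response }) };
}
```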
Recommended memory settings:
| Runtime | Memory | Timeout |
|---|---|---|
| TypeScript | 512 MB | 30s |
Lambda automatically logs to CloudWatch. View logs:
```bash
sam logs -n ElizaWorkerFunction --stack-name eliza-worker --tail
```

Key metrics to monitor:
- Invocations
- Duration
- Errors
- Throttles
- ConcurrentExecutions
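If you prefer pulling these metrics programmatically, here is a sketch using the AWS SDK v3 CloudWatch client; note the `FunctionName` dimension must be the deployed function's physical name (SAM derives it from the logical ID, so substitute the name from your stack outputs):

```typescript
// Sketch: fetch invocation counts for the deployed function over the last hour.
import { CloudWatchClient, GetMetricStatisticsCommand } from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({});
const stats = await cw.send(
  new GetMetricStatisticsCommand({
    Namespace: "AWS/Lambda",
    MetricName: "Invocations",
    Dimensions: [{ Name: "FunctionName", Value: "YOUR_FUNCTION_NAME" }],
    StartTime: new Date(Date.now() - 60 * 60 * 1000), // last hour
    EndTime: new Date(),
    Period: 300, // 5-minute buckets
    Statistics: ["Sum"],
  })
);
console.log(stats.Datapoints);
```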
AWS Lambda pricing (as of 2025):
- Requests: $0.20 per 1M requests
- Duration: $0.0000166667 per GB-second
Example (512 MB, 2s avg duration, 10K requests/month):
- Requests: $0.002
- Duration: 10,000 × 2s × 0.5GB × $0.0000166667 = $0.17
- Total: ~$0.17/month
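To check the arithmetic yourself, a tiny calculator using the prices quoted above:

```typescript
// Reproduces the estimate above using the quoted 2025 prices.
function lambdaMonthlyCost(requests: number, avgSeconds: number, memoryMB: number): number {
  const requestCost = (requests / 1_000_000) * 0.2;    // $0.20 per 1M requests
  const gbSeconds = requests * avgSeconds * (memoryMB / 1024);
  return requestCost + gbSeconds * 0.0000166667;       // $ per GB-second
}

console.log(lambdaMonthlyCost(10_000, 2, 512).toFixed(2)); // "0.17"
```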
Troubleshooting: if the function fails with missing modules, ensure all dependencies are bundled:

```bash
bun run build
sam build
```

If invocations time out, increase the timeout in template.yaml:

```yaml
Timeout: 60  # seconds
```

If requests to OpenAI fail, verify the API key parameter is set:

```bash
sam deploy --parameter-overrides OpenAIApiKey=$OPENAI_API_KEY
```

To remove all deployed resources:

```bash
sam delete --stack-name eliza-worker
```