```
    ___                   __  ____
   / _ |___ ____ ___   / /_/ __/__ _____ _____
  / __ / _ `/ -_) _ \/ __/\ \/ -_) __/ |/ / -_)
 /_/ |_\_, /\__/_//_/\__/___/\__/_/ |___/\__/
      /___/
```
AgentServe is a lightweight framework for hosting and scaling AI agents. It is designed to be easy to use and to integrate with existing projects and agent/LLM frameworks. It wraps your agent in a REST API and supports optional task queuing for scalability.
Join the Discord for support and discussion.
The goal of AgentServe is to provide the easiest way to take a local agent to production and to standardize the communication layer between multiple agents, humans, and other systems.
- Standardized: AgentServe provides a standardized way to communicate with AI agents via a REST API.
- Framework Agnostic: AgentServe supports multiple agent frameworks (OpenAI, LangChain, LlamaIndex, and Blank).
- Task Queuing: AgentServe supports optional task queuing for scalability. Choose between local, Redis, or Celery task queues based on your needs.
- Configurable: AgentServe is designed to be configured via an `agentserve.yaml` file and overridden with environment variables.
- Easy to Use: AgentServe aims to be easy to use, to integrate with existing projects, and to make deployment as simple as possible.
- Python 3.9+
To install AgentServe, you can use pip:
```bash
pip install -U agentserve
```

AgentServe allows you to easily wrap your agent code in a FastAPI application and expose it via REST endpoints. Below are the steps to integrate AgentServe into your project.
First, install the agentserve package using pip:
```bash
pip install -U agentserve
```

Make sure your virtual environment is activated if you're using one.
Within your entry point file (e.g. `main.py`), we import `agentserve` and create an app instance, then decorate an agent function with `@app.agent`. Finally, we call `app.run()` to start the server.
The agent function should take a single argument, `task_data`, which will be a dictionary of the data required by your agent.
Example:
```python
# main.py
import agentserve
from openai import OpenAI

app = agentserve.app()

@app.agent
def my_agent(task_data):
    # Your agent logic goes here
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": task_data["prompt"]}]
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    app.run()
```

In this example:
- We import agentserve and create an app instance using `agentserve.app()`.
- We define our agent function `my_agent` and decorate it with `@app.agent`.
- Within the agent function, we implement our agent's logic.
- We call `app.run()` to start the server.
To run the agent server, use the following command:
```bash
python main.py
```

By default, AgentServe uses a local task queue, which is suitable for development and testing. If you need more robust queue management for production, you can configure AgentServe to use Redis or Celery.
Using a Configuration File

Create a file named `agentserve.yaml` in your project directory:
```yaml
# agentserve.yaml
task_queue: celery # Options: 'local', 'redis', 'celery'
celery:
  broker_url: pyamqp://guest@localhost//
```

Using Environment Variables
Alternatively, you can set configuration options using environment variables:
```bash
export AGENTSERVE_TASK_QUEUE=celery
export AGENTSERVE_CELERY_BROKER_URL=pyamqp://guest@localhost//
```

To start the worker, use the following command:
```bash
agentserve startworker
```

With the server and worker (if needed) running, you can test your agent using the available endpoints.
Synchronously process a task
POST /task/sync
```bash
curl -X POST http://localhost:8000/task/sync \
  -H "Content-Type: application/json" \
  -d '{"input": "Test input"}'
```
Asynchronously process a task

POST /task/async
```bash
curl -X POST http://localhost:8000/task/async \
  -H "Content-Type: application/json" \
  -d '{"input": "Test input"}'
```

Get the status of a task
GET /task/status/:task_id
```bash
curl http://localhost:8000/task/status/1234567890
```

Get the result of a task
GET /task/result/:task_id
```bash
curl http://localhost:8000/task/result/1234567890
```
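Putting the async endpoints together, here is a minimal Python sketch of the submit/poll/fetch round trip. The `task_id` and `status` field names and the terminal status values are assumptions for illustration; check your server's actual JSON responses:

```python
import time

import requests  # third-party HTTP client: pip install requests

BASE_URL = "http://localhost:8000"

def run_task(payload: dict) -> dict:
    # Submit the task for asynchronous processing.
    resp = requests.post(f"{BASE_URL}/task/async", json=payload)
    resp.raise_for_status()
    task_id = resp.json()["task_id"]  # assumed response field

    # Poll the status endpoint until the task reaches a terminal state.
    while True:
        status = requests.get(f"{BASE_URL}/task/status/{task_id}").json()
        if status.get("status") in ("completed", "failed"):  # assumed values
            break
        time.sleep(1)

    # Fetch the final result once the task has finished.
    return requests.get(f"{BASE_URL}/task/result/{task_id}").json()

if __name__ == "__main__":
    print(run_task({"input": "Test input"}))
```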
AgentServe allows you to configure various aspects of the application using a configuration file or environment variables.

Place an `agentserve.yaml` file in your project directory with the desired configurations.
Example:
```yaml
# agentserve.yaml
task_queue: celery # Options: 'local', 'redis', 'celery'
celery:
  broker_url: pyamqp://guest@localhost//
redis:
  host: localhost
  port: 6379
server:
  host: 0.0.0.0
  port: 8000
queue: # if using local task queue
  max_workers: 10 # default
```

Set the desired configuration options using environment variables.
You can override configurations using environment variables without modifying the configuration file.
- `AGENTSERVE_TASK_QUEUE`
- `AGENTSERVE_CELERY_BROKER_URL`
- `AGENTSERVE_REDIS_HOST`
- `AGENTSERVE_REDIS_PORT`
- `AGENTSERVE_SERVER_HOST`
- `AGENTSERVE_SERVER_PORT`
- `AGENTSERVE_QUEUE_MAX_WORKERS`
Example:
```bash
export AGENTSERVE_TASK_QUEUE=redis
export AGENTSERVE_REDIS_HOST=redis-server-host
export AGENTSERVE_REDIS_PORT=6379
```

You can specify FastAPI settings, including CORS configuration, using the `fastapi` key in your `agentserve.yaml` configuration file.
Example:
```yaml
# agentserve.yaml
fastapi:
  cors:
    allow_origins:
      - "http://localhost:3000"
      - "https://yourdomain.com"
    allow_credentials: true
    allow_methods:
      - "*"
    allow_headers:
      - "*"
```

Alternatively, you can set the desired configuration options using environment variables.
Example:
```bash
export AGENTSERVE_CORS_ORIGINS="http://localhost:3000,https://yourdomain.com"
export AGENTSERVE_CORS_ALLOW_CREDENTIALS="true"
export AGENTSERVE_CORS_ALLOW_METHODS="GET,POST"
export AGENTSERVE_CORS_ALLOW_HEADERS="Content-Type,Authorization"
```

You can integrate AgentServe into your existing projects by importing `agentserve` and defining your agent function.
Example:
```python
# main.py
import agentserve

app = agentserve.app()

@app.agent
def my_custom_agent(task_data):
    # Your custom agent logic (e.g. using LangChain, LlamaIndex, etc.)
    result = perform_complex_computation(task_data)
    return {"result": result}

if __name__ == "__main__":
    app.run()
```

AgentServe allows you to validate the input to your agent function using Pydantic. Simply add an input schema to your agent function.
Example:
```python
# main.py
import agentserve
from pydantic import BaseModel

app = agentserve.app()

class MyInputSchema(BaseModel):
    prompt: str

@app.agent(input_schema=MyInputSchema)
def my_custom_agent(task_data):
    # Your custom agent logic
    return {"result": "Hello, world!"}

if __name__ == "__main__":
    app.run()
```
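To see what the schema enforces, you can exercise it directly with Pydantic. This is just a local sketch of the validation rules; how AgentServe itself surfaces a validation failure to the HTTP caller is up to the framework:

```python
from pydantic import BaseModel, ValidationError

class MyInputSchema(BaseModel):
    prompt: str

# A payload with the required "prompt" field passes validation.
print(MyInputSchema(prompt="Hello"))

# A payload missing "prompt" raises a ValidationError, so invalid
# input is caught before any agent logic runs.
try:
    MyInputSchema()
except ValidationError as e:
    print(e)
```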
INSTRUCTIONS COMING SOON

- Add support for streaming responses
- Add easy instructions for more hosting options (GCP, Azure, AWS, etc.)
- Add support for external storage for task results
- Add support for multi-model agents
- Add support for more agent frameworks
This project is licensed under the MIT License.
Join the Discord for support and discussion.
For any questions or issues, please contact Peter at [email protected].