Build production-ready multi-agent systems in Python. The BeeAI framework is also available in TypeScript.
The BeeAI framework provides a comprehensive set of features for building powerful AI agents:
Feature | Description |
---|---|
Agents | Create intelligent, autonomous agents using the ReAct pattern. Build agents that can reason about problems, take appropriate actions, and adapt their approach based on feedback. Includes pre-built agent architectures and customizable components. |
Workflows | Orchestrate complex multi-agent systems where specialized agents collaborate to solve problems. Define sequential or conditional execution flows with state management and observability. |
Backend | Connect to various LLM providers like Ollama, watsonx.ai, and more. Offers unified interfaces for chat, embeddings, and structured outputs, making it easy to swap models without changing your code. |
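The ReAct pattern mentioned above interleaves reasoning and acting: the agent thinks about the next step, calls a tool, observes the result, and adapts. As a framework-agnostic illustration (plain Python, not the BeeAI API — `lookup_weather` and `react_loop` are hypothetical names for this sketch):

```python
# Minimal, framework-agnostic sketch of a ReAct-style loop (plain Python,
# NOT the BeeAI API): reason about the next step, act with a tool, feed the
# observation back into a scratchpad, and stop once the question is answered.

def lookup_weather(city: str) -> str:
    """Stand-in 'tool': a canned lookup instead of a real weather API call."""
    return {"New York": "18°C, cloudy"}.get(city, "unknown")

def react_loop(question: str, max_iterations: int = 3) -> str:
    scratchpad = [f"Question: {question}"]
    for _ in range(max_iterations):
        # Reason: a real agent would ask an LLM what to do next.
        thought = "I should look up the weather for New York."
        scratchpad.append(f"Thought: {thought}")
        # Act: call the chosen tool and record the observation.
        observation = lookup_weather("New York")
        scratchpad.append(f"Observation: {observation}")
        # Adapt: stop once the observation answers the question.
        if observation != "unknown":
            return f"The weather in New York is {observation}."
    return "I could not find an answer."

print(react_loop("What is the weather in New York?"))
```

In the real framework, the "reason" step is an LLM call and the tool set is configurable; the loop structure is the shared idea.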
Feature | Description |
---|---|
Tools | Extend agent capabilities with ready-to-use tools for web search, weather forecasting, knowledge retrieval, code execution, and more. Create custom tools to connect agents to any API or service. |
Memory | Manage conversation history with different memory strategies. Choose from unconstrained memory, token-aware memory, sliding window memory, or summarization memory based on your needs. |
Templates | Build flexible prompt templates using an enhanced Mustache syntax. Create reusable templates with variables, conditionals, and loops to generate well-structured prompts. |
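To make the memory strategies above concrete, here is a conceptual sketch of the sliding-window strategy in plain Python (the class name and shape are illustrative, not the BeeAI API): only the N most recent messages are kept, so prompt size stays bounded no matter how long the conversation runs.

```python
from collections import deque

# Conceptual sketch of a sliding-window memory strategy (plain Python,
# NOT the BeeAI API): a bounded deque evicts the oldest message once
# the window is full.

class SlidingWindowMemory:
    def __init__(self, window_size: int) -> None:
        self._messages: deque[str] = deque(maxlen=window_size)

    def add(self, message: str) -> None:
        self._messages.append(message)  # oldest message drops out when full

    @property
    def messages(self) -> list[str]:
        return list(self._messages)

memory = SlidingWindowMemory(window_size=3)
for text in ["hi", "what's the weather?", "in New York", "tomorrow?"]:
    memory.add(text)
print(memory.messages)  # only the three most recent messages survive
```

Unconstrained memory keeps everything, token-aware memory evicts by token budget rather than message count, and summarization memory compresses old turns instead of dropping them.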
Feature | Description |
---|---|
Cache | Optimize performance and reduce costs with caching mechanisms for tool outputs and LLM responses. Implement different caching strategies based on your application requirements. |
Serialization | Save and load agent state for persistence across sessions. Serialize workflows, memory, and other components to support stateful applications. |
Errors | Implement robust error management with specialized error classes. Distinguish between different error types and implement appropriate recovery strategies. |
Note
Cache and serialization features are not yet implemented in Python, but they are coming soon!
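The caching idea can still be illustrated independently of the framework. The sketch below (plain Python, not the BeeAI API — `expensive_tool` and `cached_tool` are hypothetical) memoizes tool outputs so identical calls are served from memory, trading a little RAM for latency and cost:

```python
# Conceptual sketch of caching tool outputs (plain Python, NOT the BeeAI
# API): identical tool calls are answered from a dict instead of
# re-running the tool.

call_count = 0

def expensive_tool(query: str) -> str:
    global call_count
    call_count += 1  # track how often the underlying tool actually runs
    return f"results for {query!r}"

_cache: dict[str, str] = {}

def cached_tool(query: str) -> str:
    if query not in _cache:
        _cache[query] = expensive_tool(query)
    return _cache[query]

cached_tool("weather in New York")
cached_tool("weather in New York")  # served from the cache
print(call_count)  # the underlying tool ran only once
```

The same pattern applies to LLM responses, where avoiding a repeated call saves real money rather than just time.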
Feature | Description |
---|---|
Emitter | Gain visibility into agent decision processes with a flexible event system. Subscribe to events like updates, errors, and tool executions to monitor agent behavior. |
Logger | Track agent actions and system events with comprehensive logging. Configure logging levels and outputs to support debugging and monitoring. |
Instrumentation | Monitor performance and usage with OpenTelemetry integration. Collect metrics and traces to understand system behavior in production environments. |
Version | Access framework version information programmatically to ensure compatibility. |
Note
Instrumentation and version features are not yet implemented in Python, but they are coming soon!
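The emitter pattern described in the table is a classic publish/subscribe design. A minimal sketch in plain Python (not the BeeAI API — the `Emitter` class and event names here are illustrative): observers register handlers for named events, and the agent calls them whenever it emits one, which is how updates, errors, and tool executions can be observed from outside.

```python
from collections import defaultdict
from typing import Any, Callable

# Conceptual sketch of an event emitter (plain Python, NOT the BeeAI API):
# handlers subscribe to named events and run whenever those events fire.

class Emitter:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[[Any], None]) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, payload: Any) -> None:
        for handler in self._handlers[event]:
            handler(payload)

emitter = Emitter()
seen: list[str] = []
emitter.on("tool_start", lambda payload: seen.append(f"tool_start: {payload}"))
emitter.emit("tool_start", "OpenMeteoTool")
print(seen)
```

Logging and OpenTelemetry instrumentation build on the same hook points: an event fires, and a subscriber writes a log line or records a span.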
✅ Python >= 3.11
Install BeeAI framework using pip:
```shell
pip install beeai-framework
```
The following example demonstrates how to build a multi-agent workflow using the BeeAI framework:
```python
import asyncio
import traceback

from pydantic import ValidationError

from beeai_framework.agents.bee.agent import AgentExecutionConfig
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import UserMessage
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
from beeai_framework.workflows.agent import AgentFactoryInput, AgentWorkflow
from beeai_framework.workflows.workflow import WorkflowError


async def main() -> None:
    llm = ChatModel.from_name("ollama:granite3.1-dense:8b")

    try:
        workflow = AgentWorkflow(name="Smart assistant")
        workflow.add_agent(
            agent=AgentFactoryInput(
                name="WeatherForecaster",
                instructions="You are a weather assistant. Respond only if you can provide a useful answer.",
                tools=[OpenMeteoTool()],
                llm=llm,
                execution=AgentExecutionConfig(max_iterations=3),
            )
        )
        workflow.add_agent(
            agent=AgentFactoryInput(
                name="Researcher",
                instructions="You are a researcher assistant. Respond only if you can provide a useful answer.",
                tools=[DuckDuckGoSearchTool()],
                llm=llm,
            )
        )
        workflow.add_agent(
            agent=AgentFactoryInput(
                name="Solver",
                instructions="""Your task is to provide the most useful final answer based on the assistants'
responses which all are relevant. Ignore those where assistant do not know.""",
                llm=llm,
            )
        )

        prompt = "What is the weather in New York?"
        memory = UnconstrainedMemory()
        await memory.add(UserMessage(content=prompt))
        response = await workflow.run(messages=memory.messages)
        print(f"result: {response.state.final_answer}")

    except WorkflowError:
        traceback.print_exc()
    except ValidationError:
        traceback.print_exc()


if __name__ == "__main__":
    asyncio.run(main())
```
Source: python/examples/workflows/multi_agents.py
Note
To run this example, make sure you have Ollama installed and the granite3.1-dense:8b model downloaded.
To run a project, use:

```shell
python [project_name].py
```
➡️ Explore more in our examples library.
BeeAI framework is an open-source project and we ❤️ contributions.
If you'd like to help build BeeAI, take a look at our contribution guidelines.
We use GitHub Issues to manage public bugs. We keep a close eye on them, so before filing a new issue, please check that it hasn't already been logged.
This project and everyone participating in it are governed by the Code of Conduct. By participating, you are expected to uphold this code. Please read the full text so that you understand which actions may or may not be tolerated.
All content in these repositories including code has been provided by IBM under the associated open source software license and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product), and IBM makes no assertions as to the level of quality nor security, and will not be maintaining this code going forward.
Special thanks to our contributors for helping us improve BeeAI framework.