Collection of small experiments exploring AutoGen AgentChat, multi-agent prompting patterns, tool integration (MCP + custom + web search), and research workflows. Each subfolder has its own README, but they share a Python environment and some utility code from the `tools.py` module.
- `tool_use_example/` – Single agent using mixed MCP + native Python tools (math + optional web search) with reflection.
- `opencv_example/` – OpenCV-based exploratory image processing (grayscale transforms, thresholding, contour / shape extraction).
- `basic_example/` – Minimal two-agent joke generation & refinement demo.
- `deep_research_example/` – Multi-role research pipeline (research, fact-check, critique, summarize) with transcript + token logging.
- `utilities/` – Shared utility functions for model loading, token counting, tool wrapping, etc.
- Python 3.11+ (project uses `pyproject.toml` and `uv.lock`).
- API keys for any LLM / tool providers you intend to exercise (e.g. `OPENAI_API_KEY`, `TAVILY_API_KEY`).
- (Optional) Additional keys for Apify or other MCP-integrated services if you enable those flows.
Using `uv` (recommended):

```shell
uv sync
```

Or with `pip` (fallback):

```shell
pip install -e .
```
Export environment variables (PowerShell examples):

```powershell
$Env:OPENAI_API_KEY = "your_key"
$Env:TAVILY_API_KEY = "your_tavily_key"  # optional
```
- Pick a subfolder and open its README.md for instructions on running its notebook or script.
- Ensure the virtual environment is active and keys are exported.
- Inspect any generated transcripts, logs, or image artifacts.
MIT (see `LICENSE`).
You can change which model (and provider backend) the examples use without touching most example code. The selection logic lives in `utilities.load_model_config()` and the `resources.yaml` file.
`load_model_config()` picks a model using (highest precedence first):

- Explicit argument: `load_model_config(model_name="gpt-5-mini")`
- Environment variable: `DEFAULT_MODEL_NAME`
- Internal fallback: `GPT-4.1-Nano`

The chosen name is matched case-insensitively against entries in `resources.yaml`.
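As a rough illustration, the precedence can be sketched like this (a simplified sketch, not the actual implementation; the `resolve_model_name` helper and `_FALLBACK` constant are hypothetical names):

```python
import os

_FALLBACK = "GPT-4.1-Nano"  # internal default (hypothetical constant name)

def resolve_model_name(model_name=None):
    """Pick a model name: explicit argument > DEFAULT_MODEL_NAME env var > fallback."""
    if model_name:
        return model_name  # 1. explicit argument wins
    env_name = os.environ.get("DEFAULT_MODEL_NAME")
    if env_name:
        return env_name    # 2. environment variable
    return _FALLBACK       # 3. hard-coded fallback

# The resulting name is then matched case-insensitively against
# the `name` fields of the entries in resources.yaml.
```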
Each entry under `models:` defines one selectable model. Common fields:

- `name`: Identifier you pass as `model_name` (match is case-insensitive)
- `type`: `openai` (includes Azure OpenAI) or `ollama`
- `deployment`: (OpenAI/Azure) Concrete deployment name if different from `name`
- `api-base`, `api-version`: (Azure OpenAI) Endpoint + API version
- `host`: (Ollama) Base URL for local or remote Ollama server
- `info`: Optional capabilities metadata (used to build `ModelInfo`)
Add a new model by appending a block, e.g. for a local Ollama model:

```yaml
- name: "llama3.1:8b"
  type: "ollama"
  host: "http://localhost:11434"  # omit if default
  info:
    family: "llama3.1"
    vision: false
    function_calling: true
    json_output: true
```
Or an Azure OpenAI deployment (be sure the deployment exists):

```yaml
- name: "gpt-5-mini"
  type: "openai"
  deployment: "gpt-5-mini-2025-08-07"
  api-version: "2025-03-01-preview"
  api-base: "https://your-endpoint.openai.azure.com/openai/v1/"
```
- OpenAI / Azure OpenAI: set either `AZURE_OPENAI_API_KEY` (preferred for Azure) or `OPENAI_API_KEY`.
- Ollama (local): no key needed; ensure the server is running (`ollama serve`).
- Ollama (remote): ensure a reachable host URL and any required network auth (not handled here).
PowerShell examples:

```powershell
$Env:AZURE_OPENAI_API_KEY = "your_azure_key"  # OR
$Env:OPENAI_API_KEY = "your_openai_key"
$Env:DEFAULT_MODEL_NAME = "gpt-5-mini"  # optional default
```
Override per run by passing `model_name`:

```python
from utilities import load_model_config, create_chat_completion_client

model_cfg = load_model_config(model_name="qwen3:0.6b")  # overrides DEFAULT_MODEL_NAME
client = create_chat_completion_client(model_cfg)
```

If you omit `model_name`, the function checks `DEFAULT_MODEL_NAME` and otherwise uses the internal fallback.
- "Unsupported model type": Check that the `type` field is `openai` or `ollama`.
- Key error / list index: The `name` you requested does not exist in `resources.yaml` (check spelling / indentation).
- Authentication errors: Confirm the correct API key env var is exported in the active shell / process.
- Slow or hanging Ollama model: Try a smaller variant (e.g. `qwen3:0.6b`) first.
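For the authentication case, a quick throwaway check (not part of the repo) shows which keys the current Python process can actually see:

```python
import os

def key_status(keys):
    """Report whether each environment variable is visible to this process."""
    return {k: ("set" if os.environ.get(k) else "missing") for k in keys}

# Inspect the variables the examples care about.
for key, status in key_status(
    ["AZURE_OPENAI_API_KEY", "OPENAI_API_KEY", "TAVILY_API_KEY", "DEFAULT_MODEL_NAME"]
).items():
    print(f"{key}: {status}")
```

If a key prints as `missing` here, it was exported in a different shell or after the process started.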
Lightweight experimental space – stability not guaranteed; iterate freely.