
feat: add Avian as an LLM provider#1278

Open
avianion wants to merge 2 commits into getzep:main from avianion:feat/add-avian-llm-provider

Conversation

@avianion

Summary

Adds AvianClient as a new LLM provider for the Avian API (https://api.avian.io/v1), an OpenAI-compatible inference service.
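Because the service is OpenAI-compatible, a standard chat-completions request works against the Avian base URL. A minimal stdlib-only sketch (the request is built but not sent; the endpoint path and payload shape follow the standard OpenAI API, the API key is a placeholder, and the model name is taken from the pricing table below):

```python
import json
from urllib.request import Request

BASE_URL = "https://api.avian.io/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> Request:
    # Standard OpenAI-style chat-completions payload.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("your-api-key", "deepseek/deepseek-v3.2", "Hello")
print(req.full_url)  # https://api.avian.io/v1/chat/completions
```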

  • Implements AvianClient in graphiti_core/llm_client/avian_client.py following the same patterns as OpenAIGenericClient
  • Supports structured outputs (json_schema response format), retry logic, and tracing
  • Registers 'avian' as a provider type for telemetry/tracing
  • Adds comprehensive unit tests covering initialization, response generation, rate limiting, and retry behavior
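The retry behavior the tests cover can be pictured as a simple exponential-backoff loop around the API call. This is illustrative only — the names (`RateLimitError`, `with_retries`) are hypothetical, not graphiti's actual implementation:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the rate-limit error the real client would catch."""

def with_retries(call, max_retries=3, base_delay=0.01):
    # Retry only on rate-limit errors, doubling the delay each attempt
    # and re-raising once the retry budget is exhausted.
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated call that is rate-limited twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError
    return "ok"

print(with_retries(flaky))  # ok
```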

Avian API Details

| Model | Context (Input / Output) | Price (Input / Output per 1M tokens) |
| --- | --- | --- |
| deepseek/deepseek-v3.2 (default) | 164K / 65K | $0.26 / $0.38 |
| moonshotai/kimi-k2.5 | 131K / 8K | $0.45 / $2.20 |
| z-ai/glm-5 | 131K / 16K | $0.30 / $2.55 |
| minimax/minimax-m2.5 | 1M / 1M | $0.30 / $1.10 |
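As a quick sanity check on the pricing, a per-request cost under this table works out as follows (prices copied from the rows above; the token counts are example values):

```python
# Per-1M-token prices from the table above, as (input, output) in USD.
PRICES = {
    "deepseek/deepseek-v3.2": (0.26, 0.38),
    "moonshotai/kimi-k2.5": (0.45, 2.20),
    "z-ai/glm-5": (0.30, 2.55),
    "minimax/minimax-m2.5": (0.30, 1.10),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    # Cost = tokens * per-token price, with prices quoted per 1M tokens.
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. 10K input + 1K output tokens on the default model:
cost = request_cost("deepseek/deepseek-v3.2", 10_000, 1_000)
print(f"${cost:.6f}")  # $0.002980
```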

Usage

```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.avian_client import AvianClient
from graphiti_core.llm_client.config import LLMConfig

# Using environment variable (AVIAN_API_KEY)
client = AvianClient()

# Or with explicit config
client = AvianClient(
    config=LLMConfig(
        api_key="your-api-key",
        model="deepseek/deepseek-v3.2",
    )
)

# uri and embedder are set up as for any other Graphiti backend
graphiti = Graphiti(
    uri,
    llm_client=client,
    embedder=embedder,
)
```

Files Changed

  • graphiti_core/llm_client/avian_client.py — New AvianClient class
  • graphiti_core/llm_client/__init__.py — Export AvianClient
  • graphiti_core/llm_client/client.py — Register 'avian' in _get_provider_type()
  • tests/llm_client/test_avian_client.py — Unit tests
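The registration in `_get_provider_type()` can be pictured as a class-name-to-label lookup. This sketch is hypothetical — the real function in `graphiti_core/llm_client/client.py` may be structured differently; only the `'avian'` label comes from this PR:

```python
# Illustrative sketch only: maps an LLM client class name to the
# provider label used for telemetry/tracing.
_PROVIDER_BY_CLASS = {
    "OpenAIClient": "openai",
    "OpenAIGenericClient": "openai",
    "AvianClient": "avian",  # the label this PR registers
}

def get_provider_type(client_class_name: str) -> str:
    # Fall back to 'unknown' for unrecognized client classes.
    return _PROVIDER_BY_CLASS.get(client_class_name, "unknown")

print(get_provider_type("AvianClient"))  # avian
```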

Test Plan

  • All ruff lint checks pass
  • All ruff format checks pass
  • Unit tests cover initialization, response generation, rate limiting, retries, and provider type detection
  • CI unit tests pass

cc @danielchalef @prasmussen15

Add AvianClient for the Avian LLM API (https://api.avian.io/v1), an
OpenAI-compatible service offering DeepSeek, Kimi, GLM, and MiniMax
models. The client follows the same patterns as OpenAIGenericClient
with Avian-specific defaults (base URL, default model, env var).

Changes:
- Add graphiti_core/llm_client/avian_client.py with AvianClient class
- Export AvianClient from llm_client __init__.py
- Register 'avian' in _get_provider_type() for tracing
- Add unit tests in tests/llm_client/test_avian_client.py
@danielchalef
Member

danielchalef commented Feb 27, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@avianion
Author

I have read the CLA Document and I hereby sign the CLA

danielchalef added a commit that referenced this pull request Feb 27, 2026
@avianion
Author

Hey @prasmussen15 @danielchalef, would love your review on this when you get a chance. Happy to address any feedback!

@avianion
Author

avianion commented Mar 5, 2026

Friendly follow-up — this PR is still active and ready for review. Would appreciate a look when you get a chance! cc @prasmussen15 @danielchalef

@avianion
Author

avianion commented Mar 5, 2026

Friendly follow-up — this PR is still active and ready for review. All feedback has been addressed. Would appreciate a look when you get a chance! cc @prasmussen15 @danielchalef

@avianion
Author

avianion commented Mar 5, 2026

Hey @danielchalef @prasmussen15 — friendly follow-up on this PR. Avian is an OpenAI-compatible inference provider that's already live and powering apps like ISEKAI ZERO. This is a lightweight integration (standard OpenAI-compatible endpoint) and we're happy to address any feedback or make adjustments. Would love to get this merged if you have a moment to review. Thanks!
