A minimal, modular protocol that turns any database into a structured knowledge base for AI agents. Semantic layer. Agent-ready interface. Cognition starts here.

Orquel


Status: v0.3.0 - Production-ready with AI SDK integration, MCP, and PostgreSQL support βœ…

The TypeScript-first RAG toolkit. Simple, composable, production-ready.

Make knowledge usable. Anywhere.

  • Ingest – Turn any source into structured, searchable knowledge
  • Embed – Choose your adapter, your model, your storage
  • Retrieve – Query with hybrid search (vector + lexical)
  • Answer – Universal LLM support via AI SDK (20+ providers)
  • Integrate – MCP for Claude Code, API for apps

πŸ“– Product Roadmap | ⭐ Star on GitHub


πŸš€ Orquel + AI SDK Integration

NEW: The universal answerer supports OpenAI, Anthropic, Cohere, and 20+ other providers with a one-line change:

import { createOrquel } from '@orquel/core';
import { openAIEmbeddings } from '@orquel/embeddings-openai';
import { pgvectorStore } from '@orquel/store-pgvector';
import { aiSDKAnswerer } from '@orquel/answer-aisdk';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// OpenAI
const orq = createOrquel({
  embeddings: openAIEmbeddings(),
  vector: pgvectorStore({ connectionString: process.env.DATABASE_URL }),
  answerer: aiSDKAnswerer({ model: openai('gpt-4-turbo') })
});

// Switch to Anthropic? Just change the answerer line:
const orqClaude = createOrquel({
  embeddings: openAIEmbeddings(),
  vector: pgvectorStore({ connectionString: process.env.DATABASE_URL }),
  answerer: aiSDKAnswerer({ model: anthropic('claude-3-5-sonnet-20241022') })
});

Stream with AI SDK tools:

import { createOrquelTools } from '@orquel/integration-aisdk';
import { streamText } from 'ai';

const tools = createOrquelTools(orq, {
  search: { hybridSearch: true },
  answer: { defaultTopK: 4 }
});

const result = streamText({
  model: openai('gpt-4-turbo'),
  tools, // AI can search your knowledge base
  maxSteps: 5,
  prompt: 'What is RAG?'
});

Benefits:

  • βœ… One adapter, 20+ providers (OpenAI, Anthropic, Cohere, Mistral, Groq, local models)
  • βœ… Easy provider switching (change one line)
  • βœ… Streaming built-in via AI SDK
  • βœ… Tool calling for chat interfaces
  • βœ… Future-proof (new providers supported automatically)

✨ Why Orquel?

Orquel is a TypeScript-first toolkit for building knowledge bases and RAG systems. Today's devs reinvent the wheel: writing chunkers, wiring embeddings, gluing vector stores. Orquel makes this simple, composable, and consistent.

Key Differentiators

| Feature | Orquel | LangChain | LlamaIndex | Vercel AI SDK |
|---|---|---|---|---|
| Focus | RAG pipelines | General LLM | RAG (Python) | Chat/streaming UI |
| TypeScript DX | ⭐⭐⭐⭐⭐ Best | ⭐⭐⭐ Good | ⭐⭐ Secondary | ⭐⭐⭐⭐⭐ Best |
| Dependencies | 0 (core) | Heavy | Medium | Medium |
| Hybrid Search | ✅ Built-in RRF | Manual | ✅ Yes | ❌ Manual |
| Benchmarking | ✅ Built-in | ❌ None | ⭐ Limited | ❌ None |
| LLM Providers | 20+ via AI SDK | Many | Many | 20+ |
| API Complexity | Simple | Complex | Medium | Simple |

What Makes Orquel Different

  • DX First: 4 lines to get started; strict TypeScript; minimal, ergonomic API
  • Zero Dependencies (Core): No supply chain risk, minimal bundle size
  • Composable: Swap embeddings, vector DBs, lexical search, answerers via adapters
  • Production Ready: PostgreSQL + pgvector, hybrid search, connection pooling
  • AI SDK Integration: Universal answerer + streaming + tool calling
  • MCP Native: 11 tools for Claude Code and AI assistants
  • Performance Focused: Built-in benchmarking and evaluation harness
  • Extensible: Clean adapter interfaces, easy to customize

Positioning: The "Express.js of RAG" – focused, minimal, developer-friendly.


🎯 Quick Start

Prerequisites

  • Node.js 18+
  • OpenAI API key (or any AI SDK provider)
  • PostgreSQL with pgvector (for production) or in-memory (for development)

Installation

# Option 1: Add to existing project
npm install @orquel/core @orquel/answer-aisdk ai @ai-sdk/openai

# Option 2: Create new project
npx create-orquel-app@latest my-rag-app
cd my-rag-app
cp .env.example .env
# Add your OPENAI_API_KEY
npm run dev

Minimal Example (5 minutes)

import { createOrquel } from '@orquel/core';
import { openAIEmbeddings } from '@orquel/embeddings-openai';
import { memoryStore } from '@orquel/store-memory';
import { aiSDKAnswerer } from '@orquel/answer-aisdk';
import { openai } from '@ai-sdk/openai';

// 1. Create Orquel instance
const orq = createOrquel({
  embeddings: openAIEmbeddings(),
  vector: memoryStore(), // Use pgvectorStore() in production
  answerer: aiSDKAnswerer({ model: openai('gpt-4-turbo') })
});

// 2. Ingest documents
const { chunks } = await orq.ingest({
  source: { title: 'Product Guide' },
  content: '# Features\nOur product has AI-powered search and analytics.'
});

// 3. Index for search
await orq.index(chunks);

// 4. Ask questions
const { answer, contexts } = await orq.answer('What features does the product have?');
console.log(answer);
// β†’ "The product has AI-powered search and analytics features."

πŸ“ View more examples β†’


🀝 AI SDK Integration Patterns

Pattern 1: Tools (Recommended for Chat)

Expose Orquel as AI SDK tools that the model can call:

import { createOrquelTools } from '@orquel/integration-aisdk';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const tools = createOrquelTools(orq, {
  search: { hybridSearch: true, defaultLimit: 5 },
  answer: { defaultTopK: 4 },
  ingest: true // Allow dynamic knowledge updates
});

// Use in API route (Next.js example)
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    tools,
    maxSteps: 5,
    system: 'You are a helpful assistant with access to a knowledge base.',
    messages
  });

  return result.toUIMessageStreamResponse();
}

Use case: Chat interfaces, agent systems, dynamic knowledge updates

Pattern 2: Middleware (Auto-Context Injection)

Automatically inject relevant context into every message:

import { createOrquelMiddleware } from '@orquel/integration-aisdk';

const middleware = createOrquelMiddleware(orq, {
  autoInject: true,
  topK: 3,
  threshold: 0.7
});

const result = streamText({
  model: openai('gpt-4-turbo'),
  experimental_providerMetadata: { orquel: middleware },
  messages
});
// Context automatically retrieved and injected!

Use case: Transparent RAG, simpler applications, always-on context

Pattern 3: Direct Usage (Backend APIs)

Use Orquel directly without AI SDK for REST APIs:

// app/api/search/route.ts
export async function POST(req: Request) {
  const { query } = await req.json();
  const { results } = await orq.query(query, { hybrid: true });
  return Response.json({ results });
}

// app/api/answer/route.ts
export async function POST(req: Request) {
  const { question } = await req.json();
  const { answer, contexts } = await orq.answer(question);
  return Response.json({ answer, sources: contexts });
}

Use case: REST APIs, batch processing, non-interactive systems


πŸ“¦ Packages

Core & Tools

  • βœ… @orquel/core – Core orchestrator, types, hybrid search, benchmarking
  • βœ… orquel – Meta package with CLI
  • βœ… create-orquel-app – Project scaffolder

AI SDK Integration (NEW!)

  • βœ… @orquel/answer-aisdk – Universal answerer (20+ providers via AI SDK)
  • βœ… @orquel/integration-aisdk – Helper tools (createOrquelTools, middleware)

Production Adapters

  • βœ… @orquel/store-pgvector – PostgreSQL + pgvector (production storage)
  • βœ… @orquel/lexical-postgres – PostgreSQL full-text search
  • βœ… @orquel/embeddings-openai – OpenAI text-embedding-3-small/large
  • ⚠️ @orquel/answer-openai – DEPRECATED (use @orquel/answer-aisdk)

Development Adapters

  • βœ… @orquel/store-memory – In-memory vector storage

MCP Integration

  • βœ… @orquel/mcp-server – Model Context Protocol server (11 tools)

Examples

  • βœ… Minimal Node.js – Basic RAG implementation
  • βœ… AI SDK Basic – Chat interface with streaming
  • βœ… PostgreSQL Hybrid – Production setup
  • βœ… MCP Integrations – Claude Code integration

Coming Soon

  • 🚧 @orquel/store-pinecone – Pinecone vector database
  • 🚧 @orquel/store-qdrant – Qdrant vector database
  • 🚧 @orquel/rerank-cohere – Cohere reranking
  • 🚧 @orquel/ingest-pdf – PDF document parsing

πŸ—οΈ Architecture

Orquel uses an adapter-driven architecture that makes every component swappable:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                 Your Application                        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                Orquel Orchestrator                      β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”      β”‚
β”‚  β”‚   ingest    β”‚ β”‚    index    β”‚ β”‚    query    β”‚      β”‚
β”‚  β”‚   & chunk   β”‚ β”‚ embeddings  β”‚ β”‚  & answer   β”‚      β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜      β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”      β”‚
β”‚  β”‚   hybrid    β”‚ β”‚ benchmark   β”‚ β”‚ evaluation  β”‚      β”‚
β”‚  β”‚   search    β”‚ β”‚ performance β”‚ β”‚   metrics   β”‚      β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜      β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                    Adapter Layer                        β”‚
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚
β”‚ β”‚  Embeddings  β”‚ β”‚ Vector Store β”‚ β”‚   Answerer   β”‚    β”‚
β”‚ β”‚   Adapter    β”‚ β”‚   Adapter    β”‚ β”‚   (AI SDK)   β”‚    β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚
β”‚ β”‚   Lexical    β”‚ β”‚     MCP      β”‚ β”‚  Benchmark   β”‚    β”‚
β”‚ β”‚   Search     β”‚ β”‚    Tools     β”‚ β”‚    Suite     β”‚    β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚               Implementation Layer                      β”‚
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚
β”‚ β”‚   OpenAI     β”‚ β”‚ PostgreSQL   β”‚ β”‚ OpenAI       β”‚    β”‚
β”‚ β”‚  Embeddings  β”‚ β”‚ + pgvector   β”‚ β”‚ Anthropic    β”‚    β”‚
β”‚ β”‚              β”‚ β”‚              β”‚ β”‚ Cohere, etc. β”‚    β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Key Benefits:

  • πŸ”„ Composable – Mix and match adapters
  • πŸš€ Upgradeable – Swap dev tools for production (memory β†’ pgvector)
  • πŸ§ͺ Testable – Mock any component
  • 🎯 Focused – Each adapter has one job
  • πŸ” Hybrid – Combine vector + lexical search
  • πŸ“Š Measurable – Built-in benchmarking

πŸ” Core Features

1. Hybrid Search (Vector + Lexical)

Combine vector similarity and full-text search for superior results:

const results = await orq.query('machine learning applications', {
  hybrid: true,
  denseWeight: 0.7,    // Vector similarity weight
  lexicalWeight: 0.3,  // Full-text search weight
  k: 10
});

Algorithms:

  • Reciprocal Rank Fusion (RRF) – Robust, no score calibration needed
  • Weighted Combination – Normalized score fusion
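RRF itself fits in a few lines of standalone TypeScript. This is a generic illustration of the algorithm, not Orquel's internal implementation:

```typescript
// Reciprocal Rank Fusion: merge two ranked result lists without needing
// comparable scores. Each id's fused score is the sum of 1 / (k + rank)
// over every list it appears in (k = 60 is the conventional constant).
function rrfFuse(
  denseIds: string[],   // ids ranked by vector similarity, best first
  lexicalIds: string[], // ids ranked by full-text relevance, best first
  k = 60
): string[] {
  const scores = new Map<string, number>();
  for (const [rank, id] of denseIds.entries()) {
    scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
  }
  for (const [rank, id] of lexicalIds.entries()) {
    scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// A document ranked well by both retrievers beats one that tops a single list:
console.log(rrfFuse(['a', 'b', 'c'], ['b', 'c', 'd']));
// → ['b', 'c', 'a', 'd']
```

Because RRF only looks at ranks, it needs no score calibration between the dense and lexical retrievers, which is why it is the robust default.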

2. Performance Benchmarking

Built-in performance testing:

import { benchmarkVectorStore } from '@orquel/core';

const results = await benchmarkVectorStore(vectorStore, {
  chunkCount: 1000,
  queryCount: 100,
  dimensions: 1536
});

console.log(`Avg query time: ${results.averageQueryTime}ms`);
console.log(`Throughput: ${results.queriesPerSecond} QPS`);

3. Evaluation Harness

Measure RAG quality:

import { RAGEvaluator } from '@orquel/core';

const evaluator = new RAGEvaluator(orq);
const metrics = await evaluator.evaluate(groundTruthQueries);

console.log(`F1 Score: ${metrics.f1Score.toFixed(3)}`);
console.log(`MRR: ${metrics.mrr.toFixed(3)}`);
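MRR is simple enough to compute by hand. A standalone sketch, independent of the `RAGEvaluator` API, assuming ground truth is given as one expected chunk id per query:

```typescript
// Mean Reciprocal Rank: for each query, take 1 / (1-based rank) of the
// first relevant result (0 if it never appears), then average over queries.
function meanReciprocalRank(
  rankedIdsPerQuery: string[][], // retrieval results per query, best first
  relevantIdPerQuery: string[]   // the expected id for each query
): number {
  let total = 0;
  for (let i = 0; i < rankedIdsPerQuery.length; i++) {
    const rank = rankedIdsPerQuery[i].indexOf(relevantIdPerQuery[i]);
    if (rank >= 0) total += 1 / (rank + 1);
  }
  return total / rankedIdsPerQuery.length;
}

// Query 1 hits at rank 1, query 2 at rank 2, query 3 misses entirely:
const mrr = meanReciprocalRank(
  [['a', 'b'], ['x', 'y'], ['p', 'q']],
  ['a', 'y', 'z']
);
console.log(mrr.toFixed(3)); // → "0.500"
```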

4. MCP Integration (Model Context Protocol)

11 tools for Claude Code and AI assistants:

# Install MCP server
npm install @orquel/mcp-server

# Start server
orquel mcp serve --stdio

# Available tools:
# - ingest, query, answer, list-sources, clear
# - hybrid-search, optimize-search, benchmark
# - analyze-kb, reindex, semantic-clusters

5. Production PostgreSQL

Enterprise-ready vector storage:

import { pgvectorStore } from '@orquel/store-pgvector';
import { postgresLexical } from '@orquel/lexical-postgres';

const orq = createOrquel({
  embeddings: openAIEmbeddings(),
  vector: pgvectorStore({
    connectionString: process.env.DATABASE_URL,
    dimensions: 1536,
    indexType: 'hnsw' // or 'ivfflat'
  }),
  lexical: postgresLexical({
    connectionString: process.env.DATABASE_URL
  })
});

Features:

  • Connection pooling (max 20 connections, min 5)
  • ACID transactions
  • HNSW and IVFFlat indexes
  • Health checks and stats
  • Batch operations

πŸ“š Use Cases

Documentation & Support

// Build a help center
const docs = createOrquel({
  embeddings: openAIEmbeddings(),
  vector: pgvectorStore({ connectionString: '...' }),
  lexical: postgresLexical({ connectionString: '...' }),
  answerer: aiSDKAnswerer({ model: openai('gpt-4-turbo') })
});

await docs.ingest({ source: { title: 'API Guide' }, content: apiDocs });
const { answer } = await docs.answer('How do I authenticate?');

Research & Analysis

// Analyze research papers
const research = createOrquel({
  embeddings: openAIEmbeddings({ model: 'text-embedding-3-large' }),
  vector: pgvectorStore({ connectionString: '...' }),
  answerer: aiSDKAnswerer({ model: anthropic('claude-3-5-sonnet-20241022') })
});

// Semantic clustering
const clusters = await semanticClusters(research);

Code Search & Understanding

// Make your codebase searchable
const codebase = createOrquel({
  embeddings: openAIEmbeddings(),
  vector: pgvectorStore({ connectionString: '...' }),
  lexical: postgresLexical({ connectionString: '...' })
});

// Hybrid search for best results
const results = await codebase.query('authentication middleware', {
  hybrid: true
});

AI Assistant Integration

# Start MCP server for Claude Code
orquel mcp serve --stdio

# Use in Claude Code:
# "Search my knowledge base for authentication docs"
# Claude automatically calls Orquel MCP tools

πŸ› οΈ Development

Building Custom Adapters

import type { EmbeddingsAdapter } from '@orquel/core';

export function customEmbeddings(): EmbeddingsAdapter {
  return {
    name: 'custom-embeddings',
    dim: 768,
    async embed(texts: string[]): Promise<number[][]> {
      // Your implementation
      return await yourService.embed(texts);
    }
  };
}
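For offline unit tests, the same interface can be satisfied by a deterministic stub. A minimal sketch; the `EmbeddingsAdapter` shape is declared locally here (mirroring the interface above) so the snippet stands alone:

```typescript
// Local copy of the adapter shape, so this snippet is self-contained.
interface EmbeddingsAdapter {
  name: string;
  dim: number;
  embed(texts: string[]): Promise<number[][]>;
}

// A deterministic fake: folds character codes into a fixed-size vector.
// Useless for real retrieval, handy for testing pipelines without API calls.
function fakeEmbeddings(dim = 8): EmbeddingsAdapter {
  return {
    name: 'fake-embeddings',
    dim,
    async embed(texts) {
      return texts.map((text) => {
        const v = new Array<number>(dim).fill(0);
        for (let i = 0; i < text.length; i++) {
          v[i % dim] += text.charCodeAt(i) / 255;
        }
        return v;
      });
    },
  };
}

(async () => {
  const [vec] = await fakeEmbeddings().embed(['hello']);
  console.log(vec.length); // → 8
})();
```

The same pattern works for a mock vector store or answerer, which is what makes each component independently testable.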

Local Development

git clone https://github.com/0xkoller/orquel.git
cd orquel
pnpm install
pnpm build
pnpm test

Performance Testing

# Run benchmarks
pnpm test:performance

# Integration tests (requires PostgreSQL)
pnpm test:integration

πŸ—ΊοΈ Roadmap

See PRODUCT_ROADMAP.md for complete strategic vision.

Current: v0.3.0 βœ…

  • βœ… AI SDK universal answerer (20+ providers)
  • βœ… AI SDK integration helpers (tools, middleware)
  • βœ… MCP server (11 tools)
  • βœ… PostgreSQL + pgvector
  • βœ… Hybrid search (RRF, weighted)
  • βœ… Benchmarking & evaluation

Next: v0.4.0 (Q1-Q2 2026)

  • More embeddings adapters (Cohere, Voyage)
  • Vector store adapters (Pinecone, Qdrant)
  • Document parsers (PDF, DOCX)
  • Reranking (Cohere)
  • Documentation site

Future: v1.0.0

  • Stable API
  • Comprehensive docs site
  • Production templates
  • Hosted platform (a "Supabase for RAG")

🎯 Who Should Use Orquel?

βœ… Perfect For:

  • TypeScript/Node.js developers
  • Teams building RAG systems
  • Developers who value clean code and type safety
  • Projects needing production PostgreSQL
  • AI assistant builders (MCP integration)
  • Performance-conscious teams
  • Startups and scale-ups

❌ Not Ideal For:

  • Python-first teams β†’ Use LlamaIndex or Haystack
  • Need 100+ pre-built integrations β†’ Use LangChain
  • Chat UI only β†’ Use Vercel AI SDK alone
  • Enterprise legacy requirements β†’ Use Haystack

🀝 Complementary:

  • Use with Vercel AI SDK for streaming chat UIs
  • Use with MCP for AI assistant integration
  • Use with PostgreSQL for production storage
  • Use with Next.js for web applications

🀝 Contributing

We welcome contributions! Orquel is designed to be community-driven.

Ways to contribute:

  • πŸ› Bug reports – Open an issue
  • πŸ’‘ Feature requests – Start a discussion
  • πŸ”Œ Build adapters – Extend the ecosystem
  • πŸ“– Improve docs – Help others get started
  • βœ… Add tests – Increase reliability

Getting started:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-adapter
  3. Make your changes and add tests
  4. Run pnpm build && pnpm test
  5. Submit a pull request

Popular contribution ideas:

  • Adapters for Cohere, Voyage, Mistral (embeddings)
  • Vector store adapters (Pinecone, Qdrant, Weaviate)
  • Document parsers (PDF, DOCX, Notion)
  • Examples for frameworks (Express, Remix, SvelteKit)



πŸ“Š Competitive Positioning

Orquel vs. Alternatives:

| | Orquel | LangChain | LlamaIndex | Vercel AI SDK |
|---|---|---|---|---|
| Best for | TypeScript RAG | General LLM tasks | Python RAG | Chat/streaming UI |
| Philosophy | Focused, minimal | Swiss Army knife | RAG specialist | UI/streaming |
| Core deps | 0 | Many | Medium | Medium |
| Learning curve | Gentle | Steep | Medium | Gentle |
| Provider flexibility | 20+ (AI SDK) | Many | Many | 20+ |
| Hybrid search | Built-in RRF | Manual | Yes | Manual |
| Benchmarking | Built-in | None | Limited | None |
| TypeScript DX | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐⭐⭐ |

Orquel's Sweet Spot: TypeScript teams building RAG systems who value simplicity, type safety, and performance over ecosystem breadth.

Prediction: Orquel aims to become the "Express.js of RAG" – focused, minimal, developer-friendly.


πŸ“œ License

MIT


πŸš€ Get Started

# Quick start
npx create-orquel-app@latest my-app
cd my-app
npm run dev

# Or add to existing project
npm install @orquel/core @orquel/answer-aisdk ai @ai-sdk/openai

πŸ“– View Examples | πŸ“ Read Product Roadmap | ⭐ Star on GitHub


Make knowledge usable. Start building with Orquel today. 🎯
