
Commit 2f2b723

WIP improving ai-server docs

1 parent 551424a commit 2f2b723

6 files changed: +115 −100 lines changed

MyApp/_pages/ai-server/configuration.md

Lines changed: 5 additions & 2 deletions
```diff
@@ -27,13 +27,16 @@ There are two ways to configure AI Providers:
 1. **.env File**: Update the `.env` file with your API keys and run the AI Server for the first time.
 2. **Admin Portal**: Use the Admin Portal to add, edit, or remove AI Providers and generate AI Server API keys.
 
+The provided `install.sh` script will prompt you to configure your AI Providers during the initial setup and populate the same `.env` file.
+
 ### Using the .env File
 
 The `.env` file is used to configure AI Providers during the initial setup of AI Server, and is the easiest way to get started.
 
 The .env file is located in the root of the AI Server repository and contains the following keys:
 
 - **OPENAI_API_KEY**: OpenAI API Key
+- **ANTHROPIC_API_KEY**: Anthropic API Key
 - **GOOGLE_API_KEY**: Google Cloud API Key
 - **OPENROUTER_API_KEY**: OpenRouter API Key
 - **MISTRAL_API_KEY**: Mistral API Key
```
```diff
@@ -47,7 +50,7 @@ The Admin Portal provides a more interactive way to manage your AI Providers aft
 
 To access the Admin Portal:
 
-1. Navigate to [http://localhost:5005/admin](http://localhost:5005/admin).
+1. Navigate to [http://localhost:5006/admin](http://localhost:5006/admin).
 2. Log in with the default password `p@55wOrd`.
 3. Click on the **AI Providers** tab to view and manage your AI Providers.
```

```diff
@@ -56,6 +59,7 @@ Here you can add, edit, or remove AI Providers, as well as generate API keys for
 AI Server supports the following AI Providers:
 
 - **OpenAI**: OpenAI Chat API
+- **Anthropic**: Anthropic Claude API
 - **Google**: Google Cloud AI
 - **OpenRouter**: OpenRouter API
 - **Mistral**: Mistral API
```
```diff
@@ -70,7 +74,6 @@ Here you can create new API keys, view existing keys, and revoke keys as needed.
 
 Keys can be created with expiration dates, and restrictions to specific API endpoints, along with notes to help identify the key's purpose.
 
-
 ## Stored File Management
 
 AI Server stores the results of AI operations in pre-configured paths.
```
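
The `.env`-based provider configuration described above can be sketched with a minimal parser. This is illustrative only: the key names come from the docs, but the parsing logic and function names are assumptions, not AI Server's actual loader.

```python
# Minimal sketch of loading AI Server-style .env entries into a provider map.
# Key names are from the docs above; the parsing logic is an illustrative
# assumption, not AI Server's actual implementation.

# Map of .env keys to the AI Provider each one configures
PROVIDER_KEYS = {
    "OPENAI_API_KEY": "OpenAI",
    "ANTHROPIC_API_KEY": "Anthropic",
    "GOOGLE_API_KEY": "Google",
    "OPENROUTER_API_KEY": "OpenRouter",
    "MISTRAL_API_KEY": "Mistral",
}

def parse_env(text: str) -> dict:
    """Parse KEY=value lines, skipping comments and blank lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def configured_providers(env: dict) -> list:
    """Return the providers whose API key is set to a non-empty value."""
    return [name for key, name in PROVIDER_KEYS.items() if env.get(key)]
```

For example, a `.env` containing only `OPENAI_API_KEY=sk-...` would yield a single configured provider, `OpenAI`, while keys left blank are treated as unconfigured.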

MyApp/_pages/ai-server/index.md

Lines changed: 41 additions & 49 deletions
```diff
@@ -1,65 +1,57 @@
 ---
-title: Overview
-description: Introduction to AI Server and its key features
+title: Quick Start
+description: Get AI Server up and running quickly
 ---
 
-AI Server allows you to orchestrate your systems AI requests through a single self-hosted application to control what AI Providers App's should use without impacting their client integrations. It serves as a private gateway to process LLM, AI, and image transformation requests, dynamically delegating tasks across multiple providers including Ollama, OpenAI, Anthropic, Mistral AI, Google Cloud, OpenRouter, GroqCloud, Replicate, Comfy UI, utilizing models like Whisper, SDXL, Flux, and tools like FFmpeg.
-
-```mermaid{.not-prose}
-flowchart TB
-    A[AI Server]
-    A --> D{LLM APIs}
-    A --> C{Ollama}
-    A --> E{Media APIs}
-    A --> F{Comfy UI
-    +
-    FFmpeg}
-    D --> D1[OpenAI, Anthropic, Mistral, Google, OpenRouter, Groq]
-    E --> E1[Replicate, dall-e-3, Text to speech]
-    F --> F1[Diffusion, Whisper, TTS]
+To get started with AI Server, use the following steps:
+
+- **Clone the Repository**: Clone the AI Server repository from GitHub.
+- **Run the Installer**: Run the `install.sh` script to set up the AI Server and ComfyUI Agent.
+
+### Quick Start Commands
+
+```sh
+git clone https://github.com/ServiceStack/ai-server
+cd ai-server
+cat install.sh | bash
 ```
+### Running the Installer
 
-## Why Use AI Server?
+The installer will detect common environment variables for the supported AI Providers like OpenAI, Google, Anthropic, and others, and ask if you want to add them to your AI Server configuration.
 
-AI Server simplifies the integration and management of AI capabilities in your applications:
+![](/img/pages/ai-server/install-ai-server.gif)
 
-- **Centralized Management**: Manage your LLM, AI and Media Providers, API Keys and usage from a single App
-- **Flexibility**: Easily switch 3rd party providers without impacting your client integrations
-- **Scalability**: Distribute workloads across multiple providers to handle high volumes of requests efficiently
-- **Security**: Self-hosted private gateway to keep AI operations behind firewalls, limit access with API Keys
-- **Developer-Friendly**: Simple development experience utilizing a single client and endpoint and Type-safe APIs
-- **Manage Costs**: Monitor and control usage across your organization with detailed request history
+Alternatively, you can specify which providers you want and provide their API keys before continuing with the installation.
 
-## Key Features
+### Optional ComfyUI Agent
 
-- **Unified AI Gateway**: Centralize all your AI requests & API Key management through a single self-hosted service
-- **Multi-Provider Support**: Seamlessly integrate with Leading LLMs, Ollama, Comfy UI, FFmpeg, and more
-- **Type-Safe Integrations**: Native end-to-end typed integrations for 11 popular programming languages
-- **Secure Access**: Use simple API key authentication to control which AI resources Apps can use
-- **Managed File Storage**: Built-in cached asset storage for AI-generated assets, isolated per API Key
-- **Background Job Processing**: Efficient handling of long-running AI tasks, capable of distributing workloads
-- **Monitoring and Analytics**: Real-time monitoring performance and statistics of executing AI Requests
-- **Recorded**: Auto archival of completed AI Requests into monthly rolling databases
-- **Custom Deployment**: Run as a single Docker container, with optional GPU-equipped agents for advanced tasks
+If you choose to run AI Server from the installer, it will also prompt you to install the ComfyUI Agent and assume you want to run it locally.
 
-## Supported AI Capabilities
+If you want to run the ComfyUI Agent separately, you can follow these steps:
+
+```sh
+git clone https://github.com/ServiceStack/agent-comfy.git
+cd agent-comfy
+cat install.sh | bash
+```
 
-- **Large Language Models**: Integrates with Ollama, OpenAI, Anthropic, Mistral, Google, OpenRouter and Groq
-- **Image Generation**: Leverage self-hosted ComfyUI Agents and SaaS providers like Replicate, DALL-E 3
-- **Image Transformations**: Dynamically transform and cache Image Variations for stored assets
-- **Audio Processing**: Text-to-speech, and speech-to-text with Whisper integration
-- **Video Processing**: Format conversions, scaling, cropping, and more with via FFmpeg
+Providing your AI Server URL and Auth Secret when prompted will automatically register the ComfyUI Agent with your AI Server to handle related requests.
 
-## Getting Started for Developers
+:::info
+You will be prompted to provide the AI Server URL and ComfyUI Agent URL during the installation.
+These should be the accessible URLs for your AI Server and ComfyUI Agent. When running locally, the ComfyUI Agent will be populated with a Docker-accessible path, as localhost won't be reachable from the AI Server container.
+If you want to reset the ComfyUI Agent settings, remember to remove the provider from the AI Server Admin Portal.
+:::
 
-1. **Setup**: Follow the [Quick Start guide](/ai-server/install) to deploy AI Server.
-2. **Configuration**: Use the Admin Portal to add your AI providers and generate API keys.
-3. **Integration**: Choose your preferred language and use AI Server's type-safe APIs.
-4. **Development**: Start making API calls to AI Server from your application, leveraging the full suite of AI capabilities.
+## Accessing AI Server
 
-## Learn More
+Once the AI Server is running, you can access the Admin Portal at [http://localhost:5006/admin](http://localhost:5006/admin) to configure your AI providers and generate API keys.
+If you first ran the AI Server with configured API Keys in your `.env` file, your providers will be automatically configured for the related services.
 
-- Hosted Example: [openai.servicestack.net](https://openai.servicestack.net)
-- Source Code: [github.com/ServiceStack/ai-server](https://github.com/ServiceStack/ai-server)
+:::info
+You can reset the process by deleting your local `App_Data` directory and rerunning `docker compose up`.
+:::
 
-AI Server is actively developed and continuously expanding its capabilities.
+You will then be able to make requests to the AI Server API endpoints, and access Admin Portal user interfaces like the [Chat interface](http://localhost:5006/admin/Chat) to use your AI Provider models.
```
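
The installer's environment-variable detection described above can be sketched as follows. This is a hypothetical illustration: the real `install.sh` is a shell script, and only the variable names are taken from the docs.

```python
import os

# Provider API-key environment variables the installer looks for (names per
# the docs); the detection logic here is an illustrative sketch, not the
# actual install.sh behavior.
KNOWN_KEYS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "GOOGLE_API_KEY",
    "OPENROUTER_API_KEY",
    "MISTRAL_API_KEY",
]

def detect_provider_keys(environ=None) -> dict:
    """Return the known provider keys already set (non-empty) in the environment."""
    environ = os.environ if environ is None else environ
    return {k: environ[k] for k in KNOWN_KEYS if environ.get(k)}
```

Keys found this way would be offered for inclusion in the generated `.env` configuration; anything not detected has to be entered manually.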

MyApp/_pages/ai-server/overview.md

Lines changed: 65 additions & 0 deletions
```diff
@@ -0,0 +1,65 @@
+---
+title: Overview
+description: Introduction to AI Server and its key features
+---
+
+AI Server allows you to orchestrate your system's AI requests through a single self-hosted application, controlling which AI Providers your Apps use without impacting their client integrations. It serves as a private gateway that processes LLM, AI, and image transformation requests, dynamically delegating tasks across multiple providers including Ollama, OpenAI, Anthropic, Mistral AI, Google Cloud, OpenRouter, GroqCloud, Replicate, and Comfy UI, utilizing models like Whisper, SDXL, and Flux, and tools like FFmpeg.
+
+```mermaid{.not-prose}
+flowchart TB
+    A[AI Server]
+    A --> D{LLM APIs}
+    A --> C{Ollama}
+    A --> E{Media APIs}
+    A --> F{Comfy UI
+    +
+    FFmpeg}
+    D --> D1[OpenAI, Anthropic, Mistral, Google, OpenRouter, Groq]
+    E --> E1[Replicate, dall-e-3, Text to speech]
+    F --> F1[Diffusion, Whisper, TTS]
+```
+
+## Why Use AI Server?
+
+AI Server simplifies the integration and management of AI capabilities in your applications:
+
+- **Centralized Management**: Manage your LLM, AI and Media Providers, API Keys and usage from a single App
+- **Flexibility**: Easily switch 3rd party providers without impacting your client integrations
+- **Scalability**: Distribute workloads across multiple providers to handle high volumes of requests efficiently
+- **Security**: Self-hosted private gateway to keep AI operations behind firewalls and limit access with API Keys
+- **Developer-Friendly**: Simple development experience using a single client and endpoint with type-safe APIs
+- **Manage Costs**: Monitor and control usage across your organization with detailed request history
+
+## Key Features
+
+- **Unified AI Gateway**: Centralize all your AI requests & API Key management through a single self-hosted service
+- **Multi-Provider Support**: Seamlessly integrate with leading LLMs, Ollama, Comfy UI, FFmpeg, and more
+- **Type-Safe Integrations**: Native end-to-end typed integrations for 11 popular programming languages
+- **Secure Access**: Use simple API key authentication to control which AI resources Apps can use
+- **Managed File Storage**: Built-in cached asset storage for AI-generated assets, isolated per API Key
+- **Background Job Processing**: Efficient handling of long-running AI tasks, capable of distributing workloads
+- **Monitoring and Analytics**: Real-time monitoring of the performance and statistics of executing AI Requests
+- **Recorded Requests**: Auto-archival of completed AI Requests into monthly rolling databases
+- **Custom Deployment**: Run as a single Docker container, with optional GPU-equipped agents for advanced tasks
+
+## Supported AI Capabilities
+
+- **Large Language Models**: Integrates with Ollama, OpenAI, Anthropic, Mistral, Google, OpenRouter and Groq
+- **Image Generation**: Leverage self-hosted ComfyUI Agents and SaaS providers like Replicate and DALL-E 3
+- **Image Transformations**: Dynamically transform and cache Image Variations for stored assets
+- **Audio Processing**: Text-to-speech and speech-to-text with Whisper integration
+- **Video Processing**: Format conversions, scaling, cropping, and more via FFmpeg
+
+## Getting Started for Developers
+
+1. **Setup**: Follow the [Quick Start guide](/ai-server/install) to deploy AI Server.
+2. **Configuration**: Use the Admin Portal to add your AI providers and generate API keys.
+3. **Integration**: Choose your preferred language and use AI Server's type-safe APIs.
+4. **Development**: Start making API calls to AI Server from your application, leveraging the full suite of AI capabilities.
+
+## Learn More
+
+- Hosted Example: [openai.servicestack.net](https://openai.servicestack.net)
+- Source Code: [github.com/ServiceStack/ai-server](https://github.com/ServiceStack/ai-server)
+
+AI Server is actively developed and continuously expanding its capabilities.
```
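
The API Key access model described above (keys with expiration dates and restrictions to specific API endpoints) can be illustrated with a minimal sketch. The data shape, field names, and endpoint labels below are assumptions for illustration only, not AI Server's implementation.

```python
from datetime import datetime, timezone

# Hypothetical API key record modeled on the docs' description: optional
# expiry date and optional endpoint restrictions. Field names are assumptions.
class ApiKey:
    def __init__(self, key, expires_at=None, restrict_to=None, notes=""):
        self.key = key
        self.expires_at = expires_at    # optional timezone-aware datetime
        self.restrict_to = restrict_to  # optional set of endpoint names
        self.notes = notes              # free-form note identifying the key

def is_allowed(api_key: ApiKey, endpoint: str, now=None) -> bool:
    """Check a key against its expiry date and endpoint restrictions."""
    now = now or datetime.now(timezone.utc)
    if api_key.expires_at is not None and now >= api_key.expires_at:
        return False  # key has expired
    if api_key.restrict_to and endpoint not in api_key.restrict_to:
        return False  # key is not valid for this endpoint
    return True
```

A key with no expiry and no restrictions passes every check; adding either field narrows what the key can do, which matches the "limit access with API Keys" model described above.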

MyApp/_pages/ai-server/quickstart.md

Lines changed: 0 additions & 45 deletions
This file was deleted.

MyApp/_pages/ai-server/sidebar.json

Lines changed: 4 additions & 4 deletions
```diff
@@ -3,10 +3,6 @@
   "text": "AI Server",
   "link": "/ai-server/",
   "children": [
-    {
-      "text": "Quick Start",
-      "link": "/ai-server/quickstart"
-    },
     {
       "text": "Configuration",
       "link": "/ai-server/configuration"
@@ -18,6 +14,10 @@
     {
       "text": "ComfyUI Agent",
       "link": "/ai-server/comfy-extension"
+    },
+    {
+      "text": "Overview",
+      "link": "/ai-server/overview"
     }
   ]
 },
```