---
title: Quick Start
description: Get AI Server up and running quickly
---

To get started with AI Server, you can follow these steps:

- **Clone the Repository**: Clone the AI Server repository from GitHub.
- **Run the Installer**: Run `install.sh` to set up the AI Server and ComfyUI Agent.

### Quick Start Commands

```sh
git clone https://github.com/ServiceStack/ai-server
cd ai-server
cat install.sh | bash
```

### Running the Installer

The installer will detect common environment variables for supported AI providers such as OpenAI, Google, and Anthropic, and ask whether you want to add them to your AI Server configuration.
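
As a rough sketch of what this detection amounts to, the following reports which common provider key variables are present in your environment. The exact variable names AI Server looks for are assumptions based on common conventions, not taken from its documentation:

```sh
# Sketch: report which common provider API key variables are set.
# The variable names checked here are assumptions, not from AI Server's docs.
detected=""
for var in OPENAI_API_KEY ANTHROPIC_API_KEY GOOGLE_API_KEY MISTRAL_API_KEY GROQ_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    detected="$detected $var"
  fi
done
if [ -n "$detected" ]; then
  echo "Detected provider keys:$detected"
else
  echo "No provider keys detected"
fi
```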

Alternatively, you can specify which providers you want and provide their API keys before continuing with the installation.
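
For example, a minimal `.env` in the `ai-server` directory might look like the following. The variable names shown are common conventions and assumptions, not confirmed from AI Server's documentation; include only the providers you have keys for:

```sh
# Example .env fragment (hypothetical variable names; values are placeholders)
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GOOGLE_API_KEY=your-google-api-key
```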

### Optional ComfyUI Agent

If you choose to run AI Server from the installer, it will also prompt you to install the ComfyUI Agent and assume you want to run it locally.

If you want to run the ComfyUI Agent separately, you can follow these steps:
| 33 | + |
| 34 | +```sh |
| 35 | +git clone https://github.com/ServiceStack/agent-comfy.git |
| 36 | +cd agent-comfy |
| 37 | +cat install.sh | bash |
| 38 | +``` |

Providing your AI Server URL and Auth Secret when prompted will automatically register the ComfyUI Agent with your AI Server to handle related requests.
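
The installer normally records these answers for you. As a hedged sketch, the resulting agent configuration might look something like this, where the variable names, port, and Docker-accessible hostname are all assumptions:

```sh
# Hypothetical ComfyUI Agent settings written by the installer (names assumed).
# host.docker.internal is one way a container can reach the host; adjust as needed.
AI_SERVER_URL=http://host.docker.internal:5006
AI_SERVER_AUTH_SECRET=your-auth-secret
```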

:::info
You will be prompted for the AI Server URL and the ComfyUI Agent URL during installation; these should be URLs at which each service is reachable. When running locally, the ComfyUI Agent URL will be populated with a Docker-accessible address, as localhost won't be reachable from inside the AI Server container.
If you want to reset the ComfyUI Agent settings, remember to remove the provider from the AI Server Admin Portal.
:::

## Accessing AI Server

Once AI Server is running, you can access the Admin Portal at [http://localhost:5006/admin](http://localhost:5006/admin) to configure your AI providers and generate API keys.
If you first ran AI Server with API keys configured in your `.env` file, providers will be automatically configured for the related services.

:::info
You can reset the setup process by deleting your local `App_Data` directory and rerunning `docker compose up`.
:::

You will then be able to make requests to the AI Server API endpoints, and use the Admin Portal's interfaces like the [Chat interface](http://localhost:5006/admin/Chat) to work with your AI provider models.
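
For instance, assuming AI Server exposes an OpenAI-compatible chat completions endpoint (an assumption; check the API reference for the exact routes), a request using an API key generated in the Admin Portal might look like:

```sh
# Hypothetical request to a local AI Server. The endpoint path, port, and
# model name are assumptions; substitute an API key from the Admin Portal.
AI_SERVER_URL="http://localhost:5006"
API_KEY="your-api-key"
curl -s "$AI_SERVER_URL/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello"}]}' \
  || echo "request failed (is AI Server running?)"
```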