Modern web interface for the Qwory AI agent framework, designed to run completely with local models via Ollama without requiring any external API keys.
- Chat Interface: Modern chat UI with message history
- Streaming Responses: Real-time streaming via WebSockets
- Local Model Execution: Run models locally with Ollama (no API keys needed)
- Dark/Light Theme: Switch between dark and light modes
- Responsive Design: Works on all devices
- Model Selection: Support for multiple model providers
- Docker Integration: Easy setup with Docker Compose
To get started:

- Clone the repository: `git clone https://github.com/yourusername/qwory.git`, then `cd qwory`
- Start the application with Docker Compose: `docker-compose up -d`
- Access the web interface:
  - Frontend: http://localhost:3000
  - API: http://localhost:8000

The system will automatically download and set up the Llama 3 model on the first run.
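Once the containers are up, a quick way to confirm the API is reachable is to call the model-listing endpoint documented further below. This is a minimal sketch using `requests`; it assumes only the default port 8000 and makes no assumptions about the response beyond it being JSON.

```python
import requests  # pip install requests

# Minimal smoke test: hit the documented model-listing endpoint on the
# API container started by docker-compose (assumes the default port 8000).
resp = requests.get("http://localhost:8000/api/models", timeout=10)
resp.raise_for_status()
print(resp.json())  # exact payload shape depends on the backend
```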
For local development without Docker, run the frontend and backend separately.

Frontend:

- Navigate to the frontend directory: `cd qwory-ui`
- Install dependencies: `npm install`
- Start the development server: `npm run dev`
Backend:

- Navigate to the backend directory: `cd qwory-api`
- Create and activate a virtual environment: `python -m venv venv`, then `source venv/bin/activate` (on Windows: `venv\Scripts\activate`)
- Install dependencies: `pip install -r requirements.txt`
- Start the development server: `uvicorn app.main:app --reload`
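For orientation, here is a minimal sketch of the kind of `app/main.py` that the `uvicorn app.main:app --reload` command expects, wired to the endpoints listed later in this README and to the `ollama` Python client. The request and response field names (`message`, `model`, `response`) are illustrative assumptions, not the actual qwory-api schema; the real application code lives in `qwory-api/app/`.

```python
# Illustrative sketch only -- not the real qwory-api implementation.
from fastapi import FastAPI, WebSocket
from pydantic import BaseModel
import ollama  # talks to the local Ollama daemon; no API keys needed

app = FastAPI()

class ChatRequest(BaseModel):
    message: str
    model: str = "llama3"  # assumed default; see the model list below

@app.get("/api/models")
def list_models():
    # Return whichever models the local Ollama daemon has installed.
    return ollama.list()

@app.post("/api/chat")
def chat(req: ChatRequest):
    # One-shot, non-streaming completion from the selected local model.
    reply = ollama.chat(model=req.model,
                        messages=[{"role": "user", "content": req.message}])
    return {"response": reply["message"]["content"]}

@app.websocket("/ws/chat")
async def chat_ws(ws: WebSocket):
    # Stream tokens back to the client as Ollama produces them.
    await ws.accept()
    prompt = await ws.receive_text()
    for chunk in ollama.chat(model="llama3",
                             messages=[{"role": "user", "content": prompt}],
                             stream=True):
        await ws.send_text(chunk["message"]["content"])
    await ws.close()
```

The sketch handles requests synchronously for simplicity, which blocks the event loop while Ollama generates; a production backend would offload generation to a thread pool or use the async Ollama client.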
Qwory Web UI uses Ollama to run AI models locally without requiring any API keys. Here are the available models:
General Purpose:
- Llama 3 (`llama3`)
- Llama 3 8B (`llama3:8b`)
- Mistral (`mistral`)

Code Specialized:
- DeepSeek Coder (`deepseek-coder`)
- Code Llama (`codellama`)
- Wizard Coder (`wizard-coder`)

Smaller Models:
- Phi-3 Mini (`phi3:mini`)
- Mistral 7B (`mistral:7b`)
To use a specific model, select it from the dropdown in the settings page of the web UI, or specify it when starting Qwory from the command line.
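If you want to fetch one of these tags ahead of time and try it outside the UI, the `ollama` Python client can pull and query the local daemon directly. This is just a convenience sketch and bypasses Qwory entirely; it requires the Ollama daemon to be running.

```python
import ollama  # pip install ollama

# Download a tag from the list above, then run a quick prompt against it
# directly through the local Ollama daemon (independent of the Qwory UI).
ollama.pull("deepseek-coder")
reply = ollama.chat(
    model="deepseek-coder",
    messages=[{"role": "user", "content": "Write a hello-world script in Python."}],
)
print(reply["message"]["content"])
```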
Project layout:

qwory/
├── docker-compose.yml    # Docker Compose configuration
├── qwory-ui/             # Frontend application
│   ├── src/              # React source code
│   ├── Dockerfile        # Frontend Docker configuration
│   └── nginx.conf        # Nginx configuration
├── qwory-api/            # Backend application
│   ├── app/              # FastAPI application code
│   ├── Dockerfile        # Backend Docker configuration
│   └── requirements.txt  # Python dependencies
└── data/                 # Shared data volume
API endpoints:

- `GET /api/models` - List available models
- `POST /api/chat` - Send a message to the agent
- `WS /ws/chat` - WebSocket endpoint for streaming responses
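As an illustration of the streaming endpoint, the sketch below connects with the `websockets` library, sends a prompt as plain text, and prints whatever frames come back. The message framing (plain text vs. JSON) is an assumption; check the backend code for the actual protocol.

```python
import asyncio
import websockets  # pip install websockets

async def stream_chat(prompt: str) -> None:
    # Connect to the streaming chat endpoint and print tokens as they arrive.
    async with websockets.connect("ws://localhost:8000/ws/chat") as ws:
        await ws.send(prompt)
        async for token in ws:  # iteration ends when the server closes the stream
            print(token, end="", flush=True)
    print()

asyncio.run(stream_chat("Explain what Qwory does in one sentence."))
```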
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for making local models easily accessible
- FastAPI for the API framework
- React for the frontend framework
- Tailwind CSS for styling