A sophisticated web interface for seamless AI model interaction through Ollama
Transform your local AI experience with this production-ready web application that provides a ChatGPT-like interface for interacting with local AI models through Ollama. Built with modern web technologies and designed for performance, privacy, and ease of use.
| Feature | Description |
|---|---|
| Advanced Model Management | Browse, install, and configure AI models with real-time progress tracking |
| Intelligent Chat Interface | ChatGPT-style chat UI with streaming responses and conversation history |
| Smart Context Awareness | Understands references like "that" and "it" to maintain conversation continuity |
| Granular Configuration | Fine-tune model parameters, API endpoints, and application behavior |
| Dynamic Theming | Dark/light mode support with system preference detection |
| Responsive Design | Optimized for desktop, tablet, and mobile devices |
| Real-time Streaming | Live streaming responses with progress indicators |
| Persistent Storage | Local conversation history and settings management |
| Privacy-First | All data stays on your machine, no external API calls |
| Zero-Config Setup | Works out of the box with sensible defaults |
- Node.js (v18 or higher)
- npm or yarn
- Git
```bash
# Using Homebrew
brew install ollama

# Or using curl
curl -fsSL https://ollama.ai/install.sh | sh
```

Download from ollama.ai and run the installer.

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```
```bash
# Start Ollama service
ollama serve

# Download a model (in a new terminal)
ollama pull llama3.2:3b

# Verify installation
ollama list
```

```bash
# Clone repository
git clone https://github.com/rudra-sah00/Web-AI.git
cd Web-AI

# Install dependencies
npm install

# Start development server
npm run dev
```
- Open http://localhost:3000
- Go to Settings → Models
- Select your downloaded model
- Start chatting!
```
Web-AI/
├── src/
│   ├── app/                  # Next.js App Router
│   │   ├── api/              # API endpoints
│   │   │   ├── chats/        # Chat management
│   │   │   ├── config/      # Configuration
│   │   │   └── settings/    # Settings management
│   │   ├── models/           # Model management page
│   │   ├── modules/          # Modules page
│   │   └── settings/         # Settings page
│   ├── components/           # React components
│   │   ├── chat/             # Chat interface
│   │   ├── sidebar/          # Navigation sidebar
│   │   ├── theme/            # Theme management
│   │   └── ui/               # Base UI components
│   ├── services/             # Business logic
│   ├── lib/                  # Utilities
│   └── config/               # Configuration
├── docs/                     # Documentation
│   ├── installation.md       # Setup guide
│   ├── architecture.md       # System architecture
│   └── api.md                # API documentation
├── package.json
└── README.md
```
```mermaid
graph TB
  subgraph "Frontend Layer"
    A[React Components]
    B[Next.js Router]
    C[State Management]
  end
  subgraph "Service Layer"
    D[Chat Service]
    E[Ollama Service]
    F[Config Service]
    G[Settings Service]
  end
  subgraph "External Services"
    H[Ollama Server]
    I[File System]
    J[Local Storage]
  end
  A --> D
  A --> F
  B --> A
  D --> E
  E --> H
  F --> I
  C --> J
  style A fill:#61dafb
  style H fill:#ff6b35
  style I fill:#4ade80
```
- Next.js 15 - Full-stack React framework with App Router
- TypeScript - Type-safe development
- React 19 - User interface library
- Tailwind CSS - Utility-first CSS framework
- Shadcn/ui - Modern component library
- Lucide React - Beautiful icon set
- Lottie React - Smooth animations
- Ollama - Local AI model runtime
- Server-Sent Events - Real-time streaming responses
- Custom Context Engine - Intelligent conversation continuity
- ESLint - Code linting and quality
- PostCSS - CSS processing
- Autoprefixer - CSS vendor prefixes
| Document | Description |
|---|---|
| Installation Guide | Comprehensive setup instructions |
| Architecture Guide | System design and patterns |
| API Documentation | REST API reference |
Create `.env.local`:

```bash
# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434

# Application Settings
NEXT_PUBLIC_APP_NAME=Web-AI
NEXT_PUBLIC_APP_VERSION=1.0.0
```
```json
{
  "defaultModel": "llama3.2:3b",
  "modelConfigs": {
    "llama3.2:3b": {
      "name": "Llama 3.2 3B",
      "parameters": {
        "temperature": 0.7,
        "top_p": 0.9,
        "max_tokens": 2048
      }
    }
  }
}
```
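For reference, the shape of this file can be captured in TypeScript. The interface names below are illustrative assumptions, not the app's actual exports:

```ts
// Illustrative types for the model configuration above (names are assumptions)
interface ModelParameters {
  temperature: number;
  top_p: number;
  max_tokens: number;
}

interface ModelConfig {
  name: string;
  parameters: ModelParameters;
}

interface AppConfig {
  defaultModel: string;
  modelConfigs: Record<string, ModelConfig>;
}
```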
```ts
// Start a conversation
const response = await ChatService.generateStreamingResponse(
  "Explain quantum computing",
  chatHistory,
  (chunk) => console.log(chunk)
);
```
```
// The AI understands references
User: "Write a Python function to calculate factorial"
AI: [provides factorial function]
User: "Now write that in Java"
AI: [converts the factorial function to Java]
User: "Add error handling to it"
AI: [adds error handling to the Java version]
```
```ts
// Get available models
const models = await OllamaService.getAvailableModels();

// Install a new model
await OllamaService.pullModel("llama3.1:8b", onProgress);
```
- Conversation Memory: Maintains chat history for better responses (see the sketch below)
- Smart References: Understands "that", "it", "this" from previous context
- Adaptive Prompting: Adjusts based on conversation type (coding, Q&A, etc.)
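As a rough illustration of how conversation memory can work, the sketch below folds recent history into the prompt so follow-up references stay resolvable. `buildContextPrompt` and `MAX_CONTEXT_MESSAGES` are hypothetical names, not the app's actual API:

```ts
// Hypothetical sketch of history folding; not the app's actual context engine
const MAX_CONTEXT_MESSAGES = 10;

function buildContextPrompt(
  history: { role: "user" | "assistant"; content: string }[],
  userInput: string
): string {
  // Keep the most recent turns so references like "that" or "it" stay resolvable
  const transcript = history
    .slice(-MAX_CONTEXT_MESSAGES)
    .map((m) => `${m.role === "user" ? "User" : "Assistant"}: ${m.content}`)
    .join("\n");
  return `${transcript}\nUser: ${userInput}\nAssistant:`;
}
```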
- Live Responses: See AI responses as they're generated (a stream-reading sketch follows this list)
- Progress Indicators: Visual feedback during generation
- Error Recovery: Graceful handling of connection issues
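A minimal sketch of consuming such a stream on the client, assuming Ollama's newline-delimited JSON chunks with a `response` field (the helper name `readStream` is an assumption):

```ts
// Sketch: read an NDJSON token stream and forward text chunks to the UI
async function readStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<void> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next read
    for (const line of lines) {
      if (line.trim()) onChunk(JSON.parse(line).response ?? "");
    }
  }
}
```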
- ChatGPT-style UI: Familiar and intuitive design
- Dark/Light Themes: Automatic system preference detection
- Responsive Layout: Works on all device sizes
- Smooth Animations: Lottie-powered micro-interactions
We welcome contributions! Please see our contributing guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
```bash
# Check if Ollama is running
ollama serve

# Verify accessibility
curl http://localhost:11434/api/tags
```

```bash
# Install a model
ollama pull llama3.2:3b

# List installed models
ollama list
```

```bash
# Use a different port
npm run dev -- -p 3001
```
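If you prefer to check connectivity from code rather than curl, a small sketch (assuming the default Ollama port):

```ts
// Sketch: mirror the curl test above from the app side
async function ollamaIsUp(baseUrl = "http://localhost:11434"): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    return res.ok;
  } catch {
    return false;
  }
}
```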
- Check the Installation Guide
- Review the Architecture Guide
- Consult the API Documentation
- Open an Issue
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama - For providing the local AI runtime
- Vercel - For the excellent Next.js framework
- Shadcn - For the beautiful component library
- Tailwind CSS - For the utility-first CSS framework
Built with ❤️ for the local AI community

⭐ Star this project • 🐛 Report Bug • 💡 Request Feature
- Frontend Framework: Next.js 15.3.1 (App Router)
- Language: TypeScript 5.0 with strict type checking
- UI Framework: React 19.0 with modern hooks
- Styling: TailwindCSS 4.0 + shadcn/ui components
- State Management: React Context + Custom hooks
- API Integration: RESTful APIs with streaming support
- Build Tool: Turbopack for ultra-fast development
```
Modular Component Structure
├── AppLayout (Main application shell)
├── ChatModule (Core chat functionality)
├── SettingsDialog (Configuration management)
├── Sidebar (Navigation & chat history)
├── ThemeProvider (Dark/light mode)
└── UI Components (Reusable design system)
```
- OllamaService: Direct API communication with Ollama
- ChatService: Chat session management and persistence (see the composition sketch below)
- ConfigService: Application configuration handling
- ModelParameterService: Model configuration management
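A minimal sketch of how these services might compose; method names like `send` and `config.get` are assumptions, not the project's actual code, and `readStream` is the NDJSON helper sketched earlier:

```ts
// Hypothetical composition: ChatService delegates generation to OllamaService
class ChatService {
  constructor(
    private ollama: { streamGeneration(prompt: string, model: string): Promise<ReadableStream<Uint8Array>> },
    private config: { get(key: string): string }
  ) {}

  async send(userInput: string, onChunk: (text: string) => void): Promise<void> {
    const model = this.config.get("defaultModel"); // e.g. "llama3.2:3b"
    const stream = await this.ollama.streamGeneration(userInput, model);
    await readStream(stream, onChunk); // stream tokens to the UI as they arrive
    // ...persist the completed exchange via the chat store here
  }
}
```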
```bash
# 1. Clone the repository
git clone https://github.com/rudra-sah00/Web-AI.git
cd Web-AI

# 2. Install dependencies
npm install
# or using yarn
yarn install
# or using bun
bun install

# 3. Start Ollama service (in separate terminal)
ollama serve

# 4. Run development server
npm run dev
# or
yarn dev
# or
bun dev

# 5. Open your browser
# Navigate to http://localhost:3000
```
```bash
# Build optimized production bundle
npm run build

# Start production server
npm run start
```
- App Router: Utilizing Next.js 15's latest routing paradigm
- Server Components: Optimized rendering with React Server Components
- Turbopack: Lightning-fast development with next-generation bundling
- API Routes: RESTful endpoints for chat management and configuration (a route-handler sketch follows below)
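For illustration, a route handler in the App Router style; the path and response shape are assumptions based on the project structure below, not the project's actual endpoint:

```ts
// Hypothetical app/api/chats/route.ts: list stored conversations
import { NextResponse } from "next/server";
import { promises as fs } from "fs";

export async function GET(): Promise<NextResponse> {
  // Conversations are stored on disk under data/chats/ (see project structure)
  const files = await fs.readdir("data/chats").catch(() => [] as string[]);
  return NextResponse.json({ chats: files });
}
```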
```ts
// Type-safe service layer with comprehensive interfaces
interface OllamaModel {
  id: string;
  name: string;
  description: string;
  parameters: ModelParameters;
  installed: boolean;
}

interface ChatMessage {
  id: string;
  content: string;
  role: 'user' | 'assistant';
  timestamp: Date;
  model?: string;
}
```
- shadcn/ui: Professional component library implementation
- Radix UI: Accessible, unstyled component primitives
- Framer Motion: Smooth animations and transitions
- Responsive Design: Mobile-first approach with TailwindCSS
- Theme System: Dynamic dark/light mode with system preference detection
- Streaming Responses: Live AI response generation with progress indicators
- WebSocket-like Experience: Seamless real-time communication
- Progress Tracking: Model installation and download progress
- Error Handling: Comprehensive error boundaries and user feedback
- Custom Hooks: Reusable logic with `useOllamaModels`, `useModelSearch`
- Context API: Global state management for themes and configuration
- Memoization: Optimized re-rendering with React.memo and useMemo (see the sketch below)
- Lazy Loading: Code splitting for optimal bundle sizes
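As a small example of the memoization pattern; the props shown are assumptions about `MessageItem`, not its actual interface:

```tsx
// Illustrative React.memo usage: re-render a message row only when its props change
import { memo } from "react";

interface Message {
  role: "user" | "assistant";
  content: string;
}

export const MessageItem = memo(function MessageItem({ message }: { message: Message }) {
  return <div className={message.role}>{message.content}</div>;
});
```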
```
Web-AI/
├── src/
│   ├── app/                      # Next.js App Router
│   │   ├── layout.tsx            # Root layout with providers
│   │   ├── page.tsx              # Main application page
│   │   └── api/                  # API routes
│   │       ├── chats/            # Chat management endpoints
│   │       └── config/           # Configuration endpoints
│   ├── components/
│   │   ├── chat/                 # Chat interface components
│   │   │   ├── ChatModule.tsx
│   │   │   ├── ChatHeader.tsx
│   │   │   ├── MessageInput.tsx
│   │   │   └── MessageItem.tsx
│   │   ├── setting/              # Settings components
│   │   ├── sidebar/              # Navigation components
│   │   ├── theme/                # Theme system
│   │   └── ui/                   # Reusable UI components
│   ├── services/                 # Business logic layer
│   │   ├── OllamaService.ts      # Ollama API integration
│   │   ├── ChatService.ts        # Chat management
│   │   └── ConfigService.ts      # Configuration handling
│   └── lib/                      # Utilities and helpers
├── data/                         # Application data
│   ├── runtime-config.json       # Runtime configuration
│   └── chats/                    # Stored conversations
└── components.json               # shadcn/ui configuration
```
```ts
class OllamaService {
  private apiUrl: string;
  private modelInstallProgress: Map<string, ProgressData>;

  async streamGeneration(prompt: string, model: string): Promise<ReadableStream> {
    // Minimal illustrative implementation: POST to Ollama's /api/generate
    // and hand back the body stream (the real service adds richer error handling)
    const res = await fetch(`${this.apiUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: true }),
    });
    if (!res.ok || !res.body) throw new Error(`Ollama request failed: ${res.status}`);
    return res.body;
  }

  async pullModel(modelName: string, onProgress: ProgressCallback): Promise<void> {
    // Real-time model installation with progress tracking
  }
}
```
- Compound Components: Flexible, composable UI patterns
- Render Props: Dynamic component composition
- Custom Hooks: Reusable stateful logic
- Higher-Order Components: Cross-cutting concerns
```ts
import { useState, useEffect } from "react";

// Custom hook for Ollama models: fetch once on mount (illustrative completion)
const useOllamaModels = () => {
  const [models, setModels] = useState<OllamaModel[]>([]);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  useEffect(() => {
    setLoading(true);
    OllamaService.getAvailableModels()
      .then(setModels, (e) => setError(String(e)))
      .finally(() => setLoading(false));
  }, []);
  return { models, loading, error };
};
```
- Real-time model discovery and installation
- Progress tracking with visual indicators
- Model parameter fine-tuning interface
- Automatic model updates and health checks
- Stream-based response rendering
- Message history with search and filtering
- Conversation branching and management
- Prompt template system for common use cases
- Runtime configuration updates
- API endpoint management
- Model parameter presets
- Export/import settings functionality
- Code splitting with dynamic imports (a `next/dynamic` sketch follows this list)
- Image optimization with Next.js Image component
- Bundle analysis and size optimization
- Efficient re-rendering with React.memo
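A sketch of the dynamic-import pattern with `next/dynamic`; the import path matches the project structure above, while the loading fallback and default export are assumptions:

```tsx
// Load ChatModule on demand to keep it out of the initial bundle
import dynamic from "next/dynamic";

const ChatModule = dynamic(() => import("@/components/chat/ChatModule"), {
  loading: () => <p>Loading chat…</p>,
});
```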
```bash
# .env.local
NEXT_PUBLIC_OLLAMA_API_URL=http://localhost:11434
NEXT_PUBLIC_APP_NAME=Ollama Web AI
NEXT_PUBLIC_MAX_CHAT_HISTORY=100
```
```json
{
  "ollamaModels": [
    {
      "id": "qwen:0.5b",
      "name": "Qwen 2.5 (0.5B)",
      "description": "Efficient small language model",
      "parameters": {
        "temperature": 0.9,
        "top_p": 0.5,
        "max_tokens": 4070
      }
    }
  ]
}
```
- ESLint: Strict linting with Next.js recommended rules
- TypeScript: Full type coverage with strict mode
- Prettier: Consistent code formatting
- Husky: Pre-commit hooks for quality assurance
- Unit Tests: Component testing with Jest & React Testing Library
- Integration Tests: API route testing
- E2E Tests: User journey validation with Playwright
- Type Safety: Comprehensive TypeScript coverage
- Lighthouse: Performance, accessibility, and SEO optimization
- Bundle Analyzer: Code splitting optimization
- Core Web Vitals: Real user metrics tracking
```bash
# One-click deployment
npx vercel

# Or connect your GitHub repository for automatic deployments
```
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Install all dependencies (devDependencies are needed for `next build`)
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```
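A typical workflow with this Dockerfile would be `docker build -t web-ai .` followed by `docker run -p 3000:3000 web-ai` (the image name is illustrative); note the container still needs network access to a running Ollama server.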
```bash
# Production build
npm run build

# Start with PM2
pm2 start npm --name "ollama-web-ai" -- start
```
We welcome contributions! Please see our Contributing Guidelines for details.
```bash
# Fork the repository
git clone https://github.com/your-username/Web-AI.git

# Create feature branch
git checkout -b feature/amazing-feature

# Make your changes and commit
git commit -m "Add amazing feature"

# Push to your fork and create PR
git push origin feature/amazing-feature
```
- Multi-language Support - Internationalization (i18n)
- Plugin System - Extensible architecture for custom integrations
- Advanced Analytics - Usage statistics and performance metrics
- Team Collaboration - Shared workspaces and chat rooms
- API Documentation - Interactive OpenAPI documentation
- Mobile App - React Native companion application
Category | Technologies | Implementation |
---|---|---|
Frontend | React 19, Next.js 15, TypeScript | Modern React patterns, Server Components, App Router |
Styling | TailwindCSS, shadcn/ui, Framer Motion | Design system, responsive design, animations |
State Management | Context API, Custom Hooks | Global state, local state optimization |
API Integration | REST APIs, Streaming, WebSockets | Real-time communication, error handling |
Performance | Code Splitting, Lazy Loading, Memoization | Bundle optimization, render optimization |
Developer Experience | TypeScript, ESLint, Hot Reload | Type safety, code quality, fast development |
⭐ Star this repository if you find it helpful!
This project showcases modern web development practices and is actively maintained.