Transform your WhatsApp experience with a locally deployed AI assistant powered by DeepSeek-R1: all inference runs on your own hardware, so conversation data stays private without giving up strong conversational quality.
- Local LLM Deployment: All processing happens on your hardware - no data leaves your system
- Zero Cloud Dependencies: No connections to DeepSeek servers or third-party services
- Complete Data Control: You own and control all conversation data
- Open Source Transparency: Fully auditable codebase
Our solution leverages the latest DeepSeek-R1 model, offering:
- Advanced natural language understanding
- Context-aware responses
- Multi-turn conversation capabilities
- Customizable knowledge domains based on your data input
Transform your business and personal communications with privacy-preserving AI. This versatile solution supports use cases such as:
- Customer support
  - 24/7 instant customer support
  - Intelligent query handling
  - Automated ticket creation and tracking
  - Multi-language support
- Business operations
  - Meeting scheduling and calendar management
  - Order tracking and updates
  - Product recommendations
  - Invoice generation and processing
- Education
  - Interactive tutoring
  - Course information delivery
  - Assignment assistance
  - Study planning
- Healthcare
  - Appointment scheduling
  - Medication reminders
  - Basic health inquiries
  - Wellness tips and tracking
- Personal assistance
  - Task management
  - Event planning
  - Travel booking assistance
  - Restaurant recommendations
- Real-time conversation with local LLM models
- Webhook integration for instant message processing
- Media message support (images, documents, audio)
- Encrypted media handling
- Message status tracking
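Under the hood, the core flow is: the webhook receives a message, the local model generates a reply, and the WhatsApp Cloud API sends it back. The sketch below is illustrative rather than the project's actual `index.js`: the helper names, the port (5001, matching the ngrok step later), and the Graph API version are assumptions, while the payload shapes follow the WhatsApp Cloud API and Ollama's REST API.

```javascript
// Minimal sketch of the message flow: webhook in -> local LLM -> WhatsApp reply out.
require('dotenv').config();
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

// Ask the local DeepSeek-R1 model (served by Ollama on its default port 11434) for a reply.
async function generateReply(userText) {
  const { data } = await axios.post('http://localhost:11434/api/chat', {
    model: 'deepseek-r1',
    messages: [{ role: 'user', content: userText }],
    stream: false, // return one complete response instead of a token stream
  });
  return data.message.content;
}

// Send a text message back through the WhatsApp Cloud API (Graph API version is illustrative).
async function sendReply(phoneNumberId, to, body) {
  await axios.post(
    `https://graph.facebook.com/v18.0/${phoneNumberId}/messages`,
    { messaging_product: 'whatsapp', to, type: 'text', text: { body } },
    { headers: { Authorization: `Bearer ${process.env.WHATSAPP_TOKEN}` } }
  );
}

// Incoming messages arrive as POST requests from Meta.
app.post('/webhook', async (req, res) => {
  res.sendStatus(200); // acknowledge immediately; Meta retries unanswered deliveries

  const value = req.body.entry?.[0]?.changes?.[0]?.value;
  const message = value?.messages?.[0];
  if (!message || message.type !== 'text') return; // skip status updates and non-text messages

  const reply = await generateReply(message.text.body);
  await sendReply(value.metadata.phone_number_id, message.from, reply);
});

app.listen(5001, () => console.log('Bot listening on port 5001'));
```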
- Node.js
- Express.js
- Ollama (Local LLM)
- WhatsApp Business API
- Webhook Integration
- Node.js installed
- Ngrok account
- Meta Developer account
- Ollama installed locally
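You can confirm the locally installed tools are on your PATH before continuing:

```bash
node --version     # Node.js runtime
ollama --version   # Ollama local LLM runtime
```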
```bash
git clone https://github.com/yourusername/whatsapp_ollama_bot.git
cd whatsapp_ollama_bot
npm install express axios ollama-node dotenv
```
Create a `.env` file in the root directory with the following content:

```env
WHATSAPP_TOKEN=your_whatsapp_token
VERIFY_TOKEN=your_verify_token
```
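The `dotenv` package installed above loads this file into `process.env` at startup, for example:

```javascript
// Load .env into process.env before any other code reads configuration.
require('dotenv').config();

const WHATSAPP_TOKEN = process.env.WHATSAPP_TOKEN; // WhatsApp Cloud API access token
const VERIFY_TOKEN = process.env.VERIFY_TOKEN;     // value you will re-enter in the Meta portal
```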
- Download ngrok from the official website.
- Extract it and run `./ngrok http 5001`.
- Copy the generated HTTPS URL.
- Go to the Meta Developers Portal.
- Navigate to WhatsApp > Configuration.
- Set the Callback URL to `your_ngrok_url/webhook`.
- Enter the Verify Token you set as `VERIFY_TOKEN` in `.env`.
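When you save the configuration, Meta sends a single GET request to the callback URL carrying `hub.mode`, `hub.verify_token`, and `hub.challenge`; the server must echo the challenge back only if the token matches. A minimal standalone handler is sketched below (in the actual bot it sits on the same Express app as the message webhook):

```javascript
require('dotenv').config();
const express = require('express');
const app = express();

// Webhook verification handshake: echo hub.challenge only when the verify token matches.
app.get('/webhook', (req, res) => {
  const mode = req.query['hub.mode'];
  const token = req.query['hub.verify_token'];
  const challenge = req.query['hub.challenge'];

  if (mode === 'subscribe' && token === process.env.VERIFY_TOKEN) {
    res.status(200).send(challenge); // confirms ownership of the endpoint
  } else {
    res.sendStatus(403);             // token mismatch
  }
});

app.listen(5001);
```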
- Start the Ollama service: `ollama serve`
- Pull and run the DeepSeek-R1 model: `ollama run deepseek-r1`
- Verify the model is running: `ollama ps`
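Ollama serves a REST API on `localhost:11434` by default, which is what the bot calls. You can sanity-check it before starting the bot; the prompt here is just an example:

```bash
# One-off, non-streaming request to the local Ollama API
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1", "prompt": "Say hello in one sentence.", "stream": false}'
```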
Start the bot:

```bash
node index.js
```
- Real-time WhatsApp message processing
- Local LLM integration
- Basic conversation handling
- Webhook implementation
- Web search integration
- Vector-DB/MongoDB for RAG search
- Multi-modal processing (image/video/voice)
- Enhanced RAG capabilities
- Performance optimization
- Admin dashboard
- GPU: 12GB+ VRAM recommended
- Optimal performance with RTX 4070/4080/4090
- M1/M2 Macs with 16GB+ RAM supported
- Currently uses the `deepseek-r1` 7B model
- Chosen for a balance of response quality and resource usage
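If you want to pin the model size explicitly, Ollama tags follow a `model:tag` pattern; for example, the 7B variant can be pulled as shown below (other DeepSeek-R1 sizes are published on the Ollama library, subject to your hardware):

```bash
# Pin the 7B variant explicitly rather than relying on the default tag
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b
```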
This project is licensed under the MIT License - see the LICENSE file for details.