This project is an advanced hybrid AI chatbot that integrates multiple AI models to enhance interaction capabilities. The chatbot lets users choose between different AI models, including the OpenAI, Gemini, DeepSeek, and Groq APIs, and a locally hosted Large Language Model (LLM) powered by HuggingFace Transformers. The system is designed with a user-friendly front-end interface and a robust backend to ensure seamless processing and scalability.
- Multiple AI Model Integration: Users can select from the OpenAI, Gemini, DeepSeek, or Groq APIs, or a locally hosted LLM.
- Model Switching: Easily switch between different AI models during interactions.
- Hybrid Outputs: Combine outputs from multiple models for more comprehensive responses.
- JSON-Based Conversation Storage: Efficiently store and manage conversations in JSON format for easy data handling and scalability.
- User-Friendly Interface: Built with HTML, CSS, and JavaScript for a smooth user experience.
- Robust Backend: Developed with FastAPI for reliable and efficient processing.
- Front-End: HTML, CSS, JavaScript
- Back-End: FastAPI
- AI Models: OpenAI, Gemini, DeepSeek, Groq, HuggingFace Transformers
- Data Handling: JSON
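As a rough sketch of what the JSON-based conversation storage could look like, the snippet below appends labeled chat messages to a JSON file keyed by conversation ID. The file name `conversations.json` and the record fields are illustrative assumptions, not the project's actual schema:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative path; the project may store conversations elsewhere.
STORE = Path("conversations.json")

def append_message(conversation_id: str, role: str, model: str, text: str) -> None:
    """Append one chat message to a JSON file keyed by conversation id."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data.setdefault(conversation_id, []).append({
        "role": role,    # "user" or "assistant"
        "model": model,  # e.g. "gemini", "openai", "local"
        "text": text,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    STORE.write_text(json.dumps(data, indent=2))

append_message("demo", "user", "gemini", "hi how are you")
```

Keeping the whole store as one JSON object makes it trivial to inspect, back up, or migrate to a database later.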
Follow these steps to set up and run the chatbot:
- Clone the Repository:

  ```bash
  git clone https://github.com/JANNATHA-MANISH/AI-Chatbot-Hybrid.git
  cd AI-Chatbot-Hybrid
  ```
- Set Up a Virtual Environment:

  ```bash
  python -m venv env

  # On Windows:
  .\env\Scripts\activate

  # On macOS/Linux:
  source env/bin/activate
  ```
- Install Dependencies:

  ```bash
  pip install -r requirements.txt
  pip install accelerate
  ```
- Configure API Keys and Model Settings:

  Create a `.env` file in the root directory and set your API keys and model-related variables:

  ```env
  HUGGINGFACE_TOKEN=your_huggingface_token
  GEMINI_API_KEY=your_gemini_api_key
  OPENAI_API_KEY=your_openai_api_key
  DEEPSEEK_API_KEY=your_deepseek_api_key
  GROQ_API_KEY=your_groq_api_key

  # Path or name of the model to load
  MODEL_PATH=microsoft/DialoGPT-small

  # Generation parameters
  MAX_NEW_TOKENS=1024
  TEMPERATURE=0.7
  TOP_K=40
  TOP_P=0.95
  NUM_BEAMS=5
  NO_REPEAT_NGRAM_SIZE=2
  ```
- Run the Backend Server:

  ```bash
  uvicorn app.main:app --reload
  ```
- Open the Front-End:

  Navigate to the `frontend` folder (if applicable) and open the `index.html` file in your browser. Alternatively, you can serve it with a local HTTP server:

  ```bash
  python -m http.server 8000
  ```

  Then open your browser and navigate to `http://localhost:8000`.
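The backend likely reads the `.env` settings with a helper such as python-dotenv; the stdlib sketch below shows the idea, including the explicit casting the string-valued generation parameters need. The `load_env` helper is illustrative, not the project's actual loader:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=value lines; blank lines and '#' comments ignored."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Don't overwrite variables already set in the real environment.
        os.environ.setdefault(key.strip(), value.strip())

# Demo: write a tiny .env and load it.
Path(".env").write_text("TEMPERATURE=0.7\nMAX_NEW_TOKENS=1024\n")
load_env()

# Values arrive as strings, so cast them before passing to the model.
TEMPERATURE = float(os.environ["TEMPERATURE"])
MAX_NEW_TOKENS = int(os.environ["MAX_NEW_TOKENS"])
```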
- Run the Backend: Ensure the backend server is running:

  ```bash
  uvicorn app.main:app --reload
  ```

- Run the Front-End: Open the `index.html` file in your browser, or use a local HTTP server.
- Select AI Model: Use the dropdown menu in the interface to choose your desired AI model.
- Start Chatting: Enter your message in the chat interface and press send.
- Switch Models: You can switch between AI models during the conversation.
- View Hybrid Outputs: Enable hybrid outputs to combine responses from multiple models.
- Save Conversations: Conversations are automatically saved in JSON format for future reference.
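One plausible way to produce a hybrid output is to query each selected model and join the labeled answers, tolerating individual provider failures. The `models` mapping of callables below stands in for the real API clients and is an assumption, not the project's actual code:

```python
from typing import Callable, Dict

def hybrid_response(query: str, models: Dict[str, Callable[[str], str]]) -> str:
    """Query every model and join the answers, labeled by model name."""
    parts = []
    for name, ask in models.items():
        try:
            parts.append(f"[{name}] {ask(query)}")
        except Exception as exc:  # one failing provider shouldn't kill the reply
            parts.append(f"[{name}] (error: {exc})")
    return "\n".join(parts)

# Stub callables standing in for real API clients:
combined = hybrid_response(
    "hi how are you",
    {"gemini": lambda q: "doing well!", "openai": lambda q: "I'm fine, thanks."},
)
```

Labeling each answer with its model name lets the front-end render the combined response per provider.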
Below is an example of how the FastAPI backend processes user queries:
- Input:

  ```json
  {
    "query": "hi how are you",
    "model": "gemini"
  }
  ```

- Output:

  ```json
  {
    "response": "hi how are you doing"
  }
  ```
- Special thanks to the developers of OpenAI, Gemini, DeepSeek, Groq, and HuggingFace Transformers for their powerful AI models.
- Gratitude to the FastAPI community for their excellent framework.
Enjoy using the AI Chatbot (Hybrid)! We hope it enhances your interaction experience and provides valuable insights.