We use exaone3.5:32b as our default LLM:
1. Install Ollama and run the model

Install Ollama:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

Run and pull the manifest of your preferred LLM model:

```shell
ollama run exaone3.5:32b 'Hey!'
```

You can find more LLMs here; adjust app.py accordingly.
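Once the model is pulled, app.py can talk to Ollama's local HTTP API. A minimal sketch of that call, assuming Ollama's default `localhost:11434` endpoint (the helper names `build_chat_request` and `ask` are illustrative, not part of the app):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming Ollama chat call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of a token stream
    }

def ask(prompt: str, model: str = "exaone3.5:32b") -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Swapping in a different model is just a matter of changing the `model` string to whatever you pulled with `ollama run`.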
2. Set up the Python environment

Create and activate a virtual environment, then install the dependencies:

```shell
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
pip install -r package.txt
```

Make sure to contact james@mach3db.com to have the pgvector extension enabled for your mach3db database.
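With pgvector enabled, embeddings are stored in a `vector` column and retrieved by distance. A small sketch of the query side, assuming a hypothetical `documents(chunk, embedding)` table (the table and column names are ours, not from the app; `<->` is pgvector's L2 distance operator):

```python
def to_pgvector(embedding: list[float]) -> str:
    """Format a Python list as a pgvector literal, e.g. [0.1,0.2,0.3]."""
    return "[" + ",".join(str(x) for x in embedding) + "]"

# With the extension enabled, a nearest-neighbour search looks like:
#
#   SELECT chunk, embedding <-> %s::vector AS distance
#   FROM documents
#   ORDER BY distance
#   LIMIT 5;
#
# passing to_pgvector(query_embedding) as the query parameter
# through your Postgres driver (e.g. psycopg).
```

The five nearest chunks then become the context that gets stuffed into the prompt sent to the LLM.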
3. Run the app

```shell
streamlit run app.py
```

Open localhost:8501 to view your local RAG app.
