mach3db/mach3db-RAG

mach3db vectorstore backed LLM with Ollama & PgVector (defaults to exaone3.5:32b)

Run an LLM on your machine...

But store its vector database on mach3db!


We use exaone3.5:32b as our default LLM.

1. Install Ollama and run the model

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Pull and run your preferred LLM (ollama run downloads the model on first use)

ollama run exaone3.5:32b 'Hey!'

You can find more LLMs in the Ollama model library (https://ollama.com/library); adjust app.py accordingly.
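Switching models usually comes down to changing the model tag that the app passes to Ollama. As a hedged illustration (not the repo's actual code), here is how the ollama Python client calls a local model by tag; the variable name and prompt are placeholders:

# Minimal sketch (not the repo's code): calling a locally running Ollama
# model by tag. Swap the tag for any model you have pulled with `ollama pull`.
import ollama

MODEL_TAG = "exaone3.5:32b"  # placeholder setting; e.g. "llama3.1:8b"

response = ollama.chat(
    model=MODEL_TAG,
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response["message"]["content"])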

2. Create a virtual environment

python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate

3. Install libraries

pip install -r package.txt

4. Edit line 10 of assistant.py and add your mach3db username and password

Make sure to contact james@mach3db.com to have the pgvector extension enabled for your mach3db database.
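The credentials typically go into the Postgres connection settings for your mach3db instance. Below is a minimal sketch, assuming psycopg2 is installed, for checking the connection and confirming pgvector is enabled before launching the app; the host, database, user, and password values are placeholders, not the repo's code.

# Minimal sketch (placeholder credentials): verify that your mach3db
# database is reachable and that the pgvector extension is enabled.
import psycopg2

conn = psycopg2.connect(
    host="your-instance.mach3db.com",  # placeholder host
    dbname="your_database",
    user="your_username",
    password="your_password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT extname FROM pg_extension WHERE extname = 'vector';")
    if cur.fetchone() is None:
        print("pgvector is not enabled; contact james@mach3db.com")
    else:
        print("pgvector is enabled")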

5. Run the RAG app

streamlit run app.py
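Once the app is up, the retrieval step boils down to: embed the question with a local Ollama embedding model, run a nearest-neighbour search in a pgvector table on mach3db, and answer with the local LLM using the retrieved passages as context. The sketch below illustrates that flow; the table name, column names, embedding model, and connection URL are assumptions, not the repo's code.

# Hedged sketch of the retrieval flow with placeholder names.
import ollama
import psycopg2

question = "What does this app store on mach3db?"

# 1) Embed the question locally (the embedding model tag is a placeholder
#    that you must have pulled with `ollama pull`).
emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]

# 2) Nearest-neighbour search on mach3db (table and column names are
#    hypothetical; <=> is pgvector's cosine-distance operator).
conn = psycopg2.connect("postgresql://USER:PASSWORD@HOST.mach3db.com:5432/DBNAME")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT 3",
        ("[" + ",".join(str(x) for x in emb) + "]",),
    )
    context = "\n\n".join(row[0] for row in cur.fetchall())

# 3) Answer with the local LLM, grounded in the retrieved context.
reply = ollama.chat(
    model="exaone3.5:32b",
    messages=[{"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(reply["message"]["content"])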

(Screenshot: view.png)
