All in one docker compose with ollama #68

@cedricwalter

Description

An all-in-one Docker Compose file: it starts Ollama, pulls the model, and starts magic-lists once everything is ready.

Llama 3.2 truncates the prompt at Ollama's default context length and is therefore not usable without increasing it; this all-in-one compose file raises the context length from 4096 to 32768 tokens.

I keep the model loaded in memory, which has sped up playlist creation dramatically, though it still runs on CPU. On macOS it is best to run Ollama natively so it can use Metal GPU acceleration. This compose file is portable and runs on Linux, Windows, and macOS.

Attention: keeping the model loaded requires enough RAM! If you don't have it, comment out the keep-alive setting.
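Once the stack is up, you can sanity-check the server from the host (a sketch; it assumes the stack below is running and the 11434 port mapping is unchanged, and needs a live server, so it is not runnable standalone):

```shell
# Show which models are currently loaded in memory and when they expire
# (a kept-alive model should stay listed between requests).
curl -s http://localhost:11434/api/ps

# Confirm llama3.2 was pulled by the one-shot ollama-pull container.
curl -s http://localhost:11434/api/tags
```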

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    environment:
      - OLLAMA_CONTEXT_LENGTH=32768 # set max prompt size
      - OLLAMA_KEEP_ALIVE=-1 # keep the model permanently loaded in memory; eliminates the "load time" delay on every new request
      - OLLAMA_NUM_PARALLEL=4 # on a Mac mini M4 Pro/Max you can raise this (e.g. to 4 or 8) to handle concurrent requests more efficiently
      - OLLAMA_MAX_LOADED_MODELS=1 # keep the full memory bandwidth on a single model for maximum speed
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama

  ollama-pull:
    image: ollama/ollama:latest
    container_name: ollama-pull
    environment:
      # CRITICAL: Tell the puller to connect to the 'ollama' service container
      - OLLAMA_HOST=ollama:11434
    volumes:
      - ollama:/root/.ollama 
    # Add a small delay to let the server start before pulling
    entrypoint: /bin/sh
    command: -c "sleep 5 && ollama pull llama3.2"
    depends_on:
      - ollama

  magiclists:
    image: rickysynnot/magic-lists-for-navidrome:latest
    container_name: magiclists
    ports:
      - "4545:8000"
    environment:
      - NAVIDROME_URL=http://IP:4533 # replace IP with your Navidrome host
      - NAVIDROME_USERNAME=YYYYYY
      - NAVIDROME_PASSWORD=XXXXXXX
      - DATABASE_PATH=/app/data/magiclists.db
      - AI_PROVIDER=ollama
      - OLLAMA_MODEL=llama3.2
      - OLLAMA_BASE_URL=http://ollama:11434/v1/chat/completions # reach the ollama service over the compose network
    security_opt:
      - no-new-privileges:true
    volumes:
      - magiclists:/app/data
      - ./magiclists/payloads:/app/payloads # host directory for payloads
    restart: unless-stopped
    depends_on:
      - ollama-pull

volumes:
  ollama:
  magiclists:
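magic-lists talks to Ollama through its OpenAI-compatible /v1/chat/completions endpoint (the OLLAMA_BASE_URL above). A minimal sketch of what such a request looks like from the host; the prompt text is illustrative, and magic-lists builds its own prompts internally:

```python
import json
import urllib.request

# Host-side URL via the 11434 port mapping; inside the compose network
# the hostname would be "ollama" instead of "localhost".
OLLAMA_BASE_URL = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "llama3.2",  # matches OLLAMA_MODEL in the compose file
    "messages": [
        # Illustrative prompt only; not the actual magic-lists prompt.
        {"role": "user", "content": "Suggest a 10-track mellow evening playlist."}
    ],
}

req = urllib.request.Request(
    OLLAMA_BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would actually send it; left out so the
# sketch runs without a live server.
print(req.get_method())  # POST, because a request body is attached
```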
