Fast-Start Recipes (Docker)

Save each recipe as compose.yml in its own empty folder, then run docker compose up -d. Stop with docker compose down. Adjust ports and host paths as needed.
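
A minimal session looks like this (the folder name is just an example):

mkdir ollama-stack && cd ollama-stack
# paste one of the recipes below into compose.yml, then:
docker compose up -d      # start everything in the background
docker compose ps         # confirm the containers came up
docker compose logs -f    # tail logs; Ctrl-C stops the tail
docker compose down       # stop and remove containers; data in ./ bind mounts survives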

Ollama + Open WebUI (GPU-agnostic)

services:
  ollama:
    image: ollama/ollama:latest
    ports: ["11434:11434"]
    volumes:
      - ./ollama:/root/.ollama
    # NVIDIA users: grant GPU access via a Compose deploy reservation
    # (see the snippet after this file) or with --gpus all on docker run
    # Intel/AMD iGPU users: on Linux, add devices: ["/dev/dri:/dev/dri"]
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - ./open-webui:/app/backend/data   # persist users, chats, settings
    depends_on: [ollama]
    ports: ["3000:8080"]
    restart: unless-stopped
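
The NVIDIA comment above maps to a Compose deploy reservation. A minimal sketch, assuming the NVIDIA Container Toolkit is already installed on the host; merge it into the ollama service:

  ollama:
    # ...existing keys from the recipe above...
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]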

Ollama’s quickstart and Open WebUI’s getting started pages cover model pulls, auth, and extensions. (Docker Documentation)
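
Those docs cover pulling models from the UI; from a shell, a quick smoke test looks like this (llama3.2 is just an example model name):

docker compose exec ollama ollama pull llama3.2
# ask for a completion through the published port
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'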

n8n automation + Qdrant vector DB

services:
  qdrant:
    image: qdrant/qdrant:latest
    ports: ["6333:6333"]
    volumes:
      - ./qdrant:/qdrant/storage
    restart: unless-stopped

  n8n:
    image: n8nio/n8n:latest
    ports: ["5678:5678"]
    environment:
      - N8N_HOST=localhost
      - N8N_PORT=5678
    volumes:
      - ./n8n:/home/node/.n8n
    restart: unless-stopped

Use an HTTP Request node in n8n to call http://ollama:11434/api/generate for completions, and /api/embeddings for vectors you can store in Qdrant for simple RAG. The ollama hostname only resolves if the Ollama container shares a Docker network with this stack; from a separate stack, point the node at your host’s address instead. (Vultr Docs), (NVIDIA Docs)
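
You can prototype those calls with curl before wiring them into HTTP Request nodes. A sketch, assuming nomic-embed-text as the embedding model (768-dimensional vectors) and notes as the collection name, run from the host against the published ports:

# get an embedding from Ollama (requires: ollama pull nomic-embed-text)
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'

# create a Qdrant collection whose vector size matches the model's output
curl -X PUT http://localhost:6333/collections/notes \
  -H 'Content-Type: application/json' \
  -d '{"vectors": {"size": 768, "distance": "Cosine"}}'

Point upserts then go to PUT /collections/notes/points, with each vector plus whatever payload you want returned at query time.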

Private meta-search with SearxNG

services:
  searxng:
    image: searxng/searxng:latest
    ports: ["8080:8080"]
    environment:
      - SEARXNG_BASE_URL=http://localhost:8080/
    volumes:
      - ./searxng:/etc/searxng
    restart: unless-stopped

Run your own meta-search so agents can look things up without leaking your history. The Docker install doc has extra knobs. (Docker Documentation)
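
Agents generally want JSON rather than HTML. A sketch, assuming you first add json to the search.formats list in ./searxng/settings.yml (generated on first start) and restart the container:

curl 'http://localhost:8080/search?q=docker+compose&format=json'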

Quick checks before you call it “done”

  • docker version, docker info, and docker compose version all return cleanly.

  • /etc/docker/daemon.json exists with log rotation and address pools, and Docker restarted cleanly; a sample file is sketched after this list. (Docker Documentation)

  • If using GPUs: nvidia-smi works on the host and inside a test container with --gpus all for NVIDIA, or the container sees /dev/dri for VA-API; one-line probes follow this list. (NVIDIA Docs)
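
For the daemon.json check, here is one reasonable shape; the rotation and pool values are examples, not requirements:

{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}

And the GPU probes (the CUDA image tag is an example; any recent tag works):

# NVIDIA: should print the same GPU table you see on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Intel/AMD iGPU: the render nodes should be visible inside the container
docker run --rm --device /dev/dri:/dev/dri alpine ls /dev/dri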


Everything on Shared Sapience is free and open to all. However, it takes a tremendous amount of time and effort to keep these resources and guides up to date and useful for everyone.

If enough of my amazing readers could help with just a few dollars a month, I could dedicate myself full-time to helping Seekers, Builders, and Protectors collaborate better with AI and work toward a better future.

Even if you can’t support financially, becoming a free subscriber is a huge help in advancing the mission of Shared Sapience.

If you’d like to help by becoming a free or paid subscriber, simply use the Subscribe/Upgrade button below, or send a one-time quick tip with Buy me a Coffee by clicking here. I’m deeply grateful for any support you can provide - thank you!
