Setting Up Your Unraid Engine

Ollama on Unraid

From CA, search “Ollama” or deploy manually with Docker. Map a persistent volume (e.g., /mnt/user/appdata/ollama:/root/.ollama) and expose port 11434. On NVIDIA, add --gpus all; on Intel/AMD, pass through /dev/dri. Test with curl http://SERVER_IP:11434/api/tags, then pull a model with ollama pull llama3.2 (run it via docker exec if the Ollama CLI isn’t installed on the host). Ollama’s docs are succinct and current. (Ollama)

Tip: If you prefer Compose, install Docker Compose Manager, create a stack, paste the official ollama service example, and save it to your USB-backed stack folder. For bigger stacks, Composerize can convert your existing CA templates to Compose. (Unraid, docs.ibracorp.io, GitHub)
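The manual route can be sketched as a single docker run. The container name, host paths, and model tag here are illustrative; adjust the GPU flag to your hardware:

```shell
# Illustrative manual deployment; paths and names are examples, not requirements.
docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v /mnt/user/appdata/ollama:/root/.ollama \
  --restart unless-stopped \
  --gpus all \
  ollama/ollama
# On Intel/AMD, replace "--gpus all" with "--device /dev/dri".

# Verify the API is up, then pull a model inside the container:
curl http://SERVER_IP:11434/api/tags
docker exec -it ollama ollama pull llama3.2
```

Keeping /root/.ollama on a persistent volume means pulled models survive container updates.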

A humane interface: Open WebUI

Install Open WebUI and point it at your Ollama endpoint. If both containers share a custom bridge network, set OLLAMA_BASE_URL=http://ollama:11434 so the container name resolves; otherwise, use the server’s IP address. Open WebUI’s docs cover auth, multi-model switching, tools, and search integrations. (Open WebUI)
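A minimal sketch, assuming a user-defined bridge network named ai-net (the network and container names are examples) that the Ollama container also joins:

```shell
# Create a shared bridge network so containers resolve each other by name.
docker network create ai-net   # skip if it already exists

docker run -d \
  --name open-webui \
  --network ai-net \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -v /mnt/user/appdata/open-webui:/app/backend/data \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
```

With both containers on ai-net, the hostname "ollama" resolves automatically; the UI is then reachable at http://SERVER_IP:3000.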

n8n for automation

Pull the official n8n image. Persist /home/node/.n8n to /mnt/user/appdata/n8n. In a workflow, call your local Ollama with an HTTP Request node hitting http://ollama:11434/api/generate. n8n’s Docker guide is solid and updates frequently. (n8n Docs)
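A sketch of the container plus the request the HTTP Request node would send; the network name ai-net and the model tag are assumptions carried over from earlier examples:

```shell
docker run -d \
  --name n8n \
  --network ai-net \
  -p 5678:5678 \
  -v /mnt/user/appdata/n8n:/home/node/.n8n \
  --restart unless-stopped \
  docker.n8n.io/n8nio/n8n

# The HTTP Request node posts roughly this (set "stream": false so the
# node receives one JSON response instead of a stream of chunks):
curl http://ollama:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Summarize the attached text.", "stream": false}'
```

In the node itself: method POST, URL http://ollama:11434/api/generate, body type JSON with the same fields.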

Give your AI memory

Two good paths:

  • PostgreSQL + pgvector - familiar SQL plus vectors in one box. The stock postgres image doesn’t bundle the extension, so use the pgvector/pgvector image (Postgres with pgvector preinstalled) and run CREATE EXTENSION vector; once. Great when you already want relational storage. (Medium)

  • Qdrant - purpose-built vector DB, buttery simple in Docker, excellent performance for semantic search. (qdrant.tech)

For both, create a db share or place data under appdata/ on your SSD pool so backups are straightforward.
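Both paths can be sketched in a few commands; container names, passwords, and host paths are placeholders to adapt:

```shell
# Qdrant: single container, data persisted under appdata.
docker run -d \
  --name qdrant \
  --network ai-net \
  -p 6333:6333 \
  -v /mnt/user/appdata/qdrant:/qdrant/storage \
  --restart unless-stopped \
  qdrant/qdrant

# Postgres + pgvector: the pgvector/pgvector image ships with the extension.
docker run -d \
  --name pgvector \
  --network ai-net \
  -e POSTGRES_PASSWORD=change-me \
  -v /mnt/user/appdata/pgvector:/var/lib/postgresql/data \
  --restart unless-stopped \
  pgvector/pgvector:pg16

# Enable the extension once the database is up:
docker exec -it pgvector psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS vector;"
```

Qdrant’s HTTP API then answers on port 6333, and the Postgres container is reachable by name from anything else on the same network.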

Private web search

Run SearxNG so agents and workflows can fetch fresh info without leaking search history. Bind it to a custom network and set the base_url in its settings. You can later proxy it with NPM. (SearXNG Documentation)
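A minimal sketch; the host port and domain are placeholders, and the SEARXNG_BASE_URL variable assumes the official Docker image, which writes its config under /etc/searxng:

```shell
docker run -d \
  --name searxng \
  --network ai-net \
  -p 8081:8080 \
  -e SEARXNG_BASE_URL=https://search.yourdomain.tld/ \
  -v /mnt/user/appdata/searxng:/etc/searxng \
  --restart unless-stopped \
  searxng/searxng
```

Workflows on the same network can then query http://searxng:8080, while the base_url keeps generated links correct once you put it behind a proxy.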

Secure ingress with TLS

For beginners I recommend Nginx Proxy Manager:

  1. Forward router ports 80/443 to NPM.

  2. Add a Proxy Host for chat.yourdomain.tld → your Open WebUI container.

  3. Request a Let’s Encrypt certificate in one click. NPM’s docs are clear, and the CA template Just Works. (Docker Hub)

(You can graduate to Caddy later if you want declarative configs and auto-HTTPS magic everywhere, but NPM is the fastest on-ramp.)
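If you skip the CA template and run NPM by hand, a sketch looks like this (host paths are illustrative):

```shell
docker run -d \
  --name npm \
  -p 80:80 \
  -p 443:443 \
  -p 81:81 \
  -v /mnt/user/appdata/npm/data:/data \
  -v /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt \
  --restart unless-stopped \
  jc21/nginx-proxy-manager:latest
```

The admin UI listens on port 81; log in with the documented default credentials and change them immediately, then add your Proxy Hosts from there.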


Everything on Shared Sapience is free and open to all. However, it takes a tremendous amount of time and effort to keep these resources and guides up to date and useful for everyone.

If enough of my amazing readers could help with just a few dollars a month, I could dedicate myself full-time to helping Seekers, Builders, and Protectors collaborate better with AI and work toward a better future.

Even if you can’t support financially, becoming a free subscriber is a huge help in advancing the mission of Shared Sapience.

If you’d like to help by becoming a free or paid subscriber, simply use the Subscribe/Upgrade button below, or send a one-time quick tip with Buy me a Coffee by clicking here. I’m deeply grateful for any support you can provide - thank you!
