Setting Up Your Proxmox Engine
Storage patterns for a single node
You have two great baselines:
ZFS Pool for VM images and fast datasets. Add a pool after install or choose ZFS at install time. ZFS gives you snapshots and clones on tap. (Storage - ZFS)
LVM-Thin on an SSD/NVMe. Thin provisioning with snapshot support. Keep it local to the node. (Storage - LVM-Thin)
If you think you will experiment with Ceph or a small cluster, park that thought for the section on clusters below. For now, keep it simple and fast.
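If you prefer the shell over the GUI for either baseline, here is a hedged sketch; the disk /dev/nvme0n1, the pool and volume group names, and the 100G thin pool size are assumptions, and each disk gets one layout, not both:

```bash
# Option A: single-disk ZFS pool, registered as Proxmox storage for VM/CT disks.
zpool create -o ashift=12 tank /dev/nvme0n1
pvesm add zfspool nvme-zfs --pool tank --content images,rootdir

# Option B: LVM-thin on the same class of disk (choose A or B per disk).
pvcreate /dev/nvme0n1
vgcreate vg-nvme /dev/nvme0n1
lvcreate -L 100G --thinpool thinpool vg-nvme
pvesm add lvmthin nvme-thin --vgname vg-nvme --thinpool thinpool --content images,rootdir
```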
Networking - bridges now, SDN later
The Linux bridge is your default virtual switch. Add VLAN-aware bridges as you segment networks. (Network Configuration)
SDN Fabrics arrive in PVE 9.0 for routed underlays across nodes. When you scale into a cluster, Fabrics give you OpenFabric or OSPF options for a clean, redundant underlay. (SDN - Fabrics (9.0)), (Proxmox VE 9.0 - Press release)
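Under the hood, a VLAN-aware bridge is just a stanza in /etc/network/interfaces. A minimal sketch, assuming the physical NIC is enp1s0 and vmbr0 carries the management IP (both assumptions):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Apply it with ifreload -a; ifupdown2 ships with Proxmox VE, so no reboot is needed.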
GPU acceleration and passthrough
For local AI workloads, pass a GPU into a VM for near-native performance.
PCI(e) passthrough: enable IOMMU, bind the GPU to VFIO, then attach it to the VM. Follow the official wiki for the minimal, correct steps. (PCI(e) Passthrough)
NVIDIA vGPU: if you have a supported data-center GPU and license, vGPU lets multiple VMs share a single card. The wiki outlines setup and caveats. (NVIDIA vGPU on Proxmox VE)
If you only need one heavy AI VM, classic passthrough is simplest. If you need multiple concurrent AI VMs, evaluate vGPU. (Proxmox Community Forum)
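For the host side, a hedged sketch of the classic passthrough steps on an Intel box; the PCI address 0000:01:00.0 and VM ID 100 are placeholders, AMD systems use amd_iommu, and ZFS-root installs that boot with systemd-boot edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead of update-grub:

```bash
# 1. Enable IOMMU: add "intel_iommu=on iommu=pt" to GRUB_CMDLINE_LINUX_DEFAULT
#    in /etc/default/grub, then regenerate the boot config.
update-grub

# 2. Load the VFIO modules at boot and rebuild the initramfs.
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF
update-initramfs -u -k all

# 3. Reboot, confirm the GPU sits in its own IOMMU group, then attach it to the VM
#    (pcie=1 expects the q35 machine type).
lspci -nn | grep -i nvidia
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```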
Golden template with cloud-init
This is where Proxmox shines for Builders: make one perfect base VM, then clone it in seconds.
Download an Ubuntu 24.04 LTS cloud image from Canonical. (Ubuntu 24.04 LTS Cloud Images)
Create a VM, attach the image as a disk, set the BIOS to OVMF (UEFI), the SCSI controller to VirtIO SCSI single, and the network to VirtIO on vmbr0.
Add a cloud-init drive, set your SSH key and a user, and optionally set static networking or a hostname in the cloud-init panel.
Convert the VM to a template and clone it as needed. (Cloud-Init Support)
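Here is a hedged end-to-end sketch of that workflow from the shell; the VM ID 9000, the storage name local-zfs, and the clone target are assumptions to swap for your own:

```bash
# Download the cloud image and build the template.
wget -P /root https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

qm create 9000 --name ubuntu-2404-tmpl --memory 4096 --cores 2 \
  --bios ovmf --machine q35 --efidisk0 local-zfs:1,efitype=4m \
  --scsihw virtio-scsi-single --net0 virtio,bridge=vmbr0
qm set 9000 --scsi0 local-zfs:0,import-from=/root/noble-server-cloudimg-amd64.img
qm set 9000 --ide2 local-zfs:cloudinit --boot order=scsi0 --serial0 socket --vga serial0
qm set 9000 --ciuser builder --ipconfig0 ip=dhcp   # add your SSH key in the cloud-init panel
qm template 9000

# Clone the template into a working VM in seconds.
qm clone 9000 101 --name ai-stack --full
```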
If you need richer first-boot customizations, provide a NoCloud user-data and meta-data via the cloud-init drive or cicustom. (Cloud-Init Documentation)
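For instance, a hypothetical snippet stored on a storage with the snippets content type enabled could be wired in like this:

```bash
# ai-user-data.yaml is a placeholder name; on "local" it lives under /var/lib/vz/snippets.
qm set 9000 --cicustom "user=local:snippets/ai-user-data.yaml"
```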
Your AI platform on Proxmox
The cleanest pattern is one Ubuntu VM running Docker and your AI stack. Proxmox handles hardware, snapshots, and backups; Docker inside the VM gives you portability and isolation.
Recommended baseline inside the VM:
Ollama for local models and an API.
Open WebUI as your front end.
n8n for automation flows.
A vector DB of your choice - Postgres + pgvector or Qdrant.
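As a starting point, here is a hedged sketch of that baseline as plain docker run commands inside the VM; image tags, ports, and volume names are assumptions, and it translates directly to Compose if that is your pattern:

```bash
docker network create ai

docker run -d --name ollama --network ai \
  -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

docker run -d --name open-webui --network ai \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -v open-webui:/app/backend/data -p 3000:8080 \
  ghcr.io/open-webui/open-webui:main

docker run -d --name n8n --network ai \
  -v n8n_data:/home/node/.n8n -p 5678:5678 docker.n8n.io/n8nio/n8n

docker run -d --name qdrant --network ai \
  -v qdrant_storage:/qdrant/storage -p 6333:6333 qdrant/qdrant
```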
Yes, Docker can run in LXC with nesting, but it trades away security boundaries and can be fragile with cgroups. For Builder reliability, keep Docker in a VM. (Proxmox Community Forum)
GPU inside the VM: after host passthrough, install the vendor driver in the guest and add --gpus all for NVIDIA workloads or expose /dev/dri if using an iGPU path. Follow the passthrough and vGPU docs above for the host side. (PCI(e) Passthrough)
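In an NVIDIA guest that typically means the driver plus the NVIDIA Container Toolkit; a hedged sketch for an Ubuntu guest follows, where the driver package version is an assumption and the toolkit needs NVIDIA's apt repository added first:

```bash
# Install the driver and container toolkit inside the guest (versions vary).
sudo apt install -y nvidia-driver-550 nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Quick check that containers can see the GPU.
docker run --rm --gpus all ubuntu nvidia-smi

# iGPU path instead: expose the render node to the container.
# docker run --device /dev/dri:/dev/dri ...
```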
Secure ingress
You can terminate TLS at a reverse proxy VM or container and route to services on other VMs.
Start with a lightweight reverse proxy like Nginx Proxy Manager or graduate to Caddy when you want config-as-code. Point DNS records at your WAN IP, forward 80/443, issue Let’s Encrypt certs, and route to Open WebUI or your n8n UI. (Proxy choice is up to you - your Docker guide already has patterns.)
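If you do graduate to Caddy, the config-as-code piece is a short Caddyfile. A hedged sketch, with hostnames and backend addresses as placeholders; Caddy requests Let's Encrypt certificates automatically once DNS and ports 80/443 reach it:

```
chat.example.com {
    reverse_proxy 192.168.1.50:3000   # Open WebUI
}
flows.example.com {
    reverse_proxy 192.168.1.50:5678   # n8n
}
```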
Use the Proxmox Firewall for basic node and VM policy. Turn it on at Datacenter, Node, and VM levels and use macros and security groups to avoid one-off rules. (Firewall)
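As a hedged sketch of what the GUI writes, a Datacenter-level config with a reusable security group attached to the proxy VM might look like this; the group name, macros, and VM ID path are assumptions:

```
# /etc/pve/firewall/cluster.fw - Datacenter level
[OPTIONS]
enable: 1

# Reusable security group for reverse-proxy guests
[group web-ingress]
IN HTTP(ACCEPT)
IN HTTPS(ACCEPT)

# /etc/pve/firewall/<vmid>.fw - attach the group to the proxy VM
[OPTIONS]
enable: 1

[RULES]
GROUP web-ingress
```

When enabled at the Datacenter level, the firewall blocks inbound traffic by default except the web UI (8006) and SSH from your local network, so verify your management path before switching it on.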
Grow into a cluster
When you add nodes, Proxmox’s SDN and Ceph integrations minimize glue work:
SDN Fabrics provide a routed underlay between nodes that can serve your overlay networks and even your Ceph mesh. (SDN - Fabrics (9.0))
Ceph Squid 19.2 is the current target with PVE 9.0. Use pveceph or the GUI to deploy and manage OSDs, MONs, and MGRs. Follow the official upgrade docs if you are coming from Reef or Quincy. (Ceph Documentation)
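A hedged sketch of a first-node deployment with pveceph; the cluster network CIDR and the OSD disk are assumptions, and the MON/MGR/OSD steps repeat on each additional node:

```bash
pveceph install --repository no-subscription
pveceph init --network 10.10.10.0/24
pveceph mon create
pveceph mgr create
pveceph osd create /dev/nvme1n1
pveceph pool create vm-pool --add_storages
```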
Everything on Shared Sapience is free and open to all. However, it takes a tremendous amount of time and effort to keep these resources and guides up to date and useful for everyone.
If enough of my amazing readers could help with just a few dollars a month, I could dedicate myself full-time to helping Seekers, Builders, and Protectors collaborate better with AI and work toward a better future.
Even if you can’t support financially, becoming a free subscriber is a huge help in advancing the mission of Shared Sapience.
If you’d like to help by becoming a free or paid subscriber, simply use the Subscribe/Upgrade button below, or send a one-time quick tip with Buy me a Coffee by clicking here. I’m deeply grateful for any support you can provide - thank you!


