GPU passthrough lets you dedicate a physical GPU to a virtual machine with near-native performance — useful for gaming VMs, remote workstations, or AI inference inside an isolated environment.
What you need: a CPU with IOMMU support (Intel VT-d or AMD-Vi), a motherboard BIOS that exposes it, Proxmox VE 8.x, and ideally a secondary GPU for the Proxmox host. Using Intel integrated graphics for the host while passing the discrete GPU to the VM is the cleanest setup.
Step 1: Enable IOMMU
In BIOS/UEFI: Intel → enable VT-d. AMD → enable AMD-Vi or IOMMU.
Edit /etc/default/grub:
Intel:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```

AMD:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

On recent kernels AMD's IOMMU is enabled by default, so amd_iommu=on is mostly insurance. iommu=pt reduces overhead for devices not being passed through — leave it in regardless of vendor.
```
update-grub && reboot
```

Verify after reboot:

```
dmesg | grep -e DMAR -e IOMMU
```

Intel should show DMAR: IOMMU enabled, AMD shows AMD-Vi: Supported feature. No output means IOMMU isn't active — go back to BIOS.
Also verify interrupt remapping is enabled — passthrough won’t work without it:
```
dmesg | grep 'remapping'
```

You should see AMD-Vi: Interrupt remapping enabled or DMAR-IR: Enabled IRQ remapping. If not, and your hardware doesn't support it, you can allow unsafe interrupts as a workaround:

```
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
```

Check IOMMU Groups
```
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"
done
```

Your GPU and its audio device need to be in the same IOMMU group, ideally isolated. If they share a group with a SATA controller or USB hub, see ACS override in the troubleshooting section.
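The parameter expansions in that loop do all the path surgery. Traced on a single, hypothetical device path:

```shell
# Hypothetical sysfs entry: device 0000:01:00.0 in IOMMU group 14
d="/sys/kernel/iommu_groups/14/devices/0000:01:00.0"

# Strip the shortest prefix matching */iommu_groups/* -> "14/devices/0000:01:00.0"
n=${d#*/iommu_groups/*}
# Drop everything from the first remaining slash -> "14"
n=${n%%/*}

echo "group $n: ${d##*/}"   # group 14: 0000:01:00.0
```

The same two expansions run once per device in the loop, so no external tools beyond lspci are needed.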
Step 2: Load VFIO Modules
Add to /etc/modules:
```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

On Proxmox 8's 6.x kernels, vfio_virqfd has been folded into the core vfio module; listing it is harmless, but you can omit it. Rebuild the initramfs:

```
update-initramfs -u -k all && reboot
```

Step 3: Bind the GPU to VFIO
Find your GPU’s PCI IDs:
```
lspci -nn | grep -iE "nvidia|amd|vga|3d"
```

Output example:

```
01:00.0 VGA compatible controller [0300]: NVIDIA GeForce RTX 3080 [10de:2206]
01:00.1 Audio device [0403]: NVIDIA HD Audio [10de:1aef]
```

Bind both the GPU and its audio device to VFIO:
echo "options vfio-pci ids=10de:2206,10de:1aef disable_vga=1" > /etc/modprobe.d/vfio.confupdate-initramfs -u -k all && rebootVerify:
lspci -nnk -d 10de:2206Look for Kernel driver in use: vfio-pci. If you see nvidia or nouveau instead, blacklist them:
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.confecho "blacklist nvidia*" >> /etc/modprobe.d/blacklist.confupdate-initramfs -u -k all && rebootStep 4: Create the VM
In the Proxmox web UI, these settings matter:
- Machine: q35 — required for PCIe passthrough
- BIOS: OVMF (UEFI) — add an EFI disk when prompted
- CPU type: host — don’t use kvm64 or default
- Memory: disable ballooning, set a fixed amount
- OS: for Windows, add the VirtIO drivers ISO as a second CD-ROM
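Pulled together, the relevant lines of the resulting VM config look roughly like this (a sketch: VMID 100, the local-lvm storage name, and the memory size are placeholders):

```
# /etc/pve/qemu-server/100.conf (excerpt)
machine: q35
bios: ovmf
efidisk0: local-lvm:1,efitype=4m
cpu: host
memory: 16384
balloon: 0
```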
Add the GPU
Hardware → Add → PCI Device, then check:
- All Functions — passes GPU + audio together
- Primary GPU — if this GPU handles the VM’s display output
- PCI-Express — enables PCIe bandwidth
- ROM-Bar — required for NVIDIA cards
Warning
If “Primary GPU” is checked, set vga: none in the VM config — otherwise the VM may boot to the virtual display instead of the GPU.
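For reference, the same checkboxes map onto a single hostpci entry from the shell. This is a sketch using placeholder VMID 100 and the example PCI address from Step 3:

```
# Passing 01:00 with no function suffix passes all functions (GPU + audio)
# pcie=1 = "PCI-Express", x-vga=1 = "Primary GPU"; ROM-Bar is on by default
qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1
qm set 100 --vga none
```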
Step 5: NVIDIA — Fix Error 43
Note
NVIDIA drivers 465 and newer no longer trigger Error 43 in VMs. If you’re on a recent driver, skip this step. Only apply it if you’re on an older driver or still seeing the error.
Older consumer NVIDIA drivers detect virtualization and refuse to initialize. Hide the hypervisor by editing /etc/pve/qemu-server/[VMID].conf:
```
cpu: host,hidden=1
args: -cpu 'host,hv_vendor_id=NV43FIX,kvm=off'
```

hv_vendor_id can be any string of up to 12 characters. kvm=off hides the KVM signature from the driver.
For Windows VMs with NVIDIA GeForce Experience or other GPU software that crashes the VM, add:
echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.confStep 6: Tesla P40 for AI Workloads
The P40 is a datacenter card — 24GB GDDR5, no display output, pure compute. Around €100–150 used, it’s the best value-per-GB option for homelab AI workloads.
Key differences from gaming GPU passthrough:
- No display output — don’t check “Primary GPU”; the VM uses VNC for management
- No Error 43 workaround needed — datacenter drivers load cleanly in VMs
- Passive cooling — no fan, relies on chassis airflow; in a desktop case point active airflow at the card
- 250W TDP — full-length, full-height; verify your slot and PSU
Warning
The P40 has 24GB VRAM. With OVMF, the default 32GB MMIO window isn't enough and the driver will fail with a BAR0 error. Using CPU type host (already recommended in Step 4) fixes this automatically — OVMF adjusts the MMIO window based on host CPU info. If you use a different CPU type, run:

```
qm set VMID --cpu x86-64-v2-AES,phys-bits=host,flags=+pdpe1gb
```
Install CUDA drivers inside the VM (Ubuntu 22.04):
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update && sudo apt install -y cuda-toolkit-12-4 nvidia-driver-550-server
```

Reboot and verify:

```
nvidia-smi
# Tesla P40 | 24576MiB | ...
```

Troubleshooting
Black Screen After VM Boot
- GPU not bound to vfio-pci — confirm Kernel driver in use: vfio-pci with lspci -nnk
- Machine type isn't q35
- “Primary GPU” checked but vga: none not set (or vice versa)
- Wrong PCIe slot — some boards only expose IOMMU correctly on specific slots
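The first check can be scripted. Since lspci output is machine-specific, a canned sample stands in below; on the host you would replace the sample with the real output of lspci -nnk -d 10de:2206 (the hypothetical PCI ID from Step 3):

```shell
# Sample `lspci -nnk -d <vendor:device>` output (stand-in for the real command)
sample='01:00.0 VGA compatible controller [0300]: NVIDIA GeForce RTX 3080 [10de:2206]
	Kernel driver in use: vfio-pci
	Kernel modules: nouveau'

# Pull out the driver name after "Kernel driver in use: "
drv=$(printf '%s\n' "$sample" | awk -F': ' '/Kernel driver in use/ {print $2}')

if [ "$drv" = "vfio-pci" ]; then
  echo "OK: GPU bound to vfio-pci"
else
  echo "Problem: GPU is on '$drv' (revisit Step 3)"
fi
```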
Error 43 Still Showing
If the fix in Step 5 doesn’t work, check that you’re not mixing cpu: and args: across multiple lines — Proxmox only supports one args entry. Also try changing the hv_vendor_id string.
GPU Hangs After VM Shutdown (Reset Bug)
Some GPUs (notoriously AMD's Vega and Navi cards) can't complete a PCIe reset: the card hangs and the host needs a full reboot. For the AMD models it supports, vendor-reset fixes this:
```
apt install pve-headers-$(uname -r) build-essential git
git clone https://github.com/gnif/vendor-reset
cd vendor-reset && make && make install
echo "vendor-reset" >> /etc/modules
update-initramfs -u -k all && reboot
```

Bad IOMMU Groups
Warning
ACS override weakens IOMMU isolation. Use only if you understand the security implications.
Add to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
```
pcie_acs_override=downstream,multifunction
```

Then:

```
update-grub && reboot
```