Proxmox GPU Passthrough (NVIDIA, AMD, Tesla)


GPU passthrough lets you dedicate a physical GPU to a virtual machine with near-native performance — useful for gaming VMs, remote workstations, or AI inference inside an isolated environment.

What you need: a CPU with IOMMU support (Intel VT-d or AMD-Vi), a motherboard BIOS that exposes it, Proxmox VE 8.x, and ideally a secondary GPU for the Proxmox host. Intel integrated graphics on the host while passing a discrete GPU to the VM is the cleanest setup.
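Which vendor path you follow below depends on the CPU. A minimal preflight sketch — the vendor string is hard-coded as a sample here; on a real host you would read it from /proc/cpuinfo with the awk one-liner shown in the comment:

```shell
# Sample value; on the host, read it with:
#   awk -F: '/vendor_id/{gsub(/ /,"",$2); print $2; exit}' /proc/cpuinfo
vendor="GenuineIntel"

# Map the CPU vendor to the kernel command line used in Step 1.
case "$vendor" in
  GenuineIntel) echo 'GRUB flags: quiet intel_iommu=on iommu=pt' ;;
  AuthenticAMD) echo 'GRUB flags: quiet amd_iommu=on iommu=pt' ;;
  *)            echo "Unknown CPU vendor: $vendor" ;;
esac
```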

Step 1: Enable IOMMU

In BIOS/UEFI: Intel → enable VT-d. AMD → enable AMD-Vi or IOMMU.

Edit /etc/default/grub. (If your host boots with systemd-boot instead — typically ZFS root on UEFI — edit /etc/kernel/cmdline and run proxmox-boot-tool refresh rather than update-grub.)

Intel:

```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```

AMD:

```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

(Recent kernels enable the AMD IOMMU by default, so amd_iommu=on is technically redundant, but harmless.)

iommu=pt reduces overhead for devices not being passed through — leave it in regardless of vendor.

```shell
update-grub && reboot
```

Verify after reboot:

```shell
dmesg | grep -e DMAR -e IOMMU
```

Intel should show DMAR: IOMMU enabled, AMD shows AMD-Vi: Supported feature. Nothing means IOMMU isn’t active — go back to BIOS.

Also verify interrupt remapping is enabled — passthrough won’t work without it:

```shell
dmesg | grep 'remapping'
```

You should see AMD-Vi: Interrupt remapping enabled or DMAR-IR: Enabled IRQ remapping. If not, and your hardware doesn’t support it, you can allow unsafe interrupts as a workaround:

```shell
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
```

Check IOMMU Groups

```shell
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"
done
```

Your GPU and its audio device need to be in the same IOMMU group, ideally isolated. If they share a group with a SATA controller or USB hub, see ACS override in the troubleshooting section.

Step 2: Load VFIO Modules

Add to /etc/modules:

```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

On Proxmox VE 8 (kernel 6.2 and newer), vfio_virqfd has been merged into the core vfio module and no longer exists separately; drop that line there (it is only needed on older kernels).

```shell
update-initramfs -u -k all && reboot
```

Step 3: Bind the GPU to VFIO

Find your GPU’s PCI IDs:

```shell
lspci -nn | grep -iE "nvidia|amd|vga|3d"
```

Output example:

```
01:00.0 VGA compatible controller [0300]: NVIDIA GeForce RTX 3080 [10de:2206]
01:00.1 Audio device [0403]: NVIDIA HD Audio [10de:1aef]
```

Bind both the GPU and its audio device to VFIO:

```shell
echo "options vfio-pci ids=10de:2206,10de:1aef disable_vga=1" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all && reboot
```
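If you'd rather not copy the IDs by hand, the ids= string can be assembled with a little text processing. A sketch using the sample lspci output above — the here-doc stands in for real lspci -nn output on the host:

```shell
# Pull the [vendor:device] IDs out of lspci-style lines and join them with commas.
# The here-doc is sample data; on the host, pipe `lspci -nn | grep -i nvidia` instead.
ids=$(awk -F'[][]' '/NVIDIA/{print $(NF-1)}' <<'EOF' | paste -s -d, -
01:00.0 VGA compatible controller [0300]: NVIDIA GeForce RTX 3080 [10de:2206]
01:00.1 Audio device [0403]: NVIDIA HD Audio [10de:1aef]
EOF
)
echo "options vfio-pci ids=$ids disable_vga=1"
```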

Verify:

```shell
lspci -nnk -d 10de:2206
```

Look for Kernel driver in use: vfio-pci. If you see nvidia or nouveau instead, blacklist them:

```shell
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u -k all && reboot
```

(modprobe blacklist entries take exact module names; globs like nvidia* are not expanded.)
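As a complement to blacklisting, modprobe's softdep mechanism makes vfio-pci load before the GPU drivers, so it claims the card even with those drivers still installed. A sketch of the modprobe.d entries (the filename is arbitrary):

```
# /etc/modprobe.d/vfio.conf
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
```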

Step 4: Create the VM

In the Proxmox web UI, these settings matter:

  • Machine: q35 — required for PCIe passthrough
  • BIOS: OVMF (UEFI) — add an EFI disk when prompted
  • CPU type: host — don’t use kvm64 or default
  • Memory: disable ballooning, set a fixed amount
  • OS: for Windows, add the VirtIO drivers ISO as a second CD-ROM
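Put together, these settings correspond to a VM config (/etc/pve/qemu-server/VMID.conf) roughly like the following — a sketch, where the storage name, disk name, and memory size are placeholders:

```
bios: ovmf
machine: q35
cpu: host
memory: 16384
balloon: 0
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1
scsihw: virtio-scsi-single
```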

Add the GPU

Hardware → Add → PCI Device, then check:

  • All Functions — passes GPU + audio together
  • Primary GPU — if this GPU handles the VM’s display output
  • PCI-Express — enables PCIe bandwidth
  • ROM-Bar — required for NVIDIA cards
Warning

If “Primary GPU” is checked, set vga: none in the VM config — otherwise the VM may boot to the virtual display instead of the GPU.
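In the VM config, that combination of checkboxes looks roughly like this — a sketch using the sample PCI address from Step 3 (pcie=1 corresponds to “PCI-Express”, x-vga=1 to “Primary GPU”):

```
hostpci0: 0000:01:00,pcie=1,x-vga=1
vga: none
```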

Step 5: NVIDIA — Fix Error 43

Note

NVIDIA drivers 465 and newer no longer trigger Error 43 in VMs. If you’re on a recent driver, skip this step. Only apply it if you’re on an older driver or still seeing the error.

Older consumer NVIDIA drivers detect virtualization and refuse to initialize. Hide the hypervisor by editing /etc/pve/qemu-server/[VMID].conf:

```
cpu: host,hidden=1
args: -cpu 'host,hv_vendor_id=NV43FIX,kvm=off'
```

hv_vendor_id can be any 12-character string. kvm=off hides the KVM signature from the driver.

For Windows VMs with NVIDIA GeForce Experience or other GPU software that crashes the VM, add:

```shell
echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf
```

Step 6: Tesla P40 for AI Workloads

The P40 is a datacenter card — 24GB GDDR5, no display output, pure compute. Around €100–150 used, it’s the best value-per-GB option for homelab AI workloads.

Key differences from gaming GPU passthrough:

  • No display output — don’t check “Primary GPU”; the VM uses VNC for management
  • No Error 43 workaround needed — datacenter drivers load cleanly in VMs
  • Passive cooling — no fan, relies on chassis airflow; in a desktop case point active airflow at the card
  • 250W TDP — full-length, full-height; verify your slot and PSU
Warning

The P40 has 24GB VRAM. With OVMF, the default 32GB MMIO window isn’t enough and the driver will fail with a BAR0 error. Using CPU type host (already recommended in Step 4) fixes this automatically — OVMF adjusts the MMIO window based on host CPU info. If you use a different CPU type, run:

```shell
qm set VMID --cpu x86-64-v2-AES,phys-bits=host,flags=+pdpe1gb
```

Install CUDA drivers inside the VM (Ubuntu 22.04):

```shell
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update && sudo apt install -y cuda-toolkit-12-4 nvidia-driver-550-server
```

Reboot and verify:

```shell
nvidia-smi
# Tesla P40 | 24576MiB | ...
```

Troubleshooting

Black Screen After VM Boot

  1. GPU not bound to vfio-pci — confirm Kernel driver in use: vfio-pci with lspci -nnk
  2. Machine type isn’t q35
  3. “Primary GPU” checked but vga: none not set (or vice versa)
  4. Wrong PCIe slot — some boards only expose IOMMU correctly on specific slots

Error 43 Still Showing

If the fix in Step 5 doesn’t work, check that you’re not mixing cpu: and args: across multiple lines — Proxmox only supports one args entry. Also try changing the hv_vendor_id string.

GPU Hangs After VM Shutdown (Reset Bug)

Some GPUs — notably AMD Vega and Navi cards — can’t complete a PCIe reset: the GPU hangs after VM shutdown and the host needs a full reboot. The vendor-reset module implements working reset quirks for the affected AMD cards:

```shell
apt install pve-headers-$(uname -r) build-essential git
git clone https://github.com/gnif/vendor-reset
cd vendor-reset && make && make install
echo "vendor-reset" >> /etc/modules
update-initramfs -u -k all && reboot
```

Bad IOMMU Groups

Warning

ACS override weakens IOMMU isolation. Use only if you understand the security implications.

Add to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:

```
pcie_acs_override=downstream,multifunction
```

```shell
update-grub && reboot
```

