This article will show you how simple it is to enable GPU passthrough on your Proxmox VE host. My best experiences have been with AMD GPUs, specifically the AMD Radeon Vega 56 and the AMD Radeon RX 580. Here we will use an integrated Intel GPU, though, in an old Intel NUC. This guide can also be used to pass through other devices such as NICs.

This article assumes your hardware has the necessary support for virtualization, IOMMU, VFIO, and so on, and that your hardware is running Proxmox VE 6.0 or higher.
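
If you want to double-check that support before you start, two quick commands will tell you whether the CPU exposes its virtualization extensions (the count just needs to be greater than zero) and whether the firmware advertises VT-d; the second check is Intel-centric, matching the NUC used here:

$ grep -E -c '(vmx|svm)' /proc/cpuinfo
$ sudo dmesg | grep -i dmar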

The process for enabling GPU passthrough on other Debian-based Linux distributions (including Debian itself) should be similar.

Retrieve GPU device IDs

Run the following command to get the device IDs (note: if lspci is missing on your system, you can install it by running sudo apt install pciutils):

$ sudo lspci -nnk | grep "VGA\|Audio"

It should produce a result similar to this:

00:02.0 VGA compatible controller [0300]: Intel Corporation Haswell-ULT Integrated Graphics Controller [8086:0a16] (rev 09)
00:03.0 Audio device [0403]: Intel Corporation Haswell-ULT HD Audio Controller [8086:0a0c] (rev 09)
	Subsystem: Intel Corporation Haswell-ULT HD Audio Controller [8086:2054]
00:1b.0 Audio device [0403]: Intel Corporation 8 Series HD Audio Controller [8086:9c20] (rev 04)
	Subsystem: Intel Corporation 8 Series HD Audio Controller [8086:2054]

What we are interested in are the vendor and device IDs in square brackets at the end of lines 1 and 2, 8086:0a16 and 8086:0a0c, respectively. Make sure that the device type is VGA compatible controller. The first value is the device ID for the GPU and the second value is the device ID for its associated audio device. It’s not strictly needed in our example case, but on some systems, for example with the AMD GPUs mentioned above, you’ll have to pass through both the GPU and the associated audio device for it to work properly.
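
On discrete cards the GPU and its HDMI audio device usually appear as two functions of the same PCI slot (for example 01:00.0 and 01:00.1), so you can list both IDs at once by querying the slot itself. The 01:00 address below is purely illustrative; use whatever address lspci reported for your card:

$ sudo lspci -nn -s 01:00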

Enable device passthrough

Load the modules vfio, vfio_iommu_type1, vfio_pci and vfio_virqfd, and enable VFIO by adding the device IDs as options for the vfio-pci module in modprobe. The commands below pipe through tee so that the redirection also works when run via sudo:

$ sudo echo "vfio" > \
  /etc/modules-load.d/vfio.conf
$ sudo echo "vfio_iommu_type1" >> \
  /etc/modules-load.d/vfio.conf
$ sudo echo "vfio_pci" >> \
  /etc/modules-load.d/vfio.conf
$ sudo echo "vfio_virqfd" >> \
  /etc/modules-load.d/vfio.conf
$ sudo echo "options vfio-pci ids=8086:0a16,8086:2054" > \
  /etc/modprobe.d/vfio.conf
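
Before rebuilding the initramfs you can quickly confirm that both files contain what you expect:

$ cat /etc/modules-load.d/vfio.conf /etc/modprobe.d/vfio.conf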

Update the initramfs images using update-initramfs:

$ sudo update-initramfs -u -k all

Edit the GRUB bootloader configuration by running:

$ sudo vi /etc/default/grub

Make sure that the line that starts with GRUB_CMDLINE_LINUX_DEFAULT has intel_iommu=on (if using an Intel CPU) or amd_iommu=on (if using an AMD CPU). It should look like this on a newly installed Proxmox VE 6.0 host:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Then apply your new GRUB bootloader configuration by running:

$ sudo update-grub

And finally, reboot the host with sudo reboot. Next we’ll add the GPU to an already existing virtual machine using the Proxmox VE 6.0 web interface.
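
Before doing that, it’s worth a quick sanity check that the kernel picked up the new parameters and that vfio-pci claimed the GPU after the reboot (00:02.0 is the address from our lspci output; substitute your own):

$ cat /proc/cmdline
$ sudo dmesg | grep -e DMAR -e IOMMU
$ sudo lspci -nnk -s 00:02.0 | grep "Kernel driver in use"

The first command should include intel_iommu=on, and the last one should report vfio-pci instead of the usual i915 driver.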

Add a GPU to a virtual machine

Select your virtual machine in the web interface under your newly configured host. Power down the VM. Then go to “Hardware”, open the “Add” menu and choose “PCI Device”.

In this example the GPU is called “Haswell-ULT Integrated Graphics Controller” (remember the lspci -nnk command from before? The name is the same!). Select the GPU, check the boxes for “All Functions” and “Primary GPU”, and finally press the “Add” button.

Also, the Proxmox VE documentation recommends setting the machine type to q35, using OVMF instead of SeaBIOS, and choosing PCIe instead of PCI.
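
If you prefer the host shell to the web interface, qm can apply the same settings. This is just a sketch assuming a VM with ID 100: giving the slot without a function number (00:02) matches the “All Functions” checkbox, x-vga=1 corresponds to “Primary GPU”, and pcie=1 requires the q35 machine type:

$ sudo qm set 100 --machine q35 --bios ovmf
$ sudo qm set 100 --hostpci0 00:02,pcie=1,x-vga=1

Keep in mind that switching to OVMF also expects an EFI disk to be added to the VM to store its firmware settings.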

Once added, boot up your virtual machine and you are ready to go!
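
Inside a Linux guest, the same lspci approach used on the host is an easy way to confirm the GPU actually made it through:

$ sudo lspci -nnk | grep VGA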

About the Author

Pauraic Morrissey

Born and bred Irishman with a slight obsession with everything IT.
