Last Updated on December 23, 2022 by Thiago Crepaldi
If you are a Proxmox Virtual Environment (aka PVE) user, chances are you have a few LXC containers running on your server. If that is indeed your case and you are interested in leveraging GPU hardware inside them, this post is for you!
In this post, we are going to set up Intel GPU passthrough to those containers so that you can run multimedia servers (e.g. Plex, Emby, Jellyfin, etc.) or any other workload with hardware transcoding capabilities.
The configuration process has two parts, but it is simple, don’t worry. First we need to make sure the Proxmox server itself is using the proper GPU drivers, and then we do the actual passthrough to the desired LXC container.
Proxmox server configuration
Although it might be obvious to some, it is worth mentioning that before we can pass the Intel GPU through to the container, the Proxmox host itself must be properly configured so that all devices are recognized by the host OS and exposed downstream.
Selecting the correct Intel driver
Each Linux distribution may differ in terms of package naming for the same drivers. I will assume you are using either Debian Bullseye (aka 11.6) or Ubuntu Jammy (aka 22.04) for simplicity, but a quick search for the correct package names is all it takes to adapt this post to other distros.
For Intel GPUs, drivers are distributed according to both GPU generation and driver license (free vs. non-free). The free drivers are available by default and enable the hardware to decode video, whereas the non-free drivers require adding the “non-free” component to your APT sources and enable the hardware to both encode and decode video streams.
Therefore, you basically have four choices of packages, as the table below shows:

| Driver license | Hardware: Gen 8+ | Hardware: older (up to Gen 9) |
|---|---|---|
| Free | intel-media-va-driver | i965-va-driver |
| Non-free | intel-media-va-driver-non-free | i965-va-driver-shaders |
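If you end up needing one of the non-free packages on Debian Bullseye, enabling the “non-free” component boils down to appending it to your APT source lines, something along these lines (the mirrors below are only examples, keep whatever your installation already points to; on Ubuntu the non-free packages typically live in the “multiverse” component instead):

deb http://deb.debian.org/debian bullseye main contrib non-free
deb http://security.debian.org/debian-security bullseye-security main contrib non-free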
If you are unsure which GPU generation your CPU embeds, check the table below, copied from a nice Linux Reviews post:
[Table from Linux Reviews mapping Intel CPU families to their integrated GPU generation]
Based on the table above, you will be able to decide whether you need the i965-va-driver or intel-media-va-driver. Hopefully you were lucky enough to go with the newer gen 🙂
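By the way, if you don’t remember the exact CPU model sitting in your server, you can query it from the Proxmox terminal and then look it up in the table above; just a convenience, and the i5-6500 in the comment is only an illustration:

# lscpu | grep 'Model name'            # e.g. Intel(R) Core(TM) i5-6500 -> Skylake, i.e. the newer gen
# lspci -nn | grep -Ei 'vga|display'   # shows the integrated GPU and its PCI ID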
Finally, and just FYI, each hardware generation/driver combination supports specific CODECs, as shown in the following table, copied from a Wikipedia article:
[Table from Wikipedia listing the CODECs supported by each Intel GPU generation]
In this post, I am going to go with intel-media-va-driver-non-free as my Skylake CPU is supported by it. I opted for the non-free version because I might want to encode videos to lower qualities when network bandwidth is limited or the target hardware is not powerful enough to decode the original format.
Let’s get to it. On your Proxmox terminal, type:
# apt-get update
# apt-get install -y intel-media-va-driver-non-free # or whatever driver you need
# apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
# reboot
Once your server is back up, let’s see if the driver is properly loaded and gather some evidence of the hardware’s encoding/decoding capabilities:
# vainfo
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointFEI
Every piece of hardware is different, but on my system vainfo returned a bunch of entries, including support for some variants of MPEG2, H264, VC1, JPEG, VP8 and HEVC. VAEntrypointVLD entries refer to decoding support, while VAEntrypointEnc* entries refer to encoding support.
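If all you want is a quick yes/no on hardware encoding, you can filter that output for the encode entrypoints; just a small convenience on top of vainfo:

# vainfo 2>&1 | grep -E 'EncSlice|EncPicture'   # any hits mean the driver exposes hardware encoders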
When the GPU is in use, you can run the following command to check how busy it is during transcoding. My machine was idle, so it didn’t catch anything interesting, but we can try again during heavy use.
# intel_gpu_top
intel-gpu-top - 0/ 0 MHz; 100% RC6; 0.00 Watts; 0 irqs/s
IMC reads: 81 MiB/s
IMC writes: 4 MiB/s
ENGINE BUSY MI_SEMA MI_WAIT
Render/3D/0 0.00% | | 0% 0%
Blitter/0 0.00% | | 0% 0%
Video/0 0.00% | | 0% 0%
Video/1 0.00% | | 0% 0%
VideoEnhance/0 0.00% | | 0% 0%
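If you would like to see those Video engines move without waiting for a real Plex session, you can generate the load yourself. A minimal sketch, assuming you are willing to install ffmpeg on the host (or run this later inside the container) and that input.mp4 is any video file you have around:

# apt-get install -y ffmpeg   # only needed for this quick test
# ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
    -i input.mp4 -c:v h264_vaapi -b:v 2M test-output.mp4

While that runs, intel_gpu_top in another terminal should show the Video/0 engine climb well above 0%.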
LXC Container configuration
Now that the hardware is up and running on the Proxmox server, let’s take note of some information to feed to the Proxmox LXC container.
Getting Container info on Proxmox
This is actually the easiest step ever and probably didn’t need a section of its own, but for the sake of completeness, here we go. GPU passthrough is not a global setting, meaning you need to configure each container individually, which is why knowing the container ID is important.
Go to your Proxmox Web UI and, after logging in, select Folder View in the drop-down menu on the top left corner. Next, expand Datacenter >> LXC Container to view the list of available containers. Each row in the list is a container, listed in the format “ID (name)”. IDs start at 100 by default, although you can change that during container creation. Take note of the ID you are interested in.
Because we are going to modify the container, make sure it is off. Click on the container, then click the Shutdown button in the top right menu. Confirm that you are serious by clicking Yes.
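If you prefer the command line over the Web UI, Proxmox’s pct tool gives you the same information and lets you shut the container down in one go:

# pct list           # lists every LXC container with its VMID, status and name
# pct shutdown 100   # cleanly shuts down container 100 before we edit it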
Getting GPU info on Proxmox
Now that you know which LXC container you want to pass the GPU through to, let’s get some info about the GPU. On the Proxmox terminal, run:
# ls -lh /dev/dri
total 0
crw-rw-rw- 1 root root 226, 0 Dec 23 02:30 card0
crw-rw-rw- 1 root root 226, 128 Dec 23 02:30 renderD128
From the output above, take note of the device card0 with ID 226, 0 and the device renderD128 with ID 226, 128. These identify the GPU hardware on the system, and we will use them to set up the LXC container in the next steps.
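If you want to double-check those numbers, stat can print them directly; note that it reports them in hexadecimal, so 0xe2 is 226 and 0x80 is 128:

# stat -c '%n: major=0x%t minor=0x%T' /dev/dri/card0 /dev/dri/renderD128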
Updating LXC container spec on Proxmox
I will assume your LXC ID is 100, but use whatever ID is right for you. On a Proxmox terminal, let’s edit the LXC container configuration so that it can see the GPU hardware:
# vim /etc/pve/lxc/100.conf
Add the following to the bottom and save the file. If you are not a vim user, press Esc and then type “:x” (without the quotes) to save and exit the editor 🙂
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/100/mount_hook.sh
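A small caveat before we move on: the “cgroup2” prefix assumes your Proxmox host is running on the cgroup v2 hierarchy, which is the default on Proxmox 7. If your host is still on cgroup v1, the equivalent keys simply drop the “2”:

lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm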
You should note a couple of things about the lines you just added. The first two map the host’s GPU devices through their IDs 226:0 and 226:128, which came from the “ls -lh /dev/dri” output in the previous step. The third line enables the creation of /dev on the container, and the fourth asks Proxmox to run mount_hook.sh every time the container is started. /var/lib/lxc/100/mount_hook.sh doesn’t exist yet… Let’s create it:
# vim /var/lib/lxc/100/mount_hook.sh
and add the following to it
#!/bin/sh
# Runs on the host at container start; ${LXC_ROOTFS_MOUNT} is the container's root filesystem.
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
Don’t forget to adjust the device paths and IDs according to your system. The mkdir line creates /dev/dri inside the container’s root filesystem, while the two mknod lines create the GPU device nodes there with the same names and IDs as on the Proxmox host.
Lastly, make the file executable:
# chmod 755 /var/lib/lxc/100/mount_hook.sh
Configuring LXC container’s GPU driver
At this point, Proxmox exposes the GPU devices to the container. Go to Folder View >> LXC Container >> 100 and click the Start button on the top right. Next, click the LXC container’s Console button to open a terminal. After logging in, execute:
# ls -lh /dev/dri
total 0
crw-rw-rw- 1 root root 226, 0 Dec 23 02:30 card0
crw-rw-rw- 1 root root 226, 128 Dec 23 02:30 renderD128
The output should be the same as on the Proxmox server: two devices, card0 and renderD128, with the same IDs 226, 0 and 226, 128. This means your container now has access to the hardware, so let’s put it to good use! On the terminal, install the GPU drivers, the same ones installed on the Proxmox server:
# apt-get update
# apt-get install -y intel-media-va-driver-non-free # or whatever driver you need
# apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
You can run vainfo and verify that the output is also the same:
# vainfo
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointFEI
[Optional] Install extra CODECs on Debian/Ubuntu containers
Although Debian-based systems come with some CODECs, to maximize compatibility with whatever you want to play, I recommend installing the extra CODEC packages described in an It’s FOSS post:
# apt install -y software-properties-common # Install add-apt-repository
# add-apt-repository multiverse # Ensure 'multiverse' is enabled
# apt update # update package repository with new settings
# apt install -y ubuntu-restricted-extras # install extra CODECs
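Note that the multiverse component and the ubuntu-restricted-extras package only exist on Ubuntu. On a Debian container, a rough equivalent, assuming the non-free component is already enabled as we did on the host, would be:

# apt install -y libavcodec-extra   # extra codec libraries on Debian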
That is it. From now on, whenever Plex (or whatever you run) needs to decode or encode a video, it will detect and use the GPU automatically, no extra configuration needed. Enjoy!