Setting up Intel GPU passthrough on Proxmox LXC containers

Posted on December 23, 2022 by Thiago Crepaldi

Last Updated on December 30, 2024 by Thiago Crepaldi

If you are a Proxmox Virtual Environment (aka PVE) user, chances are you have a few LXC containers running on your server. If that is indeed your case and you are interested in leveraging GPU hardware in them, this post is for you!

In this post, we are going to set up Intel GPU passthrough on those containers so that you can run multimedia servers (e.g. Plex, Emby, Jellyfin, etc.) or any other workload with hardware transcoding capabilities.

DISCLAIMER: This post has been tested on an Intel NUC NUC8i3PNH, which features an i3-8145U CPU (Whiskey Lake family) with UHD Graphics, and on an Intel NUC NUC6i5SYH, which features an i5-6260U CPU (Skylake family) with Iris Graphics 540.

The configuration process is twofold, but simple, don’t worry. First we need to make sure the Proxmox server itself is using the proper GPU drivers, and then we do the actual passthrough to the desired LXC container.

Proxmox server configuration

Although it might be obvious to some, it is worth mentioning that before we can pass the Intel GPU through to the container, the Proxmox host itself must be properly configured so that all devices are recognized by the host OS and exposed downstream.

Selecting the correct Intel driver

Each Linux distribution may differ in terms of package naming for the same drivers. I will assume you are using either Debian Bullseye (aka 11.6) or Ubuntu Jammy (aka 22.04) for simplicity, but a quick search for the correct package names will let you adapt this post to other distros.

For Intel GPUs, drivers are distributed according to both GPU generation and driver license (aka free/non-free). The free drivers are available by default and will enable the hardware to decode video, whereas the non-free driver requires adding “non-free” to your APT sources in order to enable the hardware to both encode and decode video streams.
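
For reference, on the Debian-based Proxmox host this means adding the non-free component to your APT sources. A minimal sketch for Debian Bullseye (adjust the mirror and release names to match whatever your /etc/apt/sources.list already contains):

# vim /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main contrib non-free
deb http://deb.debian.org/debian bullseye-updates main contrib non-free
deb http://security.debian.org/debian-security bullseye-security main contrib non-free
# apt-get update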

Therefore, you basically have four choices of packages, as the table below shows:

Driver license \ Hardware generation    Gen 8+                            Older (up to Gen 9)
Free                                    intel-media-va-driver             i965-va-driver
Non-free                                intel-media-va-driver-non-free    i965-va-driver-shaders

Table with Intel GPU package names

If you are unsure which GPU generation your CPU embeds, check the table below copied from a nice Linux Reviews post:

Table with Intel GPU generations

Based on the table above, you will be able to decide whether you need the i965-va-driver or intel-media-va-driver. Hopefully you were lucky enough to go with the newer gen 🙂
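
If you don’t remember the exact GPU model off the top of your head, you can query it from the Proxmox shell and match the model name against the tables (the exact output string will of course differ on your hardware):

# lspci -nn | grep -Ei 'vga|display'   # prints the integrated GPU model name and PCI IDs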

Finally, and just FYI, each hardware/driver combination supports specific CODECs, as shown in the following table, copied from a Wikipedia article:

Table with encoding/decoding capabilities of each Intel GPU by generation
As a side note, according to Debian’s Video Acceleration wiki, for Nouveau and the various AMD drivers you should install the mesa-va-drivers package. You can also research how to install the NVIDIA proprietary drivers on Debian, or maybe on Ubuntu 20.04, or even Ubuntu 22.04. I won’t go over these variants, but try your luck and return here once the drivers are working on the Proxmox host!

In this post, I am going to go with intel-media-va-driver-non-free as my Skylake CPU is supported by it. I opted for the non-free version because I might want to encode videos to lower qualities when network bandwidth is limited or the target hardware is not powerful enough to decode the original format.

Let’s get to it. On your Proxmox terminal, type:

# apt-get update
# apt-get install -y intel-media-va-driver-non-free # or whatever driver you need
# apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
# reboot

Once your server is back, let’s see if the driver is properly loaded and get some evidence of the hardware’s encoding/decoding capabilities:

# vainfo

error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointFEI
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointFEI
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointFEI
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointFEI

Each piece of hardware is different, but on my system vainfo returned a bunch of stuff, including support for several variants of MPEG2, H264, VC1, JPEG, VP8 and HEVC. VAEntrypointVLD entries refer to decoding support, while VAEntrypointEnc* entries refer to encoding support.
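
If you mainly care about encoding support (for instance, to check whether H264 or HEVC can be encoded in hardware), you can simply filter the list; a quick one-liner against the output above:

# vainfo 2>/dev/null | grep VAEntrypointEnc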

When the GPU is in use, you can run the following command to check how busy it is during transcoding. My machine was idle, so it didn’t catch anything interesting, but we can try again during heavy use (see the sketch right after the output below).

# intel_gpu_top
intel-gpu-top -    0/   0 MHz;  100% RC6;  0.00 Watts;        0 irqs/s

      IMC reads:       81 MiB/s
     IMC writes:        4 MiB/s

          ENGINE      BUSY                                                                                MI_SEMA MI_WAIT
     Render/3D/0    0.00% |                             |      0%      0%
       Blitter/0    0.00% |                             |      0%      0%
         Video/0    0.00% |                             |      0%      0%
         Video/1    0.00% |                             |      0%      0%
  VideoEnhance/0    0.00% |                             |      0%      0%
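
To actually see those counters move, you need something transcoding on the GPU. Here is a minimal sketch using ffmpeg’s VA-API encoder, assuming ffmpeg is installed on the host and input.mp4 is any test video you have lying around (your media server will exercise the same path later):

# apt-get install -y ffmpeg
# ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
    -vf 'format=nv12,hwupload' -c:v h264_vaapi -an -f null - &
# intel_gpu_top   # the Video engine row should now show some activity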

LXC Container configuration

Now that the hardware is up and running on the Proxmox server, let’s take note of some information to feed to the Proxmox LXC container.

Getting Container info on Proxmox

This is actually the easiest step ever and probably didn’t need a section of its own, but for the sake of completeness, here we go. GPU passthrough is not a global setting, meaning you need to configure each container individually; this is why knowing the container ID is important.

Go to your Proxmox Web UI and, after logging in, select Folder View in the drop-down menu in the top left corner. Next, expand Datacenter >> LXC Container to view the list of available containers. Each row in the list is a container, shown in the format “ID (name)”. IDs start at 100 by default, although you can change them during container creation. Take note of the ID you are interested in.

Because we are going to modify the container, make sure it is off. Click on the container and then on the Shutdown button in the top right menu. Confirm that you are serious by clicking Yes.
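
If you prefer the command line over the web UI, the same can be done from the Proxmox shell with the pct tool (100 being the example container ID used throughout this post):

# pct list          # lists all containers with their IDs, status and names
# pct shutdown 100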

Getting GPU info on Proxmox

Now that you know which LXC container you want to pass the GPU through to, let’s get some info about the GPU. On the Proxmox terminal, run:

# ls -lh /dev/dri

total 0
crw-rw-rw- 1 root root 226,   0 Dec 23 02:30 card0
crw-rw-rw- 1 root root 226, 128 Dec 23 02:30 renderD128

From the output above, we have to take note of the device card0 with ID 226, 0 and the device renderD128 with ID 226, 128. These major/minor numbers identify the GPU hardware on the system, and we will use them to set up the LXC container in the next steps.

Updating LXC container spec on Proxmox

I will assume your LXC ID is 100, but you can use whatever ID is right for you. On a Proxmox terminal, let’s edit the LXC container so that it can see the GPU hardware:

# vim /etc/pve/lxc/100.conf

Add the following to the bottom and save the file. If you are not a vim user, type “:x” inside the editor (without the quotes) to save and exit the editor 🙂

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/100/mount_hook.sh

You should note a couple of things in this snippet. The first two lines allow the container to access the host’s GPU devices through their IDs 226:0 and 226:128, which came from the “ls -lh /dev/dri” output in the previous step. The third line enables the automatic creation of /dev in the container, and the fourth line asks Proxmox to run mount_hook.sh every time the container is started. /var/lib/lxc/100/mount_hook.sh doesn’t exist yet… Let’s create it:

# vim /var/lib/lxc/100/mount_hook.sh

and add the following to it

#!/bin/sh
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128

Don’t forget to adjust the device paths and IDs according to your system. The mkdir line creates /dev/dri inside the container’s root filesystem (LXC_ROOTFS_MOUNT is an environment variable LXC sets when it runs the hook), while the two mknod lines create the GPU device nodes with the same names and IDs as on the Proxmox host.
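
If your /dev/dri contains more (or differently numbered) devices, a slightly more generic hook can read the major/minor numbers from the host instead of hard-coding them. This is just a sketch, assuming the GNU stat and mknod tools shipped with Debian/Ubuntu (mknod accepts the 0x-prefixed hex values that stat prints):

#!/bin/sh
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
for dev in /dev/dri/card* /dev/dri/renderD*; do
    [ -c "$dev" ] || continue
    # recreate the host device node inside the container rootfs with the host's major/minor
    mknod -m 666 "${LXC_ROOTFS_MOUNT}${dev}" c $(stat -c '0x%t 0x%T' "$dev")
done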

Lastly, make the file executable:

# chmod 755 /var/lib/lxc/100/mount_hook.sh

Configuring LXC container’s GPU driver

At this point, Proxmox exposes the GPU devices to the container. Go to Folder View >> LXC Container >> 100 and click the Start button in the top right corner. Next, click on the LXC container’s Console button to open a terminal. After logging in, execute:

# ls -lh /dev/dri

total 0
crw-rw-rw- 1 root root 226,   0 Dec 23 02:30 card0
crw-rw-rw- 1 root root 226, 128 Dec 23 02:30 renderD128

The output should be the same as on the Proxmox server: two devices, card0 and renderD128, with the same IDs 226, 0 and 226, 128. This means your container now has access to the hardware, so let’s put it to good use! On the container’s terminal, install the GPU drivers, the same ones installed on the Proxmox server:

# apt-get update
# apt-get install -y intel-media-va-driver-non-free # or whatever driver you need
# apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
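
Just like on the host, the non-free driver package is only visible once the corresponding repository component is enabled inside the container (non-free on Debian; on Ubuntu it typically lives in multiverse). If apt cannot find the package, adjust the container’s APT sources first; a quick way to check:

# apt-cache policy intel-media-va-driver-non-free   # should list a candidate version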

You can run vainfo and verify that the output is also the same as on the host:

# vainfo

error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointFEI
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointFEI
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointFEI
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointFEI

[Optional] Install extra CODECs on Debian/Ubuntu containers

Although Debian-based systems ship with some CODECs, to maximize compatibility with whatever you want to play I recommend installing extra CODEC packages, as described in an It’s FOSS post (note that the multiverse repository and ubuntu-restricted-extras are Ubuntu-specific):

# apt install -y software-properties-common # Install add-apt-repository
# add-apt-repository multiverse # Ensure 'multiverse' is enabled
# apt update # update package repository with new settings
# apt install -y ubuntu-restricted-extras # install extra CODECs

That is it. From now on, if your Plex or whatever needs to decode or encode a video, it will automatically detect and use the GPU. No extra configuration needed. Enjoy it!
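
If you want one last sanity check from inside the container, and you happen to have a standalone ffmpeg installed there, you can confirm that VA-API shows up as an available hardware acceleration method (most media servers bundle their own ffmpeg build, so this is purely optional):

# ffmpeg -hide_banner -hwaccels   # 'vaapi' should appear in the list of methods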

15 thoughts on “Setting up Intel GPU passthrough on Proxmox LXC containers”

  1. Eric A. says:
    March 26, 2023 at 7:42 PM

    Thank you sooooooooo much for this easy-to-follow tutorial! It worked perfectly for me. (I have a Proxmox 7.2 setup on a small NUC PC with a Jasper Lake Celeron N5105!)

    Reply
  2. Kaman says:
    May 11, 2023 at 11:05 PM

    Does the container have to be a privileged container? Can I use an unprivileged container?

    Reply
    1. Thiago Crepaldi says:
      May 12, 2023 at 11:16 AM

      Unprivileged works

      Reply
      1. SKAL says:
        October 3, 2024 at 4:39 AM

        I’ve tried on an UNprivileged container and it does not work :-(. On a Privileged one it works like a charm…

        Reply
  3. Kaman says:
    May 13, 2023 at 1:33 PM

    After making these changes, the container cannot be started. Here is the error message:

    safe_mount: 1220 Invalid argument – Failed to mount “/sys/kernel/debug” onto “/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/kernel/debug”
    run_buffer: 322 Script exited with status 1
    lxc_setup: 4445 Failed to run autodev hooks
    do_start: 1272 Failed to setup container “103”
    sync_wait: 34 An error occurred in another process (expected sequence number 4)
    __lxc_start: 2107 Failed to spawn container “103”
    TASK ERROR: startup for container ‘103’ failed

    Here is my current setup:

    root@home:~# cat /etc/pve/lxc/103.conf
    arch: amd64
    cores: 2
    features: nesting=1
    hostname: jellyfin
    memory: 2048
    net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=8A:5E:E5:7B:33:57,ip=192.168.0.223/24,type=veth
    onboot: 1
    ostype: ubuntu
    rootfs: local-zfs:subvol-103-disk-0,size=16G
    swap: 2048
    unprivileged: 1
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.autodev: 1
    lxc.hook.autodev: /var/lib/lxc/103/mount_hook.sh

    root@home:~# cat /var/lib/lxc/103/mount_hook.sh
    mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
    mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
    mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128

    root@home:~# ls /var/lib/lxc/103/mount_hook.sh -l
    -rwxr-xr-x 1 root root 154 May 13 13:13 /var/lib/lxc/103/mount_hook.sh

    Reply
    1. Jaynostop says:
      May 31, 2023 at 2:34 AM

      Same here.

      Reply
    2. Uninvited says:
      June 3, 2023 at 8:26 PM

      Same issue on an unprivileged container. Haven’t tried this with privileged.

      Reply
  4. Grunchy says:
    August 23, 2023 at 3:31 AM

    Funny I got the same error “failed to run autodev hooks”. Unprivileged, same as the rest.

    run_buffer: 322 Script exited with status 1
    lxc_setup: 4445 Failed to run autodev hooks
    do_start: 1272 Failed to setup container “100”
    sync_wait: 34 An error occurred in another process (expected sequence number 4)
    __lxc_start: 2107 Failed to spawn container “100”
    TASK ERROR: startup for container ‘100’ failed

    Setup:
    # cat /etc/pve/lxc/100.conf
    arch: amd64
    cores: 2
    features: nesting=1
    hostname: Datto
    memory: 6144
    nameserver: 1.1.1.1
    net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.254,hwaddr=62:FF:A2:38:8E:03,ip=192.168.1.30/24,type=veth
    ostype: ubuntu
    rootfs: ZFS:subvol-100-disk-0,size=150G
    swap: 512
    unprivileged: 1
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.autodev: 1
    lxc.hook.autodev: /var/lib/lxc/100/mount_hook.sh

    Reply
  5. Grunchy says:
    August 23, 2023 at 1:46 PM

    I tried something else (since this could be a “blacklist” issue):

    # lspci -v
    00:02.0 VGA compatible controller: Intel Corporation Device 3ea1 (rev 02) (prog-if 00 [VGA controller])
    DeviceName: Onboard – Video
    Subsystem: Intel Corporation Device 2212
    Flags: bus master, fast devsel, latency 0, IRQ 126, IOMMU group 1
    Memory at a0000000 (64-bit, non-prefetchable) [size=16M]
    Memory at 90000000 (64-bit, prefetchable) [size=256M]
    I/O ports at 3000 [size=64]
    Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
    Capabilities: [40] Vendor Specific Information: Len=0c
    Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
    Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
    Capabilities: [d0] Power Management version 2
    Capabilities: [100] Process Address Space ID (PASID)
    Capabilities: [200] Address Translation Service (ATS)
    Capabilities: [300] Page Request Interface (PRI)
    Kernel driver in use: i915
    Kernel modules: i915

    So, I tried this:
    # echo “blacklist i915” >> /etc/modprobe.d/blacklist.conf
    # update-initramfs -u -k all
    # reboot

    Funny thing, lspci -v gives the same “Kernel driver in use: i915” (despite being “blacklisted”) and same error with the Ubuntu 22.04 LXC.

    run_buffer: 322 Script exited with status 1
    lxc_setup: 4445 Failed to run autodev hooks
    do_start: 1272 Failed to setup container “100”
    sync_wait: 34 An error occurred in another process (expected sequence number 4)
    __lxc_start: 2107 Failed to spawn container “100”
    TASK ERROR: startup for container ‘100’ failed

    I guess I’ll try again with “privileged” LXC? Anybody else getting anywhere?

    Reply
  6. Grunchy says:
    August 23, 2023 at 5:39 PM

    I think the problem is with this construct:
    # nano /var/lib/lxc/100/mount_hook.sh
    mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
    mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
    mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128

    Specifically, I think there is no such variable: ${LXC_ROOTFS_MOUNT}

    So I changed “/var/lib/lxc/100/mount_hook.sh” to this:
    mkdir -p /ZFS/subvol-100-disk-0/dev/dri
    mknod -m 666 /ZFS/subvol-100-disk-0/dev/dri/card0 c 226 0
    mknod -m 666 /ZFS/subvol-100-disk-0/dev/dri/renderD128 c 226 128

    Proxmox still won’t start up the container, and it gives the same error message “Failed to run autodev hooks”.

    So I disabled that hook altogether:
    # nano /etc/pve/lxc/100.conf
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.autodev: 1
    #lxc.hook.autodev: /var/lib/lxc/100/mount_hook.sh

    NOW the container can boot, but if I go
    # ls -lh /dev/dri
    ls: cannot access ‘/dev/dri’: No such file or directory

    So, from the shell in my Linux container:
    #mkdir -p /dev/dri
    #mknod -m 666 /dev/dri/card0 c 226 0
    #mknod -m 666 /dev/dri/renderD128 c 226 128

    All of a sudden here we go:
    # ls -lh /dev/dri
    total 0
    crw-rw-rw- 1 root root 226, 0 Aug 23 20:58 card0
    crw-rw-rw- 1 root root 226, 128 Aug 23 20:58 renderD128

    # vainfo
    error: can’t connect to X server!
    libva info: VA-API version 1.15.0
    libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
    libva info: Found init function __vaDriverInit_1_14
    libva info: va_openDriver() returns 0
    vainfo: VA-API version: 1.15 (libva 2.12.0)
    vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics – 22.4.3 ()
    vainfo: Supported profile and entrypoints
    VAProfileMPEG2Simple : VAEntrypointVLD
    VAProfileMPEG2Main : VAEntrypointVLD
    VAProfileH264Main : VAEntrypointVLD
    VAProfileH264Main : VAEntrypointEncSliceLP
    VAProfileH264High : VAEntrypointVLD
    VAProfileH264High : VAEntrypointEncSliceLP
    VAProfileJPEGBaseline : VAEntrypointVLD
    VAProfileJPEGBaseline : VAEntrypointEncPicture
    VAProfileH264ConstrainedBaseline: VAEntrypointVLD
    VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
    VAProfileVP8Version0_3 : VAEntrypointVLD
    VAProfileHEVCMain : VAEntrypointVLD
    VAProfileHEVCMain10 : VAEntrypointVLD
    VAProfileVP9Profile0 : VAEntrypointVLD
    VAProfileVP9Profile2 : VAEntrypointVLD

    # lspci -v
    00:02.0 VGA compatible controller: Intel Corporation Whiskey Lake-U GT1 [UHD Graphics 610] (rev 02) (prog-if 00 [VGA controller])
    DeviceName: Onboard – Video
    Subsystem: Intel Corporation Device 2212
    Flags: bus master, fast devsel, latency 0, IRQ 126, IOMMU group 1
    Memory at a0000000 (64-bit, non-prefetchable) [size=16M]
    Memory at 90000000 (64-bit, prefetchable) [size=256M]
    I/O ports at 3000 [size=64]
    Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
    Capabilities: [40] Vendor Specific Information: Len=0c
    Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
    Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
    Capabilities: [d0] Power Management version 2
    Capabilities: [100] Process Address Space ID (PASID)
    Capabilities: [200] Address Translation Service (ATS)
    Capabilities: [300] Page Request Interface (PRI)
    Kernel driver in use: i915

    So there’s some issue with the way this “autodev hooks” is written, there’s a mystery surrounding this Proxmox container variable “${LXC_ROOTFS_MOUNT}” that doesn’t seem to work.

    I wonder what the solution is?

    Reply
  7. Grunchy says:
    August 24, 2023 at 12:02 PM

    OK: first of all, I have found this only works with “privileged” container.
    That’s “Unprivileged container?” selection cleared.

    In Proxmox, configure the container with the following extra lines:
    # nano /etc/pve/lxc/100.conf
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.autodev: 1

    When you log in to the container as root, create a new file:
    # nano mount_hook.sh
    mkdir -p /dev/dri
    mknod -m 666 /dev/dri/card0 c 226 0
    mknod -m 666 /dev/dri/renderD128 c 226 128

    # chmod +x mount_hook.sh

    crontab -e
    (select 1 for nano editor)
    Add this line:
    @reboot /root/mount_hook.sh

    After that, do:
    # apt-get update
    # apt-get install -y intel-media-va-driver # or whatever driver you need
    # apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
    # reboot

    At this point I was able to go:
    # vainfo
    error: can’t connect to X server!
    libva info: VA-API version 1.10.0
    libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
    libva info: Found init function __vaDriverInit_1_10
    libva info: va_openDriver() returns 0
    vainfo: VA-API version: 1.10 (libva 2.10.0)
    vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics – 21.1.1 ()
    vainfo: Supported profile and entrypoints
    VAProfileMPEG2Simple : VAEntrypointVLD
    VAProfileMPEG2Main : VAEntrypointVLD
    VAProfileH264Main : VAEntrypointVLD
    VAProfileH264Main : VAEntrypointEncSliceLP
    VAProfileH264High : VAEntrypointVLD
    VAProfileH264High : VAEntrypointEncSliceLP
    VAProfileJPEGBaseline : VAEntrypointVLD
    VAProfileJPEGBaseline : VAEntrypointEncPicture
    VAProfileH264ConstrainedBaseline: VAEntrypointVLD
    VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
    VAProfileVP8Version0_3 : VAEntrypointVLD
    VAProfileHEVCMain : VAEntrypointVLD
    VAProfileHEVCMain10 : VAEntrypointVLD
    VAProfileVP9Profile0 : VAEntrypointVLD
    VAProfileVP9Profile2 : VAEntrypointVLD

    Finally works!

    Reply
  8. Doug says:
    December 28, 2023 at 5:45 PM

    You *can* use Intel GPU within an unprivileged container, and this method can be simplified a little bit. I’ve been using mine with an unprivileged Emby container for quite a long time. You do, of course, have to map a gid in addition to allowing the devices. That’s the point of unprivileged, and you’ll have the peace of mind of improved host security.

    You do not need the card0 device or the post-hook script unless you feel you really need the vainfo or intel_gpu_top tools available from within the container itself. They aren’t needed for ffmpeg within the container to access the GPU, transcoding to work, or for the software and GPU to just do their thing. Troubleshooting tools within the container are useful, but not necessary.

    Assuming you have installed the appropriate packages on the host, as detailed here, the driver package within the container, and the gid for the device on your host is the same as the example (gid 128), this will work in an unprivileged container’s config file:

    unprivileged: 1
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
    lxc.idmap: g 0 100000 106
    lxc.idmap: g 106 103 1
    lxc.idmap: g 107 100107 1893

    Note that I don’t bother mapping uid 226, because gid 128 has the same rw access to the device as uid 226, so why bother?

    I also have the nesting and fuse options enabled that I need for other things, and I don’t think they are needed for transcoding, but mentioning them here just in case.

    Reply
    1. Doug says:
      December 28, 2023 at 6:11 PM

      Erp, slight error. The gid owning /dev/dri/renderD128 on my system is actually 103 (the “render” group on my host) and that’s the gid I mapped in my example. Not 128. Just pretend I wrote “103” everywhere I wrote “128”. Sorry for the confusion.

      Here’s a quick screenshot to show that it does work, though. 🙂

      https://share.icloud.com/photos/04dB23TqfkJxbacHStjOl9cAQ

      Reply
  9. Miguel says:
    March 13, 2024 at 2:24 PM

    Hi, with this can I use the HDMI output from the Intel NUC to a TV from the LXC, or is the iGPU passthrough only for using the iGPU inside the LXC?

    Reply
  10. Pingback: Enabling Hardware Transcoding for the BeeLink N100 on a Debian 12 Container using Proxmox – Vastagon Blog
