Last Updated on December 23, 2022 by Thiago Crepaldi
If you are a Proxmox Virtual Environment (aka PVE) user, chances are you have a few LXC containers running on your server. If that is indeed your case and you are interested in leveraging GPU hardware in them, this post is for you!
In this post, we are going to set up Intel GPU passthrough on those containers so that you can run multimedia servers (e.g. Plex, Emby, Jellyfin, etc.) or anything else with hardware transcoding capabilities.
The configuration process is twofold, but simple, don’t worry. First we make sure the Proxmox server itself is using the proper GPU drivers, and then we do the actual passthrough to the desired LXC container.
Proxmox server configuration
Although it might be obvious to some, it is worth mentioning that before we can pass the Intel GPU through to a container, the Proxmox host itself must be properly configured so that the devices are recognized by the host OS and can be exposed downstream.
Selecting the correct Intel driver
Each Linux distribution may name packages for the same drivers differently. For simplicity, I will assume you are using either Debian Bullseye (aka 11.6) or Ubuntu Jammy (aka 22.04), but a quick search for the correct package names should let you adapt this post to other distros.
For Intel GPUs, drivers are distributed according to both GPU generation and driver license (aka free/non-free). The free drivers are available by default and will enable the hardware to decode video, whereas the non-free driver requires adding “non-free” to your APT sources in order to enable the hardware to both encode and decode video streams.
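On Debian, the non-free component is not enabled out of the box, so before you can install the non-free driver you need to add it to your APT sources. A minimal sketch, assuming the default Debian Bullseye mirrors (adjust to your own sources file); on Ubuntu, the non-free variant typically comes from the multiverse component instead:
# vim /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main contrib non-free
deb http://deb.debian.org/debian bullseye-updates main contrib non-free
deb http://security.debian.org/debian-security bullseye-security main contrib non-free
# apt-get update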
Therefore, you basically have four choices of packages, as the table below shows (rows are the driver license, columns the hardware generation):
Driver license | Gen 8+                          | Older (up to Gen 9)
Free           | intel-media-va-driver           | i965-va-driver
Non-free       | intel-media-va-driver-non-free  | i965-va-driver-shaders
If you are unsure which GPU generation your CPU embeds, check the table below copied from a nice Linux Reviews post:
Based on the table above, you will be able to decide whether you need the i965-va-driver or intel-media-va-driver. Hopefully you were lucky enough to go with the newer gen 🙂
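If you prefer checking from the terminal instead, the following commands (lspci comes from the pciutils package) print the CPU model and the integrated GPU, which you can then match against the generation tables:
# grep "model name" /proc/cpuinfo | head -n 1   # CPU model, e.g. Intel(R) Core(TM) i5-6500
# lspci -nn | grep -i vga                       # integrated GPU model and PCI ID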
Finally, and just FYI, each hardware generation/driver supports a specific set of CODECs, as shown in the following table copied from a Wikipedia article:
In this post, I am going to go with intel-media-va-driver-non-free as my Skylake CPU is supported by it. I opted for the non-free version because I might want to encode videos to lower qualities when network bandwidth is limited or the target hardware is not powerful enough to decode the original format.
Let’s get to it. On your Proxmox terminal, type:
# apt-get update
# apt-get install -y intel-media-va-driver-non-free # or whatever driver you need
# apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
# reboot
Once your server is back, let’s see whether the driver is properly loaded and get some evidence of the hardware’s encoding/decoding capabilities:
# vainfo
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointFEI
Every GPU is different, but on my system vainfo returned a bunch of entries, including support for some variants of MPEG2, H264, VC1, JPEG, VP8 and HEVC. VAEntrypointVLD entries refer to decoding, while VAEntrypointEnc* entries refer to encoding support.
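For instance, to quickly check whether your GPU can encode at all, you can filter the output for the encode entrypoints (a small convenience, not required for the setup):
# vainfo 2>/dev/null | grep -i "VAEntrypointEnc"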
When the GPU is in use, you can run the following command to check how busy it is during transcoding. My machine was idle, so it didn’t capture anything interesting, but you can run it again during heavy use.
# intel_gpu_top
intel-gpu-top - 0/ 0 MHz; 100% RC6; 0.00 Watts; 0 irqs/s
IMC reads: 81 MiB/s
IMC writes: 4 MiB/s
ENGINE BUSY MI_SEMA MI_WAIT
Render/3D/0 0.00% | | 0% 0%
Blitter/0 0.00% | | 0% 0%
Video/0 0.00% | | 0% 0%
Video/1 0.00% | | 0% 0%
VideoEnhance/0 0.00% | | 0% 0%
LXC Container configuration
Now that the hardware is up and running on the Proxmox server, let’s take note of some information to feed to the Proxmox LXC container.
Getting Container info on Proxmox
This is actually the easiest step ever and probably didn’t need a section of its own, but for the sake of completeness, here we go. GPU passthrough is not a global setting: you need to configure each container individually, which is why knowing the container ID is important.
Go to your Proxmox Web UI and, after logging in, select Folder View in the drop-down menu at the top left corner. Next, expand Datacenter >> LXC Container to view the list of available containers. Each row in the list is a container, shown in the format “ID (name)”. IDs start at 100 by default, although you can change that during container creation. Take note of the ID you are interested in.
Because we are going to modify the container, make sure it is off. Click on the container, then click the Shutdown button in the top right menu and confirm that you are serious by clicking Yes.
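If you prefer the command line over the Web UI, Proxmox’s pct tool can do both steps; a quick sketch, assuming the container ID turns out to be 100:
# pct list            # lists all containers with their IDs, names and status
# pct shutdown 100    # cleanly shuts down container 100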
Getting GPU info on Proxmox
Now that you know which LXC container you want to pass the GPU through to, let’s get some info about the GPU. On the Proxmox terminal, run:
# ls -lh /dev/dri
total 0
crw-rw-rw- 1 root root 226, 0 Dec 23 02:30 card0
crw-rw-rw- 1 root root 226, 128 Dec 23 02:30 renderD128
From the output above, take note of the device card0 with ID 226, 0 and the device renderD128 with ID 226, 128. These major/minor numbers identify the GPU hardware on the system, and we will use them to set up the LXC container in the next steps.
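If you would rather not eyeball the ls output, stat can print the major/minor numbers directly; note that the %t and %T format specifiers print them in hexadecimal (0xe2 = 226, 0x80 = 128):
# stat -c '%n: major 0x%t minor 0x%T' /dev/dri/card0 /dev/dri/renderD128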
Updating LXC container spec on Proxmox
I will assume your LXC ID is 100, but use whatever ID is right for you. On a Proxmox terminal, let’s edit the LXC container’s configuration so that it can see the GPU hardware:
# vim /etc/pve/lxc/100.conf
Add the following lines to the bottom and save the file. If you are not familiar with vim, type “:x” inside the editor (without the quotes) to save and exit 🙂
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/100/mount_hook.sh
You should note a couple of things in this config. The first two lines allow the container to access the host’s GPU devices through their IDs 226:0 and 226:128, which came from the “ls -lh /dev/dri” output in the previous step. The third line enables the automatic creation of /dev inside the container, and the fourth line asks Proxmox to run mount_hook.sh every time the container is started. /var/lib/lxc/100/mount_hook.sh doesn’t exist yet… Let’s create it:
# vim /var/lib/lxc/100/mount_hook.sh
and add the following to it
#!/bin/sh
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
Don’t forget to adjust the device paths and IDs according to your system. The mkdir line creates /dev/dri inside the container’s root filesystem (exposed to the hook via ${LXC_ROOTFS_MOUNT}), while the two mknod lines create the GPU device nodes there with the same names and IDs as on the Proxmox host.
Lastly, make the file executable:
# chmod 755 /var/lib/lxc/100/mount_hook.sh
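Optionally, you can start the container once in the foreground with debug logging before going back to the Web UI; if the autodev hook fails, the reason will show up in the log file. This is just a troubleshooting aid using the lxc-start utility that Proxmox ships:
# lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log   # run container 100 in the foreground, log to /tmp/lxc-100.log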
Configuring LXC container’s GPU driver
At this point, Proxmox exposes the GPU devices to the container. Go to Folder View >> LXC Container >> 100 and click the Start button at the top right. Next, click the container’s Console button to open a terminal. After logging in, execute:
# ls -lh /dev/dri
total 0
crw-rw-rw- 1 root root 226, 0 Dec 23 02:30 card0
crw-rw-rw- 1 root root 226, 128 Dec 23 02:30 renderD128
The output should be the same as on the Proxmox server: two devices, card0 and renderD128, with the same IDs 226, 0 and 226, 128. This means your container now has access to the hardware, so let’s put it to good use! On the container’s terminal, install the GPU drivers, the same ones installed on the Proxmox server:
# apt-get update
# apt-get install -y intel-media-va-driver-non-free # or whatever driver you need
# apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
You can run vainfo and verify the output is the same as on the host:
# vainfo
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointFEI
[Optional] Install extra CODECs on Debian/Ubuntu containers
Although Debian-based systems come with some CODECs, to maximize compatibility with whatever you want to play I recommend installing extra CODEC packages, as described in an It’s FOSS post (note that the multiverse repository and the ubuntu-restricted-extras package are Ubuntu-specific):
# apt install -y software-properties-common # Install add-apt-repository
# add-apt-repository multiverse # Ensure 'multiverse' is enabled
# apt update # update package repository with new settings
# apt install -y ubuntu-restricted-extras # install extra CODECs
That is it. From now on, whenever Plex (or whatever you run) needs to decode or encode a video, it will automatically detect and use the GPU. No extra configuration needed. Enjoy!
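If you want to confirm hardware transcoding actually works inside the container before pointing your media server at it, a quick manual test with ffmpeg and VAAPI does the trick. This is only a sketch: it assumes ffmpeg is installed in the container and that you have some sample file called input.mkv at hand.
# apt-get install -y ffmpeg
# ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mkv -c:v h264_vaapi -b:v 4M output.mkv
# intel_gpu_top   # run in a second shell: the Video engines should show activity while the transcode runs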
Thank you sooooooooo much for this easy-to-follow tutorial! It worked perfectly for me. (I have a Proxmox 7.2 setup with a small NUC PC with a Jasper Lake Celeron N5105!!!)
Does the container have to be a privileged container? Can I use an unprivileged container?
Unprivileged works
I’ve tried on an UNprivileged container and it does not work :-(. On a Privileged one it works like a charm…
After making these changes, the container cannot be started. Here is the error message:
safe_mount: 1220 Invalid argument – Failed to mount “/sys/kernel/debug” onto “/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/kernel/debug”
run_buffer: 322 Script exited with status 1
lxc_setup: 4445 Failed to run autodev hooks
do_start: 1272 Failed to setup container “103”
sync_wait: 34 An error occurred in another process (expected sequence number 4)
__lxc_start: 2107 Failed to spawn container “103”
TASK ERROR: startup for container ‘103’ failed
Here is my current setup:
root@home:~# cat /etc/pve/lxc/103.conf
arch: amd64
cores: 2
features: nesting=1
hostname: jellyfin
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=8A:5E:E5:7B:33:57,ip=192.168.0.223/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-zfs:subvol-103-disk-0,size=16G
swap: 2048
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/103/mount_hook.sh
root@home:~# cat /var/lib/lxc/103/mount_hook.sh
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
root@home:~# ls /var/lib/lxc/103/mount_hook.sh -l
-rwxr-xr-x 1 root root 154 May 13 13:13 /var/lib/lxc/103/mount_hook.sh
Same here.
Same issue on an unprivileged container. Haven’t tried this with privileged.
Funny I got the same error “failed to run autodev hooks”. Unprivileged, same as the rest.
run_buffer: 322 Script exited with status 1
lxc_setup: 4445 Failed to run autodev hooks
do_start: 1272 Failed to setup container “100”
sync_wait: 34 An error occurred in another process (expected sequence number 4)
__lxc_start: 2107 Failed to spawn container “100”
TASK ERROR: startup for container ‘100’ failed
Setup:
# cat /etc/pve/lxc/100.conf
arch: amd64
cores: 2
features: nesting=1
hostname: Datto
memory: 6144
nameserver: 1.1.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.254,hwaddr=62:FF:A2:38:8E:03,ip=192.168.1.30/24,type=veth
ostype: ubuntu
rootfs: ZFS:subvol-100-disk-0,size=150G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/100/mount_hook.sh
I tried something else (since this could be a “blacklist” issue):
# lspci -v
00:02.0 VGA compatible controller: Intel Corporation Device 3ea1 (rev 02) (prog-if 00 [VGA controller])
DeviceName: Onboard – Video
Subsystem: Intel Corporation Device 2212
Flags: bus master, fast devsel, latency 0, IRQ 126, IOMMU group 1
Memory at a0000000 (64-bit, non-prefetchable) [size=16M]
Memory at 90000000 (64-bit, prefetchable) [size=256M]
I/O ports at 3000 [size=64]
Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
Capabilities: [40] Vendor Specific Information: Len=0c
Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
Capabilities: [d0] Power Management version 2
Capabilities: [100] Process Address Space ID (PASID)
Capabilities: [200] Address Translation Service (ATS)
Capabilities: [300] Page Request Interface (PRI)
Kernel driver in use: i915
Kernel modules: i915
So, I tried this:
# echo “blacklist i915” >> /etc/modprobe.d/blacklist.conf
# update-initramfs -u -k all
# reboot
Funny thing, lspci -v gives the same “Kernel driver in use: i915” (despite being “blacklisted”) and same error with the Ubuntu 22.04 LXC.
run_buffer: 322 Script exited with status 1
lxc_setup: 4445 Failed to run autodev hooks
do_start: 1272 Failed to setup container “100”
sync_wait: 34 An error occurred in another process (expected sequence number 4)
__lxc_start: 2107 Failed to spawn container “100”
TASK ERROR: startup for container ‘100’ failed
I guess I’ll try again with “privileged” LXC? Anybody else getting anywhere?
I think the problem is with this construct:
# nano /var/lib/lxc/100/mount_hook.sh
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
Specifically, I think there is no such variable: ${LXC_ROOTFS_MOUNT}
So I changed “/var/lib/lxc/100/mount_hook.sh” to this:
mkdir -p /ZFS/subvol-100-disk-0/dev/dri
mknod -m 666 /ZFS/subvol-100-disk-0/dev/dri/card0 c 226 0
mknod -m 666 /ZFS/subvol-100-disk-0/dev/dri/renderD128 c 226 128
Proxmox still won’t start up the container, and it gives the same error message “Failed to run autodev hooks”.
So I disabled that hook altogether:
# nano /etc/pve/lxc/100.conf
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
#lxc.hook.autodev: /var/lib/lxc/100/mount_hook.sh
NOW the container can boot, but if I go
# ls -lh /dev/dri
ls: cannot access ‘/dev/dri’: No such file or directory
So, from the shell in my Linux container:
#mkdir -p /dev/dri
#mknod -m 666 /dev/dri/card0 c 226 0
#mknod -m 666 /dev/dri/renderD128 c 226 128
All of a sudden here we go:
# ls -lh /dev/dri
total 0
crw-rw-rw- 1 root root 226, 0 Aug 23 20:58 card0
crw-rw-rw- 1 root root 226, 128 Aug 23 20:58 renderD128
# vainfo
error: can’t connect to X server!
libva info: VA-API version 1.15.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_14
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.15 (libva 2.12.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics – 22.4.3 ()
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileVP9Profile2 : VAEntrypointVLD
# lspci -v
00:02.0 VGA compatible controller: Intel Corporation Whiskey Lake-U GT1 [UHD Graphics 610] (rev 02) (prog-if 00 [VGA controller])
DeviceName: Onboard – Video
Subsystem: Intel Corporation Device 2212
Flags: bus master, fast devsel, latency 0, IRQ 126, IOMMU group 1
Memory at a0000000 (64-bit, non-prefetchable) [size=16M]
Memory at 90000000 (64-bit, prefetchable) [size=256M]
I/O ports at 3000 [size=64]
Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
Capabilities: [40] Vendor Specific Information: Len=0c
Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
Capabilities: [d0] Power Management version 2
Capabilities: [100] Process Address Space ID (PASID)
Capabilities: [200] Address Translation Service (ATS)
Capabilities: [300] Page Request Interface (PRI)
Kernel driver in use: i915
So there’s some issue with the way this “autodev hooks” is written, there’s a mystery surrounding this Proxmox container variable “${LXC_ROOTFS_MOUNT}” that doesn’t seem to work.
I wonder what the solution is?
OK: first of all, I have found this only works with “privileged” container.
That’s “Unprivileged container?” selection cleared.
In Proxmox, configure the container with the following extra lines:
# nano /etc/pve/lxc/100.conf
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
When you log in to the container as root, create a new file:
# nano mount_hook.sh
mkdir -p /dev/dri
mknod -m 666 /dev/dri/card0 c 226 0
mknod -m 666 /dev/dri/renderD128 c 226 128
# chmod +x mount_hook.sh
crontab -e
(select 1 for nano editor)
Add this line:
@reboot /root/mount_hook.sh
After that, do:
# apt-get update
# apt-get install -y intel-media-va-driver # or whatever driver you need
# apt-get install -y vainfo intel-gpu-tools # tools to verify/debug installation
# reboot
At this point I was able to go:
# vainfo
error: can’t connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics – 21.1.1 ()
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileVP9Profile2 : VAEntrypointVLD
Finally works!
You *can* use Intel GPU within an unprivileged container, and this method can be simplified a little bit. I’ve been using mine with an unprivileged Emby container for quite a long time. You do, of course, have to map a gid in addition to allowing the devices. That’s the point of unprivileged, and you’ll have the peace of mind of improved host security.
You do not need the card0 device or the post-hook script unless you feel you really need the vainfo or intel_gpu_top tools available from within the container itself. They aren’t needed for ffmpeg within the container to access the GPU, transcoding to work, or for the software and GPU to just do their thing. Troubleshooting tools within the container are useful, but not necessary.
Assuming you have installed the appropriate packages on the host, as detailed here, the driver package within the container, and the gid for the device on your host is the same as the example (gid 128), this will work in an unprivileged container’s config file:
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.idmap: g 0 100000 106
lxc.idmap: g 106 103 1
lxc.idmap: g 107 100107 1893
Note that I don’t bother mapping uid 226, because gid 128 has the same rw access to the device as uid 226, so why bother?
I also have the nesting and fuse options enabled that I need for other things, and I don’t think they are needed for transcoding, but mentioning them here just in case.
Erp, slight error. The gid owning /dev/dri/renderD128 on my system is actually 103 (the “render” group on my host) and that’s the gid I mapped in my example. Not 128. Just pretend I wrote “103” everywhere I wrote “128”. Sorry for the confusion.
Here’s a quick screenshot to show that it does work, though. 🙂
https://share.icloud.com/photos/04dB23TqfkJxbacHStjOl9cAQ
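For anyone replicating the unprivileged setup from the comment above, keep in mind that custom lxc.idmap entries also have to be permitted on the Proxmox host via /etc/subuid and /etc/subgid; a rough sketch, assuming the host’s render group has gid 103 as in that example:
# grep render /etc/group             # find the render group’s gid on the host (103 in this example)
# echo "root:103:1" >> /etc/subgid   # allow root to map host gid 103 into unprivileged containers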
Hi, with this can I use the HDMI output of an Intel NUC to a TV from the LXC, or is this iGPU passthrough only for using the iGPU inside the LXC?