Module alias ftdi_sio not found - docker

I would like to set up a hardware environment in a Docker container. One of the installation scripts uses the modinfo utility to detect the ftdi_sio module, but it can't find it. There is also another error:
No FTDI driver present
I'm using the centos7 image from Docker Hub in this container. Is it possible that this OS is missing some of the necessary drivers, and if so, how do I install the missing components in this image?
I'd appreciate any help.

You cannot install Linux kernel drivers from a Docker container, and generally one of the major design goals of Docker is to hide details of the underlying hardware from you.
If you’re trying to use tools like modinfo to inspect the system you’re actually running on and see if some specific kernel driver or piece of hardware is available, you need to run these directly on the host, not in Docker. If you’re trying to develop a hardware driver or interface, simulating it in a virtual machine (with its own kernel) is probably better than trying to work with it in Docker.
(In principle you can disable enough of Docker’s protections to do this, but it makes your container setup very tightly bound to your host setup and removes basically all of the isolation; you’re getting nothing but complexity from having Docker in the mix.)
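That said, if the goal is only to let modinfo inside the container see the host's module tree, here is a minimal sketch; the read-only bind mount is the assumption here, and it ties the container to this particular host exactly as described above (modinfo comes from the kmod package, which the centos:7 image may or may not include):
$ docker run --rm -v /lib/modules:/lib/modules:ro centos:7 \
    modinfo -k "$(uname -r)" ftdi_sio
Note that $(uname -r) is expanded by the host shell, so it names the host kernel, which is exactly what matches the mounted module tree.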

One of the installation scripts uses the modinfo utility to detect the ftdi_sio module, but it can't find it.
Actually, you can make this work, because modinfo does not require the module to be loaded.
The reason modinfo can't find it is that podman/docker uses the host's kernel. modinfo uses the uname system call to get the name of the currently running kernel, which it then uses as part of the path where it looks for the module. Since the kernel is the host's, that path will only accidentally be correct inside the container.
To make it work, you have to pass the kernel name to modinfo explicitly with -k. Here is an example from my podman container:
$ uname -a
Linux 43d87d63879d 5.9.8-arch1-1 #1 SMP PREEMPT Tue, 10 Nov 2020 22:44:11 +0000 x86_64 x86_64 x86_64 GNU/Linux
$ modinfo zfs
modinfo: ERROR: Module alias zfs not found.
$ modinfo -k 4.15.0-123-generic zfs
filename: /lib/modules/4.15.0-123-generic/updates/dkms/zfs.ko
version: 0.7.5-1ubuntu16.10
license: CDDL
author: OpenZFS on Linux
description: ZFS
srcversion: EAC384B1885CDDD467439E9
[…]
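Applied to the original question: first check which kernel trees the container filesystem actually contains; the stock centos7 image ships no kernel package, so the listing may well be empty unless you install one or bind-mount the host's modules in:
$ ls /lib/modules
$ modinfo -k <version-from-the-listing> ftdi_sio
(<version-from-the-listing> is a placeholder for whatever the ls prints.)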

Related

Docker container CPU features do not match the host's ones (RDTSCP)

I am using a Docker container to run a C++ compiled executable. The Docker container is built from the latest Debian Linux distribution, while the host is a MacOS system (MacOS 12.6, on a MacBook Pro 16, 2019).
Within the C++ code I call the function __rdtscp(unsigned int *__A), including x86intrin.h, for monitoring purposes. Compiled and executed on the MacOS host, the application works correctly. But if I try to run it within the Docker container, I get an Illegal instruction error. (The executable is compiled on another physical Linux host, which is a requirement for me; in any case, I can run it on different Linux machines, and also in a container from the same Docker image when that container runs on another host.)
Looking deeper into the issue, I found that __rdtscp(unsigned int *__A) must be supported by the CPU, and it should be supported by all CPUs from 2010/2011 onwards. Indeed, the flag (RDTSCP) is reported among the host CPU's features; the problem is that I cannot find it among the container CPU's features.
Note that __rdtsc() works correctly, but it is not a serializing instruction, so I want to use __rdtscp(unsigned int *__A).
The RDTSCP flag shows up in the MacOS host's output of sysctl -a | grep machdep.cpu, but not in the output of lscpu inside the Debian Docker container.
Could you help me figure out the reason for this difference? Is there a way to force Docker to expose the same CPU features as the host?
Thank you!
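A quick way to compare the two sides, sketched under the assumption that the flag name matches case-insensitively on both (on a Mac, Docker runs inside a VM, so the virtual CPU may genuinely lack flags the physical CPU has):
$ sysctl -a | grep -io rdtscp                                  # on the MacOS host
$ docker run --rm debian grep -m1 -iow rdtscp /proc/cpuinfo    # inside a container; no output means the flag is not exposed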

How to install linux-modules-extra?

When I run sudo apt install linux-modules-extra-$(uname -r) in a Docker container based on Ubuntu 20.04, on a single-board computer running Ubuntu 18.04, I get the following errors:
E: Unable to locate package linux-modules-extra-4.15.0-143-generic
E: Couldn't find any package by glob 'linux-modules-extra-4.15.0-143-generic'
E: Couldn't find any package by regex 'linux-modules-extra-4.15.0-143-generic'
This makes me wonder whether it is even possible to install linux-modules-extra-4.15.0-143-generic in Ubuntu 20.04. Maybe it is only available for Ubuntu 18.04?
Could anyone clarify this for me, please?
In general, if you're building a kernel module, it has to match exactly the kernel that's running on the host system. If you're using a native Debian or Ubuntu system (without Docker), there's a system where kernel modules can be rebuilt or reinstalled when the host kernel is updated. See for example the Debian wiki KernelDKMS page.
In contrast, a Docker image is generally supposed to be portable across hosts. If you upgrade the host's kernel, or if you run a FROM ubuntu:18.04 image on an Ubuntu 20.04 host, the image isn't really supposed to be aware of this.
In your particular case, you can't get the kernel modules package you need, because it's not part of the Ubuntu 20.04 distribution your image is based on. Here it might be possible to fetch the package from the Ubuntu 18.04 archive, but that doesn't work in the general case: maybe the host is actually running plain Debian or RHEL and the kernel build is different, maybe the operator built their own kernel.
Since a Linux kernel module is so specific to the host it runs on, and since it can bypass any and all security concerns, it's not appropriate to try to install one in a container. Do it directly on the host instead.
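If you only need the module files present in the container filesystem (for modinfo-style inspection, not for loading), here is a sketch of fetching the 18.04 package from inside a 20.04 container. The repository line and the package's availability there are assumptions, and they hold only if the host runs a stock Ubuntu 18.04 kernel:
$ echo 'deb http://archive.ubuntu.com/ubuntu bionic-updates main' >> /etc/apt/sources.list
$ apt-get update
$ apt-get download linux-modules-extra-4.15.0-143-generic
$ dpkg -x linux-modules-extra-*.deb /    # unpack only; actually loading modules from a container won't work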

Create a docker image from old linux distro without distro's repository

I have a bootable ISO image (live CD) with a Linux system that is pretty old. The distro has no remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I found several ways to do that. The first is to mount the ISO and find filesystem.squashfs; only modern distros work that way, and mine doesn't have that file. The second approach is to call debootstrap, but that requires specifying a repo for the distro with a dists directory available in it. My distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
log in to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
Legacy problem: unsupported system calls
The imported image contains a Linux distribution snapshot. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Running it under strace was possible but gave no clear indication.)
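If you want to repeat the strace experiment: ptrace is restricted by Docker's default seccomp profile, so it has to be allowed explicitly. A sketch, assuming an strace binary is actually available inside the old image:
$ docker run -it --rm --cap-add=SYS_PTRACE --security-opt seccomp=unconfined <your-label> strace bin/bash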
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API (a configuration sketch follows this list):
for Windows WSL2, this is described here: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, this can be set in the boot configuration with the kernel command-line parameter vsyscall=emulate, if the present kernel supports that option
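A sketch of both variants; the file locations are the common defaults and may differ on your system:
# WSL2: add to %UserProfile%\.wslconfig on the Windows side, then restart WSL
[wsl2]
kernelCommandLine = vsyscall=emulate
# native Linux with GRUB: edit /etc/default/grub, regenerate the config, and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vsyscall=emulate"
$ sudo update-grub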
I seriously doubt you will succeed at that.
Be aware that Docker is not a full virtualization like KVM or VirtualBox. Its lightweight virtualization comes from the containers running on the host's Linux kernel, which means the kernel is the same inside and outside of the container.
If you now install some old distro inside the container, you may end up with an incompatible combination: making the old user space run on a modern kernel may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously I believe you are better off with real virtualization.

How to install 32-bit docker container

I'm trying to create a 32-bit Docker image with Ubuntu 14.04, and any time I run uname, I see that it reports x86_64 instead of i386. Could anyone tell me why this is happening?
docker run talex5/lucid32 uname -m
The weird thing is when I look up the architecture type a different way, it says 32-bit:
docker run i386/ubuntu:14.04 file /sbin/init
/sbin/init: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=c394677bccc720a3bb4f4c42a48e008ff33e39b1, stripped
This happens consistently whenever I download different docker images that say they are 32-bit and even when I create my own docker image using debootstrap.
Thanks!
uname reports the version and OS details of the kernel, and Docker containers always use the host system's kernel; if that is a 64-bit kernel, uname reports x86_64.
You would see the same result with a mixed 32-/64-bit OS install (in Ubuntu land, installing multiarch packages like libc6:i386), with a 32-bit filesystem tree in a chroot, and in a Docker container: these are all the same case of running 32-bit binaries on a system with a 64-bit kernel.
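A transcript sketching the difference (linux32/setarch comes from util-linux and merely sets the 32-bit execution personality; whether it is present depends on the image):
$ docker run --rm i386/ubuntu:14.04 uname -m
x86_64
$ docker run --rm i386/ubuntu:14.04 linux32 uname -m
i686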
This is possible these days, with just a simple script. You could use https://github.com/docker-32bit/ubuntu.

Getting Docker to recognize nvidia graphics card on mac

When I am in my container, I run
lspci | grep -i nvidia
and nothing shows.
When I run ./deviceQuery from the samples NVIDIA provides I get
no CUDA-capable device is detected
I know I have an nvidia driver on my Mac. I just can't figure out how to get my Docker container to recognize it.
On OS X, Docker runs inside a separate VirtualBox VM, which does not expose the host GPU.
You'll first need to make the graphics card available in the VirtualBox VM. I'm not sure how to do that, but this looks like it might help:
https://www.virtualbox.org/manual/ch04.html#guestadd-video
Once you've got it mounted within the VM, you can then share it with the container.
I haven't tried this myself, but this post describes running native X11 apps on a Mac using a beta Docker client called Kitematic along with socat, XQuartz, and QGIS, and it seems to imply that NVidia driver issues were thus avoided. It looks worth a try!
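For reference, the usual shape of that socat/XQuartz recipe, sketched under two assumptions: XQuartz and socat are installed (e.g. via Homebrew), and the Docker client is recent enough to resolve host.docker.internal. Note this forwards X11 drawing only; it does not expose the GPU for CUDA:
$ open -a XQuartz
$ socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\" &
$ docker run --rm -e DISPLAY=host.docker.internal:0 some-x11-image xeyes
(some-x11-image is a placeholder for any image with an X11 client installed.)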
