docker error while loading shared libraries (RHEL 7.5)

I installed Docker on a Red Hat Enterprise Linux Server 7.5 (Maipo) system:
docker version
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-58.git87f2fab.el7.x86_64
OS/Arch: linux/amd64
Now if I try to run a docker image, I get errors similar to this:
docker run docker.io/jupyter/datascience-notebook
tini: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
I have searched for help and have already taken a multitude of possible actions:
libraries seem to be linked correctly
all libraries are up to date
Hello-World example works
I also came across information saying that running containers from docker.io / hub.docker.com under RHEL is not supported - which I don't really get, as the main purpose of Docker is to enable running programs independently of the host OS...?
https://access.redhat.com/solutions/1408853 Does this mean that using Docker under RHEL does not really give me the possibility of easily deploying/sharing a Docker image with non-RHEL users?
Also, does this mean I can only access and use official RHEL-docker images?
https://access.redhat.com/containers/?start=90#/search/
As I wanted to use docker to have ready-to-go environments with R-Python/Jupyter/H2o (and similar), I'm disappointed because I could not find suitable images for RHEL there.
So, my questions would be:
Is it possible to run docker.io / hub.docker.com images under RHEL7.5?
If not, could I share Docker images I create under RHEL 7.5 with other users on different operating systems?
Are there other projects / sites to share docker-images for data science purposes on RHEL?
Would you agree that my next step should be building my own Docker image, adding R/Python/Jupyter step by step?
Best regards,
workah0lic

This error message
tini: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
comes from within the container image. It could be a corrupted container image, but the message is also printed when the glibc dynamic linker determines that the kernel features are not sufficient for loading libc.so.6. I looked at the image (digest is sha256:79f929bd0e58fa9cb238dceda48b0c8360e748d09b476b429216c93dac0bd783), and it appears to require kernel 3.2, so the Red Hat Enterprise Linux 7 kernel version of 3.10 should be sufficient.
In fact, I cannot reproduce this problem with kernel-3.10.0-862.6.3.el7.x86_64 and docker-1.13.1-58.git87f2fab.el7.x86_64. You could try to run this command to obtain additional information about dynamic linker behavior:
docker run -e LD_DEBUG=all docker.io/jupyter/datascience-notebook
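If that does not pinpoint the problem, it can also help to compare the host kernel with what the image's binaries expect. A minimal sketch (the path to tini inside the image is an assumption; adjust it to wherever the image actually installs tini):
# host side: kernel version the container will see (the image's glibc appears to need >= 3.2)
uname -r
# bypass the tini entrypoint and check how tini's shared libraries resolve inside the image
# (/usr/local/bin/tini is an assumed location)
docker run --rm --entrypoint /bin/sh docker.io/jupyter/datascience-notebook -c 'ldd /usr/local/bin/tini'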

Related

GitHub Codespaces: how to set x86_64, AMD64, ARM64 platform?

First, the question: is there a way to choose the platform (e.g. x86_64, AMD64, ARM64) for a GitHub Codespace?
Here's what I've found so far:
Attempt 1 (not working):
From within GitHub.com, you can choose the "machine" for a Codespace, but the only options are RAM and disk size.
Attempt 2 (EDIT: not working): devcontainer.json
When you create a Codespace, you can specify options by creating a top-level .devcontainer folder with two files: devcontainer.json and Dockerfile
Here you can customize runtimes, installed packages, etc., but the docs don't say anything about determining architecture...
...however, the VS Code docs for devcontainer.json have a runArgs option, which "accepts Docker CLI arguments"...
and the Docker CLI docs on --platform say you should be able to pass --platform linux/amd64 or --platform linux/arm64, but...
When I tried this, the Codespace would just hang, never finishing building.
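For reference, the runArgs attempt looked roughly like this (a sketch; the Dockerfile reference and the platform value are placeholders):
// .devcontainer/devcontainer.json
{
  "build": { "dockerfile": "Dockerfile" },
  "runArgs": ["--platform=linux/arm64"]
}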
Attempt 3 (in progress): specify in Dockerfile
This route seems the most promising, but it's all new to me (containerization, codespaces, docker). It's possible that Attempts 2 and 3 work in conjunction with one another. At this point, though, there are too many new moving pieces, and I need outside help.
Does GitHub Codespaces support this?
Would you pass it in the Dockerfile or devcontainer.json? How?
How would you verify this, anyway? [Solved: dpkg --print-architecture or uname -a]
For Windows, presumably you'd need a license (I didn't see anything on GitHub about pre-licensed codespaces) -- but that might be out of scope for the question.
References:
https://code.visualstudio.com/docs/remote/devcontainerjson-reference
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/builder/
https://docs.docker.com/desktop/multi-arch/
https://docs.docker.com/buildx/working-with-buildx/
EDIT: December 2021
I received a response from GitHub support:
The VM hosts for Codespaces are only x86_64 and we do not offer any ARM64 machines.
So for now, setting the platform does nothing, or fails.
But if they end up supporting multiple platforms, you should be able to (in Dockerfile)
FROM --platform=linux/arm64|linux/amd64|linux/x86-64 [image-name]
which is working for me in the non-cloud version of Docker.
Original answer:
I may have answered my own question
In Dockerfile:
I had FROM alpine
changed to
FROM --platform=linux/amd64 alpine
or
FROM --platform=linux/x86-64 alpine
checked at the command line with
uname -a to print the architecture.
Still verifying, but seems promising. [EDIT: Nope]
So, despite the above, I can only get GitHub codespaces to run x86-64. Nevertheless, the above syntax seems correct.
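For what it's worth, here is a minimal sketch of how the platform request would look, assuming a .devcontainer/Dockerfile setup (the base image name is just a placeholder; BuildKit is required for --platform):
# .devcontainer/Dockerfile - request a specific platform for the base image
FROM --platform=linux/amd64 mcr.microsoft.com/devcontainers/base:ubuntu
# print the architecture during the build so it shows up in the creation log
RUN dpkg --print-architecture && uname -m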
A clue:
In the logs that appear while the codespace is building, I saw target OS: x86
Maybe GitHub just doesn't support other architectures yet.
Still investigating.
Currently only x64-based hosts running Linux are supported for Codespaces. Other hardware and host OS types are yet to be announced.

How to install USBIP in Docker Container

I want to use USB/IP in an Ubuntu 20.04 Docker container. I tried installing the "linux-tools-generic" package, but when I run usbip afterwards I get the message:
You may need to install the following packages for this specific kernel:
linux-tools-5.10.16.3-microsoft-standard-WSL2
linux-cloud-tools-5.10.16.3-microsoft-standard-WSL2
You may also want to install one of the following packages to keep up to date:
linux-tools-standard-WSL2
linux-cloud-tools-standard-WSL2
How can I install these packages? I couldn't find them with apt-get.
Since Docker relies on the features of the Linux kernel, you'll need to make sure that you have the USB/IP module compiled into your WSL kernel. It is not there in the stock WSL kernel, so you'll need to build your own. I haven't done this with USB/IP myself, but there are reports from the Home Assistant (home automation) forums that indicate that it works.
See this answer for more details.
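If you go the custom-kernel route, the rough procedure looks like this (a sketch based on the Microsoft WSL2 kernel sources; exact config options and paths may differ for your kernel version):
git clone https://github.com/microsoft/WSL2-Linux-Kernel.git
cd WSL2-Linux-Kernel
# enable USB/IP on top of the stock WSL2 config, e.g. CONFIG_USBIP_CORE and CONFIG_USBIP_VHCI_HCD
# (plus CONFIG_USBIP_HOST if you want to export devices)
make KCONFIG_CONFIG=Microsoft/config-wsl menuconfig
make KCONFIG_CONFIG=Microsoft/config-wsl -j$(nproc)
# copy arch/x86/boot/bzImage to Windows and point %USERPROFILE%\.wslconfig at it:
# [wsl2]
# kernel=C:\\path\\to\\bzImage
After restarting WSL (wsl --shutdown), the kernel side of USB/IP should be available; the userspace usbip tool can also be built from the tools/usb/usbip directory of the same source tree.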

Create a docker image from old linux distro without distro's repository

I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro doesn't have a remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I've found several ways to do that. The first one is to mount the ISO and find filesystem.squashfs - but only modern distros use that approach, so it's not my case: my distro doesn't have that file available. The second approach is to call debootstrap, but it requires specifying a repo for the distro with a dists directory available in it. My distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
login to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
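For example, the import and tag can also be done in one step (my-old-distro:latest is just a placeholder name):
docker import image.tar.gz my-old-distro:latest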
Legacy problem: unsupported system calls
The imported image contains a Linux distribution snapshot. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running with strace was possible but gave no clear indication.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API:
for Windows WSL2, this is described here https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration, with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option
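For example (a rough sketch; file locations and commands vary by distribution):
# WSL2: add the parameter to %USERPROFILE%\.wslconfig and restart WSL
# [wsl2]
# kernelCommandLine = vsyscall=emulate
# native Linux with GRUB: append vsyscall=emulate to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config
sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # RHEL/CentOS; use update-grub on Debian/Ubuntu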
I seriously doubt you will succeed on that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. The lightweight virtualization comes from the Docker containers running on the host's Linux kernel, which means the kernel is the same inside and outside the container.
If you now try to install some old distro inside the container you may end up with an incompatible combination. Patching the kernel may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously I believe you are better off with real virtualization.

Can I run NVIDIA DeepStream SDK in Windows Server 2019?

System: I have Windows Server 2019 installed with an NVIDIA Tesla T4 Tensor Core GPU.
Goal: I plan to read real-time streaming video from an IP camera and process it frame by frame. The goal is to leverage the NVIDIA DeepStream SDK, but the issue is that it isn't available for Windows. So I'm thinking along Docker lines, but since I'm very new to Docker containers, I would like to know whether I can install Docker on Windows and run this DeepStream Docker image on it.
If not, is there any way I can run this Linux-based DeepStream Docker image on Windows? Any help will be greatly appreciated.
I have never worked with Windows Server before, but it should be the same as Docker in a Linux VM.
First, you need to pull the Docker image for DeepStream:
docker pull nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton
and then try to run the sample apps provided in the Docker image.
Refer this for the procedure.
if you are interested in python apps you can check sample apps here.
Note: make sure you can access the display from inside the container, because DeepStream uses eglsink in its sample apps, which will try to open a display window on your screen. Alternatively, you can change the sink type to filesink if you want to save the output to a file.
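A typical invocation for GPU and X11 display access looks roughly like this (a sketch; it assumes the NVIDIA Container Toolkit is installed and an X server is running on the host):
xhost +
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton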
Refer this for available plugins and their attributes.
According to this post in the NVIDIA forum, Windows is not supported.
As an alternative, I wonder if anyone has used the NVIDIA Graph Composer on Windows.

How to build an image with Dockerfile in Kitematic?

Is there any way to build an image from a Dockerfile while using Kitematic?
From the top of the docs for Kitematic:
Legacy desktop solution. Kitematic is a legacy solution, bundled with Docker Toolbox. We recommend updating to Docker for Mac or Docker for Windows if your system meets the requirements for one of those applications.
If possible, you should avoid using the tool.
If you have to use Kitematic, the feature you are asking about is tracked by this GitHub issue: Import Dockerfile - (Docker build). At the time of writing the feature is not implemented.
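If you can drop to the Docker CLI instead (for example from the Docker Toolbox terminal that ships alongside Kitematic), building from a Dockerfile is straightforward; the image name here is just a placeholder:
docker build -t my-image .
docker run --rm my-image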
