I'm trying to run an OpenCV application inside an Apptainer container on a remote machine, passing the --nv flag to get GPU support. But then I get this error:
import cv2
ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34` not found (required by /.singularity.d/libs/libGLdispatch.so.0)
If I run the same container without the --nv flag, cv2 imports correctly, but then of course I lose GPU access. The same container runs correctly on my local machine (with an NVIDIA GPU) with the --nv flag.
So I suspect the --nv flag specifically on my remote machine: it is probably binding incompatible libraries from the host into the container.
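For reference, this is roughly how I compared the two glibc versions involved (a quick sketch; mycontainer.sif is a placeholder for my actual image):
# glibc version inside the container (the one cv2 was built against)
apptainer exec mycontainer.sif ldd --version
# glibc version on the remote host (the one the bind-mounted NVIDIA libraries expect)
ldd --version
# list the host libraries that --nv binds into the container
apptainer exec --nv mycontainer.sif ls /.singularity.d/libs/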
I was hoping someone with Apptainer (Singularity) experience could help me figure out how to fix this on my remote machine, or at least point me in the right direction.
Thanks!
I am using a Docker container to run a C++ compiled executable. The Docker container is built using the latest Debian Linux distribution, while the host is macOS (macOS 12.6, on a 16-inch MacBook Pro, 2019).
Within the C++ code I call the function __rdtscp(unsigned int *__A), including x86intrin.h, for monitoring purposes. Compiled and executed on the macOS host, the application works correctly. But if I try to run it within the Docker container, I get an Illegal instruction error. (The executable is compiled on another physical Linux host, which is a requirement for me; in any case, I can run the same executable on different Linux machines, and also in a container created from the same Docker image when it runs on another host.)
Looking deeper into the issue, I found that __rdtscp(unsigned int *__A) must be supported by the CPU. It should be supported by all CPUs released after 2010/2011, and indeed the corresponding flag (RDTSCP) is reported among the host CPU's features. The problem is that I cannot find it among the container CPU's features.
Note that __rdtsc() works correctly, but it is not serializing, so I want to use __rdtscp(unsigned int *__A).
Below is the macOS host output of sysctl -a | grep machdep.cpu.
And this is the output of lscpu inside the Debian Docker container.
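For a quicker side-by-side check of just that flag, I also compared it like this (a sketch; the plain debian image here stands in for my actual image):
# on the macOS host: look for RDTSCP among the reported CPU features
sysctl -a | grep -iw rdtscp
# inside the container: look for rdtscp in the flags exposed by the kernel
docker run --rm debian sh -c "grep -ow rdtscp /proc/cpuinfo | sort -u"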
Could you help me figure out the reason for this difference? Is there a way to force Docker to expose the same CPU features as the host?
Thank you!
I'm having some trouble running gdb in a Docker image, and I am not sure what is causing the problems, since I am a complete newbie when it comes to gdb and Docker.
Background
I am trying to write a C++ program for the Lego EV3, which runs on ev3dev. To compile it, I have set up Docker with the ev3dev image on my Windows machine, and I am able to successfully build inside that image, transfer the binary to the EV3, and execute it there. This all works well until I need to start (remote) debugging. My plan is to start a gdbserver on the EV3 with the program, then open a gdb session inside the Docker container on my Windows machine and connect to the EV3 gdbserver. After fixing the first remote-debugging error (I needed to use gdb-multiarch on my Windows machine), I have encountered more problems that I can't really find a solution to.
Problem
When running gdb directly on my Windows machine inside the Docker container (or when connecting to the gdbserver from Docker using gdb-multiarch), I always get the following output after starting the program with run:
(gdb) run
Starting program: /src/ev3/build/src/EV3_main
warning: Unable to find dynamic linker breakpoint function.
GDB will be unable to debug shared library initializers
and track explicitly loaded dynamic code.
Warning:
Cannot insert breakpoint -1.
Cannot access memory at address 0x4f58
and when using next or step I get:
(gdb) next
Cannot find bounds of current function
Since I couldn't really find any solution online, I would really appreciate any help!
Thanks in advance!
I always get the following output after starting the program with run
This error usually means that the dynamic loader in your docker container has been fully stripped. It's a packaging mistake by the creator of that container.
If you are not using dlopen(), this isn't a big problem.
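One way to check (a sketch; assumes find and file are available in the image, and <your-image> is a placeholder):
# locate the dynamic loader and see whether it is reported as stripped
docker run --rm <your-image> sh -c 'find /lib -name "ld-*.so*" -exec file {} +'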
(gdb) next
Don't do that: you are not stopped in a location where GDB knows where the next line is. Do continue instead.
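For example (a sketch; pick whatever breakpoint location you actually care about):
(gdb) break main
(gdb) continue
Set a breakpoint where you want to stop and let the program run to it; next and step are only meaningful once GDB has stopped at a location with line information.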
I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro doesn't have a remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I found several ways to do that. The first is to mount the ISO and find filesystem.squashfs, but only modern distros work that way; it's not my case, since my distro doesn't have that file. The second approach is to call debootstrap, but that requires specifying a repo for the distro with a dist directory available in it, and my distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. The approach below describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
login to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
Legacy problem: unsupported system calls
The imported image contains a snapshot of the Linux distribution. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running with strace was possible but gave no clear indication.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
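A quick way to check whether this is what is happening (a sketch; run it on the Docker host while reproducing the silent failure):
# a rejected legacy vsyscall usually leaves a trace in the host's kernel log
dmesg | grep -i vsyscall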
In my case the problem was solved by enabling the legacy vsyscall kernel API:
for Windows WSL2, this is described at https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option (see the sketch below)
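For illustration, a sketch of both variants (Debian/Ubuntu-style GRUB layout and default WSL2 paths assumed; adapt to your setup):
# native Linux with GRUB: append the parameter in /etc/default/grub, then regenerate and reboot
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vsyscall=emulate"
sudo update-grub
# WSL2: add the parameter to %UserProfile%\.wslconfig, then restart WSL
#   [wsl2]
#   kernelCommandLine = vsyscall=emulate
wsl --shutdown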
I seriously doubt you will succeed with that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. Its lightweight virtualization comes from the containers running directly on the host's Linux kernel, which means the kernel is the same inside and outside of the container.
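You can see this directly (a minimal sketch; the debian image is just an example):
# both commands report the same (host) kernel version
uname -r
docker run --rm debian uname -r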
If you now try to install some old distro inside the container you may end up with an incompatible combination. Patching the kernel may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously I believe you are better off with real virtualization.
My Python project is very Windows-centric; we want the benefits of containers, but we can't give up Windows just yet.
I'd like to be able to use the Dockerized remote Python interpreter feature that comes with IntelliJ. This works flawlessly with Python running in a standard Linux container, but appears not to work at all for Python running in a Windows container.
I've built a new image based on a standard Microsoft Windows Server Core image. I've installed Miniconda, bootstrapped a Python environment, and verified that I can start an interactive Python session from the command prompt.
Whenever I try to set this up I get an error message: "Can't retrieve image ID from build stream". This occurs at the moment when IntelliJ would normally have detected the Python interpreter and its installed libraries.
I also tried giving the full path for the interpreter: c:\miniconda\envs\htp\python.exe
I've never seen any mention in the documentation that this works, nor have I seen any mention that it does not. I totally accept that Windows containers are an oddity, so it's entirely possible that IntelliJ's remote-Python feature was never tested against Python running in Windows containers.
So, has anybody got this feature working with Python running on a Windows container yet? Is there any reason to believe that it does or does not work?
Regrettably, it is not supported yet. Please vote for the feature request https://youtrack.jetbrains.com/issue/PY-45222 in order to increase its priority.
I have built a docker image with several packages loaded for a development environment.
I plan to use the image to make porting the environment to various machines simple.
In a container of that image, I can build my binary (using cmake and g++) on any of the machines I have loaded the Docker image onto. The binary has run fine in such containers on most machines.
But on one machine, executing the binary in the container results in a core dump with "illegal instruction" reported.
The crash happens on a machine with Intel Xeon CPUs. But it runs fine on another Xeon machine. It also runs fine on an AMD Ryzen machine.
I haven't tried executing the binary outside the container because the environment is so hard to set up, which is exactly why I'm using Docker.
I just wonder whether anyone has seen this happen much, and how I should start trying to resolve it.
If it helps and anyone is familiar with it, the base image I pulled from Docker Hub, to add other packages to, is floopcz/tensorflow_cc:ubuntu-shared. It is an Ubuntu image with the TensorFlow C++ API built for CPU-only use (no CUDA).
The binary that's crashing does attempt to open a TensorFlow session before doing anything else.
I'm running Docker 19.03 on Ubuntu 16.04 and 18.04. The image is based on Ubuntu 18.04.
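One thing I have started comparing is the CPU feature flags on the working and failing hosts (a sketch; an illegal instruction usually points to an instruction-set extension the failing CPU lacks, and prebuilt TensorFlow binaries are often built with AVX/FMA enabled):
# run on each machine; the container sees the same flags as its host
grep -o -w -E 'avx2|avx|fma|sse4_2' /proc/cpuinfo | sort -u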