Getting Docker to recognize NVIDIA graphics card on Mac

When I am in my container, I run
lspci | grep -i nvidia
and nothing shows.
When I run ./deviceQuery from the samples NVIDIA provides I get
no CUDA-capable device is detected
I know I have an NVIDIA driver on my Mac. I just can't figure out how to get my Docker container to realize that.

On OS X, Docker containers run inside a separate VirtualBox VM, which does not expose the host GPU.
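You can confirm this from the OS X side with docker-machine (a quick check, assuming a docker-machine/boot2docker setup whose machine is named default; the name may differ on your install):
docker-machine ls
docker-machine ssh default uname -a
The second command prints the boot2docker VM's kernel, which is the kernel your containers actually run on, not OS X's.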

You'll first need to make the graphics card available in the VirtualBox VM. I'm not sure how to do that, but this looks like it might help:
https://www.virtualbox.org/manual/ch04.html#guestadd-video
Once you've got it available within the VM, you can then share it with the container.
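On the container side, Docker's --device flag is the usual way to pass a device node through (a sketch only; /dev/dri/card0 is a hypothetical path that depends on what the VM ends up exposing):
docker run --rm --device /dev/dri/card0 <your-image> ls -l /dev/dri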

I haven't tried this myself, but this guy says that he can run native X11 apps on a Mac using a beta Docker client called Kitematic along with socat, XQuartz, and QGIS, and he seems to imply that NVIDIA driver issues were thus avoided. This looks worth a try!
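The socat/XQuartz part of that recipe usually looks roughly like this (a sketch, not verified here; 192.168.99.1 is the typical VirtualBox host-only address of the Mac as seen from the VM and may differ on your setup, and the image name is a placeholder):
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\" &
docker run --rm -e DISPLAY=192.168.99.1:0 <your-x11-image> xeyes
socat relays X11 traffic from TCP port 6000 to the XQuartz socket, and the container is pointed at that listener through DISPLAY.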

Related

Docker container CPU features do not match the host's ones (RDTSCP)

I am using a Docker container to run a compiled C++ executable. The Docker container is built from the latest Debian Linux distribution, while the host is a macOS system (macOS 12.6, on a MacBook Pro 16 Late 2019).
Within the C++ code, I call the function __rdtscp(unsigned int *__A), including x86intrin.h, for monitoring purposes. Compiled and executed directly on the macOS host, the application works correctly. But if I try to run it within the Docker container, I get an Illegal instruction error. (The executable is compiled on a separate physical Linux host, which is a requirement for me; in any case, the same executable runs fine on other Linux machines, and also in a container created from the same Docker image when that container is executed on another host.)
Looking deeper into the issue, I found that __rdtscp(unsigned int *__A) must be supported by the CPU; it should be supported by all CPUs from 2010/2011 onward. Indeed, the flag is reported among the host CPU's features (RDTSCP). The problem is that I cannot find it among the container CPU's features.
Note that with __rdtsc() it works correctly, but that instruction is not serializing, so I want to use __rdtscp(unsigned int *__A).
Below is the output of sysctl -a | grep machdep.cpu on the macOS host.
And this is the output of lscpu inside the Debian Docker container.
Could you help me figure out the reason for this difference? Is there a way to force Docker to expose the same CPU features as the host?
Thank you!
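For reference, the comparison can be reduced to checking the single flag on each side (a minimal sketch, assuming an Intel Mac and a Debian-based image):
sysctl -a | grep -i rdtscp
docker run --rm debian:latest grep -m1 -o rdtscp /proc/cpuinfo
The first command, run on the macOS host, should list RDTSCP among the extended CPU features; the second prints rdtscp only if the virtual CPU inside Docker Desktop's VM exposes the flag.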

Create a docker image from old linux distro without distro's repository

I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro doesn't have a remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I've found several ways to do that. The first one is to mount the ISO and find filesystem.squashfs, but only modern distros work that way, so it's not my case: my distro doesn't have that file. The second approach is to call debootstrap, but that requires specifying a repo for the distro with a dists directory available in it, and my distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems); eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
log in to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
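Note that docker import can also apply a name and tag in one step, which makes the separate docker tag unnecessary (the label here is just an example):
docker import image.tar.gz old-distro:imported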
Legacy problem: unsupported system calls
The imported image contains a Linux distribution snapshot. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running with strace was possible but gave no clear indication.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API
for Windows WSL2, this is described here https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option; see the sketch below
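A rough sketch of both variants (file locations and GRUB tooling vary by distro; on WSL2 the setting lives on the Windows side):
# WSL2: in %UserProfile%\.wslconfig on the Windows host, add
#   [wsl2]
#   kernelCommandLine = vsyscall=emulate
# and restart WSL with: wsl --shutdown
# Native Linux with GRUB: append vsyscall=emulate to GRUB_CMDLINE_LINUX in /etc/default/grub, then
sudo update-grub    # or: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
grep -o vsyscall=emulate /proc/cmdline    # verify the parameter took effect after reboot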
I seriously doubt you will succeed on that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. Its lightweight virtualization comes from the containers running on the host's Linux kernel, which means the kernel is the same inside and outside of the container.
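This is easy to verify on a Linux Docker host (any small image will do; debian:latest is just an example):
uname -r
docker run --rm debian:latest uname -r
Both commands report the same kernel release, because the container shares the host's kernel rather than booting its own.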
If you now try to install some old distro inside the container, you may end up with an incompatible combination. Patching the kernel may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously I believe you are better off with real virtualization.

Can I run NVIDIA DeepStream SDK in Windows Server 2019?

System: I have Windows Server 2019 installed, with an NVIDIA Tesla T4 Tensor Core GPU.
Goal: I plan to read real-time streaming video from an IP camera and process it further frame by frame. The goal is to leverage the NVIDIA DeepStream SDK, but the issue is that it isn't available for Windows. So I'm thinking along the Docker lines, but since I'm very new to Docker containers, I would like to know whether I can install Docker on Windows and run this DeepStream Docker image on it.
If not, is there any way I can run this Linux-based DeepStream Docker image on Windows? Any help would be greatly appreciated.
I have never worked with Windows Server before, but it should be similar to running Docker in a Linux VM.
First, you need to pull the Docker image for DeepStream:
docker pull nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton
and then try to run the sample apps provided in the Docker image.
Refer to this for the procedure.
If you are interested in Python apps, you can check the sample apps here.
Note: make sure you are able to access the display from inside the container, because DeepStream uses eglsink in its sample apps, which will try to open a display window on your screen. Alternatively, you can change the sink type to filesink if you want to save the output to a file.
Refer to this for the available plugins and their attributes.
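For the display part, the usual pattern on a Linux host looks roughly like this (a sketch only, not verified on Windows; --gpus requires Docker 19.03+ with the NVIDIA Container Toolkit installed):
xhost +local:
docker run --gpus all -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton
xhost +local: allows local connections to the X server, and mounting /tmp/.X11-unix gives the container access to the X socket so eglsink can open a window.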
According to this post on the NVIDIA forum, Windows is not supported.
As an alternative, I wonder whether anyone has used the NVIDIA Graph Composer on Windows.

Emulating Raspberry Pi with Docker on OS X

I've been doing a lot of Raspberry Pi work, but that means I have to carry my Pi around (or SSH home), and, well, the Pi isn't the fastest in the world. I've been using Docker for running things like Postgres, and was thinking it would be awesome to just download a Docker image of the ARM build of Debian Jessie and have everything function as if it were actually running on a real rPi. Even better if I could then somehow quickly mirror this to an SD card and throw it into a real rPi.
Has anyone explored this? Everything I'm finding is about running Docker on the rPi, not running Docker to emulate an rPi.
Based on the answers and comments to similar questions - such as this one on the Raspberry Pi Stack Exchange site - I think the short answer is "no" (or at least not without a lot of effort).
Your problem is that, as mentioned in the comments, Docker doesn't do full-on virtualisation (that's kind of the point of it), so you can't take an ARM Raspbian Docker image and run it on an x86 VirtualBox host - which is what it sounds like you'd like to do.
The Docker image needs to be built for the same architecture as the host system. You get the same problem if you try to run x86 Docker images on a Raspberry Pi that is acting as a Docker host.
By way of a solution - what I'd suggest is running a Debian VM on your Mac. Raspbian is close enough to Debian that you'll have a fairly "Pi-like" environment to develop in and can copy your code to an SD card when you're done.
If you want an easy way to manage the configuration so that the number of cores, RAM, disk space etc matches your Pi, then Vagrant may be a good solution.
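A minimal sketch of the Vagrant route (the box name debian/jessie64 and the resource figures are assumptions; adjust them to your setup):
vagrant init debian/jessie64
vagrant up
vagrant ssh
Before running vagrant up, you can edit the generated Vagrantfile to cap the VM at roughly Pi-like resources, e.g. 1 CPU and 1 GB of RAM.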

Is it possible to run GUI apps in windows containers?

So I'm playing around with the containers concept, and specifically Windows containers.
I managed to run containers using the Windows Nano Server image; however, this image is meant for services and does not support GUI applications (or 32-bit apps).
I couldn't find any mention of running GUI applications (and seeing their GUI) using Windows containers (I found only Linux container GUI examples).
Is there a way to run GUI apps in Windows containers? And if so, how can I create my own image containing this support?
As far as I know, it's impossible, because Docker does not allow RDP inside a container.
Nano Server does not support a GUI, which is why I cannot see how this could work if the base image for your container is Nano Server.
No, it is not possible on Windows, regardless of the image; it is a system limitation. As a last hope to get this somehow running, I would try to install a VNC server inside a container and connect to it from outside. This approach works for Linux-based containers, but I doubt it will work on Windows.
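For the Linux-based variant of that idea, the shape of it is roughly this (a sketch; the image name is hypothetical and the port depends on how the VNC server inside it is configured):
docker run -d -p 5901:5901 <some-vnc-enabled-image>
A VNC viewer on the host then connects to localhost:5901 to see the GUI running inside the container.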
