I have a Docker ARM Ubuntu image and simply want to test whether it runs. However, it seems I cannot run the image in a normal desktop environment, since the binaries are different for ARM than for x86_64. Is there any way I can simulate this testing for ARM without actually getting a Raspberry Pi?
You can run a Raspberry Pi in a VM. You can download a complete package with QEMU and the image for Windows. For Ubuntu, you can probably apt-get install qemu qemu-system-arm, download the same package, and then do the same as run.bat does, but with the correct paths for Linux:
qemu-system-arm -M versatilepb -cpu arm1176 -hda 2012-07-15-wheezy-raspbian.img -kernel kernel-qemu -m 192 -append "root=/dev/sda2"
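A minimal sketch of the Linux side, assuming the kernel-qemu and Raspbian image files from that package are in the current directory:
sudo apt-get update
sudo apt-get install qemu qemu-system-arm
qemu-system-arm -M versatilepb -cpu arm1176 -m 192 \
    -hda 2012-07-15-wheezy-raspbian.img \
    -kernel kernel-qemu -append "root=/dev/sda2"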
My goal is to be able to run a Vulkan application in a Docker container using the Nvidia Container Toolkit, ideally running Ubuntu 22.04 both on the host and in the container.
I've created a git repo to allow others to better reproduce this issue: https://github.com/rickyjames35/vulkan_docker_test
The README explains my findings but I will reiterate them here.
For this test I'm running Ubuntu 22.04 on my host as well as in the container (FROM ubuntu:22.04). I'm seeing that the only device vulkaninfo finds is llvmpipe, which is a CPU-based graphics driver. I'm also seeing that llvmpipe can't render when running vkcube, both in the container and on the host, on Ubuntu 22.04. Here is the container output for vkcube:
Selected GPU 0: llvmpipe (LLVM 13.0.1, 256 bits), type: 4
Could not find both graphics and present queues
On my host I can tell it to use llvmpipe:
vkcube --gpu_number 1
Selected GPU 1: llvmpipe (LLVM 13.0.1, 256 bits), type: Cpu
Could not find both graphics and present queues
As you can see, they produce the same error. What's interesting is that if I switch the container to FROM ubuntu:20.04, llvmpipe can render, but this is moot since I do not wish to do CPU rendering. The main issue is that Vulkan is unable to detect my Nvidia GPU from within the container when using the Nvidia Container Toolkit with NVIDIA_DRIVER_CAPABILITIES=all and NVIDIA_VISIBLE_DEVICES=all. I've also tried using nvidia/vulkan. When running vulkaninfo in that container I get:
vulkaninfo
ERROR: [Loader Message] Code 0 : vkCreateInstance: Found no drivers!
Cannot create Vulkan instance.
This problem is often caused by a faulty installation of the Vulkan driver or attempting to use a GPU that does not support Vulkan.
ERROR at /vulkan-sdk/1.3.236.0/source/Vulkan-Tools/vulkaninfo/vulkaninfo.h:674:vkCreateInstance failed with ERROR_INCOMPATIBLE_DRIVER
I suspect this has to do with me running Ubuntu 22.04 on the host, although the whole point of Docker is that the host OS generally should not affect the container.
In the test above I was using nvidia-driver-525; I've tried different versions of the driver with the same results. At this point I'm not sure if I'm doing something wrong or if Vulkan is not supported in the Nvidia Container Toolkit for Ubuntu 22.04, even though it claims to be.
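For anyone reproducing: the toolkit env vars above are passed on the run command line, roughly like this (shown here with the nvidia/vulkan image mentioned above; the exact setup for my own image is in the linked repo):
docker run --rm -it --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=all \
    -e NVIDIA_VISIBLE_DEVICES=all \
    nvidia/vulkan vulkaninfo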
I had a similar problem when trying to set up a docker container using the nvidia/cuda:12.0.0-devel-ubuntu22.04 image.
I was able to get it to work using the unityci/editor image. This is the docker command I used.
docker run -dit \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix/ \
    -v /dev:/dev \
    --gpus='all,"capabilities=compute,utility,graphics,display"' \
    unityci/editor:ubuntu-2022.2.1f1-base-1.0.1
After setting up the container, I had to apt install vulkan-utils and libnvidia-gl-525, and then everything worked.
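Inside the container, that is roughly the following (the driver package version comes from my setup; it should match the driver installed on the host):
apt update
apt install -y vulkan-utils libnvidia-gl-525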
Hope this helps!
What I need:
A test container on an x86_64 machine for a Raspberry Pi Zero, which works with QEMU emulation for armv6l.
What I've got so far:
A Dockerfile with test code:
FROM python:3.7.9
COPY hello.py ./
CMD [ "python3", "./hello.py" ]
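hello.py is just a trivial payload; its exact contents don't matter here. Something like this (illustrative) is enough:
# hello.py - print the machine architecture Python sees
import platform
print("Hello from", platform.machine())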
The image is built with this command:
docker buildx build --platform linux/arm/v6 -t test/hello --push .
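If no multi-platform builder is active yet, buildx needs one created first; this is a standard buildx step, independent of the problem below:
docker buildx create --use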
After it has been built for linux/arm/v6 and pushed, I try to run it with this command:
docker run --platform=linux/arm/v6 --rm -t test/hello uname -mpi
Output: armv7l unknown unknown
I have set up QEMU and binfmt as described on the buildx GitHub page:
https://github.com/docker/buildx#building-multi-platform-images
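That page boils down to registering QEMU handlers via binfmt_misc; at the time of writing it suggests something along these lines:
docker run --privileged --rm tonistiigi/binfmt --install all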
I do not understand why the output is armv7l, because I did everything to make an armv6l image. I do not know if I need to make adjustments to Docker or to QEMU itself.
I am quite new to Docker's buildx system and to emulating containers under QEMU, so if anyone could help me out with this problem I would be very grateful.
EDIT:
Thanks to Peter, the container is now forced to use armv6l:
docker run -e QEMU_CPU=arm1176 --platform=linux/arm/v6 --rm -t test/hello uname -mpi
Output: armv6l unknown unknown
uname is telling you 'armv7l' because you have not specified to QEMU that it should emulate any particular CPU type, and its default is "all the features we can emulate".
This should not be a problem, because all software that can run on a v6 CPU will run on a v7 CPU. (That's why QEMU's default is what it is: it means that in general guest programs will all just work.)
I'm not familiar with docker, but I suspect that its 'platform' argument is simply configuring what the code inside the container is built to run on. So you have a container full of v6 binaries, which will run on either a v6 CPU or a v7 one.
If you really need to force QEMU to emulate a v6 CPU and not a v7 one, you can set the environment variable QEMU_CPU to 'arm1176', which will make QEMU emulate that specific CPU.
I happened to find that my macOS (x86) machine can run a Docker container for an ARM image, arm64v8/alpine, but with the following warning:
docker run -it arm64v8/alpine uname -a
WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
Linux d5509c57dd24 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 aarch64 Linux
And I'm pretty sure the image is not a multi-arch image (docker manifest inspect --verbose arm64v8/alpine). Why can x86 run an ARM container?
You are correct, the image is not multi-architecture, yet Docker can run it. The reason behind this is a kernel subsystem called binfmt_misc, which allows mapping the magic numbers of a binary file to a specific handler for its execution. You can read more in this nice Wikipedia article about it.
Docker for Mac comes prepared for the binfmt magic, so there is nothing to be done to enable it; it works out of the box after installation. All you need to do is fetch the image and run it. The details of the mechanism can be found in the repository of the docker-for-mac project at this link.
To explain it simply, binary files carry magic numbers that let the kernel decide how to handle their execution. When binfmt_misc intercepts a file whose magic numbers it recognizes, it invokes the handler associated with those magic numbers.
This alone is not enough to run the container. The next part of the magic is QEMU, an emulator for various CPU architectures. The kernel (binfmt_misc) invokes QEMU for each ARM64 binary, and QEMU emulates the ARM64v8 CPU.
This is not limited to Docker, nor to the virtual machine that runs Docker on macOS; any Linux system can be configured to do this.
You can use the following to set up Ubuntu to run the emulation:
sudo apt-get install qemu binfmt-support qemu-user-static # Install the qemu packages
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes # This step will execute the registering scripts
docker run --rm -t arm64v8/ubuntu uname -m # Testing the emulation environment
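To inspect what got registered, the handlers live under /proc/sys/fs/binfmt_misc; for example (entry name as created by the qemu-user-static registration):
cat /proc/sys/fs/binfmt_misc/qemu-aarch64   # prints the interpreter path, magic and mask for ARM64 binaries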
More details about the whole set-up process can be found in the qemu-user-static repository.
OP: if you are wondering what the usefulness of this is: from my personal experience, I use this functionality heavily when porting applications from x86 to other architectures (mainly ARM64). It allows me to run build systems for various architectures without having a physical machine to run the build on.
I'm trying to create a Windows Docker container with access to GPUs. To start, I just wanted to check whether I can access the GPU from Docker containers.
Dockerfile
FROM mcr.microsoft.com/windows:1903
CMD [ "ping", "-t", "localhost" ]
Build and run
docker build -t debug_image .
docker run -d --gpus all --mount src="C:\Program Files\NVIDIA Corporation\NVSMI",target="C:\Program Files\NVIDIA Corporation\NVSMI",type=bind debug_image
docker exec -it CONTAINER_ID powershell
Problem and question
Now that I'm inside, I try to execute the shared NVIDIA SMI executable. However, I get an error and it fails to run. The obvious question is why, if the host is capable of running it.
PS C:\Program Files\NVIDIA Corporation\NVSMI> .\nvidia-smi.exe
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA
driver. Make sure that the latest NVIDIA driver is installed and
running. This can also be happening if non-NVIDIA GPU is running as
primary display, and NVIDIA GPU is in WDDM mode.
About the NVIDIA driver: AFAIK it should not be the problem, since it works on the host, where the NVIDIA driver is installed.
My host has 2 NVIDIA GPUs, and it has no "primary" display, as it's a server with no screen connected. AFAIK its CPU doesn't have an integrated GPU, so I would assume one of the connected NVIDIA GPUs acts as the primary display, if such a thing exists when no display is connected to the server (I also think one of them must, because one renders the screen when I connect through TeamViewer, and dxdiag reports one of them as Display 1).
About WDDM mode, I've found ways to change it, but haven't found a way to check the current mode.
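For reference, on the host, where nvidia-smi does work, the current mode can be queried like this (driver_model is a Windows-only query field):
nvidia-smi --query-gpu=driver_model.current --format=csv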
So basically the question is: why is it not working? Any insight or help on the previous points would be helpful.
Update, regarding the points above:
1) I've updated my drivers from 431 to 441, the latest version available for the GTX 1080 Ti, and the error message remains the same.
2-3) I've confirmed that GTX cards (except some Titan models) cannot run in TCC mode. Therefore they're running in WDDM mode.
I have successfully created a Docker image with the MATLAB Compiler Runtime installed, using CentOS 6.9 as the parent image. This works wonderfully and enables running MATLAB scripts within the container.
However, we also have a MATLAB GUI application for Linux which we would like to launch from within the container. I was successful in running the GUI via X11 forwarding on Windows 10 using the Xming server.
The question is: is it possible to create a Docker image for CentOS 6.9 with GUI capabilities (a Linux desktop) so that X11 forwarding is not required? If yes, please point me to some resources.
Yes, it is possible by sharing the X11 socket:
docker run -ti --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
MATLAB
shamelessly copied from here
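Depending on the host's X server access control, local clients may also need to be allowed to connect before launching the container, e.g. on the host:
xhost +local:   # permit local (non-network) clients, including the container, to reach the X server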
Update: for Windows, follow this.