I am on Ubuntu 19.10 and would like to use OpenCL inside Docker.
Inside the Docker container I have installed opencl-headers, ocl-icd-opencl-dev and clinfo.
When I run clinfo on my machine outside of Docker, I get the following output:
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 10.2.159
...
When the same command is run in Docker:
Number of platforms 0
I thought the Docker container should be able to use my graphics card, but I am unsure if/how I should allow it.
Thank you for any insights.
You need to start Docker with the --gpus all option, e.g.:
docker run --rm --gpus all nvidia/opencl clinfo
You can also expose just a specific gpu:
docker run -it --rm --gpus "device=0" ubuntu nvidia-smi
Read more here: https://docs.docker.com/config/containers/resource_constraints/
If you get this error:
$ docker run --rm --gpus all nvidia/opencl clinfo
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
On Ubuntu, you can remedy it with:
$ sudo apt install -y nvidia-container-toolkit; sudo systemctl restart docker
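For completeness, here is a minimal Dockerfile sketch along those lines (the nvidia/opencl base image is the one used above, the packages are the ones from the question; treat it as a starting point, not a verified build):

# Hypothetical Dockerfile: extend the nvidia/opencl base image with the
# development packages mentioned in the question
FROM nvidia/opencl
RUN apt-get update && \
    apt-get install -y --no-install-recommends opencl-headers ocl-icd-opencl-dev clinfo && \
    rm -rf /var/lib/apt/lists/*
CMD ["clinfo"]

Build it and run it with the GPU exposed:
docker build -t opencl-test .
docker run --rm --gpus all opencl-test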
I have used the Rotating TOR Docker image on amd64 architectures with no problem. Now I am trying to run the same image on Raspberry Pi OS (32-bit ARM), but I have not succeeded.
This is the error when running the image:
$ docker run -d -p 5566:5566 -p 4444:4444 --env tors=25 mattes/rotating-proxy
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested
I have tried adding --platform linux/amd64 after run, but the image does not work either.
Does anyone know how to run this image on Raspberry Pi OS, or is there just no way to do it? Thanks for the help.
That won't work.
Docker is a virtualisation platform, not an emulator. It cannot be used to run images from one architecture on another (AMD64 on ARM or vice versa). You need a matching image (or install the ARM version directly on the Pi, if there is one).
You can run docker images across platforms with QEMU, which is an emulator.
Docs: https://dbhi.github.io/qus/
Docker images: https://github.com/multiarch/qemu-user-static
You can run QEMU with one of the images maintained at the above link with something like:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
Which should then allow you to run your x86 images, e.g.:
docker run --rm -t i386/ubuntu uname -m
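Applied to the image from the question, the sequence on the Pi could then look roughly like this (a sketch only; emulation is slow, and the --platform flag needs a reasonably recent Docker):

# Register the QEMU binfmt handlers, then run the amd64 image under emulation
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker run -d -p 5566:5566 -p 4444:4444 --env tors=25 --platform linux/amd64 mattes/rotating-proxy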
I have a dockerized application that requires the nscd socket from the docker host. So I bind mount the socket at run time. DNS, getpwnam, getpwuid, etc. all work fine. Strangely though, I have found that gethostbyname doesn't work anymore. For example:
docker run --rm -v /var/run/nscd/socket:/var/run/nscd/socket ubuntu hostname -i
hostname: Name or service not known
However, under alpine, it does work:
docker run --rm -v /var/run/nscd/socket:/var/run/nscd/socket alpine hostname -i
172.18.85.4
Does anyone know why this breaks gethostbyname and how to fix it?
Update: if I use the same glibc on the host and container, it still breaks:
ldd --version
ldd (GNU libc) 2.17
docker run --rm centos ldd --version
ldd (GNU libc) 2.17
docker run --rm -v /var/run/nscd/socket:/var/run/nscd/socket centos hostname -i
hostname: Name or service not known
Setting LOCALDOMAIN to an empty string works:
docker run -it --rm -v /var/run/nscd/socket:/var/run/nscd/socket --env LOCALDOMAIN='' centos hostname -i
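If you want to bake this into the image instead of passing it on every run, a minimal Dockerfile sketch could look like this (the image name is just a placeholder; the centos base matches the examples above):

# Hypothetical Dockerfile: default LOCALDOMAIN to an empty string
FROM centos
ENV LOCALDOMAIN=""

docker build -t centos-nscd .
docker run --rm -v /var/run/nscd/socket:/var/run/nscd/socket centos-nscd hostname -i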
I saw similar threads, but they are different because I am using WSL 2 with Docker and GPU-aware Docker.
I have Windows 10 version 2004 (build 20161.1000).
I have installed WSL 2 and have Docker Desktop 2.3.0.3 running on my Windows system.
I have Ubuntu 18.04 LTS installed in WSL 2 too.
I have installed the NVIDIA driver
The Linux kernel version is 4.19.121-microsoft-standard.
The NVIDIA driver version is 455.41 for my laptop GPU, a Quadro M2000M.
Actually I followed all the steps described in https://ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2 until the step where I have to run "sudo service docker stop" in an Ubuntu terminal.
This results in a message docker: unrecognized service.
I have to restart Docker Desktop in Windows 10 in order to get the daemon running.
Then I test in the Ubuntu terminal: docker run hello-world ==> this runs fine.
Also the command docker run -it ubuntu bash ==> runs fine in the Ubuntu terminal of WSL 2.
BUT when I run:
docker run -u $(id -u):$(id -g) -it --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
then I get the error : docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]
This involves Microsoft, Ubuntu and NVIDIA. I have searched the support sites but could not find anything that solves my problem.
Can anyone help me here?
There is this strange answer mentioned here and here:
sudo service docker start
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
This worked for me on WSL (Ubuntu 20.04), so I added it to the ~/.bashrc script.
Note: the first part may need to be restarting Docker rather than starting it!
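For reference, the snippet in ~/.bashrc could look roughly like this (a sketch only; the directory check is my own guard so later shells don't fail on an already existing mount):

# Start the Docker daemon, then mount the systemd cgroup hierarchy once
sudo service docker start
if [ ! -d /sys/fs/cgroup/systemd ]; then
    sudo mkdir /sys/fs/cgroup/systemd
    sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
fi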
I have a GPU application that does unit-testing during the image building stage.
With Docker 19.03, one can specify the nvidia runtime with docker run --gpus all, but I also need access to the GPUs during docker build because I do unit testing. How can I achieve this goal?
For older versions of Docker that used nvidia-docker2 it was not possible to specify the runtime during the build stage, BUT you could set the default runtime to nvidia, and docker build worked fine that way. Can I do that in Docker 19.03, which doesn't need nvidia-docker anymore? If so, how?
You need to use nvidia-container-runtime, as explained in the docs: "It is also the only way to have GPU access during docker build".
Steps for Ubuntu:
Install nvidia-container-runtime:
sudo apt-get install nvidia-container-runtime
Edit/create /etc/docker/daemon.json with the following content:
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
Restart docker daemon:
sudo systemctl restart docker
Build your image (now GPU available during build):
docker build -t my_image_name:latest .
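With the default runtime set to nvidia, GPU commands can run inside RUN steps of the build. A minimal Dockerfile sketch to verify this (the CUDA base image tag is an assumption, pick one that matches your driver/CUDA version):

# Hypothetical Dockerfile: nvidia-smi should succeed at build time
FROM nvidia/cuda:10.2-base
RUN nvidia-smi

You can also check that the daemon picked up the setting with docker info, which should now report Default Runtime: nvidia.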
A "solution" I found is to first run a base image with the host nvidia drivers mounted on it
docker run -it --rm --gpus ubuntu
And then build my app within the container manually and commit the resulting image.
This is not ideal and it would be best to have access to nvidia-smi during the build phase.
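A rough outline of that manual workaround (container and image names are just placeholders):

# Start a GPU-enabled container, build inside it by hand, then commit the result
docker run -it --name gpu-build --gpus all ubuntu
# ... build and unit-test the application inside the container, then exit ...
docker commit gpu-build my_image_name:latest
docker rm gpu-build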
I would like to set up CUDA using the following command:
docker run -ti --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 nvidia/cuda
I kept getting these errors:
Command 'docker' not found, but can be installed with:
snap install docker # version 18.06.1-ce, or
apt install docker.io # version 18.09.7-0ubuntu1~19.04.5
See 'snap info docker' for additional versions.
I tried to Google these errors, but failed.
System Environment: Ubuntu Desktop 19.04
I should explain that this is a clean system I'm currently using.
I should tell you one thing: installing anything with Docker comes with a prerequisite, which is that you need to install Docker first.
You can find the tutorials on how you could install docker in the following link:
How to install Docker
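On a clean Ubuntu 19.04 system, the quickest route is the distribution package that the error message itself suggests, for example:

sudo apt update
sudo apt install docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER    # optional: use docker without sudo (re-login required)

Note that the --runtime=nvidia flag additionally requires the NVIDIA container runtime/toolkit to be installed (see the nvidia-container-toolkit / nvidia-container-runtime steps above).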
And then you can pull and run the NVIDIA CUDA container with the following commands:
docker pull nvidia/cuda
docker run -ti --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 nvidia/cuda
which are referenced from the NVIDIA CUDA Docker Hub and the NVIDIA CUDA GitHub page.