I use Singularity and I need to install an NVIDIA driver in my Singularity container to do some deep learning with a GTX 1080.
The Singularity image is created from an NVIDIA Docker image from here:
https://ngc.nvidia.com/catalog/containers/nvidia:kaldi and converted to a Singularity container.
I think there was no NVIDIA driver, because nvidia-smi was not found before I installed the driver.
I ran the following commands:
add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
apt install nvidia-418
After that I wanted to check whether the driver was installed correctly, so I ran:
nvidia-smi
which returned: Failed to initialize NVML: Driver/library version mismatch
I searched about how to solve this error and found this topic :
NVIDIA NVML Driver/library version mismatch
One answer says to do the command :
lsmod | grep nvidia
and then to rmmod each module except nvidia, and finally to rmmod nvidia.
rmmod drm
But when I do this, as the topic expected, I get the error:
rmmod: ERROR: Module nvidia is in use.
The topic says to run lsof /dev/nvidia* and to kill the processes that use the module, but I see nothing with drm in the name, and killing those processes (Xorg, gnome-shell) seems like a very bad idea.
Here is the output of lsof /dev/nvidia*, followed by the output of lsmod | grep nvidia, and then rmmod drm:
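The unload procedure from that answer can be sketched as a small helper. This is only an illustration: the sample lsmod output below is made up, and the sketch simply keeps lsmod's listing order while making sure nvidia goes last; on a real machine you must check the "Used by" column yourself so that dependent modules are removed first.

```shell
# Derive an rmmod order from `lsmod | grep nvidia` output:
# every module except nvidia first, nvidia itself last.
unload_order() {
  # $1 is lsmod output: "name size usecount [used-by]"
  echo "$1" | awk '$1 != "nvidia" {print $1} END {print "nvidia"}'
}

# Illustrative sample only; replace with your own `lsmod | grep nvidia`.
sample="nvidia_uvm 1048576 0
nvidia_drm 57344 4
nvidia_modeset 1228800 7 nvidia_drm
nvidia 20606976 899 nvidia_uvm,nvidia_modeset"

unload_order "$sample"
# each printed name would then be passed to `sudo rmmod <name>`
```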
Rebooting the computer also didn't work.
What should I do to get nvidia-smi working and be able to use my GPU from inside the Singularity container?
Thank you.
You may need to do the above steps in the host OS and not in the container itself. /dev is mounted into the container as is and still subject to use by the host, though the processes are run in a different userspace.
Thank you for your answer.
I wanted to install the GPU driver in the Singularity container because, inside the container, I wasn't able to use the GPU (nvidia-smi: command not found), while outside the container nvidia-smi worked.
You are right, the driver should be installed outside the container; I only tried installing it inside to work around not having access to the driver from within the container.
Now I found the solution: to use the GPU from inside the Singularity container, you must add --nv when calling the container.
example :
singularity exec --nv singularity_container.simg ~/test_gpu.sh
or
singularity shell --nv singularity_container.simg
When you add --nv, the container has access to the NVIDIA driver and nvidia-smi works.
Without it, you will not be able to use the GPU and nvidia-smi will not work.
My goal is to be able to run Vulkan application in a docker container using the Nvidia Container Toolkit. Ideally running Ubuntu 22.04 on the host and in the container.
I've created a git repo to allow others to better reproduce this issue: https://github.com/rickyjames35/vulkan_docker_test
The README explains my findings but I will reiterate them here.
For this test I'm running Ubuntu 22.04 on my host as well as in the container (FROM ubuntu:22.04). I'm seeing that the only device vulkaninfo finds is llvmpipe, which is a CPU-based graphics driver. I'm also seeing that llvmpipe can't render when running vkcube, both in the container and on the host, on Ubuntu 22.04. Here is the container output for vkcube:
Selected GPU 0: llvmpipe (LLVM 13.0.1, 256 bits), type: 4
Could not find both graphics and present queues
On my host I can tell it to use llvmpipe:
vkcube --gpu_number 1
Selected GPU 1: llvmpipe (LLVM 13.0.1, 256 bits), type: Cpu
Could not find both graphics and present queues
As you can see, they produce the same error. What's interesting is that if I switch the container to FROM ubuntu:20.04, then llvmpipe can render, but this is moot since I do not wish to do CPU rendering. The main issue here is that Vulkan is unable to detect my NVIDIA GPU from within the container when using the NVIDIA Container Toolkit with NVIDIA_DRIVER_CAPABILITIES=all and NVIDIA_VISIBLE_DEVICES=all. I've also tried using nvidia/vulkan. When running vulkaninfo in this container I get:
vulkaninfo
ERROR: [Loader Message] Code 0 : vkCreateInstance: Found no drivers!
Cannot create Vulkan instance.
This problem is often caused by a faulty installation of the Vulkan driver or attempting to use a GPU that does not support Vulkan.
ERROR at /vulkan-sdk/1.3.236.0/source/Vulkan-Tools/vulkaninfo/vulkaninfo.h:674:vkCreateInstance failed with ERROR_INCOMPATIBLE_DRIVER
I suspect this has to do with me running Ubuntu 22.04 on the host, although the whole point of Docker is that the host OS generally should not affect the container.
In the test above I was using nvidia-driver-525; I've tried different versions of the driver with the same results. At this point I'm not sure whether I'm doing something wrong or whether Vulkan is not supported in the NVIDIA Container Toolkit on Ubuntu 22.04, even though it claims to be.
I had a similar problem when trying to set up a docker container using the nvidia/cuda:12.0.0-devel-ubuntu22.04 image.
I was able to get it to work using the unityci/editor image. This is the docker command I used.
docker run -dit -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /dev:/dev --gpus='all,"capabilities=compute,utility,graphics,display"' unityci/editor:ubuntu-2022.2.1f1-base-1.0.1
After setting up the container, I had to apt install vulkan-utils and libnvidia-gl-525; then everything works.
Hope this helps!
Sometimes I can't communicate with my NVIDIA GPUs inside a Docker container when I come back to my workplace from home, even though a previously launched process that uses the GPUs is still running fine. The running process (training a neural network via PyTorch) is not affected by the disconnection, but I cannot launch a new process.
nvidia-smi gives Failed to initialize NVML: Unknown Error and torch.cuda.is_available() likewise returns False.
I have met two different cases:
nvidia-smi works fine on the host machine. In this case, the situation can be solved by restarting the Docker container via docker stop $MYCONTAINER followed by docker start $MYCONTAINER on the host machine.
nvidia-smi doesn't work on the host machine, and neither does nvcc --version, throwing the errors Failed to initialize NVML: Driver/library version mismatch and Command 'nvcc' not found, but can be installed with: sudo apt install nvidia-cuda-toolkit. The strange point is that the current process still runs fine. In this case, reinstalling the driver or rebooting the machine solves the problem.
However, these solutions require stopping all current processes, which is not an option when I must not stop the running process.
Does anybody have a suggestion for solving this situation?
Many thanks.
(software)
Docker version: 20.10.14, build a224086
OS: Ubuntu 22.04
Nvidia driver version: 510.73.05
CUDA version: 11.6
(hardware)
Supermicro server
Nvidia A5000 * 8
(pic 1) nvidia-smi not working inside the Docker container, but working fine on the host machine.
(pic 2) nvidia-smi works after restarting the Docker container, which is case 1 mentioned above.
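The decision between the two cases above can be sketched as a tiny helper; the recovery commands and $MYCONTAINER are taken from the description above, and the command passed in stands in for nvidia-smi run on the host:

```shell
# Sketch: choose the recovery step for the two cases described above,
# based on whether `nvidia-smi` succeeds on the HOST.
suggest_recovery() {
  if "$@" >/dev/null 2>&1; then
    echo "case 1: docker stop \$MYCONTAINER && docker start \$MYCONTAINER"
  else
    echo "case 2: reinstall the NVIDIA driver or reboot the host"
  fi
}

# On a real host you would call: suggest_recovery nvidia-smi
suggest_recovery true
suggest_recovery false
```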
For the problem of Failed to initialize NVML: Unknown Error and having to restart the container, please see this ticket and post your system/package information there as well:
https://github.com/NVIDIA/nvidia-docker/issues/1671
There's a workaround on the ticket, but it would be good to have others post their configuration to help fix the issue.
Downgrading containerd.io to 1.6.6 works, as long as you set no-cgroups = true in /etc/nvidia-container-runtime/config.toml and pass the devices to docker run explicitly, like docker run --gpus all --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidia-modeset:/dev/nvidia-modeset --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools --device /dev/nvidiactl:/dev/nvidiactl --rm -it nvidia/cuda:11.4.2-base-ubuntu18.04 bash
So run sudo apt-get install -y --allow-downgrades containerd.io=1.6.6-1 and sudo apt-mark hold containerd.io to prevent the package from being updated. Then edit the config file and pass all of the /dev/nvidia* devices to docker run.
For the Failed to initialize NVML: Driver/library version mismatch issue, that is caused by the drivers having been updated while you haven't rebooted yet. If this is a production machine, I would also hold the driver package to stop it from auto-updating as well. You should be able to figure out the package name with something like sudo dpkg --get-selections "*nvidia*"
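The hold step can be sketched like this; the sample dpkg output and package names below are made up for illustration, and on a real machine you would replace the sample with the actual output of dpkg --get-selections "*nvidia*":

```shell
# Sketch: turn `dpkg --get-selections "*nvidia*"` output into package
# names that can be passed to `sudo apt-mark hold`.
# Illustrative sample only (invented package names):
sample='nvidia-driver-510 install
nvidia-kernel-common-510 install
nvidia-utils-510 install'

# First column of each line is the package name.
pkgs=$(printf '%s\n' "$sample" | awk '{print $1}')
printf '%s\n' "$pkgs"
# then hold them all (not run here):
# printf '%s\n' "$pkgs" | xargs -r sudo apt-mark hold
```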
I'm a complete newcomer to Docker, so the following questions might be a bit naive, but I'm stuck and I need help.
I'm trying to reproduce some results in research. The authors just released code along with a specification of how to build a Docker image to reproduce their results. The relevant bit is copied below:
I believe I installed Docker correctly:
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
However, when I try checking that my nvidia-docker installation was successful, I get the following error:
$ sudo docker run --gpus all --rm nvidia/cuda:10.1-base nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded\\\\n\\\"\"": unknown.
It looks like the key error is:
nvidia-container-cli: initialization error: nvml error: driver not loaded
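That error means the NVIDIA kernel driver is not loaded on the host. A minimal sketch to check this, assuming only a Linux host where /proc/modules is readable:

```shell
# Check whether the NVIDIA kernel module is loaded on the host.
# nvidia-container-cli talks to this driver, so if it is missing,
# the "nvml error: driver not loaded" failure above is expected.
nvidia_driver_loaded() {
  # /proc/modules lists loaded kernel modules, one per line
  grep -q '^nvidia ' /proc/modules 2>/dev/null
}

if nvidia_driver_loaded; then
  echo "driver loaded"
else
  echo "driver not loaded"  # expected on a machine without an NVIDIA GPU
fi
```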
I don't have a GPU locally and I'm finding conflicting information on whether CUDA needs to be installed before NVIDIA Docker. For instance, this NVIDIA moderator says "A proper nvidia docker plugin installation starts with a proper CUDA install on the base machine."
My questions are the following:
Can I install NVIDIA Docker without having CUDA installed?
If so, what is the source of this error and how do I fix it?
If not, how do I create this Docker image to reproduce the results?
Can I install NVIDIA Docker without having CUDA installed?
Yes, you can. The README states that nvidia-docker only requires the NVIDIA GPU driver and the Docker engine to be installed:
Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed
If so, what is the source of this error and how do I fix it?
That's either because you don't have a GPU locally, because it's not an NVIDIA GPU, or because something went wrong when you installed the drivers. If you have a CUDA-capable GPU, I recommend using NVIDIA's guide to install the drivers. If you don't have a GPU locally, you can still build an image with CUDA and then move it to a machine that has a GPU.
If not, how do I create this Docker image to reproduce the results?
The problem is that even if you manage to remove CUDA from the Docker image, there is software that requires it. In this case, fixing the Dockerfile seems unnecessary to me: you can just ignore Docker and start fixing the code to run on the CPU.
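The "build without a GPU, run where there is one" suggestion above could look like this minimal sketch; the base image tag, the copied app, and the CMD are placeholders, not taken from the question:

```dockerfile
# Build this image on the GPU-less machine; no GPU is needed at build time.
FROM nvidia/cuda:10.1-base
COPY . /app
WORKDIR /app
# This only runs at container start, on the machine that has the GPU:
CMD ["nvidia-smi"]
```

You would then move the image with docker save / docker load (or a registry push and pull) and run it on the GPU host with docker run --gpus all.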
I think you need
ENV NVIDIA_VISIBLE_DEVICES=void
then
RUN your work
finally
ENV NVIDIA_VISIBLE_DEVICES=all
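Put together, the suggestion above might look like this sketch; the base image and the build script name are placeholders:

```dockerfile
FROM nvidia/cuda:10.1-base
# Hide the GPUs while building, so build steps don't need the driver:
ENV NVIDIA_VISIBLE_DEVICES=void
# placeholder for "your work":
RUN ./build-step.sh
# Re-expose all GPUs for run time:
ENV NVIDIA_VISIBLE_DEVICES=all
```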
I'm trying to create a Windows Docker container with access to GPUs. To start, I just wanted to check whether I can access the GPU from Docker containers.
Dockerfile
FROM mcr.microsoft.com/windows:1903
CMD [ "ping", "-t", "localhost" ]
Build and run
docker build -t debug_image .
docker run -d --gpus all --mount src="C:\Program Files\NVIDIA Corporation\NVSMI",target="C:\Program Files\NVIDIA Corporation\NVSMI",type=bind debug_image
docker exec -it CONTAINER_ID powershell
Problem and question
Now that I'm inside, I try to execute the shared NVIDIA SMI executable. However, I get an error and it is not able to run. The obvious question is why, given that the host is capable.
PS C:\Program Files\NVIDIA Corporation\NVSMI> .\nvidia-smi.exe
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA
driver. Make sure that the latest NVIDIA driver is installed and
running. This can also be happening if non-NVIDIA GPU is running as
primary display, and NVIDIA GPU is in WDDM mode.
About the NVIDIA driver: AFAIK it should not cause any problem, since nvidia-smi works on the host, where the NVIDIA driver is installed.
My host has 2 NVIDIA GPUs and no screen connected, as it's a headless server, so there is no obvious "primary" display. AFAIK its CPU doesn't have an integrated GPU, so I would assume one of the connected NVIDIA GPUs acts as the primary display (if that concept applies when no display is connected): one of them renders the screen when I connect through TeamViewer, and dxdiag reports one of them as Display 1.
About WDDM mode, I've found ways to change it, but I haven't found a way to check the current mode.
So basically the question, is why is it not working? Any insight or help in the previous points would be helpful.
Update, regarding the points above:
1) I've updated my drivers from 431 to 441, the latest version available for the GTX 1080 Ti, and the error message remains the same.
2-3) I've confirmed that GTX cards (except some Titan models) cannot run in TCC mode. Therefore they're running in WDDM mode.
Docker is currently giving me a hard time. I followed these instructions to install Docker on my virtual server running Ubuntu 14.04, hosted by strato.de.
wget -qO- https://get.docker.com/ | sh
Executing this line immediately produces this error message:
modprobe: ERROR: ../libkmod/libkmod.c:507 kmod_lookup_alias_from_builtin_file() could not open builtin file '/lib/modules/3.13.0-042stab092.3/modules.builtin.bin'modprobe: FATAL: Module aufs not found.
Warning: current kernel is not supported by the linux-image-extra-virtual
package. We have no AUFS support. Consider installing the packages linux-image-virtual kernel and linux-image-extra-virtual for AUFS support.
After the installation was done, I installed the two packages mentioned. Now my problem is that I can't get Docker to run.
service docker start
results in:
start: Job failed to start
docker -d
results in
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
ERRO[0000] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.
INFO[0000] +job init_networkdriver()
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
package not installed
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] Shutting down daemon due to errors: package not installed
and
docker run hello-world
results in
FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Does anybody have a clue about which dependencies could be missing? What else could have gone wrong? Are there any logs that Docker provides?
I've been searching back and forth for a solution, but couldn't find one.
Just to mention, this is a fresh Ubuntu 14.04 setup. I didn't install any other services except for Java, and the reason I need Docker is to use the Docker image of ShareLaTeX.
I'm thankful for any help!
Here's what I tried and found out, hoping that it will save you some time or even help you solve it.
Docker's download script tries to identify the kernel through uname -r in order to install the right kernel extras for your host.
I suspect two problems:
My provider (united-hoster.de), and probably yours, uses customized kernel images (e.g. 3.13.0-042stab108.2) for virtual hosts. Since the script explicitly looks for -generic in the name, the lookup fails.
While the naming problem would be easy to fix, I wasn't able to install the generic kernel extras with my hoster's custom kernel. Upgrading the kernel does not work either, since it would affect all users/vHosts on the same physical machine: the kernel is shared (as stated in a support ticket).
To get around that:
I skipped it, hoping that Docker would work without AUFS support, but it didn't.
I tried to force Docker to use devicemapper instead, but to no avail.
I see two options: get a dedicated host so you can mess with kernels and filesystems (or at least let the Docker installer do it), or install the binaries manually.
You need to start docker
sudo start docker
and then
sudo docker run hello-world
I faced the same problem on Ubuntu 14.04 and solved it.
Refer to this comment by Nino-K: https://github.com/docker-library/hello-world/issues/3