Getting Docker to run on a Raspberry Pi 3 - docker

When trying to build a docker container on my raspberry pi 3, I encountered this error
---> Running in adfb8905587b
failed to create endpoint amazing_hamilton on network bridge: failed to add the host (vetha45fbc5) <=> sandbox (veth7714d12) pair interfaces: operation not supported
I was able to find someone else with the same issue here, and their solution was that we're missing the "Raspberry Pi Linux kernel extra modules" and should install them with these commands:
sudo apt update
sudo apt install linux-modules-extra-raspi
I've found that this command does not work for me and returns the following error:
E: Unable to locate package linux-modules-extra-raspi
How can I resolve this issue and get docker running on my Raspberry Pi 3?

A kernel update will do the job; simply run:
sudo rpi-update
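The error above comes from Docker failing to create the veth pair for its bridge network, so after the kernel update it is worth rebooting and confirming that the veth module loads. A quick check (my addition, assuming a standard Raspberry Pi OS install):
sudo modprobe veth
lsmod | grep veth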

Related

docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]. AFTER installing nvidia-docker2

I followed the instructions to install nvidia-docker2 from the official documentation: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
Whenever I run their test example:
sudo docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
I still get the error:
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
I rebooted but still no effect.
I am on Ubuntu 22.04 with my nvidia drivers updated.
nvidia-smi works on the host machine but not when using Docker.
EDIT (SOLVED): Finally I found out what was going on.
When reinstalling, it worked, but after rebooting it went back to a previous state where it did not work.
This was due to another Docker service installed using snapd, so I had to purge Docker completely with
sudo snap remove docker
and after that I could reinstall everything, and it is finally stable, even after rebooting.
Unfortunately I was not able to properly "fix" the issue, so I purged all Docker packages and all NVIDIA container packages and reinstalled everything, and now it works!
Good old methods work fine :)
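For anyone landing in the same spot, the "purge everything and reinstall" step looks roughly like this (the package names are the common Ubuntu ones; adjust to what is actually installed):
sudo snap remove docker
sudo apt-get purge docker-ce docker-ce-cli containerd.io nvidia-docker2 nvidia-container-toolkit
sudo apt-get autoremove -y
Then reinstall Docker Engine and the NVIDIA container toolkit following their official install guides.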
You need to restart the Docker daemon:
sudo systemctl restart docker
If the problem still occurs, install the nvidia-container-toolkit and then restart the Docker daemon.
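As a rough sketch of that second step (assuming the NVIDIA container toolkit apt repository is already configured, as in the install guide linked above):
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
The nvidia-ctk runtime configure step registers the NVIDIA runtime with Docker so that --gpus all can resolve a device driver.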

how to run linux kvm qcow2 image file in docker

background
My learning objective is to set up an AWS S3 gateway server locally on my Raspberry Pi for Kubernetes to connect to S3 via NFS. AWS has also provided some instructions on gateway server creation. (source: aws nfs csi, gateway creation).
problem
What I am unsure of is how to set up the gateway server in Kubernetes. So for starters, I'm trying to build a Docker image that could launch the Linux KVM qcow2 image that they have provided, but this is where I am failing.
what I've tried to do so far
dockerfile
FROM ubuntu:latest
COPY ./aws-storage-gateway-ID_NUMBER /
RUN apt-get update && apt-get upgrade -y && \
    apt-get install qemu qemu-kvm libvirt-clients libvirt-daemon-system virtinst bridge-utils -y
Within this Docker image, I tried to follow the instructions in gateway creation, but I'm met with this error from virsh:
root@ac48abdfc902:/# virsh version
Authorization not available. Check if polkit service is running or see debug message for more information.
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
True enough, /var/run/libvirt/libvirt-sock does not exist, but I am stuck and can't find any useful information to resolve this error and get virsh running.
Any thoughts and ideas would be appreciated.
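One pattern often suggested for this kind of setup (not from the original post, and the image name below is a placeholder) is to run the container privileged with the KVM device exposed and start the libvirt daemons by hand, since nothing launches them automatically inside a container:
docker run -it --privileged --device /dev/kvm my-gateway-image bash
# inside the container:
virtlogd -d
libvirtd -d
virsh version
Alternatively, the host's libvirt socket can be bind-mounted into the container with -v /var/run/libvirt:/var/run/libvirt so that virsh talks to the host's libvirtd instead.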

nvidia-smi gives an error inside of a docker container

Sometimes I can't communicate with my Nvidia GPUs inside a docker container when I come back to my workplace from home, even though the previously launched process that utilizes the GPUs is running well. The running process (training a neural network via Pytorch) is not affected by the disconnection, but I cannot launch a new process.
nvidia-smi gives Failed to initialize NVML: Unknown Error and torch.cuda.is_available() returns False likewise.
I met two different cases:
nvidia-smi works fine when run on the host machine. In this case, the situation can be solved by restarting the docker container via docker stop $MYCONTAINER followed by docker start $MYCONTAINER on the host machine.
nvidia-smi doesn't work on the host machine, and neither does nvcc --version, throwing the errors Failed to initialize NVML: Driver/library version mismatch and Command 'nvcc' not found, but can be installed with: sudo apt install nvidia-cuda-toolkit. The strange point is that the current process still runs well. In this case, installing the driver again or rebooting the machine solves the problem.
However, these solutions require stopping all current processes, which is not an option when I must not stop the current process.
Does somebody have a suggestion for solving this situation?
Many thanks.
(software)
Docker version: 20.10.14, build a224086
OS: Ubuntu 22.04
Nvidia driver version: 510.73.05
CUDA version: 11.6
(hardware)
Supermicro server
Nvidia A5000 * 8
(pic1) nvidia-smi not working inside of a docker container, but working well on the host machine.
(pic2) nvidia-smi works after restarting the docker container, which is case 1 mentioned above.
For the problem of Failed to initialize NVML: Unknown Error and having to restart the container, please see this ticket and post your system/package information there as well:
https://github.com/NVIDIA/nvidia-docker/issues/1671
There's a workaround on the ticket, but it would be good to have others post their configuration to help fix the issue.
Downgrading containerd.io to 1.6.6 works as long as you specify no-cgroups = true in /etc/nvidia-container-runtime/config.toml and pass the devices to docker run explicitly, like docker run --gpus all --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidia-modeset:/dev/nvidia-modeset --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools --device /dev/nvidiactl:/dev/nvidiactl --rm -it nvidia/cuda:11.4.2-base-ubuntu18.04 bash
So run sudo apt-get install -y --allow-downgrades containerd.io=1.6.6-1 and sudo apt-mark hold containerd.io to prevent the package from being updated. Then edit the config file and pass all of the /dev/nvidia* devices to docker run.
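Putting the pieces together, one possible order of operations (the config-file section name is my assumption about the stock layout, which can differ between toolkit versions):
sudo apt-get install -y --allow-downgrades containerd.io=1.6.6-1
sudo apt-mark hold containerd.io
sudoedit /etc/nvidia-container-runtime/config.toml   # set no-cgroups = true under [nvidia-container-cli]
sudo systemctl restart docker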
For the Failed to initialize NVML: Driver/library version mismatch issue, that is caused by the drivers having been updated without a reboot yet. If this is a production machine, I would also hold the driver package to stop it from auto-updating as well. You should be able to figure out the package name from something like sudo dpkg --get-selections "*nvidia*"
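For example (the driver package name below is only illustrative; use whatever the dpkg query reports, e.g. nvidia-driver-510 for the 510.x series):
sudo dpkg --get-selections "*nvidia*"
sudo apt-mark hold nvidia-driver-510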

NVIDIA Docker - initialization error: nvml error: driver not loaded

I'm a complete newcomer to Docker, so the following questions might be a bit naive, but I'm stuck and I need help.
I'm trying to reproduce some results in research. The authors just released code along with a specification of how to build a Docker image to reproduce their results. The relevant bit is copied below:
I believe I installed Docker correctly:
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
However, when I try checking that my nvidia-docker installation was successful, I get the following error:
$ sudo docker run --gpus all --rm nvidia/cuda:10.1-base nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded\\\\n\\\"\"": unknown.
It looks like the key error is:
nvidia-container-cli: initialization error: nvml error: driver not loaded
I don't have a GPU locally and I'm finding conflicting information on whether CUDA needs to be installed before NVIDIA Docker. For instance, this NVIDIA moderator says "A proper nvidia docker plugin installation starts with a proper CUDA install on the base machine."
My questions are the following:
Can I install NVIDIA Docker without having CUDA installed?
If so, what is the source of this error and how do I fix it?
If not, how do I create this Docker image to reproduce the results?
Can I install NVIDIA Docker without having CUDA installed?
Yes, you can. The readme states that nvidia-docker only requires the NVIDIA GPU driver and the Docker engine to be installed:
Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed
If so, what is the source of this error and how do I fix it?
That's either because you don't have a GPU locally, or it's not an NVIDIA one, or you messed up somewhere when installing the drivers. If you have a CUDA-capable GPU, I recommend using the NVIDIA guide to install drivers. If you don't have a GPU locally, you can still build an image with CUDA and then move it to a machine that has a GPU.
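As a quick sanity check on a machine that is supposed to have a GPU (my addition, not part of the linked guide), both of the following should succeed on the host before nvidia-docker can work; if they fail, the nvml error: driver not loaded message is expected:
nvidia-smi
lsmod | grep nvidia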
If not, how do I create this Docker image to reproduce the results?
The problem is that even if you manage to get rid of CUDA in the Docker image, there is software that requires it. In this case fixing the Dockerfile seems unnecessary to me - you can just ignore Docker and start fixing the code to run it on the CPU.
I think you need
ENV NVIDIA_VISIBLE_DEVICES=void
then
RUN your work
finally
ENV NVIDIA_VISIBLE_DEVICES=all
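A minimal sketch of how that pattern sits in a Dockerfile (the base image and the build step are placeholders I chose, not from the answer):
FROM nvidia/cuda:10.1-base
# hide GPUs from the NVIDIA runtime while build-time steps run
ENV NVIDIA_VISIBLE_DEVICES=void
# ...GPU-free build steps go here, for example:
RUN apt-get update && apt-get install -y python3
# re-expose all GPUs for containers started from the image
ENV NVIDIA_VISIBLE_DEVICES=all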

Raspberry Pi 4B... Issues with docker not running because of aufs

To begin with, I bought a Raspberry Pi 4B. I installed BerryBoot on the SD card and connected a 480 GB SSD. I managed to get it working with a TV as a monitor (hdmi_group=1, etc.).
I then set up SSH with my MacBook so I can access my Pi with my laptop or use the Raspbian desktop.
I followed a 4-month-old YouTube video (relatively new).
https://www.youtube.com/watch?v=tx-Hfq5Hc6E
I used the commands as follows:
//install docker
#curl -sSL https://get.docker.com/ |sh
//Set up so root isn't used as user - recommended
#sudo usermod -aG docker pi
//https://hub.docker.com/r/michaelmiklis/rpi-monitor (last updated 2 years ago)...
//used quick start command for project (this worked for the video just 4 months ago)
#docker run --device=/dev/vchiq --device=/dev/vcsm --volume=/opt/vc:/opt/vc --volume=/boot:/boot --volume=/sys:/dockerhost/sys:ro --volume=/etc:/dockerhost/etc:ro --volume=/proc:/dockerhost/proc:ro --volume=/usr/lib:/dockerhost/usr/lib:ro -p=8888:8888 --name="rpi-monitor" -d michaelmiklis/rpi-monitor:latest
This command produces an error response:
#docker: Error response from daemon: error creating aufs mount to /var/lib/docker/aufs/mnt/b46dae086de8d3cf89a07e9079fb311d3989768f482418fbfdcf3c886aa5f516-init: mount target=/var/lib/docker/aufs/mnt/b46dae086de8d3cf89a07e9079fb311d3989768f482418fbfdcf3c886aa5f516-init data=br:/var/lib/docker/aufs/diff/b46dae086de8d3cf89a07e9079fb311d3989768f482418fbfdcf3c886aa5f516-init=rw:/var/lib/docker/aufs/diff/987d36feb1d5eda275ebd7191449f82db9921b8ea21d9f4d1089529b5bdcf30f=ro+wh:/var/lib/docker/aufs/diff/19ff179d7ce8587e1174b2cbba701a46910d7a5a4dbc62e9c58498cff5eb9bba=ro+wh:/var/lib/docker/aufs/diff/13be75a7cabe4da1b971b1fb8b25b6bc6fdb6ab13456790dfc25da23e97b9aa0=ro+wh:/var/lib/docker/aufs/diff/8b2d2a79ba8e0f4235d3ce623f6cda8486f7293af60abe24623f9fd70b2f7613=ro+wh:/var/lib/docker/aufs/diff/776dcea3b43ecd880a459ec7182305eea7d9779dab0fe45b69d3558a5401fc7b=ro+wh:/var/lib/docker/aufs/diff/a3f67653f5d5507a1ca3a4627563bd916be979ae1593dda8d0073f1f152b01a3=ro+wh:/var/lib/docker/aufs/diff/3176f9f6ed8047e376bacf9ee42370158329d66a8cfe7afd7e1a4d65d5022698=ro+wh:/var/lib/docker/aufs/diff/7427fa9add426dee44acf434935d13c6fddaac36ad222b12037b8a0a8529c222=ro+wh:/var/lib/docker/aufs/diff/27ffbeb11ca61262f65d85dfd748190b9b64223112e7faaf5f7f69eb47ada066=ro+wh:/var/lib/docker/aufs/diff/6781db1c1025f9d601bb0c4dd54df180c8fbda8d0632cc77835574b74a3f7179=ro+wh:/var/lib/docker/aufs/diff/1fa0fdfb5625eb000d3f6424b84dd8d9c19fb3ce721789277a72f2588f547659=ro+wh:/var/lib/docker/aufs/diff/a3da4ba9bfe80df4efc955929d4fa0b63d0dbf922b483f9d582507adf38e0192=ro+wh:/var/lib/docker/aufs/diff/b027240667462a4ff2e540b0cbdbbd78fdf3000c1ecdae734efc0ef794bc90f0=ro+wh:/var/lib/docker/aufs/diff/7aea6d1beaa5a57f1eac445759c1acfa563b9a8c87ce592d61797caa92547f38=ro+wh:/var/lib/docker/aufs/diff/e3ff1cda15a224b9c3af5a1a772f3373635abd031c9a4546f157b4550b7df245=ro+wh:/var/lib/docker/aufs/diff/e751ee6e4e942642958f1d9459e5b707d3970b53d70e5ae1986a70cbd2973435=ro+wh,dio,xino=/dev/shm/aufs.xino: invalid argument.
See 'docker run --help'.
I have read about my issue, except in regards to the Raspberry Pi, and haven't been able to work out a solution. Please help!
BTW, I have tried uninstalling Docker and installing it again (sudo apt remove docker-ce). If it is a storage-driver issue, I don't know how to tell Docker not to use aufs but to only use "overlay2".
Remember, my Pi boots from SD but runs Raspbian OS on my SSD (there might be an issue there with Docker). I very much appreciate any help.
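For what it's worth, the storage driver can normally be pinned in /etc/docker/daemon.json; a minimal sketch, assuming a default Docker install with no existing daemon.json (back it up first if one exists):
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
If Docker then refuses to start, journalctl -u docker will show whether the kernel is missing overlay2 support.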
