How can docker-runc be used inside a container?

What is necessary to enable docker-runc to work inside a container?
The following command will list the plugins, but with PID = 0 and status = stopped. Are additional volumes needed?
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /run/docker/plugins/runtime-root/plugins.moby:/run/docker/plugins/runtime-root/plugins.moby \
  docker docker-runc --root /run/docker/plugins/runtime-root/plugins.moby list
Result:
ID PID STATUS BUNDLE CREATED OWNER
abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200 0 stopped /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/plugins.moby/abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200 2018-11-01T16:26:26.605625462Z root
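One possible explanation (an assumption, not confirmed here): runc decides whether a container is running by probing its saved init PID under /proc, and inside the container's own PID namespace the host's PIDs are invisible, so every entry reports PID 0 and status stopped. The systemctl answer further down solves the same class of problem with --pid=host, so a sketch worth trying is:
# sketch: share the host PID namespace so docker-runc can see the container processes
docker run --pid=host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /run/docker/plugins/runtime-root/plugins.moby:/run/docker/plugins/runtime-root/plugins.moby \
  docker docker-runc --root /run/docker/plugins/runtime-root/plugins.moby list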

Related

Is there a way to know the status of a systemctl process running in the host from a docker container (base image python:3.9)?

I am trying to get the status of a systemd service on the host from inside the Docker container.
I tried volume-mounting the necessary unit files and running the container in privileged mode, but I still get an error saying the service is inactive while on the host it is active.
docker run --privileged -d -v /etc/systemd/system/:/etc/systemd/system/ -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /usr/lib/systemd/system/:/usr/lib/systemd/system/ test-docker:latest
Is there a way to achieve this, or an equivalent way of doing it?
The way systemctl communicates with systemd to get service status is through a D-Bus socket hosted in /run/systemd. So we can try this:
docker run --privileged -v /run/systemd:/run/systemd -v /sys/fs/cgroup:/sys/fs/cgroup test-docker:latest
But when we run systemctl status inside the container, we get:
[root@1ed9d836a142 /]# systemctl status
Failed to connect to bus: No data available
It turns out that for this to work, systemctl expects systemd to be pid 1, but inside the container, pid 1 is something else. We can resolve this by running the container in the host PID namespace:
docker run --privileged -v /run/systemd:/run/systemd --pid=host test-docker:latest
And now we're able to successfully communicate with systemd:
[root@7dccc711a471 /]# systemctl status | head
● 7dccc711a471
State: degraded
Jobs: 0 queued
Failed: 2 units
Since: Sun 2022-06-26 03:11:44 UTC; 5 days ago
CGroup: /
├─kubepods
│ ├─burstable
│ │ ├─pod23889a3e-0bc3-4862-b07c-d5fc9ea1626c
│ │ │ ├─1b9508f0ab454e2d39cdc32ef3e35d25feb201923d78ba5f9bc2a2176ddd448a
...
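With the D-Bus socket mounted and the host PID namespace shared, querying an individual unit works the same way. A minimal sketch, assuming a hypothetical unit named sshd:
# hypothetical one-off query; the unit name is a placeholder
docker run --rm --privileged -v /run/systemd:/run/systemd --pid=host \
  test-docker:latest systemctl is-active sshd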

How to run the crictl command as a non-root user

How can I run crictl as a non-root user?
My docker commands work as a non-root user because my user is added to the docker group.
id
uid=1002(kube) gid=100(users) groups=100(users),10(wheel),1001(dockerroot),1002(docker)
I am running the dockerd daemon, which uses containerd and runc as its runtime.
I installed the crictl binary and pointed it at the existing dockershim socket with the config file below:
cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
timeout: 2
debug: false
pull-image-on-create: false
crictl works fine with sudo, but without sudo it fails like this:
[user#hostname~]$ crictl ps
FATA[0002] connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded
I also tried changing the group of dockershim.sock from 'root' to 'docker', just like docker.sock, but the result is the same.
srwxr-xr-x 1 root docker 0 Jan 2 23:36 /var/run/dockershim.sock
srw-rw---- 1 root docker 0 Jan 2 23:33 /var/run/docker.sock
sudo usermod -aG docker $USER
or see the Docker post-installation steps for Linux.
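Note the socket permissions in the listing above: docker.sock grants its group write access (srw-rw----), but dockershim.sock does not (srwxr-xr-x), and connecting to a Unix socket requires write permission. So changing the group alone is not enough; a sketch of the likely missing step (my reading of the listing, not part of the original answer):
# grant the docker group write access to the dockershim socket
sudo chmod g+w /var/run/dockershim.sock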

Docker device-cgroups-rule, mknod and mount

I'm attempting to implement what is described here:
https://docs.docker.com/engine/reference/commandline/create/#dealing-with-dynamically-created-devices---device-cgroup-rule
Similar to the page I am creating (and then starting) a container as follows:
docker create --device-cgroup-rule='b 8:* rmw' --name my-container my-image
Quoting from the above page:
Then, a user could ask udev to execute a script that would docker exec
my-container mknod newDevX c 42 the required device when it is
added.
Within the container (docker exec -it my-container sh) I then mknod a device:
mknod /dev/sdc1 b 8 33
The device is reported by lsblk as:
sdc 8:32 1 500M 0 disk
└─sdc1 8:33 1 500M 0 part
mknod succeeds but mounting /dev/sdc1 gives an error:
$ mount /dev/sdc1 /mnt
mount: /mnt: permission denied.
I also tried various other things, like:
mknod with -m
docker start with --cap-add=CAP_MKNOD
EDIT:
I also tried starting with --privileged but without /dev/sdc1 pre-created, and it worked. It must have something to do with capabilities or other differences between privileged and non-privileged mode. I tried with --cap-add=CAP_MKNOD and CAP_SYS_ADMIN, but it now reports a different message:
$ mount /dev/sdc1 /mnt
mount: /mnt: cannot mount /dev/sdc1 read-only.
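One avenue worth checking (an assumption, not from the thread): besides capabilities, Docker's default seccomp profile and, on AppArmor distros, the default container profile also restrict mount(2), and --privileged disables all of these at once. A diagnostic sketch to narrow down which mechanism blocks the mount:
# hypothetical diagnostic: relax seccomp and AppArmor alongside the capabilities
docker run --cap-add=CAP_MKNOD --cap-add=CAP_SYS_ADMIN \
  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
  --device-cgroup-rule='b 8:* rmw' --name my-container my-image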

docker - driver "devicemapper" failed to remove root filesystem after process in container killed

I am using Docker version 17.06.0-ce on Red Hat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason, and I get the following error message:
/bin/bash: line 1: 40 Killed python -u scripts/server.py start go
I would like the container to exit and be restarted by Docker. However, the container never exits. If I remove it manually I get the following error:
Error response from daemon: driver "devicemapper" failed to remove root filesystem.
After googling, I tried a bunch of things:
docker rm -f <container>
rm -f <path to mount>
umount <path to mount>
All of them fail with "device is busy". The only remedy right now is to reboot the host system, which is obviously not a long-term solution.
Any ideas?
I had the same problem and the solution was a real surprise.
So here is the error on docker rm:
$ docker rm 08d51aad0e74
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy
Then I did the following (basically, go through all processes and look for the docker mount in their mountinfo):
$ grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
/proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota
This got me the PID of the offending process keeping the mount busy: 20416 (the path component after /proc/).
So I ran ps -p on it and, to my surprise, found:
[devops@dp01app5030 SeGrid]$ ps -p 20416
PID TTY TIME CMD
20416 ? 00:00:19 ntpd
A true WTF moment. Some more googling turned up this issue: https://github.com/docker/for-linux/issues/124
Turns out I had to restart the ntp daemon, and that fixed the issue!
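Presumably ntpd had captured the container's mount in its own mount namespace, and restarting it released that reference (my reading of the linked issue; the answer doesn't spell it out). Assuming ntpd is managed by systemd, the cleanup sequence looks like:
# restart the process holding the stale mount reference, then retry the removal
sudo systemctl restart ntpd
docker rm 08d51aad0e74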

lxc-kill: failed to get the init pid

I have a problem: it seems that the container is already stopped, since pinging the container's IP gets no answer.
lxc-info indicates that the process is STOPPED:
[root@matrix-node04 mnt]# lxc-info -n f3a939113d6e12450829a2dc76be3c761b818e63fbd33df513772e6e4485565e
state: STOPPED
pid: -1
But docker ps indicates the container is still running:
[root@matrix-node04 mnt]# docker ps | grep f3a939113d6e
f3a939113d6e c69436ea2169 /bin/sh -c '/usr/loc 4 weeks ago Up 2 weeks d-mcl-354_lisx_test_kr22-n-3
Can I use lxc-start to manually start the container? I tried the following command:
[root@matrix-node04 mnt]# lxc-start -n f3a939113d6e12450829a2dc76be3c761b818e63fbd33df513772e6e4485565e -f /srv/docker/containers/f3a939113d6e12450829a2dc76be3c761b818e63fbd33df513772e6e4485565e/config.lxc
lxc-start: No such file or directory - failed to get real path for '/srv/docker/devicemapper/mnt/f3a939113d6e12450829a2dc76be3c761b818e63fbd33df513772e6e4485565e/rootfs'
lxc-start: failed to pin the container's rootfs
lxc-start: failed to spawn 'f3a939113d6e12450829a2dc76be3c761b818e63fbd33df513772e6e4485565e'
Has anyone run into this?
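The lxc-start error suggests the container's devicemapper rootfs is no longer mounted. A first diagnostic step (a guess based on the error text, not an answer from the thread):
# hypothetical check: is the container's rootfs still mounted anywhere?
mount | grep f3a939113d6e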
