As the title indicates, is it possible to restart the host from a container? I have a docker container running with systemd as described here and started as:
$ docker run --privileged --net host -ti -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image_name>
Once I issue the systemctl reboot command, I see:
# systemctl reboot
[root@dhcp-40-115 /]#
[3]+ Stopped
The host doesn't reboot. However, I see [1915595.016950] systemd-journald[17]: Received SIGTERM from PID 1 (systemd-shutdow). in the host's kernel ring buffer.
Use case:
I am experimenting with running the restraint test harness in a container. Some of the tests reboot the host, so if this is possible to do from a container, the tests can run unchanged.
Update
As I mention in my answer:
There is a detail I missed in my question above which is once I have
systemd running in the container itself, the systemctl reboot is
(roughly saying) connecting to systemd on the container itself which
is not what I want.
The accepted answer has the advantage that it does not depend on the host and the container distro having compatible versions of systemd. However, on a setup where they do, my answer is, I think, the more acceptable one, since you can just use the usual reboot command.
Other init systems, such as upstart, are untested.
I was able to send sysrq commands to the host by mounting /proc/sysrq-trigger as a volume.
This rebooted the host:
docker-server# docker run -i -t -v /proc/sysrq-trigger:/sysrq centos bash
docker-container# echo b > /sysrq
You can set a bit-mask permission on /proc/sys/kernel/sysrq on the host to allow only, e.g., syncing the disks and rebooting. More information about this is at http://en.wikipedia.org/wiki/Magic_SysRq_key, but something like this (untested) should set those permissions:
echo 144 > /proc/sys/kernel/sysrq
Also remember to add kernel.sysrq = 144 to /etc/sysctl.conf to have it persist across reboots.
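For reference, the sysrq value is a bit mask over the kernel's SysRq functions; 144 is the sum of the two bits relevant here (16 = allow sync of the disks, 128 = allow reboot/poweroff), which is exactly the "sync and reboot only" policy described above:

```shell
# /proc/sys/kernel/sysrq is a bit mask; these two bits give "sync + reboot only"
allow_sync=16      # permit SysRq-s (sync all mounted filesystems)
allow_reboot=128   # permit SysRq-b / SysRq-o (reboot / power off)
echo $((allow_sync + allow_reboot))   # prints 144
```

Any other SysRq function (process signalling, debug dumps, etc.) stays disabled with this mask.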
There is a detail I missed in my question above which is once I have systemd running in the container itself, the systemctl reboot is (roughly saying) connecting to systemd on the container itself which is not what I want.
On the hint of a colleague, here is what I did on a "stock" fedora image (nothing special in it):
$ docker run -ti -v /run/systemd:/run/systemd fedora /bin/bash
Then in the container:
bash-4.2# systemctl status docker
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: active (running) since Tue 2014-07-01 04:57:22 UTC; 2 weeks 0 days ago
Docs: http://docs.docker.io
Main PID: 2589
CGroup: /system.slice/docker.service
Here, the container is able to access systemd on the host. Then, issuing a reboot command actually reboots the host:
bash-4.2# reboot
Thus, it is possible to reboot the host from the container.
The point to note here is that the host is running Fedora 20 and so is the container. If the host was a different distro not running systemd, this would not be possible. Generally speaking, if the host and the container are running distros which are not running systemd or incompatible versions of systemd, this will not work.
Adding to user59634's answer:
-v /run/systemd:/run/systemd works on fedora 27 and ubuntu 16
But the only socket you need is
docker run -it --rm -v /run/systemd/private:/run/systemd/private fedora reboot
You can also use /run/dbus, but I like this systemd method more. I do not fully understand how much power this gives the container; I suspect it is enough to take over your host. So I would only suggest using this in a container that you wrote yourself, and then communicating with any other container; see here.
Unrelated similar information
Sleeping/suspending/hibernating can be done with only -v /sys/power/state:/sys/power/state, and then using /lib/systemd/systemd-sleep suspend, for example. If you know how to, you can also echo a string directly to /sys/power/state, for example echo mem > /sys/power/state. See here for more explanation of the different options you get from cat /sys/power/state.
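Putting that together, a minimal invocation might look like this (an untested sketch; it assumes the host kernel supports the mem state, which you can verify with cat /sys/power/state first):

```shell
# Suspend the host from a container: only /sys/power/state is shared,
# so the container can trigger sleep but nothing else.
docker run -it --rm -v /sys/power/state:/sys/power/state fedora \
    sh -c 'echo mem > /sys/power/state'
```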
docker run -d --name network_monitor --net host --restart always --privileged --security-opt apparmor=unconfined --cap-add=SYS_ADMIN \
-v /proc:/proc \
$IMAGE_URI
The docker container must be granted enough permission to mount /proc.
Related
I run into this issue:
when I execute systemctl, I get this error:
[root@eb00fc55c925 yum.repos.d]# systemctl start salt-minion
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
I found the solution: run the container with --privileged=true added:
docker run -tid --name test --privileged=true centos /sbin/init
docker exec -it test /bin/bash
But now, in my container, I have installed some software and configured its network.
If I exit it, the network configuration will be lost.
Is it possible to assign the --privileged=true property to this container without losing its network configuration?
No, you can't do that, because of how privileges are assigned by the operating system.
I have implemented a Docker native health check by adding the HEALTHCHECK command in the Dockerfile as shown below,
HEALTHCHECK --interval=60s --timeout=15s --retries=3 CMD ["/svc/app/healthcheck/healthCheck.sh"]
set the entry point for the container
CMD [".././run.sh"]
executing the docker run command as shown below,
docker run -d --net=host --pid=host --publish-all=true -p 7000:7000/udp applicationname:temp
healthCheck.sh exits with 1 when my application is not up, and I can see the container status as unhealthy, but it is not getting restarted.
STATUS
Up 45 minutes (unhealthy)
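For context, a script like healthCheck.sh typically just probes the service and exits non-zero on failure. This is a hypothetical sketch only (the /health endpoint over HTTP is an assumption here, since the question doesn't show the real script and the app publishes UDP port 7000):

```shell
#!/bin/sh
# Hypothetical healthCheck.sh sketch: exit 0 if the app answers, 1 otherwise.
# Assumes the application also exposes an HTTP health endpoint on port 7000.
if curl -fsS http://localhost:7000/health >/dev/null 2>&1; then
    exit 0   # healthy
else
    exit 1   # after 3 failed retries Docker marks the container "unhealthy"
fi
```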
Below are the docker and OS details:
[root#localhost log]# docker -v
Docker version 18.09.7, build 2d0083d
OS version
NAME="CentOS Linux"
VERSION="7 (Core)"
How to restart my container automatically when it becomes unhealthy?
Docker only reports the status of the healthcheck. Acting on the healthcheck result requires an extra layer running on top of Docker. Swarm mode provides this functionality and ships with the Docker engine. To enable it:
docker swarm init
Then instead of managing individual containers with docker run, you would declare your target state with docker service or docker stack commands and swarm mode will manage the containers to achieve the target state.
docker service create -d --net=host applicationname:temp
Note that host networking and publishing ports are incompatible (they make no logical sense together), that net requires two dashes to be a valid flag, and that changing the pid namespace is not supported in swarm mode. Many other features work similarly to docker run.
https://docs.docker.com/engine/reference/commandline/service_create/
There is currently no auto-restart mechanism for unhealthy containers, see this, but you can use a workaround as mentioned here:
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
It mounts the Docker unix domain socket into the monitoring container, which can then watch all containers and restart any that become unhealthy.
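If you would rather not run a third-party image, the same idea can be sketched as a small watchdog loop on the host (assuming the docker CLI is available there); docker ps has a built-in health filter for this:

```shell
#!/bin/sh
# Minimal watchdog sketch: restart any container whose healthcheck
# currently reports "unhealthy", checking once a minute.
while true; do
    for c in $(docker ps -q --filter health=unhealthy); do
        docker restart "$c"
    done
    sleep 60
done
```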
I'm trying to use the docker command inside a container.
I use this command to mount /var/run/docker.sock and run the container:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
But when I try to use docker inside the container (gitlab-runner), I get an error:
docker: not found
host:
srw-rw---- 1 root docker 0 Mar 23 15:13 docker.sock
container:
0 srw-rw---- 1 root gitlab-runner 0 Mar 23 15:13 docker.sock
This worked fine before I removed the old container and created a new one; now I'm unable to run docker inside the container. Please help.
You should differentiate between the Docker daemon and the Docker CLI. The first one is a service which actually performs all the work: it builds and runs containers. The second one is an executable used to send commands to the daemon.
The executable (docker CLI) is lightweight and uses /var/run/docker.sock to access the daemon (by default; there are actually different transports).
When you start your container with -v /var/run/docker.sock:/var/run/docker.sock, you actually share your host's Docker daemon with the docker CLI in the container. Thus, you still need to install the docker CLI inside the container to make use of Docker, but you don't need to set up the daemon inside (which is pretty complicated and requires privileged mode).
Conclusion
Install the docker CLI inside the container, share the socket, and enjoy. But when using the host's Docker daemon, be careful with bind-mounting volumes: the daemon resolves paths on the host and doesn't see the container's internal file system.
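For example, on a Debian/Ubuntu-based image (which the gitlab-runner image is), installing just the CLI could look like this sketch; the package name is an assumption that varies by distro (docker.io is Debian/Ubuntu's package, docker-ce-cli is Docker's own apt repo package):

```shell
# Inside the container (Debian/Ubuntu base): install only the docker CLI,
# not the daemon -- the daemon stays on the host.
apt-get update && apt-get install -y docker.io

# With /var/run/docker.sock bind-mounted, this now talks to the host's daemon:
docker version
```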
I am using archlinux/base official image from docker hub.
I am trying to use systemctl and it says:
$ docker run --rm -it ac16c4b756ed systemctl start httpd
System has not been booted with systemd as init system (PID 1). Can't operate.
How do I solve this?
If your goal is to run an Apache Web Server (httpd), you should use the httpd image.
Docker containers are generally meant to run a single process. So, you wouldn't normally design a container to run something like systemd as the root process, and then run httpd as a child process. You would just run httpd directly in the foreground. The httpd image does this.
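For example, the following runs Apache directly, with no systemd involved (the host port 8080 is an arbitrary choice):

```shell
# Run the official Apache httpd image; httpd itself is PID 1, in the foreground.
docker run -d --name web -p 8080:80 httpd:2.4
# curl http://localhost:8080/ should then serve the default page
```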
Well, "systemctl" does not do anything by itself; it asks the systemd daemon to perform some task. It usually communicates with it by means of a socket, so the systemd daemon has to be started already. There are some base images which do actually run systemd as PID 1 if needed.
Personally, I would not recommend that, however. If you really need to stick with running systemctl commands, then you can also try to use the docker-systemctl-replacement script on that operating system. It can also serve as the PID 1 of a container.
I am using Docker for Windows 10 for development. Before, I used Docker Toolbox on Windows 8, and I am used to "tuning" the host virtual machine, in this case the MobyLinuxVM.
When I try to connect in Hyper-V Manager, I get a "cannot connect" error. When I try docker-machine ls, I get no Docker machines. How can I access the underlying machine on Docker for Windows 10?
Problems I want to solve are (aka why I want to connect):
Ubuntu apt-get doesn't work for me (I am behind a proxy); I get errors like E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial/universe/source/Sources Cannot initiate the connection to 3128:80 (0.0.12.56). - connect (22: Invalid argument). On the other hand, CentOS yum, curl, ... work. The http_proxy variables are set.
I want to turn off swap on the host.
update
Solved the problem with apt-get by changing the HTTP proxy configuration in the Docker settings from 1.2.3.4:1234 to http://1.2.3.4:1234/.
update 2
Worked around the problem by modifying /etc/init.d/automount on the host and adding swapoff -a.
I was able to access the host MobyLinuxVM through a container run with various privileges.
First I ran the container like this (note the double slash when mounting the root filesystem; a single slash didn't work for me in PowerShell):
$ docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v //:/host alpine sh
After that, once inside the container, I just did:
$ chroot /host
and then I could access all I needed, e.g. /etc/fstab or swapoff -a.