I have implemented a Docker native health check by adding a HEALTHCHECK instruction in the Dockerfile, as shown below:
HEALTHCHECK --interval=60s --timeout=15s --retries=3 CMD ["/svc/app/healthcheck/healthCheck.sh"]
I set the entry point for the container:
CMD [".././run.sh"]
I am executing the docker run command as shown below:
docker run -d --net=host --pid=host --publish-all=true -p 7000:7000/udp applicationname:temp
healthCheck.sh exits with 1 when my application is not up, and I can see the container status as unhealthy, but the container is not getting restarted.
STATUS
Up 45 minutes (unhealthy)
Below are the docker and OS details:
[root@localhost log]# docker -v
Docker version 18.09.7, build 2d0083d
OS version
NAME="CentOS Linux"
VERSION="7 (Core)"
How do I restart my container automatically when it becomes unhealthy?
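For reference, the script itself is not shown in the question; a minimal sketch of such a health check (the probed URL and port are assumptions, and it presumes curl is available in the image) could look like:

#!/bin/sh
# Exit 0 when the application responds, 1 otherwise.
# http://localhost:8080/health is a placeholder endpoint.
curl -fsS http://localhost:8080/health > /dev/null 2>&1 || exit 1
exit 0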
Docker only reports the status of the healthcheck. Acting on the healthcheck result requires an extra layer running on top of docker. Swarm mode provides this functionality and is shipped with the docker engine. To enable:
docker swarm init
Then instead of managing individual containers with docker run, you would declare your target state with docker service or docker stack commands and swarm mode will manage the containers to achieve the target state.
docker service create -d --net=host applicationname:temp
Note that host networking and publishing ports are incompatible (they make no logical sense together), net requires two dashes to be a valid flag, and changing the pid namespace is not supported in swarm mode. Many other features should work similarly to docker run.
https://docs.docker.com/engine/reference/commandline/service_create/
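For completeness, the health check baked into the image carries over, but the same parameters can also be set or overridden when the service is created; the service name below is an assumption:

docker service create --name app \
  --health-cmd /svc/app/healthcheck/healthCheck.sh \
  --health-interval 60s \
  --health-timeout 15s \
  --health-retries 3 \
  applicationname:temp

Swarm mode then replaces tasks that fail their health check, which is the restart behavior the question asks for.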
There is currently no auto-restart mechanism for unhealthy containers (see this), but you can use a workaround as mentioned here:
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
It mounts the Docker Unix domain socket into the monitoring container, which then watches all containers and restarts any that become unhealthy.
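If you would rather not monitor every container, the image's documentation describes a label-based opt-in instead of AUTOHEAL_CONTAINER_LABEL=all (sketched here; the application image name is taken from the question above):

# monitor only containers that opt in via a label
docker run -d --name autoheal --restart=always \
  -e AUTOHEAL_CONTAINER_LABEL=autoheal \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal

# opt a container in
docker run -d --label autoheal=true applicationname:temp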
I'm attempting to start a docker container from a docker image using the following command:
docker run -d \
--name mycontainer \
--network my-net \
-p 192.168.0.19:7777:7777/tcp \
-p 192.168.0.19:7777:7777/udp \
-p 192.168.0.19:27015:27015/tcp \
-p 192.168.0.19:27015:27015/udp \
cspringer/myimage
The container starts fine but when I use docker ps to list the running containers I see the following output:
aa3deb723745 cspringer/mycontainer "/bin/sh -c '/ark-de…" 2 seconds ago Up 1 second 127.0.0.1:32769->7777/tcp, 127.0.0.1:32769->7777/udp, 127.0.0.1:32768->27015/tcp, 127.0.0.1:32768->27015/udp mycontainer
I'm currently running Docker Desktop on a Windows 10 Home machine using WSL2 and Debian Linux. 192.168.0.19 is the IP address of my host system.
My questions are:
Why is Docker changing the assigned IP address to the loopback address?
Why is Docker assigning random port numbers?
To make things even a little stranger, I can actually connect to the service in the running container without a problem. Additionally, the first time I created the container I did not experience this issue: I created it last night and stopped the container before the end of the evening. Today, after starting the container with docker start mycontainer, it displayed this output. I then deleted the container and re-created it using the preceding command, but I have not been able to get it to display properly since.
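One way to cross-check what bindings were actually recorded for the container, independent of how docker ps renders them, is docker port and docker inspect:

# bindings as docker sees them
docker port mycontainer
# what was requested at container creation
docker inspect -f '{{json .HostConfig.PortBindings}}' mycontainer
# what is currently in effect
docker inspect -f '{{json .NetworkSettings.Ports}}' mycontainer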
I'm trying to use the docker command inside a container. I use this command to mount /var/run/docker.sock and run the container:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
but when I try to use docker inside the container (gitlab-runner), I get an error:
docker: not found
host:
srw-rw---- 1 root docker 0 Mar 23 15:13 docker.sock
container:
0 srw-rw---- 1 root gitlab-runner 0 Mar 23 15:13 docker.sock
This worked fine before I removed the old container and created a new one; now I'm unable to run docker inside the container. Please help.
You should differentiate between the Docker daemon and the Docker CLI. The first is a service that actually performs all the work: building and running containers. The second is an executable used to send commands to the daemon.
The executable (the docker CLI) is lightweight and by default uses /var/run/docker.sock to access the daemon (there are other transports as well).
When you start your container with -v /var/run/docker.sock:/var/run/docker.sock, you actually share your host's Docker daemon with the docker CLI in the container. Thus, you still need to install the docker CLI inside the container to make use of Docker, but you don't need to set up a daemon inside (which is pretty complicated and requires privileged mode).
Conclusion
Install the docker CLI inside the container, share the socket, and enjoy. But when using the host's Docker daemon, be careful with bind-mounting volumes: the daemon resolves host paths, so it does not see the container's internal file system.
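A minimal sketch of that conclusion, assuming the gitlab-runner image is Debian/Ubuntu-based (the official one is):

# enter the container
docker exec -it gitlab-runner bash
# inside the container: docker.io is the simplest package, though it ships
# more than just the CLI; Docker's docker-ce-cli package is the leaner option
apt-get update && apt-get install -y docker.io
# the CLI now talks to the host daemon through the mounted socket
docker ps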
I am running a single Docker container on two different ports using the command below:
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} -p ${EXTERNAL_PORT_NUMBER_SECOND}:${INTERNAL_PORT_NUMBER_SECOND} --network ${NETWORK} --name ${SERVICE_NAME} --restart always -m 1024M --memory-swap -1 -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
I can see the container is running fine
My question is: how can I see the logs of this Docker container?
Every time I do sudo docker logs database-service -f, I see only the log of the container running on port 9003.
How can I view the logs of the container running on port 9113?
You are getting all the logs that were written to stdout or stderr inside the container.
This has nothing to do with the processes being exposed on different ports.
If two processes are running inside the container and both write their logs to the system console, then you will get both logs in the output of the docker logs command for the container.
You can try the multitail utility to tail more than one log file via docker exec.
For that you have to install it in the container, as sketched below.
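A sketch of that, assuming a Debian/Ubuntu-based image; the log file paths are placeholders:

docker exec -it database-service sh -c 'apt-get update && apt-get install -y multitail'
docker exec -it database-service multitail /var/log/service_9003.log /var/log/service_9113.log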
You can also bind-mount host paths over the container's service log paths and read the logs on the host:
docker run -v 'path_to_your_host_logs':'container_service_log_path'
docker run -v '/home/user/app/apache_access.log':'/var/log/apache_access.log'
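Put together, a hypothetical complete example (the image name and paths are placeholders):

docker run -d --name database-service \
  -v /home/user/app/logs:/var/log/myservice \
  myorg/database-service:latest

# then follow a file on the host:
tail -f /home/user/app/logs/access.log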
On my host machine, I have installed Docker and pulled a Jenkins image.
I want to run that image as a daemon service, like the services on my host machine that start automatically after every reboot. And how can I fix the Jenkins port permanently (like 8080) in my Docker setup?
docker run -d --restart always -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
-d: runs the container in the background
--restart always: the container will always restart (unless manually stopped), so it will start automatically at boot
The rest of the arguments are from the jenkins image documentation, you may need to adapt your port mapping and volume path.
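To verify or change the policy later without recreating the container (the container name is an assumption; pass --name jenkins in the run command above, or use the generated name):

# check the restart policy on a running container
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' jenkins
# change the policy in place, no recreation needed
docker update --restart unless-stopped jenkins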
As the title indicates, is it possible to restart the host from a container? I have a docker container running with systemd as described here and started as:
$ docker run --privileged --net host -ti -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image_name>
Once I issue the systemctl reboot command, I see:
# systemctl reboot
[root@dhcp-40-115 /]#
[3]+ Stopped
The host doesn't reboot. However, I see [1915595.016950] systemd-journald[17]: Received SIGTERM from PID 1 (systemd-shutdow). in the host's kernel buffer.
Use case:
I am experimenting with running the restraint test harness in a container, and some of the tests reboot the host; if this is possible to do from a container, the tests can run unchanged.
Update
As I mention in my answer:
There is a detail I missed in my question above which is once I have
systemd running in the container itself, the systemctl reboot is
(roughly saying) connecting to systemd on the container itself which
is not what I want.
The accepted answer has the advantage that it does not depend on the host and the container distros having compatible versions of systemd. However, on a setup where they do, my answer is, I think, the more acceptable one, since you can just use the usual reboot command.
Other init systems, such as upstart, are untested.
I was able to send sysrq commands to the host by mounting /proc/sysrq-trigger as a volume.
This rebooted the host.
docker-server# docker run -i -t -v /proc/sysrq-trigger:/sysrq centos bash
docker-container# echo b > /sysrq
You can set a bitmask permission on /proc/sys/kernel/sysrq on the host to allow only, e.g., syncing the disks and rebooting. There is more information about this at http://en.wikipedia.org/wiki/Magic_SysRq_key, but something like this (untested) should set those permissions:
echo 144 > /proc/sys/kernel/sysrq
Also remember to add kernel.sysrq = 144 to /etc/sysctl.conf so the setting persists across reboots.
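Putting both steps together (untested, as noted above):

echo 144 > /proc/sys/kernel/sysrq             # 16 (sync) + 128 (reboot/poweroff)
echo 'kernel.sysrq = 144' >> /etc/sysctl.conf # persist across reboots
sysctl -p                                     # apply the persisted setting now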
There is a detail I missed in my question above which is once I have systemd running in the container itself, the systemctl reboot is (roughly saying) connecting to systemd on the container itself which is not what I want.
On a hint from a colleague, here is what I did on a "stock" Fedora image (nothing special in it):
$ docker run -ti -v /run/systemd:/run/systemd fedora /bin/bash
Then in the container:
bash-4.2# systemctl status docker
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: active (running) since Tue 2014-07-01 04:57:22 UTC; 2 weeks 0 days ago
Docs: http://docs.docker.io
Main PID: 2589
CGroup: /system.slice/docker.service
Here, the container is able to access systemd on the host. Then, issuing a reboot command actually reboots the host:
bash-4.2# reboot
Thus, it is possible to reboot the host from the container.
The point to note here is that the host is running Fedora 20, and so is the container. Generally speaking, if the host and the container run distros that don't use systemd, or run incompatible versions of systemd, this will not work.
Adding to user59634's answer:
-v /run/systemd:/run/systemd works on Fedora 27 and Ubuntu 16, but the only socket you need is:
docker run -it --rm -v /run/systemd/private:/run/systemd/private fedora reboot
You can also use /run/dbus, but I like this systemd method more. I do not fully understand how much power this gives the container; I suspect it is enough to take over your host. So I would only suggest using it in a container that you wrote, and then communicating with any other container; see here.
Unrelated similar information
Sleeping/suspending/hibernating can be done with only -v /sys/power/state:/sys/power/state, using /lib/systemd/systemd-sleep suspend, for example. If you know how to, you can also echo a string directly to /sys/power/state, for example echo mem > /sys/power/state; see here for more explanation of the different options listed by cat /sys/power/state.
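A sketch of the suspend variant, combining the mount and the echo described above:

docker run -it --rm -v /sys/power/state:/sys/power/state fedora \
  sh -c 'echo mem > /sys/power/state'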
docker run -d --name network_monitor --net host --restart always --privileged --security-opt apparmor=unconfined --cap-add=SYS_ADMIN \
-v /proc:/proc \
$IMAGE_URI
The Docker container must be granted enough permissions to mount /proc.