Health check command for a Docker (1.12) container (not in the Dockerfile!)

Docker version 1.12.
I got a Dockerfile from here:
FROM nginx:latest
RUN touch /marker
ADD ./check_running.sh /check_running.sh
RUN chmod +x /check_running.sh
HEALTHCHECK --interval=5s --timeout=3s CMD ./check_running.sh
I'm able to roll updates and run health checks with the check_running.sh shell script. Here the check_running.sh script is copied into the image, so the launched container has it.
Now, my question: is there any way to run the health check from outside the container, with the script also located outside?
I'm expecting a health check command that reports the container's performance (depending on what we write in the script). If the container is not performing well, it should roll back to the previous version (a kind of process that monitors the containers and, if one is unhealthy, rolls it back to the previous version).
Thanks

is there any way to run the health check from outside the container, with the script also located outside?
a kind of process that monitors the containers and, if one is unhealthy, rolls it back to the previous version
You have several options:
From outside, you run a process inside the container to check its health with docker exec. This could be any sequence of shell commands. If you want to keep your scripts outside of the container, you can pipe one in, e.g. cat script.sh | docker exec -i container sh -s (note -i without -t, since stdin is piped rather than a terminal).
You check the container's health from outside it, e.g. by looking for a process that should be running inside the container (try setting a security profile and using ps -Zax, or try looking for children of the daemon), or you give each container a specific user ID with --user 12345 and look for processes with that UID, or you connect to its services. You'd have to make sure you're checking the right container. You can also access a container's filesystem below /var/lib/docker/devicemapper/mnt/<hash>/rootfs (with the devicemapper storage driver).
You run a HEALTHCHECK inside the container and check its health with docker inspect --format='{{json .State.Health.Status}}' <containername>, combined with e.g. a line in the Dockerfile such as:
HEALTHCHECK CMD wget -q --spider http://some.host
to check that the container has internet access.
I'd recommend option 3, because it's likely to be more compatible with other tools in the future.
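For the roll-back part of the question, here is a minimal sketch of an external monitor built on option 3. It is only an illustration: mycontainer and myimage:previous are hypothetical names, and a real supervisor would need more error handling.
#!/bin/sh
# Hedged sketch: poll the built-in health status and roll back when unhealthy.
while true; do
  status=$(docker inspect --format='{{.State.Health.Status}}' mycontainer)
  if [ "$status" = "unhealthy" ]; then
    docker rm -f mycontainer
    docker run -d --name mycontainer myimage:previous   # roll back to the previous tag
  fi
  sleep 5
done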

Just got a comment on a blog! It referred to the HEALTHCHECK section of the Docker documentation: there are health-check options on the docker run command that override the Dockerfile defaults. I haven't verified it yet, but it looks like a good way to get what I want. I will check and update the answer!
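For reference, those run-time flags (available since Docker 1.12) look like this; mynginx stands for an image built from the Dockerfile above. Note that even with these flags the check command still runs inside the container, so the script must exist in the image.
# Hedged sketch: overriding the Dockerfile HEALTHCHECK at run time.
docker run -d --name web \
  --health-cmd='/check_running.sh' \
  --health-interval=10s \
  --health-timeout=3s \
  --health-retries=3 \
  mynginx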

The docker inspect command lets you view the output of the health check commands, whether they succeed or fail:
docker inspect --format='{{json .State.Health}}' your-container-name

That's not available with the Dockerfile HEALTHCHECK option; all checks run inside the container. To me, this is a good thing, since it avoids potentially untrusted code running directly on the host and lets you include the health check's dependencies inside your container.
If you need to monitor your container from outside, you'll need another tool or monitoring application; there are quite a few of them out there.

You can view the results of the health check by running docker inspect on the container.
Another approach, depending on your application, is to expose a /healthz endpoint that the HEALTHCHECK also probes; this way it can be queried externally or internally as needed.
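A minimal sketch of that approach, assuming the application serves /healthz on port 8080 and the image contains curl (both assumptions):
# In the Dockerfile: probe the same endpoint that external monitors can hit.
HEALTHCHECK --interval=5s --timeout=3s CMD curl -fsS http://localhost:8080/healthz || exit 1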

Related

Is there a point in Docker start?

So, is there a point in the command start, as in docker start -i alpineContainer?
If I do this, I can't really do anything with the Alpine inside the container; I would have to do a run and create another container with the -it flags and sh after (or /bin/bash, I don't remember it correctly right now).
Is that how it will go most of the time? Delete and rebuild containers, and use -it if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the CMD?
I'm new to Docker in general and trying to understand the basics of how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the Jenkins docker image: https://hub.docker.com/_/jenkins.
This will create a container without you having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
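A minimal sketch of that workflow, using the stock nginx image as an example:
docker run -d --name web nginx    # runs detached, no terminal attached
docker logs web                   # peek at its stdout without attaching
docker exec -it web bash          # open an interactive shell inside it when needed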
You almost never use docker start. It's really only usable in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.) Both cases are sketched below.
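A sketch of both cases (myimage is a hypothetical image):
docker create --name job myimage   # create the container without starting it
docker start -a job                # start it, attaching to its output like docker run
docker stop job                    # stop it...
docker start job                   # ...and docker start runs the same process again
docker rm job                      # more typically, remove it once stopped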
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.

How to determine programmatically that a process inside a docker container has run to completion

I set up a docker container as my compile environment. Now I just want to know how to determine whether the entry command has run to completion, so that I can do the next thing...
I'm not really sure if that is what you are asking about, but when the command pointed to by CMD or ENTRYPOINT ends, the docker container will stop.
To list running containers use:
docker ps
To list all containers (including stopped) use:
docker ps -a
You can take a look inside the container with docker logs.
To use this, your code should output something that indicates it ran successfully, e.g. for Python via the logging library (or more simply: print statements).
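If you need to block until the container finishes and branch on the result, docker wait is handy: it blocks until the container stops and then prints its exit code. A sketch, with mycompiler as a hypothetical image name:
docker run -d --name build mycompiler      # start the compile job detached
status=$(docker wait build)                # blocks until the container exits
if [ "$status" -eq 0 ]; then
  echo "compile finished successfully"     # now do the next thing
fi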

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
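In concrete terms, that suggestion could look like this (an untested sketch, reusing the image name from the question):
docker run -d --privileged --name mariadb pbellamk/mariadb /sbin/init
docker exec -it mariadb sh      # then e.g. check the mariadb service from inside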
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in one container don't map to what Docker or external tools expect. Features like service scaling, or Docker swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
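As a rough illustration of the Compose approach, a sketch of a docker-compose.yml using the prebuilt images mentioned above (the port mapping and password value are placeholders for the sketch):
version: "3"
services:
  web:
    image: httpd
    ports:
      - "8080:80"
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder for the sketch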
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process-management system using s6.
It's fairly rare that you actually need ssh access into a container, but if that's a hard requirement, then you are going to be stuck building your own containers and using a process manager.
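A sketch of the s6-overlay approach in a Dockerfile; the release version here is an assumption, so check the project's releases page for a current one:
FROM alpine
# Install an s6-overlay release (version is an assumption)
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.3/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /
ENTRYPOINT ["/init"]   # s6 becomes PID 1 and supervises your services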
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script; that is another benefit of using the docker-systemctl-replacement script.
Its systemctl.py can parse the normal *.service files to know how to start and stop services. If you register it as the CMD of an image, it will find all the systemctl-enabled services, and they will be started and stopped in the correct order.
The current testsuite includes testcases for the LAMP stack including centos, so it should run fine specifically in your setup.
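Adapted to the Dockerfile from the question, a sketch of that setup might look like this (systemctl.py comes from the docker-systemctl-replacement project; treat the details as assumptions and check its README):
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY systemctl.py /usr/bin/systemctl     # the replacement script, not real systemd
RUN systemctl enable mariadb
CMD ["/usr/bin/systemctl"]               # starts all enabled services in order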
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image but with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer; the test should build the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.

Using docker swarm to execute singular containers rather than "services"

I really enjoy the concept of having a cluster of docker machines available to execute docker services. I also like the additional features not available to singular docker containers (such as docker secret).
But I really have no need for long-running services. My use case is simply to have a bash script hand the docker swarm an arbitrary number of finite commands, executing each as a docker container on the same docker image, while using the secrets loaded into the swarm with docker secret.
Can I do this?
I do not want this container to be "long running". I want it to run and then exit with its output when the bash script loaded into the container finishes.
You can apply the ideas presented in "One-shot containers on Docker Swarm" by Alex Ellis.
You still need to create a service, but with the right restart policy.
For instance, for a quick one-shot job:
docker service create --restart-condition=none --name crawler1 -e url=http://blog.alexellis.io -d crawl_site alexellis2/href-counter
(--restart-condition, not --restart-policy, as commented by ethergeist)
So by setting the restart condition to none, the container will be scheduled somewhere in the swarm as a task. The container will execute, and then, when done, it will exit.
If the container fails to start for a valid reason then the restart policy will mean the application code never executes. It would also be ideal if we could immediately return the exit code (if non-zero) and the accompanying log output, too.
For the last part, use his tool: alexellis/jaas.
Run your first one-shot container:
# jaas -rm -image alexellis2/cows:latest
The -rm flag removes the Swarm service that was used to run your container.
The exit code from your container will also be available; you can check it with echo $?.
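Without jaas, a hand-rolled version of the same idea looks roughly like this (the service name and command are arbitrary):
docker service create --restart-condition=none --name task1 alpine sh -c 'date'
docker service ps task1      # watch the task go from Running to Complete
docker service logs task1    # needs Docker 17.06+ (earlier only with experimental mode)
docker service rm task1      # one-shot services still have to be removed manually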

How can I force-remove a docker container using the 'docker_container' module of Ansible?

I am having problems removing containers from my docker host after introducing cAdvisor: https://github.com/google/cadvisor/issues/771
I have a large number of Ansible (2.2.1.0) scripts that I use to install my service containers on these docker hosts, and internally they use the docker_container module. Many times these scripts need to remove a container, but because of the problem stated above, they fail.
I can force-kill a docker container on these docker hosts easily:
docker rm container_name -fv
So I would expect that the force_kill option provided by the docker_container module (docker_container module docs) would be able to do the same:
- name: Delete docker containers
  docker_container:
    name: "container_name"
    force_kill: true
    keep_volumes: false
    state: absent
But the scripts always fail. I do not know what the purpose of this option is if it cannot force-kill the container. All of my scripts fail even with force_kill enabled, and I need to know how to make this option work as intended.
Update:
I now understand that the force_kill option sends the docker kill signal. I have updated the question to better reflect what I really want to achieve.
I think force_kill sends a kill signal, as docker kill does, so:
The main process inside the container will be sent SIGKILL, or any signal specified with option --signal.
I would check the CMD and/or ENTRYPOINT in the Dockerfile of the image (if you built it yourself), because:
ENTRYPOINT and CMD in the shell form run as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable is not the container's PID 1 and does not receive Unix signals.
CMD ["/init.sh"] # exec form
CMD /init.sh # shell form
I'm not sure this is the cause of your problem, but it may be that a CMD in shell form doesn't forward the signal to your command.
As Dominique Burton's blog puts it:
Always use the exec form if you want Docker to forward signals to your (sub-)process. This is not only important for sending custom signals to a Docker container, it's also important to properly stop (i.e. docker stop) a container.
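A quick way to check which form an image effectively uses is to look at PID 1 inside a running container. A sketch, assuming the image ships a ps binary and with myimage as a hypothetical name:
docker run -d --name sig-test myimage
docker exec sig-test ps -ef    # PID 1 should be your process, not "sh -c ..."
docker rm -f sig-test          # roughly what force_kill maps to: SIGKILL, then remove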
