I have been reading here and there online, but the answer is never thoroughly explained. I hope this question, if answered, can provide an updated and thorough explanation of the matter.
Why would someone define a container with the following parameters:
stdin: true
tty: true
Also, if
`docker run -it`
binds the executed container process to the calling client's stdin and tty, what would setting those flags on a container bind its executed process to?
I could only envision one scenario: if the command is, let's say, bash, then you can attach to it (i.e. that running bash instance) later, after the container is running.
But then again, one could just run `docker run -it` when necessary. I mean, one launches a new bash and does whatever needs to be done; there is no need to attach to a running one.
So the first part of the question is:
a) What is happening under the hood?
b) Why and when should one use it, what difference does it make, and what is the added value?
AFAIK, setting stdin: true in the container spec will simply keep the container process's stdin open, waiting for somebody to attach to it with kubectl attach.
As for tty: true - this simply tells Kubernetes that stdin should also be a terminal. Some applications may change their behavior based on the fact that stdin is a terminal, e.g. add some interactivity, command completion, colored output and so on. But in most cases you generally don't need it.
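For reference, a minimal Pod spec using these fields might look like this (the pod name and image are just placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: interactive-demo    # placeholder name
spec:
  containers:
  - name: shell
    image: busybox          # placeholder image
    command: ["sh"]
    stdin: true             # keep stdin open so kubectl attach -i works
    tty: true               # allocate a terminal so the shell behaves interactively
You could then attach to the running shell with kubectl attach -it interactive-demo.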
Btw kubectl exec -it POD bash also contains the flags -it, but in this case they are really needed, because you're spawning a shell process in the container's namespace which expects both stdin and a terminal from the user.
Related
Is there any way to make docker-compose start a service without running the declared command?
Not sure if any such option exists; there is nothing obvious in the flags for docker-compose up. It would be useful for debugging, as presently I have to comment out the command in order to enter a container that otherwise exits on startup.
In this case, there's no command in the Dockerfile, but there's a command in docker-compose.yml.
Based on jonrsharpe's comment, the answer is to use run instead, as it will start the container while letting you override the declared command:
docker-compose run service bash
This makes it possible to enter the container and debug the problem so the real command can run.
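For illustration, a minimal docker-compose.yml this applies to might look like the following (the service name and command are placeholders); docker-compose run starts the service's container but replaces the declared command with whatever you pass:
version: "3"
services:
  service:
    build: .
    command: ./start.sh   # the declared command that exits on startup
If the image's ENTRYPOINT itself is the problem, docker-compose run also accepts an --entrypoint flag to override it.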
I am having problems removing containers from my Docker host after introducing cAdvisor - https://github.com/google/cadvisor/issues/771
I have a large number of Ansible (2.2.1.0) scripts that I use to install my service containers on these Docker hosts; internally they use the docker_container module. Many times these scripts need to remove a container, but because of the problem stated above, they are failing.
I can force-kill a Docker container on these hosts easily:
docker rm container_name -fv
So I would expect that the force_kill option provided in the docker_container module (docker_container module docs) should be able to do the same:
- name: Delete docker containers
  docker_container:
    name: "container_name"
    force_kill: true
    keep_volumes: false
    state: absent
But the script always fails. I do not know what the purpose of this option is if it cannot force-kill the container. All of my scripts are failing even after enabling force_kill, and I need to know how to make this option work as intended.
Update:
I now understand that the force_kill option just sends a kill signal, as docker kill does. I have updated the question to better reflect what I really want to achieve.
I think force_kill sends a kill signal just as docker kill does, so:
The main process inside the container will be sent SIGKILL, or any
signal specified with option --signal.
I would check CMD and/or ENTRYPOINT in the Dockerfile of the image (if you built it), because:
ENTRYPOINT and CMD in the shell form run as a subcommand of /bin/sh
-c, which does not pass signals. This means that the executable is not the container’s PID 1 and does not receive Unix signals.
CMD ["/init.sh"] # exec form
CMD /init.sh # shell form
I am not sure this is the reason for your problem, but it may be that a CMD in shell form doesn't forward the signal to your command.
As Dominique Burton's blog puts it:
Always use the exec form if you want Docker to forward signals to your
(sub-)process. This is not only important for sending custom signals
to a Docker container, it’s also important to properly stop (i.e.
docker stop) a container.
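If a wrapper script is unavoidable, a common pattern is to exec the real process from the script, so it replaces the shell as PID 1 and receives signals directly. A minimal sketch (the service binary is hypothetical):
#!/bin/sh
# init.sh (sketch): exec replaces this shell with the service process,
# so the service becomes PID 1 and receives SIGTERM/SIGKILL directly
exec /usr/local/bin/my-service
combined with the exec form in the Dockerfile: CMD ["/init.sh"].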
I'm trying to pull and set up a container with a database (MySQL) using the following command:
docker-compose up
But for some reason the container is failing to start, and all it shows me is this:
some_db exited with code 0
But that doesn't give me any information about why that happened so I can fix it. How can I see the details of that error?
Instead of looking for how to find an error in your composition, you should change your process. With Docker it is easy to fall into using all of the lovely tools to make things work seamlessly together, but this sounds like an individual container issue, not an issue with Docker Compose. Try building a container from the base image you want, then use docker exec -it <somecontainername> sh to go into it and run the commands from your entrypoint manually to find the specific failing step.
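A sketch of that workflow (the image and container names are placeholders): start the base image with a long-lived command so it stays up, then step through the entrypoint by hand:
# start a throwaway container from the base image that stays up
docker run -d --name debug mysql:5.7 tail -f /dev/null
# open a shell inside it and replay the entrypoint commands one by one
docker exec -it debug sh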
Try at least docker logs <exited_container_name/id> (or more recently: docker container logs <exited_container_name/id>)
That way, you can check if there was any error message.
An exited container means the main process stopped: you need to make sure that main process remains up in order for the container to not exit immediately.
Try also docker inspect <exited_container_name/id> (or, again, docker container inspect <exited_container_name/id>)
If you see for instance "OOMKilled": true, that would mean the container memory was exceeded. You can also look for the "ExitCode" for clues.
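For example, to pull just those two fields out of the inspect output (container name as in the question):
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' some_db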
See more at "Ten tips for debugging Docker containers" from Mark Betz.
Docker version 1.12.
I got a Dockerfile from Here
FROM nginx:latest
RUN touch /marker
ADD ./check_running.sh /check_running.sh
RUN chmod +x /check_running.sh
HEALTHCHECK --interval=5s --timeout=3s CMD ./check_running.sh
I'm able to roll updates and health checks with the check_running.sh shell script. Here, the check_running.sh script is copied into the image, so the launched container has it.
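For context, such a script is typically a small probe that exits 0 when the service is healthy; a purely illustrative sketch, since the actual check_running.sh isn't shown (and it assumes an HTTP client like curl is present in the image):
#!/bin/sh
# succeed (exit 0) if nginx answers locally, fail otherwise
curl -f http://localhost/ || exit 1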
Now, my question is: is there any way to health check from outside the container, with the script also located outside?
I'm expecting a health check command to measure the container's performance (depending on what we write in the script). If the container is not performing well, it should roll back to the previous version (a kind of process that monitors the containers and rolls back when they are unhealthy).
Thanks
is there any way to health check from outside the container, with the script also located outside?
a kind of process that monitors the containers and rolls back when they are unhealthy
You have several options:
1. From outside, you run a process inside the container to check its health with docker exec. This could be any sequence of shell commands. If you want to keep your scripts outside of the container, you might use something like cat script.sh | docker exec -it container sh -s.
2. You check the container health from outside the container, e.g. by looking for a process that should be running inside the container (try to set a security profile and use ps -Zax, or try looking for children of the daemon), or you give each container a specific user ID with --user 12345 and look for that, or e.g. by connecting to its services. You'd have to make sure it's running inside the right container. You can access the containers' filesystem below /var/lib/docker/devicemapper/mnt/<hash>/rootfs.
3. You run a HEALTHCHECK inside the container and check its health with docker inspect --format='{{json .State.Health.Status}}' <containername>, combined with e.g. a line in the Dockerfile:
HEALTHCHECK CMD wget -q -s http://some.host to check the container has internet access.
I'd recommend option 3, because it's likely to be more compatible with other tools in the future.
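To connect that to the goal of rolling back automatically, here is a minimal watcher sketch built on option 3; the container and image names are placeholders, and the rollback step is an assumption about your deployment:
#!/bin/sh
# poll the status reported by the container's HEALTHCHECK
while true; do
  status=$(docker inspect --format '{{.State.Health.Status}}' mycontainer)
  if [ "$status" = "unhealthy" ]; then
    # replace the container with the previous image tag (placeholder names)
    docker rm -f mycontainer
    docker run -d --name mycontainer myimage:previous
  fi
  sleep 5
done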
Just got a comment on a blog! He referred to the Docker documentation's HEALTHCHECK section. There are health-check options for the docker run command that override the Dockerfile defaults. I have not checked yet, but it seems like a good way to get what I want. Will check and update the answer!
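For example, the Dockerfile defaults can be overridden at run time with flags like these (the image name is a placeholder, assumed to contain /check_running.sh):
docker run -d --name web \
  --health-cmd /check_running.sh \
  --health-interval 10s \
  --health-timeout 3s \
  myimage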
The docker inspect command lets you view the output of the health check commands, whether they succeed or fail:
docker inspect --format='{{json .State.Health}}' your-container-name
That's not available with the Dockerfile HEALTHCHECK option: all checks run inside the container. To me, this is a good thing, since it avoids potentially untrusted code running directly on the host, and it allows you to include the dependencies for the health check inside your container.
If you need to monitor your container from outside, you'll need to use another tool or monitoring application, there are quite a few of them out there.
You can view the results of the health check by running docker inspect on a container.
Another approach, depending on your application, would be to expose a /healthz endpoint that the healthcheck also probes; this way it can be queried externally or internally as needed.
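For instance, assuming the app served HTTP on port 8080 (an assumption, as is curl being present in the image), the internal check and an external probe could share the same endpoint:
# inside the Dockerfile
HEALTHCHECK --interval=5s --timeout=3s \
  CMD curl -f http://localhost:8080/healthz || exit 1
# and from outside the host:
# curl -f http://docker-host:8080/healthz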
Right now, for my workflow (a personal project, but I would like to simplify it for sharing with friends), I have these steps:
1. docker run -it -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /bin/bash
2. In docker, run the script tahoe-run.sh which will either do a first-time configuration, or just run the tahoe service if it is not the first run.
My question is how I would simplify this. A few specific points below:
I would like to tell Docker to just run the container and let it do its thing in the background. I understand that replacing the -it with -d would get me closer, but it never seems to leave anything running when I do a docker ps, the way the above workflow does when I detach with CTRL+P, CTRL+Q.
Do I need to -p those ports? I EXPOSE them in the Dockerfile. Also, since they are being mapped directly to the same port in the container, do I need to write each twice? Those ports are what the container is using AND what the outside world is expecting to be able to use to connect to the tahoe "server".
How could I get this working so that instead of running /bin/bash I run /tahoe-run.sh right away?
My perfect command would look something like docker run -d maccam912/tahoe-node /tahoe-run.sh if the port mapping stuff is not necessary. The end goal of this would be to let the container run in the background, with tahoe-run.sh kicking off a server that just runs "forever".
Let me know if I can clarify anything or if there is a simpler way to do this. Feel free to take a look at my Dockerfile and make any suggestions if it would simplify things there, but this project intentionally favors ease of use (hopefully getting things down to one command) over the separation of duties that is usually recommended.
I do believe I figured it out:
The problem with having just -d specified is that as soon as the given command returns/exits, the container stops running. My solution was to end the last line of my script with an &, leaving the script hanging and not exiting.
As for the ports, EXPOSE in the Dockerfile does little more than document which ports the container uses; it does not publish them, so the -p is still necessary for remote machines to reach the server. The doubling of the ports (host:container) is needed because specifying only one will publish that container port on a random high-numbered host port.
Please correct me if I'm wrong on any of this, but in this case the best solution was to modify my script; I will use docker run -d -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /tahoe-run.sh, and the new script will keep the process alive and the container running.
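For what it's worth, a sketch of how such a script can keep the container alive (the setup step and service command are placeholders, not the actual tahoe-run.sh):
#!/bin/sh
# tahoe-run.sh (sketch): one-time setup, then keep the service running
if [ ! -f /data/.configured ]; then
  /first-time-setup.sh           # hypothetical first-run configuration
  touch /data/.configured
fi
/usr/local/bin/tahoe-service &   # background the service, as described above
wait                             # keep the script (PID 1) alive until the service exits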