Docker dealing with processes that don't end?

I have a docker container that has services running on multiple ports.
When I try to start one of these processes mid-way through my Dockerfile it causes the build process to stall indefinitely.
RUN /opt/webhook/webhook-linux-amd64/webhook -hooks /opt/webhook/hooks.json -verbose
So the program is running as it should but it never moves on.
I've tried adding & to the end of the command to tell bash to run the next step in parallel but this causes the service to not be running in the final image. I also tried redirecting the output of the program to /dev/null.
How can I get around this?

You have a misconception here. The commands in a Dockerfile are executed to create a Docker image, before any container is ever run from that image. One type of command in the Dockerfile is RUN, which lets you run an arbitrary shell command whose effects influence the image under creation in some sense.
Therefore, the build process waits until the command terminates.
It seems you want to start the service when a container is started from the image. To do so, use the CMD instruction instead. It tells Docker what is supposed to be executed when the container starts.
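A minimal sketch of that change, reusing the webhook command from the question (the base image and the COPY sources are assumptions, not from the original Dockerfile):

```Dockerfile
FROM debian:stable-slim
COPY webhook-linux-amd64 /opt/webhook/webhook-linux-amd64
COPY hooks.json /opt/webhook/hooks.json
# RUN here would block the build forever; CMD only records what to
# execute when a container is started from the image.
CMD ["/opt/webhook/webhook-linux-amd64/webhook", "-hooks", "/opt/webhook/hooks.json", "-verbose"]
```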

Related

Does running docker-compose restart re-run init script

I have a Docker container with an init script CMD ["init_server.sh"]
which is orchestrated by docker-compose.
Does running docker-compose restart re-run the init script,
or will only running docker-compose down followed by docker-compose up
trigger the script to be run again?
I imagine whatever the answer to this will apply to docker restart as well.
Am I correct?
A Docker container only runs one process, defined by the "entrypoint" and "command" settings (typically from a Dockerfile, you can override them in a docker-compose.yml). Whatever that process does, it will do every time the container starts.
In terms of Docker commands, the Compose commands you show aren't different from their underlying plain-Docker variants. restart is just stop followed by start, so it will re-run the main container process in its existing container with the existing (possibly modified) container filesystem. If you do a docker rm in between these (or docker-compose down) the process starts in a clean container based on the image.
It's typical for an initialization script to check if the initialization it requires has already been done. For things like the standard Docker Hub database images, this works by checking if the data directory is totally empty; initialization only happens on the very first startup. An init script that runs something like database migrations will generally keep track of which migrations have already been done and won't repeat work.
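A sketch of what such an idempotent init script can look like; the marker-file convention and the DATA_DIR location are illustrative assumptions, not taken from any specific image:

```shell
#!/bin/sh
# init_server.sh (sketch): do one-time setup only on the very first start.
# DATA_DIR is an illustrative location; a real image would point this at
# its data volume.
DATA_DIR="${DATA_DIR:-./data}"

if [ ! -f "$DATA_DIR/.initialized" ]; then
    echo "first start: running one-time initialization"
    mkdir -p "$DATA_DIR"
    # ...real setup work (create schema, run migrations, etc.) goes here...
    touch "$DATA_DIR/.initialized"
else
    echo "already initialized: skipping setup"
fi
```

With this pattern, docker-compose restart re-runs the script in the same container filesystem, so the marker file still exists and setup is skipped; docker-compose down followed by up starts from a fresh filesystem (unless the directory is on a volume), so setup runs again.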

Run docker with just copied file inside

I'm building an image that just contains a copied file, with the following Dockerfile:
FROM alpine:3.8
COPY ./somefile /srv/somefile
When I try to docker run the image, it exits immediately, that is just after docker run I have:
Exited (0) 1 second ago.
I tried adding CMD ["/bin/sh"] or ENTRYPOINT ["/bin/sh"] but it doesn't change anything.
Is it possible to have such container with just copied file and make it up and running, until I stop it?
So there is really no problem: you succeeded in running your container with the file inside. But since you didn't give the container any additional job to do, its run finished within a second.
First, you should get acquainted with what counts as a "running" container in Docker terminology. A container is "running" as long as its main process (PID 1) is running. When that process exits, the container stops. So when you want your container to remain running (as a service, for example), you need to keep your main process active.
Second, what is your main process? It is the process launched upon container start. It is combined from the ENTRYPOINT and CMD directives (following some rules). These directives are often given default values in the Dockerfile, but you can override them. If you just run docker run <image>, the default values are taken; if you provide arguments after <image>, they override CMD.
So, for alpine you can simply run a shell, like docker run -it alpine sh. Until you exit the shell, your container is running.
One last thing: the -it arguments connect both STDIN and STDOUT/STDERR to your console. So your sh process, which is the main process, stays alive until you close the console.
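If you want the container to stay up on its own, without an interactive shell, you can give it a long-running main process instead; one common sketch (the CMD choice is an assumption, any never-exiting command works):

```Dockerfile
FROM alpine:3.8
COPY ./somefile /srv/somefile
# A main process that never exits keeps the container in "running" state.
CMD ["tail", "-f", "/dev/null"]
```

You can then inspect it with docker exec -it <container> sh and stop it with docker stop.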

Which process does `docker attach` attach to?

The doc says
docker attach: Attach local standard input, output, and error streams to a running container
From my understanding, a running container can have many running processes, including those started using docker exec. So when using docker attach, which process am I attaching to exactly?
It attaches the local standard input, output, and error streams to the ENTRYPOINT/CMD process, displaying its ongoing output or letting you control it interactively.
So it does not seem to be related to a specific process.
docker attach adds:
You can attach to the same contained process multiple times simultaneously, from different sessions on the Docker host.
Still the same process though.
Whatever process has PID 1 in the container. If the image declared an ENTRYPOINT in the Dockerfile (or if you use docker run --entrypoint ...), it's that program; if not, it's the command passed on the docker run command line, or the Dockerfile's CMD.

Difference between Docker Build and Docker Run

If I want to run a python script in my container what is the point in having the RUN command, if I can pass in an argument at build along with running the script?
Each time I run the container I want x.py to be run on an ENV variable passed in during the build stage.
If I were to use Swarm, and the only goal was to run the x.py script, swarm would only be building nodes, rather than building and eventually running, since the CMD and ENTRYPOINT instructions only happen at run time.
Am I missing something?
The docker build command creates an immutable image. The docker run command creates a container that uses the image as a base filesystem, with other metadata from the image used as defaults for running that container.
Each RUN line in a Dockerfile is used to add a layer to the image filesystem in Docker. Docker actually performs that task in a temporary container, hence the selection of the confusing "run" term. The only thing preserved from a RUN command is the filesystem changes; running processes, changes to environment variables, and shell settings like the current working directory are all lost when the temporary container is cleaned up at the completion of the RUN command.
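A sketch illustrating which effects persist from build-time instructions (the file and variable names are arbitrary):

```Dockerfile
FROM alpine:3.8
# Filesystem changes from RUN are kept as an image layer:
RUN touch /created-at-build
# Shell state from RUN is NOT kept; FOO and the cd are gone after this line:
RUN export FOO=bar && cd /tmp
# To persist an environment variable or working directory, use the
# dedicated instructions instead:
ENV FOO=bar
WORKDIR /tmp
```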
The ENTRYPOINT and CMD values are used to specify the default command to run when the container is started. When both are defined, the value of the entrypoint is run with the value of the cmd appended as command line arguments. The value of CMD is easily overridden at the end of the docker run command line, so by using both you get easy-to-reconfigure containers that run the same command with different user input parameters.
If the command you are trying to run needs to be performed every time the container starts, rather than being stored in the immutable image, then you need to perform that command in your ENTRYPOINT or CMD. This will add to the container startup time, so if the result of that command can be stored as a filesystem change and cached for all future containers being run, you want to make that setting in a RUN line.
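For example, a sketch using the x.py script mentioned in the question (the base image and the argument name are assumptions):

```Dockerfile
FROM python:3-slim
COPY x.py /app/x.py
# ENTRYPOINT fixes what always runs; CMD supplies overridable default arguments.
ENTRYPOINT ["python", "/app/x.py"]
CMD ["--mode=default"]
```

Here docker run <image> executes python /app/x.py --mode=default, while docker run <image> --mode=other overrides only the CMD portion, leaving the entrypoint intact.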

Dockerfile entrypoint

I'm trying to customize the docker image presented in the following repository
https://github.com/erkules/codership-images
I created a cron job in the Dockerfile and tried to run it with CMD, knowing the Dockerfile for the erkules image has an ENTRYPOINT ["/entrypoint.sh"]. It didn't work.
I tried to create a separate cron-entrypoint.sh and add it into the dockerfile, then test something like this ENTRYPOINT ["/entrypoint.sh", "/cron-entrypoint.sh"]. But also get an error.
I tried to add the cron job to the entrypoint.sh of erkules image, when I put it at the beginning, then the container runs the cron job but doesn't execute the rest of the entrypoint.sh. And when I put the cron script at the end of the entrypoint.sh, the cron job doesn't run but anything above in the entrypoint.sh gets executed.
How can I be able to run what's in the the entrypoint.sh of erkules image and my cron job at the same time through the Dockerfile?
You need to send the cron command to the background, so either use & or remove the -f flag (-f means: stay in foreground mode, don't daemonize).
So, in your entrypoint.sh:
#!/bin/bash
cron -f &
(
# the other commands here
)
Edit: I totally agree with @BMitch regarding the way you should handle multiple processes; running several inside the same container is not really recommended.
See examples here: https://docs.docker.com/engine/admin/multi-service_container/
The first thing to look at is whether you need multiple applications running in the same container. Ideally, the container would only run a single application. You may be able to run multiple containers for different apps and connect them together with the same networks or share a volume to achieve your goals.
Assuming your design requires multiple apps in the same container, you can launch some in the background and run the last in the foreground. However, I would lean towards using a tool that manages multiple processes. Two tools I can think of off the top of my head are supervisord and foreman (there is a port of it written in Go). The advantage of using something like supervisord is that it will handle signals to shut down the applications cleanly, and if one process dies, you can configure it to automatically restart that app or to consider the container failed and abend.
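For reference, a minimal supervisord configuration along these lines might look like the following (the program names and paths are illustrative, not from the erkules image):

```ini
[supervisord]
nodaemon=true              ; keep supervisord in the foreground as PID 1

[program:cron]
command=cron -f            ; run cron in the foreground under supervisord

[program:app]
command=/entrypoint.sh     ; the image's original entrypoint work
```

The Dockerfile would then end with something like CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"], so that supervisord is the container's main process and both programs live under it.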

Resources