Start multiple processes in a Docker container from a Dockerfile

I want to start multiple processes p1, p2, ..., pn when I start a Docker container. I can achieve that for one process with:
CMD p1
But I want to do this for multiple processes, all running in the background. Is there any way to do that?

You could have a start script that launches the processes.
e.g.
Dockerfile
CMD ./start.sh
start.sh
#!/bin/bash
./process-1.sh &
./process-2.sh &
./process-3.sh &
wait  # block here so the parent process stays alive
It's important to keep the parent process running, otherwise Docker will kill all the processes and the container will stop running. (That's tripped me up before.)
You could alternatively use supervisor or something to that effect.
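For the supervisor route, a minimal supervisord sketch could look like the following (the config path and script names mirror the example above and are assumptions, not from the question; adjust for your setup). nodaemon=true keeps supervisord in the foreground as the container's main process:
supervisord.conf
[supervisord]
nodaemon=true

[program:process-1]
command=/process-1.sh

[program:process-2]
command=/process-2.sh

[program:process-3]
command=/process-3.sh
Dockerfile
CMD ["supervisord", "-c", "/etc/supervisord.conf"]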

Related

How can I run script automatically after Docker container startup without altering main process of container

I have a Docker container which runs a web service. After the container process is started, I need to run a single command. How can I do this automatically, using either Docker Compose or Docker?
I'm looking for a solution that does not require me to substitute the original container process with a Bash script that runs sleep infinity etc. Is this even possible?

Docker container exits after executing entrypoint

FROM openjdk:8
LABEL maintainer="test"
EXPOSE 8080
ADD test-demo.jar assembly.jar
ENTRYPOINT ["java","-jar","assembly.jar"]
The container I start with this Dockerfile exits soon after it starts. Please advise what to do to keep it running.
You need to make sure java -jar assembly.jar stays active as a foreground process; if that main process exits, the Docker container exits with it.
You should wrap the java -jar call in its own script, which allows various ways to keep that script alive, as described here.
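A minimal sketch of such a wrapper (the restart-on-exit loop is just one way to keep the main process alive, and the 5-second delay is an arbitrary choice); the ENTRYPOINT would then point at this script instead of the jar:
start.sh
#!/bin/bash
# keep the container's main process alive: restart the app if it exits
while true; do
    java -jar assembly.jar
    echo "assembly.jar exited with status $?; restarting in 5s" >&2
    sleep 5
done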

Docker dealing with processes that don't end?

I have a docker container that has services running on multiple ports.
When I try to start one of these processes mid-way through my Dockerfile, it causes the build process to stall indefinitely.
RUN /opt/webhook/webhook-linux-amd64/webhook -hooks /opt/webhook/hooks.json -verbose
So the program is running as it should but it never moves on.
I've tried adding & to the end of the command so that bash moves on to the next step, but then the service isn't running in the final image. I also tried redirecting the program's output to /dev/null.
How can I get around this?
You have a misconception here. The commands in a Dockerfile are executed to build the Docker image, before any container is run from it. A RUN instruction executes an arbitrary shell command whose side effects are captured in the image under construction.
Therefore, the build process waits until the command terminates.
It seems you want to start the service when a container is started from the image. To do so, use the CMD instruction instead: it tells Docker what to execute when the container starts.
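Applied to the webhook service above, that means keeping build-time steps in RUN and ending the Dockerfile with something like:
CMD ["/opt/webhook/webhook-linux-amd64/webhook", "-hooks", "/opt/webhook/hooks.json", "-verbose"]
The exec (JSON array) form avoids wrapping the process in a shell, so it receives signals directly.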

Dockerfile entrypoint

I'm trying to customize the docker image presented in the following repository
https://github.com/erkules/codership-images
I created a cron job in the Dockerfile and tried to run it with CMD, knowing that the Dockerfile for the erkules image has an ENTRYPOINT ["/entrypoint.sh"]. It didn't work.
I tried to create a separate cron-entrypoint.sh, add it in the Dockerfile, and test something like ENTRYPOINT ["/entrypoint.sh", "/cron-entrypoint.sh"], but that also gave an error.
I tried to add the cron job to the entrypoint.sh of the erkules image: when I put it at the beginning, the container runs the cron job but doesn't execute the rest of entrypoint.sh; when I put it at the end, the cron job doesn't run but everything above it in entrypoint.sh gets executed.
How can I run both what's in the entrypoint.sh of the erkules image and my cron job through the Dockerfile?
You need to send the cron command to the background: either use &, or remove the -f flag (-f means: stay in foreground mode, don't daemonize).
So, in your entrypoint.sh:
#!/bin/bash
# run cron in the background so the rest of the entrypoint can continue
cron -f &
# the other commands here
Edit: I totally agree with @BMitch regarding how you should handle multiple processes inside the same container, which is not really recommended.
See examples here: https://docs.docker.com/engine/admin/multi-service_container/
The first thing to look at is whether you need multiple applications running in the same container at all. Ideally, a container would run only a single application. You may be able to run multiple containers for the different apps and connect them over the same network, or share a volume, to achieve your goals.
Assuming your design requires multiple apps in the same container, you can launch some in the background and run the last in the foreground. However, I would lean towards using a tool that manages multiple processes. Two tools I can think of off the top of my head are supervisord and foreman in go. The advantage of using something like supervisord is that it handles signals to shut down the applications cleanly, and if one process dies, you can configure it to either restart that app automatically or consider the container failed and abend.
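As a sketch of the multiple-container alternative (the service and image names here are hypothetical), Compose runs each app in its own container and attaches them to a shared default network, where they can reach each other by service name:
docker-compose.yml
version: "2"
services:
  web:
    image: my-web-app
    ports:
      - "8080:8080"
  worker:
    image: my-worker
    depends_on:
      - web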

Is it possible to run a command when stopping a Docker container?

I'm using docker-compose to organize containers for a JS application.
The source container sports command: npm start, pretty standard, to spin up the live application. However, it times out when I ask it to stop.
I was wondering if it's possible to have docker-compose stop run a command inside the container that could properly terminate the application.
docker-compose stop just sends a SIGTERM to your container; if it does not stop within 10 seconds (configurable), a SIGKILL follows. So if you want to customize this behavior, you should handle the signal inside your entrypoint (if you have one).
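A minimal sketch of an entrypoint that does this, assuming the container runs npm start (note that npm itself doesn't reliably forward signals to node, which is why the script traps SIGTERM and forwards it to the child itself):
entrypoint.sh
#!/bin/sh
# forward SIGTERM to the app so it can terminate cleanly before the
# docker-compose timeout triggers a SIGKILL
term() {
    kill -TERM "$child" 2>/dev/null
    wait "$child"
}
trap term TERM
npm start &
child=$!
wait "$child"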
