I have a Swarm cluster with a service that just creates a container, executes one script, and then deletes the container. Currently this works: the container executes the script and then deletes itself. But after a few seconds, the service restarts the container and executes the script again.
I would like to delete the service after the first execution, so that the service doesn't do the job several times.
My only idea is to put a timer in the first script that launches the service, say 4 seconds, and then delete the service from that script, like this:
#!/bin/bash
docker stack deploy -c docker-compose.yaml database-prepare
sleep 4
# remove the whole stack; the service created by `stack deploy` is
# prefixed with the stack name, so `docker service rm database-prepare`
# would not match it
docker stack rm database-prepare
But if one day the service takes longer to execute its script, I would cut it off while it is still running. Or if the service executes once very quickly and then restarts itself 3 seconds later for a second run, I don't want to cut that run off either...
To resolve this issue, I adjusted a few things in my script. First, in the script that creates the temporary container, I sleep 10 seconds and then destroy the container. (10 seconds is generous, but I want to be sure the container has time to do its job.)
Secondly, when the container finishes executing its entrypoint script, it deletes itself and restarts (because Swarm restarts it). I don't want the service to go up, down, up again, and then be cut off during a second execution, destroying the job. So I added a sleep at the end of the script that my temporary container runs: it executes its job once and then waits instead of exiting and being restarted.
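For one-shot tasks like this, Swarm can also simply be told not to restart the container at all, which removes the need for timers entirely. A minimal sketch of the relevant Compose section (the service and image names are illustrative):

```yaml
version: "3.7"
services:
  prepare:
    image: my-db-prepare:latest   # illustrative image name
    deploy:
      restart_policy:
        condition: none   # run the task once; Swarm will not restart it on exit
```

With `condition: none`, the container runs once and stays in the exited state; `docker stack rm` is then only needed for cleanup, not to prevent re-execution.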
I have a Docker container with an init script CMD ["init_server.sh"]
which is orchestrated by docker-compose.
Does running docker-compose restart re-run the init script,
or will only running docker-compose down followed by docker-compose up
trigger the script to be run again?
I imagine whatever the answer to this will apply to docker restart as well.
Am I correct?
A Docker container only runs one process, defined by the "entrypoint" and "command" settings (typically from a Dockerfile, you can override them in a docker-compose.yml). Whatever that process does, it will do every time the container starts.
In terms of Docker commands, the Compose commands you show aren't different from their underlying plain-Docker variants. restart is just stop followed by start, so it will re-run the main container process in its existing container with the existing (possibly modified) container filesystem. If you do a docker rm in between these (or docker-compose down) the process starts in a clean container based on the image.
It's typical for an initialization script to check if the initialization it requires has already been done. For things like the standard Docker Hub database images, this works by checking if the data directory is totally empty; initialization only happens on the very first startup. An init script that runs something like database migrations will generally keep track of which migrations have already been done and won't repeat work.
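The marker-file variant of that check can be sketched in a few lines of shell (the marker name and messages are illustrative):

```shell
#!/bin/sh
# Run one-time initialization only if it has not been done before.
# $1 is the data directory; a marker file records that init already ran.
init_once() {
  marker="$1/.initialized"
  if [ ! -f "$marker" ]; then
    echo "running one-time init"
    # ... seed the database, run migrations, etc. ...
    touch "$marker"
  else
    echo "init already done, skipping"
  fi
}
```

A `docker-compose restart` keeps the container filesystem, so the marker survives and the second run skips the work; `docker-compose down` discards the container (unless the directory is a volume), so init runs again on the next `up`.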
I have a docker container that has services running on multiple ports.
When I try to start one of these processes mid-way through my Dockerfile it causes the build process to stall indefinitely.
RUN /opt/webhook/webhook-linux-amd64/webhook -hooks /opt/webhook/hooks.json -verbose
So the program is running as it should but it never moves on.
I've tried adding & to the end of the command to tell bash to run the next step in parallel but this causes the service to not be running in the final image. I also tried redirecting the output of the program to /dev/null.
How can I get around this?
You have a misconception here. The commands in a Dockerfile are executed to create a Docker image, before any container is run from it. One type of command in the Dockerfile is RUN, which lets you run an arbitrary shell command whose effects modify the image under creation.
The build process therefore waits until the command terminates.
It seems you want to start the service when a container is started from the image. To do that, use the CMD instruction instead. It tells Docker what is supposed to be executed when a container starts.
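Using the webhook command from the question, the fix would look roughly like this (the paths are copied from the question; the base image and preceding steps are illustrative):

```dockerfile
FROM debian:bookworm-slim
# build-time steps (COPY, package installs, ...) belong in RUN/COPY instructions
COPY webhook-linux-amd64 /opt/webhook/webhook-linux-amd64
COPY hooks.json /opt/webhook/hooks.json
# the long-running service is started when a container runs, not at build time
CMD ["/opt/webhook/webhook-linux-amd64/webhook", "-hooks", "/opt/webhook/hooks.json", "-verbose"]
```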
I'm trying to customize the docker image presented in the following repository
https://github.com/erkules/codership-images
I created a cron job in the Dockerfile and tried to run it with CMD, knowing that the Dockerfile for the erkules image has ENTRYPOINT ["/entrypoint.sh"]. It didn't work.
I tried to create a separate cron-entrypoint.sh, add it in the Dockerfile, and then use something like ENTRYPOINT ["/entrypoint.sh", "/cron-entrypoint.sh"]. That also gave an error.
I tried adding the cron job to the entrypoint.sh of the erkules image. When I put it at the beginning, the container runs the cron job but doesn't execute the rest of entrypoint.sh. When I put it at the end, everything above it runs but the cron job doesn't.
How can I run both the entrypoint.sh of the erkules image and my cron job at the same time through the Dockerfile?
You need to send the cron command to the background: either use & or remove the -f flag (-f means: stay in foreground mode, don't daemonize).
So, in your entrypoint.sh:
#!/bin/bash
# start cron in the background so the rest of the entrypoint can continue
cron -f &
(
  # the other commands here
)
Edit: I fully agree with @BMitch regarding the way you should handle multiple processes inside the same container, which is something that is not really recommended.
See examples here: https://docs.docker.com/engine/admin/multi-service_container/
The first thing to look at is whether you need multiple applications running in the same container. Ideally, the container would only run a single application. You may be able to run multiple containers for different apps and connect them together with the same networks or share a volume to achieve your goals.
Assuming your design requires multiple apps in the same container, you can launch some in the background and run the last in the foreground. However, I would lean towards using a tool that manages multiple processes. Two tools I can think of off the top of my head are supervisord and foreman in go. The advantage of using something like supervisord is that it will handle signals to shutdown the applications cleanly and if one process dies, you can configure it to automatically restart that app or consider the container failed and abend.
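A minimal supervisord setup for this case might look like the following sketch (the program names, paths, and config location are illustrative):

```ini
; supervisord.conf - run cron and the original entrypoint side by side
[supervisord]
nodaemon=true          ; keep supervisord in the foreground as PID 1

[program:cron]
command=cron -f        ; cron stays in the foreground; supervisord manages it
autorestart=true

[program:entrypoint]
command=/entrypoint.sh
autorestart=false
```

The Dockerfile would then end with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"], so supervisord becomes the single foreground process of the container.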
I have a Logstash container that keeps two data sources sync. When it runs, it queries non-synced entries in one database and posts them into the other. I would like to run this container say every 10 seconds.
What I have been doing is specifying --restart=always, so that when the container exits it restarts itself, which takes around 5 seconds. That is a bit too often for this use case.
Does Docker support what I want to achieve (waiting X seconds between restarts, or any kind of scheduling) or should I remove the restart policy and schedule it with cron to run every 10 seconds?
If your container exits successfully, it will be restarted immediately with --restart=always.
An ever increasing delay (double the previous delay, starting at 100 milliseconds) is added before each restart to prevent flooding the server. This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600, and so on until either the on-failure limit is hit, or when you docker stop or docker rm -f the container.
Here is your part I guess:
If a container is successfully restarted (the container is started and runs for at least 10 seconds), the delay is reset to its default value of 100 ms.
What you can do is:
Restart your container with a cron every 10 seconds
Configure a cron inside your container and launch logstash every 10 seconds
Use a shell script which, in a loop, launches logstash and then sleeps 10 seconds
Maybe logstash has something like that already built-in? (I know that for example the jdbc-input-plugin has some schedule params)
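The wrapper-loop option can be sketched like this; the command, interval, and iteration count are all illustrative, and inside the container you would loop forever rather than a fixed number of times:

```shell
#!/bin/sh
# Run a sync command repeatedly with a fixed pause in between.
# $1 = command to run, $2 = seconds to sleep, $3 = number of iterations
# (use an infinite `while true` loop instead of a count in production)
run_sync_loop() {
  i=0
  while [ "$i" -lt "$3" ]; do
    $1
    sleep "$2"
    i=$((i + 1))
  done
}
```

In the real container the call would be something like `run_sync_loop "logstash -f /etc/logstash/sync.conf" 10 ...` wrapped in an endless loop, with the wrapper script set as the container's CMD.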
I can suspend the processes running inside a container with the PAUSE command. Is it possible to clone the Docker container whilst paused, so that it can be resumed (i.e. via the UNPAUSE command) several times in parallel?
The use case for this is a process which takes a long time to start (i.e. ~20 seconds). Given that I want to have a pool of short-lived Docker containers running that process in parallel, I could reduce the start-up time for each container a lot if this were somehow possible.
No, you can only clone the container's disk image, not any running processes.
Yes, you can, using docker checkpoint (CRIU). This does not have anything to do with pause, though; it is a separate docker command.
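The commands look roughly like this (the container and checkpoint names are illustrative; the feature is experimental and requires CRIU to be installed and experimental mode enabled in the Docker daemon):

```shell
# create a checkpoint of a running container, leaving it stopped
docker checkpoint create my-container cp1
# later, start the container again from that checkpoint
docker start --checkpoint cp1 my-container
```

Note that a checkpoint is restored into the same container, so in practice this is closer to suspend/resume than to cloning one paused container into a pool of parallel copies.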