Does Docker support restarting containers every X seconds?

I have a Logstash container that keeps two data sources in sync. When it runs, it queries non-synced entries in one database and posts them into the other. I would like to run this container, say, every 10 seconds.
What I have been doing is specifying --restart=always so that when the container exits, it restarts itself, which takes around 5 seconds, and that is a bit too often for this use case.
Does Docker support what I want to achieve (waiting X seconds between restarts, or any kind of scheduling) or should I remove the restart policy and schedule it with cron to run every 10 seconds?

If your container exits successfully, it will be restarted immediately with --restart=always.
An ever increasing delay (double the previous delay, starting at 100 milliseconds) is added before each restart to prevent flooding the server. This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600, and so on until either the on-failure limit is hit, or when you docker stop or docker rm -f the container.
Here is the part relevant to you, I guess:
If a container is successfully restarted (the container is started and runs for at least 10 seconds), the delay is reset to its default value of 100 ms.
What you can do is:
Restart your container with a cron every 10 seconds
Configure a cron inside your container and launch logstash every 10 seconds
Use a shell script which, in a loop, launches logstash and then sleeps 10 seconds (see the sketch after this list)
Maybe logstash already has something like that built in? (I know that, for example, the jdbc-input-plugin has some schedule params)
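For the shell-script option, a minimal sketch could look like this (the logstash command line and config path are placeholders, adjust them to your setup):
#!/bin/bash
# Launch logstash, then wait 10 seconds, forever.
# "logstash -f /path/to/pipeline.conf" is a placeholder; use your actual command.
while true; do
  logstash -f /path/to/pipeline.conf
  sleep 10
done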

Related

How to change the grace period of a docker image so it's not SIGKILL'ed after 10 seconds?

When a container is stopped with docker stop, docker sends SIGTERM to the process and waits 10 seconds as a grace period; if the process is still running after that, docker sends SIGKILL to kill it.
Let's say I have a database image that needs more than 10 seconds for a graceful shutdown. How do I make an image that does not get killed after 10 seconds? Can I increase the grace period, or even skip the SIGKILL completely?
PS: I am not asking about the -t parameter of docker stop. I want my image to decide the grace period.
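For reference, the grace period can also be fixed when the container is created rather than at stop time, although this still lives outside the image itself; a rough sketch (my-database-image is a placeholder):
# Give this container 60 seconds to shut down gracefully before SIGKILL
docker run -d --stop-timeout 60 my-database-image
# In a compose file, the equivalent setting is stop_grace_period.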

Swarm | Start a service that executes just one time and is then deleted

I have a Swarm cluster and a service that just creates a container, executes one script, and then deletes the container. Currently it works fine: the container executes the script and then deletes itself after the execution. But after a few seconds, the service restarts the container and executes the script again.
But I would like to delete the service after the first execution, so that the service does not do the job several times.
My only idea is, in the first script that launches the service, to put a timer of, say, 4 seconds and then delete the service from the script, like this:
#!/bin/bash
# Deploy the stack, give the one-shot service a few seconds to run, then remove it
docker stack deploy -c docker-compose.yaml database-prepare
sleep 4
docker service rm database-prepare
But if one day the service takes more time to execute its script, I would cut it while it is still running. Or the service might execute once very quickly and then restart itself after 3 seconds for a second execution. I don't want to cut it off...
To resolve this issue, I just adjusted a few things in my script. First, in the script that creates the temporary container, I sleep 10 seconds and then destroy the container. (10 seconds is generous, but I want to be sure the container has time to do its job.)
Secondly, when the container finishes executing its entrypoint script it would normally exit and be restarted (because Swarm restarts it), but I don't want my service to go up, down and up again and be cut during the second execution, destroying the job. So I added a sleep to the script that my temporary container runs: instead of going down and up repeatedly, it executes its job and then waits.
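A minimal sketch of the entrypoint described above (do-the-job.sh is a placeholder for the actual job script):
#!/bin/bash
# Run the one-off job, then block so the task does not exit and get restarted
# by Swarm while the outer script waits before removing the service.
/do-the-job.sh
# "sleep infinity" needs GNU coreutils; "tail -f /dev/null" also works on busybox.
sleep infinity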

Control over the docker images while scaling up and scaling down

Let's say we have a microservice which runs in a docker container.
Now, to bring up this service, it uses a cache that is mounted on a host volume and shared by all
the other docker containers for the same microservice. Building this cache in the app takes 10 minutes, and only then is the application ready to serve requests.
But this scenario fails when we scale up and scale down.
Let's say I am scaling up: the new container will be available, but it is still not fully up because we need to wait
for the cache to build.
How would you suggest handling this scenario?
And in front of these docker services we are planning to put Nginx to load balance the requests.
Thanks in Advance
If I understand you right, you want to know when your container is fully up and running. One option could be the Health Check feature, which was added in Docker 1.12.
Description (from Docker Docs):
The health check will first run interval seconds after the container is started, and then again interval seconds after each previous check completes.
If a single run of the check takes longer than timeout seconds then the check is considered to have failed.
It takes retries consecutive failures of the health check for the container to be considered unhealthy.
There you can specify any command to run to check your server's status.
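A health check can be declared in the Dockerfile or passed on the command line when the container is started; a rough sketch (the image name and the curl URL are placeholders for whatever verifies that your cache is warm):
docker run -d \
  --health-cmd "curl -f http://localhost:8080/health || exit 1" \
  --health-interval 30s \
  --health-timeout 5s \
  --health-retries 3 \
  my-microservice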
The health of your container can then be checked with the inspect command:
docker inspect --format='{{json .State.Health}}' <container-id>
This feature also adds the "(healthy)" information to the status shown in docker ps.

What is the limiting factor when docker restart slows down with a large number of workers and a short task?

Expected behavior:
We want docker containers to perform small jobs. Say we have ten containers and each just sleeps for 5 seconds. We want these to keep restarting quickly. Suppose you have a docker compose file with 10 containers defined like this, where each container sleeps for 5 seconds and dies:
some-worker1:
  image: some-worker
  build: ./some-worker
  restart: always
We expect these containers to restart right away after dying.
Observed behavior:
If you run watch docker ps, you notice that the restart time slowly increases. After running for a few minutes, the containers will only restart after a minute. And they will consistently restart after a minute.
Guesses:
I imagine that the docker engine, or whatever restarts the containers, has some policy for how quickly to restart them. They begin restarting quickly, so maybe some resource becomes scarce and docker has to slow the restart rate, or it slows it as an optimization, but it caps the delay at about a minute.
I think this explains it:
"An ever increasing delay (double the previous delay, starting at 100 milliseconds) is added before each restart to prevent flooding the server. This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600, and so on until either the on-failure limit is hit, or when you docker stop or docker rm -f the container.
If a container is successfully restarted (the container is started and runs for at least 10 seconds), the delay is reset to its default value of 100 ms."
https://docs.docker.com/engine/reference/run/#restart-policies---restart
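Since each worker only runs for about 5 seconds, it never stays up for the 10 seconds needed to reset the delay, so the backoff keeps doubling. A quick way to see the progression (a rough sketch; the exact cap depends on the Docker version):
#!/bin/bash
# Print the backoff delay before each of the first 12 restarts, in milliseconds.
delay=100
for i in $(seq 1 12); do
  echo "restart $i: ${delay} ms"
  delay=$((delay * 2))
done
After about 10 restarts the delay is already close to a minute, which matches the observed behavior.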

Is it possible to set maximum delay timeout for restart policy for a container?

In the docker docs it states:
An ever increasing delay (double the previous delay, starting at 100 milliseconds) is added before each restart to prevent flooding the server. This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600, and so on until either the on-failure limit is hit, or when you docker stop or docker rm -f the container.
Let's say my container connects to a database on startup. If the database server is down, the container process exits with an error. If the database is offline for a long time, the restart delay can grow to 5 minutes, for example.
Is it possible to limit max delay to 10 seconds for example?
I don't think you can do that with a container configuration.
Can you implement this delay directly in your application?
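One way to do that, sketched below, is to wrap the application in an entrypoint that retries the database connection itself with a capped delay, so the container never exits and Docker's backoff never kicks in (check-db.sh and start-app.sh are placeholders):
#!/bin/bash
# Retry the database connection with a fixed 10-second cap between attempts,
# then start the application once the database is reachable.
until /check-db.sh; do
  echo "database not reachable, retrying in 10 seconds..."
  sleep 10
done
exec /start-app.sh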
