How to run a container from docker-compose after 1 hour

I have 10 containers in docker-compose.
I want 9 of them to start working when I bring the compose stack up, and docker-compose to run the 10th container only after 1 hour.
Currently it's running all containers at once.
How can I achieve this?

Docker Compose doesn’t directly have this functionality. (Kubernetes doesn’t either, though it does have the ability to run a short-lived container at a specified time of day.)
Probably the best workaround to the problem as you've stated it is to use a tool like at(1) to run an additional container at a later time:
echo 'docker run ...' | at now + 1 hour
(Note that at(1) reads the command from standard input and takes a time specification such as now + 1 hour, rather than taking the command as an argument.)
My experience has generally been that it can get a little messy to depend on starting and stopping Docker containers for workflow management. You may be better off starting a pool of workers against a job queue system like RabbitMQ and injecting a job after an hour, or using a language-native scheduled-task library in your application, and simply starting every container every time.
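If you want to stay inside the compose workflow, another option is a small wrapper script that brings up everything except the delayed service, waits, and then starts it. This is a minimal sketch; the service names (app1 ... app9, delayed) are placeholders for whatever your compose file actually defines:
# start everything except the service that should be delayed
docker-compose up -d app1 app2 app3 app4 app5 app6 app7 app8 app9
# wait an hour, then bring up the remaining service
sleep 3600
docker-compose up -d delayed
Passing explicit service names to docker-compose up starts only those services (plus their declared dependencies), so the delayed service stays down until the second command runs.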

Related

Best Docker Stack equivalent for docker-compose "--exit-code-from" option?

I have a docker-compose file with 4 services. Services 1, 2, and 3 are job executors. Service 4 is the job scheduler. After the scheduler has finished running all its jobs on the executors, it returns 0 and terminates. However, the executor services still need to be shut down. With standard docker-compose this is easy; just use the --exit-code-from option:
Terminate docker compose when test container finishes
However, when a version 3.0+ compose file is deployed via Docker Stack, I see no equivalent way to wait for one service to complete and then terminate all remaining services. https://docs.docker.com/engine/reference/commandline/stack/
A few possible approaches are discussed here:
https://github.com/moby/moby/issues/30942
The solution from miltoncs seems reasonable at first:
https://github.com/moby/moby/issues/30942#issuecomment-540699206
The suggested approach is to query docker stack ps every second to get service status, then remove all services with docker stack rm when done. I'm not sure how all that constant stack ps traffic would scale with thousands of jobs running in a cluster; could it bog down the ingress network?
Does anyone have experience / success with this or similar solutions?
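For reference, a minimal sketch of that polling approach; the stack name (jobs) and the scheduler service name are assumptions, and the poll interval is deliberately longer than one second to cut down the stack ps traffic the question worries about:
STACK=jobs
while true; do
  # CurrentState looks like "Running 5 seconds ago", "Complete 2 minutes ago", etc.
  state=$(docker stack ps "$STACK" --filter "name=${STACK}_scheduler" --format '{{.CurrentState}}' | head -n 1)
  case "$state" in
    Complete*) docker stack rm "$STACK"; break ;;          # scheduler finished cleanly
    Failed*|Rejected*) docker stack rm "$STACK"; break ;;  # tear down on failure too
  esac
  sleep 10
done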

Should I create a docker container or docker start a stopped container?

From the Docker philosophy's point of view, which is more advisable:
creating a container every time we need to use a certain environment and then removing it after use (docker run <image> every time); or
creating a container once for a specific environment (docker run <image>), stopping it when it is not needed, and starting it again (docker start <container>) whenever it is needed?
If you docker rm the old container and docker run a new one, you will always get a clean filesystem that starts from exactly what's in the original image (plus any volume mounts). You will also fairly routinely need to delete and recreate a container to change basic options: if you need to change a port mapping or an environment variable, or if you need to update the image to have a newer version of the software, you'll be forced to delete the container.
This is reason enough for me to make deleting and recreating the container my standard process:
# docker build -t the-image . # can be done first if needed
docker stop the-container # so it can cleanly shut down and be removed
docker rm the-container
docker run --name the-container ... the-image
Other orchestrators like Docker Compose and Kubernetes are also set up to automatically delete and recreate the container (or Kubernetes pod) if there's a change; their standard workflows do not generally involve restarting containers in-place.
I almost never use docker start. In a Compose-based workflow I generally use only docker-compose up -d, letting it restart things if needed, and docker-compose down when I need to reclaim the CPU/memory resources the container stack was using, but not as part of routine work.
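As a sketch of that workflow (standard Compose commands, nothing exotic):
docker-compose up -d            # create or recreate whatever changed
docker-compose up -d --build    # the same, but rebuild images first
docker-compose down             # stop and remove the whole stack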
I'm speaking from my experience in the industry, so take my answer with a grain of salt; there may be no hard evidence or theoretical reference behind it.
TL;DR:
In short, you never need docker stop and docker start: this approach is unreliable, and you might lose the container and all the data inside it if proper precautions are not taken beforehand.
Long answer:
You should work with images, not containers. Whenever you need to keep some specific data or customization, bake it into an image: commit the container with docker commit and, if you need it elsewhere or for future use, export it with docker save.
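A minimal sketch of that flow, with hypothetical names (my-container, my-image):
docker commit my-container my-image:snapshot    # capture the container's filesystem as an image
docker save -o my-image.tar my-image:snapshot   # export the image to a tarball
docker load -i my-image.tar                     # re-import it later, or on another host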
If you're just testing on your local machine, or in a dev virtual machine on a remote host, you're free to use whichever you like; I personally use each approach in different scenarios.
But if you're talking about a production environment, you'd better use an orchestration tool; it could be something as simple and easy to work with as docker-compose or Docker Swarm, or even Kubernetes for more complex environments.
You'd better not take the second approach (docker run, docker stop & docker start) in those environments, because at any moment you might lose that container, and if you are solely dependent on that specific container or its data, you're gonna have a bad weekend.

docker run/start: Is there a significant impact on disk space by using the "run" command over and over?

I'm wondering what is the best practice for launching many containers (on the order of thousands per day) in terms of using docker container run or docker container start. I realize that start is used on a stopped container and run would be used to create a new container, but does it matter which one is used if the same underlying image is used across all the containers?
My guess is that since all the containers use the same image there would be very little overhead for creating many thousands of containers. In other words, just use docker container run over and over again.
Should I instead try to search for an existing container before starting a new one?
Containers that share an image also share its read-only layers; each container only adds a thin writable layer on top, so the per-container overhead is small, and space only accumulates if stopped containers (and whatever they wrote) are left lying around. The easiest solution is to pass --rm to docker run. This causes the container to be deleted as soon as it's done running, so repeated calls won't keep using more and more space.
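For example (a trivial sketch; alpine stands in for whatever image you actually use):
docker run --rm alpine echo "job done"   # container is removed as soon as it exits
docker ps -a                             # shows no leftover container from the run above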

Why does the time it takes to start a Docker container vary that much?

I have discovered that starting a Docker container varies a lot in time. E.g., I started a RabbitMQ container using the official image on Docker for Mac 17.06, and the very same docker run … call took anywhere from 0.9 s to 2.5 s, with no specific pattern occurring.
And it is not only RabbitMQ; it happens with other containers as well: the time to run them varies a lot.
Why is that? Is there something I can do to make startup times more deterministic? In other words: is there something I might be doing wrong that causes this strange behavior?
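For anyone trying to reproduce the measurement, a rough sketch; the tag and iteration count are arbitrary, and overriding the command with true means only the container start itself is timed, not the RabbitMQ server boot:
for i in 1 2 3 4 5; do
  time docker run --rm rabbitmq:3 true
done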

Can Docker Engine start containers in parallel

If I have scripts issuing docker run commands in parallel, the Docker engine appears to handle these commands in series. Since running a minimal container image with docker run takes around 100 ms, does this mean issuing commands to run 1000 containers will take the engine 100 ms x 1000 = 100 s, or nearly 2 minutes? Is there some reason why the Docker engine is serial instead of parallel? How do people get around this?
How do people get around this?
a/ They don't start 1000 containers at the same time.
b/ If they do, they might use a cluster management system like Docker Swarm to manage the whole process.
c/ They do run 1000 containers, but start them in advance in order to absorb the startup time.
Truly parallelizing docker run commands could be tricky, considering some of those commands might depend on other containers being created/started first (as with docker run --volumes-from=xxx).
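That said, nothing stops you from parallelizing on the client side when the containers are independent of each other. A hedged sketch with GNU xargs (the image and container names are placeholders):
# launch 100 independent containers, with at most 10 docker run calls in flight at once
seq 1 100 | xargs -P 10 -I{} docker run -d --name job-{} alpine sleep 60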
