Restart task in docker service after a certain time - docker

I have a swarm with 3 nodes. On it, I want to launch one service for a database and another, with some replicas, that runs a Python application. The program takes approximately 30 minutes to finish. After that, the container shuts down and a new one starts. Sometimes, however, a problem occurs and the container does not stop. Is there any option I can use when I launch the service so that, after 1 hour, a container is automatically killed and a new one is created?

You can write a small application against the Docker Remote API that creates the container, deploys it to the swarm, waits for one hour, and then deletes it. This is not a feature to look for in Docker itself; you have to implement it yourself using the Docker API.
You can find a complete list of Docker API client libraries here to help you get started.
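As an illustration of that approach, here is a minimal watchdog sketch assuming the Docker SDK for Python (docker-py); the service name "worker" and the one-hour limit are placeholders. It force-removes any task container of the service that has run longer than the limit, and Swarm then schedules a replacement task:

```python
# Hypothetical watchdog: force-remove task containers of a Swarm service
# after they have run for more than MAX_AGE seconds. Swarm notices the
# missing task and starts a new one. Run one copy per node (e.g. as a
# global service) so every node's containers are covered.
import time
from datetime import datetime, timezone

import docker

SERVICE_NAME = "worker"   # placeholder service name
MAX_AGE = 60 * 60         # one hour, in seconds

client = docker.from_env()

while True:
    label = f"com.docker.swarm.service.name={SERVICE_NAME}"
    for c in client.containers.list(filters={"label": label}):
        # StartedAt looks like "2023-05-01T10:00:00.123456789Z";
        # drop the fractional seconds so fromisoformat can parse it.
        started = datetime.fromisoformat(
            c.attrs["State"]["StartedAt"].split(".")[0] + "+00:00")
        age = (datetime.now(timezone.utc) - started).total_seconds()
        if age > MAX_AGE:
            c.remove(force=True)   # kill and delete the stuck container
    time.sleep(60)
```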

Related

Is there a way to set the "--rm" option for a docker container deployed in a GCP compute instance?

I'm admittedly very new to Docker so this might be a dumb question but here it goes.
I have a Python ETL script that I've packaged in a Docker container, essentially following this tutorial. Then, using Cloud Functions and Cloud Scheduler, I have the instance start every hour, run the sync, and then shut down.
I've run into an issue, though: after this process has been running for a while, the VM runs out of hard drive space. The script doesn't require any storage or persistence of state - it pulls any state data from external systems and only uses temporary files, which are supposed to be deleted when the machine shuts down.
This has caused particular problems where updates I make to the script stop working because the machine doesn't have the space to download the latest version of the container.
I'm guessing it's either logs or perhaps files created automatically to try to persist the state - either within the Docker container or on the VM.
I'm wondering whether, if I could get the VM to run the container with the "--rm" flag so that the container was removed when it finished, that would solve this problem. It would also, in theory, guarantee that I'm always starting with the most recent image.
The trouble is, I can't for the life of me find a way to configure the "--rm" option within the instance settings, and the documentation for container options only covers passing arguments to the container ENTRYPOINT, not the docker run options (docker run [OPTIONS] IMAGE [COMMAND] [ARG...]).
I feel like I'm either missing something obvious or it's not designed to be used this way. Is this something that can be configured in the Dockerfile or is there a different way I have to set up the VM in the first place?
Basically I just want the docker image to be pulled fresh and run each time and not leave any remnants on the VM that will slowly run out of space.
Also, I know Cloud Run might work in some similar situations but I need the script to be able to run for as long as it needs to (particularly at the start when it's backfilling data) and so the 15 minute cap on runtime would be a problem.
Any suggestions would be appreciated!
Note: I'm posting this as an answer because I need more space than a comment allows. If anyone feels it is not a good answer and wants it deleted, I'll be happy to do so.
Recapping the story: we have a Compute Engine instance configured to start a Docker container. The instance runs the container and then we stop it. An hour later we restart it, let it run, and then stop it again, and this continues on into the future. What we find is that the disk associated with the Compute Engine instance fills up and things break. The thinking is that the container inside the Compute Engine instance is created at the first launch and then, each time the instance is restarted, it is "re-used" rather than a brand new container instance being created. This means that resources consumed by the container from one run to the next (e.g. disk storage) continue to grow.
What we would like to happen is that when the Compute Engine starts, it will always create a brand new instance of the container with no history / resource usage of the past. This means that we won't consume resources over time.
One way to achieve this outside of GCP would be to start the container through Docker with the "--rm" flag. This means that when the container ends, it will be auto-deleted and hence there will be no previous container to start the next time the Compute Engine starts. Again ... this is a recap.
If we dig into how GCP Compute Engine instances work as they relate to containers, we come across a package called Konlet. This is the package responsible for loading the container on the Compute Engine instance. It appears to be itself a Docker container application written in Go. It reads the metadata associated with the Compute Engine instance and, based on that, makes API calls to Docker to launch the target container. The first thing to see from this is that the target Docker container does not appear to be launched through a simple docker command line, which implies that we can't "simply" edit a script.
Konlet is open source, so in principle we could study it in detail and see if there are special flags to achieve the equivalent of --rm. However, my immediate recommendation is to post an issue on the Konlet GitHub site and ask the author whether there is a --rm equivalent option for Konlet and, if not, whether one could be added (and, if it can't be, what the higher-level thinking is).
In the meantime, let me offer you an alternative. If I am hearing you correctly, every hour you fire a job to start a Compute Engine instance, do work, and then shut the instance down. This instance hosts your "leaky" Docker container. What if, instead of starting/stopping your instance, you created/destroyed it? While the creation/destruction steps may take a little longer to run, given that you are running this once an hour, a minute or two of delay might not be egregious.
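As a rough sketch of that alternative (not part of the original answer), assuming the google-api-python-client Compute Engine API and a pre-built instance template, the scheduled Cloud Function could delete the old instance and create a fresh one each run; the project, zone, instance, and template names below are placeholders:

```python
# Hypothetical create/destroy cycle using the Compute Engine v1 API.
# A brand new instance (and boot disk) each run means no leftover
# container state or disk usage accumulates between runs.
from googleapiclient import discovery
from googleapiclient.errors import HttpError

PROJECT = "my-project"     # placeholder
ZONE = "us-central1-a"     # placeholder
NAME = "etl-worker"        # placeholder
TEMPLATE = f"projects/{PROJECT}/global/instanceTemplates/etl-template"

compute = discovery.build("compute", "v1")

def recreate_instance(request):
    # Delete the previous instance if it still exists, and wait for the
    # delete operation to finish before re-creating it.
    try:
        op = compute.instances().delete(
            project=PROJECT, zone=ZONE, instance=NAME).execute()
        compute.zoneOperations().wait(
            project=PROJECT, zone=ZONE, operation=op["name"]).execute()
    except HttpError as err:
        if err.resp.status != 404:   # "already gone" is fine
            raise
    # Create a fresh instance from the template; the container image is
    # pulled anew on first boot.
    compute.instances().insert(
        project=PROJECT, zone=ZONE,
        sourceInstanceTemplate=TEMPLATE,
        body={"name": NAME}).execute()
    return "ok"
```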

What steps does docker swarm take when doing a rolling update with start-first?

When Docker Swarm does a rolling update with stop-first on multiple running container instances, it takes (among others) the following steps, in order, for each container in turn:
Remove the container from its internal load balancer.
Send a SIGTERM signal to the container.
If the container is still running after the stop-grace-period, send a SIGKILL signal.
Start a new container.
Add the new container to its internal load balancer.
But in which order are the steps taken when I want to do a rolling update with start-first?
Will the old and new containers be available through the load balancer at the same time (until the old one has stopped and been removed from the LB)?
Or will the new container first be started but not added to the load balancer until the old container is stopped and removed from it?
The latter would be necessary for processes that are bound to a specific instance of a service (container).
But in which order are the steps taken when I want to do a rolling update with start-first?
It's basically the reverse: the new container starts and is added to the LB, then the old one is removed from the LB and sent the shutdown signal.
Will the old and new containers be available through the load balancer at the same time (until the old one has stopped and been removed from the LB)?
Yes.
A reminder that most of this will not be seamless (or near zero downtime) unless you (at a minimum) have healthchecks enabled in the service. I talk about this a little in this YouTube video.
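To illustrate (a sketch under assumptions, not something from the answer above), a service created with the Docker SDK for Python can combine a start-first update order with a healthcheck, so the replacement task only joins the load balancer once it reports healthy; the image name and health command are placeholders. The same behaviour corresponds to --update-order start-first on docker service create/update.

```python
# Hypothetical service definition: start-first rolling updates plus a
# healthcheck so new tasks are only load-balanced once healthy.
import docker
from docker.types import EndpointSpec, Healthcheck, ServiceMode, UpdateConfig

client = docker.from_env()

client.services.create(
    image="registry.example.com/myapp:latest",   # placeholder image
    name="myapp",
    mode=ServiceMode("replicated", replicas=3),
    endpoint_spec=EndpointSpec(ports={8080: 8080}),
    update_config=UpdateConfig(
        parallelism=1,
        order="start-first",   # start the new task before stopping the old one
    ),
    healthcheck=Healthcheck(
        # placeholder check; times are in nanoseconds per the Docker API
        test=["CMD", "curl", "-f", "http://localhost:8080/health"],
        interval=10 * 1_000_000_000,
        timeout=5 * 1_000_000_000,
        retries=3,
    ),
)
```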

How to delay Docker Swarm updating a stateful container until it's ready?

Problem domain
Imagine that a stateful container is being managed by Swarm, e.g. a database, and another container relies on it, e.g. a service that is executing a long-running job (minutes, sometimes hours) that cannot tolerate the database (or even itself) going down while it's executing.
To give an example, a database importing a multi-GB dump.
There's also a CI/CD system in place which takes care of building new versions of the containers and deploying them to the Swarm, or pushing the image to Docker Hub which then calls a defined webhook which fires off the deployment event.
Question
Is there any way I can build my containers so that Swarm can know whether it's OK to update them or not? Similar to how HEALTHCHECK reports whether a container needs to be restarted, something that would let Swarm know that 'it's safe to restart this container now'.
Or is it the CI/CD system's responsibility to check whether the stateful containers are safe to restart, and only then issue the update command to swarm?
Thanks in advance!
Docker will not check with a container whether it is ready to be stopped; once you give Docker the command to stop a container, it will perform that action. However, it performs the stop in two steps. The first step is a SIGTERM that your container can trap and handle gracefully. By default, after 10 seconds, a SIGKILL is sent, which the Linux kernel applies immediately and which cannot be trapped by the container. For your goals, you'll want to make sure your app knows when it's safe to exit after receiving the first signal, and you'll probably want to extend the time between the two signals to much longer than 10 seconds.
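A minimal sketch of that signal handling inside the worker (assuming the job can be broken into safe-to-interrupt units of work; the loop body is a placeholder), combined with a longer grace period on the service (e.g. --stop-grace-period=1h), might look like this:

```python
# Hypothetical worker loop: trap SIGTERM and exit at the next safe point
# instead of dying mid-job. Swarm sends SIGTERM first and only sends
# SIGKILL after the service's stop-grace-period expires.
import signal
import sys
import time

stopping = False

def handle_sigterm(signum, frame):
    global stopping
    stopping = True   # finish the current unit of work, then exit

signal.signal(signal.SIGTERM, handle_sigterm)

while not stopping:
    # ... one safe-to-interrupt unit of work goes here (placeholder) ...
    time.sleep(1)

sys.exit(0)   # clean exit before the SIGKILL deadline
```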
The healthcheck won't tell docker that your container is at a safe point to stop. It does tell swarm when your container has finished starting, or when it's misbehaving and needs to be stopped and replaced. The healthcheck defines a command to run inside your container, and the exit code is checked for whether it's 0 (healthy) or 1 (unhealthy). No other exit codes are currently valid.
If you need more than the simple signal handling inside the container, then yes, you're likely moving up the stack to a ci/cd tool to manage the deployment.

Kubernetes: How do I deploy container from saved checkpoint?

I am using the experimental checkpoint feature to start up my app in the container and save its state.
I do so because tests on the app cannot be run in parallel and startup takes a long time.
I want to migrate to Kubernetes to manage the test containers:
Build and start up the app in the container
Save its state
Spin up X instances from the saved container
Run one test on each container
How do I use Kubernetes to do that?
I use GCP.
Container state migration (CRIU) is a feature that Docker has in an experimental state. According to the Kubernetes devs (https://github.com/kubernetes/kubernetes/issues/3949), it looks like it is not something Kubernetes will support in the short term. Therefore, you currently cannot migrate pods with checkpoints (i.e. the pod will need to start again). I'm not sure whether creating a container image of your already-started application could help; that would depend on how the container image was created.

How to keep a certain number of Docker containers running the same application and add/remove them as needed?

I've been working with Docker containers. What I've done is launch 5 containers running the same application; I use HAProxy to redirect requests to them, I added a volume to preserve data, and I set the restart policy to Always.
It works (so far this is my load balancing approach), but sometimes I need another container to join the pool as there might be more requests, or maybe at first I don't need 5 containers.
This is provided by the Swarm Mode addition in Docker 1.12. It includes orchestration that lets you not only scale your service up or down, but also recover from an outage by automatically rescheduling the jobs to run on other nodes.
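For instance (a sketch assuming the Docker SDK for Python and a placeholder service name "web"), scaling a replicated service becomes a one-liner once it is running in Swarm mode:

```python
# Hypothetical scaling call: Swarm adds or removes tasks to reach the
# requested replica count and reschedules them if a node goes down.
import docker

client = docker.from_env()
service = client.services.get("web")   # placeholder service name
service.scale(8)                       # scale up to 8 replicas
```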
If you don't want to use Docker 1.12 (yet!), you can also use a Service Discovery like Consul, register your containers inside and use a tool like Consul Template to regenerate your load balancer configuration accordingly.
I gave a talk about it 6 months ago. You can find the code and the configuration I used during my demo here: https://github.com/bargenson/dockerdemo
