Can we add a delay to the Docker restart policy 'unless-stopped'? - docker

I am running a Docker container that needs to be restarted when it fails, but with a delay.
I've added restart-policy=unless-stopped, which does the trick, but in case my application crashes it immediately brings the container back up.
Can we add some delay to the unless-stopped restart policy without creating a Swarm?
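For reference, a minimal sketch of the setup described plus one common workaround; the image name my-app and the binary path are placeholders, and the sleep wrapper is an assumption on my part, not a built-in restart-policy option:

# current setup: restart automatically unless explicitly stopped
docker run -d --restart unless-stopped my-app

# workaround sketch: sleep inside the container before starting the app,
# so every (re)start is delayed by a fixed interval (assumes sh exists in the image)
docker run -d --restart unless-stopped \
  --entrypoint sh my-app -c 'sleep 10 && exec /app/my-app'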

Related

Why does a container running a console app simply exit after starting

I want to run a simple .NET Core console app in a container interactively. I am not able to do that: the container simply starts and then exits immediately without fully running the program.
All that the console app has is the following three statements.
Console.WriteLine("Hello World!");
var readText = Console.ReadLine(); // Wait for me to enter some text
Console.WriteLine(readText);
The last two lines make it interactive.
When I run the container, it prints the Hello World! But then it immediately exits, without waiting for me to enter some text. Why? What am I missing?
I was able to run a .NET Core web app in a container in a similar manner, and I am able to map the ports outside and within the container to successfully browse the web app. But when it comes to a console app, I am stumped.
I guess there could be something very simple that I am missing. It's driving me nuts.
The steps to reproduce are described below.
Launch VS2019 and create a new .NET Core console project.
Add a couple of statements to make it interactive.
Add Docker support to the created project: right-click the project, then Add -> Container Orchestrator Support.
Now Visual Studio creates a set of files and changes the csproj file as well.
In a PowerShell window, navigate to the folder containing the solution file and run the command "docker-compose up".
Once the images are built and the containers are up and running, we start to see the problem.
We can see Hello World! here, but it does not wait for me to type something. Type docker ps -a and you will see a container that has exited. When I try to start it using docker start -i or docker start -a, the container starts but exits immediately. What should I do to make the container run so that I can type something for my app running in it to read? You can see the same in Docker Desktop: even if I start the containers (using the start button Docker Desktop shows against each container), they simply stop again.
I had run web apps in containers before. With proper port mapping, a web app running inside a container can be accessed from outside. I had created a .NET Core web app along the lines described above, modified the docker-compose file to include port mapping, and when I do docker-compose up, the app is up and running. But with a console app, the container simply exits.
By default, you don't have an interactive TTY when the container is started with docker-compose up.
You need to add that to your service:
stdin_open: true
tty: true
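Note that even with these flags set, docker-compose up attaches every service's logs to one stream and does not forward your keyboard input; to interact with a single container you may still need to attach to it. The container name below is hypothetical (Compose typically names containers <project>_<service>_1; docker ps shows the real name):

docker-compose up -d
docker attach myproject_dokconsoleapp_1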
I found that docker-compose run is an alternative to the up command:
docker-compose run <service name from docker-compose.yml file>
Specifically, for my application, the docker-compose file looks as follows.
version: '3.4'

services:
  dokconsoleapp:
    image: ${DOCKER_REGISTRY-}dokconsoleapp
    build:
      context: .
      dockerfile: DokConsoleApp/Dockerfile
    # stdin_open: true
    # tty: true
Note the last two lines are commented out.
Now if I run
docker-compose run dokconsoleapp
the container runs to the end of the program, interactively waiting for me to type an input after Hello World!.
So the statements
stdin_open: true
tty: true
are not needed when you use docker-compose run instead of up.

How to reset the restart-policy for a Docker stack container

I'm deploying a container using docker stack with a restart policy. One of the container's attached volumes had some corrupted data, causing the container to repeatedly crash and eventually hit the max_attempts limit. I manually fixed the volume and now want to restart the container. I have been unable to find a graceful way to ask Docker "please reset the restart count on this container". Does one exist?
I was able to proceed by doing a service rm and then re-creating the service, but this seems a bit heavy-handed just for resetting a counter.
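One less drastic option, assuming it fits your situation, may be to force a service update; this recreates the service's tasks, which should start with a fresh restart count (the service name below is a placeholder):

docker service update --force mystack_myservice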

docker container lifecycle confusion

I am new to Docker, and I find that the definitions of a container's lifecycle differ a lot.
Here is what "Manning.Docker.in.Action.2016.3" shows: [lifecycle diagram not reproduced]
Here is what Google gives me:
https://medium.com/@nagarwal/lifecycle-of-docker-container-d2da9f85959
Here is what the official documentation says:
status: One of created, restarting, running, removing, paused, exited, or dead
https://docs.docker.com/engine/reference/commandline/ps/
So what's going on here? I guess some new states (and some renaming) were introduced in newer versions of Docker?
Thanks in advance.
Your linked diagram separates docker create from docker start, includes "die" as a state transition, and shows how to get to the "restarting" state. That's all valid, though it leads to a more complicated state machine.
(docker create wasn't in the very first versions of Docker, but it appeared in Docker 1.3.0 in 2014, which should predate your diagram.)
Practically I might suggest an even simpler state machine:
--------> running -+--------> stopped --------->
   run             |   stop                 rm
                   \--------> exited ---------->
                       process exits        rm
That is, never try to restart a container or make changes inside a running container; if you need to tweak anything, delete the existing container and create a new one. This gives you a consistent environment (when the main container process starts you always know what's in its filesystem, up to mounted data). It also matches what happens in cluster environments like Kubernetes, where the cluster manager will routinely create and delete containers for you.
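A sketch of that delete-and-recreate workflow (my_app and my_image are placeholder names):

docker stop my_app                       # running -> stopped
docker rm my_app                         # stopped -> removed
docker run -d --name my_app my_image     # fresh container, known-good filesystem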
When you get into a situation where the internet gives you different answers, you should consider trying it yourself, especially with a technology like Docker, where it is pretty simple to run tests. For example:
I want to run a container (I will use nginx):
docker run -d nginx
docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS    NAMES
258cd2edbed8   nginx   "nginx -g 'daemon of…"   3 seconds ago   Up 2 seconds   80/tcp   jolly_golick
Note: Docker will keep a container running only if there is a process running in it.
If you started a debian container (for example), you would see it stop immediately, as there is nothing running in it. So you could do
docker run -d debian sleep 10
and see that the container stays up for 10 seconds.
When a container is running, you can do some things to it, but not others; for example, you can't remove it. To remove a container, you need to stop it first (or kill it), or force its removal.
Note: You would get all this information from Docker itself if you played around with it, since it returns these messages. For example, if you try to remove a running container, you get this error:
Error response from daemon: You cannot remove a running container 258cd2edbed85bed23ab543312968bd893c1fbd9ba81de40366337f434daedff. Stop the container before attempting removal or force remove
I can't cover all possible combinations here; you would get a similar error if you tried to remove a paused container. Just play with it, and you will get a clear picture of how it works.
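For completeness, the removal paths mentioned above look like this, reusing the container ID from the example:

docker stop 258cd2edbed8 && docker rm 258cd2edbed8   # stop first, then remove
docker rm -f 258cd2edbed8                            # or force removal in one step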

Docker Compose "Ghost Containers"

I am using docker-compose to deploy an application combining a number of different images.
Using Docker version 18.09.2, build 6247962
Docker-compose 1.117
Primarily, I have
ZooKeeper
Kafka
MYSQLDb
I noticed a strange problem where I could not start my application with docker-compose up due to a port already being assigned. I then checked docker stats and saw that there were three containers named
"test_ZooKeeper.1slehgaior"
"test_Kafka.kgjdorgsr"
"test_MYSQLDB.kgjdorgsr"
I have tried killing the containers, removing them, and pruning the system. Whenever I kill one of these containers, it instantly restarts, and I cannot for the life of me determine where they are being created from!
Please help :)
If you look into your docker-compose.yaml, I'm pretty sure you'll find a restart: always somewhere. If you want to correctly shut down a running Docker container managed by docker-compose, one way is to use docker-compose down from the directory where your YAML file sits.
More information on the subject:
https://docs.docker.com/config/containers/start-containers-automatically/
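As an illustration, the kind of service definition that produces this behavior might look like the sketch below (service and image names are made up). With restart: always, the daemon brings the container back every time it is killed, while docker-compose down stops the services and removes their containers and networks:

services:
  zookeeper:
    image: zookeeper:latest
    restart: always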
Otherwise, you might try stopping a single running container instead of killing it, which, if I remember correctly, tells Docker not to restart it again, while a killed container looks to the daemon like it has just crashed. Not too sure about the last part, though.

Docker Process Management

I have a deployed application running inside a Docker container; it is, in effect, a websocket client that runs forever. On every deploy I rebuild the container and start it with docker run, using the command set in the Dockerfile.
Now, I've noticed a few times that the process occasionally dies without restarting. When running docker ps, I can see that the container is up, and has been up for 2 weeks, yet the process running inside of it has died without the host being any the wiser.
Do I need to go so far as to have a process manager inside the Docker container to manage the containerized process?
EDIT:
Dockerfile: https://github.com/DVG/catpen-edi/blob/master/Dockerfile
We've developed a process manager tailor-made for Docker containers and have been using it with quite a bit of success to solve exactly the problem you describe. The best starting point is to take a look at chaperone-docker on GitHub. The README on the first page contains a quick link to a minimal base image as well as a fully configured LAMP stack, so you can try it out and see what a fully configured image looks like. It's open-source and fully documented.
This is a very interesting problem, related to PID 1 and the fact that Docker replaces PID 1 with the command specified in CMD or ENTRYPOINT. What's happening is that the child process isn't automagically adopted by anything if the parent dies, and it becomes an orphan (since there is no PID 1 in the sense of a traditional init system like you're used to). Here is some excellent reading to give you a few ideas. You may get some mileage out of their baseimage-docker image, which comes with their simplified init system (my_init) and will solve some of this problem for you. However, I would strongly caution you against automatically adopting the Phusion mindset for all of your containers, as there exists some ideological friction in that space. I can't recall any discussion on Docker's GitHub about a potential minimal init system to solve this problem, but I can't imagine it will be a problem forever. Good luck!
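Worth knowing: Docker itself does ship a minimal init for exactly this case, the --init flag, which runs a small init process as PID 1 that reaps orphaned children and forwards signals (my-image below is a placeholder):

docker run -d --init my-image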
If you have two Ruby processes, it sounds like the child hasn't exited; the application has just stopped working. It's likely the EventMachine reactor is sitting in the background.
Does the EDI app really need to spawn the additional Ruby process? It only adds another layer between Docker and your app. Run the server directly with CMD ["ruby", "boot.rb"]. If you find the problem still occurs with a single process, then you will need to find what is causing your app to hang.
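A hypothetical Dockerfile along those lines (not the repository's actual file; base image and paths are assumptions):

FROM ruby:2.7
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
# run the app directly as PID 1 instead of spawning it from a wrapper process
CMD ["ruby", "boot.rb"]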
When a process is running as PID 1 in Docker, it will need to handle the SIGINT and SIGTERM signals too.
# Trap ^C
Signal.trap("INT") {
  shut_down   # your app's cleanup routine
  exit
}

# Trap `kill`
Signal.trap("TERM") {
  shut_down
  exit
}
Docker also has restart policies for when the container does actually die.
docker run --restart=always
no
    Do not automatically restart the container when it exits. This is the default.

on-failure[:max-retries]
    Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.

always
    Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.

unless-stopped
    Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put to a stopped state before.
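For example, to have the daemon restart a crashing container at most five times before giving up (my-image is a placeholder):

docker run -d --restart=on-failure:5 my-image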
