Dockerfile: run commands on container stop signal - docker

I have a process that runs in the background of my Docker-contained website. I am currently using the forever npm package to keep this running.
I would like to clean up the process cleanly when the container is stopped or killed.
I know that all processes started in the container should be auto-killed after 10 seconds of waiting, but I'd rather tell forever to stop the process on this signal.
How do I write these cleanup scripts in my Dockerfile so they execute when a Docker container is being stopped or killed?

In your script (entrypoint, command) you should run the command with exec command_name "$@", and your application has to handle the signals.
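A minimal entrypoint sketch of that pattern (the script name and the setup comments are assumptions; only the exec "$@" idiom is from the answer):

```shell
#!/bin/sh
# entrypoint.sh -- do any setup here, then hand PID 1 to the real command.
set -e

# ... optional setup work (config templating, migrations, ...) ...

# exec replaces this shell with the command from CMD / `docker run`,
# so the application becomes PID 1 and receives SIGTERM from
# `docker stop` directly instead of it stopping at the wrapper shell.
exec "$@"
```

In the Dockerfile this would be paired with something like ENTRYPOINT ["/entrypoint.sh"] and a CMD holding the actual command (here, presumably the forever invocation).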

Related

Docker : entrypoint script exits much before expected

I am running my Gatling (https://gatling.io/) based load tests from inside a Docker container.
Last line in Dockerfile is
ENTRYPOINT ["/bin/scripts/run_test.sh"]
My tests are packaged inside a jar file which is launched using a shell script inside the jar.
The jar runs fine when launched directly on the machine, but when I run the same inside the Docker container it exits gracefully after finishing less than 50% of the job.
My tests perform the following functions:
make an API POST call.
poll for the status of my request till my request is fully processed (delay of 30 seconds between consecutive polling attempts).
make a GET call for the same request to obtain the complete response.
My Docker container exits with status code 0, which means there are no errors.
I tried using -d with docker run, and nohup for the same (CMD ["nohup","/bin/scripts/run_test.sh"]), but it didn't work.
I suspect that the shell inside the container is getting killed, which is causing the process to exit gracefully.
How can I run this process to make sure it runs in the background till all the operations get finished?
Also, is there a way to get more verbose logs from Docker apart from docker logs?

Docker container in ECS Fargate exits with code 0 when running script. Unable to run container to get to /bin/sh

I have an ECS Cluster that is using an image hosted in AWS ECR. The Dockerfile executes a script in its entrypoint attribute. My cluster is able to spin up instances, but then they go into a stopped state. The only error it is giving me is as follows:
Exit Code 0
Entry point ["/tmp/init.sh"]
The only information given to me is the reason the container stopped:
Stopped reason Essential container in task exited
Any advice on how I can fix this would be helpful.
I tried running the container locally using the following: docker run -it application /bin/sh
For some reason, when running the container, I am unable to get into it using /bin/sh.
Any advice would be appreciated.
What does the init.sh script do? That message isn't necessarily bad. It may just mean that your script has completed and no background process has started, so your container is exiting.
You could run the container locally with a shell (as you did) and launch the script manually from there, or you could just run the container without /bin/sh and let the script execute to see what happens. If the script exits locally as well, then that seems to be the proper behavior, and anything you need to debug you can debug locally.
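One detail worth noting about the local debugging attempt: when an ENTRYPOINT is set, a trailing /bin/sh in docker run -it application /bin/sh is passed as an argument to the entrypoint script rather than run as a shell, which may be why no shell appeared. A sketch of the two approaches (the image name "application" is the one from the question):

```shell
# Let the entrypoint run as it would in ECS and watch its output:
docker run --rm application

# Override the entrypoint to get an interactive shell instead;
# plain `docker run -it application /bin/sh` would hand /bin/sh to
# /tmp/init.sh as an argument rather than execute it.
docker run --rm -it --entrypoint /bin/sh application
```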

Run a script when docker is stopped

I am trying to create a Docker container, using a Dockerfile, where script-entry.sh is to be executed when the container starts and script-exit.sh when the container stops.
ENTRYPOINT helped to accomplish the first part of the problem, where script-entry.sh runs on startup.
How will I make sure script-exit.sh is executed on docker exit/stop?
docker stop sends a SIGTERM signal to the main process running inside the Docker container (the entry script). So you need a way to catch the signal and then trigger the exit script.
See this link for an explanation of signal trapping and an example (near the end of the page).
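A sketch of such a trap inside script-entry.sh (the main application command here is hypothetical; script-exit.sh is the name from the question):

```shell
#!/bin/sh
# script-entry.sh -- catch the SIGTERM that `docker stop` sends and
# run the exit hook before the container dies.

cleanup() {
    sh /script-exit.sh            # the cleanup script from the question
    kill -TERM "$child" 2>/dev/null
    wait "$child" 2>/dev/null
    exit 0
}
trap cleanup TERM INT

# Hypothetical main process: it must run in the background so the shell
# can handle the trap; `wait` keeps the script (and container) alive.
/usr/local/bin/app &
child=$!
wait "$child"
```

The main process is backgrounded and waited on deliberately: with a foreground child, the shell would not act on the trap until that child exited on its own.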
Create a script, and save it as a bash file, that contains that following:
CONTAINER_NAME="someNameHere"
docker exec -it $CONTAINER_NAME bash -c "sh script-exit.sh"
docker stop $CONTAINER_NAME
Run that file instead of running docker stop, and that should do the trick. You can setup an alias for that as well.
As for automating it inside of Docker itself, I've never seen it done before. Good luck figuring it out, if that's the road you want to take.

RUnit does not stop docker-compose's containers

I would like to have a RUnit service to supervise a set of containers launched by the docker-compose tool; here's my runit script:
In /etc/sv/app/run
#!/bin/bash
exec 2>&1
APP_HOME=/home/myapp
source $APP_HOME/env.sh
exec docker-compose -f $APP_HOME/docker-compose.yml up
Here's what I have then:
sv start app - launches the docker-compose thing just fine
sv stop app - stops the docker-compose process itself, but for some unknown reason it leaves the containers running
Is there any chance to have the stop command to stop containers as well? I thought that is what docker-compose should do when it gets stopped by RUnit.
I'm not familiar with docker (yet) but I have familiarity with runit.
When you issue sv stop app you are actually telling runsvdir to signal the runsv for your docker launch to tear down the process. If you need something to signal the container to shut down, it won't happen because runsv will haul off and kill any child processes that are attached. You may wish to read up on ./finish scripts, which are tasked with cleaning up things.
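A sketch of such a ./finish script for this service (the paths mirror the run script above; whether docker-compose down is the right teardown for this setup is an assumption):

```shell
#!/bin/bash
# /etc/sv/app/finish -- runit executes this after the run script's
# process has been taken down, so it is the place for cleanup that the
# killed docker-compose process never got to do.
APP_HOME=/home/myapp
source $APP_HOME/env.sh
exec docker-compose -f $APP_HOME/docker-compose.yml down
```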

Docker: launch 2 processes in the container

New to Docker and I'm reading that a Dockerfile can only have 1 CMD.
So how do I start both my database server and application server? Something like:
CMD /root/database/bin/server run &
CMD /root/appserver/bin/server run &
Docker can only start one process in a container - but that process can start whatever it likes.
Supervisord has been a popular choice for this; it will then go on to start whatever else you want/need.
Docker can run as many processes as you want to. It is no problem to run a database and an application server in the same container. However, you can only run one command in your container, so this command must start all other processes and it must run as long as your container runs (if it stops, your container will stop).
So start a shell script which itself will start all other things:
CMD /run.sh
The shell script could look like this:
#!/bin/sh
echo "Let's start up"

# Run your database server in background
/root/database/bin/server run &

# Run your app server (not in background, to keep the container up)
/root/appserver/bin/server run
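If you take the Supervisord route mentioned above instead, a minimal configuration sketch might look like this (the program paths are the ones from the question; nodaemon=true keeps supervisord in the foreground so the container stays up):

```ini
[supervisord]
nodaemon=true

[program:database]
command=/root/database/bin/server run

[program:appserver]
command=/root/appserver/bin/server run
```

with the Dockerfile then ending in something like CMD ["supervisord", "-c", "/etc/supervisord.conf"].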
