I am running my Gatling (https://gatling.io/) based load tests from inside a Docker container.
The last line in my Dockerfile is:
ENTRYPOINT ["/bin/scripts/run_test.sh"]
My tests are packaged inside a jar file, which is launched by a shell script.
The jar runs fine when I launch it directly on the machine, but when I run the same jar inside the Docker container it exits gracefully after finishing less than 50% of the job.
My tests perform the following functions:
1. Make an API POST call.
2. Poll for the status of my request until it is fully processed (with a delay of 30 seconds between consecutive polling attempts).
3. Make a GET call for the same request to obtain the complete response.
My Docker container exits with status code 0, which means there are no errors.
I tried using -d with docker run, and nohup in the Dockerfile (CMD ["nohup","/bin/scripts/run_test.sh"]), but neither worked.
I suspect that the shell inside the container is getting killed, which causes the process to exit gracefully.
How can I run this process so that it keeps running in the background until all the operations finish?
Also, is there a way to get more verbose logs from Docker apart from docker logs?
Related
I'm running an application in Tomcat. It generates some temporary files while it runs. When we shut down Tomcat using shutdown.sh, or stop it from IntelliJ, the temp files are converted to CSV. This is a built-in feature of my application.
Now when I run the same application in Docker, the temp files are not converted to CSV as I expect them to be.
I can see ENTRYPOINT in the Dockerfile, which indicates the startup script for the container. I don't know how docker stop works internally, what it triggers when we execute docker stop <container name>, or where I should define a script to run on docker stop.
I also tried docker stop -t 10 <container name>, thinking that Tomcat might need time to shut down and that Docker might be killing the Tomcat process, so I gave docker stop a 10-second timeout. But no luck.
I think Docker is killing Tomcat when I execute docker stop.
Any guidance will be appreciated.
First, check this reference about processes running as PID 1.
Steps to perform on your system:
Step 1: Check which script file your ENTRYPOINT points to.
Step 2: In that script (entry.sh), find the line where you start Tomcat.
Step 3: Start Tomcat with "exec sh catalina.sh run".
Step 4: Check with "docker exec <container_name> ps -ef" that Tomcat is running as PID 1.
Step 5: Rebuild the Docker image.
Step 6: docker stop will now send the shutdown signal to Tomcat and it will work.
To observe logs, use docker logs -f <container_name>.
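The reason exec matters in Step 3: it replaces the entrypoint shell with Tomcat instead of forking a child, so Tomcat keeps PID 1 and directly receives the SIGTERM that docker stop sends. A minimal plain-shell demonstration of that property (no Docker required; a sketch, not part of the original answer):

```shell
#!/bin/sh
# Print the PID of a shell, then the PID of the process it exec's into.
# Because exec replaces the process instead of forking, both PIDs are the
# same; this is what keeps Tomcat at PID 1 when catalina.sh is exec'd.
pids=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
first=$(printf '%s\n' "$pids" | sed -n 1p)
second=$(printf '%s\n' "$pids" | sed -n 2p)
if [ "$first" = "$second" ]; then
  echo "exec preserved the PID"
fi
```

Without exec, the shell would stay at PID 1, Tomcat would run as a child, and docker stop's SIGTERM would hit the shell (which does not forward it) instead of Tomcat.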
I have an ECS cluster that uses an image hosted in AWS ECR. The Dockerfile executes a script in its entrypoint. My cluster is able to spin up instances, but they then go into a stopped state. The only error it gives me is as follows:
Exit Code 0
Entry point ["/tmp/init.sh"]
The only information given to me is the reason the container stopped:
Stopped reason Essential container in task exited
Any advice on how I can fix this would be helpful.
I tried running the container locally using the following: docker run -it application /bin/sh
For some reason I am unable to get a shell in the container this way.
Any advice would be appreciated.
What does the init.sh script do? That message isn't necessarily bad. It may just mean that your script has completed and nothing kept running in the foreground, so your container exits.
You could run the container locally with a shell (as you did) and launch the script manually from there, or you could run the container without /bin/sh and let the script execute to see what happens. If the script exits locally as well, then that is the expected behavior, and anything you need to debug you can debug locally.
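The lifecycle described above is easy to reproduce with plain shell, no ECS or Docker needed: the container lives exactly as long as its main process, and "Exit Code 0" just means that process returned success. A sketch (the init.sh body below is a hypothetical stand-in, not your real script):

```shell
#!/bin/sh
# Stand-in for /tmp/init.sh: does some one-off work and returns.
cat > /tmp/init.sh <<'EOF'
#!/bin/sh
echo "doing one-off setup work"
# No long-running foreground process follows, so the main process
# (and therefore the container) exits right here with status 0.
EOF
chmod +x /tmp/init.sh
/tmp/init.sh
status=$?
echo "main process finished with status $status"
```

Separately, about not getting a shell: when an image defines an ENTRYPOINT, trailing arguments to docker run (like /bin/sh above) are passed to the entrypoint as arguments rather than replacing it, so the shell never starts. Overriding with docker run -it --entrypoint /bin/sh application gets you the shell instead.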
I am running a jar package inside a Docker container. I've noticed that when there is a database connection timeout or a Kafka connection issue, the container fails. However, things are fine if I print the Java error log to the console or a log file. Can anyone clarify the logic that defines a container as failed/errored? Thanks!
Well, there is no such thing as a failed/erroneous container. A Docker image can have a default ENTRYPOINT or CMD which executes when the container starts, but when that command ends, the container's lifecycle ends as well.
I assume you run some server app in your Docker container that serves forever, which makes one think that Docker containers all run without stopping. Your container, which should always be running, stops after your app crashes. You can see the details in docker logs if you didn't run it with the --rm option. Try docker ps -a to see your container with Exited status, then inspect its execution logs or extract files from its filesystem to debug what went wrong.
To the extent Docker has this concept at all, it follows normal Unix semantics. The container runs a single process, and when that process exits, the container exits too. If the process exits with a status code of 0 it is "successful" and if it exits with any other status code it "fails".
In the context of a Java container, Is there a complete List of JVM exit codes asserts that a JVM can exit with status code 0 ("success") even if the program terminates with an uncaught exception (for instance, one thrown on a non-main thread); it only reliably returns "failure" if the JVM itself fails in some way.
The most significant place this matters is restart policies. If you start your container with docker run --restart on-failure, a run that ends with exit status 0 won't be considered a "failure" and your container won't restart.
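The Unix semantics above can be seen with any shell: the "container exit code" is simply the main process's exit status, and --restart on-failure reacts only to nonzero values. A quick sketch:

```shell
#!/bin/sh
# Simulate a "successful" main process and a "failed" one.
sh -c 'exit 0'
ok_status=$?     # 0: Docker reports success; on-failure would NOT restart
sh -c 'exit 3'
fail_status=$?   # 3: Docker reports failure; on-failure WOULD restart
echo "success run -> $ok_status, failing run -> $fail_status"
```

Running this prints success run -> 0, failing run -> 3; a container whose entrypoint ended the same two ways would be reported identically by docker ps -a.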
I have a process that runs in the background of my Dockerized website. I am currently using the forever npm package to keep it running.
I would like to clean up the process cleanly when the container is stopped or killed.
I know that all processes started in the container are killed after 10 seconds of waiting, but I'd rather tell forever to stop the process on this signal...
How do I write these cleanup scripts into my Dockerfile so they execute when the container is stopped or killed?
In your script (entrypoint or command) you should run your command as exec command_name "$@", and your application has to handle the signals.
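To make the exec "$@" pattern concrete, here is a minimal entrypoint sketch (the file name is illustrative). Because exec replaces the wrapper shell, the launched command becomes the container's main process and receives docker stop's SIGTERM directly:

```shell
#!/bin/sh
# Write a minimal entrypoint that exec's whatever command it is given.
cat > /tmp/entry.sh <<'EOF'
#!/bin/sh
# ...any setup steps would go here...
exec "$@"   # replace this shell with the real command; signals go straight to it
EOF
chmod +x /tmp/entry.sh
out=$(/tmp/entry.sh echo "running as the main process")
echo "$out"
```

In a Dockerfile this would be ENTRYPOINT ["/entry.sh"] plus a CMD naming the real command; forever (or node itself) would then see SIGTERM and can shut down cleanly.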
I am trying to create docker container using dockerfile where script-entry.sh is to be executed when the containers starts and script-exit.sh to be executed when the container stops.
ENTRYPOINT helped to accomplish the first part of the problem where script-entry.sh runs on startup.
How will i make sure the script-exit.sh is executed on docker exit/stop ?
docker stop sends a SIGTERM signal to the main process running inside the Docker container (the entrypoint script), so you need a way to catch the signal and then trigger the exit script.
See this link for an explanation of signal trapping, with an example near the end of the page.
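As a sketch of the signal-trapping idea (with a hypothetical cleanup function standing in for script-exit.sh), the entrypoint can trap SIGTERM, run its cleanup, and exit. The demo below sends the same SIGTERM that docker stop would:

```shell
#!/bin/sh
# An entrypoint that runs cleanup (stand-in for script-exit.sh) on SIGTERM.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
cleanup() {
  echo "cleanup ran" > /tmp/cleanup.log   # here you would call script-exit.sh
  exit 0
}
trap cleanup TERM INT
echo "started"
# Sleep in the background and wait, so an arriving signal can interrupt
# the wait and fire the trap immediately.
while true; do sleep 1 & wait $!; done
EOF
chmod +x /tmp/entrypoint.sh
: > /tmp/cleanup.log
/tmp/entrypoint.sh &
pid=$!
sleep 1
kill -TERM "$pid"        # the signal `docker stop` sends to PID 1
wait "$pid" 2>/dev/null
cat /tmp/cleanup.log
```

The sleep-in-background-plus-wait loop matters: a plain foreground sleep would delay the trap until the sleep finished, whereas wait is interrupted by the signal at once.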
Create a script, and save it as a bash file, that contains that following:
CONTAINER_NAME="someNameHere"
docker exec -it "$CONTAINER_NAME" bash -c "sh script-exit.sh"
docker stop "$CONTAINER_NAME"
Run that file instead of running docker stop, and that should do the trick. You can set up an alias for it as well.
As for automating it inside of Docker itself, I've never seen it done before. Good luck figuring it out, if that's the road you want to take.