What will cause a container to fail and exit - docker

I am running a jar package inside a Docker container. I have noticed that when there is a database connection timeout or a Kafka connection issue, the container fails. However, everything is fine if I just print the Java error log to the console or to a log file. Can anyone clarify the logic that defines a container as failed/errored? Thanks!

Well, there is no such thing as a failed/erroneous container. A Docker image can have a default ENTRYPOINT or CMD which is executed when the container starts, but when that command ends, the container's lifecycle ends as well.
I assume you run some server app in your Docker container which serves forever, which makes one think that Docker containers always run without stopping. Your container, which should always be running, stops after your app crashes; you can see the details in the Docker logs if you didn't run it with the --rm option. Try docker ps -a to see your container with exited status, then look at its execution logs or extract files from its filesystem to debug what went wrong.
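For example (the container ID and the log path below are placeholders), a minimal debugging session could look like this:
# list all containers, including exited ones, with their exit status
$ docker ps -a
# read the stdout/stderr the process wrote before it exited
$ docker logs <container-id>
# copy files out of the stopped container's filesystem
$ docker cp <container-id>:/path/to/app.log ./app.log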

To the extent Docker has this concept at all, it follows normal Unix semantics. The container runs a single process, and when that process exits, the container exits too. If the process exits with a status code of 0 it is "successful" and if it exits with any other status code it "fails".
In the context of a Java container, "Is there a complete List of JVM exit codes" asserts that a JVM will always exit with status code 0 ("success") even if the program terminates with an uncaught exception; it will only return "failure" if the JVM itself fails in some way.
The most significant place this can matter is around a restart policy. If you start your container with docker run --restart on-failure, an uncaught exception won't be considered "failure" and your container won't restart.
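A quick way to see this in action (the container names here are made up for the example):
# exit status 0 counts as "success": with --restart on-failure this container will not be restarted
$ docker run -d --restart on-failure --name ok busybox sh -c "exit 0"
# a non-zero status counts as "failure" and will trigger a restart
$ docker run -d --restart on-failure --name broken busybox sh -c "exit 3"
# inspect what Docker recorded for a container's last run
$ docker inspect --format '{{.State.ExitCode}}' ok
So if the JVM really does exit with status 0 after an uncaught exception, docker inspect will report ExitCode 0 and on-failure will never kick in.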

Related

Why am I unable to exec into the docker container if there is an error in the container

I am running an Nginx Docker container and there are some errors in it.
I cannot exec into the container because it is stopped; how can I exec into the stopped container?
How can I avoid the container stopping if there is an error in the container?
Can someone help me by answering the above?
It seems like the normal execution within your container causes it to stop. So what you need to do is create a container with an overridden entrypoint (the procedure/command that is executed on container startup).
A good place to start is by creating a shell instance where you can look around, and maybe even execute the same command manually for debugging purposes.
So let's say I have an image testimage:latest that on startup executes /bin/my_script.sh, which fails.
I can then start a container with a shell instance
$ docker run --entrypoint sh -it testimage:latest
And within that container I can run the script, and check the output
in_container$ /bin/my_script.sh
I cannot exec into the container because it is stopped; how can I exec into the stopped container?
No, you cannot exec into a stopped container; you'd need to start the container up again before being able to exec into it.
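For example (the container name stopped_nginx is just illustrative), you can either restart it, if it stays up long enough to exec into, or pull information out of it while it remains stopped:
# restart the stopped container, then exec into it
$ docker start stopped_nginx
$ docker exec -it stopped_nginx sh
# or, without starting it, read its logs and copy files out
$ docker logs stopped_nginx
$ docker cp stopped_nginx:/var/log/nginx/error.log ./error.log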
How can I avoid the container stopping if there is an error in the container?
As far as I am aware there is nothing to prevent a container stopping when there are errors; however, I have found "How to prevent a container from shutting down during an error?" which might help you with what you need (please give them credit if it does work).
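A common debugging workaround (a sketch only; the image and container names are assumptions) is to override the entrypoint with something that never exits, so the container stays up and you can exec into it:
# keep the container alive regardless of what its original entrypoint would do
$ docker run -d --name nginx_debug --entrypoint sleep nginx:latest infinity
# now get a shell inside it and investigate, or try starting nginx by hand
$ docker exec -it nginx_debug sh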

Docker container in ECS Fargate exits with code 0 when running script. Unable to run container to get to /bin/sh

I have an ECS Cluster that is using an image hosted in AWS ECR. The Dockerfile executes a script via its entrypoint attribute. My cluster is able to spin up instances, but they then go into a stopped state. The only error it is giving me is as follows:
Exit Code 0
Entry point ["/tmp/init.sh"]
The only information given to me is the reason the container stopped:
Stopped reason Essential container in task exited
Any advice on how I can fix this would be helpful.
I tried running the container locally using the following: docker run -it application /bin/sh
For some reason, when running the container this way, I am unable to get into it using /bin/sh.
Any advice would be appreciated.
What does the init.sh script do? That message isn't necessarily bad. It may just mean that your script has completed and no long-running process was started, so your container is exiting.
You could run the container locally with a shell (as you did) and launch the script manually from there, or you could just run the container without /bin/sh and let the script execute to see what happens. If the script exits locally as well, then that seems to be the proper behavior, and anything you need to debug you can debug locally.
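A minimal local session could look like this (the image name application is taken from the question, and the entrypoint path is the one shown above):
# run the image as ECS would and check the script's exit status
$ docker run --rm application
$ echo $?
# or override the entrypoint to get a shell, then run the script by hand
$ docker run --rm -it --entrypoint sh application
in_container$ /tmp/init.sh
Note that docker run -it application /bin/sh most likely passed /bin/sh as an argument to the existing entrypoint (/tmp/init.sh) rather than replacing it, which would explain why no shell ever appeared; --entrypoint sh overrides the entrypoint itself.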

Stop Synology notification "Docker container stopped unexpectedly"

I have a container with one Node.js script which is launched with CMD npm start. The script runs, does some work, and exits. The node process exits because no work is pending. The npm start exits successfully. The container then stops.
I run this container on a Synology NAS from a cronjob via docker start xxxx. When it finishes, I get an alert Docker container xxxx stopped unexpectedly from their alert system. docker container ls -a shows its status as Exited (0) 5 hours ago. If I monitor docker events I see the event die with exitCode=0.
It seems like I need to signal to the system that the exit is expected by producing a stop event instead of a die event. Is that something I can do in my image or on the docker start command line?
The Synology Docker package will generate the notification Docker container xxxx stopped unexpectedly when the following two conditions are met:
The container exits with a die docker event (you can see this happen by monitoring docker events when the container exits). This is any case where the main process in the container exits on its own. The exitCode does not matter.
The container is considered "enabled" by the Synology Docker GUI. This information is stored in /var/packages/Docker/etc/container_name.config:
{
    "enabled" : true,
    "exporting" : false,
    "id" : "dbee87466fb70ea26cd9845fd79af16d793dc64d9453e4eba43430594ab4fa9b",
    "image" : "busybox",
    "is_ddsm" : false,
    "is_package" : false,
    "name" : "musing_cori",
    "shortcut" : {
        "enable_shortcut" : false,
        "enable_status_page" : false,
        "enable_web_page" : false,
        "web_page_url" : ""
    }
}
How to enable/disable containers with Synology's Docker GUI
Containers are automatically enabled if you start them from the GUI. All of these things will cause the container to become "enabled" and start notifying on exit:
Sliding the "toggle switch" in the container view to "on"
Using Action start on the container.
Opening the container detail panel and clicking "start"
This is probably how your container ended up "enabled" and why it is now notifying whenever it exits. Containers created with docker run -d ... do not start out enabled, and will not initially warn on exit. This is probably why things like docker run -it --rm busybox and other ephemeral containers do not cause notifications.
Containers can be disabled if you stop them while they are running. There appears to be no way to disable a container which is currently stopped. So to disable a container you must start it and then stop it before it exits on its own:
Slide the toggle switch on then off as soon as it will let you.
Use Action start and then stop as soon as it will let you (this is hard because of the extra click if your container is very short-lived).
Open the container detail panel, click "start", and then, as soon as "stop" is not grayed out, click "stop".
Check your work by looking at /var/packages/Docker/etc/container_name.config.
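If you have SSH access to the NAS, you can check the flag directly (run as root; container_name is a placeholder):
$ grep '"enabled"' /var/packages/Docker/etc/container_name.config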
Another option for stopping/starting the container without the notifications is to do it via the Synology Web API.
To stop a container:
synowebapi --exec api=SYNO.Docker.Container version=1 method=stop name="CONTAINER_NAME"
Then to restart it:
synowebapi --exec api=SYNO.Docker.Container version=1 method=start name="CONTAINER_NAME"
Notes:
The commands need to be run as root
You will get a warning [Line 255] Not a json value: CONTAINER_NAME but the commands work and give a response message indicating "success" : true
I don't really have any more information on it, as I stumbled across it in a Reddit post and there's not a lot to back it up, but it's working for me on DSM 7.1.1-42962 and I'm using it in a scheduled task.
Source and referenced links:
A Reddit post with the commands
Linked GitHub page showing the commands in use
Linked Synology Developer's Guide for DSM Login Web API
I'm not familiar with Synology so I'm not sure which component is raising the "alert" you mention, but I guess this is just a warning and not an error, because:
an exit status of 0 is very fine from a POSIX perspective;
a "die" docker event also seems quite common, e.g. running docker events then docker run --rm -it debian bash -c "echo Hello" yields the same event (while a "kill" event would be more dubious).
So maybe you get one such warning just because Synology assumes a container should be running for a long time?
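If you want to observe this yourself, docker events can be filtered down to just the die events:
# in one terminal: watch only container "die" events
$ docker events --filter event=die
# in another terminal: even a clean, short-lived container emits a die event
$ docker run --rm -it debian bash -c "echo Hello"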
Anyway, here are a couple of remarks related to your question:
Is the image/container you run really ephemeral? (regarding the data the container handles) Because if this is the case, instead of doing docker start container_name, you might prefer using docker run --rm -i image_name … or docker run --rm -d -i image_name …. (In this case, thanks to --rm, the container removal will be automatically triggered when the container stops.)
Even if the setup you mention sounds quite reasonable for a cron job (namely, the fact that your container stops early and automatically), you might be interested in this other SO answer that gives further details on how to catch the signals raised by docker stop etc.
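As a rough sketch of that signal-handling idea (this assumes npm start is the long-running command, as in your question; the linked answer has the details):
#!/bin/sh
# entrypoint.sh - forward the SIGTERM sent by "docker stop" to the npm process
trap 'kill -TERM "$child" 2>/dev/null' TERM INT
npm start &
child=$!
wait "$child"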

Docker container gets killed

I am running a docker container which is trying to access a port in another docker container. Both of these are configured to run on the same network. But as soon as I start this container it gets killed and doesn't throw any error. There are no error logs. I also tried using docker inspect but couldn't find much.
PS: I am a newbie docker user.
Following from the OP's comment with the ENTRYPOINT:
ENTRYPOINT /configure.sh && bash
Answer
Given your ENTRYPOINT, the container will always exit, because the last process left is bash, and a bash started this way, with no terminal attached, exits immediately. You need a continuously running process in the foreground for the container to stay running, i.e. an application daemon.
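A sketch of what that could look like (my-service is a placeholder for whatever your actual long-running process is):
# in the Dockerfile: hand control to the setup script
ENTRYPOINT ["/configure.sh"]
# last line of /configure.sh: replace the shell with the real daemon, kept in the foreground
exec my-service --foreground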

Docker container shows running even after exit command

I am new to docker, hence may be missing a simple piece. Here is my scenario. I started a container with the command 'docker run -it ubuntu:14.04'. Then with Ctrl+P+Q, I detached so that the container keeps running. I verified with docker ps, and saw the container running. Then I entered the container again with 'docker exec -it <container> bash'. This took me inside the container again. Now on typing the 'exit' command, I come out of the container, but the container is still in running mode. Normally with the exit command, the container stops. Any idea why this is happening?
The container's running status is tied to the initial process that it was created for/with.
If you do docker run then this will create a new container with some initial process. When that process terminates, the whole container is stopped. If that initial process was bash, and you exit it, then this terminates the container itself.
docker exec starts a new process inside of the running container. When that process terminates, the container still keeps running.
Typing exit into an interactive bash shell will just exit that shell. It will not affect other processes running inside the same container (just like closing one terminal window in your host OS does not affect any other processes).
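You can see the difference with a quick experiment (names are arbitrary):
# the container's lifetime is tied to this initial process (sleep)
$ docker run -d --name demo ubuntu:14.04 sleep 3600
# docker exec starts an additional process; exiting it leaves the sleep untouched
$ docker exec -it demo bash
in_container$ exit
# demo is still listed as running
$ docker ps
# stopping the initial process is what actually stops the container
$ docker stop demo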
With the exit command in your case, you only stop the /bin/bash executable. Probably some other application, like NGINX or Apache, is running inside the container and does not let it shut down.
