Docker container gets killed - docker

I am running a docker container which is trying to access a port in another docker container. Both of them are configured to run on the same network. But as soon as I start this container it gets killed and doesn't throw any error. There are no error logs. I also tried docker inspect but couldn't find much.
PS: I am a newbie docker user.

Following from the OP's comment, the ENTRYPOINT is:
ENTRYPOINT /configure.sh && bash
Answer
Given your ENTRYPOINT, the container will always exit, since its main process is bash and bash exits immediately when no interactive terminal is attached. You need a continuously running process in the foreground (i.e. an application daemon) for the container to stay running.
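A minimal sketch of a fix, assuming /configure.sh only does setup; the actual long-running service is not named in the question, so my-daemon below is a placeholder:

# run the setup script, then exec a real foreground process instead of bash
ENTRYPOINT ["/bin/sh", "-c", "/configure.sh && exec my-daemon"]
# or, purely to keep the container alive while debugging:
# ENTRYPOINT ["/bin/sh", "-c", "/configure.sh && exec tail -f /dev/null"]

Alternatively, starting the container with docker run -it attaches a terminal, which keeps the trailing bash of the existing ENTRYPOINT alive for one-off debugging.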

Related

Docker container runs and stops right after starting

I have a docker service/image I'm using which restarts as soon as it starts.
I'm unable to fix the issue by getting into the container using
docker exec -it CONTAINER_NAME
since it restarts/terminates as soon as it boots.
Is there any way I can pause it directly? I can't rebuild the image as I don't have access to the internet on the server. (Yes, I'm sure a rebuild or build --no-cache would fix the issue.)
The issue should be easily fixable if I modify permissions for a certain folder, but I'm not sure how to do this inside the container when I can't access it. The image doesn't have a Dockerfile and is used directly from Docker Hub.
If we do not get any information from the container's logs, we have the option to start the process "manually". For this, we start the container with an interactive terminal (-it: -i to keep STDIN open, -t to allocate a pseudo-TTY) and override the entrypoint to be a shell, e.g. bash. For good measure, we want the container to be removed when it terminates (i.e. when we exit the terminal): --rm.
docker run ... -it --rm --entrypoint /bin/bash
Once inside the container, we can start the process that would have normally started through the entrypoint from the container's terminal and extract error information from here.
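A concrete sketch of that workflow; the image name my-image, the folder path, and the start script are placeholders, since none of them are given in the question:

# start a throwaway container with a shell instead of the normal entrypoint
$ docker run -it --rm --entrypoint /bin/bash my-image
# inside the container: fix the permissions, then run what the entrypoint
# would normally have started and watch its output
in_container$ chmod -R u+rwX /path/to/folder
in_container$ /path/to/normal-entrypoint.sh

Note that with --rm the fix disappears together with the container; if it works, repeat it in a container started without --rm and persist it with docker commit <container> <new-image:tag>, so the image can be reused without rebuilding.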

What to do if the docker container hangs and does not respond to any command other than ctrl+c?

I have been running an nvidia docker image for 13 days and it used to restart without any problems using the docker start -i <containerid> command. But today, while I was downloading pytorch inside the container, the download got stuck at 5% and gave no response for a while.
I couldn't exit the container with either ctrl+d or ctrl+c. So I exited the terminal, and in a new terminal I ran docker start -i <containerid> again. But ever since, this particular container has not been responding to any command. Be it start/restart/exec/commit ... nothing! Any command with this container ID or name is just non-responsive, and I had to exit out of it with ctrl+c.
I cannot restart the docker service since it will kill all running docker containers.
I cannot even stop the container using docker container stop <containerid>.
Please help.
You can make use of docker RestartPolicy:
docker update --restart=always <container>
while being mindful of caveats depending on the Docker version you are running,
or explore the answer by @Yale Huang to a similar question: How to add a restart policy to a container that was already created
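To check what restart policy a container currently has, docker inspect can read it back with a Go template (the container name is a placeholder):

# show the restart policy recorded for an existing container
$ docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' <container>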
I had to restart the docker process to revive my container; there was nothing else I could do to solve it. I used sudo service docker restart and then revived my container using docker run. I will try to build a Dockerfile out of it in order to avoid future mishaps.

Why am I unable to exec into the docker container if there is an error in the container?

I am running an Nginx docker container and there are some errors in it.
I cannot exec into the container because it is stopped; how can I exec into the stopped container?
How do I avoid the container stopping if there is an error in the container?
Can someone help me by answering the above?
It seems like the normal execution within your container causes it to stop. So what you need to do is create a container with an overridden entrypoint (the procedure/command that is executed on container startup).
A good place to start is by creating a shell instance where you can look around, and maybe even execute the same command manually for debugging purposes.
So let's say I have an image testimage:latest that on startup executes /bin/my_script.sh, which fails.
I can then start a container with a shell instance
$ docker run --entrypoint sh -it testimage:latest
And within that container I can run the script, and check the output
in_container$ /bin/my_script.sh
I cannot exec into the container because it is stopped; how can I exec into the stopped container?
No, you cannot exec into a stopped container; you'd need to start the container up again before being able to exec into it.
How do I avoid the container stopping if there is an error in the container?
As far as I am aware there is nothing to prevent a container stopping when there are errors; however, I have found How to prevent a container from shutting down during an error? which might help you with what you need (please give them credit if it does work).
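Before restarting anything, the stopped container itself usually records why it stopped (the container name below is a placeholder):

# list all containers, including stopped ones, with their exit status
$ docker ps -a
# show the output the container produced before it stopped (the official
# nginx image writes its error log to stderr, so errors typically show up here)
$ docker logs <container>
# show the recorded exit code and whether the container was OOM-killed
$ docker inspect -f '{{ .State.ExitCode }} {{ .State.OOMKilled }}' <container>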

How to keep the docker container up and running?

Here is my simple docker file
FROM java:8
EXPOSE 4000
now when I run it using the following command
sudo docker run --name hello dockerfile
and do docker ps -a, it shows the status as exited. I just want to keep this container up and running so I can ssh into it and probably transfer files and so on. It looks like containers are mainly used to run servers, am I correct?
You can at least keep your container up with something like docker run -d --name hello dockerfile sleep infinity, but as said by René M, you should give your Dockerfile something to do in its CMD or ENTRYPOINT; see the docs:
https://docs.docker.com/engine/reference/builder/#cmd
and
https://docs.docker.com/engine/reference/builder/#entrypoint
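A sketch using this question's names (image dockerfile, container hello); sleep infinity is only a stand-in until the Dockerfile has a real CMD:

# keep the container alive with a do-nothing foreground process
$ docker run -d --name hello dockerfile sleep infinity
# get a shell inside the running container (no ssh needed)
$ docker exec -it hello bash
# copy files between the host and the container
$ docker cp ./local-file hello:/tmp/
$ docker cp hello:/tmp/result ./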
That is really simple.
Your container is running nothing that lasts long. What happens is that the container starts, has nothing to do, and stops.
What you can do is:
Run the container in interactive mode with an attached tty. This way your console enters the container after it starts, and the running tty gives the container something to do and prevents it from stopping. Then you can work inside this container, like installing an application. Your work will be lost after stopping the container, but you can run docker commit on that container, which makes your changes persistent.
docker run -i -t --name hello dockerfile
Enhance your Dockerfile with something useful, like copying an application into the container and providing a CMD command to run when the container starts.
After this, the container will last as long as your CMD command runs. If the command is a server or daemon application, the container will last forever and will only stop when you stop it.
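A minimal sketch of that second option; app.jar is a placeholder, since the question's Dockerfile only exposes a port:

FROM java:8
EXPOSE 4000
# copy an application into the image and run it in the foreground;
# the container stays up exactly as long as this process runs
COPY app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]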

Getting a docker container to never shut down

I'm trying to get a docker container to never shut down.
If I run a docker container with the -d flag the container will be run in the background.
For example, can this be done:
Start docker container with -it flags
start entrypoint application
entrypoint application creates 10 other services/processes to run in that same container
entrypoint application terminates
Will the docker container stay up now that the application mentioned in the entrypoint has exited?
Why don't you simply try? AFAIU, -d or -it won't affect the container's termination. And I guess you understand that starting those 10 processes means you violate Docker's idea of one process per container. Why don't you start 10 containers instead? You can also do that from your starting container, and they will keep running even if the starting container terminates.
You could also give docker docs a try: https://docs.docker.com/articles/host_integration/
The -it flag means you want your standard input and output routed to/from the container. This is basically and conceptually incompatible with running forever.
In general, a container will shut down when its entrypoint exits. If you want to keep the container running, you should run one application (probably the last one) not as a background daemon but in the foreground. For example, if nginx is the last service you want to run, the last line of your entrypoint script (if that's a shell script) should be something like this:
nginx -g "daemon off;"
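Put together, a hypothetical entrypoint script along these lines could look like the following (the helper script is a placeholder):

#!/bin/sh
# start any helper services in the background
/usr/local/bin/start-helpers.sh &
# run the main service in the foreground; exec makes it PID 1,
# so the container lives exactly as long as nginx does
exec nginx -g "daemon off;"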
