I am very new to Docker. I created some containers, and when I start one of them it immediately goes to the exited state. I am trying to assign port 7050 to that container. All the other containers I created start fine, but this one orderer container goes to the exited state as soon as I turn it on.
You can refer to the image:
Error: Container goes to exited state
Please guide me through this; I am not sure what the problem is. I tried removing all the Docker containers and creating them again, but I get the same problem.
Thanks in advance.
Have you tried with
docker run -d ?
This will keep the container from exiting in some cases.
If, for example, you initialize a service and it runs in the background, the container will probably consider that it has finished its job and will exit.
You can check this post for more information: Docker container will automatically stop after "docker run -d"
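As a rough illustration (the image and container names here are just examples), the same -d flag gives different results depending on whether the container's main process keeps running:
docker run -d --name stays-up nginx          # nginx stays in the foreground, so the container keeps running
docker run -d --name exits-now debian true   # "true" returns immediately, so the container exits right away
docker ps -a                                 # compare the STATUS column: "Up ..." vs "Exited (0) ..."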
The first thing to do would be to check the logs of the container with:
docker logs $id
where you replace $id with the container ID, as shown in the image you linked.
If that doesn't tell you enough, you can also call:
docker inspect $id
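For example (the ID here is a placeholder; take the real one from docker ps -a):
docker logs --tail 50 0123456789ab
docker inspect --format '{{.State.Status}} {{.State.ExitCode}} {{.State.Error}}' 0123456789ab
The --format template pulls out just the status, exit code, and error message instead of the full JSON.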
I use the dotnet3.5 image to run containers on Windows 10 with Docker Desktop 2.1.0.1 (37199). Sadly, I found that after I had created a container, did nothing to it, and left it alone for 4 days, the container automatically became unstoppable. The snapshot tells the story.
The container still shows up in docker ps -a, but I cannot get into it with docker exec. And because I cannot stop it (docker stop container2 just hangs), I cannot rm the container either.
The only way I have found to resolve this issue is to restore Docker Desktop's factory settings.
By the way, although in the snapshot the running image is aspnet:3.5-windowsservercore-10.0.14393.953, this issue also happens with the plain aspnet:3.5 image.
Does anyone have good ideas about the unstoppable container? Any suggestions are welcome.
The command used above is incorrect. There is a difference between these commands and their options. "# docker ps" or "# docker container ls" gives you the list of currently running (active) containers.
Adding "-a" lists everything used to date, i.e. both active and exited containers.
In your case, the container is no longer running, and you are trying to operate on something that is not there anymore, which is why the command appears stuck.
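As a quick illustration of the difference:
docker ps                               # only containers that are currently running
docker ps -a                            # every container, including stopped/exited ones
docker ps -a --filter "status=exited"   # just the exited ones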
I am new to Docker, and I find that definitions of the container lifecycle differ a lot between sources.
Here is what "Manning.Docker.in.Action.2016.3" shows:
Here is what Google gives me:
https://medium.com/@nagarwal/lifecycle-of-docker-container-d2da9f85959
here is what the official document says:
status: One of created, restarting, running, removing, paused, exited, or dead
https://docs.docker.com/engine/reference/commandline/ps/
So what's going on here? I guess some new states (and some renaming) were introduced in newer versions of Docker?
Thanks in advance
Your linked diagram separates docker create from docker start, includes "die" as a state transition, and shows how to get to the "restarting" state. That's all valid, though it leads to a more complicated state machine.
(docker create wasn't in the very first versions of Docker but it appeared in Docker 1.3.0 in 2014, which should predate your diagram.)
Practically I might suggest an even simpler state machine:
-------> running --+------> stopped ------->
  run              |  stop              rm
                   \------> exited  ------->
                     process exits       rm
That is, never try to restart a container or make changes inside a running container; if you need to tweak anything, delete the existing container and create a new one. This gives you a consistent environment (when the main container process starts you always know what's in its filesystem, up to mounted data). It also matches what happens in cluster environments like Kubernetes, where the cluster manager will routinely create and delete containers for you.
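A minimal sketch of that workflow, using a throwaway nginx container (the name is arbitrary):
docker run -d --name web nginx   # create and start a container
docker stop web                  # running -> stopped
docker rm web                    # stopped -> gone
docker run -d --name web nginx   # need a change? start a fresh container instead of patching the old one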
When you get into a situation where the internet gives you different answers, you should consider trying it yourself. That is especially true with technologies like Docker, where it is pretty simple to run quick tests. For example:
I want to run a container (I will use nginx):
docker run -d nginx
docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS     NAMES
258cd2edbed8   nginx     "nginx -g 'daemon of…"   3 seconds ago   Up 2 seconds   80/tcp    jolly_golick
Note: Docker will keep a container running only as long as there is a process running in it.
If you started a debian container (for example), you would see it stop immediately, as there is nothing running in it. So you could run
docker run -d debian sleep 10
and see that the container is up for 10 seconds.
When a container is running, you can do some things with it, but not others; for example, you can't remove it. To remove a container, you need to stop it first (or kill it), or force the removal.
Note: you would get all of this information from Docker itself if you played around with it, since it reports these things back to you. For example, if you try to remove a running container, you get this error:
Error response from daemon: You cannot remove a running container 258cd2edbed85bed23ab543312968bd893c1fbd9ba81de40366337f434daedff. Stop the container before attempting removal or force remove
I can't cover all possible combinations here. You would get a similar error if you tried to remove a paused container. Just play with it, and you will get a clear picture of how it works.
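For instance, to get rid of that running nginx container you could either stop it first or force the removal (reusing the ID from the docker ps output above):
docker stop 258cd2edbed8 && docker rm 258cd2edbed8
docker rm -f 258cd2edbed8    # same effect in one step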
My Docker container hangs and I have no idea how to bring it back to life. I can't stop or restart it; nothing happens. I can't even export it.
You could use service docker restart to restart the Docker daemon (assuming you are using Linux).
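For example (on a systemd-based distribution you may need systemctl instead):
sudo service docker restart    # or: sudo systemctl restart docker
docker start <containerId>     # then try the stuck container again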
You can try these ideas:
Check the problem by looking at the logs: docker logs $container-name
You can try to create a new image from your container with docker commit and then create a new container from that image (see the sketch below).
You can create a new container from your original image or from your docker-compose file.
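A minimal sketch of the docker commit idea (image and container names are placeholders):
docker commit my-hung-container rescued:latest   # snapshot the container's filesystem into a new image
docker run -d --name recovered rescued:latest    # start a fresh container from that snapshot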
I'm trying to pull and set up a container with a database (MySQL) using the following command:
docker-compose up
But for some reason it fails to start the container and only shows me this:
some_db exited with code 0
But that doesn't give me any information about why it happened, so how can I see the details of that error so I can fix it?
Instead of looking for how to find an error in your composition, you should change your process. With Docker it is easy to fall into using all of the lovely tools to make things work seamlessly together, but this sounds like an individual container issue, not an issue with docker compose. Try building a container from the base image you want, then use docker exec -it <somecontainername> sh to go into it and run the commands from your entrypoint manually until you find the specific point of failure.
Try at least docker logs <exited_container_name/id> (or more recently: docker container logs <exited_container_name/id>)
That way, you can check if there was any error message.
An exited container means the main process stopped: you need to make sure that main process remains up in order for the container to not exit immediately.
Try also docker inspect <exited_container_name/id> (or, again, docker container inspect <exited_container_name/id>)
If you see for instance "OOMKilled": true, that would mean the container memory was exceeded. You can also look for the "ExitCode" for clues.
See more at "Ten tips for debugging Docker containers" from Mark Betz.
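As an example, you can pull just those two fields out of docker inspect with a Go template (some_db here stands for the exited container's name or ID):
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' some_db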
I edited a file in a running Docker container and restarted it; unfortunately, my last edit was not correct. So every time I start the container with:
docker start <containerId>
It always exits immediately.
Now I cannot even undo my edit, since
docker exec -it <containerId> bash
can only be run against a running container.
The question is: how can I fix the file and restart the container now? Or do I have to abandon it and start a new container from an existing image?
You didn't supply any details regarding your container's purpose, or what you modified. Conceptually, you could create the file that needs to be modified in a place on your filesystem and mount that file into the container as a volume when you start it, like:
docker run -it -v /Users/<path_to_file>:<container_path_to_file> <container>
However, this is bad form, as your container loses portability at that point unless you commit a new image.
Ideally, changes that need to be made inside of a Docker container are made in the Dockerfile, and the container image re-built. This way, your initial, working container state is represented in your Dockerfile code, making your configuration repeatable, portable, and immutable.
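A minimal sketch of that approach, assuming the edit was to an nginx config file (image name and paths are hypothetical):
FROM nginx
COPY default.conf.fixed /etc/nginx/conf.d/default.conf
Then rebuild and run it with docker build -t myapp:fixed . followed by docker run -d myapp:fixed.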
The filesystem of an exited container can still be changed. The preferable way is probably:
docker cp <fixedFile> <containerId>:<brokenFile>
But you can also circumvent docker completely; see here.
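For example, if the broken file were an nginx config (paths are hypothetical), you could fix a copy on the host, copy it back into the stopped container, and start it again:
docker cp ./default.conf.fixed <containerId>:/etc/nginx/conf.d/default.conf
docker start <containerId>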