Error response from daemon: Container [id] is not running - docker

I am using Docker for the first time. I created a Docker image for DB2, and when I tried to log in to the instance using the command
sudo docker exec -i -t db2 /bin/bash
I got the following error:
Error response from daemon: Container [id] is not running
I also tried to start the instance with:
sudo docker start [id]
It returned this error message:
Error response from daemon: driver failed programming external connectivity on endpoint db2 ([id]): Bind for 0.0.0.0:50000 failed: port is already allocated
Error: failed to start containers: [id]
Can someone help with this?

If you look at your error message, it shows that you're trying to start a container [id] that binds port 50000, which is already in use.
That's why docker start [id] doesn't work.
This can be caused by several things (let me list a few of them rather than pinpoint the exact one, because you haven't provided many details).
docker exec should be used with the ID of a container that is already running, not with an image and not with an entrypoint. So maybe you skipped docker run before docker exec. Try docker run -it db2 /bin/bash if db2 is your Docker image.
Another possibility is that your container started and its entrypoint exited for some reason without releasing port 50000. If the container exited but wasn't removed, no other container can be started on that port. I recommend running docker container prune to clean up previously exited containers.
Maybe you're starting two or more containers from the same image (maybe db2) without any port mapping. If you want to run several instances of the same Docker image, you can do one of two things:
Use Docker Swarm, Kubernetes, or similar to scale the container (pod). This lets you keep using the same port 50000.
Use a port mapping in the docker run command. For example,
for first container, do docker run -d -p 50001:50000 [docker-image] [entrypoint]
for second container, do docker run -d -p 50002:50000 [docker-image] [entrypoint]
This way, you'll have several mappings from different host ports to the same container port 50000, avoiding the port-reuse error, but I'm not sure if this is what you want to do. I'm only trying to help with the little information you provided.
I hope it's helpful anyway.
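For example, a minimal sketch of the diagnose-and-retry sequence, using the names from your question ([db2-image] is a placeholder for whatever image you built):
sudo docker ps -a                            # list all containers, including exited ones
sudo docker container prune                  # remove exited containers
sudo docker run -d -p 50001:50000 --name db2 [db2-image]   # map a free host port to 50000
sudo docker exec -it db2 /bin/bash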

The docker container does not run in detached mode

When I run my Docker container in detached mode using the following command
docker run -d -p 5000:5000 --name tmp-cntr --net="host" -v /home/project:/root/ IMAGE-NAME
it does not appear when I list the running containers with
docker ps
When I list all the containers with
docker ps -a
I can see that the container has exited. However, if I try to run a container with the same name, it gives the following error.
docker: Error response from daemon: Conflict. The container name "/tmp-cntr" is already in use by container "4b7cf4084685ad7fcaeef3ca6a07ca594752c42cbfd6eb07850d7fe8f5289bc3". You have to remove (or rename) that container to be able to reuse that name.
Is the container running or has it exited? What is the problem with my command? Please be kind enough to point out my mistake and explain how it can be corrected.
I appreciate your help.
It means the container was created but exited. There may be something wrong with your entrypoint, so the container can't start successfully.
Please check with docker logs <container-id> to see what's wrong.
Since you can't re-run it, it means it's in the Exited status.
You should run docker logs tmp-cntr to see what's wrong with the currently exited container and then docker rm tmp-cntr to remove it.
You can also remove --name tmp-cntr from your docker run command to avoid the name conflict instead of removing the container every time; that makes debugging easier.
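For example, a minimal debug cycle using the names from the question. One thing to note: with --net="host" the -p 5000:5000 mapping is ignored anyway, since published ports are discarded in host networking mode:
docker logs tmp-cntr       # find out why the container exited
docker rm tmp-cntr         # free the name
docker run -d --name tmp-cntr --net="host" -v /home/project:/root/ IMAGE-NAME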

Container not starting

I made a Docker container for my web application.
At the end of the docker build command, I saw the following (which I suppose means the image was built):
Successfully tagged App:30may2020
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
When I run the container, I get an error:
docker run --publish 9000:9000 --detach --name App App:30may2020
docker: Error response from daemon: Conflict. The container name "/App" is already in use by container "8a641431369c418e99ccb752161f5f2848d3c8f14bb903a18b6bd4aff2966af6". You have to remove (or rename) that container to be able to reuse that name.
Question 1 - Does the build command also start the container, given that I didn't start it?
Question 2 - I did docker container ls and docker container ps but I don't see my container running. Then why do I get the error?
Answers to your questions:
Question 1 - Does the build command also start the container, given that I didn't start it?
Answer => No, but the command you mentioned is a run command, which will start the container.
docker run --publish 9000:9000 --detach --name App App:30may2020
As you can see, docker run will start the container from the image App:30may2020.
Question 2 - I did docker container ls and docker container ps but I don't see my container running. Then why do I get the error?
Answer 2 => As the error says, the container name App is already used by another container. There are two ways to solve this (a short example follows the list):
Run docker rm App, which will remove the container named App. If you want to see this container before removing it, run docker ps -a and you will be able to see it.
Note: if you encounter an error while deleting the container, stop it first by running docker stop App.
The second way: don't pass the --name option when running the container, and let Docker choose a random name.
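For example, a minimal sketch of both options, reusing the command from the question:
docker rm App                                               # option 1: free the name first
docker run --publish 9000:9000 --detach --name App App:30may2020
docker run --publish 9000:9000 --detach App:30may2020       # option 2: let Docker pick a name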
If docker ps shows nothing, then you must already have a stopped container called App. When a container stops, it remains, so that it can be started again.
As commented above, docker ps -a will show all containers, both running and stopped.
To remove the stopped container, use docker rm App.
It's a good idea when manually running containers, especially whilst debugging (so you're going to stop and start many times) to use the --rm flag. This will ensure that the container is removed when it's stopped.
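For example, a debugging run with --rm, based on the command from the question (run in the foreground so you can watch the output; the container and its name are released as soon as it stops):
docker run --rm --publish 9000:9000 --name App App:30may2020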
Question 1 answer: build doesn't start a container.
Question 2 answer: ps and ls display the containers that are currently running, but not those that are stopped. Do docker ps -a in case you want to view stopped containers.
You are getting the error because you already have a container with the name '/App'; try running a container with a different name.
Or, if you want to run a container with the same name from a new build, first stop and delete the old container; then you can run under the same name.

docker ps shows empty list - although docker telling container exists

When calling docker ps, the list is empty, although I got an ID:
(dcbb6aeaa06ba43fcb.....)
My steps:
Step 1: I created an image (imagekommando) that runs a JS file:
Step 2: I created a container (in background) based on my image
docker run -d --name containerkommando imagekommando
I got an id! (container-id??)
Step 3: But docker ps shows an empty list.
However, when I repeat Step 2, I'm told that the container (containerkommando) already exists:
docker run -d --name containerkommando imagekommando
Could you help me understand the logic behind this?
And how can I get the container running (by ID)?
That means that the Docker container exited and cleanup is required. With the --rm option you can tell Docker to remove the container when it has exited.
docker run --rm .....
Also, to check the reason for the container exiting, you can use
docker logs <container_id>
What probably takes place here:
docker run ... creates and starts your container
your container exits
docker ps doesn't list stopped containers (by default it shows just the running ones), so it made you think that it's not there.
docker run ... fails because you are trying to create and run a container with a name that already exists.
Further reading:
What are the possible states for a docker container?
Why docker container exits immediately
In Docker, a container exits automatically when its task is finished. You have to specify a correct entrypoint to keep your container up.
You can check the exited containers with the command docker ps -a. This exited container will prevent you from using the name again.
So you may want to use docker rm <container-name> before creating your new container. In a test environment, you can also use docker system prune to clean all unused containers/networks.
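For example, a cleanup-and-retry sketch using the names from the question; tail -f /dev/null is only a placeholder foreground command to illustrate keeping a container up, and a real image should define a proper long-running entrypoint instead:
docker ps -a --filter name=containerkommando   # confirm the container exited
docker logs containerkommando                  # see why it exited
docker rm containerkommando                    # free the name
docker run -d --name containerkommando imagekommando tail -f /dev/null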
docker ps only shows the active containers (the running ones).
Your container most probably exited right after you started it. You can use the container ID and do docker logs <container-id> to examine the reason why the container failed.
If you want to see the stopped containers together with the running containers you can do docker ps -a to get a list of all these.
Execute
docker logs <CONTAINER ID>
to view the logs of the container you ran.
I faced a similar issue and found out there was a disk space issue with my Docker host. After clearing space, the container was able to run.

Docker - Bind for 0.0.0.0:4000 failed: port is already allocated

I am using docker for the first time and I was trying to implement this -
https://docs.docker.com/get-started/part2/#tag-the-image
At one stage I was trying to connect to localhost with this command -
$ curl http://localhost:4000
which showed this error -
curl: (7) Failed to connect to localhost port 4000: Connection refused
However, I solved this with the following commands -
$ docker-machine ip default
$ curl http://192.168.99.100:4000
After that everything was going fine, but in the last part, I was trying to run the app using the following line from the tutorial...
$ docker run -p 4000:80 anibar/get-started:part1
But, I got this error
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_bohr (63f5691ef18ad6d6389ef52c56198389c7a627e5fa4a79133d6bbf13953a7c98): Bind for 0.0.0.0:4000 failed: port is already allocated.
You need to make sure that the previous container you launched is killed, before launching a new one that uses the same port.
docker container ls
docker rm -f <container-name>
Paying tribute to IgorBeaz, you need to stop the currently running container. For that you need to know the current CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container by:
$ docker stop 12a32e8928ef
Finally you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
I tried all the above answers and none of them worked; in my case even docker container ls didn't show any container running. It looks like the problem is that the docker-proxy is still holding ports even though no containers are running. In my case I was using Ubuntu. Here's what I did to solve the problem; just run the following commands:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
Then start Docker again:
sudo service docker start
I solved it this way:
First, I stopped all running containers:
docker-compose down
Then I ran an lsof command to find the process using the port (for me it was port 9000):
sudo lsof -i -P -n | grep 9000
Finally, I "killed" the process (in my case, it was a VSCode extension):
kill -9 <process id>
The quick fix is to just restart Docker:
sudo service docker stop
sudo service docker start
The above two answers are correct but didn't work for me.
I kept seeing an empty list from docker container ls.
Then I tried docker container ls -a, and it showed all the containers, both previously exited and running.
Then docker stop <container id> or docker container stop <container id> didn't work.
Then I tried docker rm -f <container id> and it worked.
After that I tried docker container ls -a again, and the container was no longer present.
When I used the nginx Docker image, I also got this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint recursing_knuth (9186f7d7f523732b99d3510029cde9679f3f3fe7b7eb5f612d54c4aacea58220): Bind for 0.0.0.0:8080 failed: port is already allocated.
And I solved it using following commands:
$ docker container ls
$ docker stop [CONTAINER ID]
Then, running this Docker container again (like this) is OK:
$ docker run -v $PWD/vueDemo:/usr/share/nginx/html -p 8080:80 -d nginx:alpine
You just need to stop the previous docker container.
I have had the same problem with docker-compose. To fix it (a command sketch follows below):
Killed the docker-proxy process
Restarted Docker
Started docker-compose again
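One possible command sequence for those three steps, assuming a systemd-based Linux host (docker-proxy is the name of Docker's userland proxy process):
sudo pkill docker-proxy          # kill the docker-proxy processes
sudo systemctl restart docker    # restart Docker
docker-compose up -d             # start docker-compose again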
docker ps will reveal the list of containers running on Docker. Find the one bound to the port you need and note down its container ID.
Stop and remove that container using the following commands:
docker stop <container-id>
docker rm <container-id>
Now run docker-compose up and your services should run as you have freed the needed port.
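If your Docker version supports the publish filter, you can also find the container occupying the port directly:
docker ps --filter "publish=9000"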
On Linux, 'sudo systemctl restart docker' solved the issue for me.
For anyone having this problem with docker-compose: when you have more than one project (i.e. in different folders) with similar services, you need to run docker-compose stop in each of your other projects.
If you are using Docker-Desktop, you can quit Docker Desktop and then restart it. It solved the problem for me.
In my case, there was no process to kill.
Updating docker fixed the problem.
It might be a conflict with the same port specified in docker-compose.yml and docker-compose.override.yml or the same port specified explicitly and using an environment variable.
I had a docker-compose.yml with ports on a container specified using environment variables, and a docker-compose.override.yml with one of the same ports specified explicitly. Apparently Docker tried to open both on the same container. docker container ls -a did not list the container at all, because it could not start and list the ports.
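One way to spot this kind of conflict: docker-compose config prints the effective configuration after docker-compose.yml and docker-compose.override.yml are merged, which makes duplicate port mappings visible:
docker-compose config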
For me the containers were not showing up running, so NOTHING was using port 9010 (in my case), BUT Docker still complained.
I did not want to reset my Docker (for Windows) so what I did to resolve it was simply:
Remove the network. I knew that a container had previously used this network with the port in question (9010): run docker network ls to list the networks, then docker network rm blabla (or its ID) to remove it.
I actually used a new network rather than the old (buggy) one, but that shouldn't be needed.
Restart Docker.
That was the only way it worked for me. I can't explain it, but somehow the "old" network was still bound to that port (9010) and Docker kept "blocking" it (whinging about it).
FOR WINDOWS:
I killed every process that Docker uses and restarted the Docker service in Services. My containers are working now.
It is about ports that are still in use by Docker even though you are not using them at that moment.
On Linux, you can run sudo netstat -tulpn to see what is currently listening on that port. You can then choose to configure either that process or your Docker container to bind to a different port to avoid the conflict.
Stopping the container didn't work for me either. I changed the port in docker-compose.yml.
For me, the problem was mapping the same port twice.
Due to a parameterized docker run, it ended up being something like
docker run -p 4000:80 -p 4000:80 anibar/get-started:part1
notice the double mapping of port 4000.
The error is not informative enough in this case: it doesn't state that the double mapping was the cause, and the port is no longer bound after the docker run command returns with a failure.
Don't forget the easiest fix of all....
Restart your computer.
I had tried most of the above and still couldn't fix it. Then I just restarted my Mac, and everything was back to normal.
For anyone still looking for a solution, just make sure you have bound your ports the right way round in your docker-compose.yml.
It goes:
- <EXTERNAL SERVER PORT>:<INTERNAL CONTAINER PORT>
Had the same problem. Went to Docker for Mac Dashboard and clicked restart. Problem solved.
My case was dumb XD, I was exposing port 80 twice :D
ports:
- '${APP_PORT:-80}:80'
- '${APP_PORT:-8080}:8080'
APP_PORT was defined (as 80), so both mappings resolved to host port 80 and it was exposed twice.
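A quick shell check shows why: the :- default only applies when the variable is unset, so once APP_PORT is defined, both mappings resolve to the same host port:
APP_PORT=80
echo "${APP_PORT:-80}:80"       # prints 80:80
echo "${APP_PORT:-8080}:8080"   # prints 80:8080 -- host port 80 again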
I tried almost all the solutions and found out the probable reason. If you are using traefik or any other networking server, it internally acts as a proxy for load balancing. Most setups use the standard blueprint as-is, and it works pretty well; it then passes load control entirely to nginx or similar proxy servers. So stopping or killing the networking server, or pruning, might not help.
Solution for traefik with nginx,
sudo /etc/init.d/nginx stop
# or
sudo service nginx stop
# or
sudo systemctl stop nginx
Credits
How to stop docker processes
Making Docker Stop Itself <- Safe and Fast
This is the best way to stop containers and all unstoppable processes: making Docker do the job.
Go to Docker settings > Resources, change any of the resources, and click Apply & Restart.
Docker will stop itself and every one of its processes, even the most stubborn ones that might not be killed by commonly used commands such as kill, or wilder commands like rm suggested by others.
I ran into a similar problem before, and all the good, proper tips from my colleagues somehow did not work out. I share this safe trick whenever someone on my team asks me about this.
Error response from daemon: driver failed programming external connectivity on endpoint foobar
Bind for 0.0.0.0:8000 failed: port is already allocated
hope this helps!
Simply restart your computer, so the Docker service gets restarted.

How to restart a container on docker restart (--restart=true doesn't work)?

I am using docker version 1.1.0, started by systemd using the command line /usr/bin/docker -d, and tried to:
run a container
stop the docker service
restart the docker service (either using systemd or manually, specifying --restart=true on the command line)
see if my container was still running
As I understand the docs, my container should be restarted. But it is not. Its public facing port doesn't respond, and docker ps doesn't show it.
docker ps -a shows my container with an empty status:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb0d05b4e0d9 mildred/p2pweb:latest node server-cli.js - 7 minutes ago 0.0.0.0:8888->8888/tcp jovial_ritchie
...
And when I try to docker restart cb0d05b4e0d9, I get an error:
Error response from daemon: Cannot restart container cb0d05b4e0d9: Unit docker-cb0d05b4e0d9be2aadd4276497e80f4ae56d96f8e2ab98ccdb26ef510e21d2cc.scope already exists.
2014/07/16 13:18:35 Error: failed to restart one or more containers
I can always recreate a container from the same base image using docker run ..., but how do I make sure that my running containers will be restarted if Docker is restarted? Is there a solution that works even if Docker is not stopped properly (imagine I pull the power plug on the server)?
Thank you
As mentioned in a comment, the container flag you're likely looking for is --restart=always, which will instruct Docker that unless you explicitly docker stop the container, Docker should start it back up any time either Docker dies or the container does.
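For example, using the names from the question (restart policies postdate Docker 1.1.0, so this assumes an upgraded Docker):
docker run -d --restart=always -p 8888:8888 mildred/p2pweb node server-cli.js
On newer versions you can also change the policy of an existing container with docker update --restart=always <container>.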
