Docker cannot remove network that already exists - docker

Kind of a strange situation: there is a network "omni_platform" that I cannot create because Docker says it already exists, but when I try to delete it, Docker says it doesn't exist.
$ docker network create -d bridge omni_platform
Error response from daemon: network with name omni_platform already exists
$ docker network rm omni_platform
Error response from daemon: network s8gh5qljyaxyvjeespfsz86gn not found
Any help is appreciated thanks :)

First, restart docker with this command:
service docker restart
Second, list all the networks that have been created:
docker network ls
Then find the ID of the network you want to delete and remove it with:
docker network rm <network-id>
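For example, with the network name from the question (a minimal sketch; take the ID from your own docker network ls output):
$ sudo service docker restart
$ docker network ls --filter name=omni_platform
$ docker network rm <network-id>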
Hope it was helpful.

Deleting "network not found" in docker
Inspect the network which we are unable to delete
docker network inspect [<id> or <name>]
Disconnect the network
docker network disconnect -f [<networkID> or <networkName>] [<endpointName> or <endpointId>]
Delete unused networks
docker network prune
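As a concrete sketch of those three steps, using the network name from the question (my_container is a hypothetical placeholder for whatever endpoint inspect lists as still attached):
docker network inspect omni_platform
docker network disconnect -f omni_platform my_container
docker network prune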

Related

ERROR: 2 matches found based on name: network <nameofservice>_default is ambiguous

I was building a docker image using the command
docker-compose -f "docker-compose.yml" up -d --build
But it returned an error:
ERROR: 2 matches found based on name: network officeconverter_default is ambiguous
It is fairly clear that on my machine there are two networks with the same name.
The question is how to remove these networks from docker.
PS E:\repos\Github\officeconverter> docker network ls
NETWORK ID NAME DRIVER SCOPE
868c88a83bd6 bridge bridge local
92f7d20ed432 officeconverter_default bridge local
3f96cfb7b591 officeconverter_default bridge local
The solution is simple!
Just remove the networks, like so:
docker network rm <network-id> <network-id> ...
PS E:\repos\Github\officeconverter> docker network rm 92f7d20ed432 3f96cfb7b591
92f7d20ed432
3f96cfb7b591
PS E:\repos\Github\officeconverter> docker network ls
NETWORK ID NAME DRIVER SCOPE
868c88a83bd6 bridge bridge local
try:
docker network prune
This will remove all of your unused networks.
A solution that worked for me is:
docker system prune -af
docker volume prune --force
Open the crontab with sudo crontab -e, then add a new line:
*/1 * * * * docker network prune -f
This will clean up unused networks (if any) every minute.
As others have said, you can docker network rm <network id> to actually delete them.
But if you need to choose which one to delete, you can:
$ docker network inspect d5867f0be024
[
{
"Name": "minikube",
"Id": "d5867f0be024b1c81237fba1eaef3f1ff53b75e15ab84d46a13dcde53809934e",
"Created": "2022-02-16T19:24:57.7203334-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
...
}
]
... and then see the Created time (or other metadata) to make your decision about which to delete.
In my case I needed to delete the newer one and retain the older one.
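If there are several candidates, inspect's --format flag gives a quick side-by-side view of the metadata (a sketch; the template fields correspond to the JSON keys shown above):
docker network ls -q | xargs docker network inspect --format '{{.Name}}: created {{.Created}}'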
One option is to create a new network, with a different network name, and to connect your docker container to it.
There is a similar error that can happen with ambiguous bridge network names.
One guy has made a complete blog post about how to fix this.
I found it very useful, and I believe it will be useful to others in a similar situation, even if the network you are dealing with is not a bridge: https://www.jorgeanaya.dev/en/bin/docker-network-name-is-ambiguous/
Here is the short workaround from the blog post (all credit goes to Jorge Anaya):
Create a new network. Use the parameter -d to specify the driver
docker network create -d bridge [new-network-name]
Disconnect the container(s) from the ambiguous network
docker network disconnect bridge [container-name]
Connect the container(s) to the new network
docker network connect [new-network-name] [container-name]
Optional. Purge the network and get rid of the unused networks
docker network rm $(docker network ls -q)
And that's all, now we should be able to start our containers.
Don't forget to add sudo at the beginning of each command for permissions.
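Putting the steps together with hypothetical names (my_app for the container, app_net for the new network):
sudo docker network create -d bridge app_net
sudo docker network disconnect bridge my_app
sudo docker network connect app_net my_app
sudo docker network prune
As a side note, docker network prune only removes networks with no containers attached, so it is a gentler cleanup than removing every network as in the last step above.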
Just run docker network prune; it will remove the unused networks, and they will be created again when you bring the stack up.

Error while checking if volume "nvidia_driver_410.48" exists in driver "nvidia-docker"

I came across this error when running an image on nvidia-docker. The image used to run well, but now it fails. The change I made on the device: I cloned docker/compose, then removed docker/compose.
I managed to run nvidia-docker hello-world, but that does not use the Nvidia drivers.
I have replaced the full path with XXX below.
running command:
nvidia-docker run --name my_test arg1 bash
docker: Error response from daemon: create nvidia_driver_410.48: found reference to volume 'nvidia_driver_410.48' in driver 'nvidia-docker', but got an error while checking the driver: error while checking if volume "nvidia_driver_410.48" exists in driver "nvidia-docker": Post http://XXXdocker%2Fplugins%2Fnvidia-docker.sock/VolumeDriver.Get: dial unix XXX/docker/plugins/nvidia-docker.sock: connect: connection refused: volume name must be unique.
Try deleting the old containers (including stopped containers), then list and delete the nvidia docker volumes. Restart the docker daemon if this does not solve the problem.
Otherwise, try to purge/reinstall nvidia-container-toolkit and restart the docker daemon on top of the steps above.
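A rough sketch of those steps (the volume name is taken from the error message above; check docker volume ls for the exact name on your machine):
docker container prune
docker volume ls
docker volume rm nvidia_driver_410.48
sudo systemctl restart docker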

Docker Network not Found

In our team, we are currently transitioning to Docker to deploy everything on our server.
We are using Docker Swarm and multiple (10+) compose files defining plenty (20+) of services. Everything works beautifully so far, except when we take down our stack using docker stack rm <name> (and redeploy using docker stack deploy <options> <name>): about every second time, we get the following error:
Failed to remove network <id>: Error response from daemon: network <id> not found
Failed to remove some resources from stack: <name>
Looking at docker network ls, the network is indeed not removed; however, docker network rm <id> always results in the following:
Error response from daemon: network <id> not found
What makes that even more strange is the fact that docker network inspect <id> returns a normal output. The networks are always overlay networks that are created with the compose files used to deploy our stack. Currently, we only have a single node in our Swarm.
Our current "workaround" is to restart Docker (which resolves the issue), but that is not a viable solution in a production environment. Leaving the swarm and joining it again does not resolve the issue either.
At first, we thought that this issue is related to Docker for Mac only (as we first encountered the issue on local machines), however, the same issue arises on Debian Stretch. In both cases, we use the latest Docker distribution available.
I would really appreciate any help!
If you are attempting to attach a container to a network that it references but that no longer exists, you can use docker-compose up --force-recreate. I found this GitHub issues comment to be a helpful overview.
You can always use docker system prune -a to get rid of the old network. This will not delete your volumes.
It will take longer to docker-compose up --build -d the next time, but it will get you past your current problem.
After using the docker prune command, I was unable to launch a docker container on a network.
It failed with the following errors:
ERROR: for jekyll-serve Cannot start service jekyll-serve: network b52287167caf352c7a03c4e924aaf7d78e2bc372c703560c003acc758c013432 not found
ERROR: Encountered errors while bringing up the project.
docker system prune
enabled me to begin using docker-compose up again.
More info here: https://docs.docker.com/config/pruning/
Deleting the "network not found" in docker
Inspect the network which we are unable to delete
docker network inspect <id> or <name>
Disconnect the network
docker network disconnect -f [<networkID> or <networkName>] [<endpointName> or <endpointId>]
Delete unused networks
docker network prune
That sounds exactly like this issue.
A stack rm followed "too fast" by a stack deploy will race on the removal/creation of networks, and possibly of other stack resources.
The issue is still open as of today (docker/cli), but you could try the workaround suggested:
limit=30   # timeout in seconds; adjust as needed
until [ -z "$(docker service ls --filter label=com.docker.stack.namespace=$COMPOSE_PROJECT_NAME -q)" ] || [ "$limit" -lt 0 ]; do
  sleep 1
  limit=$((limit - 1))
done
limit=30
until [ -z "$(docker network ls --filter label=com.docker.stack.namespace=$COMPOSE_PROJECT_NAME -q)" ] || [ "$limit" -lt 0 ]; do
  sleep 1
  limit=$((limit - 1))
done
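For context, the workaround is meant to run between the removal and the redeploy, e.g. in a wrapper script along these lines (the stack and compose file names are hypothetical):
export COMPOSE_PROJECT_NAME=mystack
docker stack rm "$COMPOSE_PROJECT_NAME"
# run the two wait loops above here, so the old networks are really gone
docker stack deploy -c docker-compose.yml "$COMPOSE_PROJECT_NAME"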
I could not get rid of the networks by any of the methods in previous answers.
This is what worked for me.
systemctl restart docker
This is from my own experience, and I think it might help.
A Docker network can act as a bridge, and a container can disconnect from one network and connect to another. If a container disconnects from its current network and connects to another one, and the original network then disappears due to a shutdown or a network prune, the standalone container loses its connection. Later, when you try to start it, you only get a "network not found" error.
The solution is to start the swarm/cluster (in my case I start it with docker-compose up), force-disconnect the container from that network using -f (even if the container isn't up yet), then connect it back to the network with the same name (which now has a different ID). Now you can start it successfully without the "network not found" error.
So the point is that you may end up seeing a network with the same name but a different ID.
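A rough sketch of that recovery sequence (the network and container names are placeholders):
docker-compose up -d
docker network disconnect -f my_network my_container
docker network connect my_network my_container
docker start my_container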
Old containers may still be using the old network. You probably removed the networks but forgot to remove the old containers. Just remove the old containers, create your network, and build again.

Error response from daemon: Container [id] is not running

I am using docker for the first time. I created a docker image for DB2, and when I tried to log in to the instance using the command,
sudo docker exec -i -t db2 /bin/bash
I got following error:
Error response from daemon: Container [id] is not running
I also tried to start the instance with:
sudo docker start [id]
It returned error message as:
Error response from daemon: driver failed programming external connectivity on endpoint db2 ([id]): Bind for 0.0.0.0:50000 failed: port is already allocated
Error: failed to start containers: [id]
Can someone help on this?
If you have a look at your error message, it shows that you're trying to run an entrypoint in a container [id] that uses port 50000, which is already in use.
That's why docker start [id] doesn't work.
This can be caused by several things (let me list a few of them rather than pinpointing one, because you haven't given many details).
docker exec should be used with a container id that is already running, not with images or entrypoints. So maybe you missed doing docker run before docker exec. Try docker run -it db2 /bin/bash if db2 is your docker image.
Another possibility is that your container started and its entrypoint exited for some reason, without releasing port 50000. If the container exited but wasn't removed, no other container can be started on this port. Let me recommend running docker container prune to clean up previously exited containers.
Maybe you're starting two or more containers from the same image (maybe db2) without doing any port mapping. If you want to run several instances of the same docker image you can do two things:
Use docker swarm, kubernetes or similar to scale container (pod). It lets you use the same port 50000.
Use a port mapping in docker run command: For example,
for first container, do docker run -d -p 50001:50000 [docker-image] [entrypoint]
for second container, do docker run -d -p 50002:50000 [docker-image] [entrypoint]
In this way, you'll have several mappings from different host ports to the same port 50000, avoiding this port-reuse error, but I'm not sure if this is what you want to do. I'm only trying to help with the little information you provided.
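As a concrete illustration, assuming db2 is the image (as in the point above) and the container names are hypothetical, docker port confirms each mapping afterwards:
docker run -d --name db2_a -p 50001:50000 db2
docker run -d --name db2_b -p 50002:50000 db2
docker port db2_a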
I hope anyway it's helpful.

Docker - Bind for 0.0.0.0:4000 failed: port is already allocated

I am using docker for the first time and I was trying to implement this -
https://docs.docker.com/get-started/part2/#tag-the-image
At one stage I was trying to connect with localhost by this command -
$ curl http://localhost:4000
which showed this error-
curl: (7) Failed to connect to localhost port 4000: Connection refused
However, I solved this with the following commands -
$ docker-machine ip default
$ curl http://192.168.99.100:4000
After that everything was going fine, but in the last part I was trying to run the app using the following line from the tutorial...
$ docker run -p 4000:80 anibar/get-started:part1
But, I got this error
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_bohr (63f5691ef18ad6d6389ef52c56198389c7a627e5fa4a79133d6bbf13953a7c98): Bind for 0.0.0.0:4000 failed: port is already allocated.
You need to make sure that the previous container you launched is killed, before launching a new one that uses the same port.
docker container ls
docker rm -f <container-name>
Paying tribute to IgorBeaz, you need to stop the currently running container. For that you need to know its current CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container by:
$ docker stop 12a32e8928ef
Finally you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
I tried all the above answers and none of them worked; in my case even docker container ls doesn't show any running container. It looks like the problem is that the docker proxy is still holding the ports even though no containers are running. In my case I was using ubuntu. Here's what solved the problem for me, just run the following two commands:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
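One small addition: after deleting the file, the daemon still has to be started again (the service name may differ per distribution):
sudo service docker start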
I solved it this way:
First, I stopped all running containers:
docker-compose down
Then I executed a lsof command to find the process using the port (for me it was port 9000)
sudo lsof -i -P -n | grep 9000
Finally, I "killed" the process (in my case, it was a VSCode extension):
kill -9 <process id>
The quick fix is to just restart docker:
sudo service docker stop
sudo service docker start
The above two answers are correct but didn't work for me.
I kept seeing blank output like below for docker container ls,
then I tried docker container ls -a, and after that it showed all the processes, both previously exited and running.
Then docker stop <container id> or docker container stop <container id> didn't work,
then I tried docker rm -f <container id> and it worked.
After this I tried docker container ls -a again and this process wasn't present.
When I used nginx docker image, I also got this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint recursing_knuth (9186f7d7f523732b99d3510029cde9679f3f3fe7b7eb5f612d54c4aacea58220): Bind for 0.0.0.0:8080 failed: port is already allocated.
And I solved it using following commands:
$ docker container ls
$ docker stop [CONTAINER ID]
Then, running this docker container again (like this) works:
$ docker run -v $PWD/vueDemo:/usr/share/nginx/html -p 8080:80 -d nginx:alpine
You just need to stop the previous docker container.
I had the same problem with docker-compose. To fix it:
Killed the docker-proxy processes
Restarted docker
Started docker-compose again
docker ps will reveal the list of containers running on docker. Find the one using the port you need and note down its container ID.
Stop and remove that container using the following commands:
docker stop <container-id>
docker rm <container-id>
Now run docker-compose up and your services should run, since you have freed the needed port.
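On reasonably recent Docker versions, docker ps can also filter by published port directly, which makes finding the offending container easier (a small sketch using the port from the question above):
docker ps --filter "publish=4000"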
On Linux, sudo systemctl restart docker solved the issue for me.
For anyone having this problem with docker-compose.
When you have more than one project (i.e. in different folders) with similar services you need to run docker-compose stop in each of your other projects.
If you are using Docker-Desktop, you can quit Docker Desktop and then restart it. It solved the problem for me.
In my case, there was no process to kill.
Updating docker fixed the problem.
It might be a conflict with the same port specified in docker-compose.yml and docker-compose.override.yml or the same port specified explicitly and using an environment variable.
I had a docker-compose.yml with ports on a container specified using environment variables, and a docker-compose.override.yml with one of the same ports specified explicitly. Apparently docker tried to open both on the same container. docker container ls -a listed neither because the container could not start and list the ports.
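One way to spot this kind of conflict is to print the configuration Compose actually ends up with after merging the base file and the override (a sketch; the grep just narrows the output to the port mappings):
docker-compose config | grep -A 4 ports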
For me the containers were not showing up as running, so NOTHING was using port 9010 (in my case), BUT Docker still complained.
I did not want to reset my Docker (for Windows), so what I did to resolve it was simply:
Remove the network (I knew that a container had previously been using this network with the port in question (9010)): docker network ls, then docker network rm blabla (or id)
I actually used a new network rather than the old (buggy) one, but that shouldn't be needed
Restart Docker
That was the only way it worked for me. I can't explain it, but somehow the "old" network was still bound to that port (9010) and Docker kept on "blocking" it (complaining about it).
FOR WINDOWS:
I killed every process that Docker was using and restarted the Docker service in Services. My containers are working now.
The issue is about ports that are still held by Docker even though you are not using them at that moment.
On Linux, you can run sudo netstat -tulpn to see what is currently listening on that port. You can then choose to configure either that process or your Docker container to bind to a different port to avoid the conflict.
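For example, narrowing the output down to the port from the question above (substitute your own port):
sudo netstat -tulpn | grep :4000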
Stopping the container didn't work for me either. I changed the port in docker-compose.yml.
For me, the problem was mapping the same port twice.
Due to a parametric docker run, it ended up being something like
docker run -p 4000:80 -p 4000:80 anibar/get-started:part1
notice double mapping on port 4000.
The log is not informative enough in this case: it doesn't state that the double mapping was the cause, and the port is no longer bound after the docker run command returns with a failure.
Don't forget the easiest fix of all....
Restart your computer.
I have tried most of the above and still couldn't fix it. Then just restart my Mac and then it's all back to normal.
For anyone still looking for a solution, just make sure you have bound your port the right way round in your docker-compose.yml
It goes:
- <EXTERNAL SERVER PORT>:<INTERNAL CONTAINER PORT>
Had the same problem. Went to Docker for Mac Dashboard and clicked restart. Problem solved.
My case was dumb XD, I was exposing port 80 twice :D
ports:
- '${APP_PORT:-80}:80'
- '${APP_PORT:-8080}:8080'
APP_PORT is defined, thus 80 was exposed twice.
I tried almost all the solutions and found the probable reason/solution. If you are using traefik or any other networking server, it internally runs a proxy for load balancing. Most people use that blueprint as-is, and it works pretty well, but it passes load control entirely to nginx or similar proxy servers. So stopping, killing (the networking server) or pruning might not help.
Solution for traefik with nginx:
sudo /etc/init.d/nginx stop
# or
sudo service nginx stop
# or
sudo systemctl stop nginx
Credits
How to stop docker processes
Making Docker Stop Itself <- Safe and Fast
This is the best way to stop containers and all unstoppable processes: make docker do the job.
Go to Docker settings > Resources, change any of the resource values, and click Apply & Restart.
Docker will stop itself and every one of its processes, even the most stubborn ones that might not be killed by commonly used commands such as kill, or wilder commands like the rm suggested by others.
I ran into a similar problem before, and all the good, proper tips from my colleagues somehow did not work out. I share this safe trick whenever someone on my team asks me about this.
Error response from daemon: driver failed programming external connectivity on endpoint foobar
Bind for 0.0.0.0:8000 failed: port is already allocated
hope this helps!
Simply restart your computer, so the docker service gets restarted.
