I'm working on this https://github.com/hyperledger/education repository, and when I try to run ./manage up, I'm getting this error:
ERROR: Pool overlaps with other one on this address space
Try running:
docker system prune
and then run:
./manage up
and it should start working.
The error you are encountering suggests a network address conflict.
Run the following to list all the Docker networks currently on your machine:
docker network ls
Then remove the conflicting network(s):
docker network rm <networkId>
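If you want to confirm which network is actually overlapping before removing anything, you can check each network's subnet, for example:
docker network inspect <networkId> | grep Subnet
and remove the network whose subnet clashes with the address range your compose file is trying to create.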
Kind of a strange situation: there's a network "omni_platform" that Docker says already exists when I try to create it, yet when I try to delete the network, Docker says it doesn't exist.
$ docker network create -d bridge omni_platform
Error response from daemon: network with name omni_platform already exists
$ docker network rm omni_platform
Error response from daemon: network s8gh5qljyaxyvjeespfsz86gn not found
Any help is appreciated, thanks :)
First, restart Docker with this command:
sudo service docker restart
Second, list all the networks that already exist:
docker network ls
Then find the ID of the network you want to delete and remove it with:
docker network rm <ID>
Hope it was helpful.
Deleting "network not found" in docker
Inspect the network which we are unable to delete
docker network inspect [<id> or <name>]
Force-disconnect any endpoint still attached to the network:
docker network disconnect -f [<networkID> or <networkName>] [<endpointName> or <endpointId>]
Then delete unused networks:
docker network prune
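For example, applied to the omni_platform network from the question (the endpoint name here is just a placeholder for whatever the inspect output shows as still connected):
docker network inspect omni_platform
docker network disconnect -f omni_platform <endpointName>
docker network prune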
I am using Docker for the first time and I was trying to follow this:
https://docs.docker.com/get-started/part2/#tag-the-image
At one stage I was trying to connect to localhost with this command:
$ curl http://localhost:4000
which showed this error:
curl: (7) Failed to connect to localhost port 4000: Connection refused
However, I solved this with the following commands:
$ docker-machine ip default
$ curl http://192.168.99.100:4000
After that everything was going fine, but in the last part I tried to run the app with the following line, according to the tutorial:
$ docker run -p 4000:80 anibar/get-started:part1
But I got this error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_bohr (63f5691ef18ad6d6389ef52c56198389c7a627e5fa4a79133d6bbf13953a7c98): Bind for 0.0.0.0:4000 failed: port is already allocated.
You need to make sure that the previous container you launched is killed before launching a new one that uses the same port.
docker container ls
docker rm -f <container-name>
Paying tribute to IgorBeaz, you need to stop the currently running container. For that, you need to know its CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container with:
$ docker stop 12a32e8928ef
Finally you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
I tried all the answers above and none of them worked; in my case even docker container ls didn't show any containers running. It looks like the problem is that the docker proxy is still holding the ports even though no containers are running. In my case I was using Ubuntu. Here's what solved the problem for me; just run the following two commands:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
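Then start Docker again so it rebuilds its network state (assuming the same service manager as in the stop command above):
sudo service docker start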
I solved it this way:
First, I stopped all running containers:
docker-compose down
Then I ran lsof to find the process using the port (for me it was port 9000):
sudo lsof -i -P -n | grep 9000
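The output looks roughly like this (values are illustrative); the second column is the PID you will pass to kill:
node    1234  myuser   23u  IPv4 0x1a2b3c      0t0  TCP *:9000 (LISTEN)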
Finally, I "killed" the process (in my case, it was a VSCode extension):
kill -9 <process id>
The quick fix is to just restart Docker:
sudo service docker stop
sudo service docker start
The two answers above are correct, but they didn't work for me.
I kept seeing blank output for docker container ls.
Then I tried docker container ls -a, and it showed all the containers, both previously exited and running.
Then docker stop <container id> or docker container stop <container id> didn't work,
so I tried docker rm -f <container id> and it worked.
At that point docker container ls -a no longer showed the container.
When I used the nginx Docker image, I also got this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint recursing_knuth (9186f7d7f523732b99d3510029cde9679f3f3fe7b7eb5f612d54c4aacea58220): Bind for 0.0.0.0:8080 failed: port is already allocated.
And I solved it using the following commands:
$ docker container ls
$ docker stop [CONTAINER ID]
Then running the Docker container again (like this) works fine:
$ docker run -v $PWD/vueDemo:/usr/share/nginx/html -p 8080:80 -d nginx:alpine
You just need to stop the previous docker container.
I had the same problem with docker-compose. To fix it:
Kill the docker-proxy processes (example commands below)
Restart Docker
Start docker-compose again
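For example, on a Linux host this could look like the following (a sketch, assuming systemd and a stack normally started with docker-compose):
ps aux | grep [d]ocker-proxy
sudo pkill -f docker-proxy
sudo systemctl restart docker
docker-compose up -d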
docker ps will list the containers currently running on Docker. Find the one bound to the port you need and note down its CONTAINER ID.
Stop and remove that container using the following commands:
docker stop <container id>
docker rm <container id>
Now run docker-compose up and your services should run as you have freed the needed port.
On Linux, sudo systemctl restart docker solved the issue for me.
For anyone having this problem with docker-compose.
When you have more than one project (e.g. in different folders) with similar services, you need to run docker-compose stop in each of your other projects.
If you are using Docker-Desktop, you can quit Docker Desktop and then restart it. It solved the problem for me.
In my case, there was no process to kill.
Updating docker fixed the problem.
It might be a conflict between the same port specified in docker-compose.yml and docker-compose.override.yml, or the same port specified both explicitly and via an environment variable.
I had a docker-compose.yml with a container's ports specified using environment variables, and a docker-compose.override.yml with one of the same ports specified explicitly. Apparently Docker tried to open both on the same container. docker container ls -a listed neither port, because the container could not start and therefore never listed its ports.
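One way to spot this kind of duplication is to print the fully merged configuration that Compose actually uses and check each service's ports section for the same host port appearing twice:
docker-compose config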
For me the containers were not showing up as running, so NOTHING was using port 9010 (in my case), BUT Docker still complained.
I did not want to reset my Docker (for Windows) so what I did to resolve it was simply:
Remove the network (I knew that a container had previously been using this network with the port in question, 9010):
docker network ls
docker network rm <name or id>
I actually used a new network rather than the old (buggy) one, but that shouldn't be needed.
Restart Docker
That was the only way it worked for me. I can't explain it, but somehow the "old" network was still bound to that port (9010) and Docker kept on "blocking" it (whinging about it).
FOR WINDOWS:
I killed every process that Docker uses and restarted the Docker service in Services. My containers are working now.
The issue is ports that are still in use by Docker even though you are not using them at that moment.
On Linux, you can run sudo netstat -tulpn to see what is currently listening on that port. You can then choose to configure either that process or your Docker container to bind to a different port to avoid the conflict.
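For example (the output is illustrative; the last column shows the PID and program holding the port):
sudo netstat -tulpn | grep :4000
tcp6       0      0 :::4000                 :::*                    LISTEN      1234/docker-proxy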
Stopping the container didn't work for me either. I changed the port in docker-compose.yml.
For me, the problem was mapping the same port twice.
Due to a parameterized docker run, it ended up being something like:
docker run -p 4000:80 -p 4000:80 anibar/get-started:part1
Notice the double mapping of port 4000.
The error message is not informative enough in this case: it doesn't state that my own double mapping was the cause, nor that the port is no longer bound once the docker run command returns with a failure.
Don't forget the easiest fix of all....
Restart your computer.
I tried most of the above and still couldn't fix it. Then I just restarted my Mac and everything was back to normal.
For anyone still looking for a solution, just make sure you have bound your port the right way round in your docker-compose.yml.
It goes:
- <EXTERNAL SERVER PORT>:<INTERNAL CONTAINER PORT>
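For example (the port numbers are illustrative), a container listening internally on port 80 and exposed on host port 4000 would be:
ports:
- '4000:80'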
Had the same problem. Went to Docker for Mac Dashboard and clicked restart. Problem solved.
My case was dumb XD I was exposing port 80 twice :D
ports:
- '${APP_PORT:-80}:80'
- '${APP_PORT:-8080}:8080'
APP_PORT was defined (as 80), so host port 80 was exposed twice.
I tried almost all the solutions and found out the probable reason/solution. If you are using traefik or any other networking server, it internally runs a proxy for load balancing. Most people use the standard blueprint as-is, which works pretty fine, but it then hands load control over entirely to nginx or similar proxy servers. So stopping, killing (the networking server), or pruning might not help.
Solution for traefik with nginx:
sudo /etc/init.d/nginx stop
# or
sudo service nginx stop
# or
sudo systemctl stop nginx
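After stopping nginx, you can check that the port is actually free before starting your containers again, for example (port 80 here is illustrative; use whichever port Docker complained about):
sudo ss -tulpn | grep ':80'
docker-compose up -d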
Credits
How to stop docker processes
Making Docker Stop Itself <- Safe and Fast
This is the best way to stop containers and all unstoppable processes: make Docker do the job.
Go to Docker settings > Resources. Change any of the resources and click Apply & Restart.
Docker will stop itself and every one of its processes, even the most stubborn ones that might not be killed by commonly used commands such as kill, or wilder commands like rm suggested by others.
I ran into a similar problem before, and all the good, proper tips from my colleagues somehow did not work out. I share this safe trick whenever someone on my team asks me about this.
Error response from daemon: driver failed programming external connectivity on endpoint foobar
Bind for 0.0.0.0:8000 failed: port is already allocated
Hope this helps!
Simply restart your computer, so the Docker service gets restarted.
I am aware there are a lot of questions about running Docker on Windows; however, this question is about running the brand new Docker for Windows, on Windows.
In my case I am using Windows 10 Pro 64 bit. According to the site this version should be supported.
I have been following a tutorial I found here:
https://prakhar.me/docker-curriculum/
I also tried following the official guide of course: https://docs.docker.com/docker-for-windows/
In both tutorials I get the same error message when trying to assign a port, either using the -P parameter or when specifying a port explicitly with -p 8080:5000.
In the official guide I run docker run -d -p 80:80 --name webserver nginx and get:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint webserver (f9946544e4c6ad2dd9cb8cbccd251e4d48254e86562bd8e6da75c3bd42c7e45a): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:80:tcp:172.17.0.2:80: input/output error.
Following the unofficial guide I run docker run -p 8888:5000 prakhar1989/catnip and get basically the same error:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint focused_swartz (48a0c005779c6e89bf525ead2ecff44a7f092495cd22ef7d19973002963cb232): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:8888:tcp:172.17.0.2:5000: input/output error.
If I don't try to assign a port the container will run, but then I don't know how to access it.
The docker version I am running:
Docker version 1.12.3, build 6b644ec
docker-compose version 1.8.1, build 004ddae
docker-machine.exe version 0.8.2, build e18a919
Any help would be very appreciated. Thank you.
Here's a new twist.
The last Windows 10 update (Fall Creators Update, 2017) has a new "feature": it automatically starts any applications that were running when you last shut down.
This reconstitutes Docker for Windows in a bad state. That made it appear those ports were in use by something else, when it was actually the ghost of itself. This explained why those ports were still in use even though I had stopped/started my containers and even rebooted!
The solution in this case is to simply restart Docker daemon.
To prevent this after the next shutdown, don't use the shutdown button. Type this instead:
shutdown /s /t 0
This bypasses the new feature.
See the answer from Jason[MS] in this thread:
https://answers.microsoft.com/en-us/insider/forum/insider_wintp-insider_perf-insiderplat_pc/programs-autostart-after-boot-in-windows-10-fall/09dd8d3e-7b36-45d1-9181-6587dd5d53ab
Here's one guy's workaround (from the end of this thread - haven't tried it myself):
http://www.icttoolbox.nl/info/stop-windows-10-creator-fall-reopening-programs-reboot/
Restarting the Docker daemon fixes this problem temporarily, but to get rid of it ultimately I had to disable Windows 10 fast startup, which is the feature #biscuit314 described.
To disable Windows 10 fast startup, go to Control Panel > Power Options > Choose what the power buttons do > Change settings that are currently unavailable, uncheck "Turn on fast startup (recommended)", and hit Save changes.
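If you prefer the command line, disabling hibernation from an elevated prompt also turns fast startup off, since fast startup relies on the hibernation file (this is an alternative to the Control Panel route above):
powercfg /hibernate off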
This is caused by a port numbering conflict; see the GitHub issue here: https://github.com/docker/compose/issues/3277
Essentially the port is in use! The reason resetting worked is that it wiped the other mappings.
1) Stop all the running containers:
docker stop $(docker ps -a -q)
2) Stop Docker on your machine and restart it.
Then run the required command. This will solve the issue.
If it's on Windows, please restart Docker.
This fixed the issue for me.
For Linux/Debian users:
Use docker stop $(docker ps -a -q) only if you are sure you want to stop all the containers. If so, you can then run docker rm $(docker ps -a -q) to remove the containers.
Then stop the Docker daemon: systemctl stop docker
Then start the Docker daemon: systemctl start docker
Also verify whether the Docker daemon is up: service docker status
After following all the steps mentioned above, you should be fine.
I got the same issue before on Windows 10.
Restart Docker; it works.
Try stopping Docker and starting it again in administrator mode. After it starts, open PowerShell in administrator mode as well.
Because the error mentions "mkdir", maybe this will solve your problem. I'm not sure, but it worked for me.
When using -P, a port conflict does not seem to be the reason for the error, since -P chooses ports randomly. The error itself wasn't very friendly, but because I saw the word "mkdir" in it I imagined it might be a permission error; that's why I restarted Docker in administrator mode and started PowerShell in administrator mode.
I tried all the suggestions on this issue: killing all the containers, restarting Docker Desktop, disabling "Fast Startup," restarting my computer, making sure "Experimental Features" were disabled. None of that stuff worked.
I did eventually get it running. Here are some things you may want to try (because I'm not sure what actually fixed it).
Find "Docker Desktop" and right-click to "Run as Administrator..."
Pay attention to the port that it's complaining about. Some people say this could just be Docker's unfriendly way of saying "that port is in use." In my case, the port was 80. I went into the Services on Windows Pro and disabled the "World Wide Web Publishing Service" just to be safe.
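If you want to confirm what is actually holding the port on Windows before disabling any services (the PID shown by the first command is illustrative), you can run:
netstat -ano | findstr :80
tasklist /FI "PID eq 1234"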
If you are here because you have this issue in Visual Studio 2019:
According to this post, the VS team is preparing a fix for this issue in version 16.5; meanwhile, you can add the property "publishAllPorts": true in your launchSettings.json, for example:
"Docker": {
"commandName": "Docker",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}:44374", #<== Set a fixed port
"environmentVariables": {
"ASPNETCORE_URLS": "https://+:44374;https://+:5000",
"ASPNETCORE_HTTPS_PORT": "44374"
},
"publishAllPorts": true, #<== This is equivalent to the -P flag in 'docker run'
"useSSL": true
}
Notice that the property "httpPort": XYZT is not defined. Having it defined will make the workaround not work.
It worked for me with this setup:
Windows 10 1709 Build 16299.1747 with Fast Start OFF
Docker Desktop 2.2.05 (43884)
Docker Engine 19.03.8
Visual Studio 2019 Enterprise 16.5.4.
Microsoft.VisualStudio.Azure.Containers.Tools.Targets 1.10.8
I realized that the command VS was creating contained the -p parameter twice, once with the port I specified and once with port 80, like this: -p 3010:80 -p 3010:3010.
After adding publishAllPorts it now creates the container and I can remotely debug it.
This is what worked for me after trying everything. It seems we need to kill the running process.
Exit Docker: right-click the icon and choose Quit Docker Desktop.
Open the Windows Task Manager, then find and kill the process com.docker.backend.exe (a command-line equivalent is shown below).
Restart Docker: double-click the icon to open Docker Desktop.
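For step 2, if you prefer the command line over Task Manager, the same process can be killed from an elevated prompt, for example:
taskkill /F /IM com.docker.backend.exe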
I'm playing around with Docker 1.12 now; I created a service and noticed there is a "preparing" stage when I ran docker service tasks xxx.
I can only guess that on this stage the images are being pulled or updated.
My question is: how can I see the logs for this stage? Or more generally: how can I see the logs for docker service tasks?
I have been using docker-machine for emulating different "hosts" in my development environment.
This is what I did to figure out what was going on during this "Preparing" phase for my services:
docker service ps <serviceName>
You should see the nodes (machines) where your service was scheduled to run. Here you'll see the "Preparing" message.
Use docker-machine ssh to connect to a particular machine:
docker-machine ssh <nameOfNode/Machine>
Your prompt will change. You are now inside another machine.
Inside this other machine do this:
tail -f /var/log/docker.log
You'll see the "daemon" log for that machine.
There you'll see whether that particular daemon is doing the "pull", or whatever else it is doing as part of the service preparation.
In my case, I found something like this:
time="2016-09-05T19:04:07.881790998Z" level=debug msg="pull progress map[progress:[===========================================> ] 112.4 MB/130.2 MB status:Downloading
Which made me realise that it was just downloading some images from my docker account.
Your assumption (about pulling during preparation) is correct.
There is no log command yet for tasks, but you could certainly connect to that daemon and do docker logs in the regular way.
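For example, once you know which node the task landed on (names in angle brackets are placeholders):
docker-machine ssh <nameOfNode>
docker ps --filter name=<serviceName>
docker logs -f <containerId>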