docker: Error response from daemon: Address already in use - docker

I am creating a container with the following command:
docker run -it -p 81:80 -p 3307:3306 --net mynet123 --ip 172.18.0.22 -v /opt/lampp/htdocs:/var/www/html lamp-setia bash
Can someone share a solution?
Thanks in advance.

You can check what is already using a port by running:
lsof -i tcp:81
and
lsof -i tcp:3307
If necessary, you can kill that process with:
kill -9 [pid number]
After that, you can try to re-run that docker command.
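Alternatively, if the processes holding the ports are ones you want to keep, you could map the container to different host ports instead; a sketch using your original command, with 8081 and 3308 as arbitrary free host ports:
docker run -it -p 8081:80 -p 3308:3306 --net mynet123 --ip 172.18.0.22 -v /opt/lampp/htdocs:/var/www/html lamp-setia bash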

Another scenario that produces the exact same error is when the IP address is already in use. In my setup, I had created a network like this:
docker network create --subnet 172.28.5.0/24 cluster-test-net
and I was trying to start my docker container as below:
docker run -d --name wildfly1 --ip 172.28.5.1 -h wildfly1 -p 8080:8080 -p 9990:9990 --network=cluster-test-net wildfly-cluster-image
The reason I got the error was that docker had already assigned the IP address 172.28.5.1 to the host itself. I noticed this when I ran ifconfig on my host and found this entry in the output:
br-bb89994f6a73: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.28.5.1 netmask 255.255.255.0 broadcast 172.28.5.255
inet6 fe80::42:a2ff:fecd:81e9 prefixlen 64 scopeid 0x20<link>
ether 02:42:a2:cd:81:e9 txqueuelen 0 (Ethernet)
RX packets 4394 bytes 4695729 (4.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2342 bytes 175071 (175.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
So I fixed it by choosing a different IP address for my container (docker assigns the first address of the subnet to the bridge gateway on the host, so 172.28.5.1 was taken):
docker run -d --name wildfly1 --ip 172.28.5.10 -h wildfly1 -p 8080:8080 -p 9990:9990 --network=cluster-test-net wildfly-cluster-image
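If you are unsure which addresses docker has already reserved on a network, inspecting it should show both the gateway (in the IPAM section) and every attached container's address (in the Containers section):
docker network inspect cluster-test-net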

It seems that some other process is already holding the host ports that you are trying to map into the container. Consider using netstat -aon to find out whether existing processes are holding ports 81 and 3307 on the docker host.
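For example, on a Linux docker host something like this should reveal the owners of those ports (ss is assumed to be available; it replaces netstat on modern distributions):
ss -tulpn | grep -E ':(81|3307)\b'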

The port you have given in the docker run command might be assigned to some other process. Find out what is running there; if it is something unimportant, kill it, or else proceed with ports that are still available.
Regards

This error can also mean that another container is taking your container's IP. Stop all containers, reassign the conflicting IP, and then start your containers:
docker stop x
docker network connect --ip 172.24.0.4 yournetwork y
docker start y
docker start x
The order in which they start (or fail to start) will indicate which containers conflict. You can also run docker network inspect network_name to check whether the containers have the correct IPs, as in the sketch below.
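For example, a quick way to list each container's address on a network (network_name is a placeholder):
docker network inspect -f '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}' network_name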

Related

driver failed programming external connectivity on endpoint redis : Bind for 0.0.0.0:6379 failed: port is already allocated

I'm trying to run
/usr/bin/docker run --rm -v /var/data/redis:/data -v /var/data/conf/redis.conf:/usr/local/etc/redis/redis.conf --name redis -p 6379:6379 redis:5.0.3-alpine3.9
but I get:
/usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint redis (f16f19b7727a710fb6c96be566dac66ce26282982960d97faa28861c24fcf2fb): Bind for 0.0.0.0:6379 failed: port is already allocated.
When I try to check the ports used with netstat, I get:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
I have no docker containers running right now.
I don't understand this issue. What should I do?
[root@artik ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Steps I had to take to get everything working:
sudo service docker stop
sudo rm /var/lib/docker/network/files/local-kv.db
sudo service docker start
docker system prune
And then try again.
From your netstat output it's clear that there is one process holding port 6379:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
docker-proxy processes are created when you do port forwarding in docker run, which is true in your case: -p 6379:6379.
I suspect that you earlier ran a redis container that used port 6379, but that container was not properly deleted, which kept its docker-proxy process running, and hence you got port is already allocated.
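If that is what happened, one way to hunt down the stale container and remove it (the publish filter requires a reasonably recent docker version):
docker ps -a --filter publish=6379
docker rm -f <container-id>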
Hope this helps.
As DannyMoshe suggested, for anyone else: try this before you potentially mess up your whole setup:
sudo service docker stop
sudo service docker start
Remove the ports: section in the docker-compose file and let docker assign a host port by itself, or change the port mapping on the host side from 6379:6379 to 6378:6379; that worked for me. Before doing this you may need to remove containers that were already started: docker rm -f $(docker ps -a -q)
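For reference, a minimal sketch of the changed mapping in the docker-compose file (the service name is just an example):
version: "3"
services:
  redis:
    image: redis:5.0.3-alpine3.9
    ports:
      - "6378:6379"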

Docker container can not ping the outside world - iptables

I'm exploring Docker 17.06.
I've installed docker on CentOS 7 and created a container, started with the default bridge. I can ping both host adapters, but not the outside world, e.g. www.google.com.
All the advice out there is based on older versions of Docker and its iptables settings.
I would like to understand what is required to be able to ping the outside world.
TIA!
If you are able to ping www.google.com from the host machine, try the following steps.
Run on the host machine:
sudo ip addr show docker0
You will get output which includes:
inet 172.17.2.1/16 scope global docker0
The docker host has the IP address 172.17.2.1 on the docker0 network interface.
Then start the container:
docker run --rm -it ubuntu:trusty bash
and run
ip addr show eth0
The output will include:
inet 172.17.1.29/16 scope global eth0
Your container has the IP address 172.17.1.29. Now look at the routing table:
run:
route
The output will include:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.2.1 0.0.0.0 UG 0 0 0 eth0
This means the docker host's IP address, 172.17.2.1, is set as the default route and is reachable from your container.
Now try pinging your host machine's IP:
root@e21b5c211a0c:/# ping 172.17.2.1
PING 172.17.2.1 (172.17.2.1) 56(84) bytes of data.
64 bytes from 172.17.2.1: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 172.17.2.1: icmp_seq=2 ttl=64 time=0.211 ms
64 bytes from 172.17.2.1: icmp_seq=3 ttl=64 time=0.166 ms
If this works, you will most probably be able to ping www.google.com as well.
Hope it will help!
In my case, restarting the docker daemon helped:
sudo systemctl restart docker
If iptables is not the reason and nothing prevents you from changing the container's network mode, set it to "host" mode. This should solve the issue.
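A minimal way to test that, using alpine purely as a throwaway test image:
docker run --rm --network host alpine ping -c 1 www.google.com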
Please verify your existing iptables:
iptables --list
It should show you the list of rules with source and destination details.
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
If it says anywhere for both source and destination, the container should be able to ping outside IPs (anywhere is the default).
If not, use this command to adjust the DOCKER-USER chain:
iptables -I DOCKER-USER -i eth0 -s 0.0.0.0/0 -j ACCEPT
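You can then confirm the rule is in place with:
iptables -L DOCKER-USER -n --line-numbers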
Hope this will help!
I had a similar problem: one API container needed a connection to the outside, but the other containers did not. So my option was to add the flag --dns 8.8.8.8 to the docker run command, and with that the container can ping the outside. I consider this a solution for a single container; if you need it for more containers, the other answers may be better. The --dns flag is covered in the docker run documentation. A full example:
docker run -d --rm -p 8080:8080 --dns 8.8.8.8 <docker-image-name>
where:
-d: detached mode, runs the container in the background
--rm: removes the container when it stops (careful: if you are testing and may need to inspect the logs with docker logs, don't use it)
-p: specifies the port mapping (<host-port>:<container-port>)
--dns: lets the container resolve internet domains through the given DNS server
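A quick way to confirm DNS resolution works from inside such a container (alpine and google.com are arbitrary choices for this test):
docker run --rm --dns 8.8.8.8 alpine nslookup google.com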

Hit a service running on localhost from inside a docker image

I'm on macOS and I have a service running on my machine on localhost:8000.
Now I want to launch a docker image and hit this service from inside it.
I created a docker bridge and use it from inside, but it is not working.
Here are my steps:
My host ip:
ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 98:01:a7:b0:2b:41
inet 192.168.0.70 netmask 0xffffff00 broadcast 192.168.0.255
media: autoselect
status: active
I hit my service from the host:
curl localhost:8000 #this is working!
I create a bridge and use it:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet
docker run --rm -it -v "$(pwd):/src" --network=dockernet qatests /bin/bash
From inside docker, I do a curl, but it is not working:
curl 192.168.0.1:8000 #it's not working :-(
Any ideas?
You don't need to create a new network; you can use the default one (bridge).
Just check which IP is associated with the docker0 interface on your host with ip or ifconfig (in my case it is 172.17.42.1), and use that IP from inside the container:
$ curl 172.17.42.1:8000
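Note that on Docker for Mac (which the question is about) there is no docker0 interface on the host; recent versions instead provide the special DNS name host.docker.internal, so the following should also work from inside the container:
curl host.docker.internal:8000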
In the end, I've discovered that if I ping my PC's IP, I can reach it even from inside the docker image.
For convenience, I made a launch script which gets my current IP and launches the docker image, making my IP accessible under the "mymac" address.
So what I did is run:
MY_IP=$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}')
docker run --rm -it -v "$(pwd):/src" --add-host=mymac:$MY_IP qatests /bin/bash
Inside docker I can now run:
curl mymac:8000 #it works! now mymac is my pc outside docker

Can't ping docker IPv6 container

I ran the docker daemon to use global IPv6 for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After that I ran a docker container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container), but I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping the docker0 bridge by its IPv6 address.
The output of route -n -6 contains these lines:
Destination Next Hop Flag Met Ref Use If
xxxx:xxxx:xxxx:xxxx::/64 :: U 256 0 0 docker0
xxxx:xxxx:xxxx:xxxx::/64 :: U 1024 0 0 docker0
fe80::/64 :: U 256 0 0 docker0
The docker0 interface has a global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere, and it's the global IPv6 prefix of my eth0 interface.
Does docker require some additional configuration for accessing my containers via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the neighbor discovery protocol is not configured: other hosts do not know that your container sits behind your host. I run this after starting a container with IPv6 (on the host OS, in the ExecStartPost clauses of a systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' container_name) dev interface_name
Beware of IPv6: docker developers say in replies to bug reports that they do not have enough time to make IPv6 production-ready in version 1.10, and say nothing about 1.11.
Maybe you are using the wrong ping command; for IPv6 it is ping6:
$ ping6 2607:f0d0:1002:51::4

Docker 1.9.0 "bridge" versus a custom bridge network results in difference in hosts file and SSH_CLIENT env variable

Let me first explain what I'm trying to do, as there may be multiple ways to solve this. I have two containers in docker 1.9.0:
node001 (172.17.0.2) (sudo docker run --net=<<bridge or test>> --name=node001 -h node001 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup <<image>>)
node002 (172.17.0.3) (started with the same command, substituting node002)
When I launch them with --net=bridge I get the correct value for SSH_CLIENT when I ssh from one to the other:
[root@node001 ~]# ssh root@172.17.0.3
root@172.17.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.17.0.2 56194 22
[root@node001 ~]# ping -c 1 node002
ping: unknown host node002
In docker 1.8.3 I could also use the hostnames I supply when I start them; in 1.8.3 that last ping statement works!
In docker 1.9.0 I don't see anything being added in /etc/hosts, and the ping statement fails. This is a problem for me. So I tried creating a custom network...
docker network create --driver bridge test
When I launch the two containers with --net=test I get a different value for SSH_CLIENT:
[root@node001 ~]# ssh root@172.18.0.3
root@172.18.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.1 57388 22
[root@node001 ~]# ping -c 1 node002
PING node002 (172.18.0.3) 56(84) bytes of data.
64 bytes from node002 (172.18.0.3): icmp_seq=1 ttl=64 time=0.041 ms
Note that the IP address is not node001's; it seems to represent the docker host itself. The hosts file is correct though, containing:
172.18.0.2 node001
172.18.0.2 node001.test
172.18.0.3 node002
172.18.0.3 node002.test
My current workaround is using docker 1.8.3 with the default bridge network, but I want this to work with future docker versions.
Is there any way I can customize the test network to make it behave similarly to the default bridge network?
Alternatively:
Maybe make the default bridge network write out the /etc/hosts file in docker 1.9.0?
Any help or pointers towards different solutions will be greatly appreciated.
Edit: 21-01-2016
Apparently the problem is fixed in 1.9.1: with bridge in docker 1.8, and with a custom network (--net=test) in 1.9.1, the behaviour is now correct:
[root@node001 tmp]# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.3 52162 22
Retried in 1.9.0 to see if I wasn't crazy, and yeah there the problem occurs:
[root@node001 tmp]# ip route
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
[root@node002 ~]# env|grep SSH_CLI
SSH_CLIENT=172.18.0.1 53734 22
So after removing/stopping/starting the instances the IP addresses were not exactly the same, but it can easily be seen that the SSH_CLIENT source IP is not correct in the last code block. Thanks @sourcejedi for making me re-check.
Firstly, I don't think it's possible to change any settings on the default network, i.e. to write /etc/hosts. You apparently can't delete the default networks, so you can't recreate them with different options.
Secondly
Docker is careful that its host-wide iptables rules fully expose containers to each other’s raw IP addresses, so connections from one container to another should always appear to be originating from the first container’s own IP address. docs.docker.com
I tried reproducing your issue with the random containers I've been playing with. Running wireshark on the bridge interface for the network, I didn't see my ping packets. From this I conclude my containers are indeed talking directly to each other; the host was not doing routing and NAT.
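If you want to repeat that capture without wireshark, tcpdump on the network's bridge interface does the same job (the br-... name below is a placeholder; ip link shows the real one for your network):
tcpdump -ni br-xxxxxxxxxxxx icmp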
You need to check the routes in your client container with ip route. Do you have a route for 172.18.0.0/16? If you only have a default route, it could be trying to send everything through the docker host, and the host might get confused and do masquerading as if the container were talking to the outside world.
This might happen if you're running some network configuration in your privileged container. I don't know what would happen if you're just booting it with bash, though.
