I'm exploring Docker 17.06.
I've installed Docker on CentOS 7 and created a container, started with the default bridge. I can ping both host adapters, but not the outside world, e.g. www.google.com.
All the advice out there is based on older versions of Docker and its iptables settings.
I would like to understand what is required to be able to ping the outside world, please.
TIA!
If you are able to ping www.google.com from the host machine, try following these steps:
Run on the host machine:
sudo ip addr show docker0
You will get output that includes:
inet 172.17.2.1/16 scope global docker0
The docker host has the IP address 172.17.2.1 on the docker0 network interface.
Then start a container:
docker run --rm -it ubuntu:trusty bash
and run:
ip addr show eth0
The output will include:
inet 172.17.1.29/16 scope global eth0
Your container has the IP address 172.17.1.29. Now look at the routing table:
run:
route
The output will include:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.2.1      0.0.0.0         UG    0      0        0 eth0
This means the IP address of the Docker host, 172.17.2.1, is set as the default route and is reachable from your container.
Now try pinging your host machine's IP:
root@e21b5c211a0c:/# ping 172.17.2.1
PING 172.17.2.1 (172.17.2.1) 56(84) bytes of data.
64 bytes from 172.17.2.1: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 172.17.2.1: icmp_seq=2 ttl=64 time=0.211 ms
64 bytes from 172.17.2.1: icmp_seq=3 ttl=64 time=0.166 ms
If this works, you'll most probably be able to ping www.google.com as well.
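If the host ping works but external addresses still fail, one extra check worth making (my addition, not part of the original steps) is that IP forwarding is enabled on the host, since the bridge's NAT depends on it:
sysctl net.ipv4.ip_forward            # should print: net.ipv4.ip_forward = 1
sudo sysctl -w net.ipv4.ip_forward=1  # enable it temporarily if it printed 0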
Hope it will help!
In my case, restarting the Docker daemon helped:
sudo systemctl restart docker
If iptables is not the cause and you have no restriction preventing a change of the container's network mode, set it to "host" mode. This should solve the issue.
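For example, a minimal sketch (image and command are illustrative):
docker run --rm -it --network host ubuntu:trusty bash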
Please verify your existing iptables rules:
iptables --list
It should show you the list of rules with source and destination details:
target       prot opt  source      destination
DOCKER-USER  all  --   anywhere    anywhere
If both source and destination show anywhere, the container should be able to ping outside IPs (anywhere is the default).
If not, use this command to adjust the DOCKER-USER chain:
iptables -I DOCKER-USER -i eth0 -s 0.0.0.0/0 -j ACCEPT
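You can then verify that the rule landed at the top of the chain (output and rule numbers will vary per system):
sudo iptables -L DOCKER-USER -n --line-numbers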
Hope this will help!
I had a similar problem: an API container needed a connection to the outside, but the other containers did not. So my option was to add the flag --dns 8.8.8.8 to the docker run command, and with that the container can ping the outside. I consider this a solution for a single container; if you need it for more containers, other answers may be better (a daemon-wide sketch follows the flag list below). Here is the documentation. And a full example line:
docker run -d --rm -p 8080:8080 --dns 8.8.8.8 <docker-image-name>
where:
-d, detached mode, to run the container in the background
--rm, remove the container when it stops (careful: if you are testing and may need to inspect the logs with docker logs, don't use it)
-p, publish the port ( <host-port> : <container-port> )
--dns, the DNS server, so the container can resolve internet domains
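If you later need the same DNS for every container, a daemon-wide alternative (my sketch, not part of the original answer) is to set it in /etc/docker/daemon.json and restart the daemon:
{
  "dns": ["8.8.8.8"]
}
sudo systemctl restart docker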
On my Fedora 32 machine itself, DNS works properly; the lookup succeeds when I ping google.com:
PING google.com (172.217.160.174) 56(84) bytes of data.
64 bytes from bom05s12-in-f14.1e100.net (172.217.160.174): icmp_seq=1 ttl=117 time=41.5 ms
64 bytes from bom05s12-in-f14.1e100.net (172.217.160.174): icmp_seq=2 ttl=117 time=47.2 ms
I built the following simple Docker image, using the default bridge network. (I need the bridge network; my setup works when I use the host network. And the Dockerfile will have more commands later.)
FROM tailor/docker-libvips:node-10.9
docker build --tag dinuka/video-file-service-test-sandbox:node-10.9 .
docker run -dit --name video-test-1 dinuka/video-file-service-test-sandbox:node-10.9
I logged in to the container using the following command:
docker attach video-test-1
After that I tried to ping an IP address, which succeeds:
/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=42.5 ms
But it does not work for a domain:
/# ping google.com
ping: google.com: Temporary failure in name resolution
The container's DNS is correct; it is the same as my machine's name server:
/# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.1.1
My machine's OS is Fedora 32. I have disabled SELinux and firewalld. I have tried many solutions from Stack Overflow, but none of them solve this.
You need to manually add masquerading to the network interface:
ZONE=$(sudo firewall-cmd --get-zone-of-interface=<internet facing interface>)
sudo firewall-cmd --zone=$ZONE --add-masquerade --permanent
success
sudo firewall-cmd --reload
success
sudo systemctl restart docker
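To confirm the change took effect, you can query the zone (it should print yes):
sudo firewall-cmd --zone=$ZONE --query-masquerade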
I am trying to access devices on my network with the .local domain, but it doesn't seem to work in Docker.
Ping from the host works:
$ ping test1.local
PING test1.local (192.168.1.90) 56(84) bytes of data.
64 bytes from 192.168.1.90 (192.168.1.90): icmp_seq=1 ttl=255 time=1.41 ms
64 bytes from 192.168.1.90 (192.168.1.90): icmp_seq=2 ttl=255 time=1.54 ms
Docker daemon config:
$ cat /etc/docker/daemon.json
{
"dns": ["192.168.1.1","8.8.8.8"]
}
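For completeness: a dns setting in daemon.json only takes effect after the daemon is restarted, e.g.:
sudo systemctl restart docker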
If I try to ping test1.local from Docker:
$ sudo docker run --network host busybox ping -c 3 test1.local
ping: bad address 'test1.local'
Pinging the device by IP works:
$ sudo docker run --network host busybox ping -c 3 192.168.1.90
PING 192.168.1.90 (192.168.1.90): 56 data bytes
64 bytes from 192.168.1.90: seq=0 ttl=255 time=4.855 ms
64 bytes from 192.168.1.90: seq=1 ttl=255 time=1.566 ms
So I assume something is wrong with name resolution.
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.1
search localdomain
Any ideas how to resolve this issue?
Try running your command without the --network host argument; the problem is in DNS resolution.
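For example, reusing the busybox test from the question, just without the host-network flag:
sudo docker run --rm busybox ping -c 3 test1.local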
When you use the default bridge (which is used if you omit the network parameter), containers inherit their DNS configuration from the host, and that is what you need:
https://docs.docker.com/v17.09/engine/userguide/networking/default_network/configure-dns/
When you use a user-defined bridge, Docker updates the DNS records to enable seamless communication between containers by name:
https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/
Unfortunately, I was unable to find an explicit explanation of how DNS works in host mode, so I assume that is where the problem lies.
I ran the Docker daemon with global IPv6 enabled for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After that I ran a container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container). But I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping docker0 bridge by it's IPv6 address.
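For reference, the inspect one-liner mentioned above (using the same format key that appears in the answer below) would look like:
docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' my-container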
The output of route -n -6 contains these lines:
Destination               Next Hop  Flag  Met   Ref  Use  If
xxxx:xxxx:xxxx:xxxx::/64  ::        U     256   0    0    docker0
xxxx:xxxx:xxxx:xxxx::/64  ::        U     1024  0    0    docker0
fe80::/64                 ::        U     256   0    0    docker0
The docker0 interface has a global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere; it is the global IPv6 prefix of my eth0 interface.
Does Docker require some additional configuration for accessing my containers via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the Neighbor Discovery Protocol is not configured. Other hosts do not know that your container is behind your host. I run the following after starting a container with IPv6 (on the host OS, in the ExecStartPost clauses of a systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' container_name) dev interface_name
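To verify the proxy entry was added (interface_name as above):
ip -6 neigh show proxy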
Beware of IPv6: Docker developers say in replies to bug reports that they do not have enough time to make IPv6 production-ready in version 1.10, and they say nothing about 1.11.
Maybe you are using the wrong ping command; for IPv6 it is ping6:
$ ping6 2607:f0d0:1002:51::4
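On newer systems where ping6 has been merged into ping, the equivalent (to the best of my knowledge) is the -6 flag:
ping -6 2607:f0d0:1002:51::4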
I am trying to set up 4 containers (with nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? For now only 3 containers are accessible; the 4th times out when accessed from the browser instead of showing the welcome page. I have added the IP routes needed.
Host is Ubuntu.
So when this happened, I thought it had something to do with the IP routes. So on the same system I installed Apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and served the correct documents.
So now I am stuck. What do I do now?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP mode.
auto eth0:1
iface eth0:1 inet static
    address 172.31.118.182
    netmask 255.255.255.0
auto eth1
iface eth1 inet static
    address 172.31.119.23
    netmask 255.255.255.0
auto eth1:1
iface eth1:1 inet static
    address 172.31.119.11
    netmask 255.255.255.0
It goes like this (the IPs are private, so I guess there is no problem sharing them here):
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands. All are just basic nginx containers, so when working they will show the default nginx page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
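Before reaching for routes, it may help to confirm the bindings Docker actually applied (a diagnostic of mine, not from the original post):
sudo docker port web4   # expected output along the lines of: 80/tcp -> 172.31.119.11:80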
--
Now, web1 and web2 become immediately accessible, but the containers bound to eth1 and eth1:1 are not. So I figured IP routes must be the issue and went ahead and added some routes:
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
This made web3 accessible as well, but not the one on eth1:1. So this is where I am stuck at the moment.
Let me first explain what I'm trying to do, as there may be multiple ways to solve this. I have two containers in Docker 1.9.0:
node001 (172.17.0.2) (sudo docker run --net=<<bridge or test>> --name=node001 -h node001 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup <<image>>)
node002 (172.17.0.3) (same command, with node002)
When I launch them with --net=bridge I get the correct value for SSH_CLIENT when I ssh from one to the other:
[root@node001 ~]# ssh root@172.17.0.3
root@172.17.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.17.0.2 56194 22
[root@node001 ~]# ping -c 1 node002
ping: unknown host node002
In Docker 1.8.3 I could also use the hostnames I supply when I start them; in 1.8.3 that last ping statement works!
In Docker 1.9.0 I don't see anything being added to /etc/hosts, and the ping statement fails. This is a problem for me. So I tried creating a custom network...
docker network create --driver bridge test
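You can confirm the subnet Docker picked for the new network (output will vary; in this case it turned out to be 172.18.0.0/16):
docker network inspect test   # look for the IPAM "Subnet" entry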
When I launch the two containers with --net=test I get a different value for SSH_CLIENT:
[root@node001 ~]# ssh root@172.18.0.3
root@172.18.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.1 57388 22
[root@node001 ~]# ping -c 1 node002
PING node002 (172.18.0.3) 56(84) bytes of data.
64 bytes from node002 (172.18.0.3): icmp_seq=1 ttl=64 time=0.041 ms
Note that the IP address is not node001's; it seems to represent the Docker host itself. The hosts file is correct though, containing:
172.18.0.2 node001
172.18.0.2 node001.test
172.18.0.3 node002
172.18.0.3 node002.test
My current workaround is using docker 1.8.3 with the default bridge network, but I want this to work with future docker versions.
Is there any way I can customize the test network to make it behave similarly to the default bridge network?
Alternatively:
Maybe make the default bridge network write out the /etc/hosts file in docker 1.9.0?
Any help or pointers towards different solutions will be greatly appreciated.
Edit: 21-01-2016
Apparently the problem is fixed in 1.9.1. With bridge in Docker 1.8 and with a custom network (--net=test) in 1.9.1, the behaviour is now correct:
[root@node001 tmp]# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.3 52162 22
Retried in 1.9.0 to see if I wasn't crazy, and yeah there the problem occurs:
[root@node001 tmp]# ip route
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
[root@node002 ~]# env | grep SSH_CLI
SSH_CLIENT=172.18.0.1 53734 22
So after removing/stopping/starting the instances the IP addresses were not exactly the same, but it can easily be seen that the SSH_CLIENT source IP is not correct in the last code block. Thanks @sourcejedi for making me re-check.
Firstly, I don't think it's possible to change any settings on the default network, i.e. to make it write /etc/hosts. You apparently can't delete the default networks, so you can't recreate them with different options.
Secondly
Docker is careful that its host-wide iptables rules fully expose containers to each other’s raw IP addresses, so connections from one container to another should always appear to be originating from the first container’s own IP address. docs.docker.com
I tried reproducing your issue with the random containers I've been playing with. Running wireshark on the bridge interface for the network, I didn't see my ping packets. From this I conclude my containers are indeed talking directly to each other; the host was not doing routing and NAT.
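If you don't have Wireshark handy, a rough equivalent (the br-XXXX bridge name is hypothetical; find the real one with ip link or docker network inspect):
sudo tcpdump -ni br-XXXX icmp   # watch the ping packets on the user-defined bridge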
You need to check the routes in your client container with ip route. Do you have a route for 172.18.0.0/16? If you only have a default route, it could try to send everything through the Docker host, which might get confused and do masquerading as if the container were talking to the outside world.
This might happen if you're running some network configuration in your privileged container. I don't know what's happening if you're just booting it with bash though.