I have an external server, abc.internalcorp.com, that I'm planning to connect to from Docker.
I tried to ping that server from the host machine and it works:
ping abc.internalcorp.com
PING abc.internalcorp.com (172.xx.xx.xx) 56(84) bytes of data.
64 bytes from abc.internalcorp.com (172.xx.xx.xx): icmp_seq=1 ttl=47 time=32.6 ms
^C
--- abc.internalcorp.com ping statistics ---
2 packets transmitted, 1 received, 50% packet loss, time 999ms
rtt min/avg/max/mdev = 32.673/32.673/32.673/0.000 ms
But when I execute the same command from my Docker container, I see no response. How could this be?
docker exec -ti docker-container bash
root@b7bdf44feb7f:/# ping abc.internalcorp.com
PING abc.internalcorp.com (172.xx.xx.xx) 56(84) bytes of data.
<No response>
This ping is just a test; abc.internalcorp.com is actually a database server, and I'm unable to connect to it. I can connect to other database servers, though.
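Since ping is only a stand-in for the real problem, a TCP-level check against the database port is a closer test. A hedged sketch, where 5432 stands in for the actual database port:
docker exec -ti docker-container bash -c 'timeout 5 bash -c "</dev/tcp/abc.internalcorp.com/5432" && echo reachable || echo unreachable'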
Update:
I changed bip in ~/.docker/daemon.json:
{
  "bip": "193.168.1.5/24",
  "registry-mirrors": [],
  "insecure-registries": [],
  "debug": true,
  "experimental": false
}
But I still have the same ping issue
docker exec -ti docker-container bash
root@b7bdf44feb7f:/# ip addr show eth0
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c1:a8:01:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 193.168.1.1/24 brd 193.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
Edit
Figured out the issue: there were other networks in my Docker setup that had the same subnets. I deleted them and it works fine now.
You need to do two things.
1. Change the network by editing daemon.json:
{
  "registry-mirrors": [],
  "insecure-registries": [],
  "debug": true,
  "experimental": false,
  "bip": "12.12.0.1/24"
}
2. Delete other networks in Docker which might be conflicting with the IP range. You can check whether any other network is in the same range using:
docker inspect 'networkname'
The 172.x.x.x ranges are the defaults for Docker's internal networks; if you are using the same range in your local network, you need to specify a different one for the Docker network.
https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/
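As a sketch of how to spot such a clash (the network name here is a placeholder): list the networks, print a suspect network's subnet to compare against your LAN ranges, and remove the conflicting network once no containers are attached to it:
docker network ls
docker network inspect somenetwork --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
docker network rm somenetwork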
Find the DNS server which resolves abc.internalcorp.com. Add it as a DNS server for your Docker containers by updating daemon.json as below, where x.x.x.x is that DNS server:
{
  "dns": ["x.x.x.x"]
}
Restart the Docker daemon, then try the ping from the container again.
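On a systemd-based host that would look something like this (a sketch; note that restarting the daemon stops running containers unless live-restore is enabled, so the container may need to be started again first):
sudo systemctl restart docker
docker start docker-container
docker exec -ti docker-container ping -c2 abc.internalcorp.com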
Related
When deploying docker-compose with multiple networks, only the first interface has access to the outside world.
version: "3.9"
services:
  speedtest:
    build:
      context: .
      dockerfile: speedtest.Dockerfile
    tty: true
    networks:
      - eth0
      - eth1
networks:
  eth0:
  eth1:
Running ping inside the container with, for example, ping -I eth0 google.com works fine.
However, running ping -I eth1 google.com gives this result:
PING google.com (142.250.200.238) from 172.21.0.2 eth1: 56(84) bytes of data.
From c4d3b238f9a1 (172.21.0.2) icmp_seq=1 Destination Host Unreachable
From c4d3b238f9a1 (172.21.0.2) icmp_seq=2 Destination Host Unreachable
Any idea how to have egress to the internet on both networks?
Tried multiple combinations for creating the network, with external, bridge with custom config etc...
Update
After larsks' answer: having added an ip route for eth1 and running tcpdump -i any, packets are coming in correctly:
11:26:12.098918 eth1 Out IP 8077ec32b69d > dns.google: ICMP echo request, id 3, seq 1, length 64
11:26:12.184195 eth1 In IP dns.google > 8077ec32b69d: ICMP echo reply, id 3, seq 1, length 64
But still 100% packet loss...
The problem here is that while there are two interfaces inside the container, there is only a single default route. Given a container with two interfaces, like this:
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
70: eth0@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:10:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.16.2/20 brd 192.168.31.255 scope global eth0
valid_lft forever preferred_lft forever
72: eth1@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:30:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.48.2/20 brd 192.168.63.255 scope global eth1
valid_lft forever preferred_lft forever
The routing table looks like this:
/ # ip route
default via 192.168.16.1 dev eth0
192.168.16.0/20 dev eth0 proto kernel scope link src 192.168.16.2
192.168.48.0/20 dev eth1 proto kernel scope link src 192.168.48.2
When you run ping google.com or ping -I eth0 google.com, in both
cases your ICMP request egresses through eth0, goes to the
appropriate default gateway, and eventually works its way to
google.com.
But when you run ping -I eth1 google.com, there's no way to reach
the default gateway from that address; the gateway is only reachable
via eth0. Since the kernel can't find a useful route, it attempts to
connect directly. If we run tcpdump on the host interface that is
the other end of eth1, we see:
23:47:58.035853 ARP, Request who-has 142.251.35.174 tell 192.168.48.2, length 28
23:47:59.083553 ARP, Request who-has 142.251.35.174 tell 192.168.48.2, length 28
[...]
That's the kernel saying, "I've been told to connect to this address
using this specific interface, but there's no route, so I'm going to
assume the address is on the same network and just ARP for it".
Of course that fails.
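You can confirm this from inside the container as well; a quick check (a sketch, using the same prompt style as above) should show a FAILED neighbour entry for the address ping was ARPing for:
/ # ip neigh show dev eth1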
We can make this work by adding an appropriate route. You need to run
a privileged container to do this (or at least have
CAP_NET_ADMIN):
ip route add default via 192.168.48.1 metric 101
(The gateway address is the .1 address of the network associated with eth1.)
We need the metric setting to differentiate this from the existing
default route; without that the command would fail with RTNETLINK answers: File exists.
After running that command, we have:
/ # ip route
default via 192.168.16.1 dev eth0
default via 192.168.48.1 dev eth1 metric 101
192.168.16.0/20 dev eth0 proto kernel scope link src 192.168.16.2
192.168.48.0/20 dev eth1 proto kernel scope link src 192.168.48.2
And we can successfully ping google.com via eth1:
/ # ping -c2 -I eth1 google.com
PING google.com (142.251.35.174) from 192.168.48.2 eth1: 56(84) bytes of data.
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=1 ttl=116 time=8.87 ms
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=2 ttl=116 time=8.13 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 8.127/8.497/8.868/0.370 ms
Having gone through all that, I'll add that I don't see many
situations in which it would be necessary: typically you use
additional networks in order to isolate things like database servers,
etc, while using the "primary" interface (the one with which the
default route is associated) for outbound requests.
I tested all this using the following docker-compose.yaml:
version: "3"
services:
  sleeper:
    image: alpine
    cap_add:
      - NET_ADMIN
    command:
      - sleep
      - inf
    networks:
      - eth0
      - eth1
networks:
  eth0:
  eth1:
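If you wanted the extra route added automatically at container start, one hedged variation on this file (assuming, as above, that the eth1 network's gateway is the .1 address and that eth1 is attached before the command runs) replaces the command with:
    command:
      - sh
      - -c
      - ip route add default via 192.168.48.1 metric 101; sleep inf
The service still needs NET_ADMIN for the ip route add to succeed.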
So I'm trying to create a network (docker network create) so that its traffic will pass through a specific physical network interface (NIC); I have two: <iface1> (internal) and <iface2> (external).
I need the traffic of both NICs to be physically separated.
METHOD 1:
I think macvlan is the driver I should use to create such a network.
Most of the solutions I found on the internet refer to Pipework (now deprecated) and temporary docker-plugins (also deprecated).
What has most closely helped me is this:
docker network create -d macvlan \
  --subnet 192.168.0.0/16 \
  --ip-range 192.168.2.0/24 \
  -o parent=wlp8s0.1 \
  -o macvlan_mode=bridge \
  macvlan0
Then, in order for the container to be visible from the host, I need to do this on the host:
sudo ip link add macvlan0 link wlp8s0.1 type macvlan mode bridge
sudo ip addr add 192.168.2.10/16 dev macvlan0
sudo ifconfig macvlan0 up
Now the container and the host see each other :) BUT the container can't access the local network.
The idea is that the container can access the internet.
METHOD 2:
As I will use <iface2> manually, I'm OK if by default the traffic goes through <iface1>.
But no matter in which order I bring the NICs up (I also tried removing the kernel module for <iface2> temporarily), all traffic is always taken over by the external NIC <iface2>.
I found that this happens because the routing table updates automatically at some "random" time.
In order to force the traffic to go through <iface1>, I have to do this on the host:
sudo route del -net <net> gw 0.0.0.0 netmask 255.0.0.0 dev <iface2>
sudo route del default <iface2>
Now I can verify (in several ways) that the traffic only goes through <iface1>.
But the moment the routing table updates (automatically), all traffic moves back to <iface2>. Damn!
I'm sure there's a way to make the routing table "static" or "persistent".
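If a network management daemon is what keeps re-adding those routes, one hedged possibility (assuming NetworkManager is what manages <iface2>) is to mark the interface unmanaged so its routes stop reappearing:
# hypothetical: stop NetworkManager from touching <iface2> and re-adding its default route
nmcli device set <iface2> managed no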
EDIT (18/Jul/2018):
The main idea is to be able to access the internet through a Docker container using only one of the two available physical network interfaces.
My environment:
On the host I created a virbr0 bridge for VMs with IP address 192.168.122.1, and brought up a VM instance with interface ens3 and IP address 192.168.122.152.
192.168.122.1 is the gateway for the 192.168.122.0/24 network.
Inside the VM:
Create network:
# docker network create --subnet 192.168.122.0/24 --gateway 192.168.122.1 --driver macvlan -o parent=ens3 vmnet
Create docker container:
# docker run -ti --network vmnet alpine ash
Check:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:c0:a8:7a:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 192.168.122.152
PING 192.168.122.152 (192.168.122.152): 56 data bytes
^C
--- 192.168.122.152 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1): 56 data bytes
64 bytes from 192.168.122.1: seq=0 ttl=64 time=0.471 ms
^C
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.471/0.471/0.471 ms
OK, I bring up another VM with IP address 192.168.122.73 and check from the Docker container:
/ # ping 192.168.122.73 -c2
PING 192.168.122.73 (192.168.122.73): 56 data bytes
64 bytes from 192.168.122.73: seq=0 ttl=64 time=1.630 ms
64 bytes from 192.168.122.73: seq=1 ttl=64 time=0.984 ms
--- 192.168.122.73 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.984/1.307/1.630 ms
From the Docker instance I can't ping the VM's own interface, but I can access the rest of the local network:
/ # ip n|grep 192.168.122.152
192.168.122.152 dev eth0 used 0/0/0 probes 6 FAILED
This is the well-known macvlan limitation: a macvlan container cannot talk to the parent interface itself, so the host (here, the VM) needs its own macvlan interface on the same parent. On the VM I add a macvlan0 NIC:
# ip link add macvlan0 link ens3 type macvlan mode bridge
# ip addr add 192.168.122.100/24 dev macvlan0
# ip l set macvlan0 up
From the Docker container I can now ping 192.168.122.100:
/ # ping 192.168.122.100 -c2
PING 192.168.122.100 (192.168.122.100): 56 data bytes
64 bytes from 192.168.122.100: seq=0 ttl=64 time=0.087 ms
64 bytes from 192.168.122.100: seq=1 ttl=64 time=0.132 ms
--- 192.168.122.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.109/0.132 ms
I have Python unittest code inside a CentOS 6.8 container. The unittest code needs to bind to 127.0.0.1.
This container is run with an overlay network instead of host networking. The failure seems to go away if I switch docker run to --network host.
Inside the container I do see a loopback interface:
[rtpbuild@bldrh6rtp89-rh6 /]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
Any suggestion on why binding to 127.0.0.1 doesn't work when the overlay network is used but works with the host network? And how can I make it work under the overlay network?
It should work; you need to elaborate a bit more on how you run your containers. As an example: a Docker swarm cluster, a manager node, and an overlay network in swarm scope.
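For reference, a minimal sketch of creating such a network on a manager node (the name is a placeholder; an overlay network must be created with --attachable for standalone docker run containers to join it):
docker network create --driver overlay --attachable ${network}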
Command
docker run -it --rm --network ${network} -p 127.0.0.1:555:8080 caa06d9c/echo
Request
curl 127.0.0.1:555/
Reply
{
  "method": "GET",
  "path": "/",
  "ip": "172.17.0.1"
}
Command
docker run -it --rm --network ${network} -p 127.0.0.1:555:8080 caa06d9c/echo --ip 127.0.0.1
Check
docker exec ${container} netstat -ptunl
Result
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 1/python3
So Docker correctly assigns interfaces and ports.
Problem: the internet isn't accessible from within a Docker container.
On my bare metal Ubuntu 17.10 box...
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=10.8 ms
but...
$ docker run --rm debian:latest ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from 7911d89db6a4 (192.168.220.2): Destination Host Unreachable
I think the root cause is that I had to set up a non-default network for docker0, because the default range around 172.17.0.1 was already in use within my organization.
My /etc/docker/daemon.json file needs to look like this in order for Docker to start successfully:
$ cat /etc/docker/daemon.json
{
  "bip": "192.168.220.1/24",
  "fixed-cidr": "192.168.220.0/24",
  "fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
  "mtu": 1500,
  "default-gateway": "192.168.220.10",
  "default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
  "dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
  "debug": true
}
Note that the default-gateway setting looks wrong. However, if I correct it to read 192.168.220.1, the Docker service fails to start. Running dockerd at the command line directly produces the most helpful logging, thus:
With "default-gateway": "192.168.220.1" in daemon.json...
$ sudo dockerd
-----8<-----
many lines removed
----->8-----
Error starting daemon: Error initializing network controller: Error creating default "bridge" network: failed to allocate secondary ip address (DefaultGatewayIPv4:192.168.220.1): Address already in use
Here's the info for docker0...
$ ip addr show docker0
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:10:bc:66:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.220.1/24 brd 192.168.220.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:10ff:febc:66fd/64 scope link
valid_lft forever preferred_lft forever
And routing table...
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.62.131.1 0.0.0.0 UG 100 0 0 enp14s0
10.62.131.0 0.0.0.0 255.255.255.0 U 100 0 0 enp14s0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 enp14s0
192.168.220.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
Is this the root cause? How do I achieve the seemingly mutually exclusive states of:
docker0 interface address is x.x.x.1
gateway address is the same, x.x.x.1
dockerd runs OK
?
Thanks!
Longer answer to Wedge Martin's question. I made the changes to daemon.json as you suggested:
{
  "bip": "192.168.220.2/24",
  "fixed-cidr": "192.168.220.0/24",
  "fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
  "mtu": 1500,
  "default-gateway": "192.168.220.1",
  "default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
  "dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
  "debug": true
}
so at least the daemon starts, but I still don't have internet access within a container...
$ docker run -it --rm debian:latest bash
root@bd9082bf70a0:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:dc:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.220.3/24 brd 192.168.220.255 scope global eth0
valid_lft forever preferred_lft forever
root@bd9082bf70a0:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from bd9082bf70a0 (192.168.220.3): Destination Host Unreachable
It turned out that less is more. Simplifying daemon.json to the following resolved my issues.
{
  "bip": "192.168.220.2/24"
}
If you don't set the gateway, Docker will set it to the first non-network address in the network, i.e. the .1; but if you set it explicitly, Docker will conflict when allocating the bridge, because the .1 address is already in use. You should only set default-gateway if it's outside of the network range.
The bip setting can tell Docker to use a different address than the .1, so setting bip can avoid the conflict, but I am not sure it will end up doing what you want. It will probably cause routing issues, as the non-network route will go to an address where no host is responding.
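A quick way to confirm the simplified config took effect, as a sketch (assumes a systemd host; the ping is the same test as above):
sudo systemctl restart docker
ip addr show docker0
docker run --rm debian:latest ping -c2 8.8.8.8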
I have a swarm cluster in which I created a global service to run on all docker hosts in the cluster.
The goal is to have each container instance for this service connect to a port listening on the docker host.
For further information, I am following this Docker Daemon Metrics guide for exposing the new docker metrics API on all hosts and then proxying that host port into the overlay network so that Prometheus can scrape metrics from all swarm hosts.
I have read several Docker GitHub issues (#8395, #32101, #32277, #1143) - from these, my understanding is the same as outlined in Docker Daemon Metrics: in order to connect to the host from within a swarm container, I should use the docker-gwbridge network, whose gateway by default is 172.18.0.1.
Every container in my swarm has a network interface for the docker-gwbridge network:
326: eth0@if327: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:ff:00:06 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.5/32 scope global eth0
valid_lft forever preferred_lft forever
333: eth1@if334: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.4/16 scope global eth1
valid_lft forever preferred_lft forever
Also, every container in the swarm has a default route via 172.18.0.1:
/prometheus # ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print $2 }'
172.18.0.1
/prometheus # netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
172.18.0.1
/prometheus # ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 src 10.0.1.9
10.255.0.0/16 dev eth0 src 10.255.0.6
172.18.0.0/16 dev eth1 src 172.18.0.4
Despite this, I cannot communicate with 172.18.0.1 from within the container:
/ # wget -O- 172.18.0.1:4999
Connecting to 172.18.0.1:4999 (172.18.0.1:4999)
wget: can't connect to remote host (172.18.0.1): No route to host
On the host, I can access the docker metrics API on 172.18.0.1. I can ping and I can make a successful HTTP request.
Can anyone shed some light as to why this does not work from within the container as outlined in the Docker Daemon Metrics guide?
If the container has a network interface on the 172.18.0.0/16 network and has routes configured via 172.18.0.1, why do pings to 172.18.0.1 fail from within the container?
If this is not a valid approach for accessing a host port from within a swarm container, then how would one go about achieving this?
EDIT:
Just realized that I did not give all the information in the original post.
I am running docker swarm on a CentOS 7.2 host with docker version 17.04.0-ce, build 4845c56. My kernel is a build of 4.9.11 with vxlan and ipvs modules enabled.
After some further digging I have noted that this appears to be a firewall issue. I discovered that not only was I unable to ping 172.18.0.1 from within the containers, I was not able to ping my host machine at all! I tried my domain name, the FQDN of the server, and even its public IP address, but the container could not ping the host (there is network access, as I can ping google/etc).
I disabled firewalld on my host and then restarted the Docker daemon. After this I was able to ping my host from within the containers (both by domain name and at 172.18.0.1). Unfortunately this is not a solution for me. I need to identify what firewall rules I need to put in place to allow container->host communication without disabling firewalld.
Firstly, I owe you a huge THANK YOU. Before I read your EDIT, I'd spent literally days and nights trying to solve a similar issue, never realizing that the devil was the firewall.
Without disabling the firewall, I solved my problem on Ubuntu 16.04 using:
sudo ufw allow in on docker_gwbridge
sudo ufw allow out on docker_gwbridge
sudo ufw enable
I'm not very familiar with CentOS, but I believe the following should help you, or at least serve as a hint:
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker_gwbridge
sudo systemctl restart firewalld
You might have to restart docker as well.