I am trying to set up 4 containers (with nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? At the moment only 3 containers are accessible; the 4th times out when accessed from the browser instead of showing the welcome page. I have added the IP routes I thought were needed.
The host is Ubuntu.
When this happened I thought it had something to do with the IP routes, so on the same system I installed Apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and served the correct documents.
So now I am stuck. What do I do next?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All the IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP mode.
auto eth0:1
iface eth0:1 inet static
address 172.31.118.182
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 172.31.119.23
netmask 255.255.255.0
auto eth1:1
iface eth1:1 inet static
address 172.31.119.11
netmask 255.255.255.0
It goes like this (the IPs are private, so I guess there is no problem sharing them here):
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands
All are just basic nginx containers, so when working it will show the default nginx page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
--
Now here web1 and web2 become immediately accessible, but the containers bound to eth1 and eth1:1 are not. So I figured the IP routes must be the issue, and went ahead and added some routes:
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
This made web3 accessible as well, but not web4 on eth1:1. So this is where I am stuck at the moment.
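One way to narrow this down (a debugging sketch with tcpdump, assuming the interface names and IPs above; <client-ip> is a placeholder for the browser machine's address) is to check whether requests for the fourth IP reach eth1 at all, and which route a reply from that IP would take:
# Watch for incoming HTTP traffic to the alias IP
sudo tcpdump -ni eth1 host 172.31.119.11 and tcp port 80
# Ask the kernel which table and route a reply sourced from the alias would use
ip route get <client-ip> from 172.31.119.11
If the SYNs arrive but replies pick the wrong table, the ip rule for 172.31.119.11 is the place to look.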
Related
I have host and 5 IPs set for that host.
I can access host by any of these IPs.
Any connection made from that host, and from the Docker containers too, is detected as coming from IP1.
I have a Docker container on that host that I want to use IP2. How can I set up that container so that when any connection is made from it to other external servers, they see IP2 instead of IP1?
Thanks!
To achieve this you need to edit the routes on your machine. You can start by running this command to find out the current routes:
$ ip route show
default via 10.1.73.254 dev eth0 proto static metric 100
10.1.73.0/24 dev eth0 proto kernel scope link src 10.1.73.17 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
Then you need to change the default route, like this (use replace rather than add, since a default route already exists):
ip route replace default via ${YOUR_IP} dev eth0 proto static metric 100
What worked for me in the end was to create a Docker network and add an iptables POSTROUTING rule for that local range:
docker network create bridge-smtp --subnet=192.168.1.0/24 --gateway=192.168.1.1
iptables -t nat -I POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source MYIP_ADD_HERE
docker run --rm --network bridge-smtp byrnedo/alpine-curl http://www.myip.ch
I have two VPSs. The first machine ("proxy" from now on) is the proxy, and the second machine ("dock" from now on) is the Docker host. I want to redirect all traffic generated inside a Docker container over proxy, so as not to expose dock's public IP.
As the connection between the VPSs goes over the internet, with no local link, I created a tunnel between them with ip tunnel, as follows:
On proxy:
ip tunnel add tun10 mode ipip remote x.x.x.x local y.y.y.y dev eth0
ip addr add 192.168.10.1/24 peer 192.168.10.2 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
On dock:
ip tunnel add tun10 mode ipip remote y.y.y.y local x.x.x.x dev eth0
ip addr add 192.168.10.2/24 peer 192.168.10.1 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
PS: I do not know whether ip tunnel is suitable for production; that is another question. I am planning to use Libreswan or OpenVPN as the tunnel between the VPSs anyway.
Afterwards, I added SNAT rules to iptables on both VPSs, plus some routing rules, as follows:
On proxy:
iptables -t nat -A POSTROUTING -s 192.168.10.2/32 -j SNAT --to-source y.y.y.y
On dock:
iptables -t nat -A POSTROUTING -s 172.27.10.0/24 -j SNAT --to-source 192.168.10.2
ip route add default via 192.168.10.1 dev tun10 table rt2
ip rule add from 192.168.10.2 table rt2
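One detail worth noting: the rt2 table referenced above must be declared in /etc/iproute2/rt_tables before it can be used, e.g. (the number 2 is just an arbitrary free table ID):
echo "2 rt2" >> /etc/iproute2/rt_tables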
And last but not least, I created a Docker network with one test container attached to it, as follows:
docker network create --attachable --opt com.docker.network.bridge.name=br-test --opt com.docker.network.bridge.enable_ip_masquerade=false --subnet=172.27.10.0/24 testnet
docker run -it --network testnet alpine:latest /bin/sh
Unfortunately, all of this ended with no success. So the question is: how do I debug this? Is this the correct approach? How would you do the redirection over a proxy?
Some words about the theory: traffic coming from the 172.27.10.0/24 subnet hits the iptables SNAT rule on dock, and its source IP changes to 192.168.10.2. The routing rule then sends it over the tun10 device, i.e. the tunnel. On proxy it hits the other iptables SNAT rule, which changes the source IP to y.y.y.y, and it finally reaches the destination.
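To find the hop where packets disappear, one approach (a sketch assuming the device names above; 8.8.8.8 stands in for any external address pinged from the test container) is to capture at each stage:
# on dock: traffic should leave the container bridge...
tcpdump -ni br-test icmp
# ...and enter the tunnel SNATed to 192.168.10.2
tcpdump -ni tun10 icmp
# on proxy: it should arrive on tun10 and leave eth0 SNATed to y.y.y.y
tcpdump -ni eth0 icmp and host 8.8.8.8
Also verify that proxy actually forwards between tun10 and eth0: sysctl net.ipv4.ip_forward should return 1.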
I'm running a virtual machine on GCE with CentOS 7. I've configured the machine with two network interfaces. When doing so, the user is required to enter the following commands to configure eth1 (every interface except eth0 requires this approach). On my machine, eth1's gateway is 10.140.0.1.
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I have used the above with success, but the configuration is not persistent. I know it's possible to do so, but I first need to fully understand what the above is actually doing (breaking my problem into smaller parts).
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
This command seems to assign 10.140.0.2 to eth1 with a /32 netmask, so the broadcast address is the same as the host address. It also sets the MTU to 1430, which is strange because the other interfaces are set to 1460. Is this command really needed?
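For reference, the same thing expressed with the ip tool (just a restatement of the command above, not something GCE requires) would be:
sudo ip addr add 10.140.0.2/32 broadcast 10.140.0.2 dev eth1
sudo ip link set dev eth1 mtu 1430
which makes it clearer that the /32 netmask is what forces the broadcast address to equal the host address.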
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
From what I read, this command appends "1 rt1" to the file /etc/iproute2/rt_tables, declaring a custom routing table named rt1 with table ID 1. If this is run once, does it need to be run each time the network comes up? It seems like it only needs to be run once.
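Side note: since rt_tables is a plain file that persists across reboots, re-running the command only appends duplicate lines. If it has to live in a script that may run more than once, a guard like this sketch keeps it idempotent:
grep -q '^1 rt1$' /etc/iproute2/rt_tables || echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables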
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I know these commands add non-persistent rules and routes to the network configuration. Once I know the answers to the above, I will come back to the approach of making this persistent.
Referring to your question on the Google group thread, as I mentioned in the post:
IP routes and IP rules need to be made persistent so that they are not lost after a VM reboot or a network service restart. Depending on the operating system, the configuration files required to make them persistent can differ. There is a Stack Exchange thread for CentOS 7 mentioning the files "/etc/sysconfig/network-scripts/route-ethX" and "/etc/sysconfig/network-scripts/rule-ethX" to keep IP routes and rules persistent, and the CentOS documentation covers persistent static routes.
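As a sketch of what that would look like here (assuming the eth1 setup from the question and "1 rt1" already appended to /etc/iproute2/rt_tables), the runtime commands translate into two files containing one ip argument list per line:
/etc/sysconfig/network-scripts/route-eth1:
10.140.0.1 src 10.140.0.2 dev eth1
default via 10.140.0.1 dev eth1 table rt1
/etc/sysconfig/network-scripts/rule-eth1:
from 10.140.0.2/20 table rt1
to 10.140.0.2/20 table rt1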
Exploring Docker 17.06.
I've installed Docker on CentOS 7 and created a container, and started it with the default bridge. I can ping both host adapters, but not the outside world, e.g. www.google.com.
All the advice out there is based on older versions of Docker and their iptables settings.
I would like to understand what is required to be able to ping the outside world.
TIA!
If you are able to ping www.google.com from the host machine, try following these steps:
Run on the host machine:
sudo ip addr show docker0
You will get output which includes:
inet 172.17.2.1/16 scope global docker0
The docker host has the IP address 172.17.2.1 on the docker0 network interface.
Then start the container:
docker run --rm -it ubuntu:trusty bash
and run
ip addr show eth0
The output will include:
inet 172.17.1.29/16 scope global eth0
Your container has the IP address 172.17.1.29. Now look at the routing table:
run:
route
output will include:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.2.1 0.0.0.0 UG 0 0 0 eth0
This means the IP address of the Docker host, 172.17.2.1, is set as the default route and is accessible from your container.
Now try pinging your host machine's IP:
root@e21b5c211a0c:/# ping 172.17.2.1
PING 172.17.2.1 (172.17.2.1) 56(84) bytes of data.
64 bytes from 172.17.2.1: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 172.17.2.1: icmp_seq=2 ttl=64 time=0.211 ms
64 bytes from 172.17.2.1: icmp_seq=3 ttl=64 time=0.166 ms
If this works, most probably you'll be able to ping www.google.com.
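If pinging the host works but external addresses still fail, two host-side checks that often explain it (generic checks, not specific to any Docker version):
sysctl net.ipv4.ip_forward
iptables -t nat -L POSTROUTING -n | grep -i masquerade
The first should print net.ipv4.ip_forward = 1, and the second should show a MASQUERADE rule covering Docker's subnet (172.17.0.0/16 by default).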
Hope it will help!
In my case restarting the Docker daemon helped:
sudo systemctl restart docker
If iptables is not the cause and nothing prevents you from changing the container's network mode, set it to "host" mode. This should solve the issue.
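For example, a quick test (alpine is just a convenient small image):
docker run --rm --network host alpine ping -c 3 www.google.com
In host mode the container shares the host's network stack, so Docker's bridge and its iptables rules are bypassed entirely; the trade-off is that you lose network isolation and port mapping.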
Please verify your existing iptables:
iptables --list
It should show you the list of iptables rules with source and destination details.
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
If it says anywhere for both source and destination, it should be able to ping outside IPs (by default it is anywhere).
If not, use this command to adjust your DOCKER-USER chain:
iptables -I DOCKER-USER -i eth0 -s 0.0.0.0/0 -j ACCEPT
Hope this will help!
I had a similar problem: an API container needed a connection to the outside, but the other containers did not. So my option was to add the flag --dns 8.8.8.8 to the docker run command, and with that the container can ping the outside. I consider this a solution for a single container; if you need it for more containers, maybe the other responses are better. Here is the documentation, and a full example line:
docker run -d --rm -p 8080:8080 --dns 8.8.8.8 <docker-image-name>
where:
-d, detached mode, to run the container in the background
--rm, remove the container when it stops (careful: if you are testing and may need to inspect the logs with docker logs, don't use it)
-p, specify the port mapping ( <host-port> : <container-port> )
--dns, set a DNS server so the container can resolve internet domains
I ran the Docker daemon so as to use global IPv6 for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After it I ran docker container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container). But I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping the docker0 bridge by its IPv6 address.
The output of route -n -6 contains these lines:
Destination Next Hop Flag Met Ref Use If
xxxx:xxxx:xxxx:xxxx::/64 :: U 256 0 0 docker0
xxxx:xxxx:xxxx:xxxx::/64 :: U 1024 0 0 docker0
fe80::/64 :: U 256 0 0 docker0
The docker0 interface has a global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere, and it's the global IPv6 prefix of my eth0 interface.
Does Docker require some additional configuration for accessing my containers via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the Neighbor Discovery Protocol is not configured: other hosts do not know that your container sits behind your host. I do the following after starting a container with IPv6 (on the host OS, in the ExecStartPost clauses of a systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' container_name) dev interface_name
Beware of IPv6: in replies to bug reports, the Docker developers say they do not have enough time to make IPv6 production-ready in version 1.10, and they say nothing about 1.11.
Maybe you are using the wrong ping command: for IPv6 it is ping6.
$ ping6 2607:f0d0:1002:51::4