I'm running a virtual machine on GCE with CentOS 7. I've configured the machine with two network interfaces. When doing so, you are required to enter the following commands to configure eth1 (every interface except eth0 requires this approach). On my machine, eth1's gateway is 10.140.0.1.
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I have used the above with success, but the configuration is not persistent. I know it's possible to make it persistent, but I first need to fully understand what the above is actually doing (breaking my problem into smaller parts).
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
This command seems to assign 10.140.0.2 to eth1 with a /32 netmask and a broadcast address equal to that same internal IP. It also sets the MTU to 1430, which is strange because the other interfaces are set to 1460. Is this command really needed?
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
From what I've read, this command appends "1 rt1" to /etc/iproute2/rt_tables, defining a custom routing table named rt1. Since that file persists, does the command need to be run each time the network comes up? It seems like it should only need to be run once.
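If it does end up in a boot-time script, I assume the append could be guarded so it never adds duplicate entries, something like this (my own sketch, not from the GCE docs):
# Sketch: only append the rt1 table definition if it is not already present
grep -qE '^1[[:space:]]+rt1$' /etc/iproute2/rt_tables || echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables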
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I know these commands add non-persistent rules and routes to the network configuration. Once I know the answers to the above, I will come back to the approach of making this persistent.
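For reference, this is how I've been inspecting the (non-persistent) state those commands create:
ip addr show eth1              # address, /32 netmask, broadcast and MTU set by ifconfig
ip rule show                   # the two policy rules pointing at table rt1
ip route show table rt1        # the default route via 10.140.0.1
cat /etc/iproute2/rt_tables    # the "1 rt1" table definition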
Referring to your question on the Google group thread, as I mentioned in that post:
IP routes and IP rules need to be made persistent so they are not lost after a VM reboot or a restart of the network service. The configuration files required to make them persistent differ depending on the operating system. Here is a Stack Exchange thread for CentOS 7 mentioning the files /etc/sysconfig/network-scripts/route-ethX and /etc/sysconfig/network-scripts/rule-ethX, which keep the IP routes and rules persistent. Here is the CentOS documentation for persistent static routes.
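As a sketch only (the exact file names and syntax depend on your initscripts version, so verify against the CentOS docs), the persistent equivalents of your ip route and ip rule commands would look roughly like this:
/etc/sysconfig/network-scripts/route-eth1:
10.140.0.1 dev eth1 src 10.140.0.2
default via 10.140.0.1 dev eth1 table rt1
/etc/sysconfig/network-scripts/rule-eth1:
from 10.140.0.2/20 table rt1
to 10.140.0.2/20 table rt1
Each line is handed to ip route add / ip rule add by the ifup scripts when eth1 comes up.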
I'm trying to understand how TPROXY works in an effort to build a transparent proxy for Docker containers.
After lots of research I managed to create a network namespace, inject a veth interface into it, and add TPROXY rules. The following script worked on a clean Ubuntu 18.04.3:
ip netns add ns0
ip link add br1 type bridge
ip link add veth0 type veth peer name veth1
ip link set veth0 master br1
ip link set veth1 netns ns0
ip addr add 192.168.3.1/24 dev br1
ip link set br1 up
ip link set veth0 up
ip netns exec ns0 ip addr add 192.168.3.2/24 dev veth1
ip netns exec ns0 ip link set veth1 up
ip netns exec ns0 ip route add default via 192.168.3.1
iptables -t mangle -A PREROUTING -i br1 -p tcp -j TPROXY --on-ip 127.0.0.1 --on-port 1234 --tproxy-mark 0x1/0x1
ip rule add fwmark 0x1 tab 30
ip route add local default dev lo tab 30
After that I launched the toy Python server from a Cloudflare blog post:
import socket
IP_TRANSPARENT = 19
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IP, IP_TRANSPARENT, 1)
s.bind(('127.0.0.1', 1234))
s.listen(32)
print("[+] Bound to tcp://127.0.0.1:1234")
while True:
    c, (r_ip, r_port) = s.accept()
    l_ip, l_port = c.getsockname()
    print("[ ] Connection from tcp://%s:%d to tcp://%s:%d" % (r_ip, r_port, l_ip, l_port))
    c.send(b"hello world\n")
    c.close()
And finally by running ip netns exec ns0 curl 1.2.4.8 I was able to observe a connection from 192.168.3.2 to 1.2.4.8 and receive the "hello world" message.
The problem is that the setup seems to have compatibility issues with Docker. Everything worked well in a clean environment, but once I start Docker things go wrong: the TPROXY rule no longer seems to work. Running ip netns exec ns0 curl 192.168.3.1 gives "Connection reset" and running ip netns exec ns0 curl 1.2.4.8 times out (both should have produced the "hello world" message). I tried restoring all iptables rules, deleting the IP routes and rules generated by Docker, and shutting Docker down, but none of it helped, even though I hadn't configured any networks or containers.
What is happening behind the scenes and how can I get TPROXY working normally?
I traced all processes created by Docker using strace -f dockerd and looked for lines containing exec. Most of the commands were iptables commands, which I had already ruled out, but the lines with modprobe looked interesting. I loaded those modules one by one and figured out that the module causing the trouble is br_netfilter.
The module enables filtering of bridged packets through iptables, ip6tables and arptables. The iptables part can be disabled by executing echo "0" | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables. After executing the command, the script worked again without impacting Docker containers.
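To keep that across reboots, I assume a sysctl drop-in would work, along these lines (untested sketch; note the net.bridge.* keys only exist while br_netfilter is loaded):
# Sketch: persist the setting (assumes br_netfilter is loaded at boot, e.g. via /etc/modules-load.d/)
echo "net.bridge.bridge-nf-call-iptables = 0" | sudo tee /etc/sysctl.d/99-bridge-nf.conf
sudo sysctl -p /etc/sysctl.d/99-bridge-nf.conf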
I am still confused, though. I don't understand the consequences of this setting. I enabled packet tracing, and the packets seemed to match the exact same set of rules whether bridge-nf-call-iptables was on or off, yet with it disabled the first TCP SYN packet got delivered to the Python server, while with it enabled the packet got dropped for unknown reasons.
Try running docker with -p 1234
"By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag."
https://docs.docker.com/config/containers/container-networking/
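For example (a generic sketch, the image name is just a placeholder):
# Publish the container's port 1234 on the host so traffic from outside Docker's network can reach it
docker run -d -p 1234:1234 your-image    # or just -p 1234 to let Docker pick a host port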
I have a VirtualBox VM with Ubuntu 14.10 server, Jenkins, and Apache installed. When I access the IP of this VirtualBox machine, the Apache homepage loads correctly. But when I try to access Jenkins via x.x.x.x:8080 (the IP of my VirtualBox machine) it won't load; I only get a connection timeout error.
I tried configuring a different port (8081 and 6060), but that doesn't work. I also added port forwarding in VirtualBox, but that doesn't work either...
Does anyone have suggestions for how I can access Jenkins running inside a virtual machine?
Depending on whether you need the box to be accessible to machines other than your host, you need a bridged or a host-only network interface: https://www.virtualbox.org/manual/ch06.html
I've just done a full install of Nginx, Java, and Jenkins:
sudo apt-get install nginx
sudo apt-get install openjdk-7-jdk
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
on a fresh Ubuntu VirtualBox instance where the first interface is Host-only and the second is NAT:
Here is my /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback
# Host-only interface
auto eth0
iface eth0 inet static
address 192.168.56.20
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
# NAT interface
auto eth1
iface eth1 inet dhcp
I can reach Jenkins from my host on 192.168.56.20:8080 with no port forwarding necessary. You must have something unrelated to Jenkins going on, possibly firewall related. Try setting Jenkins back to 8080, removing your port forwarding, and checking for firewall rules that could be getting in the way.
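If you want to rule out the firewall, a quick check on the VM would be something like (adapt to whatever firewall you use):
sudo ufw status verbose          # is ufw active, and is 8080 allowed?
sudo iptables -L -n -v           # any REJECT/DROP rules affecting port 8080?
sudo netstat -tlnp | grep 8080   # is Jenkins listening, and on which address?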
I am trying to set up 4 containers (each running nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? At the moment only 3 containers are accessible; the 4th one times out when accessed from the browser instead of showing the welcome page. I have added the IP routes I thought were needed.
Host is Ubuntu.
When this happened I thought it had something to do with the IP routes. So on the same system I installed Apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and served the correct documents.
So now I am stuck. What do I do next?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP.
auto eth0:1
iface eth0:1 inet static
address 172.31.118.182
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 172.31.119.23
netmask 255.255.255.0
auto eth1:1
iface eth1:1 inet static
address 172.31.119.11
netmask 255.255.255.0
It goes like this (the IPs are private, so I guess there is no problem sharing them here):
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands
All are just basic nginx containers, so when working they show the default nginx welcome page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
--
Here web1 and web2 became immediately accessible, but the containers bound to eth1 and eth1:1 were not. So I figured IP routes must be the issue and went ahead and added some routes and rules.
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
This made web3 accessible as well, but not the one bound to eth1:1. This is where I am stuck at the moment.
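For completeness, these are the commands I have been using to inspect the routing state while debugging (nothing exotic, just the usual inspection tools):
ip rule show                        # confirm the rules for 172.31.119.23 and 172.31.119.11 exist
ip route show table eth11           # the table used for eth1:1's address
sudo iptables -t nat -L DOCKER -n   # the DNAT rules Docker created for each published IP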
Let me first explain what I'm trying to do, as there may be multiple ways to solve this. I have two containers in docker 1.9.0:
node001 (172.17.0.2) (sudo docker run --net=<<bridge or test>> --name=node001 -h node001 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup <<image>>)
node002 (172.17.0.3) (same command, with node002)
When I launch them with --net=bridge I get the correct value for SSH_CLIENT when I ssh from one to the other:
[root@node001 ~]# ssh root@172.17.0.3
root@172.17.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.17.0.2 56194 22
[root@node001 ~]# ping -c 1 node002
ping: unknown host node002
In Docker 1.8.3 I could also use the hostnames I supply when I start them; in 1.8.3 that last ping statement works!
In docker 1.9.0 I don't see anything being added in /etc/hosts, and the ping statement fails. This is a problem for me. So I tried creating a custom network...
docker network create --driver bridge test
When I launch the two containers with --net=test I get a different value for SSH_CLIENT:
[root@node001 ~]# ssh root@172.18.0.3
root@172.18.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.1 57388 22
[root@node001 ~]# ping -c 1 node002
PING node002 (172.18.0.3) 56(84) bytes of data.
64 bytes from node002 (172.18.0.3): icmp_seq=1 ttl=64 time=0.041 ms
Note that the IP address is not node001's; it seems to be the Docker host (the network's gateway) itself. The hosts file is correct though, containing:
172.18.0.2 node001
172.18.0.2 node001.test
172.18.0.3 node002
172.18.0.3 node002.test
My current workaround is using docker 1.8.3 with the default bridge network, but I want this to work with future docker versions.
Is there any way I can customize the test network to make it behave similarly to the default bridge network?
Alternatively:
Maybe make the default bridge network write out the /etc/hosts file in docker 1.9.0?
Any help or pointers towards different solutions will be greatly appreciated.
Edit: 21-01-2016
Apparently the problem is fixed in 1.9.1. Both with the default bridge in Docker 1.8 and with a custom network (--net=test) in 1.9.1, the behaviour is now correct:
[root@node001 tmp]# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.3 52162 22
I retried in 1.9.0 to check I wasn't crazy, and indeed the problem occurs there:
[root@node001 tmp]# ip route
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
[root@node002 ~]# env | grep SSH_CLI
SSH_CLIENT=172.18.0.1 53734 22
So after removing/stopping/starting the instances the IP addresses were not exactly the same, but it can easily be seen that the SSH_CLIENT source IP is not correct in the last code block. Thanks @sourcejedi for making me re-check.
Firstly, I don't think it's possible to change any settings on the default network, i.e. to write /etc/hosts. You apparently can't delete the default networks, so you can't recreate them with different options.
Secondly
Docker is careful that its host-wide iptables rules fully expose containers to each other’s raw IP addresses, so connections from one container to another should always appear to be originating from the first container’s own IP address. docs.docker.com
I tried reproducing your issue with the random containers I've been playing with. Running wireshark on the bridge interface for the network, I didn't see my ping packets. From this I conclude my containers are indeed talking directly to each other; the host was not doing routing and NAT.
You need to check the routes on your client container with ip route. Do you have a route for 172.18.0.0/16? If you only have a default route, the container could be sending everything through the Docker host, which might then get confused and masquerade the traffic as if it were talking to the outside world.
This might happen if you're running some network configuration in your privileged container. I don't know what's happening if you're just booting it with bash though.
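As a concrete way to check (just a sketch, using the container names from your question):
# Inspect routing from inside node001; on the custom network I would expect both a default route
# via 172.18.0.1 and a directly connected 172.18.0.0/16 route on eth0
docker exec node001 ip route
docker exec node001 ip addr show eth0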
I have been using OpenStack for a while to manage my applications. Now I want to move them to Docker, one container per app, because Docker is more lightweight and efficient.
The problem is that almost everything related to networking goes wrong at runtime.
In my design, every application container should have a static IP address, and I can use the hosts file to locate containers on the network.
Here is my implementation (the bash script is docker_addnet.sh):
# Usage:
# docker_addnet.sh container_name IP
# interface name: veth_(containername)
# gateway 172.17.42.1
if [ $# != 2 ]; then
echo -e "ERROR! Wrong args"
exit 1
fi
container_netmask=16
container_gw=172.17.42.1
container_name=$1
bridge_if=veth_`echo ${container_name} | cut -c 1-10`
container_ip=$2/${container_netmask}
container_id=`docker ps | grep $1 | awk '{print \$1}'`
pid=`docker inspect -f '{{.State.Pid}}' ${container_name}`
echo "Contaner: " $container_name "pid: " $pid
mkdir -p /var/run/netns
ln -s /proc/$pid/ns/net /var/run/netns/$pid
brctl delif docker0 $bridge_if
ip link add A type veth peer name B
ip link set A name $bridge_if
brctl addif docker0 $bridge_if
ip link set $bridge_if up
ip link set B netns $pid
ip netns exec $pid ip link set dev B name eth0
ip netns exec $pid ip link set eth0 up
ip netns exec $pid ip addr add $container_ip dev eth0
ip netns exec $pid ip route add default via $container_gw
The script sets a static IP address for the container. When you run the container, you must append --net=none so that the network can be set up manually.
You can now start a container by
sudo docker run --rm -it --name repl --dns=8.8.8.8 --net=none clojure bash
and set the network by
sudo zsh docker_addnet.sh repl 172.17.15.1
In the container's bash you can see the IP address with ip addr; the output is something like:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
67: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 2e:7b:7e:5a:b5:d6 brd ff:ff:ff:ff:ff:ff
inet 172.17.15.1/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2c7b:7eff:fe5a:b5d6/64 scope link
valid_lft forever preferred_lft forever
So far so good.
Let's try to get the container's host address using the Clojure REPL. First start the REPL with:
lein repl
then eval the code below:
(. java.net.InetAddress getLocalHost)
The Clojure code is equivalent to
System.out.println(Inet4Address.getLocalHost());
What you get is an exception:
UnknownHostException 5a8efbf89c79: Name or service not known
java.net.Inet6AddressImpl.lookupAllHostAddr (Inet6AddressImpl.java:-2)
Another thing that goes weird: the RMI server cannot get the client IP address via RemoteServer.getClientHost().
So what may be causing this issue? I remember that Java sometimes picks up the wrong network configuration, but I don't know the reason.
The documentation for InetAddress.getLocalHost() says:
Returns the address of the local host. This is achieved by retrieving the name of the host from the system, then resolving that name into an InetAddress.
Since you didn't take any steps to make your static IP address resolvable inside the container, it doesn't work.
To find the address in Java without going via hostname you could enumerate all network interfaces via NetworkInterface.getNetworkInterfaces() then iterate over each interface inspecting each address to find the one you want. Example code at
Getting the IP address of the current machine using Java
Another option would be to use Docker's --add-host and --hostname options on the docker run command to put in a mapping for the address you want, then getLocalHost() should work as you expect.
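For example (a sketch only; the hostname and address are placeholders matching the script's earlier example, and I'm assuming --add-host still writes /etc/hosts when --net=none is used):
# Make the container's own hostname resolve to the static address assigned later by the script,
# so InetAddress.getLocalHost() returns that address instead of throwing UnknownHostException
sudo docker run --rm -it --net=none --hostname repl --add-host repl:172.17.15.1 \
    --dns=8.8.8.8 --name repl clojure bash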
In my design, every application container should have a static IP address, and I can use the hosts file to locate containers on the network.
Why not rethink your original design? Let's start with two obvious options:
Container linking
Add a service discovery component
Container linking
This approach is described in the Docker documentation.
https://docs.docker.com/userguide/dockerlinks/
When launching a container you specify that it is linked to another. This results in environment variables being injected into the linked container containing the IP address and port number details of the collaborating container.
Your application's configuration stops using hard coded IP addresses and instead uses soft code references that are set at run-time.
Currently docker container linking is limited to a single host, but I expect this concept will continue to evolve into multi-host implementations. Worst case you could inject environment variables into your container at run-time.
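A minimal sketch of what that looks like on the default bridge network (image and alias names are placeholders):
# Start a database container, then link a second container to it; Docker injects
# DB_PORT_5432_TCP_ADDR / DB_PORT_5432_TCP_PORT style variables into the linked container
docker run -d --name db postgres
docker run --rm --link db:db busybox env | grep DB_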
Service discovery
This is a common approach taken by large distributed applications. Example implementations of such systems would be:
zookeeper
etcd
consul
..
With such a system in place, your back-end service components (eg database) would register themselves on startup and client processes would dynamically discover their location at run-time. This form of decoupled operation is very Docker friendly and scales very very well.
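As a rough sketch of the idea using etcd (v2 etcdctl syntax; the key name and address are made up for illustration):
# The backend registers its address under a well-known key on startup ...
etcdctl set /services/db "172.17.15.2:5432"
# ... and clients look it up at run-time instead of hard coding an IP
etcdctl get /services/db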