Reaching between OVS bridges

I want to ping different Open vSwitch bridges via a physical interface. My host has an eth1 and a wlan0 interface. I have created 3 OVS bridges and assigned IP addresses to them. wlan0 is added to br0. Two virtual interfaces, wlan0.1 and wlan0.2, are created on top of wlan0 and added to br1 and br2 respectively. Another bridge, breth, is connected to the eth1 interface, and all other bridges are connected to breth by patch ports. See the figure below.
Now the host can ping all 3 bridges. There are similar hosts in the network connected via the wlan0 interface; they form a mesh network, and any host can ping any bridge of any other node. But a PC connected to the eth1 interface can only ping br0 and other hosts' br0. In other words, the bridges backed by virtual interfaces are unreachable from the PC. Is there any way to reach the other bridges?
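For reference, the patch-port wiring described above is normally created with commands like the following; the bridge and port names are assumptions taken from the description, and only the breth/br0 pair is shown (br1 and br2 would be wired the same way):
# Sketch: one OVS patch-port pair between breth and br0 (names assumed from the question)
ovs-vsctl add-port breth patch-breth-br0 -- set interface patch-breth-br0 type=patch options:peer=patch-br0-breth
ovs-vsctl add-port br0 patch-br0-breth -- set interface patch-br0-breth type=patch options:peer=patch-breth-br0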

Related

Docker-compose "ports": listen on multiple IP addresses / IP range

Instead of listening on a single IP address, e.g. localhost:
ports:
- "127.0.0.1:80:80"
I want the container to listen only on a local network, e.g.:
ports:
- "10.0.0.0/16:80:80"
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.SERVICE.ports contains an invalid type, it should be a number, or an object
Is this possible?
I don't want to use things like swarm mode etc., yet.
If IP range is not supported, maybe at least multiple IP addresses like 10.0.0.2 and 10.0.0.3?
ERROR: for CONTAINER Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
ERROR: for SERVICE Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
Or is it not even supported to listen to 10.0.0.3 ?
The host machine is connected to 10.0.0.0/16:
> ifconfig
ens10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.0.0.2 netmask 255.255.255.255 broadcast 10.0.0.2
inet6 f**0::8**0:ff:f**9:b**7 prefixlen 64 scopeid 0x20<link>
ether **:00:00:**:**:** txqueuelen 1000 (Ethernet)
Listening "to" a single IP address is not quite the right way to put it; the service listens at an IP address.
Let's say your VM has two network interfaces (ethernet cards):
Network 1 → subnet: 10.0.0.0/24 and IP 10.0.0.100
Network 2 → subnet: 10.0.1.0/24 and IP 10.0.1.200
If you set 127.0.0.1:80:80, that means your service is listening at 127.0.0.1 (localhost) on port 80.
If you want to access the service from the 10.0.0.0/24 subnet, set 10.0.0.100:80:80 and use the address http://10.0.0.100:80 to connect to your container from external hosts.
If you want to access the service from multiple networks simultaneously, you can bind the container port to multiple host addresses and ports (the IP is the host address the connection arrives on):
ports:
- 10.0.0.100:80:80
- 10.0.1.200:80:80
- 127.0.0.1:80:80
And don't forget to open port 80 in the VM's firewall, if a firewall exists and restricts that network.
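For instance, if the VM uses ufw, a rule like the following (subnet taken from the Network 1 example above; adjust to your own network) would open the port:
# Hypothetical ufw rule: allow HTTP from the 10.0.0.0/24 subnet used in the example above
sudo ufw allow from 10.0.0.0/24 to any port 80 proto tcp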
I think you misunderstood this field.
When you map 127.0.0.1:80:80, you map the 127.0.0.1 interface of your host to your container.
In the case of 127.0.0.1, you can only access it from inside your host.
When you map 10.0.0.3:80:80, you map the 10.0.0.3 interface of your host to your container, and every IP that can reach 10.0.0.3 will have access to your Docker container mapping.
But in any case, this field will not do any filtering of who can access the container.
EDIT: After your edit, I see that I misunderstood your question.
You want Docker to create a "bridge interface" so that your host's IP is not shared.
I don't think this is possible when using port mapping.
If you give Compose ports: (or docker run -p) an IP address, it must be a specific known IP address of a host interface, or 0.0.0.0 for "all interfaces". The Docker daemon gives this specific IP address to a bind(2) call, which takes an address and not a network, and follows the rules in ip(7) for IPv4.
With the output you show, you can only bind containers to 10.0.0.2. If you want to use other IP addresses on the same network, you also need to assign them to the host; see for example How can I (from CLI) assign multiple IP addresses to one interface? on Ask Ubuntu, and then you can bind a container to the newly-added address.
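A minimal sketch of that, assuming the ens10 interface from the ifconfig output and the 10.0.0.3 address from the error message (make it persistent in your distro's network configuration, otherwise it is lost on reboot):
# Add 10.0.0.3 as an additional address on ens10 so Docker can bind to it
sudo ip addr add 10.0.0.3/16 dev ens10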
If your system is on multiple physical networks, you can have any number of ports: so long as the host address and host port are unique. In particular you can have multiple ports: that all forward to the same container port.
ports:
# make this visible to the external load balancer on port 80
- '192.168.17.2:80:3000'
# also make this visible to the internal network also on port 80
- '10.0.0.2:80:3000'
# and the management network but on port 3000
- '10.99.0.36:3000:3000'
Again, the host must already have these IP addresses in the ifconfig output.
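A quick way to check which addresses are actually available for binding:
# List the host's IPv4 addresses; every host IP used in ports: must appear here
ip -4 addr show | grep 'inet '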

Interpretation of ip routes rules

The citation comes from: https://github.com/docker/labs/blob/master/networking/concepts/05-bridge-networks.md
When we peek into the host routing table we can see the IP interfaces
in the global network namespace that now includes docker0. The host
routing table provides connectivity between docker0 and eth0 on the
external network, completing the path from inside the container to the
external network.
host$ ip route
default via 172.31.16.1 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
172.31.16.0/20 dev eth0 proto kernel scope link src 172.31.16.102
It is written that the host routing table provides connectivity between docker0 and eth0. I cannot see where in those rules the connectivity is introduced. Can you explain?
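One way to see which of those rules the kernel actually applies for a given destination is ip route get (addresses reused from the quoted routing table; output will vary):
# A container address matches the 172.17.0.0/16 rule and goes out docker0
ip route get 172.17.0.5
# An external address matches no specific rule and falls back to the default route via eth0
ip route get 8.8.8.8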

route all traffic over gre tunnel

I have an openvswitch sw1 with subnet 10.207.39.0/24 that has LXC containers attached, and I have the same setup on another physical server; I have successfully connected these using a GRE tunnel. However, the LXC containers have additional ports on additional openvswitches, e.g. sw4 with subnet 192.220.39.0/24, and I want to push that traffic over the single GRE tunnel on sw1, because there is only one physical interface and it's not possible to have multiple GRE tunnels on each openvswitch with the same physical-interface IP endpoints. Is it possible to push the traffic of the other openvswitches over the GRE tunnel on sw1? Or is there a better way to connect multiple subnets in LXC containers on two physical hosts? Thanks.
I solved this "myself" - with help from two links provided below - (after sleeping on it and relentless google searches over several frustrating days).
I realize the solution is pretty simple and would be clear to a networking professional. I am an Oracle DBA and only know as much networking as I need to work with orabuntu-lxc software, LXC containers, and Oracle software, so please keep that in mind if the below is "obvious" - it wasn't obvious to me in my network ignorance.
I got the clue on how to solve the actual steps from this blog post:
http://www.cnblogs.com/popsuper1982/p/3800548.html
I confirmed that any subnet should be routable over a GRE tunnel from this blog post (which gave me hope to keep working towards a solution):
https://supportforums.adtran.com/thread/1408
In particular the author stated in the adtran comment that "GRE tunnels have no limitation on the types of traffic which can traverse it. It can route multiple subnets without multiple tunnels."
That post told me that the solution was likely a routing solution and that only one GRE tunnel would be needed for this use case.
Note that this feature of "no limitation" on the types of traffic is great for Oracle RAC because we need to be able to send multicast over the GRE tunnel for RAC.
This use case:
I am building an Oracle RAC infrastructure to run in LXC Linux containers. I have a public network 10.207.39.0/24 on openvswitch sw1 and a private RAC interconnect network 192.220.39.0/24 on openvswitch sw4. I want to be able to build the RAC in LXC linux containers that span multiple physical hosts and so I created a GRE tunnel to connect the 10.207.39.1 tunnel endpoint on colossus to 10.207.39.5 tunnel endpoint on guardian.
Here is the setup details:
Host "guardian":
LAN wireless physical network interface: wlp4s0 (IP 192.168.1.11)
sw1 10.207.39.5
sw4 192.220.39.5
Host "colossus":
LAN wireless physical network interface: wlp4s0 (IP 192.168.1.15)
sw1 10.207.39.1
sw4 192.220.39.1
Step 1:
Create GRE tunnel between sw1 openvswitches on both physical hosts with physical wireless LAN network interface end points:
Host "guardian": Create gre tunnel phys hosts (guardian --> colossus).
sudo ovs-vsctl add-port sw1 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.15
Host "colossus": Create gre tunnel phys hosts (colossus --> guardian).
sudo ovs-vsctl add-port sw1 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.11
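A quick sanity check at this point (run from "guardian", using the addresses above): the gre0 port should show up on sw1, and the remote end's sw1 address should answer over the tunnel.
sudo ovs-vsctl show
ping -c 3 10.207.39.1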
Step 2:
Route the 192.220.39.0/24 network over the established GRE tunnel as shown below:
Host "guardian": route 192.220.39.0/24 openvswitch sw4 over GRE tunnel:
sudo route add -net 192.220.39.0/24 gw 10.207.39.5 dev sw1
Host "colossus": route 192.220.39.0/24 openvswitch sw4 over GRE tunnel:
sudo route add -net 192.220.39.0/24 gw 10.207.39.1 dev sw1
Note: To add additional subnets repeat step 2 for each subnet.
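For example (hypothetical: a third openvswitch sw5 carrying 192.221.39.0/24, following the same pattern, shown for host "guardian"):
sudo route add -net 192.221.39.0/24 gw 10.207.39.5 dev sw1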
Note on MTU:
Also, you have to allow for GRE encapsulation in MTU if you want to ssh over these tunnels.
Therefore, in the above example, the MTU for the main GRE tunnel connecting the hosts needs to be set to 1420 to leave 80 bytes of headroom for the GRE encapsulation.
MTU on the LXC container virtual interfaces on the sw1 switches need to be set to MTU=1420 in the LXC container config files.
MTU on the LXC container virtual interfaces on the sw4 switches need to be set to MTU=1420 in the LXC container config files.
Note that the MTU on the openvswitches sw1 and sw4 should automatically be set to the MTU of the LXC interfaces, as long as ALL LXC virtual interfaces are set to the new lower MTU value, so explicitly setting the MTU on the openvswitches sw1 and sw4 themselves should not be necessary.
If you still run into issues with SSH over the tunnels, but ping works across hosts and containers, then re-check all MTU settings on the virtual interfaces and the openvswitches.
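One way to pin the container-side MTU (a sketch: the lxc.network.mtu key is the LXC 2.x name, LXC 3+ uses lxc.net.0.mtu instead, and the container name "rac1" and config path are assumptions):
# Append the MTU setting to a container's config file, then restart the container
echo 'lxc.network.mtu = 1420' | sudo tee -a /var/lib/lxc/rac1/config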

Connecting VMs Using GRE Tunnels - Openvswitch

Hello everyone, I'm really new to networking, so I'm a little bit lost. I hope someone can help me...
I have two physical nodes with the same configuration in the interface:
# The primary network interface
#auto eth0
#iface eth0 inet dhcp
auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
my nodes have the following public ip:
ubuntu001: 158.42.104.129
ubuntu002: 158.42.104.139
I run one VM in each node using the default configuration of libvirt:
Vm in ubuntu001: 10.1.1.189
Vm in ubuntu002: 10.1.1.59
I want to ping between the VMs through a "GRE tunnel using OVS", so I did the following, but it didn't work:
First, I create an OVS bridge:
# ovs-vsctl add-br ovs-br0
Second, I connect my bridge to its uplink, which in this case is eth0:
# ovs-vsctl add-port ovs-br0 eth0
Third, I run a VM on each node (ubuntu001: 10.1.1.189 and ubuntu002: 10.1.1.59, respectively).
Fourth, I add a port for the GRE tunnel (one command per node):
# ovs-vsctl add-port ovs-br0 gre0 -- set interface gre0 type=gre options:remote_ip=158.42.104.139
# ovs-vsctl add-port ovs-br0 gre0 -- set interface gre0 type=gre options:remote_ip=158.42.104.129
I did the same on the other node, and this is what I get when I run ovs-vsctl show:
root@ubuntu001:~# ovs-vsctl show
41268e02-3996-4caa-b941-e4fe9c718e35
Bridge "ovs-br0"
Port "ovs-br0"
Interface "ovs-br0"
type: internal
Port "gre0"
Interface "gre0"
type: gre
options: {remote_ip="158.42.104.139"}
Port "eth0"
Interface "eth0"
ovs_version: "2.0.2"
root@ubuntu002:~# ovs-vsctl show
f0128df4-1a89-4999-8add-b5076ff055ee
Bridge "ovs-br0"
Port "ovs-br0"
Interface "ovs-br0"
type: internal
Port "gre0"
Interface "gre0"
type: gre
options: {remote_ip="158.42.104.129"}
Port "eth0"
Interface "eth0"
ovs_version: "2.0.2"
What am I doing wrong, or am I missing something?
Add this to /etc/network/interfaces:
auto br-ovs=br-ovs
iface br-ovs inet manual
ovs_type OVSBridge
ovs_ports gre1 gre2
ovs_extra set bridge ${IFACE} stp_enable=true
mtu 1462
allow-br-ovs gre1
iface gre1 inet manual
ovs_type OVSPort
ovs_bridge br-ovs
ovs_extra set interface ${IFACE} type=gre options:remote_ip=158.42.104.139 options:key=1
auto br1
iface br1 inet manual  # (or static, or DHCP)
mtu 1462
I do not know how to do this with plain commands; see the sketch after these notes for a possible ovs-vsctl equivalent.
I think eth0 should not be in the output of ovs-vsctl show.
stp_enable=true is optional, I don't think it is needed in case of 2 nodes.
Set mtu to suit your needs. This example is for when the real NIC's mtu is 1500.
remote_ip=158.42.104.139 should contain the other node's IP. It is different on the 2 nodes.
options:key=1 is also optional, it can be used to label 2 GRE networks (eg. the second mesh would have key=2 etc.).
You can add VMs to br1 and they will be able to ping each other.
Don't forget to set the VMs' mtu to 1462.
This tutorial might be useful: https://wiredcraft.com/blog/multi-host-docker-network/
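For completeness, a rough ovs-vsctl equivalent of the interfaces stanza above might look like this (untested sketch; the interfaces file is the method this answer actually uses, and remote_ip must again be the other node's IP):
sudo ovs-vsctl add-br br-ovs
sudo ovs-vsctl set bridge br-ovs stp_enable=true
sudo ovs-vsctl add-port br-ovs gre1 -- set interface gre1 type=gre options:remote_ip=158.42.104.139 options:key=1
sudo ip link set dev br-ovs mtu 1462 up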

How can I shut down an ethernet interface but not the attached virtual interface?

I have an embedded Linux machine that has an ethernet interface with a working network configuration. A second virtual network also runs on this interface.
The config file reads as follows:
auto lo eth0 eth0:1
# loopback interface
iface lo inet loopback
# ethernet
iface eth0 inet static
address 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.2
# ethernet
iface eth0:1 inet static
address 123.123.123.1
netmask 255.255.255.0
network 123.123.123.0
broadcast 123.123.123.255
gateway 123.123.123.2
Now I need to bring down the eth0 device but still be able to reach the eth0:1 device.
How can this be done?
I tried to simply flush the IP addresses of the eth0 device, which can be done with the command ip addr flush eth0. This works, but it seems the services (webserver etc.) are still listening on this interface...
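An untested, narrower variant of that flush approach (addresses taken from the config above): remove only eth0's primary address and leave the alias address in place.
sudo ip addr del 192.168.1.1/24 dev eth0
ip addr show eth0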
