Docker: Force inter-container communication over a specific network

I'm in the midst of a project meant to convert an existing VoIP legacy system into dockerized form. The existing system consists of 5 different Linux machines, each with 2 network interfaces - one exposed to the public WAN, the other on a private LAN. I plan on creating a docker-compose file to set up the orchestration.
The network roughly looks like this:
Server #1
Eth0: IP 192.168.0.200/24
Eth1: IP X.X.X.65/27
Server #2
Eth0: IP 192.168.0.201/24
Eth1: IP X.X.X.66/27
Server #3
Eth0: IP 192.168.0.202/24
Eth1: IP X.X.X.87/27
Server #4
Eth0: IP 192.168.0.203/24
Eth1: IP Y.Y.Y.240/27
Server #5
Eth0: IP 192.168.0.204/24
Eth1: IP Y.Y.Y.241/27
Servers 1-3 are part of the same subnet; so are servers 4-5.
I am trying to find the best way to convert this network setup into Docker networks. I want every container to preserve its public IP (the one on Eth1, meaning that traffic generated from the container keeps the same public IP it had on the original server), but also to be able to communicate with every other container on the same private net, while keeping everything easily manageable with the least overhead possible.
I've created 3 macvlan networks and 1 bridge network using docker-compose, but the problem is DNS resolution: every container name resolves to the IP address assigned on the macvlan network it belongs to. Say 2 containers are attached both to the bridge network and to the same macvlan network; resolving the other container's name returns its macvlan address rather than its bridge IP address. I would like to force communication between all containers over the bridge network only (essentially setting the macvlan network to a private mode). How can I achieve that?

Consider using --alias with docker network connect, i.e. give the container a dedicated name within the bridge network and use that name for the internal communications.
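In docker-compose terms, the same idea can be expressed with per-network aliases, which Docker's embedded DNS resolves only for containers attached to that network. A minimal sketch - the service names, the alpine image, the eth1 parent, and the 203.0.113.x placeholder addresses are all assumptions, not from the question:

services:
  sip1:
    image: alpine            # placeholder image
    command: sleep infinity
    networks:
      private:
        aliases:
          - sip1-internal    # only resolvable over the bridge network
      public_a:
        ipv4_address: 203.0.113.65   # placeholder public IP
  sip2:
    image: alpine
    command: sleep infinity
    networks:
      private:
        aliases:
          - sip2-internal
      public_a:
        ipv4_address: 203.0.113.66

networks:
  private:
    driver: bridge           # all internal traffic goes here
  public_a:
    driver: macvlan
    driver_opts:
      parent: eth1           # assumed public-facing host interface
    ipam:
      config:
        - subnet: 203.0.113.64/27   # placeholder public subnet

Internal traffic then targets sip1-internal, which exists only on the bridge network; the bare service name may still resolve to the macvlan address, which is the behaviour described in the question.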

Related

How Docker networking works when setting a container IP address different from the Docker-assigned one

Background
I'm currently running OpenWrt inside a Docker container. I created a macvlan network with subnet 172.19.0.0/16 and an OpenWrt container that connects to that macvlan network.
Docker automatically assigned the IP address 172.19.0.2 when creating the OpenWrt container, with MAC address 02:42:ac:13:00:02 (mac1 for short). When I log in to the OpenWrt container and run ip addr, the output shows the IP address 192.168.50.123 bound to mac1.
Problem
The inconsistency of IP addresses makes me extremely confused, because normally I'd access the container via the Docker-assigned IP. In this case, the assigned IP 172.19.0.2 is not pingable. I can access the container through 192.168.50.123, which is on a completely different network from the macvlan network this container connects to. I also edited the OpenWrt IP address in /etc/config/network, and no matter what IP address I choose, I can only connect to the container through that address instead of the one Docker assigned.
My initial thought is that macvlan is all about layer 2 and MAC addresses, so IP addresses don't play a role - but if so, why specify --subnet in the first place when creating a macvlan network, as the Docker docs instruct? I'm new to Docker and don't have much networking experience; I hope someone can explain this.
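For reference, a rough compose reconstruction of the setup described above - the parent interface eth0 and the image name are assumptions, not taken from the question:

services:
  openwrt:
    image: openwrt-rootfs    # hypothetical image name
    networks:
      macnet: {}             # Docker's IPAM picks 172.19.0.2 here

networks:
  macnet:
    driver: macvlan
    driver_opts:
      parent: eth0           # assumed host interface
    ipam:
      config:
        - subnet: 172.19.0.0/16

The --subnet mainly drives Docker's IPAM bookkeeping; on the wire, macvlan delivers frames by MAC address, so once OpenWrt replaces its address via /etc/config/network, the container answers on that address (192.168.50.123) rather than the one Docker recorded.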

Docker: Converting an existing legacy system to Dockerized form while maintaining original network scheme

I'm in the midst of a project meant to convert an existing VoIP legacy system into dockerized form. The existing system consists of 5 different Linux machines, each with 2 network interfaces - one exposed to the public WAN, the other on a private LAN. I plan on creating a docker-compose file to set up the orchestration.
The network roughly looks like this:
Server #1
Eth0: IP 192.168.0.200/24
Eth1: IP X.X.X.65/27
Server #2
Eth0: IP 192.168.0.201/24
Eth1: IP X.X.X.66/27
Server #3
Eth0: IP 192.168.0.202/24
Eth1: IP X.X.X.87/27
Server #4
Eth0: IP 192.168.0.203/24
Eth1: IP Y.Y.Y.240/27
Server #5
Eth0: IP 192.168.0.204/24
Eth1: IP Y.Y.Y.241/27
Servers 1-3 are part of the same subnet; so are servers 4-5.
I am trying to find the best way to convert this network setup into Docker networks. I want every container to preserve its public IP (the one on Eth1, meaning that traffic generated from the container keeps the same public IP it had on the original server), but also to be able to communicate with every other container on the same private net, while keeping everything easily manageable with the least overhead possible.
Would it be possible to mix the two approaches: connect every container to one bridge network, while also having a macvlan network for each container, bound to a different host-level network interface?
Can I create only 2 network interfaces on the host machine, one per subnet, while maintaining the different IP addresses on them (one network interface carrying 2 IPs, the other 3, and each interface backing a corresponding macvlan Docker network)?
Is there a better way to make this work?
EDIT
Using the nmtui command I've created an IPv4 interface with multiple IP addresses. I would like to connect 3 of my containers to this network interface, while providing each one of them with a different public IP.
Given that interface configuration, would it be enough to create a single macvlan network and assign each container its own IPv4 address? Reading about it online hasn't provided me with a definite answer, but it seems likely that the Docker engine will ignore this setting and use the defined primary IP instead for every container.
Essentially, I would like every container to receive traffic on its own host IP, and to deliver traffic from that same IP.
services:
  kamin:
    networks:
      kamin:
        priority: 1
        ipv4_address: "69.31.245.134"

networks:
  kamin:
    driver: macvlan
    driver_opts:
      parent: enp0s25
    ipam:
      config:
        - subnet: 69.31.245.128/29
          gateway: 69.31.245.129
I was able to make it work using 3 network interfaces at the host level, plus 1 custom bridge and 2 ipvlan networks at the Docker level.
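A minimal sketch of that final layout: one shared bridge plus two ipvlan networks, one per public host interface. The interface names (eth1, eth2) and the 203.0.113.x / 198.51.100.x documentation addresses stand in for the real values:

services:
  server1:
    image: alpine            # placeholder for the real service image
    command: sleep infinity
    networks:
      private: {}            # internal traffic between all containers
      public_a:
        ipv4_address: 203.0.113.65    # stands in for X.X.X.65
  server4:
    image: alpine
    command: sleep infinity
    networks:
      private: {}
      public_b:
        ipv4_address: 198.51.100.240  # stands in for Y.Y.Y.240

networks:
  private:
    driver: bridge
  public_a:
    driver: ipvlan
    driver_opts:
      parent: eth1           # host interface holding the X.X.X addresses
    ipam:
      config:
        - subnet: 203.0.113.64/27
          gateway: 203.0.113.94       # assumed gateway within the /27
  public_b:
    driver: ipvlan
    driver_opts:
      parent: eth2           # host interface holding the Y.Y.Y addresses
    ipam:
      config:
        - subnet: 198.51.100.224/27
          gateway: 198.51.100.254     # assumed gateway within the /27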

How can I make docker container IP addresses accessible in a WLAN?

I'm running Docker containers on a host (A) that sits in a local network and gets its IP address from the WLAN router via DHCP. I'd like to access the Docker containers by IP address from another host (B) in the same local network. I've configured a macvlan Docker network in my docker-compose file. However, if I scan the network for IP addresses with e.g. nmap -sP XXX.XXX.XXX.0/24 (XXX.XXX.XXX being the network prefix), I don't find any new IP addresses. In general: do I have to consider something special in a setup like this?
Reference to a similar, simplifying question on forums.docker.com.
Macvlan does not generally work over wireless interfaces. It took me hours to discover that, as it is mentioned almost nowhere in the macvlan documentation. See: http://hicu.be/macvlan-vs-ipvlan
From my understanding, access points don't like getting packets from MAC addresses that haven't previously authenticated with them.
ipvlan in L2 mode works: just replace the macvlan driver with ipvlan and specify ipvlan_mode: l2 under driver_opts.
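In compose terms, that swap looks roughly like this - the wlan0 interface and the 192.168.50.0/24 addressing are assumptions for a typical home WLAN:

services:
  web:
    image: nginx:alpine      # placeholder workload
    networks:
      wlan_net:
        ipv4_address: 192.168.50.50   # pick a free address on the WLAN

networks:
  wlan_net:
    driver: ipvlan           # instead of macvlan
    driver_opts:
      parent: wlan0          # assumed wireless interface
      ipvlan_mode: l2
    ipam:
      config:
        - subnet: 192.168.50.0/24
          gateway: 192.168.50.1

In ipvlan L2 mode every container shares the parent interface's MAC address, so the access point only ever sees the MAC it has already authenticated.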

Docker: how to access the hosts network with a docker container?

How can I access the host's network from a Docker container? Can I put a container on the host's network with an additional IP from that network?
Current situation:
Docker container (default bridge network): 172.17.0.2/16
Host (server): 10.0.0.2/24
Question:
Can I put the docker container on the 10.0.0.0/24 network as a secondary address?
(or) Can I access the host's network from the container, and vice versa?
Reason:
I want to access the host's network from my container (for example: a monitoring server).
I want the container to act as a server accessible from the host's network on all ports.
Note:
I run several Docker containers, so a few ports are already forwarded from the host and these should remain so. Forwarding all ports from the host's IP therefore isn't really a solution here.
Setup on host:
basic docker system
Centos 7
Macvlan networks may be the solution you are looking for.
You can assign multiple MAC/IP addresses to virtual NICs on top of a single physical NIC.
There are some prerequisites for using macvlan.
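A sketch using the question's addressing - the eth0 parent, the 10.0.0.50 container address, and the 10.0.0.1 gateway are assumptions:

services:
  monitor:
    image: alpine            # placeholder for the monitoring container
    command: sleep infinity
    networks:
      lan:
        ipv4_address: 10.0.0.50   # extra address on the host's LAN

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0           # assumed host NIC on 10.0.0.0/24
    ipam:
      config:
        - subnet: 10.0.0.0/24
          gateway: 10.0.0.1  # assumed LAN gateway

One known macvlan caveat: the host itself cannot reach the container over the parent interface (other machines on 10.0.0.0/24 can); a macvlan sub-interface on the host is the usual workaround.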

Bridge networking on Docker vs bridged networking on VMware/VirtualBox seems to be very different. Why?

TL;DR - Why does Docker call its default networking a bridge network when it behaves a lot like a NAT network?
Let's start by looking at how VMware or VirtualBox handles networking for virtual machines. Say the host IP is some random 152.50.60.21 and the network CIDR is 152.50.60.0/24.
Bridge network - Any VM connected through this interface can have any free IP on the network the host is connected to. So if the IP 152.50.60.30 is free, a VM can bind to it; similarly, a second VM can have the IP 152.50.60.32 if that one is free.
The bridge network connects the VMs to the same network the host is connected to. Any machine on the internet can reach the VMs, and the VMs can reach the internet directly (provided, of course, that the host network is connected to the internet).
NAT network - NAT is a separate network from the one the host is connected to, and VMware can accept any valid CIDR (to keep things simple I will refer to the private reserved blocks only, though, if I am right, any CIDR is fine). This new NAT network, created on the host and accessible only from the host, can safely have CIDR 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 (or any subnet of these). I am picking 10.0.0.0/8.
So two VMs spinning up on the host and connected through the NAT network can have the IPs 10.0.3.3 and 10.0.3.6.
Being on the NAT network, the VMs are not visible to the outer world beyond the host, i.e. the VMs are not reachable from outside (except with DNAT/port-forwarding configuration on the host). But the VMs can access the outer world/internet/intranet through SNAT provided by the host, i.e. the IPs of the VMs are never exposed outside.
VMWare Doc's reference: Understanding Common Networking Configurations
Next, let's look at the Docker side -
Docker's default networking
When an image is run on the host (whose IP from above is 152.50.60.21) using Docker's default networking (which it calls a bridge network), the new container can get an IP of, say, 172.17.0.13 from the network 172.16.0.0/12 (at least in my environment). Similarly, a second container can get the IP 172.17.0.23. For accessing the internet these containers rely on SNAT provided by the host, and no machine on the internet/intranet can access the containers except through port forwarding provided by the host. So the containers are not visible to the world, except to the host.
Looking at this I would assume that the default networking provided by Docker is a NAT network, but Docker likes to call it a bridge network.
So, could anyone say where I've got things mixed up, or how I should be looking at bridge/NAT networks?
Docker's equivalent of the VMware or VirtualBox bridge network is macvlan.
From the docs:
...you can use the macvlan network driver to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network. In this case, you need to designate a physical interface on your Docker host to use for the macvlan, as well as the subnet and gateway of the macvlan. You can even isolate your macvlan networks using different physical network interfaces.
When you create a macvlan network, it can either be in bridge mode or 802.1q trunk bridge mode.
In bridge mode, macvlan traffic goes through a physical device on the host.
In the simple bridge example, your traffic flows through eth0 and Docker routes traffic to your container using its MAC address. To network devices on your network, your container appears to be physically attached to the network.
Example of macvlan bridge mode:
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 \
my-macvlan-net
This command creates a macvlan network named my-macvlan-net on top of eth0.
In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface which Docker creates on the fly. This allows you to control routing and filtering at a more granular level.
In the 802.1q trunked bridge example, your traffic flows through a sub-interface of eth0 (called eth0.10 in the example below) and Docker routes traffic to your container using its MAC address. To network devices on your network, your container appears to be physically attached to the network.
Example of macvlan 802.1q trunk bridge mode:
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0.10 \
my-8021q-macvlan-net
This creates a macvlan network whose parent is the sub-interface eth0.10.
The naming seems confusing to someone coming from VMware or VirtualBox, but the equivalent exists.
You can see another macvlan tutorial (which includes assigning an IP address to the container) here.
Adding this as an answer because it is too long to be a comment.
After hours of googling and reading docs, since I couldn't find any concrete answer, my theory is that the difference exists because there are no standards defining these virtual interfaces, unlike physical interfaces, which are defined by the IEEE and IETF.
There is a list of Linux virtual network interfaces by Red Hat. Most of the network interface types that you see in Docker and VMware/VirtualBox are a subset of this list, with some exceptions - the overlay network, for example, exists only in Docker, to serve its special purpose of connecting containers on different hosts into a single network.
Back to the point: these virtual network interfaces are not defined by Cisco or the IETF or the IEEE or anyone else. Physical networking interfaces are implemented at the OS/network-device level and follow defined standards; other products - be it a virtualization platform (VMware/VirtualBox) or containerization (Docker) - have to implement their own networking solutions. At that point they can implement their own interfaces and give them some fancy name, or they can call them the same thing they're called at the OS level.
I am certain the Docker devs know the difference between NAT and bridge; they probably knew that the network they call bridge is a NAT network, yet decided to call it a bridge for some reason. And since there exists no centralized definition of a bridge or NAT network, you can't say that Docker is wrong. They wanted to call this network a bridge, so they call it a bridge. Yes, it can indeed be confusing to people coming from a virtualization/Linux networking background, but it is what it is.
Christophorus Reyhan already wrote an amazing answer about the bridge equivalent in Docker and how to create it. I am only adding this because I was not satisfied with the WHY of the question: why is it not the same? Yes, my answer is based on some assumptions, but this is the closest I could come to a reason. There is no way to confirm the hypothesis unless one day some Docker dev sees this and decides to reply.
One more thing: for VMware/VirtualBox the host is the physical machine itself, so in bridge mode a VM connects to the host's network. For containers, though, the host is not the physical machine but the Docker host/engine, so in bridge mode containers attach to their host's network - the Docker host in this case. It is still a bridge network, but the hierarchy is different this time: container -> Docker host -> host OS, rather than guest OS -> host OS. This could be another possibility as well.
From the docker documentation:
macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
"rather than through the Docker host's network stack" and
Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
which indicates that networking in the world of Docker is not the same as in VMs, though they may sometimes share some similarities or the same names.
