Multicast traffic to Kubernetes [closed] - docker

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 4 months ago.
I want my pods to receive multicast network traffic flowing from outside my Kubernetes cluster to specific ports on my nodes.
I'm considering two solutions:
Adding a hostNetwork: true flag to their YAML files, along with a hostPort configuration, so the traffic reaches the pods directly.
Forwarding the traffic locally on the nodes from the eth0 interface to the docker0 interface using an iptables rule.
Method 1 is an official Kubernetes feature, but it feels like breaking down a security wall that Docker originally put up, and it might cause port collisions with the host's processes, etc.
Method 2, on the other hand, transparently forwards the multicast traffic to the pods.
Although I could use an automation tool (Ansible, Salt, etc.) to roll out this configuration, anything configured outside the scope of Kubernetes feels a little hacky to me.
I'd like to hear your pros and cons, comments, and perhaps other solutions to the problem of getting multicast traffic into a Kubernetes cluster.
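Whichever method delivers the packets to the pod, the pod itself still has to join the multicast group. Here is a minimal sketch of the receiving side with a made-up group and port; to keep it self-contained it joins the group on the loopback interface and loops a test packet back to itself, whereas a real pod would join on 0.0.0.0 or its eth0 address:

```python
import socket
import struct

GROUP = "239.1.2.3"  # made-up multicast group
PORT = 5007          # made-up port

# Receiver: bind the port and join the group (here on loopback,
# so the example works without any external network).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5)

# Local sender, just to exercise the receive path end to end.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
tx.sendto(b"hello multicast", (GROUP, PORT))

data, sender = rx.recvfrom(4096)
print(data)
```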

A cleaner way to support multicast is to add an additional interface to your pods through multus-cni. You can then associate this new Multus interface with the host network interface that receives the multicast traffic on the host. In summary, each pod will have two interfaces, i.e.:
eth0 (default) for pod-to-pod communication and other unicast traffic.
net1 (Multus) for multicast traffic. You will then need to "join" it to a NIC on your host machine, using either a bridge or macvlan.
See more details here: https://github.com/intel/multus-cni/blob/master/docs/quickstart.md
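As a sketch, a macvlan attachment in the format the quickstart uses might look like the following; the attachment name, master interface, and subnet here are illustrative:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.1.0/24"
    }
  }'
```

A pod then requests it with the annotation `k8s.v1.cni.cncf.io/networks: macvlan-conf`, and the extra interface shows up in the pod as `net1`.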

In the end we picked method 1, as it is the documented way to achieve what we wanted, and I can report that it works fine.

I heard that Weaveworks supports multicast: https://www.weave.works/use-cases/multicast-networking/
A GitHub issue has a few words on multicast support.

Related

Could IPv4 loopback addresses be used for IPC?

I was quite surprised when I found out that there was a really big range of IP addresses allocated for loopback (127.x.y.z).
I didn't find much information about why it's like this, except that it could be used for testing networks and protocols locally, which got me wondering whether these addresses could be a good fit for IPC.
At the moment, as far as I know, IPC based on networking is usually done with TCP/UDP by opening sockets on ports which are most likely not used by any other service.
So my question is, to be even more sure that there won't be a port collision, could other loopback addresses be used instead?
For a more concrete example, could two processes communicate through sockets on address 127.31.41.59 and ports 27 and 18 (or even different loopback addresses)?
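As a quick sketch of that concrete example: on Linux the whole 127.0.0.0/8 block is routed to the loopback interface, so any 127.x.y.z address can be bound without extra configuration. The ports are changed from the 27 and 18 in the example, since ports below 1024 would require root:

```python
import socket

# Two endpoints on distinct loopback addresses.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.31.41.59", 40027))  # any 127.x.y.z works on Linux
a.settimeout(5)

b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.2", 40018))

b.sendto(b"ping", ("127.31.41.59", 40027))
msg, peer = a.recvfrom(1024)
print(msg, peer)  # the source address identifies which "service" sent it
```

Note that this is Linux behavior; on macOS only 127.0.0.1 is configured by default, and other loopback addresses must first be added as aliases on lo0.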

How do (Docker) virtual networks actually work? [closed]

Can someone explain to me how these virtual networks work? I know how "normal" networks work: a PC with its MAC address connects via its private IP address to a router that has a public IP address. But I just don't get what these "virtual" networks mean.
Do containers actually get another IP address?
Does that IP address translate into the host PC's address, so the router doesn't see the container and the host as separate private IP addresses?
How can these IP addresses differ from the host's IP address if the host has only one network card?
Do you know any good tutorial on this stuff? It doesn't have to be about Docker, just a general explanation of how virtual networks work. I tried reading the official Docker docs about networking, but they are too complicated for me; I'm not that good at this stuff.
I don't need the details, just the picture of how this is possible and what it actually means when we create a new Docker network.
Well, explaining how Docker or LXD/LXC create and manage virtual networks is a bit long.
This is a high-level overview; I will add some useful links if you are interested in the topic.
In Linux you can create virtual network interfaces (veth pairs) that behave like real network interfaces (a MAC address or an IP address can be assigned to them); these interfaces are attached to the containers.
You can connect the containers locally using virtual bridges (bridge-utils).
The virtual bridge is connected to the virtual interfaces attached to the containers; that is how you create a simple virtual network on a single machine.
Docker or LXD manages the virtual interfaces and the virtual bridges for you to connect the containers (like a real network).
This is a really high-level overview that gives you an idea of how containers can be connected locally.
To give a container an internet connection, the container manager also has to set other parameters correctly, such as iptables rules to NAT the traffic.
This video can be helpful for a better understanding.

Why not use the host network in Docker, since Docker and Kubernetes networking is so complex?

Using Docker can simplify CI/CD but also introduces complexity; not everybody is able to master Docker networking, even when choosing open-source solutions like Flannel or Calico.
So why not use the host network in Docker? Or: what is lost by using the host network in Docker?
I know port conflicts are one point; are there any others?
There are two parts to an answer to your question:
Pods must have individual, cluster-routable, IP addresses and one should be very cautious about recycling them
You can, if you wish, not use any software defined network (SDN)
So with the first part, it is usually a huge hassle to provision a big enough CIDR to house the address range required for supporting every Pod that is running across every Namespace, and have the space be big enough to avoid recycling addresses for a very long time. Thus, having an SDN allows using "fake" addresses that one need not bother the "real" network with knowing about. No routers need to be updated, no firewalls, no DHCP, whatever.
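To put rough numbers on that sizing concern, assuming the common 10.244.0.0/16 default pod CIDR and the typical /24-per-node allocation:

```python
import ipaddress

cluster = ipaddress.ip_network("10.244.0.0/16")  # a common default pod CIDR
node_prefix = 24                                  # one /24 handed to each node

max_nodes = 2 ** (node_prefix - cluster.prefixlen)  # /24 subnets that fit in the /16
pods_per_node = 2 ** (32 - node_prefix) - 2         # minus network/broadcast addresses
print(max_nodes, pods_per_node)  # 256 254
```

Carving that /16 out of a real corporate network, and keeping it free long enough to avoid address recycling, is exactly the hassle an SDN sidesteps.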
That said, as with the second part, you don't have to use an SDN: that's exactly what the container network interface (CNI) is designed to paper over. You can use the CNI provider that makes you the happiest, including using static IP addresses or the outer network's DHCP server.
But your comment about port collisions is pretty high up the list of reasons not to just set hostNetwork: true and be done with it. I'm actually not certain whether the default Kubernetes scheduler considers hostNetwork: true and the declared ports: on the containers: in order to avoid co-scheduling two containers that would conflict. I guess try it and see, or, better yet, don't: use CNI, so the next poor person who tries to interact with your cluster doesn't find a snowflake setup.

NAT punchthrough: understanding the P2P concept [closed]

So, I have been reading up on NAT punchthrough. I seem to be getting the idea, but I have a hard time implementing it, and I feel that I am missing a step here.
Testing this functionality is kind of hard, because I have little control over the environment when it comes to an internet-based connection.
I have a SQL server that acts as my "facilitator": it keeps the external addresses of both server and client, and their ports as seen from the outside.
Here are the steps so far:
- I connect to my SQL server through a web request (a PHP script) that stores the server's and client's IP/port.
- When both are known, both client and server attempt to connect (the server listens on a set port, the client connects over a set port).
- Nothing significant happens.
There are two unknowns here, and I would like to check one with you.
Is it true that NAT punchthrough requires that I do the first step with the exact (internal/LAN) port I plan to connect with in the step after that?
If so, I don't know exactly how my server works under the hood, so it might need more ports than my initial static port to connect over, but that at least gives me a hint.
If anyone has more documentation on this than I do, please let me know.
Sources:
Programming P2P application
http://www.mindcontrol.org/~hplus/nat-punch.html
NAT punchthrough works on the principle of educated guesswork. It is usually used to create connections between devices that do IP masquerading. This is the technology used in most home internet modems, to the point that "NAT" has become used interchangeably with IP masquerading.
When you connect out from a device behind a NAT system such as a home modem, you have no control over the port that will be used for the outbound connection to the Internet. However, many of these devices allocate ports using specific patterns, for example incremental numbers.
NAT punch through involves trying to directly connect two source systems that are both behind independent NAT devices. A third system, your "facilitator" acts as a detector for the origin port numbers currently being assigned by both NAT devices on outbound connections. The origin port number, along with the IP address is then sent to the other parties.
So now the clever bit to answer your question. Both systems that want to directly connect, start trying to communicate to the other. They try connecting to a range of ports, around the known port number detected by the facilitator. This is the guesswork.
It is important that both source systems start trying to connect as this will establish NAT sessions in the local devices that allow traffic from the Internet in. If either source device correctly guesses one of those NAT session port numbers, then a connection is established.
In reality, engineers from organisations that have use for NAT punch through have probably spent some time examining the more popular NAT port allocation algorithms and tuning their software. If you have control of connections through your NAT devices, then it would be fairly easy to set up some tests and see how the port numbers change between connections to different servers.
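The guessing step described above can be sketched as follows; the punch helper and the probe payload are made up for illustration, and the loopback target at the end merely stands in for a real peer behind a NAT:

```python
import socket

def punch(sock, peer_ip, detected_port, spread=10, attempts=3):
    """Probe a window of ports around the origin port the facilitator
    observed, hoping to hit the NAT session opened by the peer's own
    outbound probes."""
    for _ in range(attempts):
        for port in range(detected_port - spread, detected_port + spread + 1):
            sock.sendto(b"punch", (peer_ip, port))

# Each peer keeps using the SAME local port it used to reach the
# facilitator, so its NAT mapping stays predictable.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))
punch(sock, "127.0.0.1", 40100, spread=2)  # loopback stand-in for a peer
```

Both sides run this concurrently; whichever probe lands first establishes the session, after which the peers can talk directly on that port pair.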

Using Docker for a mail server [closed]

I've been interested in Docker for a while, but I haven't jumped in yet. I need to set up a mail server, so I thought I could use this as a reason to learn more about Docker. However, I'm unclear how best to go about it.
I've installed a mail server on a VPS before, but not split across multiple containers. I'd like to install Postfix, Dovecot, MySQL or PostgreSQL, and SpamAssassin, similar to what is described here:
https://www.digitalocean.com/community/tutorials/how-to-configure-a-mail-server-using-postfix-dovecot-mysql-and-spamassasin
However, what would be a good way to dockerize it? Would I simply put everything into a single container? Or would it be better to have MySQL in one container, Postfix in another, and additional containers for Dovecot and SpamAssassin? Or should some containers be shared?
Are there any HOWTOs on installing a mailserver using docker? If there is, I haven't found it yet.
The point of Docker isn't containerization for containerization's sake. It is to put together things that belong together and separate things that don't belong together.
With that in mind, the way I would set this up is with one container for the MySQL database and another container for all of the mail components. The mail components are typically integrated with each other by calling each other's executables or by reading/writing shared files, so it does not make sense to split them into separate containers anyway. Since the database could also be used for other things, and communication with it is done over a socket, it makes more sense for it to be a separate container.
Dovecot, SpamAssassin, et al. can go in containers separate from Postfix. Use LMTP for the connections and it will all work. This much is practical.
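As a sketch, the two-container split (database plus one combined mail image) could be expressed in docker-compose; the credentials are placeholders, and ./mail stands for a hypothetical build context bundling Postfix, Dovecot, and SpamAssassin:

```yaml
# docker-compose.yml
version: "3"
services:
  db:
    image: mysql:8              # stock image; password is a placeholder
    environment:
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      - dbdata:/var/lib/mysql
  mail:
    build: ./mail               # hypothetical image with Postfix, Dovecot, SpamAssassin
    ports:
      - "25:25"                 # SMTP
      - "143:143"               # IMAP
    depends_on:
      - db
volumes:
  dbdata:
```

The named volume keeps the mailbox database across container rebuilds, which matters more for a mail server than for most stateless services.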
Now for the ideological bit: if you really wanted to do things 'the Docker way', what would that look like?
Postfix is the difficult one. It's not one daemon, but rather a cluster of different daemons that talk to each other and do different parts of the mail handling tasks. Some of the interaction between these component daemons is via files (e.g the mail queues), some is via sockets, and some is via signals.
When you start up postfix, you really start the 'master' daemon, which then starts the other daemon processes it needs using the rules in master.cf.
Logging is particularly difficult in this scenario. All the different daemons independently log to /dev/log, and there's really no way to process those logs without putting a syslog daemon inside the container. "Not the docker way!"
Basically the compartmentalisation of functionality in postfix is very much a micro-service sort of approach, but it's not based on containerisation. There's no way for you to separate the different services out into different containers under docker, and even if you could, the reliance on signals is problematic.
I suppose it might be possible to re-engineer the 'master' daemon, giving it access to the Docker daemon on the host (or running Docker within Docker), so that this new master daemon could coordinate the various services in separate containers. We can speculate, but I've not heard of anyone taking this on as an actual project.
That leaves us with the more likely option of choosing a more container-friendly daemon than Postfix for use in Docker. I've been using Postfix more or less exclusively for about the past decade and haven't had much reason to look at the options until now. I'd be very interested if anyone can comment on more Docker-friendly MTA options.
