What is the downside of increasing the maximum number of Pods per node in AKS with CNI?

When creating a new node pool in Azure Kubernetes Service, the default maximum number of pods per node differs between Azure CNI and kubenet (and depends on the tool used to create the node pool). According to the docs, the default is usually 30 with Azure CNI but 110 with kubenet.
Why does CNI have a lower default, and what are the downsides of raising the value, e.g. to 110, as with kubenet?

110 is the default maximum defined by upstream Kubernetes. With kubenet, only the nodes get an IP address on your Azure subnet. The pods get IPs from a logical network on each node and use NAT (iptables) to communicate with the Azure network.
When you use Azure CNI, Azure pre-allocates IP addresses in your subnet. From the docs:
Each node is configured with a primary IP address. By default, 30 additional IP addresses are pre-configured by Azure CNI that are assigned to pods scheduled on the node.
This means that for every node with max pods 30 you need 31 free addresses in your subnet. If your subnet is too small, you cannot add any new node, because Azure needs those 31 IP addresses to add it.
I think max pods 30 is more of a safety value, because people tend to use /24 subnets. With max pods 110 you could only add 2 nodes to such a subnet. Once your AKS cluster is running, you cannot change the subnet or the max pods setting; that would require creating a new cluster.
There is no downside to using max pods 110 on your nodes, except that you need to size your subnet accordingly, which requires more planning (a rough sizing calculation is sketched below). We mostly use /16 VNets with /21 subnets and max pods 110 on our AKS clusters:
Clusters configured with Azure CNI networking require additional planning. The size of your virtual network and its subnet must accommodate the number of pods you plan to run and the number of nodes for the cluster.
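To make the subnet math concrete, here is a rough sizing sketch in Python using the standard ipaddress module; the figure of 5 reserved addresses per subnet reflects Azure's reserved addresses, and the rest is just the per-node arithmetic described above (treat it as an estimate, not an official formula):

# Rough AKS/Azure CNI subnet sizing estimate (a sketch, not an official formula).
# Assumption: Azure reserves 5 addresses in every subnet; with Azure CNI each node
# consumes 1 IP for itself plus max_pods pre-allocated IPs for its pods.
import ipaddress

AZURE_RESERVED_PER_SUBNET = 5

def max_nodes(subnet_cidr: str, max_pods: int) -> int:
    subnet = ipaddress.ip_network(subnet_cidr)
    usable = subnet.num_addresses - AZURE_RESERVED_PER_SUBNET
    return usable // (1 + max_pods)  # node IP + pre-allocated pod IPs

print(max_nodes("10.240.0.0/24", 30))   # about 8 nodes with the CNI default of 30
print(max_nodes("10.240.0.0/24", 110))  # only 2 nodes, as mentioned above
print(max_nodes("10.240.0.0/21", 110))  # a /21 leaves room for roughly 18 nodes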

Related

Multiple nodered containers with docker and nginx

I have a setup where users can create Node-RED instances using Docker containers. I use one Docker container per instance and nginx as a reverse proxy.
I need to know how many containers can be created in one network and, if the number is limited, how I can increase it.
There are a few possible answers to your question:
1. I don't think nginx will limit the number of Node-RED instances you can have.
2. If you are working on one machine with one IP address, you can change the port number for every Node-RED instance, so the limit would be around 65,535 instances (a little lower, since a few ports are already in use).
3. If you are using multiple machines (let's say virtual machines) with only one port per Node-RED instance, you are limited by the number of IP addresses in your subnet. In a normal /24 subnet (255.255.255.0) there are 254 usable IP addresses (a rough capacity calculation is sketched after this list).
3.1. You can change the subnet of your local network:
https://www.freecodecamp.org/news/subnet-cheat-sheet-24-subnet-mask-30-26-27-29-and-other-ip-address-cidr-network-references/
4. If you are using multiple machines and a wide range of available ports, there is almost no limit on how many instances you can deploy; the limit would be your hardware, I think.
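To put rough numbers on points 2 and 3, here is a small Python sketch; the subnet and the usable port range below are assumptions chosen only for illustration:

# Rough capacity estimate for Node-RED containers, one container per host:port pair.
# Assumptions: an illustrative /24 subnet and the non-privileged port range 1024-65535.
import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")
usable_hosts = subnet.num_addresses - 2        # minus network and broadcast addresses
usable_ports = 65535 - 1024 + 1                # ports below 1024 are privileged

print(usable_hosts)                  # 254 hosts in a /24
print(usable_ports)                  # 64512 ports per IP address
print(usable_hosts * usable_ports)   # upper bound if every host exposes every port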

gke-1.13 and gke-1.14 differences in communication with Peered VPC

Does anyone know why pods on GKE 1.13 use the VM (node) IP to communicate with a peered VPC, whereas pods on GKE 1.14 and 1.15 use the pod IP to communicate with machines in the peered VPC?
The GCP release notes for June 4th, 2019 say:
New clusters will begin to default to VPC-native
The VPC-native cluster documentation says there are two types of clusters, VPC-native and routes-based:
A cluster that uses alias IP ranges is called a VPC-native cluster. A cluster that uses Google Cloud Routes is called a routes-based cluster.
Additionally, that page sheds light on how the default cluster network mode is selected: it depends on the way the cluster is created (a quick way to check which mode an existing cluster uses is sketched below).
The peculiarity is that you cannot convert a VPC-native cluster into a routes-based cluster, and you cannot convert a routes-based cluster into a VPC-native cluster.
Hope that is exactly the info you've been looking for.
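If you want to verify which mode a given cluster uses, something along the following lines with the google-cloud-container Python client should work; the project, location, and cluster names are placeholders, and the exact call signature may differ between library versions:

# Hedged sketch: check whether a GKE cluster is VPC-native (alias IPs) or routes-based.
# Requires the google-cloud-container package; the resource name is a placeholder.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
cluster = client.get_cluster(
    name="projects/my-project/locations/us-central1-a/clusters/my-cluster"
)

# VPC-native clusters have alias IP ranges enabled in their IP allocation policy.
if cluster.ip_allocation_policy.use_ip_aliases:
    print("VPC-native: pods reach the peered VPC with their pod IPs")
else:
    print("Routes-based: pod traffic shows up with the node (VM) IP, as observed on 1.13")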

Is using NSG on AKS advanced networking subnet supported and what are the ports needed to be open between nodes and master?

Which TCP/UDP ports need to be open between the nodes and the master of Azure Kubernetes Service when the nodes are in a subnet that uses advanced networking?
For security reasons we have to use a Network Security Group on every subnet in Azure that is connected to the on-premises network via VPN. This NSG has to deny all traffic between machines by default, even within the same subnet, to hinder attackers from traversing between systems.
The same applies to Azure Kubernetes Service with advanced networking, which uses a subnet connected via VNet peering.
We couldn't find an answer on whether it is a supported scenario to have an NSG on the subnet of the AKS advanced network, and which ports are needed to make it work.
We tried our default NSG, which denies traffic between hosts, but this prevents us from connecting to the services and prevents the nodes from coming up without errors.
AKS is a managed cluster. The managed cluster master means that you don't need to configure components like a highly available etcd store, but it also means that you can't access the cluster master directly.
When you create an AKS cluster, a cluster master is automatically created and configured, and the Azure platform configures the secure communication between the cluster master and the nodes. Interaction with the cluster master occurs through Kubernetes APIs, such as kubectl or the Kubernetes dashboard.
For more details, see Kubernetes core concepts for Azure Kubernetes Service (AKS). If you need to configure the cluster master and everything else yourself, you can deploy your own Kubernetes cluster using aks-engine.
To improve the security of your pods, you can use network policies, although this feature is still in preview.
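As an example of such a policy, here is a minimal sketch using the official kubernetes Python client; the namespace and the app=frontend/app=backend labels are made-up placeholders, and the policy simply restricts ingress to the backend pods to traffic coming from the frontend pods:

# Hedged sketch: a minimal NetworkPolicy created with the kubernetes Python client.
# The namespace and labels are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-backend", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)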
Also, it's not recommended to expose remote connectivity to the AKS cluster nodes. If you need to connect to the nodes, the suggestion is to create a bastion host, or jump box, in a management virtual network and use it to securely route traffic into your AKS cluster for remote management tasks. For more details, see Securely connect to nodes through a bastion host.
If you have more questions, please let me know. I'm glad to provide more help.

Docker overlay network among different datacenters

Hi all. I'm learning Docker, but I still cannot find any documentation about how the Docker ingress network connects several separate hosts.
I have two VMs in different datacenters and want to create a swarm cluster on them.
Is it possible that the default ingress network makes containers on vm1 visible to containers on vm2 inside some overlay network, or do vm1 and vm2 have to be on the same local network?
In general, it's not recommended to span datacenters within a Swarm. You can span availability zones (datacenters in the same geographic area with ~10 ms or less latency between them), but separate regions should be their own Swarms. This is 100% a latency issue for inter-virtual-network traffic (the overlay driver) and the Raft consensus traffic between Swarm managers. There is no hard limit on latency, but you likely don't want the complexity, in a single Swarm, of trying to prevent your apps' traffic from hopping back and forth between datacenters... unless the datacenters have very low latency between them.
For more data on this look at the Docker Success site (search swarm overlay and filter to reference), as the Docker EE requirements for Swarm are the same as Docker CE generally.
The other requirement between nodes in a Swarm is that the necessary ports are open between each other's public IPs. Ideally, there is no NAT between nodes.
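For reference, the ports Swarm itself needs between nodes are TCP 2377 (cluster management), TCP/UDP 7946 (node gossip), and UDP 4789 (overlay/VXLAN traffic). A quick TCP reachability check from one node toward another could look like the Python sketch below; the target IP is a placeholder, and the UDP ports cannot be verified with a plain connect:

# Quick sketch: verify the TCP ports Swarm needs are reachable on a remote node.
# TCP 2377 = cluster management, TCP 7946 = node gossip. UDP 7946 and UDP 4789
# (VXLAN) cannot be checked with a plain connect and are omitted. IP is a placeholder.
import socket

REMOTE_NODE = "203.0.113.10"
TCP_PORTS = [2377, 7946]

for port in TCP_PORTS:
    try:
        with socket.create_connection((REMOTE_NODE, port), timeout=3):
            print(f"{REMOTE_NODE}:{port} reachable")
    except OSError as exc:
        print(f"{REMOTE_NODE}:{port} NOT reachable ({exc})")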
If both hosts are part of the same Docker swarm cluster, then from Docker's perspective it does not matter that they are in different datacenters. Routing between services will just work; for example, service1 on host1 will be able to access service2 in another datacenter. You might, however, need to account for the potentially large latencies caused by the physical distance between the hosts.
The same goes for the ingress network: it does not care that there are two datacenters. Any swarm cluster node will participate in it and route incoming requests to the correct service/host.

Access a VM in the same network as the nodes of my cluster from a pod

I have a Kubernetes cluster with some nodes and a VM in the same network as the nodes. I need to execute a command on this VM via SSH from one of my pods. Is this even possible?
I do not control the cluster or the VM, I just have access to them.
Well, this is a network-level issue. When you have a Kubernetes cluster on the same network as your target, there is a potential issue that might or might not show up: the origin IP on the TCP connection. If your nodes MASQ/SNAT all outgoing traffic, then you are fine, but traffic to a VM in the same network as the kube nodes might actually be excluded from that MASQ/SNAT. The reason is that kube nodes know how to route traffic based on pod IPs, because they have the overlay networking installed (flannel, calico, weave, etc.), while an outside VM does not.
To sum up, either the traffic to your destination VM needs to be MASQ/SNATed at some point, or the target VM has to be able to route traffic back to your pod, which usually means it needs the overlay networking installed (with the exception of setups implemented at a higher networking level than the nodes themselves, e.g. AWS VPC routing tables).
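Assuming the routing/SNAT question above is sorted out, actually running the command from the pod is straightforward; for example with the paramiko library (the host, username, and key path are placeholders):

# Hedged sketch: run a command over SSH from inside a pod using paramiko.
# Hostname, username, and key path are placeholders for illustration.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("10.0.0.5", username="azureuser", key_filename="/secrets/id_rsa")

stdin, stdout, stderr = ssh.exec_command("uptime")
print(stdout.read().decode())
ssh.close()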
