gke-1.13 and gke-1.14 differences in communication with Peered VPC - gke-networking

Does anyone know why a pod on GKE 1.13 uses the node (VM) IP to communicate with a peered VPC, whereas a pod on GKE 1.14 or 1.15 uses its pod IP to communicate with machines in the peered VPC?

The GCP release notes for June 4th, 2019 say:
New clusters will begin to default to VPC-native
The VPC-native clusters page says that there are two types of clusters: VPC-native and routes-based.
A cluster that uses alias IP ranges is called a VPC-native cluster. A cluster that uses Google Cloud Routes is called a routes-based cluster.
Additionally, that page sheds light on how the default cluster network mode is selected; it depends on the way the cluster is created.
The peculiarity is that you cannot convert a VPC-native cluster into a routes-based cluster, and you cannot convert a routes-based cluster into a VPC-native cluster.
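Whether a new cluster is VPC-native can also be chosen explicitly at creation time. A minimal sketch with the gcloud CLI (the cluster name is a placeholder; the comments reflect the behaviour described in the question):
# VPC-native (alias IP) cluster: pod IPs are real VPC addresses,
# so a peered VPC sees the pod IP directly
gcloud container clusters create my-cluster --enable-ip-alias
# Routes-based cluster: pod traffic leaving the cluster is seen
# with the node (VM) IP, matching what was observed on 1.13
gcloud container clusters create my-cluster --no-enable-ip-alias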
Hope that is exactly the info you've been looking for.

Related

Host a service on Minikube with a publicly available TCP connection?

I'm currently working on making a game where the servers need to be replicated as pods inside a Minikube cluster (I'm using Minikube since it's just a school project). However, I cannot seem to figure out how to make these game servers accessible from other networks. I have tried making a NodePort service that points to the deployment of my server image, and when using the Minikube cluster IP plus my service's port I can connect fine.
What would be the best approach to solve this issue?
By default, a Pod is only accessible by its internal IP address within the Kubernetes cluster. To make your container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service, in this example as type LoadBalancer.
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
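As a minimal sketch (the Deployment name and port are placeholders, not taken from the question):
# Expose the game-server Deployment as a LoadBalancer Service
kubectl expose deployment game-server --type=LoadBalancer --port=7777
# Minikube has no cloud load balancer, so get a reachable URL for the Service
minikube service game-server --url
# Alternatively, a NodePort Service is reachable on <node-IP>:<nodePort>
kubectl expose deployment game-server --type=NodePort --port=7777 --name=game-server-nodeport
kubectl get service game-server-nodeport   # shows the allocated nodePort (30000-32767 by default)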
Minikube is great for local setups but not for real clusters. It spins up only a one-node cluster for development and testing, so it is not suitable if you want a multi-server environment.
Taking into consideration that you want to experiment quickly with scalability across nodes, it is useful to create a full Kubernetes cluster using Kubespray or kubeadm.
If you want to run an actual local Kubernetes, use local VMs and then create the K8s cluster using one of these tools for automated deployment.
Kubeadm
The kubeadm tool is good if you are new to cloud technologies and are running a k8s cluster for the first time. You can install and use kubeadm on various machines: your laptop, a set of cloud servers, and more. Whether you're deploying into the cloud or on-premises, you can integrate kubeadm into provisioning systems such as Ansible or Terraform.
kubeadm encapsulates domain knowledge of Kubernetes cluster life cycle management, including self-hosted layouts, dynamic discovery services, and so on.
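A minimal sketch of bootstrapping with kubeadm on your own VMs (the pod CIDR is an example and must match the network add-on you choose):
# On the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Configure kubectl for your user (kubeadm init prints these steps too)
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# On each worker, run the join command printed by kubeadm init, e.g.
# sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>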
Kubespray
Kubespray supports deployments on AWS, Google Compute Engine, Microsoft Azure, OpenStack, and bare metal. It enables deployment of highly available Kubernetes clusters, supports a variety of Linux distributions and CI, and it supports kubeadm.
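A rough sketch of a Kubespray run (inventory path and node IPs are placeholders; file names such as hosts.yml vary between Kubespray versions, so follow the project README):
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt
# Build an inventory from your node IPs and run the cluster playbook
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.56.11 192.168.56.12 192.168.56.13)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml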
There isn't really a good way to do this. Minikube is for local development. If you want to run an actual local Kubernetes cluster, start local VMs directly and use something like kubeadm or Kubespray.

Why is my environment not suitable for the Istio ingress gateway?

I'm trying to deploy Istio in a Kubernetes cluster running in VirtualBox. I am using one master and two minions (all VirtualBox machines have a bridged adapter).
After installing Istio (version 1.2.5), the istio-ingressgateway external IP is stuck in the pending state. I know we can use a NodePort to work around this problem, but I want to know why my environment does not support a LoadBalancer external IP.
Kubernetes version - kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Docker version - Docker version 19.03.2, build 6a30dfc
OS Platform - CentOS - 7
A LoadBalancer type Kubernetes Service requests that Kubernetes create a load balancer outside the cluster that routes traffic to some specific service. The documentation begins with
On cloud providers which support external load balancers...
On AWS, for example, Kubernetes can use the AWS APIs to request an Amazon Elastic Load Balancer.
You're not in one of these environments. The nearest equivalent for you would be running an HAProxy instance on your host, outside any of the VMs, and Kubernetes simply isn't able to set that up for you.
You can use a NodePort-type Service to access your cluster, since you can make calls to the VMs directly. LoadBalancer Services are built on top of NodePort Services, so you can experiment without changing anything.
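A short sketch of working with the NodePort that the pending LoadBalancer Service already carries (the service name and namespace are the Istio 1.2 defaults):
# EXTERNAL-IP stays <pending> without a cloud load balancer
kubectl -n istio-system get svc istio-ingressgateway
# Either use the nodePort mapped to port 80/443 at <any-node-IP>:<nodePort>,
# or switch the Service type to NodePort explicitly
kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec": {"type": "NodePort"}}'
kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'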

Is using an NSG on the AKS advanced networking subnet supported, and which ports need to be open between nodes and master?

Which TCP/UDP ports need to be open between the nodes and the master of Azure Kubernetes Service when the nodes are in a subnet that uses advanced networking?
For security reasons we have to use a Network Security Group on every subnet that is connected to the on-premises network via VPN in Azure. This NSG has to deny all implicit traffic between machines, even within the same subnet, to hinder attackers from traversing between systems.
The same applies to Azure Kubernetes Service with advanced networking, which uses a subnet that is connected via VNet peering.
We couldn't find an answer as to whether having an NSG on the subnet of the AKS advanced network is a supported scenario, and which ports are needed to make it work.
We tried our default NSG, which denies traffic between hosts, but this prevents us from connecting to the services and keeps the nodes from coming up without errors.
AKS is a managed cluster. The managed cluster master means that you don't need to configure components like a highly available etcd store, but it also means that you can't access the cluster master directly.
When you create an AKS cluster, a cluster master is automatically created and configured, and the Azure platform configures the secure communication between the cluster master and the nodes. Interaction with the cluster master occurs through Kubernetes APIs, such as kubectl or the Kubernetes dashboard.
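For example (resource group and cluster name are placeholders):
# Fetch credentials for the managed control plane, then talk to it with kubectl
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes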
For more details, see Kubernetes core concepts for Azure Kubernetes Service (AKS). If you need to configure the cluster master and everything else yourself, you can deploy your own Kubernetes cluster using aks-engine.
For the security of your pods, you can use network policy to improve it, although it's only a preview feature at the moment.
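As an illustration only (the labels are made up, not from the question), a policy that lets only app=frontend pods reach app=backend pods could look like this:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF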
Also, it's not recommended to expose remote connectivity to the AKS cluster nodes if you want to connect to them. The suggestion is to create a bastion host, or jump box, in a management virtual network and use it to securely route traffic into your AKS cluster for remote management tasks. For more details, see Securely connect to nodes through a bastion host.
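On the NSG side, a hedged sketch only (names and priority are placeholders, and the authoritative port list is in the AKS documentation): at a minimum the nodes need outbound TCP 443 to the managed control plane, which a deny-all NSG will block. A rule along these lines reopens it:
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name aks-subnet-nsg \
  --name AllowAksControlPlaneOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes AzureCloud \
  --destination-port-ranges 443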
If you have more questions, please let me know. I'm glad to provide more help.

Kubernetes multi servers communication

I have a question regarding Kubernetes networking.
I know that in Docker Swarm, if I want to run different containers on different servers, I need to create an overlay network, and then all the containers (from all the servers) will be attached to this network and can communicate with each other (for example, I can ping from container A to container B).
I guess that in Kubernetes there isn't an overlay network but some other solution. For example, I would like to create 2 Linux containers on 2 servers (server 1: Ubuntu, server 2: CentOS 7), so how do the pods communicate with each other if there isn't an overlay network?
And another question: can I create a cluster that consists of Windows and Linux machines with Kubernetes? I mean, a multi-platform Kubernetes in which all the pods communicate with each other.
Thanks a lot!!
In Kubernetes, pods communicate with each other through Services. To access a pod from elsewhere within the cluster, it should be exposed using a ClusterIP Service. If you created the Service before creating the pods, each container will have environment variables for every Service that was available at the time, which you can use to ping or access the Services and, in turn, the pods.
For example:
Suppose you have two pods, U1 and C1, exposed by Services named U-SVC and C-SVC respectively.
So if you want to access C1 from U1, you will have the C-SVC Service's environment variables (C_SVC_SERVICE_HOST, C_SVC_SERVICE_PORT; dashes in the Service name become underscores) inside the container, which you can use for access.
Also, if a DNS server is set up for your cluster, you can access the Service by name without the environment variables.
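For example, assuming C1 sits behind a ClusterIP Service named c-svc on port 80 (names are illustrative):
# Service environment variables injected into U1 (dashes in the name become underscores)
kubectl exec u1 -- sh -c 'echo $C_SVC_SERVICE_HOST $C_SVC_SERVICE_PORT'
# With cluster DNS, the Service name resolves directly (assumes the image ships wget)
kubectl exec u1 -- wget -qO- http://c-svc.default.svc.cluster.local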

Will using flannel and calico in the same kubernetes cluster cause any problems?

I have installed Kubernetes on the DigitalOcean cloud. I installed both flannel and calico as CNIs. Will that cause any problems in my cluster?
Calico and Flannel use different default IP subnets and CNI driver binaries; they will not work together on the same cluster if you deploy them using the standard (not Canal) installations. However, a Kubernetes cluster is required to have one of the network add-ons installed, and you are not limited to the Flannel or Calico add-ons; there are more of them.
To remove Calico or Flannel from the cluster, it's usually enough to run kubectl delete -f <calico-or-flannel.yaml> and reboot all nodes to get rid of the interfaces created by Calico or Flannel. You may need to rejoin the worker nodes to the cluster after that.
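A rough sketch of the cleanup on each node (manifest and config file names vary by version, so check what is actually present under /etc/cni/net.d):
# Remove the add-on you do not want, e.g. Calico
kubectl delete -f calico.yaml
# On every node, check which CNI configs remain and delete the stale one
ls /etc/cni/net.d/
sudo rm -f /etc/cni/net.d/10-calico.conflist
sudo reboot   # clears the interfaces and routes the removed add-on created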
You can use them together, but make sure you configure things so that Calico isn't trying to control tunneling or routing. This joint configuration is sometimes called "Canal", and you can find the docs mostly on the Calico side at https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/flannel
