On Premise - Kubernetes External Endpoints for Services - Docker

We are evaluating integrating Kubernetes into our on-premise environment. We have SaaS-based services that can be exposed publicly.
We are unsure how to set up external endpoints for these services. Is there a way to create external endpoints for the services?
We have tried setting the externalIPs field on the services to the master node's IP address. We are not sure this is the correct way, but once we set the external IP to the master node's address we are able to access the services.
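Roughly, what we tried looks like the following (the service name, ports and IP are placeholders, not our real values):

    # Sketch of our current setup: a Service with externalIPs set to the master node's IP.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-saas-service
    spec:
      selector:
        app: my-saas-app
      ports:
        - port: 80          # port exposed on the external IP and the cluster IP
          targetPort: 8080  # container port of the backing pods
      externalIPs:
        - 10.0.0.10         # master node IP (placeholder)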
We have also tried ingress controllers; there, too, we can reach our services via the IP address of the node on which the ingress controller is running.
For example:
Public IP : XXX.XX.XX.XX
Ideally, we would map the public IP to the load balancer's virtual IP, but we cannot find such a setting in Kubernetes.
Is there any way to address this issue?

My suggestion is to use an Ingress Controller that acts as a proxy for all your services in Kubernetes.
Of course, the ingress controller itself has to be exposed to the outside world somehow. My suggestion is to use the hostNetwork setting on the ingress controller pod; that way the pod listens directly on your host's physical interface, like any other "traditional" service.
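A minimal sketch of the relevant settings, assuming a generic ingress controller Deployment (the image, names and namespace below are placeholders; take your controller's official manifest and just add the host-networking bits):

    # Sketch only: the host-networking parts of an ingress controller Deployment.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ingress-controller
      namespace: ingress          # placeholder namespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ingress-controller
      template:
        metadata:
          labels:
            app: ingress-controller
        spec:
          hostNetwork: true                    # listen directly on the node's interfaces (ports 80/443)
          dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working together with hostNetwork
          containers:
            - name: controller
              image: example/ingress-controller:latest  # placeholder image
              ports:
                - containerPort: 80
                - containerPort: 443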
A few resources:
Here are details on how a pod can be reached from outside your k8s cluster.
Here is a nice tutorial on how to set up an ingress controller on k8s.
If you have more than one worker node (minion) in your cluster, you'll end up having to think about load balancing across them. This question can be helpful there.

Related

Forward all service ports to a single container

I would like to run a container in Kubernetes with a static IP. I found out that only a Service can provide an IP address.
Is it possible to map a Service to one pod and forward all ports?
A Service discovers Pods based on labels and selectors, so it is not necessary to reference a Pod from a Service by a static IP address. However, if you wish, you can override the automatic allocation and manually configure your own clusterIP for the Service.
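As a minimal sketch, assuming the pod carries the label app: my-app and that 10.96.0.50 is a free address inside your cluster's service CIDR (both made up here):

    # Sketch: a Service that selects a single pod by label and pins its ClusterIP.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      clusterIP: 10.96.0.50   # manually chosen; must be free and inside the service CIDR
      selector:
        app: my-app           # matches the label on the single pod
      ports:
        - port: 80
          targetPort: 8080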
Once the Pod and Service have been created, other pods in your cluster will be able to reach the pod via the name of the Service, provided they are in the same namespace. If they are not, you will need to use the FQDN of the Service (for example my-service.my-namespace.svc.cluster.local with the default cluster domain).
If you are trying to access the pod from outside of Kubernetes, then you will need to use a Service with a different type than ClusterIP. For example, a NodePort or a LoadBalancer. Alternatively, if you have an Ingress Controller with a gateway already provisioned you could use that.
With regard to your desire to forward all ports: this is not possible, because port declarations in a Service manifest must be mapped statically. It is not currently possible to specify a port range, although there is a long-standing feature request for it.
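To illustrate, every port has to be declared individually in the Service spec; something like the following (names and ports are made up) is as close as you can get to forwarding "all" ports:

    # Sketch: ports must be enumerated one by one; there is no wildcard or range syntax.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app
      ports:
        - name: http        # each entry needs a name when more than one port is declared
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443
        - name: metrics
          port: 9090
          targetPort: 9090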

Is using an NSG on an AKS advanced-networking subnet supported, and which ports need to be open between the nodes and the master?

Which TCP/UDP ports need to be open between the nodes and the master of Azure Kubernetes Service when the nodes are in a subnet that uses advanced networking?
For security reasons we have to use a Network Security Group on every subnet that is connected to the on-premises network via VPN in Azure. This NSG has to deny all implicit traffic between machines, even within the same subnet, to prevent attackers from traversing between systems.
The same applies to Azure Kubernetes Service with advanced networking, which uses a subnet connected via VNet peering.
We couldn't find an answer as to whether an NSG on the AKS advanced-networking subnet is a supported scenario, and which ports are needed to make it work.
We tried our default NSG, which denies traffic between hosts, but it prevents us from connecting to the services and keeps the nodes from coming up without errors.
AKS is a managed cluster. The managed cluster master means that you don't need to configure components like a highly available etcd store, but it also means that you can't access the cluster master directly.
When you create an AKS cluster, a cluster master is automatically created and configured, and the Azure platform sets up secure communication between the cluster master and the nodes. Interaction with the cluster master occurs through the Kubernetes API, for example via kubectl or the Kubernetes dashboard.
For more details, see Kubernetes core concepts for Azure Kubernetes Service (AKS). If you need to configure the cluster master and other things all by yourself, you can deploy your own Kubernetes cluster using aks-engine.
For the security of your pods, you can use network policy to improve it, although it is currently only available in preview.
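For illustration, network policy is expressed with the standard Kubernetes NetworkPolicy object; a minimal sketch (labels and port are placeholders) that only allows ingress to backend pods from frontend pods could look like this:

    # Sketch: allow ingress to pods labeled app=backend only from pods labeled app=frontend.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
    spec:
      podSelector:
        matchLabels:
          app: backend        # the policy applies to these pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 8080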
Also, if you need to connect to the AKS nodes, it is not recommended to expose remote connectivity on them directly. The suggestion is to create a bastion host, or jump box, in a management virtual network and use it to securely route traffic into your AKS cluster for remote management tasks. For more details, see Securely connect to nodes through a bastion host.
If you have more questions, please let me know. I'm glad to provide more help.

How does a Kubernetes pod get the IP instead of the container, when the CNI plugin works at the container level?

How does a Kubernetes pod get the IP instead of the container, given that the CNI plugin works at the container level?
How do all containers of the same pod share the same network stack?
Containers use a kernel feature called a virtual network interface. The virtual network interface (let's name it veth0) is created and then assigned to a network namespace. When a container is created, it is also assigned to a namespace; when multiple containers are created within the same namespace, only a single network interface, veth0, is created.
A pod is just the term used for a set of resources and features; among them are the namespace and the containers running in it.
When you say the pod gets an IP, what actually gets the IP is the veth0 interface. Containerized apps see veth0 the same way applications outside a container see the single physical network card of a server.
CNI is just the technical specification of how this should work, so that multiple network plugins can be used without changes to the platform. The process above is the same for all network plugins.
There is a nice explanation in this blog post.
It's kube-proxy that makes everything work: one pod has one proxy, which translates all the ports over one IP for the remaining containers. Only in specific cases do you want to have multiple containers in the same pod; it's not preferred, but it's possible. This is why they are called "tightly" coupled. Please refer to: https://kubernetes.io/docs/concepts/cluster-administration/proxies/
Firstly, let's dig deeper into the CNI aspect. In production systems, workload/pod network isolation is a first-class security requirement (a workload can be thought of as one or many containerized applications used to fulfill a certain function). Moreover, depending on how the infrastructure is set up, the routing plane might also need to be an attribute of either the workload (kubectl proxy), the host-level proxy (kube-proxy), or the central routing plane (apiserver proxy) for which the host-level proxy exposes a gateway.
For both service discovery and actually sending requests from a workload/pod, you don't want individual application developers to talk to the apiserver proxy, since it may incur overhead. Instead, you want them to communicate with other applications via either the kubectl proxy or kube-proxy, with those layers being responsible for knowing when and how to communicate with the apiserver plane.
Therefore, when spinning up a new workload, the kubelet can be passed --network-plugin=cni and a path to a configuration telling kubelet how to set up the virtual network interface for this workload/pod.
For example, if you don't want the application containers in a pod to talk to the host-level kube-proxy directly, because you want to do some infrastructure-specific monitoring, your CNI and workload configuration would be:
monitoring at the outermost container
the outermost container creates a virtual network interface for every other container in the pod
the outermost container sits on a bridge interface (also a private virtual network interface) that can talk to kube-proxy on the host
The IP that the pod gets exists to allow other workloads to send bytes to this pod via its bridge interface, since fundamentally other workloads should be talking to the pod, not to individual work units inside the pod.
There is a special container, called the 'pause' container, that holds the network namespace for the pod. It does not do anything; its process simply sleeps.
Kubernetes creates one pause container for each pod in order to acquire the pod's IP address and set up the network namespace for all the other containers that belong to that pod. All containers in a pod can reach each other using localhost.
This means that your 'application' container can die, and come back to life, and all of the network setup will still be intact.
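As a small sketch of that sharing, here is a two-container pod (images and the command are only illustrative) in which a sidecar reaches the web container over localhost:

    # Sketch: both containers share the pod's network namespace (held by the pause
    # container), so the sidecar can reach the web server on localhost:80.
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-netns-demo
    spec:
      containers:
        - name: web
          image: nginx            # serves on port 80 inside the pod
        - name: sidecar
          image: curlimages/curl  # placeholder client image
          command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 10; done"]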

How to access containers from an external IP?

I have created a Kubernetes cluster with one master node and one worker node and deployed containers. How can I access the containers through an external IP?
I have tried assigning an IP address to the containers using type=LoadBalancer in the Docker Compose file.
I would suggest you go through the tutorials for Kubernetes.
In general: (1) for production you would need 3 master nodes; (2) set up an ingress controller (an HTTP load balancer) exposed as a type=LoadBalancer Service and then configure an Ingress with a domain for routing, instead of using an IP to access those containers directly (a rough sketch follows the links below).
https://kubernetes.io/docs/tutorials/
https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
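As a rough sketch of point (2), assuming an ingress controller is already installed and exposed through a type=LoadBalancer Service (the host and service names are placeholders):

    # Sketch: route traffic for a domain to an in-cluster Service via the ingress controller.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: webapp
    spec:
      rules:
        - host: webapp.example.com   # DNS record should point at the controller's LoadBalancer IP
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: webapp     # existing ClusterIP Service for the app
                    port:
                      number: 80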

What is the difference between flannel (network layer) and Ingress in Kubernetes?

I am setting up 2 VPCs on GCP and installed kubeadm on each; let's call them kubemaster and kubenode1. So I ran kubeadm on kubemaster and kubenode1:
kubeadm init on kubemaster
kubeadm join on kubenode1
Then I ran kubectl apply -f for a Deployment (which contains a pod with a simple web app inside) and kubectl apply -f for a NodePort-type Service that targets the Deployment's port.
After that I simply tried to access the web app from my browser (on my local machine, not on GCP), but it does not work, unlike what I got on minikube (which I set up with the same kubectl apply commands as above). I did some searching, and a lot of people mention Ingress and the network layer (flannel in the Kubernetes website example).
My question is: what are Ingress and flannel? Which one is necessary, or is neither necessary at all, if I just want my web app to run? How do they relate to each other? From my understanding the layering is as below:
Traffic -> Services -> Deployments/Pods
Where do Ingress and flannel fit in? If the problem is not with either of them, why do my apps not work as intended? (I opened all ports in the GCP settings, so I suppose it is not a security issue.) I also tried setting up the Kubernetes Dashboard UI and running kubectl proxy, and my browser still cannot access either service (my web app inside the Deployment, and the Dashboard API). Maybe I am a little bit lost here.
Flannel and Ingress are completely different things.
flannel is a CNI (Container Network Interface) plugin whose task is networking between containers. As CoreOS puts it:
Each container is assigned an IP address that can be used to communicate with other containers on the same host. For communicating over a network, containers are tied to the IP addresses of the host machines and must rely on port-mapping to reach the desired container. This makes it difficult for applications running inside containers to advertise their external IP and port as that information is not available to them.
flannel solves the problem by giving each container an IP that can be used for container-to-container communication. It uses packet encapsulation to create a virtual overlay network that spans the whole cluster. More specifically, flannel gives each host an IP subnet (/24 by default) from which the Docker daemon is able to allocate IPs to the individual containers.
Kubernetes supports several other CNI plugins as well: Calico, Weave, etc. They vary in functionality (e.g. support for features like NetworkPolicy for restricting access to resources).
An Ingress is a Kubernetes object that usually operates at the application layer of the network stack (HTTP) and allows you to expose your Service externally. It also provides features such as HTTP request routing, cookie-based session affinity, HTTPS traffic termination and so on (much like a web server such as Nginx or Apache).
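A small sketch of such an Ingress, with HTTPS termination and path-based routing (the host, Secret and Service names are placeholders):

    # Sketch: TLS termination plus path-based routing in a single Ingress.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example
    spec:
      tls:
        - hosts:
            - shop.example.com
          secretName: shop-example-tls   # Secret holding the TLS certificate and key
      rules:
        - host: shop.example.com
          http:
            paths:
              - path: /api
                pathType: Prefix
                backend:
                  service:
                    name: api            # placeholder backend Service
                    port:
                      number: 8080
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: frontend       # placeholder backend Service
                    port:
                      number: 80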
I want to add a few more points alongside the existing answers.
After that I simply tried to access the web app from my browser (on my local machine, not on GCP), but it does not work, unlike what I got on minikube
Did you open the security/firewall rules for the NodePort? On which instance did you open them, and which instance are you hitting to access your app?
My question is: what are Ingress and flannel?
I recommend you read the official docs. But anyway, since you asked, I would like to say a few words.
Flannel is an overlay network for containers in which the container subnet can span multiple nodes (as opposed to native Docker networking: host networking, NAT, etc.). Each container gets its own IP every time it spawns. Flannel is more like a control plane for the container network, and it is internal to K8s.
I highly recommend reading How Flannel Networking Works.
Ingress is a smart router in front of the load balancer (or, to keep it simple for now, we can say it exposes the application outside of K8s). It works at the application level. Once you hit the Ingress endpoint, it forwards traffic to a Service (depending on the Ingress rules) and then to the app pod.
A blog on Ingress - https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e
I see you were talking about ClusterIP. Generally, the ClusterIP is the IP of a K8s Service, which is nothing but iptables magic: kube-proxy is responsible for writing iptables rules on every node once you define a Service. These iptables rules (the ClusterIP) point to the actual pod IPs (the IPs assigned by the flannel daemon). I hope this makes it clearer how flannel and Ingress fit into the picture, work together, and handle application traffic. (Please correct me if I'm wrong!)
Can you paste your ingress controller YAML? What rules did you define?
Since you are using GCP, why don't you try GKE? It is easy to deploy, and you can access your application through a LoadBalancer instead of depending on Ingress. (Anyway, it's none of my business :-) )
In short, flannel, or the pod-to-pod networking layer in general, is what enables pods to talk to each other in Kubernetes. An Ingress Controller, on the other hand, is what takes Ingress objects and turns them into rules for receiving and forwarding (mostly) HTTP(S) traffic to the backing services over the pod-to-pod network.
As you can see, technically you only need the first one (pod-to-pod networking), since you can expose your service directly with a NodePort or LoadBalancer Service. It is very convenient, though, to use Ingress if you expose multiple services (pretty much like you do with vhosts on classic web server installations).
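For completeness, a minimal NodePort sketch (names and the node port are placeholders) that exposes a single service without any Ingress:

    # Sketch: expose a Service on a fixed port of every node, reachable as <node-ip>:30080.
    apiVersion: v1
    kind: Service
    metadata:
      name: webapp
    spec:
      type: NodePort
      selector:
        app: webapp
      ports:
        - port: 80          # cluster-internal port
          targetPort: 8080  # container port
          nodePort: 30080   # must fall within the node-port range (30000-32767 by default)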
