How to use a Kubernetes pod as a gateway to specific IPs?

I've got a database running in a private network (say IP 1.2.3.4).
In my own computer, I can do these steps in order to access the database:
1. Start a Docker container using something like docker run --privileged --sysctl net.ipv4.ip_forward=1 ...
2. Get the container's IP
3. Add a routing rule, such as ip route add 1.2.3.4/32 via $container_ip
And then I'm able to connect to the database as usual.
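Collected into a script, the steps above look roughly like this (the image name is a placeholder; this assumes the container is on the default bridge network):

```shell
# Start a privileged container with IP forwarding enabled
# ("my-gateway-image" is a placeholder for your gateway image)
container_id=$(docker run -d --privileged \
  --sysctl net.ipv4.ip_forward=1 my-gateway-image)

# Get the container's IP on its network
container_ip=$(docker inspect -f \
  '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$container_id")

# Route traffic for the database (1.2.3.4) through the container
sudo ip route add 1.2.3.4/32 via "$container_ip"
```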
I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs, in order to achieve the same result. We use GKE, by the way; I don't know if that helps in any way.
PS: I'm aware of the sidecar pattern, but I don't think it would be ideal for our use case: our jobs are short-lived tasks, and we are not able to run multiple "gateway" containers at the same time.

"I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way."
You can create a GKE cluster in a fully private network and run the applications that need to be fully private in that cluster. Access to such a cluster is only possible when explicitly granted, much like the commands in your question, except that you would now use the cloud platform's tooling (e.g. service controls, a bastion host, etc.), so there is no need to "route traffic through a specific pod in Kubernetes for certain IPs". But if you have to run everything in one cluster, a fully private cluster will likely not work for you; in that case you can use a network policy to control access to your database pod.
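For the network-policy route, a minimal sketch restricting ingress to the database pod might look like this (the namespace, labels, and port are all assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-access
  namespace: default          # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: database           # assumed label on the database pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: db-client # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 5432          # assumed database port
```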

GKE doesn't support the use case you mentioned, @gabriel Milan.
What's your requirement? Do you need to know which IP the pod will use to reach the database, so you can open a firewall rule for it?

Replying here as the comments have limited character count
Unfortunately GKE doesn't support that use case.
However, you have a couple of options:
Option #1: Create a dedicated node pool with a couple of nodes and force the pods to be scheduled on these nodes using taints and tolerations [1]. Use the IP addresses of these nodes in your firewall rules.
Option #2: Install a service mesh like Istio and use its egress gateway [2] to route traffic toward your on-prem system, forcing the gateways to be deployed on a specific set of nodes so you have a known IP address. This is quite complicated as a solution.
[1] https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
[2] https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/
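A sketch of Option #1 (node names, the taint key, and the node-pool label are all assumptions): taint the dedicated nodes, then add a matching toleration and node selector to the pods.

```shell
# Taint the dedicated egress nodes (node names are placeholders)
kubectl taint nodes egress-node-1 egress-node-2 dedicated=egress:NoSchedule
```

```yaml
# Pod spec fragment: tolerate the taint and pin to the dedicated pool
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "egress"
      effect: "NoSchedule"
  nodeSelector:
    cloud.google.com/gke-nodepool: egress-pool   # assumed node-pool name
```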

I would suggest using or creating a NAT gateway instead of using a container as the gateway.
Using a container or Istio is an option, but it has its own limitations: it is hard to implement and manage, and the gateway containers consume resources.
Ultimately you want a single egress IP for your K8s cluster, instead of requests going out with the IP of whichever node the pod is scheduled on.
Here is a Terraform example of a GKE NAT gateway which you can use:
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
The NAT gateway will forward all pod traffic through a single VM, whose IP you can also whitelist in the database.
After implementation, there will be a single egress point for your cluster.
The GitHub repo also offers click-to-deploy GCP magic ;)
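As an aside, if you only need the single egress IP, GCP's managed Cloud NAT achieves the same goal without running a gateway VM yourself; a hedged sketch (network, region, and names are placeholders):

```shell
# Create a Cloud Router on the cluster's network, then a Cloud NAT on it
gcloud compute routers create nat-router \
  --network=my-network --region=us-central1
gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```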

Related

How the same (micro)service running in multiple containers identify themselves

Not sure if this is a silly question. When the same app/service runs in multiple containers, how do they report themselves to zookeeper/etcd and identify themselves, so that load balancers know about the different instances and know whom to talk to, where to probe, and where to dispatch? Or would the service instances use some ID from the container to identify themselves?
Thanks in advance
To begin with, let me explain in a few sentences how it works:
The basic building block starts with the Pod, which is just a resource that can be created and destroyed on demand. Because a Pod can be moved or rescheduled to another Node, any internal IPs that this Pod is assigned can change over time.
If we were to connect directly to this Pod to access our application, it would stop working on the next re-deployment. To make a Pod reachable from external networks or clusters without relying on any internal IPs, we need another layer of abstraction. K8s offers that abstraction with what we call a Service.
This way you can, for example, put a website behind a single stable address, such as a load balancer.
Services provide network connectivity to Pods uniformly across the cluster. Service discovery is the process of figuring out how to connect to a service.
You can also find some information about Service in the official documentation:
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
Kubernetes supports two primary modes of finding a Service: environment variables and DNS. You can read more about this topic here and here.
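As a sketch, a Service selecting Pods by a label (all names and ports here are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-ns
spec:
  selector:
    app: my-app          # matches the Pods' labels
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the containers listen on
```

Inside the cluster, this Service is resolvable via DNS as my-service.my-ns.svc.cluster.local, and Pods started after the Service was created also get environment variables such as MY_SERVICE_SERVICE_HOST injected.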

How to expose Kubernetes Deployments as Services using the static external IPv4-Address of the Worker Nodes?

I've set up a Kubernetes cluster on a custom OpenStack platform to which I don't have any administrative access. It is only possible to create instances and assign firewall rules to them. Each new instance is automatically assigned a static external IPv4 address that can be reached globally. This means that I can't even create OpenStack routers to my internal network.
So far so good, I've set up a Kubernetes cluster using kubeadm, CoreDNS, and flannel as CNI. The cluster's hardware setup is as follows:
Kubernetes-Client and Server-Version: 1.14.3 linux/amd64
All Servers run on Fedora Cloud Base 28
1 Kubernetes Master
5 Worker Nodes
6 static external IPv4 addresses (one for each of the nodes)
After the setup, I deployed my required services using deployment-files. Everything works as it should.
My question now is: how can I make the services externally accessible, given that OpenStack provides me no LoadBalancer? What is the best approach for this?
I'm asking after an estimated four hours of Googling (maybe I'm just bad at it). I tried the suggested approaches from the documentation, but it remains totally unclear to me what the concept and the right approach for this task are.
For example I tried to assign external IPs to the Service by using for example
kubectl expose deployment $DEPLOYMENT_NAME \
--name=$SERVICE_NAME \
--port=$HOST_PORT \
--target-port=$TARGET_PORT \
--type=NodePort
or this
kubectl patch service $SERVICE_NAME -p '{"spec":{"externalIPs":["<worker_host_ip>"]}}'
Even if the external IP is now assigned, traffic is still not routed properly to my destination service, because, as I understand it, Kubernetes automatically assigns hosts and random ports to the Pods (which is the desired behaviour); but with that in mind, every redeployment could break the assigned IP-to-service mapping.
With your help, and a big "Thank you!" in advance, I hope to be able to bind the application ports of the containers to the static IPv4 address of one of the hosts, so that Kubernetes knows the deployed service is routed over this specific IP even if the Pods run on a different worker.
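One way to keep that mapping stable across redeployments is a declarative Service with a fixed nodePort (all names and port numbers below are assumptions), so the same port is open on every node's static external IP no matter where the Pods land:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app          # matches the Deployment's pod labels
  ports:
    - port: 80           # cluster-internal port
      targetPort: 8080   # container port
      nodePort: 30080    # fixed port on every node's external IP (30000-32767)
```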
After a while of researching, I stumbled upon the MetalLB implementation of a bare metal load-balancing solution for Kubernetes.
This helped me to solve the issue described above.
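For reference, MetalLB's configuration at the time was a ConfigMap with an address pool in layer-2 mode (the address range below is an assumption for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.0.2.10-192.0.2.15   # assumed pool of external IPs to hand out
```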
However, from my point of view, MetalLB should only be used as a last resort, since it is not production-ready and requires extensive configuration together with the NGINX Ingress Controller to properly map a cluster's IPv4 addresses.
Anyway, a big, big thank you to the gentlemen above who were willing to support me and give advice!

Kubernetes - Routable IP to individual Pods

I have a cluster of database nodes hosted in VMs or Bare Metal and I'd like to create additional database nodes (hosted in Kubernetes Pods) and have them join the existing cluster (ones hosted in VMs or bare metal).
In order to join the cluster, each database must be able to reach the others via a distinct IP and port. Within the Kubernetes network environment this is no issue, nor is it between the existing VM-hosted DBs. The sticking point is that I can't see a way for the VM-hosted DBs to individually route to each Pod-hosted DB. Is there a Kubernetes configuration that will let me make each pod/DB individually routable on specific ports while sharing the same NIC on the host running the cluster? Do I need to front each Pod with its own Service?
Here is the sort of configuration I'm trying to achieve with conceptual IP address spaces.
The approach I personally take for a similar case is to actually make it possible for nodes in the non-Kubernetes environment to talk to the pods themselves. Depending on your network configuration, this might be quite easy to achieve.
In my case I simply have 2 additional elements running on VMs that need to access my k8s internals :
- flannel: this ties my VMs into the same flannel network the k8s pods operate in
- kube-proxy: translates service IPs to pod IPs using iptables (for cases where I need to access a service by its service IP)
You could avoid setting this up on the VMs or their hosts if you can solve it at the gateway level (i.e. run flannel/kube-proxy on your network gateway and augment it with some SNAT rules).
Having a NodePort/LB Service per in-k8s DB might work if your DB sticks to the IPs you give it (and doesn't use them only for discovery bootstrapping, replacing them later with the actual IPs of the DBs - IIRC mongo usually does something like that).
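As a sketch of the per-DB Service idea, fronting a single in-cluster DB pod with its own NodePort Service gives the VM-hosted members a stable node-IP:port to dial (the StatefulSet pod name, ports, and DB type are all assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-0
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: db-0   # selects exactly one pod
  ports:
    - port: 27017        # assumed database port
      targetPort: 27017
      nodePort: 30017    # fixed external port for this cluster member
```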

Kubernetes - Automatically populating CloudDNS records from service endpoints

When running a Kubernetes cluster on Google Cloud Platform is it possible to somehow have the IP address from service endpoints automatically assigned to a Google CloudDNS record? If so can this be done declaratively within the service YAML definition?
Simply put, I don't trust the IP address of my type: LoadBalancer service to stay the same.
One option is to front your services with an ingress resource (load balancer) and attach it to a static IP that you have previously reserved.
I was unable to find this documented in either the Kubernetes or GKE documentation, but I did find it here:
https://github.com/kelseyhightower/ingress-with-static-ip
Keep in mind that the value you set for the kubernetes.io/ingress.global-static-ip-name annotation is the name of the reserved IP resource, and not the IP itself.
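A sketch of that annotation in use (the reserved-address name, Ingress name, and backend service are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # the NAME of the reserved global address resource, not the IP itself
    kubernetes.io/ingress.global-static-ip-name: my-reserved-ip
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80
```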
Before that was available, you needed to create a global IP yourself and attach it to a GCE load balancer that had a global forwarding rule targeting the nodes of your cluster.
I do not believe there is a way to make this work automatically, today, if you do not wish to front your services with a k8s Ingress or GCP load balancer. That said, the Ingress is pretty straightforward, so I would recommend you go that route, if you can.
There is also a Kubernetes Incubator project called "external-dns" that looks to be an add-on that supports this more generally, and entirely from within the cluster itself:
https://github.com/kubernetes-incubator/external-dns
I have not yet tried that approach, but I mention it here as something you may want to follow.
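With external-dns, a hostname annotation on the Service is typically all that is needed; the controller then keeps the DNS record in sync with the load balancer's IP. A hedged sketch (hostname and labels are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
```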
GKE uses Deployment Manager to spin up new clusters, as well as other resources like load balancers. At the moment, Deployment Manager does not allow integrating Cloud DNS functionality; however, there is a feature request to support that. If this feature is implemented in the future, it might allow further integration between Cloud DNS, Kubernetes, and GKE.

Creating a multi node Kubernetes Cluster for a stateless webapp

I'm trying to understand a good way to manage a Kubernetes cluster with several nodes and a master.
I host the cluster in my company's cloud, on plain Ubuntu boxes (so no Google Cloud or AWS).
Each pod contains the webapp (which is stateless), and I run any number of pods via replication controllers.
I see that with Services I can declare PublicIPs; however, this is confusing, because after adding the IP addresses of my minion nodes, each IP only exposes the pod running on that node and doesn't do any load balancing. Because of this, if a node doesn't have an active pod running (as created pods are randomly allocated among nodes), requests to it simply time out, and I end up with some IP addresses that don't respond. Am I understanding this wrong?
How can I do proper external load balancing for my web app? Should I do load balancing at the Pod level instead of using a Service?
If so, given that pods are considered mortal and may die and be born dynamically, how do I keep track of them?
The PublicIP thing has been changing lately and I don't know exactly where it landed. But services are the IP address and port that you reference in your applications. In other words, if I create a database, I create it as a pod (with or without a replication controller). I don't, however, connect to the pod from another application; I connect to a service which knows about the pod (via a label selector). This is important for a number of reasons.
If the database fails and is recreated on a different host, the application accessing it still references the (stationary) service ip address, and the kubernetes proxies take care of getting the request to the correct pod.
The service address is known by all Kubernetes nodes. Any node can proxy the request appropriately.
I think a variation of the theme applies to your problem. You might consider creating an external load balancer which forwards traffic to all of your nodes for the specific (web) service. You still need to take the node out of the balancer's targets if the node goes down, but, I think that any node will forward the traffic for any service whether or not that service is on that node.
All that said, I haven't had direct experience with external (public) ip addresses load balancing to the cluster, so there are probably better techniques. The main point I was trying to make is the node will proxy the request to the appropriate pod whether or not that node has a pod.
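Concretely, the "forward to all nodes" idea can be sketched as an external load balancer (nginx here; the node IPs and NodePort are assumptions) targeting the same NodePort on every node, relying on the proxy on each node to forward the request to a pod wherever it runs:

```nginx
# nginx.conf fragment on an external load-balancer machine
stream {
    upstream web_nodeport {
        server 10.0.0.11:30080;   # node 1 (assumed IP and NodePort)
        server 10.0.0.12:30080;   # node 2
        server 10.0.0.13:30080;   # node 3
    }
    server {
        listen 80;
        proxy_pass web_nodeport;  # any node proxies to a serving pod
    }
}
```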
-g
