When running a Kubernetes cluster on Google Cloud Platform, is it possible to somehow have the IP addresses of service endpoints automatically assigned to a Google Cloud DNS record? If so, can this be done declaratively within the service YAML definition?
Simply put, I don't trust that the IP address of my type: LoadBalancer service will stay the same.
One option is to front your services with an ingress resource (load balancer) and attach it to a static IP that you have previously reserved.
I was unable to find this documented in either the Kubernetes or GKE documentation, but I did find it here:
https://github.com/kelseyhightower/ingress-with-static-ip
Keep in mind that the value you set for the kubernetes.io/ingress.global-static-ip-name annotation is the name of the reserved IP resource, and not the IP itself.
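A minimal sketch of what that can look like with the current networking.k8s.io/v1 Ingress API (the address name web-static-ip and the backend Service web are hypothetical; the address would be reserved beforehand, e.g. with gcloud compute addresses create):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # The value is the NAME of the reserved global address resource, not the IP
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
spec:
  defaultBackend:
    service:
      name: web          # hypothetical backend Service (type NodePort on GKE)
      port:
        number: 80
```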
Before that was available, you needed to create a global IP yourself and attach it to a GCE load balancer that had a global forwarding rule targeting the nodes of your cluster.
I do not believe there is a way to make this work automatically, today, if you do not wish to front your services with a k8s Ingress or GCP load balancer. That said, the Ingress is pretty straightforward, so I would recommend you go that route, if you can.
There is also a Kubernetes Incubator project called "external-dns" that looks to be an add-on that supports this more generally, and entirely from within the cluster itself:
https://github.com/kubernetes-incubator/external-dns
I have not yet tried that approach, but I mention it here as something you may want to follow.
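For reference, the usual external-dns pattern (as I understand it from the project README, untested on my side) is to annotate the Service with the hostname you want managed; a sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # external-dns watches for this annotation and manages a matching record
    # in the configured provider (e.g. Google Cloud DNS)
    external-dns.alpha.kubernetes.io/hostname: web.example.com
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```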
GKE uses Deployment Manager to spin up new clusters, as well as other resources such as load balancers. At the moment, Deployment Manager does not allow integrating Cloud DNS functionality; nevertheless, there is a feature request to support that. If this feature is implemented in the future, it might allow further integration between Cloud DNS, Kubernetes, and GKE.
Related
I've got a database running in a private network (say IP 1.2.3.4).
In my own computer, I can do these steps in order to access the database:
Start a Docker container using something like docker run --privileged --sysctl net.ipv4.ip_forward=1 ...
Get the container IP
Add a routing rule, such as ip route add 1.2.3.4/32 via $container_ip
And then I'm able to connect to the database as usual.
I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.
PS: I'm aware of the sidecar pattern, but I don't think this would be ideal for our use case, as our jobs are short-lived tasks, and we are not able to run multiple "gateway" containers at the same time.
You can start a GKE cluster in a fully private network like this, and then run the applications that need to be fully private in that cluster. Access to such a cluster is only possible when explicitly granted, much like the commands you used in your question, except that now you rely on the cloud platform (e.g. service controls, a bastion host, etc.), so there is no need to "route traffic through a specific pod in Kubernetes for certain IPs". But if you have to run everything in one cluster, a fully private cluster will likely not work for you; in that case you can use a network policy to control access to your database pod.
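For the network-policy route, a minimal sketch, assuming the database pod carries the label app: database, only pods labeled role: db-client should reach it, and the cluster has network policy enforcement enabled (all of these names and the port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-clients
spec:
  podSelector:
    matchLabels:
      app: database            # the database pod this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: db-client  # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 5432           # hypothetical database port
```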
GKE doesn't support the use case you mentioned, @gabriel Milan.
What's your requirement? Do you need to know which IP the pod will use to reach the database so you can open a firewall for it?
Replying here as the comments have limited character count
Unfortunately GKE doesn't support that use case.
However, you have a couple of options:
Option #1: Create a dedicated node pool with a couple of nodes and force the pods to be scheduled on those nodes using taints and tolerations [1] (see the sketch after these references). Use the IP addresses of those nodes in your firewall rules.
Option #2: Install a service mesh like Istio and use its egress gateway [2] to route traffic toward your on-prem system, forcing the gateways to be deployed on a specific set of nodes so you have a known IP address. This is quite complicated as a solution.
[1] https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
[2] https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/
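For option #1, a rough sketch of the taint/toleration pairing (the taint key, node pool name, and image are hypothetical; cloud.google.com/gke-nodepool is the label GKE puts on each node with its node pool name):

```yaml
# Taint the dedicated nodes first, e.g.:
#   kubectl taint nodes <node-name> dedicated=egress:NoSchedule
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-client
  template:
    metadata:
      labels:
        app: db-client
    spec:
      tolerations:
        - key: dedicated           # tolerate the taint on the dedicated nodes
          value: egress
          effect: NoSchedule
      nodeSelector:
        cloud.google.com/gke-nodepool: egress-pool   # hypothetical node pool name
      containers:
        - name: worker
          image: example/worker:latest               # hypothetical image
```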
I would suggest using or creating a NAT gateway instead of using a container as the gateway option.
Using a container or Istio is a good idea, however it has its own limitations: it is hard to implement and manage, and the gateway containers consume resources of their own.
Ultimately you want a single egress IP for your K8s cluster, so outgoing requests use that address instead of the IP of whichever node the pod is scheduled on.
Here is a Terraform example of a GKE NAT gateway which you can use:
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
The NAT gateway will forward all pod traffic through a single VM, and you can also whitelist that IP in the database.
After implementation, there will be a single egress point for your cluster.
GitHub repo link - click to deploy is available. GCP magic ;)
Not sure if this is a silly question. When the same app/service is running in multiple containers, how do they report themselves to ZooKeeper/etcd and identify themselves, so that load balancers know the different instances and know whom to talk to, where to probe and dispatch, etc.? Or do the service instances use some ID from the container in their identification?
Thanks in advance
To begin with, let me explain in a few sentences how it works:
The basic building block starts with the Pod, which is just a resource that can be created and destroyed on demand. Because a Pod can be moved or rescheduled to another Node, any internal IPs that this Pod is assigned can change over time.
If we were to connect to this Pod to access our application, it would not work after the next re-deployment. To make a Pod reachable from external networks or from elsewhere in the cluster without relying on any internal IPs, we need another layer of abstraction. K8s offers that abstraction with what we call a Service.
This way you can, for example, run a website that is reached through a load balancer rather than through individual Pod IPs.
Services provide network connectivity to Pods in a way that works uniformly across the cluster. Service discovery is the actual process of figuring out how to connect to a service.
You can also find some information about Service in the official documentation:
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. You can read more about this topic here and here.
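To make that concrete, a hedged sketch of a Service and the two discovery modes (the name my-app is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # every Pod carrying this label becomes an endpoint
  ports:
    - port: 80
      targetPort: 8080
# DNS: other Pods can reach it as my-app (same namespace) or
#      my-app.<namespace>.svc.cluster.local
# Env: Pods started after the Service exists see MY_APP_SERVICE_HOST and
#      MY_APP_SERVICE_PORT injected by the kubelet
```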
I want to run a web page similar to the Kubernetes Dashboard. The web page takes input from the user and generates a small file, but I want the web page to be loaded without using any server. Kubernetes deploys a pod and brings up the web page, and I want to do the same. If Kubernetes is also using a server, how is it using it (is it downloading it directly with the OS in the pod, or how is Kubernetes doing it)?
Overview: I want to know how the Kubernetes Dashboard gets deployed. Is it using a server? If so, how does it get the server installed in the Kubernetes pod; otherwise, how does it bring up the UI?
Actually, Kubernetes plays the role of an orchestrator: it provides the means for building communication channels between containers in the cluster, and by default it uses Docker as the container runtime.
Containers represent the run-time environment for images, while images consist of an OS layer and application binaries; you can find a good explanation here. To build your own image you might consider two approaches: create an image from an existing one on Docker Hub, or compose an image from a Dockerfile. To store the customized image, you can push it to a Docker Hub repository or stand up a private, isolated repository by deploying a Registry server.
When your image is ready and you plan to run the application in a Kubernetes cluster, that's a good time to create your first microservice. Although there are tons of materials around the globe about Kubernetes clusters and their run-time engine architecture, I will focus on the application deployment lifecycle.
Deployment is the main mechanism that defines how Pods should be created within a cluster, and it provides the configuration for the application's further run-time workflow.
Service describes how a particular set of Pods will communicate with other resources within a cluster, providing the endpoint IP address and port where your application will respond.
In the common Kubernetes Dashboard scenario, the method in use, kubectl proxy, exposes the application by proxying a gateway between your host and the Kubernetes API; this is more for testing purposes and is not secure, compared with the NodePort type, which is a more convenient way to make an application accessible outside the cluster, as described in this Stack thread.
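Not the real Dashboard manifest, just a hedged sketch of what a NodePort exposure looks like (names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dashboard-ui
spec:
  type: NodePort
  selector:
    app: dashboard-ui
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443   # reachable on every node's IP; must be in the NodePort range (default 30000-32767)
```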
I encourage you to find more learning material in the official Kubernetes documentation.
I know that GKE is driven by Kubernetes underneath, but I still don't get which part is taken care of by GKE and which by k8s in the layering. The main purpose of both, as it appears to me, is to manage containers in a cluster. Basically, I am looking for a simpler explanation with an example.
GKE is a managed/hosted Kubernetes (i.e. it is managed for you so you can concentrate on running your pod/container applications)
Kubernetes does handle:
Running pods, scheduling them on nodes, guaranteeing the number of replicas per the Replication Controller settings (i.e. relaunching pods if they fail, relocating them if the node fails)
Services: proxy traffic to the right pod wherever it is located.
Jobs
In addition, there are several 'add-ons' to Kubernetes, some of which are part of what makes GKE:
DNS (you can't really live without it, even though it's an add-on)
Metrics monitoring: with influxdb, grafana
Dashboard
None of these come out of the box; they are fairly easy to set up, but you need to maintain them.
There is no real 'logging' add-on, but there are various projects to do this (using Logspout, logstash, elasticsearch etc...)
In short Kubernetes does the orchestration, the rest are services that would run on top of Kubernetes.
GKE brings you all these components out-of-the-box, and you don't have to maintain them. They're setup for you, and they're more 'integrated' with the Google portal.
One important thing that everyone needs is the LoadBalancer part:
Since Pods are ephemeral containers that can be rescheduled anywhere and at any time, they are not static, so ingress traffic needs to be managed separately.
This can be done within Kubernetes by using a DaemonSet to pin a Pod to a specific node, and using a hostPort for that Pod to bind to the node's IP.
Obviously this lacks fault tolerance, so you could use multiple nodes and do DNS round-robin load balancing.
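A rough sketch of that DaemonSet/hostPort approach (names and image are hypothetical; you could also add a nodeSelector to pin it to particular nodes):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      containers:
        - name: proxy
          image: example/edge-proxy:latest   # hypothetical image
          ports:
            - containerPort: 80
              hostPort: 80                   # binds to the node's own IP on port 80
```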
GKE takes care of all this too with external Load Balancing.
(On AWS, it's similar, with ALB taking care of load balancing in Kubernetes)
GKE (Google Container Engine) is simply a container platform that Kubernetes can manage. It is not a Kubernetes-like system with "differences".
As mentioned in "Docker and Kubernetes and AppC " (May 2015, that can change):
Docker is currently the only supported runtime in GKE (Google Container Engine) our commercial containers product, and in GAE (Google App Engine), our Platform-as-a-Service product.
You can see Kubernetes used on GKE in this example: "Spinning Up Your First Kubernetes Cluster on GKE" from Rimantas Mocevicius.
The gcloud API will still issue Kubernetes commands behind the scenes.
GKE organizes its platform through the Kubernetes master:
Every container cluster has a single master endpoint, which is managed by Container Engine.
The master provides a unified view into the cluster and, through its publicly-accessible endpoint, is the doorway for interacting with the cluster.
The managed master also runs the Kubernetes API server, which services REST requests, schedules pod creation and deletion on worker nodes, and synchronizes pod information (such as open ports and location) with service information.
In short, without getting into technical details,
GKE is managed Kubernetes, similar to how Google's Cloud Composer is managed Apache Airflow and Cloud Dataflow is managed Apache Beam.
So, some of Google Cloud Platform's services (GKE, Cloud Composer, Cloud Dataflow) are managed implementations of various open source technologies (Kubernetes, Airflow, Beam).
I'm trying to understand a good way to handle Kubernetes cluster where there are several nodes and a master.
I host the cluster within the cloud of my company, plain Ubuntu boxes (so no Google Cloud or AWS).
Each pod contains the webapp (which is stateless) and I run any number of pods via replication controllers.
I see that with Services I can declare PublicIPs; however, this is confusing because after adding the IP addresses of my minion nodes, each IP only exposes the pod that runs on that node and doesn't do any sort of load balancing. Due to this, if a node doesn't have any active pod running (as created pods are randomly allocated among nodes), it simply times out and I end up with some IP addresses that don't respond. Am I understanding this wrong?
How can I truly do proper external load balancing for my web app? Should I do load balancing at the Pod level instead of using a Service?
If so, given that pods are considered mortal and may dynamically die and be born, how do I keep track of this?
The PublicIP mechanism has been changing lately and I don't know exactly where it landed. But services are the IP address and port that you reference in your applications. In other words, if I create a database, I create it as a pod (with or without a replication controller). However, I don't connect to the pod from another application; I connect to a service which knows about the pod (via a label selector). This is important for a number of reasons.
If the database fails and is recreated on a different host, the application accessing it still references the (stationary) service IP address, and the Kubernetes proxies take care of getting the request to the correct pod.
The service address is known by all Kubernetes nodes. Any node can proxy the request appropriately.
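To make the database example concrete, a hedged sketch (labels and port are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database      # matches the database pod via its label, wherever it lands
  ports:
    - port: 5432
      targetPort: 5432
# Applications connect to database:5432; if the pod is recreated on another
# host, the service address stays the same and the proxies route to the new pod.
```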
I think a variation of the theme applies to your problem. You might consider creating an external load balancer which forwards traffic to all of your nodes for the specific (web) service. You still need to take a node out of the balancer's targets if it goes down, but I think that any node will forward the traffic for any service whether or not that service is on that node.
All that said, I haven't had direct experience with external (public) ip addresses load balancing to the cluster, so there are probably better techniques. The main point I was trying to make is the node will proxy the request to the appropriate pod whether or not that node has a pod.
-g