K8s load balancing between different deployments/ReplicaSets - docker

We have a system with two endpoints based on geo-location, e.g. east_url and west_url.
One of our applications needs to load balance between those two URLs. In the consumer application, I created two Deployments with the same image but different environment variables, such as url=east_url and url=west_url.
After the deployment, I have the following running pods; each of them has the label app=consumer-app plus either region=east or region=west:
consumer-east-pod-1
consumer-east-pod-2
consumer-west-pod-1
consumer-west-pod-2
When I create a ClusterIP Service with selector app=consumer-app, somehow it only picks up one ReplicaSet. I am just curious whether it is actually possible in Kubernetes for one Service to be backed by different Deployments?
Another way of doing this that I can think of is to create two Services and have the ingress controller load balance between them; is this possible? We are using Kong as the ingress controller. I am looking for something like OpenShift's "alternateBackends" to serve the Route: https://docs.openshift.com/container-platform/4.1/applications/deployments/route-based-deployment-strategies.html

I was missing a label on the east ReplicaSet; after I added app: consumerAPP, it works fine now.
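For anyone hitting the same issue, a minimal sketch of the working setup (names and ports are illustrative): the Service selects only on the label both Deployments share, so it picks up the east and west pods alike.

apiVersion: v1
kind: Service
metadata:
  name: consumer-app
spec:
  type: ClusterIP
  selector:
    app: consumer-app      # shared by both ReplicaSets; omit region so both match
  ports:
  - port: 80
    targetPort: 8080       # illustrative container port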
Thanks

TL;DR: use Istio.
With Istio you can create VirtualServices:
A VirtualService defines a set of traffic routing rules to apply when
a host is addressed. Each routing rule defines matching criteria for
traffic of a specific protocol. If the traffic is matched, then it is
sent to a named destination service (or subset/version of it) defined
in the registry.
The VirtualService will let you send traffic to different backends based on the URI.
Now, if you plan to perform something like an A/B test, you can use Istio's DestinationRule: https://istio.io/docs/reference/config/networking/destination-rule/
DestinationRule defines policies that apply to traffic intended for a
service after routing has occurred.
Version specific policies can be specified by defining a named subset
and overriding the settings specified at the service level
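For illustration, a minimal sketch of a VirtualService plus DestinationRule splitting traffic across the two regions from the question; the host name, subset names, and the 50/50 weights are assumptions, not from the question:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consumer-app
spec:
  hosts:
  - consumer-app                # illustrative in-mesh host
  http:
  - route:
    - destination:
        host: consumer-app
        subset: east
      weight: 50                # assumed even split
    - destination:
        host: consumer-app
        subset: west
      weight: 50
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: consumer-app
spec:
  host: consumer-app
  subsets:
  - name: east
    labels:
      region: east              # matches the pods' region labels
  - name: west
    labels:
      region: west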
1. If you are using GKE, the process to install Istio can be found here.
2. If you are running Kubernetes on a virtual machine, the installation process can be found here.

Related

Can I use one Google Cloud load balancer for a static backend bucket AND a container deployed by Cloud Run? And with different ports?

I have a project that has routes and paths defined in a load balancer. Now I want to add a Google Cloud Run container to that project. Do I need to make another load balancer, or can I add the paths to the current load balancer?
How would I add a path for the Cloud Run container (in either scenario)?
Something like mydomain.com/"new-path/container"/"path-of-new-path",
i.e. in the load balancer paths:
/newPath/newPath/*
or /newPath
and then the container (Express.js in my case) dictates the paths under newPath?
I'm confused. Also, to add to the confusion: can I have two ports?
Like:
mydomain.com:443/newPath:8080
There are many possible configurations. You can route the traffic on:
The domain
The path prefix
A combination of both.
With a serverless NEG, you additionally have a URL mask that you can use to route the traffic to different serverless services.
If your service doesn't support the load balancer path, you can also use a rewrite rule to remove that additional path level and clean up the API call to your service.
Finally, about the port: it's independent. The frontend port doesn't influence the path resolution for request routing, but the domain does.
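To make this concrete, a hedged sketch of a URL map (in the form accepted by gcloud compute url-maps import) that sends /newPath and /newPath/* to a backend service fronting a Cloud Run serverless NEG, and everything else to a static backend bucket; all names are illustrative:

name: my-url-map
defaultService: projects/my-project/global/backendBuckets/static-bucket
hostRules:
- hosts:
  - mydomain.com
  pathMatcher: main
pathMatchers:
- name: main
  defaultService: projects/my-project/global/backendBuckets/static-bucket
  pathRules:
  - paths:
    - /newPath
    - /newPath/*
    service: projects/my-project/global/backendServices/cloud-run-backend   # backend service attached to the serverless NEG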

How to add a reverse proxy for authentication & load balancing in Kubernetes (GKE)?

Okay, I have a DB consisting of several nodes deployed to GKE.
The deployment.yaml exposes each node as a ClusterIP, which makes sense. Here is the complete deployment file:
https://github.com/dgraph-io/dgraph/blob/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
For whatever reason, the DB has zero security functionality, so I cannot expose any part of it using a LoadBalancer Service, because doing so would give unsecured access to the entire DB. The vendor argues that security is solely the user's problem. The Alpha node comes with an API endpoint, which is also unsecured, but I actually want to connect to that API endpoint from an external IP.
So, the best I can do is add NGINX as a (reverse) proxy with authentication to secure access to the API endpoint of the Alpha node(s). Practically, I have three Alpha nodes, so adding load balancing makes sense. I found a config that load balances to three Alpha nodes in Docker Compose, although without authentication:
https://gist.github.com/MichelDiz/42954e321620159c872c35c20e9d85c6
Now, the million-dollar question I have is: how do I add an NGINX load balancer to Kubernetes that authenticates and load balances incoming traffic to my (ClusterIP) Alpha nodes?
Any pointers?
Any help?
If you want to do it the hard way, you can deploy your own nginx Deployment and expose it as a LoadBalancer Service. You can configure it with the different authentication mechanisms that nginx supports.
Instead, you can use an Ingress resource backed by an IngressController that supports authentication. Check whether your Kubernetes distribution provides an IngressController and whether it supports auth. If not, you can install the nginx or Traefik IngressControllers, which support authentication.
It looks like GKE Ingress has recently added support for IAP-based authentication, which is still in beta: https://cloud.google.com/iap/docs/enabling-kubernetes-howto
If you are looking for a more traditional type of authentication with Ingress, install nginx or Traefik and use the kubernetes.io/ingress.class annotation so that only that IngressController claims your Ingress resource: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
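As a sketch of the Ingress route, here is a hedged example using ingress-nginx's basic-auth annotations in front of the existing ClusterIP Service; the host, Service name and port are illustrative, and the basic-auth Secret must contain an htpasswd-style auth key:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dgraph-alpha
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth    # Secret holding the htpasswd file
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: dgraph.example.com               # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dgraph-alpha-service     # illustrative ClusterIP Service for the Alpha nodes
            port:
              number: 8080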

Neo4j cluster: Expose Neo4j cluster to external world

I've installed Neo4j Enterprise from the Google Cloud Marketplace and it is accessible from within the Kubernetes network, but I want to access it from my external application, which is not on the same network.
Following this guide from Neo4j, I'm able to connect the browser using port forwarding:
MY_CLUSTER_LEADER_POD=mygraph-neo4j-core-0
kubectl port-forward $MY_CLUSTER_LEADER_POD 7687:7687 7474:7474
In the user guide, they suggest that I should not use a load balancer on the server side. I should expose each pod in the cluster separately and use bolt+routing from my application to handle request routing. This is described in the Limitations section of the guide.
It should be exposed using NodePorts, but I am unable to do it properly. I've tried doing it like this:
kubectl expose pod neo-cluster-neo4j-core-0 --port=7687 --name=neo-leader-pod
But I'm unable to connect using this exposed IP. I'm not good with cloud technologies, so I can't figure out what I'm doing wrong.
I went through the article Neo4j Considerations in Orchestration Environments, which tells me what I should do but not how to do it. It assumes prior knowledge of gcloud/Kubernetes.
Could anyone guide me in the right direction? Thanks
If I'm not wrong, you created a GKE cluster for Neo4j Enterprise.
And it works perfectly inside the cluster network, but not from outside.
Check whether you have opened the firewall for these ports.
To create rules or see the existing rules:
Go to cloud.google.com
Go to my Console
Choose your project
Choose Networking > VPC network
Choose "Firewall rules"
Choose "Create Firewall Rule" to create the rule if it doesn't exist.
To apply the rule to selected VM instances, select Targets > "Specified target tags", and enter the name of the tag into "Target tags". This tag will be used to apply the new firewall rule to whichever instances you'd like. Then make sure the instances have the network tag applied.
To allow incoming TCP connections to port 7687, for example, enter tcp:7687 in "Protocols and Ports".
Click Create
Check the GKE documentation for a better clue:
https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod
https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy
https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
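As a complement to the firewall rule, a hedged sketch of a NodePort Service for a single pod (note that kubectl expose defaults to --type=ClusterIP, which is not reachable from outside the cluster); the statefulset.kubernetes.io/pod-name label is set automatically on StatefulSet pods, and the nodePort value is illustrative:

apiVersion: v1
kind: Service
metadata:
  name: neo-leader-pod
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: neo-cluster-neo4j-core-0   # per-pod label added by the StatefulSet controller
  ports:
  - name: bolt
    port: 7687
    targetPort: 7687
    nodePort: 30687        # must fall in the cluster's NodePort range (30000-32767 by default)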
:)

Deploying a Spring Boot REST service with HTTPS enabled in Kubernetes

I have developed a Spring Boot based REST API service and enabled HTTPS on it by using a self-signed cert keystore (to test locally), and it works well.
server.ssl.key-store=classpath:certs/keystore.jks
server.ssl.key-store-password=keystore
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
Now, I want to package a Docker image and deploy this service in a Kubernetes cluster. I know I can expose the service as a NodePort and access it externally.
What I want to know is: I doubt that my self-signed cert generated on my local machine will work when deployed in the Kubernetes cluster. I researched and found a couple of solutions using Kubernetes Ingress, Kubernetes Secrets, etc. I am confused as to what would be the best way to go about this, so that I can access my service running in Kubernetes over HTTPS. What changes will I need to make to my REST API code?
UPDATED NOTE: Though I have used a self-signed cert for testing purposes, I can obtain a CA-signed cert from my company and use it for production. My question is more along the lines of: for a REST API service which already uses an SSL/TLS based connection, what are some of the better ways to deploy and access the cert in a Kubernetes cluster, e.g. package it in the application itself, use Secrets, or scrap the application's SSL configuration and use Ingress instead, etc.? Hope my question makes sense :)
Thanks for any suggestions.
Well, it depends on the way you want to expose your service. Basically you have either an Ingress, an external load balancer (only available in certain cloud environments) or a Service routed to a port (either via NodePort or HostPort) as options.
Attention: our K8s cluster is self-hosted, so I have no reliable information about external load balancers in K8s and will therefore omit that option.
If you want to expose your service directly behind one of your domains on a standard port (e.g. https://app.myorg.org) you'll want to use an Ingress. But if you don't need that and you can live with a specific port, the NodePort approach should do the trick (e.g. https://one.ofyourcluster.servers:30000/).
Let's assume you want to try the Ingress approach; then you need to add the certificates to the Ingress definition in K8s instead of the Spring Boot application, or you must additionally specify in the Ingress that the service itself is reachable via HTTPS. The way to do this may differ from ingress controller to ingress controller.
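For the Ingress variant, a minimal hedged sketch: TLS is terminated at the Ingress using a kubernetes.io/tls Secret, and the Spring Boot service then speaks plain HTTP inside the cluster; the host, Service name and port are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: app-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>    # elided
  tls.key: <base64-encoded private key>    # elided
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rest-api
spec:
  tls:
  - hosts:
    - app.myorg.org
    secretName: app-tls
  rules:
  - host: app.myorg.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rest-api          # illustrative ClusterIP Service for the Spring Boot app
            port:
              number: 8080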
For the NodePort/HostPort approach you just need to enable SSL in your application.
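If you keep TLS inside the application, one hedged option is to mount the keystore from a Secret instead of baking it into the image; the Secret name, mount path, and the environment-variable override (Spring Boot's relaxed binding should map SERVER_SSL_KEYSTORE to server.ssl.key-store) are assumptions for illustration:

# Fragment of the Deployment's pod template
spec:
  containers:
  - name: rest-api
    image: myorg/rest-api:latest           # illustrative image
    env:
    - name: SERVER_SSL_KEYSTORE            # overrides server.ssl.key-store from application.properties
      value: file:/certs/keystore.jks
    volumeMounts:
    - name: keystore
      mountPath: /certs
      readOnly: true
  volumes:
  - name: keystore
    secret:
      secretName: keystore                 # Secret containing keystore.jks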
Besides that, you also need a valid certificate, e.g. one issued by https://letsencrypt.org/
Actually, for K8s there are some projects that can fetch a Let's Encrypt certificate for you automatically if you use Ingresses (e.g. https://github.com/jetstack/cert-manager/).

How can I use vhosts on the same port in a Kubernetes pod?

I have an existing web application with a frontend and a backend, which run on the same port (HTTPS/443) on different subdomains. Do I really need a load balancer in my pod to handle all incoming web traffic, or does Kubernetes have something built in that I've missed so far?
I would encourage getting familiar with the concepts of Ingress and IngressController: http://kubernetes.io/docs/user-guide/ingress/
Simplifying things a bit, you can look at Ingress as a sort of vhost/path service router/reverse proxy.
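As a hedged illustration of the vhost use case, a single Ingress routing two subdomains to two Services on the same port; all names are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vhosts
spec:
  rules:
  - host: app.example.com          # frontend subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 443
  - host: api.example.com          # backend subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 443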
