Can I use one Google Cloud load balancer for a static backend bucket AND a container deployed by Cloud Run, with different ports?

I have a project with routes and paths defined in a load balancer. Now I want to add a Cloud Run container to that project. Do I need to create another load balancer, or can I add the paths to the current one?
How would I add a path that routes to the Cloud Run container (in either scenario)?
like mydomain.com/"new-path/container"/"path-of-new-path"
like in load balancer paths:
/newPath/newPath/*
or /newPath
and then the container (Express.js in my case) dictates the sub-paths under newPath?
I'm confused. And to add more confusion to the matter: can I have two ports?
like:
mydomain.com:443/newPath:8080

There are many possible configurations. You can route the traffic on:
the domain,
the path prefix,
or a combination of both.
With a serverless NEG, you additionally have a URL mask that you can use to route traffic to different serverless services.
If your service doesn't support the load balancer path, you can also use a rewrite rule to strip that additional path level and clean up the API call to your service.
Finally, about the port: it's independent. The frontend port doesn't influence path resolution for request routing, but the domain does.
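As a concrete sketch of the path-prefix approach: the existing URL map can get a path matcher that sends /newPath/* to the Cloud Run backend and everything else to the bucket. All resource names below are hypothetical, and this assumes a serverless NEG backend service for the Cloud Run service already exists.

```shell
# Hypothetical names: URL map "web-map", backend bucket "static-bucket",
# and backend service "cloudrun-backend" fronting the Cloud Run serverless NEG.
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=cloud-run-paths \
    --default-backend-bucket=static-bucket \
    --backend-service-path-rules="/newPath/*=cloudrun-backend"
```

With this in place, requests to mydomain.com/newPath/... reach the container, and the Express app sees the path including the /newPath prefix unless a rewrite rule strips it.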

Related

K8s load balancing between different deployments/ReplicaSets

We have a system that has 2 endpoints based on geo-location, e.g. (east_url, west_url).
One of our applications needs to load balance between those 2 URLs. In the consumer application, I created 2 deployments with the same image but different environment variables, such as "url=east_url" and "url=west_url".
After the deployment, I have the following running pods; each of them has the labels "app=consumer-app" and either "region=east" or "region=west":
consumer-east-pod-1
consumer-east-pod-2
consumer-west-pod-1
consumer-west-pod-2
When I create a ClusterIP Service with selector app=consumer-app, somehow it only picks up one ReplicaSet. I am curious whether it is actually possible in Kubernetes to have a Service backed by different deployments.
Another way of doing this I can think of is to create 2 services and have the ingress controller load balance between them. Is this possible? We are using Kong as the ingress controller. I am looking for something like OpenShift, which can have "alternateBackends" to serve the Route: https://docs.openshift.com/container-platform/4.1/applications/deployments/route-based-deployment-strategies.html
I was missing a label on the east ReplicaSets; after I added app: consumerAPP, it works fine now.
Thanks
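In other words, a Service selector matches pods, not deployments, so one Service can span any number of deployments as long as every pod template carries the selector's labels. A minimal sketch (names hypothetical):

```yaml
# One Service spanning both deployments; the pod template of BOTH the
# east and west deployments must carry the "app: consumer-app" label.
apiVersion: v1
kind: Service
metadata:
  name: consumer-app
spec:
  selector:
    app: consumer-app   # matched by east AND west pods alike
  ports:
    - port: 80
      targetPort: 8080  # assumed container port
```

Traffic is then spread across all ready endpoints behind the Service, regardless of which Deployment created them.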
TL;DR: use Istio
With Istio you can create Virtual Services:
A VirtualService defines a set of traffic routing rules to apply when
a host is addressed. Each routing rule defines matching criteria for
traffic of a specific protocol. If the traffic is matched, then it is
sent to a named destination service (or subset/version of it) defined
in the registry.
The VirtualService will let you send traffic to different backends based on the URI.
Now, if you plan to perform something like an A/B test, you can use Istio's DestinationRule (https://istio.io/docs/reference/config/networking/destination-rule/).
DestinationRule defines policies that apply to traffic intended for a
service after routing has occurred.
Version specific policies can be specified by defining a named subset
and overriding the settings specified at the service level
1. If you are using GKE, the process to install Istio can be found here.
2. If you are using K8s running on a virtual machine, the installation process can be found here.

How to add a reverse proxy for authentication & load balancing in Kubernetes (GKE)?

Okay, I have a DB consisting of several nodes deployed to GKE.
The deployment.yaml adds each node as ClusterIP, which makes sense. Here is the complete deployment file:
https://github.com/dgraph-io/dgraph/blob/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
For whatever reason, the DB has zero security functionality, so I cannot expose any part of it using a LoadBalancer service, because doing so would give unsecured access to the entire DB. The vendor argues that security is solely the user's problem. The Alpha node comes with an API endpoint, which is also unsecured, but I actually want to connect to that API endpoint from an external IP.
So the best I can do is add NGINX as a (reverse) proxy with authentication to secure access to the API endpoint of the Alpha node(s). In practice, I have three Alpha nodes, so adding load balancing makes sense. I found a config that load balances to three Alpha nodes in Docker Compose, although without authentication:
https://gist.github.com/MichelDiz/42954e321620159c872c35c20e9d85c6
Now, the million-dollar question is: how do I add an NGINX load balancer to Kubernetes that authenticates and load balances incoming traffic to my (ClusterIP) Alpha nodes?
Any pointers or help?
If you want to do it the hard way, you can deploy your own NGINX deployment and expose it as a LoadBalancer Service. You can configure it with the different authentication mechanisms that NGINX supports.
Instead, you can use an Ingress resource backed by an IngressController that supports authentication. Check whether your Kubernetes distribution provides an IngressController and whether it supports auth. If not, you can install the NGINX or Traefik IngressControllers, which support authentication.
It looks like GKE Ingress has recently added support for IAP-based authentication, which is still in beta: https://cloud.google.com/iap/docs/enabling-kubernetes-howto
If you are looking for a more traditional type of authentication with Ingress, install NGINX or Traefik and use the kubernetes.io/ingress.class annotation so that only that IngressController claims your Ingress resource: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
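For the ingress-nginx route, basic auth is a matter of annotations plus a Secret holding an htpasswd file. A sketch, assuming a Service named dgraph-alpha-public on port 8080 (names and port taken from the Dgraph HA manifest, but verify against your deployment):

```yaml
# Requires a Secret first, e.g.:
#   htpasswd -c auth admin
#   kubectl create secret generic basic-auth --from-file=auth
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dgraph-alpha
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dgraph-alpha-public   # assumed Alpha HTTP Service
                port:
                  number: 8080
```

The controller then handles both concerns at once: it challenges for credentials and spreads requests across all Alpha pods behind the ClusterIP Service.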

How to route to two different AWS applications on the same domain with different URLs using an Application Load Balancer?

I have a web app (Node.js on Elastic Beanstalk) already serving at example.com. I have a WordPress blog that I want to serve at example.com/blog.
I want to set up an AWS load balancer to route requests at /blog to my WordPress server and all other requests at / to my web app. How do I do it using AWS Load Balancer(s)?
My DNS and both of these servers are on AWS. I don't want to set up a self-managed Nginx/HAProxy reverse proxy. If possible, I want to avoid a CloudFront configuration at the moment.
This is possible by adding a listener rule to an Application Load Balancer. Listener rules determine how the load balancer routes requests to the targets in one or more target groups.
After creating the load balancer, see Listeners >> Add Listener. Add a rule with a condition where Path is /blog, then select the forward action to send traffic to a separate target group mapped to the WordPress instances.
For more see the docs for Listener Rules for Your Application Load Balancer.
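The same rule can be created from the CLI. The ARNs below are hypothetical placeholders; requests matching /blog* go to the WordPress target group, and everything else falls through to the listener's default action (the Elastic Beanstalk target group):

```shell
# Hypothetical ARNs; substitute your listener and target group.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def \
    --priority 10 \
    --conditions Field=path-pattern,Values='/blog*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/wordpress/xyz
```

Note the wildcard: a condition of exactly /blog would not match /blog/some-post, so /blog* (or both patterns) is usually what you want.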

Customize Docker reverse DNS

I'm looking for a way to change what the reverse DNS resolves to in Docker.
If I set my container's FQDN to foo.bar I expect a reverse DNS lookup for its IP to resolve to foo.bar, but it always resolves to <container_name>.<network_name>.
Is there a way I can change that?
Docker's DNS support is designed for container discovery within a cluster. It's not an application traffic-management solution, so features are limited.
For example, it's possible to configure a DNS wildcard that resolves "*.foo.bar" URLs to a server running a container-savvy load balancer (a load balancer that knows where all the containers associated with each application are located and running).
That load balancer can then route traffic based on the incoming "Host" HTTP header:
"app1.foo.bar" -> "App1 Container1", "App1 Container2"
"app2.foo.bar" -> "App2 Container1", "App2 Container2", "App2 Container3"
For a practical implementation, take a look at how Kubernetes does load balancing (this is an advanced topic):
http://kubernetes.io/docs/user-guide/ingress/
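In Kubernetes terms, the host-header routing sketched above is exactly what an Ingress expresses. A minimal sketch with hypothetical Service names, using the current networking.k8s.io/v1 API rather than the older user-guide syntax:

```yaml
# Hypothetical host-based routing: each subdomain maps to a Service
# that load-balances across that app's containers.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-apps
spec:
  rules:
    - host: app1.foo.bar
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1    # assumed Service for App1 containers
                port:
                  number: 80
    - host: app2.foo.bar
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2    # assumed Service for App2 containers
                port:
                  number: 80
```

A wildcard DNS record for *.foo.bar pointing at the ingress controller's IP completes the picture.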

Tutum HAProxy Docker Virtual Host forward to entry point path

I'm trying to use the HAProxy Tutum Docker image to load balance between two different web applications. Both web applications have an entry point of "/". In the section Virtual host and virtual path I see that I can use virtual hosts to route to the different services. I've tried to set the VIRTUAL_HOST parameter for web app 1 to */webapp1* and for web app 2 to /*webapp2*. But when I try to navigate to web app 1 through HAProxy (using, for example, http://haproxy-test.myname.svc.tutum.io/webapp1), it forwards me to http://<internal_ip_to_webapp1>/webapp1. I would like HAProxy to forward calls to /webapp1 to http://<internal_ip_to_webapp1> (i.e. the entry point of web app 1). How can I achieve this?
You should try adding the host name in the VIRTUAL_HOST parameter, like http://haproxy-test.myname.svc.tutum.io/webapp1/*
