Cloud Run - Custom Domain Mapping with Wildcard Subdomain - google-cloud-run

Our app utilizes subdomains like customerA.mydomain.com or customerB.mydomain.com, wherein the subdomains are unique publicly accessible storefronts that are "created" by our customers. We would like to route all *.mydomain.com to a particular Cloud Run service. We have already set up a wildcard subdomain CNAME in the DNS records to route to ghs.googlehosted.com., but how can we make Cloud Run accept requests from any subdomain?

You will need to map each subdomain to the service individually, using the API or gcloud commands. For example:
gcloud run domain-mappings create --service=myapp --domain=www.example.com
Technically, this is because each mapped domain gets its own managed SSL certificate (provisioned automatically and at no charge), and Google does not issue a wildcard certificate for your domain. That is why you cannot map a wildcard (*) to a service: you have to instruct Cloud Run to create a mapping, and therefore a certificate request, for each individual subdomain.
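As a minimal sketch of scripting this per customer (the service name, region, and subdomain below are placeholders; --region and --platform are assumed for a fully managed service):
SUBDOMAIN="customerA.mydomain.com"   # e.g. taken from your onboarding flow
gcloud run domain-mappings create \
  --service=myapp \
  --domain="$SUBDOMAIN" \
  --region=us-central1 \
  --platform=managed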
Also, reach out to your Account Manager, as there are limits on Cloud Run such as:
Maximum number of SSL certificates: 50 per top domain and per week

Related

Programmatically check if Cloud Run domain mapping is done

I'm developing a service which will have a subdomain for each customer. So far I've set a DNS rule on Google Domains as
* | CNAME | 3600 | ghs.googlehosted.com.
and then I add the mapping for each subdomain in the Cloud Run console. I want to do all this programmatically every time a new user registers.
The DNS rule will automatically handle any new subdomain, and to map it to the service I'll use the gcloud command:
gcloud beta run domain-mappings create --service frontend --domain sub.domain.com
Now, how can I check when the Cloud Run provisioning is done, so that I can notify the customer that the platform is ready to use? I could run gcloud beta run domain-mappings describe --domain sub.domain.com from a cron job every minute, parse the JSON output, and check whether the status is done. It's expensive, but it should work.
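For reference, a minimal sketch of that polling approach (assuming jq is available; the region, platform flag, and 60-second interval are placeholders to adjust):
DOMAIN="sub.domain.com"
while true; do
  # Read the Ready condition from the domain mapping resource
  READY=$(gcloud beta run domain-mappings describe \
    --domain "$DOMAIN" --region us-central1 --platform managed --format=json \
    | jq -r '.status.conditions[] | select(.type=="Ready") | .status')
  if [ "$READY" = "True" ]; then
    echo "Domain mapping for $DOMAIN is ready"
    break
  fi
  sleep 60
done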
The problem is that even when the gcloud CLI or the web console marks the provisioning as done, the platform isn't reachable for another 5-10 minutes, resulting in an ERR_CONNECTION_REFUSED error. The service logs show that a request to the subdomain is being made, but somehow it won't serve it.
I ended up using a load balancer as suggested. I followed the doc "Setting up a load balancer with Cloud Run, App Engine, or Cloud Functions"; the only difference is that I provided my own wildcard certificate (thanks to Let's Encrypt and certbot).
Now I can just use the Google Domains' API to instantly create a subdomain.
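For anyone following the same route, the steps from that doc boil down to roughly the following (a sketch only; the names, region, and certificate files are placeholders):
# Serverless NEG pointing at the Cloud Run service
gcloud compute network-endpoint-groups create frontend-neg \
  --region=us-central1 --network-endpoint-type=serverless --cloud-run-service=frontend
# Global backend service and URL map
gcloud compute backend-services create frontend-backend \
  --global --load-balancing-scheme=EXTERNAL
gcloud compute backend-services add-backend frontend-backend --global \
  --network-endpoint-group=frontend-neg --network-endpoint-group-region=us-central1
gcloud compute url-maps create frontend-url-map --default-service=frontend-backend
# Self-managed wildcard certificate (e.g. from Let's Encrypt)
gcloud compute ssl-certificates create wildcard-cert \
  --certificate=fullchain.pem --private-key=privkey.pem
# HTTPS proxy and forwarding rule on a reserved global IP
gcloud compute target-https-proxies create frontend-https-proxy \
  --url-map=frontend-url-map --ssl-certificates=wildcard-cert
gcloud compute addresses create frontend-ip --global
gcloud compute forwarding-rules create frontend-https-rule --global \
  --address=frontend-ip --target-https-proxy=frontend-https-proxy --ports=443
With this setup, the subdomain DNS records point at the load balancer's IP address rather than ghs.googlehosted.com.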

Jenkins pointing server to domain created

Good Morning
I have created a Jenkins server in AWS and I am able to access the platform using the IP of the server; however, I want to access it more securely.
I have set up a subdomain on my hosting service, set the IP of the server as an A record, and also defined this in the configuration section of Jenkins.
However, when I access the URL https://domainname I get nothing, but if I add :8080 at the end it takes me to the Jenkins platform.
What am I missing here?
Thanks
I recommend using an AWS Application Load Balancer in front of your Jenkins web server.
It will host the HTTPS certificate (if you are using AWS Certificate Manager), and you will be able to point your DNS record at the ALB's DNS name.
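With the AWS CLI, that could look roughly like this (a sketch only; the VPC, subnet, security group, instance, and certificate identifiers are placeholders):
# Target group forwarding to Jenkins on port 8080
aws elbv2 create-target-group --name jenkins-tg --protocol HTTP --port 8080 \
  --vpc-id <vpc-id> --target-type instance
aws elbv2 register-targets --target-group-arn <jenkins-tg-arn> --targets Id=<instance-id>
# Internet-facing ALB in two public subnets
aws elbv2 create-load-balancer --name jenkins-alb \
  --subnets <subnet-a> <subnet-b> --security-groups <sg-id>
# HTTPS listener with an ACM certificate, forwarding to the target group
aws elbv2 create-listener --load-balancer-arn <jenkins-alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<jenkins-tg-arn>
Your subdomain's DNS record then becomes an alias/CNAME to the ALB's DNS name instead of an A record pointing at the instance IP.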

Base domain mapping broken for certain domains

I am trying to map a base domain name to a Google Cloud Run (fully managed) service. The following mappings work successfully:
Any Cloud Run service -> something.aaa.com (Immediately gives CNAME Records)
Any Cloud Run service -> bbb.com (Immediately gives A records)
Any Cloud Run service -> ccc.com (Immediately gives A records)
However, the following does not work:
Any Cloud Run service -> aaa.com (Spinner of death; never returned any DNS records after 12 hours)
Is there anywhere I can get more information on why this mapping is failing? The CLI also gives me a spinner when I run: gcloud beta run domain-mappings create --service $SERVICE_NAME --domain aaa.com
All domains were purchased through Google Domains. The only difference I can think of between aaa.com and bbb.com is that aaa.com was at some point using Cloudflare DNS, though I have since moved back to Google DNS.
This problem magically resolved itself after a few days. Possibly what fixed it was switching my DNS from Cloudflare back to Google DNS and waiting for that to trickle through.
If you're experiencing this issue, one workaround is to just use www.aaa.com as your canonical name instead of aaa.com. You can use a CNAME record to map www.aaa.com to your Cloud Run service. Many domain/DNS providers (including Google Domains) give you the ability to create a 301 redirect from aaa.com to www.aaa.com.
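In other words (a sketch; the service and domain names are placeholders):
gcloud beta run domain-mappings create --service $SERVICE_NAME --domain www.aaa.com
# DNS: www   CNAME   ghs.googlehosted.com.
# plus a 301 redirect from aaa.com to https://www.aaa.com at your domain provider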

How to add a reverse proxy for authentication & load balancing in Kubernetes (GKE)?

Okay, I have a DB consisting of several nodes deployed to GKE.
The deployment.yaml adds each node as ClusterIP, which makes sense. Here is the complete deployment file:
https://github.com/dgraph-io/dgraph/blob/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
For whatever reason, the DB has zero security functionality, so I cannot expose any part using a LoadBalancer service because doing so would give unsecured access to the entire DB. The vendor argues that security is solely the user's problem. The AlphaNode comes with an API endpoint, which is also unsecured, but I actually want to connect to that API endpoint from an external IP.
So, the best I can do is add NGINX as a (reverse) proxy with authentication to secure access to the API endpoint of the Alpha node(s). Practically, I have three alpha nodes, so adding load balancing makes sense. I found a config that does load balancing to three alpha nodes in Docker Compose, although without authentication:
https://gist.github.com/MichelDiz/42954e321620159c872c35c20e9d85c6
Now, the million-dollar question is: how do I add an NGINX load balancer to Kubernetes that authenticates and load balances incoming traffic to my (ClusterIP) alpha nodes?
Any pointers?
If you want to do it the hard way, you can deploy your own nginx Deployment and expose it as a LoadBalancer Service. You can configure it with any of the authentication mechanisms that nginx supports.
Instead, you can use an Ingress resource backed by an Ingress controller that supports authentication. Check whether your Kubernetes distribution provides an Ingress controller and whether it supports auth. If not, you can install the nginx or Traefik Ingress controllers, which support authentication.
It looks like GKE Ingress has recently added support for IAP-based authentication, which is still in beta - https://cloud.google.com/iap/docs/enabling-kubernetes-howto
If you are looking for a more traditional type of authentication with ingress, install nginx or Traefik and use the kubernetes.io/ingress.class annotation so that only that Ingress controller claims your ingress resource - https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
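For example, with the nginx Ingress controller, basic auth in front of the alpha nodes could look roughly like this (a sketch; it assumes the controller is installed and that the manifest exposes the alpha nodes through a ClusterIP Service, here assumed to be dgraph-alpha-public on HTTP port 8080 - check the actual Service name and port in your deployment):
# htpasswd credentials stored as a Secret the nginx controller can read
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
# Ingress that load balances to the alpha Service and requires basic auth
kubectl create ingress dgraph-alpha \
  --class=nginx \
  --rule="dgraph.example.com/*=dgraph-alpha-public:8080" \
  --annotation=nginx.ingress.kubernetes.io/auth-type=basic \
  --annotation=nginx.ingress.kubernetes.io/auth-secret=basic-auth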

Deploying a Spring Boot REST service with HTTPS enabled in Kubernetes

I have developed a Spring Boot based REST API service and enabled HTTPS on it by using a self-signed certificate keystore (to test locally), and it works well.
server.ssl.key-store=classpath:certs/keystore.jks
server.ssl.key-store-password=keystore
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
Now, I want to package a docker image and deploy this service in a kubernetes cluster. I know I can expose the service as a NodePort and access it externally.
What I want to know is: I doubt that my self-signed cert generated on my local machine will work when deployed in a Kubernetes cluster. I researched and found a couple of solutions using Kubernetes Ingress, Kubernetes Secrets, etc. I am confused as to what the best way to go about this is, so that I can access my service running in Kubernetes over HTTPS. What changes will I need to make to my REST API code?
UPDATED NOTE: Though I have used a self-signed cert for testing purposes, I can obtain a CA-signed cert from my company and use it for production. My question is more along the lines of: for a REST API service which already uses an SSL/TLS based connection, what are some of the better ways to deploy and access the cert in a Kubernetes cluster, e.g. package it in the application itself, use Secrets, or scrap the application's SSL configuration and use Ingress instead, etc. Hope my question makes sense :)
Thanks for any suggestions.
Well, it depends on the way you want to expose your service. Basically you have either an ingress, an external load balancer (only available in certain cloud environments), or a Service that's routed to a port (either via NodePort or HostPort) as options.
Attention: our K8s cluster is self-hosted, so I have no reliable information about external load balancers in K8s and will therefore omit that option.
If you want to expose your service directly behind one of your domains on the standard port (e.g. https://app.myorg.org), you'll want to use an ingress. But if you don't need that and you can live with a specific port, the NodePort approach should do the trick (e.g. https://one.ofyourcluster.servers:30000/).
Let's assume you want to try the ingress approach: then you need to add the certificates to the ingress definition in K8s instead of the Spring Boot application, or you must additionally specify in the ingress that the service itself is reachable via HTTPS. The way to do this may differ from ingress controller to ingress controller.
For the NodePort/HostPort approach you just need to enable SSL in your application.
Besides that, you also need a valid certificate, e.g. one issued by https://letsencrypt.org/.
Actually, for K8s there are some projects that can fetch a Let's Encrypt certificate for you automatically if you use ingresses (e.g. https://github.com/jetstack/cert-manager/).
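As a rough sketch of the ingress route (assuming an nginx Ingress controller, a Service named rest-api on port 8080, and the CA-signed cert/key files from your company; all of these names are placeholders):
# Store the certificate and key as a TLS Secret
kubectl create secret tls rest-api-tls --cert=tls.crt --key=tls.key
# Terminate TLS at the ingress and forward plain HTTP to the service
kubectl create ingress rest-api \
  --class=nginx \
  --rule="app.myorg.org/*=rest-api:8080,tls=rest-api-tls"
With TLS terminated at the ingress, the application can drop the server.ssl.* properties and serve plain HTTP inside the cluster.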
