Programmatically check if Cloud Run domain mapping is done - google-cloud-run

I'm developing a service which will have a subdomain for each customer. So far I've set a DNS rule on Google Domains as
* | CNAME | 3600 | ghs.googlehosted.com.
and then I add the mapping for each subdomain in the Cloud Run console. I want to do all this programmatically every time a new user registers.
The DNS rule will handle automatically any new subdomain, and to map it to the service I'll use the gcloud command:
gcloud beta run domain-mappings create --service frontend --domain sub.domain.com
Now, how can I check when the Cloud Run provisioning is done so that I can notify the customer that the platform is ready to use? I could run a cron job every minute that calls gcloud beta run domain-mappings describe --domain sub.domain.com, parses the JSON output, and checks whether the status is done. It's expensive, but it should work.
The problem is that even when the gcloud CLI or the web console marks the provisioning as done, the platform isn't reachable for another 5-10 minutes, resulting in an ERR_CONNECTION_REFUSED error. The service logs show that requests to the subdomain are being made, but somehow they aren't served.
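The polling idea above can be sketched as follows. This is a sketch under assumptions: the Ready condition path reflects the Knative-style status that gcloud ... describe --format json returns, and the extra HTTPS probe works around the 5-10 minute gap between "done" and actually serving:

```python
import json
import subprocess
import time
import urllib.error
import urllib.request


def mapping_is_ready(describe_json: dict) -> bool:
    """Check the Knative-style status conditions for Ready == True."""
    conditions = describe_json.get("status", {}).get("conditions", [])
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in conditions
    )


def domain_is_serving(domain: str, timeout: float = 5.0) -> bool:
    """Probe the domain over HTTPS; routing can lag behind 'Ready'."""
    try:
        urllib.request.urlopen(f"https://{domain}/", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # got an HTTP response, so TLS and routing work
    except (urllib.error.URLError, OSError):
        return False


def wait_until_live(domain: str, poll_seconds: int = 60) -> None:
    """Poll the mapping status, then confirm the domain really answers."""
    while True:
        out = subprocess.run(
            ["gcloud", "beta", "run", "domain-mappings", "describe",
             "--domain", domain, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        if mapping_is_ready(json.loads(out)) and domain_is_serving(domain):
            return  # safe to notify the customer now
        time.sleep(poll_seconds)
```

Checking both conditions avoids notifying the customer during the window where the mapping is marked done but connections are still refused.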

I ended up using a load balancer as suggested. I followed the doc "Setting up a load balancer with Cloud Run, App Engine, or Cloud Functions"; the only difference is that I provided my own wildcard certificate (thanks to Let's Encrypt and certbot).
Now I can just use the Google Domains' API to instantly create a subdomain.
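The answer doesn't show the record-creation call. If the zone is hosted in Cloud DNS rather than Google Domains, the equivalent per-customer record can be created with the google-cloud-dns client; this is a sketch, and "my-project" and "my-zone" are placeholders:

```python
def fqdn(sub: str, domain: str) -> str:
    """Build the trailing-dot FQDN that DNS APIs expect."""
    return f"{sub}.{domain}."


def create_subdomain(sub: str, domain: str,
                     target: str = "ghs.googlehosted.com.") -> None:
    """Create a CNAME for one customer subdomain in a Cloud DNS zone."""
    # Third-party dependency, imported lazily: pip install google-cloud-dns
    from google.cloud import dns

    client = dns.Client(project="my-project")
    zone = client.zone("my-zone", f"{domain}.")
    record = zone.resource_record_set(fqdn(sub, domain), "CNAME", 3600, [target])
    change = zone.changes()
    change.add_record_set(record)
    change.create()  # applies the change; poll change.status until "done"
```

Note that with the wildcard CNAME and wildcard certificate from the load-balancer setup above, a per-customer record is only needed if you drop the wildcard approach.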

Related

Setting up PwnMachine self-hosted docker embed

Trying to set up pwnmachinev2 (https://github.com/yeswehack/pwn-machine) properly.
PwnMachine is a self-hosted solution based on Docker that aims to provide an easy-to-use pwning station for bug hunters.
The basic install includes a web interface, a DNS server, and a reverse proxy.
Installation
Using Docker
Clone the repository locally on your machine
git clone https://github.com/yeswehack/pwn-machine.git
Enter the repository you just cloned
cd pwn-machine/
Configure the .env <--Having trouble on this step
If you build the project directly, you will be faced with an error:
${LETS_ENCRYPT_EMAIL?Please provide an email for let's encrypt}" # Replace with your email address or create a .env file
We highly recommend creating a .env file in the PwnMachine directory and configuring an email. Let's Encrypt uses it to issue an SSL certificate.
LETS_ENCRYPT_EMAIL="your_email@domain.com"
Building
Build the project (the optional -d flag starts the project in the background). Building can take several minutes, depending on your computer and network connection.
docker-compose up --build -d
Once everything is done on the Docker side, you should be able to access the PwnMachine at http://your_ip_address
Starting pm_powerdns-db_1 ... done
Starting pm_redis_1 ... done
Starting pm_powerdns_1 ... done
Starting pm_filebeat_1 ... done
Recreating traefik ... done
Recreating pm_manager_1 ... done
First run & configuration
Password and 2FA configuration
When you start the PwnMachine for the first time, you are asked to set a new password and 2FA. This is mandatory to continue. You can use Google Authenticator, Authy, KeePass... anything that allows you to set up 2FA.
After this, you are ready to use the PwnMachine!
How to setup DNS
Create a new DNS zone
First, we need to create a new DNS zone. Go to DNS > ZONES
Name: example.com
Nameserver: ns.example.com.
Postmaster: noreply.example.com.
Click on the button to save the configuration and create the new DNS zone
Create a new DNS rule
Zone: example.com.
Name: *.example.com.
Type: A
Add a new record
your_ip_address
Click on the button +
Click on the button to save the configuration
Now you need to update the DNS servers at your registrar to point to the one you have just configured.
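Once the registrar change has propagated, you can verify that the wildcard A record answers for arbitrary labels. A minimal stdlib check, with the hostname and IP as placeholders:

```python
import socket


def resolves_to(hostname: str, expected_ip: str) -> bool:
    """True if hostname resolves to expected_ip (checks all A records)."""
    try:
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    except socket.gaierror:
        return False
    return expected_ip in {info[4][0] for info in infos}


# Because the record is a wildcard, any label should work:
# resolves_to("anything.example.com", "your_ip_address")
```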
How to expose a docker container on a subdomain and use HTTPS
For this example, we will create a new subdomain, manager.example.com, to expose the PwnMachine interface over HTTPS.
Go to DOCKER > CONTAINERS
Right click on pm_manager
Click on Expose via traefik
A new window should open:
Name: pm_manager-router
Rule: Host(`manager.example.com`) && PathPrefix(`/`)
Entrypoint: https
Select "Middlewares"
Service: pm_manager-service
---- TLS ----
Cert Resolver: Let's Encrypt staging - DNS
Domain: *.example.com
Now wait for DNS propagation; after a few minutes you should be able to connect to manager.example.com.
I was able to get it started and access it at http://127.0.0.1/, but got a bit confused when setting up the records.
I'm trying to set it up so I can access it over the web, i.e. c25.tech/payload.dtd.
c25.tech is my domain; I have it through Hostinger.
I hope that someone can help me out, thanks.
screenshot1
screenshot2
screenshot3

Cloud Run - Custom Domain Mapping with Wildcard Subdomain

Our app utilizes subdomains like customerA.mydomain.com or customerB.mydomain.com, wherein the subdomains are unique publicly accessible storefronts that are "created" by our customers. We would like to route all *.mydomain.com to a particular Cloud Run service. We have already set up a wildcard subdomain CNAME in the DNS records to route to ghs.googlehosted.com., but how can we make Cloud Run accept requests from any subdomain?
You will need to map each subdomain to a service using API or gcloud commands. For example:
gcloud run domain-mappings create --service=myapp --domain=www.example.com
Technically this is because each service is provisioned with its own SSL certificate (built in and provided out of the box at no charge), and Google does not issue a wildcard certificate for your domain. That's why you cannot map a * (wildcard) to a service. This means you need to instruct Cloud Run to create a mapping and issue a certificate for each of your subdomains.
Also reach out to your Account Manager, as there are other limits on Cloud Run such as:
Maximum number of SSL certificates: 50 per top domain and per week
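Since each subdomain needs its own mapping, the gcloud call can be triggered at customer-registration time. A sketch, with the service and domain names as placeholders; the label-sanitizing helper is simplified:

```python
import re
import subprocess


def subdomain_for(customer: str) -> str:
    """Derive a DNS-safe label from a customer name (simplified)."""
    label = re.sub(r"[^a-z0-9-]+", "-", customer.lower()).strip("-")
    return label[:63]  # DNS labels max out at 63 characters


def map_customer(customer: str, service: str = "myapp",
                 domain: str = "example.com") -> None:
    """Create one domain mapping (and one managed certificate) per subdomain."""
    # Mind the quota mentioned above: ~50 certificates per top domain per week.
    subprocess.run(
        ["gcloud", "run", "domain-mappings", "create",
         f"--service={service}",
         f"--domain={subdomain_for(customer)}.{domain}"],
        check=True,
    )
```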

Base domain mapping broken for certain domains

I am trying to map a base domain name to a Google Cloud Run (fully managed) service. The following mappings work successfully:
Any Cloud Run service -> something.aaa.com (Immediately gives CNAME Records)
Any Cloud Run service -> bbb.com (Immediately gives A records)
Any Cloud Run service -> ccc.com (Immediately gives A records)
However, the following does not work:
Any Cloud Run service -> aaa.com (Spinner of death; never returned any DNS records after 12 hours)
Is there anywhere I can get more information on why this mapping is failing? The CLI also gives me a spinner when I run: gcloud beta run domain-mappings create --service $SERVICE_NAME --domain aaa.com
All domains were purchased through Google Domains. The only difference I can think of between aaa.com and bbb.com is that aaa.com was at some point using Cloudflare DNS, though I have since moved back to Google DNS.
This problem magically resolved itself after a few days. Possibly what fixed it was switching my DNS from Cloudflare back to Google DNS and waiting for that to trickle through.
If you're experiencing this issue, one workaround is to just use www.aaa.com as your canonical name instead of aaa.com. You can use a CNAME record to map www.aaa.com to your Cloud Run service. Many DNS providers (including Google DNS) will give you the ability to create a 301 Redirect from aaa.com to www.aaa.com.

How to invoke a Cloud Run app without having to add the Authorization Token

I have a cloud run app deployed that is for internal use only.
Therefore only users of our cluster should have access to it.
I added the permission for allAuthenticated members, giving them the Cloud Run Invoker role.
The problem is that those users (including me) now have to add an authorization bearer header every time they want to access the app.
This is what Cloud Run suggests (somewhat useless when you simply want to visit a frontend app):
curl -H \
"Authorization: Bearer $(gcloud auth print-identity-token)" \
https://importer-controlroom-frontend-xl23p3zuiq-ew.a.run.app
I wonder why I can't simply be recognized as an authorized member, the way GCP works out elsewhere. I can access the cluster, but I have to add the authorization header to access the Cloud Run app as an authorized member? I find this very inconvenient.
Is there any way to make it way more fun to access the deployed cloud run app?
PS: I do not want to place the app in our cluster - so only fully managed is an option here
You currently can't do that without the Authorization header on Cloud Run.
allAuthenticated means any Google user (or service account), so you need to present an identity token to prove you're one.
If you want to make your application public, read this doc.
But this is a timely request! I am currently running an experiment that lets you make requests to http://hello and automatically get routed to the full domain + automatically get the Authorization header injected! (This is for communication between Cloud Run applications.)
GCP now offers a proxy tool for making this easier, although it's in beta as of writing this.
It's part of the gcloud suite, you can run:
gcloud beta run services proxy $servicename --project $project --region $region
It launches a web server on localhost:8080 that forwards all requests to the targeted service, injecting the user's GCP identity token into every request.
Of course this can only be used locally.
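For scripted or service-to-service calls, the header can also be generated with the google-auth library instead of shelling out to gcloud. A sketch; note that for identity tokens the audience must be the service's URL:

```python
import urllib.request


def bearer_header(token: str) -> dict:
    """Build the Authorization header that Cloud Run IAM checks."""
    return {"Authorization": f"Bearer {token}"}


def authorized_get(url: str) -> bytes:
    """GET a Cloud Run URL with an identity token minted for that audience."""
    # Third-party dependency, imported lazily: pip install google-auth
    import google.auth.transport.requests
    import google.oauth2.id_token

    token = google.oauth2.id_token.fetch_id_token(
        google.auth.transport.requests.Request(), url)
    req = urllib.request.Request(url, headers=bearer_header(token))
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

This relies on Application Default Credentials being available (e.g. a service account on GCP, or gcloud auth application-default login locally).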

Secure gateway between Bluemix CF apps and containers

Can I use Secure-Gateway between my Cloud Foundry apps on Bluemix and my Bluemix docker container database (mongo)? It does not work for me.
Here the steps I have followed:
upload secure gw client docker image on bluemix
docker push registry.ng.bluemix.net/NAMESPACE/secure-gateway-client:latest
run the image with token as a parameter
cf ic run registry.ng.bluemix.net/edevregille/secure-gateway-client:latest GW-ID
when i look at the logs of the container secure-gateway, I get the following:
[INFO] (Client PID 1) Setting log level to INFO
[INFO] (Client PID 1) There are no Access Control List entries, the ACL Deny All flag is set to: true
[INFO] (Client PID 1) The Secure Gateway tunnel is connected
and the secure-gateway dashboard interface shows that it is connected too.
But then, when I try to add the MongoDB database (also running on my Bluemix at 134.168.18.50:27017->27017/tcp) as a destination from the Secure Gateway dashboard, nothing happens: the destination is not created (it does not appear).
Am I doing something wrong? Or is this just not a supported use case?
1) The Secure Gateway is a service used to integrate resources from a remote (company) data center into Bluemix. Why do you want to use the SG to access your docker container on Bluemix?
2) From a technical point of view, the scenario described in the question should work. However, you need to add a rule to the access control list (ACL) to allow access to the docker container with your MongoDB. When the SG is running, it has a console where you can type commands. You could use something like allow 134.168.18.50:27017 to add the rule.
BTW: There is a demo using the Secure Gateway to connect to a MySQL database running in a VM on Bluemix. It shows how to install the SG and add an ACL rule.
Added: If you are looking into how to secure traffic to your Bluemix app, then just use https instead of http. It is turned on automatically.
