Why does Cloud Run domain mapping take so long to map my service to a domain from GoDaddy? - google-cloud-run

I have a frontend in Cloud Run that I want to map to a domain purchased on GoDaddy (breadfree.es) for a demo, but after configuring it in Domain mapping it has taken up to 16 hours and it still does not work. Can you give me any advice? I read that it takes 4 to 8 hours for the DNS to propagate, and also that for some people deleting the records and adding them again works (see the link below). Is there any other solution that works with GoDaddy?
It also seems strange to me because I have discovered that although the domain is not mapped, if I map this service to a subdomain, it propagates faster. I mapped it to tienda.breadfree.es and it works, although I would prefer it to be mapped to breadfree.es (tienda means shop in Spanish).
Thanks so much in advance for your help!
Google Cloud Run - Domain Mapping stuck at Certificate Provisioning

In general, for Cloud Run (fully managed) it takes up to 24 hours until the domain is propagated. Even though the SSL certificate itself can be issued in less than 15 minutes, the full process can take up to 24 hours.
The process can take this long because of the time needed for the certificates to be issued and renewed. When you map a service to a custom subdomain like yours (tienda.breadfree.es), a managed certificate is issued automatically and fairly quickly, but for a root domain the process takes longer.
You could try what you posted in the StackOverflow post, but you should first wait the full 24 hours. If the domain is still not working after 24 hours, something probably went wrong in the configuration phase or a step was skipped, and I suggest you open a support ticket so the Google team can check what the issue is.
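In the meantime, you can inspect where the mapping is stuck from the CLI. A quick sketch, assuming the gcloud beta components are installed and europe-west1 is your region (adjust as needed):

```bash
# Describe the domain mapping; the status conditions show whether
# certificate provisioning has completed or is still pending.
gcloud beta run domain-mappings describe \
  --domain breadfree.es \
  --region europe-west1 \
  --platform managed
```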

You can use https://toolbox.googleapps.com/apps/dig/#A/
This will tell you the Time To Live (TTL). For more information on TTL, see https://support.google.com/a/answer/48090?hl=en
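You can run the same check locally with dig. A quick sketch, assuming you added the records Cloud Run asked for (A/AAAA records on the apex, a CNAME to ghs.googlehosted.com on the subdomain):

```bash
# Compare what resolvers currently see for the apex and the subdomain;
# the second column of each answer is the remaining TTL in seconds.
dig +noall +answer A breadfree.es
dig +noall +answer CNAME tienda.breadfree.es   # should point at ghs.googlehosted.com
```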

Related

Sporadic redirects by IAP despite valid cookie (recent development, started on Friday 14th January 2022)

Since Friday, all of our users are seeing sporadic 302s when trying to access our in-GCP, IAP-protected resources. Cookies are valid and definitely being passed with the request.
This has worked for us for two years, and nothing has changed here recently beyond standard GKE upgrades.
Since Friday we're seeing sporadic 302s from IAP (X-Goog-IAP-Generated-Response: true) as if the cookie were invalid. I can recreate the problem with a simple curl command, with my cookie stored in a file called cookie.test.
`curl -vs -b ./cookie.test https://gitlab.mydomain.com/projects/myapp.git`
This succeeds maybe 1 out of 5 times, and the behaviour is very reproducible: 2 out of 5 times we'll get a response from gitlab.mydomain.com, and the other 3 times we'll see a 303 to accounts.google.com. Same cookie every time, all requests within a few seconds of each other.
This is causing an enormous inconvenience for our team.
Has there been a change to IAP recently that might explain this? Do you have any other reports of similar behaviour?
Folks,
I am from the IAP team at Google. IAP recently made some changes to the cookie name; however, this change should have been transparent to browser users.
For people using the GCP_IAAP_AUTH_TOKEN cookie name for programmatic auth, your flows will break. The documented way to send credentials in a programmatic call is the Authorization / Proxy-Authorization header:
https://cloud.google.com/iap/docs/authentication-howto#authenticating_a_user_account
Cookies are meant for browser flows only, and IAP retains complete control of the naming and format of the cookie. If you continue to use cookies to send credentials to IAP (by reverse-engineering the new format), you run the risk of being broken again by future changes to the cookie name or format.
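For a programmatic client like the curl call above, the header-based flow looks roughly like this. A sketch, assuming a user account already authenticated with gcloud; depending on your setup, IAP may instead require an ID token minted for the IAP OAuth client ID, as described in the link above:

```bash
# Fetch an identity token for the active gcloud account and send it in the
# Authorization header instead of relying on the IAP session cookie.
TOKEN="$(gcloud auth print-identity-token)"
curl -vs -H "Authorization: Bearer ${TOKEN}" \
  https://gitlab.mydomain.com/projects/myapp.git
```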
One clarification is needed, though. In the original post you mentioned that you are getting a 302 to accounts.google.com; is that true for browser flows as well? If so, please respond with a HAR file and I'll be happy to take a look.
cheers.
I have also been facing this issue since last week and spent around two days troubleshooting it, as initially we thought it must be some problem on our side.
Good to know that I am not the only one facing it.
Would really appreciate some updates from Google around it.
However, one thing I found: there was an official Google blog post about IAP: https://cloud.google.com/blog/products/identity-security/getting-started-with-cloud-identity-aware-proxy
They updated this post on 19th January and removed the mention of the GCP_IAAP_AUTH_TOKEN cookie.
However, the line they changed is still unclear and very confusing to me.
It now says: "That token can come from either a browser cookie or, for programmatic access, from an Authorization: bearer header."
Where will the browser cookie come from, and what will its name be? There is no mention of that anywhere.
Let me know if someone finds a way to get it working again.
Thanks,
Nishant Shah

Getting random timeouts when creating Azure DevOps project with API

I'm implementing an API to create custom AzDo projects (my API calls the AzDo API).
About the creation:
Create project
Create two custom groups
Set up the permissions for the custom groups (area, iteration, build, ...)
Now my problem is that everything works fine for one project. When creating more than one project, I get timeouts from AzDo. The strange thing is that I do not create all the projects at the same time: I have a queue, and my service always grabs one item from the queue, meaning the service creates the projects in a row, not all at once. But it seems I get the timeouts after creating 4 or 5 projects in a row.
If I wait a few minutes between creations, there seem to be no problems.
Note: I really do get the timeouts randomly; sometimes it's on a POST, other times on a PUT.
Does anybody have the same problem - or better, does anyone know how to solve this issue?
Sorry for any inconvenience.
This behavior is by design and is not an issue; there is no way to change it at present.
To improve the responsiveness of the Azure DevOps service and reduce the load from invalid requests, Microsoft places restrictions on REST API requests:
Only one REST API task can be active at a given point in an account.
API responses have a 20-second timeout.
You could check this thread and the related documentation for some more details.
Currently there is no way to extend the default timeout period, so you could try to reduce the number of projects you create at once.
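Given those limits, the practical workaround is to serialize the calls and back off between them. A rough sketch, assuming a personal access token in $PAT and placeholder organization, project, and process template values (look up your real process template GUID via the _apis/process/processes endpoint):

```bash
#!/usr/bin/env bash
ORG="myorg"                                      # placeholder organization name
TEMPLATE="00000000-0000-0000-0000-000000000000"  # placeholder process template GUID
for NAME in project-a project-b project-c; do
  for ATTEMPT in 1 2 3; do               # retry a few times on timeout/throttling
    STATUS=$(curl -s -o /dev/null -w '%{http_code}' \
      -u ":${PAT}" -H "Content-Type: application/json" \
      -d '{"name":"'"${NAME}"'","capabilities":{
            "versioncontrol":{"sourceControlType":"Git"},
            "processTemplate":{"templateTypeId":"'"${TEMPLATE}"'"}}}' \
      "https://dev.azure.com/${ORG}/_apis/projects?api-version=6.0")
    [ "${STATUS}" = "202" ] && break     # 202: project creation was queued
    sleep $(( ATTEMPT * 60 ))            # back off before retrying
  done
  sleep 120   # pause between projects to stay under the server-side limits
done
```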

Malicious Bots waking up heroku free app and using up all dyno hours

I have an app hosted on Heroku which, for the last 5 years, has done fine on free dyno hours. There's a single user, and it doesn't get much use throughout the day.
As of the last couple of months, we seem to be targeted by bots that create fake accounts. We are getting so many of these bots now that they wake our app up so often it has consumed our free dyno hours.
Does anyone know how to get rid of them? I tried using invisible_captcha, but that did not seem to help.
You should consider using Rack::Attack: https://github.com/kickstarter/rack-attack
It's a middleware that allows you to block or throttle requests.
For example, if the bots use the same email domain for each new registration, you could accept only ten registrations per hour from that domain (since it's not a big website) until they calm down; see the sketch below.
Or, if they come from the same place, you could limit requests from that country based on their IP.
EDIT:
If you check the country based on the IP address, the dyno will be woken up anyway (because you'll call an external service to get that information), so it's not a good solution in this case.
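A minimal sketch of the email-domain throttle described above, written as a Rails initializer. The /users signup path and the user[email] parameter name are assumptions; adjust them to your routes. You would also add gem "rack-attack" to the Gemfile (recent versions register their middleware in Rails automatically):

```bash
# Drop a Rack::Attack throttle into the Rails app as an initializer.
cat > config/initializers/rack_attack.rb <<'RUBY'
# Allow at most 10 signups per hour per email domain.
Rack::Attack.throttle("signups/email-domain", limit: 10, period: 1.hour) do |req|
  if req.post? && req.path == "/users"
    # Use the domain part of the submitted email address as the throttle key.
    req.params.dig("user", "email").to_s.split("@").last
  end
end
RUBY
```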
Simple solution:
If you have only a single user, consider hosting the app on a secret subdomain that will not get crawled. Every time you get an issue like this, you can just change the subdomain and inform your user.
A more complex solution:
Use a service like Cloudflare to block unwanted traffic before it reaches the application.

Are any HTTP log files saved on my system by default?

I have an application hosted on Amazon EC2 on an Ubuntu machine, written in Ruby (on Rails), deployed with Capistrano, and running on Nginx.
Last Friday one module of my application crashed, and nobody in the company noticed until this morning. We spent some money on Facebook and Google ads and received a few hundred visits, but nobody created an account due to this bug.
I wonder if this configuration saves the HTTP requests and their bodies to a log file somewhere. We didn't explicitly set that up, so it would only happen if any of these technologies does it by default.
Do you know whether there is such log or not?
Nope, that wouldn't be anywhere in a usable form (I'm inferring you want to try to create the accounts from the request bodies in log files). You'll have the requests themselves in your nginx logs, and the Rails logs will contain more info about each request, but as a matter of security, sensitive information (e.g. passwords) is scrubbed from them by default. You may still be able to get some info out of them.
To answer your question a little more specifically, the usual place for these logs on your system would be:
/var/log/nginx/
/path/to/your/rails/app/log/production.log
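If you want to dig through those, something like the following would surface the signup attempts. A sketch, assuming a signup route at /users; adjust the path and log locations to your app:

```bash
# Requests (method, path, status, user agent) from the nginx access logs;
# zgrep also reads the rotated .gz files.
sudo zgrep '"POST /users' /var/log/nginx/access.log*
# The Rails production log additionally records the (filtered) parameters.
grep -A 3 'POST "/users"' /path/to/your/rails/app/log/production.log
```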
On a separate note, I would recommend looking into an error reporting service like Honeybadger, Airbrake, Raygun, Appsignal, or others so that you don't have silent failures like this moving forward.

Adding a hostname binding to a website in IIS breaks the site

I'm baffled by an issue with my site in IIS 7: it works fine when it's the only site running and no binding is being used. I have my domain name www.mysite.com registered with the DNS pointing to my IP, and it pulls up my site great. However, now I want to add a second site in IIS, so I have to start using hostname bindings, but when I put www.mysite.com in as the host name, suddenly I can't pull up the site anymore! And, I don't know if this is helpful, but if I start up the Default Web Site (which has no bindings) alongside my current website that I've added the binding to, the domain name pulls up the Default Web Site (the IIS7 jpeg) instead! It's like it completely doesn't see the binding I've added, other than the fact that it allows me to run both sites at the same time. It doesn't take time for a binding to take effect in IIS, right? Waiting until tomorrow won't make a difference, I imagine?
I've done this many times before on other servers, and as far as I can remember, I'm doing everything the same as before. I can't figure out why it's not working this time. Am I forgetting something crucial here?
The only difference I can tell is that this domain is hosted with Google, which requires me to update the DNS myself. All those other times, I've called up GoDaddy and said, 'Hey, can you point this domain to this IP address?' and they did it for me. Is it possible that I've updated the DNS incorrectly, or incompletely? I don't know if that can be the case when the domain has been pulling up the website just fine all this time, until I tried to add the second website on the server... Any ideas?
BTW, this is a virtual server (2008 R2) hosted at atlantic.net; I've never used them before. All the other virtual servers I've used have been with 1and1. I don't know if that makes a difference too.
Ok, I figured it out: it turns out the DNS WAS set incorrectly after all! I was able to narrow down the issue when I started creating a bunch of dummy sites and playing with my local computer's hosts file, and saw that everything seemed to be routing just fine, so clearly IIS was working as expected. I then called GoDaddy repeatedly until I found someone who agreed to help me with a Google domain. Apparently, from what I could understand, I had set the IP address in the wrong place, which made the domain forward as a bare IP address, so there was no hostname for IIS to match it to. That is why it pulled up the default website (no bindings) with no problems, and ONLY the default website. He directed me to another screen (the DNS Zone File tab) where I could set it properly. Now everything goes to the expected sites!
Hopefully this will help another newbie out one day when they have to set their own DNS (gasp!)
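For anyone debugging the same thing, you can test which site IIS serves for a given host header without touching DNS or the hosts file. A quick sketch, with 203.0.113.10 standing in for your server's IP:

```bash
# Pin the hostname to the server IP for this one request (bypasses DNS).
curl -v --resolve www.mysite.com:80:203.0.113.10 http://www.mysite.com/
# Equivalent: hit the IP directly and supply the Host header yourself.
curl -v -H "Host: www.mysite.com" http://203.0.113.10/
```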
