I'm developing a Google Sheets add-on that calls an API. In the API configuration, a url like https://longString-script.googleusercontent.com had to be added to the list of urls allowed to make cross-domain requests.
Today, I noticed that this url changed to https://sameLongString-0lu-script.googleusercontent.com.
The url changed about three months after development started.
I'm wondering what makes the url change, because every change also means updating the configuration in our back end.
EDIT: Thanks for both of your responses so far. They helped me understand better how this works, but I still don't know if/when/how/why the url is going to change.
Quick update: the changing part of the url was "-1lu" for another user today (but not for me when I was testing). It's quite annoying since we can't use wildcards in the Google dev console redirect uri field. Am I supposed to paste a bunch of "-xlu" uris, with x from 1 to 10 or so, just so I don't have to touch this for a while?
For people coming across this now, we've also just encountered this issue while developing a Google Add-on. We've needed to add multiple origin urls to our oauth client for sign-in, following the longString-#lu-script.googleusercontent.com pattern mentioned by OP.
This is annoying, as each url has to be entered separately in the authorized urls field (subdomain or wildcard matching isn't allowed). It's also pretty fragile, since it breaks if Google changes the urls they host our add-on from. Furthermore, I wasn't able to find any documentation from Google confirming that these are the script origins.
URLs are managed by the host in various ways. At the most basic level, when you build a web server you decide what to call it and what to call any pages on it. Google and other large content providers, with their farms of servers and redundant data centers, manage this a bit differently, but for your purposes the outcome is effectively the same: you need to ask them, since they are the hosting provider of your cloud content.
Something that MIGHT be related is that Google recently rolled out (or at least was scheduled to roll out) some changes dealing with the googleusercontent.com domain and Picasa images. So the Google support forums will be the way to go with this question for the freshest answers, since the cause of a URL change is usually specific to that moment in time and not necessarily something you need to worry about changing repeatedly. But again, they are going to need to confirm whether it was related to those recently planned changes... or not. :-)
When you find something out, you can update this question in case it is of use to others, especially if they tell you that it wasn't a one-time thing related to a change on their end.
This is more likely related to Changing origin in Same-origin Policy. As discussed:
A page may change its own origin with some limitations. A script can set the value of document.domain to its current domain or a superdomain of its current domain. If it sets it to a superdomain of its current domain, the shorter domain is used for subsequent origin checks.
For example, assume a script in the document at http://store.company.com/dir/other.html executes the following statement:
document.domain = "company.com";
After that statement executes, the page can pass the origin check with http://company.com/dir/page.html
So, as noted:
When using document.domain to allow a subdomain to access its parent securely, you need to set document.domain to the same value in both the parent domain and the subdomain. This is necessary even if doing so is simply setting the parent domain back to its original value. Failure to do this may result in permission errors.
This is either a problem that Google is inflicting upon me, or a problem I am inflicting upon myself. I'm not totally sure.
When I first created my website a couple years ago, it followed a path similar to: http://www.mywebsite.abc123.com
Now, after a change in hosting services, I changed my domain to simply: https://www.mywebsite.com
I also added an SSL certificate at the time, for what it's worth, and it's been almost six months. I have all the variations (past and present) of my website registered and verified with Google's search console, but I can see no reason why the http://www.mywebsite.abc123.com link is getting indexed over the https://www.mywebsite.com link. I actually just assumed that http://www.mywebsite.abc123.com wouldn't even work anymore.
I've read about 301 redirects and it looks like something like that would solve my problem, but upon trying to implement it, I was confronted with nothing but a "Too many redirects" error.
Long story short, Google won't index my newer better URL.
But Yahoo and Bing will.
301 redirects have to be set up on the old domain so that it points to the new one. If you still have access to that domain, you can add the redirects via .htaccess or in the hosting admin panel.
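A minimal .htaccess sketch of what that could look like, assuming the old site is served by Apache with mod_rewrite enabled (the host names below are just the ones from your question):

RewriteEngine On
# Only rewrite requests that still arrive on the old host name,
# sending them to the same path on the new https domain.
RewriteCond %{HTTP_HOST} ^www\.mywebsite\.abc123\.com$ [NC]
RewriteRule ^(.*)$ https://www.mywebsite.com/$1 [R=301,L]

The condition on %{HTTP_HOST} is what usually avoids the "Too many redirects" loop: requests that already land on www.mywebsite.com don't match it, so they aren't redirected again.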
I am looking to implement First Click Free in my Rails application. Google has information on how to verify if a googlebot is viewing your site here.
I have been searching to see if there is anything existing for Rails to do this, but I have been unable to find anything. So firstly, does anyone know of anything? If not, could anyone point me in the right direction on how to implement the verification that page suggests?
Also, in that solution, it has to do a lookup every time to try and detect Google; that seems like it's going to be a big performance hit if I have to do it on every page load. I could cache the IP if it has been verified in the past, but Google has stated that their IPs change, so at some point an IP may no longer belong to them. Although that probably doesn't happen regularly, so it may not be that big of an issue.
Many thanks!!
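For reference, the check that Google's page describes (a reverse DNS lookup on the request IP, then a forward lookup to confirm the host maps back to that IP) can be sketched with Ruby's standard Resolv library. This is only a sketch; verified_googlebot? is a hypothetical helper name:

require 'resolv'

# Reverse-resolve the request IP, check the host belongs to Google,
# then forward-resolve that host and confirm it maps back to the same IP.
def verified_googlebot?(ip)
  host = Resolv.getname(ip)
  return false unless host.end_with?('.googlebot.com', '.google.com')
  Resolv.getaddresses(host).include?(ip)
rescue Resolv::ResolvError
  false
end

To avoid the two DNS lookups on every page load, the result could be cached per IP (e.g. with Rails.cache) for some reasonable period, accepting the small risk that an IP stops belonging to Google during that window.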
Check out the browser gem: https://github.com/fnando/browser
What I'd do is use the browser.bot? method to check if your site is being accessed by a bot or not. If you care about the Googlebot specifically, you could check whether browser.name includes googlebot. Keep in mind that this gem just checks the user agent sent by the client's browser, which could of course be spoofed. Sounds like that isn't a huge concern for your purposes.
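As a rough sketch of how that could look in a controller (assuming the gem's Browser.new(user_agent) constructor and the bot?/name methods mentioned above; details can vary between gem versions):

class ApplicationController < ActionController::Base
  private

  # True if the user agent claims to be Googlebot. This only inspects the
  # User-Agent string, so it can be spoofed, as noted above.
  def claims_to_be_googlebot?
    browser = Browser.new(request.user_agent)
    browser.bot? && browser.name.to_s.downcase.include?("googlebot")
  end
end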
I've built a Ruby gem for that recently; it's called "legitbot".
You may learn if a Web request comes from a supported bot using
bot = Legitbot.bot(userAgent, ip)
"legitbot" does this looking into User-agent and searching for a bot signature, i.e. how bots identify themselves. This doesn't guarantee that the Web request IP really comes from e.g. Googlebot. To make sure it is, call
bot.detected_as # => "Google"
bot.valid? # => true
bot.fake? # => false
Supported bots are Googlebot, Yandex bots, Bing, Baidu, DuckDuckGo.
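A rough sketch of wiring this into a Rails controller; from_real_googlebot? is a hypothetical helper, the detected_as comparison follows the value shown above, and the caching is just one way to avoid repeating the DNS verification that valid? performs for every request from the same IP:

# Caches the verdict per IP so the reverse/forward DNS check behind
# bot.valid? doesn't run on every page load.
def from_real_googlebot?
  Rails.cache.fetch(["googlebot-verified", request.remote_ip], expires_in: 12.hours) do
    bot = Legitbot.bot(request.user_agent, request.remote_ip)
    !bot.nil? && bot.detected_as == "Google" && bot.valid?
  end
end

You could then branch on from_real_googlebot? (for example in a before_action) when deciding whether to serve the First Click Free version of a page.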
We have realized that the URL http://Keyword:redacted@example.com/ redirects to http://example.com/ when copied and pasted into the browser's address bar.
As far as I understand, this syntax might be used in some ftp connections, but we have no such use on our website. We suspect that we are being targeted by an attack, and we have been warned by Google that we are passing PII (mostly email addresses) in our URL requests to their Google Adsense network. We have not been able to find the source, but we have been warned that the violation is in the form of http://Keyword:redacted@example.com/
How can we stop this from happening?
What URL redirect method can we use to reject this and return an error message?
FYI, I experienced a similar issue for a client website and followed up with Adsense support. The matter was escalated to a specialist team who investigated and determined that flagged violations with the format http://Keyword:redacted@example.com/ will be considered false positives. I'm not sure if this applies to all publishers or was specific to our case, but it might be worth following up with Adsense support.
There is nothing you can do. This is handled entirely by your browser long before it even thinks about "talking" to your server.
That's a strange URL for people to copy/paste into the browser's address bar unless they have been told/trained to do so. Your best bet is to tell them to STOP IT! :-)
I suppose you could look at the HTTP Authorization headers and report an error if they come in populated... (This would be $_SERVER['PHP_AUTH_USER'] in PHP.) I've never looked at these values when the server doesn't request them, so I'm not sure if it would work or not...
The syntax http://abc:def@something.com means you're sending userid='abc', password='def' as basic authentication parameters. Your browser will pull out the userid and password and send them along as authentication information, leaving the url without them.
As Peter Bowers mentioned, you could check the authorization headers and see if they're coming in that way, but you can't stop others from doing it if they want. If it happens a lot then I'd suspect that somewhere there's a web form asking users to enter their user/password and it's getting encoded that way. One way to sleuth it out would be to see if you can identify someone by the userid specified.
Having Keyword:redacted sounds odd. It's possible Google Adsense changed the values to avoid including confidential info.
I have a website, let's just call it search, open in one of my browser tabs. search has a form which, when submitted, runs queries on a database to which I don't have direct access. The problem with search is that the interface is rather horrible (one cannot save the aforementioned queries, etc.).
I've analyzed the request (with a proxy) that is sent to the server via search and I am able to replicate it. The server even sends back the correct result, but the browser is not able to open it (same-origin policy). Do you have any ideas on how I could tackle this problem?
The answer to your question is: you can't. At least not without using a proxy as suggested in the answer by Walter, and that would mean your web site visitors would have to knowingly log in to your web site using their other web site's credentials (hmm, doesn't sound good...).
The reason you can't do this is related to security: if you could run a script on the tab next to the one with the site open (which is what I'm guessing you want to do), you would be able to do a CSRF attack, get any data you wish, and send it to hack.com.
This is, of course, assuming that there has to be a login somewhere in the process, otherwise there's no reason for you to not be able to create a simple form which posts the required query and gets the info.
If you did have access to the mentioned website, you would be able to support cross-domain requests using JSONP.
It is not possible to bypass the same-origin policy in JavaScript (assuming that's what you want to do, judging by your question). You need to set up a server-side proxy that does the request for you and returns the html.
A simple way of doing this in PHP would be like this:
<?php
// Forward the query string from this request to the search site and
// return the html it responds with to the browser.
echo file_get_contents("http://searchdomainname.com" . "?" . http_build_query($_GET, '', '&'));
?>
I would like to protect my S3 documents behind my Rails app such that if I go to
www.myapp.com/attachment/5, the app authenticates the user prior to displaying/downloading the document.
I have read similar questions on stackoverflow but I'm not sure I've seen any good conclusions.
From what I have read there are several things you can do to "protect" your S3 documents.
1) Obfuscate the URL. I have done this. I think this is a good thing to do so no one can guess the URL. For example, it would be easy to "walk" the URLs if your S3 URLs are obvious: https://s3.amazonaws.com/myapp.com/attachments/1/document.doc. Having a URL such as:
https://s3.amazonaws.com/myapp.com/7ca/6ab/c9d/db2/727/f14/document.doc seems much better.
This is great to do but doesn't resolve the issue of passing around URLs via email or websites.
2) Use an expiring URL as shown here: Rails 3, paperclip + S3 - Howto Store for an Instance and Protect Access
For me, however, this is not a great solution because the URL is exposed (even if just for a short period of time) and another user could potentially reuse it quickly. You have to tune the expiration time to allow the download to complete without leaving too much time for copying. It just seems like the wrong solution.
3) Proxy the document download via the app. At first I tried to just use send_file: http://www.therailsway.com/2009/2/22/file-downloads-done-right but the problem is that these files can only be static/local files on your server and not served via another site (S3/AWS). I can however use send_data and load the document into my app and immediately serve the document to the user. The problem with this solution is obvious - twice the bandwidth and twice the time (to load the document to my app and then back to the user).
I'm looking for a solution that provides the full security of #3 but does not require the additional bandwidth and time for loading. It looks like Basecamp is "protecting" documents behind their app (via authentication) and I assume other sites are doing something similar but I don't think they are using my #3 solution.
Suggestions would be greatly appreciated.
UPDATE:
I went with a 4th solution:
4) Use amazon bucket policies to control access to the files based on referrer:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?UsingBucketPolicies.html
UPDATE AGAIN:
Well, #4 can easily be worked around via a browser's developer tools. So I'm still in search of a solid solution.
You'd want to do two things:
Make the bucket and all objects inside it private. The naming convention doesn't actually matter; the simpler, the better.
Generate signed URLs, and redirect to them from your application. This way, your app can check whether the user is authenticated and authorized, then generate a new signed URL and redirect them to it using a 301 HTTP status code. This means that the file never goes through your servers, so there's no load or bandwidth on you. Here are the docs for presigning a GET object request (a short sketch follows the link below):
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Presigner.html
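A minimal sketch of that flow with the aws-sdk-s3 gem; the controller name, the authentication/authorization calls, the bucket name, and the s3_key attribute are all made up for illustration:

require "aws-sdk-s3"

class AttachmentsController < ApplicationController
  before_action :authenticate_user!   # whatever auth your app already uses

  def show
    attachment = current_user.attachments.find(params[:id])  # authorization check

    url = Aws::S3::Presigner.new.presigned_url(
      :get_object,
      bucket: "my-private-bucket",   # assumed bucket name
      key: attachment.s3_key,        # assumed column holding the object key
      expires_in: 60                 # seconds the signed link stays valid
    )

    # 301 as suggested above; a temporary status (302/303) also works and
    # avoids browsers caching a redirect to an expiring URL. allow_other_host
    # is required for off-site redirects on Rails 7+.
    redirect_to url, status: :moved_permanently, allow_other_host: true
  end
end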
I would vote for number 3; it is the only truly secure approach, because once you pass the user an S3 URL, it stays valid until its expiration time, and a crafty user could use that hole. The only question is, will that affect your application?
Perhaps you could set the expiration time lower, which would minimise the risk?
Take a look at an excerpt from this post:
Accessing private objects from a browser
All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated url for an object like this:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
By default authenticated urls expire 5 minutes after they were generated. Expiration options can be specified either with an absolute time since the epoch with the :expires option, or with a number of seconds relative to now with the :expires_in option:
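For example, following the :expires_in option the excerpt describes, a link valid for one minute would look like this (same url_for call as above, using the old aws-s3 gem's hash-style option):

S3Object.url_for('beluga_baby.jpg', 'marcel_molina', :expires_in => 60)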
I have been in the process of trying to do something similar for quite some time now. If you don't want to use the bandwidth twice, then the only way this is possible is to let S3 do it. Now, I am totally with you about the exposed URL. Were you able to come up with any alternative?
I found something that might be useful in this regard - http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempFederationTokenRuby.html
Once a user logs in, an AWS session with their IP as part of the AWS policy should be created, and that session can then be used to generate the signed urls. So in case somebody else grabs the URL, the request will be denied, since it comes from a different IP. Let me know if this makes sense and is secure enough.
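A rough sketch of that idea with the Ruby SDK: fetch temporary credentials from STS whose session policy is restricted to the caller's IP, then presign with those credentials. The bucket name, session name, and durations are made up; this is an outline of the approach in the linked docs, not a drop-in implementation:

require "aws-sdk-s3"   # aws-sdk-core, which this depends on, provides Aws::STS
require "json"

def ip_locked_url(client_ip, key)
  # Temporary credentials that may only read this bucket, and only
  # when the request comes from client_ip.
  policy = {
    "Version" => "2012-10-17",
    "Statement" => [{
      "Effect"    => "Allow",
      "Action"    => ["s3:GetObject"],
      "Resource"  => ["arn:aws:s3:::my-private-bucket/*"],        # assumed bucket
      "Condition" => { "IpAddress" => { "aws:SourceIp" => client_ip } }
    }]
  }.to_json

  temp = Aws::STS::Client.new
                         .get_federation_token(name: "doc-download",
                                               policy: policy,
                                               duration_seconds: 900)
                         .credentials

  creds = Aws::Credentials.new(temp.access_key_id, temp.secret_access_key,
                               temp.session_token)

  Aws::S3::Presigner.new(client: Aws::S3::Client.new(credentials: creds))
                    .presigned_url(:get_object, bucket: "my-private-bucket",
                                   key: key, expires_in: 300)
end

One caveat: visitors behind shared or changing IPs (mobile carriers, corporate proxies) may get rejected when they follow the link, so the expiry should stay short.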