I want to integrate the Google AdWords conversion script into my web app, so I have to extend my CSP rule to allow it.
I have a problem allowing https://www.google.xx/ads/ in the script-src policy because it looks like the domain changes depending on the region.
For example, if I access the page in Switzerland the allowed script should be https://www.google.ch/ads/, but if I access it in Romania it should be https://www.google.ro/ads/, etc.
How could I allow all domains in my policy without having to list all countries and regions of the world?
Thanks in advance for the help.
P.S.: Console stacktrace
Refused to load the script 'https://www.google.ro/ads/user-lists/8...
P.P.S.: I tried to whitelist it using a nonce, but it looks like the following script can't be whitelisted this way:
<script nonce="random-base64">
  window.dataLayer = window.dataLayer || [];
  var gtag = function gtag() { // <---- There, CSP problem
    dataLayer.push(arguments);
  };
  gtag('js', new Date());
  gtag('config', 'SOMETHING');
</script>
P.P.P.S.: Same problem with img-src btw - see Google Adwords CSP (content security policy) img-src.
How could I allow all domains in my policy without having to list all countries and regions of the world?
You don't. There's no TLD-level wildcard, and for good reason: you can't possibly guarantee that a different TLD with the same main domain belongs to the same entity, so such a wildcard would make no sense.
I've had this issue with Google AdSense as well, and basically your only options are an excessive whitelist (manually listing every possible domain and hoping they don't add new ones), an even more excessive global whitelist (extremely not recommended), or listing the most common countries of origin and accepting that some geolocales will be excluded.
The third option is generally the best. I use AdSense rather than AdWords, but most of my traffic comes from the US, and I'm willing to lose ad impressions from a few low-traffic countries to avoid maintaining an absurd list.
The only real solution here can come from Google: they have to stop serving resources from different TLDs (which is, IMO, a terrible practice in all cases since hreflang tags are a thing anyway). I'm kind of surprised Google is still doing it in 2018, with CSP being a moderately big deal, but here we are.
As for img-src, just use https: IMO. It's okay to load images over-eagerly when you're dealing with an unpredictable set of third-party domains. CSP is meant to block dangerous content; img-src is a pretty low-risk directive and would pretty much have to be combined with a second exploit to cause real harm.
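If it helps, here's a minimal sketch of what option three plus a broad img-src could look like as a meta tag. The country list is deliberately partial and the rest of the policy is a placeholder; you'd add whatever other hosts your existing tags already need:
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self' https://www.google.com https://www.google.ch https://www.google.ro https://www.google.de https://www.google.co.uk; img-src https:">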
I had a recent outage on a Nginx/Rails application server. It turned out we were being bombarded by requests to a particular URL that takes a few seconds to load. It appears that a user was continually refreshing that page for a number of minutes - my guess is they accidentally put some object on their keyboard in such a way as to trigger a constant stream of browser refreshes.
Regardless of the cause, I need to put protection in place against this kind of problem, and note that this is not static content - it's dynamic, user-specific content sitting behind authentication.
I've looked into using Cache-Control but this appears to be a non-starter - on Chrome at least, refreshing a page within the same tab will trigger a request regardless of the Cache-Control header (cf iis - Is Chrome ignoring Cache-Control: max-age? - Stack Overflow)
I believe the answer may be rate limiting. If so, I wouldn't be able to do it based on IP because many of our customers share the same one. However I may be able to add a new header to identify a user and then apply rate limiting in Nginx based on this.
Does this sound like the way forward? This feels like it should be a fairly common problem!
Nginx rate limiting is a fast configuration update if immediate mitigation is needed. As others have mentioned, caching would also be ideal when combined with this.
# limit_req_zone and upstream must be defined at the http level, not inside server {}
# DoS mitigation - key on host, IP and User-Agent so separate machines behind one NAT aren't funneled into a single bucket
limit_req_zone $host$binary_remote_addr$http_user_agent zone=rails_per_sec:10m rate=2r/s;
upstream rails { ... }

server {
    try_files $uri $uri/ @rails;
    location @rails {
        limit_req zone=rails_per_sec burst=10 nodelay;
        ...
    }
}
The $http_authorization header or a unique cookie (e.g. $cookie_foo) could also be used to uniquely identify requests that would collide with the same IP/user-agent values.
limit_req_zone $host$binary_remote_addr$http_authorization ...;
limit_req_zone $host$binary_remote_addr$cookie_foo ...;
A colleague of mine suggested a solution that I think is the best fit for our situation. I'll explain why in case this proves useful to anyone else.
Note that we were receiving requests at a low rate - just 6 per second. The reason this was a problem was that the page in question was quite a slow loading report, only accessible to authenticated users.
Server-side caching is not a great solution for us because it needs to be implemented individually on each affected page and we have a complex app with lots of different controllers.
Rate-limiting via Nginx might be viable but tricky to optimise and also has issues with testability.
Anyway, my colleague's solution is as follows: we already have a table that logs details of each request, including the ID of the user that made it. To find out if a user is refreshing too often, we simply schedule a Sidekiq job once every, say, 30 seconds to check this table for users with a refresh rate above our threshold and then kill any active sessions.
How you kill a session depends on how you are managing them - in our case, we could simply add a "rate_limited" flag to the user, have our Sidekiq job set it to true, and then check the value of this flag on each request. If it's true, the user is redirected away from the slow page and on to the login screen, which will happily deal with being refreshed 6 times per second.
You could achieve something similar even without a request logging table, e.g. by keeping track of the request rate in a new column on the users table.
Note that this solution is a better UX than Nginx rate-limiting, as users are never actually locked out of the app.
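A rough sketch of the idea, assuming a Rails app with Sidekiq (the RequestLog model, the rate_limited column, the threshold and the scheduling mechanism are all placeholders for whatever your own schema and setup use):
# Hypothetical Sidekiq job, scheduled to run every ~30 seconds (e.g. via sidekiq-cron).
class RateLimitCheckJob
  include Sidekiq::Worker

  THRESHOLD = 60 # max requests per 30-second window before we flag the user

  def perform
    # Find users whose recent request count exceeds the threshold...
    offender_ids = RequestLog
      .where("created_at > ?", 30.seconds.ago)
      .group(:user_id)
      .having("COUNT(*) > ?", THRESHOLD)
      .pluck(:user_id)

    # ...and flag them; the flag is what "kills" the session on the next request.
    User.where(id: offender_ids).update_all(rate_limited: true)
  end
end

# Hypothetical per-request check; current_user is whatever your auth layer provides.
class ApplicationController < ActionController::Base
  before_action :check_rate_limited

  private

  def check_rate_limited
    redirect_to login_path if current_user&.rate_limited?
  end
end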
We have several interactive kiosks with files that run locally off of the hard drive. They are not locally hosted; there is no web server involved. We do have internet connectivity. Can we track user interactions with Google Tag Manager? I have read a few posts that seem to indicate it's possible, but the setup has changed dramatically since they were authored.
We have our GA and GTM set up, with the appropriate scripts embedded within the local html index file. We have set up a container, and several tags and triggers for simple tracking of page views. But there is no live data coming into my GA dashboard. I am sure I am missing steps if this is possible. Any help much appreciated.
Hoping I am headed in the right direction here - but still no tracking. Where do I get a clientId to manually pass in? Thank you!!!
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXXXXXX-X',{
'storage':'none',
'clientId': 'XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX'
});
</script>
Your question is about GTM, but it is much more likely that your problem is with Google Analytics. There is nothing that prevents GTM from running in a local file (unless you use a very old GTM snippet - I think before GTM switched completely to https, Google used a URL without a protocol, which would need to be changed), but Google Analytics will not work in a default installation if it cannot set cookies (which in a local file it can't).
At the very least you would have to set the "storage" field to "none" in your GA tag or GA settings variable, and then pass in a client id manually (in a kiosk it is rather hard to determine when a new visit starts, so maybe you could set a different client id every time users return to a home screen or something like that. Or you just live with the fact that everybody is the same user in GA).
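For example, a minimal sketch of that (the generateClientId helper is made up for illustration; any random UUID-style string works as a GA client id):
<script>
  // Hypothetical helper: builds a random RFC 4122-style UUID to use as the client id.
  function generateClientId() {
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
      var r = Math.random() * 16 | 0;
      var v = c === 'x' ? r : (r & 0x3 | 0x8);
      return v.toString(16);
    });
  }

  ga('create', 'UA-XXXXXXXXX-X', {
    'storage': 'none',
    'clientId': generateClientId() // regenerate this when the kiosk returns to its home screen
  });
  ga('send', 'pageview');
</script>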
I'm developing a Google Sheets add-on. The add-on calls an API. In the API configuration, a url like https://longString-script.googleusercontent.com had to be added to the list of urls allowed to make requests from another domain.
Today, I noticed that this url changed to https://sameLongString-0lu-script.googleusercontent.com.
The url changed about 3 months after development start.
I'm wondering what makes the url change, because it also means a configuration change in our back-end every time.
EDIT: Thanks for both your responses so far. Helped me understand better how this works but I still don't know if/when/how/why the url is going to change.
Quick update, the changing part of the url was "-1lu" for another user today (but not for me when I was testing). It's quite annoying since we can't use wildcards in the google dev console redirect uri field. Am I supposed to paste a lot of "-xlu" uris with x from 1 to like 10 so I don't have to touch this for a while?
For people coming across this now, we've also just encountered this issue while developing a Google Add-on. We've needed to add multiple origin urls to our oauth client for sign-in, following the longString-#lu-script.googleusercontent.com pattern mentioned by OP.
This is annoying as each url has to be entered separately in the authorized urls field (subdomain or wildcard matching isn't allowed). Also this is pretty fragile since it breaks if Google changes the urls they're hosting our add-on from. Furthermore I wasn't able to find any documentation from Google confirming that these are the script origins.
URLs are managed by the host in various ways. At the most basic level, when you build a web server you decide what to call it and what to call any pages on it. Google and other large content providers with farms of servers and redundant data centers and everything are going to manage it a bit differently, but for your purposes, it will be effectively the same in that ... you need to ask them since they are the hosting provider of your cloud content.
Something that MIGHT be related is that Google recently rolled out some changes dealing with the googleusercontent.com domain and Picasa images (or at least was scheduled to). So the Google support forums will be the way to go with this question for the freshest answers, since the cause of a URL change is usually specific to that moment in time and not something that you necessarily need to worry about changing repeatedly. But again, they are going to need to confirm whether it was related to the recently planned changes... or not. :-)
When you find something out you can update this question in case it is of use to others. Especially, if they tell you that it wasn't a one time thing dealing with a change on their end.
This is more likely related to Changing origin in Same-origin Policy. As discussed:
A page may change its own origin with some limitations. A script can set the value of document.domain to its current domain or a superdomain of its current domain. If it sets it to a superdomain of its current domain, the shorter domain is used for subsequent origin checks.
For example, assume a script in the document at http://store.company.com/dir/other.html executes the following statement:
document.domain = "company.com";
After that statement executes, the page can pass the origin check with http://company.com/dir/page.html
So, as noted:
When using document.domain to allow a subdomain to access its parent securely, you need to set document.domain to the same value in both the parent domain and the subdomain. This is necessary even if doing so is simply setting the parent domain back to its original value. Failure to do this may result in permission errors.
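As a sketch, using the store.company.com / company.com pair from the quoted example, both pages have to opt in explicitly:
<!-- On http://store.company.com/dir/other.html -->
<script>
  document.domain = "company.com"; // the subdomain relaxes its origin to the parent domain
</script>

<!-- On http://company.com/dir/page.html -->
<script>
  document.domain = "company.com"; // the parent must also set it, even though it's its own value
</script>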
I'm forcing https to access my website, but some of the content must be loaded over http (for example, video content that cannot be served over https), and the browsers block those requests because of the mixed-content policy.
After hours of searching I found that I can use Content-Security-Policy, but I have no idea how to allow mixed content with it.
<meta http-equiv="Content-Security-Policy" content="????">
You can't.
CSP is there to restrict content on your website, not to loosen browser restrictions.
Secure https sites give users certain guarantees, and it's not really fair to then allow http content to be loaded over them (hence the mixed-content warnings), and really not fair if you could hide those warnings without your users' consent.
You can use CSP for a couple of things to aid a migration to https, for example:
You can use it to automatically upgrade http requests to https (though browser support isn't universal). This helps in case you missed changing an http link to its https equivalent. However, this assumes the resource can be loaded over https, and it sounds like yours cannot, so that's not an option.
You can also use CSP to help you identify any http resources on your site that you missed, by reporting a message back to a service you can monitor whenever an attempt is made to load an http resource. This lets you identify and fix the http links so you don't have to depend on the automatic upgrade above.
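For example, a minimal sketch of the reporting approach (the /csp-report endpoint is a placeholder you would implement yourself or point at a reporting service). Note it has to be delivered as an HTTP response header, since report-uri is ignored in a meta tag:
Content-Security-Policy-Report-Only: default-src https:; report-uri /csp-report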
But neither is what you are really looking for.
You shouldn't... but you CAN. The feature is demonstrated here: an HTTP PNG image converted on the fly to HTTPS.
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
There's also a new permissions API, described here, that allows a Web server to check the user's permissions for features like geolocation, push, notification and Web MIDI.
Seeing some odd behaviour in Chrome, and not sure if it's expected behaviour when using appcache, or just Chrome.
It's a single-page app powered by our REST API. It works fine when the REST API is requested over HTTP, but as soon as we change the url to the HTTPS version it stops working. There's not a lot (i.e. any) information in Chrome's console as to why it decides to stop working.
We've managed to narrow it down to the NETWORK section in the appcache file, the only way we can get it to work is to use the * wildcard, which we don't want to do, as that bypasses the whole point of the appcache, and reduces security (from my understanding from reading the docs etc).
We've tried any and all variations of the API url (as in combinations of it with wildcards in various relevant locations), but none seem to work (even a https://* doesn't allow a successful request).
Anyone experienced with this know what's going on?
Thanks
Need a bit of clarification (see my comment), but in the meantime:
The NETWORK behaviour of the manifest is really there to, according to the spec, make "the testing of offline applications simpler", by reducing the difference between online and offline behaviour. In reality, it just adds another gotcha.
By default, anything that isn't explicitly in the manifest (listed in the manifest file), implicitly part of the cache (a visited page that points to the manifest), or covered by a FALLBACK prefix, will fail to load, even if you're online, unless the url is listed in the NETWORK section or the NETWORK section lists *.
Wildcards don't have special meaning in the NETWORK section: if you list http://whatever.com/* it will allow requests to that literal url, since an asterisk is a valid character in a url. The only special case is a single *, which means "allow the page to make network requests for any resources that aren't in the cache".
Basically, using * in NETWORK isn't a security risk; in fact it's probably what you want to do, and every AppCache site I've built uses it.
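For illustration, a sketch of a manifest along those lines (the file names are made up): the app shell is cached explicitly, everything else is allowed over the network via *, and an offline fallback page is declared:
CACHE MANIFEST
# v1 - bump this comment to force the cache to update

CACHE:
index.html
app.js
app.css

NETWORK:
# a single * allows requests for anything not in the cache, including the HTTPS API
*

FALLBACK:
/ offline.html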
I drew this flow chart to try and explain how appcache loads pages and resources: