Parser-blocking, cross-origin script on Cloudflare

We are using Cloudflare on one of our sites, and last week we noticed that the site doesn't load properly. We can see the following error in the console:
A parser-blocking, cross-origin script, http://ajax.cloudflare.com/cdn-cgi/nexp/dok3v=f2befc48d1/cloudflare.min.js, is invoked via document.write. This may be blocked by the browser if the device has poor network connectivity. See https://www.chromestatus.com/feature/5718547946799104 for more details.
This happens in both Firefox and Chrome, starting last week when new versions of these browsers came out. We have tried to contact Cloudflare, but there has been no reply from them. Inspecting their code we can see the document.write call, but we don't have access to change it.
Has anyone come across a solution for this?
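For reference, the pattern Chrome is warning about is a script tag written into the page with document.write. When you control the injecting code, the usual workaround is to append the script element asynchronously instead; a minimal sketch (using the URL from the error message above; in this case the injecting code lives on Cloudflare's side, so in practice the fix is usually to toggle whichever Cloudflare feature injects it, such as Rocket Loader, in the dashboard):
// Instead of injecting via document.write, append the element
// so it loads asynchronously and doesn't block the parser.
const s = document.createElement('script');
s.src = 'https://ajax.cloudflare.com/cdn-cgi/nexp/dok3v=f2befc48d1/cloudflare.min.js';
s.async = true;
document.head.appendChild(s);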

Related

Identity Server 4 with Chrome 76 gets stuck on authorize callback

At my work, we are finally upgrading our old Identity Server 3 to 4. We just got a very weird problem doing so. Everything works fine in all major browsers, but we also need to support some Electron clients. Here is where the weird part begins. All very old clients using Electron version 3 still work. All newer clients starting at Electron 9 also work. The only clients that don't work are the ones using Electron 6 (Chrome 76).
I already found this very helpful article written by Sebastian Gingter which helped to get the login working. But it only got me one step further. Now the client gets stuck at the connect/authorize/callback endpoint using the response_mode = form_post.
I already found some articles/Stack Overflow questions suggesting that I check the redirect URIs and downgrade the CSP to version 1. The redirect URIs are configured correctly, since the other clients work. The CSP does not help, since I don't even get that far: it seems that the response body is never even loaded by Electron/Chrome.
[DevTools timing screenshot]
The request never finishes. On the server-side, it does though. I debugged through the IS 4 code and the dynamic HTML is written to the response like with all the other clients. I even called CompleteAsync() on the response manually and it still did not finish.
I have researched and debugged for quite some time now and am out of ideas. Does anyone out there know this issue and, more importantly, know how to fix it?
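For context on what gets stuck: with response_mode = form_post the callback is not a redirect but a small HTML page that auto-posts the authorization result back to the client, roughly like the sketch below (illustrative only, not IdentityServer's exact markup; client.example stands in for the real redirect URI). If that body never arrives, or its inline script never runs, the flow stalls exactly at connect/authorize/callback:
<!-- Rough shape of a form_post callback page (illustrative) -->
<form method="post" action="https://client.example/signin-oidc">
  <input type="hidden" name="code" value="..." />
  <input type="hidden" name="state" value="..." />
</form>
<script>
  // Auto-submit the result back to the client once the page has loaded.
  window.addEventListener('load', function () { document.forms[0].submit(); });
</script>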

.com extension adds random characters/numbers

I'm using a domain name with this general structure: http://mydomainname.com/
However, when I click it, I get a 404 message. And when I look in the URL, it's not http://mydomainname.com/ but surprisingly http://mydomainname.com/YkPWZ/.
How did YkPWZ/ appear automatically and what can I do to eliminate this issue? Sometimes accessing http://mydomainname.com/ works fine, but most of the time the browser automatically tacks on some random characters at the end of the URL, throwing the 404 message. This is not a browser-specific issue and I've had a few colleagues replicate this issue on different operating systems (both desktop and iOS).
P.S. If it matters at all, I generated my website using GitHub Pages (markdown files, not HTML).
I'm quite certain this is an issue on the GoDaddy side of things, though I'm unable to find any official documentation on the subject. As noted in comments above, the redirect isn't coming from GitHub Pages.
I found an old thread discussing the issue. Here is a brief summary:
GoDaddy may use redirects like this to handle load balancing on their shared hosting servers.
In several cases, users contacted GoDaddy to ask about the problem and had the issue resolved, but were never told the technical specifics of what was happening.
If you wish to stay with GoDaddy, I recommend contacting them and pointing them to the thread I found above. They may be able to resolve the issue for you, though I wouldn't expect an explanation.
Alternatively, you can switch to another web host; in many circles GoDaddy isn't rated very highly, and luckily there are plenty of web hosts to choose from. You can also use a custom domain directly with GitHub Pages, bypassing a third-party host entirely.
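If you do go the GitHub Pages route, the DNS setup at your registrar is roughly the following (a sketch; "username" is a placeholder, and the IP addresses should be confirmed against GitHub's current Pages documentation before relying on them):
; Apex domain: A records pointing at GitHub Pages
mydomainname.com.      A      185.199.108.153
mydomainname.com.      A      185.199.109.153
mydomainname.com.      A      185.199.110.153
mydomainname.com.      A      185.199.111.153
; Optional www subdomain: CNAME to your GitHub Pages hostname
www.mydomainname.com.  CNAME  username.github.io.
You would also set the custom domain in the repository's Pages settings so GitHub serves the site for that hostname.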

Users are "logging in as others" on Chrome for iOS

We're having a unique issue that is affecting a small handful of users from around the world. Nothing connects them aside from the fact they are all using Chrome for iOS.
Intermittently, users will log in to our application (https://www.mousehuntgame.com) and appear to be "someone else". This issue cropped up recently, during a period when no new code had been pushed to the site.
Of course, the first thing we checked was whether our authentication was bugged or whether the user's hash (stored in either cookies or a PHP session) was crossing connections somewhere. The issue is not in the authentication system, and it only affects users on Chrome for iOS. The same users, when using Safari, no longer see the issue.
We have the following PHP headers being sent to prevent caching:
header("Cache-Control: no-cache, no-store, max-age=0, must-revalidate, private");
header("Pragma: no-cache");
The "target users" that these users "turn into" are not yet confirmed to be also using Chrome. The solution for them to simply stop using the browser is not an option as others who continue to use Chrome can still gain access to these accounts.
Can Chrome be somehow caching cookies and "sharing" them across users? Could this be a DNS issue where it sees a mobile user agent and in order to save loading time it retrieves cached information and hands it off without further checking who the user is? This is a stretch, I know, but it's been a strange issue and we're grasping at straws now.
I work on the Chrome Data Compression proxy.
I'd be very surprised if the Chrome proxy were at fault here, since we respect standard caching headers. That said, there could be a bug. If you can try to reproduce with and without the proxy that would be helpful. Another way to test is to open the page in an Incognito tab (which does not use the proxy).
Edit: I looked at some of the headers we are seeing from your site, and they include things like
Cache-Control: max-age=2592000
which means these responses are publicly cacheable for 30 days. I see a wide range of caching headers from many different URLs on the site, suggesting that your caching rules aren't being applied as widely as you thought; but of course I don't know the structure of the site and whether that would lead to the problem you are describing.
Feel free to reach out (email is fine too) and I'm happy to help debug if you still think this is a problem on our end.
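If it helps, a quick way to spot-check which URLs are actually going out with long-lived caching headers is a short Node 18+ script (save as check-cache.mjs; the second URL below is a placeholder to replace with real pages from the site). Anything user-specific should be coming back with Cache-Control: private or no-store rather than a public max-age:
// Spot-check Cache-Control headers for a handful of URLs (Node 18+ has fetch built in).
const urls = [
  'https://www.mousehuntgame.com/',          // from the question
  'https://www.mousehuntgame.com/somepage',  // placeholder; substitute real URLs
];

for (const url of urls) {
  const res = await fetch(url, { redirect: 'follow' });
  console.log(url, '->', res.headers.get('cache-control'));
}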

Googlebot requesting false links

I have a little problem with Googlebot. I have a server running Windows Server 2009; the system is called Workcube and it runs on ColdFusion. There is a built-in error reporter, so I receive every error message, and many of them concern Googlebot trying to request a false link that doesn't exist. The links look like this:
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=282&HIERARCHY=215.005&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=145&HIERARCHY=200.003&brand_id=hoyrrolmwdgldah
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=123&HIERARCHY=110.006&brand_id=xxblpflyevlitojg
http://www.bilgiteknolojileri.net/index.cfm?fuseaction=objects2.view_product_list&product_catid=1&HIERARCHY=100&brand_id=xxblpflyevlitojg
Of course, values like brand_id=hoyrrolmwdgldah or brand_id=xxblpflyevlitojg are invalid, and I have no idea what the problem could be. I need advice. Thank you all for your help! ;)
You might want to verify your site with Google Webmaster Tools, which will report the URLs it finds that error out.
Your logs are also valid, but you need to verify that it really is Googlebot hitting your site and not someone spoofing their User Agent.
Here are instructions to do just that: http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html
Essentially you need to do a reverse DNS lookup and then a forward DNS lookup after you receive the host from the reverse lookup.
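A minimal sketch of that check (using Node's dns/promises here; the IP at the bottom is just an example from a typical Googlebot range, so substitute one from your own logs). A genuine Googlebot IP reverse-resolves to a googlebot.com or google.com hostname, and that hostname resolves back to the same IP:
// Verify a crawler IP really belongs to Googlebot: reverse DNS, then forward DNS.
import { reverse, resolve4 } from 'node:dns/promises';

async function isRealGooglebot(ip: string): Promise<boolean> {
  const hostnames = await reverse(ip);                  // e.g. ["crawl-66-249-66-1.googlebot.com"]
  const host = hostnames.find(h => h.endsWith('.googlebot.com') || h.endsWith('.google.com'));
  if (!host) return false;                              // reverse lookup doesn't point at Google
  const addresses = await resolve4(host);               // forward lookup must return the original IP
  return addresses.includes(ip);
}

console.log(await isRealGooglebot('66.249.66.1'));      // example IP; use one from your logs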
Once you've verified it's the real Googlebot, you can start troubleshooting. Googlebot won't request URLs that it hasn't naturally seen somewhere, meaning it shouldn't be making up direct object reference requests on its own. I suspect it's a rogue bot with a User-Agent of Googlebot, but if it's not, you might want to look through your site to see if you're accidentally linking to those pages.
Unfortunately, you posted the full URLs, so even if you clean up your site, Googlebot will see the links from Stack Overflow and continue to crawl them because they'll be in its crawl queue.
I'd suggest 301 redirecting these URLs to someplace that makes sense to your users. Otherwise, I would 404 or 410 these pages so Google knows to remove them from its index.
In addition, if these are pages you don't want indexed, I would suggest adding the path to your robots.txt file so Googlebot can't continue to request more of these pages.
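For example, a robots.txt entry along these lines would block the whole family of URLs shown above, since robots.txt rules are prefix matches against the path plus query string. Note that this blocks every view_product_list URL, not only the ones with bogus brand_id values, so only use it if those listing pages shouldn't be crawled at all:
User-agent: Googlebot
Disallow: /index.cfm?fuseaction=objects2.view_product_list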
Unfortunately, there's no really good way of telling Googlebot to never crawl these URLs again. You can always go into Google Webmaster Tools and request that the URLs be removed from the index, which may stop Googlebot from crawling them again, but that doesn't guarantee it.

IE 8 will no longer accept cookies from localhost

I had to disable cookies for some testing in a web application. Now, for some reason, I cannot get cookies working on localhost in IE any more. They work as expected in Safari, Firefox, and Chrome, but for some unknown reason I cannot for the life of me get cookies working on localhost. I have tried literally every setting imaginable with absolutely no luck.
If I change the URL to "localhost." it works as expected, but when I just use "localhost", without the "." period, cookies are absolutely not written. What the heck did I do? I tried upgrading to IE 9 and that didn't work. I reverted back to IE 8 and still have the same problem. I'm going absolutely mad trying to figure out what is causing this.
I went to Tools, Internet Options, Privacy, Advanced, and explicitly told the browser to accept all first- and third-party cookies, and I'll be damned if, on a localhost site, the cookies still aren't written. This has worked perfectly in the past, so it's no doubt some setting I changed, but I cannot for the life of me figure out what is going on. If anyone has any idea how I can remedy this, please do let me know. I hate Internet Explorer, but that's a conversation for a different day.
Go into Tools, Internet Options, Advanced, and hit the Reset button. Put everything back to factory defaults :)
At my wit's end, I just decided to try using http://127.0.0.1/... instead of http://localhost/.... It works. I had a similar problem with Safari and the same solution worked there. Hope it works for you.
Were you by chance using a tool like Fiddler2? Check your connection settings etc... I have had IE get hung in a weird state after using web proxy tools.
@Hcabnettek, try setting IE's caching to "Always Refresh from Server" in the Developer Tools.
That might be the problem. Also try adding an extra querystring parameter containing some random value to your page URL every time, because you can never be sure whether caching is enabled or disabled on the client side; a random value in the URL's querystring will force IE to fetch a fresh copy of that (now different) page URL, as in the sketch below.
Hope that helps you, because it helped me too.
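A minimal sketch of that cache-busting trick (the page URL is a placeholder):
// Append a throwaway querystring value so the browser treats each request as a brand-new URL.
var pageUrl = 'http://localhost/myapp/page.aspx';              // placeholder URL with no existing querystring
var bustedUrl = pageUrl + '?nocache=' + new Date().getTime();  // unique value on every request
window.location.href = bustedUrl;                              // or use bustedUrl in an XHR / link instead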
