We're having a unique issue that is affecting a small handful of users from around the world. Nothing connects them aside from the fact they are all using Chrome for iOS.
Intermittently, users will log in to our application (https://www.mousehuntgame.com) and appear to be "someone else". The issue cropped up recently, during a period when no new code had been pushed to the site.
Of course, the first thing we checked was that our authentication was not bugged and that a user's hash (stored in either cookies or a PHP session) was not crossing connections somewhere. The issue is not in the authentication system, and it only affects users on Chrome for iOS. The same users, when using Safari, no longer see the issue.
We have the following PHP headers being sent to prevent caching:
header("Cache-Control: no-cache, no-store, max-age=0, must-revalidate, private");
header("Pragma: no-cache");
The "target users" that these users "turn into" are not yet confirmed to be also using Chrome. The solution for them to simply stop using the browser is not an option as others who continue to use Chrome can still gain access to these accounts.
Can Chrome somehow be caching cookies and "sharing" them across users? Could this be a DNS issue, where it sees a mobile user agent and, to save loading time, retrieves cached information and hands it off without further checking who the user is? This is a stretch, I know, but it's been a strange issue and we're grasping at straws now.
I work on the Chrome Data Compression proxy.
I'd be very surprised if the Chrome proxy were at fault here, since we respect standard caching headers. That said, there could be a bug. If you can try to reproduce with and without the proxy that would be helpful. Another way to test is to open the page in an Incognito tab (which does not use the proxy).
I looked at some of the headers we are seeing from your site, and they include things like
Cache-Control: max-age=2592000
which means these responses are publicly cacheable for 30 days. I see a wide range of caching headers from many different URLs on the site, suggesting that your caching rules aren't being applied as widely as you thought; but of course I don't know the structure of the site and whether that would lead to the problem you are describing.
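A quick way to audit which URLs send which caching headers is a minimal sketch like the following Ruby script (the URL list here is hypothetical; swap in real paths from the site):
require 'net/http'
require 'uri'

# Hypothetical list of URLs to audit; replace with real paths from the site.
urls = %w[
  https://www.mousehuntgame.com/
  https://www.mousehuntgame.com/images/logo.png
]

urls.each do |url|
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    res = http.head(uri.request_uri)
    puts "#{url} -> Cache-Control: #{res['Cache-Control'] || '(none)'}"
  end
end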
Feel free to reach out (email is fine too) and I'm happy to help debug if you still think this is a problem on our end.
We have two sites on different subdomains. Sometimes our employees lose their cookies (they are just gone) on both domains at the same time, so they get logged out.
I don't really see how our app can be responsible, because the two sites have different server configurations (and each site runs on multiple servers, by the way). I believe only the nginx version (1.10.3) is the same. Plus, this does not explain why they get logged out on both sites at the same time.
If it helps, we use Rails (3/5) and Unicorn (4.8.3/5.3.0); in the older app, sessions are stored in Redis, and in the new one, in cookies.
So I wonder whether there are browser (security) policies under which cookies get cleared, perhaps on some SSL connection error, an IP change, or the like.
I understand that this is not a definitive problem description, but it seems like magic to us at the moment, so I hope someone has encountered something like this.
P.S. By the way, we tried asking one of our employees to use Firefox instead of Chrome (which all of them use), but it does not seem to make any difference (he wasn't logged out for a week, but then he was logged out roughly every 20 minutes).
I'm using a domain name with this general structure: http://mydomainname.com/
However, when I click it, I get a 404 message.
And when I look at the URL, it's not http://mydomainname.com/ but surprisingly http://mydomainname.com/YkPWZ/.
How did YkPWZ/ appear automatically, and what can I do to eliminate this issue? Sometimes accessing http://mydomainname.com/ works fine, but most of the time the browser automatically tacks some random characters onto the end of the URL, throwing the 404 message. This is not a browser-specific issue; a few colleagues have replicated it on different operating systems (both desktop and iOS).
P.S. If it matters at all, I generated my website using Github Pages (markdown files, not HTML).
I'm quite certain this is an issue on the GoDaddy side of things, though I'm unable to find any official documentation on the subject. As noted in comments above, the redirect isn't coming from GitHub Pages.
I found an old thread discussing the issue. Here is a brief summary:
- GoDaddy may use redirects like this to handle load balancing on their shared hosting servers.
- In several cases, users contacted GoDaddy about the problem and had the issue resolved, but were never told the technical specifics of what was happening.
If you wish to stay with GoDaddy, I recommend contacting them and pointing them to the thread above. They may be able to resolve the issue for you, though I wouldn't expect an explanation.
Alternatively, you can move to another web host; in many circles GoDaddy isn't rated very highly, and fortunately there are plenty of hosts to choose from. You can also use a custom domain directly with GitHub Pages, bypassing a third-party host entirely.
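If you go the GitHub Pages route, the setup is a CNAME file in the repository plus DNS records at your registrar. A minimal sketch (username.github.io is a placeholder, and the A-record IPs are GitHub Pages' documented apex addresses at the time of writing; verify against GitHub's docs before use):

# CNAME file at the root of the GitHub Pages repository:
www.mydomainname.com

# DNS records at the registrar:
mydomainname.com.     A      185.199.108.153
mydomainname.com.     A      185.199.109.153
mydomainname.com.     A      185.199.110.153
mydomainname.com.     A      185.199.111.153
www.mydomainname.com. CNAME  username.github.io.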
I have a Rails application hosted on Heroku, and I am preparing to deploy another application that will use the same session cookie. Let's assume the main application is hosted at app.mycompany.com and the new application will be hosted at reports.mycompany.com. I've set up session cookies in both apps with the cookie domain .mycompany.com and everything works OK. I've modified /etc/hosts to test those settings on my local machine.
Since everything worked fine on my local machine, I wanted to test it on our staging environment, which is hosted at mycompany-staging.herokuapp.com. For this app I've set the cookie domain to .herokuapp.com, and now it does not work. It is not possible to log in. In the inspector it looks like the correct Set-Cookie header is sent from the server, but the browser never sends this cookie back on subsequent requests.
The same thing happens on my local machine when pointing mycompany-staging.herokuapp.com at 127.0.0.1. This happens only when I use herokuapp.com; everything else works fine, or at least a couple of other domains do, including herokuapp2.com.
I am really confused. It looks like some cache issue, but I don't know where exactly. I am testing this mostly in Chrome in incognito mode, but I also tried Safari and hit the same problem.
Can anyone point me in the right direction? Or maybe I am missing something obvious.
This is because herokuapp.com is included in the Mozilla Foundation's Public Suffix List, so cookies with domain *.herokuapp.com can no longer be set. From the docs on devcenter.heroku.com:
herokuapp.com is included in the Mozilla Foundation’s Public Suffix List. This list is used in recent versions of several browsers, such as Firefox, Chrome and Opera, to limit how broadly a cookie may be scoped. In other words, in browsers that support the functionality, applications in the herokuapp.com domain are prevented from setting cookies for *.herokuapp.com. Note that *.herokuapp.com cookies can currently be set in Internet Explorer, but this behavior should not be relied upon and may change in the future.
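You can see the effect with the public_suffix gem, which follows the same list browsers use (a minimal sketch; the hostnames are the ones from the question):

require 'public_suffix' # gem install public_suffix

# herokuapp.com is itself a public suffix, so the smallest registrable
# domain is the full app subdomain; browsers refuse to set a cookie
# scoped to the suffix itself:
PublicSuffix.domain('mycompany-staging.herokuapp.com')
# => "mycompany-staging.herokuapp.com"

# An ordinary domain registers one level up, which is why the
# .mycompany.com cookie works across app. and reports.:
PublicSuffix.domain('app.mycompany.com')
# => "mycompany.com"

The practical fix for staging is to drop the domain option entirely (the cookie then defaults to the exact host) or to serve staging from a domain you own, such as staging.mycompany.com.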
I'm trying to optimize my Ruby on Rails application, and I realized that the images are what take longest to load. I also noticed another problem: Google Chrome isn't caching the images.
I noticed this because in the Google Developers Console you can see that Chrome makes requests for the images which are cancelled before the images are truly loaded.
You can see this by opening the Developers Console and refreshing the page: among the first requests are the ones for the images, but they are cancelled immediately.
After that, you can see the requests that actually load the images.
I don't understand why this is happening, since in the response headers you can see that Cache-Control is set to public with max-age=31536...
I put the images in my application this way:
<div class="col-xs-3"><%= image_tag "#{#hero.id}/ability_1.png", class: "center-block"%></div>
And the images are organized in folders in app/assets/images
Is there a RoR way to fix this?
Edit: Now testing my app (which is on Heroku) on Windows, I noticed that Google Chrome does in fact cache the images sometimes, but only about 50% of the time (and when I was developing on Ubuntu it didn't work a single time). In Firefox the images are loaded the first time, and on subsequent loads of the same view I can't even notice the reload; it's beautiful. Why isn't Google Chrome like that? Is it normal for Chrome to act so strangely?
The most important thing to realize when analyzing browser caching is the status code. In your example you got a "304", which stands for "Not Modified" and means the browser can use its cached copy. So you ARE in fact caching. Caching != not hitting your web server.
The definition according to Mozilla:
This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
The browser sends the ETag and Last-Modified values to your web server, and your web server looks at that metadata and says "Nope, this file hasn't changed, so feel free to use your cache", and that's it. It does not actually send the file again. You can see that the "Size" is much smaller than for a "200" status code, where the web server IS sending the file, and the timing should be much shorter as well.
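You can reproduce this exchange outside the browser with a conditional GET (a minimal sketch in Ruby; the asset URL is hypothetical):

require 'net/http'
require 'uri'

# Hypothetical asset URL; point this at one of your real images.
uri = URI('https://myapp.herokuapp.com/assets/ability_1.png')

# First request: a 200 with the full body, plus an ETag to revalidate with.
first = Net::HTTP.get_response(uri)
puts "#{first.code}, ETag: #{first['ETag']}"

# Second request: send the ETag back. A 304 means "use your cache";
# no body is sent, which is why the Size column shrinks in DevTools.
req = Net::HTTP::Get.new(uri)
req['If-None-Match'] = first['ETag']
second = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
puts second.code # => "304" if the file is unchanged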
In Chrome you can force "non-caching" by checking the "Disable cache" option in the Network tab.
Hope that helps!
It looks like Chrome does handle image caching differently. What type of reload are you doing (following links, pressing Enter in the address bar, Ctrl+R)? It looks like if you press Enter in the address bar it will respect max-age, but if you use Ctrl+R Chrome sets max-age to 0.
See also the related questions "expires_in max-age cache control doesn't work" and "Chrome doesn't cache images/js/css".
You can force caching with a manifest file. There are plenty of docs on the web about the topic. Here's a starter: http://www.w3schools.com/html/html5_app_cache.asp
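For reference, a minimal sketch of the AppCache manifest format that page describes (the asset paths are hypothetical; note that AppCache has since been deprecated in favor of service workers):

CACHE MANIFEST
# v1 -- bump this comment to invalidate the cached copies
/assets/ability_1.png
/assets/application.css

The page references it via <html manifest="/cache.appcache">, and the file must be served with the text/cache-manifest MIME type.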
The request headers contain max-age=0. Try setting that to a big number!
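In Rails, one way to send a long max-age for files Rails serves itself (a sketch for Rails 5+; Rails 4 uses config.static_cache_control instead, and digest-stamped assets are safe to cache this aggressively):

# config/environments/production.rb
config.public_file_server.enabled = true
config.public_file_server.headers = {
  'Cache-Control' => 'public, max-age=31536000'
}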
Note: Please correct me if any of my assumptions are wrong. I'm not very sure of any of this...
I have been playing around with HTTP caching on Heroku, trying to work out a nice way to differentiate between mobile and desktop requests when caching with Varnish.
My first idea was that I could set a Vary header so the cache varies on If-None-Match. Since Rails automatically sends back ETags generated from a hash of the content, the ETag would differ between desktop and mobile requests (different templates), so it would eventually cache two versions (not fact, just my original thinking). I have been playing around with this, but I don't think it works.
Firstly, I can't wrap my head around when/if anything gets cached, as surely requests with If-None-Match will be conditional GETs anyway? Secondly, in practice, fresh requests (ones without If-None-Match) sometimes receive the mobile site. Is this because the cache doesn't know whether to serve the mobile or the desktop cached version when the If-None-Match header isn't there?
As it probably sounds, I am rather confused. Will this approach work in any way, or am I being silly? Also, is there any way to achieve separate cached versions if I am unable to touch the Varnish config at all (as I am on Heroku)?
The exact code I am using in Rails to set the cache headers is:
response.headers['Cache-Control'] = 'public, max-age=86400'
response.headers['Vary'] = 'If-None-Match'
Edit: I am aware I can use Vary: User-Agent, but I am trying to avoid it if possible because it has a high miss rate (many, many user agents).
You could try Vary: User-Agent. However, you'll end up with many cached versions of each page (one per user agent).
Another solution may be to detect mobile browsers directly in the reverse proxy: set an X-Is-Mobile-Browser header on the request before the reverse proxy looks for a cached page, send Vary: X-Is-Mobile-Browser from the backend server (so that the reverse proxy caches only two versions of each page), and replace that header with Vary: User-Agent before sending the response to the client.
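A minimal sketch of that idea in Varnish VCL (Varnish 3 syntax; the User-Agent regex is a deliberately crude placeholder, and real mobile detection needs a far longer pattern):

sub vcl_recv {
  # Normalize the request to a single header the cache can vary on.
  if (req.http.User-Agent ~ "(?i)mobile|android|iphone|ipad") {
    set req.http.X-Is-Mobile-Browser = "1";
  } else {
    set req.http.X-Is-Mobile-Browser = "0";
  }
}

sub vcl_deliver {
  # Hide the internal header from clients; advertise User-Agent instead.
  if (resp.http.Vary ~ "X-Is-Mobile-Browser") {
    set resp.http.Vary = regsub(resp.http.Vary, "X-Is-Mobile-Browser", "User-Agent");
  }
}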
If you cannot change your Varnish configuration, you have to use different URLs for mobile and desktop pages. You can add a URL parameter (?mobile=true), add a segment to the path (yourdomain.com/mobile/news), or use a different host (like m.yourdomain.com).
This makes a lot of sense because (I've seen it many times, both in CMSs and in applications) at some point you will want to differentiate content and structure for mobile devices. People just do different things, or look for different information, on mobile devices...