I am running Apache 2.4.18 on Ubuntu (a single server machine). I haven't changed any of the default settings as far as cache headers are concerned (no cache-related changes to /etc/apache2/apache2.conf, no .htaccess files). My understanding is that Apache's default behavior is to use ETags, with the desired behavior of returning a 304 if the client already has a matching file, or a 200 (plus the new file) if it does not.
This is not what I see.
On iOS/Safari, when I update files on the server, my client behaves as if it has a mix of old and new files. This can be resolved by clearing website data in Safari and reloading the page, so it does seem to be a caching issue. I have read that iOS/Safari differs (or at least did, in earlier versions?) in how it respects ETag headers, but it wasn't clear to me how to fix this.
In Chrome on Windows, the file is always served in full (i.e., response 200, not 304) even when it hasn't changed since the last request, and even though I can see an ETag in the response headers.
Can someone share their Apache 2.4 settings to get the desired behavior I describe above, on both iOS and Chrome?
(Here is another question that asks the same thing about the Chrome part: Apache + Etags -> returns 200 and send content instead of 304)
I think the answer to my question is that caching works differently, at least in Chrome, when you test locally over the file:// protocol versus serving pages over the http:// protocol.
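For reference, here is a minimal Apache 2.4 snippet for the behavior described above: keep ETags enabled and force browsers to revalidate static assets on each request. This is a sketch, not a verified fix for the iOS behavior; the FilesMatch pattern and the choice of no-cache are my assumptions:

# Generate ETags from each file's modification time and size
FileETag MTime Size

<IfModule mod_headers.c>
    # "no-cache" means "revalidate before reuse": the browser sends
    # If-None-Match, and the server answers 304 when nothing changed
    <FilesMatch "\.(html|css|js|png|jpg)$">
        Header set Cache-Control "no-cache"
    </FilesMatch>
</IfModule>

You can also test the server's conditional-request handling independently of any browser with curl (the ETag value below is a placeholder; copy the one from your own first response):

curl -sI http://localhost/index.html
curl -sI -H 'If-None-Match: "2d-5f4e8a1b2c3d4"' http://localhost/index.html

The second request should return 304 Not Modified if the server side is behaving.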
Related
Jenkins was working fine on Firefox until a couple of weeks back.
http://www.sub.domain.com:8080
Then I think there was a Firefox update and by default it was redirecting to
https://www.sub.domain.com:8080
There was no way I could force it to http.
So I went on Chrome and it worked there until this morning when I got the Chrome 77 update.
Same issue all over again.
Then I loaded it up on IE. It works fine. I am able to use
http://www.sub.domain.com:8080
I checked with the admin if they are redirecting all traffic to https but that's not the case. What's happening here? Any browser change that I am not aware of? Any Jenkins config change that I should be using?
Did you check the HSTS cache in Chrome? Go to chrome://net-internals/#hsts
Query the HSTS cache there for your domain. If there is a result, you can clear it using the Delete option on that page.
Another thing to check is whether you're using the Jenkins HSTS Filter plugin, "which adds a response header indicating that HTTP Strict Transport Security (HSTS) response headers should be sent." See https://wiki.jenkins.io/display/JENKINS/HSTS+Filter+Plugin
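You can also check from the command line whether the server itself is sending an HSTS header (using the URL from the question; curl ignores HSTS, so this shows exactly what the server returns):

curl -sI http://www.sub.domain.com:8080 | grep -i strict-transport-security

If this prints a Strict-Transport-Security header, the upgrade to https is being requested by the server (e.g. by the plugin above) rather than being a leftover entry in the browser's HSTS cache.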
I installed TFS 2017 to be accessible over both HTTP (port 8080, default settings) and HTTPS. Now I have removed the HTTP binding from IIS and reapplied the Public URL (via Administration Console -> Change Public URL).
Most of the TFS application tier works normally (as it uses relative addressing). However, build extensions somehow want to fetch their icons over HTTP (port 8080). See screenshot. When I noticed this, I first checked the HTML/JS source and found that the _vssPageContext variable still holds some URLs pointing to the old HTTP configuration.
Has anyone solved this mystery, or has any idea what to do?
EDIT: Later I re-enabled the HTTP bindings in IIS just to make TFS work, and now I get a lot of warnings and errors due to the HTTP/HTTPS mix-up (I access TFS via HTTPS, but some content is still requested via HTTP):
Mixed Content: The page at
'https://xxxx.xxxxx.xxxx/tfs/TFSDefault/Project/_build/definitionEditor?definitionId=113&_a=simple-process'
was loaded over HTTPS, but requested an insecure image
'http://xxxx.xxxxx.xxxx:8080/tfs/TFSDefault/_apis/distributedtask/tasks/9fcb05af-0ffe-4687-99f2-99821aad927e/0.1.1305/icon'.
This content should also be served over HTTPS.
WebSocket connection to
'ws://xxxx.xxxxx.xxxx:8080/tfs/signalr/connect?transport=webSockets&clientProtocol=1.5&contextToken=412c3608-de3b-4dab-a00d-bf5c13728d97&connectionToken=OoSymcl1qzWg%2BrHB9pzSBpb%2BdHVywo7NNUWN5xMx3Z51p9ZdZQ14wvoQKXqxB%2Bvo66eTap4iUdlqzHR1hJNUf%2By8oFUaudlkCbQIZjHQhLBHsEWtcLdfLlL7MAevl4h0My1yQA%3D%3D&connectionData=%5B%7B%22name%22%3A%22builddetailhub%22%7D%5D&tid=7'
failed: HTTP Authentication failed; no valid credentials available.
This is an issue with the default endpoint of TFS being initially set to HTTP: all the elements then default their requests to that URL rather than relying on the scheme of the initial request you make in the browser. So you end up with a JavaScript element attempting to connect to the server via HTTP, and you get a mixed content issue.
Here is a really good article that covers the issues you are probably facing and how to fix them to use https: https://hybriddbablog.com/2017/12/16/changing-tfs-to-use-https-update-your-agent-settings-too/
I have to caveat that I haven't done this myself yet; we actually went back to running HTTP until we move to the next version of TFS. But from my experience of TFS, the steps look sound.
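One of the steps the article covers is re-registering build agents against the new HTTPS URL. A rough sketch for a TFS 2017 (2.x) agent, run from the agent folder; I'm quoting the flags from memory, so check config.cmd --help for your agent version and substitute your real server URL:

.\config.cmd remove
.\config.cmd --url https://xxxx.xxxxx.xxxx/tfs

The second command prompts interactively for authentication and pool details.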
This issue is strange, and I've spent a couple of days trying to solve it, but I'm completely lost. I've developed a webapp with CodeIgniter 3.0.6 + AngularJS 1.5.5 as the main backend/frontend frameworks.
The problem is that when I change the iPhone/iPad network from Wi-Fi to 3G/4G, some random HTTP GET requests to static files fail. The failing files aren't always the same, but it only happens with images and JS scripts.
The HTTP GET status code is 503 - Service Unavailable, and opening the file's URL directly shows a static HTML page with the same error.
The weirdest thing is that the Server response header changes from the Wi-Fi requests (Apache) to the 3G/4G requests (nginx).
File loaded properly: [screenshot of response headers]
File error: [screenshot of response headers]
There are also other headers that differ between the Wi-Fi and 3G/4G requests.
PHP works fine; HTML and dynamic data load properly. The problem appears to be limited to requests for static resources.
EDIT
I've checked several websites hosted at 1and1, on different hosting packages, and I've even checked other domains on the shared host where my app is running; it happens everywhere. The only thing that changes is the number of failing files, and it's random.
EDIT 2
After testing with other iOS browsers (Firefox and Opera), the problem seems to be confined to Safari and Chrome. Maybe I should say WebKit, but Opera seems fine.
EDIT 3
I've found an article (linked in the comments, due to reputation limits) while searching for a way to handle Angular $http requests from an offline device.
I need to dig deeper and perform the tests described in the link, but it seems to be a problem with WebSockets and the proxy servers used by mobile operators, Vodafone in this case.
Did anyone else run into this issue?
I will edit this post with the improvements you suggest or the info you need.
In my ASP.NET MVC 2 app, I have the following lines:
Response.Cache.SetMaxAge(TimeSpan.FromDays(90));
Response.Cache.SetETag(lastWriteTime.Value.Ticks.ToString());
Using Fiddler to trace the HTTP streams, I can see:
ETag: 634473035667000000
in the Response Headers when running under IIS7, but when I'm running under the Visual Studio 2010 web server, this header just... disappears. Whether I set it via Response.Cache.SetETag() or via Response.AppendHeader("ETag", etag), it just never gets returned.
Is this a "feature" of the IIS web server? Is there some config setting I've missed? It's going to make testing cache invalidation a bit fiddly if I have to attach to the IIS process to be able to debug anything...
EDIT: It also appears that despite calling Response.Cache.SetCacheability(HttpCacheability.Public), VS/Cassini always returns resources with HTTP Cache-Control set to "private"... does that help?
The ETag will be suppressed if you use HttpCacheability.Private.
You can find more information in "Why does HttpCacheability.Private suppress ETags?"
If you change it to HttpCacheability.ServerAndPrivate, it should work.
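Putting that together with the code from the question, a minimal sketch (lastWriteTime is assumed to be the nullable DateTime from the original snippet):

// ServerAndPrivate lets both the server and the client cache the
// response, without suppressing the ETag the way Private does
Response.Cache.SetCacheability(HttpCacheability.ServerAndPrivate);
Response.Cache.SetMaxAge(TimeSpan.FromDays(90));
Response.Cache.SetETag(lastWriteTime.Value.Ticks.ToString());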
Simple - it's Cassini.
Cassini isn't meant to be a production server; it's there to facilitate debugging (which is why it overrides caching too - after all, if you recompiled and reran, would you want your new code untouched because a page was cached?).
If you want your debugging to work as it would in IIS, then IIS Express is where you should be going... there's no attach problem there, as it will spin up a real instance of IIS, but in your own user context.
I am trying to build an offline web app for the iPad, and I am trying to verify that the cache.manifest is being served correctly by Apache Web Server 2 and is working. I have added an 'AddType' for the .manifest extension to the mime-types configuration file for the Apache web server.
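For reference, the usual form of that directive is the following (text/cache-manifest is the registered MIME type for manifests; the extension may be .manifest or .appcache depending on your file name):

AddType text/cache-manifest .manifest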
If I look at the access logs, the first request for the cache manifest is returned with a 200 HTTP response code, and any further requests are served with 304, which is 'Not Modified'. I take this to mean it is working. The assets (HTML, images) are returned with a combination of both (200, then 304 as above), which also indicates it is working.
When I load it on the iPad, I get the page, but when I go offline and reload, it is unable to load, as it does not have a connection to the internet.
I am serving it off the Apache web server on my Mac, so I am having trouble reliably testing it on the Mac itself. Any ideas on what is going wrong, or how to verify it is working?
Testing the cache manifest is somewhat of a pain in general, but there are a few useful techniques.
First, start with testing it using Safari on the Mac directly. Just turn off Apache when you want to check it in offline mode.
In Safari, open the Activity monitor and look for any resources that are listed as "cancelled" -- those are typically ones that are missing from the manifest.
Also use the Web Inspector to check the Content-Type returned for the manifest file.
In most cases the problem is that you have resources in the application which aren't specified in the manifest; this causes the whole caching operation to fail. Unfortunately there's no method in the HTML5 API to list which resources failed; this would be supremely helpful to developers.
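For completeness, a minimal cache.manifest sketch; the file names here are placeholders, and as noted above, every resource the page uses must be listed or the whole cache fails:

CACHE MANIFEST
# v1 - change this comment to force clients to re-download the cache

CACHE:
index.html
app.js
style.css
images/logo.png

NETWORK:
*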