I want to use the new CloudFront feature that gzips files on the fly based on the Accept-Encoding: gzip request header. I set up my CDN distribution, turned on "Compress Objects Automatically", and whitelisted these headers: Origin, Access-Control-Request-Headers, and Access-Control-Request-Method (I'm using AngularJS, so I need them for the OPTIONS method). I don't have any CORS configuration set on my S3 bucket.
As stated in their docs, it should start working once I add the Accept-Encoding: gzip header to the request. However, I'm still getting the raw file.
Response Headers
Accept-Ranges:bytes
Age:65505
Cache-Control:public, max-age=31557600
Connection:keep-alive
Content-Length:408016
Content-Type:text/css
Date:Mon, 21 Mar 2016 16:00:36 GMT
ETag:"5a04faf838d5165f24ebcba54eb5fbac"
Expires:Tue, 21 Mar 2017 21:59:21 GMT
Last-Modified:Mon, 21 Mar 2016 15:59:22 GMT
Server:AmazonS3
Via:1.1 0e6067b46ed4b3e688f898d03e5c1c67.cloudfront.net (CloudFront)
X-Amz-Cf-Id:gKYTTq0cIcUvHTtlrdMig8D1R2ZVdea4EnflV0-IxhtaxgRvLYj6LQ==
X-Cache:Hit from cloudfront
Request Headers
Accept:text/css,*/*;q=0.1
Accept-Encoding:gzip, deflate, sdch
Accept-Language:pl,en-US;q=0.8,en;q=0.6
Cache-Control:max-age=0
Connection:keep-alive
Host: XXX.cloudfront.net
Referer: XXX
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36
My configuration is:
Rails with Angular on Unicorn (using asset_sync)
Nginx
S3 and Cloudfront
Notice these two response headers.
Age: 65505
X-Cache: Hit from cloudfront
This object was cached by a prior request, 65,505 seconds (≈ 18 hours) before this particular request.
Once CloudFront has cached an object at a particular edge, if you later configure the relevant cache behavior to enable on-the-fly compression, CloudFront won't go back and re-compress objects already in its cache. It will continue to serve the original version of the object until it's evicted.
If you enabled compression on the distribution less than 18 hours ago, that is the most likely explanation for what you are seeing.
CloudFront compresses files in each edge location when it gets the files from your origin. When you configure CloudFront to compress your content, it doesn't compress files that are already in edge locations. In addition, when a file expires in an edge location and CloudFront forwards another request for the file to your origin, CloudFront doesn't compress the file if your origin returns an HTTP status code 304, which means that the edge location already has the latest version of the file. If you want CloudFront to compress the files that are already in edge locations, you'll need to invalidate those files.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
Evict everything from your distribution's cache by submitting an invalidation request for the path /* (to cover everything), or just this particular /path, or /path*, etc. Within a few minutes, all cached content for your distribution (or for the specific path match, if you don't use /* for everything) will be evicted. Wait for the invalidation to show as complete, and you should see compression working on subsequent requests.
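If you'd rather script the invalidation than click through the console, here's a minimal sketch using the AWS SDK for JavaScript v3; the distribution ID is a placeholder, and I'm assuming your credentials are already configured in the environment.

import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";

const client = new CloudFrontClient({ region: "us-east-1" });

const result = await client.send(
  new CreateInvalidationCommand({
    DistributionId: "EXAMPLEID", // placeholder; use your own distribution ID
    InvalidationBatch: {
      CallerReference: `gzip-fix-${Date.now()}`, // must be unique per request
      Paths: { Quantity: 1, Items: ["/*"] },     // "/*" evicts every cached object
    },
  })
);

console.log(result.Invalidation?.Status); // "InProgress" until it completes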
Keep an eye on the Age: header (it tells you how long CloudFront has had a copy of the particular response); once it drops off and then resets, I would venture a guess that you'd see what you expect.
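A quick way to watch that from a script instead of DevTools — a sketch using the standard fetch API under Node 18+ (browsers won't let you set Accept-Encoding yourself); the hostname and path are placeholders:

const res = await fetch("https://XXX.cloudfront.net/assets/app.css", {
  headers: { "Accept-Encoding": "gzip" },
});

// Age resets to a small number once the cached copy has been evicted;
// Content-Encoding should read "gzip" once compression kicks in.
console.log(res.headers.get("age"));
console.log(res.headers.get("content-encoding"));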
If this doesn't resolve the issue, there is another possibility, but I'd expect this to be a fairly unusual occurrence:
In rare cases, when a CloudFront edge location is unusually busy, some files might not be compressed.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
Related
Did anyone manage to enable GZIP compression for outgoing (aka downstream) application/json and text/plain responses (payloads)?
I traced it down to https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/gzip_filter#runtime, but did not find a way to enable it using the ESPv2 docker image…
Update 08/05/2021:
· Setting the Accept-Encoding: gzip request header has no effect; the returned response is not gzipped.
· And indeed, the new/updated filter is envoy.filters.http.compressor
In particular, https://github.com/GoogleCloudPlatform/esp-v2/blob/master/src/go/util/marshal.go does not mention the compressor (or the legacy gzip) HTTP filter either…
Any ideas?
Currently (as of 16 September 2021), gzip compression on ESP v2 is not supported at all; see: https://github.com/GoogleCloudPlatform/esp-v2/issues/607
Should any PWA-related resources be served with cache headers from the server, or should we move classic HTTP caching out of our way by turning it off completely?
Namely, what should be http cache headers for:
manifest file
Related to it, how do new versions of the manifest file (a changed favicon, for example) get to the client?
service worker js file
(this one is a bit tricky, because browsers check for new versions every 24 hours, so some caching might be good?)
index.html (entry point for spa)
My understanding was that it should be turned off completely and all caching should be handled by the service worker, but there seems to be conflicting information out there, and it's hard to extract best practices.
There's some guidance at https://web.dev/reliable/http-cache, along with a number of other resources on the web.
In general, building a PWA and introducing a service worker doesn't change the best practices that you should follow for your HTTP caching.
For assets that include versioning information in their URL (like /v1.0.0/app.js, or /app.1234abcd.js), and whose contents at a given URL you know will never change, you should use Cache-Control: max-age=31536000.
For assets that don't include versioning information in their URL (like most HTML documents, and also /manifest.json, if you don't include a hash there), you should set Cache-Control: no-cache along with ETag or Last-Modified, to ensure that a previously cached response is revalidated before being used.
For your service worker file itself, modern browsers will ignore the Cache-Control header value you set by default, so it doesn't really matter. But it's still a best practice to use Cache-Control: no-cache so that older browsers will revalidate it before using it.
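To make that concrete, here's a minimal sketch of the header strategy in Node/TypeScript. The /static/ path convention for fingerprinted assets and the port are assumptions; substitute your own build layout, and let your framework add the ETag/Last-Modified validators for the no-cache responses.

import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url?.startsWith("/static/")) {
    // Fingerprinted assets (e.g. /static/app.1234abcd.js): safe to cache for a year.
    res.setHeader("Cache-Control", "max-age=31536000");
  } else {
    // index.html, manifest.json, service-worker.js: force revalidation before reuse.
    res.setHeader("Cache-Control", "no-cache");
  }
  res.end("...");
});

server.listen(8080);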
Hi everyone,
I have this nagging problem I cannot seem to fix. So, please, help me out!
So, I have a page that renders a partial. The page renders correctly within a couple of seconds; however, Chrome keeps loading (showing the "loading" icon) for some 30 more seconds and then reports an error (Failed to load resource) in the Chrome Inspector. It seems the response is not being closed correctly. If I take out the line in the partial that renders Asian characters, it works fine, meaning it renders the page and properly stops.
The problem gets worse if the partial is rendered as part of an AJAX call via jQuery: it doesn't get rendered at all, because the response never gets a proper ending.
I would appreciate your help.
Thanks.
Here is the HTTP header:
Request Method: GET
Status Code: 200 OK
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Cookie:XXXXX
Host:localhost:3000
Referer:https://localhost:3000/home
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.106 Safari/535.2
Response Headers
Cache-Control:max-age=0, private, must-revalidate
Connection:Keep-Alive
Content-Length:55118
Content-Type:text/html; charset=utf-8
Date:Wed, 02 Nov 2011 23:07:52 GMT
Etag:"77d774b3b119012c5fabbd5c625a98a8"
P3p:CP="CAO PSA OUR"
Server:WEBrick/1.3.1 (Ruby/1.9.2/2011-07-09) OpenSSL/0.9.8r
X-Runtime:1.070787
X-Ua-Compatible:IE=Edge
UPDATE:
I just installed Firefox/Firebug. They give more info than Chrome does. What a pleasant surprise! Firebug confirmed my theory that the Content-Length somehow got messed up: if the rendered partial contains Asian characters, the Content-Length in the response header and the actual response body size differ. If no Asian characters are present, they are the same. Has anybody seen this problem before?
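For what it's worth, this kind of mismatch typically appears when a server computes Content-Length from the character count instead of the byte count; multibyte (e.g. Asian) characters make the two diverge. A quick Node/TypeScript illustration of the difference (the sample string is arbitrary):

const body = "日本語のテキスト"; // 8 characters
console.log(body.length);                     // 8  (UTF-16 code units)
console.log(Buffer.byteLength(body, "utf8")); // 24 (bytes actually sent on the wire)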
OK! We finally figured this out! YEAAAAH!!!
This was caused by WEBrick's inability to handle HTTPS correctly. Basically, WEBrick would not render pages correctly, which caused a discrepancy between the Content-Length in the response header and the actual body size. When this happens, the browser waits for the request to complete and throws an error (usually a Failed to load resource error in Chrome) after 30-some seconds.
So, if you want to use HTTPS on your machine (localhost), make sure you use Thin as your server and nginx as a reverse proxy in front of it. Even though it sounds complicated, it's not. Basically, Thin serves up your pages just like WEBrick would. When an HTTPS request comes in, say through port 443 or whatever port you set up, nginx takes care of terminating and validating the request and forwards it to Thin, which then handles the rendering, etc.
I hope this post helps someone.
Possible Duplicate:
How do short URLs services work?
I often see shortened URLs from bit.ly, such as http://bit.ly/abcd. How is this "bit.ly" realized on the server side? Is it some DNS trick?
Yes, actually: if you go to https://bitly.com/ you will notice that it provides this URL shortening service.
Going to http://bit.ly/abcd just redirects you to a URL of your choice. You can figure it out by looking at the HTTP request and response headers:
Request URL:http://bit.ly/abcd
Request Method:GET
Status Code:301 Moved
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Host:bit.ly
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.24 Safari/535.1
Response Headers
Cache-control:private; max-age=90
Connection:keep-alive
Content-Length:145
Content-Type:text/html; charset=utf-8
Date:Thu, 16 Jun 2011 21:14:04 GMT
Location:http://macthemes2.net/forum/viewtopic.php?id=16786044
MIME-Version:1.0
Server:nginx
Set-Cookie:_bit=4dfa721c-001f7-011f8-c8ac8fa8;domain=.bit.ly;expires=Tue Dec 13 16:14:04 2011;path=/; HttpOnly
http://www.w3.org/Protocols/HTTP/HTRESP.html talks about status codes; 301 is the one you should be looking for.
No, it's just an HTTP server that looks up abcd in a database, finds http://example.com/long/url, and sends an HTTP redirect answer, like
HTTP/1.1 301 Moved Permanently
Location: http://example.com/long/url
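That whole lookup-and-redirect flow fits in a few lines. A minimal sketch in Node/TypeScript, with a made-up in-memory table standing in for the database and an arbitrary port:

import { createServer } from "node:http";

// Hypothetical lookup table; a real service would use a database.
const urls = new Map<string, string>([
  ["abcd", "http://example.com/long/url"],
]);

const server = createServer((req, res) => {
  const code = (req.url ?? "/").slice(1); // strip the leading "/"
  const target = urls.get(code);
  if (target) {
    res.writeHead(301, { Location: target }); // permanent redirect to the long URL
  } else {
    res.writeHead(404);
  }
  res.end();
});

server.listen(8080);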
Have you gone to http://bit.ly/? The URL shortener stores the long URL in a database; when the short URL is used, the shortener service performs an HTTP redirect to the long URL.
LY is the top-level domain for Libya, which is distinct from bitly.com.
bit.ly is just a domain like any other (e.g. .com, .net, .fr).
In this case, .ly belongs to Libya.
It looks like they use A-Za-z0-9 for generating their codes, which gives 62 possible characters per position. A 4-character code like abcd therefore gives 62^4 = 14,776,336 possible codes to map onto long URLs, and 6 characters gives 62^6 ≈ 56.8 billion. Assuming certain links can expire, or people can delete links they have made, it's safe to assume they won't run out of possibilities soon... and hey, if they do, just make the links up to 8 characters; then you get 62^8 ≈ 218 trillion possibilities =P
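If you want to check those numbers yourself, the count of k-character codes over a 62-character alphabet is just 62^k:

// Code space for k-character codes over A-Za-z0-9 (62 characters).
for (const k of [4, 6, 8]) {
  console.log(k, (62 ** k).toLocaleString("en-US"));
}
// 4 "14,776,336"
// 6 "56,800,235,584"
// 8 "218,340,105,584,896"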
In my latest project, which is on RC1, I have noticed a browser caching issue that I just can't shake. This is what my response header looks like:
HTTP/1.1 200 OK
Date: Tue, 03 Mar 2009 15:11:34 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
X-AspNetMvc-Version: 1.0
Cache-Control: private
Expires: Tue, 03 Mar 2009 15:11:34 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 4614
Now, technically, if this is private, it shouldn't have an expiration date on the content, right? I've tried no-cache as well, with the same results. Does anybody have any experience with this particular issue?
Cache-Control: private only specifies that the response is only intended for a single user and should not be stored in a shared cache (say, in a proxy) and used to serve requests for other users. I don't see anything in the protocol documentation that would preclude the use of an Expires header with a value. In fact, it seems a perfectly reasonable thing to say "use this for subsequent requests for this user only, but not after this time." There are other values for Cache-Control where Expires may not make sense, but I believe that the protocol has a means for disambiguating between conflicting headers (See section 4 of the protocol docs).
Quoting from Section 14.9.1 of the HTTP 1.1 protocol docs:
private
Indicates that all or part of the response message is intended for a single user and MUST NOT be cached by a shared cache. This allows an origin server to state that the specified parts of the response are intended for only one user and are not a valid response for requests by other users. A private (non-shared) cache MAY cache the response.
Note: This usage of the word private only controls where the response may be cached, and cannot ensure the privacy of the message content.
There's no reason why private content can't be cached; it's just that it should only be cached by the browser, in the current user's context. It should not be cached server-side or by other caches, such as a proxy server.
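To illustrate that private and an explicit expiry can coexist, here's a sketch in Node/TypeScript (the question is about ASP.NET, but the header semantics are the same everywhere); the 60-second lifetime is arbitrary:

import { createServer } from "node:http";

const server = createServer((_req, res) => {
  // Cacheable by the user's browser only, not by shared caches,
  // and only for the next 60 seconds.
  res.setHeader("Cache-Control", "private, max-age=60");
  res.setHeader("Expires", new Date(Date.now() + 60_000).toUTCString());
  res.end("hello");
});

server.listen(8080);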