Did anyone manage to enable GZIP compression for outgoing (aka downstream) application/json and text/plain responses (payloads)?
I traced it down to https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/gzip_filter#runtime, but did not find a way to enable it using the ESPv2 Docker image…
Update 08/05/2021:
- Setting the Accept-Encoding: gzip request header has no effect; the returned response is not gzipped.
- And indeed, the new/updated filter is envoy.filters.http.compressor.
In particular, https://github.com/GoogleCloudPlatform/esp-v2/blob/master/src/go/util/marshal.go does not mention the compressor (or the legacy gzip) HTTP filter either…
Any ideas?
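For reference, the underlying Envoy configuration I was hoping ESPv2 could emit looks roughly like the following. This is a hand-written sketch of the standard envoy.filters.http.compressor filter (v3 API), not something ESPv2 generates or lets you inject today:

http_filters:
- name: envoy.filters.http.compressor
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.compressor.v3.Compressor
    response_direction_config:
      common_config:
        content_type:
        - application/json
        - text/plain
    compressor_library:
      name: gzip
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.compression.gzip.compressor.v3.Gzip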
Currently (as of 16 September 2021), gzip compression on ESPv2 is not supported at all; see: https://github.com/GoogleCloudPlatform/esp-v2/issues/607
Related
Cloud Run / the "Google Frontend" seems to completely buffer responses from a Cloud Run application, even when using chunked transfer encoding for the response. This is bad for incremental rendering.
I have a Java web app based on com.sun.net.httpserver.HttpServer which supports chunked encoding for the response. In particular, flushing the output stream creates a chunk, so I can do the following (see the sketch after these steps):
write response line
flush
compute for 10s
write more response lines
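A minimal, self-contained sketch of such a handler (the class name ChunkedDemo, port 8080 and path /demo are made up for illustration):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ChunkedDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/demo", exchange -> {
            exchange.getResponseHeaders().set("Content-Type", "text/plain;charset=utf-8");
            // A response length of 0 tells HttpServer to use chunked transfer encoding.
            exchange.sendResponseHeaders(200, 0);
            try (OutputStream body = exchange.getResponseBody()) {
                body.write("first output\n".getBytes(StandardCharsets.UTF_8));
                body.flush(); // forces a chunk onto the wire immediately
                try {
                    Thread.sleep(10_000); // simulate the 10s computation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                body.write("next output\n".getBytes(StandardCharsets.UTF_8));
            }
        });
        server.start();
    }
}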
Locally, this results in a chunked response:
HTTP/1.1 200 OK
Date: Sat, 07 Mar 2020 17:08:10 GMT
Transfer-encoding: chunked
Content-type: text/plain;charset=utf-8
1c
<first output>
14
<next output>
17
<next output>
0
Using curl, I can see the output appear incrementally.
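In case it helps others reproduce this: curl -N disables curl's own output buffering, and --raw additionally shows the chunk-size framing instead of decoding it (the URL matches the sketch above):

curl -N --raw http://localhost:8080/demo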
In contrast, when deploying the same app to Cloud Run, the response gets fully buffered and returned in full, no matter how long the pause (many seconds), how many chunks there are, or how much content is returned (I've tested up to several megabytes):
curl -v https://...
< HTTP/2 200
< content-type: text/plain;charset=utf-8
< x-cloud-trace-context: 3872abb809e97a76298f4c46b9217656;o=1
< date: Sat, 07 Mar 2020 17:18:48 GMT
< server: Google Frontend
< content-length: 2450359
(Example has 50k chunks!)
Is there a way to have the GFE pass through the chunked encoding?
Cloud Run does not support streaming responses where data is sent in incremental chunks to the client while a request is being processed. All data from your service is sent as a single HTTP response.
Also, the 'Content-Length' and 'Transfer-Encoding' headers are removed from any response your application serves; the Google Front End then adds these headers itself before the response is served.
Source of information:
1) Internal conversations with the Cloud Run engineering team.
2) Google Cloud Security Whitepaper that details the GFE (Google Frontend) which sits in front of Cloud Run Managed.
https://services.google.com/fh/files/misc/security_whitepapers_march2018.pdf
I want to use the new feature in CloudFront that allows gzipping files on the fly based on the Accept-Encoding: gzip header. I set up my CDN distribution, turned on "Compress Objects Automatically", and whitelisted the headers Origin, Access-Control-Request-Headers and Access-Control-Request-Method (I'm using AngularJS; I need them for the OPTIONS method). I don't have any CORS configuration set on my S3 bucket.
As stated in their docs, it should start working when I add the Accept-Encoding: gzip header to the request. However, I'm still getting the raw file.
Response Headers
Accept-Ranges:bytes
Age:65505
Cache-Control:public, max-age=31557600
Connection:keep-alive
Content-Length:408016
Content-Type:text/css
Date:Mon, 21 Mar 2016 16:00:36 GMT
ETag:"5a04faf838d5165f24ebcba54eb5fbac"
Expires:Tue, 21 Mar 2017 21:59:21 GMT
Last-Modified:Mon, 21 Mar 2016 15:59:22 GMT
Server:AmazonS3
Via:1.1 0e6067b46ed4b3e688f898d03e5c1c67.cloudfront.net (CloudFront)
X-Amz-Cf-Id:gKYTTq0cIcUvHTtlrdMig8D1R2ZVdea4EnflV0-IxhtaxgRvLYj6LQ==
X-Cache:Hit from cloudfront
Request Headers
Accept:text/css,*/*;q=0.1
Accept-Encoding:gzip, deflate, sdch
Accept-Language:pl,en-US;q=0.8,en;q=0.6
Cache-Control:max-age=0
Connection:keep-alive
Host: XXX.cloudfront.net
Referer: XXX
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36
My configuration is:
Rails with Angular on Unicorn (using asset_sync)
Nginx
S3 and Cloudfront
Notice these two response headers.
Age: 65505
X-Cache: Hit from cloudfront
This object was cached by a prior request, 65,505 seconds (≅ 18 hours) before this particular request.
Once CloudFront has cached an object at a particular edge, if you later configure the relevant cache behavior to enable on-the-fly compression, CloudFront won't go back and re-compress objects already in its cache. It will continue to serve the original version of the object until it's evicted.
If you enabled compression on the distribution more recently than that (i.e., after this object was cached), that is the most likely explanation for what you are seeing.
CloudFront compresses files in each edge location when it gets the files from your origin. When you configure CloudFront to compress your content, it doesn't compress files that are already in edge locations. In addition, when a file expires in an edge location and CloudFront forwards another request for the file to your origin, CloudFront doesn't compress the file if your origin returns an HTTP status code 304, which means that the edge location already has the latest version of the file. If you want CloudFront to compress the files that are already in edge locations, you'll need to invalidate those files.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
Evict everything from your distribution's cache by submitting an invalidation request for the path * (to cover everything), or just for this particular /path, /path*, etc. Within a few minutes, all matching cached content for your distribution will be evicted (wait for the invalidation to show that it's complete), and you should see compression working on subsequent requests.
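If you prefer the CLI over the console, this is roughly the equivalent (EDFDVBD6EXAMPLE is a placeholder distribution ID, and the .css path is made up for illustration):

# Invalidate everything cached for the distribution
aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/*"

# Afterwards, verify with a request that advertises gzip support
curl -sI -H "Accept-Encoding: gzip" https://XXX.cloudfront.net/assets/app.css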
Keep an eye on the Age: header (how long CloudFront has had a copy of that particular response); once it disappears and then starts counting up again from a low value, I would venture a guess that you'll see what you expect.
If this doesn't resolve the issue, there is another possibility, but I'd expect this to be a fairly unusual occurrence:
In rare cases, when a CloudFront edge location is unusually busy, some files might not be compressed.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
Does anyone know how to get the gzip version of a response from YQL?
For example requesting this:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20in%20(%22AAPL%22)&env=store://datatables.org/alltableswithkeys
The response is not gzipped?
You need to set the Accept-Encoding: gzip header in your request.
However, if the server doesn't support gzip compression, you might not get a compressed response.
The web server is by no means obliged to use any compression method; this depends on the internal settings of the web server and may also depend on the internal architecture of the website in question.
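For example, with curl: its --compressed flag sends Accept-Encoding: gzip,deflate and transparently decompresses the body, and with -v you can check whether the response actually carries Content-Encoding: gzip:

curl -v --compressed 'http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20in%20(%22AAPL%22)&env=store://datatables.org/alltableswithkeys'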
I use a web application that returns a Content-MD5 header, but in my iOS app I cannot retrieve that header using [NSHTTPURLResponse allHeaderFields] (whereas I can see it when I use cURL).
Does anyone know if iOS is deliberately removing that header?
So I've figured out what's happened.
Our SaaS provider has activated gzip by default on non-production instances. As mentioned in some other threads, NSURLConnection supports gzip compression transparently and automatically sends the Accept-Encoding: gzip HTTP header. When the response is received, NSURLConnection decompresses the content and removes the Content-MD5 header (the hash was computed over the compressed data, so it would no longer match the decompressed body), which is why I'm not seeing it in the list of received headers.
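An easy way to confirm this outside the app is to compare the headers with and without compression negotiated (https://example.test/resource is a placeholder for the real endpoint):

# gzip negotiated (what NSURLConnection does by default); the Content-MD5
# covers the compressed bytes, so the framework drops it after decompressing.
curl -sI -H "Accept-Encoding: gzip" https://example.test/resource

# identity forced: the body arrives unmodified and Content-MD5 survives.
curl -sI -H "Accept-Encoding: identity" https://example.test/resource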
I have an ASP.NET MVC application running on IIS 7. The problem I'm having is that, depending on the client, the response may be received (as seen through Fiddler) with chunked transfer-encoding. What I can't understand is why this happens only for some of my clients, even when two computers are on the same network using the same browser (IE 8), and not for everyone, or vice versa.
Could anyone explain this to me?
Sorry for this late update, but the problem turned out to be the result of how the user reached the server. If the user was connected to the local LAN through a VPN connection, the proxy was bypassed; otherwise the proxy was used. This produced the two different results.
Chunked encoding is enabled on the server side if you prematurely flush the output stream. Do you have any user-agent-specific code that might be calling Flush()?
RFC 2616 says:
All HTTP/1.1 applications MUST be able to receive and decode the "chunked" transfer-coding
Transfer-Encoding: chunked is defined for HTTP/1.1. Are some of your clients using HTTP/1.0 or even (shudder) 0.9? In that case, the server must not use transfer-encoding, as it's not a part of the protocol.
Although most modern clients understand HTTP/1.1, most have an option to downgrade to 1.0 when using a proxy (for historical reasons - some older proxies had buggy 1.1 implementations). So, although the browser may understand 1.1, it can request 1.0 if so instructed.
Example: MSIE 6+ has this in the Internet Options dialog - tab Advanced - HTTP 1.1 settings - checkboxes "Use HTTP 1.1" and "Use HTTP 1.1 through proxy connections".
Also, chunked encoding is not activated for all responses; usually the server switches it on when Content-Length is not set or when the output buffer is flushed.
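To make the difference concrete, here is the same 11-byte body written both ways (hand-constructed examples, not captured traffic). With a known length:

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 11

hello world

And the chunked form, where each chunk is prefixed with its size in hex (b = 11) and a zero-sized chunk terminates the response:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

b
hello world
0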