Does anyone know how to get the gzip version of a response from YQL?
For example, requesting this:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20in%20(%22AAPL%22)&env=store://datatables.org/alltableswithkeys
The response does not come back gzipped.
You need to set the header "Accept-Encoding: gzip" in your request.
However, if the server doesn't support gzip compression, you might not get a compressed response.
The web server is by no means obliged to use any compression method; this depends on the internal settings of the web server and may also depend on the internal architecture of the website in question.
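For illustration, here is a minimal Java sketch of such a request using a plain HttpURLConnection: it asks for gzip via Accept-Encoding and only inflates the body if the server actually compressed it (which, as noted above, it is free not to do).

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPInputStream;

    public class GzipResponseExample {
        public static void main(String[] args) throws Exception {
            // The YQL query from the question
            URL url = new URL("http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20in%20(%22AAPL%22)&env=store://datatables.org/alltableswithkeys");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Tell the server we can handle a gzipped response; it may still ignore this.
            conn.setRequestProperty("Accept-Encoding", "gzip");

            InputStream body = conn.getInputStream();
            // Only inflate if the server really answered with Content-Encoding: gzip.
            if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                body = new GZIPInputStream(body);
            }
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(body, "UTF-8"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }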
Related
I was playing around with GZIP compression recently, and this is the way I understand it works:
Client requests some files or data from a Web Server. The Client also sends a header that says "Accept-Encoding: gzip".
The Web Server retrieves the files or data, compresses them, and sends them back GZIP compressed to the Client. The Web Server also sends the header "Content-Encoding: gzip" to note to the Client that the data is compressed.
The Client then de-compresses the data/files and loads them for the user.
I understand that this is common practice, and it makes a ton of sense when you need to load a page that requires a ton of HTML, CSS, and JavaScript, which can be relatively large, and add to your browser's loading time.
However, I was trying to look further into this: why is it not common to GZIP compress a request body when doing a POST call? Is it because request bodies are usually small, so the time it takes to decompress them on the web server outweighs the time saved by sending a smaller request? Is there some sort of document or reference I can read about this?
Thanks!
It's uncommon because in a client-server relationship the server sends most of the data to the client, and, as you mentioned, the data coming from the client tends to be small, so compression rarely brings any performance gain.
In a REST API, I would say that big request payloads are common, but apparently the Spring Framework, known for its REST tools, disagrees: they explicitly say in their docs here that you can set the servlet container to do response compression, with no mention of request compression. As the Spring Framework's mode of operation is to provide functionality that they think lots of people will use, they obviously didn't feel it worthwhile to provide a ServletFilter implementation that we users could employ to read compressed request bodies.
It would be interesting to trawl the user mailing lists of tomcat, struts, jackson, gson etc for similar discussions.
If you want to write your own decompression filter, try reading this: How to decode Gzip compressed request body in Spring MVC
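As a rough sketch of what such a filter could look like (assuming the Servlet 3.1 API; the class name and the details of the stream adapter are mine for illustration, not taken from any framework):

    import java.io.IOException;
    import java.util.zip.GZIPInputStream;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletRequestWrapper;

    // Hypothetical filter: if the client declared Content-Encoding: gzip,
    // hand downstream code a request whose input stream inflates the body.
    public class GzipRequestFilter implements Filter {

        public void init(FilterConfig cfg) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest httpReq = (HttpServletRequest) req;
            if ("gzip".equalsIgnoreCase(httpReq.getHeader("Content-Encoding"))) {
                final GZIPInputStream gzip = new GZIPInputStream(httpReq.getInputStream());
                chain.doFilter(new HttpServletRequestWrapper(httpReq) {
                    @Override
                    public ServletInputStream getInputStream() {
                        return new ServletInputStream() {
                            public int read() throws IOException { return gzip.read(); }
                            // Simplified: a real filter would track end-of-stream properly
                            // and also override getReader() and adjust Content-Length.
                            public boolean isFinished() { return false; }
                            public boolean isReady() { return true; }
                            public void setReadListener(ReadListener listener) {}
                        };
                    }
                }, res);
            } else {
                chain.doFilter(req, res);
            }
        }
    }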
Alternatively, put your servlet container behind a web server that offers more functionality. People obviously do need request compression enough that web servers such as Apache offer it - this SO answer summarises it well already: HTTP request compression - you'll find the reference to the HTTP spec there too.
Very old question, but I decided to resurrect it because it was my first Google result and I feel the only existing answer is incomplete.
HTTP request compression is uncommon because the client can't be sure the server supports it.
When the server sends a response, it can use the Accept-Encoding header from the client's request to see if the client would understand a gzipped response.
When the client sends a request, it can be the first HTTP communication so there is nothing to tell the client that the server would understand a gzipped request. The client can still do so, but it's a gamble.
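For what it's worth, taking that gamble from a Java client is just a matter of compressing the body and declaring it, roughly like this (the URL and payload here are placeholders):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class CompressedPostExample {
        public static void main(String[] args) throws Exception {
            byte[] payload = "{\"example\": \"request body\"}".getBytes(StandardCharsets.UTF_8);

            // Placeholder URL; substitute the endpoint you are actually calling.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/api").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            // Declare the compressed body; a server that doesn't support this
            // may reject the request (e.g. with 415 Unsupported Media Type).
            conn.setRequestProperty("Content-Encoding", "gzip");

            try (OutputStream out = new GZIPOutputStream(conn.getOutputStream())) {
                out.write(payload);
            }
            System.out.println("Response code: " + conn.getResponseCode());
        }
    }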
Although very few modern http servers would not know gzip, the configuration to apply it to request bodies is still very uncommon. At least on nginx, it looks like custom Lua scripting is required to get it working.
Don't do it, if for no other reason than security: firewalls have a hard (or impossible) time inspecting compressed input data.
Does the YouTube Data API Client Library for Java use ETags and/or gzip, as described on the Getting Started page?
The documentation is short (I could only find the Javadocs) and doesn't say anything about it, so I guess it is just a wrapper.
Based on this link, ETags are supported by YouTube, but it depends on what kind of data you are asking for.
To use the ETag, add a request header "If-None-Match" set to your ETag value. Note that this should be a request header and not appended to the endpoint call. You can also use "If-Match".
Depending on what kind of API you are using, the way of inserting a value into the request headers may differ slightly. The ETag response-header field provides the current value of the entity tag for the requested variant.
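For illustration, with a plain HttpURLConnection (not the Java client library specifically) a conditional request could look roughly like this; the endpoint parameters and the stored ETag value are placeholders:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class EtagExample {
        public static void main(String[] args) throws Exception {
            // ETag remembered from a previous response (placeholder value)
            String savedEtag = "\"abc123\"";

            URL url = new URL("https://www.googleapis.com/youtube/v3/videos"
                    + "?part=snippet&id=VIDEO_ID&key=API_KEY"); // placeholders
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Ask the server to send the body only if it changed since we cached it.
            conn.setRequestProperty("If-None-Match", savedEtag);

            int status = conn.getResponseCode();
            if (status == HttpURLConnection.HTTP_NOT_MODIFIED) { // 304
                System.out.println("Not modified - reuse the locally cached copy");
            } else {
                System.out.println("Changed - read the body and store the new ETag: "
                        + conn.getHeaderField("ETag"));
            }
        }
    }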
You may also check on this related thread.
I'm getting data from the backend using AFNetworking, and I set the request's cachePolicy to NSURLRequestUseProtocolCachePolicy.
The response headers contain an ETag value, and the Transfer-Encoding is chunked.
The second time I call the same API, it fetches fresh data instead of reading from the cache as expected.
I notice that if the response is not chunked (it contains a Content-Length header), caching works perfectly.
My question is: is it possible to cache chunked response in iOS?
Thank you for any advice
NSURLCache, which AFNetworking uses for caching, doesn't support caching this type of request.
You could try:
using SDURLCache, an open-source alternative that gives you more control,
subclassing NSURLCache yourself to roll your own implementation, or
using requests whose responses carry caching headers that NSURLCache supports
I'm using Transactional Cypher HTTP endpoint from my application to execute queries in Neo4j. I was wondering if there is a way to get zipped response from server.
I read some threads about it, but they suggested creating unmanaged extensions for it:
http://www.markhneedham.com/blog/2013/07/08/neo4j-unmanaged-extension-creating-gzipped-streamed-responses-with-jetty/
I just want the zipped response using HTTP endpoints that I'm already using.
I guess setting an HTTP parameter in the request to tell the server to compress the response will not work.
Is there any configuration that enables the response to be compressed?
Any ideas on how to unzip the response as well?
Regards,
Rahul
You can run Neo4j behind a proxy that takes care of compression. One example would be using Apache httpd with mod_deflate for compression and mod_proxy_http for the communication with Neo4j.
Some time ago I played around with a mod_proxy setup; see https://github.com/sarmbruster/vagrant_neo4j_modproxy/blob/master/etc/apache2/sites-available/default as a starting point. Be aware that this example does not use mod_deflate yet.
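As a sketch of how the two modules could be combined (the paths and the default Neo4j HTTP port 7474 are assumptions; adjust them to your installation):

    # Sketch only: proxy Neo4j's HTTP endpoints and compress the JSON responses.
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule deflate_module modules/mod_deflate.so

    <VirtualHost *:80>
        # Forward requests to the local Neo4j server (default HTTP port 7474)
        ProxyPass        /db/data/ http://localhost:7474/db/data/
        ProxyPassReverse /db/data/ http://localhost:7474/db/data/

        # Compress JSON responses on the way back to the client
        AddOutputFilterByType DEFLATE application/json
    </VirtualHost>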
Mark Needham implemented it once and it was not a lot of effort, so you can just take his code, build it, and put it into your server:
http://www.markhneedham.com/blog/2013/07/08/neo4j-unmanaged-extension-creating-gzipped-streamed-responses-with-jetty/
I am using caches_action to cache one of my actions' responses.
I want to store the compressed response in the cache, then send it as-is if the browser supports that compression, and otherwise decompress it before sending.
Some characteristics of my content:
1. It rarely changes
2. 90% of the requests my server gets come from gzip-enabled browsers
Do you see any issue with this approach?
If it is the right approach, is there an easy way to achieve it?
Compression should be handled by Apache or the webserver.
Assuming the client supports compression, the webserver will load your static file and serve a compressed response.
I suggest you have a look at your web server configuration.
Here's an example using Apache.
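For instance, a minimal mod_deflate setup might look roughly like this (a sketch only; the MIME types listed and the availability of mod_deflate depend on your installation):

    # Compress responses for clients that send Accept-Encoding: gzip
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json
    </IfModule>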