I am using caches_action to cache one of the action's response
I want to store the compressed response in the cache and then send it as-is if the browser supports that compression; otherwise, decompress it first and then send it.
Some characteristics of my content:
1. It rarely changes
2. 90% of my server's requests come from gzip-enabled browsers
Do you see any issue with this approach?
If it is the right approach, is there an easy way to achieve it?
Compression should be handled by Apache or the webserver.
Assuming the client supports compression, the webserver will load your static file and serve a compressed response.
I suggest you have a look at your webserver configuration.
Here's an example using Apache.
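A minimal mod_deflate sketch for Apache 2.4; the listed MIME types are just a common starting point and should be adjusted to your content:

```apache
# Compress common text responses on the fly (requires mod_deflate).
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css text/plain
    AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```

Apache will only apply this when the request carries an Accept-Encoding header that includes gzip, so non-supporting clients still get the uncompressed body.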
I am configuring gzip compression for a server via Jetty, and there are PUT/POST endpoints whose response payloads I would like to compress. The default GzipHandler configuration for Jetty specifically only includes GET; that this is the default is documented, but I'm unable to find documentation as to why this is the default. Is there a downside to applying gzip when the method is non-GET?
The reason comes down to the fact that responses to PUT and POST are, in a general sense, not suitable for caching.
GET was selected as the default back when gzip compression was first introduced (before Jetty moved to Eclipse, before Servlet 2.0, back when it was called the GzipFilter in Jetty), and in that era, if the content couldn't be cached, it wasn't compressed.
Why?
Well, back then, using system resources to compress the same content over and over again was seen as a negative; it was more important to serve many requests than a few optimized ones.
The GzipHandler can be configured to use any method, even nonsensical ones like HEAD.
Don't let a default value with historical reasons prevent you from using the GzipHandler, use it, configure it, and be happy.
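For instance, the set of included methods is a single setting. A sketch in Jetty XML, assuming the Jetty 9.4 GzipHandler and its includedMethodList property; verify against the jetty-gzip.xml shipped with your distribution:

```xml
<!-- Sketch: widen gzip beyond the GET default (Jetty 9.4, assumed API). -->
<New id="GzipHandler" class="org.eclipse.jetty.server.handler.gzip.GzipHandler">
  <Set name="includedMethodList">GET,POST,PUT</Set>
</New>
```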
Feel free to file an issue requesting the defaults be changed at https://github.com/eclipse/jetty.project/issues
I have a show view, that uses a 'Universal Viewer' to load images. The image dimensions come from a json file that comes from a IIIF image server.
I fixed a bug and a new json file exists, but the user's browser is still using the old info.json file.
I understand that I could just have them do a hard-reload, like I myself did on my machine, but many users may be affected, and I'm just damn curious now.
Modern browsers all ship with cache-control functionality baked in. Using a combination of ETags and Cache-Control headers, you can accomplish what you seek without having to change the file names or use cache-busting query parameters.
ETags allow you to communicate a token to a client that tells the browser when to update its cached version. This token can be created from the content's creation date, its length, or a fingerprint of the content.
Cache-Control headers allow you to create policies for web resources about how long, who, and how your content can be cached.
Using ETags and Cache-Control headers is a useful way to tell users' browsers when to update their cache when serving IIIF or any other content. However, adding them can be quite specific to your local implementation. Many frameworks (like Ruby on Rails) have much of this functionality baked in. There may also be web server configuration to modify; some sample configurations using these strategies are available from the HTML5 Boilerplate project.
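A sketch of both ideas in plain Ruby; the headers hash stands in for whatever response object your framework gives you, and the JSON body is a stand-in for info.json:

```ruby
require "digest/md5"

body = '{"width": 1024, "height": 768}'   # stand-in for info.json

headers = {
  # Strong ETag: a fingerprint of the bytes, so any edit changes the token.
  "ETag"          => %("#{Digest::MD5.hexdigest(body)}"),
  # Revalidate on every request; serve 304 while the ETag still matches.
  "Cache-Control" => "public, max-age=0, must-revalidate",
}

# On the next request the browser sends If-None-Match; when it equals the
# current ETag, respond 304 Not Modified with no body.
if_none_match = headers["ETag"]
status = if_none_match == %("#{Digest::MD5.hexdigest(body)}") ? 304 : 200
```

In Rails itself this is one line in the controller (fresh_when etag: @resource), which sets the ETag and handles the 304 for you.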
Sample Apache configurations for:
ETags https://github.com/h5bp/server-configs-apache/blob/master/src/web_performance/etags.conf
Cache expiration https://github.com/h5bp/server-configs-apache/blob/master/src/web_performance/expires_headers.conf
It depends on where the JSON file is being served from, and how it's being cached.
The guaranteed way to expire the cache on the file is to change the filename every time the file changes. This is typically done by naming it filename-MD5HASH.ext, where MD5HASH is the MD5 hash of the file.
If you can't change the file name (it comes from a source you can't control), you might be able to get away with adding a cache-busting query key to the URL, something like http://example.com/file.ext?q=123456.
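Both techniques can be sketched in a few lines of Ruby; the content and file names here are illustrative:

```ruby
require "digest/md5"

content = '{"width": 1024}'            # stand-in for the file's bytes
digest  = Digest::MD5.hexdigest(content)

# filename-MD5HASH.ext: a changed file gets a new URL, so a stale
# cached copy can never be served under it.
fingerprinted = "info-#{digest}.json"

# Fallback when you can't rename: a cache-busting query key derived
# from the same digest.
busted_url = "http://example.com/info.json?q=#{digest[0, 8]}"
```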
I was playing around with GZIP compression recently, and the way I understand it is the following:
Client requests some files or data from a web server. The client also sends a header saying "Accept-Encoding: gzip".
Web server retrieves the files or data, compresses them, and sends them back GZIP-compressed to the client. The web server also sends the header "Content-Encoding: gzip" to tell the client that the data is compressed.
The Client then de-compresses the data/files and loads them for the user.
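That round trip can be sketched with Ruby's Zlib; the HTML body is a stand-in:

```ruby
require "zlib"
require "stringio"

html = "<html>" + ("x" * 500) + "</html>"

# Server side: compress the body and label it with Content-Encoding.
io = StringIO.new
gz = Zlib::GzipWriter.new(io)
gz.write(html)
gz.close
gzipped = io.string
response_headers = { "Content-Encoding" => "gzip" }

# Client side: seeing the header, inflate before using the body.
restored = Zlib::GzipReader.new(StringIO.new(gzipped)).read
```

For repetitive text like this, the compressed body is a small fraction of the original size, which is exactly the win you get on HTML/CSS/JavaScript.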
I understand that this is common practice, and it makes a ton of sense when you need to load a page that requires a ton of HTML, CSS, and JavaScript, which can be relatively large, and add to your browser's loading time.
However, I was trying to look further into this: why is it not common to GZIP-compress a request body when doing a POST call? Is it because request bodies are usually small, so the time it takes to decompress them on the web server is longer than it would take to simply send the request uncompressed? Is there some document or reference I can read about this?
Thanks!
It's uncommon because in a client-server relationship, the server sends the bulk of the data to the client, and as you mentioned, the data coming from the client tends to be small, so compression rarely brings any performance gains.
In a REST API, I would have said that big request payloads are common, but apparently the Spring Framework, known for its REST tools, disagrees: its docs explicitly say that you can set the servlet container to do response compression, with no mention of request compression. As Spring's mode of operation is to provide functionality they think lots of people will use, they evidently didn't feel it worthwhile to provide a ServletFilter implementation that users could employ to read compressed request bodies.
It would be interesting to trawl the user mailing lists of tomcat, struts, jackson, gson etc for similar discussions.
If you want to write your own decompression filter, try reading this: How to decode Gzip compressed request body in Spring MVC
Alternatively, put your servlet container behind a web server that offers more functionality. People obviously do need request compression enough that web servers such as Apache offer it - this SO answer summarises it well already: HTTP request compression - you'll find the reference to the HTTP spec there too.
Very old question but I decided to resurrect it because it was my first google result and I feel the currently only answer is incomplete.
HTTP request compression is uncommon because the client can't be sure the server supports it.
When the server sends a response, it can use the Accept-Encoding header from the client's request to see if the client would understand a gzipped response.
When the client sends a request, it can be the first HTTP communication so there is nothing to tell the client that the server would understand a gzipped request. The client can still do so, but it's a gamble.
Although very few modern http servers would not know gzip, the configuration to apply it to request bodies is still very uncommon. At least on nginx, it looks like custom Lua scripting is required to get it working.
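When you do control both ends, the client side amounts to compressing the body and labelling it. A Ruby Net::HTTP sketch; the endpoint URL is hypothetical, and the request is built but not actually sent:

```ruby
require "net/http"
require "uri"
require "zlib"
require "stringio"

payload = '{"name": "example"}' * 100

uri = URI("http://api.example.com/records")   # hypothetical endpoint
req = Net::HTTP::Post.new(uri)
req["Content-Type"]     = "application/json"
req["Content-Encoding"] = "gzip"              # label the compressed body

# Compress the payload into the request body.
io = StringIO.new
gz = Zlib::GzipWriter.new(io)
gz.write(payload)
gz.close
req.body = io.string

# Only send this if you already know the server inflates request bodies:
# Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
```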
Don't do it, if for no other reason than security: firewalls have a hard time, or cannot cope at all, with inspecting compressed input data.
What is the best way to upload or download images in iOS?
In iOS I can upload images to a server via FTP. I have also seen many people use HTTP POST methods to upload or download images as NSData.
So which method is faster and more secure?
HTTP is the better choice because port 80 is almost always open while port 21 is often closed in business settings.
Neither is inherently faster or more secure for your iOS app. In general, FTP is not the most secure technology to be running on your server (SFTP is better), so many people prefer not to run FTP servers and therefore have to use HTTP for uploads (as Zaph says, on many firewalls FTP is not even allowed by default for this reason).
But using HTTP for uploads requires code on your server to handle the HTTP POST and put the files in the correct location. The fact that you are writing this code potentially makes it safer: you can validate the incoming data, make sure it is the right size and file type, and take account of any per-user bandwidth or storage limits.
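Those server-side checks can be sketched in Ruby; the size limit and magic-byte table are illustrative, not exhaustive:

```ruby
# Reject uploads that are too large or not actually image data.
MAX_BYTES = 5 * 1024 * 1024   # illustrative 5 MB cap

# First bytes of common image formats (magic numbers).
MAGIC = {
  "\xFF\xD8\xFF".b => "jpeg",
  "\x89PNG".b      => "png",
}

def accept_upload?(bytes)
  return false if bytes.bytesize > MAX_BYTES
  MAGIC.keys.any? { |sig| bytes.start_with?(sig) }
end
```

Checking magic bytes rather than trusting a filename extension or Content-Type header is what makes the server-side check worthwhile.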
You don't use HTTP POST to download images, but HTTP GET. That doesn't require anything special on the server; any HTTP server can serve them.
Unless you have a good reason not to, I'd suggest using HTTP. A good reason might be that you're integrating your app with an existing FTP service.
I've got the following rails code
send_file '/test.pdf'
The file seems to be downloading with 0 bytes, has anyone got any ideas on how to fix this?
thanks
I believe that send_file depends on support from your web server to work. Are you running your app using the rails built in server? If so, I think you'll see the behaviour you've got here.
Basically, the idea behind send_file is that it sets an HTTP header 'X-Sendfile' pointing to the file you want to send. The web server sees this header and rather than returning the body of your response, sends the specified file instead.
The benefit of this approach is that your average web server is highly optimised for sending the content of static files very quickly, and will typically get the job done many times more quickly than rails itself can.
So, the solutions to your problem are to either:
* Use a webserver that supports X-Sendfile (e.g. Apache), or
* As rubyprince commented, use send_data instead, which will make ruby do the heavy lifting.
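A sketch of the send_data route; the controller class below is a minimal stand-in so the example runs outside Rails, where send_data is provided by ActionController:

```ruby
# Stand-in for a Rails controller: in a real app you'd inherit from
# ApplicationController and use the send_data Rails provides.
class FilesController
  attr_reader :response

  def send_data(bytes, filename:, type:, disposition: "attachment")
    @response = {
      body: bytes,
      headers: {
        "Content-Type"        => type,
        "Content-Disposition" => %(#{disposition}; filename="#{filename}"),
      },
    }
  end

  def download
    pdf = "%PDF-1.4 fake bytes"   # stand-in for File.binread("/test.pdf")
    send_data pdf, filename: "test.pdf", type: "application/pdf"
  end
end

c = FilesController.new
c.download
```

Unlike send_file, Ruby itself reads the bytes and streams them in the response body, so no X-Sendfile support is needed from the web server.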
Just as an aside, you should be able to confirm that this is what's happening by looking at the response headers using either Firebug or Safari/Chrome's developer tools.