Getting a compressed Neo4j response

I'm using the transactional Cypher HTTP endpoint from my application to execute queries in Neo4j. I was wondering if there is a way to get a gzip-compressed response from the server.
I read some threads about it, but they suggested creating an unmanaged extension for it:
http://www.markhneedham.com/blog/2013/07/08/neo4j-unmanaged-extension-creating-gzipped-streamed-responses-with-jetty/
I just want the compressed response using the HTTP endpoints I'm already using.
I guess setting an HTTP header on the request to tell the server to compress the response will not work.
Is there any configuration that can enable response compression?
Any ideas for decompressing the response as well?
Regards,
Rahul

You can run Neo4j behind a proxy that takes care of compression. One example would be Apache httpd with mod_deflate for compression and mod_proxy_http for the communication with Neo4j.
I played around with a mod_proxy setup some time ago; see https://github.com/sarmbruster/vagrant_neo4j_modproxy/blob/master/etc/apache2/sites-available/default as a starting point. Be aware that this example does not use mod_deflate yet.
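A minimal Apache virtual host along those lines could look like the sketch below. The port and paths are assumptions (7474 is Neo4j's default HTTP port), and mod_proxy, mod_proxy_http and mod_deflate must be enabled:

```apache
<VirtualHost *:80>
    # Forward all traffic to the Neo4j HTTP endpoint
    ProxyPass        / http://localhost:7474/
    ProxyPassReverse / http://localhost:7474/

    # Compress the JSON responses from the transactional endpoint
    AddOutputFilterByType DEFLATE application/json
</VirtualHost>
```

Clients that send `Accept-Encoding: gzip` then get a compressed response from Apache while Neo4j itself stays untouched.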

Mark Needham implemented it once; it was not a lot of effort, so you can just take his code, build it, and put it into your server:
http://www.markhneedham.com/blog/2013/07/08/neo4j-unmanaged-extension-creating-gzipped-streamed-responses-with-jetty/
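Whichever server-side route you take, decompressing the response (the second part of the question) is straightforward: check the Content-Encoding header and gunzip if needed. A minimal sketch in Python, with a simulated server payload standing in for a real transactional-endpoint response:

```python
import gzip
import json

def decode_response(headers, body):
    """Undo Content-Encoding: gzip if the server applied it, then parse JSON."""
    if headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
    return json.loads(body)

# Simulated compressed response from the transactional Cypher endpoint:
payload = {"results": [{"columns": ["count(n)"], "data": [{"row": [42]}]}],
           "errors": []}
compressed = gzip.compress(json.dumps(payload).encode())

decoded = decode_response({"Content-Encoding": "gzip"}, compressed)
assert decoded == payload
```

The same helper also handles an uncompressed body, since it only decompresses when the header says so.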

Why is GZIP Compression of a Request Body during a POST method uncommon?

I was playing around with GZIP compression recently, and the way I understand it is the following:
The client requests some files or data from a web server, and also sends a header saying "Accept-Encoding: gzip".
The web server retrieves the files or data, compresses them, and sends them back GZIP-compressed to the client. The web server also sends a header saying "Content-Encoding: gzip" to tell the client that the data is compressed.
The client then decompresses the data/files and loads them for the user.
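That handshake can be sketched with Python's standard library. The header names are the real ones; the payload and the dictionaries standing in for client and server are just illustrations:

```python
import gzip

# What the client advertises on its request:
request_headers = {"Accept-Encoding": "gzip"}

# Server side: compress the body and label it.
body = b"<html>...lots of HTML, CSS and JavaScript...</html>"
compressed = gzip.compress(body)
response_headers = {"Content-Encoding": "gzip"}

# Client side: seeing the label, decompress before use.
if response_headers.get("Content-Encoding") == "gzip":
    body_out = gzip.decompress(compressed)

assert body_out == body  # round trip is lossless
```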
I understand that this is common practice, and it makes a ton of sense when you need to load a page that requires a ton of HTML, CSS, and JavaScript, which can be relatively large, and add to your browser's loading time.
However, I was trying to look further into this: why is it not common to GZIP-compress the request body of a POST call? Is it because request bodies are usually so small that decompressing them on the web server takes longer than simply sending them uncompressed? Is there some document or reference I can consult about this?
Thanks!
It's uncommon because in a client-server relationship the server sends all the data to the client, and, as you mentioned, the data coming from the client tends to be small, so compression rarely brings any performance gains.
In a REST API, I would have said that big request payloads are common, but apparently Spring Framework, known for its REST tools, disagrees: they explicitly say in their docs here that you can set the servlet container to do response compression, with no mention of request compression. Since Spring Framework's mode of operation is to provide functionality they think lots of people will use, they obviously didn't feel it worthwhile to provide a ServletFilter implementation that users could employ to read compressed request bodies.
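For reference, the response-compression side they mention is plain configuration in Spring Boot (the property names are from the Spring Boot docs; the MIME list and size threshold here are just example values):

```properties
server.compression.enabled=true
server.compression.mime-types=application/json,text/html,text/css
server.compression.min-response-size=1024
```

There is no equivalent `server.*` property for request decompression, which underlines the point above.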
It would be interesting to trawl the user mailing lists of Tomcat, Struts, Jackson, Gson, etc. for similar discussions.
If you want to write your own decompression filter, try reading this: How to decode Gzip compressed request body in Spring MVC
Alternatively, put your servlet container behind a web server that offers more functionality. People obviously do need request compression enough that web servers such as Apache offer it - this SO answer summarises it well already: HTTP request compression - you'll find the reference to the HTTP spec there too.
Very old question, but I decided to resurrect it because it was my first Google result and I feel the only existing answer is incomplete.
HTTP request compression is uncommon because the client can't be sure the server supports it.
When the server sends a response, it can use the Accept-Encoding header from the client's request to see if the client would understand a gzipped response.
When the client sends a request, it can be the first HTTP communication so there is nothing to tell the client that the server would understand a gzipped request. The client can still do so, but it's a gamble.
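To illustrate the gamble: a client that compresses a POST body must label it with Content-Encoding: gzip and hope the server honours it. This sketch only builds the request without sending it; the URL is a placeholder:

```python
import gzip
import urllib.request

payload = b'{"key": "value"}' * 100
compressed = gzip.compress(payload)

req = urllib.request.Request(
    "http://example.com/api",          # placeholder URL
    data=compressed,
    headers={
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",    # tells the server the body is gzipped
    },
    method="POST",
)

# A server that understands the header performs the inverse operation:
assert gzip.decompress(req.data) == payload
```

A server that does not understand it will try to parse the compressed bytes as JSON and fail, typically with a 400, which is exactly the risk described above.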
Although very few modern HTTP servers would fail to understand gzip, the configuration to apply it to request bodies is still very uncommon. On nginx, at least, it looks like custom Lua scripting is required to get it working.
Don't do it, if for no other reason than security: firewalls have a hard (or impossible) time inspecting compressed input data.

Asana REST API - multipart/form image upload times out

I am working on a little tool to upload issues found during development to Asana. I am able to GET, and to use POST to create tasks etc., but I am unable to do a proper multipart form upload.
When I run my image-upload POST request through an independent Perl-based CGI script, I get 200s back and an image saved on my server.
When I target Asana, I get 504 Gateway Timeouts. I am thinking there must be something malformed in my request that the Perl script is lenient about, but I am hard pressed to find it.
Is there a web expert or Asana expert out there who might be able to shed some light on what might be missing?
Note that the Wireshark capture has an extra field. The Asana docs indicate a task field; I have tried with and without that field, since it is unclear whether the task id encoded in the URL satisfies that requirement.
I found the problem!
My boundary= parameter had quotes around the value, which got through on my CGI/Apache setup but not for Asana.
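For anyone hitting the same thing: a quoted boundary value is legal per the MIME grammar, but some parsers only accept the bare form. A sketch of building a multipart body with an unquoted boundary (the field names and bytes are made up):

```python
import uuid

def build_multipart(field_name, filename, file_bytes,
                    content_type="application/octet-stream"):
    """Build a multipart/form-data body with an *unquoted* boundary value.

    boundary="..." (quoted) is allowed by RFC 2046, but some servers
    only accept boundary=... without quotes.
    """
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n"
        "\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return headers, head + file_bytes + tail

headers, body = build_multipart("file", "photo.png", b"\x89PNG fake bytes")
assert '"' not in headers["Content-Type"].split("boundary=")[1]
```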

Controlling IIS BITS uploads

I'm running an IIS web site (built using ASP.NET/MVC) that among other things collects files from multiple agents that anonymously upload the files via BITS.
I need to make sure that only files uploaded from known sources as well as matching certain predefined file name pattern will be accepted by IIS. All other BITS upload attempts must be cancelled.
As I understand it, BITS uses an ad hoc protocol over HTTP 1.1 with a "BITS_POST" verb. So, ideally, I'd like to hook into IIS, analyze a BITS_POST request, and if it does not satisfy my preconditions, drop the request.
I've tried to create and register a filter implementing IActionFilter.OnActionExecuting, but it seems that my filter does not receive BITS_POST requests.
I'd be glad to hear if somebody have implemented similar BITS related solutions and how this was done. Anyway, other ideas are welcome too.
Regards,
Natan
I have never worked with BITS; frankly, I don't know what it is.
What I usually do in such situations is implement an HTTP module. In its BeginRequest event you can inspect the incoming HTTP request data and stop processing the request if the data does not comply with your requirements. You have full access to the HttpContext.Current.Request object from HTTP module code.
With HTTP modules, you can execute .NET code before the request ever reaches the MVC pipeline, which is why your action filter never saw the BITS_POST requests.

erlang httpc sending http response to wrong handler

Our app makes a lot of HTTP requests, and we are facing this issue with both inets-5.5.1 and 5.3.2.
Basically, our receive clause for the response tries to match the request id returned by the httpc:request call,
and we see that the request id match fails.
We have seen this mismatch in all three receive clauses:
stream_start, stream and stream_end.
What we observed after a lot of trial and error is that if the same pid makes all the HTTP requests, the responses get muddled up, but if we spawn a separate process for each httpc:request it is better. We also tried using a separate httpc profile to isolate the current process's requests from those of other processes, but even after this we have seen many occurrences of this faulty behaviour.
This occurs with a lot of our HTTP requests. Has anyone faced this?
Thanks
Suma
This may not be a direct solution, but...
I suggest you try a much more heavy-duty HTTP client called ibrowse
(if it is not too late for your project!).
Inets httpc and httpd are better suited to simple HTTP tasks; for heavy-duty HTTP work you may want ibrowse on the client side, or a server such as Yaws or MochiWeb instead of inets httpd.
Wish you success!

Rails 3 send_file - 0 bytes

I've got the following Rails code:
send_file '/test.pdf'
The file seems to download with 0 bytes; has anyone got any ideas on how to fix this?
Thanks
I believe that send_file depends on support from your web server to work. Are you running your app using the built-in Rails server? If so, I think you'll see the behaviour you've got here.
Basically, the idea behind send_file is that it sets an HTTP header 'X-Sendfile' pointing to the file you want to send. The web server sees this header and rather than returning the body of your response, sends the specified file instead.
The benefit of this approach is that your average web server is highly optimised for sending the content of static files very quickly, and will typically get the job done many times more quickly than rails itself can.
So, the solutions to your problem are to either:
* Use a webserver that supports X-Sendfile (e.g. Apache), or
* As rubyprince commented, use send_data instead, which will make ruby do the heavy lifting.
Just as an aside, you should be able to confirm that this is what's happening by looking at the response headers using either Firebug or Safari/Chrome's developer tools.
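As a sketch of the Apache route: X-Sendfile support comes from the third-party mod_xsendfile module, and the directory whitelist below is just an example path:

```apache
# Honour the X-Sendfile header set by Rails and
# restrict which directories files may be served from
XSendFile On
XSendFilePath /var/www/myapp/files
```

On the Rails side, Rails 3 uses `config.action_dispatch.x_sendfile_header = "X-Sendfile"` (typically set in config/environments/production.rb) to decide which header send_file emits.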
