Thin server QUERY_STRING is longer than the (1024 * 10) allowed length - ruby-on-rails

How can I increase the maximum allowed length of QUERY_STRING when using the thin, puma, or unicorn web server in Rails? I'm attempting to make a POST request to my Rails API that exceeds the limit, and I just need to raise the server's maximum threshold.
Specific error on POST: Invalid request: HTTP element QUERY_STRING is longer than the (1024 * 10) allowed length.
I only came across this question in one other place (HTTP query string length with thin web server), and I couldn't quite make sense of the answer (specifically, where does one find the C file to edit in that answer?).

You'll find thin.c in something like ~/.rvm/gems/ruby-2.2.0/gems/thin-1.6.4/ext/thin_parser
You'll want to change:
DEF_MAX_LENGTH(REQUEST_URI, 1024 * 12);
...
DEF_MAX_LENGTH(QUERY_STRING, (1024 * 10));
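For example, raising both limits might end up looking like this (the 80 KB value is purely illustrative; pick whatever threshold you actually need):
DEF_MAX_LENGTH(REQUEST_URI, 1024 * 80);    /* illustrative value, raised from 1024 * 12 */
...
DEF_MAX_LENGTH(QUERY_STRING, (1024 * 80)); /* illustrative value, raised from 1024 * 10 */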
In this same folder you just need to use the Makefile to rebuild thin_parser.so, and then replace the previous thin_parser.so with the new one in ~/.rvm/gems/ruby-2.2.0/gems/thin-1.6.4/lib (it seems the Makefile does not do that step itself):
make clean && make && cp thin_parser.so ../../lib/
I just got it working that way; hope it helps.

The file in question is /ext/thin_parser/thin.c within the gem source code. To make the change you want, I believe the easiest path would be to fork the gem on GitHub, publish your changes in your fork, and then bundle your version using the git: option in your Gemfile, like:
gem 'thin', git: '<URL to your fork>', branch: '<branch of fork to use>'

Related

How to cache based on size in Varnish?

I've been trying to decide whether to cache based on the response size in Varnish.
Other answers suggested using Content-Length to decide whether or not to cache, but I'm using InfluxDB (Varnish reverse-proxies to it) and it responds with Transfer-Encoding: chunked, which omits the Content-Length header, so I'm not able to figure out the size of the response.
Is there any way I can access the response body size and make that decision in vcl_backend_response?
Cache miss: chunked transfer encoding
When Varnish processes incoming chunks from the origin, it has no idea ahead of time how much data will be received. Varnish streams the data through to the client and stores it byte by byte.
Once the 0\r\n\r\n is received to mark the end of the stream, Varnish will finalize the object storage and calculate the total amount of bytes.
Cache hit: content length
The next time the object is requested, Varnish no longer needs to use Chunked Transfer Encoding, because it has the full object in cache and knows the size. At that point a Content-Length header is part of the response, but this header is not accessible in VCL because it seems to be generated after sub vcl_deliver {} is executed.
Remove objects after the fact
It is possible to remove objects after the fact by monitoring their size through VSL.
The following command looks at the backend request accounting field of the VSL output and checks the total size. If the size is greater than 5 MB, it generates output:
varnishlog -g request -i berequrl -q "BereqAcct[5] > 5242880"
Here's some potential output:
* << Request >> 98330
** << BeReq >> 98331
-- BereqURL /
At that point, you know that the / resource is bigger than 5 MB. You can then attempt to remove it from the cache using the following command:
varnishadm ban "obj.http.x-url == / && obj.http.x-host == domain.com"
Replace domain.com with the actual hostname of your service and set / to the URL of the actual endpoint you're trying to remove from the cache.
Don't forget to add the following code to your VCL file to ensure that the x-url and x-host headers are available:
sub vcl_backend_response {
    # Copy the backend request URL and host onto the cached object
    # so the ban expression above can match them.
    set beresp.http.x-url = bereq.url;
    set beresp.http.x-host = bereq.http.host;
}

sub vcl_deliver {
    # Strip the internal headers before the response goes to the client.
    unset resp.http.x-url;
    unset resp.http.x-host;
}
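If you want to automate the ban step, a rough sketch (not from the original answer) could pipe the log output straight into varnishadm; it assumes the x-url/x-host headers above are in place and, like the command above, uses domain.com as a placeholder host:
# Sketch only: ban any object whose backend response was larger than 5 MB.
# Relies on the x-url/x-host headers set in the VCL above.
varnishlog -g request -i berequrl -q "BereqAcct[5] > 5242880" \
  | awk '$2 == "BereqURL" { print $3; fflush() }' \
  | while read -r url; do
      varnishadm ban "obj.http.x-url == $url && obj.http.x-host == domain.com"
    done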
Conclusion
There's no turn-key way to access the size of the response body in VCL, so the somewhat hacky approach of removing objects after the fact is the only thing I can think of.

How to select the preferred file transfer method?

I think I have a problem with my Prosody configuration. When I send files (for example photos) larger than roughly 2 or 3 megabytes (as I established experimentally) using Conversations 2.* (an Android IM app), the files are transferred over a peer-to-peer connection instead of being uploaded to the server and sent as a link to my contact. Small files transfer fine using HTTP upload, and I couldn't find a reason for this behavior.
Here are the lines for the http_upload module from my config, which I took from the official documentation (where I found no setting for turning off peer-to-peer file transfer):
http_upload_file_size_limit = 536870912 -- 512 MB in bytes
http_upload_expire_after = 604800 -- 60 * 60 * 24 * 7
http_upload_quota = 10737418240 -- 10 GB
http_upload_path = "/var/lib/prosody"
And this is my full config: https://pastebin.com/V6DNYrhe
> Small files are transferred well using http upload. And I couldn't find a reason for such behavior.
TL;DR: You put the options in the wrong place, so the default 1 MB limit applies. This limit is advertised to clients so they know about it and can use more efficient p2p transfer methods for very large files.
> http_upload_path = "/var/lib/prosody"
This line makes Prosody's data directory public, allowing anyone easy access to all user data. You really don't want to do that. You are lucky you did not put it in the correct section.
> And this is my full config: https://pastebin.com/V6DNYrhe
"http_upload" is in the global modules_enabled list which will load
it onto all VirtualHost(s).
You have added options to the end of the config file, putting them under
a Component section. That makes those options only apply to that
Component.
Thus, the VirtualHost where mod_http_upload is loaded sees no options
set and will use the defaults.
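For illustration, a minimal sketch of where such options would need to go instead, assuming http_upload stays in the global modules_enabled list (the host name and values here are placeholders, not taken from the pastebin config):
-- Global section: options set here apply to every VirtualHost,
-- including mod_http_upload loaded via the global modules_enabled list.
http_upload_file_size_limit = 10485760  -- 10 MB placeholder; see the note on the 10M cap below
http_upload_expire_after = 604800       -- one week

VirtualHost "example.com"
    -- host-specific settings follow here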
> http_upload_file_size_limit = 536870912 -- 512 MB in bytes
Don't do this. Prosody's built-in HTTP server is not optimized for very large uploads. There is a safety limit on HTTP request size that will cap the HTTP upload size limit at 10M to prevent DoS attacks.
While that limit can be changed, I would strongly suggest you look at https://modules.prosody.im/mod_http_upload_external.html instead.

Play 2.6, URI length exceeds the configured limit of 2048 characters

I am trying to migrate a Play 2.5 application to 2.6.2. I keep getting the "URI length exceeds the configured limit" error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck:
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The play.server prefix is only used for a small subset of convenience features of the Akka HTTP integration in the Play framework, e.g. play.server.akka.requestTimeout. Those are documented in the "Configuring the Akka HTTP server backend" documentation.
I was getting this error because the header length exceeded the default of 8 KB (8192 bytes). Adding the following to build.sbt worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try something similar for the URI length if the other options don't work, as sketched below.
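A sketch of the analogous flag for the URI length (untested here; it simply mirrors the header flag above, using the max-uri-length setting mentioned elsewhere in this thread):
javaOptions += "-Dakka.http.parsing.max-uri-length=16k"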
This took me way too long to figure out. It is somehow NOT to be found in the documentation.
Here is a snippet (confirmed working with Play 2.8) to put in your application.conf; it is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then edit the config, or with an already-deployed application simply set PLAY_MAX_URI_LENGTH; the limit becomes configurable without the need to modify command-line arguments.
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in (HTTP header value exceeds the configured limit of 8192 characters): go to Chrome Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites, search for the specific website, and for that site click Clear all data.

How to tune Garbage Collection in Ruby?

I am working on a Ruby project, and I used the tunemygc gem to get some optimal settings for my app:
RUBY_GC_HEAP_INIT_SLOTS 220886
RUBY_GC_HEAP_FREE_SLOTS 3378483
RUBY_GC_HEAP_GROWTH_FACTOR 1.03
RUBY_GC_HEAP_GROWTH_MAX_SLOTS 478
RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR 2.0
RUBY_GC_MALLOC_LIMIT 16777216
RUBY_GC_MALLOC_LIMIT_MAX 30198989
RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR 1.32
RUBY_GC_OLDMALLOC_LIMIT 16777216
RUBY_GC_OLDMALLOC_LIMIT_MAX 30198989
RUBY_GC_OLDMALLOC_LIMIT_GROWTH_FACTOR 1.2
But I don't know how to configure my garbage collection with these settings.
Set those as environment variables on your server so that they are available to the Ruby process when it starts. As in:
export RUBY_GC_HEAP_INIT_SLOTS=220886
...
Then start your ruby app
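For completeness, a sketch using all of the values from the question (the final start command is only a placeholder for however you normally launch the app):
export RUBY_GC_HEAP_INIT_SLOTS=220886
export RUBY_GC_HEAP_FREE_SLOTS=3378483
export RUBY_GC_HEAP_GROWTH_FACTOR=1.03
export RUBY_GC_HEAP_GROWTH_MAX_SLOTS=478
export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=2.0
export RUBY_GC_MALLOC_LIMIT=16777216
export RUBY_GC_MALLOC_LIMIT_MAX=30198989
export RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR=1.32
export RUBY_GC_OLDMALLOC_LIMIT=16777216
export RUBY_GC_OLDMALLOC_LIMIT_MAX=30198989
export RUBY_GC_OLDMALLOC_LIMIT_GROWTH_FACTOR=1.2
bundle exec rails server   # placeholder: start your app however you normally do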
If your app is on Heroku, you can also use the free Heroku add-on and apply all the recommended settings with just one button.

Uploading a file larger than 2GB using PHP

I'm trying to upload a file larger than 2GB to a local PHP 5.3.4 server. I've set the following server variables:
memory_limit = -1
post_max_size = 9G
upload_max_filesize = 5G
However, in the error_log I found:
PHP Warning: POST Content-Length of 2120909412 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
Can anyone tell me why this keeps failing please?
I had a similar problem, but my config was:
post_max_size = 1.8G
upload_max_filesize = 1.8G
and yet I could not upload a 1.2GB file. The error was the very same:
PHP Warning: POST Content-Length of 1347484420 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
I spent a day wondering where the heck this "limit of 1073741824" was coming from!
Solution:
Actually, the error was in the php.ini parser: it only understands INTEGER numbers, so it was essentially parsing 1.8G as 1G!
Changing the value to e.g. 1800M fixed it.
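In other words, the working values ended up looking something like this (my reconstruction based on the fix described above):
; use integer-style values -- the php.ini parser does not understand 1.8G
post_max_size = 1800M
upload_max_filesize = 1800M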
Please also make sure to restart the Apache server afterwards with the command below:
service apache2 restart
I don't know about 5.3.x, but in 5.2.x there are some int/long issues in the PHP code. Even if you're on a 64-bit system and have a version of PHP compiled as 64-bit, there are several problems.
First, the code that converts post_max_size and the other settings from ASCII to integer stores the value in an int, so converting "9G" and putting the result into this int will bork the value, because 9G is a larger number than a 32-bit variable can hold.
But there are also several other areas of the PHP code used by the Apache module, CGI, etc. that need to be changed from int to long.
So... for this to work, you need to edit the PHP code and compile it by hand (make sure you compile it as 64-bit). Here's a link to a list of diffs:
http://www.archive.org/~tracey/downloads/patches/karmic-64bit-post-large-files.patch
Referenced from this PHP bug report: http://bugs.php.net/bug.php?id=44522
The file above is a diff against the 5.2.10 code, but I just made the changes by hand to 5.2.17 and uploaded a single 3.4 GB file through Apache/PHP (which hadn't worked before the change).
Hope that helps.
I figured out how to use HTTP and PHP to upload a 10 GB file.
php.ini:
post_max_size = 0
upload_max_filesize = 0
This works in PHP 5.3.10.
If you do not load the whole file into memory, memory_limit is irrelevant.
Maybe this comes from Apache's limitation on POST size:
http://httpd.apache.org/docs/current/mod/core.html#limitrequestbody
It seems this 2 GB limitation can be higher on 64-bit installations, and I'm not sure that setting 0 in this directive avoids hitting a compile-time limit; see for example this thread:
http://ubuntuforums.org/archive/index.php/t-1385890.html
Then do not forget to also raise max_input_time in PHP.
But you are reaching high limits :-) Maybe you could try a rich client (Flash? JS?) on the browser side, doing the transfer in chunks or via some sort of FTP mechanism, with progress indicators for the user.
As phliKtid mentioned, this is a limitation in PHP itself. Short of editing the source code as described in the bug report phliKtid linked, there is a workaround that involves setting upload_max_filesize to 0 in the php.ini file.
; Maximum allowed size for uploaded files.
; http://php.net/upload-max-filesize
upload_max_filesize = 0
By doing this, PHP will not crash when trying to convert "5G" into a 32-bit integer and you will be able to upload files as big as you allow with the "post_max_size" variable.
We've had the same problem: uploads stopped at 2GB.
Under SLES (SUSE Linux Enterprise Server) 11 SP 2, php53 was the problem.
Then we added a new repository that has php54:
http://download.opensuse.org/repositories/server:/php/SLE_11_SP2/
After upgrading to that, we can now upload 5 GB files :-)
