How to set the max length of a web service parameter - ruby-on-rails

The system's configuration is:
web server: WEBrick
software environment: Ruby on Rails
When the browser passes more than 400 bytes of parameters to the server, the server returns a 414 (Request-URI Too Large) status code.
How can this problem be solved?

Ryan Bates answered your question here: https://github.com/intridea/omniauth/issues/43
In short: use mongrel in development.
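If you must stay on WEBrick, a commonly suggested workaround is to raise its hard-coded request-line cap. A minimal sketch, assuming your WEBrick version defines WEBrick::HTTPRequest::MAX_URI_LENGTH (the initializer path is hypothetical):
# config/initializers/webrick_uri_length.rb (hypothetical path)
# WEBrick answers 414 Request-URI Too Large when the request line
# exceeds MAX_URI_LENGTH bytes; redefining the constant lifts the cap.
# This pokes at stdlib internals, so prefer mongrel or thin if you can.
require "webrick"

if defined?(WEBrick::HTTPRequest::MAX_URI_LENGTH)
  WEBrick::HTTPRequest.send(:remove_const, :MAX_URI_LENGTH)
  WEBrick::HTTPRequest.const_set(:MAX_URI_LENGTH, 10 * 1024)
end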
A few useful details:
URI limits vary by client, by server, and even by browser.
Browsers
IE has a limit of around 2 KB; Firefox, around 65 KB. Since API calls are usually triggered from servers rather than browsers, this is not that annoying.
Servers
Nginx's default limit is 4 KB on 32-bit systems and 8 KB on 64-bit systems.
Apache's default is 8190 bytes.
Both can be changed in their configuration files; see the snippet below.
Source: the excellent 'Service Oriented Design with RoR'
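For illustration, hedged examples of the directives involved (the values are arbitrary examples, not recommendations):
# nginx.conf: allow four 16 KB buffers for long request lines and headers
large_client_header_buffers 4 16k;

# httpd.conf: raise Apache's request-line limit (default 8190 bytes)
LimitRequestLine 16384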

Related

Where does the HTTP protocol rest in the Rails framework?

I just wanted to know where the HTTP layer rests in Rails, and how one could implement a different protocol for client-server communication using a different network layer. There's a new protocol called QUIC which has low latency; if somebody wants to implement that in a Rails app, how would they do it? I could hardly find any resources related to the implementation on the internet.
At a guess, this would be handled by the Rack middleware that sits between the web server and your Rails code. Your Rails application does not interact with the web server directly; rather, it interacts with Rack, which interacts with your web server.
Rails <---> Rack <---> Web Server <---> Web Client
Here is a tiny Rack server that says "Hello, world!".
require "rack"
require "thin"
class HelloWorld
def call(env)
[ 200, { "Content-Type" => "text/plain" }, ["Hello World"] ]
end
end
Rack::Handler::Thin.run HelloWorld.new
Rack::Handler::Thin talks to the tiny Thin web server, passing it a response consisting of an HTTP status code, HTTP headers, and the response body.
You may be in luck. The LiteSpeed web server supports QUIC and Rack has a LiteSpeed handler. It might Just Work.
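A hedged sketch, assuming your Rack version ships the LSWS handler (it depends on the lsapi gem and only works when the process is launched by LiteSpeed):
require "rack"

# The same HelloWorld app as above, handed to LiteSpeed's Rack handler.
class HelloWorld
  def call(env)
    [200, { "Content-Type" => "text/plain" }, ["Hello World"]]
  end
end

Rack::Handler::LSWS.run HelloWorld.new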
As discussed in the comments, QUIC is not yet formally standardised, so it is unsurprising that it is not available in most tools. None of the major web servers (e.g. Apache, Nginx or IIS) have even indicated they are working on it yet. The specification is due to be completed in July and then submitted for standardisation, which will take a few months after that. After that I would expect implementations to start becoming available.
Google invented QUIC and does have a version on its servers and in Chrome. This forms the basis of the QUIC that will be standardised, but the two have diverged and are not compatible, so you could implement a version of Google QUIC if you wanted to. Some servers like LiteSpeed and some CDNs like Akamai do this, as does Google itself on its Cloud Platform. They basically do this by reverse engineering the open-source Google Chrome code. And as Google iterates QUIC and stops supporting old versions, they must keep up or it will stop working. Eventually Google QUIC will be deprecated and then retired once the IETF-standardised QUIC comes out.
QUIC is also incredibly complex! Implementing it will not be easy and will take considerable effort and time. It is not as simple as finding the HTTP code, copying and pasting it, and changing a couple of things. It's a massive, whole new protocol that reimplements parts of TCP, TLS and HTTP/2. HTTP/3 is then what is left over from HTTP/2, and it needs to be implemented as well as QUIC for the stack to be useful.
Finally, the impact of QUIC might not be as large as you think. QUIC is an evolution of HTTP/2 and fixes one edge case where HTTP/2 can be slower than HTTP/1.1: very high packet loss. Apart from this scenario, the initial versions of QUIC will be very similar to HTTP/2 and TLSv1.3, which are available now. One of the main reasons for QUIC is to allow the protocol to evolve quickly, as TCP is almost impossible to change because it is so baked in. Future versions of QUIC will likely include forward error correction (to automatically recreate dropped packets), connection migration (to let you seamlessly switch from WiFi to mobile) and availability for more than HTTP, but those are out of scope for the initial version as defined by the QUIC working group charter, because even without them QUIC is complicated. Additionally, TCP is highly optimised in operating systems and network stacks, so QUIC will likely be more CPU-expensive and slower, especially initially, and there may be other issues to solve as well.
So, all in all, if you want QUIC now, then look at one of those web servers or CDNs or Google Cloud Platform and put it in front of your application server. Like HTTP/2, this usually gives the main benefits and means you don't need to worry about all of the above complications. But to me, QUIC is one to watch for the future, not something I'd want to turn on for now.
If interested in learning more about HTTP/2, HTTP/3 and QUIC, and some of the complexities, then you can check out my book on the subject: https://www.manning.com/books/http2-in-action

WampServer stops responding to live requests after a few minutes

I have set up WampServer with a live IP on Windows 8, but after a few minutes of usage it stops responding to requests; the browser shows the connecting progress bar and it just goes on.
Meanwhile, on the same machine, when I access the Apache server via localhost, it works fine.
Any quick help to figure out this issue will be highly appreciated. More info can be provided on request.
You could try adding these two directives to your httpd.conf file; doing so has fixed similar issues in the past.
httpd.conf File
Add this section
# AcceptFilter: On Windows, none uses accept() rather than AcceptEx() and
# will not recycle sockets between connections. This is useful for network
# adapters with broken driver support, as well as some virtual network
# providers such as vpn drivers, or spam, virus or spyware filters.
AcceptFilter http none
AcceptFilter https none
Just before this line (approx. line 480):
# Supplemental configuration
It is also worth noting that Windows 8 has a maximum external connection limit, as do all Windows desktop OSes. The Windows 8 limit is 20 connections, and in earlier versions the limit is smaller.
Basically, if you are trying to run a server such as Apache on a desktop OS, it works just great for a single-developer situation. It does not, however, work well under any kind of multi-user load, as Windows desktop OSes are not configured like a Windows Server. Maybe you should put WampServer on a proper server OS in order to test multi-user loads.

Invalid Content-Length error while uploading 2 GB of stream

When trying to upload 2 GB of stream I got an invalid Content-Length error.
I am running Apache as a frontend server to Mongrel, and Chrome as my browser.
One more thing: when I do it with Mongrel alone I am able to upload this 2 GB of stream. Could anybody tell me what the problem is, and how do I configure the content-length limit in Apache?
I'd imagine the problem is that Apache considers it a denial-of-service attempt and has a cap to prevent you locking up all the server's resources (very reasonable); whether that's configurable or not I'm not sure - can't find anything, but will keep hunting. If it's not, you can always build your own copy of Apache with the limit removed.
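For reference, Apache does expose a request-body cap through the LimitRequestBody directive; a hedged httpd.conf sketch follows. Note the 2 GB failure may instead come from a 32-bit Apache build parsing Content-Length into a signed 32-bit integer, which no directive will fix.
# httpd.conf: cap the allowed request body size in bytes.
# 0 means unlimited, which is also the compiled-in default.
LimitRequestBody 0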
Have you considered sending the data in reasonable-sized chunks?
You might also wish to inspect the outgoing request using packet inspection or browser debugging tools. It's possible the Content-Length header is being malformed by the browser (I doubt they have a 2 GB test case...)

Limit upload speed for testing on lighttpd

I'm implementing Uber Uploader. It uses Perl and PHP to upload files with a progress bar. I'm running a lighttpd development server and would like to test it fully. Currently it just transfers the files instantly, since it's really just moving files on my computer. Is there a way to make it seem like the transfer is slow, so I can watch the progress bar?
I tried adding the following to my lighttpd.conf. It may have slowed down page loads a little, but uploads are still instantaneous.
$HTTP["host"] == "localhost" {
server.kbytes-per-second = 8
}
Thanks
Instead of throttling things on the server side, you could try throttling your client machine. There's a nice article on how to throttle bandwidth on Macs over at O'Reilly:
Exploring the Mac OS X firewall
ipfw is a BSD thing, but on Linux you could try using the shaper module and shapecfg:
Traffic Shaping Basics
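The shaper module and shapecfg are quite dated; on modern Linux kernels the same effect is usually achieved with tc from iproute2. A hedged sketch (the interface name eth0 is an assumption):
# Throttle all outgoing traffic on eth0 to roughly 64 kbit/s
# using a token bucket filter (tbf).
tc qdisc add dev eth0 root tbf rate 64kbit burst 32kbit latency 400ms

# Remove the throttle again when you're done testing.
tc qdisc del dev eth0 root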
$HTTP['host'] contains the host of the server. You could put the config variable in the configuration file without the host check.
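For example (hedged; lighttpd also offers a per-connection variant, and note that both directives throttle the server's output rather than client uploads, which may be why the upload itself still completes instantly):
# Apply globally, without the host conditional.
server.kbytes-per-second = 8

# Or throttle each connection individually.
connection.kbytes-per-second = 8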
Thanks for the help! Actually, I'm dual-booting and just tested my exact script on my Apache server. When I transfer a 200 MB file on Apache, it actually displays the progress bar as the file transfers. On my lighttpd server, the page is "busy" as it posts the file in the background, then the bar pops up as 100% complete.
I think the way the script works is that CGI posts the file, and as it does so it keeps writing the number of bytes written into another file. Then a PHP script is called every second, which opens this file and looks at how much has been written.
It seems like my lighttpd server is not allowing Perl and PHP to work at the same time... I may be wrong though.
On my Windows server I actually installed WAMP and Perl. My lighttpd is using FastCGI for the PHP and just the mod_cgi module for the Perl scripts.
Ah, it looks like other people have issues with lighttpd and Uber Uploader...
(can't link to it since I'm new)
Now the question is whether lighttpd is worth using, since I'll have to change this on top of all my mod_rewrite stuff.
Try using Charles: http://www.charlesproxy.com/
You can limit your browser bandwidth by using the Sloppy HTTP proxy: http://www.dallaway.com/sloppy/
Sloppy deliberately slows the transfer of data between client and server.
Example usage: you probably build web sites on your local network, which is fast. Using Sloppy is one way to get the "dial-up experience" of your work without the hassle of having to install a modem.

Proxy choices: mod_proxy_balancer, nginx + proxy balancer, haproxy?

We're running a Rails site at http://hansard.millbanksystems.com on a dedicated Accelerator. We currently have Apache set up with mod_proxy_balancer, proxying to four Mongrels running the application.
Some requests are rather slow and in order to prevent the situation where other requests get queued up behind them, we're considering options for proxying that will direct requests to an idle mongrel if there is one.
Options appear to include:
recompiling mod_proxy_balancer for Apache as described at http://labs.reevoo.com/
compiling nginx with the fair proxy balancer for Solaris
compiling haproxy for OpenSolaris (although this may not work well with SMF)
Are these reasonable options? Have we missed anything obvious? We'd be very grateful for your advice.
Apache is a bit of a strange beast to use for your balancing. It's certainly capable but it's like using a tank to do the shopping.
Haproxy/Nginx are more specifically tailored for the job. You should get higher throughput and use fewer resources at the same time.
HAProxy offers a much richer set of features for load-balancing than mod_proxy_balancer, nginx, and pretty much any other software out there.
In particular for your situation, the log output is highly customisable so it should be much easier to identify when, where and why slow requests occur.
Also, there are a few different load distribution algorithms available, with nice automatic failover capabilities too.
37Signals have a post on Rails and HAProxy here (originally seen here).
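For this exact problem, a hedged haproxy.cfg sketch (addresses and ports are assumptions): giving each Mongrel maxconn 1 makes HAProxy queue requests centrally and dispatch each one only to an idle backend.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend mongrels

backend mongrels
    # Prefer the backend with the fewest active connections...
    balance leastconn
    # ...and never send a Mongrel more than one request at a time;
    # excess requests wait in HAProxy's queue for an idle backend.
    server mongrel1 127.0.0.1:8000 maxconn 1 check
    server mongrel2 127.0.0.1:8001 maxconn 1 check
    server mongrel3 127.0.0.1:8002 maxconn 1 check
    server mongrel4 127.0.0.1:8003 maxconn 1 check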
If you want to avoid Apache, it is possible to deploy a Mongrel cluster with an alternative web server, such as nginx or lighttpd, and a load balancer of some variety such as Pound, or a hardware-based solution.
Pound (http://www.apsis.ch/pound/) worked well for me!
The only issue with haproxy and SMF is that you can't use its soft-restart feature to implement the 'refresh' action, unless you write a wrapper script. I wrote about that in a bit more detail here.
However, IME haproxy has been absolutely bomb-proof on Solaris, and I would recommend it highly. We ship anything from a few hundred GB to a couple of TB a day through a single haproxy instance on Solaris 10, and so far (touch wood) in 2+ years of operation we've not had any problems with it.
Pound is an HTTP load balancer that I've used successfully in the past. It includes a dynamic scaling feature that may help with your specific problem:
DynScale (0|1): Enable or disable the dynamic rescaling code (default: 0). If enabled, Pound will periodically try to modify the back-end priorities in order to equalise the response times from the various back-ends. This value can be overridden for specific services.
Pound is small, well documented, and easy to configure.
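A hedged sketch of a minimal Pound configuration along those lines (addresses and ports are assumptions):
ListenHTTP
    Address 0.0.0.0
    Port    80
End

Service
    # Periodically rebalance back-end priorities to equalise response times.
    DynScale 1
    BackEnd
        Address 127.0.0.1
        Port    8000
    End
    BackEnd
        Address 127.0.0.1
        Port    8001
    End
End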
I've used mod_proxy_balancer + mongrel_cluster successfully (small traffic website).
