Restrict maximum permitted request and response time - ruby-on-rails

Is it possible within the Rails app (in Rack?) to restrict the maximum permitted duration of HTTP(S) requests and responses, like a forced timeout?
Long-running HTTP POST requests can be exploited for denial-of-service (DoS) attacks.
Rails 5.2.3, Ruby 2.6.5
The app is hosted on AWS, in Elastic Beanstalk behind an Application Load Balancer (ALB). I thought configuring this within the Rails app (for production) would be cleanest if possible, but otherwise wherever it can be done: the ALB, Puma, etc.

I don't think it's possible in Rails itself.
Check out https://github.com/sharpstone/rack-timeout. You can also set Puma's worker_timeout, though the Puma documentation explicitly says it does not protect against slow requests (but see https://github.com/puma/puma/issues/1024).
A timeout is definitely available as a configuration on the ALB (the idle timeout setting)!
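For the rack-timeout route, a minimal sketch of what the Rails-side setup could look like. The 15-second value and the initializer path are only examples; requiring rack/timeout/base disables the gem's automatic middleware insertion so you can insert and configure it yourself (newer versions can also be configured purely through RACK_TIMEOUT_* environment variables):

# Gemfile -- require rack/timeout/base so the gem does not insert itself
gem "rack-timeout", require: "rack/timeout/base"

# config/initializers/rack_timeout.rb
# Abort any request that has been in service longer than 15 seconds
# (example value; keep it below the ALB idle timeout).
Rails.application.config.middleware.insert_before(
  Rack::Runtime, Rack::Timeout, service_timeout: 15
)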

Related

Rails 3: How to stop logger from logging /healthcheck requests?

I have a Rails app on 2 servers behind a load balancer. The load balancer makes requests to /healthcheck every few seconds to make sure the servers are still available, but this is filling up my production logs. Is there a way to filter out requests to healthcheck#healthcheck?
quiet_assets silences logging of asset requests in development. I'm sure you could do something similar in production for healthchecks.
https://github.com/evrone/quiet_assets/blob/master/lib/quiet_assets.rb
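Roughly what quiet_assets does, adapted for a healthcheck path, might look like the middleware below. This is only a sketch, assuming the endpoint is /healthcheck and that your Rails logger responds to silence (class and file names are made up for the example):

# lib/quiet_healthcheck.rb
class QuietHealthcheck
  def initialize(app)
    @app = app
  end

  def call(env)
    if env['PATH_INFO'] == '/healthcheck'
      # Suppress normal request logging for just this request
      Rails.logger.silence { @app.call(env) }
    else
      @app.call(env)
    end
  end
end

# config/application.rb
# config.middleware.insert_before Rails::Rack::Logger, QuietHealthcheck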

Dealing with long requests in Unicorn + Rails app

We have a Rails app that we run on Unicorn (2 workers) and nginx. We want to integrate a 3rd-party API where processing a single request takes between 1 and 20 seconds. If we simply create a new controller that proxies to that service, the entire app suffers: it only takes 2 people making requests to that service via our API and, for 20 seconds, the rest of the users can't access the rest of our app.
We're thinking about 2 solutions:
1. Create a separate node.js server that will do all of the requests to the 3rd-party API. We would only use Rails for authentication/authorization in this case, and we would redirect the requests to node via nginx using the X-Accel-Redirect header (as described here: http://blog.bitbucket.org/2012/08/24/segregating-services/).
2. Replace Unicorn with Thin or Rainbows! and keep the proxying in our Rails app, which could then, presumably, handle many more concurrent connections.
Which solution would we be better off with? Or is there something else we could do?
I personally feel that node's event loop is better suited for the job here, because in option 2 we would still be blocking many threads waiting for HTTP requests to finish, whereas in option 1 we could be serving more requests while waiting for the slow ones to finish.
Thanks!
We've been using the X-Accel-Redirect solution in production for a while now and it's working great.
In the nginx config, under the server block, we have entries for the external services (written in node.js in our case), e.g.
server {
  ...
  location ^~ /some-service {
    internal;
    rewrite ^/some-service/(.*)$ /$1 break;
    proxy_pass http://location-of-some-service:5000;
  }
}
In Rails we authenticate and authorize the request, and when we want to pass it on to another service, in the controller we do something like
headers['X-Accel-Redirect'] = '/some-service'
render :nothing => true
Now Rails is done processing the request and hands it back to nginx. nginx sees the X-Accel-Redirect header and replays the request to the new URL, /some-service, which we configured to proxy to our node.js service. Unicorn and Rails can now process new requests even while node.js+nginx is still handling the original one.
This way we're using Rails as our main entry point and gatekeeper of our application - that's where authentication and authorization happens. But we were able to move a lot of functionality into these smaller, standalone node.js services when that's more appropriate.
You can use EventMachine in your existing Rails app, which would mean much less rewriting. Instead of making a net/http request to the API, you would make an EM::HttpRequest request to the API and add a callback. This is similar to the node.js option but does not require a separate server, IMO.
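A minimal sketch of what that could look like with the em-http-request gem (the API URL and the surrounding EM.run block are illustrative assumptions; inside an EventMachine-based server such as Thin you would already be in the reactor loop):

require 'em-http-request'

EM.run do
  # Hypothetical slow 3rd-party endpoint; the request does not block the reactor
  http = EventMachine::HttpRequest.new('http://third-party-api.example.com/slow').get

  http.callback do
    puts "Got response: #{http.response_header.status}"
    EM.stop
  end

  http.errback do
    puts "Request failed: #{http.error}"
    EM.stop
  end
end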

Rails development: how to respond to several requests at once?

I have inherited the maintenance of a legacy web-application with an "interesting" way to manage concurrent access to the database.
The application is based on ruby-on-rails 2.3.8.
I'd like to set up a development environment and from there have two web browsers make simultaneous requests, just to get the gist of what is going on.
Of course this is not going to work if I use WEBrick, since it services just one HTTP request at a time, so all the requests are effectively serialized by it.
I thought that mongrel could help me, but
mongrel_rails start -n 5
is actually spawning a single process and it seems to be single-threaded, too.
What is the easiest way of setting my development environment so that it responds to more than one request at a time? I'd like to avoid using apache and mod_passenger because, this being development, I'd like to be able to change the code and have it reloaded automatically on the next request.
In development mode, mod_passenger does reload classes and views. I use passenger exclusively for both development and deployment.
In production, you can (from the root of the rails app):
touch tmp/restart.txt
and passenger will reload the app.
Take a look at thin
http://code.macournoyer.com/thin/
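One low-effort way to exercise concurrent requests with Thin (assuming the thin gem's --servers flag) is to start a couple of instances on consecutive ports and point each browser at a different port:

thin start --servers 2 --port 3000 --environment development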

How to prevent abusive crawlers from crawling a rails app deployed on Heroku?

I want to restrict crawler access to my Rails app running on Heroku. This would have been a straightforward task if I were using Apache or nginx, but since the app is deployed on Heroku I am not sure how I can restrict access at the HTTP-server level.
I have tried using a robots.txt file, but the offending crawlers don't honor robots.txt.
These are the solutions I am considering:
1) A before_filter in the rails layer to restrict access.
2) Rack based solution to restrict access
I am wondering if there are any better ways to deal with this problem.
I have read about honeypot solutions: you have one URI that must not be crawled (list it as disallowed in robots.txt). If any IP requests this URI, block it. I'd implement it as Rack middleware so the hit does not go through the full Rails stack.
Sorry, I googled around but could not find the original article.
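A rough sketch of that idea as Rack middleware (the /trap path and the in-memory blocklist are assumptions; on Heroku you'd want to persist the list somewhere shared, e.g. Redis or memcached, since dynos restart and there may be several of them):

require 'set'

class CrawlerHoneypot
  BLOCKED_IPS = Set.new

  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)

    # Any client that requests the honeypot URI (disallowed in robots.txt)
    # gets its IP added to the blocklist.
    BLOCKED_IPS << request.ip if request.path == '/trap'

    if BLOCKED_IPS.include?(request.ip)
      [403, { 'Content-Type' => 'text/plain' }, ['Forbidden']]
    else
      @app.call(env)
    end
  end
end

# config/application.rb:
# config.middleware.insert_before Rails::Rack::Logger, CrawlerHoneypot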

Help debugging Apache, Passenger and Rails problem

We have an environment running Apache, Passenger and Rails. The system is handling most requests normally, yet certain requests never make it to the Rails application. For instance, a request to /books is successful, but /books/1 hits Apache and Passenger and never reaches Rails.
We set the Apache log level to debug and the Passenger log level to 3 so that we could monitor all incoming requests. We can see each request coming through, and even the /books/1 request is being handled by Passenger, but it never gets to Rails.
Is there any way to determine where the request goes between Passenger and Rails, or where debugging information might live? Has anyone ever seen problems with Passenger spawning or queuing? We have spawning set to conservative. Also, we have had some permission/ownership problems in the past, so I am not ruling that out yet.
Thanks in advance
First guess: it's serving your page cache at public/books/1.html. This fits all the symptoms: if there is a public/books/1.html file when you request /books/1, Apache will serve that file directly without ever touching Rails.
Second guess: some alternate configuration is interfering with how Apache serves the route.
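For the first guess, a hypothetical illustration of how such a file could end up in public/ via Rails page caching (controller and action names are assumed for the example):

# app/controllers/books_controller.rb
class BooksController < ApplicationController
  # Page caching writes public/books/1.html after the first request;
  # Apache then serves that static file directly, bypassing Rails entirely.
  caches_page :show

  def show
    @book = Book.find(params[:id])
  end
end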