How to deploy a threadsafe asynchronous Rails app?

I've read tons of material around the web about thread safety and performance in different versions of ruby and rails and I think I understand those things quite well at this point.
What seems to be oddly missing from the discussions is how to actually deploy an asynchronous Rails app. When talking about threads and synchronicity in an app, there are two things people want to optimize:
utilizing all CPU cores with minimal RAM usage
being able to serve new requests while previous requests are waiting on IO
Point 1 is where people get (rightly) excited about JRuby. For this question I am only trying to optimize point 2.
Say this is the only controller in my app:
class TheController < ActionController::Base
  def fast
    render :text => "hello"
  end

  def slow
    render :text => User.count.to_s
  end
end
fast has no IO and can serve hundreds or thousands of requests per second, and slow has to send a request over the network, wait for work to be done, then receive the answer over the network, and is therefore much slower than fast.
So an ideal deployment would allow hundreds of requests to fast to be fulfilled while a request to slow is waiting on IO.
What seems to be missing from the discussions around the web is which layer of the stack is responsible for enabling this concurrency. thin has a --threaded flag, which will "Call the Rack application in threads [experimental]" -- does that start a new thread for each incoming request? Or does it spool up rack app instances in threads that persist and wait for incoming requests?
Is thin the only way or are there others? Does the ruby runtime matter for optimizing point 2?

The right approach for you depends heavily on what your slow method is doing.
In a perfect world, you could use something like the sinatra-synchrony gem to handle each request in a fiber. You'd only be limited by the maximum number of fibers. Unfortunately, the stack size on fibers is hardcoded, and it is easy to overrun in a Rails app. Additionally, I've read a few horror stories of the difficulties of debugging fibers, due to the automatic yielding after async IO has been initiated. Race conditions are still possible when using fibers, as well. Currently, fibered Ruby is a bit of a ghetto, at least on the front-end of a web app.
A more pragmatic solution that doesn't require code changes is to use a Rack server that has a pool of worker threads, such as Rainbows! or Puma. I believe Thin's --threaded flag handles each request in a new thread, but spinning up a native OS thread is not cheap. Better to use a thread pool with the pool size set sufficiently high. In Rails, don't forget to set config.threadsafe! in production.
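For concreteness, here is a minimal sketch of that setup (assuming a Rails 3.x-era app; the application name and thread counts are illustrative, not a recommendation):

# config/environments/production.rb
MyApp::Application.configure do
  # Removes the Rack::Lock middleware so the server may run requests
  # in parallel threads.
  config.threadsafe!
end

# Then start a threaded server with a generous pool, for example:
#   puma -t 8:32 -e production    (8 to 32 threads; tune for your workload)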
If you're OK with changing code, you can check out Konstantin Haase's excellent talk on real-time Rack. He discusses using the EventMachine::Deferrable class to produce a response outside of the traditional request/response cycle that Rack is built on. This seems really neat, but you have to rewrite the code in an async style.
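To give a flavor of that async style, here is a hedged sketch of the classic async-Rack pattern commonly used under Thin (not code from the talk; the timer stands in for a non-blocking IO call):

# A bare Rack endpoint written in the async style; run it under Thin,
# which supplies env['async.callback'] and catches :async.
require "eventmachine"

class SlowCounter
  def call(env)
    EM.add_timer(0.5) do   # stand-in for a non-blocking DB or HTTP call
      body = "pretend this is User.count"
      env['async.callback'].call([200, { 'Content-Type' => 'text/plain' }, [body]])
    end
    throw :async           # tell Thin the response will be delivered later
  end
end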
Also take a look at Cramp and Goliath. These let you implement your slow method in a separate Rack app that is hosted alongside your Rails app, but you will probably have to rewrite your code to work in the Cramp/Goliath handlers as well.
As for your question about the Ruby runtime, it also depends on the work that slow is doing. If you're doing CPU-heavy computation, then you run the risk of the GIL giving you issues. If you're doing IO, then the GIL shouldn't get in your way. (I say shouldn't because I believe I've read about the older mysql gem holding the GIL for the duration of a query.)
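As a rough illustration of the IO case, here is a sketch using only the standard library (example.com is just a placeholder endpoint):

require "net/http"
require "benchmark"

urls = Array.new(4) { URI("https://example.com/") }   # placeholder URLs

# One request after another: total time is roughly the sum of the latencies.
sequential = Benchmark.realtime { urls.each { |u| Net::HTTP.get(u) } }

# One thread per request: MRI releases the GIL while each thread waits
# on the socket, so the requests overlap.
threaded = Benchmark.realtime do
  urls.map { |u| Thread.new { Net::HTTP.get(u) } }.each(&:join)
end

puts format("sequential: %.2fs  threaded: %.2fs", sequential, threaded)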
Personally, I've had success using sinatra-synchrony for a backend, mashup web service. I can issue several requests to external web services in parallel, and wait for all of them to return. Meanwhile, the frontend Rails server uses a thread pool, and makes requests directly to the backend. Not perfect, but it works well enough right now.

Related

How many requests can Puma buffer simultaneously?

I understand a benefit of Puma over other Rails web servers is how it handles slow clients. While a Puma server receives and downloads a (potentially slow) request, it can still receive and download other requests that might download quicker and be passed on to a worker for processing before the slow request has finished being received.
But I can't find any information about what, if any, limits there are to this.
Can Puma download any number of requests at the same time? If 1000 slow requests hit it at the same time, would the 1001st request reach a Puma worker first assuming it wasn't a slow request?
I guess what I'm interested in generally is what impact multiple slow requests have on other requests, including each other - because I'm working on an application that's likely to involve plenty of 'slow requests' (image uploads from phones via 3G).
This great article by Nate Berkopec helps explain in principle how Puma helps with slow clients: "In clustered mode, then, Puma can deal with slow requests (thanks to a separate master process whose responsibility it is to download requests and pass them on)..." Any more light anyone can shed would be very welcome.
There are a number of considerations, such as the IO polling system, memory and concurrency concerns.
IO polling system
Edit (Sep. 9th, 2020): By now the Puma server is running on nio4r and should no longer be subject to the limits of the select system call (where file descriptor values are limited to 1023).
As far as I know, Puma uses the select system call (unlike iodine or passenger, which also protect you from slow clients but use kqueue or epoll).
The select system call is limited on most systems (usually up to 1024 clients / maxfd). I would assume that would create a limit.
However, I know Puma is working on replacing the select system call with something both portable and effective (such as leveraging the nio4r gem).
I don't know if that was already achieved, but it will break this limit and probably improve performance.
Memory
Slow clients still consume memory as they slowly fill the buffer with their header data or slowly download the buffered data that was sent (keeping the buffer in memory until the download is complete).
Memory limitations will always add limitations to slow client handling.
Some limitations can be alleviated, such as sending static files using X-Sendfile (supported with iodine, and when Puma or passenger are running under nginx)... but this isn't really something you can fix.
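As an example of the X-Sendfile approach, here is a minimal sketch assuming Puma or Passenger running behind nginx (the file path is made up, and nginx still needs an internal location configured for the download directory):

# config/environments/production.rb
# nginx uses the X-Accel-Redirect header; Apache's mod_xsendfile uses X-Sendfile.
config.action_dispatch.x_sendfile_header = "X-Accel-Redirect"

# In a controller: Rails only emits the header and nginx streams the bytes,
# so a slow download never ties up a Ruby thread.
def download
  send_file Rails.root.join("exports", "report.pdf"), :type => "application/pdf"
end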
Concurrency
Puma handles slow clients within Ruby's GIL (global interpreter lock). This means that no other threads / instructions can execute while Puma is handling slow clients.
This is often a non issue, but a large enough number of slow clients increases the cost of context switching and system calls. This could (potentially) slow a server considerably.
Both Passenger and iodine perform slow client buffering outside of the GIL, allowing these system calls to be truly concurrent (when multiple CPU cores are available).
This will mitigate the issue but not fully solve the issue.
Conclusion and Caveats
The biggest issue is usually the IO polling system. A solution for this is on Puma's roadmap (maybe already implemented, I'm not sure).
The other issues (memory limits and concurrency limits) are relatively less important, but they can't be mitigated without using language extensions (the iodine server is written in C and Passenger is written in C++).
Since Puma doesn't (currently) require any language extensions (except for its integrated HTTP parsers in C and Java), these issues remain.
I should point out that I'm the author for the iodine HTTP/Websocket server, so I'm somewhat biased.

Recurring job to check if url exists

I want to build a service that notifies me when a URL returns status 200. I'm currently using a Sidekiq worker: if the status == 200, it updates my database (row.available = true); if not, it raises an exception and retries the worker after n seconds, up to n times.
Though this works, it doesn't feel efficient or scalable (thousands of checks would result in thousands of exceptions, and on certain platforms -- JRuby -- that's bad news), and I'm sure there is a way to build an internal service to manage this URL monitoring that doesn't rely on Sidekiq (perhaps in Go, or another, better-suited Ruby gem). However, I have no idea where to begin, so I'd appreciate some general direction.
Writing and running a simple link checker is easy. Doing that for 1000s of links quickly, without redundancy, and handling dead and slow-responding links without bogging down your entire system gets harder.
I'd use three threads, plus two queues:
A dispatcher thread that only reads from the database. It is responsible for finding URLs to be checked and queuing them into a "to be checked" queue.
A worker thread that consumes from the first queue and pushes results into the "updated URL results" queue.
An updater/consumer thread that takes the result of a thread in #2 and updates the database.
Ruby has some built-in classes to help:
Thread
Queue
I'd highly recommend Typhoeus and Hydra for use in the middle thread. The documentation for these two classes covers a lot of what you need to do as far as handling multiple threads running in parallel.
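Here is a minimal sketch of that three-thread / two-queue pipeline using only the standard library (Net::HTTP stands in for Typhoeus/Hydra, and puts stands in for the database read and update):

require "net/http"

to_check = Queue.new   # dispatcher -> worker
results  = Queue.new   # worker -> updater

# 1. Dispatcher: finds URLs to check (hard-coded here instead of a DB query).
dispatcher = Thread.new do
  ["http://example.com/", "http://example.org/"].each { |url| to_check << url }
  to_check << :done
end

# 2. Worker: performs the HTTP checks and pushes the results.
worker = Thread.new do
  while (url = to_check.pop) != :done
    status = Net::HTTP.get_response(URI(url)).code rescue "error"
    results << [url, status]
  end
  results << :done
end

# 3. Updater: consumes results and writes them back (puts instead of SQL).
updater = Thread.new do
  while (item = results.pop) != :done
    url, status = item
    puts "#{url} -> #{status}"   # e.g. UPDATE urls SET available = (status == '200')
  end
end

[dispatcher, worker, updater].each(&:join)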
I wouldn't write this code as part of a Rails application. There is no value added by Rails to this, nor is it necessary. I would either require Active Record and piggy-back on the existing database.yaml settings and models, or use Rails' "runner" to run the code as an adjunct to the Rails code.
Or, I'd write a small, application-specific, piece of code to run on a different server to avoid bogging down the Rails server. Using something like MySQL or PostgreSQL drivers would let you talk to the same database that Rails uses. In this case I'd use the Sequel gem to act as the ORM, but that's because I prefer it over Active Record.
There are a lot of things to consider as you write this code, including retrying failed URLs, detecting redirects and updating the source URLs to reflect them so you don't waste time, and not hammering the hosting servers so hard that you get banned.
I've written several apps for this purpose over the years and doing it right takes a lot of forethought, so think out your design up front otherwise you could end up with some major rewrites later on.

Why does Rails action controller not use threads?

My Rails application has a route that takes a lot of time to process, which makes the entire webpage freeze.
Why does this happen? Is it Rails or third-party gems which are not thread-safe?
Is there any way to work around this? I'm considering using a process pool, just like a thread pool except heavier; it will take a lot of memory, but it will still be cheaper than halting the whole app.
First thing to note: your Rails actions should not be heavyweight. When a user requests a page, you should serve the user right away.
Now, there are cases when you need the user to wait for the result, in which case you can always use websockets or HTTP streaming.
Now, Ruby and Rails have a problem with threads, which you can read about in "Parallelism is a Myth in Ruby."
A solution you can use in Rails is a server like Unicorn, which forks as many worker processes as you want, each working independently of the others, or Puma, which handles requests in multiple threads, etc.
Now, if you have an action that is a heavy process, you may want to offload the work to a background job library like delayed_job. You can even create a nice UI with JavaScript to fetch the status of the job and show the progress to the user. You can also use a pool of tasks with RabbitMQ, where another process in the background listens for new messages and acts on them, and can even send back a response, etc.
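As a quick illustration of the delayed_job route (a hedged sketch: HeavyReport and the controller action are made up for the example, while the .delay proxy and the rake jobs:work worker are standard delayed_job usage):

# app/controllers/reports_controller.rb
def create
  # .delay serializes the call into the delayed_jobs table; a separate
  # worker process (started with `rake jobs:work`) executes it later.
  HeavyReport.delay.generate(params[:report_id])
  render :json => { :status => "queued" }, :status => :accepted
end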
Keep in mind that most web servers have a client timeout, and you don't really want the user to wait a minute or more without a response, so it's always nice to use a streamed response to give some feedback right away while the action completes, or to answer with some JavaScript that keeps polling the server to see how the task is progressing, or even a websocket if required.
Rails uses a mutex lock around the entire request in the middleware stack, so a Rails process only ever takes one request at a time.
However, you can disable this by enabling the config.threadsafe! option AND using a multithreaded server, such as Puma.
Then there is the whole roadblock of using MRI, which doesn't really let two threads run at the same time unless they are doing non-blocking I/O.
You would need to use a Ruby implementation that supports real threads, such as Rubinius or JRuby.

How is request processing with rails, redis, and node.js asynchronous?

For web development I'd like to mix Rails and Node.js, since I want to get the best of both worlds (Rails for fast web development and Node for concurrency). I know that some people choose to use a full Ruby stack with EventMachine integrated into the Rails controller, so that every request can be non-blocking by using fibers in an event-loop model. I have been able to understand how that works in the big picture.
At this moment, however, I want to try non-blocking request processing with Rails and Node.js using a message queue concept. I heard that this can be achieved by using Redis as an intermediary. I'm still having trouble figuring out how that works. From what I can understand: we have two apps, A (Rails) and B (Node.js), plus Redis. The Rails app handles requests from users that go through controllers in a REST manner; from there, Rails passes them through Redis, Redis forms queues, and the Node.js app picks up the queue and does whatever is necessary afterward (write or read from the backend db).
My questions:
1. How would that improve concurrency and scalability? From what I know, since Rails handles the requests through controllers synchronously and then writes to Redis, the requests will still be blocking, even though the Node.js end can pick up the queue asynchronously. (I have a feeling it's not really asynchronous if it's not non-blocking end to end.)
2. Would Node.js be considered a proxy or an application here if Redis is the intermediary?
3. I'm new to Redis and still learning it. If I'm using a 100% NoSQL solution for my backend database, such as MongoDB or CouchDB, can Redis replace them entirely, or is Redis seen more as a message queue tool like RabbitMQ?
4. Is message queuing a different concurrency concept than threading or the event-loop model, or is it supposed to supplement them?
Those are all my questions. I'm new to the message queue concept, and I'd appreciate any help, pointers in the right direction, or articles that help me learn more. Thanks.
You are mixing some things here that don't go together.
Let's first make sure we are on the same page regarding the strengths/weaknesses of the involved technologies.
Rails: Used for its web-development simplicity and perfect for serving database-backed web applications.
Not very performant when having to serve a large number of long running requests as you'd run out of threads on your Ruby workers - but well suited for anything that can scale horizontally with more web-nodes (multiple web-servers - 1 db).
Node.js: Great for high-concurrency scenarios. Not as easy as Rails for writing a regular web application, but it can handle a near-insane number of long-running, low-CPU tasks efficiently.
Redis: A key-value store that supports operations on its data structures (increment/decrement values, append/prepend, push/pop to lists -- all operations that make this DB work consistently with multiple clients writing at once).
Now as you can see, there is no benefit in having Rails AND Node serve the same request - communicating through Redis. Going through the Rails Stack would not provide any benefit if the requests ends up being handled by the Node server.
And even if you only offload some processing to the Node server, it's still the Rails webserver that handles the requests and has to wait for a response from Node -- killing the desired scalability. It simply makes no sense.
Where a setup with Node and Rails together would make sense is in certain areas of your app that have drastically different scaling requirements.
If, for example, you are writing a website that displays live stats for football games, you can easily see that there are two different concerns in your app: the "normal" site with signup, billing, and profile pages, which screams for a quick implementation in Rails, and the "live" portion of the site, where users watch live results and you expect to handle a lot of clients at once, all waiting for something to happen (low CPU, high concurrency).
In such a case it may be beneficial to actually separate the two parts of the site into a Ruby and a Node app, sharing data about the user through a store like Redis (really you just need some shared state that both apps can read and write for synchronization purposes).
So you would use Rails for the signup/login portions, for example; once the user has signed up, write the session token into Redis alongside the user's permissions (which games they are allowed to follow) and hand the user off to the Node.js app.
There the Node app can read the session information from Redis and serve the user.
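On the Ruby side that hand-off could look roughly like this (a sketch with the redis gem; the key layout and permission fields are invented for the example):

require "redis"
require "json"
require "securerandom"

redis = Redis.new                      # defaults to localhost:6379

# After a successful Rails login: stash the session where Node can see it.
session_token = SecureRandom.hex(16)
session_data  = {
  :user_id       => 123,               # would be current_user.id in Rails
  :allowed_games => [42, 57]           # illustrative permission data
}
redis.setex("session:#{session_token}", 3600, session_data.to_json)

# The Node.js app later runs the equivalent of GET session:<token>,
# parses the JSON, and decides which live feed to serve.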
Word of advice:
You don't get scalability by simply throwing Node.js into your Toolbox. You really have to find out what Node.js is good at (low-cpu high-io concurrent operations) and how you can leverage that to remedy some of the problems your currently chosen technology has.
I can answer question 3 for you. Redis does not guarantee that when you perform an operation the result will actually be on disk, and transaction handling is a bit "different". It also requires the whole dataset to fit in memory. Depending on the situation this may or may not be an issue. It is, however, incredibly fast. It is not a messaging queue; you can easily make a queue out of it, but that is not its purpose. If you only want a queuing system, you can probably do better with something else.
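For what it's worth, a queue built on Redis is usually just a list plus blocking pops. Here is a minimal sketch with the redis gem (the queue name and job format are arbitrary):

require "redis"
require "json"

redis = Redis.new

# Producer (e.g. the Rails side): push a job onto the list.
redis.lpush("jobs", { :action => "check_url", :url => "http://example.com/" }.to_json)

# Consumer (e.g. a worker or the Node side): block until a job arrives.
_list, payload = redis.brpop("jobs")   # returns [list_name, value]
job = JSON.parse(payload)
puts "picked up #{job['action']} for #{job['url']}"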

Using Thread.new to send email on rails

I've been sending emails in my application (ruby 1.8.7, rails 2.3.2) like this:
Thread.new{UserMailer.deliver_signup_notification(user)}
Since Ruby uses green threads, is there any performance advantage to doing this, or can I just use
UserMailer.deliver_signup_notification(user)
?
Thanks
The global VM lock will still almost certainly apply while sending that email, meaning there is no difference.
You should not start threads in a request/response cycle. You should not start threads at all unless you can watch them from create to join, and even then, it is rarely worth the trouble it creates.
Rails is not thread-safe, and is not meant to be used from multiple threads within your controller actions. Only since Rails 2.3 has dispatching itself been thread-safe, and only if you turn it on in environment.rb with config.threadsafe!.
This article explains it in more detail. If you want to send your message asynchronously, use BackgroundRb or an analogous tool.
In general, using green threads to run background tasks asynchronously will mean that your application can respond to the user before the mail is sent. You're not concerned with exploiting multiple CPUs; you're only concerned with offloading the work onto a background process and returning a web page as soon as possible.
And from examining the Rails documentation, it looks like deliver_signup_notification will block long enough to get the mail queued (although I may be wrong). So using a thread here might make your application seem more responsive, depending on how your mailer is configured.
Unfortunately, it's not clear to me that deliver_signup_notification is necessarily thread-safe. I'd want to read the documentation carefully before relying on that.
Note also that you're making assumptions about the lifetime of a Rails process once a request has been served. Many Rails applications use DRb (or a similar tool) to offload these background tasks onto an entirely separate worker process. The easiest way to do this changes fairly often -- see Google for a number of popular libraries.
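To make that concrete, here is a hedged sketch using Ruby's built-in DRb (the URI, class name, and mailer call are illustrative): the worker process exposes a small object, and the Rails process calls it remotely.

# mail_worker.rb -- run as a separate, long-lived process
require "drb/drb"

class MailWorker
  def deliver_signup_notification(user_id)
    # Look up the user and deliver the mail here; this runs outside the web process.
    puts "delivering signup mail for user #{user_id}"
  end
end

DRb.start_service("druby://localhost:8787", MailWorker.new)
DRb.thread.join

# In the Rails app, after creating the user:
#   require "drb/drb"
#   worker = DRbObject.new_with_uri("druby://localhost:8787")
#   worker.deliver_signup_notification(user.id)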
I have used your exact strategy and our applications are currently running in production (but rails 2.2.2). I've kept a close eye on it and our load has been relatively low (Less than 20 emails sent per day average, with peaks of around 150/day).
So far we have noticed no problems, and this appears to have resolved several performance issues we were having when using Google's mailserver.
If you need something in a hurry then give it a shot, it has been working for us.
They'll be the same as far as I know.
