How does Rails serve concurrent requests? - ruby-on-rails

I will divide this question into two parts.
Part 1:
How does Rails serve concurrent requests to the same action?
Does Rails serve concurrent requests in different threads, or does it serve them from a queue?
Part 2:
Why am I asking?
I am working on a very specific project, and I am now thinking about changing the application architecture if necessary.
Question
Foreword
I have a controller with a calculate action.
It looks like this:
def calculate
  first_player_code = Code.find(params[:first_player_id])
  second_player_code = Code.find(params[:second_player_id])

  File.open("first_player_code.rb", "w") { |file| file << first_player_code }
  File.open("second_player_code.rb", "w") { |file| file << second_player_code }

  system("ruby calculate.rb")
end
calculate.rb contains a script which uses the files "first_player_code.rb" and "second_player_code.rb".
The script produces a JSON file as its result and sends it to the browser.
More information
"first_player_code.rb" and "second_player_code.rb" contains Ruby classes.
In "calculate.rb" i requiring these files and making objects from classes, which are in these files.
I just didn't find another way to do this (without using Files).
The question itself
Are there synchronization problems?

You would typically configure a web server like Puma, which uses threads to handle requests concurrently and passes them to the Rails application. True parallelism is not possible with regular MRI Ruby because the Global Interpreter Lock prevents more than one thread from executing Ruby code at a time (threads can still make progress concurrently while blocked on I/O).
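A minimal config/puma.rb for such a setup might look like this (the counts are arbitrary examples, not recommendations):

  # config/puma.rb
  workers 2       # separate processes, each with its own GIL
  threads 1, 5    # min and max threads per worker process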
For long-running processes you should try to prevent blocking by handing that work off to background jobs. Sidekiq is a good option for this.
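As a sketch of how the calculate action above might hand its work to such a job (CalculateWorker is a made-up name, not code from the question):

  class CalculateWorker
    include Sidekiq::Worker

    def perform(first_player_id, second_player_id)
      # write the two player code files and run the calculation here,
      # outside the request/response cycle
    end
  end

  # In the controller, instead of running the script inline:
  CalculateWorker.perform_async(params[:first_player_id], params[:second_player_id])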
Also see https://github.com/ruby-concurrency/concurrent-ruby

Related

Why do I need to wrap threads in a Ruby on Rails application?

In my RoR app, I'm writing an API in which I need to call multiple upstream APIs, so I'm planning to call them in parallel to save time. I want to follow best practices when implementing multi-threaded logic in ruby-on-rails applications.
The RoR guide clearly states that we need to wrap our code, but it doesn't explain why this is important.
From ruby-on-rails guidelines:
Each thread should be wrapped before it runs application code, so if your application manually delegates work to other threads, such as via Thread.new or Concurrent Ruby features that use thread pools, you should immediately wrap the block.
My app runs Rails version 4.
The number of upstream API calls in a single request ranges from 3 to 30.
I checked out this similar SO post, but it doesn't mention anything about wrapping threaded code.
Wrapping the thread in the executor makes sure that you won't have any problems with unloaded constants. You won't see errors like:
Unable to autoload constant User, expected ../user.rb to define it (LoadError).
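The wrapping pattern from the Rails guides looks like the sketch below (call_upstream_api is a hypothetical helper; note that Rails.application.executor was introduced in Rails 5, so on Rails 4 you would need a different safeguard, such as ActiveRecord::Base.connection_pool.with_connection around database access):

  Thread.new do
    Rails.application.executor.wrap do
      # application code that touches models or autoloaded constants goes here
      call_upstream_api
    end
  end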

Recurring job to check if url exists

I want to build a service that notifies me when a url returns status 200. I'm currently using a sidekiq worker, if the status == 200, it updates my database (row.available = true), if not, it raises an exception and retries the worker in n seconds, n amount of times.
Though this works, it doesn't feel efficient or scalable (1000s of checks would result in 1000s of exceptions, and on certain platforms that's bad news -- JRuby), and I'm sure there is a way I can build an internal service to manage this URL monitoring that doesn't rely on Sidekiq (perhaps in Go, or another, more suited Ruby gem). However, I have no idea where to begin, so I'd appreciate some general direction.
Writing and running a simple link checker is easy. Doing that for 1000s of links quickly, without redundancy, and handling dead and slow-responding links without bogging down your entire system gets harder.
I'd use three threads, plus two queues:
A dispatcher thread that only reads from the database. It is responsible for finding and queuing URLs to be checked into a "to be checked" queue.
A worker thread that consumes from the first queue and pushes results into the "updated URL results" queue.
An updater/consumer thread that takes results from the worker in #2 and updates the database.
Ruby has some built-in classes to help:
Thread
Queue
I'd highly recommend Typhoeus and Hydra for use in the middle thread. The documentation for these two classes covers a lot of what you need to do as far as handling multiple threads running in parallel.
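A skeletal version of that pipeline, using nothing but Thread and Queue, might look like the sketch below (urls_from_database and update_database are placeholders for your own data access, not real methods):

  require "net/http"

  to_check = Queue.new   # URLs waiting to be checked
  results  = Queue.new   # [url, status] pairs waiting to be written back

  # Dispatcher: finds URLs and queues them to be checked.
  dispatcher = Thread.new do
    urls_from_database.each { |url| to_check << url }   # placeholder query
    to_check << :done
  end

  # Worker: checks each URL and pushes the result.
  worker = Thread.new do
    while (url = to_check.pop) != :done
      status = begin
        Net::HTTP.get_response(URI(url)).code.to_i
      rescue StandardError
        nil   # dead or unreachable link
      end
      results << [url, status]
    end
    results << :done
  end

  # Updater: consumes results and updates the database.
  updater = Thread.new do
    while (pair = results.pop) != :done
      url, status = pair
      update_database(url, status)   # placeholder write
    end
  end

  [dispatcher, worker, updater].each(&:join)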
I wouldn't write this code as part of a Rails application. There is no value added by Rails here, nor is it necessary. I would either require Active Record and piggy-back on the existing database.yml settings and models, or use Rails' "runner" to run the code as an adjunct to the Rails code.
Or, I'd write a small, application-specific, piece of code to run on a different server to avoid bogging down the Rails server. Using something like MySQL or PostgreSQL drivers would let you talk to the same database that Rails uses. In this case I'd use the Sequel gem to act as the ORM, but that's because I prefer it over Active Record.
There are a lot of things to consider as you write this code, including retrying failed URLs, detecting redirects and updating the source URLs to reflect them (to avoid wasting time), and not hammering the hosting servers, which can get you banned.
I've written several apps for this purpose over the years and doing it right takes a lot of forethought, so think out your design up front otherwise you could end up with some major rewrites later on.

Rails best practice: background process/thread?

I'm coming from a PHP environment (at least in terms of web dev) and into the beautiful world of Ruby, so I may have some dumb questions. I imagine there are some fundamentally different options available when not using PHP.
In PHP, we use memcache to store alerts we want to display in a bar along the top of the page. When something happens that generates an alert (such as a new blog post being made), a cron script that runs once every 5 minutes or so puts that information into memcache.
Now when a user visits the site, we look in memcache to find any alerts that they haven't already dismissed and we display them.
What I'm guessing I can do differently in Rails is to bypass the need for a cron script, and also the need to check memcache on every request, by using a singleton and a polling process running in a separate thread to copy from memcache into this singleton. This would, in theory, be more optimized than checking memcache once per request, and would also encapsulate the polling logic in one place, rather than splitting it between a cron task and the lookup logic.
My question is: are there any caveats to having some sort of runloop in the background while a Rails app is running? I understand the implications of multithreading, from Objective-C/Java, but I'm asking specifically about the Rails (3) environment.
Basically something like:
require 'singleton'

class SiteAlertsMap < Hash
  include Singleton

  def initialize
    super
    begin_polling
  end

  # ... SNIP, any specific methods etc ...

  private

  def begin_polling
    # Create some other Thread here, which polls at set intervals
  end
end
This leads me into a similar question. We push (encrypted) tasks onto an SQS queue, for things related to e-commerce and for long-running background tasks. We don't use cron for this, but rather we have a worker daemon written in PHP, which runs in the background. Right now when we deploy, we have to shut down this worker and start it again from the new code-base. In Rails, could I somehow have this process start and stop with the Rails server (Unicorn) itself? I don't think it's something I'd run on the main process in a separate thread, since we often want to control it as a process by itself, but it would be nice if it conveniently ran whenever the web application was running.
Threading for background processes in Ruby would be a terrible mistake, especially since you're using a multi-process server. Using Unicorn with, say, 4 worker processes would mean that you'd be polling from each of them, which is not what you want. Ruby doesn't really have real threads: it has green threads in 1.8 and a global interpreter lock in 1.9, IIRC. Many gems and libraries are also obnoxiously thread-unsafe.
Using memcache is still your best option and, if you have it set up correctly, you should only see it adding a millisecond or two to the request time. Another option which would give you the benefit of persisting these alerts while incurring minimal additional overhead would be to store these alerts in redis. This would better protect you against things like memcache crashing or server reboots.
For the background jobs you should use a similar approach to what you have now, but there are several off-the-shelf handlers for this, like Resque, delayed_job, and a few others. If you absolutely have to use SQS as the backend queue, you might be able to find some code to help you, but otherwise you could write it yourself. This still requires the other daemon to be rebooted whenever there is a code change. In practice this isn't a huge concern, as best practices dictate using a deployment system like Capistrano, where a rule can easily be added to bounce the daemon on deploy. I use monit to watch the daemon process, so restarting it is as easy as telling monit to restart it.
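For instance, a Resque job (to pick one of those handlers) is just a class with a queue name and a perform method; the names here are illustrative, not from the question:

  class ProcessOrder
    @queue = :orders   # the Redis-backed queue this job is pushed to

    def self.perform(order_id)
      # long-running work happens here, in the worker daemon,
      # not in the web request
    end
  end

  # Enqueue from anywhere in the app:
  Resque.enqueue(ProcessOrder, order.id)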
In general, Ruby is not like Java/Objective-C when it comes to threads. It follows the more Unix-like model of process-based isolation, but the community has come up with best practices and ways to make this less painful than in other languages. Ruby does require a bit more attention when setting up its stack, as it is not as simple as enabling mod_php and copying some files around, but once the choices and architecture are understood, it is easier to reason about how your application works. The process model, in my opinion, is much better for web apps, as it isolates code and state from the effects of other running operations. The isolation also makes the app easier to work with in a distributed system.

Logging inside threads in a Rails application

I've got a Rails application in which a small number of actions require significant computation time. Rather than going through the complexity of managing these actions as background tasks, I've found that I can split the processing into multiple threads, and by using JRuby with a multicore server, I can ensure that all threads complete in a reasonable time. (The customer has already expressed a strong interest in keeping this approach vs. running tasks in the background.)
The problem is that writing to the Rails logger doesn't work within these threads. Nothing shows up in the log file. I found a few references to this problem but no solutions. I wouldn't mind inserting puts in my code to help with debugging but stdout seems to be eaten up by the glassfish gem app server.
Has anyone successfully done logging inside a Rails ruby thread without creating a new log each time?
I was scratching my head with the same problem. For me the answer was as follows:
Thread.new do
  begin
    # ... the thread's work, logging via Rails.logger ...
  ensure
    Rails.logger.flush
  end
end
I understand your concerns about the background tasks, but remember that spinning off threads in Rails can be a scary thing. The framework makes next to no provisions for multithreading, which means you have to treat all Rails objects as not being thread-safe. Even the database connection gets tricky.
As for the logger: The standard Ruby logger class should be thread safe. But even if Rails uses that, you have no control over what the Rails app is doing to it. For example the benchmarking mechanism will "silence" the logger by switching levels.
I would avoid using the rails logger. If you want to use the threads, create a new logger inside the thread that logs the messages for that operation. If you don't want to create a new log for each thread, you can also try to create one thread-safe logging object in your runtime that each of the threads can access.
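A minimal sketch of the per-thread variant (the log file path is an assumption):

  Thread.new do
    # Each thread gets its own Logger; Ruby's Logger serializes its own writes.
    log = Logger.new(Rails.root.join("log", "worker_#{Thread.current.object_id}.log").to_s)
    log.info "starting long computation"
    # ... computation ...
    log.close
  end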
In your place I'd probably have another look at the background job solutions. While DRb looks like a nightmare, "bj" seems nice and easy; although it required some work to get it running with JRuby. There's also the alternative to use a Java scheduler from JRuby, see http://www.jkraemer.net/2008/1/12/job-scheduling-with-jruby-and-rails

Is Rails shared-nothing or can separate requests access the same runtime variables?

PHP runs in a shared-nothing environment, which in this context means that every web request is run in a clean environment. You can not access another request's data except through a separate persistence layer (filesystem, database, etc.).
What about Ruby on Rails? I just read a blog post stating that separate requests might access the same class variable.
It has occurred to me that this probably depends on the web server. Mongrel's FAQ states that Mongrel uses one thread per request, suggesting a shared-nothing environment. The FAQ goes on to say that RoR is not thread safe, which further suggests that RoR would not run in a shared environment unless a new request reuses the in-memory objects created by the previous request.
Obviously this has huge security ramifications. So I have two questions:
Is the RoR environment shared-nothing?
If RoR runs in (or might run in some circumstances) a shared environment, what variables and other data storage should I be paranoid about?
Update: I'll clarify further. In a Java servlet container you can have objects which persist across multiple requests. This is typically done for caching data which multiple users would have access to, database connections, etc.. In PHP this can not be done at the application layer, it must be done in a separate persistence layer like Memcached. So the twofold question is: which scenario is RoR like (PHP or Java) and if like Java, which data types persist across multiple requests?
In short:
No, Rails never runs in a shared-nothing environment.
Be paranoid about class variables and class instance variables.
The longer version:
Rails processes start their life cycle by loading the framework and application. They will typically run only a single thread, which will process many requests during its lifetime. The requests will therefore be dispatched strictly sequentially.
Nevertheless, all classes persist across requests. This means any object referenced from your classes and metaclasses (such as class variables and class instance variables) will be shared across requests. This may bite you if, for example, you try to memoize (@var ||= expensive_calculation) in your class methods, expecting the value to persist only during the current request. In reality, the calculation will only be performed on the first request.
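As a sketch of that pitfall (PriceList and Product are made-up names):

  class PriceList
    def self.all_prices
      # @prices lives on the PriceList class object, which survives across
      # requests within one process, so this runs once per process,
      # not once per request.
      @prices ||= Product.pluck(:name, :price)
    end
  end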
On the surface, it may seem nice to implement caching, or other behaviour that depends on persistence across requests. Typically, it isn't. This is because most deployment strategies will use several Rails processes to counter their own single-threaded nature. It is simply not cool to block all requests while waiting for a slow database query, so the easy way out is to spawn more processes. Naturally, these processes do not share anything (except some memory perhaps, which you won't notice). This may bite you if you save stuff in your class variables or class instance variables during requests. Then, somehow, sometimes the stuff appears to be present, and sometimes it appears to be gone. (In reality, of course, the data may or may not be present in some process, and absent in others).
Some deployment configurations (most notably JRuby + Glassfish) are in fact multithreaded.
Rails is thread safe, so it can deal with it. But your application may not be thread safe. All controller instances are thrown away after each request, but as we know, the classes are shared. This may bite you if you pass information around in class variables or in class instance variables. If you do not properly use synchronisation methods, you may very well end up in race condition hell.
As a side note: Rails is typically run in single-threaded processes because Ruby's thread implementation is imperfect. Luckily, things are a little better in Ruby 1.9. And a lot better in JRuby.
With both these Ruby implementations gaining in popularity, it seems likely that multithreaded Rails deployment strategies will also gain in popularity and number. It is a good idea to write your application with multithreaded request dispatching in mind already.
Here is a relatively simple example that illustrates what can happen if you are not careful about modifying shared objects.
Create a new Rails project: rails test
Create a new file lib/misc.rb and put in it this:
class Misc
  @xxx = 'Hello'

  def Misc.contents
    return @xxx
  end
end
Create a new controller: ruby script/generate controller Posts index
Change app/views/posts/index.html.erb to contain this code:
<%
require 'misc'; y = Misc.contents() ; y << ' (goodbye) '
%>
<pre><%= y %></pre>
(This is where we modify the implicitly shared object.)
Add RESTful routes to config/routes.rb.
Start the server with ruby script/server and load the page /posts several times. You will see the number of "(goodbye)" strings increase by one on each successive reload.
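One way to avoid this particular trap (a sketch, not part of the original walkthrough) is to hand out a copy instead of the shared string:

  def Misc.contents
    @xxx.dup   # each caller gets its own copy; the shared string stays intact
  end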
In your average deployment using Passenger, you probably have multiple app processes that share nothing between them but classes within each process that maintain their (static) state from request to request. Each request, though, makes a new instance of your controllers.
You might call this a cluster of distinct shared-state environments.
To use your Java analogy, you can do the caching and have it work from request to request, you just can't assume that it will be available on every request.
Shared-nothing is sometimes a good idea. But not when you have to load a large application framework and a large domain model and a large amount of configuration on every request.
For efficiency, Rails keeps some data available in memory to be shared among all requests for the lifetime of an application. Most of this data is read-only, so you shouldn't be worried.
When you write your app, stay away from writing to shared objects (excluding the database, for example, which comes out-of-the-box with good concurrency control) and you should be fine.
