Question about thread-safety and disabling multi-threading with Puma - ruby-on-rails

The Puma readme states the following: "Be aware that additionally Puma creates threads on its own for internal purposes (e.g. handling slow clients). So, even if you specify -t 1:1, expect around 7 threads created in your application."
Suppose that my Rails app is not thread-safe, and as such I need to prevent the app from being multi-threaded. Let's say I use Puma and specify -t 1:1 to try to configure this. Is there any thread-safety-related reason to be concerned that Puma will still create threads on its own for internal purposes? I think the answer is probably no, but I'm asking here to be sure.
I asked this same question in a GitHub issue as well.

The GitHub issue received the following answer and was marked as completed:
No, these threads don't interact with your app in any way, and only one thread will ever run your application code, ever, with no concurrency.
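For completeness, a minimal config/puma.rb sketch along those lines (threads and workers are standard Puma configuration DSL calls; the comments are mine, not from the readme or the issue):

# config/puma.rb -- a minimal single-threaded, single-process setup
threads 1, 1   # same effect as -t 1:1: a pool with exactly one app thread
workers 0      # stay in single mode; no cluster of forked worker processes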

Related

Why does Rails 5 use Puma instead of WEBrick for development?

I tried to find out the difference between Puma and WEBrick, but couldn't find an explanation that satisfied me.
Could anyone please share some information about it?
By default WEBrick is single threaded, single process. This means that if two requests come in at the same time, the second must wait for the first to finish.
The most efficient way to tackle slow I/O is multithreading. A worker process spawns several worker threads inside of it. Each request is handled by one of those threads, but when it pauses for I/O - like waiting on a db query - another thread starts its work. This rapid back and forth makes the best use of your RAM and keeps your CPU busy.
So multithreading is achieved with Puma, and that is why it is the default app server in Rails applications.
This is a question for Ruby on Rails developers rather than a broad audience. I don't see any reason other than keeping the development environment closer to production, where Puma is a solid choice.
To correct the current answer, however: WEBrick is, and always has been, a multi-threaded web server. It ships with the Ruby language (and is also available as a rubygem), and it is definitely good enough to serve Rails applications in development or in lower-scale production environments.
On the other hand, it is not as configurable as other web servers like Puma, and it is based on the old-school thread-per-request design. That can be a problem under heavy load, where too many threads end up being created. Modern web servers, Puma included, solve this with thread pools, worker processes, a combination of the two, or other techniques. For development, however, spawning a new thread per request is totally fine.
I have no hard feelings toward either of the two; both are great Ruby web servers, and in our project we actually use both in production. Anyway, if you like using WEBrick for RoR development, you can indeed still use it:
rails server webrick
Minor update for Rails 6.1:
rails server -u webrick [-p NNNN]

Is it possible to run Sidekiq in the same process with a puma rails server?

Is there anything in its architecture that makes it hard to do?
I want to run an existing rails+sidekiq application in a VM with very little memory, and loading the entire rails stack in two different processes uses a lot of RAM.
Puma is built to spin up homogeneous web worker threads and divide incoming requests among them. If you wanted to modify it to spawn off separate Sidekiq threads, it should technically be possible with a crazy puma.rb file, but there's no precedent I can find for doing so (edit: Mike’s answer below points out that the sucker_punch gem can essentially do this, for the same purpose of memory efficiency). Practically speaking, if your VM cannot support running two Rails processes at a time, it probably won't be able to handle the increased memory load as your application does the work of both Sidekiq and Puma… but that depends on your workload.
If this is just for development purposes, you might be able to accomplish what you're looking for by turning on Sidekiq's inline mode (normally meant just for testing):
require 'sidekiq/testing'
Sidekiq::Testing.inline!
This will cause all perform_async calls to actually execute inline, instead of going into Redis and being picked up by the Sidekiq process.
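If you only want this behavior in development, a minimal sketch would be to guard it in an initializer (the file name here is just an example; the API is Sidekiq's testing helper pressed into service, not a dedicated production mode):

# config/initializers/sidekiq_inline.rb -- example file name
if Rails.env.development?
  require 'sidekiq/testing'
  Sidekiq::Testing.inline!  # perform_async now runs the job immediately, in-process
end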
Nothing official.
This is what sucker_punch is designed for.

Phusion Passenger Server Free vs. Enterprise Version and Thread Local Variables

Is it expected that the current thread will be the same across concurrent requests to "free" passenger?
I've hit a bug where thread-local Ruby variables are not independent across concurrent requests. I.e., the same thread id shows for both of two concurrent requests (simulated with a sleep to slow things down).
Is this different for Passenger enterprise edition?
Is there a proper way to get a thread local variable that is isolated for the life of a single request in Rails?
UPDATED:
Problem is not specific to Passenger. Problem is there for Thin as well.
Other libraries, such as paper_trail, may have this issue: https://github.com/airblade/paper_trail/issues/499
Here's a potential fix: https://github.com/steveklabnik/request_store, along with a detailed description of the issue I'm seeing.
If you want a request-local variable, consider putting it into the Rack request env object. See the answers on this question for a more complete rundown.
Regarding thread locals, that certainly seems like unexpected behavior, but it's entirely possible that Passenger is doing something that doesn't guarantee that a single thread will be owned by a request for its whole lifetime. The Rack::Lock middleware usually solves this by taking a mutex around the whole request, but if you've removed it, you aren't guaranteed to have that synchronization. In general, thread-local variables are a code smell and probably an indicator that you're doing something you shouldn't.
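As a rough sketch of the Rack env approach mentioned above (the 'my_app.request_id' key and the middleware name are illustrative, not a standard API): per-request data can live on the env hash, which is created and discarded with each request regardless of which thread happens to serve it.

require 'securerandom'

# Illustrative middleware: anything stored in env is scoped to a single request.
class RequestTag
  def initialize(app)
    @app = app
  end

  def call(env)
    env['my_app.request_id'] = SecureRandom.uuid
    @app.call(env)
  end
end

# Later, e.g. in a controller action:
#   request.env['my_app.request_id']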
Solved:
https://github.com/airblade/paper_trail/issues/499#issuecomment-83175865
https://github.com/steveklabnik/request_store
Thread.current[:something] does not behave as you'd expect with Thin and Passenger!
Not sure about Unicorn and Puma.
Add the request_store gem, and follow the instructions to solve this issue.
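A minimal sketch of that change (the :current_user key is only an example):

# Gemfile
gem 'request_store'

# Replace Thread.current[:current_user] = user with:
RequestStore.store[:current_user] = user

# Reads look the same; the store is wiped automatically at the end of each request.
RequestStore.store[:current_user]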

Rails best practice: background process/thread?

I'm coming from a PHP environment (at least in terms of web dev) and into the beautiful world of Ruby, so I may have some dumb questions. I imagine there are some fundamentally different options available when not using PHP.
In PHP, we use memcache to store alerts we want to display in a bar along the top of the page. When something happens that generates an alert (such as a new blog post being made), a cron script that runs once every 5 minutes or so puts that information into memcache.
Now when a user visits the site, we look in memcache to find any alerts that they haven't already dismissed and we display them.
What I'm guessing I can do differently in Rails is to bypass the need for a cron script, and also the need to hit memcache on every request, by using a Singleton plus a polling loop running in a separate thread that copies from memcache into the singleton. This would, in theory, be more efficient than checking memcache once per request, and it would encapsulate the polling logic in one place rather than splitting it between a cron task and the lookup logic.
My question is: are there any caveats to having some sort of runloop in the background while a Rails app is running? I understand the implications of multithreading, from Objective-C/Java, but I'm asking specifically about the Rails (3) environment.
Basically something like:
require 'singleton'

class SiteAlertsMap < Hash
  include Singleton

  def initialize
    super
    begin_polling
  end

  # ... SNIP, any specific methods etc ...
  private

  def begin_polling
    # Create another Thread here, which polls at set intervals, e.g.:
    @poller = Thread.new do
      loop do
        # refresh self from memcache here
        sleep 300
      end
    end
  end
end
This leads me into a similar question. We push (encrypted) tasks onto an SQS queue, for things related to e-commerce and for long-running background tasks. We don't use cron for this, but rather we have a worker daemon written in PHP, which runs in the background. Right now when we deploy, we have to shut down this worker and start it again from the new code-base. In Rails, could I somehow have this process start and stop with the rails server (unicorn) itself? I don't think that's something I'd want running inside the main process in a separate thread, since we often want to control it as a process by itself, but it would be nice if it just conveniently ran when the web application was running.
Threading for background processes in Ruby would be a terrible mistake, especially since you're using a multi-process server. Using Unicorn with, say, 4 worker processes would mean you'd be polling from each of them, which is not what you want. Ruby doesn't really have real threads: it has green threads in 1.8 and a global interpreter lock in 1.9, IIRC. Many gems and libraries are also obnoxiously thread-unsafe.
Using memcache is still your best option and, if you have it set up correctly, you should only see it adding a millisecond or two to the request time. Another option, which would persist these alerts while incurring minimal additional overhead, is to store them in Redis. That would better protect you against things like memcache crashing or server reboots.
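As a rough illustration of the Redis variant (the key name and the five-minute expiry are arbitrary choices here, not recommendations), the alerts could be written with a TTL by whatever generates them and read back on each request:

require 'redis'
require 'json'

redis = Redis.new  # defaults to localhost:6379

# Writer side, e.g. run whenever a new blog post is published
redis.setex('site_alerts', 300, ['New blog post published!'].to_json)

# Reader side, once per request
alerts = JSON.parse(redis.get('site_alerts') || '[]')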
For the background jobs you should use a similar approach to what you have now, but there are several off-the-shelf handlers for this, like resque, delayed_job, and a few others; see the sketch below. If you absolutely have to use SQS as the backend queue, you might be able to find some code to help you, but otherwise you could write it yourself. This still requires the other daemon to be rebooted whenever there is a code change. In practice this isn't a huge concern, as best practice dictates using a deployment system like Capistrano, where a rule can easily be added to bounce the daemon on deploy. I use monit to watch the daemon process, so restarting it is as easy as telling monit to restart it.
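For reference, a minimal Resque-style job looks roughly like this (the class and queue names are illustrative); delayed_job and the others follow a similar shape:

# app/jobs/alert_job.rb -- illustrative class and queue names
class AlertJob
  @queue = :alerts

  def self.perform(post_id)
    # do the slow work here, in the worker process instead of the web process
  end
end

# Enqueue from anywhere in the app; a separate `rake resque:work` daemon picks it up.
Resque.enqueue(AlertJob, 42)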
In general, Ruby is not like Java/Objective-C when it comes to threads. It follows the more Unix-like model of process-based isolation, but the community has come up with best practices and ways to make this less painful than in other languages. Ruby does require a bit more attention when setting up its stack, as it is not as simple as enabling mod_php and copying some files around, but once the choices and architecture are understood, it is easier to reason about how your application works. The process model, in my opinion, is much better for web apps, as it isolates code and state from the effects of other running operations. The isolation also makes the app easier to work with in a distributed system.

Using Thread.new to send email on rails

I've been sending emails in my application (Ruby 1.8.7, Rails 2.3.2) like this:
Thread.new{UserMailer.deliver_signup_notification(user)}
Since Ruby uses green threads, is there any performance advantage to doing this, or can I just use
UserMailer.deliver_signup_notification(user)
?
Thanks
The global VM lock will still almost certainly apply while sending that email, meaning there is no difference.
You should not start threads in a request/response cycle. You should not start threads at all unless you can watch them from create to join, and even then, it is rarely worth the trouble it creates.
Rails is not thread-safe, and is not meant to be from within your controller actions. Only since Rails 2.3 has dispatching itself been thread-safe, and only if you turn it on in environment.rb with config.threadsafe!.
This article explains in more detail. If you want to send your message asynchronously use BackgroundRb or its analog.
In general, using green threads to run background tasks asynchronously will mean that your application can respond to the user before the mail is sent. You're not concerned about exploiting multiple CPUs; you're only concerned with off-loading the work into the background and returning a web page as soon as possible.
And from examining the Rails documentation, it looks like deliver_signup_notification will block long enough to get the mail queued (although I may be wrong). So using a thread here might make your application seem more responsive, depending on how your mailer is configured.
Unfortunately, it's not clear to me that deliver_signup_notification is necessarily thread-safe. I'd want to read the documentation carefully before relying on that.
Note also that you're making assumptions about the lifetime of a Rails process once a request has been served. Many Rails applications use DRb (or a similar tool) to offload these background tasks onto an entirely separate worker process. The easiest way to do this changes fairly often; see Google for a number of popular libraries.
I have used your exact strategy and our applications are currently running in production (but rails 2.2.2). I've kept a close eye on it and our load has been relatively low (Less than 20 emails sent per day average, with peaks of around 150/day).
So far we have noticed no problems, and this appears to have resolved several performance issues we were having when using Google's mailserver.
If you need something in a hurry then give it a shot, it has been working for us.
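If you do go this route, one small hedge worth adding (a sketch using the same mailer call from the question): exceptions raised inside a thread you never join are silently discarded by default, so at least rescue and log them.

Thread.new do
  begin
    UserMailer.deliver_signup_notification(user)
  rescue => e
    Rails.logger.error("signup notification failed: #{e.class}: #{e.message}")
  end
end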
They'll be the same as far as I know.
