I tried to find out the difference between Puma and WEBrick, but couldn't find a satisfying explanation.
Could anyone please share some information about it?
By default WEBrick is single threaded, single process. This means that if two requests come in at the same time, the second must wait for the first to finish.
The most efficient way to tackle slow I/O is multithreading. A worker process spawns several worker threads inside itself. Each request is handled by one of those threads, but when it pauses for I/O - like waiting on a DB query - another thread starts its work. This rapid back and forth makes the best use of your limited RAM and keeps your CPU busy.
Puma provides this multithreading, which is why it is used as the default app server in Rails apps.
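For reference, here is a minimal config/puma.rb sketch of how Puma combines forked worker processes with a per-worker thread pool (the numbers below are illustrative, not Rails defaults):

# config/puma.rb - illustrative values only
workers 2                               # forked worker processes (cluster mode)
threads 1, 5                            # min and max threads per worker
port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }
preload_app!                            # load the app in the master before forking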
This is a question for Ruby on Rails developers rather than a broad audience, because I don't understand any reason other than bringing the development environment closer to production, where Puma is a solid choice.
To correct the current answer, however, I must say that WEBrick is, and always has been, a multi-threaded web server. It ships with the Ruby language (and is also available as a rubygem). And it is definitely good enough to serve Rails applications for development or for lower-scale production environments.
On the other hand, it is not as configurable as other web servers like Puma. It is also based on the old-school new-thread-per-request design. Under heavy load this can lead to too many threads being created. Modern web servers solve this by using thread pools, worker processes, a combination of the two, or other techniques. This includes Puma; however, for development, spawning a new thread per request is totally fine. A toy sketch of the two designs follows below.
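To make the design difference concrete, here is a toy Ruby sketch (my own, not from the answer) contrasting thread-per-request with a fixed thread pool fed by a queue:

def process(request)
  sleep 0.01                            # stand-in for real request handling
end

# 1) Thread-per-request: a new thread for every request, unbounded under load.
def handle_per_request(request)
  Thread.new { process(request) }
end

# 2) Thread pool: a fixed number of worker threads pull requests off a queue.
REQUESTS = Queue.new
WORKERS  = 5.times.map { Thread.new { loop { process(REQUESTS.pop) } } }

def handle_pooled(request)
  REQUESTS << request
end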
I have no hard feelings for either of the two; both are great Ruby web servers, and in our project we actually use them both in production. Anyway, if you like using WEBrick for RoR development, you can indeed still use it:
rails server webrick
Rails 6.1 Minor update:
rails server -u webrick [-p NNNN]
Related
Our Rails 4 app is running in a thin -s 16 ... multiprocess server setup, with Apache as the frontend and its reverse proxy handling the internal requests. All works fine and well, and the performance is OK for our number of users.
Because everything works just out of the box, I didn't really bother about how Thin actually works. I did stumble upon all the Fiber and EventMachine goodness recently, though, and read up a lot on it.
Thin is using EventMachine for handling Rack requests. So by design it would be able to handle multiple requests in parallel with one Ruby process. Unfortunately, while the Thin documentation is functional enough to get the server running within a few minutes, it is rather quiet about the internal workings.
Am I correct in assuming that all talk about "concurrency" is moot as soon as I run Thin with Rails? ActiveRecord, the DB drivers, the template processing, etc. are likely not EM enabled, not using Fibers etc., hence will block the single process anyways, while at the same time using most of the server-side processing time in DB-heavy applications.
All my research seems to lead to this conclusion; unfortunately the Thin website/docs say nothing about it. Frankly I am baffled how the word "concurrent" is even entering the picture here...
Can someone clear that up for me? Am I missing some fundamental piece of information, or is the "concurrency" touted on http://code.macournoyer.com/thin/ meant in regards to manually crafted Rack servers which don't use Rails or other "heavy" middlewares?
Thanks!
You are right: when used with Rails it will not be concurrent. Any call to the DB driver or file system will block the entire Ruby interpreter. This may partially change in Rails 5, but for now you just need to run a lot of worker processes.
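To illustrate the point, here is a tiny sketch (my own, assuming the eventmachine gem) showing how a single blocking call stalls the whole reactor, which is exactly what happens when a blocking DB driver runs inside Thin:

require 'eventmachine'

EM.run do
  EM.add_periodic_timer(0.5) { puts "reactor tick" }

  EM.add_timer(1) do
    puts "blocking call starts (think: SQL query via a blocking driver)"
    sleep 3                             # the reactor is stuck here; no ticks are printed
    puts "blocking call done, ticks resume"
  end

  EM.add_timer(6) { EM.stop }
end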
If we say Node.js is single threaded and therefore there is just one thread that handles all the requests, what is Rails?
As I understand, Node.js is both the application and the server, but I am lost on what Rails would be? How does Rails handle requests in terms of threads/processes?
Rails can be single-threaded, it can be multi-threaded, it can be multi-process (where each process is single-threaded), or it can be multi-process where each process is multi-threaded.
It really all depends upon the app server you're using, and it kind of depends upon which Ruby implementation you're using. MRI Ruby supports native threads as of 1.9, but it still maintains what's known as a global interpreter lock. The GIL prevents the Ruby interpreter from running in multiple threads at a time. In most cases that's not really a big deal though, because the thing threads are helping with the most is waiting for I/O. If you're using either JRuby or Rubinius, they can actually run Ruby code in multiple threads at a time.
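A quick way to see this yourself (a minimal sketch, not from the answer): even under MRI's GIL, threads that are blocked on I/O overlap, so four simulated one-second calls finish in roughly one second of wall time rather than four:

require 'benchmark'

elapsed = Benchmark.realtime do
  threads = 4.times.map do
    Thread.new { sleep 1 }              # stands in for a blocking DB or HTTP call
  end
  threads.each(&:join)
end

puts format("total: %.2fs", elapsed)    # ~1.0s on MRI: the threads wait concurrently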
Check out the different app servers and what they offer in terms of concurrency features. Unicorn is a common one for deploying multi-process/single-threaded applications. Puma is a newer app server that's capable of running multi-threaded applications, and I believe they're either adding (or maybe have added by now, I'm not sure) the ability to run multi-process as well. Passenger seems to be able to work in every model I've listed above.
I hope this helps a little. It should at least give you some things to Google for to find more information.
Just wanted to get people's opinions on using Unicorn vs Thin as a rails server. Most of the articles/benchmarks I found online seem very incomplete, so it would nice to have a centralized place to discuss it.
Unicorn is a multi-process server, while Thin is an event-based/non-blocking server. Event-based servers are great... if your code is asynchronous/non-blocking - vanilla Rails is blocking. So unless you use non-blocking Rails libraries, I really don't see the advantage of using Thin. Even worse, in a non-blocking server, if your I/O is blocking you're going to block the entire event loop and not be able to handle any more requests until the blocking call returns. Blocking libraries are going to slow Thin down!
Why did Heroku choose Thin as their default server (for cedar)? They are smart guys, so I'm sure they had a reason.
Below is a link that suggests replacing Thin with 4 Unicorn workers - this makes perfect sense to me.
4 Unicorn workers on Heroku
Thin is easy to configure - not optimal, but it just works in the Heroku environment.
Unicorn can be more efficient, but it needs to be configured: How many workers? Preload App? What do you pick?
I have released Unicorn Heroku apps with workers set to 3, 5 and 8 - just based on how big each app is - how much code, how much memory is used and how much traffic you get all go into picking this number, and you need to monitor over time to make sure you got the number right, and your app isn't running out of memory.
Preload false - this will make your app start slower, but when Unicorn restarts a worker, this is 'safer' with network connections (memcache, postgres, mongo etc)
Preload true - this is better, but you need to handle server re-connections correctly in the pre and post fork code.
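For illustration, here is a hedged unicorn.rb sketch of the preload_app trade-off described above (the worker count and the ActiveRecord reconnect are examples, not values from the post):

# config/unicorn.rb - illustrative sketch
worker_processes 3                      # pick based on memory, code size and traffic
preload_app true

before_fork do |server, worker|
  # With preload_app true, connections opened in the master must not be shared...
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # ...so each worker re-establishes its own connections after the fork.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end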
Thin has none of these issues out of the box, but you only get a single process of execution.
Summary: It's really hard to configure Unicorn out of the box to work well (or at all) for everyone, whereas Thin can just work to get people running with fewer support requests.
Recently (only a few months ago) the folks behind Phusion Passenger added support for Heroku. This is definitely an alternative you should try to see if it fits your needs.
It is blazing fast even with 1 dyno, and the drop in response time is palpable.
A simple Passenger Ruby Heroku Demo is hosted on github.
The main benefits that Passenger on Heroku claims are:
Static asset acceleration through Nginx - Don't let your Ruby app serve static assets, let Nginx do it for you and offload your app for the really important tasks. Nginx will do a much better job.
Multiple worker processes - Instead of running only one worker on a dyno, Phusion Passenger runs multiple workers on a single dyno, thus utilizing its resources to the fullest and giving you more bang for the buck. This approach is similar to Unicorn's. But unlike Unicorn, Phusion Passenger dynamically scales the number of worker processes based on current traffic, thus freeing up resources when they're not necessary.
Memory optimizations - Phusion Passenger uses less memory than Thin and Unicorn. It also supports copy-on-write virtual memory in combination with code preloading, thus making your app use even less memory when run on Ruby 2.0.
Request/response buffering - The included Nginx buffers requests and responses, thus protecting your app against slow clients (e.g. mobile devices on mobile networks) and improving performance.
Out-of-band garbage collection - Ruby's garbage collector is slow, but why bother your visitors with long response times? Fix this by running garbage collection outside of the normal request-response cycle! This concept, first introduced by Unicorn, has been improved upon: Phusion Passenger ensures that only one request at a time is running out-of-band garbage collection, thus eliminating all the problems Unicorn's out-of-band garbage collection has.
JRuby support - Unicorn's a better choice than Thin, but it doesn't support JRuby. Phusion Passenger does.
Hope this helps.
Heroku does not use intelligent routing - it will randomly assign jobs to dynos regardless of whether the dyno is busy. Thus, if your dyno cannot handle multiple jobs at once, you will get latency (perhaps massive latency) even if you are paying for lots of other dynos that are free. "That's right — if your app needs 80 dynos with an intelligent router, it needs 4,000 with a random router."
http://news.rapgenius.com/James-somers-herokus-ugly-secret-lyrics
Heroku says they are working on this, and their plan is to make it easier to use Unicorn. They basically said "Oops, we didn't notice that this was a problem for a few years... and now that we look, it's definitely a problem for Thin... so I guess you need to use a different program than the one we've been pushing all this time."
http://news.rapgenius.com/Jesper-joergensen-routing-performance-update-lyrics
From the official Heroku explanation (second link above):
"Rails, in fact, does not yet reliably support concurrent request handling. This leaves Rails developers unable to leverage the additional concurrency capabilities offered by the Cedar stack, unless they move to a concurrent web server like Puma or Unicorn.
Rails apps deployed to Cedar with Thin can rather quickly end up with request queuing problems. Because the Cedar router no longer does any queuing on behalf of the app, requests queued at the dyno must wait until the single Rails process works its way through the queue. Many customers have run into this issue and we failed to take action and provide them with a better approach to deploying Rails apps on Cedar."
Also of interest is that their performance tools, including New Relic, have not been reporting time spent in the dyno queue.
http://news.rapgenius.com/Lemon-money-trees-rap-genius-response-to-heroku-lyrics
Oops.
I've recently found that some people prefer using unicorn_rails instead of the default WEBrick as a web server for developing Rails applications.
I understand that if I wanted to use unicorn in production, it could make kind of sense to try it out in development, but since the configuration is different in production, is it even relevant?
Is there any real, tangible advantage that I would get from using thin or unicorn instead of WEBrick for developing a Rails application, such as speed or some additional features? Or is this just a matter of personal preference?
It is important to develop as closely as possible to the production environment. It helps ensure that an application will work as expected when deployed into production, instead of stumbling upon bugs at runtime.
This issue is alleviated with the use of Continuous Testing on a Build server that replicates the production environment. Even though you are not actively developing on an identical environment, the Continuous Testing gives you coverage that the application is functioning in the expected way.
As for speed, the performance hit of running a Rails app in development mode will negate any benefit the various web servers bring.
In addition to the other answers giving a pretty good overview already, there is also a technical reason you might want to consider using unicorn over WEBrick:
WEBrick does not support subdomains, and support for HTTPS is rather hacky to implement.
So if you have a SaaS application using subdomains, or if you simply want to have an admin/api/... subdomain, then WEBrick is not an option. There is Pow for Mac OS X, but this won't work for Linux developers.
My personal experience is that Unicorn is much, much faster than WEBrick when using a remote machine (Ubuntu, 4 cores, 8G mem, connecting through VPN -> ssh) as your development environment (as I do) -- I see sub-1-second page load times with Unicorn, while WEBrick takes 3 to 5 seconds or more. I'm not sure why; it may have more to do with my network than anything else, but that is what I personally see.
I haven't used Thin for development, as I've read that it requires an additional gem to allow for class reloading in development mode (I can't personally validate the accuracy of that). Also, I am more familiar with Unicorn and use it in production.
My personal experience is that WEBrick is faster in my development environment (OS X) than Unicorn and Thin in a pretty big Rails app (lots of gems, routes etc). But you should measure it yourself, with your app on your machine (I tested using ab and Chrome's developer tools).
However, using the same server in production and development is a very good idea.
I am setting up an Apache2 webserver running multiple Ruby on Rails web applications with Phusion Passenger. I know that Passenger spawns Ruby processes for handling requests. I have the following questions:
If more than one request has to be handled at the same time, will Passenger spawn multiple processes or multiple (Ruby) threads? How do I configure it so it always spawns single-threaded processes?
If I have two Rails applications, imagine that a request for app A goes to process 1, and later a request for app B arrives. Is it possible that process 1 will handle this request as well? When and how is this possible? In other words, is one process allowed to handle requests for multiple Rails applications?
I have the same Rails application exported in multiple URLs and multiple virtual hosts (such as http:// and https://). Will the same process be able to serve different virtual hosts? (The answer to this seems to be yes, I've set a global variable in answering a request to virtual host A, and I was able to retrieve the value in virtual host B.)
Generally speaking, Passenger spawns new processes by forking an ApplicationSpawner, which has the framework and application code pre-loaded into memory, or a FrameworkSpawner, which just has the framework code.
Passenger, as far as I know, doesn't deal in threads. Instead, as the load increases on an application, it will fork that Application's ApplicationSpawner and initialize another instance. When load decreases, one or more application instances are killed off.
If Passenger is configured in a certain way (I believe by choosing the "smart" spawn method), it will create a FrameworkSpawner, which loads the Rails code but no application code, and which can then be forked to load an application using that version of Rails.
So to answer your questions:
It will serve them sequentially, then spawn additional processes if it decides the load is high enough.
No. One process can only belong to a single Rails Application.
I'm kind of sketchy on this one, but your experiment makes sense. Passenger should be smart enough to figure out that even though it's running from different places in the server config, you're talking about the same application. It's probably based on the application's filesystem path.
EDIT: I went and read up on this a bit. Turns out I was mostly right, but the technical details were a bit off. See the Passenger documentation
Yup, Burke is right. In case of the third question, Phusion Passenger recognizes applications by their application root path. So even if you have two virtual hosts, if they both point to the same DocumentRoot then Phusion Passenger will think that they're the same app.