What "workers" really are? - ruby-on-rails

I feel lost.
Nginx has its own "worker" processes,
Unicorn has its own "worker" settings,
Resque has its own "workers".
Unicorn's settings should be related to Nginx's or Resque's, I guess?
I really searched for a clue but didn't get any.
Are all of these "workers" the same?
If not, can you briefly tell me what they are?

Nginx - Nginx is the web server that receives incoming requests and hands them off to the Unicorn workers.
Unicorn - Each Unicorn worker loads a separate Rails environment (worker).
Resque - Each Resque worker loads a separate Rails environment (worker).
The purposes of Unicorn and Resque are different:
Unicorn serves web requests.
Resque pulls background jobs from Redis and processes them.
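To make the split concrete, here is a minimal side-by-side sketch; the worker count and queue name are just example values, not part of the original answer:

# config/unicorn.rb — Unicorn workers handle incoming web requests
worker_processes 3   # three Rails processes serving HTTP

# Resque workers are separate processes started from the shell, e.g.:
#   QUEUE=* bundle exec rake resque:work
# Each one also boots Rails, but only pulls jobs from Redis and runs them.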

Related

How to run a worker or workers on Heroku indefinitely, and how do they differ from `web`?

I'd like to have two separate instances of PhantomJS constantly waiting on an "Ask" from the main application.
I'm confused about how a worker differs from a web process on Heroku.
In the Procfile, if I were to define web as NOT my application and instead started the application from the worker process, what would happen?
Or, for example, the Puma web server starts multiple threads. Does the web process account for multiple threads, or do you need a web or worker entry for every thread Puma creates?
Thanks
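For reference, a typical Procfile keeps the two process types separate, roughly like the sketch below (the exact commands depend on the app; these are placeholders). Heroku routes HTTP traffic only to the web process, and the threads Puma starts all live inside that one web process, so there is no need for an entry per thread.

web: bundle exec puma -C config/puma.rb
worker: bundle exec rake jobs:work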

How does a Rails app work with Unicorn and Nginx?

I would like to know how a browser request flows between Nginx and Unicorn to reach the Rails app. Can you explain? Thanks in advance.
Very generic question, thus a generic answer:
Nginx is usually set up as a reverse-proxy and static asset server.
Browser -(http)-> Nginx -(http)-> Unicorn worker -(rack)-> rails app
All Unicorn workers share the same listening socket; balancing is done by the OS via a simple 'first come, first served' rule.
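As an illustrative sketch of the Unicorn end of that chain (socket path and worker count are assumptions), the workers are usually told to listen on a single Unix socket, and Nginx's reverse-proxy configuration points at that same socket:

# config/unicorn.rb — example values only
worker_processes 4                            # all four workers accept on the same socket
listen "/tmp/unicorn.myapp.sock", backlog: 64
# Nginx proxies requests to /tmp/unicorn.myapp.sock; whichever worker is free
# picks the next request up ("first come, first served").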

Do application server workers spawn threads?

The application servers used by the Ruby web applications I know of have the concept of worker processes. For example, Unicorn has this in the unicorn.rb configuration file, and for Mongrel it is called servers, usually set in your mongrel_cluster.yml file.
My two questions about it:
1) Does every worker/server work as a web server and spawn a process/thread/fiber each time it receives a request, or does it block a new request when another one is already running?
2) Is this different from application server to application server? (Like Unicorn, Mongrel, Thin, WEBrick...)
This is different from app server to app server.
Mongrel (at least as of a few years ago) would have several worker processes, and you would use something like Apache to load balance between the worker processes; each would listen on a different port. And each mongrel worker had its own queue of requests, so if it was busy when apache gave it a new request, the new request would go in the queue until that worker finished its request. Occasionally, we would see problems where a very long request (generating a report) would have other requests pile up behind it, even if other mongrel workers were much less busy.
Unicorn has a master process and just needs to listen on one port, or a Unix socket, and uses only one request queue. That master process only assigns requests to worker processes as they become available, so the problem we had with Mongrel is much less of an issue. If one worker takes a really long time, it won't have requests backing up behind it specifically; it just won't be available to help with the master queue of requests until it finishes its report or whatever the big request is.
WEBrick shouldn't even be considered; it's designed to run as just one worker in development, reloading everything all the time.
Off the top of my head, so don't take this as "truth":
Ruby (MRI) servers:
Unicorn, Passenger and Mongrel all use 'workers', which are separate processes. All of these workers are started when you launch the master process, and they persist until the master process exits. If you have 10 workers and they are all handling requests, then request 11 will be blocked waiting for one of them to complete.
WEBrick only runs a single process as far as I know, so request 2 would be blocked until request 1 finishes.
Thin: I believe it uses evented I/O to handle HTTP, but it is still a single-process server.
JRuby servers:
Trinidad and TorqueBox are multi-threaded and run on the JVM.
See also Puma: multi-threaded, for use with JRuby or Rubinius.
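As a rough illustration of the process-versus-thread distinction (the numbers below are arbitrary examples, not recommendations), the two models show up directly in the servers' Ruby config DSLs:

# config/unicorn.rb — pure process model
worker_processes 4   # at most 4 requests in flight, one per worker process

# config/puma.rb — threaded (or hybrid) model
workers 2            # optional: fork 2 worker processes...
threads 1, 16        # ...each running between 1 and 16 threads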
I think GitHub explains Unicorn best in their (old, but still valid) blog post https://github.com/blog/517-unicorn.
I think it puts backlogged requests in a queue.

Single-dyno setup on Heroku for Rails with both WebSocket and worker queue

At the moment I have a small web application running on Heroku with a single dyno. This dyno runs a Rails app on Unicorn with a single worker queue.
config/unicorn.rb:
worker_processes 1
timeout 180
@resque_pid = nil
before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake jobs:work")
end
I would like to add WebSocket functionality, but from what I have read, Unicorn is not one of the web servers that supports faye-websocket. There is the Rainbows! web server that is based on Unicorn, but I'm unclear whether it's possible for me to switch and keep my spawn for the queue worker.
I suppose that with more than a single dyno one could just add another dyno to run a Rainbows! web server for the WebSockets part, right? That is unfortunately not an option at the moment. Is there a way to get it working with a single dyno for my setup?
If not, what other options are available to get information from the server to the client, e.g. based on asynchronous work being completed? I'm using polling for other things in the application, i.e. to start an asynchronous job that is handled by the worker process; upon completion the polling client (browser) sees a completion flag. This works, but I'd like to improve it if possible.
I'm open to hearing about your experiences and suggestions. Thanks in advance!
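Not an authoritative answer, but going by the Rainbows! and faye-websocket documentation, the switch might look roughly like the sketch below — untested, and the EventMachine concurrency model and connection count are assumptions:

# config/rainbows.rb — hypothetical drop-in for config/unicorn.rb
Rainbows! do
  use :EventMachine        # faye-websocket's Rainbows adapter expects an EventMachine-based model
  worker_connections 100   # concurrent connections per worker (arbitrary number)
end

worker_processes 1
timeout 180

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake jobs:work")   # keep the existing job-worker spawn
end
# Started with something like: bundle exec rainbows -c config/rainbows.rb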

Difference between Mongrel and Mongrel Cluster?

Can anybody provide a brief explanation of the differences between mongrel and mongrel cluster?
Mongrel is a web server that can handle one request at a time. In order to handle multiple requests, you want to run multiple mongrels. A proxy server (e.g. Apache) will sit in front of them, listen on port 80, and relay web requests to an available Mongrel. Mongrel Cluster is a gem that manages launching, stopping and restarting the mongrels, and running them in the right environment with the right user. It abstracts the individual mongrels as workers so you don't need to worry about them (until things go wrong). All of that is managed by a configuration file usually located with the application.
Tass and Larry K are correct, though. If you are looking at a new setup, think about Passenger or Unicorn. Both are great; Unicorn is a bit more complicated and I would not recommend it to a beginner.
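For the curious, the configuration file mentioned above is typically a mongrel_cluster.yml along these lines — a from-memory sketch with placeholder paths, port and server count:

# config/mongrel_cluster.yml — example values only
cwd: /var/www/myapp/current
environment: production
address: 127.0.0.1
port: "8000"     # first of three consecutive ports (8000-8002)
servers: 3       # number of mongrel worker processes
pid_file: tmp/pids/mongrel.pid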
Mongrel Cluster is multiple Mongrel instances. The web server then rotates among them to handle incoming calls.
But these days the cool kids tend to use Passenger (and often the related Ruby Enterprise Edition too).
Mongrel Cluster is somewhat outdated; today you would use Unicorn. The GitHub guys switched too.
