Do application server workers spawn threads? - ruby-on-rails

The application servers used by Ruby web applications that I know of have the concept of worker processes. For example, Unicorn sets this in the unicorn.rb configuration file, and for Mongrel it is called servers, usually set in your mongrel_cluster.yml file.
My two questions about it:
1) Does every worker/server act as a web server and spawn a process/thread/fiber each time it receives a request, or does it block a new request when another one is already being handled?
2) Is this different from application server to application server? (Like unicorn, mongrel, thin, webrick...)

This is different from app server to app server.
Mongrel (at least as of a few years ago) would have several worker processes, and you would use something like Apache to load balance between them; each worker would listen on a different port. Each Mongrel worker also had its own queue of requests, so if it was busy when Apache handed it a new request, that request would go into the worker's queue until it finished its current request. Occasionally we would see problems where a very long request (generating a report) would have other requests pile up behind it, even though other Mongrel workers were much less busy.
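For reference, the Apache side of that setup was typically a mod_proxy_balancer block along these lines (the ports and balancer name are illustrative, not from the setup described above):

<Proxy balancer://mongrel_cluster>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
  BalancerMember http://127.0.0.1:8002
</Proxy>
ProxyPass / balancer://mongrel_cluster/
ProxyPassReverse / balancer://mongrel_cluster/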
Unicorn has a master process and only needs to listen on one port or a unix socket, with a single shared request queue. Workers pull requests from that shared socket only when they are free, so the problem we had with Mongrel is much less of an issue. If one worker takes a really long time, requests won't back up behind it specifically; it just won't be available to help with the shared queue until it finishes its report or whatever the big request is.
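For concreteness, a minimal unicorn.rb that sets up this model looks roughly like the sketch below (the numbers and socket path are illustrative, not a recommendation):

worker_processes 4                          # long-lived workers forked once by the master
listen "/tmp/unicorn.sock", :backlog => 64  # one shared socket; pending requests queue here
timeout 30                                  # master kills a worker stuck longer than this
preload_app true                            # load the app in the master, then fork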
Webrick shouldn't even be considered; it's designed to run as just one worker in development, reloading everything all the time.

off the top of my head, so don't take this as "truth"
ruby (MRI) servers:
unicorn, passenger and mongrel all use 'workers', which are separate processes. All of these workers are started when you launch the master process and they persist until the master process exits. If you have 10 workers and they are all handling requests, then request 11 will be blocked waiting for one of them to finish.
webrick only runs a single process as far as I know, so request 2 would be blocked until request 1 finishes
thin: I believe it uses evented I/O to handle HTTP, but it is still a single-process server
jruby servers:
trinidad, torquebox are multi-threaded and run on the JVM
see also puma: multi-threaded, for use with jruby or Rubinius
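Since puma is the multi-threaded option mentioned above, its config looks roughly like this sketch (the numbers are arbitrary, and on MRI the threads still share the GIL):

# config/puma.rb
workers 2       # separate processes, like unicorn workers
threads 1, 16   # each worker grows from 1 up to 16 threads as load demands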

I think GitHub best explains unicorn in their (old, but valid) blog post https://github.com/blog/517-unicorn.
I think backlogged requests just sit in a queue (the listen socket's backlog) until a worker is free.

Related

Do Unicorn app server workers go up and down all the time, or are they long-lived? How do they retrieve requests from the master?

We have a Rails app inside a Unicorn app server.
Unicorn works with processes (unlike Puma, for example, which works with threads).
I understand the general architecture, where there's one master process which somehow passes each request to a worker. I'm not sure how, though.
If we have 5 workers, does that mean we have 5 processes running all the time? Or does the master fork a new process for each request, with the worker dying once it finishes handling that request?
How is the request passed to the worker?
Also, if there's a detailed article about Unicorn's architecture to use as a reference, that would be amazing!
The workers are long-lived. The master forks the configured number of workers once at boot, and each worker handles many requests; the master only re-forks a worker if it dies or is killed (for example for exceeding the timeout). The master does not hand requests to workers at all: it opens the listen socket(s) and forks the workers, and each worker calls accept on that shared socket whenever it is free. The operating system decides which waiting worker gets the next connection, and requests that arrive while every worker is busy sit in the socket's backlog. For a reference, the DESIGN document that ships with Unicorn and the GitHub "Unicorn!" blog post linked earlier both describe this architecture.
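A toy sketch of that preforking pattern (this is not Unicorn's code, just the shape of it; it needs a Unix system because of fork):

require "socket"

listener = TCPServer.new(8080)    # the one shared listen socket

3.times do
  fork do                         # each worker is forked once and lives for many requests
    loop do
      client = listener.accept    # blocks until the kernel hands this worker a connection
      client.gets                 # read (and ignore) the request line
      client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
      client.close
    end
  end
end

Process.waitall                   # the master just supervises; it never touches a request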

How to run a worker or workers on Heroku indefinitely, and how do they differ from `web`?

I'd like to have two separate instances of PhantomJS constantly waiting on an "Ask" from the main application.
I'm confused about how a worker differs from a web process on Heroku.
In the Procfile, if I were to define web as something that is NOT my application and started the application from the worker process instead, what would happen?
Or, for example, the Puma web server starts multiple threads. Does the web process account for multiple threads, or do you need a web or worker entry for every thread Puma creates?
Thanks
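For context, the kind of Procfile the question is describing looks something like the sketch below (the commands are illustrative, not from the original post). Each line is one process type that Heroku runs in its own dyno, and any threads Puma starts all live inside the single web process:

web: bundle exec puma -t 5:5 -p $PORT
worker: bundle exec rake jobs:work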

Worker "dyno" in AWS Elastic Beanstalk

Amazon Web Services now has a worker tier in Elastic Beanstalk. But it nevertheless confuses those of us who come from the days of the Heroku worker dyno.
As a comparison, in Heroku one can configure two dynos (something like a processor?), one each for web and worker. The web dyno will serve any request and normally times out at 15 secs, so if you have a request that lasts longer than that, it will simply time out, although it is not terminated per se. In that case you should use a worker, and your web dyno should hit an endpoint several times per minute (maybe) to check whether there is any result to be brought back to the user. To make either a worker or a web dyno, all you need to do is move a slider and you are good to go. Sometimes you may need a Procfile, but there is nothing fancy, really difficult, or confusing about it.
In AWS EBS (Elastic Beanstalk), from day one when you run eb init, you are asked whether the environment is Standard or Worker. Once you pick Standard, there seems to be no way to make it act as a worker as well.
In our situation, the worker and the standard web app live under one application. So how could we use an EBS instance for both worker and standard? Our worker uses Sidekiq and Redis. Please point us to any guidance or help us with this matter.
AWS Elastic Beanstalk has two types of Environments - Web tier and Worker tier.
Web tier environments are meant for web applications - http/https request processing. You get one or more EC2 instances behind a load balancer. You can get other resources like database per your requirement. You can choose the platform you wish e.g. Ruby, Python, Java, Node.js, PHP, Docker.
Worker environments are meant for asynchronous message processing. When you create a worker environment you do not have a load balancer. All your EC2 instances are in an autoscaling group, and all of them run a daemon which polls a single SQS queue for messages. When the daemon pulls a message from the SQS queue, it sends an HTTP POST request to localhost:80. You can configure the port, but the important thing is that the daemon posts the message as an HTTP request on localhost. Your worker application is actually a web application that receives the POST request and processes the message. After the message is successfully processed, the worker daemon expects your application running on localhost to return an HTTP 200 OK response; the daemon then deletes the message from the SQS queue. You can write your worker application for any platform, just like standard web server applications - Ruby, Python, Java, Node.js, PHP, Docker.
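To make that concrete, the worker application can be as small as a single Rack endpoint. A minimal config.ru sketch (the JSON parsing and the "work" are placeholders, not part of Elastic Beanstalk itself):

require "json"

# The worker daemon POSTs each SQS message body to this app on localhost.
run lambda { |env|
  if env["REQUEST_METHOD"] == "POST"
    message = JSON.parse(env["rack.input"].read)       # assumes the message body is JSON
    # ... do the long-running work described by the message here ...
    [200, { "Content-Type" => "text/plain" }, ["OK"]]  # 200 tells the daemon to delete the message
  else
    [405, { "Content-Type" => "text/plain" }, ["POST only"]]
  end
}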
Based on my understanding of your use case, I would recommend creating two Elastic Beanstalk environments - one standard web server environment and one worker environment. The standard web server receives HTTP requests and processes them synchronously; this environment puts the relevant data in an SQS queue. The second environment is a worker, and the daemon running in that environment polls the SQS queue for messages. Your second environment is a web application that is NOT open to the internet; the worker daemon posts the messages as HTTP requests to your worker environment. Thus you can process long-running workloads asynchronously using this second worker environment.
With worker environments you can use your own queues or Elastic Beanstalk can generate a queue for you. You can configure parameters like message visibility timeout, http connections based on your requirements or you can use the defaults.
Below are some links that may be useful for you:
http://aws.amazon.com/blogs/aws/background-task-handling-for-aws-elastic-beanstalk/
http://blogs.aws.amazon.com/application-management/post/Tx1Y8QSQRL1KQZC/Elastic-Beanstalk-Video-Tutorial-Worker-Tier
https://stackoverflow.com/a/23942498/161628
Does this meet your requirements? Please let me know if you have further questions.
Update
You need to upload your source code at two places - once for the worker environment and once for the web server environment. If someone was starting from scratch, they might have two separate code bases. But in your case I think it should be perfectly fine to have a single code base shared between the two environments. Suppose your web request arrives at '/register'; then the register() method in your application can post a message to an SQS queue and be done with the HTTP request. Your worker environment will then poll the SQS queue and post messages over HTTP on localhost to the URL '/async_register', which invokes the async_register() method in your application and does the asynchronous processing. These two methods can live in the same source code bundle, which can be shared by both the worker and web server environments. The code paths taken by the worker and the web server will be different, so that web server environments invoke register() and worker environments invoke async_register().
Another caveat is that HTTP requests sent by the worker daemon on localhost will contain an HTTP header - "User-Agent": "aws-sqsd/1.1". So in your web application you could even have a single listener for POST requests on "/register" and, depending on whether this header is present or not, invoke register() or async_register() internally.
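A sketch of that single-listener dispatch in a Rails controller (the controller and method names follow the register/async_register example above and are illustrative):

class RegistrationsController < ApplicationController
  # One endpoint shared by the web and worker environments.
  def create
    if request.user_agent.to_s.start_with?("aws-sqsd")
      async_register   # POSTed by the worker daemon on localhost
    else
      register         # an ordinary web request from a user
    end
  end

  private

  def register
    # enqueue the payload on SQS and return to the user right away
    head :ok
  end

  def async_register
    # do the slow processing here; a 200 lets the daemon delete the SQS message
    head :ok
  end
end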
Also, if you want to share the code base between the two environments, you can upload it to only one place. Your environments are logically grouped into applications, so you can have a single application. You upload your source code to this application using the "CreateApplicationVersion" API call. Suppose you upload an application version with the label 'v1'. You can now create a worker environment and a web server environment under the same application. When you create an environment you need to provide a version to deploy to that environment; in this case you can deploy v1 to both environments, so they share the same source code. When you have a new version "v2", you upload it and then update both environments, changing their version to "v2".
The same version of the source code can be deployed to both environments. They will be running on different EC2 instances because one environment is dedicated for responding to web requests and one environment is dedicated for responding to asynchronous web requests (worker).
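With the AWS CLI that flow looks roughly like the commands below (the application, environment, and bucket names are made up for illustration):

aws elasticbeanstalk create-application-version --application-name myapp \
    --version-label v2 --source-bundle S3Bucket=my-deploy-bucket,S3Key=myapp-v2.zip
aws elasticbeanstalk update-environment --environment-name myapp-web    --version-label v2
aws elasticbeanstalk update-environment --environment-name myapp-worker --version-label v2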

Single-dyno setup on Heroku for Rails with both WebSocket and worker queue

At the moment I have a small web application running on Heroku with a single dyno. This dyno runs a Rails app on Unicorn with a single worker queue.
config/unicorn.rb:
worker_processes 1
timeout 180
@resque_pid = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake jobs:work")
end
I would like to add WebSockets functionality, but from what I have read Unicorn is not one of the web servers that supports faye-websocket. There is the Rainbows! web server that is based on Unicorn, but I'm unclear if it's possible for me to switch and keep my spawn for the queue worker.
I suppose that with more than a single dyno one could just add another dyno to run a Rainbows! web server for the WebSockets part, right? That is unfortunately not an option at the moment. Is there a way to get it working with a single dyno for my setup?
If not, what other options are available to get information from server to client, e.g. based on asynchronous work being completed? I'm using polling for other things in the application, i.e. to start an asynchronous job that is handled by the worker process; upon completion the polling client (browser) sees a completion flag. This works, but I'd like to improve it if possible.
I'm open to hear about your experiences and suggestions. Thanks in advance!

How do I gracefully shut down a Mongrel web server

My RubyOnRails app is set up with the usual pack of Mongrels behind an Apache configuration. We've noticed that our Mongrel web server memory usage can grow quite large during certain operations, and we'd really like to be able to do a graceful restart of selected Mongrel processes dynamically at any time.
However, for reasons I won't go into here it can sometimes be very important that we don't interrupt a Mongrel while it is servicing a request, so I assume a simple process kill isn't the answer.
Ideally, I want to send the Mongrel a signal that says "finish whatever you're doing and then quit before accepting any more connections".
Is there a standard technique or best practice for this?
I've done a little more investigation into the Mongrel source, and it turns out that Mongrel installs a signal handler to catch a standard process kill (TERM) and do a graceful shutdown, so I don't need a special procedure after all.
You can see this working from the log output you get when killing a Mongrel while it's processing a request. For example:
** TERM signal received.
Thu Aug 28 00:52:35 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
Waiting for 2 requests to finish, could take 60 seconds.
Thu Aug 28 00:52:41 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
Waiting for 2 requests to finish, could take 60 seconds.
Thu Aug 28 00:52:43 +0000 2008 (13051) Rendering layoutfalsecontent_typetext/htmlactionindex within layouts/application
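So a graceful stop can be as simple as sending TERM to the pid from the pidfile; the path here is just the one used in the monit example below, not necessarily yours:

kill -TERM `cat /var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid`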
Look at using monit. You can dynamically restart mongrel based on memory or CPU usage. Here's a line from a config file that I wrote for a client of mine.
check process mongrel-8000 with pidfile /var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid
  start program = "/usr/local/bin/mongrel_rails cluster::start --only 8000"
  stop program = "/usr/local/bin/mongrel_rails cluster::stop --only 8000"
  if totalmem is greater than 150.0 MB for 5 cycles then restart   # eating up memory?
  if cpu is greater than 50% for 8 cycles then alert               # send an email to admin
  if cpu is greater than 80% for 5 cycles then restart             # hung process?
  if loadavg(5min) greater than 10 for 3 cycles then restart       # bad, bad, bad
  if 3 restarts within 5 cycles then timeout                       # something is wrong, call the sys-admin
  if failed host 192.168.106.53 port 8000 protocol http request /monit_stub
    with timeout 10 seconds
    then restart
  group mongrel
You'd then repeat this configuration for all of your mongrel cluster instances. The monit_stub line is just an empty file that monit tries to download. If it can't, it tries to restart the instance as well.
Note: the resource monitoring seems not to work on OS X with the Darwin kernel.
A better question is how to keep your app from consuming so much memory that it requires you to restart mongrels from time to time.
www.modrails.com (Phusion Passenger) reduced our memory footprint significantly
Boggy:
If you have one process running, it will gracefully shut down (servicing all the requests in its queue, which should only be 1 if you are using proper load balancing). The problem is that you can't start the new server until the old one dies, so your users will queue up in the load balancer.
What I've found successful is a 'cascade' or rolling restart of the mongrels. Instead of stopping them all and starting them all (and therefore queuing requests until the one mongrel is done, stopped, restarted and accepting connections), you stop then start each mongrel sequentially, blocking the call to restart the next mongrel until the previous one is back up (use a real HTTP check to a /status controller). As your mongrels roll, only one at a time is down and you are serving across two code bases; if you can't do that, you should throw up a maintenance page for a minute. You should be able to automate this with Capistrano or whatever your deploy tool is.
So I have 3 tasks:
cap deploy - which does the traditional restart-everything-at-once method, with a hook that puts up a maintenance page and then takes it down after an HTTP check.
cap deploy:rolling - which does this cascade across the machine (I pull from iClassify to know how many mongrels are on the given machine) without a maintenance page; a rough sketch of the idea follows this list.
cap deploy:migrations - which does maintenance page + migrations, since it's usually a bad idea to run migrations 'live'.
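A rough Capistrano 2-era sketch of that rolling task (the ports, the /status path, and the curl loop are assumptions for illustration, not the actual tasks described above):

namespace :deploy do
  desc "Restart mongrels one at a time, waiting for each to come back up"
  task :rolling, :roles => :app do
    [8000, 8001, 8002].each do |port|                     # one entry per mongrel in the cluster
      run "mongrel_rails cluster::restart --only #{port}"
      # block until this mongrel answers its health check before touching the next one
      run "until curl -sf http://localhost:#{port}/status > /dev/null; do sleep 2; done"
    end
  end
end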
Try using:
mongrel_cluster_ctl stop
You can also use:
mongrel_cluster_ctl restart
got a question
What happens when /usr/local/bin/mongrel_rails cluster::start --only 8000 is triggered?
Are the requests being served by that particular process served to their end, or are they aborted?
I'm curious whether this whole start/restart thing can be done without affecting the end users...
