Running delayed_job on each app server or on a separate instance?

We're migrating our application www.monaqasat.com from a single server to a distributed infrastructure and we're debating where to run delayed_job from. The two obvious options are:
From each app instance?
From a single (or redundant) utility server?
Any recommendations or pros/cons?

I would prefer to use a single utility server unless the workload starts to overwhelm it.
I would also look at Gearman (http://gearman.org/).
If you need serious messaging, then RabbitMQ (http://www.rabbitmq.com/) is a good choice.
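Either way, delayed_job workers only need to share the database with the app servers, so where they run is an operational choice: enqueue from the web tier and start the workers on whichever box you pick. A rough sketch (the job, mailer and model names are made up for illustration):

```ruby
# Custom delayed_job payload: anything that responds to #perform can be
# enqueued. Job, mailer and model names here are illustrative only.
NewsletterJob = Struct.new(:text, :emails) do
  def perform
    emails.each { |email| NewsletterMailer.newsletter(text, email).deliver }
  end
end

# Enqueued from any app server; the job row lands in the shared database,
# where a worker on the utility server will pick it up.
Delayed::Job.enqueue NewsletterJob.new("lorem ipsum...", Customer.pluck(:email))

# Or delay an existing method call instead of defining a job class:
Customer.find(42).delay.send_welcome_email
```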

Related

Ruby on Rails on a few servers

I have a big application. One part of it is high-load processing of user files. I have decided to dedicate one server to this. It will run nginx for serving content and some (non-Rails) programs for processing the files.
I have two questions:
What is better to use on this server? (Rails or something else, maybe Sinatra?)
If I use Rails, how do I deploy? I can't find any instructions. If I have one app and two servers, how do I deploy it and delegate tasks between them?
P.S. I need to authenticate users on both servers. In Rails I use Devise.
You can use Rails for this. If both servers will serve web requests to end users, then you'll need some sort of load balancer in front of them. HAProxy does a great job at this.
As far as getting the two applications to communicate with each other, this will be less trivial than you may think. What you should do is use a locking mechanism when performing the tasks. delayed_job will, by default, lock a job in the queue so that other workers will not try to work on the same job. You can use ActiveJob callbacks to notify the user via web sockets whenever their job is completed.
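For the "notify the user when their job is done" part, an ActiveJob after_perform callback is one way to do it; the job name, channel name and payload below are assumptions for illustration, not anything from your app:

```ruby
# Hypothetical job; assumes you already run ActionCable (or some other
# pub/sub channel) for the web-socket push back to the browser.
class ProcessUserFileJob < ApplicationJob
  queue_as :default

  after_perform do |job|
    user_id = job.arguments.first
    ActionCable.server.broadcast("notifications_#{user_id}",
                                 status: "file processed")
  end

  def perform(user_id, file_path)
    # the heavy file-processing work goes here
  end
end
```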
Anything that takes time or calls an external API should usually be placed in a background processing queue so that you're not holding up the user.
If you cannot spin up more than the two servers, you should make one of them the master, or at least give each server a clear role. For example, one server may be your background processing and memcached server while the other stores your database and handles your web sockets.
There are a lot of different ways of configuring the services and anything including and beyond what I've mentioned is opinionated.
Having separate servers for handling tasks is my preference as it makes them easier to manage from a sysadmin perspective. For example, if we find that our web sockets server is hammered, we can simply spin up a few more web socket servers and throw them into a load balancer pool; the end user is not negatively impacted by the networking change. Whereas if your servers perform dual roles outside of your standard Rails installation, you may find yourself cloning and wasting resources. Each of my web servers usually also performs background tasks on low-to-intermediate priority queues, while a dedicated server is reserved for mission-critical jobs.
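With delayed_job, that split mostly comes down to which queues each worker process listens to. One way to express it, with illustrative queue names, run from something that has loaded the Rails environment (a rake task, for instance):

```ruby
# On a web/app server: only pick up low-to-intermediate priority housekeeping.
Delayed::Worker.new(queues: %w[low maintenance]).start

# On the dedicated worker box: mission-critical jobs only.
Delayed::Worker.new(queues: %w[critical]).start
```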

Ruby on Rails web server

I'm new to web development and have a question about deploying a Ruby on Rails application.
From what I understand, there are two ways to deploy. One is using a cloud platform like Heroku (which I'm currently using): you just push your project to their servers and you're ready to go.
The other way is to build your own server using, for example, Apache + Passenger. Done this way, I need a physical computer with Ubuntu + Apache + Passenger running continuously, right?
So my question is: which way is better and faster?
Heroku has a form of automated deployment built-in. There are other cloud-based providers which offer a similar sort of service.
The alternative is self-hosted. You don't need a "physical computer"; you can just as well use a virtualized server in the cloud. Popular choices are Linode, DigitalOcean and Amazon EC2, which is what Heroku is built on.
"Better" is highly subjective. Do you know how to maintain a server? If the answer to this question is "No", then Heroku is probably the best bet.
If you do know how to maintain a server, you can usually get better performance from your own rig since you have full control over how your application is launched, how long it stays running, and can increase resources at marginal additional cost. The downside is you're responsible for everything should it malfunction.

How to run multiple tiny Ruby (Rack) apps on one server?

I want to run several (more than 2) tiny Rack-based apps on my VPS, which already has one large Rails app running.
The Rails app uses the traditional pair of Unicorn & nginx, and it requires most of the RAM that I have on my VPS.
I've tried adding a similar Unicorn configuration for each app, which led me to conclude that the RAM is insufficient.
So my question is: is it possible to set up a single, memory-efficient server that lets me run several Sinatra apps at once?
UPDATE: in case it matters, I don't care much about performance. These apps are not intended to do any serious work.
UPDATE 2: an approach based on Unix sockets shared with nginx is preferred over one based on ports.
Thanks!
I did my own exploration of this question, and I think I found a solution that will let me run all my tiny apps under one web server at once.
It is based on RackStack, a not-yet-a-gem created by Remi Taylor (@remi on GitHub): https://github.com/remi/rack-stack.
RackStack is inspired by Rack::Builder, which itself seems well suited to a task like this; RackStack just goes further in the same direction, abstracting the "stack" functionality in a way I found very nice and handy.
Here is a demonstration of RackStack consisting of two sample apps (Sinatra and Rack): https://github.com/stanislaw/skeletons/tree/master/rack_stack. To mimic the behaviour of stacked apps on a real server, I modified my /etc/hosts file so that a localhost2 host points to 127.0.0.1.
I fire up a Thin server and then make requests to localhost or localhost2: requests to 'localhost' are served by FirstApp, and requests to 'localhost2' by SecondApp.
I can't yet foresee what problems may appear when I test my apps on a real server, but for now this approach seems to be exactly what I was looking for: I imagine that on a real server nginx will pass requests for all the hosts associated with my Rack apps to a socket that the Thin server listens on, so RackStack will only see requests addressed to the apps in my stack.
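If RackStack doesn't pan out, plain Rack::Builder (its inspiration) can already do the host-based dispatch from a single config.ru served by one Thin process; the app and host names below simply mirror the demo setup:

```ruby
# config.ru -- one Thin process, several tiny apps, dispatched by Host header
require "sinatra/base"

class FirstApp < Sinatra::Base
  get("/") { "served by FirstApp" }
end

class SecondApp < Sinatra::Base
  get("/") { "served by SecondApp" }
end

# Rack::URLMap accepts full URLs, so the Host header selects the app.
map "http://localhost/" do
  run FirstApp
end

map "http://localhost2/" do
  run SecondApp
end
```

Started with something like `thin start -R config.ru`, with nginx proxying each relevant server_name to that single Thin socket or port.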
Any suggestions, improvements to this scheme, or alternatives are still appreciated!

RabbitMQ with EventMachine and Rails

We are currently planning a Rails 3.2.2 application in which we use RabbitMQ. We would like to run several kinds of workers (and several instances of each worker) to process messages from different queues. The workers are written in Ruby and live in the lib directory of the Rails app.
Some of the workers need the Rails framework (Active Record, Active Model, ...) and some of them don't. The first worker should be called every minute to check whether updates are available. The other workers should process the messages from their queues when messages (which are sent by the first worker) are present, and do some (time-consuming) work with them.
So far, so good. My problem is that I have only a little experience with messaging systems like RabbitMQ and no experience with how Rails interacts with them. So I'm wondering what the best practices are for getting the two to play with each other. Here are my requirements again:
Rails 3.2.2 app
RabbitMQ
Several kinds of workers
Several instances of one worker
Control the number of workers from within Rails
Workers do time-consuming tasks, so they have to run asynchronously
Only a few workers need the Rails framework. The others are just Ruby files with some dependencies like Net or File
I was looking for some solution and came up with two possibilities:
Using amqp with EventMachine in a new thread
Of course, I don't want my Rails app to be blocked when a new worker is created. The worker should run in another thread and do its work asynchronously. Furthermore, it should not start a new instance of my Rails application; it should only require the things the worker needs.
But some articles say there are issues with Passenger. Another thing I don't like is that we use WEBrick in development, so we would have to include workarounds for that too. It would be possible to switch to another web server like Thin, but I don't have any experience with that either.
Using some kind of daemonizing
Maybe it's possible to run the workers as daemons, but I don't know how much overhead that would involve, or how I could control the number of workers.
I hope someone can advise a good solution for this (and I hope I made myself clear ;)
It seems to me that AMQP is overkill for your problem. Have you tried Resque? The backing Redis database has some neat features (like publish/subscribe and blocking list pops) which make it very interesting as a message queue, and Resque is very easy to use in any Rails app.
The workers are daemonized, and you decide which workers in your pool listen to which queues, so you can scale each type of job as needed.
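A minimal Resque sketch of that setup (the queue and job names are invented):

```ruby
# lib/jobs/process_upload_job.rb -- a plain Ruby class; it only needs Rails
# loaded if the perform body touches your models.
class ProcessUploadJob
  @queue = :uploads

  def self.perform(upload_id)
    # time-consuming work on the file identified by upload_id
  end
end

# Enqueued from the Rails app (controller, model callback, cron job, ...):
Resque.enqueue(ProcessUploadJob, 42)
```

Workers are started per queue, e.g. `QUEUE=uploads rake resque:work` (or several of them), so scaling a given job type is just a matter of running more workers on that queue.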
Using the EM reactor inside a request/response cycle is not recommended, because it may conflict with an existing event loop (for instance if your app is served by Thin); in any case you have to configure it specifically for your web server. On the other hand, it may be interesting to have an evented queue consumer if your jobs involve blocking IO and are not processor-bound.
If you still want to do it with AMQP, see the amqp gem's guide "Starting the event loop and connecting in Web applications" and configure it for your web server accordingly. Or use bunny to push to the queue synchronously (and whichever job consumer you deem useful, Workling for instance).
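For the bunny route, publishing is a plain synchronous call you can make anywhere in the request/response cycle; the connection URL and queue name below are placeholders, and the API shown is the modern (0.9+) bunny one:

```ruby
require "bunny"

conn = Bunny.new("amqp://guest:guest@localhost:5672")
conn.start

channel = conn.create_channel
queue   = channel.queue("updates.check", durable: true)

# Hand a message to RabbitMQ for a worker to pick up later.
channel.default_exchange.publish("check for updates",
                                 routing_key: queue.name,
                                 persistent:  true)

conn.close
```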
We are running a slightly different, but similar, technology stack.
Daemon Kit is used for the EventMachine side of the system: no Rails, but shared models (MongoMapper & MongoDB). EM pulls messages off the queues and does whatever logic is required (we have Ruleby in the mix, but if-then-else works too).
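The consumer side of that, with the amqp gem inside an EventMachine loop, looks roughly like this (queue name and handler body are placeholders):

```ruby
require "amqp"

EventMachine.run do
  AMQP.connect("amqp://guest:guest@localhost") do |connection|
    channel = AMQP::Channel.new(connection)
    queue   = channel.queue("hl7.inbound", durable: true)

    queue.subscribe(ack: true) do |metadata, payload|
      # run the rules engine / if-then-else logic, persist via the shared
      # MongoMapper models, then acknowledge the message
      metadata.ack
    end
  end
end
```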
MuleSoft ESB is our outward-facing message receiver and sender; it helps us deal with the HL7/MLLP world. But in v1 of the app, we used some Java code in ActiveMQ to manage HL7 messages.
The Rails app then just serves up things for the user to see, again using the shared models.

Would it make sense to make a web server/app server for Rails in NodeJS

OK, Node.js is all the buzz these days because it handles things in a non-blocking, asynchronous way. Because of this, it is very well suited to being a server of some sort, handling requests from multiple clients concurrently. So my question is whether it would make sense, from a technical perspective, to write a general-purpose web server/app server for Rails in Node.js for production use. To be clear, it would take the place of (for example) Apache and Phusion Passenger. Would this setup, in theory, not be faster at handling requests and responding?
You could use Nginx, Lighttpd or Mongrel2, which are event-based. To my knowledge, all three use evented I/O and don't build and tear down threads or forks on each new connection, so you can keep your Ruby on Rails stack as it is. If you need bidirectional communication for any AJAX, I'd suggest adding a Node.js Socket.IO server.
Apache is very inefficient at handling concurrent connections. If you have a high-traffic scenario, then Node should do a better job than Apache at handling the connections. However, Node itself is much more than just an HTTP server; it is possible to write brand-new MVC frameworks, not unlike Rails, for building web applications. It is perhaps not wise to write an HTTP server in Node to replace Apache/Phusion Passenger just yet: Node is young and has not yet released version 1.0.
