Rails 4.2 load balancing with nginx, redis and sidekiq

Hi, I just launched a Rails 4 application that uses nginx as a load balancer in front of thin serving Rails on 2 ports. Additionally, I use redis as a cache, which is also used by sidekiq.
I was wondering how I can scale up using another machine in order to run two more Rails instances there. My idea is just to run two more Rails instances on the second machine, but the headache comes with redis, since sidekiq makes heavy use of it. My first idea was to have a read-only redis slave on the second machine, but this might be error prone, since I do a lot of writes into redis in order to check a worker queue.
The following scenario kind of confuses me. The web app receives a request and triggers a sidekiq job that performs a long-running action and continuously updates its status in redis. The web client polls the app every second to get the status. Now it could happen that the poll gets routed to the second machine with the redis slave, which is not yet updated. So I was wondering what the best setup would be: just use one redis instance (taking latency into account), or run a redis slave?
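To make that scenario concrete, here is roughly the pattern I mean (the worker class and key names are just illustrative, not my real code):

```ruby
# Illustrative only -- runs inside the Rails app, so sidekiq and redis
# come from the Gemfile. Worker class and key names are made up.
class LongRunningWorker
  include Sidekiq::Worker

  def perform(job_id)
    redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0"))
    100.times do |step|
      # ... one chunk of the long-running work ...
      redis.set("job:#{job_id}:status", "#{step + 1}% done")
    end
    redis.set("job:#{job_id}:status", "done")
  end
end

# The action the client polls every second just reads the same key:
#   render json: { status: redis.get("job:#{params[:id]}:status") }
```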

You have two machines:
MachineA running thin and sidekiq.
MachineB running thin and sidekiq.
Now you install redis on MachineA and point Sidekiq to MachineA for Redis. Both Sidekiqs will talk to Redis on MachineA. See Using Redis for more detail.
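A minimal sketch of that configuration, assuming MachineA's Redis is reachable at redis://machine-a.internal:6379 (a placeholder address), goes in config/initializers/sidekiq.rb on both machines:

```ruby
# config/initializers/sidekiq.rb on MachineA and MachineB.
# "machine-a.internal" is a placeholder for MachineA's address; make sure
# Redis on MachineA listens on that interface and is firewalled appropriately.
redis_url = ENV.fetch("REDIS_URL", "redis://machine-a.internal:6379/0")

Sidekiq.configure_server do |config|
  config.redis = { url: redis_url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: redis_url }
end
```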
Side note: A redis slave is useful for read-only debugging but isn't useful for scaling Sidekiq.

Related

Will 2 servers running Rails + Sidekiq, using the same redis server, cause unexpected behaviour?

I'm planning to migrate my 2 sidekiq instances to use 1 Redis database. I'm concerned there may be issues with race conditions. Is it safe to do this or not?
I currently have 2 rails servers in production behind a load balancer. Each server is cloned, running a rails app, sidekiq, and a redis database.
The staging environment has the same setup. However, I have connected both sidekiq instances to a single Redis database.
So far I have had no problems, but the staging environment does not see much traffic to see any noticeable effects.
You should at least use different redis databases for staging and production so that tasks from one environment do not end up being run in the other.
In your current setup, tasks from one server are executed solely by that same server, but this isn't necessary - you can have a pool of sidekiq instances shared between servers (sidekiq is designed to run fine this way) as long as they run the same or compatible code versions (there may be problems while rolling out new versions, when a task for the new version ends up being picked up by an older one).
This setup is actually better - if one sidekiq instance has all its threads busy, tasks from the corresponding server can still be run on the other.
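One way to keep the environments apart, sketched here with a placeholder hostname, is to give each environment its own Redis database number in config/initializers/sidekiq.rb:

```ruby
# config/initializers/sidekiq.rb -- placeholder hostname; database 0 for
# production and database 1 for staging, so jobs never cross environments.
redis_db  = Rails.env.production? ? 0 : 1
redis_url = ENV.fetch("REDIS_URL", "redis://redis.internal:6379/#{redis_db}")

Sidekiq.configure_server { |config| config.redis = { url: redis_url } }
Sidekiq.configure_client { |config| config.redis = { url: redis_url } }
```

A fully separate Redis instance per environment is safer still, since a stray FLUSHDB or misconfigured client on a shared instance can still reach the other database.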

Separate clock process from sidekiq workers on Docker

I am currently working on moving my environment off Heroku, and part of my application runs a clock process that sets off a Sidekiq background job.
As I understand it, Sidekiq is composed of a client, which sends jobs off to be queued in Redis, and a server, which pulls jobs off the queue and processes them. I am now trying to split out my application into the following containers on Docker:
- Redis container
- Clock container (Using Clockwork gem)
- Worker container
- Web application container (Rails)
However, I am not sure how one is supposed to split up this Sidekiq server and client. Essentially, the clock container needs to run Sidekiq so that the client can send jobs off to the Redis queue every so often. However, the worker containers should also run Sidekiq (the server side, though) so that they can process the jobs. I assume that splitting the responsibilities between different containers should be quite possible, since Heroku allows you to split this across various dynos.
I can imagine one way to do this would be to assign the clock container to pull from a non-existent queue, so that it never pulls any jobs, and then set the worker to pull from a queue that does exist. However, this doesn't seem like an optimal approach to me, since it will still be checking for new jobs in that non-existent queue.
Any tips or guides on how I can start going about this?
The sidekiq client just publishes jobs into redis. A sidekiq daemon process subscribes to redis and starts worker threads as jobs are published.
So you can just install the sidekiq gem on both containers, the Clock container and the Worker container, start the worker daemon only on the Worker container, and provide a proper redis config to both. You also have to make sure that the worker source code is available on both servers/containers, as the Sidekiq client just stores the name of the worker class and the daemon then instantiates it through metaprogramming.
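A rough sketch of that layout, assuming the Clockwork gem in the clock container (the file names and worker class are illustrative):

```ruby
# clock.rb -- runs in the clock container via `clockwork clock.rb`.
# It only enqueues jobs; no Sidekiq daemon is started here.
require "clockwork"
require "sidekiq"
require_relative "workers/cleanup_worker" # same file ships with both containers

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch("REDIS_URL", "redis://redis:6379/0") }
end

module Clockwork
  every(10.minutes, "cleanup.job") { CleanupWorker.perform_async }
end
```

```ruby
# workers/cleanup_worker.rb -- the worker container starts the daemon with
# `bundle exec sidekiq -r ./workers/cleanup_worker.rb`.
class CleanupWorker
  include Sidekiq::Worker

  def perform
    # ... the actual background work ...
  end
end
```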
But actually you can also just include a sidekiq daemon process together with every application which needs to process worker jobs. Yes, there is the Docker best practice of one process per container, but imho this is not an all-or-nothing rule. In this case I see both processes as one unit; it's just a way of running some code in the background. You would then configure instances of the same application to work against the same sidekiq queues. Or you could even configure every physical node to run its own separate queue.

How to supervise sidekiq and rails server processes?

What is the best way to manage multiple interconnected services for a web application like:
rails server / unicorn / puma
redis / sidekiq / resque
So that, if one is stopped or started, others get stopped/started too.
Usually a monitoring tool is used for this purpose. One such good tool is God.
The basic idea is to run God as a system service, and configure your sidekiq to be watched by God. When your server restarts, God runs as a service and it will start your sidekiq workers.
Using God brings more benefits, to name just a few:
notifications: you can configure it to send you notifications when your sidekiq worker dies and gets restarted.
resource monitoring: you can configure it to take actions based on predefined rules. For example, restart the job when it consumes too much memory.
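A minimal God config for watching sidekiq might look like this sketch (the paths, app name and memory limit are assumptions):

```ruby
# /etc/god/sidekiq.god -- load with `god -c /etc/god/sidekiq.god`.
# APP_ROOT and the log path are placeholders for your deployment.
APP_ROOT = "/var/www/myapp/current"

God.watch do |w|
  w.name  = "sidekiq"
  w.dir   = APP_ROOT
  w.env   = { "RAILS_ENV" => "production" }
  w.start = "bundle exec sidekiq -e production -C #{APP_ROOT}/config/sidekiq.yml"
  w.log   = "/var/log/god/sidekiq.log"

  # restart the process if it dies or grows past 512 MB
  w.keepalive(memory_max: 512.megabytes)
end
```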
Update: just read an article this morning which might be very helpful: Create, run and manage your Ruby background processes with upstart.

Sidekiq servers in different machines consuming from one Redis

Right now I have a Rails application hosted in Heroku and I also have the Sidekiq workers managed from there.
I am thinking of having Rails on Heroku act as a client that pushes jobs to the Redis queue, and then having N machines running Sidekiq daemons that pick up jobs from that same Redis queue.
My question is:
Can I have several Sidekiq daemons that process jobs from that same queue? I can't find an example that specifies how to do so. I have seen the pure-Ruby Sidekiq example, but I fail to see how I would write the server so that it starts fetching jobs.
(This is what actually concerns me the most.) Once the job is done, how do I send the results back to Rails? I am thinking of POSTing the result as JSON, but I would appreciate any feedback on this too.
Finally, if it is possible to have several Sidekiq servers, where should I manage all the workers/jobs from - would setting up the Sidekiq web UI on the Rails side be enough?
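For what it's worth, this is roughly what I have in mind for (2); the callback URL and class names are made up:

```ruby
# Rough sketch only -- the endpoint and worker class are made up.
require "sidekiq"
require "net/http"
require "json"

class CrunchWorker
  include Sidekiq::Worker

  def perform(job_id, payload)
    result = payload.to_s.reverse # stand-in for the real long-running work

    uri = URI("https://myapp.example.com/job_results")
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true

    request = Net::HTTP::Post.new(uri.path, "Content-Type" => "application/json")
    request.body = { job_id: job_id, result: result }.to_json
    http.request(request)
  end
end
```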

Rails Resque workers in production - where to keep Redis server?

I have Rails app configured with Resque and Redis. I am using God to start/stop workers. So far I was using Redis-to-go, but since I moved to an EC2 high-memory instance, I think it would be a better idea to run the Redis server on that EC2 instance and have all things happening there.
Is that a good idea?
We run our Redis instance (for resque) on the same server as the rest of our app. It's been great, and uses very little memory. But we only process about 5000 jobs a day.
Either way, assuming you are only using Redis for Resque, we've done it with extremely low CPU and memory overhead. Redis is very efficient as Resque storage.
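If you do move off Redis To Go, the change on the app side is just pointing Resque at the local instance; a sketch, with the URL being the assumption here:

```ruby
# config/initializers/resque.rb
# Point Resque at the Redis running on the same EC2 instance instead of the
# Redis To Go URL; falls back to localhost when REDIS_URL is not set.
require "resque"

Resque.redis = ENV.fetch("REDIS_URL", "redis://localhost:6379/0")
```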
