Sidekiq servers on different machines consuming from one Redis - ruby-on-rails

Right now I have a Rails application hosted on Heroku, and I also have the Sidekiq workers managed from there.
I am thinking of having Rails on Heroku act as a client that pushes jobs to the Redis queue, and then having N machines running Sidekiq daemons that pull jobs from that Redis queue where Rails is pushing them.
My question is:
Can I have several Sidekiq daemons that process jobs from that same queue? I can't find an example that specifies how to do so. I have seen Sidekiq used from plain Ruby, but I fail to see how I would write the server so that it starts fetching jobs.
(This is what actually concerns me the most.) Once a job is done, how do I send the results back to Rails? I am thinking of POSTing the result as JSON, but I would appreciate any feedback on this too.
Finally, if it is possible to have several Sidekiq servers, from where should I manage all the workers/jobs? Would setting up the Web UI on the Rails side be enough?
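On the question of sending results back, here is a minimal sketch of the JSON POST approach from a worker running on any of the N machines; the endpoint URL and the worker name are hypothetical, not something Sidekiq provides:

require "sidekiq"
require "net/http"
require "json"

# Hypothetical worker: runs on any machine whose Sidekiq points at the shared
# Redis, then POSTs its result back to the Rails app as JSON.
class CrunchNumbersWorker
  include Sidekiq::Worker

  def perform(dataset_id)
    result = { dataset_id: dataset_id, total: 42 } # stand-in for the real work

    uri = URI("https://your-rails-app.herokuapp.com/job_results") # hypothetical endpoint
    Net::HTTP.post(uri, result.to_json, "Content-Type" => "application/json")
  end
end

As for the last question, the Sidekiq Web UI only needs access to the same Redis, so mounting it inside the Rails app (require "sidekiq/web", then mount Sidekiq::Web => "/sidekiq" in routes.rb) is enough to see the workers and jobs from every machine.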

Related

Will 2 servers running Rails + Sidekiq, using the same Redis server, cause unexpected behaviour?

I'm planning to migrate my 2 sidekiq instances to use 1 Redis database. I'm concerned there may be issues with race conditions. Is it safe to do this or not?
I currently have 2 rails servers in production behind a load balancer. Each server is cloned, running a rails app, sidekiq, and a redis database.
The staging environment has the same setup. However, I have connected both sidekiq instances to a single Redis database.
So far I have had no problems, but the staging environment does not see enough traffic to show any noticeable effects.
You should at least use different Redis databases for staging and production, so that jobs from one environment do not end up being run in the other.
In your current setup, jobs from one server are executed only by that same server, but that's not necessary: you can have a pool of Sidekiq instances shared between servers (Sidekiq is designed to run fine this way) as long as they run the same or compatible code versions (there can be problems while rolling out a new version, when a job for the new version ends up being picked by an older one).
This setup is actually better: if one Sidekiq instance has all its threads busy, jobs from the corresponding server can still be run on the other.
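A minimal sketch of the environment split suggested above, assuming a REDIS_URL variable without a database suffix; database numbers 0 and 1 are arbitrary examples:

# config/initializers/sidekiq.rb
# Keep production and staging queues apart by using different Redis databases.
redis_db  = Rails.env.production? ? 0 : 1
redis_url = "#{ENV.fetch('REDIS_URL', 'redis://localhost:6379')}/#{redis_db}"

Sidekiq.configure_server do |config|
  config.redis = { url: redis_url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: redis_url }
end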

Jobs are executed many times on an AWS worker

I'm performing some jobs on an AWS worker environment. I can't figure out why, but for some reason my job is executed multiple times. I can say that because I save my job in the database, and I saw it in a running state, then completed, and then running again, even though I launched it just once. My application is built in Ruby on Rails; I'm using active_job and the active_elastic_job gem because I take advantage of Elastic Beanstalk. Does anyone have any idea? I can provide all the info you want.
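One hedged sketch of a defensive pattern while this is being debugged, assuming the row described above is an Active Record model; JobRecord and its status column are hypothetical names standing in for that record: skip the work if a duplicate delivery finds it already completed.

# Hypothetical idempotency guard; JobRecord/status stand in for the record
# the question says is saved in the database.
class ImportJob < ActiveJob::Base
  queue_as :default

  def perform(job_record_id)
    record = JobRecord.find(job_record_id)
    return if record.status == "completed" # duplicate delivery: do nothing

    record.update!(status: "running")
    # ... the actual long-running work goes here ...
    record.update!(status: "completed")
  end
end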

How does the Sidekiq server process pull jobs from the queue in Redis?

I have two Rails applications running on two different instances (let's say Server1 and Server2); they have similar code and share the same PostgreSQL DB.
I installed Sidekiq and I'm pushing jobs into the queue from both servers, but I'm running the Sidekiq process only on Server1.
I have a single Redis server running on Server1, which is shared with Server2.
If a job is pushed from Server2, it gets processed by Server1's Sidekiq process, which is exactly what I wanted.
My question is
How does the Sidekiq process on Server1 know that a job has been pushed to Redis?
Does the Sidekiq process continuously check the Redis server for new jobs, or does the Redis server notify the Sidekiq process about a new job?
I'm confused and amazed by this!
Could anyone please clarify how Sidekiq gets jobs from the Redis server?
It will be helpful for newbies like me.
Sidekiq uses a Redis command named BRPOP.
This command pops an element from a list (which is your job queue); if the list is empty, it waits for an element to appear and then pops and returns it. It also works with multiple queues at the same time.
So no, sidekiq does not poll redis and redis does not push notifications to sidekiq.
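A small illustration of that behaviour with the redis-rb gem; the queue names follow Sidekiq's queue:<name> convention, and the URL is just an example:

require "redis"

redis = Redis.new(url: "redis://localhost:6379/0")

# Block for up to 2 seconds waiting for a job on either list.
# BRPOP returns [list_name, raw_job_json] as soon as something is pushed,
# or nil if both lists stay empty for the whole timeout.
queue, payload = redis.brpop("queue:critical", "queue:default", timeout: 2)
puts "popped #{payload} from #{queue}" if payload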
Sidekiq does use a polling mechanism, but only for scheduled and retry jobs: the scheduled poller wakes up roughly every 5 seconds by default, a value defined in the gem's defaults at lib/sidekiq/config.rb:
# lib/sidekiq/config.rb
average_scheduled_poll_interval: 5
By the way, jobs pushed for immediate execution are stored in Redis as a list per queue, and Sidekiq retrieves them using the BRPOP (blocking right pop) command, which avoids race conditions: the pop is atomic, so multiple Sidekiq processes running on different instances can pull jobs from the same queues in a coordinated manner.
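If you do need a different scheduled-poll interval, you override it in your own configuration rather than editing the gem. A minimal sketch, assuming Sidekiq 7.x, where the yielded config object accepts hash-style options (older releases set this via Sidekiq.options instead):

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config[:average_scheduled_poll_interval] = 15 # seconds; the default is 5
end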

Rails 4.2 load balancing with nginx redis and sidekiq

Hi, I just launched a Rails 4 application which uses nginx as a load balancer, with thin serving Rails on 2 ports. Additionally, I use Redis as a cache, which is also used by Sidekiq.
I was wondering how I can scale up by adding another machine in order to run two more Rails applications there. My idea is just to run two more Rails applications on the second machine, but the headache comes with Redis, since Sidekiq makes heavy use of it. My first thought was to have a read-only Redis slave on the second machine, but this might be error prone, since I do a lot of writes into Redis in order to check a worker queue.
The following scenario kind of confuses me: the web app makes a request and triggers Sidekiq, which performs a long-running action and continuously updates the status in Redis. The web client polls the app every second in order to get the status. Now it could happen that the request gets redirected to the second machine with the Redis slave, which is not yet updated. So I was wondering what the best setup would be: just using one Redis instance, taking latency into account, or running a Redis slave?
You have two machines:
MachineA running thin and sidekiq.
MachineB running thin and sidekiq.
Now you install redis on MachineA and point Sidekiq to MachineA for Redis. Both Sidekiqs will talk to Redis on MachineA. See Using Redis for more detail.
Side note: A redis slave is useful for read-only debugging but isn't useful for scaling Sidekiq.
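A compact sketch of that layout, assuming MachineA's Redis is reachable at the example hostname machine-a (Sidekiq also honours ENV["REDIS_URL"], so exporting the same URL on both machines works too):

# config/initializers/sidekiq.rb on BOTH machines -- "machine-a" is an example
# hostname for the box that runs Redis.
Sidekiq.configure_server { |config| config.redis = { url: "redis://machine-a:6379/0" } }
Sidekiq.configure_client { |config| config.redis = { url: "redis://machine-a:6379/0" } }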

How to supervise sidekiq and rails server processes?

What is the best way to manage multiple interconnected services for a web application like:
rails server / unicorn / puma
redis / sidekiq / resque
So that, if one is stopped or started, others get stopped/started too.
Usually a monitoring tool is used for this purpose. One such good tool is God.
The basic idea is to run God as a system service, and configure your sidekiq to be watched by God. When your server restarts, God runs as a service and it will start your sidekiq workers.
You get more benefits by using God; to name just a few:
notifications: you can configure it to send you notifications when your sidekiq worker dies and gets restarted.
resource monitoring: you can configure it to take actions based on predefined rules. For example, restart the job when it consumes too much memory.
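A minimal God watch for a Sidekiq process might look like the sketch below; the paths and the memory limit are placeholder values:

# /etc/god/sidekiq.god -- example path, adjust to your deployment layout.
God.watch do |w|
  w.name  = "sidekiq"
  w.dir   = "/var/www/myapp/current"   # placeholder app directory
  w.log   = "/var/log/sidekiq.log"
  w.start = "bundle exec sidekiq -e production -C config/sidekiq.yml"
  w.keepalive(memory_max: 512.megabytes) # restart the worker if it grows too large
end

With keepalive and no pid_file, God daemonizes the process itself and restarts it when it dies, which, together with running God as a system service, should cover the restart behaviour described above.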
Update: I just read an article this morning which might be very helpful: Create, run and manage your Ruby background processes with upstart.
