Here I have multiple Rails applications (10 microservices), and I need to use one Sidekiq, with Redis configured to handle jobs across the microservices.
We are hosting the application on AWS and are using EB (Elastic Beanstalk) for deployments. The application is Rails and we are using Sidekiq for background processes. We have a decoupled RDS instance and ElastiCache (for Sidekiq communication), and in general we have a stateless architecture.
At the moment our web process and Sidekiq process are running on the same EC2 instances, which means we need larger instances to support both. We want to move to a separate web and worker architecture: the idea is to move the web processes onto small EC2 instances and have one large EC2 instance dedicated to Sidekiq only. The reason is that we have CPU usage issues where bigger worker jobs hog all the resources and take the instance down, which then dominoes into new instances; in general it is not an optimal use of our resources.
This seems like a no-brainer to us, but we are having trouble finding web resources where this has been implemented. It is also confusing to us how to set up a web EB app and a worker EB app separately. How would deployment work: would we deploy two separate EB applications at the same time? That does not seem safe.
We are looking for guidance on how best to achieve the above goal. Are there examples or setups you could share where we could see a real-world implementation of this?
Also, is there a better way to do this?
The web/worker setup you described for a Rails application is absolutely appropriate. Within the same application, you can create an environment for your web server and an environment for your worker. Your code base can be deployed to both environments, either separately (if your changes only affect the worker or the web server) or at the same time (if your changes affect both). You can set environment variables specific to each environment and use them to determine whether code should run on the worker or the web server. Here is a brief outline of the steps you could use:
1. Create a new application.
2. Create a web server environment within the application (for example, "production").
3. Create a worker environment within the application (for example, "production-worker").
4. Set an environment variable, for example APP_ENVIRONMENT (the name can be anything you choose), with the value "production" on the production environment and the value "production-worker" on the production-worker environment.
5. Create configuration files in .ebextensions that start or stop Sidekiq (and any other programs the worker needs) depending on whether the APP_ENVIRONMENT value contains "worker"; see the first sketch after this list.
6. Set up a cron.yaml file for scheduled background jobs (see the AWS docs and the second sketch after this list). For those jobs, I create a separate cron controller that serves the endpoints listed in the cron.yaml file.
7. Deploy your code base to both the web server and worker environments. Subsequent changes can be deployed as needed to the appropriate environment.
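A minimal sketch of step 5, assuming the classic Amazon Linux EB platform; the hook path, the get-config call, and the file names are illustrative and differ between platform versions:

    # .ebextensions/sidekiq.config
    files:
      "/opt/elasticbeanstalk/hooks/appdeploy/post/50_sidekiq.sh":
        mode: "000755"
        owner: root
        group: root
        content: |
          #!/usr/bin/env bash
          # Read the APP_ENVIRONMENT variable set on this EB environment.
          APP_ENVIRONMENT=$(/opt/elasticbeanstalk/bin/get-config environment -k APP_ENVIRONMENT)
          if [[ "$APP_ENVIRONMENT" == *worker* ]]; then
            cd /var/app/current
            # Worker environment only: (re)start Sidekiq after each deploy.
            # Note that the -d daemonize flag was removed in Sidekiq 6; on
            # newer versions run Sidekiq under a process supervisor instead.
            su -s /bin/bash -c "bundle exec sidekiq -e production -d \
              -L log/sidekiq.log -P tmp/pids/sidekiq.pid" webapp
          fi

And a sketch of step 6's cron.yaml, with an illustrative endpoint; the EB worker daemon POSTs to each URL on the given schedule:

    version: 1
    cron:
      - name: "nightly-cleanup"
        url: "/cron/nightly_cleanup"
        schedule: "0 3 * * *"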
For Sidekiq, both your web application and your worker need access to the Redis server, so that the application can create jobs and the worker can pick them up to process.
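A minimal sketch of that shared configuration, assuming the ElastiCache endpoint is exposed to both environments as a REDIS_URL environment variable:

    # config/initializers/sidekiq.rb -- same file in both environments
    Sidekiq.configure_server do |config|
      config.redis = { url: ENV.fetch("REDIS_URL") }
    end

    Sidekiq.configure_client do |config|
      config.redis = { url: ENV.fetch("REDIS_URL") }
    end

The web environment only exercises the client half (enqueuing jobs); the worker environment runs the server half, which fetches and processes them.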
I'm planning to migrate my 2 sidekiq instances to use 1 Redis database. I'm concerned there may be issues with race conditions. Is it safe to do this or not?
I currently have 2 Rails servers in production behind a load balancer. Each server is a clone of the other, running the Rails app, Sidekiq, and a Redis database.
The staging environment has the same setup. However, I have connected both sidekiq instances to a single Redis database.
So far I have had no problems, but the staging environment does not see enough traffic to show any noticeable effects.
You should at least use different Redis databases for staging and production, so that jobs from one environment do not end up being run in the other.
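One way to do that, with illustrative hostnames, is to give each environment its own database number in the Redis URL (or, better still, a completely separate Redis instance):

    # config/initializers/sidekiq.rb
    redis_url =
      case Rails.env
      when "production" then "redis://redis.internal:6379/0"
      when "staging"    then "redis://redis.internal:6379/1"
      else                   "redis://localhost:6379/0"
      end

    Sidekiq.configure_server { |config| config.redis = { url: redis_url } }
    Sidekiq.configure_client { |config| config.redis = { url: redis_url } }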
In your current setup, jobs from one server are executed solely by that same server, but that is not necessary: you can have a pool of Sidekiq instances shared between servers (Sidekiq is designed to run fine this way) as long as they run the same or compatible code versions (there may be problems while rolling out new versions, when a job enqueued by the new version is picked up by an older one).
This shared setup is actually better: if one Sidekiq instance has all of its threads busy, jobs from the corresponding server can still be run on the other.
Hi, I just launched a Rails 4 application that uses nginx as a load balancer, with Thin serving Rails on two ports. Additionally, I use Redis as a cache, which is also used by Sidekiq.
I was wondering how I can scale up with another machine in order to run two more Rails application instances there. My idea is simply to run the two additional Rails applications on the second machine, but the headache comes with Redis, since Sidekiq makes heavy use of it. My first idea was to add a read-only Redis slave on the second machine, but this might be error-prone, since I have a lot of writes into Redis to check a worker queue.
The following scenario confuses me: the web app makes a request and triggers Sidekiq, which performs a long-running action and continuously updates the status in Redis. The web client polls the app every second to get the status. Now it is possible that a poll gets routed to the second machine, whose Redis slave has not been updated yet. So I was wondering what the best setup would be: just use one Redis instance and accept the latency, or run a Redis slave?
You have two machines:
MachineA running thin and sidekiq.
MachineB running thin and sidekiq.
Now you install Redis on MachineA and point both Sidekiqs at MachineA for Redis; both will talk to the Redis on MachineA. See the Sidekiq wiki page "Using Redis" for more detail.
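As a sketch, with an illustrative hostname, the same initializer goes on both machines (Sidekiq also reads a REDIS_URL environment variable by default, so exporting that on both machines works too):

    # config/initializers/sidekiq.rb -- identical on MachineA and MachineB
    Sidekiq.configure_server do |config|
      config.redis = { url: "redis://machine-a.internal:6379/0" }
    end

    Sidekiq.configure_client do |config|
      config.redis = { url: "redis://machine-a.internal:6379/0" }
    end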
Side note: A redis slave is useful for read-only debugging but isn't useful for scaling Sidekiq.
I can run multiple Sidekiq processes, each processing the queues for a specific Rails application.
But if I have 10 applications, then there will always be 10 Sidekiq processes running, each one consuming memory.
How can I run only one Sidekiq process that serves multiple applications?
Right now I have a Rails application hosted on Heroku, and the Sidekiq workers are also managed from there.
I am thinking of having Rails on Heroku act as a client that pushes jobs to the Redis queue, and then having N machines running Sidekiq daemons that pull jobs from that Redis queue.
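A sketch of that client half: the Heroku app only needs the job's class name and arguments to enqueue it, so the worker class itself can live solely in the worker machines' code base (the names below are illustrative):

    require "sidekiq"

    Sidekiq.configure_client do |config|
      config.redis = { url: ENV.fetch("REDIS_URL") } # the shared Redis
    end

    # Enqueue by class name; HardWorker is defined only on the worker machines.
    Sidekiq::Client.push("class" => "HardWorker", "queue" => "default", "args" => [42])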
My questions are:
Can I have several Sidekiq daemons that process jobs from that same queue? I can't find an example that specifies how to do this. I have seen the "sidekiq pure ruby" example, but I fail to see how I would write the server so that it starts fetching jobs.
(This is what actually concerns me the most.) Once a job is done, how do I send the results back to Rails? I am thinking of POSTing the result as JSON (see the sketch at the end), but I would appreciate any feedback on this too.
Finally, if it is possible to have several Sidekiq servers, from where should I manage all the workers/jobs? Would setting up the UI on the Rails side be enough?
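On the results question, a sketch of a worker that POSTs its result back to the Rails app as JSON; the callback URL, the payload shape, and the do_expensive_work helper are all illustrative:

    require "sidekiq"
    require "net/http"
    require "json"

    class HardWorker
      include Sidekiq::Worker

      def perform(record_id)
        result = do_expensive_work(record_id) # hypothetical long-running step

        # Report the finished result back to the Rails app as JSON.
        Net::HTTP.post(
          URI("https://myapp.example.com/job_results"), # illustrative endpoint
          { record_id: record_id, result: result }.to_json,
          "Content-Type" => "application/json"
        )
      end
    end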