Separate clock process from sidekiq workers on Docker - ruby-on-rails

I am currently working on moving my environment off Heroku, and part of my application runs a clock process that sets off a Sidekiq background job.
As I understand it, Sidekiq is composed of a client, which sends jobs off to be queued into Redis, and a server, which pulls requests off the queue and processes them. I am now trying to split my application into the following containers on Docker:
- Redis container
- Clock container (Using Clockwork gem)
- Worker container
- Web application container (Rails)
However, I am not sure how one is supposed to split up this Sidekiq server and client. Essentially, the clock container needs to run the Sidekiq client so it can send jobs to the Redis queue every so often, while the worker containers need to run the Sidekiq server so they can process those jobs. I assume that splitting the responsibilities between different containers should be quite possible, since Heroku lets you split this across various dynos.
One way I can imagine doing this is to point the clock container at a non-existent queue, so that it never pulls any jobs off the queue, and point the worker at the queue that actually exists. However, this doesn't seem like the most optimal approach, since the clock container would still be checking for new jobs on a queue that will always be empty.
Any tips or guides on how I can start going about this?

The Sidekiq client just publishes jobs into Redis. A Sidekiq daemon process subscribes to Redis and starts worker threads as jobs are published.
So you can just install the Sidekiq gem (which brings the Redis client with it) in both containers, the clock container and the worker container, start the worker daemon only in the worker container, and give both a proper Redis config. You also have to make sure that the worker source code is available in both containers, since the Sidekiq client only stores the name of the worker class and the daemon then instantiates it through metaprogramming.
But actually you can also just run a Sidekiq daemon process alongside every application that needs to process worker jobs. Yes, there is the Docker best practice of one process per container, but in my opinion it is not an all-or-nothing rule; in this case I see both processes as one unit, since it's just a way of running some code in the background. You would then configure instances of the same application to work against the same Sidekiq queues, or you could even configure every physical node to run its own separate queue.
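To make the clock/worker split from the question concrete, here is a minimal sketch of what the clock container could run, assuming the Clockwork gem from the question; the job class, schedule, and Redis URL are illustrative assumptions, not the asker's actual setup:

# clock.rb - runs in the clock container; it only *enqueues* jobs (Sidekiq client
# role), no Sidekiq server process runs here.
require "clockwork"
require "sidekiq"
require_relative "app/jobs/cleanup_job"   # the job class must be loadable here too

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch("REDIS_URL", "redis://redis:6379/0") }
end

module Clockwork
  every(3600, "cleanup.job") do            # every hour (period in seconds)
    CleanupJob.perform_async               # just pushes the job into Redis
  end
end

# Clock container command:   bundle exec clockwork clock.rb
# Worker container command:  bundle exec sidekiq   (the server; it actually runs CleanupJob#perform)

With this layout no container has to watch a dummy queue: the clock container never starts a Sidekiq server at all, and only the worker container runs the Sidekiq process.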

Related

Does it make sense to run multiple similar processes in a container?

A brief background to give context to the question.
Currently my team and I are in the midst of migrating our microservices to k8s to reduce the effort of maintaining multiple deployment tools and pipelines.
One of the microservices that we are planning to migrate is an ETL worker that listens to messages on SQS and performs multi-stage processing.
It is built using PHP Laravel and we use supervisord to control how many processes run on each worker instance on AWS EC2. Each process basically executes a Laravel command to poll different queues for new messages. We also periodically adjust the number of processes to maximize utilization of each instance's compute power.
So the questions are:
Is this method of deployment still feasible when moving to k8s? Is there still a need to "maximize" compute usage? Are we better off just running one process in each container the "container way" (not sure what the tool is called; runit?)
I read from multiple sources (e.g. https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container) that it is ideal for a container to run only one process. There is also the question of recovering crashed processes and how running supervisord might interfere with how the container performs recovery. But I am not very sure whether that applies to our use case.
You should absolutely restructure this to run one process per container and one container per pod. You do not typically need an init system or a process manager like supervisord or runit (there is an argument to have a dedicated init like tini that can do the special pid-1 things).
You mention two concerns here, restarting failed processes and process placement in the cluster. For both of these, Kubernetes handles these automatically for you.
If the main process in a Pod fails, Kubernetes will restart it. You don't need to do anything for this. If it fails repeatedly, it will start delaying the restarts. This functionality only works if the main process fails – if your container's main process is a supervisor process, you will never get a pod restart and you may not directly notice if a process can't start up at all.
Typically you'll run containers via Deployments that have some number of identical replica Pods. Kubernetes itself takes responsibility for deciding which node will run each pod; you don't need to manually specify this. The smaller the pods are, the easier it is to place them. Since you're controlling the number of replicas of a pod, you also want to separate concerns like Web servers vs. queue workers so you can scale these independently.
Kubernetes has some ability to auto-scale, though the typical direction is to size the cluster based on the workload: in a cloud-oriented setup, if you add a new pod that requests more CPU than your cluster currently has available, it will provision a new node. The HorizontalPodAutoscaler is something of an advanced setup, but you can configure it so that the number of workers is a function of your queue length. Again, this works better if the only thing it's scaling is the worker pods, and not a collection of unrelated things packaged together.

Where should I run scheduled background jobs?

Here in my company we have our regular application on AWS Elastic Beanstalk, along with some background jobs. The problem is, these jobs are starting to get heavier and we were thinking of separating them from the application. The question is: where should we do it?
We were thinking of doing it in AWS Lambda, but then we would have to port our Rails code to Python, Node or Java, which seems like a lot of work. What are other options for this? Should we just create another EC2 environment for the jobs? Thanks in advance.
Edit: I'm using the shoryuken gem: http://github.com/phstc/shoryuken integrated with SQS. But it currently has some memory leak and my application goes down sometimes; I don't know if the memory leak is the cause, though. We have already separated the application into an API part on Elastic Beanstalk and a front-end part in S3.
Normally, just another EC2 instance with a copy of your Rails app, where instead of rails s to start the web server, you run rake resque:work or whatever your job runner start command is. Both would share the same Redis instance and database so that your web server writes the jobs to the queue and the worker picks them up and runs them.
If you need more workers, just add more EC2 instances pointing to the same Redis instance. I would advise separating your jobs by queue name, so that one worker can process just the fast stuff, e.g. email sending, and others can handle the long-running or slow jobs.
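As a hedged sketch of that queue split, the example below uses Sidekiq (which appears elsewhere on this page) purely for illustration; the class and queue names are made up, and the same idea applies to Resque or Shoryuken queues:

require "sidekiq"

class WelcomeEmailJob
  include Sidekiq::Job             # `include Sidekiq::Worker` on older Sidekiq versions
  sidekiq_options queue: "mailers" # fast, high-volume work

  def perform(user_id)
    # send one email quickly
  end
end

class ReportExportJob
  include Sidekiq::Job
  sidekiq_options queue: "heavy"   # slow, long-running work

  def perform(report_id)
    # build and upload a large export
  end
end

# "Fast" worker instance:  bundle exec sidekiq -q mailers
# "Slow" worker instance:  bundle exec sidekiq -q heavy

That way a backlog of slow jobs never delays the quick ones, and each instance type can be scaled on its own.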
We had a similar requirement; for us it was the Sidekiq background jobs. They started to get very heavy, so we split them into a separate OpsWorks stack, with a simple recipe to build the machine dependencies (Ruby, MySQL, etc.), and since we don't have to worry about load balancers and requests timing out, it's fine for all machines to deploy at the same time.
Another thing you could use in OpsWorks is scheduled machines (if the jobs are needed at certain times during the day): have the machine provisioned a few minutes before the task is due, and shut it down automatically after the task is done. That would reduce your cost.
EB also has a different application type, the worker application, which you could also check out, but honestly I haven't looked into it, so I can't tell you its pros and cons.
We recently went down that route. I dockerized our Rails app and wrote a custom entrypoint for that Docker container. In summary, the entrypoint parses the commands you pass after docker run IMAGE_NAME.
For example, if you run docker run IMAGE_NAME sb rake do-something-magical, the entrypoint understands that it should run the rake job with the sandbox environment config. If you only run docker run IMAGE_NAME, it will do rails s -b 0.0.0.0.
PS: I wrote a custom entrypoint because we have 3 different environments; the entrypoint downloads environment-specific config from S3.
I also set up an ECS cluster and wrote a task-runner job on Lambda; that Lambda function schedules a task on the ECS cluster, and we trigger that Lambda from CloudWatch Events. You can send custom payloads to the Lambda when using CloudWatch Events.
It sounds complicated but implementation is really simple.
You may consider submitting your tasks to AWS SQS, and then using an Elastic Beanstalk worker environment to process your background tasks.
Elastic Beanstalk supports Rails applications:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html
Depending on what kind of work these background jobs perform, you might want to think about extracting those functions into microservices if you are running your jobs on a different instance anyway.
Here is a good codeship blog post on how to approach this.
For simple mailer-type stuff this definitely feels a little heavy-handed, but if the functionality is more complex, e.g. general notification of different clients, it might well be worth the overhead.

How does the Sidekiq server process pull jobs from the queue in Redis?

I have two Rails applications running on two different instances (let's say Server1 and Server2); they have similar code and share the same PostgreSQL DB.
I installed Sidekiq and push jobs onto the queue from both servers, but I'm running the Sidekiq process only on Server1.
I have a single Redis server, running on Server1, which is shared with Server2.
If a job is pushed from Server2, it gets processed by Server1's Sidekiq process, which is exactly what I wanted.
My question is
How does the Sidekiq process on Server1 know that a job has been pushed to Redis?
Does the Sidekiq process continuously check the Redis server for new jobs, or does the Redis server notify the Sidekiq process about the new job?
I'm confused (and amazed) by this!
Could anyone please clarify how Sidekiq gets jobs from the Redis server?
It will be helpful for newbies like me.
Sidekiq uses a Redis command named BRPOP.
This command pops an element from a list (which is your job queue); if the list is empty, it waits for an element to appear and then pops and returns it. It also works against multiple queues at the same time.
So no, Sidekiq does not poll Redis for queued jobs, and Redis does not push notifications to Sidekiq.
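As a rough illustration of that blocking fetch, here is a minimal sketch using the plain redis gem (Sidekiq's real fetcher is more involved; the queue names here are illustrative, following Sidekiq's "queue:<name>" key convention):

require "redis"

redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0"))

# Blocks until an element appears on any of the listed queues, or until the
# 2-second timeout expires, in which case it returns nil.
queue, payload = redis.brpop("queue:default", "queue:mailers", timeout: 2)

if payload
  puts "Popped a job from #{queue}: #{payload}"
else
  puts "No job arrived within the timeout"
end

Because BRPOP atomically removes the element it returns, two worker processes blocking on the same queue can never receive the same job.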
Sidekiq does also use a polling mechanism, but only for jobs scheduled for the future (and for retries): the scheduled poller checks Redis roughly every 5 seconds on average. That default lives in Sidekiq's source (lib/sidekiq/config.rb in recent versions) and can be overridden through your own Sidekiq configuration:
# lib/sidekiq/config.rb
average_scheduled_poll_interval: 5
Jobs pushed to a queue, by contrast, are stored in Redis as a list, and Sidekiq retrieves them using the BRPOP (blocking right pop) command, which avoids race conditions and ensures that multiple Sidekiq processes running on different instances retrieve jobs in a coordinated manner.
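To make the two paths concrete, here is a hedged sketch (the job class is made up) of an immediately enqueued job versus a scheduled one:

require "sidekiq"

class ReminderJob
  include Sidekiq::Job   # `include Sidekiq::Worker` on older Sidekiq versions

  def perform(user_id)
    # notify the user
  end
end

# Enqueued now: pushed straight onto the queue list and picked up via BRPOP.
ReminderJob.perform_async(42)

# Enqueued for later: stored in the scheduled sorted set; the ~5 second poller
# moves it onto the queue once it is due, and only then can BRPOP fetch it.
ReminderJob.perform_in(15 * 60, 42)   # run in 15 minutes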

Rails 4.2 load balancing with nginx redis and sidekiq

Hi, I just launched a Rails 4 application which uses nginx as a load balancer, with thin serving Rails on 2 ports. Additionally I use Redis as a cache, which is also used by Sidekiq.
I was wondering how I can scale up by using another machine to run two more Rails instances. My idea is just to run two more Rails applications on the other machine, but the headache comes with Redis, since Sidekiq makes heavy use of it. My first idea was to have a read-only Redis slave on the second machine, but this might be error-prone, since I have a lot of writes into Redis in order to check a worker queue.
The following scenario kind of confuses me: the web app makes a request and triggers Sidekiq, which performs a long-running action and continuously updates its status in Redis. The web client polls the app every second to get the status. Now it could happen that the request gets routed to the second machine, whose Redis slave is not yet updated. So I was wondering what the best setup would be: just using one Redis instance (accepting the latency), or running a Redis slave?
You have two machines:
MachineA running thin and sidekiq.
MachineB running thin and sidekiq.
Now you install Redis on MachineA and point Sidekiq on both machines at MachineA for Redis. Both Sidekiqs will talk to the Redis on MachineA. See Using Redis for more detail.
Side note: A redis slave is useful for read-only debugging but isn't useful for scaling Sidekiq.
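As a minimal sketch, the Redis settings on both machines could look something like this (the MachineA hostname and the initializer path are assumptions; use whatever host/port your Redis actually listens on):

# config/initializers/sidekiq.rb - identical on MachineA and MachineB
redis_settings = { url: ENV.fetch("REDIS_URL", "redis://machine-a.internal:6379/0") }

Sidekiq.configure_client do |config|   # used when the Rails/thin processes enqueue jobs
  config.redis = redis_settings
end

Sidekiq.configure_server do |config|   # used by the sidekiq processes on both machines
  config.redis = redis_settings
end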

Dockerizing Delayed Job

We are currently working on Dockerizing our Ruby on Rails application, which also includes Delayed Job. A question buzzing within our development team is whether and/or how to Dockerize the Delayed Job component separately from the application.
This would allow Delayed Job to start up new containers when necessary for high traffic in the jobs queue. In addition, since Delayed Job boots the Rails application each time it starts up, we thought the following benefits would follow:
The Delayed Job container might start up quicker
Application code would start up regardless of the Delayed Job container startup time
So I know a guy responsible for a Rails app that uses Delayed Job. When it came time to dockerize said app, it got a container for each: both containers use the same codebase, but one runs the frontend and one runs the jobs. It's not devops microservice-eriffic, but it works.
Beyond the logical separation between the two, Docker containers are supposed to have only a single process running inside. We could have hacked it together, but it seemed wrong to break a Docker fundamental right out of the gate.
