Rails: Puma cluster mode vs threaded-only? - ruby-on-rails

In my Rails app, hosted on the DigitalOcean App Platform, we're seeing high RAM usage and can accommodate only 2 Puma workers on a Pro instance (1 vCPU, 2 GB RAM). Should I prefer more, smaller instances running Puma without any workers (threaded-only), as that would be more cost-effective?
i.e. if this were Heroku, should I use a 2x dyno with more Puma workers, or multiple 1x dynos?
Is there any inherent advantage to running Puma in cluster mode versus threaded-only with more instances?

Related

Memory is not getting released after use in a Puma Docker container

We are using Puma in cluster mode with 3 workers and 5 threads each. Once the containerized application is up, the Puma workers consume memory at a high rate; after some time the growth slows, but memory utilization still increases step by step.
Once memory utilization reaches 100%, the container is stopped and restarted.
This is a serious problem for us.
Attaching the memory-utilization graph for the Rails Puma application.
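The setup described (3 workers, 5 threads each) corresponds to a config/puma.rb along these lines — a minimal sketch with illustrative values, not the asker's actual file:

```ruby
# config/puma.rb — minimal sketch of the setup described in the question.
# 3 worker processes, each running up to 5 threads.
workers 3
threads 5, 5

# Loading the app before forking lets workers share memory pages
# via copy-on-write, which usually lowers the per-worker footprint.
preload_app!
```

Note that with `preload_app!`, resident memory per worker tends to grow over time as copy-on-write pages diverge, which matches the step-by-step growth described above.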

Is Puma WEB_CONCURRENCY on a per-dyno basis on Heroku?

I'm using the Puma web server on Heroku, and currently have 3 of the standard 2x dynos. The app is Ruby on Rails.
My understanding is that increasing WEB_CONCURRENCY in config/puma.rb increases the number of Puma workers, at the cost of additional RAM usage.
Current Setup:
workers ENV.fetch("WEB_CONCURRENCY") { 5 }
Question:
Are the 5 workers on a per-dyno basis, or overall?
If I have 3 dynos, does this mean I have 15 workers, or only 5?
I was previously looking for a way to check the current number of existing workers, but couldn't find any commands to do this on Heroku.
Yes, WEB_CONCURRENCY is applied on a per-dyno basis.
Each dyno is an independent container running on a different server, so you should treat each dyno as an independent server. With 3 dynos and WEB_CONCURRENCY set to 5, you have 15 workers in total.
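Since each dyno runs its own Puma master with its own set of workers, the worker count simply multiplies out. Illustrative arithmetic only (not a Heroku API):

```ruby
# Each dyno is an independent container with its own Puma master,
# so total worker processes = dynos * WEB_CONCURRENCY.
dynos = 3
workers_per_dyno = 5  # the WEB_CONCURRENCY value from the question
total_workers = dynos * workers_per_dyno
puts total_workers    # => 15
```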

What is the correct way to set up Puma, Rails and nginx with Docker?

I have a Rails application which is using Puma. I'm using nginx for load balancing. I would like to dockerize it and deploy it to a DigitalOcean (Docker) droplet.
After reading lots of blogs and examples (most of which are a year old, and that's a long time in the Docker world), I'm still confused about 2 things. Let's say I select a DigitalOcean box with 4 CPUs. How am I supposed to set up the Rails containers? Should I set up 4 different containers, each with Puma configured to run 1 worker process? Or should I set up 1 container with Puma configured to run 4 worker processes?
And the second thing I'm confused about: should I run nginx inside the Rails container, or should I run them in separate containers?
These 2 questions allow 4 permutations, which I diagrammed below.
option 1
option 2
option 3
option 4
Docker favors a single-process-per-container design. When running multiple processes in a single container, a service manager sits as an extra layer between Docker and the underlying processes, which causes Docker to lose visibility into the real service status. More often than not this makes services harder to manage with Docker and its associated tools. Puma managing its own workers is not as bad as a generic service manager running multiple unrelated processes.
You may also want to consider the next step for the application, hosting across multiple droplets/hosts, and how easy it will be to move to that step.
Options 1 and 3 follow Docker's preferred design. If you are using MRI, Puma can run in clustered mode, so it just depends on whether you want to manage the Ruby processes yourself (1) or have Puma do the worker management (3). There will be differences in how nginx and Puma distribute requests between workers. Puma can also perform zero-downtime restarts, which would require a bit of effort to get working via Docker. If you are using Rubinius or JRuby, you would probably lean towards option 3 and let threads do the work.
Option 1 may allow you to more easily scale across different sized hosts with Docker tools.
Option 2 looks like it adds an unnecessary application hop, and Docker no longer maintains the service state in your app tier, since you need something else in the container to launch both nginx and Puma.
Option 4 might perform a bit better than the others thanks to local sockets, but again Docker is no longer aware of the service state.
In any case, try a couple of solutions and benchmark them with something like JMeter. You will quickly get a sense of what works and what doesn't, both in performance and in management.
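As a rough sketch of how the two Docker-friendly options map onto config/puma.rb (values illustrative, assuming the 4-CPU droplet from the question):

```ruby
# config/puma.rb — illustrative sketches, not definitive configs.

# Option 1: four containers, each running Puma in single mode
# (no workers); nginx load-balances across the containers:
#   workers 0
#   threads 5, 5

# Option 3: one container in which Puma itself manages 4 workers
# in clustered mode:
workers 4
threads 1, 5
preload_app!  # copy-on-write memory sharing between workers (MRI)
```

With option 1, Docker sees one Ruby process per container and keeps full visibility; with option 3, Puma is the in-container process manager, as discussed above.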

How to enable cluster or hybrid mode in Puma and what are they?

The readme at https://github.com/schneems/puma_worker_killer says Puma worker killer can only function if you have enabled cluster mode or hybrid mode (threads + worker cluster). If you are only using threads (and not workers) then puma worker killer cannot help keep your memory in control.
So if I'm using Puma on Heroku, how do I tell whether cluster or hybrid mode is enabled, given that the Puma readme only talks about clustered mode?
How do I enable cluster mode? How do I enable hybrid mode?
Is that simply done by specifying the number of workers in config/puma.rb in your Rails app's config folder?
Hybrid mode and cluster mode are the same thing. Yes, you simply specify the number of workers; you need at least 2 workers for it to count as a cluster. Generally, the number of workers depends on the number of cores your processor has.
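So enabling cluster ("hybrid") mode really is just declaring workers in config/puma.rb. A common heuristic (illustrative, not a Puma requirement) ties the worker count to the CPU core count, with a floor of 2 so it is actually a cluster:

```ruby
require "etc"

# One worker per CPU core, but at least 2 so Puma runs clustered.
# Etc.nprocessors is Ruby stdlib and reports the available cores.
worker_count = [Etc.nprocessors, 2].max
puts worker_count

# In config/puma.rb you would then write something like:
#   workers ENV.fetch("WEB_CONCURRENCY") { worker_count }
#   threads 1, 5
```

With `workers` >= 2 set (plus threads per worker), puma_worker_killer's cluster-mode requirement is satisfied.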

Can one Sidekiq instance process multiple queues from multiple applications?

I can run multiple Sidekiq processes, each one processing queues for a specific Rails application.
But if I have 10 applications, then 10 Sidekiq processes will always be running, each consuming memory.
How can I run only one Sidekiq process that serves multiple applications?
