How does the Puma master process transfer requests to workers? - ruby-on-rails

I've been searching for an answer on this but I couldn't find one.
How does the Puma master process communicate with the workers? How does the master process send a request to a worker? Is this done with shared memory? A Unix socket?
Thanks!

The master doesn't deal with requests; it merely monitors the workers and restarts them when necessary.
The workers independently pull requests off whatever the server is listening on, e.g. a TCP port or a Unix socket; there is no hand-off from the master.
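For example, a minimal config/puma.rb along these lines (the port and the worker/thread counts are illustrative, not recommendations) makes the model visible: the master forks the workers, and each worker accepts connections from the same bound socket on its own.

    # config/puma.rb -- minimal sketch, values illustrative only
    workers 2                     # master forks 2 worker processes at boot
    threads 1, 5                  # each worker runs its own thread pool
    bind "tcp://0.0.0.0:3000"     # listening socket shared by every worker

    preload_app!

    on_worker_boot do
      # runs inside each forked worker; the master itself never handles a request
    end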

Related

Do Unicorn app server workers go up and down all the time, or are they long-lived? How do they retrieve requests from the master?

We have a Rails app inside a Unicorn app server.
Unicorn works with processes (unlike Puma, for example, which works with threads).
I understand the general architecture, where there's one master process which somehow passes each request to a worker. I'm not sure how, though.
If we have 5 workers, does that mean we have 5 processes running all the time? Or does the master fork a new process for each request, with the worker dying once it finishes handling it?
How is the request passed to the worker?
Also, if there's a detailed article about Unicorn's architecture I could use as a reference, that would be amazing!
The master process does not fork a new worker per request, and it does not pass requests to workers at all. It forks a fixed pool of workers when it boots, and those workers are long-lived. Each worker inherits the shared listening socket from the master and calls accept on it itself, so the kernel hands every incoming connection directly to an idle worker; the master only monitors the workers and replaces any that die or hang.
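The underlying mechanism is the classic preforking pattern: the parent opens the listening socket, forks, and each child calls accept on the inherited socket. A stripped-down Ruby sketch of the idea (not Unicorn's actual code, and with no real HTTP parsing):

    require "socket"

    listener = TCPServer.new(8080)     # master opens the shared listening socket

    3.times do
      fork do                          # each worker inherits the already-open socket
        loop do
          client = listener.accept     # the kernel hands a connection to an idle worker
          client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
          client.close
        end
      end
    end

    Process.waitall                    # the master just waits on and monitors its children

Unicorn layers a lot on top of this (signals, heartbeats, hot restarts), but the request path really is just the workers calling accept.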

Preventing uwsgi_response_write_body_do() TIMEOUT

We use uwsgi with the python3 plugin, under nginx, to serve potentially hundreds of megabytes of data per query. Sometimes, when nginx is queried by a client over a slow network connection, a uwsgi worker dies with "uwsgi_response_write_body_do() TIMEOUT !!!".
I understand the uwsgi python plugin reads from the iterator our app returns as fast as it can, trying to send the data over the uwsgi-protocol unix socket to nginx. When the client's network connection is slow, the HTTPS/TCP connection from nginx to the client backs up and nginx pauses reading from its uwsgi socket; uwsgi then fails some writes towards nginx, logs that message and dies.
Normally we run nginx with uwsgi buffering disabled. I tried enabling buffering, but it doesn't help as the amount of data it might need to buffer is 100s of MBs.
Our data is not simply read out of a file, so we can't use file offload.
Is there a way to configure uwsgi to pause reading from our python iterator if that unix socket backs up?
The existing question here "uwsgi_response_write_body_do() TIMEOUT - But uwsgi_read_timeout not helping" doesn't help, as we have buffering off.
To answer my own question, adding socket-timeout = 60 helps for all but the slowest client connection speeds.
That's sufficient so this question can be closed.
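For anyone landing here later: that option goes in the uwsgi configuration, e.g. in an ini file. A minimal excerpt (the 60-second value is simply what worked in this case; tune it for your clients):

    [uwsgi]
    socket-timeout = 60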

Is there a way to start a Rails Puma webserver without a db connection?

I have a microservice that takes in webhooks to process, but it is currently getting pounded by the sender of said webhooks. Right now I take each webhook and insert it into the db for processing, but the data is so bursty at times that I don't have enough bandwidth to manage the flood of requests, and I cannot scale any further because I'm out of db connections. The current thought is to throw the webhooks into a Kafka queue instead: with Kafka I can scale the number of frontend workers up to whatever I need to handle the deluge of requests, and I get Kafka's replayability. With the webhooks going into Kafka, the frontend web server no longer needs a pool of db connections, since it literally just takes the request and throws it onto the queue for processing. Does anyone have any knowledge on removing the db connectivity from Puma, or an alternative way to do what's being asked?
Currently running
ruby 2.6.3
rails 6.0.1
puma 3.11
Ended up using Puma's before_fork and on_worker_boot hooks in the config to avoid re-establishing the database connection for those particular web workers.
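I don't have the original config, but a sketch of that approach in config/puma.rb could look like the following. The SKIP_DB environment variable is a made-up example for gating the behaviour, not something from the original setup:

    # config/puma.rb -- sketch only; SKIP_DB is a hypothetical switch
    before_fork do
      # drop any connections the master picked up while loading the app
      ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
    end

    on_worker_boot do
      # for the webhook-ingest workers, deliberately skip establishing a DB connection
      if defined?(ActiveRecord::Base) && !ENV["SKIP_DB"]
        ActiveRecord::Base.establish_connection
      end
    end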

Do application server workers spawn threads?

The application servers used by the Ruby web applications I know have the concept of worker processes. For example, Unicorn has this in the unicorn.rb configuration file, and for Mongrel it is called servers, usually set in your mongrel_cluster.yml file.
My two questions about it:
1) Does every worker/server act as a web server and spawn a process/thread/fiber each time it receives a request, or does a new request block if another one is already being handled?
2) Is this different from application server to application server? (Like unicorn, mongrel, thin, webrick...)
This is different from app server to app server.
Mongrel (at least as of a few years ago) would have several worker processes, and you would use something like Apache to load balance between the worker processes; each would listen on a different port. And each mongrel worker had its own queue of requests, so if it was busy when apache gave it a new request, the new request would go in the queue until that worker finished its request. Occasionally, we would see problems where a very long request (generating a report) would have other requests pile up behind it, even if other mongrel workers were much less busy.
Unicorn has a master process and just needs to listen on one port, or a unix socket, and uses only one request queue. That master process only assigns requests to worker processes as they become available, so the problem we had with Mongrel is much less of an issue. If one worker takes a really long time, it won't have requests backing up behind it specifically, it just won't be available to help with the master queue of requests until it finishes its report or whatever the big request is.
Webrick shouldn't even be considered; it's designed to run as just one worker in development, reloading everything all the time.
off the top of my head, so don't take this as "truth"
ruby (MRI) servers:
unicorn, passenger and mongrel all use 'workers', which are separate processes; all of these workers are started when you launch the master process and they persist until the master process exits. If you have 10 workers and they are all handling requests, then request 11 will be blocked waiting for one of them to complete.
webrick only runs a single process as far as I know, so request 2 would be blocked until request 1 finishes
thin: I believe it uses 'event I/O' to handle http, but is still a single process server
jruby servers:
trinidad, torquebox are multi-threaded and run on the JVM
see also puma: multi-threaded for use with JRuby or Rubinius
I think GitHub best explains unicorn in their (old, but valid) blog post https://github.com/blog/517-unicorn.
I think it puts backlogged requests in a queue.
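To make the process-per-request vs. thread-per-request contrast concrete, here is a toy thread-per-connection loop in Ruby; it is roughly the shape of what the threaded servers do internally, not any server's actual code:

    require "socket"

    server = TCPServer.new(8080)

    loop do
      client = server.accept           # a single process accepts every connection...
      Thread.new(client) do |conn|     # ...but each request runs on its own thread,
        conn.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close                     # so a slow request doesn't block the accept loop
      end
    end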

Erlang Supervisor Strategy For Restarting Connections to Downed Hosts

I'm using Erlang as a bridge between services and I was wondering what advice people have for handling downed connections.
I'm taking input from local files and piping them out to AMQP, and it's conceivable that the AMQP broker could go down. In that case I would want to keep retrying to connect to the AMQP server, but I don't want to peg the CPU with those connection attempts. My inclination is to put a sleep into the reboot of the AMQP code. Wouldn't that 'hack' essentially circumvent the purpose of failing quickly and letting Erlang handle it? More generally, should the Erlang supervisor behaviour be used for handling downed connections?
I think it's reasonable to code your own semantics for handling connections to an external server. Supervisors are best suited to handling crashed/locked/otherwise unhealthy processes in your own process tree, not reconnections to an external service.
Is your process that pipes the local files in the same process tree as the AMQP broker or is it a separate service?
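For the retry-without-pegging-the-CPU part, one common pattern is to own the connection in a gen_server and reschedule connection attempts with a growing delay, rather than crashing and letting a supervisor restart the process in a tight loop. A rough sketch, where try_connect/0 stands in for whatever your AMQP client's connect call is:

    -module(amqp_conn).
    -behaviour(gen_server).

    -export([start_link/0]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

    start_link() ->
        gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

    init([]) ->
        self() ! connect,                          %% kick off the first attempt after init returns
        {ok, #{conn => undefined, delay => 1000}}.

    handle_info(connect, State = #{delay := Delay}) ->
        case try_connect() of
            {ok, Conn} ->
                {noreply, State#{conn := Conn, delay := 1000}};
            {error, _Reason} ->
                erlang:send_after(Delay, self(), connect),          %% retry later, no busy loop
                {noreply, State#{delay := min(Delay * 2, 30000)}}   %% capped exponential backoff
        end;
    handle_info(_Msg, State) ->
        {noreply, State}.

    handle_call(_Req, _From, State) ->
        {reply, ok, State}.

    handle_cast(_Msg, State) ->
        {noreply, State}.

    %% placeholder: swap in the real AMQP client connect call
    try_connect() ->
        {error, not_implemented}.

The supervisor still restarts the process if it crashes for other reasons; the backoff only covers the expected case of the broker being down.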
