I have an app with some operations that time out every once in a while (given our configured Puma timeout), but since the process just dies and a new one comes up, I have no way of knowing why or where the process was hung.
Is there a way for me to print all threads before Puma kills my process?
I've tried using on_worker_shutdown but that doesn't seem to be called on a timeout kill. This is a Rails 4.2 app running on Ruby 2.2.7.
You can try adding a middleware that implements a timeout lower than Puma's, and in that middleware dump whatever you need/want.
It doesn't answer your question about Puma itself, but it might be a workaround for the issue you have now.
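A minimal sketch of such a middleware. The class name ThreadDumpTimeout and the timeout value are assumptions (pick something below Puma's worker timeout); note that Timeout.timeout is not a safe production mechanism, but as a debugging aid it lets you print every thread's backtrace before the request dies:

```ruby
require 'timeout'

# Hypothetical middleware: times out before Puma's hard kill and dumps
# all thread backtraces to stderr so you can see where the app was stuck.
class ThreadDumpTimeout
  def initialize(app, seconds: 15) # assumes Puma's timeout is higher, e.g. 20s
    @app = app
    @seconds = seconds
  end

  def call(env)
    Timeout.timeout(@seconds) { @app.call(env) }
  rescue Timeout::Error
    Thread.list.each do |t|
      warn "Thread #{t.object_id} (#{t.status}):"
      warn (t.backtrace || []).join("\n")
    end
    raise
  end
end
```

You would register it with `config.middleware.use ThreadDumpTimeout` (or in config.ru), placed as early in the stack as possible.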
Related
I tried to find out the difference between Puma and WEBrick, but I didn't really get it and wasn't satisfied with what I found.
So could anyone please share some information about it?
By default WEBrick is single threaded, single process. This means that if two requests come in at the same time, the second must wait for the first to finish.
The most efficient way to tackle slow I/O is multithreading. A worker process spawns several worker threads inside of it. Each request is handled by one of those threads, but when it pauses for I/O - like waiting on a db query - another thread starts its work. This rapid back and forth makes the best use of your available RAM and keeps your CPU busy.
So, multithreading is achieved using Puma, and that is why it is the default app server in Rails.
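The I/O overlap described above can be sketched in plain Ruby. Here `sleep` stands in for a slow database call, and the names are illustrative; two threads overlap their waits, so the total wall time is roughly one call instead of two:

```ruby
require 'benchmark'

# Simulate two slow I/O-bound requests (sleep stands in for a DB query).
def fake_io
  sleep 0.2
end

# Handle them one after another: ~0.4s total.
sequential = Benchmark.realtime { 2.times { fake_io } }

# Handle them in two threads: the sleeps overlap, ~0.2s total.
threaded = Benchmark.realtime do
  threads = 2.times.map { Thread.new { fake_io } }
  threads.each(&:join)
end

puts format('sequential: %.2fs, threaded: %.2fs', sequential, threaded)
```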
This is a question for Ruby on Rails developers rather than a broad audience, because I don't see any reason for the choice other than bringing the development environment closer to production, where Puma is a solid choice.
To correct the current answer, however, I must say that WEBrick is, and always has been, a multi-threaded web server. It now ships with the Ruby language (and is also available as a rubygem). And it is definitely good enough to serve Rails applications in development or in lower-scale production environments.
On the other hand, it is not as configurable as other web servers like Puma. It is also based on the old-school thread-per-request design, which can be a problem under heavy load, since too many threads may get created. Modern web servers, Puma included, solve this with thread pools, worker processes, a combination of the two, or other techniques. For development, however, spawning a new thread per request is totally fine.
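For contrast, the thread pools and worker processes mentioned above are exactly what Puma lets you configure. A minimal config/puma.rb sketch, with purely illustrative values:

```ruby
# config/puma.rb — illustrative values, not a recommendation
workers 2                       # fork 2 worker processes
threads 1, 5                    # each worker keeps a pool of 1–5 threads
preload_app!                    # load the app before forking (copy-on-write)
port ENV.fetch("PORT", 3000)    # bind to $PORT, defaulting to 3000
```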
I have no hard feelings for either of the two; both are great Ruby web servers, and in our project we actually use both in production. Anyway, if you like using WEBrick for RoR development, you can indeed still use it:
rails server webrick
Rails 6.1 Minor update:
rails server -u webrick [-p NNNN]
I've inherited a Rails application using Puma as a server. Puma was selected for its multithreading capabilities. Unfortunately, the application code is not thread-safe, and it has become hard to micromanage every new piece of code being added, let alone check whether updated gems remain thread-safe. I am thinking of moving to a Unicorn server for its multi-process capabilities, which would let me stop worrying about mutable state, thread-safe code, etc. I know it will consume more memory, but I am hearing that Unicorn also has issues with memory leaks and performance. Have you ever made the transition from Puma to Unicorn? Was it painful? How has Unicorn performed? Has Unicorn proven scalable in a production environment?
I am trying to understand exactly how requests to a rails application get processed with Phusion Passenger. I have read through the Passenger docs (found here: http://www.modrails.com/documentation/Architectural%20overview.html#_phusion_passenger_architecture) and I understand how they maintain copies of the rails framework and your application code in memory so that every request to the application doesn't get bogged down by spinning up another instance of your application. What I don't understand is how these separate application instances share the native ruby process on my linux machine. I have been doing some research and here is what I think is happening:
One request hits the web server which dispatches Passenger to fulfill the request on one of Passenger's idle worker processes. Another request comes in almost simultaneously and is handled by yet another idle Passenger worker process.
At this point there are two requests being executed which are being managed by two different Passenger worker processes. Passenger creates a green thread on Linux's native Ruby thread for each worker process. Each green thread is executed using context-switching so that blocking operations on one Passenger worker process do not prevent that other worker process from being executed.
Am I on the right track?
Thanks for your help!
The application instances don't "share the native Ruby process". An application instance is a Ruby process (or a Node.js process, or a Python process, depending on the language your app is written in), and is the same thing as a "Passenger worker process". It also has nothing to do with threading.
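The process model can be illustrated with plain `fork` (POSIX-only; this is a simplified stand-in for what Passenger's spawner does, not Passenger's actual code). Each worker is a full OS process with its own copy of the app's state, so nothing is shared after the fork:

```ruby
# Each forked worker gets its own copy-on-write copy of parent state.
counter = 0

pid = fork do
  counter += 1       # mutates only the child's copy
  exit!(counter)     # report the child's value via its exit status
end

_, status = Process.wait2(pid)
puts "child saw #{status.exitstatus}, parent still has #{counter}"
```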
I have a task that creates new activerecord records which I have recently moved to a background task using delayed_job and foreman, as recommended by Heroku
Sometimes this works fine, but sometimes it causes the Rails app in the browser to stop responding.
At this point I can see from the database that all the delayed jobs have completed, and that all the new records have been created.
However, when I kill the processes I get a further 11,200 lines of terminal output. This mostly consists of the execution by the web process of two methods on the model, both of which involve calls to the database:
validate :hit_database_to_see_if_model_exists?
before_save :get_rows_from_database_and_perform_calculation
There are also a number of INSERT statements which I'm sure have already hit the database, because the number of records does not change before/after killing the processes.
Here is my Procfile:
web: bundle exec rails server thin -p $PORT -e $RACK_ENV
worker: bundle exec rake jobs:work
So it feels like I am getting a 'stack overflow' (woop). Can you shed any light on:
What generally is going on?
Where in Rails this 'stack overflow' is taking place
Whether these things are actually happening after I hit 'Ctrl + C' or just being printed to the terminal at that point?
What might be causing this?
How I could debug / fix it?
UPDATE
It looks like there are certain tasks that are being assigned to the web process by the background task, but not being executed until the browser is 'prodded.' In some circumstances they all execute, but if there are too many the app falls over. Any idea on what might be causing this?
UPDATE
I tried running the web and worker processes in two separate windows.
In this scenario I have been unable to replicate the problem of the browser hanging, and in each case the worker process completed properly.
However I did make the interesting observation that if I don't touch the browser then no output appears in the web window. However if I do touch the browser then thousands of lines of what the worker process is doing at that moment appear in the web window.
Is this normal? Does this shed any light on what the problem might be?
Update
At the bottom of the terminal output after I kill the processes it says "Killed: 9"
07:45:21 system | sending SIGKILL to all processes
Killed: 9
What exactly does this 9 refer to? Is this unusual?
Update
I am using:
delayed_job 3.0.4
delayed_job_active_record 0.3.3
delayed_job_web 1.1.2
foreman 0.60.2
RESOLUTION
Thanks to @Justin's answer below (and this related question). It seems that Ruby buffers stdout by default, and this buffer was overflowing, causing the app to stop responding. I added $stdout.sync = true at the top of config/environments/development.rb, and the problem appears to have gone away.
This is only a partial answer, but it might help you with debugging.
Rails buffers logging by default and flushes it after every web request. One option is simply to replace the logger with a simpler one:
Rails.logger = Logger.new(STDOUT)
You can also configure the buffered logger to flush more often
Rails.logger.auto_flushing = (Rails.env.development? || Rails.env.test?)
You also have to be careful about STDOUT. In my current project I have stdout flushing enabled in both config.ru and my background boot-up (I'm using sidekiq, so the boot process may be a little different):
STDOUT.sync = true
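The effect of the sync flag can be demonstrated with a plain pipe, a stand-in for the pipe foreman attaches to each process (this is a sketch of Ruby's userspace write buffer, not of foreman itself). With sync off, a small write sits in the IO buffer and never reaches the reader until an explicit flush:

```ruby
r, w = IO.pipe
w.sync = false        # default for a fresh IO: writes are buffered in userspace
w.write "hello"

# Nothing has reached the pipe yet; a non-blocking read finds it empty.
begin
  r.read_nonblock(5)
  buffered = false
rescue IO::WaitReadable
  buffered = true
end

w.flush               # push the buffered bytes through to the OS
after_flush = r.read_nonblock(5)

puts "buffered before flush: #{buffered}, after flush: #{after_flush.inspect}"
```

With `w.sync = true` the first `read_nonblock` would have succeeded immediately, which is exactly why `$stdout.sync = true` fixes output that "appears only when prodded" under foreman.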
As to your larger problem,
I'm surprised that the rails process is running background tasks. Is there an option to disable that, at least for experimentation?
Then there are the standard debugging tools. All the database calls on save worry me, so I'd try various combinations of disabling them to see if anything improves. Especially with two of them: for instance, if your before_save hook changes a value in the model, it might trigger a validation; if that re-runs the before_save hooks, you'd have a loop.
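That callback loop can be sketched without Rails (the Model class and the depth guard below are invented for illustration; real ActiveRecord callbacks recurse the same way if a hook triggers another save):

```ruby
# Rails-free sketch: a before_save hook that calls save again recurses
# until the stack blows. A guard is added here so the sketch halts.
class Model
  attr_reader :depth

  def initialize
    @depth = 0
  end

  def save
    @depth += 1
    raise "runaway callbacks" if @depth > 100  # stand-in for SystemStackError
    before_save
    true
  end

  def before_save
    save  # the hook re-triggers save: infinite recursion
  end
end

m = Model.new
begin
  m.save
rescue RuntimeError => e
  puts "stopped after #{m.depth} nested saves: #{e.message}"
end
```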
We have a Rails app running on Passenger, and we background-process some tasks using a combination of RabbitMQ and Workling. The Workling worker process is started using the script/workling_client command. There is always only one worker process started, and script/workling_client has a :multiple => false option, thus allowing only one instance. But sometimes, under mysterious circumstances that I haven't been able to track down, more worklings spawn. If I let the system run for some time, more and more worklings appear. I'm not sure whether these rogue worklings cause any problems, but it is still unsettling not to know why it is happening. We are using Monit to monitor the workling process, so if it dies, Monit will spawn it again. But this still does not explain how there are suddenly more than one of them.
So my question is: does anyone know what the cause of this could be, and how to make it stop? Is it possible that workling sometimes dies by itself without deleting its pid file? Could there be something wrong with the Daemons gem that workling_client is built upon?
Not an answer - I have the same problems running RabbitMQ + Workling.
I'm using God to monitor the single workling process as well (:multiple => false)...
I found that the multiple worklings were eating up huge amounts of memory & causing serious resource usage, so it's important that I find a solution for this.
You might find this message thread helpful: http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/ed8edd0368066292/5b17d91cc85c3ada?show_docid=5b17d91cc85c3ada&pli=1