I'm having Ruby instances from mod_rails go "rogue" -- these processes are no longer listed in passenger-status and are using 100% CPU.
Other than installing god/monit to kill the instance, can anyone give me some advice on how to prevent this? I haven't been able to find anything in the logs that helps.
If you're using Linux, you can install the "strace" utility to see what the Ruby process is doing that's consuming all the CPU. That will give you a good low-level view. It should be available in your package manager. Then you can:
$ sudo strace -p 22710
Process 22710 attached - interrupt to quit
...lots of stuff...
(press Ctrl+C)
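If the live trace scrolls by too fast to be useful, strace can also summarize. This is just a sketch (substitute your own PID; -f follows any child processes, and -c prints a table of syscall counts and times when you interrupt it):
$ sudo strace -c -f -p 22710
...let it run for ten seconds or so, then press Ctrl+C for the summary table...
If the summary shows almost no system calls while the CPU stays pegged, the process is probably spinning in pure Ruby code, and the GDB approach below will tell you more.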
Then, if you want to stop the process in the middle and dump a stack trace, you can follow the guide on using GDB in Ruby at http://eigenclass.org/hiki.rb?ruby+live+process+introspection, specifically doing:
gdb --pid=(ruby process)
session-ruby
stdout_redirect
(in other terminal) tail -f /tmp/ruby_debug.(pid)
eval "caller"
You can also use the ruby-debug gem to connect remotely to debug sockets you open up, as described in http://duckpunching.com/passenger-mod_rails-for-development-now-with-debugger
There also seems to be a project on Github concerned with debugging Passenger instances that looks interesting, but the documentation is lacking:
http://github.com/ddollar/socket-debugger/tree/master
I had a Ruby process related to Phusion Passenger which consumed lots of CPU, even though it should have been idle.
The problem went away after I ran
date -s "`date`"
as suggested in this thread. (That was on Debian Squeeze)
Apparently, the problem was related to a leap second, and could affect many other applications such as MySQL, Java, etc. More info in this thread on LKML.
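If you suspect the same thing, a quick way to check is to look for the kernel's leap-second message and, if it's there, reset the clock as above. A sketch only (the grep pattern matches the message recent kernels log when inserting a leap second):
$ dmesg | grep -i 'leap second'
$ sudo date -s "$(date)"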
We saw something similar to this with very long running SQL queries.
MySQL would kill the queries because they exceeded the long-running query limit, and the thread never realized that the query was dead.
You may want to check the database logs.
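A quick sketch of what to look at, assuming a fairly standard MySQL setup (the slow-log path below is just the Debian/Ubuntu default - yours may differ, or slow-query logging may be disabled):
$ mysql -u root -p -e 'SHOW FULL PROCESSLIST'
$ sudo tail -n 50 /var/log/mysql/mysql-slow.log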
This is a recurring issue with Passenger. I've seen this problem many times while helping people who run Ruby on Rails with Passenger. I don't have a fix, but you might want to try the "debugging frozen applications" section of the Passenger docs: http://www.modrails.com/documentation/Users%20guide%20Apache.html#debugging_frozen
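If I'm remembering that section correctly, the gist is to send the stuck process a signal that makes it raise an exception, so a Ruby backtrace ends up in the web server error log. Roughly (the PID and log path are examples - find the PID with ps/top, since a truly rogue process may not show up in passenger-status):
$ sudo kill -ABRT 22710
$ tail -f /var/log/apache2/error.log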
I have had a Rails 3 app deployed on Elastic Beanstalk for close to 2 years now. For the most part, I haven't had any issues; however, I recently upgraded to one of their new Ruby configurations (64bit Amazon Linux 2014.09 v1.0.9 running Ruby 2.1 (Passenger Standalone)) and I've been fighting an issue for several days where one or more Ruby processes will consume the CPU - to the point where my site becomes unresponsive. I was using a single m3.medium instance, but I've since moved to an m3.large, which only buys me some time to manually log into the EC2 instance and kill the runaway process(es). I would say this happens once or twice a day.
The only thing I had an issue with when moving to the new Ruby config was that I had to add the following to my .ebextensions folder so Nokogiri could install (w/bundle install)...
commands:
  build_nokogiri:
    command: "bundle config build.nokogiri --use-system-libraries"
I don't think this would cause these hanging processes, but I could be wrong. I also don't want to rule out something unrelated to the Elastic Beanstalk upgrade, but I can't think of any other significant change that would cause this problem. I realize this isn't a whole lot of information, but has anyone experienced anything similar to this? Anyone have suggestions for tracing these processes to their root cause?
Thanks in advance!
Since you upgraded your Beanstalk configuration, I guess you also upgraded your Ruby/Rails version. This bumped up all the gem versions. The performance issue probably originates from one of these changes (and not from the hardware change).
So this brings us into the domain of RoR performance troubleshooting:
1. Check the Beanstalk logs for errors. If you're lucky, you'll find a configuration issue this way. Give it an hour.
2. Assuming all is well there, try to set up the exact same versions on your localhost (Passenger + Ruby 2.1 + gem versions). If you're lucky, you will witness the same slowness and be able to debug.
3. If you'd like to go straight for production debugging, I suggest you install New Relic (or any other application monitoring tool) and then drill into the details of the slowness in its dashboard. I found it extremely useful.
I was able to resolve my runaway Ruby process issue by SSHing into my EC2 instance and installing/running gdb. Here's a link - http://isotope11.com/blog/getting-a-ruby-backtrace-from-gnu-debugger - with the steps I followed. I did have to sudo yum install gdb first.
gdb uncovered an infinite loop in a section of my code that was looping through days in a date range.
On my production server I'm using foreman to run multiple processes, and I just want my application to keep working even if one of the processes goes down. Is there any way to restart a crashed process, or at least not have all the processes stop when one of them dies? I need the solution to be stable enough for production. Is that possible without Upstart? Thanks in advance.
You should not be using foreman itself for production - it is only intended as a development tool. Instead, you can use something like god with my foreman_god gem in production.
Alternatively, you can use foreman to export config files for other process monitoring systems, for example upstart.
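A rough sketch of the export route (the app name, user and paths here are made up; foreman export writes one Upstart job per Procfile entry, and Upstart then takes care of respawning crashed processes):
$ cd /path/to/your/app
$ sudo foreman export upstart /etc/init -a myapp -u deploy
$ sudo start myapp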
You can monitor your foreman process with another program like http://mmonit.com/monit/. But somehow you'll find that monitoring a process which monitors other processes is kind of strange.
I upgraded to Lion a few weeks ago, and it completely screwed up my Ruby on Rails environment. I have tried installing RVM and different Ruby versions, and I can't seem to find a solution... I think upgrading to Lion was one of the worst decisions I could have made. It has only brought me problems.
Anyway, I have realised that rendering a page of my application (which works perfectly well on the deployed server, and locally on other machines) increases the Ruby process's memory by 20-30 MB, which is kind of crazy. So you can imagine that after a while, my Ruby process reaches 2 GB of memory in use and my computer is no longer usable.
I have seen many people with problems upgrading to Lion but I have not been able to find a solution for my case.
Has anyone had the same problem? Any ideas on how I could try to solve this issue?
Thanks
You could use the memprof gem (no longer maintained, and it doesn't work on Ruby versions above 1.8.7) and memprof.com (broken link) to get to the bottom of the issue.
Also, you could experiment with using Passenger, Unicorn or Thin instead of the default WEBrick to see if that gives you different behaviour.
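For example, trying Thin is usually as simple as the following (a sketch; on Rails 3 the server command uses whichever handler you name):
$ gem install thin
$ rails server thin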
I do not know how you might fix the memory leak, but can propose one way to contain it and further troubleshoot it.
If you are willing to learn Docker, you can contain your development environment inside a Docker container, all while accessing the code on your local machine, just like a shared folder in Vagrant.
When you run the Docker container, you can specify a limit on the amount of memory the container can use. Your Rails server process might crash and stop the container, but at least you won't have to restart your machine.
Maybe that will give you more leeway for troubleshooting the problem in greater depth.
Docker Run Reference, see the section "Runtime constraints on CPU and memory".
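A rough sketch of what that can look like (the image tag, port, and mount paths are placeholders rather than anything from your setup; -m caps the container's memory and -v mounts your local checkout into the container):
$ docker run -it -m 2g -v "$PWD":/app -w /app -p 3000:3000 ruby:2.0 bash
# then, inside the container:
$ bundle install
$ bundle exec rails server -b 0.0.0.0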
I'm currently porting a Rails app that uses REE over to JRuby so I can offer an easy-to-install JRuby alternative.
I've bundled the app into a WAR file using Bundler, which I'm currently deploying to GlassFish. However, this app has a couple of daemon processes and it would be ideal if these could be part of the WAR file, and potentially monitored by GlassFish (if possible).
I've looked at QuartzScheduler, and while it meets my needs for a couple of things, I have a daemon process that must execute every 20 seconds as it's polling the database for any delayed mail to send.
If anyone can provide any insight as to how best to set up daemon processes in a JRuby/Java/Glassfish environment any help will be greatly appreciated! :)
One way to daemonize a JRuby process is to use the Akuma framework (on *nix), or others.
I would rather use cron jobs (scheduling) than daemons, as they are less error-prone; daemons can leak memory, stop on errors, etc. Check out jruby-quartz and quartz_scheduler.
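For the every-20-seconds mail polling specifically, cron only has one-minute granularity, so the usual workaround is staggered crontab entries. A sketch only - the script path is hypothetical and would be whatever JRuby script does your polling:
* * * * * cd /path/to/app && script/send_delayed_mail
* * * * * sleep 20; cd /path/to/app && script/send_delayed_mail
* * * * * sleep 40; cd /path/to/app && script/send_delayed_mail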
EDIT
If one uses TorqueBox, it offers support for services and scheduling.
I've already searched the site but couldn't find any suitable information. As there is always some expert around, I'm sure one of you guys knows exactly what I'm searching for :-)
We're on a balanced system:
Machine 1: HAProxy load balancer
Machine 2 & 3: Apache mod_rails and (of course) our Rails applications
Those were the days when we were able to monitor all Mongrel processes using monit (or other monitoring tools).
Is there any way to do an easy and clever monitoring of passenger processes with monit (or other tools), too? How can I dynamically get all pids of the running processes and pass them to monitoring?
Matt
There are various options available. Here are some of them:
The passenger-status tool lets you inspect its internal status
FiveRuns Manage can monitor a Passenger installation
Scout can also monitor Passenger
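If what you actually need is the PIDs themselves (to hand to monit or a script), one rough approach is to scrape them out of passenger-status. This is only a sketch - the output format differs between Passenger versions, so check what yours prints first:
$ sudo passenger-status | awk '/PID:/ { print $2 }'
$ sudo passenger-memory-stats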
I made a plugin which makes Passenger processes monitorable by Monit:
https://github.com/romanbsd/passenger_monit
It's a little ghetto, but run these commands:
watch passenger-status
watch passenger-memory-stats
then install and run htop
I did a quick search and I think I found the thing you're looking for. He uses a script which runs off "passenger-status", as John Topley said.
http://blog.slowb.ro/2013/06/18/add-passenger-status-to-monitoring-on-zenoss/