I serve my software using Passenger. It spawns many Ruby processes.
Sometimes one of these processes becomes bloated and I want it to die.
I was hoping to use god for that purpose. My idea was to monitor all these processes, and if one consumes more than 500 MB of memory for 3 cycles, god should try to kill it gracefully. If it remains alive for more than 5 minutes, god should kill it forcefully.
It seems to me that god always tries to run the service again, which forces us to provide a start command. Is it possible to use god only to kill badly behaved processes, and let the Passenger spawner bring them back to life when necessary?
The answer to your question lies in the question itself: you can kill Ruby processes using the god gem, a Ruby process-monitoring framework by the GitHub guys.
Basically, here is how it works:
Configure god to monitor a process; it can be anything from Apache, Passenger, or Mongrel to a simple file performing a long-running task.
Set conditions in god's configuration file, based upon which god will execute some predefined code.
Here is a simple example (taken from the docs). Consider this file a long-running process that runs indefinitely, which we want to monitor for memory usage; let's call it simple.rb:
loop do
  puts 'Hello'
  sleep 1
end
Now, we install the god gem, configure it to run as the superuser so it can kill and spawn processes, and create a configuration file. Example (also taken from the docs):
God.watch do |w|
  w.name = "simple"
  w.start = "ruby /full/path/to/simple.rb"
  w.keepalive(:memory_max => 500.megabytes)
end
Here, as you may have gathered, if the process's memory usage goes above 500 megabytes, god will restart it. Here are a few resources that might help if you are getting started with process management using the god gem:
Example gist - a Passenger worker monitor that kills workers which use too much RAM (it doesn't use god, but spawns a new Passenger worker instead)
Project Homepage
Github Page
An in-depth tutorial on using god with Rails and Passenger
Now, please remember that ALL configuration for god is actually legal Ruby code, so you can get creative and do all sorts of things.
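For the thresholds in the original question, for example, here is a hedged sketch using the memory_usage condition from god's own documentation (the watch name, path, and interval are illustrative; note that god still insists on a start command, so handing respawning back to Passenger is not something it supports directly):

God.watch do |w|
  w.name = "simple"
  w.start = "ruby /full/path/to/simple.rb"
  w.interval = 30.seconds  # how often god polls the conditions

  # Restart the process when its memory stays above 500 MB
  # for 3 consecutive polls.
  w.restart_if do |restart|
    restart.condition(:memory_usage) do |c|
      c.above = 500.megabytes
      c.times = 3
    end
  end
end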
Lastly, if you frequently find yourself running long-running processes, I advise you to try JRuby, which works much better for long-running processes thanks to the JVM and is a LOT faster than MRI.
I use the same setup on many of my projects and had the same memory-leaking issues. After messing around with monitoring, we decided to use Passenger's own features to tackle it. Specifically, it offers the setting PassengerMaxRequests (e.g. PassengerMaxRequests 300), which shuts down an instance once it has served that number of requests.
If you use it, make sure that PassengerMinInstances is set to 0, because that setting takes precedence over the max-requests one.
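In an Apache-based setup those directives live in the (virtual) host configuration; a minimal excerpt, assuming the Apache flavor of Passenger:

# Recycle each application process after it has served 300 requests.
PassengerMaxRequests 300
# Don't hold a minimum number of processes open, so recycled
# processes aren't respawned until traffic actually needs them.
PassengerMinInstances 0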
Is there anything in Puma's architecture that makes it hard to run Sidekiq inside the same process?
I want to run an existing Rails + Sidekiq application in a VM with very little memory, and loading the entire Rails stack in two different processes uses a lot of RAM.
Puma is built to spin up homogeneous web worker threads and divide incoming requests among them. If you wanted to modify it to spawn off separate Sidekiq threads, it should technically be possible with a crazy puma.rb file, but there's no precedent I can find for doing so (edit: Mike's answer below points out that the sucker_punch gem can essentially do this, for the same purpose of memory efficiency). Practically speaking, if your VM cannot support running two Rails processes at a time, it probably won't be able to handle the increased memory load as your application does the work of both Sidekiq and Puma… but that depends on your workload.
If this is just for development purposes, you might be able to accomplish what you're looking for by turning on Sidekiq's inline mode (normally meant just for testing):
require 'sidekiq/testing'
Sidekiq::Testing.inline!
This will cause all perform_async calls to actually execute inline, instead of going into Redis and being picked up by the Sidekiq process.
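For instance, a small sketch (the worker class and argument here are made up):

require 'sidekiq/testing'
Sidekiq::Testing.inline!

# A made-up worker to show the effect of inline mode.
class ThumbnailWorker
  include Sidekiq::Worker

  def perform(image_id)
    puts "resizing image #{image_id}"
  end
end

# With inline mode on, this runs synchronously in the calling process
# instead of being pushed to Redis for a separate Sidekiq process.
ThumbnailWorker.perform_async(42)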
Nothing official.
This is what sucker_punch is designed for.
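For reference, a minimal job in the style of the sucker_punch README (the class and message are made up, and this follows the v2 interface; v1 used job.new.async.perform instead):

require 'sucker_punch'

class LogJob
  include SuckerPunch::Job

  def perform(event)
    # Runs on a thread pool inside the current (web) process;
    # no separate worker process and no Redis required.
    puts "recording #{event}"
  end
end

LogJob.perform_async('user signed in')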
This is probably the silliest question today but...
The Rails team and many others recommend using Passenger instead of a Mongrel cluster, but I cannot find a clear list of the exact benefits and advantages of this, or what the potential pitfalls are. Just wondering if anyone can help explain this?
Also, is Passenger its own server, or does it use Mongrel under the hood?
Thanks!
Before Passenger, Mongrel was the way to go, but a Mongrel cluster can be a nuisance to keep properly tuned. As your application grows in complexity, the memory footprint of each Mongrel instance will expand, and this can eat into available disk cache and degrade performance, so you'll have to pay close attention to the memory allocation balance on your deployments. From time to time you'll have to tweak it to add or remove Mongrels.
The other downside is that you'll need to manage these Mongrel processes with some kind of supervisor such as monit, and these can be fussy and difficult to get right. Mongrel does not come with its own process manager.
Another serious problem is that each Mongrel is locked to a particular application and shifting loads between one app and another is very difficult to manage.
Mongrel is also dependent on an external load-balancer that you must configure yourself.
Passenger will handle launching all the Rails engine processes and will do its best to allocate memory efficiently. If you have a number of sites with conflicting priorities, Passenger will do a good job of launching servers on demand, and pruning them off when they're not used.
Passenger is also very quick to relaunch all instances of an application by looking for the tmp/restart.txt trigger file. You don't have to kill any processes or wait for a restart.
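With Capistrano 2, for example, the deploy restart task reduces to touching that file (a common recipe of the era; the paths are the standard Capistrano defaults):

# Capistrano 2-style restart task for Passenger.
namespace :deploy do
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "touch #{File.join(current_path, 'tmp', 'restart.txt')}"
  end
end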
Under the hood, Passenger uses its own launcher and dispatch system. Although functionally it is similar to Mongrel, there are a number of significant performance improvements that Phusion has introduced that make Passenger significantly more memory-efficient than Mongrel.
Passenger is a complete package that just works and is surprisingly easy to manage. Mongrel is only a very basic web server.
We have a Rails app running on Passenger, and we background-process some tasks using a combination of RabbitMQ and Workling. The Workling worker process is started using the script/workling_client command. There is always only one worker process started, and script/workling_client passes a :multiple => false option, thus allowing only one instance. But sometimes, under mysterious circumstances I haven't been able to track down, more Worklings spawn. If I let the system run for some time, more and more Worklings appear. I'm not sure whether these rogue Worklings cause any problems, but it is still unsettling not to know why it is happening. We are using Monit to monitor the Workling process, so if it dies, Monit will spawn it again. But this still does not explain how there are suddenly more than one of them.
So my question is: does anyone know what can cause this and how to make it stop? Is it possible that Workling sometimes dies by itself without deleting its pid file? Could there be something wrong with the Daemons gem that workling_client is built upon?
Not an answer - I have the same problems running RabbitMQ + Workling.
I'm using God to monitor the single workling process as well (:multiple => false)...
I found that the multiple worklings were eating up huge amounts of memory & causing serious resource usage, so it's important that I find a solution for this.
You might find this message thread helpful: http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/ed8edd0368066292/5b17d91cc85c3ada?show_docid=5b17d91cc85c3ada&pli=1
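One thing worth ruling out is a stale pid file: if Workling dies without cleaning up after itself, the Daemons-based launcher can see the old file, assume nothing is running, and happily start another instance. A quick hedged check (the pid file path is assumed):

# Does the pid recorded in the Workling pid file still exist?
pid = File.read('tmp/pids/workling.pid').to_i  # assumed location
begin
  Process.kill(0, pid)  # signal 0 only tests for existence
  puts "workling #{pid} is running"
rescue Errno::ESRCH
  puts "stale pid file: process #{pid} is gone"
end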
I'm running an application that kicks off a Rufus Scheduler process in an initializer. The application is running with Passenger in production and I've noticed a couple weird behaviors:
First, in order to restart the server and make sure the initializer gets run, you have to both touch tmp/restart.txt and load the app in a browser. At that point, the initializer fires. The horrible thing is that if you only do the touch, the processes scheduled by Rufus get reset and aren't rescheduled until you load the app in a browser.
This alone I can deal with. But this leads to the second problem: I'll notice that the scheduled process hasn't run, so I load a page and suddenly the log file is telling me that it's running the initializers as if I'd rebooted. So, at some point, Passenger is randomly rebooting as if I'd touched tmp/restart.txt and wiping out my scheduled processes.
I have an incredibly poor understanding of Passenger and Rails's integration, so I don't know whether this occasional rebooting is aberrant or all part of the architecture. Can anyone offer any wisdom on this situation?
What you describe is the way Passenger works. It spawns new instances of the application when traffic warrants them, and shuts them down after periods of inactivity to free resources.
You should read the Passenger documentation, particularly the Resource Control and Optimization section. There are settings which can prevent the application from being shut down by Passenger, if that is what you want.
Using the PassengerPoolIdleTime setting, you could keep at least one process running, but you'll almost certainly want Passenger to start up other instances of the app as necessary. This thread on the Rufus Scheduler Google Group mentions using lock files to prevent more than one process from starting the scheduler, that may be useful to you.
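A sketch of that lock-file idea (the initializer path, lock file name, and schedule are all made up; newer rufus-scheduler versions construct the scheduler with Rufus::Scheduler.new, older ones used start_new):

# config/initializers/scheduler.rb
# Only the first Passenger worker to win the exclusive lock starts
# the scheduler; the rest skip it. The lock handle is kept in a
# global so it isn't garbage-collected (closing it would drop the lock).
$scheduler_lock = File.open(Rails.root.join('tmp', 'scheduler.lock'), File::RDWR | File::CREAT)
if $scheduler_lock.flock(File::LOCK_EX | File::LOCK_NB)
  require 'rufus-scheduler'
  scheduler = Rufus::Scheduler.new
  scheduler.every '10m' do
    # periodic work goes here
  end
end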
I'm running into a problem in a Rails application.
After some hours, the application seems to start hanging, and I wasn't able to find where the problem was. There was nothing relevant in the log files, but when I tried to load the URL from a browser nothing happened (as if Mongrel accepted the request but wasn't able to respond).
What do you think I can test to understand where the problem is?
I might get voted down for dodging the question, but I recently moved from nginx + mongrel to mod_rails and have been really impressed. Moving to a much simpler setup will undoubtedly save me headaches in the future.
It was a really easy transition, I'd highly recommend it.
Are you sure the problem is caused by Mongrel? Have you tried running your application under WEBrick?
There are a few things you can check, but since you say there's nothing in the logs to indicate error, it sounds like you might be running into a bug when using the log rotation feature of the Logger class. It causes mongrel to lock up. Instead of relying on Logger to rotate your logs, consider using logrotate or some other external log rotation service.
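The rotation in question is the kind built into Ruby's standard Logger; a sketch of the difference (the multi-process rename race is what bites when several Mongrels share one log file):

require 'logger'

# Built-in rotation: each process renames the shared log file itself,
# which races across multiple Mongrel processes and can wedge them.
logger = Logger.new('log/production.log', 'daily')

# Safer: a plain logger, rotated externally (e.g. logrotate with
# copytruncate, so open file handles stay valid).
logger = Logger.new('log/production.log')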
Does this happen at a set number of hours/days every time? How much RAM do you have?
I had this same problem. The couple of options I had narrowed it down to were both related to the MySQL adapter. I was running on Red Hat Enterprise Linux 4 (or 5), and the app would hang after a given amount of idle time.
One suggested solution was to compile native MySQL bindings, I had been using the pure Ruby one.
The other was to set the timeout on the MySQL adapter higher than what the connection would idle out on. (I don't have the specific configuration recorded, but as I recall it was in environment.rb and it was some class variable in the mysql adapter.)
I don't recall if either of those solutions fixed it; we moved to Ubuntu shortly after that and haven't had a problem since.
Check the Mongrel FAQ:
http://mongrel.rubyforge.org/wiki/FAQ
From my experience, Mongrel hangs when:
the log file gets too big (hundreds of megabytes in size) - you have to set up log rotation;
the MySQL driver times out - you have to change the timeout settings of your MySQL driver by adding this to your environment.rb:
ActiveRecord::Base.verification_timeout = 14400  # seconds (4 hours)
(This is further explained in the deployment section of the FAQ.)
Unfortunately, Rails (and thereby Mongrel) using up too much memory over time and crashing is a known problem (50K+ Google hits for "Ruby, rails, crashing, memory"). The current Ruby interpreter has the property that it sometimes simply fails to give memory back to the system: it may reuse the memory it holds, but it won't give it up.
There are numerous schemes for monitoring, killing, and restarting Mongrel instances in a production environment - for example (chosen at random) rails monitor. Until the problem is fixed more decisively, one of these may be your best bet.
We have experienced this same issue. First off, install the mongrel_proctitle gem:
http://github.com/rtomayko/mongrel_proctitle/tree/master
This gem/plugin will let you view the Mongrel processes via "ps", so you can see whether a Mongrel is hung. An issue we have seen with Mongrel is that it will happily accept connections and enqueue them, then wedge itself. This plugin will help you see when a Mongrel has wedged, but you must then use another monitoring app, such as Monit or God, to actually restart the wedged Mongrel.
You might also want to consider putting a more balanced reverse proxy in front of your Mongrels, something like HAProxy, instead of nginx, Apache, or Lighttpd. With a setting of "maxconn 1" in HAProxy, you can ensure that the queue is maintained by HAProxy rather than by Mongrel. The other reverse proxies (nginx, Apache, Lighttpd) only do round-robin, which means they can inadvertently load up your Mongrel queues.
My personal choice is God, as it is much more flexible.
tl;dr Install this gem plugin and keep an eye on your Mongrels. Try Apache+Phusion Passenger.