I am running an Apache, Phusion Passenger, Ruby on Rails website. I have a critical section that may only be executed by one request at a time. I noticed that Phusion Passenger uses fork() to spawn the processes for each request, according to this documentation. Since these are separate processes, none of my Mutex.synchronize calls work. I added some logging inside the critical section, and I still see the log lines interleaving.
I tried all these:
Mutex.synchronize
Thread.exclusive
Monitor.synchronize
None of them works.
How do you do synchronization in this case? Thanks a lot.
Mutex, Monitor and Thread.exclusive only work within a single process. For inter-process synchronization, use lock files. See File#flock.
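For example, a minimal sketch using a lock file shared by all worker processes (the lock file path is just an assumption):

File.open("/tmp/my_app_critical_section.lock", File::RDWR | File::CREAT, 0644) do |f|
  f.flock(File::LOCK_EX)   # blocks until no other process holds the lock
  # ... critical section: only one process at a time runs this ...
end                        # the lock is released when the block closes the file

Any worker that reaches the critical section while another holds the lock simply waits, which is usually what you want for a short critical section.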
I am trying to understand exactly how requests to a Rails application get processed with Phusion Passenger. I have read through the Passenger docs (found here: http://www.modrails.com/documentation/Architectural%20overview.html#_phusion_passenger_architecture) and I understand how they maintain copies of the Rails framework and your application code in memory so that every request to the application doesn't get bogged down by spinning up another instance of your application. What I don't understand is how these separate application instances share the native Ruby process on my Linux machine. I have been doing some research and here is what I think is happening:
One request hits the web server which dispatches Passenger to fulfill the request on one of Passenger's idle worker processes. Another request comes in almost simultaneously and is handled by yet another idle Passenger worker process.
At this point there are two requests being executed, managed by two different Passenger worker processes. Passenger creates a green thread on Linux's native Ruby thread for each worker process. Each green thread is executed using context switching so that blocking operations in one Passenger worker process do not prevent the other worker process from being executed.
Am I on the right track?
Thanks for your help!
The application instances don't "share the native Ruby process". An application instance is a Ruby process (or a Node.js process, or a Python process, depending on what language your app is written in), and is also the same thing as a "Passenger worker process". It also has nothing to do with threading.
I serve my software using Passenger. It spawns many Ruby processes.
Sometimes one of these rubies becomes bloated and I want it to die.
I was hoping to use god for that purpose. My idea was to monitor all these Ruby processes, and if one consumes more than 500MB of memory for 3 cycles, god should try to gracefully kill it. If it remains alive for more than 5 minutes, then god should kill it not so gracefully.
It seems to me that god always tries to run the service again, so it forces us to provide a start command. Is it possible to use god only to kill badly behaved processes and let the Passenger spawner bring them back to life when necessary?
The answer to your question lies in the question itself: you can kill Ruby processes using the god gem, which is a Ruby process monitoring framework by the GitHub guys.
Basically, here is how it works:
Configure god to monitor a process; it can be anything from Apache, Passenger, or Mongrel to a simple file doing a long-running task.
Set conditions in god's configuration file; when they are met, god will execute some predefined code.
Here is a simple example (taken from the docs). Consider this file as a long-running process that runs indefinitely and that we want to monitor for memory usage; let's call it simple.rb:
loop do
  puts 'Hello'
  sleep 1
end
Now, we install the god gem, configure it to run as the superuser so it can kill/spawn processes, and then create a configuration file. Example (also taken from the docs):
God.watch do |w|
  w.name = "simple"
  w.start = "ruby /full/path/to/simple.rb"
  w.keepalive(:memory_max => 500.megabytes)
end
Here, as you may have guessed, if the process's memory usage goes above 500 megabytes, god will restart it. Here are a few resources that might help if you are getting started with process management using the god gem:
Example gist - a Passenger worker monitor that kills workers which use too much RAM (doesn't use god; spawns a new Passenger worker instead)
Project Homepage
Github Page
An in-depth tutorial on using god with Rails & Passenger
Now, please remember that ALL god configuration is actually legal Ruby code, so you can get creative and do all sorts of things.
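For instance, a rough sketch of the 500 MB / 3-cycle rule from the question might look like this (the watch name and script path are placeholders; check which conditions your god version supports):

God.watch do |w|
  w.name  = "bloated-ruby"
  w.start = "ruby /full/path/to/simple.rb"

  # Restart the process once memory stays above 500 MB for 3 consecutive polls.
  w.restart_if do |restart|
    restart.condition(:memory_usage) do |c|
      c.above = 500.megabytes
      c.times = 3
    end
  end
end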
Lastly, if you frequently find yourself running long-running processes, I advise you to try JRuby, which works much better with long-running processes thanks to the JVM and is a LOT faster than MRI.
I use the same setup on many of my projects and had the same memory-leaking issues. After messing around with monitoring, we decided to use Passenger's own features to tackle it. Specifically, it allows the setting (e.g.) PassengerMaxRequests 300, which shuts down any instance once it has served that number of requests.
If you use it, make sure that PassengerMinInstances is set to 0, because it takes precedence over the max-requests setting.
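For reference, a minimal sketch of that combination in the Apache virtual host (300 is just an example value):

PassengerMaxRequests 300
PassengerMinInstances 0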
I'm running an application that kicks off a Rufus Scheduler process in an initializer. The application is running with Passenger in production and I've noticed a couple weird behaviors:
First, in order to restart the server and make sure the initializer gets run, you have to both touch tmp/restart.txt and load the app in a browser. At that point, the initializer fires. The horrible thing is that if you only do the touch, the processes scheduled by Rufus get reset and aren't rescheduled until you load the app in a browser.
This alone I can deal with. But this leads to the second problem: I'll notice that the scheduled process hasn't run, so I load a page and suddenly the log file is telling me that it's running the initializers as if I'd rebooted. So, at some point, Passenger is randomly rebooting as if I'd touched tmp/restart.txt and wiping out my scheduled processes.
I have an incredibly poor understanding of Passenger and Rails's integration, so I don't know whether this occasional rebooting is aberrant or all part of the architecture. Can anyone offer any wisdom on this situation?
What you describe is the way Passenger works. It spawns new instances of the application when traffic warrants them, and shuts them down after periods of inactivity to free resources.
You should read the Passenger documentation, particularly the Resource Control and Optimization section. There are settings which can prevent the application from being shut down by Passenger, if that is what you want.
Using the PassengerPoolIdleTime setting, you could keep at least one process running, but you'll almost certainly want Passenger to start up other instances of the app as necessary. This thread on the Rufus Scheduler Google Group mentions using lock files to prevent more than one process from starting the scheduler, that may be useful to you.
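As a rough illustration of that lock-file idea (not the poster's actual code; the file path, the rufus-scheduler 3.x API, and the job are assumptions), an initializer could start the scheduler only in the worker that wins a non-blocking exclusive lock:

# config/initializers/scheduler.rb
# Held in a constant so the file handle (and therefore the lock) lives as long as the process.
SCHEDULER_LOCK = File.open(Rails.root.join("tmp", "rufus_scheduler.lock"), File::RDWR | File::CREAT, 0644)

if SCHEDULER_LOCK.flock(File::LOCK_EX | File::LOCK_NB)
  # This worker won the lock, so it is the only one that schedules jobs.
  scheduler = Rufus::Scheduler.new
  scheduler.every("10m") { HypotheticalReport.generate }   # placeholder job
else
  # Another worker already holds the lock and runs the scheduler.
  SCHEDULER_LOCK.close
end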
To jump on the Phusion Passenger bandwagon, we've set up a staging server for a small Rails app to test things out.
So far it has been very nice to use; it makes installing, configuring, and deploying apps a breeze. The problem is that the site doesn't get hit very often, and Passenger seems to shut down the servers in the background. That means that when someone goes to the site, they have a really long wait until it starts up a new server to handle the request. We've read through the documentation and tried quite a few different set-ups (smart/smart-lv2 modes, PassengerPoolIdleTime, etc.) and still haven't found a real solution.
After ploughing through Google results we can't really find useful information. Currently we have a cron job that makes a request every so often in an attempt to keep the servers running.
Is anyone else experiencing this problem and do you have any advice for a fix?
What's happening is that your Application and/or ApplicationSpawners are shutting down due to time-out. To process your new request, Passenger has to startup a new copy of your application, which can take several seconds, even on a fast machine. To fix the issue, there are a few Apache configuration options you can use to keep your Application alive.
Here's specifically what I've done on my servers. The PassengerSpawnMethod and PassengerMaxPreloaderIdleTime are the configuration options most important in your situation.
# Speeds up spawn time tremendously -- if your app is compatible.
# RMagick seems to be incompatible with smart spawning
# Older versions of Passenger called this RailsSpawnMethod
PassengerSpawnMethod smart
# Keep the application instances alive longer. Default is 300 (seconds)
PassengerPoolIdleTime 1000
# Keep the spawners alive, which speeds up spawning a new Application
# listener after a period of inactivity at the expense of memory.
# Older versions of Passenger called this RailsAppSpawnerIdleTime
PassengerMaxPreloaderIdleTime 0
# Just in case you're leaking memory, restart a listener
# after processing 5000 requests
PassengerMaxRequests 5000
By using "smart" spawning mode and turning off PassengerMaxPreloaderIdleTime, Passenger will keep 1 copy of your application in memory at all times (after the first request after starting Apache). Individual Application listeners will be forked from this copy, which is a super-cheap operation. It happens so quickly you can't tell whether or not your application has had to spawn a listener.
If your app is incompatible with smart spawning, I'd recommend keeping a large PassengerPoolIdleTime and hitting your site periodically using curl and a cronjob or monit or something to ensure the listener stays alive.
The Passenger User Guide is an awesome reference for these and more configuration options.
edit:
If your app is incompatible with smart spawning, there are some new options that are very nice
# Automatically hit your site when apache starts, so that you don't have to wait
# for the first request for passenger to "spin up" your application. This even
# helps when you have smart spawning enabled.
PassengerPreStart http://myexample.com/
PassengerPreStart http://myexample2.com:3500/
# the minimum number of application instances that must be kept around whenever
# the application is first accessed or after passenger cleans up idle instances
# With this option, 3 application instances will ALWAYS be available after the
# first request, even after passenger cleans up idle ones
PassengerMinInstances 3
So, if you combine PassengerPreStart and PassengerMinInstances, Passenger will spin up 3 instances immediately after Apache loads, and will always keep at least 3 instances up, so your users will rarely (if ever) see a delay.
Or, if you're using smart spawning (recommended) with PassengerMaxPreloaderIdleTime 0 already, you can add PassengerPreStart to get the additional benefit of immediate startup.
Many thanks to the heroes at phusion.nl!
Just in case any nginx users stumble upon this question: the 'PassengerMaxRequests' and 'PassengerStatThrottleRate' directives don't translate to nginx. However, the others do:
rails_spawn_method smart;
rails_app_spawner_idle_time 0;
rails_framework_spawner_idle_time 0;
passenger_pool_idle_time 1000;
HTH!
EDIT: rails_spawn_method is deprecated in Passenger 3.
Instead use
passenger_spawn_method smart;
Everything else is still valid to date.
You can also use PassengerMinInstances:
http://www.modrails.com/documentation/Users%20guide%20Apache.html#PassengerMinInstances
This can be combined with PassengerPreStart
RE:
# Additionally keep a copy of the Rails framework in memory. If you're
# using multiple apps on the same version of Rails, this will speed up
# the creation of new RailsAppSpawners. This isn't necessary if you're
# only running one or 2 applications, or if your applications use
# different versions of Rails.
RailsFrameworkSpawnerIdleTime 0
Just something to add that might be useful.
The default spawn method in the current release
is "smart-lv2", which skips the framework spawner, so setting
the framework spawner timeout wouldn't have any effect anyway unless you
explicitly set the spawn method to "smart".
Source:
http://groups.google.com/group/phusion-passenger/browse_thread/thread/c21b8d17cdb073fd?pli=1
If your host is a shared server, like mine, you can't change the settings and are stuck with a cron job.
I also had this problem, but I was not able to change the Passenger settings because I had no write permission to that file. I found a tool (http://www.wekkars.com) that keeps my app responding fast. Maybe this can also be a solution for you.
Check the version of Passenger. It was RailsSpawnMethod <string> in old versions.
If so (if I remember correctly), replace Passenger with Rails in all the configuration directives, or look for old Passenger docs for more details.
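For example (purely illustrative), where a newer setup would use

PassengerSpawnMethod smart

an older Passenger release expects

RailsSpawnMethod smart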
I'm running into a problem in a Rails application.
After some hours, the application seems to start hanging, and I wasn't able to find where the problem was. There was nothing relevant in the log files, but when I tried to load the URL from a browser nothing happened (it was as if Mongrel accepted the request but wasn't able to respond).
What do you think I can test to understand where the problem is?
I might get voted down for dodging the question, but I recently moved from nginx + mongrel to mod_rails and have been really impressed. Moving to a much simpler setup will undoubtedly save me headaches in the future.
It was a really easy transition, I'd highly recommend it.
Are you sure the problem is caused by Mongrel? Have you tried running your application under WEBrick?
There are a few things you can check, but since you say there's nothing in the logs to indicate an error, it sounds like you might be running into a bug in the log rotation feature of the Logger class. It causes Mongrel to lock up. Instead of relying on Logger to rotate your logs, consider using logrotate or some other external log rotation service.
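For example, a minimal logrotate sketch (the log path and rotation policy are assumptions; copytruncate matters because Mongrel keeps writing to the same open file handle):

/path/to/your/app/log/*.log {
  daily
  rotate 14
  compress
  missingok
  copytruncate
}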
Does this happen at a set number of hours/days every time? How much RAM do you have?
I had this same problem. The couple options I had narrowed it down to were MySQL adapter related. I was running on Red Hat Enterprise Linux 4 (or 5) and the app would hang after a given amount of idle time.
One suggested solution was to compile native MySQL bindings, I had been using the pure Ruby one.
The other was to set the timeout on the MySQL adapter higher than what the connection would idle out on. (I don't have the specific configuration recorded, but as I recall it was in environment.rb and it was some class variable in the mysql adapter.)
I don't recall if either of those solutions fixed it; we moved to Ubuntu shortly after that and haven't had a problem since.
Check the Mongrel FAQ:
http://mongrel.rubyforge.org/wiki/FAQ
From my experience, mongrel hangs when:
the log file got too big (hundreds of megabytes in size); you have to set up log rotation.
the MySQL driver times out
you have to change the timeout settings of your MySQL driver by adding this to your environment.rb:
ActiveRecord::Base.verification_timeout = 14400
(this is further explained in the deployment section of the FAQ)
Unfortunately, Rails (and thereby Mongrel) using up too much memory over time and crashing is a known problem (50K+ Google entries for "Ruby, rails, crashing, memory"). The current ruby interpreter has the property that it sometimes simply fails entirely to give memory back to the system - it may reuse the memory it has but it won't give it up.
There are numerous schemes for monitoring, killing, and restarting Mongrel instances in a production environment - for example (chosen at random): rails monitor. Until the problem is fixed more decisively, one of these may be your best bet.
We have experienced this same issue. First off, install the mongrel_proctitle gem
http://github.com/rtomayko/mongrel_proctitle/tree/master
This gem/plugin will allow you to view the Mongrel processes via "ps", and you can see if a Mongrel is hung. An issue we have seen with Mongrel is that it will happily accept connections and enqueue them, then wedge itself. This plugin will help you see when a Mongrel has been wedged, but then you must use another monitoring app, something like Monit or God, to actually restart the wedged Mongrel.
You might also want to consider putting a more balanced reverse proxy in front of your Mongrels, something like HAProxy, instead of nginx, Apache, or Lighttpd. With a setting of "maxconn 1" in HAProxy you can ensure that the queue is maintained by HAProxy rather than Mongrel. The other reverse proxies (nginx, Apache, Lighttpd) only do round-robin, which means they can inadvertently load up your Mongrel queue.
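Purely as an illustration of the "maxconn 1" idea, an HAProxy backend fragment might look like this (the names, addresses, and ports are assumptions):

backend mongrels
  balance roundrobin
  server mongrel1 127.0.0.1:8000 maxconn 1
  server mongrel2 127.0.0.1:8001 maxconn 1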
My personal choice is God as it is much more flexible.
tl;dr Install this gem plugin and keep an eye on your Mongrels. Try Apache+Phusion Passenger.