So I have tried this out on multiple computers with multiple setups (servers/apps), and I consistently get Rails completing 8-15 requests per second, even for selects on an empty table with one field. I think I'm doing something wrong here, because I've read a lot of stats online where people get 60-200 requests per second with Mongrel, so being down at 8 seems just awful. The first app I tested this on was a little more involved and had two queries in one controller, but they were just selecting a few rows; not a big deal.
Is there some trick to this I don't realize? Ruby.exe is taking up nearly 50% of my CPU cycles, but still, this is pretty bad. I feel like I tried this when messing with Rails last year and got something like 50 requests per second. Is it possible that routing is screwed up somehow?
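For reference, throughput numbers like these usually come from a simple load-testing tool; a typical run, assuming ApacheBench (ab) and an example URL, looks like:

    # 1000 requests total, 10 concurrent, against a local Rails app
    ab -n 1000 -c 10 http://localhost:3000/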
Any advice would be greatly appreciated. Even info as far as profiling tools go so I could at least figure out WHERE the problem is occurring.
Thanks ahead of time.
If you're on Windows then that seems about right. Rails runs terribly slowly on Windows. Try running it on a Linux box, or a Mac if you have one. You could also try Heroku; they have a free starter plan you can use for development.
If you must run in a Windows environment, you could try JRuby for some extra speed.
My website seems to go down for up to an hour almost every day. I don't know much about servers so would appreciate advice on what to do next.
The website (3dsbuzz.com) has a WordPress section, a wiki, a gallery, and a vBulletin forum, each with its own custom code and plug-ins, so there are lots of potential places that could be inefficiently coded. It runs on a cloud server with 2GB of RAM. The MySQL database is 370MB. We get around 30,000 page views per day.
What would be the best way to reduce the amount of downtime? Should I upgrade my server (I can't really afford to), or is 2GB reasonable? I have plenty of errors in my error log, but they don't always occur around the times the server goes down, so I'm not sure how relevant they are.
One approach is to disable each component one by one until the server no longer goes down.
Then you know where to look and can investigate further.
I have Rails 3, with webrick, running a sqlite3 database. On my standard linux desktop, doing Word.all (Word is my model), I have no problems, even though there are 10,000 entries in my database. I have scopes to display them 2000 at a time, to make things more tractable.
On my Windows 7 laptop, it's a very different story. I can only get about 400 Words at a time, or I get that "Not enough space" error.
I can open up Windows' Task Manager, and the memory barely even blips. On the console, the command returns almost instantly (it has clearly given up before doing anything with more than a few hundred entries).
What is going on here? My laptop isn't exactly much worse than my desktop, so I don't think I'm actually hitting any RAM limits... Is there some weird Ruby thing going on?
EDIT: It's not just a server issue either; I see the same thing in the Rails console as well... So WEBrick might not be the issue...
If it were code, you'd think I'd see it across platforms, not just on my laptop... Even then, how can it be my code if all I'm typing is Word.all (no custom code), and the database is clearly set up right (I don't have issues getting any individual entry, just not too many at a time)?
-jenny
WEBrick is a very simple web application server intended only for development.
I have no experience with it on Windows, but in any case I have had many problems with it in edge-case situations. You could try the mongrel gem; if the problem persists, it's something in your code.
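As an aside on the original question: if the real problem is materializing 10,000 records into one array, fetching them in batches sidesteps it entirely. A minimal sketch using ActiveRecord's find_each (available in Rails 3; the batch size is an arbitrary choice):

    # Loads records 500 at a time instead of all at once,
    # keeping memory usage roughly constant.
    Word.find_each(:batch_size => 500) do |word|
      puts word.id  # per-record work goes here
    end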
I've been using ZenTest to run all the tests in my Rails project for years and it's always been quite nippy. However, on my Mac it has suddenly started taking 3 times as long to run all the tests. We have 1219 tests and for the past year it would run all the tests in around 300 seconds on average. Now though, it's taking almost 900 seconds:
Finished in 861.3578 seconds.
1219 tests, 8167 assertions, 0 failures, 0 errors
==============================================================================
I can't think of any reason why such a slowdown would occur. I've tried updating to the latest gem version, reducing the log output from the tests and regenerating the test database, all to no avail. Can anyone suggest a way to improve the performance?
When you have eliminated the impossible, whatever remains, however improbable, must be the explanation: if it's not the gem, not the database (did you check indexes?), not your Mac, not Rails (did you upgrade recently?), could it be the code?
I'd check git/svn/cvs logs for the few most recent changes you made, and look for anything that might e.g. be slowing down queries.
If you can't find anything right away, profile the code to see where the time is going. This will be slower than just remembering something you did change (which almost always turns out to be the explanation in this kind of situation), but might point you in the right direction.
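For example, a minimal profiling session with the ruby-prof gem (the profiled call is a hypothetical stand-in for whatever test code you want to measure):

    require 'ruby-prof'   # gem install ruby-prof

    RubyProf.start
    run_my_slow_test      # hypothetical: the code under investigation
    result = RubyProf.stop

    # Flat report of where the time went, slowest methods first.
    RubyProf::FlatPrinter.new(result).print(STDOUT)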
Performance issues can be frustrating because any number of factors can have an impact. A missing index on the DB. Network latency. Low memory conditions. Don't give up, keep Tilton's Law in mind.
You are really going to have to do a little more homework here; I doubt it's ZenTest:
1. Grab a version of your code from a few months ago, when everything was great and dandy. Run all the tests and output all the test durations to a spreadsheet or something.
2. Grab the current version of your code base and repeat the process in 1.
3. If the durations are the same, something about your DB configuration or machine configuration has changed.
4. If all the tests are slower on average, this is a hard one to diagnose, but it would suggest that some new bit of code is running in each test.
5. If a handful of new tests are really slow, fix them.
So I finally solved this. Here's how, in three easy steps:
1. Insert the OS X Leopard CD.
2. Completely reinstall Leopard from scratch.
3. Reinstall Ruby, MySQL, etc.
After doing this the tests run in under 260 seconds.
I have no idea what happened but it certainly seems to have been a MySQL issue somewhere.
I'm running a Rails app through Phusion Passenger (mod_rails) which will run smoothly for a while, then suddenly slow to a crawl (one or two requests per hour) and become unresponsive. CPU usage is low throughout the whole ordeal, although I'm not sure about memory.
Does anyone know where I should start to diagnose/fix the problem?
Update: restarting the app every now and then does fix the problem, although I'm looking for a more long-term solution. Memory usage gradually increases (initially ~30mb per instance, becomes 40mb after an hour, gets to 60 or 70mb by the time it crashes).
New Relic can show you combined memory usage. Engine Yard recommends tools like Rack::Bug, MemoryLogic or Oink. Here's a nice article on something similar that you might find useful.
If restarting the app cures the problem, looking at its resource usage would be a good place to start.
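Passenger ships with a tool for exactly this. Assuming passenger-memory-stats is on your PATH, something like this on the server will show per-process memory while the app degrades:

    # Re-run the Passenger memory report every 60 seconds
    watch -n 60 passenger-memory-stats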
Sounds like you have a memory leak of some sort. If you'd like to band-aid the issue, you can try setting PassengerMaxRequests to something a bit lower until you figure out what's going on.
http://www.modrails.com/documentation/Users%20guide%20Apache.html#PassengerMaxRequests
This will restart your instances, individually, after they've served a set number of requests. You may have to fiddle with it to find the sweet spot where they are restarting automatically before they lock up.
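For example, in your Apache configuration (the value is just a starting point to tune, not a recommendation):

    # Recycle each application process after it has served 500 requests
    PassengerMaxRequests 500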
Other tips:
- Go through your plugins/gems and make sure they are up to date.
- Check for heavy actions and requests where there is a lot of memory consumption (New Relic is great for this).
- Consider switching to REE (Ruby Enterprise Edition), as it has better garbage collection.
Finally, you may want to set a cron job that looks at your currently running passenger instances and kills them if they are over a certain memory threshold. Passenger will handle restarting them.
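A rough sketch of such a cron job using plain ps (the threshold and the 'ruby' process match are assumptions; Passenger may retitle its workers, so verify what your processes are called before trusting this with kill):

    #!/bin/sh
    # Kill worker processes whose resident memory exceeds ~500 MB;
    # Passenger will respawn them on the next request.
    THRESHOLD_KB=512000
    ps -eo pid,rss,comm | grep ruby | while read pid rss comm; do
      if [ "$rss" -gt "$THRESHOLD_KB" ]; then
        kill "$pid"
      fi
    done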
I have a simple Rails app deployed on a 500 MB Slicehost VPS. I'm the only one who uses the app. When I run it on my laptop, it's fast enough, but the deployed version is insanely slow: it takes 6 to 10 seconds to load the login screen.
I would like to find out why it's so slow. Is it my code? (Don't think so because it's much faster locally, but maybe.) Is it Slicehost's server being overloaded? Is it the Internet?
Can someone suggest a technique or set of steps I can take to help narrow down the cause of this problem?
Update:
Sorry forgot to mention. I'm running it under CentOS 5 using Phusion Passenger (AKA mod_rails or mod_rack).
If it is just slow the first time you load it, it is probably because Passenger killed the process due to inactivity. I don't remember all the details, but I recall reading about people who used cron jobs to keep at least one process alive to avoid this lag, which can occur when Passenger needs to reload the environment.
Edit: more details here
Specifically - pool idle time defaults to 2 minutes which means after two minutes of idling passenger would have to reload the environment to serve the next request.
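If that turns out to be the cause, the timeout is tunable in your Apache configuration; setting it to zero keeps processes alive indefinitely (at the cost of holding their memory):

    # Never shut down application processes due to idleness
    PassengerPoolIdleTime 0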
First, find out if there's a particularly slow response from the server. Use Firefox and the Firebug plugin to see how long each component (including JavaScript and graphics) takes to download. Assuming the main page itself is what is taking all the time, you can start profiling the application. You'll need to find a good profiler, and as I don't actually work in Ruby on Rails, I can't suggest any: google "profile ruby on rails" for some options.
As YenTheFirst points out, the server software and config you're using may contribute to a slowdown, but A) Slicehost doesn't choose that, you do, since Slicehost just provides very raw server "slices" that you can treat as dedicated machines, and B) you're unlikely to see a script that runs instantly suddenly take 6 seconds just because it's running as CGI. Something else must be going on. Check how much RAM you're using: have you gone into swap? Is the login slow only the first time it's hit, indicating some startup issue, or is it always that slow? Is static content served slowly? That would tend to mean some network issue (either on the Slicehost side or your local network) is slowing things down, assuming you're not in swap.
When you say "fast enough" you're being vague: does the laptop version take 1 second to Slicehost's 6? That wouldn't be entirely surprising if the laptop is decent; after all, the reason slices are cheap is that they're a fraction of a full server. You're probably using 1/32 of an 8-core machine at Slicehost, as opposed to both cores of a modern laptop. The Slicehost cores are quick, but your laptop could be a screamer compared to 1/4 of a core. :)
Try to pinpoint where the slowness lies:
1. Is it the application that is slow, or the infrastructure (network + web server)? Put a static file on your web server and access it through your browser.
2. If the static file is fast, it is probably a problem with the application + server configuration (e.g. database access is slow). Try a page with a simple loop and no database access: is it slow? (See the sketch after this list.)
3. If the static file is slow, it is probably your infrastructure. Check:
   - Bad network connection: do a packet capture (with Wireshark, for example) and look for retransmissions, duplicate packets, etc.
   - Is DNS resolution slow?
   - Is the server misconfigured?
   - etc.
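Here is the kind of trivial no-database action that works for step 2; a sketch only, with hypothetical names:

    # app/controllers/diagnostics_controller.rb
    class DiagnosticsController < ApplicationController
      # Pure CPU work, no database: if this is also slow,
      # the bottleneck is the server/environment, not your queries.
      def loop_test
        total = 0
        1_000_000.times { |i| total += i }
        render :text => "done: #{total}"
      end
    end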
What is Slicehost using to serve it?
Fast options are things like Mongrel, or Apache's mod_rails (also called Phusion Passenger).
These are dedicated servers (or plugins to servers) which run an instance of your rails app.
If your host isn't using that, then it's probably defaulting to CGI. Rails comes with a simple CGI script that will serve the page, but it reloads the app for every page.
(edit: I suspect that this is the most likely case, that your app is running off of the CGI in /webapp_directory/public/dispatch.cgi, which would explain the slowness. This tends to be a default deployment on many hosts, since it doesn't require extra configuration on their part, but it doesn't give good performance)
If your host supports FastCGI, Rails supports that too. FastCGI will open a CGI session and keep it open for multiple pages, so you get much better performance, but it's not nearly as good as Mongrel or mod_rails.
Secondly, is it in 'production' or 'development' mode? The easy way to tell is to go to a page in your app that gives an error. If it shows you a stack trace, it's in development mode, which is slower than production mode. Mongrel and mod_rails have startup options to determine whether to run the app in production or development mode.
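For example (the Mongrel flag and the Passenger directive below are the standard ones for this, shown as a sketch):

    # Mongrel: start the app in production mode
    mongrel_rails start -e production

    # Passenger: set the environment in your Apache config instead
    #   RailsEnv production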
Finally, if your database is slow for whatever reason, that will be a big bottleneck as well. If you do have a good deployment (Mongrel/mod_rails/etc.) in production mode, try looking into that.
Do you have a lot of data in your DB? I would double-check that you have indexed all the appropriate columns, because this can make a huge difference. On your local dev system you probably have a lot more memory than on your 500 MB slice, which would make the DB run a lot slower if you have big, unindexed tables. You can also enable the slow query log in MySQL to pinpoint queries that aren't using indexes.
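For instance, adding an index via a migration (table and column names here are hypothetical):

    class AddIndexToWords < ActiveRecord::Migration
      def self.up
        # Speeds up queries that filter or join on user_id
        add_index :words, :user_id
      end

      def self.down
        remove_index :words, :user_id
      end
    end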
Other than that, yes: Passenger will need to spin up a process for you if you have not used the site recently. If this is the case, you should see a significant speed increase on the second, and especially the third and later, page loads.
You might want to run a local virtual machine with 500 MB of RAM. Are you doing a lot of client-server interaction? Delays over the WAN are significant.
You might want to check out RPM (there's a free "lite" version too) and/or New Relic's Tune Up.
Your CPU time is guaranteed by Slicehost using the Xen virtualization system, so it's not that. Don't have the other answers for you, sorry! Might try 'top' on a console while you're trying to access the page.
If you are using FireFox and doing localhost testing (or maybe even on LAN) you may want to try editing the network.dns.disableIPv6 setting.
Type about:config in the address bar and filter for network.dns.disableIPv6 and double-click to set to true.
This bug has been reported mainly on Vista, but on some other systems as well.
You could try running 'top' when you SSH in to see which process is heavy. If you also have problems logging in, you might try looking at the Statistics in the Slicehost manager.
If you discover it is MySQL's fault, consider decreasing the number of threads/connections it can spawn.
512 MB seems decent for a Rails application, so you might also want to check whether something is misconfigured.
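If MySQL does turn out to be at fault, these are the my.cnf knobs usually turned down on a small VPS (values are illustrative guesses for ~512 MB of RAM, not recommendations):

    [mysqld]
    max_connections         = 30    # fewer concurrent connections, less memory
    key_buffer_size         = 16M   # MyISAM index cache
    innodb_buffer_pool_size = 64M   # InnoDB cache; usually the biggest memory consumer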