Do Rails/Haml/Sass execute slower than Node/Express/Jade/Stylus?

I'm not sure if it's just my imagination, but this is what I've experienced:
When I use Sass (--watch), save the .sass file, and switch to the browser very quickly (1~2 seconds), sometimes the changes are not reflected yet. With Jade/Stylus the changes are never delayed.
I noticed that it takes more time to install a gem than a node module.
Starting a Node.js/Express.js server takes me about 1 second. Starting a Rails server takes me about 3~4 seconds.
I also noticed that Node frameworks (e.g. Express.js) generate files faster.
Now I'm not sure whether this is because Node.js/Express.js are younger projects with fewer features, or because Node.js is actually faster.
(I'm using Ubuntu 11.10 with an AMD CPU.)

The short answer is: Ruby is slow. It's a major pain point for Rubyists and the subject of many heated discussions along the lines of "is anything faster than Ruby?".
JavaScript runs fast even on a virtual machine; it is even faster than the native Ruby interpreter.
But I prefer Ruby and Ruby on Rails because of its much more stable library ecosystem, its huge community, and Ruby's syntax. It's really sweet =)

Related

Phusion Passenger consumes a lot of RAM

I have a RoR app which uses Passenger + Nginx.
I also use ImageMagick for some routine image-processing tasks.
The other day I saw that my app crashed because it ran out of memory!
ImageMagick wrote 'Cannot allocate memory' to the log.
When I checked the free RAM, I saw there was only ~120MB free (out of 1GB total).
Most of it was being used by Passenger.
I restarted it, and everything was OK again.
What could be the reason for this?
First of all, do you use all of ImageMagick's features? If not, switch from ImageMagick to MiniMagick (you'll save some memory).
Next, image processing: how do you work with images? You can always do it asynchronously (Resque/Sidekiq, or a rake task in cron as a "smaller solution") and maybe save some MB - see the sketch after this list.
Passenger is quite memory-demanding. Try something smaller like Thin / Puma.
Are you sure your code has no memory leaks? If you're using Ruby 2.1+ there are several tools to detect them (for example this excellent article); if not, try running your application in JRuby with a Java memory profiler, for example VisualVM.
One other question to think about: do you really need full Rails? Rails is big and requires quite a lot of memory; maybe Sinatra, or Grape for a simple API, would be sufficient...
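A minimal sketch of the asynchronous-processing suggestion, assuming Sidekiq and MiniMagick; the Photo model, its path attributes and the resize geometry are made-up placeholders:

```ruby
# Do the resize in a background job instead of inside the web process.
class ImageResizeWorker
  include Sidekiq::Worker

  def perform(photo_id)
    photo = Photo.find(photo_id)                 # hypothetical ActiveRecord model
    image = MiniMagick::Image.open(photo.original_path)
    image.resize '800x800'
    image.write photo.resized_path
  end
end

# In the controller, enqueue instead of resizing inline:
# ImageResizeWorker.perform_async(photo.id)
```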

Is changing `require` to concatenate Ruby files brilliant or crazy?

When booting a Rails application with lots of dependencies, a lot of time is spent (I think) in `require`-ing files.
Suppose you were to create a deploy process that converted all require statements into file concatenations, using the same rules (don't include the same file twice, etc). Essentially, it would treat Ruby the way the asset pipeline treats JavaScript.
Would this make a real speed difference? Would it create any issues - for instance, with variable scope - other than making it harder to trace errors to their original source files?
In short, is this brilliant or crazy?
Update
As pst points out, this would be pointless in production, where the server likely loads everything once, then forks to handle new requests.
But consider the test environment, where you boot your Rails app every time you run your tests. Pre-concatenating all your gems could have an effect similar to the Spork gem.
I suppose my real question is how much time is spent in require vs parsing the contents of the files.
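To make the idea concrete, here is a rough sketch of the kind of transformation I mean (my own illustration, not a working tool - it only handles `require_relative` and skips files it has already seen):

```ruby
# Recursively inline require_relative statements into one source string.
# Ignores load paths, gems and plain require; real loading is far more involved.
def inline_requires(path, seen = {})
  abs = File.expand_path(path)
  return '' if seen[abs]  # mimic require's "load each file only once" rule
  seen[abs] = true

  File.readlines(abs).map { |line|
    if line =~ /\Arequire_relative\s+['"]([^'"]+)['"]/
      inline_requires(File.join(File.dirname(abs), "#{$1}.rb"), seen)
    else
      line
    end
  }.join
end

# File.write('app_concat.rb', inline_requires('config/environment.rb'))
```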
You'll be happy to see what made it into Ruby 2.0:
http://bugs.ruby-lang.org/issues/7158
tl;dr: it will make no difference amortized across requests [1] - any trivial start-up cost is inconsequential [2].
[1] A much better way to "increase performance" is just to reuse processes - e.g. only load a process once for N requests (which implies only "running" the require statements once) - as is already done.
[2] For those who are really interested in whether "it will parse faster", please run a benchmark. Then realize that it doesn't matter - even saving a second on start-up is of no importance to web-server infrastructure. (Of course, it would only be milliseconds faster - from saving a few disk seeks - if at all.)
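If you really do want to run that benchmark, a quick and unscientific sketch might look like the following; the file count, file contents and temp directory are made up purely for illustration:

```ruby
# Compare requiring many small files one by one vs. eval'ing their
# concatenated source. Numbers will vary wildly by machine and Ruby version.
require 'benchmark'
require 'tmpdir'

Dir.mktmpdir do |dir|
  files = 200.times.map do |i|
    path = File.join(dir, "mod_#{i}.rb")
    File.write(path, "module Mod#{i}; def self.hi; #{i}; end; end\n")
    path
  end

  Benchmark.bm(14) do |x|
    x.report('require:')     { files.each { |f| require f } }
    x.report('eval concat:') { eval(files.map { |f| File.read(f) }.join) }
  end
end
```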

How to investigate what makes my app so slow to start?

My Rails 3.1.3 app takes quite a while to start up, and even running rails console seems to take longer than it reasonably should. For example, with my app it's 50 seconds from rails c to the command prompt. In a fresh test Rails app (e.g. from rails new) it's about 5 seconds.
Needless to say, this is really annoying, particularly when trying to run tests, etc.
I've seen the links at https://stackoverflow.com/a/5652640/905282 but they're pretty involved; I was hoping for something at a higher level, like "oh yeah, here's how long each gem takes during startup".
Suggestions, or do I just need to dive into the details?
Ruby 1.9.3 fixes a performance problem in 1.9.2 when a large number of files have been loaded with require.
That post describes how the performance of including new files is O(N), getting progressively slower the more files are already loaded. Since Rails loads in a lot of files, it is a serious drag on start-up time.
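If you want a rough per-gem breakdown of startup time, a sketch along these lines can give a first approximation; it assumes each gem can be required by its gem name, which is not true for every gem:

```ruby
# Rough sketch: time each gem declared in the Gemfile individually.
# Run from the application root so Bundler can find the Gemfile.
require 'benchmark'
require 'bundler'

Bundler.setup  # put the bundled gems on the load path without requiring them

Bundler.definition.dependencies.each do |dep|
  elapsed = Benchmark.realtime do
    begin
      require dep.name  # assumes require path == gem name; not always the case
    rescue LoadError
      # skip gems whose require path differs from their gem name
    end
  end
  puts format('%8.3fs  %s', elapsed, dep.name)
end
```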

Memory requirement for JRuby + Rails + Mongrel?

Hi,
I am planning to run JRuby (1.5.3, the latest) on Mongrel. How much memory will it require on an x64 server for a simple web site, and how many instances will be needed?
10000 page views per day.
For the same requirement, what would the numbers be for (plain) Ruby?
Any reference production data would be welcome.
You probably won't use Mongrel with JRuby; at least I've never heard of it. We run an app using Trinidad, which wraps Tomcat 7, and for performance similar to what you're looking for I use a 1GB heap.
Mongrel has really fallen out of favour compared to more robust setups using Passenger, Thin or Unicorn, for instance.
If you're limited on memory, in my experience CRuby is the way to go. Try REE or Ruby 1.9.2 with Passenger 3 and Nginx. It's a super simple setup and very fast.
JRuby definitely takes more memory, but if you have Java requirements you don't have much choice.
For 10000 page views you should get away with a small EC2 instance (if that's what your "instances" refers to).
It's really hard to give a definitive answer, though, as it all depends on what type of app you're running: is it CPU-intensive calculation or memory-intensive data? Who knows.
In my experience, CRuby tends to be much simpler than JRuby, easier for local use (i.e. tests run significantly faster in CRuby) and also very fast.

What do I need to know about JRuby on Rails after developing RoR Apps?

I have done a few projects using Ruby on Rails. I am going to use JRuby on Rails and host it on GAE. What are the things I need to know while developing JRuby apps? I have read that:
JRuby has the same syntax
I can access Java libraries
JRuby does not have access to some gems/plugins
A JRuby app takes some time to load for the first time, so I have to keep it alive by sending a request every 5 mins or so
I cannot use ActiveRecord and instead I must use DataMapper
Please correct me if I am wrong about any of these statements. Is there anything else that I must know? Do I need to start reading about JRuby from scratch, or can I go about developing Ruby apps as usual?
I use JRuby every day.
True:
JRuby has the same syntax
JRuby does not have access to some gems/plugins
I can access Java libraries
Some gems/plugins have jruby-specific versions, some don't work at all. In general, I have found few problems and as the libraries and platforms have matured a lot of the problems have gone away (JRuby has become a lot better).
You can access Java, but in general why would you want to?
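For what it's worth, when you do want Java, the interop is straightforward. A tiny JRuby-only sketch (it won't run under MRI):

```ruby
# JRuby only: require 'java' exposes the JDK classes directly.
require 'java'

list = java.util.ArrayList.new
list.add('hello from the JDK')
puts list.get(0)

# Java getters are also available in snake_case:
puts java.lang.System.get_property('java.version')
```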
False:
JRuby app would take some time to load for the first time, so I have to keep it alive by sending request every 5 mins or so
I cannot use ActiveRecord and instead I must DataMapper
Although I guess it is possible to imagine a server setup where the initial startup/warmup cost of the JVM means you need to ping the server, there is nothing inherent in JRuby that makes this true. If you need to keep the server alive, you should look at your deployment environment. Something similar happens in shared hosting with Passenger, where an app can be unloaded from memory after a period of inactivity.
Also, we use ActiveRecord with no problems at all.
AFAIK, Rails 3 is 100% compatible with JRuby, so there should be no problem on that path.
Like with every new platform, you should make yourself comfortable with it by playing around with JRuby. I recommend using RVM to do that.
As far as your questions go:
JRuby is just another runtime, like MRI or Rubinius
Since JRuby runs on the JVM, using Java is very easy, but you can also use RJB (Ruby Java Bridge) from MRI
Some gems are not compatible when they use native C extensions that do not run on JRuby
The JVM and your application container need startup time and some time to load your app, but that is all; there is no need for a keep-alive, that statement is wrong
You can use whatever you want; most gems have been updated to be compatible with JRuby
@TobyHede has mostly covered the issues you thought you might have, so I'll leave it at that.
As for other things to keep in mind: it's simply a different interpreter, and funny discrepancies will crop up that take some adaptation.
Some methods are implemented differently; for example, sleep 10.seconds would throw an exception (you have to sleep 10.seconds.to_i - see the small example after this answer), and I remember getting a NoMethodError on the Symbol class when switching from MRI to JRuby (I don't remember which method wasn't implemented). Just keep in mind that slight variations will be there
You will experience hangs and exceptions in gems that otherwise worked for you (pry, for example, when listing more than one page)
Some gems may work differently; pry (again) will exit if you press Ctrl+C, for example - pretty annoying
Slightly slower load times for everything, and no Zeus
You'll get occasional Java exception stack traces with no indication of which line of Ruby code they happened on
Timeout.timeout often will not work as expected when it's wrapped around network code and the stars align badly (this has mostly been fixed in JRuby core, but it seems to still be an issue with gems that do their own network code in pure Java)
Hidden problems with thread-safety in third-party code (see "How do you choose gems for a high throughput multithreaded Rails app?") - stay away from EventMachine, for example
Threads will be awesome (due to being native and having no GIL) and fibers will suck (due to no coroutine support in the JVM they're ordinary threads); this is why you often won't get a performance boost with Celluloid compared to MRI
You used to run your Rails app with MRI Ruby as processes in an OS; you knew how to track their PIDs, bloat and run times, how to kill and monitor them, etc. This part is not evident when you switch to JRuby, because everything turns into threads in a single process. The Java world has very good tools to handle these issues, but it's something you'll have to learn
killall -9 ruby doesn't do the trick with JRuby when your console hangs (which it does more often than before); you have to ps -ef and then track down the proper processes without killing your NetBeans etc. (minor, but annoying)
Due to my last point, knowing Java and the JVM will help you get out of tight spots in certain situations (depending on what you intend to do, this may be something you actually really need); the choice of deployment server will increase or decrease this need (TorqueBox, for example, is a bit notorious for this; other deployment options might be simpler, see http://thenerdings.blogspot.com/2012/09/pulling-plug-on-torquebox-and-jruby-for.html)
...
Also, see what jruby team says about differences, https://github.com/jruby/jruby/wiki/DifferencesBetweenMriAndJruby
But yeah, otherwise it's "just the same as MRI Ruby" :)
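A tiny illustration of the sleep discrepancy from the list above (the failing case is the behaviour described for JRuby at the time; current versions may differ):

```ruby
# Assumes ActiveSupport is loaded (as in a Rails app) for Numeric#seconds.
require 'active_support/time'

sleep 10.seconds.to_i  # portable: pass a plain Integer
# sleep 10.seconds     # an ActiveSupport::Duration; reportedly raised on JRuby back then
```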
