How to optimize Rails for one kiosk user? - ruby-on-rails

I am playing around with using Rails to underpin a kiosk. This is a terminal where there is only one local user at a time.
Under this system, a browser like Chrome would access the Rails app.
Things I assume would be helpful:
Super-fast, very lightweight Rails server (I'm using Puma).
Eliminating standard processes/assumptions that are meant for internet website contexts (caching, CDNs, middleware, etc.).
In some level of detail preferably, how should one set up a Rails app for maximum performance in a single-user kiosk?

This might sound like a non-answer, but the approach I would take is to use Rails in its default (production) configuration, and optimise performance issues as they arise in your test bed. Running Rails in production mode will likely give you more than enough performance if you have a dedicated machine for a single user (often you'll have many clients to a single Rails instance). Without testing the application, you could sink a considerable amount of time into optimisations that don't impact the user experience.
It may be worth sitting Rails behind Apache/nginx (Passenger is a well understood way to get a Rails app on Apache) to serve your static assets, but from the information provided so far I'd be surprised if performance optimisation was necessary at this stage.
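If profiling does point at the server layer, Puma itself can be slimmed down for this use case. A sketch of a minimal config for a single local user (the thread counts and port are assumptions to tune, not recommendations):

```ruby
# config/puma.rb -- minimal setup for a kiosk with one local client
# Single process (no cluster mode) and a small thread pool:
# there is only ever one browser talking to the app.
workers 0
threads 1, 2

# Bind to loopback only; the kiosk browser runs on the same machine.
bind 'tcp://127.0.0.1:3000'

environment 'production'
```

This mostly removes concurrency machinery you don't need rather than making any single request faster, which is consistent with the "measure first" advice above.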
A challenge that might be worth considering at this stage is how you'll deploy changes to your kiosk/set of kiosks. Will they be brought in for updates or need to have changes applied over-the-air? That will likely impact how you deploy it onto the machine, and in my experience is a harder thing to change later on.

Related

Scaling to support a massive amount of traffic in a short period of time

Until now, our site has had a modest amount of traffic. None of our developers are big ops guys, but we've stayed ahead of it and keep the site up and running pretty quick. That said, our dev team is stretched, we've accumulated some technical debt, and there's plenty of opportunity to optimize.
Without getting into specifics, we just found out that we'll be expecting a massive amount of traffic in the near future in a very short period of time. On the order of several million hits in a few hours. Scaling is one thing, but this is several orders of magnitude greater than what we're seeing now.
We're a Rails app hosted on S3 using ELB, and Postgresql.
I wanted to field some recommendations for broad starting points for scaling and load testing given this situation.
Update: Sorry, EC2, late night :)
Pretty interesting question; let me answer you in detail. I assume you are talking about an e-commerce application, since enterprise and B2B apps don't usually see spikes like this. Since you mentioned that your Rails app is hosted on S3, let me clear a couple of things up.
1) You can't host a Rails app on S3. S3 is the Simple Storage Service, where you can only store files.
2) I'm guessing you have hosted your Rails app on AWS EC2 with an Elastic Load Balancer in front of the EC2 instances, which is pretty good.
3) You have a self-managed PostgreSQL deployment on an EC2 instance.
If you are running on AWS you are halfway safe, and you can easily scale up and scale down.
I can see one problem in your present model: your database. AWS offers a database as a service called the Relational Database Service (RDS), which supports MySQL, Oracle, and MS SQL Server.
RDS comes with a lot of features, such as automatic backups of your database, high IOPS, etc.
But it doesn't support PostgreSQL. You'll need to run and manage PostgreSQL yourself on an EC2 instance, but make sure it's fail-safe and that you have a proper backup and restore system in place.
AWS provides an auto-scaling API and command-line tools; they're pretty easy to use.
You don't have to worry about bandwidth issues etc., but I second Angelo's answer too.
You can use ElastiCache (memcached) for caching your app, and a CDN if you need to speed it up. RDS can manage up to 30,000 IOPS; it's a monster and will do a lot of work for you.
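Pointing Rails at ElastiCache is a cache-store setting with the cluster's memcached endpoint. A sketch assuming the `dalli` memcached client (the endpoint hostname is a placeholder, and newer Rails versions use `:mem_cache_store` rather than `:dalli_store`):

```ruby
# Gemfile
gem 'dalli'

# config/environments/production.rb
# Endpoint below is a made-up example; use your cluster's configuration endpoint.
config.cache_store = :dalli_store, 'my-cluster.abc123.cfg.use1.cache.amazonaws.com:11211'
```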
Feel free to ask me if you need any kind of help.
(Disclaimer: I am a senior DevOps engineer working for an e-commerce company that uses Ruby on Rails.)
Congratulations and I hope your expectation pans out!!
This is such a difficult question to comprehensively answer given the available information. For example, is your site heavy on db reads, writes or both (and is your sharding/replication strategy in line with your db strain)? Is bandwidth an issue, etc.? Obvious points would focus on making sure you have access to the appropriate hardware and that your recipes for whatever you use to provision/deploy your hardware are up to date and good to go. You can often throw hardware at a sudden spike in traffic until you can get to the root of whatever bottlenecks you discover (and yes, you will discover them at inconvenient times!)
Regarding scaling your app, you should at least:
1) Cache whatever you can. Pay attention to cache expiration, etc.
2) Be sure your DB has appropriate indexes set up (essentially, you should have an index on any field you're searching on.)
3) Watch your logs closely to identify potential long queries, N+1 queries, long view renders, etc.
4) Do things like what Shopify outlines in this post: http://www.shopify.com/technology/7535298-what-does-your-webserver-do-when-a-user-hits-refresh#axzz2O0gJDotV
5) Set up a good monitoring system (Monit, God, etc) for each layer of your stack - sudden spikes in traffic can quickly bottleneck your application in unexpected places and lead to more issues. The cascade can happen quickly.
6) Set up cron to automate all those little tasks you currently do manually...that you will probably forget about doing once you're dealing with traffic spikes.
7) Google scaling rails and you'll see tons of good info.
8) etc, etc, etc...
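A few of the points above sketched in Rails terms (the model, table, and column names are invented for illustration, not taken from the question):

```ruby
# Point 2: add an index on a column you search on, via a migration.
class AddIndexToOrdersOnUserId < ActiveRecord::Migration
  def change
    add_index :orders, :user_id
  end
end

# Point 1: cache an expensive lookup, with an explicit expiry.
featured = Rails.cache.fetch('featured_products', expires_in: 10.minutes) do
  Product.where(featured: true).to_a
end

# Point 3: avoid N+1 queries by eager-loading associations
# (one or two queries instead of one per post).
posts = Post.includes(:comments).limit(20)
```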
You can use some profiling tools (rubyperf, or something like NewRelic, etc). Whatever numbers you get from them are best treated as a rough baseline. The simple reason is that your profiling depends on your hardware stack, which will certainly change depending on actual traffic patterns. Profiling is pretty easy to do if you have a site with one page of static content... incredibly difficult if you have a CMS site with a growing db and growing traffic.
Good luck!!!

Best web/app server to host multiple low hit rails/sinatra apps

I need to host a lot of simple rails/sinatra/padrino applications of different ruby versions each with 0..low hits per day. They belong to different owners and should be well isolated from each other.
When an app is hit it should respond in reasonably short time, but I expect several simultaneous visitors hitting the same site to be a rare case.
I'm going to create a separate OS user for each application. Naturally, I'd like to put as many applications per server as possible. Thus I need to choose the web server with the lowest memory footprint, one which can run applications on behalf of different users with different Ruby versions and gemsets.
I'm considering WEBrick, nginx+Passenger, Thin, and Apache+Passenger. I suppose the reliability of all these choices is sufficient for such a job, and while performance isn't an issue, memory consumption is.
I found many posts regarding performance, but most of them discuss performance tuning and issues. I couldn't find a comparison of web servers' memory usage when idle.
Is "in process" webrick the best choice? Which one would you choose for that job?
And I couldn't figure out how to resolve subdomains to application ports with webrick. Shall I use nginx or apache for that?
I don't have much experience with hosting myself, but using WEBrick in production is not a good idea, I think. You could also check out Mongrel, which I've seen used in production. In most cases, though, you will probably want to choose between Thin and Unicorn. Check out this http://cmelbye.github.com/2009/10/04/thin-vs-unicorn.html or google around. Good luck :-)
Why not use Heroku? It's free and gets you out of the hassle of server configuration and maintenance.

How can I make my web app more robust to handle unexpected traffic spikes?

I asked this on HN but didn't get much advice.
I'm a n00b in web app development. Nevertheless, I've been working on an app (in Ruby on Rails, deployed with Heroku) that has gotten some really positive feedback, so I'd like to dedicate more resources to it.
However, I'm not a sysadmin or anything of the sort, so I'm unsure as to what steps to take to ensure my app is robust and can handle unexpected traffic spikes without crashing.
Essentially, I'd like to prepare for the worst-case scenario in terms of handling unexpected traffic spikes etc.
Any specific pointers (especially with Heroku) will be helpful!
What is the overall distribution of HTTP requests for loading a typical page on your app? Open the page with Firebug / Chrome Dev Tools and analyze the HTTP requests being made.
If you see that there is a LOT of static content being loaded (like CSS / images / JS) for each page, that would indicate a caching issue (static content is not getting cached).
You could even move static content to a CDN (http://en.wikipedia.org/wiki/Content_delivery_network). These two are low-hanging fruit.
The next step is to ensure that your app can be hosted on multiple machines (e.g., it does not depend on the same HTTP session being available on each host, and similar things). This way you can add more hosts to serve the demand.
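One concrete piece of that host-independence: keep sessions out of local server state, so any host can serve any request. A sketch ("MyApp" is a placeholder application name, and the syntax assumes Rails 3-era config):

```ruby
# config/initializers/session_store.rb
# Cookie-based sessions live in the client's browser, so no app server
# holds per-user state and hosts can be added or removed freely.
MyApp::Application.config.session_store :cookie_store, key: '_myapp_session'

# Alternatively, share sessions via a common memcached instance, so every
# host reads the same backing store (requires a session-store gem):
# MyApp::Application.config.session_store :mem_cache_store
```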
To ensure your app is stable during all development and deployments, write unit tests, functional tests and integration tests (there are a lot of gems to handle this, like shoulda, rspec, cucumber, capybara, selenium...).
I would also use Hoptoad for error notification. There is a free offer for 1 project ( https://hoptoadapp.com/account/new/Egg )
The next thing, to monitor your site, could be NewRelic ( http://newrelic.com/ ). It gives you an overview of which queries took long and where the bottlenecks in your app are.
A very short answer: in theory you need to learn about capacity, scalability and performance.
In practice, you can spin up more instances on Heroku (or engines), but you'll be paying more money for it.

Rails Request Initialization

We all hear a lot about scaling issues in Rails.
I was just curious what the actual cost of handling an HTTP request is in the Rails framework. Meaning, what has to happen for each and every request that comes in? Is there class parsing? Configuration? Database connection establishment?
That actually depends a lot on which web server you're using, and which configuration you're using, not to mention the application design itself. Configuration and design issues involved include:
Whether you're using fastcgi, old-school cgi, or some other request handling mechanism (affects whether you're going to have to rerun all of the app initialization code per request or not)
Whether you're using memcache (or an alternate caching strategy) or not (affects cost of database requests)
Whether you're using additional load balancing techniques or not
Which session persistence strategy you're using (if needed)
Whether you're using "development" mode or not, which causes code files to be reloaded whenever they're changed (as I recall; maybe it's just per-request) or not
Like most web app frameworks, there are solutions for connection pooling, caching, and process management. There are a whole bunch of ways to manage database access; the usual, default ones are not necessarily the highest performance, but it's not rocket science to adjust that strategy.
Someone who has dug into the internals more deeply can probably speak in more excruciating detail, but most apps use either FastCGI on Apache or an alternate more rails-friendly web server, which means that you only have app setup once per process.
Until the release of Phusion Passenger (aka mod_rails) the "standard" for deployment was not FastCGI but using a cluster of Mongrel servers fronted by Apache and mod_proxy (or Nginx etc).
The main issue behind the "Rails doesn't scale" claim is that there are some quite complicated threading issues, which has meant that, with the current version of Ruby and the available serving mechanisms, Rails has not been thread-safe. This has meant that multiple containers have been required to run a Rails app to support high levels of concurrent requests. Passenger makes some of this moot, as it handles all of this internally, and can also be run on a custom build of Ruby (Ruby Enterprise Edition) that changes the way memory is handled.
On top of this, the upcoming versions of both Ruby and Rails are directly addressing the threading issue and should close this argument once and for all.
As far as I am concerned the whole claim is pretty bogus. "Scale" is an architectural concern.
Here's a good high level overview of the lifecycle of a Rails request. After going through this, you can choose specific sections to profile and optimize.

Multiple rail apps using Apache and Mongrel

I am currently developing an application that has around 15 modules, all of them using the same database.
I am using Apache + Mongrel, I cannot use Passenger because I am working on Windows (please forgive me for this deadly sin!)
Which of the following is a better approach?
1. Deploy multiple small Rails applications, using a virtual server and a pair of Mongrels for each application.
2. Deploy only one big Rails application.
I am worried about the number of running mongrels and the memory/cpu load.
I'd suggest deploying a monolithic Rails application.
I use the request_routing plugin to drive 3 domains sharing the same database from one, big Rails application.
I'm running 4 mongrels, which seems to be enough for now, but YMMV.
It depends on how many simultaneous clients you expect to have. One Mongrel serves one client at a time (until Rails 2.2), since Rails isn't currently threaded.
Two Mongrels are enough if you don't expect more than a few simultaneous users. You can raise that number by using page caching to bypass Mongrel for pages that don't have user-specific dynamic content.
The only way to be truly sure is to test the system.
In my experience you'll need at least 4 mongrels for a moderately active site of just a few users at a time.
It would seem like one application would best fit your scenario... as others have said...
A good rule of thumb is that a well-behaved Mongrel will consume about 60 MB of memory (or less). Take your total available RAM, subtract an allowance for any other services (database, memcached, etc.), and then figure out how many pieces of the pie you can carve from the remaining memory.
You can always scale them up or down from there...
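The rule of thumb above is simple arithmetic; as a sketch (the numbers are illustrative assumptions, not measurements):

```ruby
# Rough capacity estimate from the ~60 MB-per-Mongrel rule of thumb.
def max_mongrels(total_ram_mb:, reserved_mb:, per_mongrel_mb: 60)
  available = total_ram_mb - reserved_mb   # RAM left after other services
  available / per_mongrel_mb               # whole Mongrels that fit
end

# e.g. a 2 GB box with ~512 MB set aside for the database and OS:
max_mongrels(total_ram_mb: 2048, reserved_mb: 512)  # => 25
```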
It sounds like it would be a much better use of your hardware to integrate all modules into one comprehensive Rails app.
IMHO the primary weakness of Rails is the amount of resources needed to run a low or very low traffic app. On the other hand a few mongrels go a long way to serve a whole lot of traffic.
