Best Practices for Optimizing Dynamic Page Load Times (JSON-generated HTML) - ruby-on-rails

I have a Rails app where I load a base HTML layout and fill in the main content with rows of divs generated from JSON. This works in 2 steps:

1. Render the HTML
2. Ajax call to get the JSON

This has the benefit of being able to cache the HTML layout, which doesn't change much, but it seems to have more drawbacks:

- 2 HTTP requests
- The HTML isn't that complex; the generated HTML is where all the work is done, so I'm probably not saving much time.
- Each request in my specific case requires that we check the current user, their roles, and some things related to that user, so those 2 calls are somewhat involved.

Granted, memcached will probably solve a lot of this, but I am wondering if there are some best practices here. I'm thinking I could do this:

Render the first page of JSON inline, in a script block, along with the HTML. This would cut out those 2 server calls requiring user authentication. And, assuming 80% of the time you don't need to make the second Ajax call (pagination/sorting in this case), that seems like a fairly good solution.
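The inline-JSON idea can be sketched with plain ERB outside of Rails; the row data and the element id here are invented for illustration:

```ruby
require "erb"
require "json"

# Hypothetical first page of rows, as the controller would have loaded it
first_page = [{ "id" => 1, "title" => "First row" }]

# Embed page 1 as JSON directly in the rendered HTML, so the initial
# display needs no extra authenticated Ajax round-trip; later pages
# (sorting/pagination) still go through the normal JSON endpoint.
template = ERB.new(<<~HTML)
  <script type="application/json" id="initial-rows">
  <%= JSON.generate(first_page) %>
  </script>
HTML

html = template.result(binding)
puts html
```

Client-side code would then read the contents of `#initial-rows` on page load instead of firing the first Ajax request.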
What are your thoughts on how to approach this?

There are advantages and disadvantages to doing stuff like this. In general I'd say it's only a good idea if whatever you're delaying via an Ajax call would otherwise slow the page load enough to annoy the end user in most of the use cases on your page.
A good example of this is browsing a repository on GitHub. 90% of the time all you want is to navigate the files, so they use an Ajax load to fill in the commit messages per file after the page load.
It sounds like you're trying to do this to speed things up or do something fancy for your users, but I think you should instead consider which part is slow, and what page-load speed your users are expecting (and maybe for which information on the page). As you say, using memcached or fragment caching might well give you the improvements you're looking for.

Are you using some kind of monitoring tool? I'm using the free version of New Relic RPM on Heroku. It gives a lot of data on request times for individual controller actions. Data like that could help you focus your optimization process.

Related

Some questions on Unobtrusive JavaScript

I am using Ruby on Rails and I have heard of “Unobtrusive JavaScript” (UJS). After (but even before) my previous question, I ask myself:
Are there commonly used patterns, rules, practices, or techniques for responding pragmatically to JavaScript and HTML Ajax requests? If so, what are they? For example, what responses should be returned? What kind of data? Is there a standard?
Practically speaking, how should my controller's respond_to depend on the request format? That is, when should my application respond with format.js, format.html, or format.whatever in controllers when using the Rails framework?
On these matters, what is the solution of the Rails community and/or of the “general” public? What do you use?
I don't know of any named patterns, but we take a "per feature" stance.
You'll have different use cases for different features. For the most part, you can handle these using the remote: true option (which just uses the Ajax handler in UJS). That allows you to either capture the response with .on("ajax:success", ...) in your asset JS, or render a .js.erb file on the backend.
The bottom line is we do whatever produces the least amount of code. We always look at it from the perspective of future development: in the future, will you get confused by what we're doing, or is it the logical way?
I suppose we could probably find a more structured way of handling this, but with the varying amounts of data coming back, we prefer to handle each feature in its own way.
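To make the format question concrete, here is a plain-Ruby stand-in for what respond_to does: pick one representation of the same resource per requested format. In real Rails the format comes from the URL extension or Accept header, and format.js would render a .js.erb template; the comment data and jQuery selector below are invented:

```ruby
require "json"

# Toy stand-in for Rails' respond_to: one action, three representations.
def comment_response(format, comment)
  case format
  when :html then "<div class=\"comment\">#{comment[:body]}</div>"  # full-page fallback
  when :js   then "$('#comments').append('#{comment[:body]}');"     # what a .js.erb might emit
  when :json then JSON.generate(comment)                            # raw data for client-side code
  else raise ArgumentError, "unsupported format: #{format}"
  end
end

comment = { id: 1, body: "Nice post" }
puts comment_response(:json, comment)
```

The per-feature choice the answer describes is essentially: which of these branches does this feature actually need, and which produces the least code?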
--
Code
I would personally put code efficiency and functionality first.
Instead of trying to make one pattern fit all cases, I'd look at what you're trying to achieve and write code to get it working. If you can refactor after that, great! Otherwise, I'd invest my energy into getting the functionality working.

Several PhantomJS calls in a RoR application

I have a RoR application that, given a set of N URLs to parse, performs N shell calls to a given PhantomJS (actually CasperJS) script.
Right now I have something like this:
urls_to_parse = ['first.html', 'second.html', ...]
urls_to_parse.each do |url|
  parse_results = `casperjs parse_urls.js '#{url}'`
end
I have never done this before (launching shell scripts from a RoR/Ruby application), so I am wondering if this is a good approach and what alternatives I may have. So, why do I use PhantomJS in combination with RoR?
I basically have an API (RoR app) that keeps receiving URLs that need to be parsed. They need to be parsed in a headless-browser manner: the page actually needs to be rendered (that's why I don't use Nokogiri or any other HTML parser).
I am concerned about performance when putting this into production, and before going forward I would like to know whether I am doing this correctly or whether I can do it in a better way.
It's possible. I thought about doing the same thing, but even with a headless browser I would be really concerned about the speed and bandwidth your server is going to need. I use Casper in conjunction with Python and it works very well for me. I read the stdout spat back from firing the Casper scripts, but I don't parse and scrape on the fly like you're talking about doing. I would imagine it's okay, but ideally you'd already have a cached database of results when people search. Maybe if it is a very, very basic search you'll be okay, but I don't know.
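On the mechanics of shelling out: one small improvement over backticks is the standard library's Open3, which passes argv without a shell (so no quoting or injection worries with odd URLs) and gives you the exit status. A sketch, with `echo` standing in for the CasperJS call so it runs anywhere:

```ruby
require "open3"

urls_to_parse = ["first.html", "second.html"]

results = urls_to_parse.map do |url|
  # In the real app this would be ("casperjs", "parse_urls.js", url);
  # `echo` stands in here so the sketch is runnable without CasperJS.
  stdout, status = Open3.capture2("echo", url)
  raise "parse failed for #{url}" unless status.success?
  stdout.strip
end

p results
```

Checking `status.success?` also surfaces script failures that backticks would silently swallow.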

Submitting dynamic forms on another website

I'm trying to submit input to the form, and parse the results in a RoR app. I've tried using mechanize, but it has some trouble with the way the page dynamically updates the results. It doesn't help that most fields are hidden.
Is there any way to get Mechanize to do what I'm looking for, or are there any alternatives to Mechanize which I can use?
So whenever I want to do something like this, I go with the selenium-webdriver gem. It spawns a real browser (all major brands are supported) and lets you control it with Ruby code. You can do almost everything a real user could do. In addition, you have access to the (rendered) DOM, so JavaScript-generated content is not a problem.
Performance is much slower than with pure library clients, so it's not a good fit for use in a web request cycle.
http://rubygems.org/gems/selenium-webdriver

Ruby service oriented architecture - how to ensure synchronization?

I'm a newbie to writing service-oriented applications, so this might be a trivial question for some.
My current setup is something like this:
1 - A base rails app. Also contains the routes and some application logic.
2 - A few services. I have extracted these from my base rails app. They are mostly resources that were DB extensive or used a no-sql solution.
So, what I have ended up doing is something like this:
In my Rails app, I have a places controller which responds to all the basic CRUD operations on places. Internally it makes an HTTP call to the places service.
def show
  req = Typhoeus::Request.new("http://127.0.0.1:7439/places/#{params[:id]}.json")
  req.run
  @places = req.response.body
end
The problem is, if I make more than one service call, how do I make sure that I have the responses for all of them before rendering the view? Also, even with one service call, how does the Rails rendering process work? For example, if the service takes a long time to respond, does the page get rendered, or does it wait indefinitely for the response?
I cannot answer your question specifically about Typhoeus as I've never used it, but I will try to answer more generally about this problem in SOA and hopefully it will be helpful.
The common thread is that the UI should be composed from many services and be tolerant of the possibility that some of those services may be down or unresponsive.
You have a few options:
1) Drop down and do the composition from the browser. Use something like Backbone and make Ajax requests to each of the services. You can make many of these requests asynchronously and render each part of the page when they return - if one doesn't return, don't render that part - or have Backbone render some sort of placeholder in that region.
2) If you want to build up a model object in your controller (as in your example), you have to somehow handle timeouts and, again, use a placeholder model for whatever service is being unresponsive. The nice thing about this is that, depending on the service, you can decide how critical it is to have the data and how much time you're willing to wait before you consider it a timeout and move on.
Take, for example, the Amazon product page. It's very important to get the details about the product from its service - if you don't get that, it's probably worth throwing an error to the browser. But if the "Customers Who Purchased This Product Also Purchased..." service is not responding, it's OK to just stop waiting for it and render the page without it.
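To make option 2 concrete, here is a toy sketch using plain threads, with sleeps standing in for HTTP latency (Typhoeus itself ships a Hydra class for running requests in parallel, but the timeout-plus-placeholder idea is library-independent):

```ruby
# Simulated service call; the sleep stands in for HTTP latency.
def fetch_place(id)
  sleep 0.05
  { id: id, name: "Place #{id}" }
end

ids = [1, 2, 3]

# Fire all service calls concurrently...
threads = ids.map { |id| Thread.new { fetch_place(id) } }

# ...then wait up to 0.5 s for each; fall back to a placeholder on
# timeout, so one slow service never blocks the whole page render.
places = threads.map do |t|
  t.join(0.5) ? t.value : { name: "(unavailable)" }
end

p places
```

The per-service deadline is where you encode how critical each piece of data is: a short join for "also purchased" style extras, a longer one (or a hard error) for the core resource.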
Again - I don't know Typhoeus so I'm not sure how to manage this using it, but hopefully this helps. Good luck!

Hybrid Rails Caching Options, am I reinventing something?

Well, not really reinventing; however, we have a large content-based website which handles load (after we fixed the SQL pooling issue) up to a certain point, then we just run out of steam. A lot of this is due to bad code we are trying to fix up, but a lot is just due to the level of requests, etc.
We were considering page caching because, well, it's damn quick (yep... :D), but that doesn't work because we have certain fragments within the page which are specific to the logged-in user. But not all hope is lost...
I was wondering if it would be ideal to do the following:
Page level caching, with sweepers to clean out the pages when content is updated.
Replace the user-specific fragments with placeholders (and perhaps generic content like... 'View your Account, or Sign Up Here')
When the user's page loads, fire off an async request (Ajax, or as some would call it, AJAH) which requests the 'dynamic' fragment, then replace the content placeholders with this fragment.
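In miniature, the placeholder step might look like this, using plain ERB (the ids and copy are invented):

```ruby
require "erb"

# The page-cached layout ships a generic placeholder for everyone...
cached_page = ERB.new(<<~HTML).result
  <div id="account-box">
    <a href="/signup">View your Account, or Sign Up Here</a>
  </div>
HTML

# ...and a separate, uncached endpoint renders the per-user fragment
# that the Ajax call swaps into #account-box after page load.
def account_fragment(user)
  "<div>Welcome back, #{user[:name]}!</div>"
end

puts cached_page
puts account_fragment(name: "Ryan")
```

The cached page stays identical for all visitors (so it can be served as a static file), and only the small per-user endpoint hits the full Rails stack.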
The main issue I can see with this is that users with JS turned off wouldn't see the content, but I honestly don't think we would be too affected by this, and IMHO people who disable javascript, for the most part, are idiots (yes, I said it!).
I'd also be interested to know if I'm (no doubt) reinventing something, and if anyone can point me to a site which is already doing something like this it would be appreciated.
Thanks awesome SO community!
Ryan Bates covered this technique in Railscast 169: Dynamic Page Caching. It's worth taking a look at.
Have you thought about server-side fragment caching? I've used it extensively, and it couldn't be more awesome. You could simply cache the content 'fragments' and render normally whatever depends on the logged in user.
There are many good resources for fragment caching, I'd start at the documentation:
http://api.rubyonrails.org/classes/ActionController/Caching/Fragments.html
Also very good from the Scaling Rails series:
http://railslab.newrelic.com/2009/02/09/episode-7-fragment-caching
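The core mechanic of fragment caching can be shown with a toy store; Rails does the equivalent against memcached or a file store, and the key and markup below are illustrative:

```ruby
# Minimal stand-in for fetch-style fragment caching: render the
# fragment once, serve the stored copy on every later request.
store = {}
renders = 0

fetch_fragment = lambda do |key, &render|
  store[key] ||= render.call
end

2.times do
  fetch_fragment.call("places/sidebar") do
    renders += 1                 # the expensive render happens only once
    "<ul><li>Popular places...</li></ul>"
  end
end

puts "rendered #{renders} time(s)"
```

In a real app the interesting part is key choice and expiry (sweepers or key-based invalidation), which is exactly what the linked resources cover.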
When serving static or cached content starts to slow down the real working processes, put a reverse proxy in front of your application. It will free your processes to do real work and reduce slowdowns due to file-system caches becoming inefficient. It will also help you make "client-side caching" shareable across multiple visitors. Have a look at the fantastic screencast series about caching and scaling by New Relic: http://railslab.newrelic.com/scaling-rails
