Ruby service oriented architecture - how to ensure synchronization?

I'm a newbie to writing service-oriented applications, so this might be a trivial question for some.
My current setup is something like this:
1 - A base Rails app. It also contains the routes and some application logic.
2 - A few services. I have extracted these from my base Rails app. They are mostly resources that were DB-intensive or used a NoSQL solution.
So, what I have ended up doing is something like this:
In my Rails app, I have a places controller which responds to all the basic CRUD operations on places. Internally it makes an HTTP call to the places service.
def show
  # Double quotes are needed for string interpolation; single quotes would send the literal '#{params[:id]}'.
  req = Typhoeus::Request.new("http://127.0.0.1:7439/places/#{params[:id]}.json")
  req.run
  @places = req.response.body
end
The problem is, if I make more than one service call, how do I make sure that I have the responses for all of them before rendering the views? Also, even with one service call, how does the Rails rendering process work? For example, if the service takes a long time to respond, does the page get rendered anyway, or does it wait indefinitely for the response?

I cannot answer your question specifically about Typhoeus as I've never used it, but I will try to answer more generally about this problem in SOA and hopefully it will be helpful.
The common thread is that the UI should be composed from many services and tolerant of the possibility that some of those services may be down or unresponsive.
You have a few options:
1) Drop down and do the composition from the browser. Use something like Backbone and make Ajax requests to each of the services. You can make many of these requests asynchronously and render each part of the page when they return - if one doesn't return, don't render that part - or have Backbone render some sort of placeholder in that region.
2) If you want to build up a model object in your controller (as in your example), you have to somehow handle timeouts and, again, use a placeholder model for whatever service is being unresponsive (there's a rough sketch of this at the end of this answer). The nice thing about this is that, depending on the service, you can decide how critical it is to have the data and how much time you're willing to wait before you consider it a timeout and move on.
Take, for example, the Amazon product page. It's very important to get the details about the product from its service - if you don't get that, it's probably worth throwing an error to the browser. But if the "Customers Who Purchased This Product Also Purchased..." service is not responding, it's OK to just stop waiting for it and render the page without it.
Again - I don't know Typhoeus so I'm not sure how to manage this using it, but hopefully this helps. Good luck!
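To make option 2 a bit more concrete, here is a rough Typhoeus-based sketch, added for illustration rather than taken from the original answer; the second service URL, the timeout values and the empty-array placeholder are all assumptions:

def show
  hydra = Typhoeus::Hydra.new

  # Timeout units (seconds vs. milliseconds) differ between Typhoeus versions - check the gem's docs.
  place_req  = Typhoeus::Request.new("http://127.0.0.1:7439/places/#{params[:id]}.json", timeout: 2)
  review_req = Typhoeus::Request.new("http://127.0.0.1:7440/reviews/#{params[:id]}.json", timeout: 2)

  hydra.queue(place_req)
  hydra.queue(review_req)
  hydra.run # blocks until every queued request has completed or timed out

  # Critical service: give up if it didn't respond in time.
  raise "places service unavailable" unless place_req.response.success?
  @place = JSON.parse(place_req.response.body)

  # Non-critical service: fall back to a placeholder so the view still renders.
  @reviews = review_req.response.success? ? JSON.parse(review_req.response.body) : []
end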

Related

Angular and Ruby on Rails. Is it worth importing instance variables into Angular (and if so, how)?

I'm working on a web app with the client side made in Angular and the backend made in Ruby on Rails. The app will need to show a list of articles (dynamically generated data).
Here's what I'm pondering.
Online tutorials on building Angular apps coupled with Ruby on Rails are based on the following model of interactions between the client and the server. First, the client sends a request to the server, in response to which the server will send all the building blocks required to start up Angular, and then Angular will request all the missing data from the server. In my case, first Angular starts up then it requests a list of articles.
That's two request-response cycles, as illustrated by the following diagram.
What if, on the other hand, the server sent the data necessary to display the initial view during that very first response? In my case, what if the first response also contained the first batch of articles to be somehow imported into Angular? Then there would be only one request-response cycle, as shown on the following schematic:
Is this a good idea or a terrible one? And if it is not absolutely terrible, what is the way to import Rails' instance variables (e.g. the first batch of articles sent as @articles) into Angular as it starts up?
(I found similar questions discussed — though very briefly and without any consensus reached — here and here.)
=======================
UPDATE:
OK, here is another StackOverflow discussion about this:
How to bootstrap data as if it were fetched by a $resource service in Angular.js
Is this a good idea or a terrible one?
I'd argue it's a Good Idea. It will improve your app's performance and requires minimally invasive changes. You could even configure your Angular app to conditionally make the requests in case the data isn't available on page load for some reason.
The Gon gem makes it trivial to use your controller instance vars in your views (as JS).
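A minimal sketch of that approach with Gon (the controller, model and variable names here are assumptions, not from the answer):

# app/controllers/articles_controller.rb
class ArticlesController < ApplicationController
  def index
    @articles = Article.order("created_at DESC").limit(20)
    # Gon serializes this to JSON and exposes it to JavaScript as gon.articles,
    # provided the layout calls the include_gon helper in its <head>.
    gon.articles = @articles
  end
end

Angular can then read gon.articles when it bootstraps and only fall back to an ajax request when the data is missing.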

What's a better way to store custom logic code blocks configurable for each client in a multitenant architecture?

We are running a multitenant Rails webapp with a lot of clients. We have a kind of workflow module in which, after an action has been executed, each client needs to run a different kind of logic, and these are completely unrelated to one another. For example, after that action client A wants to update some details in our app and send out some emails, but client B wants us to run a report. To tackle this we resorted to storing Ruby code in the database keyed by client ID; after the action executes we take this code and run it with eval. I feel this is an ugly way of doing things. How can I make this better? The logic is completely arbitrary for different clients, so what do you suggest is a better way to handle this?
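For reference, a rough sketch of the pattern described above (the model and column names are assumptions): per-client Ruby snippets are stored in the database and eval'd after the workflow action runs.

class WorkflowHook < ActiveRecord::Base
  # columns: client_id (integer), code (text) holding the client-specific Ruby snippet
end

# Called after the workflow action finishes for a given client.
def run_post_action_hooks(client)
  WorkflowHook.where(client_id: client.id).find_each do |hook|
    eval(hook.code) # the step the question calls ugly - arbitrary code, hard to test or sandbox
  end
end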

How to use EventMachine (superfeedr-ruby gem) within a Rails controller?

Thank you for taking a look at this.
I am new to rails, unfortunately. I currently have to implement an endpoint that Superfeedr can push updates to, but that endpoint has to be in a rails controller.
Initially it seemed to me that this should run as a background job, and the rest of the web tends to agree, but I am being pressured to implement this as a Rails controller - which confuses me. I am not certain how to include EventMachine in a request/response cycle.
I know the web is full of examples, but none really answer my question of how to route this. I have no idea.
I have a Rails controller, called Superfeeds. I want Superfeedr to push updates to something like myrailsapp/superfeeds/
Inside feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Basically, the controller called Feeds just needs to receive the information and pass it along. This confuses me, however, because it seems to require implementing a long-running process inside a Rails controller - and I am not even sure that can work.
Does anyone know of a way that this has been done with rails, but not using EventMachine as a background job? In the end I really just need to know that this is possible.
-L
Inside feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Why not do all the work in the one controller? If you're trying to separate out different concerns, you could even use two models - for instance, one to do the inspecting/parsing and one to handle the persisting. But rarely would you need or want to pass data from controller to controller.
I am not certain how to include EventMachine in a request/response cycle.
Never used superfeedr myself, but glanced at the docs quickly - are you using the XMPP or PubSubHubbub client? I assume the latter? If so, you want to do the persistence (and any other time-consuming process) asynchronously (outside the request/response cycle), right?
If you are using an EventMachine-based webserver such as Thin, basically every request cycle is run within an EM reactor. So you can make use of EM's facilities for offloading tasks such as the deferred thread pool. For an example of this in action, check out Enigmamachine, in particular here. (I believe that in addition your db client library needs to be asynchronous.)
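As a rough sketch of that idea (assuming an EventMachine-based server such as Thin; the controller, model and payload field names are all assumptions), the controller could acknowledge Superfeedr's push immediately and hand the slow persistence work to EM's deferred thread pool:

class SuperfeedsController < ApplicationController
  def create
    payload = request.raw_post

    EM.defer do
      # Runs on a thread from EM's pool, outside the request/response cycle.
      entries = JSON.parse(payload)["items"] || []
      entries.each do |entry|
        FeedEntry.create!(title: entry["title"], url: entry["permalinkUrl"])
      end
    end

    head :ok # respond right away so Superfeedr doesn't retry the push
  end
end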

Need help designing my first Rails app! (involves Twitter, databases, background processes)

Firstly let me mention that I'm new to web-frameworks.
I have to write my first web-app for a Uni project. I spent two weeks learning Grails and Django. Started working with Rails yesterday and loved it. So I've decided to go with it and discard my work in the other frameworks.
About the app
It's supposed to be a Twitter app that utilizes Twitter's Streaming API to record tweets which match a set of specified filters. (I'm going to use the Tweetstream gem which takes care of connecting to Twitter and capturing matching tweets).
The app's web interface should have the following functionality -
Creating new requests
The user inputs a set of filter parameters (keywords to track) & URL/username/password of an existing PostgreSQL or MySQL database.
When a request is created, the web-app spawns a background Ruby process. This process connects to Twitter via the Tweetstream gem. It also connects to the database specified by the user to store received tweets.
View/terminate existing requests
The user should be able to see a list of requests that are running as background processes by visiting a URL such as /listRequests.
See further details about a process/terminate the process
The user should be able to go to a URL such as /requests/1/detail to view some details (e.g. how long the request has been running, the number of tweets captured, etc). The user should also be able to terminate the process.
My inexperience is showing as I'm unable to comprehend -
what my models should be (maybe Request should be a model. Tweet doesn't need to be a model as it's not being stored locally)
how I'm going to connect to remote databases.
how I can create background processes (backgroundrb??) and associate them with request objects so that I can terminate them when the user asks.
At the end of the day, I've got to build this myself, so I'm not asking for you to design this for me. But some pointers in the right direction would be extremely helpful and appreciated!
Thanks!
Hmm.
Since the web app is just a thin wrapper around the heavy-lifting processes, it might be more appropriate to just use something like Sinatra here. Rails is a big framework that pulls in lots of stuff that you won't need for this project, even though it will work.
Does the "background process" requirement here strictly mean a separate process, or does it just mean concurrency? TweetStream uses the EventMachine gem to handle updates as they come, which uses a separate thread for each connection. It would be quite possible to spawn the TweetStream clients from a simple Sinatra web app, keep them in a big array, have them all run concurrently with no trouble, and simply run stop on a given client when you want it to stop. No need for a database or anything.
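A rough sketch of that approach, added for illustration - the routes, environment variable names and the keyword-keyed hash are assumptions, and it presumes an EventMachine-based server such as Thin so the streams run inside the existing reactor:

require 'sinatra'
require 'tweetstream'

TweetStream.configure do |config|
  config.consumer_key       = ENV['TWITTER_CONSUMER_KEY']
  config.consumer_secret    = ENV['TWITTER_CONSUMER_SECRET']
  config.oauth_token        = ENV['TWITTER_OAUTH_TOKEN']
  config.oauth_token_secret = ENV['TWITTER_OAUTH_TOKEN_SECRET']
end

CLIENTS = {} # keyword => running TweetStream client

post '/requests' do
  client = TweetStream::Client.new
  client.track(params[:keywords]) { |status| puts status.text }
  CLIENTS[params[:keywords]] = client
  "tracking #{params[:keywords]}"
end

delete '/requests/:keywords' do
  client = CLIENTS.delete(params[:keywords])
  client.stop if client
  "stopped"
end

Each POST starts a new tracked stream, and the DELETE route looks the client up and calls stop on it, just as described above.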
I'm not sure exactly what your prof is looking for you to do here, but MVC doesn't really fit. It's better to work with the requirements than to mush it into a design pattern that doesn't match it :/
Even so, I <3 Rails! Definitely get on that when you're working primarily with objects being represented in a database :)
Quite a project. Most of what will be challenging is not related to rails itself, but rather the integration with background processes. backgroundrb is a bit out of fashion. The last commit on the main github project is over a year ago, so it's likely not up to snuff for Rails 3. Search around and evaluate your options. Resque is popular, but I'm not sure if your real-time needs match with its queue-based structure.
As for your app, I see only a single model, but don't call it Request. That's a reserved name in Rails. Perhaps a Search model, or something along those lines.
Connecting to different databases is straightforward but will require direct configuration of your ActiveRecord classes at runtime rather than through database.yml.
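A rough sketch of what that runtime configuration could look like (the model name, table name and credential values are assumptions; in practice they would come from the user-supplied record):

class CapturedTweet < ActiveRecord::Base
  self.table_name = "tweets"
end

# Point this one model at the user-supplied database instead of database.yml.
CapturedTweet.establish_connection(
  adapter:  "postgresql",     # or "mysql2"
  host:     "db.example.com", # values entered by the user when the request was created
  database: "tweets_db",
  username: "user",
  password: "secret"
)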

Best Practices for Optimizing Dynamic Page Load Times (JSON-generated HTML)

I have a Rails app where I load up a base HTML layout and I fill in the main content with rows of divs from JSON. This works in 2 steps:
Render the HTML
Ajax call to get the JSON
This has the benefit of being able to cache the HTML layout which doesn't change much, but it seems to have more drawbacks:
2 HTTP requests
The HTML isn't that complex; the generated HTML is where all the work is done, so I'm probably not saving that much time.
Each request in my specific case requires that we check the current user, their roles, and some things related to that user, so those 2 calls are somewhat involved.
Granted, memcached will probably solve a lot of this, but I am wondering if there are some best practices here. I'm thinking I could do this:
Render the first page of JSON inline, in a script block, along with the HTML. This would cut out those 2 server calls requiring user authentication. And, assuming 80% of the time you don't need to make the second ajax call (pagination/sorting in this case), that seems like a fairly good solution.
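Concretely, the inlined version could look something like this in the view (a sketch - the @rows variable and window.initialRows name are assumptions, and any user-supplied strings should be escaped, e.g. with json_escape, before being marked html_safe):

<script type="text/javascript">
  // First page of rows embedded in the initial HTML response, so the
  // client-side code can render without a second authenticated request.
  window.initialRows = <%= @rows.to_json.html_safe %>;
</script>

The client-side code can then check for window.initialRows before deciding whether to make the ajax call at all.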
What are your thoughts on how to approach this?
There are advantages and disadvantages to doing stuff like this. In general I'd say it's only a good idea if whatever you're delaying via an ajax call would delay the page load enough to annoy the end user for most of the use cases on your page.
A good example of this is browsing a repository on github. 90% of the time all you want is to navigate the files, so they use an ajax load to fill in the commit messages per file after the page load.
It sounds like you're trying to do this to speed things up or do something fancy for your users, but I think you should consider instead what part is slow, and what page-load speed (and maybe for what information on that page) your users are expecting. As you say, using memcached or fragment caching might well give you the improvements you're looking for.
Are you using some kind of monitoring tool? I'm using the free version of New Relic RPM on Heroku. It gives a lot of data on request times for individual controller actions. Data like that could help you focus your optimization process.
