Thank you for taking a look at this.
I'm new to Rails, unfortunately. I currently have to implement an endpoint that Superfeedr can push updates to, and that endpoint has to be in a Rails controller.
Initially it seemed to me that this should be a background job, and the rest of the web tends to agree, but I'm being pressured to implement this as a Rails controller, which confuses me. I'm not certain how to include EventMachine in a request/response cycle.
I know the web is full of examples, but none really answers my question of how to route this. I have no idea.
I have a Rails controller called Superfeeds. I want Superfeedr to push updates to something like myrailsapp/superfeeds/
Inside Feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Basically, the controller called Feeds just needs to receive the information and pass it along. This confuses me, however, because it seems to require implementing something that is a long-running process inside of a Rails controller, and I'm not even sure that can work.
Does anyone know of a way this has been done with Rails, without running EventMachine as a background job? In the end I really just need to know that this is possible.
-L
Inside feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Why not do all the work in the one controller? If you're trying to separate out different concerns, you could even use two models - for instance, one to do the inspecting/parsing and one to handle the persisting. But rarely would you need or want to pass data from controller to controller.
I am not certain how to include EventMachine in a request/response cycle.
I've never used Superfeedr myself, but I glanced at the docs quickly - are you using the XMPP or the PubSubHubbub client? I assume the latter? If so, you want to do the persistence (and any other time-consuming processing) asynchronously (outside the request/response cycle), right?
If you are using an EventMachine-based webserver such as Thin, basically every request cycle is run within an EM reactor, so you can make use of EM's facilities for offloading tasks, such as the deferred thread pool. For an example of this in action, check out Enigmamachine, in particular here. (I believe that, in addition, your DB client library needs to be asynchronous.)
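To make the idea of the deferred thread pool concrete, here is a plain-Ruby sketch of the pattern EM.defer implements: slow work is handed to background threads so the request cycle can return quickly. Note that this is not EventMachine's API - the DeferredPool class and its method names are made up purely to illustrate the shape of the technique.

```ruby
# A plain-Ruby sketch of the idea behind EM.defer: hand slow work to a
# small pool of background threads so the request cycle can return quickly.
# DeferredPool and its methods are illustrative, not a real API.
class DeferredPool
  def initialize(size = 2)
    @jobs = Queue.new
    @workers = Array.new(size) do
      Thread.new do
        while (job = @jobs.pop)
          work, callback = job
          callback.call(work.call) # run the slow work, then its callback
        end
      end
    end
  end

  # Enqueue slow work plus a callback that receives its result.
  def defer(work, callback)
    @jobs << [work, callback]
  end

  # Push one nil per worker so each loop exits, then wait for them.
  def shutdown
    @workers.size.times { @jobs << nil }
    @workers.each(&:join)
  end
end

pool    = DeferredPool.new
results = Queue.new
# The lambda stands in for a slow task such as a DB write.
pool.defer(-> { 21 * 2 }, ->(r) { results << r })
deferred_result = results.pop
pool.shutdown
```

With EM.defer the reactor manages the pool for you and runs the callback back on the reactor thread; the point here is only that the HTTP response does not have to wait on the slow work.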
Related
I'm creating a new Rails app that works with the same Postgres DB as my API.
The API and the app have a couple of common models.
Example:
The API and the app both have a Connection model.
The app will create a simple connection in the DB, and the API will do the rest (like using the connection for many things).
So I wanted to use Hair Trigger, which lets you attach behavior to a DB insert or update so that it is executed by the database itself.
But I would like to execute a method from one of my helpers,
like this:
class Connection < ActiveRecord::Base
  trigger.after(:insert) do
    SomeObject.method()
  end
end
I can't use after_create because the creation is not done on the same server (but it is the same DB).
Any ideas?
Thanks!
The reason Hair Trigger only supports SQL is that its purpose is to help you maintain database triggers (https://www.postgresql.org/docs/9.6/static/sql-createtrigger.html).
Database triggers are decoupled from your application and can only run SQL inside your database. They won't be able to trigger any code inside your application.
A solution to your problem would be to use events. If you have ever worked with JavaScript you might have come across Event and EventListener before. The purpose of events is to decouple your code. One part of your application sends an event when an action has been taken, e.g. something was inserted into the database. Other parts of your application, as many as you like, can then subscribe to these events and take the necessary actions.
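Stripped of any gem specifics, the pattern looks roughly like this in plain Ruby (EventBus and its method names are made up for illustration; the key point is that the publishing side never knows who is listening):

```ruby
# A minimal sketch of publish/subscribe: one side broadcasts a named
# event, any number of listeners react to it, and neither side knows
# about the other. Names here are illustrative, not a real gem's API.
class EventBus
  def initialize
    @listeners = Hash.new { |hash, event| hash[event] = [] }
  end

  # Register a handler for a named event.
  def subscribe(event, &handler)
    @listeners[event] << handler
  end

  # Broadcast an event with a payload to every registered handler.
  def publish(event, payload)
    @listeners[event].each { |handler| handler.call(payload) }
  end
end

bus      = EventBus.new
received = []

# e.g. some other part of the app reacts when a connection row appears
bus.subscribe(:connection_created) { |attrs| received << attrs[:id] }

bus.publish(:connection_created, id: 7)
```

In the cross-process setup described in the question, publish would hand the event to a background queue instead of invoking the handlers in-process.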
Since you are working with Rails, I would suggest you take a closer look at the Wisper gem (github.com/krisleech/wisper). If your app and API, as you describe, live in different processes (or on different servers), you should look into using a background worker to handle the events asynchronously. A common approach is wisper-sidekiq (github.com/krisleech/wisper-sidekiq), which enables you to trigger the events asynchronously using Sidekiq as your background worker.
For a good introduction to the subject, here's a guide to get you started (I'm not the author): http://www.g9labs.com/2016/06/23/rails-pub-slash-sub-with-wisper-and-sidekiq/
There might be easier ways to solve the issue you are facing, but you will need to post some more details about the use case for us to give you better pointers.
I'm working on a web app with the client side made in Angular and the backend made in Ruby on Rails. The app will need to show a list of articles (dynamically generated data).
Here’s what I’m pondering about.
Online tutorials on building Angular apps coupled with Ruby on Rails are based on the following model of interaction between the client and the server: first, the client sends a request to the server, in response to which the server sends all the building blocks required to start up Angular; then Angular requests all the missing data from the server. In my case, first Angular starts up, then it requests the list of articles.
That's two request-response cycles, as illustrated by the following diagram.
What if, on the other hand, the server sent the data necessary to display the initial view during the very first response? In my case, what if the first response also contained the first batch of articles, to be somehow imported into Angular? Then there would be only one request-response cycle, as shown in the following schematic:
Is this a good idea or a terrible one? And if it is not absolutely terrible, what is the way to import Rails instance variables (e.g. the first batch of articles sent as @articles) into Angular while it starts up?
(I found similar questions discussed — though very briefly and without any consensus reached — here and here.)
=======================
UPDATE:
OK, here is another StackOverflow discussion about this:
How to bootstrap data as if it were fetched by a $resource service in Angular.js
Is this a good idea or a terrible one?
I'd argue it's a Good Idea. It will improve your app's performance and requires minimally invasive changes. You could even configure your Angular app to conditionally make the requests in case the data isn't available on page load for some reason.
The Gon gem makes it trivial to use your controller instance vars in your views (as JS).
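Conceptually, what Gon (and any similar bootstrapping approach) does is serialize the controller's data to JSON and emit a script tag that exposes it to the client before Angular boots. A rough sketch of the mechanism, with a made-up helper name and global variable:

```ruby
require "json"

# A rough sketch of what a gem like Gon does under the hood: serialize
# controller data to JSON and emit a <script> tag that makes it available
# on `window` before Angular starts. The helper name and the
# `window.bootstrappedData` global are made up for illustration.
def bootstrap_script_tag(data)
  "<script>window.bootstrappedData = #{JSON.generate(data)};</script>"
end

articles = [{ id: 1, title: "First" }, { id: 2, title: "Second" }]
tag = bootstrap_script_tag(articles: articles)
```

In a real view the JSON must also be escaped safely (a closing </script> inside an article title would break the page); Gon handles that for you. On the Angular side, the data can then be read from the global during startup instead of issuing a second request.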
I'm a newbie to writing service-oriented applications, so this might be a trivial question for some.
My current setup is something like this:
1 - A base Rails app, which also contains the routes and some application logic.
2 - A few services. I have extracted these from my base Rails app. They are mostly resources that were DB-intensive or used a NoSQL solution.
So, what I have ended up doing is something like this:
In my Rails app, I have a places controller which responds to all the basic CRUD operations on places. Internally it makes an HTTP call to the places service.
def show
  req = Typhoeus::Request.new("http://127.0.0.1:7439/places/#{params[:id]}.json")
  req.run
  @places = req.response.body
end
The problem is, if I make more than one service call, how do I make sure that I have the responses for all of them before rendering the view? Also, even with one service call, how does the Rails rendering process work? For example, if the service takes a long time to respond, does the page get rendered, or does it wait indefinitely for the response?
I cannot answer your question specifically about Typhoeus as I've never used it, but I will try to answer more generally about this problem in SOA and hopefully it will be helpful.
The common thread is that the UI should be composed from many services and tolerant to the possibility that some of those services may be down or unresponsive.
You have a few options:
1) Drop down and do the composition from the browser. Use something like Backbone and make Ajax requests to each of the services. You can make many of these requests asynchronously and render each part of the page when they return - if one doesn't return, don't render that part - or have Backbone render some sort of placeholder in that region.
2) If you want to build up a model object in your controller (as in your example), you have to somehow handle timeouts and, again, use a placeholder model for whatever service is being unresponsive. The nice thing about this is that, depending on the service, you can decide how critical it is to have the data and how much time you're willing to wait before you consider it a timeout and move on.
Take, for example, the Amazon product page. It's very important to get the details about the product from its service - if you don't get that, it's probably worth throwing an error to the browser. But if the "Customers Who Purchased This Product Also Purchased..." service is not responding, it's OK to just stop waiting for it and render the page without it.
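That second option can be sketched in plain Ruby. The "services" below are stand-in blocks; in a real app they would be HTTP calls, and you would preferably use the HTTP client's own timeout option (Typhoeus accepts one) rather than Ruby's Timeout, which is a blunt instrument:

```ruby
require "timeout"

# Give each service call its own time budget and fall back to a
# placeholder when it doesn't answer in time. fetch_or_placeholder is a
# made-up helper name for illustration.
def fetch_or_placeholder(placeholder, budget_seconds)
  Timeout.timeout(budget_seconds) { yield }
rescue Timeout::Error
  placeholder
end

# The critical "service" answers within its budget, so we get the real value.
product = fetch_or_placeholder({ name: "unknown" }, 1.0) do
  { name: "Widget" }
end

# The recommendations "service" is too slow, so we render the page without it.
related = fetch_or_placeholder([], 0.1) do
  sleep 0.5
  ["other widgets"]
end
```

For the critical call you might raise instead of returning a placeholder, mirroring the Amazon example: fail the page if the product details are missing, but degrade gracefully for everything else.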
Again - I don't know Typhoeus so I'm not sure how to manage this using it, but hopefully this helps. Good luck!
I'm trying to write an application that reads data from a View and uses a controller to communicate (put / get data from a channel) with an EventMachine instance.
The design I have right now is that I will persist the EventMachine reactor and a dictionary
that contains the ID for the event machine loop and references to the channel in/out. The event loop will connect to an IRC server and get/send requests driven by the front end.
Currently I have a view that posts to a controller which references a fixed event machine (@@myeventmachine) and a dictionary of connections (@@connections) that I defined in an initializer. The problem now seems to be that the controller/view are disposed of after a single execution, so I cannot store connection state in them (or can I?).
The whole idea of having something global in the application is making me cringe, though, so I wonder if there's a better, more "Rails-like" way of persisting that information. This is my first Rails/Ruby project, so I'm a bit lost here. Most of the information I found deals with how to use the asynchronous nature of EventMachine, not with how to persist the reactor/channels across many instances of the view/controller.
I am running Ruby on Rails 3 and I have almost completed my application. All controllers, views, and models are implemented for my needs (login function, logout function, new-user creation function, mailer, ...). I have also created a layout (mostly using sub-layouts and 'content_for') and implemented some AJAX functions.
So, what should I do now (apart from deployment) in order to improve and complete my application?
One doubt I have, for example, is whether to use the view files as they are or to create a dedicated controller only to handle pages. That is, is it good to create a "User page" that renders the 'views/user/show.html.erb' and 'views/user/edit.html.erb' templates in one page, instead of using the view files separately and reaching them via link_to?
Of course there are other doubts that, maybe, I don't know about yet. I would appreciate it if you could share your thoughts and make suggestions.
Is this something that you're going to keep and maintain for a while? If so my advice would be to test it. Look into Test::Unit or RSpec (which I prefer) and test your models and controllers. You'll no doubt want to go back and refactor, and it'll be a lot easier if you have a healthy set of tests validating that everything still works as expected.
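To make that concrete, here is a minimal sketch of the kind of test being recommended. The User class is a stand-in, not from the question; in a Rails app you would exercise your ActiveRecord models and controllers the same way (Minitest ships with Ruby, and RSpec offers an alternative syntax for the same idea):

```ruby
require "minitest/autorun"

# A tiny example of the kind of test worth having before refactoring.
# User here is a plain-Ruby stand-in for a real model.
class User
  attr_reader :email

  def initialize(email)
    @email = email
  end

  # A deliberately simple validity check for illustration.
  def valid?
    !@email.to_s.strip.empty? && @email.to_s.include?("@")
  end
end

class UserTest < Minitest::Test
  def test_accepts_a_well_formed_email
    assert User.new("alice@example.com").valid?
  end

  def test_rejects_a_blank_email
    refute User.new("").valid?
  end
end
```

Once tests like these exist, refactoring becomes much less risky: run the suite after every change and you immediately know whether behavior regressed.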