I'm trying to write an application that reads data from a View and uses a controller to communicate (put / get data from a channel) with an EventMachine instance.
The design I have right now is that I will persist the EventMachine reactor and a dictionary
that maps each event loop's ID to references to its in/out channels. The event loop will connect to an IRC server and get/send requests driven by the front end.
Currently I have a view that posts to a controller that references a fixed event machine (##myeventmachine) and a dictionary of connections (##connections) that I defined in an initializer. The problem now seems to be that the controller/view are disposed of after a single request, so I cannot store connection state in them (or can I?).
The whole idea of having something global in the application is making me cringe though, so I wonder if there's a better, more "rails like" way of persisting that information. This is my first rails/ruby project so I'm a bit lost here. Most of the information I found deals with how to use the asynchronous nature of EventMachine, and not how to persist the reactor/channels across many instances of the view/controller.
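For concreteness, a sketch of the kind of initializer I mean (illustrative, not my exact code):

# config/initializers/event_machine.rb
require 'eventmachine'

# Dictionary of loop ID => { :in => EM::Channel, :out => EM::Channel }.
# EM::Channel#push schedules onto the reactor, so controllers can call it
# from request threads.
$connections = {}

# Keep one reactor for the whole app; run it in its own thread so it
# outlives any single request. (Under an EM-based server like Thin the
# reactor is already running, hence the guard.)
$myeventmachine = Thread.new { EM.run } unless EM.reactor_running?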
I'm creating a new Rails APP that works with the same Postgres DB as my API.
The API and the APP have a couple of common models.
Example:
The API and the APP both have a Connection model.
The APP will create a simple connection in the DB and the API will do the rest (like using the connection for many things).
So I wanted to use Hair Trigger, which lets you hook into a DB insert or update and link it to a DB behavior that will be executed.
But I would like to execute a method from one of my helpers,
like this:
class Connection < ActiveRecord::Base
  trigger.after(:insert) do
    SomeObject.method()
  end
end
I can't use after_create because the creation action is not done on the same server (but it's the same DB).
Any ideas?
Thanks!
The reason Hair Trigger only supports SQL is that its purpose is to help you maintain database triggers (https://www.postgresql.org/docs/9.6/static/sql-createtrigger.html).
Database triggers are decoupled from your application and can only run SQL inside your database. They won't actually be able to trigger any code inside your application.
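For contrast, Hair Trigger's intended usage looks like this: the block returns a SQL string that the database itself executes, with no Ruby involved (the audit table here is purely illustrative):

class Connection < ActiveRecord::Base
  trigger.after(:insert) do
    # This string is installed as a Postgres trigger and runs in the DB.
    "INSERT INTO connection_audits (connection_id, created_at)
     VALUES (NEW.id, NOW())"
  end
end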
A solution to your problem would be to use events. If you have ever worked with JavaScript you might have come across Event and EventListener before. The purpose of events is to decouple your code: one part of your application emits an event when an action has been taken (e.g. something was inserted into the database), and as many other parts of your application as you like can subscribe to those events and take the necessary actions.
Since you are working with Rails I would suggest you take a closer look at the Wisper gem (github.com/krisleech/wisper). If your APP and API live in different processes (or on different servers), as you describe, you should look into using a background worker to handle the events asynchronously. A common approach is wisper-sidekiq (github.com/krisleech/wisper-sidekiq), which lets you process the events asynchronously with Sidekiq as your background worker.
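To make the pattern concrete, here is a minimal Wisper sketch; the class names and some_method are illustrative, not from your code. Note that a plain in-process event only fires in the process that performs the create, which is exactly why the shared background worker matters in your setup:

require 'wisper'

class ConnectionCreator
  include Wisper::Publisher

  def call(attributes)
    connection = Connection.create!(attributes)
    # Anyone subscribed to this event gets notified after the fact.
    broadcast(:connection_created, connection.id)
  end
end

class ConnectionListener
  # The method name matches the broadcast event name.
  def connection_created(connection_id)
    SomeObject.some_method(connection_id)  # your helper call goes here
  end
end

# Subscribe once, e.g. in an initializer.
Wisper.subscribe(ConnectionListener.new)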
For a good introduction to the subject, here's a guide to get you started (I'm not the author): http://www.g9labs.com/2016/06/23/rails-pub-slash-sub-with-wisper-and-sidekiq/
There might be easier ways to solve the issue you are facing but you will need to post some more details on the use case for us to give you some better pointers.
Say I have a running rails project, and now I need to add entries to its database from an outside source. This is to be done automatically once a day and can be reduced to loading data from a text file.
Now I'm wondering, what is the conventional way to do this in a Rails project? Do I create a controller method which runs once a day and how do I call it? Do I access the database from outside with something like the sequel gem?
I think it depends on your application's restrictions and business requirements.
My opinion is that both ways are good.
But I'd prefer to connect directly to the database or use some message queue, just to avoid HTTP and decrease the number of HTTP calls.
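For what it's worth, the conventional Rails answer to "once a day, load a text file" is a rake task scheduled with cron (or the whenever gem). A minimal sketch, where the file path, line format, and Entry model are all assumptions:

# lib/tasks/import.rake
namespace :import do
  desc 'Load new entries from a text file'
  task :entries => :environment do
    File.foreach('/path/to/data.txt') do |line|
      name, value = line.chomp.split("\t")
      Entry.create!(:name => name, :value => value)
    end
  end
end

Then cron runs it once a day, inside the full Rails environment, so there is no need to reach into the database from outside:

0 3 * * * cd /path/to/app && bundle exec rake import:entries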
Thank you for taking a look at this.
I am new to rails, unfortunately. I currently have to implement an endpoint that Superfeedr can push updates to, but that endpoint has to be in a rails controller.
Initially it seemed to me that this should be a background job, and the rest of the web tends to agree, but I am being pressured to implement this as a Rails controller, which confuses me. I am not certain how to include EventMachine in a request/response cycle.
I know the web is full of examples, but none really answer my question of how to route this. I have no idea.
I have a Rails controller, called Superfeeds. I want Superfeedr to push updates to something like myrailsapp/superfeeds/
Inside feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Basically, the controller called Feeds just needs to receive and pass the information along. This confuses me, however, because it seems to require implementing a long-running process inside a Rails controller, and I am not even sure that can work.
Does anyone know of a way that this has been done with Rails, but without using EventMachine as a background job? In the end I really just need to know that this is possible.
-L
Inside feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Why not do all the work in the one controller? If you're trying to separate out different concerns, you could even use two models: for instance, one to do the inspecting/parsing and one to handle the persisting. But rarely would you need or want to pass data from controller to controller.
I am not certain how to include EventMachine in a request/response cycle.
Never used Superfeedr myself, but glanced at the docs quickly - are you using the XMPP or PubSubHubbub client? I assume the latter? If so, you want to do the persistence (and any other time-consuming process) async (outside the request/resp cycle), right?
If you are using an EventMachine-based webserver such as Thin, basically every request cycle is run within an EM reactor. So you can make use of EM's facilities for offloading tasks such as the deferred thread pool. For an example of this in action, check out Enigmamachine, in particular here. (I believe that in addition your db client library needs to be asynchronous.)
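For illustration, a hedged sketch of that pattern in a controller running under Thin; FeedParser and FeedEntry are hypothetical stand-ins for the parsing and persisting pieces mentioned above:

class SuperfeedsController < ApplicationController
  def create
    payload = request.raw_post

    # Offload the slow work to EventMachine's thread pool so the
    # request/response cycle can return immediately.
    EM.defer do
      entries = FeedParser.parse(payload)         # hypothetical: inspect/parse
      entries.each { |e| FeedEntry.create!(e) }   # hypothetical: persist
    end

    head :ok
  end
end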
Firstly let me mention that I'm new to web-frameworks.
I have to write my first web-app for a Uni project. I spent two weeks learning Grails and Django. Started working with Rails yesterday and loved it. So I've decided to go with it and discard my work in the other frameworks.
About the app
It's supposed to be a Twitter app that utilizes Twitter's Streaming API to record tweets which match a set of specified filters. (I'm going to use the Tweetstream gem which takes care of connecting to Twitter and capturing matching tweets).
The app's web interface should have the following functionality -
Creating new requests
The user inputs a set of filter parameters (keywords to track) & the URL/username/password of an existing PostgreSQL or MySQL database.
When a request is created, the web-app spawns a background Ruby process. This process connects to Twitter via the Tweetstream gem. It also connects to the database specified by the user to store received tweets.
View/terminate existing requests
The user should be able to see a list of requests that are running as background processes by visiting a URL such as /listRequests.
See further details about a process/terminate the process
The user should be able to go to a URL such as /requests/1/detail to view some details (e.g. how long the request has been running, number of tweets captured, etc). The user should also be able to terminate the process.
My inexperience is showing as I'm unable to comprehend -
what my models should be (maybe Request should be a model. Tweet doesn't need to be a model as it's not being stored locally)
how I'm going to connect to remote databases.
how I can create background processes (backgroundrb??) and associate them with request objects so that I can terminate them when the user asks.
At the end of the day, I've got to build this myself, so I'm not asking for you to design this for me. But some pointers in the right direction would be extremely helpful and appreciated!
Thanks!
Hmm.
Since the web app is just a thin wrapper around the heavy-lifting processes, it might be more appropriate to just use something like Sinatra here. Rails is a big framework that pulls in lots of stuff that you won't need for this project, even though it will work.
Does the "background process" requirement here strictly mean a separate process, or does it just mean concurrency? TweetStream uses the EventMachine gem to handle updates as they come, which uses a separate thread for each connection. It would be quite possible to spawn the TweetStream clients from a simple Sinatra web app, keep them in a big array, have them all run concurrently with no trouble, and simply run stop on a given client when you want it to stop. No need for a database or anything.
I'm not sure exactly what your prof is looking for you to do here, but MVC doesn't really fit. It's better to work with the requirements than to mush it into a design pattern that doesn't match it :/
Even so, I <3 Rails! Definitely get on that when you're working primarily with objects being represented in a database :)
Quite a project. Most of what will be challenging is not related to Rails itself, but rather the integration with background processes. backgroundrb is a bit out of fashion; the last commit on the main GitHub project is over a year ago, so it's likely not up to snuff for Rails 3. Search around and evaluate your options. Resque is popular, but I'm not sure whether your real-time needs match its queue-based structure.
As for your app, I see only a single model, but don't call it Request; that's a reserved name in Rails. Perhaps a Search model, or something along that line.
Connecting to different databases is straightforward but will require configuring your ActiveRecord class directly at runtime rather than through database.yml.
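Concretely, that means calling establish_connection with the user-supplied credentials, typically on an abstract class so your app's own connection is left untouched. A sketch, where the search object's attributes are hypothetical:

# Holds the connection to the user-specified database.
class RemoteDatabase < ActiveRecord::Base
  self.abstract_class = true
end

RemoteDatabase.establish_connection(
  :adapter  => search.db_adapter,   # e.g. 'postgresql' or 'mysql2'
  :host     => search.db_host,
  :username => search.db_username,
  :password => search.db_password,
  :database => search.db_name
)

# Models for tables in the remote database inherit the connection.
class RemoteTweet < RemoteDatabase
  set_table_name 'tweets'
end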
I have a Ruby on Rails (2.3.5) application and an APE (Ajax Push Engine) server. When records are created within the Rails application, I need to push the new record out on the applicable channels to the APE server. Records can be created in the Rails app by the traditional path through the controller's create action, or by several EventMachine processes that constantly monitor various input streams and create records when they see data that meets certain criteria.
It seems to me that the best/right place to put the code that pushes the data out to the APE server (which in turn pushes it out to the clients) is in the Model's after_create hook (since not all record creations will flow through the controller's create action).
The final caveat is that I want to push a piece of formatted HTML out to the APE server (rather than a JSON representation of the data). The reasons I want to do this are: 1) I already have logic to produce the desired layout in existing partials; 2) I don't want to create a JavaScript implementation of the partials (JavaScript that takes a JSON object and builds all the HTML around it for presentation), which would quickly become a maintenance nightmare.
The problem with this is that it requires "rendering" partials from within the model (which I'm having trouble doing anyway, because the partials don't seem to have access to helpers when rendered this way).
Anyhow - just wondering what the right way to organize all of this is.
Thanks
After talking with some folks in #rails and #ape, this (pushing from the model's after_create hook) appears to be the best approach to this issue.
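For anyone finding this later, a sketch of the pattern in Rails 2.3; the APE push client, channel name, and partial path are assumptions, and extending the view with your helper modules is what gives the partial access to them:

class Record < ActiveRecord::Base
  after_create :push_to_ape

  private

  def push_to_ape
    view = ActionView::Base.new(ActionController::Base.view_paths, {})
    view.extend ApplicationHelper  # mix in whatever helpers the partial needs
    html = view.render(:partial => 'records/record',
                       :locals  => { :record => self })
    APE_CLIENT.push('records_channel', html)  # hypothetical APE push client
  end
end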