Rails - Postgres trigger action with a method? - ruby-on-rails

I'm creating a new Rails app that works with the same Postgres DB as my API.
The API and the app share a couple of common models.
Example:
The API and the app both have a Connection model.
The app will create a simple connection in the DB and the API will do the rest (like handling the use of the connection for many things).
So I wanted to use Hair Trigger, which lets you react to a DB insert or update and link it to a DB behavior that will be executed.
But I would like to execute a method from one of my helpers, like this:
class Connection < ActiveRecord::Base
  trigger.after(:insert) do
    SomeObject.method()
  end
end
I can't use after_create because the creation is not done on the same server (only the same DB).
Any ideas?
Thanks!

The reason why Hair Trigger only supports SQL is that its purpose is to help you maintain database triggers (https://www.postgresql.org/docs/9.6/static/sql-createtrigger.html).
Database triggers are decoupled from your application and can only run SQL inside your database. They won't be able to trigger any code inside your application.
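For reference, this is as far as Hair Trigger can go - the block must return a SQL string, which becomes the trigger body in the database (the processed column here is a made-up example):

class Connection < ActiveRecord::Base
  # The block returns SQL, not Ruby; Hair Trigger installs it as a real
  # Postgres trigger, so it fires no matter which server does the insert.
  trigger.after(:insert) do
    "UPDATE connections SET processed = false WHERE id = NEW.id;"
  end
end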
A solution to your problem would be to use events. If you have ever worked with JavaScript you might have come across Event and EventListener before. The purpose of events is to decouple your code: one part of your application broadcasts an event when an action has been taken, e.g. something was inserted into the database. Other parts of your application, as many as you like, can then subscribe to these events and take the necessary actions.
Since you are working with Rails I would suggest you take a closer look at the Wisper gem (github.com/krisleech/wisper). If your app and API, as you describe, live in different processes (or on different servers), you should look into using a background worker to handle the events asynchronously. A common approach is wisper-sidekiq (github.com/krisleech/wisper-sidekiq), which lets you handle the events asynchronously using Sidekiq as your background worker.
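A minimal Wisper sketch of the publish/subscribe idea (all class and event names here are hypothetical):

class ConnectionCreator
  include Wisper::Publisher

  def call(attrs)
    connection = Connection.create!(attrs)
    broadcast(:connection_created, connection.id)  # announce the insert
  end
end

class ConnectionListener
  # Called for every broadcast :connection_created event.
  def connection_created(connection_id)
    SomeObject.method()  # the helper call from the question
  end
end

# Wire the listener up once, e.g. in an initializer:
Wisper.subscribe(ConnectionListener.new)

Keep in mind this only fires when the publishing code runs in your own process; inserts made directly by the API would still need the API side to broadcast the event.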
For a good introduction to the subject, here's a guide to get you started (I'm not the author): http://www.g9labs.com/2016/06/23/rails-pub-slash-sub-with-wisper-and-sidekiq/
There might be easier ways to solve the issue you are facing, but you will need to post some more details on the use case for us to give you better pointers.

Related

Rails: writing a record asynchronously

In my Rails 5 RC1 app I write some log entries into a DB table through an ActiveRecord model.
Writing such a log entry takes a couple of milliseconds and delays the response for the end user.
I am searching for a mechanism to execute the log writing in the background so it does not block/delay the response (a kind of fire-and-forget). Do you have any hints on how to do that?
I tried to wrap the respective part into
Thread.new { code }
But this even seems to further delay the response by a few milliseconds.
I appreciate any hint!
Thanks and regards
Try looking into messaging services like RabbitMQ for asynchronous transactions and activities. I use RabbitMQ in exactly the way you describe.
If you are using ActiveJob, you could use :async, which is the default queue_adapter for ActiveJob in Rails 5. Note, however, that it doesn't persist jobs between restarts and is not really recommended for production.
See Rails 5 changed Active Job default adapter from Inline to Async.
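A fire-and-forget sketch using ActiveJob with the :async adapter (the LogEntry model is a stand-in for your log table):

class WriteLogEntryJob < ApplicationJob
  queue_as :default

  def perform(message)
    # The slow DB write now happens on the async thread pool,
    # not on the request thread.
    LogEntry.create!(message: message)
  end
end

# In the controller action - enqueues and returns immediately:
WriteLogEntryJob.perform_later("something happened")

If you later need persistence across restarts, switching config.active_job.queue_adapter to :sidekiq (or similar) requires no change to the job itself.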

Rails convention for regularly adding database entries from an outside source

Say I have a running Rails project, and now I need to add entries to its database from an outside source. This is to be done automatically once a day and can be reduced to loading data from a text file.
Now I'm wondering: what is the conventional way to do this in a Rails project? Do I create a controller method that runs once a day, and how do I call it? Do I access the database from outside with something like the Sequel gem?
I think it depends on your application's restrictions and business requirements.
My opinion is that both ways are good.
But I'd prefer to connect directly to the database or use some message queue, just to avoid HTTP and decrease the number of HTTP calls.
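For completeness, the usual Rails convention for a once-a-day import like this is a rake task driven by cron (or the whenever gem) rather than a controller action. A rough sketch, where the Entry model and the comma-separated file format are assumptions:

# lib/tasks/import.rake
namespace :import do
  desc "Load entries from a text file into the database"
  task entries: :environment do
    path = ENV.fetch("FILE", "entries.txt")
    File.foreach(path) do |line|
      name, value = line.chomp.split(",", 2)
      Entry.create!(name: name, value: value)
    end
  end
end

A crontab entry can then run bundle exec rake import:entries FILE=/data/feed.txt once a day.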

What's the best way to store custom logic code blocks, configurable for each client, in a multitenant architecture?

We are running a multitenant Rails web app with a lot of clients. We have a kind of workflow module in which, after an action has been executed, each client needs to run completely unrelated kinds of logic. For example, after that action client A wants to update some details in our app and send out some emails, but client B wants us to run a report. To tackle this we resorted to storing Ruby code in the database along with the client ID; after the action is executed we take this code and execute it with eval. I somehow feel this is an ugly way of doing things. How can I make this better? The logic is completely different for each client, so what do you suggest is a better way to handle it?
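One common alternative to eval (a sketch only; all names here are hypothetical) is to keep each client's logic in a named, version-controlled class and store only the class name in the database:

class ClientHook < ActiveRecord::Base
  # columns: client_id, handler_name (e.g. "PostActionHandlers::RunReport")
end

module PostActionHandlers
  class UpdateDetailsAndEmail
    def call(action)
      # client A's logic: update some details, send the emails...
    end
  end

  class RunReport
    def call(action)
      # client B's logic: run the report...
    end
  end
end

# After the workflow action executes:
def run_post_action_hooks(client, action)
  ClientHook.where(client_id: client.id).find_each do |hook|
    hook.handler_name.constantize.new.call(action)
  end
end

The database then only selects which reviewed, testable behavior runs, instead of supplying arbitrary code.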

How to use EventMachine (superfeedr-ruby gem) within a Rails controller?

thank you for taking a look at this.
I am new to Rails, unfortunately. I currently have to implement an endpoint that Superfeedr can push updates to, but that endpoint has to be in a Rails controller.
Initially it seemed to me that this should be a background job, and the rest of the web tends to agree, but I am being pressured to implement it as a Rails controller, which confuses me. I am not certain how to include EventMachine in a request/response cycle.
I know the web is full of examples, but none really answer my question with how to route this. I have no idea.
I have a Rails controller, called Superfeeds. I want Superfeedr to push updates to something like myrailsapp/superfeeds/.
Inside Feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Basically, the controller called Feeds just needs to receive the information and pass it along. This confuses me, however, because it seems to require running a long-lived process inside a Rails controller, and I am not even sure that can work.
Does anyone know of a way that this has been done with rails, but not using EventMachine as a background job? In the end I really just need to know that this is possible.
-L
Inside feeds I want to inspect the pushed content, and then write the results of that to another controller that actually has a model and will persist it.
Why not do all the work in the one controller? If you're trying to separate out different concerns, you could even use two models - for instance, one to do the inspecting/parsing and one to handle the persisting. But rarely would you need or want to pass data from controller to controller.
I am not certain how to include EventMachine in a request/response cycle.
Never used Superfeedr myself, but I glanced at the docs quickly - are you using the XMPP or the PubSubHubbub client? I assume the latter? If so, you want to do the persistence (and any other time-consuming process) asynchronously, outside the request/response cycle, right?
If you are using an EventMachine-based web server such as Thin, every request cycle is essentially run within an EM reactor, so you can make use of EM's facilities for offloading tasks, such as the deferred thread pool. For an example of this in action, check out Enigmamachine, in particular here. (I believe that, in addition, your DB client library needs to be asynchronous.)
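A rough sketch of that idea inside the controller, assuming Thin (so a reactor is already running); Feed.persist_from_push is a made-up method standing in for the parsing/persisting work:

class SuperfeedsController < ApplicationController
  def create
    payload = request.raw_post

    # EM.defer runs the first proc on EM's internal thread pool and hands
    # its result to the second proc back on the reactor thread.
    EM.defer(
      proc { Feed.persist_from_push(payload) },
      proc { |result| Rails.logger.info("stored pushed entry: #{result.inspect}") }
    )

    head :ok  # acknowledge the push immediately; the work continues behind it
  end
end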

Need help designing my first Rails app! (involves Twitter, databases, background processes)

Firstly, let me mention that I'm new to web frameworks.
I have to write my first web app for a uni project. I spent two weeks learning Grails and Django. I started working with Rails yesterday and loved it, so I've decided to go with it and discard my work in the other frameworks.
About the app
It's supposed to be a Twitter app that utilizes Twitter's Streaming API to record tweets which match a set of specified filters. (I'm going to use the Tweetstream gem which takes care of connecting to Twitter and capturing matching tweets).
The app's web interface should have the following functionality:
Creating new requests: The user inputs a set of filter parameters (keywords to track) and the URL/username/password of an existing PostgreSQL or MySQL database.
When a request is created, the web app spawns a background Ruby process. This process connects to Twitter via the Tweetstream gem. It also connects to the database specified by the user to store the received tweets.
Viewing/terminating existing requests: The user should be able to see a list of requests that are running as background processes by visiting a URL such as /listRequests.
Seeing further details about a process / terminating it: The user should be able to go to a URL such as /requests/1/detail to view some details (e.g. how long the request has been running, the number of tweets captured, etc.). The user should also be able to terminate the process.
My inexperience is showing, as I'm unable to work out:
what my models should be (maybe Request should be a model; Tweet doesn't need to be one as it's not being stored locally);
how I'm going to connect to remote databases;
how I can create background processes (backgroundrb?) and associate them with request objects so that I can terminate them when the user asks.
At the end of the day, I've got to build this myself, so I'm not asking for you to design this for me. But some pointers in the right direction would be extremely helpful and appreciated!
Thanks!
Hmm.
Since the web app is just a thin wrapper around the heavy-lifting processes, it might be more appropriate to just use something like Sinatra here. Rails is a big framework that pulls in lots of stuff that you won't need for this project, even though it will work.
Does the "background process" requirement here strictly mean a separate process, or does it just mean concurrency? TweetStream uses the EventMachine gem to handle updates as they come, which uses a separate thread for each connection. It would be quite possible to spawn the TweetStream clients from a simple Sinatra web app, keep them in a big array, have them all run concurrently with no trouble, and simply run stop on a given client when you want it to stop. No need for a database or anything.
I'm not sure exactly what your prof is looking for you to do here, but MVC doesn't really fit. It's better to work with the requirements than to mush it into a design pattern that doesn't match it :/
Even so, I <3 Rails! Definitely get on that when you're working primarily with objects being represented in a database :)
Quite a project. Most of what will be challenging is not related to Rails itself, but rather to the integration with background processes. backgroundrb is a bit out of fashion - the last commit on the main GitHub project is over a year ago, so it's likely not up to snuff for Rails 3. Search around and evaluate your options. Resque is popular, but I'm not sure whether your real-time needs match its queue-based structure.
As for your app, I see only a single model, but don't call it Request - that's a reserved name in Rails. Perhaps a Search model, or something along that line.
Connecting to different databases is straightforward, but it will require configuring your ActiveRecord class directly at runtime rather than through database.yml.
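For instance, pointing one model at the user-supplied database at runtime looks roughly like this (the Tweet class and the search record holding the credentials are assumptions):

class Tweet < ActiveRecord::Base
  # Lives in the user's external database, not in your own schema.
end

# Use the credentials the user entered when creating the search:
Tweet.establish_connection(
  adapter:  "postgresql",   # or "mysql2"
  host:     search.db_host,
  username: search.db_username,
  password: search.db_password,
  database: search.db_name
)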
