Notify multiple Rails apps when a PostgreSQL database changes - ruby-on-rails

I have two Rails apps (one for web, one for backend) accessing the same PostgreSQL database. I would like to notify the other app whenever one app changes a table in the database.
How should I go about it?

I think this depends on:
How reliable you need it to be.
How fast you need the notifications to be delivered.
The Faye solution suggested by @techvineet provides a good fast-but-unreliable option. (N.B. I don't mean it'll fail often, but it likely will occasionally, maybe 1 in 1000 times. If that would cause you a problem, avoid it.)
If you need something 100% reliable and speed isn't important, you could write audit events to the database and then poll that table from each app. If the audit events are committed in the same transaction as the actual work, you should be safe... but notification will be as slow as your polling cycle.
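A minimal sketch of that pattern, assuming a hypothetical audit_events table and AuditEvent model (all names here are illustrative, and the code would run inside a Rails environment, e.g. via rails runner):

    # Hypothetical audit_events table: id (serial PK), table_name, record_id, created_at.
    class AuditEvent < ActiveRecord::Base
    end

    # In the app making the change: write the audit row inside the same
    # transaction as the real work, so both commit (or roll back) together.
    Order.transaction do
      order.update!(status: "shipped")
      AuditEvent.create!(table_name: "orders", record_id: order.id)
    end

    # In the other app: a poller that remembers the last id it processed.
    last_seen_id = 0
    loop do
      AuditEvent.where("id > ?", last_seen_id).order(:id).each do |event|
        handle_change(event)   # app-specific reaction to the change
        last_seen_id = event.id
      end
      sleep 5                  # delivery latency == polling interval
    end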
Lastly, if you want something fast AND reliable, then you could look at using something like ActiveMQ or RabbitMQ to give you reliable messaging between the applications to notify changes. You'll need a worker process in each app to listen for changes and deal with them appropriately.
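For the RabbitMQ route, a rough sketch with the bunny gem (the queue name and payload shape are assumptions):

    require "bunny"   # gem install bunny
    require "json"

    conn = Bunny.new  # assumes RabbitMQ on localhost with default credentials
    conn.start
    channel = conn.create_channel
    queue   = channel.queue("table_changes", durable: true)

    # Publisher (the app that made the change):
    channel.default_exchange.publish({ table: "orders", id: 42 }.to_json,
                                     routing_key: queue.name, persistent: true)

    # Worker (run in the other app, e.g. from a rake task):
    queue.subscribe(block: true) do |_delivery, _props, payload|
      change = JSON.parse(payload)
      puts "#{change['table']} ##{change['id']} changed"  # react to the change here
    end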
My last comment would be that this 'smells' a little. The fact that you're trying to do this makes me think the architecture of your app might need looking at in the longer term. An obvious way of doing it might be to encapsulate all the business logic into an app which exposes an API, and then calling that API from both front and back end applications.

You can try using Faye (http://faye.jcoglan.com/), which is a publish/subscribe messaging server. It can be integrated with Rails via https://github.com/jamesotron/faye-rails.git. Messages can be passed from one app to another by publishing to a channel that the other app subscribes to.
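A minimal sketch with the plain Faye Ruby client, assuming a Faye server is already mounted at /faye on port 9292 (the URL and channel name are illustrative):

    require "eventmachine"
    require "faye"   # gem install faye

    EM.run do
      client = Faye::Client.new("http://localhost:9292/faye")

      # App B: listen for change notifications.
      client.subscribe("/table_changes") do |message|
        puts "Table #{message['table']} changed"
      end

      # App A: announce a change after it commits.
      client.publish("/table_changes", "table" => "orders", "id" => 42)
    end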
Hope this will help.

Related

Is there a way to schedule edits to a Firebase database?

I am trying to create automated edits to the database in Firebase. Is there a way to do that on the server side? I am new to iOS development and Swift, so any help would be greatly appreciated.
Also, I've tried Zapier but the service is not specific enough for my needs.
Yes - Firebase has quite a flexible set of options for server-side updates, and it is simple enough to schedule a cron job that connects to Firebase and performs scheduled updates or edits.
The most generic approach is to use the REST API to perform your updates although there are specific libraries to support Node and other platforms.
It is worth being aware of the recent major upgrade to version 3 of Firebase which introduced quite a few significant changes - it can be easy to confuse the older examples floating around with the new API so be aware of the differences as you put together your first proof of concept examples.
I assume that you are looking to run this on your own server, although another alternative is to use a container hosting environment (Google App Engine, etc.).
If you have your own server and are looking to integrate I would suggest starting with:
https://firebase.google.com/docs/server/setup#prerequisites
Then perhaps a quick look at:
https://firebase.googleblog.com/docs/web/quickstart.html
and
https://www.firebase.com/docs/rest/
If you are just getting started, I would suggest a first task of authenticating, then retrieving and updating a Firebase record.
You can configure server auth keys through the Firebase console and use these as part of your authentication process.
If you are unfamiliar with JWT then it is worth spending a little time getting up to speed on this and working through the examples at https://www.firebase.com/docs/rest/guide/user-auth.html
Further to your comment:
So the first approach that comes to mind is to run some kind of scheduled job from cron which would connect using the REST API, perform a query on the existing data to identify the records that require an update, and remove or modify them.
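A rough sketch of such a cleanup script in Ruby against the Firebase REST API (the database URL, the messages node, the timestamp field, and the FIREBASE_AUTH secret are all assumptions; the orderBy query also requires an ".indexOn": "timestamp" rule in your security rules):

    require "net/http"
    require "json"
    require "uri"

    BASE   = "https://your-app.firebaseio.com"             # hypothetical database URL
    AUTH   = ENV["FIREBASE_AUTH"]                          # secret or token
    cutoff = ((Time.now - 24 * 60 * 60).to_f * 1000).to_i  # e.g. older than one day

    # Fetch records whose timestamp is at or below the cutoff.
    query = URI("#{BASE}/messages.json?" \
                "orderBy=#{URI.encode_www_form_component('"timestamp"')}" \
                "&endAt=#{cutoff}&auth=#{AUTH}")
    stale = JSON.parse(Net::HTTP.get(query)) || {}         # Firebase returns null if none

    # Delete each stale record.
    stale.each_key do |key|
      uri  = URI("#{BASE}/messages/#{key}.json?auth=#{AUTH}")
      http = Net::HTTP.new(uri.host, uri.port)
      http.use_ssl = true
      http.request(Net::HTTP::Delete.new(uri))
    end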
Giving it a little more thought, you could extend this approach: rather than running at a recurring period shorter than the minimal anticipated deletion time, you could run the scheduler just to clean up at some longer period, and filter the results sent to the client so that you are not including stale data. This approach is discussed a little at Firebase chat - removing old messages.
Getting the right solution for your particular scenario will depend a lot on how well you structure your data, which can be counter-intuitive, particularly for users coming from an RDBMS background.
There may be an inclination to keep the data slim and unpolluted with old, irrelevant data; however, Firebase is quite good at managing large, minimally structured data sets, and the overhead of this bloat may not be as bad as you might think.
If the filtering itself isn't sufficient and you don't have a server on which you can cron a cleanup process, then you can implement a Firebase worker process in Node or similar and run it on a container service such as Heroku or Google App Engine. See Firebase push notifications - node worker for some ideas on how to approach this.
When asked, Google said that they don't advise on where best to host worker services, but they did mention both Google App Engine and Heroku.
Another approach if you don't want to implement and host a watcher/worker process is to simply include some code in the client that checks for and removes stale data periodically.
Firebase Queue is very cool, but it may be overkill for simply expiring stale data.

Extract, transform, load within Rabbit?

One of the things that I do pretty often is transform SQL data into cache and document-based stores, for performance reasons. I don't want my frontend applications hitting my database, so I have high-speed cache solutions, as well as efficient Solr and other solutions.
I use RabbitMQ as the central communication hub to achieve this ETL flow, which looks like this: the backend application sends a message to Rabbit with the new data, or changes made to existing data. I then have a node.js script which consumes the queue, makes small batches of data, and populates all the necessary systems: Redis, Mongo, Solr, etc.
However, I'm wondering if there's a better way of doing this. Maybe Rabbit has some kind of scripting support to create Erlang logic for queues?
However, I'm wondering if there's a better way of doing this. Maybe Rabbit has some kind of scripting support to create Erlang logic for queues?
It doesn't. It's just a message queueing system.
Personally, I think your current design sounds good.
The only thing I would wonder is whether or not each of your target systems has a queue of its own. That way, any one of them can go down without affecting the others.
I would probably do something like this (a code sketch follows the list):
back-end produces data message and sends through RMQ
RMQ is configured with a fanout exchange, and has one bound queue per target system
each system receives the message in its own queue
Otherwise, what you have sounds about right to me!
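A minimal sketch of that fanout topology with the bunny gem (exchange and queue names are assumptions):

    require "bunny"
    require "json"

    conn = Bunny.new   # assumes a local RabbitMQ broker
    conn.start
    ch = conn.create_channel

    # One fanout exchange: the back end publishes here and doesn't need to
    # know how many target systems exist downstream.
    data_events = ch.fanout("etl.data_events", durable: true)

    # One durable queue per target system, each bound to the same exchange.
    %w[redis mongo solr].each do |target|
      ch.queue("etl.#{target}", durable: true).bind(data_events)
    end

    # Producer side:
    data_events.publish({ op: "update", table: "products", id: 7 }.to_json,
                        persistent: true)

    # Each consumer subscribes only to its own queue, so Solr being down
    # never blocks the Redis or Mongo pipelines.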

Structuring backend queries

So this is more of a methodology question than a coding question. I want to ask it before I actually start coding, in order to choose the best route. I have a messaging app. When the app launches, I query all the messages from the backend in the background, where current_user_id is equal to recipient_id. Now I have all of the messages the user needs to see, so I store them locally in a SQLite database.
Great, but what about when the user gets new messages? How can I structure a query to receive those without having to query the entire table again? Also, how do I set this up as a continual process? Is the phone always requesting updated information from the backend while it's in the foreground?
Thanks, I really appreciate your help. I'm currently using iOS and, as stated, SQLite. My backend is Node.js on AWS.
It looks like your goal is to ultimately synchronize data between two sources over a network with a constraint that the client is updated in a reasonable amount of time. You have a design choice to make between a push vs pull architecture.
Push architectures have the servers push data to clients when an event occurs.
Pull architectures have the device periodically poll the server for changes. This can be achieved through timed events.
There are hybrid approaches too.
Each has its advantages and disadvantages: some require constant polling, while others require persistent connections, which presents more scaling challenges.
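To make the pull option concrete, here is a rough Ruby sketch of the client-side loop (the endpoint and its since_id parameter are assumptions; on the server it would map to something like WHERE recipient_id = ? AND id > ?, which also avoids re-querying the whole table):

    require "net/http"
    require "json"
    require "uri"

    last_seen_id = 0   # persist this locally, e.g. alongside your SQLite store

    loop do
      uri = URI("https://api.example.com/messages?since_id=#{last_seen_id}")
      messages = JSON.parse(Net::HTTP.get(uri))

      messages.each do |msg|
        save_to_local_db(msg)                          # hypothetical SQLite insert
        last_seen_id = [last_seen_id, msg["id"]].max
      end

      sleep 15   # poll interval: a tradeoff between freshness and server load
    end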

How do I trigger a push notification to a mobile device when a row gets inserted into a SQL Server table?

We've been tasked with implementing push notifications in our iOS and Android app. One of the features of the app is chat messaging, so we would like to push notify our users when they receive a message. The messages can be generated from the web app, so regardless of the origin, the chat messages get inserted into a Chat SQL Table via C# Web Services.
In my research, I found that PushSharp would be a good fit for our C# backend -- we're trying to avoid paying for a push notification service if we can. What I'm having a difficult time visualizing is how to trigger the push notification when a new message gets inserted into the DB table.
What's the best practice? I assume manually polling for new records is not.
Any advice would be appreciated.
M.
It's probably too late, but for anyone who comes across this later: I suggest trying Debezium, which consumes events for each row-level change made to the database. Only committed changes are visible, so your application doesn't have to worry about transactions or changes that are rolled back.
There are a couple of solutions available to you. Some depend on the level of control you have over the table. Here are a couple of ideas:
Use a daemon to run a script that periodically checks for new entries and sends pushes when necessary. The script can rely on a tuple id field (probably the primary key) to record the last row it checked and then pick up from there on each pass. You can use supervise or monit to set that up, but there are many other solutions out there that might be better fitted for your server.
A simpler variation would be to create a cron entry that triggers the script mentioned above periodically.
If you don't control the original table, you can create a TRIGGER in SQL Server that inserts a record into a separate table that you control entirely and can poll.
If you don't want to poll (which is in fact not preferable if you have a lot of data to go through at a high rate), you'll have to look into message queue systems (like RabbitMQ) or into pub/sub (I personally like Redis PUB/SUB).
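For a flavor of the Redis PUB/SUB option, a minimal Ruby sketch with the redis gem (the channel name, payload shape, and send_push_notification helper are all hypothetical; note that PUB/SUB is fire-and-forget, so a subscriber that is down will miss messages):

    require "redis"   # gem install redis
    require "json"

    # Publisher: run right after the chat row is inserted.
    Redis.new.publish("chat_messages", { chat_id: 42, recipient: 7 }.to_json)

    # Subscriber: a long-running worker that sends the push notification.
    Redis.new.subscribe("chat_messages") do |on|
      on.message do |_channel, payload|
        msg = JSON.parse(payload)
        send_push_notification(msg["recipient"], msg["chat_id"])  # hypothetical helper
      end
    end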
Without more information about what your current architecture is, it's difficult to give you more details or point you to a better solution.

Best way to run Rails with long delays

I'm writing a Rails web service that interacts with various pieces of hardware scattered throughout the country.
When a call is made to the web service, the Rails app then attempts to contact the appropriate piece of hardware, get the needed information, and reply to the web client. The time between the client's call and the reply may be up to 10 seconds, depending upon lots of factors.
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
I basically see two options. Either run JRuby and use multithreading, or run several regular Ruby instances and hope that not many people try to use the service at a time. JRuby seems like the much better solution, but it still doesn't seem to be mainstream or to have out-of-the-box support at Heroku and EngineYard. The multiple-instance solution seems like a total kludge.
1) Am I right about my two options? Is there a better one I'm missing?
2) Is there an easy deployment option for JRuby?
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
From an engineering perspective, this seems like it would be the best alternative.
Why don't you want to do it?
There's a third option: If you host your Rails app with Passenger and enable global queueing, you can do this transparently. I have some actions that take several minutes, with no issues (caveat: some browsers may time out, but that may not be a concern for you).
If you're worried about browser timeouts, or you cannot control the deployment environment, you may want to process the request in the background (a rough sketch follows the steps below):
User requests data
You enter request into a queue
Your web service returns a "ticket" identifier to check the progress
A background process processes the jobs in the queue
The user polls back, referencing the "ticket" id
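Here is one possible shape of that pattern in Rails; the model, job, and controller names are all hypothetical, and I'm using ActiveJob syntax for brevity, with any backend (Delayed Job, Sidekiq, etc.) behind it:

    class HardwareRequestsController < ApplicationController
      # Steps 1-3: accept the request, queue it, hand back a ticket id.
      def create
        req = HardwareRequest.create!(status: "pending", query: params[:query])
        HardwareQueryJob.perform_later(req.id)   # enqueue for the background worker
        render json: { ticket: req.id }
      end

      # Step 5: the client polls this endpoint with its ticket id.
      def show
        req = HardwareRequest.find(params[:id])
        render json: { status: req.status, result: req.result }
      end
    end

    # Step 4: the worker talks to the hardware, however long that takes.
    class HardwareQueryJob < ApplicationJob
      def perform(request_id)
        req = HardwareRequest.find(request_id)
        req.update!(result: query_hardware(req.query), status: "done")  # query_hardware is hypothetical
      end
    end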
As far as hosting in JRuby, I've deployed a couple of small internal applications using the glassfish gem, but I'm not sure how much I would trust it for customer-facing apps. Just make sure you run config.threadsafe! in production.rb. I've heard good things about Trinidad, too.
You can also run the web service call in a delayed background job so that it isn't tying up a web server, and it can even run on a separate physical box. This is also a much more scalable approach. If you make the web call using AJAX, you can ping the server every second or two to see if your results are ready; that way your client is not held in limbo while the results are being calculated, and the request does not time out.
