I want to make use of Firebase's services, such as authentication and the real-time database, which make integrating with front-end clients very easy. However, I am struggling with the database portion. I still want a relational database so I can more easily work with the rest of my data, so I thought of Rails and Postgres for this. I know Rails has ActionCable, but I want to utilize more of Firebase's services. I want my frontend clients to talk to my Rails app, have Rails save to Postgres, and then transform the data into a suitable NoSQL format for Firebase's DB. Is this a proper way of using Firebase? How else can I utilize Firebase's real-time DB without making it the source of truth?
I think you can do it. I usually do things like that using an event-sourcing-style architecture.
Let's say I have a class called User and I want to keep it in sync with a users collection on Firebase. For that purpose, I would put an observer on User:
models/user_observer.rb
class UserObserver < ActiveRecord::Observer
  # Requires the rails-observers gem (observers were extracted from Rails 4)
  include Wisper::Publisher

  def initialize
    # Async delivery needs a Wisper async backend, e.g. wisper-sidekiq
    subscribe(FirebaseUserPublisher, async: true)
    super
  end

  def after_save(user)
    # Broadcast an event; the subscriber's matching method handles the push.
    # Passing the id keeps the payload serializable for the async backend.
    broadcast(:publish_changes_to_firebase, user.id)
  end
end
so that, every time a user is changed or created, the changes will be asynchronously reflected in your Firebase, with the FirebaseUserPublisher class taking care of it.
Remember that this means eventual consistency: if the FirebaseUserPublisher fails, your data is out of sync.
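For completeness, a minimal sketch of what FirebaseUserPublisher might look like, assuming the community firebase REST gem and env-configured credentials (the gem, the env variable names, and the /users projection are my assumptions, not part of the original answer):

require "firebase"

class FirebaseUserPublisher
  # Wisper dispatches to a method named after the broadcast event.
  # Some async backends instantiate the listener instead, in which case
  # this should be an instance method.
  def self.publish_changes_to_firebase(user_id)
    user = User.find(user_id)
    client = Firebase::Client.new(ENV["FIREBASE_URL"], ENV["FIREBASE_SECRET"])
    # Mirror a NoSQL-friendly projection of the record under /users/<id>
    client.set("users/#{user.id}", user.as_json(only: %i[id email name]))
  end
end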
This is basically what we are doing in a large collaborative web application. We have a Java-based backend which watches specific keys in our Firebase RTDB for changes/additions/deletions and responds accordingly. It uses the data from Firebase and fires up a new Apache Mesos executor for each workspace that is currently active in our web application (has "presence"), with each instance having its own PostgreSQL database that the executor uses. This database is populated using data scattered around our RTDB.
When we want the backend to perform a long-running calculation or task for the client, we write some data to a specific location ("dbRequests") outlining the task we want and some user-defined parameters. The backend executor reads the request and spawns a worker service to execute the task for us. Once that is done, it writes new data as a child of the original request, which the client picks up and displays to the user (or, in the case of reports, opens a new tab with charts rendered from the calculated data). Generally, it works well for our use case: many connected users working in a digital workspace with live updates for each connected client.
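As an illustration of that request/response pattern, here is a hedged Ruby sketch over the RTDB REST API using the community firebase gem (the paths, task name, and parameters are illustrative, not our actual schema):

require "firebase"

client = Firebase::Client.new("https://your-app.firebaseio.com/", ENV["FIREBASE_SECRET"])

# Client side: push a task request with user-defined parameters.
response = client.push("dbRequests", { "task" => "generateReport", "params" => { "from" => "2016-01-01" } })
request_id = response.body["name"]  # generated key of the new child

# Backend side, once the worker finishes: attach the result as a child
# of the original request, where the web client is listening.
client.set("dbRequests/#{request_id}/result", { "status" => "done" })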
The biggest problem we've faced using this model is running data migration scripts and managing all of the data we have in the RTDB. To be expected, I suppose, since it's essentially one giant JSON structure ;).
So, I'm relatively new to Vue, and I'm currently using it to build a small app that displays order data from Square's API.
I'm currently working on a stack that uses Rails to make API calls via the square.rb gem. The frontend is entirely Vue, which uses Pinia as a store, and there isn't going to be any kind of database behind this, for reasons.
All data is provided directly via Square's API. I am currently polling to update order info, but my client wants to make this app truly real-time: it deals with food deliveries through ride-share companies, and the purpose of this app is to show the real-time status of orders on an in-house screen at the restaurant.
Now, Square has a webhook subscription service, and based on my reading it sounds like I can consume these to update my app, but there are a few logical leaps I haven't been able to make yet with regard to getting that data to the frontend of my app.
My questions are the following, with the intent of connecting the dots between the different technologies I might need to employ here, and getting a sense of what I'd need and where to link it up.
Can I use Vue to consume webhook payloads directly and update through reactivity? That would be ideal, but I have found no docs yet that give me a good idea of whether that's possible.
If that is not possible, do I need to use some sort of socket connection (socket.io) to listen for these webhook updates? (A sketch of this approach follows these questions.)
If neither the current setup nor the proposed setup above is feasible, what is a better solution for handling this while still using Vue?
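To sketch the socket approach from the second question: since the backend here is Rails, ActionCable is the built-in way to push webhook payloads to the browser. A minimal, hedged sketch; the route, controller, and channel names are illustrative, and Square's webhook signature verification is omitted:

# config/routes.rb would map e.g. post "/webhooks/square" to this action.
# app/controllers/square_webhooks_controller.rb
class SquareWebhooksController < ActionController::API
  # Square POSTs the event here; relay it to every subscribed browser.
  def create
    ActionCable.server.broadcast("orders", JSON.parse(request.raw_post))
    head :ok
  end
end

# app/channels/orders_channel.rb
class OrdersChannel < ApplicationCable::Channel
  def subscribed
    stream_from "orders"
  end
end

On the Vue side, a @rails/actioncable subscription would receive each payload and write it into the Pinia store, letting reactivity update the screen.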
Hopefully the title is clear; I couldn't find a better name, but if someone can improve it, please update it. Thanks.
I would like the Firebase database to write on a node if a certain condition is met. For example, if one node receives an input from a client (say an angular app) then another node in the database should write certain data, something like a callback that is fired when a node receives some data.
I know there are 4 rule types (.read .write .validate .indexOn), what I am thinking of is some kind of .callback rule that is fired and writes on a node after some other node has received an input.
Obviously this can be achieved via a server-side script, but Firebase is about a serverless approach, so I am trying to understand its current limits and what I can do with it.
Thanks for your responses
firebaser here
Running the multi-location update client-side, or in a server-side process that you control, are currently the only ways to accomplish this.
There is currently no way to trigger updates based on modifications to the database on Firebase servers. It is no big secret that we've been working on such functionality for a while now, but we have made no announcement as to when that will be available.
Also see Can I host a listener on Firebase?, which (I realize now) is probably a duplicate.
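For reference, a multi-location update from a server-side process can be done in one atomic PATCH against the database root. A hedged Ruby sketch using the community firebase gem (the paths and payload are illustrative):

require "firebase"

client = Firebase::Client.new("https://your-app.firebaseio.com/", ENV["FIREBASE_SECRET"])
# Both writes land atomically: the node that received the input is
# marked, and the dependent node is written in the same request.
client.update("", {
  "requests/req-123/status" => "processed",
  "results/req-123"         => { "answer" => 42 }
})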
I have already read Rails - How do I temporarily store a rails model instance? and similar questions but I cannot find a successful answer.
Imagine I have the model Customer, which may have a huge amount of information attached (simple attributes, data in other tables through has_many relations, etc.). I want the application's user to access all the data on a single page with a single Save button. As the user changes the data (e.g. edits simple attributes, adds or deletes has_many items, ...), I want the application to update the model, but without committing the changes to the database. Only when the user clicks Save should the model be committed.
For achieving this I need the model to be kept by Rails between HTTP requests. Furthermore, two different users may be changing the model's data at the same time, so these temporary instances should be bound to the Rails session.
Is there any way to achieve this? Is it actually a good idea? And, if not, how can one design a web application in which changes in a model cannot be retained in the browser but in the server until the user wants to commit them?
EDIT
Based on user smallbutton.com's proposal, I wonder if serializing the model instance to a temporary file (whose path would be stored in the session hash), and then reloading it each time a new request arrives, would do the trick. Would it work in all cases? Is there any piece of information that would be lost during serialization/deserialization?
As HTTP requests are stateless, you need some kind of storage between requests. The session is the easiest way to store data between requests, but for you it will not be enough, because the data needs to be accessible to multiple users.
I see two ways to achieve your goal:
1) Get some fast external data storage, like a key-value server (Redis, or anything you prefer from http://nosql-database.org/), where you put your objects via serializing/deserializing (e.g. JSON).
This may be fast depending on your design choices and data model, but it is the harder approach.
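A minimal sketch of approach 1), assuming the redis gem and JSON serialization of a draft attributes hash (the key naming and one-hour expiry are illustrative):

require "redis"
require "json"

# Stash the pending edits for this session as a JSON draft, expiring
# after an hour in case the user never hits Save.
def store_draft(redis, session_id, customer)
  redis.set("draft:customer:#{session_id}", customer.attributes.to_json, ex: 3600)
end

# On the next request, rebuild an unsaved instance from the draft.
def load_draft(redis, session_id)
  json = redis.get("draft:customer:#{session_id}")
  json && Customer.new(JSON.parse(json))
end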
2) Just store your objects in the DB as you regularly would, and version them (https://github.com/airblade/paper_trail). Then you can store a timestamp when people hit the Save button, and you can always go back to that state. This is the easier approach, I guess, but it may be a bit slower depending on the size of your data model changes (though I think it'll do).
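Approach 2) in sketch form, assuming the paper_trail gem (the rollback condition is illustrative):

class Customer < ApplicationRecord
  has_paper_trail  # records a version on every create/update/destroy
end

# Edits hit the DB immediately; to "discard" them, walk back to the
# previous version (older PaperTrail releases expose this as
# customer.previous_version instead).
customer.update(name: "Provisional name")
customer.paper_trail.previous_version.save! if user_cancelled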
EDIT: If you need real-time collaboration between users, you should probably have a look at something like Firebase.
EDIT 2: Answer to your second question, whether you can put the data into a file:
Sure, you can do that. But you would need some kind of locking to prevent data loss if more than one person is editing. You would need that as well if you go for 1), but tools like Redis already include locks for this (e.g. redis-semaphore). Depending on your data, you may also need some logic for merging different users' changes.
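For that locking, a sketch with the redis-semaphore gem mentioned above (the semaphore name and connection options are illustrative):

require "redis-semaphore"

s = Redis::Semaphore.new(:customer_draft_42, host: "localhost")
s.lock do
  # Only one editor at a time may read, modify and rewrite the shared
  # draft inside this block.
end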
3) Another approach that came to mind: do all the editing in JavaScript and save it in one DB transaction. This would go well with synchronization tools like Firebase (or your own synchronization via the Rails streaming API).
I am creating an application. This application has two databases:
Local Database (In the mobile)
Cloud Database
Both databases sync automatically.
When I browse a page through my application, I want to keep that page on my phone. Then, when I am browsing without internet (offline), I want to access those stored files.
My problem is that offline browsing (of the stored web pages) needs to access my local database and provide the information.
E.g. I want to search for something. If there is internet, search the cloud database (and at the same time store the web page's HTML content on the phone). Offline searching should use the stored HTML content, but access the local database.
Is this possible or not? I am a beginner; please guide me.
You use scripts/code on the server to connect to the DB on the server when you are online. Even if you download and save your HTML file locally, loading it in a UIWebView is not going to start fetching data from your local SQLite DB (hence all the downvotes, I guess).
So, simply put, what you are asking is not possible.
However, an alternative which may work, subject to your app's requirements, is to change your app code to always perform searches on the local database and, instead of HTML, show the results using native UI. Since you say your local DB syncs with the server automatically, when you are online your search results will reflect fresh, synced data once the DB is up to date, and when you are offline you will get the last synced data.
So here is the basic structure I'm proposing:
Data warehouse (for want of a better word)
E-commerce online
Back-end MIS
etc
So the idea is that I have an Order, for example. An order can be created via the e-commerce site or via the back-end MIS. In either case, the order should filter through to the e-commerce site to show the order to the user, and vice versa.
There will be other apps in the future.
So the thinking is to have a central warehouse that wraps this data in a service API, with the other apps pushing to and pulling from it.
Sound OK? I guess the real question is syncing the data. When I create an order, do I push it to the warehouse at create time, put it on some queue, or is there some other method to keep all of these in sync, assuming near-real-time to real-time sync is required?
Assume your REST server is just another data store. How would each client get updates from a plain old database when needed?
If you had each client poll the data store at regular intervals, that would be one solution.
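A hedged sketch of that polling loop in Ruby, assuming the warehouse exposes an endpoint accepting an updated_since parameter (the URL, the parameter, and apply_locally are illustrative placeholders):

require "net/http"
require "json"
require "time"

last_sync = Time.now.utc
loop do
  # Ask the warehouse only for orders changed since the last pull.
  uri = URI("https://warehouse.example.com/api/orders?updated_since=#{last_sync.iso8601}")
  JSON.parse(Net::HTTP.get(uri)).each { |order| apply_locally(order) }
  last_sync = Time.now.utc
  sleep 30  # the interval trades freshness for load
end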