Hopefully the title is clear; I couldn't find a better name, but if someone can improve it, please update it. Thanks.
I would like the Firebase database to write to a node if a certain condition is met. For example, if one node receives an input from a client (say, an Angular app), then certain data should be written to another node in the database: something like a callback that is fired when a node receives some data.
I know there are 4 rule types (.read, .write, .validate, .indexOn); what I am thinking of is some kind of .callback rule that is fired and writes to a node after some other node has received an input.
Obviously this can be achieved via a server-side script, but Firebase is about a serverless approach, so I am trying to understand what its current limits are and what I can do with it.
Thanks for your responses
firebaser here
Running the multi-location update client-side or on a server-side process that you control are currently the only ways to accomplish this.
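For reference, here is a minimal sketch of such a client-side multi-location update using the namespaced JavaScript SDK; the paths and payload are hypothetical, just to show the shape of the call:

import firebase from "firebase/app";
import "firebase/database";

// Hypothetical paths: when the client writes the input node, it also
// writes the "callback" node in the same atomic operation. Either both
// paths update or neither does.
const updates: Record<string, unknown> = {
  "/requests/req1/input": "some client data",
  "/reactions/req1": { reactedAt: Date.now() },
};

firebase.database().ref().update(updates);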
There is currently no way to trigger updates based on modifications to the database on Firebase servers. It is no big secret that we've been working on such functionality for a while now, but we have made no announcement as to when that will be available.
Also see Can I host a listener on Firebase?, which (I realize now) is probably a duplicate.
So, I'm relatively new to Vue, and I'm currently using it to build a small app that displays order data from Square's API.
I'm currently working on a stack that uses Rails to make API calls via the square.rb gem. The frontend is entirely Vue, which uses Pinia as its store, and there isn't going to be any kind of database behind this, for reasons.
All data is provided directly via Square's API. I am currently polling to update order info, but my client wants to make this app truly real-time, as it deals with food deliveries through ride-share companies, and its purpose is to show the real-time status of orders on an in-house screen at the restaurant.
Now, Square has a webhook subscription service, and based on my reading it sounds like I can consume these events to update my app, but there are a few logical leaps I haven't been able to make yet regarding how to get that data to the frontend of my app.
My questions are the following, with the intent of connecting the dots between the different technologies I might need to employ to make this work, and getting a sense of what I'd need and where to link it up.
Can I use Vue to consume webhook payloads directly and update through reactivity? That would be ideal, but I have found no docs yet that give me a good idea of whether that's possible.
If that is not possible, do I need to use some sort of socket connection (socket.io) to listen for these webhook updates? (A sketch of this setup follows these questions.)
If the current setup or the setup proposed in the questions above is not feasible, what would be a better solution for handling this while still using Vue?
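On the second question: a browser cannot receive webhook POSTs directly, so some backend piece has to accept them and push them onward. Below is a minimal sketch, assuming a small Node relay between Square and the browser; the endpoint path, event name, and CORS setting are assumptions for illustration:

import express from "express";
import { createServer } from "http";
import { Server } from "socket.io";

const app = express();
app.use(express.json());

const httpServer = createServer(app);
const io = new Server(httpServer, { cors: { origin: "*" } });

// Square POSTs here (the URL registered in the webhook subscription).
// In production you would also verify Square's signature header.
app.post("/webhooks/square", (req, res) => {
  io.emit("order-updated", req.body); // fan out to every connected screen
  res.sendStatus(200);
});

httpServer.listen(3000);

On the Vue side, a socket.io-client listener would write each order-updated payload into the Pinia store, and reactivity takes care of repainting the screen.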
"Marten does need to know what the event types are before you issue queries against the event data"
But I do, for example,
session.Events.FetchStream(streamId)
and
session.Events.Load<MembersJoined>()
and they work fine.
Thanks
I have found that, in practice, if your application commits events to Marten prior to reading any streams, you don't have to register any events.
If, however, you wind up reading streams prior to committing anything, you need to configure your events first. My application conditionally rebuilds projections on startup prior to accepting input, so it wound up causing this problem.
How to configure Marten for your events
When configuring Marten, pass in an IEnumerable<Type> of all your event types. For instance, my registration looks like this:
// Register every type in the assembly that derives from EventBase,
// so Marten knows about all event types before any stream is read.
cfg.Events.AddEventTypes(
    typeof(EventBase)
        .Assembly
        .GetTypes()
        .Where(typeof(EventBase).IsAssignableFrom)
);
I recommend using a base class for all events as it makes things like this super simple.
Having configured Marten with those events, you are now free to query streams as you need to.
I am looking for a solution for logging data changes for a public API.
There is a need to tell the client app which tables from the DB have changed and need to be synchronised since the app last synchronised, and this also needs to be scoped to a specific brand and country.
Current Solution:
A Version table holding the class_names of models, touched by every model on create, delete, touch and save actions.
When we touch the Version for a specific model, we also look at the reflected associations and touch those too.
The Version model is scoped to brand and country.
The REST API responds to a request that includes last_sync_at (a timestamp), brand and country.
Rails looks up Version records with the given attributes and returns the class_names of models that have changed since the last_sync_at timestamp.
This solution works, but performance is a problem, and it is also hard to maintain.
UPDATE 1:
Maybe the simpler question is:
What is the best practice for finding out and telling frontend apps when and what needs to be synchronized, in terms of the whole concept?
Conditions:
Frontend apps need to download only their own content changes, not the whole dataset.
Synchronization must not be invoked when an application from a different country or brand needs to be synchronized.
Thank you.
I think that the best solution would be to use Redis (or some other key-value store) and save your information there. Writing to Redis is much faster than to any SQL DB. You can write a service class that would save the data like:
RegisterTableUpdate.set(table_name, country_id, brand_id, timestamp)
Such a call would save the given timestamp under a key that could look like, e.g., table-update-1-1-users, where the first number is the country id and the second is the brand id, followed by the table name (or you could use country and brand names if needed). If you would like to find out which tables have changed, you would just need to find the Redis keys matching the pattern "table-update-1-1-*", iterate through them and check which are newer than the timestamp sent through the API.
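A minimal sketch of that scheme in Node with ioredis (the key layout follows the answer; the function names are made up):

import Redis from "ioredis";

const redis = new Redis(); // localhost:6379 by default

// Record "this table changed now" under table-update-<country>-<brand>-<table>.
async function registerTableUpdate(
  table: string, countryId: number, brandId: number, timestamp = Date.now()
): Promise<void> {
  await redis.set(`table-update-${countryId}-${brandId}-${table}`, timestamp);
}

// Return the tables that changed since the client's last sync.
// KEYS is fine for a small keyspace; prefer SCAN in production.
async function changedTablesSince(
  countryId: number, brandId: number, lastSyncAt: number
): Promise<string[]> {
  const prefix = `table-update-${countryId}-${brandId}-`;
  const keys = await redis.keys(`${prefix}*`);
  const changed: string[] = [];
  for (const key of keys) {
    if (Number(await redis.get(key)) > lastSyncAt) {
      changed.push(key.slice(prefix.length));
    }
  }
  return changed;
}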
It is worth remembering that Redis is not as reliable as SQL databases; its reliability depends on configuration, so you might want to read the Redis guidelines and decide whether you would like to go for it.
You can take advantage of the fact that ActiveRecord automatically records every time it updates a table row (the updated_at column).
When checking what needs to be updated, select the objects you are interested in and compare their updated_at with the timestamp from the client app.
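As a sketch, the comparison itself is just a filter (the record shape here is an assumption):

interface SyncableRecord {
  id: number;
  updatedAt: Date; // maps to Rails' updated_at column
}

// Return only the records changed after the client's last sync.
function changedSince<T extends SyncableRecord>(records: T[], lastSyncAt: Date): T[] {
  return records.filter((r) => r.updatedAt > lastSyncAt);
}

In practice you would push this into the query itself (WHERE updated_at > last_sync_at) rather than filter in memory.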
The advantage of this approach is that you don't need to keep an additional table that lists all the updates on models, which should speed things up for the API users and be easier to maintain.
The disadvantage is that you cannot see the changes in data over time; you only know that a change occurred, and you can access the latest version. If you need to track changes in data over time efficiently, then I'm afraid you'll have to rework things from the top.
(read the last part; this is what you are interested in)
I would recommend that you use the decorator design pattern for changing the client queries: the client sends a query for what it wants, and the server decides what to give it based on the client's last update.
so:
the client sends a query that includes the time it last synched
the server sees the query and takes into account the client's nature (device-country)
the server decorates (changes accordingly) the query to request from the DB only the relevant data, and if that is not possible:
after the data are returned from the database manager they are trimmed to be relevant to where they are going
returns to the client all the new stuff that the client cares about.
I assume that you have a field on your DB entries recording when they were entered.
In that case, the "decoration" of the query would (abstractly) just be to add something like a "WHERE" clause to your query, stating that you want data entered after the last update.
Finally, if you want this to be done for many devices/locales/whatever, implement a decorator for the query and another for the result of the query, and serve them to your clients as they should be served. (Keep in mind that, in contrast with a subclassing approach, you only have to implement one decorator per device/locale/whatever, not one for every combination!)
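A minimal sketch of that decorator idea; the interface names and SQL shape are made up, and real code should use bound parameters rather than string interpolation:

interface Query {
  toSql(): string;
}

// The plain query the client asked for.
class BaseQuery implements Query {
  constructor(private table: string) {}
  toSql(): string {
    return `SELECT * FROM ${this.table}`;
  }
}

// Decorator: narrow any query to rows entered after the client's last sync.
class SinceLastSync implements Query {
  constructor(private inner: Query, private lastSyncAt: Date) {}
  toSql(): string {
    return `${this.inner.toSql()} WHERE entered_at > '${this.lastSyncAt.toISOString()}'`;
  }
}

// One decorator per device/locale/whatever; they compose freely.
const q = new SinceLastSync(new BaseQuery("orders"), new Date("2015-01-01"));
console.log(q.toSql());
// SELECT * FROM orders WHERE entered_at > '2015-01-01T00:00:00.000Z'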
Hope this helped!
I'm developing an Azure website where users can upload blobs and metadata. I want uploaded content to be deleted after some time.
The only way I can think of is going for a cloud app instead of a website, with a worker role that checks, say, every hour whether an uploaded file has expired, and deletes it if so. However, I'm going for a simple website here, without worker roles.
I have a function that checks whether an uploaded item should be deleted, and if the user does something on the page I can easily call this function. BUT: if the user isn't doing anything and the time runs out, the item won't be deleted because the function is never called, and the storage will never be freed. How would you solve this?
Thanks
Too broad to give one right answer, as you can solve this in many ways. But since you're using Web Sites, I do suggest you look at WebJobs and see if this might be the right tool for you, as it gives you the ability to run periodic jobs without the bulk of extra VMs in a web/worker configuration. You'll still need a way to manage your metadata so you know what to delete.
Regarding other Azure-specific built-in mechanisms, you can also consider queuing delete messages, with an invisibility time equal to the time the content is to be available. After that time expires, the queue message becomes visible, and any queue consumer would then see the message and be able to act on it. This can be your Web Job (which has SDK support for queues) or really any other mechanism you build.
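As a sketch of that queue idea with the @azure/storage-queue package (the queue name, connection string, and message shape are assumptions; note the visibility timeout is capped at seven days):

import { QueueClient } from "@azure/storage-queue";

const queue = new QueueClient(
  process.env.AZURE_STORAGE_CONNECTION_STRING!,
  "blob-expiry" // hypothetical queue name
);

// Enqueue a delete request that stays invisible until the content expires.
async function scheduleDelete(blobName: string, ttlSeconds: number): Promise<void> {
  await queue.sendMessage(JSON.stringify({ blobName }), {
    visibilityTimeout: ttlSeconds, // the message surfaces only after expiry
  });
}

A WebJob (or any other consumer) then receives the now-visible message, deletes the blob it names, and deletes the queue message.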
Again, a very broad question with no single right answer, so I'm just pointing out the Azure-specific mechanisms that could help solve this particular problem.
Like David said in his answer, there can be many solutions to your problem. One solution could be to rely on the blobs themselves. In this approach, you periodically fetch the list of blobs in the blob container and decide whether each blob should be removed. The periodic fetching could be done through an Azure WebJob (if the application is deployed as a website) or through an Azure Worker Role. The worker role approach is independent of how your main application is deployed; it could be deployed as a cloud service or as a website.
With that, there are two possible approaches you can take (both sketched after the list):
Rely on Blob's Last Modified Date: Whenever a blob is updated, its Last Modified property gets updated. You can use that to identify if the blob should be deleted or not. This approach would work best if the uploaded blob is never modified.
Rely on Blob's custom metadata: Whenever a blob is uploaded, you could set the upload date/time in blob's metadata. When you fetch the list of blobs, you could compare the upload date/time metadata value with the current date/time and decide if the blob should be deleted or not.
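Both approaches might look roughly like the following with the @azure/storage-blob package; the container name, the metadata key (uploadedAt), and the retention window are assumptions:

import { BlobServiceClient } from "@azure/storage-blob";

const service = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);
const container = service.getContainerClient("uploads"); // hypothetical name

const MAX_AGE_MS = 24 * 60 * 60 * 1000; // keep uploads for one day

async function deleteExpiredBlobs(): Promise<void> {
  // includeMetadata lets us read the custom uploadedAt value (approach 2).
  for await (const blob of container.listBlobsFlat({ includeMetadata: true })) {
    const uploaded = blob.metadata?.uploadedAt
      ? new Date(blob.metadata.uploadedAt) // approach 2: custom metadata
      : blob.properties.lastModified;      // approach 1: Last Modified
    if (Date.now() - uploaded.getTime() > MAX_AGE_MS) {
      await container.deleteBlob(blob.name);
    }
  }
}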
Another approach might be to use the container name as the "expiry date".
This might make deletion easier, as you could then just remove expired containers.
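For instance, with a hypothetical expiry-YYYY-MM-DD naming convention, a periodic job could drop whole containers:

import { BlobServiceClient } from "@azure/storage-blob";

const service = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

// Daily cleanup: drop every container whose date-stamped name has passed.
async function deleteExpiredContainers(): Promise<void> {
  const today = new Date().toISOString().slice(0, 10); // e.g. "2015-06-01"
  for await (const item of service.listContainers({ prefix: "expiry-" })) {
    // With expiry-YYYY-MM-DD names, string comparison orders by date.
    if (item.name.slice("expiry-".length) < today) {
      await service.deleteContainer(item.name);
    }
  }
}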
I am developing something with the ejabberd server and have come to need a change in the subscription logic. I am using ejabberd-2.1.11.
My need concerns how subscription works: I would like to change the logic so that when users upload their roster contacts, the subscription is automatically set to 'both' and immediately saved as 'B' in the subscription column of the rosterusers table. That way, users should be able to see each other online and in their contact lists as soon as the other party has registered on the server. (I hope this makes sense and is valid.)
I am a complete beginner in Erlang and the ejabberd architecture, but I have already developed some basic modules. My question is whether you could help me in this regard: how difficult is it to make this change, and could you give me some hints about where the changes would go?
I'd stay away from modifying the server; it conforms to standards and follows the specification, so if you ever need to move to another server or upgrade, you know it's just going to work.
What you would do to achieve this is implement this behavior on the client using the server's features.
If you are really sure you want to modify the server, mod_roster.erl is the file you want to be looking at.
If using an external DB, you can also modify the DB directly, but changes won't be reflected until the clients log back in.