The scenario is that I have two ActiveModels: Invitation and Guest.
I've published a bunch of events to the event stream of an invitation and at some point that invitation is accepted and subsequently a guest is created. I would like to copy the events on the event stream of the invitation across to the event stream of the guest.
I've thought of duplicating the original events with dup and updating the stream field to the guest's event stream, but this violates the unique constraint on the event_id field. So I would like the EventStore's publish mechanism to handle the persistence of the event.
I've also thought of copying the data attribute of the original events across to new instances of the events and using publish, but then the metadata (request_id, remote_ip, timestamp) on the new events would not reflect that of the original events, which is important to keep for auditability.
Is there some technique to perform this kind of transfer / duplication of RailsEventStore events?
There is a technique: linking. In recent Rails Event Store releases (v0.22.0 and later) you can link an existing event to other streams: https://railseventstore.org/docs/link/. A link retains the event's data and metadata.
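To make the idea concrete, here is a plain-Ruby sketch of what linking buys you (this is not the RailsEventStore API; see the docs linked above for the real client call). The event is persisted exactly once, and a link only adds a stream reference to it, so the event_id unique constraint is never touched and the original metadata survives:

```ruby
require "securerandom"

# Illustrative event shape: id, data, and the metadata we want to keep.
StoredEvent = Struct.new(:event_id, :data, :metadata)

class InMemoryEventStore
  def initialize
    @events  = {}                             # event_id => event, globally unique
    @streams = Hash.new { |h, k| h[k] = [] }  # stream name => [event_id, ...]
  end

  def publish(event, stream_name:)
    raise "duplicate event_id" if @events.key?(event.event_id)
    @events[event.event_id] = event
    @streams[stream_name] << event.event_id
  end

  # A link stores only a reference to the already-persisted event,
  # so nothing is copied and no constraint can be violated.
  def link(event_id, stream_name:)
    raise "unknown event" unless @events.key?(event_id)
    @streams[stream_name] << event_id
  end

  def read_stream(stream_name)
    @streams[stream_name].map { |id| @events[id] }
  end
end

store   = InMemoryEventStore.new
invited = StoredEvent.new(SecureRandom.uuid,
                          { name: "Bob" },
                          { request_id: "abc", remote_ip: "1.2.3.4" })
store.publish(invited, stream_name: "Invitation$1")

# Invitation accepted, guest created: link instead of copying.
store.link(invited.event_id, stream_name: "Guest$1")

linked = store.read_stream("Guest$1").first
linked.equal?(invited)  # => true: same event, original metadata intact
```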
In my chat app, when a user gets banned, I want to throw him out of the chat. I also have a pinned message (which has nothing to do with the banning of users) shown on top of the chat view (via a Stack). I now want to connect to Firestore, to the specific document of the group, and constantly observe whether the user is banned (does that cost a lot of bandwidth?). I also want to update the pinned status message. Both pieces of info are in the same document, in the fields admin (a list of all banned users) and pinnedMessage.
Note that I am using both Firestore and the Realtime Database (the Realtime Database exclusively for saving chat messages and FCM tokens, Firestore for the rest: group/user details etc.). I am also familiar with StreamBuilder and FutureBuilder, which I think are not appropriate here?
I also want to save resources...
When you read or listen to a document in Firestore, you are always reading the entire contents of the document. Unlike the Realtime Database, you cannot choose to listen to or read a specific field in a document from a mobile client.
If this means you would be reading too much data to accomplish a certain task, consider splitting the document up into multiple documents that contain only the data necessary for that task.
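For example, the split might look like this for the group document from the question (a purely illustrative layout, shown as Ruby hashes rather than Firestore client code; the document paths and field names are assumptions):

```ruby
# Before: one document, so every listener pays for every field on each change.
group_doc = {
  name:          "Flutter fans",
  members:       ["alice", "bob", "carol"],
  admin:         { banned: ["mallory"] },    # needed by the ban listener
  pinnedMessage: "Welcome! Read the rules."  # needed by the pinned banner
}

# After: the two frequently-watched fields live in their own small
# document, so one listener covers both without re-reading the rest.
groups = {
  "groups/42"        => { name:    "Flutter fans",
                          members: ["alice", "bob", "carol"] },
  "groups/42/status" => { admin:         { banned: ["mallory"] },
                          pinnedMessage: "Welcome! Read the rules." }
}
```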
I am building a social media app where multiple users can edit the same CloudKit record at the same time. Should I implement a locking mechanism so that only one user can edit at a time (these edits may conflict with each other), or does CloudKit have a handy built-in way for dealing with this?
If I implement a locking mechanism, my plan would be to add a binary attribute to the editable records--this attribute would have a value of 1 if someone else is editing, and 0 if no one is currently editing. Does this sound like a reasonable way to do this?
CloudKit has a mechanism for managing this called the change token. It's similar to a timestamp that's updated each time a record is changed. When you try to write, the last change token you have is passed to the server along with your new data. You can set policies that say how the server should handle collisions, such as the last writer always overwrites, or the last writer is rejected.
In the latter case, the second writer will receive an NSError. Embedded in the userInfo of that error are three versions of the record: the current one on the server, the version you tried to submit, and the common ancestor. This allows you to compare the differences, merge the data as appropriate, and re-save. Or you can re-fetch the record (which will update your version of the change token) and then save again.
I would recommend watching the WWDC CloudKit videos. I believe WWDC 2014 session 231, "Advanced CloudKit," and WWDC 2015 session 715, "CloudKit Tips and Tricks," have the most helpful info.
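The reject-then-merge flow can be sketched abstractly (plain Ruby here; FakeServer, StaleWriteError and the merge rule are illustrative stand-ins, not CloudKit API. In CloudKit itself you would set a save policy on CKModifyRecordsOperation and read the three record versions off the serverRecordChanged error):

```ruby
class StaleWriteError < StandardError
  attr_reader :server_record, :client_record, :ancestor_record

  def initialize(server_record, client_record, ancestor_record)
    super("record changed on server since it was fetched")
    @server_record   = server_record
    @client_record   = client_record
    @ancestor_record = ancestor_record
  end
end

class FakeServer
  def initialize(record)
    @record = record  # { token: Integer, fields: Hash }
  end

  def fetch
    @record.dup
  end

  # Reject a write made against a stale token, handing back the three
  # versions a client needs to merge: server copy, attempted copy, ancestor.
  def save(fields:, token:, ancestor:)
    if token != @record[:token]
      raise StaleWriteError.new(@record.dup, { fields: fields }, ancestor)
    end
    @record = { token: token + 1, fields: fields }
  end
end

server = FakeServer.new(token: 1, fields: { title: "Picnic", place: "Park" })

a = server.fetch  # two clients fetch the same version of the record
b = server.fetch

server.save(fields: a[:fields].merge(title: "BBQ"),
            token: a[:token], ancestor: a)    # A saves first; token becomes 2

begin
  server.save(fields: b[:fields].merge(place: "Beach"),
              token: b[:token], ancestor: b)  # B is rejected: token 1 is stale
rescue StaleWriteError => e
  # Three-way merge: take every field either side changed from the ancestor.
  base   = e.ancestor_record[:fields]
  theirs = e.server_record[:fields].reject { |k, v| v == base[k] }
  mine   = e.client_record[:fields].reject { |k, v| v == base[k] }
  server.save(fields: base.merge(theirs).merge(mine),
              token: e.server_record[:token], ancestor: e.server_record)
end
# server now holds { title: "BBQ", place: "Beach" }
```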
I see the entries in the API documentation for getting "CourseCompletion" objects. But do not see how these are entered in the Learning Environment. Can you explain what these objects are?
CourseCompletion records are essentially meta-data type notes that you can attach to a user/course-offering combination to make a record of a user having "completed" a course on such-and-such a date. The course completion record can also carry an expiry date for when the "completion" becomes out of date or no longer relevant. These features are not heavily used by D2L customers, and are not exposed through the Web UI.
I don't believe there is any automation within the back-end service around the creation or modification of these records (for example, there isn't an event in the system when a course completion record would get created: a client would need to manually create such a record when it wants one to exist).
I'm using Shippinglogic to gather tracking information for submitted tracking numbers.
I'm handling a number of things behind the scenes of the UI, but I'm not sure how to properly organize this.
So here's the flow:
User submits tracking number either via form input or URL (example.com/track/1234567890). If the number doesn't already exist in the database, then the next step happens...
After a number is submitted, I run the number through some logic to determine who the carrier is (UPS, FedEx, USPS, DHL, etc). The user never specifies...it's all done automatically.
After the carrier is determined, then I need to make the actual call to the carrier API (via Shippinglogic) to get tracking information.
After I get the tracking details, I need to save it to the database.
Then, the tracking details are finally returned to the user.
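The carrier-detection step in the flow above is usually done by matching the number's format. These patterns are common heuristics, not official carrier specs, so verify them against each carrier's documentation before relying on them:

```ruby
# Guess the carrier from the tracking number's shape alone.
def detect_carrier(tracking_number)
  n = tracking_number.to_s.gsub(/\s+/, "").upcase
  case n
  when /\A1Z[0-9A-Z]{16}\z/      then :ups    # UPS: "1Z" + 16 chars
  when /\A(94|93|92)\d{18,20}\z/ then :usps   # USPS IMpb barcodes
  when /\A\d{12}\z|\A\d{15}\z/   then :fedex  # FedEx Express / Ground
  when /\A\d{10}\z/              then :dhl    # DHL Express waybill
  else :unknown
  end
end

detect_carrier("1Z9999W99999999999")      # => :ups
detect_carrier("9400110200881234567890")  # => :usps
```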
Since users can submit either via form or via URL (without any sort of POST action), I'm trying to run it all through the show method in my controller, where I check whether the number exists and, if not, create it via Number.create(:tracking_number => '1234567890'). But once I get into the model, I just kinda get lost on what to do next.
Well, I would have the users directed to the new or create actions, where you can handle creation and detect whether the record already exists. Once that's handled, you most likely want to send them off to the show page, where you can display the tracking information from your data source along with anything you've saved in your database. This way you preserve the conventions of the application, and other developers will be able to work with it if they need to.
Edit:
I had a project like this, and I moved my detection code out into a separate method in the model so I could change it in one place and keep it abstracted from any specific call on the model. I performed my API requests in the background, on the model, so I could cache data in the database and refresh the records that were deemed active once an hour.
Basically, anything that needed to read data from the record or save data as part of the record became a method on the model. This let me pull a bunch of logic out of the controller actions and the like.
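A miniature version of that model-centric layout might look like this (TrackingNumber, fetch_from_carrier, and the one-hour window are all illustrative, not Shippinglogic's API; in a real app the model would be an ActiveRecord class and fetch_from_carrier would call Shippinglogic):

```ruby
class TrackingNumber
  REFRESH_INTERVAL = 3600  # seconds; refresh active records once an hour

  attr_reader :number, :details, :refreshed_at

  def initialize(number)
    @number = number
  end

  # The controller only ever calls this; caching lives in the model.
  def details!
    refresh! if stale?
    @details
  end

  def stale?
    @refreshed_at.nil? || Time.now - @refreshed_at > REFRESH_INTERVAL
  end

  private

  def refresh!
    @details = fetch_from_carrier  # the slow external call
    @refreshed_at = Time.now
  end

  def fetch_from_carrier
    { status: "In transit" }       # stub standing in for the carrier API
  end
end

t = TrackingNumber.new("1234567890")
t.details!  # first call hits the "API"
t.details!  # within the hour, returns the cached details without a new call
```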
I'm developing an iPhone app that uses a user account and a web API to get results (json) from a website. The results are a list of user's events.
Just looking for some advice or strategies - when to cache and when to make an api call... and if the iPhone SDK has anything built in to handle these scenarios.
When I get the results from the server, they populate an array in a controller. In the UI, you can go from a table listing view, to a view of an individual event result - so two controllers share a reference to the same event object.
What gets tricky is that a user can change the details of an event. In this case I make a copy of the local Event object for the user's changes, in case they make an error. If the api call successfully goes through and updates that event on the server, I take these local changes from the Event copy and set the original Event object to match with setters.
I have the original controller observing if any change is made to the local Event object so that it can reflect it in the UI.
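That copy-then-commit flow can be sketched like so (plain Ruby rather than the iOS SDK; Event, update_event, and the fake server call are illustrative names, not anything from the question's codebase):

```ruby
Event = Struct.new(:title, :location)

def update_event(event, changes)
  draft = event.dup  # the user edits the copy, not the shared original
  changes.each { |attr, value| draft[attr] = value }

  if server_accepts?(draft)  # stand-in for the real API call
    # Only now copy the changes back via setters, so both controllers
    # (which share this object) see the committed state.
    changes.each_key { |attr| event[attr] = draft[attr] }
    true
  else
    false                    # on failure the shared object is untouched
  end
end

def server_accepts?(_draft)
  true  # pretend the update call succeeded
end

shared = Event.new("Lunch", "Cafe")
update_event(shared, location: "Rooftop")
shared.location  # => "Rooftop" everywhere, since the object is shared
```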
Is this the right way of doing things? I don't want to make too many API calls to reload data from the server, but after a user makes an update, should I be pulling down the list again with another API call?
...I want to be careful that my local objects don't become out of sync with the remote.
Any advice is appreciated.
I took a similar approach with an app I built. I simply made a duplicate version of the remote data model with Core Data, and I use etags on the backend to prevent sync issues (in my case, it's okay to create duplicate records).
It sounds like you're taking a good approach to this.
Some time back, I developed an iOS app with almost the same requirement: store data on the server as well as locally, both to avoid a lot of network calls and so the user can see their information without any delay.
In that app, the user could store photos, notes, check-ins and social media posts, and from all this data the app could form a beautiful timeline. So what we did was keep everything locally, and whenever the user's phone came into a Wi-Fi zone we started uploading that data to the server and synced the two (local and remote) databases.
Note that this method works well only when a single user can access the data.
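A minimal sketch of that queue-locally-then-sync approach (OfflineQueue, the wifi? check, and the upload callable are all illustrative):

```ruby
class OfflineQueue
  def initialize(upload:)
    @pending = []      # items saved locally but not yet on the server
    @upload  = upload  # callable that pushes one item to the server
  end

  def save(item)
    @pending << item   # always write locally first, never block the UI
    flush if wifi?
  end

  # Push everything pending; keep anything whose upload failed queued.
  def flush
    @pending.delete_if { |item| @upload.call(item) }
  end

  def wifi?
    false              # stand-in for a real reachability check
  end
end

uploaded = []
queue = OfflineQueue.new(upload: ->(item) { uploaded << item; true })

queue.save(photo: "beach.jpg")  # offline: stays in the local queue
queue.flush                     # Wi-Fi appears: sync to the server
uploaded                        # => [{ photo: "beach.jpg" }]
```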