AWS QLDB Delete Record and its Revision History

We are looking into using AWS QLDB to store data that will remain in QLDB for a few months, after which we want to move untouched data into AWS Glacier. Is it possible to completely remove a record and its revision history from a QLDB table, such that there is no record of it ever having been there?

QLDB is an append-only, immutable ledger database. It is only possible to remove a document from the user and committed views. The revisions would still be present in the history function and in the journal storage.
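To make that distinction concrete, here is a minimal TypeScript sketch. The table and field names are hypothetical, and the driver call shown in the comments is from the official Node.js driver (amazon-qldb-driver-nodejs). A PartiQL DELETE removes the document only from the user and committed views; the history() function still returns every revision:

```typescript
// Hypothetical table/field names. With the official Node.js driver you
// would run these statements inside a transaction:
//   const driver = new QldbDriver("my-ledger");
//   await driver.executeLambda(txn => txn.execute(stmt, ...params));

// Removes the document from the user and committed views only.
function buildDeleteStatement(table: string, idField: string): string {
  return `DELETE FROM ${table} WHERE ${table}.${idField} = ?`;
}

// Every past revision, including those of deleted documents, remains
// queryable through the history() function.
function buildHistoryQuery(table: string): string {
  return `SELECT * FROM history(${table})`;
}
```

So even after the DELETE commits, running the history query (or inspecting the journal export) will show the document's full revision chain.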

Related

How to move one database in influx to another server?

I want to move one database from one server to another
I followed this guide: https://docs.influxdata.com/influxdb/v0.12/administration/config/
But when I restored the metadata, it wiped out all my existing usernames and passwords along with the new database.
Do I need to restore the metadata at all, and is there a way to restore it without wiping out existing databases?
Metadata should not be imported when restoring a single database into an existing server.

How do I deal with data changes every time there are new objects available in Parse?

In my app, I update PFObjects very frequently and save them locally with Parse's local datastore feature. Every time the user launches my app, I'd like to check if there is new data available and, if so, update the objects.
When the user opens the app, I retrieve the new objects and compare them with the ones already in memory. If they are not equal, I replace the in-memory objects with the ones just fetched.
Check the updatedAt property on the remote version of the object and update the object if it is newer. (Also would need to check for any completely new objects in your remote database)
But if you're going to retrieve the objects from Parse anyway, then why not just replace them regardless?
If you don't have that much data, you could run a Parse query for the most recently updated object, compare its updatedAt against the newest local object, and download and replace the local data with the remote data only when the local copy's latest update is older than the remote database's.
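The comparison above can be sketched as a small TypeScript helper. With the Parse JS SDK you would obtain the remote timestamp via a query sorted descending on updatedAt and take the first result; here it is just a parameter, so the function names and null conventions are illustrative:

```typescript
// Decide whether the local cache should be replaced, given the newest
// updatedAt timestamp on each side. A null localLatest means nothing is
// cached yet; a null remoteLatest means the remote class is empty.
function needsRefresh(localLatest: Date | null, remoteLatest: Date | null): boolean {
  if (remoteLatest === null) return false; // nothing on the server to fetch
  if (localLatest === null) return true;   // nothing cached yet
  return remoteLatest.getTime() > localLatest.getTime();
}
```

Note this single-timestamp check cannot detect remote deletions; if objects can be deleted on the server, you still need a periodic full sync or a tombstone scheme.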

Best way to store and version lookup Entities in local storage

For my SPA I have a series of Lookup entities that are loading into my entity manager on page load for various pick lists and lookup values. What I'd like to do is store these entities in local storage and import them into my manager instead of requesting them over the network.
These lookups can be edited by three people in my company. What I'm trying to figure out is how to version these lookups in local storage so that they can be updated when a lookup changes (or at least give the client side a way to determine when the records are stale and request new ones). How can I achieve this? My lookups are simply tables in my overall database, and I don't see a way for the client side to recognize when the lookups have changed.
I'm reluctant to add a timestamp column because I would need to evaluate the entities in local storage and compare them to the ones on the database and get the ones needed. Not sure how I would save page load time there.
I'm considering moving all of my lookups into a separate database and version the whole thing, requesting new lookups when any one of them changes. I would need to write a mechanism for versioning this db whenever one of the 3 people makes an edit.
Has anyone found a better solution to this type of problem? My lookups() function dominates the wait time on users' first access.
Consider maintaining a separate version file or web API endpoint.
Invalidate lookups by type or as a whole rather than individually.
Bump the version number(s) when anything changes. Stash version number with your local copy. Compare and reload as needed.
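The version-check flow above can be sketched in TypeScript. The cache key, endpoint path, and type names here are hypothetical; only the compare-and-reload logic is the point:

```typescript
// A lookup payload stored alongside the version it was fetched under.
interface CachedLookups<T> {
  version: number;
  data: T;
}

// Reload when nothing is cached or the server's version is newer.
function isStale(cached: CachedLookups<unknown> | null, serverVersion: number): boolean {
  return cached === null || cached.version < serverVersion;
}

// In the SPA this would wrap localStorage plus a tiny version endpoint:
//   const cached = JSON.parse(localStorage.getItem("lookups") ?? "null");
//   const serverVersion: number = await (await fetch("/api/lookup-version")).json();
//   if (isStale(cached, serverVersion)) {
//     // fetch the full lookups, then store { version: serverVersion, data }
//   }
```

The version endpoint returns a single number, so the common case (nothing changed) costs one tiny request instead of re-downloading every lookup table.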

Adding some but not all data from new app version’s database to existing database

I am creating a core data app with preloaded data using an SQL file. I am able to create the preloaded data, insert that SQL file into the project, and there is no problem. When users open the app for the first time the pre-populated store is copied over to the default store.
However, I am thinking ahead: in future versions I will want to continue adding to this database. I want users to be able to download the new version with the latest DB without erasing user-generated data or user edits to data in the preloaded DB.
This is not a migration issue because the model has not changed. Basically, when a new version of the app is opened for the first time I want it to check for the presence of new objects in the pre-populated store and add them to the user store. Any suggestions are welcome.
Make your preloaded data include, for each object, the version in which that object was first added to the preload file. Then add new data as follows:
1. Look up the previous app version in user defaults and compare it to the current version. If they're the same, stop; otherwise continue to the next step. (If there is no previous version number, also continue to the next step.)
2. Fetch all preloaded objects that were added more recently than the saved version number from step 1.
3. Add those objects to the user's persistent store.
4. Update the app version in user defaults so it'll be current next time.
You can do this check every time the app launches. You'll want to save a numeric version number in user defaults that will always increase in future versions of the app.
The simple way to record the version info in the preload file is just to add an extra field on each entity type. A more elegant way would be to create a new entity called something like VersionWhereAdded that includes a version number and a to-many relationship to objects added in that version.
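The filtering step in that flow is language-agnostic, so here is a hedged TypeScript sketch of it; the field and type names are illustrative, not Core Data API (in the app itself this would be an NSFetchRequest with a predicate on the version attribute):

```typescript
// An object in the preload file, tagged with the app version that
// first shipped it. "addedInVersion" is the per-object field described above.
interface PreloadedObject {
  id: string;
  addedInVersion: number;
}

// savedVersion is the version number stored in user defaults;
// null means no version was ever recorded, so import everything.
function objectsToImport(
  preload: PreloadedObject[],
  savedVersion: number | null,
): PreloadedObject[] {
  if (savedVersion === null) return preload;
  return preload.filter(o => o.addedInVersion > savedVersion);
}
```

Using a monotonically increasing integer (rather than the marketing version string) keeps the comparison trivial and avoids parsing "1.10" vs "1.9".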

How do I restore three items from a backup made using Heroku PG Backups?

I've got a Rails app running on Heroku and installed the free pgbackups addon.
There are three records I'd like to restore from the backup.
According to the Heroku docs, a restore restores the whole database.
How do I restore just these three records?
Create a new database, load the pgbackup into it, and then cherry pick what you want out of it.
As far as I know, Heroku uses what the pg_dump section of the manual calls the custom format ("-Fc") for everything. That can't be read by anything but pg_restore, so you're limited to what it knows how to do. You can get pg_restore to process only a single table, which can speed things up if your database is big and you only care about a few records in one table. But there's no way to restore only a few records out of it; you'll have to restore the entire table they're in and then dump those records back out.
