Not able to import entities with EntityState "Added" - Breeze

I'm having an issue with importing cached data. I have a very simple page right now to use as a proof of concept. There is a single table in the back end, and I have all CRUD functionality working. When the user makes a change to the local data, I save the pending changes to local storage:
var bundle = em.exportEntities(em.getChanges());
window.localStorage.setItem("waterLevelChanges", bundle);
After the page loads, I import the entities:
var bundle = window.localStorage.getItem("waterLevelChanges");
if (bundle)
em.importEntities(bundle);
This works perfectly if I'm editing an existing record. However, any records that I have added but not yet saved to the database are not imported. In the bundle they have an EntityState of "Added". I read that there is an issue if you don't use the temp keys, but I'm letting Breeze generate the temp keys and manage them as it likes. I have verified the data is stored in the local cache by looking in the developer tools, and I can also see it in the bundle when I debug.

Our tests show that your stated scenario should work fine.
See this DocCode:export/importTests.js test, where a new Order is exported, saved to browser storage, and re-imported; its "Added" state is confirmed. The Order type has server-generated, temporary keys.
I think you'll have to create a repro of your failing scenario to convince me that it doesn't work. Perhaps you might start by forking this Todos plunker and modifying it to make your point; a TodoItem also has server-generated, temporary keys.
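For reference, a minimal sketch of the round-trip that test exercises, assuming an EntityManager em whose metadata is already loaded (the storage key name here is arbitrary):

// export only unsaved changes; entities in the "Added" state are included
var bundle = em.exportEntities(em.getChanges());
window.localStorage.setItem("pendingChanges", bundle);

// later, e.g. after a page reload, import them back into the manager
var saved = window.localStorage.getItem("pendingChanges");
if (saved) {
    em.importEntities(saved);
    // the added entities should be back in cache, still unsaved
    var pending = em.getChanges();
    console.log(pending.length); // > 0, includes the re-imported "Added" entities
}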

Related

How to bust the cache or obtain the cache key when using the <distributed-cache> tag helper in ASP.NET Core MVC

Having an issue with the <distributed-cache> tag and Redis.
We have a <partial> Razor view that displays the results of a long-running query. We acquire the data from a service injected using @inject. The data updates very rarely, so we wrap the content in a <distributed-cache> tag helper with a long expires-after attribute.
However, when the data is finally updated (in another part of our app), we need to delete that key from the distributed cache, in order to force the page to update on next execution. (It's not possible for us to predict when the data will change - we can only respond to an external event.)
The problem we're having is that, despite having a fixed name attribute, the cache key appears to be impossible to predict. For example, <distributed-cache name='_myQuery' vary-by-user='true'> creates a key in Redis along the lines of 7/za/Bc/ZRn/MsR/hG69TYTx1LEzqBvlyH1OLJgrpk4=.
How can I either:
Predict/calculate what the cache key is going to be in redis, so that I can delete it in another part of our application?;
Force the <distributed-cache> tag to ignore the cached value this one time? (I know about the enabled attribute, but that won't work here because the page doesn't know when to invalidate the cache.)
After digging into the ASP.NET Core source I've found the CacheTagKey.cs file, which contains the GenerateKey() and GenerateHashedKey() methods. These methods build the key from a number of parameters, then SHA-256 hash it and return it base64-encoded.
So it looks like I can use this to predict the cache key and solve the issue.
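For reference, a minimal sketch of the hashing step only. The exact pre-hash string that GenerateKey() composes from the name, vary-by values, etc. is an internal implementation detail, so the generatedKey argument here is an assumption you would need to verify against the source:

using System;
using System.Security.Cryptography;
using System.Text;

static class CacheKeyHelper
{
    // Mirrors the GenerateHashedKey() step: SHA-256 over the generated
    // key string, returned as base64. Feeding it the same string that
    // CacheTagKey.GenerateKey() builds should yield the Redis key,
    // which could then be removed via IDistributedCache.Remove().
    public static string Hash(string generatedKey)
    {
        using (var sha256 = SHA256.Create())
        {
            var bytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(generatedKey));
            return Convert.ToBase64String(bytes);
        }
    }
}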

Delphi FireDAC: how to refresh data in cache

I need to refresh data in a TFDQuery which is in cached updates mode.
To simplify my problem, let's suppose my MS Access database is composed of two tables that I have to join:
LABTEST(id_test, dat_test, id_client, sample_typ)
SAMPLEType(id, SampleName)
In the Delphi application, I am using a TFDConnection and one TFDQuery (with CachedUpdates enabled) whose SQL joins the two tables:
"SELECT T.id_test, T.dat_test, T.id_client, T.sample_typ, S.SampleName
FROM LABTEST T
left JOIN SAMPLEType S ON T.sample_typ = S.id"
In my application, I also use a DBGrid to show the result of the query, and a button to edit the 'sample_typ' field, like this:
qr.Edit;
qr.FieldByName('sample_typ').AsString := ce2.Text;
qr.Post;
Editing the 'sample_typ' field works fine, but the corresponding 'SampleName' field does not change in the grid after the update; it is not refreshed.
The problem is here: if I refresh the query, an exception is raised: "Cannot refresh dataset. Cached updates must be committed or canceled and batch mode terminated before refreshing."
If I commit the updates, the data will be sent to the database, and I don't want that; I need to keep the data in the cache until the end of the operation.
Likewise, if I leave cached updates mode, the data will be refreshed in the grid but will be sent to the database after qr.Post, and I don't want that either.
I need to refresh the data in the cache. What is the solution?
Thanks in advance.
The issue comes down to the fact that you haven't told your UI that there is any dependency between the two fields. It clearly can't redo the join itself without resubmitting the query, so if you don't want to send the updates and reload, you will have a problem.
It's not clear exactly what you are trying to do, but these two ideas may help.
If you are not going to edit the fields in the SAMPLEType table (S), then load the values from that table into a lookup table. You can load this into a TFDMemTable, e.g. via an adapter that loads from a query. Your UI controls can then show the value found by looking up the key in your local TFDMemTable; depending on the UI control, this might be a lookup field or some such. A minimal sketch of this idea follows below.
You may also be able to store your main data in a TFDMemTable with an adapter: you can specify different TFDCommands to read the whole recordset and to refresh, update, insert and delete a record. The TFDCommands can act on multiple tables for joined recordsets like this. That would automatically refresh the individual record for you when you post it.
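A sketch of the lookup-field idea, assuming the sample types have been fetched into a query qrSampleTypes and copied to a TFDMemTable mtSampleTypes (the component names are placeholders, and persistent fields generally have to be set up before the main dataset is opened):

var
  fld: TStringField;
begin
  // copy the lookup data once into a local memory table
  mtSampleTypes.CopyDataSet(qrSampleTypes, [coStructure, coRestart, coAppend]);

  // define a lookup field on the main query so the grid resolves
  // SampleName locally instead of re-running the join
  fld := TStringField.Create(qr);
  fld.FieldName := 'SampleNameLookup';
  fld.FieldKind := fkLookup;
  fld.KeyFields := 'sample_typ';        // key in the main dataset
  fld.LookupDataSet := mtSampleTypes;   // local copy of SAMPLEType
  fld.LookupKeyFields := 'id';
  fld.LookupResultField := 'SampleName';
  fld.Size := 60;
  fld.DataSet := qr;
end;

The grid column would then be bound to SampleNameLookup, which re-evaluates whenever sample_typ changes, without touching the cached updates.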

Updating Core Data performance

I'm creating an app that uses core data to store information from a web server. When there's an internet connection, the app will check if there are any changes in the entries and update them. Now, I'm wondering which is the best way to go about it. Each entry in my database has a last updated timestamp. Which of these 2 will be more efficient:
Go through all entries and check the timestamp to see which entry needs to be updated.
Delete all the entity's objects and re-download everything.
Sorry if this seems like an obvious question and thanks!
I'd say option 1 would be most efficient, as there is rarely a case where downloading everything (especially in a large database with large amounts of data) is more efficient than only downloading the parts that you need.
I recently did something similar.
I solved the problem by assigning a unique ID and a global 'updated' timestamp, and thinking in terms of 'delta' changes.
To explain better: I have a global 'latest update' variable stored in user preferences, with a default value of 01/01/2010.
This is roughly my JSON service:
response: {
metadata: {latestUpdate: 2013...etc}
entities: {....}
}
Then, this is what's going on:
pass the 'latest update' to the web service and retrieve a list of entities
update the Core Data store
if everything went fine with Core Data, the 'latestUpdate' from the service metadata becomes my new 'latest update' variable stored in user preferences
That's it. I am only retrieving the needed changes, and of course the web service is structured to deliver a proper list. That is: a web service backed by a database can deal with this matter quite well, leaving the iPhone to be a 'simple client' only.
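A minimal Swift sketch of that client-side flow; the service call and the Core Data upsert are hypothetical stubs, and all names are placeholders:

import Foundation

struct SyncMetadata { let latestUpdate: String }
struct ServerEntity { /* decoded fields elided */ }

// hypothetical network call: fetch only entities changed since the marker
func fetchDelta(since marker: String,
                completion: (SyncMetadata, [ServerEntity]) -> Void) {
    completion(SyncMetadata(latestUpdate: marker), []) // stubbed
}

// hypothetical Core Data upsert; stubbed for the sketch
func applyToCoreData(_ entities: [ServerEntity]) throws { }

let defaults = UserDefaults.standard
// default marker when the app has never synced (the 01/01/2010 above)
let lastUpdate = defaults.string(forKey: "latestUpdate") ?? "2010-01-01"

fetchDelta(since: lastUpdate) { metadata, entities in
    do {
        try applyToCoreData(entities)
        // advance the marker only after the save succeeds, so a failed
        // sync is simply retried from the old marker next time
        defaults.set(metadata.latestUpdate, forKey: "latestUpdate")
    } catch {
        // keep the old marker; the next sync re-requests this delta
    }
}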
But I have to say that for small amounts of data, it is still quite performant (and more bug-free) to download the whole list on each request.
As per our discussion in the comments above, you can model your Core Data entity with version counters like this:
CoreDataEntityPerson:
name : String
name_version : int
image : BinaryData
image_version : int
You can now model the server XML in the following way:
<person>
<name>michael</name>
<name_version>1</name_version>
<image>string_converted_imageData</image>
<image_version>1</image_version>
</person>
Now you can follow these steps:
When the response arrives and you parse it, you initially create a new object from the entity and fill in the data directly.
Next time, when you perform an update on the server, you increase the version count of an entry by 1 and store it.
E.g., let's say the name michael is now changed to abraham; the version count of name_version on the server will then be 2.
This updated version count will come in the response data.
Now, while storing the data in the same object, if you find the version count to be the same, the update of that entry can be skipped; but if you find the version count has changed, the update of that entry needs to be done.
This way you can efficiently check each entry and perform updates only on the changed entries. A sketch of this comparison follows below.
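A minimal Swift sketch of that comparison, with hypothetical type names matching the model sketched above:

import CoreData

// hypothetical decoded server record, parsed from the XML above
struct ServerPerson {
    let name: String
    let nameVersion: Int32
    let image: Data
    let imageVersion: Int32
}

// hypothetical NSManagedObject subclass for the entity sketched above
final class Person: NSManagedObject {
    @NSManaged var name: String
    @NSManaged var nameVersion: Int32
    @NSManaged var image: Data
    @NSManaged var imageVersion: Int32
}

// overwrite an attribute only when its server-side version counter
// has advanced past the locally stored one
func apply(_ server: ServerPerson, to person: Person) {
    if server.nameVersion > person.nameVersion {
        person.name = server.name
        person.nameVersion = server.nameVersion
    }
    if server.imageVersion > person.imageVersion {
        person.image = server.image
        person.imageVersion = server.imageVersion
    }
}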
Advice:
The above approach works best when you're dealing with large amounts of data to update.
In the case of simple text entries for an object, simply overwriting the data on all entries is efficient enough, and it also keeps the response model simple.

Issue using executeQueryLocally without server metadata

I'm not using EF, so I have followed the NoDb sample to successfully load data from my Web API without using server-side metadata. After the initial load, I was hoping to use the local data cache in the EntityManager while the user interacts with the page. The problem is that when I call executeQueryLocally, the cached data set is empty. I stepped through the code to see why the data wasn't being saved to the cache, and found two issues:
in _getEntityType, metadataStore.isEmpty() was returning true.
in _getEntityType, metadataStore._getEntityTypeNameForResourceName was returning nothing
To get around this, I added calls in my code to metadataStore.addDataService and metadataStore._setEntityTypeForResourceName. After adding these, the cache was saved properly and executeQueryLocally worked. I'm assuming this was not the intended way to get this to work... Is there something else I am doing wrong? Or is this a bug that can be fixed?
Sorry for taking so long getting back to this one.
We just made metadataStore.setEntityTypeForResourceName public in Breeze v1.1.3 (we renamed the method to remove the leading '_').
Otherwise, you did exactly the right thing. Good catch.
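A minimal sketch of that setup, assuming Breeze 1.1.3+ and hypothetical names ("api/persons" service, "Persons" resource, "Person" type) for an EntityType already defined on the client:

var ds = new breeze.DataService({
    serviceName: "api/persons",
    hasServerMetadata: false   // no server-side metadata, as in NoDb
});
var em = new breeze.EntityManager({ dataService: ds });

// register the data service and map the query resource name to the
// client-defined EntityType, so locally cached query results resolve
em.metadataStore.addDataService(ds);
em.metadataStore.setEntityTypeForResourceName("Persons", "Person");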

TClientDataSet and processing records with StatusFilter

I'm using a TClientDataSet as a local dataset without the provider concept. After working with it, a method is called that should generate the corresponding SQL statements, using the StatusFilter property to resolve the changes (basically, generate the SQL).
This looked easy initially after reading the documentation (set StatusFilter to [usInserted] and process all the insert SQL; set StatusFilter to [usModified] and process all the updates; the same with deletes — the basic loop is sketched after the examples below), but after a few tests it now looks far from trivial. For example:
If I add a record, then edit it: setting the StatusFilter to [usInserted] displays it, but with the original data.
If I add a record, then edit it, then delete it: the record appears both with StatusFilter set to [usInserted] and with it set to [usModified].
And other similar situations...
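For reference, the basic loop described above looks roughly like this (a sketch, assuming a TClientDataSet cds with pending changes; the actual SQL generation is elided):

cds.StatusFilter := [usInserted];
cds.First;
while not cds.Eof do
begin
  // generate an INSERT from the current record's field values
  cds.Next;
end;

cds.StatusFilter := [usModified];
cds.First;
while not cds.Eof do
begin
  // generate an UPDATE; TField.OldValue holds the pre-edit value
  cds.Next;
end;

cds.StatusFilter := [usDeleted];
cds.First;
while not cds.Eof do
begin
  // generate a DELETE from the old key values
  cds.Next;
end;

cds.StatusFilter := [];  // back to the normal view

The interactions listed in the examples above are exactly what this naive loop does not handle.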
1) I know that if I first process all inserts, then all updates, then all deletes, the database will end up in the correct state, but this approach looks far from right (it generates useless SQL statements).
2) I've tried to access the PRecInfo(ClientDataSet.ActiveBuffer + ClientDataSet.RecordSize).Attribute information (dsRecNew, dsRecOrg, etc.) but still haven't managed to resolve the logic.
3) I could program the logic to resolve it, for example before processing an insert, setting StatusFilter to [usDeleted] and locating the record by its primary key to see whether it was deleted afterwards; the same with edits: before generating the insert, checking whether the record was updated later so the INSERT SQL uses the updated version, and so on. But it should be easier than that...
Did someone manage to solve this in an elegant and straightforward way? Am I missing something? Thanks
