I have a requirement in which locally created events have to be synced with the server synchronously. To explain this briefly, consider this scenario: two events occurred in the offline app, call them A and B, where A was created before B. In this case B should sync only after A has completed its sync.
To achieve this I must add an extra attribute to my entity to identify which event was created earlier. This attribute can hold either the creation time or an incremental number.
This is where I need some clarification.
Solution 1: Based on creation time
If I store the creation time in that attribute, will it be reliable in the scenario below?
Let's say I create an event “A” today, then change my device's date to the previous day, come back to my app, and create another event “B”. Which one will be considered earlier? If the app still reports “B” as the most recently inserted object, there is no issue and I can stick with this solution; otherwise I need to move to some other solution. Is there an optimised way to determine insertion order while relying on creation time?
Solution 2: Based on an incremental number
I believe Core Data does not provide an auto-incrementing ID, so it would need to be maintained manually. If so, what would be the best approach for tracking the maximum assigned value? Would it be reasonable to store it in NSUserDefaults? Whenever the app creates an event, the value would be fetched from NSUserDefaults, incremented by one, and assigned to the event. Is this a proper approach? If not, please guide me to a better solution.
There is no auto-incrementing number built into Core Data, as that is a business-logic-specific concern. However, it is not difficult to implement.
You can store the last number used in the metadata of the persistent store. During your insert, simply increment that number, add it to each entity as you go. When you are done inserting, update the number in the metadata.
That is how Core Data updates its own insert numbers for the objectID.
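A minimal Swift sketch of this metadata approach might look like the following; the `Event` entity, its `sequenceNumber` attribute, and the metadata key are all assumed names for illustration, not part of any built-in API:

```swift
import CoreData

// Hypothetical key under which the last-used number is kept in the
// persistent store's metadata.
let sequenceKey = "com.example.lastSequenceNumber"

// Reads the last number from the store metadata, increments it, and
// writes it back. The updated metadata is persisted with the next save.
func nextSequenceNumber(in container: NSPersistentContainer) -> Int64 {
    let coordinator = container.persistentStoreCoordinator
    guard let store = coordinator.persistentStores.first else { return 0 }
    var metadata = coordinator.metadata(for: store)
    let last = metadata[sequenceKey] as? Int64 ?? 0
    let next = last + 1
    metadata[sequenceKey] = next
    coordinator.setMetadata(metadata, for: store)
    return next
}

// Usage: assign the number at insert time, then save the context so the
// entities and the metadata are written together.
func insertEvent(in container: NSPersistentContainer) throws {
    let context = container.viewContext
    let event = Event(context: context) // hypothetical entity class
    event.sequenceNumber = nextSequenceNumber(in: container)
    try context.save()
}
```

Because the counter lives in the same store file as the events, it cannot drift from the data the way a value in NSUserDefaults could (for example, after restoring a backup of one but not the other).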
I'm in the middle of adding an "offline mode" feature to an app I'm currently working on. The basic idea is that users should be able to make changes to the data, for example edit the description of an item, without being connected to the internet, and the changes should survive between app launches.
Each change would normally result in an API request when working online, but the situation is different in offline mode.
Right now this is implemented by storing all data coming from the API in a Core Data database that acts as a cache. Entities that can be edited by the user have, in addition to their normal attributes, the following flags:
locallyCreated - the object was created offline
locallyDeleted - the object was deleted offline
locallyUpdated - the object was updated offline
This makes it possible to look for new/deleted/updated objects and send corresponding API requests when doing sync.
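For illustration, finding the pending changes with these flags could be a single fetch, roughly like this Swift sketch (the `Item` entity name is a placeholder):

```swift
import CoreData

// Fetches every object with at least one pending offline change; each
// match then maps to a create, delete, or update API request.
func pendingChanges(in context: NSManagedObjectContext) throws -> [Item] {
    let request = NSFetchRequest<Item>(entityName: "Item")
    request.predicate = NSPredicate(
        format: "locallyCreated == YES OR locallyDeleted == YES OR locallyUpdated == YES")
    return try context.fetch(request)
}
```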
This worked well for creating and deleting objects. However, one disadvantage I found with this approach is that when new data is retrieved from the API, all local changes (i.e. attributes of objects marked as locally updated) are lost, which means they have to be stored separately somehow.
What would be the best way to approach this problem?
Since you have your locallyUpdated key, the obvious answer is to modify your code that imports server changes, so that it doesn't overwrite changes to any object marked as changed. One way or another you need to avoid overwriting those changes, and you're already keeping a record of which objects have changes, so you already have the tools for a basic solution.
But you'll soon run into the complexity of syncing data. What if the local object has changes on one key, but the incoming data from the server has changes on a different key? You can't resolve that just by knowing that the local copy has changed somehow. Maybe you decide that the server always wins, or that the local copy always wins. Those are easy, if they make sense for your app. If you need to merge changes though, you have some work ahead of you. You would need to record not only a Boolean value indicating that changes were made, but also a list of which keys had changed. This can get complicated, but it's the nature of data syncing.
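As a rough sketch of key-level merging in Swift: the `Item` entity and its `changedKeys` attribute are hypothetical, and a comma-separated string is just one simple way to record which keys were edited offline:

```swift
import CoreData

// Applies incoming server values to an object, but skips any key the
// user has edited locally, so pending offline changes are not lost.
func applyServerValues(_ serverValues: [String: Any], to item: Item) {
    // `changedKeys` is assumed to hold a comma-separated list of
    // attribute names that were modified while offline.
    let locallyChanged = Set(
        (item.changedKeys ?? "").split(separator: ",").map(String.init))
    for (key, value) in serverValues {
        // The server wins only on keys with no pending local edit.
        if !locallyChanged.contains(key) {
            item.setValue(value, forKey: key)
        }
    }
}
```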
We started designing a process for detecting changes in our ERP database in order to build data warehouse databases. Since they don't like placing triggers on the ERP databases, or even enabling CDC (SQL Server), we are thinking of reading changes from the databases that get replicated to a central repository through transactional replication, and then keeping an extra copy into which the changes are merged (we will have CDC on the extra copy).
I wonder whether data that changes within, let's say, 15 minutes can be important enough that we should reconsider our design. The way we plan to design this, we would not be able to track every single change; we would only capture the latest state after a period of time. For example, if a value in a row changes from A to B, and then one minute later from B to C, the replication system will bring only that last value to the central repository. We will then merge the table with our extra copy (which might have held the value A; it will be updated to C, and we will have lost the value B).
Is there a good scenario in a data warehouse database where you need to track ALL changes a table has gone through ?
Taking care of historical data in a DW is important in some cases, such as:
When a dimension value changes. Say, a supplier merged with another and changed its commercial name.
When the fact table uses calculations derived from information outside the fact table that changes. Say the conversion rate changes, for example.
When you need to run queries that reflect fact information in previous periods (versions of the fact table).
An example where every change matters might be a bank account's balance, a storage warehouse item count, a stock price, etc.
For your particular case, you should check with your customer how the system will be used and what exactly its benefits are, and design accordingly. How granular the captured changes should be (every hour, every day, etc.) is primarily your customer's call.
Some techniques for handling dimension data changes are described in Kimball's Slowly Changing Dimensions.
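For illustration, here is a minimal sketch of a Kimball Type 2 slowly changing dimension, written in Swift with invented names:

```swift
import Foundation

// One version of a supplier dimension row. A nil validTo marks the
// current version.
struct SupplierDimensionRow {
    let surrogateKey: Int      // warehouse-generated key
    let supplierID: String     // business key from the ERP
    let commercialName: String
    let validFrom: Date
    var validTo: Date?
}

// On a name change, close the current version and append a new one, so
// facts recorded earlier still join to the old commercial name.
func applyNameChange(rows: inout [SupplierDimensionRow],
                     supplierID: String,
                     newName: String,
                     at date: Date,
                     nextKey: Int) {
    if let i = rows.lastIndex(where: { $0.supplierID == supplierID && $0.validTo == nil }) {
        rows[i].validTo = date
    }
    rows.append(SupplierDimensionRow(surrogateKey: nextKey,
                                     supplierID: supplierID,
                                     commercialName: newName,
                                     validFrom: date,
                                     validTo: nil))
}
```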
In direct answer to your question: it depends on the application.
Examples:
The value is the description field of an item in some inventory, where the items themselves do not change (i.e. item ID X is always a sparkly-thingy). In this case, saving short-lived descriptions is probably not required.
The value is the last reading of a temperature sensor. If it goes over a certain value, action is taken to bring the temperature back down. In this case you certainly need to save each and every change.
This raises three points:
The second case, where every single change is required, exposes very bad design if the system updates a single value in place; a well-designed system would insert new values with a timestamp into a table (see the sketch after this list).
Bad designs do exist. Hence:
The amount of data being warehoused depends on the nature of the data.
a. Will you be able to derive any intelligence from your warehoused data?
b. Will you be able to know based on changes at the database level what happened at the business level?
c. What happens to your data when the database schema changes because you upgraded the ERP product?
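Here is the sketch referenced in point 1, contrasting the append-only design with updating a single value in place; all names are illustrative:

```swift
import Foundation

// One timestamped reading; the array stands in for an append-only table.
struct SensorReading {
    let sensorID: String
    let value: Double
    let recordedAt: Date
}

var readings: [SensorReading] = []

// Append-only: every change survives, so the warehouse can replay history.
func record(_ value: Double, from sensorID: String) {
    readings.append(SensorReading(sensorID: sensorID,
                                  value: value,
                                  recordedAt: Date()))
}

record(21.4, from: "boiler-1")
record(23.9, from: "boiler-1") // both rows retained, unlike an UPDATE in place
```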
I'm wondering whether saving a log of changes at the table level is usable. You might be able to reverse engineer what a set of changes means and then save that to the warehouse, or actually get the ERP to "tell" you what it has done and save those changes.
My app has its own SQL database with, let's say, 2000 rows.
I know that in the future I will add some new rows and delete some old ones. Each row has a priority parameter that changes as the user interacts with the app.
I want to store all the priorities locally so that future app updates will not erase them.
So the question is: what is the best way of doing that? Remember that I will need fast access to those priorities in the future, and they must be easily mutable.
I have records that are added and updated, and I then sync them with the server.
If the server response says that one of them failed to update, I would like to revert that NSManagedObject to its previous value. From my research, UndoManager works as a stack, so I can't look up a particular record by its ID and undo just that record. Am I right?
And finally, what would you suggest for this issue?
You could track your objects by introducing your own ID attribute and syncing that with the server. I think this is a solid and robust design - I have used it many times without problems.
Apple does provide an objectID with each managed object, but this is really meant to ensure consistency of data across different managed object contexts. I would not recommend "abusing" this ID for external systems.
Your server could provide the old values (along with the message that it was not updated) and you could write that back into your Core Data store, finding the record using your ID attribute. For more granular change and update management, you could even use a time stamp attribute.
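A minimal sketch of that write-back, assuming a hypothetical `Record` entity with a `remoteID` attribute:

```swift
import CoreData

// Finds the record the server rejected by its ID attribute and restores
// the old values the server returned alongside the failure response.
func revert(recordID: String,
            to oldValues: [String: Any],
            in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<Record>(entityName: "Record")
    request.predicate = NSPredicate(format: "remoteID == %@", recordID)
    request.fetchLimit = 1
    guard let record = try context.fetch(request).first else { return }
    for (key, value) in oldValues {
        record.setValue(value, forKey: key)
    }
    try context.save()
}
```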
I have what I would presume is a very common situation, but as I'm new to iOS programming, I'm not sure of the optimal way to code it.
Synopsis:
I have data on a server which can be retrieved by the iPhone app via a REST service. On the server side, the data is objects with a foreign key (an integer id number).
I'm storing the data retrieved via REST in Core Data. The managed objects have an "objId" attribute so that I can uniquely identify the managed objects in the rest of my code.
My app must always reflect the server data.
On subsequent requests made to the server:
some objects may not be returned because they have been deleted on the server - in which case I need to delete the corresponding objects from Core Data, so that I'm reflecting the state of the server correctly.
some objects have attributes which have changed, therefore the corresponding managed objects need updating with the new data.
My solution - and question to you
To get things going in my app, I took the easiest route of deleting all objects in Core Data and then adding new objects created from the latest server-side data.
I don't think this is the best way to approach it :) As I progress with my app, I now want to hook my table view up to an NSFetchedResultsController, and I have realised that my approach of deleting everything and re-adding is not going to work any more.
What is the tried and trusted way of syncing Core Data with server side data?
Do I need to make a fetch request for each object id I get back from the server, and then update the object with the new data?
And then go through all of the objects in core data and see which ones have not been updated, and delete those?
Is that the best way to do it? It just seems a little expensive to do a fetch for each object in Core Data, that's all.
Pseudo code is fine for any answers :)
thanks in advance!
Well, consider your download. First, you should be doing this in a background thread (if not, there are lots of SO posts that talk about how to do that).
I would suggest that you implement what makes sense first, and then, after you can get valid performance data from running Instruments, consider performance optimization. Of course, use some common sense on "easy" performance stuff (your design can take care of the big ones easily enough).
Anyway, get your data from the online resource, and then, for each object fetched, use the "unique object id" to fetch the object from core data. You know there is only one object with that ID, so you can set fetchLimit to 1 on your fetch request. You can also configure your "object id" attribute to be an INDEX in the database. This way, you get the fastest search from the underlying database, and it knows to stop looking once it finds your one object. This should be pretty snappy.
Now you have your object. Change any attributes necessary. Save, rinse, and repeat.
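Putting that together, a hedged Swift sketch of the per-object update; the `Item` entity and its `objId`, `name`, and `lastUpdated` attributes are assumed names, and this would run on a background context:

```swift
import CoreData

// For each server object: fetch the matching local object by its unique
// ID (or insert a new one), copy the attributes across, and stamp it
// with the current time for the later delete pass.
func upsert(serverObjects: [[String: Any]], in context: NSManagedObjectContext) throws {
    for serverObject in serverObjects {
        guard let objId = serverObject["id"] as? Int64 else { continue }
        let request = NSFetchRequest<Item>(entityName: "Item")
        request.predicate = NSPredicate(format: "objId == %lld", objId)
        request.fetchLimit = 1 // there is only one object with this ID
        let item = try context.fetch(request).first ?? Item(context: context)
        item.objId = objId
        item.name = serverObject["name"] as? String // hypothetical attribute
        item.lastUpdated = Date()
    }
    try context.save()
}
```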
Furthermore, for several reasons, you may want to know when objects were last updated. I'd suggest adding a timestamp attribute that is set to the current time every time an object is changed. This will also help with deleting objects: since your online database does not tell you which objects are deleted, you must have some way to know that an item is "old and no longer needed."
An easy way to do this is to remember the time you started your update. After processing all objects from the download, you now have a way to find all the objects that were deleted from the online database. Basically, any object with a "last update" timestamp before the time you began the update should be removed (since they were not added or modified in the last update). You can also index the database on this field, which will make finding those objects faster - unless your database is huge, I'd wait to see what Instruments has to say about this one though.
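A matching sketch of the delete pass, reusing the hypothetical `Item` entity and `lastUpdated` attribute from the previous example:

```swift
import CoreData

// Anything not touched during this sync was deleted on the server, so
// remove every object whose timestamp predates the sync start time.
func deleteStale(before syncStart: Date, in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<Item>(entityName: "Item")
    request.predicate = NSPredicate(format: "lastUpdated < %@", syncStart as NSDate)
    for stale in try context.fetch(request) {
        context.delete(stale)
    }
    try context.save()
}
```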