I save quite a bit of data to a MySQL database on the phone, then upload it when we have Wi-Fi. I am syncing about six tables and have it working properly; I still need to add more error checking, but it works, with some wait statements thrown in where I need them.
The problem is that I am doing each update as a separate request, and when the request comes back I don't have the information I need to delete that entry from the local database, so duplicate information gets re-uploaded. Is there a way to attach a variable to each upload so that when the response comes back I can delete that entry? I can do that with one variable: it tells me the record I updated, but those are all items referenced to the owner, and I can't find the info I need. I will be doing about 100 uploads at a time.
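One common pattern, assuming the uploads go through URLSession (the endpoint and the deleteLocalRow helper below are placeholders), is to capture the local row's primary key in each request's completion handler, so the callback already knows exactly which entry to delete:

```swift
import Foundation

// Hypothetical pending upload: only the primary key matters for this sketch.
struct PendingRow {
    let localID: Int64      // primary key in the on-device database
    let payload: Data       // JSON body to upload
}

func upload(_ row: PendingRow, to endpoint: URL, session: URLSession = .shared) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = row.payload

    // The closure captures row.localID, so when this particular request
    // finishes we still know which local row it belonged to.
    session.dataTask(with: request) { _, response, error in
        guard error == nil,
              let http = response as? HTTPURLResponse,
              (200..<300).contains(http.statusCode) else {
            return  // leave the row in place; it will be retried on the next sync
        }
        deleteLocalRow(id: row.localID)  // hypothetical local-delete helper
    }.resume()
}

// Placeholder for whatever the local delete looks like in your database layer.
func deleteLocalRow(id: Int64) { /* DELETE FROM pending WHERE id = ? */ }
```

Because each completion handler closes over its own localID, firing a hundred of these in a loop still maps every response back to the right local row.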
I would like to bundle a local Realm database with an iOS app and publish it to the App Store. The initial database will ship with about 500 data records in a table named TableA.
Then, in an app update, I would like to insert an additional 250 records into TableA.
What would be an optimal solution for this scenario?
I have thought about including a JSON file in the app update with the 250 new records, and writing the data from the JSON into the Realm database. Can anyone provide feedback on this solution, or suggest a better one?
When the user first opens the app, check your condition, and then you can show a progress bar and do your job. I think it's fine.
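As a rough illustration of that approach, here is a sketch that reads a JSON file bundled with the app update and upserts it into Realm on first launch after the update. RealmSwift, the TableA model, its field names, and the tablea_update.json file name are all assumptions:

```swift
import Foundation
import RealmSwift

// Plain Codable shape of one record in the bundled JSON (field names are assumed).
struct TableARecord: Codable {
    let id: Int
    let name: String
}

// Realm model; the primary key lets a re-import overwrite instead of duplicate.
final class TableA: Object {
    @Persisted(primaryKey: true) var id: Int
    @Persisted var name: String
}

// Call once at launch, e.g. from application(_:didFinishLaunchingWithOptions:).
func importBundledRecordsIfNeeded(currentDataVersion: Int) throws {
    let defaults = UserDefaults.standard
    guard defaults.integer(forKey: "importedDataVersion") < currentDataVersion,
          let url = Bundle.main.url(forResource: "tablea_update", withExtension: "json") else {
        return  // already imported, or nothing bundled
    }

    let data = try Data(contentsOf: url)
    let records = try JSONDecoder().decode([TableARecord].self, from: data)

    let realm = try Realm()
    try realm.write {
        for record in records {
            let object = TableA()
            object.id = record.id
            object.name = record.name
            realm.add(object, update: .modified)  // upsert by primary key
        }
    }
    defaults.set(currentDataVersion, forKey: "importedDataVersion")
}
```

The stored version number is what "check your condition" means here: bump currentDataVersion in each update that ships new records, and the import runs exactly once per update.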
How does .DeleteSelf really work? The docs say:
When the reference object’s action is set to CKReferenceActionDeleteSelf, the target of the reference—that is, the record stored in the reference’s recordID property—becomes the owner of the source record. Deleting the target (owner) record deletes all its source records.
But my impression is that deleting a target will not always delete its source records. And it is quite annoying when a source record remains in the container: the client downloads it and expects the reference to point somewhere, but the target no longer exists when the client builds up its slice of the server data store.
How do you handle this case? Do you ignore that sort of record? Or do you periodically scan the CloudKit storage for orphaned records and delete them?
Or, instead of deleting a record, is it better to set an attribute marking it as deleted but keep it in the database?
I just struggled with this one for a while and I thought I would share my findings...
It is fundamentally a permission issue. The cascading delete will only work if the user deleting the records has write permission on all the records that need to be deleted.
So in the CloudKit Dashboard, the cascading delete will only work for the records created with the developer's iCloud account.
If you need to delete records that don't belong to the user deleting them, you can add 'write' permissions for a Record Type under Security.
If you are deleting via the CloudKit Dashboard, you have to wait a while before switching record types to check the other end of the reference; more than likely you switched before the delete actually happened. You can use Safari's Web Inspector (Network tab) to check when the delete has actually finished. It takes a very long time to delete multiple records.
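For context, the cascade under discussion is configured on the child (source) record when the reference is created. A minimal sketch, with made-up record and field names:

```swift
import CloudKit

// Owner (target) and child (source) records; types and field names are placeholders.
let owner = CKRecord(recordType: "Owner")
let child = CKRecord(recordType: "Item")

// .deleteSelf means: when the record this reference points to (the owner) is
// deleted, this child record should be deleted along with it.
child["owner"] = CKRecord.Reference(recordID: owner.recordID, action: .deleteSelf)
```

As noted above, the cascade still only happens if the deleting user has write permission on the child records, and it can take a while to propagate.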
I am trying to implement CloudKit sync with a local cache (Core Data).
So far I have managed to get the record zone defined and to receive the relevant notifications. In the next step I check with CKFetchRecordChangesOperation what happened.
recordChangedBlock (according to Apple: "...for each record in the zone that changed since the previous fetch request..."):
I do get the relevant record, but how do I know whether this record was added or modified (without checking against my local cache)?
recordWithIDWasDeletedBlock: I get the recordID, but how do I know which record it corresponds to in my local cache? I could store the recordID in the local cache so that I have a reference for such cases, but I can't believe that this is what I'm supposed to do...
Any suggestion is more than appreciated.
There is no info in the recordChangedBlock that tells you whether it was added or changed. Keep in mind that even if it did, you would still have to check whether the record exists in the local store. A record can be added to CloudKit and then changed several times, all while your app isn't running. When your app finally runs, it will get only the last change notification, but the record doesn't exist in your local cache yet. So you must always check whether you have the record locally or not and add/update accordingly.
With deletion, all you get is the CloudKit record ID, nothing else. What I do is ensure the CloudKit record ID is based on the local key. This way I can easily find and remove the local record when the data is removed from CloudKit. It also means that the local copy of the CloudKit data on all of the user's devices ends up with the same keys.
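A rough sketch of that pattern, using CKFetchRecordZoneChangesOperation (the current replacement for CKFetchRecordChangesOperation) and a hypothetical LocalStore facade standing in for the Core Data cache:

```swift
import CloudKit

// Hypothetical local-cache facade; replace with your Core Data layer.
enum LocalStore {
    static func contains(key: String) -> Bool { false }
    static func insert(key: String, from record: CKRecord) {}
    static func update(key: String, from record: CKRecord) {}
    static func delete(key: String) {}
    static func save(changeToken: CKServerChangeToken?) {}
}

func fetchChanges(in zoneID: CKRecordZone.ID,
                  since token: CKServerChangeToken?,
                  database: CKDatabase) {
    let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
    config.previousServerChangeToken = token

    let operation = CKFetchRecordZoneChangesOperation(
        recordZoneIDs: [zoneID],
        configurationsByRecordZoneID: [zoneID: config])

    // Added OR changed: look the record up locally and insert or update.
    operation.recordChangedBlock = { record in
        let key = record.recordID.recordName   // the same string is used as the local key
        if LocalStore.contains(key: key) {
            LocalStore.update(key: key, from: record)
        } else {
            LocalStore.insert(key: key, from: record)
        }
    }

    // Deleted: only the record ID arrives, so the local key must be derivable from it.
    operation.recordWithIDWasDeletedBlock = { recordID, _ in
        LocalStore.delete(key: recordID.recordName)
    }

    operation.recordZoneFetchCompletionBlock = { _, newToken, _, _, error in
        if error == nil { LocalStore.save(changeToken: newToken) }
    }

    database.add(operation)
}
```

The key point is in the two blocks: because the CKRecord.ID's recordName doubles as the local key, both the upsert and the delete can find the local row without any extra lookup table.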
I have a database of many users, which I want to store both locally and on my server. Whenever a user updates any information, I successfully update it in the local database using Core Data. But how do I push this change to the server? I am not getting this, please help.
I am thinking of sending the SQLite file to the server every time the user updates their information, but how do I send the data from the SQLite file to the server?
Add a column to your local DB that stores the last-updated time. (I think there may be a way to get SQLite to fill this in semi-automatically for you, but even doing it "manually" is no big deal.) Then, when you want to upload updates, query for rows updated since the last upload and ship them to the server as JSON records.
You can also keep a separate table that tracks updates, but that's for more complex scenarios.
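A minimal sketch of the upload half of this, assuming the changed rows have already been read out of SQLite into a Codable struct, and that the endpoint, field names, and last-upload bookkeeping are all placeholders:

```swift
import Foundation

// One changed row, mirroring the hypothetical last-updated column described above.
struct ChangedRow: Codable {
    let id: Int64
    let name: String
    let updatedAt: Date
}

func uploadChanges(_ rows: [ChangedRow],
                   since lastUpload: Date,
                   to endpoint: URL,
                   completion: @escaping (Bool) -> Void) {
    // Only ship rows touched after the previous upload.
    let pending = rows.filter { $0.updatedAt > lastUpload }
    guard !pending.isEmpty else { completion(true); return }

    let encoder = JSONEncoder()
    encoder.dateEncodingStrategy = .iso8601

    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? encoder.encode(pending)

    URLSession.shared.dataTask(with: request) { _, response, error in
        let ok = error == nil && (response as? HTTPURLResponse)?.statusCode == 200
        completion(ok)  // on success, the caller records a new last-upload date
    }.resume()
}
```

In practice the filtering would usually happen in the SQL query itself (WHERE updated_at is later than the last upload time); the in-memory filter here just keeps the sketch self-contained.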
You have to use some tactics for this. Here is a short explanation of the two pieces involved:
Database structure
Web service
You have to design the database on both the local and the server side, and manage a flag (Bool) and an update time.
For example, when you launch the app, check your local data, take the last update date, and send a request to the server asking what has changed since that date. If there is any update, the server returns the changed data in the web service response, and you handle that response on the device.
When you make changes on the local device, you have to manage the flag, the update time, and the created date. The flag shows whether the change has been uploaded to the server yet: Y if uploaded, otherwise N. You also send the created and updated dates with the request.
For these requests you have to stay in a single time zone. Use standard UTC, because the user may switch between time zones, so handle this.
If you need more clarification, you can ask in our chat room: https://chat.stackoverflow.com/rooms/43424/coders-diary
This approach will definitely work for you.
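As a loose illustration of the "what changed since this date" request described above (the endpoint, the query parameter name, and the response shape are all assumptions):

```swift
import Foundation

// A locally cached record carrying the bookkeeping fields described above.
struct SyncedRecord: Codable {
    let id: Int
    let payload: String
    let createdAt: Date
    let updatedAt: Date
    var isUploaded: Bool   // the Y/N flag: has this local change reached the server?
}

func fetchServerChanges(since lastUpdate: Date,
                        from baseURL: URL,
                        completion: @escaping ([SyncedRecord]) -> Void) {
    // Always talk to the server in UTC so device time-zone changes don't matter.
    let formatter = ISO8601DateFormatter()
    formatter.timeZone = TimeZone(identifier: "UTC")

    guard var components = URLComponents(url: baseURL, resolvingAgainstBaseURL: false) else {
        completion([]); return
    }
    components.queryItems = [URLQueryItem(name: "updated_after",
                                          value: formatter.string(from: lastUpdate))]
    guard let url = components.url else { completion([]); return }

    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601

    URLSession.shared.dataTask(with: url) { data, _, _ in
        let records = data.flatMap { try? decoder.decode([SyncedRecord].self, from: $0) } ?? []
        completion(records)
    }.resume()
}
```

The upload direction is symmetrical: send only the rows whose flag is still N, and flip the flag to Y once the server confirms.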
My app was recently rejected because it installs a database in a directory that is backed up to iCloud; the database ships with a lot of pre-populated data, and the app stores user-generated data in the same file.
So mixing up user-generated content with pre-populated data is not what Apple wants us to do.
So far so good.
My plan: separate my database into two stores and mark the store file containing the pre-populated data with NSURLIsExcludedFromBackupKey = YES.
But what happens if the user wants to modify the data in that store, because they found a mistake and wish to correct it?
Or if I make an online update available that modifies values in that store?
How do I cope with that?
Do I have to delete the store file and create a new one (now with NSURLIsExcludedFromBackupKey = NO)? Or should I store the database under /tmp or /Library/Caches right from the beginning and later move it into /Application Support (which is backed up automatically), with the threat that my database gets removed by the system for some reason, as is the case for /Library/Caches?
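(For reference, excluding a file from backup is just a resource value on the file URL; a minimal sketch, with the store location left up to you:)

```swift
import Foundation

// Mark a file so iCloud/iTunes backups skip it.
// The file must already exist at the URL, otherwise this throws.
func excludeFromBackup(fileAt fileURL: URL) throws {
    var url = fileURL
    var values = URLResourceValues()
    values.isExcludedFromBackup = true   // the Swift spelling of NSURLIsExcludedFromBackupKey
    try url.setResourceValues(values)
}

// Usage, assuming a hypothetical prepopulatedStoreURL in Application Support:
// try excludeFromBackup(fileAt: prepopulatedStoreURL)
```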
It is a bit annoying that Apple will not allow you to back up pre-populated data if your app is of the kind where you could actually change the pre-populated data in the app. If the pre-populated database is big, however, I can understand that they don't want your app to waste the user's iCloud space with information that is already in the App Store.
Woody has a good idea on the approach, but I'm not sure Apple would overlook the fact that you are wasting just as much space if you copy the pre-populated data to the user's backed-up DB on app startup.
What about something like this:
A: DB with pre-populated data, not backed up
B: DB with user-added data, backed up
When the user makes changes to an object in A, create a new row in B that "overrides" the row in A, for example by using the same ID, or by having a column in the DB that tells the app which object in A should be replaced by the new row in B.
Whenever you need to update your app, you will replace DB A with new content and that's it. This could lead to conflicts with the data that the user has changed. You will have to decide whether the user data is more important than the updated data, and how to handle these conflicts (for example by trying to keep them both).
If you need to change the structure of DB B in an update, for example if you need to add a column, you will have to include an update routine in your app that detects that the user has an old DB version and migrates the user data to the new database on first startup after the update.
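A sketch of the "B overrides A" lookup described above, with plain dictionaries standing in for the two stores (the object shape and property names are made up):

```swift
import Foundation

// Hypothetical item shape shared by both stores.
struct Item {
    let id: Int
    var text: String
}

// A: read-only, pre-populated store (excluded from backup).
// B: user-edited overrides (backed up). Dictionaries are stand-ins for real databases.
struct TwoStoreDatabase {
    var prepopulated: [Int: Item]    // store A
    var userOverrides: [Int: Item]   // store B

    // The row in B with the same ID wins; otherwise fall back to A.
    func item(withID id: Int) -> Item? {
        userOverrides[id] ?? prepopulated[id]
    }

    // Edits never touch A; they create or update an override row in B.
    mutating func update(id: Int, text: String) {
        userOverrides[id] = Item(id: id, text: text)
    }
}
```

Replacing DB A in an app update then leaves every user edit intact, because the edits live only in B; the conflicts mentioned above are exactly the IDs that exist in both stores after the update.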
At startup, if the user database is unpopulated, you could copy the data across from the pre-populated database, and maybe give the user an option to reset to defaults that does the same again?
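A minimal sketch of that first-launch copy, with the "reset to defaults" case folded in as an overwrite flag (both file locations are placeholders):

```swift
import Foundation

// Copy the bundled, pre-populated database into place if the user's copy does not
// exist yet; calling it with overwrite = true doubles as "reset to defaults".
func installSeedDatabase(from bundledURL: URL,
                         to userURL: URL,
                         overwrite: Bool = false) throws {
    let fileManager = FileManager.default
    if fileManager.fileExists(atPath: userURL.path) {
        guard overwrite else { return }         // already installed, nothing to do
        try fileManager.removeItem(at: userURL) // user asked for a reset
    }
    try fileManager.copyItem(at: bundledURL, to: userURL)
}
```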