Any way to debug an app with the dataset in the production environment?

My CloudKit dataset in the Production Environment is somewhat bigger than in Development, and other exotic differences could exist.
There is a nasty deadlock when using my app in Production mode. Is it possible to debug the client in any way? Or should I log as many things as possible and send them out somehow?
It is a threading issue, so without examining the threads in Xcode it is really tough to do anything. Any ideas? I am using Core Data for local storage.

1. Roll back the changes in the source code so the app can run.
2. Sync the records down from the Production Environment into the local Core Data storage.
3. Copy the sqlite database out of the container via the Xcode Devices menu.
4. Create a temporary project with the same model and populate it with that database.
5. Set up the temporary project so it can use the previous CloudKit container.
6. Reset the Development Environment in the CloudKit Dashboard.
7. Upload all records from the temporary project (see the sketch after this list).
8. Run the original project with the original source code.
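Step 7 can be done with a CKModifyRecordsOperation. A minimal sketch, assuming the CKRecord instances have already been rebuilt from the Core Data objects in the temporary project (the uploadRecords: helper is hypothetical):

```objc
#import <CloudKit/CloudKit.h>

- (void)uploadRecords:(NSArray<CKRecord *> *)records
{
    CKModifyRecordsOperation *op =
        [[CKModifyRecordsOperation alloc] initWithRecordsToSave:records
                                              recordIDsToDelete:nil];
    op.savePolicy = CKRecordSaveAllKeys;   // overwrite whatever the freshly reset environment holds
    op.modifyRecordsCompletionBlock = ^(NSArray<CKRecord *> *saved,
                                        NSArray<CKRecordID *> *deletedIDs,
                                        NSError *error) {
        if (error != nil) {
            NSLog(@"Upload failed: %@", error);
        }
    };
    [[[CKContainer defaultContainer] privateCloudDatabase] addOperation:op];
}
```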

I would recommend using a crash reporting service. While there are a few options out there, I worked with Crashlytics, and I was very happy with the reports that they provided, always helping me to fix bugs in production.

When the app goes to the background, at some point it will be killed by iOS because your thread won't have returned from -applicationDidEnterBackground:, and then you will get a backtrace of all your threads.
If you want a better chance of triggering the kill (if the locked thread is not the main thread), you could grab a background task (-beginBackgroundTaskWithExpirationHandler:) in your working threads: if they are locked, at some point they will never release the background task and they'll trigger the kill.
Now just wait for the iOS scheduler to kill your app and grab the stack trace. In there, you should be able to find the culprit by looking at all your threads' backtraces and identifying which ones are locked in a mutex lock() function.
I bet you don't even need symbolication for that.
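A minimal sketch of that trick, assuming a hypothetical doWorkThatMightDeadlock method that takes the suspect locks: each worker wraps its critical section in a background task, so a deadlocked thread never ends the task, the watchdog kills the app, and the report contains every thread's backtrace.

```objc
#import <UIKit/UIKit.h>

// Called on the worker thread.
- (void)performRiskyWork
{
    UIApplication *app = [UIApplication sharedApplication];
    __block UIBackgroundTaskIdentifier taskID =
        [app beginBackgroundTaskWithExpirationHandler:^{
            [app endBackgroundTask:taskID];
            taskID = UIBackgroundTaskInvalid;
        }];

    [self doWorkThatMightDeadlock];   // hypothetical method holding the suspect locks

    // Only reached if the thread did not deadlock.
    [app endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}
```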

Related

database becomes read-only/Corrupted

In our application we have been using an encrypted SQLite database in db3 format that is downloaded from the server and, after processing, uploaded again. The app is live and is used by several users.
Sometimes, very intermittently and only in one or two instances, the database gets corrupted. The user has to discard the entire application and reinstall it to keep working, resulting in data loss.
Only once were we able to detect that one of the tables had gone missing from the database, even though no DROP TABLE command is issued anywhere in the code.
Has anyone faced this before? Any idea why this happens?
Please note: the application is an iPad application written in Objective-C.
One of the main reasons:
iDevices shut down quite a while before they'd actually run out of power. Before your device shuts down, your app will get notified that it's going to the background, and then notified that it's going to quit. If you're handling those two notifications properly (i.e. closing all SQLite connections in one or the other), then you should not be getting database corruption.
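A minimal sketch of that cleanup in the app delegate, assuming a single shared sqlite3 handle kept in a hypothetical `database` ivar:

```objc
#import <UIKit/UIKit.h>
#import <sqlite3.h>

// `database` is the hypothetical shared sqlite3 * connection.
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    if (database != NULL) {
        sqlite3_close(database);   // finishes outstanding work and releases the file
        database = NULL;
    }
}

- (void)applicationWillTerminate:(UIApplication *)application
{
    [self applicationDidEnterBackground:application];   // same cleanup path
}
```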

Unique Realm container objects

I implemented real-time sync following Realm's Tasks demo app.
There, a dummy container object is used to hold a List of the models.
The demo app doesn't seem to support offline usage.
I wondered what happens when, given this setup, I start the app on an online as well as an offline device and then go online with the offline device.
My initial expectation was that I'd end up with 2 containers (which would be an invalid state), but when I tested, to my surprise there was only 1 container at the end.
But sometimes I get 2 containers and haven't been able to identify what causes this.
The question then is, how exactly does this work? I assume the reason the container is normally not duplicated when I sync the offline device for the first time is that it's handled as the same object, maybe because it doesn't have a primary key or something? But then why is it sometimes duplicated? And what would be the best practice here? Should I use a primary key, or check for duplicates after connecting and, if there are any, merge the containers manually?
At the moment, Realm Tasks merely checks if the default Realm is empty before it tries to add a new base list container object. If the synchronization process hasn't completed by the time this check occurs, it's reasonable that a second container would be created. When testing the app on a local network, this usually isn't a problem since the download speeds are so fast, but we definitely should test this a bit more thoroughly.
Adding a primary key will definitely help since it means that if a second list is created locally, it will get merged with the version that comes down from the server.
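A minimal sketch of that approach, using a hypothetical TaskListContainer model rather than the demo app's actual class names: with a fixed primary key, a container created locally while offline is merged with the one that comes down from the server instead of being duplicated.

```objc
#import <Realm/Realm.h>

@interface TaskListContainer : RLMObject
@property NSString *identifier;
@end

@implementation TaskListContainer
+ (NSString *)primaryKey {
    return @"identifier";
}
@end

// Seed the local Realm with an upsert so a second insert of the same key merges:
// [TaskListContainer createOrUpdateInRealm:realm
//                                withValue:@{@"identifier": @"default"}];
```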
We've recently been focusing on the 'on-boarding' process when a second device connects to a user's Realm Mobile Platform account via the new progress notification system. A more logical approach would be to wait for the initial download to complete after logging in, and only then check for the presence of the objects. Once the documentation is complete, we'll most likely be revamping how Realm Tasks handles this.
The demo app (as well as the Realm Mobile Platform) does support offline, but only after the user has logged in for the first time (which is when these container objects are initially generated). After that time, the apps can be used offline, and any changes done in that interim are synchronized the next time it comes online.
We're planning on building an 'anonymous user' feature where a user can start using the app straight away (even offline), and any changes they made before logging in (due to being offline) are then transferred to the user account once they do so.

How long can core data migration take on startup?

I have seen successful Core Data migrations of my 4 GB database on my iPad at application launch take several minutes.
And now suddenly, some users report crashes after installing a new version: the app is killed with a 'failed to launch in time' error.
I just tested again by restoring an old database and I am sure that core data migration can take way more than 10 seconds.
But other people are concerned that it should not, and try to move it to the background, or at least out of the run loop at launch time:
iPhone app launch times and Core Data migration
Can this have anything to do with other conditions, e.g. being connected to a power source, or having a battery level of more than 50%?
Update: I reproduced a crash by just starting the app on the device (unplugged) instead of debugging.
Then I tried starting the app on the device with USB attached: Crash.
Then started the app via the debugger: No crash (and the migration took about 4 minutes.)
Extra info: I only have enterprise users (about 75 of them) and they all have a database of 4.5 GB. Some users have no problem upgrading and some do. The upgrades all take minutes when they succeed. The crashes always come after 20 seconds. (And the affected devices keep crashing if you try again.)
I followed the advice to place the migration out of the run loop, but I am still wondering why the old method works on some devices and not on others. All users are on iOS 7.
This is a common launch problem. A Core Data migration can take any amount of time, from 0 to N, depending on the complexity of the model, the amount of data, and the type of migration occurring.
Ideally you should not be creating your Core Data stack in the -applicationDidFinish... method, and migration is one of the reasons why.
My recommendation is to rework your launch so that you display something until the stack has initialized. This could be just your default image in a view. Then when the Core Data stack has initialized you can switch over to your full view controller stack.
I would also recommend taking this a bit further so that you can tell the user that a migration is in process and I would further put the migration on a background queue so that you can update the UI while the migration is happening.
Lastly, if you are doing a heavy migration, I would look into doing a lightweight migration instead. A lightweight migration is far faster, among other benefits.
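A minimal sketch of that launch flow, assuming hypothetical MigrationProgressViewController, setUpCoreDataStack, and mainViewController helpers: launch only puts up a placeholder, and the persistent store (and therefore any migration) is added on a background queue, safely outside the watchdog-monitored startup path.

```objc
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Show something immediately so launch finishes well inside the watchdog limit.
    self.window.rootViewController = [[MigrationProgressViewController alloc] init]; // hypothetical
    [self.window makeKeyAndVisible];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Adding the persistent store runs the (possibly long) migration here.
        [self setUpCoreDataStack]; // hypothetical: builds the stack and adds the store

        dispatch_async(dispatch_get_main_queue(), ^{
            self.window.rootViewController = [self mainViewController]; // hypothetical
        });
    });
    return YES;
}
```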
If you look at the crash log, it will likely say that the app was killed because it took too long to start up. The watchdog process kills apps that take too long to launch - more than 20 seconds, I think. This happens because the Core Data migration process was run during app startup.
I'd recommend you manually run the migration in the background. The following new book on Core Data has code and explanation for how to do a background manual migration.
http://www.amazon.com/gp/aw/d/0321905768
It is not a hard rule that you must not run the migration on a background thread, but it is a suggestion: if you run it in the background while your app starts running, there is no guarantee that nothing else will touch your Core Data stack in the meantime.
You can take this migration out of didFinishLaunching, but make sure the stack is not touched. You can handle this with a check such as presenting a view controller with a message that the app is updating, which keeps the user from doing anything, and in the meantime you can perform the migration in the background. When the migration finishes, simply dismiss that view controller and take the user to the home view controller.
When your app is running on the iOS platform you cannot guarantee everything; for example, if native apps need more memory, memory will be cut from your app's quota and you can get some weird kills.

SQLite Persistence throughout app lifecycle on iOS

I've been reading up on SQLite3, which is included in the iOS firmware and might serve my needs for the app I'm writing.
What I can't figure out is whether it is persistent or goes away like some objects do.
For example, sqlite3_open() appears to be a C function rather than an Objective-C object; if I open the database at the start of my application, will it stay persistent until I close it, no matter how many views I push/pop all over the place?
Obviously that would depend on where I put it, but suppose I'm doing a universal app with some central functions for loading/saving data that are common to both iPhone and iPad: if, in my didFinishLoading:, I put a call to open the SQLite database and then exec various queries, would it remain persistent throughout the lifecycle of the application?
or
Am I better off opening and closing as needed? I'm coming from a PHP background, so I'd normally open a database at the start of the script, run many queries, and then finally close it before browser output.
From the million things I've learned about iOS programming over the last few months, I think the latter might be the better way, as there's a possibility of the app exiting prematurely or going to the background.
I'd just like a second opinion on my thinking please.
I don't know for certain, but I think you are right - you only need to open it once at the start of your app.
Looking at sqlitepersistentobjects, an ORM framework for iOS, it only opens the DB when it's first used, and never closes it except when there is a problem opening it :)
Single opened sqlite database used throughout the app from different places in your app is fine.
You are using word "persistent" which is confusing. What you mean is "reuse of single connection, for executing different statements in the app, possibly from different threads". Persistence has completely different meaning in context of databases - it means that the requested modification of data has been safely stored to media (disk, flash drive) and the device can even unexpectedly shut down without affecting written data.
It's recommended to keep running sqlite statements from a single, dedicated thread.
It's not recommended to connect to the SQLite database from different processes and execute modifications in parallel.
A good alternative solution is to use sqlite async extension which sends all writes to a dedicated, background thread.
You can check out https://github.com/mirek/CoreSQLite3 framework if you want to use custom built (newer version) of sqlite.
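A minimal sketch of the open-once, single-thread approach, using a hypothetical DatabaseManager singleton: the connection is opened once for the app's lifetime and every statement is funneled through one serial queue.

```objc
#import <Foundation/Foundation.h>
#import <sqlite3.h>

@interface DatabaseManager : NSObject
+ (instancetype)shared;
- (void)performBlock:(void (^)(sqlite3 *db))block;
@end

@implementation DatabaseManager {
    sqlite3 *_db;
    dispatch_queue_t _queue;
}

+ (instancetype)shared {
    static DatabaseManager *shared;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[DatabaseManager alloc] init]; });
    return shared;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.db", DISPATCH_QUEUE_SERIAL); // hypothetical label
        NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                               NSUserDomainMask, YES) firstObject];
        NSString *path = [docs stringByAppendingPathComponent:@"app.sqlite"];     // hypothetical file name
        sqlite3_open([path fileSystemRepresentation], &_db);
    }
    return self;
}

// All statements go through this one serial queue, so only a single thread
// ever touches the connection.
- (void)performBlock:(void (^)(sqlite3 *db))block {
    dispatch_sync(_queue, ^{ block(_db); });
}
@end
```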

Self Updating

What's the best way to terminate a program and then run additional code from the program that's being terminated? For example, what would be the best way for a program to self update itself?
You have a couple options:
You could use another application .exe to do the auto update. This is probably the best method.
You can also rename a program's .exe while it is running. This allows you to fetch the new file from an update server and put it in place; on the program's next startup it will be using the new .exe, and you can then delete the renamed file on startup.
It'd be really helpful to know what language we're talking about here. I'm sure I could give you some really great tips for doing this in PowerBuilder or Cobol, but that might not really be what you're after! If you're talking about Java, however, then you could use a shutdown hook - it works great for me.
Another thing to consider is that most of the "major" apps I've been using (FileZilla, Paint.NET, etc.) have their updaters uninstall the previous version of the app and then do a fresh install of the new version.
I understand this won't work for really large applications, but this does seem to be a "preferred" process for the small to medium size applications.
I don't know of a way to do it without a second program that the primary program launches prior to shutting down. Program 2 downloads and installs the changes and then relaunches the primary program.
We did something like this in our previous app. We captured the termination of the program (in .NET 2.0) from either the X or the close button, and then kicked off a background update process that the user didn't see. It would check the server (client-server app) for an update, and if there was one available, it would download in the background using BITS. Then the next time the application opened, it would realize that there was a new version (we set a flag) and popped up a message alerting the user to the new version, and a button to click if they wanted to view the new features added to this version.
It makes it easier if you have a secondary app that runs to do the updates. You would execute the "updater" app and then, inside of it, wait for the other process to exit. If you need access to the regular app's DLLs and such, but they also need updating, you can run the updater from a secondary location with already-updated DLLs so that they are not in use in the original location.
If you're writing a .NET application, you might consider using ClickOnce. If you need quite a bit of customization, you might look elsewhere.
We have an external process that performs updating for us. When it finds an update, it downloads it to a secondary folder and then waits for the main application to exit. On exit, it replaces all of the current files. The primary process just kicks the update process off every 4 hours. Because the update process will wait for the exit of the primary app, the primary app doesn't have to do any special processing other than start the update application.
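The same launch-an-updater-and-wait pattern, sketched here in Objective-C to match the rest of this document (macOS flavored, with a hypothetical bundled "Updater" helper): the main app hands the helper its own PID and quits; the helper waits for that PID to disappear, swaps the files, and relaunches.

```objc
#import <Foundation/Foundation.h>
#import <stdlib.h>

void launchUpdaterAndQuit(void)
{
    NSString *updaterPath = [[NSBundle mainBundle] pathForResource:@"Updater" ofType:nil]; // hypothetical helper
    NSString *pidArg = [NSString stringWithFormat:@"%d",
                        [[NSProcessInfo processInfo] processIdentifier]];

    NSTask *task = [[NSTask alloc] init];
    task.launchPath = updaterPath;
    task.arguments = @[ pidArg, [[NSBundle mainBundle] bundlePath] ];
    [task launch];    // the helper now outlives this process

    exit(0);          // helper polls until this PID is gone, then replaces the files and relaunches
}
```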
This is a side issue, but if you're considering writing your own update process, I would encourage you to look into using compression of some sort to (1) save on download and (2) provide one file to pull from an update server.
Hope that makes sense!

Resources