Queue of AFHTTPRequestOperations creating Memory Buildup - iOS

I just updated to AFNetworking 2.0 and I am rewriting my code to download data and insert it into Core Data.
I download JSON data files (anywhere from 10 to 200 MB each), write them to disk, then pass them off to background threads to process the data. Below is the code that downloads the JSON and writes it to disk. If I just let this run (without even processing the data), the app uses up memory until it is killed.
I assume that as the data comes in, it is stored in memory, but once I save it to disk why would it stay in memory? Shouldn't the autorelease pool take care of this? I also set responseObject and downloadData to nil. Is there something blatantly obvious that I am doing wrong here?
@autoreleasepool
{
    for (int i = 1; i <= totalPages; i++)
    {
        NSString *path = ....
        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:path]];
        AFHTTPRequestOperation *op = [[AFHTTPRequestOperation alloc] initWithRequest:request];
        op.responseSerializer = [AFJSONResponseSerializer serializer];
        [op setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject)
        {
            // convert dictionary to data
            NSData *downloadData = [NSKeyedArchiver archivedDataWithRootObject:responseObject];
            // save to disk
            NSError *saveError = nil;
            if (![fileManager fileExistsAtPath:targetPath])
            {
                if (![downloadData writeToFile:targetPath options:NSDataWritingAtomic error:&saveError])
                {
                    NSLog(@"Download save failed! Error: %@", [saveError description]);
                }
            }
            responseObject = nil;
            downloadData = nil;
        } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
            DLog(@"Error: %@", error);
        }];
        [mutableOperations addObject:op];
    }
}
NSArray *operations = [AFURLConnectionOperation batchOfRequestOperations:mutableOperations progressBlock:^(NSUInteger numberOfFinishedOperations, NSUInteger totalNumberOfOperations) {
    DLog(@"%lu of %lu complete", (unsigned long)numberOfFinishedOperations, (unsigned long)totalNumberOfOperations);
} completionBlock:^(NSArray *operations) {
    DLog(@"All operations in batch complete");
}];
mutableOperations = nil;
[manager.operationQueue addOperations:operations waitUntilFinished:NO];
Thanks!
EDIT #1
Adding an @autoreleasepool within my completion block seemed to slow the memory growth a bit, but it still builds up and eventually crashes the app.

If your JSON files are really 10-200 MB each, this will definitely cause memory problems, because this sort of request loads the whole response into memory (rather than streaming it to persistent storage). Worse, because you're using JSON, the problem is roughly doubled, because you're going to load the payload into a dictionary/array, which also takes up memory. So, if you have four 100 MB downloads going on, your peak memory usage could be on the order of 800 MB (100 MB for the NSData plus roughly another 100 MB for the parsed array/dictionary, possibly much more, times four for the four concurrent requests). You could quickly run out of memory.
So, a couple of reactions:
When dealing with this volume of data, you want to pursue a streaming interface (an NSURLConnection or NSURLSessionDataTask where you write the data to persistent storage as it comes in, rather than holding it in memory; or an NSURLSessionDownloadTask, which does this for you), one that writes the data directly to persistent storage rather than trying to hold it in an NSData in RAM as it's being downloaded.
If you use NSURLSessionDownloadTask, this is really simple. If you need to support iOS versions prior to 7.0, I'm not sure if AFNetworking supports streaming of the responses directly to persistent storage. I'd wager you could write your own response serializer that does that, but I haven't tried it. I've always written my own NSURLConnectionDataDelegate methods that download directly to persistent storage (e.g. something like this).
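For illustration, here is a minimal sketch of the NSURLSessionDownloadTask approach (iOS 7+); the URL and targetPath values are placeholders for your own:

    // Sketch: NSURLSessionDownloadTask streams the response to a temporary
    // file on disk, so the 10-200 MB payload never sits in RAM.
    NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
    NSURLSession *session = [NSURLSession sessionWithConfiguration:config];
    NSURL *url = [NSURL URLWithString:@"https://example.com/page1.json"]; // placeholder
    NSString *targetPath = @"/path/to/documents/page1.json";              // placeholder

    NSURLSessionDownloadTask *task = [session downloadTaskWithURL:url
        completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
            if (error) {
                NSLog(@"Download failed: %@", error);
                return;
            }
            // `location` is a temporary file; move it before this handler returns.
            NSError *moveError = nil;
            [[NSFileManager defaultManager] moveItemAtURL:location
                                                    toURL:[NSURL fileURLWithPath:targetPath]
                                                    error:&moveError];
            if (moveError) {
                NSLog(@"Move failed: %@", moveError);
            }
        }];
    [task resume];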
You might not want to use JSON for this (because NSJSONSerialization will load the whole resource into memory, and then parse it into an NSArray/NSDictionary, also in memory), but rather use a format that lends itself to streamed parsing of the response (e.g. XML) and write a parser that stores the data to your data store (Core Data or SQLite) as it's being parsed, rather than trying to load the whole thing into RAM (a rough sketch follows).
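As a rough sketch of what such a streamed import could look like (using NSXMLParser for brevity; the element name "record", the entity name "Item", and the attribute keys are all hypothetical):

    // Hypothetical streaming importer: each <record> is persisted as soon as
    // it is parsed, so only one element is ever held in memory at a time.
    // Drive it with e.g. [[NSXMLParser alloc] initWithStream:inputStream].
    #import <CoreData/CoreData.h>

    @interface StreamingImporter : NSObject <NSXMLParserDelegate>
    @property (nonatomic, strong) NSManagedObjectContext *context;
    @end

    @implementation StreamingImporter

    - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
      namespaceURI:(NSString *)namespaceURI
     qualifiedName:(NSString *)qName
        attributes:(NSDictionary *)attributeDict
    {
        if ([elementName isEqualToString:@"record"]) { // placeholder element name
            NSManagedObject *item = [NSEntityDescription insertNewObjectForEntityForName:@"Item"
                                                                  inManagedObjectContext:self.context];
            [item setValue:[attributeDict objectForKey:@"name"] forKey:@"name"]; // placeholder keys
        }
    }

    - (void)parserDidEndDocument:(NSXMLParser *)parser
    {
        NSError *error = nil;
        if (![self.context save:&error]) { // flush whatever is still pending
            NSLog(@"Save failed: %@", error);
        }
    }

    @end

In a real importer you would also save and reset the context every few hundred records, as discussed in the Core Data question further down this page.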
Note, even NSXMLParser is surprisingly memory inefficient (see this question). In the XMLPerformance sample, Apple demonstrates how you can use the more cumbersome LibXML2 to minimize the memory footprint of your XML parser.
By the way, I don't know if your JSON includes any binary data that you have encoded (e.g. base 64 or the like), but if so, you might want to consider a binary transfer format that doesn't have to do this conversion. Using base-64 or uuencode or whatever can increase your bandwidth and memory requirements. (If you're not dealing with binary data that has been encoded, then ignore this point.)
As an aside, you might want to use Reachability to confirm the user's connection type (Wifi vs cellular), because it is considered bad form to download that much data over cellular (at least not without the user's permission), not only because of speed issues, but also the risk of using up an excessive portion of their carrier's monthly data plan. I've even heard that Apple historically rejected apps that tried to download too much data over cellular.
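With AFNetworking 2.0, that check might look something like this (a sketch; reachability status is reported asynchronously, so gate the downloads in the status-change block):

    // Sketch: only kick off the multi-hundred-MB batch when Wi-Fi is available.
    AFNetworkReachabilityManager *reachability = [AFNetworkReachabilityManager sharedManager];
    [reachability setReachabilityStatusChangeBlock:^(AFNetworkReachabilityStatus status) {
        if (status == AFNetworkReachabilityStatusReachableViaWiFi) {
            // safe to enqueue the large downloads
        } else if (status == AFNetworkReachabilityStatusReachableViaWWAN) {
            // on cellular: ask the user before downloading this much data
        }
    }];
    [reachability startMonitoring];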

Related

Getting byte Data from File

WHAT I'M DOING I am trying to get an audio file (which could be up to an hour long, e.g. a podcast) that I've recorded with AVAudioRecorder uploaded to our backend. In addition to being uploaded to the server, it needs to be able to be "paused" and "resumed" if the user chooses. Because of this, I believe I need to use dataWithBytesNoCopy:buffer on the NSData class to achieve this.
WHERE I'M AT I know for a fact I can get the data using the passed self.mediaURL property:
if (self.mediaURL) {
    NSData *audioData = [NSData dataWithContentsOfURL:self.mediaURL];
    if (audioData) {
        [payloadDic setObject:audioData forKey:@"audioData"];
    }
}
However, this will not give me the desired functionality. I am trying to keep track of the bytes uploaded so that I can resume if the user pauses.
QUESTION How can I use the provided self.mediaURL so that I can retrieve the file and be able to calculate the byte length like this example?
Byte *buffer = (Byte *)malloc((long)audioFile.size);
NSUInteger buffered = [rep getBytes:buffer fromOffset:0 length:(long)rep.size error:nil];
NSMutableData *body = [NSMutableData dataWithBytesNoCopy:buffer length:buffered freeWhenDone:YES];
Instead of making things more complicated for yourself by trying to reinvent the wheel, use what the system gives you. NSURLSession lets you do a background upload. You hand the task to the session (created using the background session configuration) and just walk away. The upload takes place in pieces, when it can. No "pause" or "resume" needed; the system takes care of everything. Your app doesn't even have to be running. If authentication is needed, your app will be woken up in the background as required. This architecture is just made for the situation you describe.
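A sketch of that setup follows (the session identifier and endpoint URL are placeholders; on iOS 8+ the configuration method is named backgroundSessionConfigurationWithIdentifier:):

    // Sketch: hand the recording file to a background session and walk away.
    // Background upload tasks must use fromFile:, not fromData:.
    NSURLSessionConfiguration *config =
        [NSURLSessionConfiguration backgroundSessionConfiguration:@"com.example.audio-upload"];
    NSURLSession *session = [NSURLSession sessionWithConfiguration:config
                                                          delegate:self  // reports progress/completion
                                                     delegateQueue:nil];

    NSMutableURLRequest *request =
        [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/upload"]];
    request.HTTPMethod = @"POST";

    NSURLSessionUploadTask *task = [session uploadTaskWithRequest:request fromFile:self.mediaURL];
    [task resume];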
If the problem is that you want random access to file data without having to read the whole thing into a massive NSData, use NSFileHandle.
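For example (a sketch; bytesAlreadyUploaded stands in for whatever resume bookkeeping you keep):

    // Read one slice of the recording without pulling the whole file into memory.
    unsigned long long bytesAlreadyUploaded = 0; // your resume offset
    NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:self.mediaURL.path];
    [handle seekToFileOffset:bytesAlreadyUploaded];
    NSData *chunk = [handle readDataOfLength:64 * 1024]; // next 64 KB slice
    [handle closeFile];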

Uploading UIImage data to the server getting memory peaks?

I am trying to upload image data to the server. The images upload successfully, but I am getting memory peaks on each image upload, and when uploading more than 20 images the app receives memory warnings and shuts down.
How can I resolve this issue?
Edit:
I am using NSURLConnection.
image = [params valueForKey:key];
partData = UIImageJPEGRepresentation([ImageUtility getQualityFilterImage:[params valueForKey:key]], 0.8f);
if ([partData isKindOfClass:[NSString class]])
    partData = [((NSString *)partData) dataUsingEncoding:NSUTF8StringEncoding];
[body appendData:partData];
[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
[urlRequest setHTTPBody:body];
A couple of thoughts:
If uploading multiple images, you might want to constrain how many you perform concurrently.
For example, when I issued 20 uploads of roughly 5 MB per image, memory spiked up to 113 MB.
If, however, I constrained this to no more than 4 at a time (using operation queues), memory usage improved, maxing out at 55 MB (roughly 38 MB over baseline).
Others have suggested that you might consider not doing more than one upload at a time, which reduces peak memory usage further (in my example, peak usage was 27 MB, only 10 MB over baseline), but recognize that you pay a serious performance penalty for that.
If using NSURLSession, and you use an upload task with the fromFile option rather than loading the asset into an NSData, even doing 4 concurrently used dramatically less memory, less than 1 MB over baseline; see the sketch below.
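That variant is a small change if the asset is already on disk (a sketch; the endpoint URL and imagePath are placeholders):

    // Sketch: the system streams the file from disk during the upload, so the
    // full JPEG never has to live in an NSData.
    NSURLSession *session = [NSURLSession sessionWithConfiguration:
                             [NSURLSessionConfiguration defaultSessionConfiguration]];
    NSString *imagePath = @"/path/to/image.jpg"; // placeholder
    NSMutableURLRequest *request =
        [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/upload"]];
    request.HTTPMethod = @"POST";

    NSURLSessionUploadTask *task =
        [session uploadTaskWithRequest:request
                              fromFile:[NSURL fileURLWithPath:imagePath]
                     completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
                         if (error) NSLog(@"Upload failed: %@", error);
                     }];
    [task resume];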
I notice that you're using the Xcode gauges to analyze memory usage. I'd suggest using Instruments' Allocations tool instead. The reason I suggest this is not only that I believe it to be more accurate, but more importantly, you seem to have a pattern of memory not falling completely back down to baseline after performing some actions. This suggests that you might have some leaks, and only Instruments will help you identify them.
I'd suggest watching WWDC 2013 video Fixing Memory Issues or WWDC 2012 video iOS App Performance: Memory which illustrate (amongst other things) how to use Instruments to identify memory problems.
I notice that you're extracting the NSData from the UIImage objects. I don't know how you're getting the images, but if they're from your ALAssetsLibrary (or if you downloaded them from another resource), you might want to grab the original asset rather than loading it into a UIImage and then creating an NSData from that. If you use UIImageJPEGRepresentation, you invariably either (a) make the NSData larger than the original asset (making the memory issue worse); or (b) reduce the quality unnecessarily in an effort to make the NSData smaller. Plus you strip out much of the metadata.
If you have to use UIImageJPEGRepresentation (e.g. it's a programmatically created UIImage), then so be it, do what you have to do. But if you have access to the original digital asset (e.g. see https://stackoverflow.com/a/27709329/1271826 for ALAssetsLibrary example), then do that. Quality may be maximized without making the asset larger than it needs to be.
Bottom line, you first want to make sure that you don't have any memory leaks, and then you want to minimize how much you hold in memory at any given time (not using any NSData if you can).
Memory peaks are harmless. You are going to get them when you are performing heavy uploads and downloads, but memory should go back to its normal level afterwards, which is the case in the image you posted. So I feel it is harmless and expected in your case.
You should only worry when your application uses memory and doesn't release it later.
If you are performing heavy uploads, have a look at the answer in this thread.
Can you show your code? Have you tried NSURLSession? I don't have problems with memory if I upload photos with NSURLSession.
NSURL *url = [NSURL URLWithString:@"SERVER"];
NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration defaultSessionConfiguration];
NSURLSession *session = [NSURLSession sessionWithConfiguration:configuration];
for (UIImage *image in self.arrayImages) {
    UIImage *imageToUpload = image;
    NSData *data = UIImagePNGRepresentation(imageToUpload);
    NSURLSessionUploadTask *task = [session uploadTaskWithRequest:[NSURLRequest requestWithURL:url] fromData:data completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    }];
    [task resume];
}

What are the advantages of using NSOutputStream?

I need to download large files from the Internet and save them to the local disk.
At first, I saved the data like this:
- (void)saveToLocalFile:(NSData *)data withOffset:(unsigned long long)offset
{
    NSString *localFile = [self tempLocalFile];
    dispatch_async(mFileOperationQueue_, ^{
        NSFileHandle *fileHandle = [NSFileHandle fileHandleForWritingAtPath:localFile];
        if (fileHandle == nil) {
            [data writeToFile:localFile atomically:YES];
            return;
        }
        else {
            [fileHandle seekToFileOffset:offset];
            [fileHandle writeData:data];
            [fileHandle closeFile];
        }
    });
}
AFNetworking uses NSOutputStream to save data locally, like this:
NSUInteger length = [data length];
while (YES) {
    NSInteger totalNumberOfBytesWritten = 0;
    if ([self.outputStream hasSpaceAvailable]) {
        const uint8_t *dataBuffer = (uint8_t *)[data bytes];
        NSInteger numberOfBytesWritten = 0;
        while (totalNumberOfBytesWritten < (NSInteger)length) {
            numberOfBytesWritten = [self.outputStream write:&dataBuffer[(NSUInteger)totalNumberOfBytesWritten] maxLength:(length - (NSUInteger)totalNumberOfBytesWritten)];
            if (numberOfBytesWritten == -1) {
                break;
            }
            totalNumberOfBytesWritten += numberOfBytesWritten;
        }
        break;
    }
    if (self.outputStream.streamError) {
        [self.connection cancel];
        [self performSelector:@selector(connection:didFailWithError:) withObject:self.connection withObject:self.outputStream.streamError];
        return;
    }
}
What are the advantages of using NSOutputStream over NSFileHandle when writing a file?
What are the advantages in terms of performance?
There are several different technologies for reading and writing the contents of files, nearly all of which are supported by both iOS and OS X. All of them do essentially the same thing but in slightly different ways. Some technologies require you to read and write file data sequentially, while others may allow you to jump around and operate on only part of a file. Some technologies provide automatic support for reading and writing asynchronously, while others execute synchronously so that you have more control over their execution.
Choosing from the available technologies is a matter of deciding how much control you want over the reading and writing process and how much effort you want to spend writing your file management code. Higher-level technologies like Cocoa streams limit your flexibility but provide an easy-to-use interface. Lower-level technologies like POSIX and Grand Central Dispatch (GCD) give you maximum flexibility and power but require you to write a little more code.
Reading and Writing Files Asynchronously
Because file operations involve accessing a disk (possibly one on a network server), performing those operations asynchronously is almost always preferred. Technologies such as Cocoa streams and Grand Central Dispatch (GCD) are designed to execute asynchronously at all times, which allows you to focus on reading and writing file data rather than worrying about where your code executes.
Processing an Entire File Linearly Using Streams
If you always read or write a file’s contents from start to finish, streams provide a simple interface for doing so asynchronously. Streams are typically used for managing sockets and other types of sources where data may become available over time. However, you can also use streams to read or write an entire file in one or more bursts. There are two types of streams available:
Use an NSOutputStream to write data sequentially to disk.
Use an NSInputStream object to read data sequentially from disk.
Please go through the Apple documentation for the code explanation.
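To make the comparison concrete, a minimal NSOutputStream write might look like this (a sketch, reusing data and localFile from the question's first snippet; note that write:maxLength: may write fewer bytes than requested, which is why AFNetworking's snippet above loops):

    // Append one chunk of downloaded data to a file via NSOutputStream.
    NSOutputStream *stream = [NSOutputStream outputStreamToFileAtPath:localFile append:YES];
    [stream open];
    NSInteger written = [stream write:(const uint8_t *)data.bytes maxLength:data.length];
    if (written == -1) {
        NSLog(@"Write failed: %@", stream.streamError);
    }
    [stream close];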

Caching on AFNetworking 2.0

So here's the deal. I recently started using AFNetworking to download a few files on start using the following code:
NSMutableURLRequest *rq = [api requestWithMethod:@"GET" path:@"YOUR/URL/TO/FILE" parameters:nil];
AFHTTPRequestOperation *operation = [[[AFHTTPRequestOperation alloc] initWithRequest:rq] autorelease];
NSString *path = [@"/PATH/TO/APP" stringByAppendingPathComponent:imageNameToDisk];
operation.outputStream = [NSOutputStream outputStreamToFileAtPath:path append:NO];
[operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    NSLog(@"SUCCESSFUL IMG RETRIEVE to %@!", path);
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    // Deal with failure
}];
With my path actually plugged into the path variable (sorry, I'm not on the right computer right now to actually copy-paste the text, but it's exactly the same as the above with different path values).
And everything is working great! I'm getting the file successfully downloaded and everything. My current issue is that I'm trying to get caching to work, but I'm having a lot of difficulty. Basically, I'm not sure what I actually have to do client-side as of AFNetworking 2.0. Do I still need to set up the NSURLCache? Do I need to set the caching type header on the request operation differently? I thought that maybe it was just entirely built in, but I'm receiving a status of 200 every time the code runs, even with no changes in the file. If I do have to use the NSURLCache, do I have to manually save the e-tag from the success block's request operation myself and then feed that back in? Any help on how to progress would be much appreciated. Thanks guys!
AFNetworking uses NSURLCache for caching by default. From the FAQ:
AFNetworking takes advantage of the caching functionality already provided by NSURLCache and any of its subclasses. So long as your NSURLRequest objects have the correct cache policy, and your server response contains a valid Cache-Control header, responses will be automatically cached for subsequent requests.
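On the client side, then, the setup is usually just sizing the shared cache and using a cache-honoring request policy (a sketch; the capacities are illustrative):

    // Give NSURLCache some room, e.g. in application:didFinishLaunchingWithOptions:.
    NSURLCache *cache = [[NSURLCache alloc] initWithMemoryCapacity:4 * 1024 * 1024   // 4 MB
                                                      diskCapacity:32 * 1024 * 1024  // 32 MB
                                                          diskPath:nil];
    [NSURLCache setSharedURLCache:cache];

    // Requests then only need a policy that honors the server's Cache-Control header.
    NSMutableURLRequest *request =
        [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/file"]];
    request.cachePolicy = NSURLRequestUseProtocolCachePolicy; // the default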
Note that this mechanism caches NSData, so every time you retrieve from this cache you need to perform a somewhat expensive NSData-to-UIImage operation. This is not performant enough for rapid display, for example if you're showing images in a UITableView or UICollectionView.
If this is the case, look at UIImageView+AFNetworking, which adds downloads and caching of UIImage objects to UIImageView. For some applications you can just use the out-of-the-box implementation, but it is very basic. You may want to look at the source code for this class (it's not very long) and use it as a starting point for your own caching mechanism.
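Typical usage of that category is a single call (the URL and placeholder image name are hypothetical):

    #import "UIImageView+AFNetworking.h"

    // Downloads asynchronously, caches the decoded image, and sets it on the view.
    [imageView setImageWithURL:[NSURL URLWithString:@"https://example.com/thumb.jpg"]
              placeholderImage:[UIImage imageNamed:@"placeholder"]];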

What is the fastest way to load a large CSV file into core data

Conclusion
Problem closed, I think.
Looks like the problem had nothing to do with the methodology, but that Xcode did not clean the project correctly between builds.
It looks like, after all those tests, the sqlite file being used was still the very first one, which wasn't indexed.
Beware of Xcode 4.3.2: I have had nothing but problems with Clean not cleaning, or files added to the project not automatically being added to the bundle resources.
Thanks for the different answers.
Update 3
Since I invite anybody to just try the same steps to see if they get the same results, let me detail what I did:
I start with a blank project
I defined a datamodel with one Entity, 3 attributes (2 strings, 1 float)
The first string is indexed
In application:didFinishLaunchingWithOptions:, I am calling:
[self performSelectorInBackground:@selector(populateDB) withObject:nil];
The code for populateDB is below:
- (void)populateDB {
    NSLog(@"start");
    NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
    NSManagedObjectContext *context;
    if (coordinator != nil) {
        context = [[NSManagedObjectContext alloc] init];
        [context setPersistentStoreCoordinator:coordinator];
    }
    NSString *filePath = [[NSBundle mainBundle] pathForResource:@"input" ofType:@"txt"];
    if (filePath) {
        NSString *myText = [[NSString alloc] initWithContentsOfFile:filePath
                                                           encoding:NSUTF8StringEncoding
                                                              error:nil];
        if (myText) {
            __block int count = 0;
            [myText enumerateLinesUsingBlock:^(NSString *line, BOOL *stop) {
                line = [line stringByReplacingOccurrencesOfString:@"\t" withString:@" "];
                NSArray *lineComponents = [line componentsSeparatedByString:@" "];
                if (lineComponents) {
                    if ([lineComponents count] == 3) {
                        float f = [[lineComponents objectAtIndex:0] floatValue];
                        NSNumber *number = [NSNumber numberWithFloat:f];
                        NSString *string1 = [lineComponents objectAtIndex:1];
                        NSString *string2 = [lineComponents objectAtIndex:2];
                        NSManagedObject *object = [NSEntityDescription insertNewObjectForEntityForName:@"Bigram"
                                                                                inManagedObjectContext:context];
                        [object setValue:number forKey:@"number"];
                        [object setValue:string1 forKey:@"string1"];
                        [object setValue:string2 forKey:@"string2"];
                        NSError *error = nil;
                        count++;
                        // save in batches of 1,000 to keep the pending-object count down
                        if (count >= 1000) {
                            if (![context save:&error]) {
                                NSLog(@"Whoops, couldn't save: %@", [error localizedDescription]);
                            }
                            count = 0;
                        }
                    }
                }
            }];
            NSLog(@"done importing");
            NSError *error = nil;
            if (![context save:&error]) {
                NSLog(@"Whoops, couldn't save: %@", [error localizedDescription]);
            }
        }
    }
    NSLog(@"end");
}
Everything else is default Core Data code, nothing added.
I run that in the simulator.
I go to ~/Library/Application Support/iPhone Simulator/5.1/Applications/<app ID>/Documents
There is the sqlite file that is generated
I take that and I copy it in my bundle
I comment out the call to populateDb
I edit persistentStoreCoordinator to copy the sqlite file from bundle to documents at first run
- (NSPersistentStoreCoordinator *)persistentStoreCoordinator
{
    @synchronized (self)
    {
        if (__persistentStoreCoordinator != nil)
            return __persistentStoreCoordinator;
        NSString *defaultStorePath = [[NSBundle mainBundle] pathForResource:@"myProject" ofType:@"sqlite"];
        NSString *storePath = [[[self applicationDocumentsDirectory] path] stringByAppendingPathComponent:@"myProject.sqlite"];
        NSError *error;
        if (![[NSFileManager defaultManager] fileExistsAtPath:storePath])
        {
            if ([[NSFileManager defaultManager] copyItemAtPath:defaultStorePath toPath:storePath error:&error])
                NSLog(@"Copied starting data to %@", storePath);
            else
                NSLog(@"Error copying default DB to %@ (%@)", storePath, error);
        }
        NSURL *storeURL = [NSURL fileURLWithPath:storePath];
        __persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                                 [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
        if (![__persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:options error:&error])
        {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        return __persistentStoreCoordinator;
    }
}
I remove the app from the simulator, and I check that ~/Library/Application Support/iPhone Simulator/5.1/Applications/<app ID> is now removed. I rebuild and launch again.
As expected, the sqlite file is copied over to ~/Library/Application Support/iPhone Simulator/5.1/Applications/<app ID>/Documents
However, the size of the file is significantly smaller than in the bundle!
Also, doing a simple query with a predicate like predicate = [NSPredicate predicateWithFormat:@"string1 == %@", string1]; clearly shows that string1 is not indexed anymore
Following that, I create a new version of the datamodel, with a meaningless update, just to force a lightweight migration
If run on the simulator, the migration takes a few seconds, the database doubles in size, and the same query now takes less than a second to return instead of minutes.
This would solve my problem (force a migration), but that same migration takes 3 minutes on the iPad and happens in the foreground.
So that's where I am right now; the best solution for me would still be to prevent the indexes from being removed, as any other importing solution at launch time just takes too much time.
Let me know if you need more clarifications...
Update 2
So the best result I have had so far is to seed the Core Data database with the sqlite file produced by a quick tool with a similar data model, but without the indexes set when producing the sqlite file. Then I import this sqlite file into the Core Data app with the indexes set, allowing for a lightweight migration. For 2 million records on the new iPad, this migration still takes 3 minutes. The final app will have 5 times this number of records, so we're still looking at a very long processing time.
If I go that route, the new question would be: can a lightweight migration be performed in the background?
Update
My question is NOT how to create a tool to populate a Core Data database and then import the sqlite file into my app. I know how to do this; I have done it countless times. But until now, I had not realized that such a method could have side effects: in my case, an indexed attribute in the resulting database clearly got 'unindexed' when importing the sqlite file that way.
If you were able to verify that any indexed data is still indexed after such a transfer, I am interested to know how you proceeded; otherwise, what would be the best strategy to seed such a database efficiently?
Original
I have a large CSV file (millions of lines) with 4 columns, strings and floats.
This is for an iOS app.
I need this to be loaded into Core Data the first time the app is loaded.
The app is pretty much non-functional until the data is available, so loading time matters; a first-time user obviously does not want the app to take 20 minutes to load before being able to run it.
Right now, my current code takes 20 minutes on the new iPad to process a 2-million-line CSV file.
I am using a background context so as not to lock the UI, and I save the context every 1,000 records.
The first idea I had was to generate the database on the simulator, then copy/paste it into the Documents folder at first launch, as this is the common unofficial way of seeding a large database. Unfortunately, the indexes don't seem to survive such a transfer, and although the database was available after just a few seconds, performance was terrible because my indexes were lost. I posted a question about the indexes already, but there doesn't seem to be a good answer to that.
So what I am looking for is either:
a way to improve performance on loading millions of records into Core Data
if the database is pre-loaded and moved at first startup, a way to keep my indexes
best practices for handling this kind of scenario. I don't remember using any app that required me to wait for x minutes before first use (but maybe The Daily, and that was a terrible experience).
any creative way to make the user wait without them realizing it: background import while going through a tutorial, etc.
Not Using Core Data?
...
Pre-generate your database using an offline application (say, a command-line utility) written in Cocoa that runs on OS X and uses the same Core Data framework that iOS uses. You don't need to worry about "indexes surviving" or anything -- the output is a Core Data-generated .sqlite database file, directly and immediately usable by an iOS app.
As long as you can do the DB generation off-line, it's the best solution by far. I have successfully used this technique to pre-generated databases for iOS deployment myself. Check my previous questions/answers for a bit more detail.
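A skeleton of such a command-line seeder might look like the following (the model and store paths are placeholders; the key point is that it links the exact same Core Data model the iOS app uses):

    // main.m of an OS X command-line tool: builds Seed.sqlite offline so the
    // iOS app can ship it in its bundle and copy it into Documents on first launch.
    #import <CoreData/CoreData.h>

    int main(int argc, const char *argv[]) {
        @autoreleasepool {
            NSURL *modelURL = [NSURL fileURLWithPath:@"MyModel.momd"]; // compiled model, placeholder path
            NSManagedObjectModel *model =
                [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];
            NSPersistentStoreCoordinator *psc =
                [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

            NSError *error = nil;
            NSURL *storeURL = [NSURL fileURLWithPath:@"Seed.sqlite"]; // output, placeholder path
            if (![psc addPersistentStoreWithType:NSSQLiteStoreType configuration:nil
                                             URL:storeURL options:nil error:&error]) {
                NSLog(@"Could not create store: %@", error);
                return 1;
            }

            NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
            context.persistentStoreCoordinator = psc;

            // ... parse the CSV here and insert managed objects, saving in batches ...

            if (![context save:&error]) {
                NSLog(@"Save failed: %@", error);
                return 1;
            }
        }
        return 0;
    }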
I'm just starting out with SQLite and I need to integrate a DB into one of my apps that will have a lot of indexed data in a SQLite database. I was hoping I could bulk-insert my information into a SQLite file and add that file to my project. After discovering and reading through your question, the provided answer, and the numerous comments, I decided to check out the SQLite source to see if I could make heads or tails of this issue.
My initial thought was that the iOS implementation of SQLite is, in fact, throwing out your indices. The reason is that you initially create your DB index on an x86/x64 system. iOS runs on ARM processors, and numbers are handled differently. If you want your indexes to be fast, you should generate them in such a way that they are optimized for the processor on which they will be searched.
Since SQLite targets multiple platforms, it would make sense to drop any indices that were created on another architecture and rebuild them. However, since no one wants to wait for an index to rebuild the first time it is accessed, the SQLite devs most likely decided to just drop the index.
After digging into the SQLite code, I've come to the conclusion that this is most likely what is happening. If not for the processor-architecture reason, I did find code (see analyze.c and other meta-information in sqliteInt.h) where indices were being deleted if they were generated under an unexpected context. My hunch is that the context that drives this process is how the underlying b-tree data structure was constructed for the existing key. If the current instance of SQLite can't consume the key, it deletes it.
It is worth mentioning that the iOS Simulator is just that: a simulator. It is not an emulator of the hardware. As such, your app is running in a pseudo-iOS device, running on an x86/x64 processor.
When your app and SQLite DB are loaded onto your iOS device, an ARM-compiled variant is loaded, which also links to the ARM-compiled libraries within iOS. I couldn't find ARM-specific code associated with SQLite, so I imagine Apple had to modify it to suit their needs. That could also be part of the problem. This may not be an issue with the root SQLite code; it could be an issue with the Apple/ARM-compiled variant.
The only reasonable solution that I can come up with is to create a generator application that you run on your iOS device. Run the application, build the keys, and then rip the SQLite file from the device. I'd imagine such a file would work across all devices, since all ARM processors used by iOS are 32-bit.
Again, this answer is a bit of an educated guess. I'm going to re-tag your question as SQLite. Hopefully a guru may find this and be able to weigh in on this issue. I'd really like to know the truth for my own benefit.
