Elusive crash: terminated due to memory issue - iOS

I'm still trying to debug an elusive crash in my app. See here for my earlier post.
The app takes sound from the microphone, processes it, and continuously updates the display with the processed results. After running uneventfully for many minutes, the app will halt with Message from debugger: terminated due to memory issue. There is no stack trace.
The timing of the crash suggests that some finite resource gets exhausted after so many minutes of running. The time to crash is quite uniform: it may change unpredictably when I change something in my code, but as long as the code stays the same, it remains approximately constant. On a recent set of 10 test runs, the time to crash varied between 1014 and 1029 seconds.
The number of times the display gets updated is even more uniform. On that same set of 10 tests, the number of calls to UIView.draw varied from 15311 to 15322. That's a variation of 0.07 percent, as opposed to 1.5 percent for the time to crash.
It's not running out of memory. My code is written in Swift 3, so I'm not doing any explicit mallocs or frees, and I've made my class references weak where needed. I've also tested with the Activity Monitor, Allocations, and Leaks instruments in Xcode. My program takes up 44.6 MiB, and that doesn't grow with time.
And I've been careful about thread safety when accessing shared data. All shared data is read and written on the same serial DispatchQueue.
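A minimal sketch of that pattern (the queue and property names here are illustrative, not from the actual app):
let sharedDataQueue = DispatchQueue(label: "com.example.shared-data") // serial by default

var latestSamples = [Float]()

func storeSamples(_ samples: [Float]) {
    sharedDataQueue.async {
        latestSamples = samples // writes happen only on the serial queue
    }
}

func readSamples(_ use: @escaping ([Float]) -> Void) {
    sharedDataQueue.async {
        use(latestSamples) // reads go through the same queue, so they can't race the writes
    }
}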
I've traced the crash to a section of code that writes a byte array to disk, then reads in another array of bytes. Here's a simplified version of that code:
var inputBuf: Buffer = Buffer()
var outputBuf: Buffer = Buffer()
var fileHandle: FileHandle? = ...

struct Buffer {
    let bufferSize = 16384
    var fileIndex: Int = 0
    var bytes: [UInt8]

    init() {
        bytes = [UInt8](repeating: 0, count: bufferSize)
    }

    // Writes the whole buffer at its file offset
    func save(fileHandle: FileHandle) {
        fileHandle.seek(toFileOffset: UInt64(fileIndex))
        fileHandle.write(Data(bytes))
    }
}

func bug() {
    outputBuf.save(fileHandle: fileHandle!)
    fileHandle!.seek(toFileOffset: UInt64(inputBuf.fileIndex))
    let data = fileHandle!.readData(ofLength: inputBuf.bufferSize)
    for i in 0..<data.count {
        inputBuf.bytes[i] = data[i] // May crash here
    }
}
Usually the crash occurs during the loop that copies data from the result of the readData to my buffer. But on one occasion, the loop completed before the crash. That leads me to suspect the actual crash occurs on another thread. There's no stack trace, so my only debugging technique is to insert print statements in the code.
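For what it's worth, the byte-by-byte loop can also be written as a single bulk copy (a sketch; as the resolution below shows, the loop itself was not the real culprit):
inputBuf.bytes.replaceSubrange(0..<data.count, with: data) // copies the whole chunk at once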
fileIndex is always between 0 and 2592500. I modified the code to close the FileHandle after use and create a new FileHandle when next needed. It did not affect the outcome.

It was the zombie detector! I turned off zombie detection and the app runs forever. (With Enable Zombie Objects checked, deallocated objects are kept around rather than freed, so the process's footprint grows steadily no matter what the code does; that fits both the very uniform time to crash and the clean Allocations/Leaks results.)

Related

Octave crashes due to clearing variables in script without breakpoints - Attempts to write to last memory block

On my system (Windows, 8 GB RAM, 64-bit i7), Octave is having this problem handling medium-sized arrays. I have Task Manager open, and the memory never goes beyond 200 MB before the graphing section. It will often crash at around 150 MB. The interesting thing is that if I put breakpoints into my code to find where the problem lies, the problem goes away, and I am actually able to get through everything and move on to the graphing portion. That crashes too, unless I add breakpoints every other graph.
With breakpoints, I am able to load it up to the full script load, which should be around 1 GB. I'm not crazy, right? This is supposed to be easy stuff that Matlab would breeze through in a second.
Below is a snippet of code that will crash unless I use breakpoints at every double new line.
n2040id = fopen(n_20to40ft_file);
n2040data = dlmread(n2040id,',', [8 1 70849 98]);
fclose(n2040id);

n20test = n2040data(3410:22066,:);
n30test = n2040data(26730:45748,:);
n40test = n2040data(49874:68706,:);
clear n2040data;

%% 20 Ft Test Processing
n20spo2 = n20test(:,88);
n20spo2(n20spo2 == 0) = [];
n20co = n20test(:,89);
n20co(n20co == 0) = [];
clear n20test;

%% 30 Ft Test Processing
n30spo2 = n30test(:,88);
n30spo2(n30spo2 == 0) = [];
n30co = n30test(:,89);
n30co(n30co == 0) = [];
clear n30test;

%% 40 Ft Test Processing
n40spo2 = n40test(:,88);
n40spo2(n40spo2 == 0) = [];
n40co = n40test(:,89);
n40co(n40co == 0) = [];
clear n40test;
This snippet uses about an extra 60-90 MB of memory compared to the memory before that point, and each array is cleared before every break once I am done with it. The first array is a double of size 70841x98, while the others come out to around 450x1 to 900x1. These are not difficult arrays to deal with by a long shot. Yet it will crash unless I put in those breakpoints; then I can just press continue and it's fine.
I've also tried using clear -v but that crashed too unless I used breakpoints.
Now, I debugged with Visual Studio and got this error:
No symbol file loaded for liboctgui-3.dll, as well as an error that it was trying to access 0xFFFFFFFFFFFFFFFF and got "permission denied". Why on earth would it be trying to access the last memory block?
This actually doesn't happen if I don't clear any variables; it will happily take up the extra 1-1.4 GB. Is this a known issue? Releasing memory shouldn't cause a program to attempt to access the very last possible memory block.

Extremely high Memory & CPU usage when uploading parsed JSON data to Firebase in loop function

This is my very first question here so go easy on me!
I'm a newbie coder and I'm currently trying to loop through JSON, parse the data, and back up the information to my Firebase server, using Alamofire to request the JSON information.
Swift 4, Alamofire 4.5.1, Firebase 4.2.0
The process works, but not without infinitely increasing device memory usage and up to 200% CPU usage. By commenting out lines, I singled the memory and CPU usage down to the Firebase upload setValue line in my data-pulling function, which iterates through a JSON database of unknown length (by pulling a maximum of 1000 rows of data at a time, hence the increasing offset values). The database I'm pulling information from is huge, and with the increasing memory usage the function grinds to a very slow pace.
The function detects whether it has found an empty JSON (the end of the results), and then either ends, or parses the JSON, uploads the information to Firebase, increases the offset value by 1000 rows, and repeats itself with the new offset value.
var offset: Int! = 0
var finished: Bool! = false

func pullCities() {
    print("step 1")
    let call = GET_CITIES + "&offset=\(self.offset!)&rows=1000"
    let cityURL = URL(string: call)!
    Alamofire.request(cityURL).authenticate(user: USERNAME, password: PASSWORD).responseJSON { response in
        let result = response.result
        print("step 2")
        if let dict = result.value as? [Dictionary<String, Any>] {
            print("step 3")
            if dict.count == 0 {
                self.finished = true
                print("CITIES COMPLETE")
            } else {
                print("step 4")
                for item in dict {
                    if let id = item["city"] as? String {
                        let country = item["country"] as? String
                        let ref = DataService.ds.Database.child("countries").child(country!).child("cities").child(id)
                        ref.setValue(item)
                    }
                }
                self.finished = false
                print("SUCCESS CITY \(self.offset!)")
                self.offset = self.offset! + 1000
            }
        }
        if self.finished == true {
            return
        } else {
            self.pullCities() // recurse with the new offset
        }
    }
}
It seems to me like the data being uploaded to Firebase is being saved somewhere and not emptied once the upload completes? I couldn't find much information on this issue when searching the web.
Things I've tried:
a repeat-while loop (no good, as I only want one active repetition of each loop; it still had high memory and CPU usage)
performance monitoring (Xcode call tree found that "CFString (immutable)" and "__NSArrayM" were the main reason for the soaring memory usage - both relating to the setValue line above)
memory usage graphing (very clear that memory from this function doesn't get emptied when it loops back round - no decreases in memory at all)
autoreleasepool blocks (as per suggestions, unsuccessful)
Whole Module Optimisation already enabled (as per suggestions, unsuccessful)
Any help would be greatly appreciated!
UPDATE
Pictured below is the Allocations graph after a single run of the loop (1,000 rows of data). It suggests that Firebase is caching the data for every item in the result dict, but appears to de-allocate the memory only as one whole chunk once every single upload has finished.
Ideally, it should be de-allocating after every successful upload and not all at once. If anyone could give some advice on this I would be very grateful!
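One possible direction, sketched here rather than tested against this exact setup: the Firebase SDK's setValue(_:withCompletionBlock:) reports when each write has been acknowledged, so a DispatchGroup could gate the next page on the current batch having flushed. The wiring below reuses the names from the question:
let group = DispatchGroup()
for item in dict {
    if let id = item["city"] as? String, let country = item["country"] as? String {
        let ref = DataService.ds.Database.child("countries").child(country).child("cities").child(id)
        group.enter()
        ref.setValue(item) { _, _ in
            group.leave() // this write has been committed; its buffers can go away
        }
    }
}
group.notify(queue: .main) {
    self.offset = self.offset! + 1000
    self.pullCities() // fetch the next page only once this batch has flushed
}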
FINAL UPDATE
If anyone comes across this with the same problem: I didn't find a solution. My requirements changed, so I switched the code over to Node.js, which works flawlessly. HTTP requests are also very easy to code for in JavaScript!
I had a similar issue working with data on external websites, and the only way I could fix it was to wrap the loop in an autoreleasepool {} block, which forced the memory to clear down on each iteration. Given ARC, you might think such a structure is not needed in Swift, but see this SO discussion:
Is it necessary to use autoreleasepool in a Swift program?
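Applied to the loop in the question, the shape would be roughly this (a sketch, not tested against this exact Firebase setup):
for item in dict {
    autoreleasepool {
        if let id = item["city"] as? String {
            let country = item["country"] as? String
            let ref = DataService.ds.Database.child("countries").child(country!).child("cities").child(id)
            ref.setValue(item)
        }
    } // temporaries created during this iteration are released here instead of accumulating
}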
Hope that helps.
Sometimes the compiler is not able to properly optimise your code unless you enable Whole Module Optimization in the project build settings. This usually happens when generics are being used.
Try turning it on even for the debug environment and test.

Memory leak: steady increase in memory usage with simple device motion logging

Consider this simple Swift code that logs device motion data to a CSV file on disk.
let motionManager = CMMotionManager()
var handle: NSFileHandle? = nil

override func viewDidLoad() {
    super.viewDidLoad()
    let documents = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as NSString
    let file = documents.stringByAppendingPathComponent("/data.csv")
    NSFileManager.defaultManager().createFileAtPath(file, contents: nil, attributes: nil)
    handle = NSFileHandle(forUpdatingAtPath: file)
    motionManager.startDeviceMotionUpdatesToQueue(NSOperationQueue.currentQueue(), withHandler: { (data, error) in
        // One CSV line per sample: timestamp, attitude, user acceleration, rotation rate
        let data_points = [data.timestamp,
                           data.attitude.roll, data.attitude.pitch, data.attitude.yaw,
                           data.userAcceleration.x, data.userAcceleration.y, data.userAcceleration.z,
                           data.rotationRate.x, data.rotationRate.y, data.rotationRate.z]
        let line = ",".join(data_points.map { $0.description }) + "\n"
        let encoded = line.dataUsingEncoding(NSUTF8StringEncoding)!
        self.handle!.writeData(encoded)
    })
}
I've been stuck on this for days. There appears to be a memory leak, as memory consumption steadily increases until the OS suspends the app for exceeding resources. It's critical that this app be able to run for long periods without interruption. Some notes:
I've tried using NSOutputStream and a CSV-writing library (CHCSVParser), but the issue is still present
Executing the logging code asynchronously (wrapping startDeviceMotionUpdatesToQueue in dispatch_async) does not remove the issue
Performing the sensor data processing in a background NSOperationQueue does fix the issue (only when maxConcurrentOperationCount >= 2). However, that causes concurrency issues in file writing: the output file is garbled, with lines intertwined with each other.
The issue does not seem to appear when logging accelerometer data only, but does appear when logging multiple sensors (e.g. accelerometer + gyroscope). Perhaps there's a threshold of file-writing throughput that triggers the issue?
The memory spikes seem to be spaced out at roughly 10-second intervals (the steps in the above graph). Perhaps that's indicative of something? (It could be an artifact of the memory instrumentation infrastructure, or perhaps it's garbage collection.)
Any pointers? I've tried to use Instruments, but I don't have the skills to use it effectively. It seems that the exploding memory usage is caused by __NSOperationInternal. Here's a sample Instruments trace.
Thank you.
First, see this answer of mine:
https://stackoverflow.com/a/28566113/341994
You should not be looking at the Memory graphs in the debugger; believe only what Instruments tells you. Debug builds and Release builds are memory-managed very differently in Swift.
Second, if there is still trouble, try wrapping the interior of your handler in an autoreleasepool closure. I do not expect that to make a difference (this is not a loop), and I do not expect it to be necessary, as I suspect that using Instruments will reveal there was never any problem in the first place. However, the autoreleasepool call will make sure that autoreleased objects are not given a chance to accumulate.
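In current Swift syntax, and assuming handle is a FileHandle as in the question, wrapping the interior of the handler would look roughly like this (a sketch of the shape, not a drop-in replacement for the Swift 1.x code above):
motionManager.startDeviceMotionUpdates(to: OperationQueue.current!) { data, error in
    autoreleasepool {
        guard let data = data, let handle = self.handle else { return }
        let values = [data.timestamp,
                      data.attitude.roll, data.attitude.pitch, data.attitude.yaw,
                      data.userAcceleration.x, data.userAcceleration.y, data.userAcceleration.z,
                      data.rotationRate.x, data.rotationRate.y, data.rotationRate.z]
        let line = values.map { String($0) }.joined(separator: ",") + "\n"
        handle.write(line.data(using: .utf8)!)
    } // autoreleased temporaries (strings, data) die here, once per sample
}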

Why do I get memory warnings with only 7 MB of memory allocated?

I am running my iOS App on an iPod touch and I get memory warnings even though the total allocation peak is only 7 MB, as shown below (this happens when the Game Scene is pushed):
What I find strange is that:
the left peak (at time 0.00) corresponds to 20 MB of memory allocated (Introduction Scene), and despite this it DOES NOT give any memory warning.
the central peak (at time 35.00) corresponds to roughly 7 MB of memory allocated (Game Scene being pushed), and it DOES give a memory warning.
I do not understand why I get those warnings if the total memory is only 7 MB. Is this normal? How can I avoid this?
Looking at the allocation density, we can see the following schema, which (to me) does not show much difference between the moment when the Intro Scene is pushed (0.00) and the moment when the Game Scene is pushed (35.00). Since the density peaks are similar, I would assume that the memory warnings are due to something else that I am not able to spot.
EDIT:
I have been following a suggestion to use Activity Monitor instead, but unfortunately my App crashes when loading the Game Scene, with only 30 MB of memory allocated. Here is the Activity Monitor report.
Looking at the report, I can see a total real memory usage sum of about 105 MB. Given that this should refer to RAM, and that my model should have 256 MB of RAM, this should not cause App crashes or memory leak problems.
I ran the Leaks monitor and it does not show any leak in my App. I also killed all the other apps.
However, analyzing the report, I see an astonishing 167 MB of Virtual Memory associated with my App. Is this normal? What does that value mean? Can this be the reason for the crash? How can I detect which areas of my code are responsible for this?
My iPod is a 4th generation model with 6.4 GB of capacity (storage) and only 290 MB free. I am not sure whether this somehow affects the Virtual Memory paging performance.
EDIT 2: I have also looked more at SpringBoard; its Virtual Memory usage is 180 MB. Is this normal? I found some questions/answers that seem to suggest SpringBoard is responsible for autoreleasing objects (it should be the process that manages the screen and home button, but I am not sure whether it also has to do with memory management). Is this correct?
Another note: I am using ARC. However, I am not sure this has much to do with the issue, as there are no apparent memory leaks, and Xcode should convert the code by adding the release/dealloc/retain calls to the compiled binary.
EDIT 3: As said before, I am using ARC and Cocos2d (2.0). I have been playing around with Activity Monitor. I found out that if I remove the GameCenter authentication mechanism, then Activity Monitor runs fine (new doubt: has anyone else had a similar issue? Is the GameCenter authentication view being retained somewhere?). However, I noticed that every time I navigate back and forth among the various scenes prior to the GameScene (Initial Scene -> Character Selection -> Planet Selection -> Character Selection -> Planet Selection -> etc. -> Character Selection...), the REAL MEMORY usage increases. After a while I start to get memory warnings and the App gets killed by iOS. Now the question is:
-> am I replacing the scenes in the correct way? I call the following from the various scenes:
[[CCDirector sharedDirector] replaceScene: [MainMenuScene scene]];
I have Cocos2d 2.0 as static library and the code of replaceScene is this:
-(void) replaceScene: (CCScene*) scene
{
    NSAssert(scene != nil, @"Argument must be non-nil");
    NSUInteger index = [scenesStack_ count];
    sendCleanupToScene_ = YES;
    [scenesStack_ replaceObjectAtIndex:index-1 withObject:scene];
    nextScene_ = scene; // nextScene_ is a weak ref
}
I wonder if somehow the scene does not get deallocated properly. I verified that the cleanup method is being called. However, I also added a CCLOG call to the CCLayer dealloc method and rebuilt the static library. The result is that the dealloc method doesn't seem to be called.
Is this normal? :D
I found that other people had similar issues. I am wondering if it has to do with retain cycles and self blocks. I really need to spend some time studying this unless, from EDIT 3, anyone can tell me already what I am doing wrong :-)
All memory capacity is shared across all apps and processes running in iOS. So other apps can use a lot of memory, and your app will receive memory warnings too. You'll keep receiving memory warnings until enough memory is free.
To understand what actually happens with memory in your app, you should:
Profile your app with Leaks (ARC does not guarantee that you don't have leaks, e.g. the self-capturing issue).
Use heapshot analysis (briefly described here: http://bentrengrove.com/blog/2013/4/26/heapshot-analysis)
And check out this post about memory & virtual memory in iOS: http://liam.flookes.com/wp/2012/05/03/finding-ios-memory/
I solved this by printing the process's effective memory usage to the console. That way I could get a precise measurement of the real memory used by the App process. Using Instruments proved to be imprecise, as the real memory used did not match the figure shown in Instruments.
This code can be used to get the effective memory usage:
-(vm_size_t)report_memory
{
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr != KERN_SUCCESS) {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
    return info.resident_size;
}
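For anyone doing the same check from Swift, here is a rough equivalent (a sketch using the newer MACH_TASK_BASIC_INFO flavor rather than TASK_BASIC_INFO; adapt as needed):
import Foundation

func reportMemory() -> UInt64 {
    var info = mach_task_basic_info()
    var count = mach_msg_type_number_t(MemoryLayout<mach_task_basic_info>.size / MemoryLayout<natural_t>.size)
    let kerr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { rawPtr in
            task_info(mach_task_self_, task_flavor_t(MACH_TASK_BASIC_INFO), rawPtr, &count)
        }
    }
    guard kerr == KERN_SUCCESS else {
        print("task_info() failed: \(String(cString: mach_error_string(kerr)))")
        return 0
    }
    return info.resident_size // resident size in bytes
}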

Core Data Import - Not releasing memory

My question is about Core Data and memory not being released. I am doing a sync process that imports data from a web service which returns JSON. I load the data to import into memory, loop through it, and create NSManagedObjects. The imported data needs to create objects that have relationships to other objects; in total there are around 11,000. But to isolate the problem, I am right now only creating the items of the first and second level, leaving the relationships out; those are 9043 objects.
I started checking the amount of memory used because the app was crashing at the end of the process (with the full data set). The first memory check is after loading the JSON into memory, so the measurement really only takes into consideration the creation and insertion of the objects into Core Data. What I use to check the memory used is this code (source):
-(void) get_free_memory {
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr == KERN_SUCCESS) {
        NSLog(@"Memory in use (in MB): %f", (float)(info.resident_size/1024.0)/1024.0);
    } else {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
}
My setup:
1 Persistent Store Coordinator
1 Main ManagedObjectContext (MMC) (NSMainQueueConcurrencyType used to read (only reading) the data in the app)
1 Background ManagedObjectContext (BMC) (NSPrivateQueueConcurrencyType, undoManager is set to nil, used to import the data)
The BMC is independent of the MMC; BMC is not a child context of MMC, and they do not share any parent context. I don't need BMC to notify MMC of changes, so BMC only needs to create/update/delete the data.
Platform:
iPad 2 and 3
iOS; I have tested with the deployment target set to 5.1 and to 6.1. There is no difference.
Xcode 4.6.2
ARC
Problem:
While importing the data, the memory used doesn't stop increasing, and iOS doesn't seem able to drain it even after the end of the process. If the data sample grows, this leads to memory warnings and then to iOS closing the app.
Research:
Apple documentation
Efficiently importing Data
Reducing Memory Overhead
Good recap of the points to have in mind when importing data to Core Data (Stackoverflow)
Tests done and analysis of the memory release. He seems to have the same problem as I do, and he sent an Apple bug report with no response yet from Apple. (Source)
Importing and displaying large data sets (Source)
Indicates the best way to import large amounts of data. Although he mentions:
"I can import millions of records in a stable 3MB of memory without
calling -reset."
This makes me think this might be somehow possible? (Source)
Tests:
Data Sample: creating a total of 9043 objects.
Turned off the creation of relationships, as the documentation says they are "expensive"
No fetching is being done
Code:
- (void)processItems {
    [self.context performBlock:^{
        for (int i = 0; i < [self.downloadedRecords count];) {
            @autoreleasepool
            {
                [self get_free_memory]; // prints current memory used
                for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++)
                {
                    NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
                    Item *item = [self createItem];
                    objectsCount++;
                    // fills in the item object with data from the record, no relationship creation is happening
                    [self updateItem:item WithRecord:record];
                    // creates the subitems, fills them in with data from record, relationship creation is turned off
                    [self processSubitemsWithItem:item AndRecord:record];
                }
                // Context save is done before draining the autoreleasepool, as specified in research 5)
                [self.context save:nil];
                // Faulting all the created items
                for (NSManagedObject *object in [self.context registeredObjects]) {
                    [self.context refreshObject:object mergeChanges:NO];
                }
                // Double tap the previous action by resetting the context
                [self.context reset];
            }
        }
    }];
    [self check_memory]; // performs a repeated selector to [self get_free_memory] to view the memory after the sync
}
Measurement:
It goes from 16.97 MB to 30 MB; after the sync it goes down to 28 MB. Repeating the get_free_memory call every 5 seconds keeps the memory at 28 MB.
Other tests without any luck:
recreating the persistent store as indicated in research 2) has no effect
letting the thread wait a bit to see if the memory restores, as in example 4)
setting the context to nil after the whole process
doing the whole process without saving the context at any point (therefore losing the info). That actually resulted in a smaller amount of memory being maintained, leaving it at 20 MB. But it still doesn't decrease, and... I need the info stored :)
Maybe I am missing something, but I have really tested a lot, and after following the guidelines I would expect to see the memory decrease again. I have run the Allocations instrument to check the heap growth, and this seems to be fine too. Also, no memory leaks.
I am running out of ideas to test/adjust... I would really appreciate it if anyone could help me with ideas of what else I could test, or point out what I am doing wrong. Or maybe it is just like that, how it is supposed to work... which I doubt...
Thanks for any help.
EDIT
I have used Instruments to profile the memory usage with the Activity Monitor template, and the result shown in "Real Memory Usage" is the same as the one printed to the console by get_free_memory; the memory still never seems to get released.
OK, this is quite embarrassing... Zombies were enabled on the Scheme: under Arguments they were turned off, but under Diagnostics "Enable Zombie Objects" was checked...
Turning this off maintains the memory stable.
Thanks for the ones that read trough the question and tried to solve it!
It seems to me the key takeaway of your favorite source ("3MB, millions of records") is the batching that is mentioned (besides disabling the undo manager, which is also recommended by Apple and very important).
I think the important thing here is that this batching has to apply to the @autoreleasepool as well.
It's insufficient to drain the autorelease pool every 1000 iterations. You need to actually save the MOC, then drain the pool.
In your code, try putting a second @autoreleasepool into the second for loop. Then adjust your batch size to fine-tune.
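In outline, the suggested batch-plus-pools shape looks like this (a Swift sketch rather than your Objective-C; downloadedRecords, batchSize, createItem and updateItem are the question's own names, rendered into Swift):
context.perform {
    var i = 0
    while i < downloadedRecords.count {
        autoreleasepool {                  // outer pool: one batch
            let end = min(i + batchSize, downloadedRecords.count)
            for j in i..<end {
                autoreleasepool {          // second, inner pool: one record
                    let record = downloadedRecords[j]
                    let item = createItem()
                    updateItem(item, withRecord: record)
                }
            }
            i = end
            try? context.save()            // save the MOC first...
            context.reset()                // ...then the pools drain on the way out
        }
    }
}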
I have made tests with more than 500,000 records on an original iPad 1. The size of the JSON string alone was close to 40 MB. Still, it all works without crashes, and some tuning even leads to acceptable speed. In my tests, I could claim up to approx. 70 MB of memory on an original iPad.
