How to tell if malloc failed in swift on iOS? - ios

How can I detect if malloc fails in Swift?
The end goal is simply to allocate the required amount of space and, if iOS cannot allocate it, report this elegantly to the user (instead of the app being terminated).
When I try the code below, the pointer is never nil and errno is always 0.
let pointer: UnsafeMutableRawPointer? = malloc(fileSize)
print("errno = \(errno)")
if pointer == nil {
    print("Malloc failed")
}

Why are you using malloc in Swift, at all?
let pointer = UnsafeMutablePointer<UInt8>.allocate(capacity: fileSize)
More to the point, under what circumstance does reading a file need you to manually allocate memory like this? 🤔 Foundation provides APIs for reading files directly into Data.

LLVM has an optimization which elides unused malloc calls. If your test code never actually needs the allocation, it will never be performed.
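Putting both points together, here is a hedged sketch (with a deliberately impossible size, just to demonstrate the failure path) of how the nil result can actually be observed once the allocation is used, so the optimizer cannot drop it:

import Foundation

let fileSize = Int.max // deliberately impossible request, illustrative only

if let pointer = malloc(fileSize) {
    // Touch the memory so the optimizer cannot elide the allocation.
    pointer.storeBytes(of: 0, as: UInt8.self)
    // ... use the buffer ...
    free(pointer)
} else {
    // malloc returned nil; report this to the user instead of crashing.
    print("Allocation of \(fileSize) bytes failed, errno = \(errno)")
}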

Related

Understanding Swift thread safety

I have encountered a data race in my app using Xcode's Thread Sanitizer and I have a question on how to address it.
I have a var defined as:
var myDict = [Double: [Date: [String: Any]]]()
I have a thread setup where I call a setup() function:
let queue = DispatchQueue(label: "my-queue", qos: .utility)
queue.async {
    self.setup {
    }
}
My setup() function essentially loops through tons of data and populates myDict. This can take a while, which is why we need to do it asynchronously.
On the main thread, my UI accesses myDict to display its data. In a cellForRow: method:
if !myDict.keys.contains(someObject) {
    //Do something
}
And that is where I get my data race alert and the subsequent crash.
Exception NSException * "-[_NSCoreDataTaggedObjectID objectForKey:]:
unrecognized selector sent to instance
0x8000000000000000" 0x0000000283df6a60
Please help me understand how to access a variable in a thread-safe manner in Swift. I feel like I'm halfway there with setting it, but I'm confused about how to approach getting it on the main thread.
One way to access it asynchronously:
typealias Dict = [Double: [Date: [String: Any]]]
var myDict = Dict()

func getMyDict(f: @escaping (Dict) -> ()) {
    queue.async {
        let copy = myDict              // read myDict only on `queue`
        DispatchQueue.main.async {
            f(copy)                    // hand the copy over to the main queue
        }
    }
}
getMyDict { dict in
    assert(Thread.isMainThread)
}
This makes the assumption that queue may schedule long-lasting closures.
How it works:
You can only access myDict from within queue. In the above function, myDict is read on that queue, and a copy of it is handed over to the main queue. While you are showing the copy of myDict in the UI, you can simultaneously mutate the original myDict. "Copy on write" semantics on Dictionary ensure that copies are cheap.
You can call getMyDict from any thread and it will always call the closure on the main thread (in this implementation).
Caveat:
getMyDict is an asynchronous function, which shouldn't be a caveat at all nowadays, but I just want to emphasise it ;)
Alternatives:
Swift Combine: make myDict a published value on a publisher that implements your logic (a sketch follows this list).
Later, you may also consider using async/await when it becomes available.
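A hedged sketch of the Combine route (assuming iOS 13+, and with illustrative names such as DictStore that are not from the question):

import Combine
import Foundation

typealias Dict = [Double: [Date: [String: Any]]]

final class DictStore: ObservableObject {
    @Published private(set) var myDict = Dict()   // only ever mutated on the main queue
    private let queue = DispatchQueue(label: "my-queue", qos: .utility)

    func setup() {
        queue.async {
            var result = Dict()
            // ... long-running population of `result` ...
            DispatchQueue.main.async {
                self.myDict = result              // publishes the new value to subscribers
            }
        }
    }
}

// UI side: subscribe once and reload when the dictionary changes, e.g.
// store.$myDict
//     .receive(on: DispatchQueue.main)
//     .sink { dict in /* tableView.reloadData() */ }
//     .store(in: &cancellables)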
Preface: This will be a pretty long non-answer. I don't actually know what's wrong with your code, but I can share the things I do know that can help you troubleshoot it, and learn some interesting things along the way.
Understanding the error
Exception NSException * "-[_NSCoreDataTaggedObjectID objectForKey:]: unrecognized selector sent to instance 0x8000000000000000"
An Objective C exception was thrown (and not caught).
The exception happened when attempting to invoke -[_NSCoreDataTaggedObjectID objectForKey:]. This is a conventional way to refer to an Objective C method in writing. In this case, it's:
An instance method (hence the -, rather than a + that would be used for class methods)
On the class _NSCoreDataTaggedObjectID (more on this later)
On the method named objectForKey:
The object receiving this method invocation is the one with address 0x8000000000000000.
This is a pretty weird address. Something is up.
Another hint is the strange class name _NSCoreDataTaggedObjectID. There are a few observations we can make about it:
The _NS prefix suggests that it's an internal implementation detail of CoreData.
We google the name to find class dumps of the CoreData framework, which show us that:
_NSCoreDataTaggedObjectID subclasses _NSScalarObjectID
Which subclasses _NSCoreManagedObjectID
Which subclasses NSManagedObjectID
NSManagedObjectID is a public API, which has its own first-party documentation.
It has the word "tagged" in its name, which has a special meaning in the Objective C world.
Some back story
Objective C used message passing as its sole mechanism for method dispatch (unlike Swift, which usually prefers static and v-table dispatch, depending on the context). Every method call you wrote was essentially syntactic sugar on top of objc_msgSend (and its variants), passing to it the receiver object, the selector (the "name" of the method being invoked) and the arguments. This was a special function that would do the job of checking the class of the receiver object, and looking through that class's hierarchy until it found a method implementation for the desired selector.
This was great, because it allows you to do a lot of cool runtime dynamic behaviour. For example, menu bar items on a macOS app would just define the method name they invoke. Clicking on them would "send that message" to the responder chain, which would invoke that method on the first object that had an implementation for it (the lingo is "the first object that answers to that message").
This works really well, but has several trade-offs. One of them was that everything had to be an object. And by object, we mean a heap-allocated memory region, whose first several words of memory stored meta-data for the object. This meta-data would contain a pointer to the class of the object, which was necessary for doing the method-lookup process in objc_msgSend as I just described.
The issue is that for small objects (particularly NSNumber values, small strings, empty arrays, etc.), the overhead of these several words of object meta-data might be several times bigger than the actual object data you're interested in. E.g. even though NSNumber(value: true /* or false */) stores a single bit of "useful" data, on 64-bit systems there would be 128 bits of object overhead. Add to that all the malloc/free and retain/release overhead associated with dealing with large numbers of tiny objects, and you get a real performance issue.
"Tagged pointers" were a solution to this problem. The idea is that for small enough values of particular privileged classes, we won't allocate heap memory for their objects. Instead, we'll store their objects' data directly in their pointer representation. Of course, we would need a way to know if a given pointer is a real pointer (that points to a real heap-allocated object), or a "fake pointer" that encodes data inline.
The key realization was that malloc only ever returns memory aligned to 16-byte boundaries. This means that the low 4 bits of every memory address were always 0 (if they weren't, the address wouldn't be 16-byte aligned). These "unused" 4 bits could be employed to discriminate real pointers from tagged pointers. Exactly which bits are used and how differs between process architectures and runtime versions, but the general idea is the same.
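A quick, throwaway check of that alignment claim (an illustration, not part of the original answer):

import Foundation

// malloc hands back 16-byte-aligned storage, so the low 4 bits of a real
// heap address are always zero and are therefore free to carry a tag.
if let p = malloc(64) {
    print(String(UInt(bitPattern: p), radix: 16)) // always ends in 0
    assert(UInt(bitPattern: p) & 0xF == 0)
    free(p)
}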
If a pointer value had 0000 for those 4 bits, then the system would know it's a real pointer that points to a real heap-allocated object. All other possible values of those 4 bits could be used to signal what kind of data is stored in the remaining bits. The Objective-C runtime is open source, so you can see the tagged pointer classes and their tags:
enum objc_tag_index_t : uint16_t
{
// 60-bit payloads
OBJC_TAG_NSAtom = 0,
OBJC_TAG_1 = 1,
OBJC_TAG_NSString = 2,
OBJC_TAG_NSNumber = 3,
OBJC_TAG_NSIndexPath = 4,
OBJC_TAG_NSManagedObjectID = 5,
OBJC_TAG_NSDate = 6,
// 60-bit reserved
OBJC_TAG_RESERVED_7 = 7,
// 52-bit payloads
OBJC_TAG_Photos_1 = 8,
OBJC_TAG_Photos_2 = 9,
OBJC_TAG_Photos_3 = 10,
OBJC_TAG_Photos_4 = 11,
OBJC_TAG_XPC_1 = 12,
OBJC_TAG_XPC_2 = 13,
OBJC_TAG_XPC_3 = 14,
OBJC_TAG_XPC_4 = 15,
OBJC_TAG_NSColor = 16,
OBJC_TAG_UIColor = 17,
OBJC_TAG_CGColor = 18,
OBJC_TAG_NSIndexSet = 19,
OBJC_TAG_NSMethodSignature = 20,
OBJC_TAG_UTTypeRecord = 21,
// When using the split tagged pointer representation
// (OBJC_SPLIT_TAGGED_POINTERS), this is the first tag where
// the tag and payload are unobfuscated. All tags from here to
// OBJC_TAG_Last52BitPayload are unobfuscated. The shared cache
// builder is able to construct these as long as the low bit is
// not set (i.e. even-numbered tags).
OBJC_TAG_FirstUnobfuscatedSplitTag = 136, // 128 + 8, first ext tag with high bit set
OBJC_TAG_Constant_CFString = 136,
OBJC_TAG_First60BitPayload = 0,
OBJC_TAG_Last60BitPayload = 6,
OBJC_TAG_First52BitPayload = 8,
OBJC_TAG_Last52BitPayload = 263,
OBJC_TAG_RESERVED_264 = 264
};
You can see that strings, index paths, dates, and other similar "small and numerous" classes all have reserved pointer tag values. For each of these "normal classes" (NSString, NSDate, NSNumber, etc.), there's a special internal subclass which implements all the same public API, but using a tagged pointer instead of a regular object.
As you can see, there's a value for OBJC_TAG_NSManagedObjectID. It turns out that NSManagedObjectID objects were numerous and small enough that they would benefit greatly from this tagged-pointer representation. After all, the value of an NSManagedObjectID might be a single integer, much like NSNumber, which would be wasteful to heap-allocate.
If you'd like to learn more about tagged pointers, I'd recommend Mike Ash's writings, such as https://www.mikeash.com/pyblog/friday-qa-2012-07-27-lets-build-tagged-pointers.html
There was also a recent WWDC talk on the subject: WWDC 2020 - Advancements in the Objective-C runtime
The strange address
So in the previous section we found out that _NSCoreDataTaggedObjectID is the tagged-pointer subclass of NSManagedObjectID. Now we can notice something else that's strange: the pointer value we saw has a lot of zeros: 0x8000000000000000. So what we're dealing with is probably some kind of uninitialized object state.
Conclusion
The call stack can shed further light on where this happens exactly, but what we know is that somewhere in your program, the objectForKey: method is being invoked on an uninitialized value of NSManagedObjectID.
You're probably accessing a value too early, before it's properly initialized.
To work around this you can take one of several approaches:
In a future ideal world, you would just use the structured concurrency of Swift 5.5 (once that's available on enough devices) and async/await to push the work onto the background and await the result (a sketch follows this list).
Use a completion handler to invoke your value-consuming code only after the value is ready. This is the most immediately easy option, but it will blow up your code base with completion-handler boilerplate and bugs.
Use a concurrency abstraction library, like Combine, RxSwift, or PromiseKit. This will be a bit more work to set up, but usually leads to much clearer/safer code than throwing completion handlers in everywhere.
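For the first option, a hedged sketch of what the async/await shape could look like once Swift 5.5 is available (the names are illustrative, not the asker's real code):

import Foundation

typealias Dict = [Double: [Date: [String: Any]]]

// Heavy, synchronous population work; no shared state is touched here.
func buildDict() -> Dict {
    var result = Dict()
    // ... long-running population of `result` ...
    return result
}

// From a @MainActor context (e.g. a view controller):
// Task.detached(priority: .utility) {
//     let dict = buildDict()          // runs off the main thread
//     await MainActor.run {
//         self.myDict = dict          // publish the result on the main actor
//     }
// }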
The basic pattern to achieve thread safety is to never mutate/access the same property from multiple threads at the same time. The simplest solution is to just never let any background queue interact with your property directly. So, create a local variable that the background queue will use, and then dispatch the updating of the property to the main queue.
Personally, I wouldn't have setup interact with myDict at all, but rather return the result via the completion handler, e.g.
// properties

var myDict = ...
private let queue = DispatchQueue(label: "my-queue", qos: .utility) // some background queue on which we'll recalculate what will eventually be used to update `myDict`

// method doesn't reference `myDict` at all, but uses a local var only

func setup(completion: @escaping (Foo) -> Void) {
    queue.async {
        var results = ... // some local variable that we'll use as we're building up our results

        // do time-consuming population of `results` here;
        // do not touch `myDict` here, though;
        // when all done, dispatch the update of `myDict` back to the main queue

        DispatchQueue.main.async { // dispatch update of property back to the main queue
            completion(results)
        }
    }
}
Then the routine that calls setup can update the property (and trigger necessary UI update, too).
setup { results in
    self.myDict = results
    // also trigger UI update here, too
}
(Note, your closure parameter type (Foo in my example) would be whatever type myDict is. Maybe a typealias as advised elsewhere, or better, use custom types rather than dictionary within dictionary within dictionary. Use whatever type you’d prefer.)
By the way, your question's title and preamble talk about TSAN and thread safety, but you then share an "unrecognized selector" exception, which is a completely different issue. So you may well have two completely separate issues going on. A TSAN data race error would have produced a very different message (something like the error I show here). Now, if setup is mutating myDict from a background thread, that undoubtedly will lead to thread-safety problems, but your reported exception suggests there may also be some other problem...

Swift - Is setting variable property concurrency or multithreading safe?

As far as I know, setting a variable property when using concurrency or multithreading is not safe, but I can't produce a crash with the code below.
class Node {
    var data = 0
}

var node = Node()
let concurrentQueue = DispatchQueue(label: "queue", attributes: .concurrent)
for i in 0...1000 {
    concurrentQueue.async {
        node.data = i // Should get crash at this line
    }
}
UPDATE1
Thanks @MartinR for pointing this out in his comment.
Enable the “Thread Sanitizer” and it'll report an error immediately.
UPDATE2
The code gets an EXC_BAD_ACCESS KERN_INVALID_ADDRESS crash if data is changed to a reference type. It doesn't always happen, but sometimes it will. For example:
class Data {}

class Node {
    var data = Data() // Use reference type instead of value type
}

var node = Node()
let concurrentQueue = DispatchQueue(label: "queue", attributes: .concurrent)
for i in 0...1000 {
    concurrentQueue.async {
        node.data = Data() // EXC_BAD_ACCESS KERN_INVALID_ADDRESS
    }
}
This behavior also happens in Objective-C. Setting an object property concurrently will cause a crash, but with a primitive type the crash will not happen.
Questions
Will setting a value type property concurrently produce a crash?
If it doesn't produce a crash, what is the difference between setting a value type property and setting a reference type property?
It would be great if someone could also explain why setting a reference type property concurrently produces a crash.
First off
Class instances are stored in heap memory, while struct/enum values are stored in stack memory, and the compiler will try to allocate that memory at compile time (according to my first reference and many online answers). You can check this using the following code:
class MemTest {}

class Node {
    var data = MemTest()
}

let node = Node()
let concurrentQueue = DispatchQueue(label: "queue", attributes: .concurrent)
for index in 0...100000 {
    concurrentQueue.async {
        node.data = MemTest()
        withUnsafePointer(to: &node.data) {
            print("Node data @ \($0)")
        }
        withUnsafePointer(to: node.data) {
            print("Node data value no. \(index) @ \($0)")
        }
    }
}
How to: run it twice and check the memory address of, say, the 500th value change; switching MemTest between class and struct will show the difference. A struct will show the same address each time, while a class will show a different address between runs.
So changing a value type just changes the contents at the same address and no memory blocks are freed, while changing a reference type is not just changing an address but also freeing and allocating memory blocks, which is what causes the program to break.
Secondly
But if we run the first code block from @trungduc with withUnsafePointer, it shows us that the index var of the for loop is allocated on the fly and in heap memory, so why is that?
As mentioned before, the compiler will only try to allocate the memory at compile time if the value is calculable at compile time. If it isn't calculable, the value will be allocated in heap memory and stay there until the end of the scope (according to my second reference). So maybe the explanation here is that the system reclaims the allocated memory only after everything is done, at the end of the scope (I'm not very sure about this). So in this case the code will not produce a crash, as we know.
My conclusion: the reference type variable's memory is allocated and freed with no restriction in this case, whereas the value type variable's memory is allocated and removed only when the system enters and exits the scope which contains said variable.
Thanks
@Gokhan Topcu: check the section "Memory Management"
@Bruno Rocha: check the section "Heap Allocated Value Types"
Some words
My answer is not very solid and might have lots of grammar and spelling errors. All updates are appreciated. Thanks in advance.
Update
For the part I was not very sure about:
The variable index is copied to a new memory address with the = operator, so it doesn't matter where the scope ends; the stack is released after the for loop.
After some digging, I found that in @trungduc's code, with a reference type variable, it will do 3 things:
Allocate new memory for the new Data instance
Reduce the reference count of the old Data stored in node.data, and free the old Data if it's no longer referenced
Point node.data to new Data
While for value type it will do 1 thing only:
Point node.data to Integer in stack memory
The major difference is in step 2, where there is a chance the old Data memory is freed.
With a reference type, a scenario like this can happen:
________Task 1________|________Task 2________
Allocate new Data #1 |
|Allocate new Data #2
Load pointer to old |
Data |
Reduce reference count|
to old Data |
|Load pointer to old
|Data
Free old Data |
|Reduce reference count
|to old Data (!)
|Free old Data (!)
Reference new Data #1 |
|Reference new Data #2
while with a value type, this will happen:
________Task 1________|________Task 2________
Reference to Integer 1|
|Reference to Integer 2
In the first case we can get various interleavings, but in most cases we get a segmentation fault because thread 2 tries to dereference the old pointer after thread 1 has freed it. There can be other issues too, such as memory leaks: as we noted, thread 2 might not reduce the reference count of Data #1 correctly.
Whereas in the second case, it's just overwriting the stored value.
Note
In the second case, it will never cause a crash on Intel CPUs, but that is not guaranteed on other CPUs, because many CPUs do not promise that concurrent writes like this will not cause a crash.
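To tie this back to the crashing UPDATE2 example: a common fix (a sketch, with the question's Data class renamed Payload to avoid clashing with Foundation's Data) is to funnel every access to the property through a serial queue, so the release/retain of the old value can never be interleaved between threads:

import Foundation

class Payload {}

class Node {
    private let isolation = DispatchQueue(label: "node.isolation") // serial by default
    private var _data = Payload()

    var data: Payload {
        get { isolation.sync { _data } }
        set { isolation.sync { _data = newValue } }
    }
}

let node = Node()
let concurrentQueue = DispatchQueue(label: "queue", attributes: .concurrent)
for _ in 0...1000 {
    concurrentQueue.async {
        node.data = Payload() // the old value is released inside the serial queue, so no race
    }
}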
Non-atomic doesn't mean that the app will crash if multiple threads use the shared resource.
Atomic Properties
Defining a property as atomic will guarantee that a valid value will be returned. Notice that valid does not always mean correct.
This also does not mean that atomic properties are thread safe. Different threads can attempt to write and read at the same time. One of two values will be returned: the value before the change or the value after the change.
Non-Atomic Properties
Non-atomic properties have no guarantee regarding the returned value. It can be the correct value, a partially written value, or even some garbage value.
It simply means that the final value will not be consistent. You won't know which thread will update the value last.
You can refer to the link for more clarification on this.
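Swift properties behave like nonatomic ones, so if you want the atomic-style "you get either the old or the new value, never garbage" guarantee described above, you have to add it yourself. A minimal sketch using NSLock (a generic helper I'm naming AtomicBox for illustration):

import Foundation

final class AtomicBox<Value> {
    private let lock = NSLock()
    private var value: Value

    init(_ value: Value) { self.value = value }

    func load() -> Value {
        lock.lock(); defer { lock.unlock() }
        return value
    }

    func store(_ newValue: Value) {
        lock.lock(); defer { lock.unlock() }
        value = newValue
    }
}

let counter = AtomicBox(0)
DispatchQueue.concurrentPerform(iterations: 1000) { i in
    counter.store(i) // last writer wins, but every read sees a whole value
}
print(counter.load())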

Crashlytics iOS - Crash at line 0 - Swift sources

I'm currently facing a problem with some Swift source files when a crash occurs. On Crashlytics I get weird info about the line and the reason for the crash: it tells me the source crashed at line 0 and gives me a SIGTRAP error. I read that this error occurs when a thread hits a breakpoint, but the problem is that this error occurs while I'm not debugging (the application is being tested via TestFlight).
Here is an example where Crashlytics tells me there's a SIGTRAP error at line 0:
// Method that crashes
private func extractSubDataFrom(writeBuffer: inout Data, chunkSize: Int) -> Data? {
    guard chunkSize > 0 else { // Prevent a division by 0
        return nil
    }

    // Get nb of chunks to write (then the number of bytes from it)
    let nbOfChunksToWrite: Int = Int(floor(Double(writeBuffer.count) / Double(chunkSize)))
    let dataCountToWrite = max(0, nbOfChunksToWrite * chunkSize)
    guard dataCountToWrite > 0 else {
        return nil // Not enough data to write for now
    }

    // Extract data
    let subData = writeBuffer.extractSubDataWith(range: 0..<dataCountToWrite)
    return subData
}
Another Swift file to explain what happens at the line writeBuffer.extractSubDataWith(range: 0..<dataCountToWrite):
public extension Data {

    //MARK: - Public

    public mutating func extractSubDataWith(range: Range<Int>) -> Data? {
        guard range.lowerBound >= 0 && range.upperBound <= self.count else {
            return nil
        }

        // Get a copy of the data and remove it from self
        let subData = self.subdata(in: range)
        self.removeSubrange(range)
        return subData
    }
}
Could you tell me what I'm doing wrong? Or what can cause this weird SIGTRAP error?
Thank you
Crashing with a line of zero is indeed weird, but common in Swift code.
The Swift compiler can do code generation on your behalf. This can happen quite a bit with generic functions, but may also happen for other reasons. When the compiler generates code, it also produces debug information for the code it generates. This debug information typically references the file that caused the code to be generated. But, the compiler tags it all with a line of 0 to distinguish it from code that was actually written by the developer.
These generic functions also do not have to be written by you - I've seen this happen with standard library functions too.
(Aside: I believe that the DWARF standard can, in fact, describe this situation more precisely. But, unfortunately Apple doesn't seem to use it in that way.)
Apple verified this line zero behavior via a Radar I filed about it a number of years ago. You can also poke around in your app's own debug data (via, for example dwarfdump) if you want to confirm.
One reason you might want to try to do this is if you really don't trust that Crashlytics is labelling the lines correctly. There's a lot of stuff between their UI and the raw crash data. It is conceivable something's gone wrong. The only way you can confirm this is to grab the crashing address + binary and do the lookup yourself. If dwarfdump tells you this happened at line zero, then that confirms this is just an artifact of compile-time code generation.
However, I would tend to believe there's nothing wrong with the Crashlytics UI. I just wanted to point it out as a possibility.
As for SIGTRAP, there's nothing weird about that at all. It is just an indication that the code being run has decided to terminate the process. This is different, for example, from a SIGBUS, where the OS does the terminating. It could be caused by Swift integer and/or range bounds checking. Your code does have some of that kind of thing in both places. And, since that kind of check is so performance-critical, it would be a prime candidate for inline code generation.
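For illustration, a hedged sketch (with hypothetical values, not the asker's real data) of how a failed range check in this kind of code terminates with a trap rather than a catchable error, and how a guard avoids it:

import Foundation

let writeBuffer = Data([1, 2, 3, 4])

// Slicing with a range that runs past the end does not throw; the runtime
// check fails and the process is stopped (reported as SIGTRAP-style termination).
// let bad = writeBuffer.subdata(in: 0..<10)   // would trap at runtime

// The defensive alternative mirrors the guard in extractSubDataWith(range:):
func subdataIfValid(_ data: Data, _ range: Range<Int>) -> Data? {
    guard range.lowerBound >= 0, range.upperBound <= data.count else { return nil }
    return data.subdata(in: range)
}
print(subdataIfValid(writeBuffer, 0..<10) as Any) // nil instead of a crash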
Update
It now seems that, at least in some situations, the compiler also now uses a file name of <compiler-generated>. I'm sure they did this to make this case clearer. So it could be that with more recent versions of Swift, you'll instead see <compiler-generated>:0. This might not help with tracking down a crash, but it will at least make things more obvious.

How to try/catch this type of error in iOS

I have some code
captureSession = AVCaptureSession()
captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) etc...
Of course, this correctly does not work on simulator. No problem.
If you do run it on a simulator, it correctly crashes right here
captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
like this
As a curiosity, how would you "catch" that crash?
If I try to "try" it in different ways,
try captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
I only get...
/Users/jpm/Desktop/development/-/classes/CameraPlane.swift:67:3: No calls to throwing functions occur within 'try' expression
How is it you wrap and catch that type of call?
Just BTW for anyone specifically dealing with this annoyance in the Camera,
func cameraBegin() {
    captureSession = AVCaptureSession()

    if AVCaptureDevice.devices().count == 0 {
        print("Running on simulator.")
        return
    }
...
try can only be used when calling a function that throws. These functions are explicitly marked with throws.
If you want to unwrap an optional safely (which I guess is what you want to achieve), you can use guard.
var number: Int?
guard let unwrappedNumber = number else {
    return
}
print("number: \(unwrappedNumber)")
Unfortunately, EXC_BAD_ACCESS means it's too late to catch anything because the program will be stopped.
You can only "catch" exceptions that are provided by the language/runtime you use. That would be error objects in Swift (or ObjC methods with an NSError** parameter), Objective-C exceptions that you could catch with @try/@catch blocks (but they disrupt memory management and shouldn't be used for runtime error handling), or C++ exceptions.
So far, Swift/ObjC don't have the concept of a NullReferenceException, for a number of reasons, but EXC_BAD_ACCESS also occurs when accessing already freed/invalid memory, which was a common programming error before ObjC ARC and Swift. So only guarding against accessing the "0" memory address wouldn't help in those situations. Any handling of this would have to deal with all three of the above-mentioned error/exception handling mechanisms and potentially corrupt memory management (unlike Java/.NET, there is no garbage collector that cleans up all the objects left over after performing an unexpected non-local return).
So the "bad" memory access is not protected by anything and will result in an immediate crash with no reasonable way to recover. The only thing you can do is perform runtime checks to see whether the operation you are about to do is valid, and use the "bang" (!) operator with caution.
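Following that advice, a hedged sketch of the runtime-check approach using the modern AVFoundation API names (not the asker's exact deprecated calls): AVCaptureDevice.default(for:) returns nil on the simulator, so guard against it rather than force-unwrapping.

import AVFoundation

func startCameraIfAvailable() -> AVCaptureSession? {
    guard let backCamera = AVCaptureDevice.default(for: .video) else {
        print("No capture device available (e.g. running on the simulator).")
        return nil
    }
    let session = AVCaptureSession()
    session.sessionPreset = .photo
    do {
        let input = try AVCaptureDeviceInput(device: backCamera) // this one really throws
        if session.canAddInput(input) { session.addInput(input) }
    } catch {
        print("Could not create camera input: \(error)")
        return nil
    }
    return session
}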

iPhone Memory management for a deallocated password (Malloc Scribble in production?, fill with zeroes deallocated memory?)

I'm doing some research on how the iPhone manages the heap and stack, but it's very difficult to find a good source of information about this. I'm trying to trace how a password is kept in memory, even after the NSString is deallocated.
As far as I can tell, the iPhone will not clear the memory contents (write zeros or garbage) once the reference count under ARC goes down to 0. So the string with the password will live in memory until that memory position is overwritten.
There's a debug option in Xcode, Malloc Scribble, to debug memory problems; it fills deallocated memory with 0x55. By enabling this option (and disabling Zombies) and taking a memory dump of the simulator (using gcore), I can check whether the content has been replaced in memory with 0x55.
I wonder if this is something that can be done with App Store builds (fill deallocated memory with garbage data), whether my assumption that the iPhone will not do that by default is correct, and whether there's any better option to handle sensitive data in memory and how it should be cleared after use (mutable data maybe, so I can write over that memory position?).
I don't think that there's something that can be done on the build settings level. You can, however, apply some sort of memory scrubbing yourself by zeroing the memory (use memset with the pointer to your string).
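A minimal sketch of that idea in Swift, assuming the secret is held in a mutable Data buffer you own (not in an immutable String/NSString, whose backing storage you cannot reliably scrub):

import Foundation

var password = Data("s3cret".utf8)

// ... use the password ...

// Overwrite the bytes in place before the buffer is deallocated, so the
// plaintext doesn't linger in freed heap memory. (memset_s is also available
// on Apple platforms if you're worried about the write being optimized away.)
password.withUnsafeMutableBytes { (raw: UnsafeMutableRawBufferPointer) in
    if let base = raw.baseAddress {
        memset(base, 0, raw.count)
    }
}
password.removeAll()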
As @Artal was saying, memset can be used to write to a memory position. I found the "iMAS Secure Memory" framework, which can be useful to handle this:
The "iMAS Secure Memory" framework provides a set of tools for securing, clearing, and validating memory regions and individual variables. It allows an object to have its data sections overwritten in memory either with an encrypted version or null bytes.
They have a method that should be useful for clearing a memory position:
// Return NO if wipe failed
extern inline BOOL wipe(NSObject* obj) {
    NSLog(@"Object pointer: %p", obj);
    if (handleType(obj, @"", &wipeWrapper) == YES) {
        if (getSize(obj) > 0) {
            NSLog(@"WIPE OBJ");
            memset(getStart(obj), 0, getSize(obj));
        }
        else
            NSLog(@"WIPE: Unsupported Object.");
    }
    return YES;
}
