Method parameters are nil when called using IMP in Release configuration - ios

Our Swift application needed some lower-level C/Objective-C code, so we added a Dynamic Library to make integration with the application easier.
The library has a single, shared instance of a controller, but the style of the callbacks doesn't work well for closures, so we went with a protocol. However since multiple classes need to use this controller it would need to have multiple delegates. So each class registers itself as a delegate and when a protocol method is called it iterates through each delegate, gets the IMP for the selector, and calls it.
On debug builds this worked fine; it was only when we used the Release configuration that we noticed that the parameters to these functions were nil in the implementation of the protocol methods, even though they were not nil when called.
This is how our protocol methods are called:
- (void)delegateCall:(SEL)sel withObject:(id)object {
    for (id delegate in self.delegates) {
        if ([delegate respondsToSelector:sel]) {
            IMP imp = [delegate methodForSelector:sel];
            void (*func)(__strong id, SEL, ...) = (void (*)(__strong id, SEL, ...))imp;
            func(delegate, sel, object);
        }
    }
}
Let's use the example protocol method: - (void) blah:(NSNumber * _Null_unspecified)aNumber;
If we call [self delegateCall:@selector(blah:) withObject:@32];, the object will be nil in the implementation of blah:
func blah(_ aNumber: NSNumber) {
    if aNumber == nil {
        print("The number is nil somehow?!?!?!") // <-- Release
    } else {
        print("The number is: \(aNumber.intValue)") // <-- Debug, prints 32
    }
}
If we call the method directly on the delegates (rather than going through the IMP), the issue does not happen:
for (id delegate in self.delegates) {
    [delegate blah:@32];
}

Having never tried casting an IMP instance to a function with variadic arguments, I can't say for sure how it would/should work (it would probably involve parsing a va_list, for instance), but since you know that you have one and only one parameter, I think you should be able to solve this particular problem by just eliminating your use of variadic arguments when you cast your IMP instance to a function pointer:
- (void)delegateCall:(SEL)sel withObject:(id)object {
    for (id delegate in self.delegates) {
        if ([delegate respondsToSelector:sel]) {
            IMP imp = [delegate methodForSelector:sel];
            void (*func)(__strong id, SEL, id) = (void (*)(__strong id, SEL, id))imp;
            func(delegate, sel, object);
        }
    }
}
Since you know that the argument is already an id, this should be a perfectly safe replacement.
As to why your original implementation works in a debug build but not in a release build, I can only guess. It might be related to the fact that release builds typically strip symbols during linking, and the runtime might be able to take advantage of those symbols, if present, to guess the correct argument ordering when invoking. Or perhaps the compiler uses the wrong calling convention in a release configuration when generating the call to a function declared with a fixed argument footprint but invoked through a variadic function pointer. I'd be interested in further information if anyone has a more definitive answer to the debug/release question.
See the discussion on calling conventions here for a possible alternative using reinterpret_cast, if in fact your problem is due to a calling convention mismatch.
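As an aside, an equivalent approach that avoids the IMP lookup entirely is to cast objc_msgSend itself to the fixed signature. This is only a sketch under the same assumption (exactly one id parameter, void return), not a drop-in from the original code:
#import <objc/message.h>

- (void)delegateCall:(SEL)sel withObject:(id)object {
    for (id delegate in self.delegates) {
        if ([delegate respondsToSelector:sel]) {
            // Cast the message-send trampoline to the method's real C signature before calling it.
            ((void (*)(id, SEL, id))objc_msgSend)(delegate, sel, object);
        }
    }
}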

Related

how to create an autorelease object

Does this method create an autoreleased object?
- (instancetype)autoreleasePerson {
    return [[Person alloc] init];
}
I created a Command Line Tool project to test this:
int main(int argc, const char * argv[]) {
    @autoreleasepool {
        {
            [Person autoreleasePerson];
        }
        NSLog(@"did out scope");
        NSLog(@"will out autoreleasepool");
    }
    NSLog(@"did out autoreleasepool");
    return 0;
}
And the output is:
2022-02-04 23:22:23.224298+0800 MyTest[8921:4007144] did out scope
2022-02-04 23:22:23.224771+0800 MyTest[8921:4007144] will out autoreleasepool
2022-02-04 23:22:23.224876+0800 MyTest[8921:4007144] -[Person dealloc]
2022-02-04 23:22:23.224948+0800 MyTest[8921:4007144] did out autoreleasepool
The Person instance is deallocated when the autorelease pool drains!
But when I use the same Person class in my iOS app project:
- (void)viewDidLoad {
    [super viewDidLoad];
    {
        [Person autoreleasePerson];
    }
    NSLog(@"out scope");
}
The output is:
2022-02-04 23:28:13.992969+0800 MyAppTest[9023:4011490] -[Person dealloc] <Person: 0x600001fe8ff0>
2022-02-04 23:28:13.993075+0800 MyAppTest[9023:4011490] out scope
The Person instance is released as soon as it goes out of scope!
Why is this so?
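(For reference, the Person class isn't shown in the question; a minimal version consistent with the log output might look like this, with the dealloc logging assumed:)
@interface Person : NSObject
+ (instancetype)autoreleasePerson;
@end

@implementation Person
+ (instancetype)autoreleasePerson {
    return [[Person alloc] init];
}
- (void)dealloc {
    NSLog(@"-[Person dealloc] %@", self);
}
@end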
It looks like on macOS the default behaviour is to autorelease return values, except where the method name puts it in one of ARC's retained-return-value families (alloc, new, copy, mutableCopy, init):
+ (Person *)createPerson {
    return [Person new]; // autorelease & return
}
+ (Person *)newPerson {
    return [Person new]; // direct return
}
To control this behaviour apply a compiler attribute:
+ (Person *)createPerson __attribute__((ns_returns_retained)) {
    return [Person new]; // direct return
}
+ (Person *)newPerson __attribute__((ns_returns_not_retained)) {
    return [Person new]; // autorelease & return
}
To check whether a call to objc_autoreleaseReturnValue was added by the compiler, enable Debug -> Debug Workflow -> Always Show Disassembly, and put a breakpoint on the return line inside these methods. A call to objc_autoreleaseReturnValue should then be visible in the disassembly.
See ARC reference - Retained return values
Both of the results are valid. You should never assume that there is an autorelease in ARC. See the section "Unretained return values" in the ARC specification:
A method or function which returns a retainable object type but does
not return a retained value must ensure that the object is still valid
across the return boundary.
When returning from such a function or method, ARC retains the value
at the point of evaluation of the return statement, then leaves all
local scopes, and then balances out the retain while ensuring that the
value lives across the call boundary. In the worst case, this may
involve an autorelease, but callers must not assume that the value is
actually in the autorelease pool.
So maybe it's autoreleased, and maybe not (i.e. maybe ARC optimizes it out).
Here, ARC will call objc_autoreleaseReturnValue() when returning from autoreleasePerson, because +alloc returns a retained reference, but autoreleasePerson returns a non-retained reference. What objc_autoreleaseReturnValue() does is check to see if the result of the return will be passed to objc_retainAutoreleasedReturnValue() in the calling function frame. If so, it can skip both the autorelease in the called function, and the retain in the calling function (since they "cancel out"), and hand off ownership directly into a retained reference in the calling function.
objc_retainAutoreleasedReturnValue() is called when ARC will retain the result of a function call. Now, I don't know why in this case calling [Person autoreleasePerson]; will involve a retain of the result, since the result is unused. Perhaps the compiler is treating it as Person temp = [Person autoreleasePerson];, and thus retains and then releases it. This may seem unnecessary, but it is valid for ARC to do it this way. And if ARC does happen to treat it this way internally, then the optimization described above can skip both the autorelease and retain, and it will be simply released in the calling function. Maybe it's doing this in one of your cases and not the other. Who knows why? But my point is that both are valid.
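Roughly, the calls the compiler inserts look like the following sketch (hand-written pseudo-Objective-C to illustrate the hand-off, not actual compiler output):
// Inside +autoreleasePerson, which returns a non-retained reference:
+ (instancetype)autoreleasePerson {
    Person *p = [[Person alloc] init];       // +1, because alloc/init is a retained-return family
    return objc_autoreleaseReturnValue(p);   // may hand ownership straight to the caller
}

// At a call site that keeps the result:
Person *tmp = objc_retainAutoreleasedReturnValue([Person autoreleasePerson]);
// ... use tmp ...
objc_release(tmp);
// When the hand-off succeeds, the autorelease and the matching retain are both elided.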
See this article for a more detailed explanation.

_Nullable method in Objective-C

If I have a method:
- (NSString*)convertName;
And then I do something like:
- (NSString *)convertName {
    if (![myName isEqualToString:@"someString"]) {
        return NULL;
    }
    .......
}
Why is the compiler letting me do this if I didn't specify _Nullable?
In Objective-C, if your methods are neither:
declared within an NS_ASSUME_NONNULL_BEGIN/NS_ASSUME_NONNULL_END block, nor
declared explicitly with _Nullable/_Nonnull parameter/return types,
then the compiler will not enforce non-optionality. In that case optionality is implied (the opposite of Swift, where optionality must be stated explicitly), but in an unsafe way.
You can see this in how your method above is imported into Swift: it would be shown as being declared like so:
func convertName() -> String!
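For comparison, here is a sketch (class and method names assumed, not from the question) of how an audited header changes the Swift import:
NS_ASSUME_NONNULL_BEGIN
@interface NameConverter : NSObject
- (NSString *)convertName;               // imported as: func convertName() -> String
- (nullable NSString *)convertNameOrNil; // imported as: func convertNameOrNil() -> String?
@end
NS_ASSUME_NONNULL_END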

Why do I always get NO when using performSelector:withObject:@YES in iOS, which is different from macOS?

I have some iOS code as follows:
//iOS
- (void)viewDidLoad {
    [super viewDidLoad];
    [self performSelector:@selector(handler:) withObject:@YES];
}

- (void)handler:(BOOL)arg { // always NO
    if (arg) {
        NSLog(@"Uh-hah!"); // won't log
    }
}
I know I shouldn't write it like this. It's wrong, since @YES is an object; I should receive an id as the argument and unbox it in handler:, like:
- (void)handler:(id)arg {
    if ([arg boolValue]) { ... }
}
Even though the code is wrong, for any other object of whatever class in place of @YES, I always get arg == NO. The problem is, why on earth is the BOOL arg always NO?
I did some research and here is what I've learned:
in iOS, BOOL is actually _Bool (i.e. the C99 bool) — see the paraphrased typedef below
in macOS, BOOL is actually signed char
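(Paraphrasing <objc/objc.h>; the exact preprocessor conditions vary by platform and architecture:)
#if OBJC_BOOL_IS_BOOL
    typedef bool BOOL;          // e.g. 64-bit iOS
#else
    typedef signed char BOOL;   // e.g. Intel macOS and 32-bit iOS
#endif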
If I create identical macOS code, I get a different result:
//macOS
- (void)viewDidLoad {
    [super viewDidLoad];
    [self performSelector:@selector(handler:) withObject:@YES]; // @YES's address: say 0x00007fffa38533e8
}

- (void)handler:(BOOL)arg { // \xe8 (= -24)
    if (arg) {
        NSLog(@"Uh-hah!"); // logs "Uh-hah!"
    }
}
It makes sense, since BOOL is just signed char there: the argument is taken from the lowest byte of the @YES object's address.
However, this explanation doesn't apply to the iOS code. I thought any non-zero number would be converted to true (and the address itself must be non-zero). But why do I get NO?
-[NSObject performSelector:withObject:] is only supposed to be used with a method that takes exactly one object-pointer parameter, and returns an object-pointer. Your method takes a BOOL parameter, not an object-pointer parameter, so it cannot be used with -[NSObject performSelector:withObject:].
If you are always going to send the message handler: and you know the method has a BOOL parameter, you should just call it directly:
[self handler:YES];
If the name of the method will be determined dynamically at runtime, but you know the signature of the method will always be the same (in this case exactly one parameter of type BOOL, returning nothing), you can call it with objc_msgSend(). You must cast objc_msgSend to the appropriate function type for the underlying implementing function of the method before calling it (remember, the implementing function for Objective-C methods have the first two parameters being self and _cmd, followed by the declared parameters). This is because objc_msgSend is a trampoline that calls into the appropriate implementing function with all the registers and stack used for storing the arguments intact, so you must call it with the calling convention for the implementing function. In your case, you would do:
#import <objc/message.h>

SEL selector = @selector(handler:); // assume this is computed dynamically at runtime
((void (*)(id, SEL, BOOL))objc_msgSend)(self, selector, YES);
By the way, if you look at the source code for -[NSObject performSelector:withObject:], you will see that they do the same thing -- they know that the signature of the method must be one parameter of type id and a return type of id, so they cast objc_msgSend to id (*)(id, SEL, id) and then call it.
In the rare case where the signature of the method will also vary dynamically and is not known at compile-time, then you will have to use NSInvocation.
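For that rare case, a minimal NSInvocation sketch for the handler: method above might look like this (argument indices 0 and 1 are reserved for self and _cmd):
NSMethodSignature *sig = [self methodSignatureForSelector:@selector(handler:)];
NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig];
invocation.target = self;
invocation.selector = @selector(handler:);
BOOL flag = YES;
[invocation setArgument:&flag atIndex:2]; // index 2 is the first declared parameter
[invocation invoke];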
Let's consider what happened in your case when you used -[NSObject performSelector:withObject:] with a method of the wrong signature. Inside, they call objc_msgSend(), which is equivalent to calling the underlying implementing function, with a function pointer of the type id (*)(id, SEL, id). However, the implementing function of your method actually has type void (id, SEL, BOOL). So you are calling a function using a function pointer of a different type. According to the C standard (C99 standard, section 6.5.2.2, paragraph 9):
If the function is defined with a type that is not compatible with the
type (of the expression) pointed to by the expression that denotes the
called function, the behavior is undefined.
So basically what you are seeing is undefined behavior. Undefined behavior means anything can happen. It could return one thing on one system and another thing on another, as you're seeing, or it could crash the whole program. You can't rely on any specific behavior.
The issue is with the handler's declaration. The parameter type of the handler in this case should be id (Objective-C) or Any (Swift).
- (void)handler:(id)arg {
    if (arg) { // would be the same as the object passed
        NSLog(@"Uh-hah!");
    }
}
The answer is in the question itself: in iOS, BOOL is declared as typedef bool BOOL, which you also mentioned. So on iOS it takes the default value, false, and your log will never print.

Objective-C and the self keyword [duplicate]

What does self mean in Objective-C? When and where should I use it?
Is it similar to this in Java?
self refers to the instance of the current class that you are working in, and yes, it is analogous to this in Java.
You use it if you want to perform an operation on the current instance of that class. For example, if you are writing an instance method on a class, and you want to call a method on that same instance to do something or retrieve some data, you would use self:
int value = [self returnSomeInteger];
This is also often used for accessor methods on an instance (i.e. setters and getters), especially with setter methods if they implement extra functionality beyond just setting the value of an instance variable, so that you do not have to repeat that code over and over when you want to set the value of that variable. For example:
[self setSomeVariable:newValue];
One of the most common uses of self is during initialization of a class. Sample code might look like:
- (id)init
{
    self = [super init];
    if (self != nil) {
        // Do stuff, such as initializing instance variables
    }
    return self;
}
This invokes the superclass's initializer (via super), which is how chained initialization occurs up the class hierarchy. The returned value is then assigned back to self, because the superclass's initializer could return a different object than the one it was called on.
self is an implied argument to all Obj-C methods that contains a pointer to the current object in instance methods, and a pointer to the current class in class methods.
Another implied argument is _cmd, which is the selector that was sent to the method.
Please be aware that you only get self and _cmd in Objective-C methods. If you declare a C (or C++) function, for instance as a callback from some C library, you won't get self or _cmd.
For more information, see the Using Hidden Arguments section of the Objective-C Runtime Programming guide.
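As an illustration (a sketch with assumed names): the compiler turns a method into a plain C function whose first two parameters are exactly those hidden arguments.
// A method such as:   - (void)doSomethingWith:(id)value;
// is backed by a C function shaped roughly like this:
static void doSomethingWith_imp(id self, SEL _cmd, id value) {
    // self and _cmd arrive here like any other parameters
    NSLog(@"%@ received %@ with %@", self, NSStringFromSelector(_cmd), value);
}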
Yes, it's exactly the same as "this" in Java - it points to the "current" object.
Two important notes:
The class itself, e.g. UIView (I'm NOT talking about a UIView object) is itself an object, and there is a self associated with it. So for example, you can reference self in a class method like this:
// This works
+ (void)showYourself { [self performSelector:@selector(makeTheMostOfYourself)]; }
// Class method!
+ (void)makeTheMostOfYourself { }
Note that the compiler does NOT raise any warnings or errors, even if the self you mean to reference is an object and not a class. It is VERY easy to cause crashes this way, for example:
// This will crash!
+ (void)showYourself { [self performSelector:@selector(makeTheMostOfYourself)]; }
// Object method!
- (void)makeTheMostOfYourself { }

// This will crash too!
- (void)showYourself2 { [self performSelector:@selector(makeTheMostOfYourself2)]; }
// Class method!
+ (void)makeTheMostOfYourself2 { }
Sadly, this makes class methods a bit harder to use, which is unfortunate because they are a valuable tool for encapsulation through information hiding. Just be careful.
Wow, so many half-correct answers and misleading hints. That prompts me to answer the question even though it has had an accepted answer for years:
First of all: it is really hard to compare a concept of messaging/calling in an early-binding, statically typed language such as Java with a late-binding, dynamically typed language such as Objective-C. At some point the comparison will break down. I would say: no, this is not similar, since the typing and dispatching concepts of the two languages are fundamentally different, so nothing in one can really be similar to the other. However, …
Then we should distinguish between the "two sides" of self.
A. Using self
When you use it in a message, it is simply an object reference as any other:
[self doSomething];
[anotherObject doSomething];
Technically both lines work identically (apart from having a different receiver, of course). In particular, the first line does not necessarily lead to the execution of a method inside the class of self, because self does not necessarily refer to "that class". As with every message in Objective-C (the single exception being messages to super), this can lead to the execution of a method in a subclass:
@interface A : NSObject
- (void)doSomething;
- (void)doAnotherThing;
@end

@implementation A
- (void)doSomething
{
    [self doAnotherThing];
}

- (void)doAnotherThing
{
    NSLog( @"A" );
}
@end

@interface B : A
- (void)doSomething; // Not necessary, simply as a marker
@end

@implementation B
- (void)doAnotherThing
{
    NSLog( @"B" );
}
@end
In code like this
B *b = [B new];
[b doSomething];
the line
    [self doAnotherThing];
in class A will lead to the execution of -doAnotherThing (B), because messages to self are late-bound like every other message. The result on the console will be "B", not "A". When using self as a receiver you should not think of any special rule; there is none at all.
(And the above example is a very good example of using self in class methods, because the same situation can occur with class methods. Using the class name itself breaks polymorphism, which is one of the worst ideas in OOP. DO use self in class methods, too.)
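A concrete illustration (the method name is assumed, not from the original answer): if A declared a factory class method written against self, it would stay polymorphic in subclasses:
+ (instancetype)make
{
    return [[self alloc] init]; // [A make] returns an A, [B make] returns a B
}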
B. Getting self
What does self point to? It points to the object to which the message was sent that caused the execution of the current method.
Having …
…[someObject doSomething]… // someObject is a reference to an instance object
… as a message, a method is called; in the simplest case …
- (void)doSomething
{ … }
In such a case, self can point to an instance of the class the method belongs to, or to an instance of a subclass of that class. You don't know which. (And this information is preserved when you use self to send a message, as explained above.)
If the message is sent to a class object, self points to the class object that was the receiver of the message. This is completely analogous. Therefore it is possible that self points to the class object of a subclass:
@interface A : NSObject
+ (void)doSomething;
+ (void)doAnotherThing;
@end

@implementation A
+ (void)doSomething
{
    [self doAnotherThing];
}

+ (void)doAnotherThing
{
    NSLog( @"A" );
}
@end

@interface B : A
+ (void)doSomething; // Not necessary, simply as a marker
@end

@implementation B
+ (void)doAnotherThing
{
    NSLog( @"B" );
}
@end
Given these classes, in
…[B doSomething]…
self inside +doSomething (implemented in A) points to the class object of B. Therefore [self doAnotherThing] of B(!) is executed. This is clearly different from
+ (void)doSomething
{
    [A doAnotherThing];
}
The latter version does real harm to the principles of OOP.
As a side note, it is possible that self inside a class method of a root class points to an instance object of the root class or of any subclass. You have to keep this in mind when writing categories on NSObject.
self is an object pointer to the current instance's dispatch table. It is an implicit first argument to every method of an object, and is assigned when that method is called.
In methods like init, you need to be careful that when you call the superclass's init you reassign self to the return value, as the superclass's init may redefine what self points to.
super is similar to self, except that it points to the superclass's dispatch table.

Writing getter and setter for BOOL variable

Obviously, with Objective-C, there's usually no reason to write getters and setters (thanks to the useful Mr. @synthesize).
So now, needing to do just this, I've come across the problem that I don't know how to write them. :p
I'm sure I'm probably not going about solving my problem the right way - it would be much easier to just subclass my object and such - but I'm trying to write category code to add properties because (in the beginning) it was quicker, and because I wanted to learn how to use category code in my app.
I've got this:
- (BOOL)isMethodStep {
    return self.isMethodStep;
}

- (void)setIsMethodStep:(BOOL)theBoolean {
    if (self.isMethodStep != theBoolean) {
        self.isMethodStep = theBoolean;
    }
}
and I've tried it without the if check in the setter, but neither version seems to work. Stepping through with breakpoints shows that for some reason it gets stuck in an endless loop in the getter method.
Is this code right or am I doing something wrong?
Thanks
Tom
In
- (BOOL)isMethodStep {
    return self.isMethodStep;
}
return self.isMethodStep; calls the same isMethodStep method again, causing an infinite loop. The same goes for the setter.
Just use your ivars directly in your accessor method implementations:
- (BOOL)isMethodStep {
    return isMethodStep;
}

- (void)setIsMethodStep:(BOOL)theBoolean {
    if (isMethodStep != theBoolean) {
        isMethodStep = theBoolean;
    }
}
You don't want to use the self. property syntax within the setter/getter, because that invokes the setter/getter again, instead of directly assigning to the variable.
You need to just say:
- (BOOL)isMethodStep {
    return isMethodStep;
}

- (void)setIsMethodStep:(BOOL)theBoolean {
    isMethodStep = theBoolean;
}
(assuming "isMethodStep" is the name of your variable). I would omit the test in the setter method too...
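For comparison, a declared property with this name would synthesize equivalent accessors backed by an ivar (a sketch; note this only works in the class itself, since a category cannot synthesize the backing ivar):
@property (nonatomic, assign) BOOL isMethodStep; // synthesizes -isMethodStep and -setIsMethodStep: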
