NSDictionary<FBGraphUser> *user syntax explanation - ios

In the Facebook iOS SDK, requests are returned with the following handler:
^(FBRequestConnection *connection,
NSDictionary<FBGraphUser> *user,
NSError *error) { }
The user variable can then be accessed with calls like these...
self.userNameLabel.text = user.name;
self.userProfileImage.profileID = user.id;
This syntax is somewhat similar to the id <protocolDelegate> object syntax that is a common property declaration, except that here the NSDictionary is explicitly the id object, and that dictionary conforms to the protocol? But where does the dot syntax come from, and how does one state that an arbitrary NSFoundation object corresponds to a protocol without subclassing the object itself and making it conform?
I did some additional research about dot notation and NSDictionary, and it appears that it is not possible to use dot notation on a dictionary without adding a category to NSDictionary. However, I did not see any reference to the <> syntax in the Apple documentation to indicate that this particular instance of NSDictionary conforms to that protocol.
And the Facebook documentation is a little sparse on how this wrapping works:
The FBGraphUser protocol represents the most commonly used properties
of a Facebook user object. It may be used to access an NSDictionary
object that has been wrapped with an FBGraphObject facade.
If one follows this lead to the FBGraphObject documentation, then there are methods that return dictionaries that conform to this "facade"... but no further explanation of how one goes about wrapping a dictionary.
So I guess my questions are a few:
What would the underlying code look like to make this sort of syntax work?
Why does it exist?
Why would Facebook implement it this way as opposed to just making an object that they can convert the data into?
Any explanation or insight would be very much appreciated!

Basically, NSDictionary<FBGraphUser> *user implies an object that inherits from NSDictionary, adding functionality (specifically, typed access) declared by the FBGraphUser protocol.
The reasons behind this approach are described in quite a bit of detail in the FBGraphObject documentation (the FBGraphUser protocol extends the FBGraphObject protocol). What might be confusing you is that FBGraphObject is a protocol (described here) and a class (described here), which inherits from NSMutableDictionary.
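To make the declaration concrete, here is a minimal sketch of what the relevant declarations roughly look like; the property list is abridged and the exact names in the SDK headers may differ:
@protocol FBGraphObject <NSObject>
// dictionary-style requirements (objectForKey:, count, ...) live here in the real header
@end

@protocol FBGraphUser <FBGraphObject>
@property (retain, nonatomic) NSString *name;       // accessed as user.name
@property (retain, nonatomic) NSString *first_name; // accessed as user.first_name
@end

// The handler parameter NSDictionary<FBGraphUser> *user then reads as "an NSDictionary
// instance that also conforms to FBGraphUser", so both of these work on the same object:
//   NSString *viaKey  = [user objectForKey:@"name"]; // plain dictionary access
//   NSString *viaProp = user.name;                   // typed access via the protocol's property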
In terms of inner implementation, it's some pretty advanced Objective-C dynamic magic, which you probably don't want to worry about. All you need to know is you can treat the object as a dictionary if you wish, or use the additional methods in the protocol. If you really want to know the details, you can look at the source code for FBGraphObject, in particular, these methods:
#pragma mark -
#pragma mark NSObject overrides

// make the respondsToSelector method do the right thing for the selectors we handle
- (BOOL)respondsToSelector:(SEL)sel
{
    return [super respondsToSelector:sel] ||
        ([FBGraphObject inferredImplTypeForSelector:sel] != SelectorInferredImplTypeNone);
}

- (BOOL)conformsToProtocol:(Protocol *)protocol {
    return [super conformsToProtocol:protocol] ||
        ([FBGraphObject isProtocolImplementationInferable:protocol
                                checkFBGraphObjectAdoption:YES]);
}

// returns the signature for the method that we will actually invoke
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
    SEL alternateSelector = sel;
    // if we should forward, to where?
    switch ([FBGraphObject inferredImplTypeForSelector:sel]) {
        case SelectorInferredImplTypeGet:
            alternateSelector = @selector(objectForKey:);
            break;
        case SelectorInferredImplTypeSet:
            alternateSelector = @selector(setObject:forKey:);
            break;
        case SelectorInferredImplTypeNone:
        default:
            break;
    }
    return [super methodSignatureForSelector:alternateSelector];
}

// forwards otherwise missing selectors that match the FBGraphObject convention
- (void)forwardInvocation:(NSInvocation *)invocation {
    // if we should forward, to where?
    switch ([FBGraphObject inferredImplTypeForSelector:[invocation selector]]) {
        case SelectorInferredImplTypeGet: {
            // property getter impl uses the selector name as an argument...
            NSString *propertyName = NSStringFromSelector([invocation selector]);
            [invocation setArgument:&propertyName atIndex:2];
            //... to the replacement method objectForKey:
            invocation.selector = @selector(objectForKey:);
            [invocation invokeWithTarget:self];
            break;
        }
        case SelectorInferredImplTypeSet: {
            // property setter impl uses the selector name as an argument...
            NSMutableString *propertyName = [NSMutableString stringWithString:NSStringFromSelector([invocation selector])];
            // remove 'set' and trailing ':', and lowercase the new first character
            [propertyName deleteCharactersInRange:NSMakeRange(0, 3)];                       // "set"
            [propertyName deleteCharactersInRange:NSMakeRange(propertyName.length - 1, 1)]; // ":"
            NSString *firstChar = [[propertyName substringWithRange:NSMakeRange(0, 1)] lowercaseString];
            [propertyName replaceCharactersInRange:NSMakeRange(0, 1) withString:firstChar];
            // the object argument is already in the right place (2), but we need to set the key argument
            [invocation setArgument:&propertyName atIndex:3];
            // and replace the missing method with setObject:forKey:
            invocation.selector = @selector(setObject:forKey:);
            [invocation invokeWithTarget:self];
            break;
        }
        case SelectorInferredImplTypeNone:
        default:
            [super forwardInvocation:invocation];
            return;
    }
}
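To tie that source back to the original question, here is a rough trace (an illustration, not SDK code) of what happens when the dot syntax is used on the wrapped dictionary:
// user.name                      -- compiles because FBGraphUser declares a 'name' property
//   -> no compiled -name method exists on the wrapper instance
//   -> respondsToSelector:/methodSignatureForSelector: report it as handled ("Get" inferred)
//   -> forwardInvocation: rewrites the call as [user objectForKey:@"name"]
//
// user.name = @"Alice";
//   -> -setName: is inferred as a "Set" selector
//   -> forwardInvocation: rewrites the call as [user setObject:@"Alice" forKey:@"name"]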

This syntax is somewhat similar to the id <protocolDelegate> object syntax
"Somewhat similar"? How'bout "identical"?
and that dictionary conforms to the protocol
Nah, the declaration says that you have to pass in an object of which the class is NSDictionary, which, at the same time, conforms to the FBGraphUser protocol.
But where does the dot syntax come from
I don't understand this. It comes from the programmer who wrote the piece of code in question. And it is possible because the FBGraphUser protocol declares some properties, which can then be accessed via dot notation.
and how does one state that an arbitrary NSFoundation object corresponds to a protocol without subclassing the object itself and making it conform?
It's not called "NSFoundation", just Foundation. And it's not the object that doesn't "correspond" (because it rather "conforms") to the protocol, but its class. And you just showed the syntax for that yourself.
And how is it implemented? Simple: a category.
#import <Foundation/Foundation.h>

@protocol Foo
@property (readonly, assign) int answer;
@end

@interface NSDictionary (MyCategory) <Foo>
@end

@implementation NSDictionary (MyCategory)

- (int)answer
{
    return 42;
}

@end

int main()
{
    NSDictionary *d = [NSDictionary dictionary];
    NSLog(@"%d", d.answer);
    return 0;
}
This is an SSCCE, i.e. it compiles and runs as-is; try it!
What would the underlying code look like to make this sort of syntax work?
Answered above.
Why does it exist?
Because the language is defined like so.
Why would Facebook implement it this way as opposed to just making an object that they can convert the data into?
I don't know, ask the Facebook guys.

Related

Objective-C method signatures: Parameter types can differ between declaration and implementation?

I can declare a method in the @interface having parameter type NSString*:
- (id) initWithString:(NSString*)str;
while in the implementation it is NSNumber*:
- (id) initWithString:(NSNumber*)str
For a full example, see the code below. When calling [Work test] the output is a.x = Hi, so the passed-in NSString* went through and one can see that the "correct" initWithString method was called.
Why is this code accepted by the compiler?
Can I make the compiler complain when parameter types differ?
Citing from Apple's documentation Defining Classes:
The only requirement is that the signature matches, which means you must keep the name of the method as well as the parameter and return types exactly the same.
My test code:
@interface ClassA : NSObject
@property (strong, nonatomic) NSNumber *x;
- (id)initWithString:(NSString *)str;
- (void)feed:(NSString *)str;
@end

@implementation ClassA

- (id)initWithString:(NSNumber *)str
{
    self = [super init];
    if (self) {
        self.x = str;
    }
    return self;
}

- (void)feed:(NSNumber *)str
{
    self.x = str;
}

@end

@implementation Work

+ (void)test
{
    ClassA *a = [[ClassA alloc] initWithString:@"Hi"];
    NSLog(@"a.x = %@", a.x);
}

@end
I added the feed method to see whether this behavior is "special" to init-like methods, but the compiler doesn't complain there either.
(Ran this on Yosemite / Xcode 6.4 / iOS8.4 Simulator.)
PS: If I didn't use the right terms, please correct me :-)
Can I make the compiler complain when parameter types differ?
There's a warning for this which you can activate by including the following line in the header:
#pragma clang diagnostic error "-Wmethod-signatures"
You can also put -Wmethod-signatures into the project's "Other Warning Flags" Xcode build setting to activate this for the whole project.
I don't really understand why Apple is so hesitant to activate helpful warnings like this by default.
My standard pattern on virtually every project is to put -Weverything in "Other Warning Flags". This activates all warnings clang has to offer.
Since there are some warnings that are a little too pedantic or don't serve my coding style, I individually deactivate unwanted warning types as they pop up.
I'm surprised by the quote you found stating that param and return types matter to the uniqueness of the method signature. Re-reading, I think you found a bug in the doc.
Defining a parameter type in the interface will generate a warning for callers that do not pass that type (or cast the parameter to that type), no matter the implementation. Changing the parameter type in the implementation is exactly like casting the parameter within the method. Nothing wrong with that, not even a cause for warning, so long as the different type shares methods (polymorphic or inherited) with the declared type.
In other words, restating by example...
The following will cause a compiler error, proving that distinct param types offer no distinction to the compiler (the same is true for return types)...
// .h
- (void)foo:(NSString *)str;

// .m
- (void)foo:(NSString *)str {
    NSLog(@"called foo %@", [str class]);
}
- (void)foo:(NSNumber *)str { // <----- duplicate declaration error
}
The following will cause no compiler warnings, errors or runtime errors...
// .h
- (void)foo:(NSString *)str;

// .m
- (void)foo:(NSNumber *)str {
    // everything implements 'class', so no problem here
    NSLog(@"called foo %@", [str class]);
}
The following is exactly like the previous example in every respect...
// .h
- (void)foo:(NSString *)str;

// .m
- (void)foo:(NSString *)str {
    NSNumber *number = (NSNumber *)str;
    NSLog(@"called foo %@", [number class]);
}
The following will cause no warnings, but will generate a runtime error because we are abusing the cast by invoking a method that the passed type doesn't implement (presuming the caller calls with a string as the interface indicates)...
// .h
- (void)foo:(NSString *)str;

// .m
- (void)foo:(NSNumber *)str {
    NSLog(@"does str equal 2? %d", [str isEqualToNumber:@2]); // <--- crash!
}
All of the foregoing matches intuition and behavior in practice, just not that passage in the doc. Interesting find!
In Objective-C a method is identified by a string (known as a selector) of the form doSomethingWithParam:anotherParam:, or in your case initWithString:. Note that there are no parameter types in these strings. One side effect of identifying methods like this is that Objective-C, unlike Java or C++, doesn't allow overloading a method by just changing the parameter types. Another side effect is the behavior you observed.
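A quick way to see that parameter types are not part of the selector (a minimal sketch, assuming only Foundation):
SEL s = @selector(initWithString:);
NSLog(@"%@", NSStringFromSelector(s)); // prints "initWithString:" -- no NSString*/NSNumber* in sight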
EDIT: Additionally, it appears that the compiler does not look at the implementation at all when checking method calls, just the interface. Proof: declare a method in a header, don't specify any implementation for that method, and call this method from your code. This will compile just fine, but of course you'll get an "unrecognized selector" exception when you run this code.
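A sketch of that proof, using a hypothetical class name:
@interface Phantom : NSObject
- (void)vanish; // declared in the interface...
@end

@implementation Phantom
// ...but deliberately never implemented (the compiler only warns about the incomplete implementation)
@end

// Elsewhere:
//   [[Phantom new] vanish]; // compiles cleanly, throws "unrecognized selector" at runtime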
It'd be great if someone could provide a nice explanation of the default compiler behavior.

No visible @interface for primitive accessor

I'm trying to implement a getter method for a simple transient property. The transient property is a fullName property. Typical example of fullName = firstName + lastName.
I'm developing an iOS app (just in case something related works differently on OS X)
Following the 'Mastering Core Data' WWDC 2010 Keynote I've created a category for my Person NSManagedObject subclass. In that category I've added the following method:
- (NSString *)fullName {
    [self willAccessValueForKey:@"fullName"];
    NSString *fullName = [self primitiveFullName];
    [self didAccessValueForKey:@"fullName"];
    if (fullName == nil) {
        fullName = [NSString stringWithFormat:@"%@ %@", self.firstName, self.lastName];
        [self setPrimitiveFullName:fullName];
    }
    return fullName;
}
Person class has been created automatically by Xcode and has the fullName property and its implementation with @dynamic.
When I try to compile the project I get an error for this category saying "No visible @interface for 'Person' declares selector 'primitiveFullName'".
Why am I getting this error when the Apple documentation says "For example, given an entity with an attribute firstName, Core Data automatically generates firstName, setFirstName:, primitiveFirstName, and setPrimitiveFirstName:."?
The error is a compile error, not a warning.
Do I need to do something special to have the primitive accessors generated?
primitiveFullName and setPrimitiveFullName: definitely will be generated automatically at runtime. But at compile time, the compiler cannot find these methods, so to suppress the compile error when you invoke them, you should declare their prototypes.
Back in 2010, the Objective-C compiler only issued warnings when you invoked such undeclared methods. Since ARC was introduced, the compiler disallows invoking undeclared methods.
Like below:
@interface Person (PrimitiveAccessors)
- (NSString *)primitiveFullName;
- (void)setPrimitiveFullName:(NSString *)newName;
@end

@implementation Person (TransientProperties)

- (NSString *)fullName {
    [self willAccessValueForKey:@"fullName"];
    NSString *fullName = [self primitiveFullName];
    [self didAccessValueForKey:@"fullName"];
    if (fullName == nil) {
        fullName = [NSString stringWithFormat:@"%@ %@", self.firstName, self.lastName];
        [self setPrimitiveFullName:fullName];
    }
    return fullName;
}

@end
First, one comment on your design, not an answer to the Q itself, but strongly related:
Your implementation is problematic: if you have a computed property, you should compute it, not store it. What happens if firstName or lastName changes? Do you want to override the setters for those properties, too, just to nil out fullName? If it is a computed property, compute it.
- (NSString *)fullName
{
    return [NSString stringWithFormat:@"%@ %@", [self valueForKey:@"firstName"], [self valueForKey:@"lastName"]];
}
Done.
This should solve your problem, because it is a different design. However, if you want to keep your old approach, you can use -primitiveValueForKey: and -setPrimitiveValue:forKey:.
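For completeness, here is a minimal sketch of that second option; -primitiveValueForKey: and -setPrimitiveValue:forKey: are declared on NSManagedObject itself, so no extra category declaration is needed:
- (NSString *)fullName {
    [self willAccessValueForKey:@"fullName"];
    NSString *fullName = [self primitiveValueForKey:@"fullName"];
    [self didAccessValueForKey:@"fullName"];
    if (fullName == nil) {
        fullName = [NSString stringWithFormat:@"%@ %@", self.firstName, self.lastName];
        [self setPrimitiveValue:fullName forKey:@"fullName"];
    }
    return fullName;
}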

Delegate dynamic replacement with blocks [duplicate]

I love blocks and it makes me sad when I can't use them. In particular, this happens mostly every time I use delegates (e.g.: with UIKit classes, mostly pre-block functionality).
So I wonder... Is it possible, using the crazy power of ObjC, to do something like this?
// id _delegate; // Most likely declared as a class variable or it will be released
_delegate = [DelegateFactory delegateOfProtocol:@protocol(SomeProtocol)];
[_delegate performBlock:^{
    // Do something
} onSelector:@selector(someProtocolMethod)];
theObject.delegate = (id<SomeProtocol>)_delegate;
// Profit!
performBlock:onSelector: would execute the given block when the given selector is called on the dynamic delegate object.
If YES, how? And is there a reason why we shouldn't be doing this as much as possible?
Edit
Looks like it IS possible. Current answers focus on the first part of the question, which is how. But it'd be nice to have some discussion on the "should we do it" part.
Okay, I finally got around to putting WoolDelegate up on GitHub. Now it should only take me another month to write a proper README (although I guess this is a good start).
The delegate class itself is pretty straightforward. It simply maintains a dictionary mapping SELs to Blocks. When an instance receives a message to which it doesn't respond, it ends up in forwardInvocation: and looks in the dictionary for the selector:
- (void)forwardInvocation:(NSInvocation *)anInvocation {
SEL sel = [anInvocation selector];
GenericBlock handler = [self handlerForSelector:sel];
If it's found, the Block's invocation function pointer is pulled out and passed along to the juicy bits:
IMP handlerIMP = BlockIMP(handler);
[anInvocation Wool_invokeUsingIMP:handlerIMP];
}
(The BlockIMP() function, along with other Block-probing code, is thanks to Mike Ash. Actually, a lot of this project is built on stuff I learned from his Friday Q&A's. If you haven't read those essays, you're missing out.)
I should note that this goes through the full method resolution machinery every time a particular message is sent; there's a speed hit there. The alternative is the path that Erik H. and EMKPantry each took, which is creating a new class for each delegate object that you need, and using class_addMethod(). Since every instance of WoolDelegate has its own dictionary of handlers, we don't need to do that, but on the other hand there's no way to "cache" the lookup or the invocation. A method can only be added to a class, not to an instance.
I did it this way for two reasons: this was an exercise to see if I could work out the part that's coming next -- the hand-off from NSInvocation to Block invocation -- and the creation of a new class for every needed instance simply seemed inelegant to me. Whether it's less elegant than my solution, I will leave to each reader's judgement.
Moving on, the meat of this procedure is actually in the NSInvocation category that's found in the project. This utilizes libffi to call a function that's unknown until runtime -- the Block's invocation -- with arguments that are also unknown until runtime (which are accessible via the NSInvocation). Normally, this is not possible, for the same reason that a va_list cannot be passed on: the compiler has to know how many arguments there are and how big they are. libffi contains assembler for each platform that knows/is based on those platforms' calling conventions.
There are three steps here: libffi needs a list of the types of the arguments to the function that's being called; it needs the argument values themselves put into a particular format; then the function (the Block's invocation pointer) needs to be invoked via libffi and the return value put back into the NSInvocation.
The real work for the first part is handled largely by a function which is again written by Mike Ash, called from Wool_buildFFIArgTypeList. libffi has internal structs that it uses to describe the types of function arguments. When preparing a call to a function, the library needs a list of pointers to these structures. The NSMethodSignature for the NSInvocation allows access of each argument's encoding string; translating from there to the correct ffi_type is handled by a set of if/else lookups:
arg_types[i] = libffi_type_for_objc_encoding([sig getArgumentTypeAtIndex:actual_arg_idx]);
...
if(str[0] == @encode(type)[0]) \
{ \
    if(sizeof(type) == 1) \
        return &ffi_type_sint8; \
    else if(sizeof(type) == 2) \
        return &ffi_type_sint16; \
Next, libffi wants pointers to the argument values themselves. This is done in Wool_buildArgValList: get the size of each argument, again from the NSMethodSignature, and allocate a chunk of memory that size, then return the list:
NSUInteger arg_size;
NSGetSizeAndAlignment([sig getArgumentTypeAtIndex:actual_arg_idx],
&arg_size,
NULL);
/* Get a piece of memory that size and put its address in the list. */
arg_list[i] = [self Wool_allocate:arg_size];
/* Put the value into the allocated spot. */
[self getArgument:arg_list[i] atIndex:actual_arg_idx];
(An aside: there are several notes in the code about skipping over the SEL, which is the (hidden) second argument passed to any method invocation. The Block's invocation pointer doesn't have a slot to hold the SEL; it just has itself as the first argument, and the rest are the "normal" arguments. Since the Block, as written in client code, could never access that argument anyway (it doesn't exist at the time), I decided to ignore it.)
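In other words (an illustration, not code from the project), the two calling shapes being bridged look roughly like this:
// Method IMP:       return_type (*)(id self, SEL _cmd, arg1, arg2, ...)
// Block invoke fn:  return_type (*)(BlockLiteral *block, arg1, arg2, ...)
// The Block has no slot for _cmd, so the SEL argument of the NSInvocation is skipped
// when the remaining arguments are marshalled across to the Block's invoke function.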
libffi now needs to do some "prep"; as long as that succeeds (and space for the return value can be allocated), the invocation function pointer can now be "called", and the return value can be set:
ffi_call(&inv_cif, (genericfunc)theIMP, ret_val, arg_vals);
if( ret_val ){
[self setReturnValue:ret_val];
free(ret_val);
}
There's some demonstrations of the functionality in main.m in the project.
Finally, as for your question of "should this be done?", I think the answer is "yes, as long as it makes you more productive". WoolDelegate is completely generic, and an instance can act like any fully written-out class. My intention for it, though, was to make simple, one-off delegates -- that only need one or two methods, and don't need to live past their delegators -- less work than writing a whole new class, and more legible/maintainable than sticking some delegate methods into a view controller because it's the easiest place to put them. Taking advantage of the runtime and the language's dynamism like this hopefully can increase your code's readability, in the same way, e.g., Block-based NSNotification handlers do.
I just put together a little project that lets you do just this...
@interface EJHDelegateObject : NSObject
+ (id)delegateObjectForProtocol:(Protocol *)protocol;
@property (nonatomic, strong) Protocol *protocol;
- (void)addImplementation:(id)blockImplementation forSelector:(SEL)selector;
@end

@implementation EJHDelegateObject

static NSInteger counter;

+ (id)delegateObjectForProtocol:(Protocol *)protocol
{
    NSString *className = [NSString stringWithFormat:@"%s%@%i", protocol_getName(protocol), @"_EJH_implementation_", counter++];
    Class protocolClass = objc_allocateClassPair([EJHDelegateObject class], [className cStringUsingEncoding:NSUTF8StringEncoding], 0);
    class_addProtocol(protocolClass, protocol);
    objc_registerClassPair(protocolClass);
    EJHDelegateObject *object = [[protocolClass alloc] init];
    object.protocol = protocol;
    return object;
}

- (void)addImplementation:(id)blockImplementation forSelector:(SEL)selector
{
    unsigned int outCount;
    struct objc_method_description *methodDescriptions = protocol_copyMethodDescriptionList(self.protocol, NO, YES, &outCount);
    struct objc_method_description description;
    BOOL descriptionFound = NO;
    for (int i = 0; i < outCount; i++){
        description = methodDescriptions[i];
        if (description.name == selector){
            descriptionFound = YES;
            break;
        }
    }
    if (descriptionFound){
        class_addMethod([self class], selector, imp_implementationWithBlock(blockImplementation), description.types);
    }
}

@end
And using an EJHDelegateObject:
self.alertViewDelegate = [EJHDelegateObject delegateObjectForProtocol:@protocol(UIAlertViewDelegate)];
[self.alertViewDelegate addImplementation:^(id _self, UIAlertView *alertView, NSInteger buttonIndex){
    NSLog(@"%@ dismissed with index %i", alertView, buttonIndex);
} forSelector:@selector(alertView:didDismissWithButtonIndex:)];
UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Example" message:@"My delegate is an EJHDelegateObject" delegate:self.alertViewDelegate cancelButtonTitle:@"Cancel" otherButtonTitles:@"OK", nil];
[alertView show];
Edit: This is what I've come up with after having understood your requirement. This is just a quick hack, an idea to get you started; it's not properly implemented, nor is it tested. It is supposed to work for delegate methods that take the sender as their only argument, with both normal and struct-returning delegate methods.
typedef void *(^UBDCallback)(id);
typedef void (^UBDCallbackStret)(void *, id);

void *UBDDelegateMethod(UniversalBlockDelegate *self, SEL _cmd, id sender)
{
    UBDCallback cb = [self blockForSelector:_cmd];
    return cb(sender);
}

void UBDDelegateMethodStret(void *retaddr, UniversalBlockDelegate *self, SEL _cmd, id sender)
{
    UBDCallbackStret cb = [self blockForSelector:_cmd];
    cb(retaddr, sender);
}

@interface UniversalBlockDelegate : NSObject
- (BOOL)addDelegateSelector:(SEL)sel isStret:(BOOL)stret methodSignature:(const char *)mSig block:(id)block;
@end

@implementation UniversalBlockDelegate {
    SEL selectors[128];
    id blocks[128];
    int count;
}

- (id)blockForSelector:(SEL)sel
{
    for (int i = 0; i < count; i++) {
        if (selectors[i] == sel) {
            return blocks[i];
        }
    }
    return nil;
}

- (void)dealloc
{
    for (int i = 0; i < count; i++) {
        [blocks[i] release];
    }
    [super dealloc];
}

- (BOOL)addDelegateSelector:(SEL)sel isStret:(BOOL)stret methodSignature:(const char *)mSig block:(id)block
{
    if (count >= 128) return NO;
    selectors[count] = sel;
    blocks[count++] = [block copy];
    class_addMethod(self.class, sel, (IMP)(stret ? UBDDelegateMethodStret : UBDDelegateMethod), mSig);
    return YES;
}

@end
Usage:
UIWebView *webView = [[UIWebView alloc] initWithFrame:CGRectZero];
UniversalBlockDelegate *d = [[UniversalBlockDelegate alloc] init];
webView.delegate = d;
[d addDelegateSelector:@selector(webViewDidFinishLoad:) isStret:NO methodSignature:"v@:@" block:^(id webView) {
    NSLog(@"Web View '%@' finished loading!", webView);
}];
[webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://google.com"]]];

NSDictionary: method only defined for abstract class. My app crashed

My app crashed after I called addImageToQueue. I added initWithObjects:forKeys:count: but it didn't help me.
Terminating app due to uncaught exception 'NSInvalidArgumentException',
reason: '*** -[NSDictionary initWithObjects:forKeys:count:]:
method only defined for abstract class.
Define -[DictionaryWithTag initWithObjects:forKeys:count:]!'
my code
- (void)addImageToQueue:(NSDictionary *)dict
{
    DictionaryWithTag *dictTag = [DictionaryWithTag dictionaryWithDictionary:dict];
}

@interface DictionaryWithTag : NSDictionary
@property (nonatomic, assign) int tag;
- (id)initWithObjects:(id *)objects forKeys:(id *)keys count:(NSUInteger)count;
@end

@implementation DictionaryWithTag
@synthesize tag;

- (id)initWithObjects:(id *)objects forKeys:(id *)keys count:(NSUInteger)count
{
    return [super initWithObjects:objects forKeys:keys count:count];
}

@end
Are you subclassing NSDictionary? That's not a common thing to do in Cocoa-land, which might explain why you're not seeing the results you expect.
NSDictionary is a class cluster. That means that you never actually work with an instance of NSDictionary, but rather with one of its private subclasses. See Apple's description of a class cluster here. From that doc:
You create and interact with instances of the cluster just as you would any other class. Behind the scenes, though, when you create an instance of the public class, the class returns an object of the appropriate subclass based on the creation method that you invoke. (You don’t, and can’t, choose the actual class of the instance.)
What your error message is telling you is that if you want to subclass NSDictionary, you have to implement your own backend storage for it (for example by writing a hash table in C). It's not just asking you to declare that method, it's asking you to write it from scratch, handling the storage yourself. That's because subclassing a class cluster directly like that is the same as saying you want to provide a new implementation for how dictionaries work. As I'm sure you can tell, that's a significant task.
Assuming you definitely want to subclass NSDictionary, your best bet is to write your subclass to contain a normal NSMutableDictionary as a property, and use that to handle your storage. This tutorial shows you one way to do that. That's not actually that hard, you just need to pass the required methods through to your dictionary property.
You could also try using associative references, which "simulate the addition of object instance variables to an existing class". That way you could associate an NSNumber with your existing dictionary to represent the tag, and no subclassing is needed.
Of course, you could also just have tag as a key in the dictionary, and store the value inside it like any other dictionary key.
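If the tag is all you need, a minimal sketch of the associative-references route might look like this (the runtime functions come from <objc/runtime.h>; the key and values are made up for the example):
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Hypothetical key used only to identify the association.
static const void *kTagKey = &kTagKey;

static void TagDemo(void)
{
    NSDictionary *dict = @{ @"image" : @"queued.png" };
    // Attach a tag to the existing dictionary without subclassing it.
    objc_setAssociatedObject(dict, kTagKey, @7, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
    NSNumber *tag = objc_getAssociatedObject(dict, kTagKey);
    NSLog(@"tag = %@", tag); // logs 7
}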
From https://stackoverflow.com/a/1191351/467588, this is what I did to make a subclass of NSDictionary work. I just declare an NSDictionary as an instance variable of my class and add some more required methods. It's called a "Composite Object" - thanks @mahboudz.
@interface MyCustomNSDictionary : NSDictionary {
    NSDictionary *_dict;
}
@end

@implementation MyCustomNSDictionary

- (id)initWithObjects:(const id [])objects forKeys:(const id [])keys count:(NSUInteger)cnt {
    _dict = [NSDictionary dictionaryWithObjects:objects forKeys:keys count:cnt];
    return self;
}

- (NSUInteger)count {
    return [_dict count];
}

- (id)objectForKey:(id)aKey {
    return [_dict objectForKey:aKey];
}

- (NSEnumerator *)keyEnumerator {
    return [_dict keyEnumerator];
}

@end
I just did a little trick.
I'm not sure that it's the best solution (or even that it is a good idea to do it).
@interface MyDictionary : NSDictionary
@end

@implementation MyDictionary

+ (id)allocMyDictionary
{
    return [[self alloc] init];
}

- (id)init
{
    self = (MyDictionary *)[[NSDictionary alloc] init];
    return self;
}

@end
This worked fine for me.
