I have a basic understanding of weak references and blocks. The problem I am facing is this:
Whenever I access self inside my own block, the retain count of self increases by 2, whereas when I access self inside a system-provided block (for example, a UIView animation block) the retain count of self increases by only 1.
I just want to understand why it increases by 2.
Thanks in advance!
According to the Clang source code that generates code for Objective-C blocks:
CGBlocks.cpp
CGDecl.cpp
CGObjC.cpp
An Objective-C block literal is generated by the EmitBlockLiteral function:
llvm::Value *CodeGenFunction::EmitBlockLiteral(const CGBlockInfo &blockInfo) {
The LLVM documentation explains in depth what a block literal is. In any case, this function generates a block descriptor and a copy helper function for the specified block. The copy helper function is responsible for capturing auto variables and self.
buildBlockDescriptor -> buildCopyHelper -> GenerateCopyHelperFunction
In the GenerateCopyHelperFunction function, Clang emits objc_storeStrong for each Objective-C object auto variable that the block captures.
for (const auto &CI : blockDecl->captures()) {
...
EmitARCStoreStrongCall(...
So this call increments the retain count of self (1 -> 2).
After that, the EmitBlockLiteral function emits objc_retain for each captured Objective-C object auto variable as well.
// Next, captured variables.
for (const auto &CI : blockDecl->captures()) {
...
EmitExprAsInit -> EmitScalarInit -> EmitARCRetain
Therefore this call increments the retain count of self as well (2 -> 3).
I don't know the exact reason, but apparently there is some reason to retain an Objective-C object before the block copy helper function captures it.
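To see where those two code paths fire in practice, here is a minimal sketch (the Worker class and its task property are made up for illustration, and observing retain counts directly is unreliable under ARC, so treat the comments as approximations of the codegen described above):
#import <Foundation/Foundation.h>

@interface Worker : NSObject
@property (nonatomic, copy) void (^task)(void);
@end

@implementation Worker
- (void)scheduleTask {
    // Building the block literal retains the captured self
    // (EmitBlockLiteral -> EmitExprAsInit -> EmitScalarInit -> EmitARCRetain).
    // Assigning it to a `copy` property then copies the block to the heap,
    // which runs the copy helper and calls objc_storeStrong on the captured
    // self again (GenerateCopyHelperFunction -> EmitARCStoreStrongCall).
    self.task = ^{
        NSLog(@"%@", self);
    };
}
@end
Note that this sketch also creates a retain cycle (self strongly owns task, which strongly captures self), which is exactly what the next answer is about.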
Using self inside a block usually creates a retain cycle, which could be the reason it increases by 2. To fix this, try using a weak reference to self. Check out this question:
capturing self strongly in this block is likely to lead to a retain cycle
Use something like this
__unsafe_unretained typeof(self) weakSelf = self;
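A commonly used, safer variant is __weak plus a strong re-reference inside the block, so the block does not keep self alive, but self also cannot be deallocated halfway through the block's execution (the completion property and doSomething method below are placeholders):
__weak typeof(self) weakSelf = self;
self.completion = ^{
    __strong typeof(weakSelf) strongSelf = weakSelf; // nil if self has already been deallocated
    if (!strongSelf) {
        return;
    }
    [strongSelf doSomething];
};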
I have encountered a data race in my app using Xcode's Thread Sanitizer and I have a question on how to address it.
I have a var defined as:
var myDict = [Double : [Date:[String:Any]]]()
I have a background queue set up on which I call a setup() function:
let queue = DispatchQueue(label: "my-queue", qos: .utility)
queue.async {
self.setup {
}
}
My setup() function essentially loops through tons of data and populates myDict. This can take a while, which is why we need to do it asynchronously.
On the main thread, my UI accesses myDict to display its data. In a cellForRow: method:
if !myDict.keys.contains(someObject) {
//Do something
}
And that is where I get my data race alert and the subsequent crash.
Exception NSException * "-[_NSCoreDataTaggedObjectID objectForKey:]:
unrecognized selector sent to instance
0x8000000000000000" 0x0000000283df6a60
Please help me understand how to access a variable in a thread-safe manner in Swift. I feel like I'm possibly halfway there with setting, but I'm confused about how to approach getting on the main thread.
One way to access it asynchronously:
typealias Dict = [Double : [Date:[String:Any]]]
var myDict = Dict()
func getMyDict(f: @escaping (Dict) -> ()) {
    queue.async {
        let copy = myDict // read `myDict` only on `queue`...
        DispatchQueue.main.async {
            f(copy)       // ...and hand a copy over to the main queue
        }
    }
}
getMyDict { dict in
assert(Thread.isMainThread)
}
This makes the assumption that queue may be scheduling long-lasting closures.
How does it work?
You can only access myDict from within queue. In the above function, myDict is read on this queue, and a copy of it is handed over to the main queue. While you are showing that copy of myDict in the UI, you can simultaneously mutate the original myDict. The copy-on-write semantics of Dictionary ensure that copies are cheap.
You can call getMyDict from any thread and it will always call the closure on the main thread (in this implementation).
Caveat:
getMyDict is an asynchronous function, which shouldn't be a caveat at all nowadays, but I just want to emphasise it ;)
Alternatives:
Swift Combine: make myDict a published value from some Publisher that implements your logic.
Later, you may also consider using async & await when it becomes available.
Preface: This will be a pretty long non-answer. I don't actually know what's wrong with your code, but I can share the things I do know that can help you troubleshoot it, and learn some interesting things along the way.
Understanding the error
Exception NSException * "-[_NSCoreDataTaggedObjectID objectForKey:]: unrecognized selector sent to instance 0x8000000000000000"
An Objective C exception was thrown (and not caught).
The exception happened when attempting to invoke -[_NSCoreDataTaggedObjectID objectForKey:]. This is a conventional way to refer to an Objective C method in writing. In this case, it's:
An instance method (hence the -, rather than a + that would be used for class methods)
On the class _NSCoreDataTaggedObjectID (more on this later)
On the method named objectForKey:
The object receiving this method invocation is the one with address 0x8000000000000000.
This is a pretty weird address. Something is up.
Another hint is the strange class name _NSCoreDataTaggedObjectID. There are a few observations we can make about it:
The _NS prefix suggests that it's an internal implementation detail of CoreData.
Googling the name turns up class dumps of the CoreData framework, which show us that:
_NSCoreDataTaggedObjectID subclasses _NSScalarObjectID
Which subclasses _NSCoreManagedObjectID
Which subclasses NSManagedObjectID
NSManagedObjectID is a public API, which has its own first-party documentation.
It has the word "tagged" in its name, which has a special meaning in the Objective C world.
Some back story
Objective-C uses message passing as its sole mechanism for method dispatch (unlike Swift, which usually prefers static and v-table dispatch, depending on the context). Every method call you write is essentially syntactic sugar on top of objc_msgSend (and its variants), passing to it the receiver object, the selector (the "name" of the method being invoked) and the arguments. This special function checks the class of the receiver object and looks through that class's hierarchy until it finds a method implementation for the desired selector.
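As a rough sketch of what that lowering looks like (assuming name is an NSString; real codegen casts objc_msgSend to the proper function type and may pick specialized variants, so this is only the general shape):
#import <Foundation/Foundation.h>
#import <objc/message.h>

static void demo(NSString *name) {
    // An ordinary message send...
    NSString *upper = [name uppercaseString];

    // ...is conceptually lowered by the compiler to something like this:
    NSString *upper2 = ((NSString *(*)(id, SEL))objc_msgSend)(name, @selector(uppercaseString));

    NSLog(@"%@ %@", upper, upper2);
}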
That message-passing model is great, because it allows you to do a lot of cool dynamic behaviour at runtime. For example, menu bar items in a macOS app just define the method name they invoke. Clicking on them "sends that message" to the responder chain, which invokes that method on the first object that has an implementation for it (the lingo is "the first object that answers to that message").
This works really well, but it has several trade-offs. One of them is that everything has to be an object. And by object, we mean a heap-allocated memory region whose first several words of memory store meta-data for the object. This meta-data contains a pointer to the class of the object, which is necessary for the method-lookup process in objc_msgSend described above.
The issue is that for small objects (particularly NSNumber values, small strings, empty arrays, etc.) the overhead of these several words of object meta-data can be several times bigger than the actual object data you're interested in. For example, even though NSNumber(value: true /* or false */) stores a single bit of "useful" data, on 64-bit systems there would be 128 bits of object overhead. Add to that all the malloc/free and retain/release overhead associated with dealing with large numbers of tiny objects, and you get a real performance issue.
"Tagged pointers" were a solution to this problem. The idea is that for small enough values of particular privileged classes, we won't allocate heap memory for their objects. Instead, we'll store their objects' data directly in their pointer representation. Of course, we would need a way to know if a given pointer is a real pointer (that points to a real heap-allocated object), or a "fake pointer" that encodes data inline.
The key realization was that malloc only ever returns memory aligned to 16-byte boundaries. This means that the low 4 bits of every such memory address are always 0 (if they weren't, the address wouldn't be 16-byte aligned). These "unused" 4 bits can be employed to discriminate real pointers from tagged pointers. Exactly which bits are used, and how, differs between processor architectures and runtime versions, but the general idea is the same.
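You can observe this alignment yourself; the exact address differs from run to run, but the low bits of a heap object's address come out as zero:
NSObject *object = [NSObject new];
// Heap-allocated objects come from 16-byte-aligned allocations, so the
// low four bits of their address are always zero:
NSLog(@"%p low nibble: %lu", object, (unsigned long)((uintptr_t)object & 0xF)); // prints 0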
If a pointer value has 0000 in those 4 bits, the system knows it's a real pointer to a heap-allocated object. All other values of those 4 bits can be used to signal what kind of data is stored in the remaining bits. The Objective-C runtime is open source, so you can actually see the tagged pointer classes and their tags:
enum objc_tag_index_t : uint16_t
{
// 60-bit payloads
OBJC_TAG_NSAtom = 0,
OBJC_TAG_1 = 1,
OBJC_TAG_NSString = 2,
OBJC_TAG_NSNumber = 3,
OBJC_TAG_NSIndexPath = 4,
OBJC_TAG_NSManagedObjectID = 5,
OBJC_TAG_NSDate = 6,
// 60-bit reserved
OBJC_TAG_RESERVED_7 = 7,
// 52-bit payloads
OBJC_TAG_Photos_1 = 8,
OBJC_TAG_Photos_2 = 9,
OBJC_TAG_Photos_3 = 10,
OBJC_TAG_Photos_4 = 11,
OBJC_TAG_XPC_1 = 12,
OBJC_TAG_XPC_2 = 13,
OBJC_TAG_XPC_3 = 14,
OBJC_TAG_XPC_4 = 15,
OBJC_TAG_NSColor = 16,
OBJC_TAG_UIColor = 17,
OBJC_TAG_CGColor = 18,
OBJC_TAG_NSIndexSet = 19,
OBJC_TAG_NSMethodSignature = 20,
OBJC_TAG_UTTypeRecord = 21,
// When using the split tagged pointer representation
// (OBJC_SPLIT_TAGGED_POINTERS), this is the first tag where
// the tag and payload are unobfuscated. All tags from here to
// OBJC_TAG_Last52BitPayload are unobfuscated. The shared cache
// builder is able to construct these as long as the low bit is
// not set (i.e. even-numbered tags).
OBJC_TAG_FirstUnobfuscatedSplitTag = 136, // 128 + 8, first ext tag with high bit set
OBJC_TAG_Constant_CFString = 136,
OBJC_TAG_First60BitPayload = 0,
OBJC_TAG_Last60BitPayload = 6,
OBJC_TAG_First52BitPayload = 8,
OBJC_TAG_Last52BitPayload = 263,
OBJC_TAG_RESERVED_264 = 264
};
You can see that strings, index paths, dates, and other similarly "small and numerous" classes all have reserved pointer tag values. For each of these "normal" classes (NSString, NSDate, NSNumber, etc.), there's a special internal subclass that implements the same public API, but backed by a tagged pointer instead of a regular heap object.
As you can see, there's a value for OBJC_TAG_NSManagedObjectID. It turns out that NSManagedObjectID objects are numerous and small enough to benefit greatly from this tagged-pointer representation. After all, the value of an NSManagedObjectID might be a single integer, much like an NSNumber, which would be wasteful to heap-allocate.
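You can see tagged pointers in action with a small experiment; the concrete classes and pointer values are implementation details that vary by architecture and OS version (and payloads are obfuscated on recent runtimes), so your output will differ:
#import <objc/runtime.h>

NSNumber *smallNumber = @42;
NSString *shortString = [[@"hi" mutableCopy] copy]; // force a short, non-constant string
NSObject *plainObject = [NSObject new];

NSLog(@"%p %@", smallNumber, object_getClass(smallNumber));
NSLog(@"%p %@", shortString, object_getClass(shortString));
NSLog(@"%p %@", plainObject, object_getClass(plainObject));
// The first two typically print odd-looking, non-16-byte-aligned values and
// internal classes such as __NSCFNumber / NSTaggedPointerString, while the
// plain NSObject prints an ordinary aligned heap address.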
If you'd like to learn more about tagged pointers, I'd recommend Mike Ash's writings, such as https://www.mikeash.com/pyblog/friday-qa-2012-07-27-lets-build-tagged-pointers.html
There was also a recent WWDC talk on the subject: WWDC 2020 - Advancements in the Objective-C runtime
The strange address
So in the previous section we found out that _NSCoreDataTaggedObjectID is the tagged-pointer subclass of NSManagedObjectID. Now we can notice something else strange: the pointer value we saw has a lot of zeros: 0x8000000000000000. What we're dealing with is probably some kind of uninitialized object.
Conclusion
The call stack can shed further light on where this happens exactly, but what we know is that somewhere in your program, the objectForKey: method is being invoked on an uninitialized value of NSManagedObjectID.
You're probably accessing a value too early, before it's properly initialized.
To work around this you can take one of several approaches:
In a future ideal world, you would just use the structured concurrency of Swift 5.5 (once it's available on enough devices) and async/await to push the work to the background and await the result.
Use a completion handler to invoke your value-consuming code only after the value is ready. This is the most immediately easy option, but it will blow up your code base with completion-handler boilerplate and bugs.
Use a concurrency abstraction library, like Combine, RxSwift, or PromiseKit. This will be a bit more work to set up, but usually leads to much clearer/safer code than throwing completion handlers in everywhere.
The basic pattern to achieve thread safety is to never mutate/access the same property from multiple threads at the same time. The simplest solution is to just never let any background queue interact with your property directly. So, create a local variable that the background queue will use, and then dispatch the updating of the property to the main queue.
Personally, I wouldn't have setup interact with myDict at all, but rather return the result via the completion handler, e.g.
// properties
var myDict = ...
private let queue = DispatchQueue(label: "my-queue", qos: .utility) // some background queue on which we'll recalculate what will eventually be used to update `myProperty`
// method doesn't reference `myDict` at all, but uses local var only
func setup(completion: @escaping (Foo) -> Void) {
queue.async {
var results = ... // some local variable that we'll use as we're building up our results
// do time-consuming population of `results` here;
// do not touch `myDict` here, though;
// when all done, dispatch update of `myDict` back to the main queue
DispatchQueue.main.async { // dispatch update of property back to the main queue
completion(results)
}
}
}
Then the routine that calls setup can update the property (and trigger necessary UI update, too).
setup { results in
self.myDict = results
// also trigger UI update here, too
}
(Note, your closure parameter type (Foo in my example) would be whatever type myDict is. Maybe a typealias as advised elsewhere, or better, use custom types rather than dictionary within dictionary within dictionary. Use whatever type you’d prefer.)
By the way, your question’s title and preamble talk about TSAN and thread safety, but you then share an “unrecognized selector” exception, which is a completely different issue. So you may well have two completely separate issues going on. A TSAN data race error would have produced a very different message (something like the error I show here). Now, if setup is mutating myDict from a background thread, that undoubtedly will lead to thread-safety problems, but your reported exception suggests there might also be some other problem, too...
Apple offers a CPU and GPU Synchronization sample project that shows how to synchronize access to shared resources between CPU and GPU. To do so, it uses a semaphore which is stored in an instance variable:
@implementation AAPLRenderer
{
dispatch_semaphore_t _inFlightSemaphore;
// other ivars
}
This semaphore is then created in another method:
- (nonnull instancetype)initWithMetalKitView:(nonnull MTKView *)mtkView
{
self = [super init];
if(self)
{
_device = mtkView.device;
_inFlightSemaphore = dispatch_semaphore_create(MaxBuffersInFlight);
// further initializations
}
return self;
}
MaxBuffersInFlight is defined as follows:
// The max number of command buffers in flight
static const NSUInteger MaxBuffersInFlight = 3;
Finally, the semaphore is utilized as follows:
/// Called whenever the view needs to render
- (void)drawInMTKView:(nonnull MTKView *)view
{
// Wait to ensure only MaxBuffersInFlight number of frames are getting processed
// by any stage in the Metal pipeline (App, Metal, Drivers, GPU, etc)
dispatch_semaphore_wait(_inFlightSemaphore, DISPATCH_TIME_FOREVER);
// Iterate through our Metal buffers, and cycle back to the first when we've written to MaxBuffersInFlight
_currentBuffer = (_currentBuffer + 1) % MaxBuffersInFlight;
// Update data in our buffers
[self updateState];
// Create a new command buffer for each render pass to the current drawable
id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
commandBuffer.label = @"MyCommand";
// Add completion handler which signals _inFlightSemaphore when Metal and the GPU have fully
// finished processing the commands we're encoding this frame. This indicates when the
// dynamic buffers filled with our vertices, that we're writing to this frame, will no longer
// be needed by Metal and the GPU, meaning we can overwrite the buffer contents without
// corrupting the rendering.
__block dispatch_semaphore_t block_sema = _inFlightSemaphore;
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer)
{
dispatch_semaphore_signal(block_sema);
}];
// rest of the method
}
What I fail to understand here is the necessity of the line
__block dispatch_semaphore_t block_sema = _inFlightSemaphore;
Why do I have to copy the instance variable into a local variable and mark this local variable with __block? If I just drop that local variable and instead write
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer)
{
dispatch_semaphore_signal(_inFlightSemaphore);
}];
it seems to work as well. I also tried marking the instance variable with __block as follows:
__block dispatch_semaphore_t _bufferAccessSemaphore;
This compiles with Clang and seems to work as well. But because this is about preventing race conditions I want to be sure that it works.
So the question is why does Apple create that local semaphore copy marked with __block? Is it really necessary or does the approach with directly accessing the instance variable work just as well?
As a side note, the answer to this SO question remarks that marking instance variables with __block can't be done. That answer refers to gcc, but why would Clang allow this if it shouldn't be done?
The important semantic distinction here is that when you use the ivar directly in the block, the block takes a strong reference to self. By creating a local variable that refers to the semaphore instead, only the semaphore is captured (by reference) by the block, instead of self, reducing the likelihood of a retain cycle.
As for the __block qualifier, you'd normally use that to indicate that a local variable should be mutable within the referencing block. However, since the semaphore variable is not mutated by the call to signal, the qualifier isn't strictly necessary here. It still serves a useful purpose from a style perspective, though, in the sense that it emphasizes the lifetime and purpose of the variable.
On the topic of why an ivar can be qualified with __block,
why would Clang allow this if it shouldn't be done?
Perhaps exactly because capturing an ivar in a block implies strongly capturing self. With or without a __block qualifier, if you use an ivar in a block, you're potentially running the risk of a retain cycle, so having the qualifier there doesn't create additional risk. Better to use a local variable (which, by the way, could be a __weak reference to self just as easily as a __block-qualified reference to an ivar) to be explicit and safe.
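To make that concrete, here is a sketch of the two forms side by side, using the same names as the sample above; the first captures self strongly because the ivar access is really self->_inFlightSemaphore, while the second captures only the semaphore object:
// Captures self strongly (the ivar is accessed through self):
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
    dispatch_semaphore_signal(_inFlightSemaphore);
}];

// Captures only the semaphore, not self:
dispatch_semaphore_t sema = _inFlightSemaphore; // read the ivar outside the block
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
    dispatch_semaphore_signal(sema);
}];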
I think warrenm correctly answered your question as to why one would use a local variable rather than the ivar (and its implicit reference to self). +1
But you asked why a local variable would be marked as __block in this case. The author could have done that to make their intent explicit (e.g., to indicate the variable will outlive the scope of the method), or potentially for the sake of efficiency (e.g., why make a new copy of the pointer?).
For example, consider:
- (void)foo {
dispatch_semaphore_t semaphore = dispatch_semaphore_create(4);
NSLog(@"foo: %p %@", &semaphore, semaphore);
for (int i = 0; i < 10; i++) {
dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
NSLog(@"bar: %p %@", &semaphore, semaphore);
[NSThread sleepForTimeInterval:1];
dispatch_semaphore_signal(semaphore);
});
}
}
While these are all using the same semaphore, in the absence of __block, each dispatched block will get its own pointer to that semaphore.
However, if you add __block to the declaration of that local semaphore variable, each dispatched block will use the same pointer, which will be sitting on the heap (i.e. the value of &semaphore will be the same for each block).
It’s not meaningful here, IMHO, where there’s only one block being dispatched, but hopefully this illustrates the impact of the __block qualifier on a local variable. (Obviously, the traditional use of the __block qualifier is when you’re changing the value of the variable in question, but that’s not relevant here.)
... marking instance variables with __block can't be done [in gcc] ... but why would Clang allow this if it shouldn't be done?
Regarding why there is no error on __block for ivars: as that referenced answer said, it’s basically redundant. I’m not sure that just because something is redundant it should be disallowed.
I have almost 4 years of experience with Objective-C and am a newbie in Swift. I am trying to understand the concepts of Swift from the perspective of Objective-C, so if I am wrong please guide me :)
In Objective-C we have blocks (chunks of code which can be executed later, asynchronously), which made perfect sense. But in Swift we can now pass a function as a parameter to another function, to be executed later, and then we have closures as well.
As per Apple, "functions are special cases of closures."
As per O'Reilly, "when a function is passed around as a value, it carries along its internal references to external variables. That is what makes a function a closure."
So I tried a little bit to understand the same :)
Here is my closure:
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
let tempNumber : Int = 5
let numbers = [1,2,3,4,5]
print (numbers.map({ $0 - tempNumber}))
}
The variable tempNumber is declared before the closure, yet the closure has access to it. Then, rather than map, I tried using a custom class, passed the closure as a parameter, and executed the same code :) Though the closure now gets executed in a different scope, it still has access to tempNumber.
I concluded: closures have access to the variables and methods declared in the same scope as the closure itself, even though the closure gets executed in a different scope.
Next, rather than passing a closure as a parameter, I tried passing a function as a parameter:
class test {
func testFunctionAsParameter(testMethod : (Int) -> Int){
let seconds = 4.0
let delay = seconds * Double(NSEC_PER_SEC) // nanoseconds per seconds
let dispatchTime = dispatch_time(DISPATCH_TIME_NOW, Int64(delay))
dispatch_after(dispatchTime, dispatch_get_main_queue(), {
self.callLater(testMethod)
})
}
func callLater(testMethod : (Int) -> Int) -> Int {
return testMethod(100)
}
}
In a different class I created an instance of test and used it as follows:
/* in differrent class */
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
let tempAge : Int = 5
func test2(val : Int) -> Int {
return val - tempAge;
}
let testObj = test();
print(testObj.testFunctionAsParameter(test2))
}
I declared a class called test, which has a method called testFunctionAsParameter, which in turn calls another method called callLater, and finally that method executes the passed function :)
All this circus is just to ensure that the passed method gets executed in a different scope :)
When I executed the above code, I was shocked to see that even though the function passed as a parameter is finally executed in a different scope, it still has access to the variable tempAge that was declared in the same scope as the function declaration :)
I concluded: O'Reilly's statement "when a function is passed around as a value, it carries along its internal references to external variables" was bang on :)
Now my doubt: Apple says functions are special cases of closures. I thought the special case must have something to do with scope :) But to my surprise the code shows that both closures and functions have access to variables in the external scope!
Other than the syntax difference, how is a closure different from a function passed as an argument? There must be some difference internally, otherwise Apple wouldn't have spent so much time designing it :)
If not scope, then what else is different between a closure and a function? O'Reilly states "when a function is passed around as a value, it carries along its internal references to external variables. That is what makes a function a closure." So what is it trying to point out? That a closure won't carry references to external variables? They can't be wrong either, can they?
I am going mad with what seem like two conflicting statements from Apple and O'Reilly :( Am I misunderstanding something? Please help me understand the difference.
In Swift there really isn't any difference between functions and closures. A closure is an anonymous function (a function with no name). That's about it, other than the syntax differences you noted.
In Objective-C functions and blocks/closures ARE different.
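A small Objective-C sketch of that difference: a plain C function has no access to the caller's local variables, whereas a block closes over them.
// A plain C function: it cannot see the caller's locals.
static int addOne(int x) {
    return x + 1;
}

static void demo(void) {
    int delta = 5;
    // A block: it captures `delta` from the enclosing scope.
    int (^addDelta)(int) = ^(int x) {
        return x + delta;
    };
    NSLog(@"%d %d", addOne(1), addDelta(1)); // 2 6
}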
Based on your post, it looks like you have a pretty full understanding on the topic and you may just be getting lost in the semantics. It basically boils down to:
1. a closure is a closure
2. a function is a closure with a name
Here's a post with more details. But again, it's mostly a discussion of semantics. The "answer" is very simple.
What is the difference between functions and closures?
According to Xcode Instruments, my code has a memory leak (at //3 in the code below). But I get the feeling I'm missing something in my mental model of what's going on, so I have a few questions about the following logic:
__block MyType *blockObject = object; //1
dispatch_async(dispatch_get_main_queue(), ^{
if ([self.selectedObjects containsObject:blockObject]) { //2
[self.selectedObjects removeObject:blockObject];
[[NSNotificationCenter defaultCenter] postNotificationName:ObjectDeselectionNotification object:blockObject]; //3
} else {
[self.selectedObjects addObject:blockObject];
[[NSNotificationCenter defaultCenter] postNotificationName:ObjectSelectionNotification object:blockObject];
}
});
1) I'm using a __block reference because I'm executing this code async and don't want a reference to this variable copied to the heap. Is this a valid usage of __block even though I'm not mutating the variable?
2) Calling self.selectedObjects will create a retain on self. Does the block automatically release this after it has exited?
3) I apparently have a leak at this point, but I'm not exactly sure why. Is NotificationCenter retaining my __block object that is supposed to be disposed of after my block exits?
From the code you've shown, I don't see any problems...
1) Your object would not be "copied" onto the heap - it is already on the heap, being an alloc'd object. Rather, its reference count is incremented by 1, as it is now owned by the block. You do not need the __block qualifier, because you are not assigning anything to the pointer. In fact, you do not need blockObject at all and can just pass object (see the sketch after point 3).
2) self should be released once the block is done. However, posting a notification is synchronous (this block will not finish until all the objects responding to the notification are done).
3) I'm not sure what exact implementation NSNotificationCenter uses, but it doesn't really matter, because the posting is synchronous. It will call every observer of your notification, using the selectors they registered to receive it with.
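Here is the sketch referred to in point 1: the same dispatch without the __block temporary, passing object straight into the block (the block retains it for as long as the block is alive):
dispatch_async(dispatch_get_main_queue(), ^{
    if ([self.selectedObjects containsObject:object]) {
        [self.selectedObjects removeObject:object];
        [[NSNotificationCenter defaultCenter] postNotificationName:ObjectDeselectionNotification
                                                            object:object];
    } else {
        [self.selectedObjects addObject:object];
        [[NSNotificationCenter defaultCenter] postNotificationName:ObjectSelectionNotification
                                                            object:object];
    }
});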
It seems as though you are running all this code within another block - can you paste the full method?
Please remove this answer if it's incorrect (I see you've already accepted an answer, but I'm not sure you accepted it because it actually worked for you).
I don't think you should be referencing self in that block - you will be creating a retain cycle.
__weak YourClass *weakSelf = self;
Use weakSelf instead and cut the tie between the creator and the block floating on the dispatch queue.
Description:
I pass my block to an async method, and it is called when the operation finishes. I would like to be able to cancel the call to the block before the operation finishes, but if I assign nil to the block variable in my class it is called anyway. I debugged it and saw that if I assign nil to block variable 1, variable 2 is not nil. The code below illustrates it:
void (^d1)(NSArray *data) = ^(NSArray *data) {};
void (^__weak d2)(NSArray *data) = d1;
d1 = nil;
Output:
(lldb) po d1
<__NSGlobalBlock__: 0x9c22b8>
(lldb) po d2
<__NSGlobalBlock__: 0x9c22b8>
(lldb) po d1
<nil>
(lldb) po d2
<__NSGlobalBlock__: 0x9c22b8>
The question:
Why is block d2 not nil? Is it copied by value rather than as a pointer?
First of all, you should never depend on an object being deallocated at a particular point. It is always possible for someone else (e.g. an autorelease pool) to hold a reference to it without you knowing. It is not incorrect for an object not to be deallocated at any particular point.
Okay, to your case: the __NSGlobalBlock__ gives a hint: this is a "global block". In the current implementation of blocks, there are 3 types of block objects: stack blocks (__NSStackBlock__), heap blocks (__NSMallocBlock__), and global blocks (__NSGlobalBlock__).
Blocks that are closures (i.e. that use some local variable from an outer scope) start out as "stack blocks" -- the object structure has automatic storage duration, just like a local variable, and it becomes invalid when it leaves the scope it's defined in. When a stack block is "copied", it gets moved to a dynamically-allocated object on the heap, like all other objects in Objective-C. This is a heap block. It is reference-counted like other objects in Cocoa, and copying it has the same effect as retain.
Blocks that are not closures (which is what the block in your example is) are implemented as "global blocks". Because such a block does not capture any variables, all instances of the block are identical, and thus there is exactly one instance of it in the entire program. It is statically allocated, like a string literal, and lives for the entire lifetime of the program. It is not reference-counted, and retain, release, and copy have no effect on it. It cannot be deallocated; that is why your weak reference never gets set to nil. A __weak reference only gets set to nil if the object it points to gets deallocated.
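A quick way to see this for yourself (the class names and the stack-versus-heap outcome are implementation details, so the exact output may vary):
int captured = 42;
void (^noCaptures)(void) = ^{ NSLog(@"nothing captured"); };
void (^withCapture)(void) = ^{ NSLog(@"%d", captured); };

NSLog(@"%@ %p %p", [noCaptures class], noCaptures, [noCaptures copy]);
// __NSGlobalBlock__, and both pointers are identical: copy/retain/release are
// no-ops on a global block, it is never deallocated, and that is why a __weak
// reference to it never becomes nil.

NSLog(@"%@", [withCapture class]);
// A capturing block: __NSStackBlock__ or __NSMallocBlock__, depending on
// whether ARC has already copied it to the heap at this point.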
d1 is a reference to the block you created, ^(NSArray *data) {};. Once you assign d1 to d2, the reference count on that block increments. When you set d1 to nil, there is still a pointer to the block, d2.