Returning values from methods using NSRecursiveLock - ios

I am transitioning some thread safety code from #synchronized to NSRecursiveLock.
Consider this code in which myItemsArray is an NSMutableArray:
- (NSUInteger) numberOfItems {
    @synchronized(self.myItemsArray) {
        return self.myItemsArray.count;
    }
}
I believe the following code is incorrect because the lock would never get unlocked:
- (NSUInteger) numberOfItems {
    [self.myRecursiveLock lock];
    return self.myItemsArray.count;
    [self.myRecursiveLock unlock];
}
So I'm using this approach instead:
- (NSUInteger) numberOfItems {
    [self.myRecursiveLock lock];
    NSUInteger itemCount = self.myItemsArray.count;
    [self.myRecursiveLock unlock];
    return itemCount;
}
However, I think this approach would break the thread safety, since another thread could add or remove an item after -unlock is called, but before itemCount is returned.
I'm not sure whether I'm right that the last approach isn't thread-safe, because I see this pattern in many widely used third-party libraries (for example, -[AFHTTPRequestOperation responseObject]).
What is the correct way to return a value from a method synchronized using NSRecursiveLock?

Protecting numberOfItems can never ensure that the count is up-to-date: the array might be modified by another thread immediately after the method returns. It only prevents two threads from calling the count method simultaneously. In that sense your last approach is just as thread-safe as the @synchronized version; the "staleness" you describe is inherent to returning a snapshot and affects both.
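If you also want the unlock to survive an exception thrown inside the critical section (which @synchronized handles for you, but a plain -lock/-unlock pair does not), you can wrap it in @try/@finally. A minimal sketch, assuming the myRecursiveLock and myItemsArray properties from the question:

- (NSUInteger)numberOfItems {
    NSUInteger itemCount;
    [self.myRecursiveLock lock];
    @try {
        // Take the snapshot while the lock is held.
        itemCount = self.myItemsArray.count;
    }
    @finally {
        // Runs on every exit path, so the lock can never be left locked.
        [self.myRecursiveLock unlock];
    }
    return itemCount;
}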

how to create an autorelease object

Does this method create an autoreleased object?
- (instancetype)autoreleasePerson {
    return [[Person alloc] init];
}
I created a Command Line Tool project to test this:
int main(int argc, const char * argv[]) {
    @autoreleasepool {
        {
            [Person autoreleasePerson];
        }
        NSLog(@"did out scope");
        NSLog(@"will out autoreleasepool");
    }
    NSLog(@"did out autoreleasepool");
    return 0;
}
And the output is:
2022-02-04 23:22:23.224298+0800 MyTest[8921:4007144] did out scope
2022-02-04 23:22:23.224771+0800 MyTest[8921:4007144] will out autoreleasepool
2022-02-04 23:22:23.224876+0800 MyTest[8921:4007144] -[Person dealloc]
2022-02-04 23:22:23.224948+0800 MyTest[8921:4007144] did out autoreleasepool
The Person instance is deallocated when the autorelease pool drains!
But when I use the same Person class in my iOS app project:
- (void)viewDidLoad {
    [super viewDidLoad];
    {
        [Person autoreleasePerson];
    }
    NSLog(@"out scope");
}
The output is:
2022-02-04 23:28:13.992969+0800 MyAppTest[9023:4011490] -[Person dealloc] <Person: 0x600001fe8ff0>
2022-02-04 23:28:13.993075+0800 MyAppTest[9023:4011490] out scope
The Person instance is released as soon as it goes out of scope!
Why is this so?
It looks like on macOS the default behaviour is to autorelease return values, except when the method belongs to one of the retained-return-value families, i.e. its name begins with "alloc", "new", "copy", "mutableCopy" or "init":
+ (Person *)createPerson {
    return [Person new]; // autorelease & return
}

+ (Person *)newPerson {
    return [Person new]; // direct return
}
To control this behaviour apply a compiler attribute:
+ (Person *)createPerson __attribute__((ns_returns_retained)) {
    return [Person new]; // direct return
}

+ (Person *)newPerson __attribute__((ns_returns_not_retained)) {
    return [Person new]; // autorelease & return
}
To check whether a call to objc_autoreleaseReturnValue was added by the compiler, enable Debug -> Debug Workflow -> Always Show Disassembly, and put a breakpoint on the return line inside these methods. A call to objc_autoreleaseReturnValue should then be visible.
See ARC reference - Retained return values
Both of the results are valid. You should never assume that there is an autorelease in ARC. See the section "Unretained return values" in the ARC specification:
A method or function which returns a retainable object type but does
not return a retained value must ensure that the object is still valid
across the return boundary.
When returning from such a function or method, ARC retains the value
at the point of evaluation of the return statement, then leaves all
local scopes, and then balances out the retain while ensuring that the
value lives across the call boundary. In the worst case, this may
involve an autorelease, but callers must not assume that the value is
actually in the autorelease pool.
So maybe it's autoreleased, and maybe not (i.e. maybe ARC optimizes it out).
Here, ARC will call objc_autoreleaseReturnValue() when returning from autoreleasePerson, because +alloc returns a retained reference, but autoreleasePerson returns a non-retained reference. What objc_autoreleaseReturnValue() does is check to see if the result of the return will be passed to objc_retainAutoreleasedReturnValue() in the calling function frame. If so, it can skip both the autorelease in the called function, and the retain in the calling function (since they "cancel out"), and hand off ownership directly into a retained reference in the calling function.
objc_retainAutoreleasedReturnValue() is called when ARC will retain the result of a function call. Now, I don't know why in this case calling [Person autoreleasePerson]; will involve a retain of the result, since the result is unused. Perhaps the compiler is treating it as Person temp = [Person autoreleasePerson];, and thus retains and then releases it. This may seem unnecessary, but it is valid for ARC to do it this way. And if ARC does happen to treat it this way internally, then the optimization described above can skip both the autorelease and retain, and it will be simply released in the calling function. Maybe it's doing this in one of your cases and not the other. Who knows why? But my point is that both are valid.
See this article for a more detailed explanation.
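For intuition, here is roughly what the "worst case" in the quoted passage corresponds to under manual reference counting (a sketch for illustration only; under ARC you cannot write autorelease yourself, and the objc_autoreleaseReturnValue / objc_retainAutoreleasedReturnValue handshake may elide the autorelease/retain pair entirely):

// MRC-style equivalent of +autoreleasePerson returning a +0 reference:
+ (Person *)autoreleasePerson {
    // The object must stay valid across the return boundary,
    // so in the worst case it goes into the autorelease pool...
    return [[[Person alloc] init] autorelease];
}

// ...and a caller that wants to keep it must retain it:
Person *p = [[Person autoreleasePerson] retain];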

GCD: why use dispatch_sync when reading a shared resource

I have a question about using dispatch_sync when reading a shared resource.
I have searched several questions on Stack Overflow (such as: GCD dispatch_barrier or dispatch_sync?), but I didn't find an exact answer.
I don't understand why this code is written the way it is:
- (void)addPhoto:(Photo *)photo
{
    if (photo) { // 1
        dispatch_barrier_async(self.concurrentPhotoQueue, ^{ // 2
            [_photosArray addObject:photo]; // 3
            dispatch_async(dispatch_get_main_queue(), ^{ // 4
                [self postContentAddedNotification];
            });
        });
    }
}

- (NSArray *)photos
{
    __block NSArray *array; // 1
    dispatch_sync(self.concurrentPhotoQueue, ^{ // 2
        array = [NSArray arrayWithArray:_photosArray];
    });
    return array;
}
I know why dispatch_barrier_async is used, but I don't know why dispatch_sync is used when reading _photosArray. My guess is that, because the writes to _photosArray go through self.concurrentPhotoQueue, the reads also need to go through self.concurrentPhotoQueue, and dispatch_sync is used so that multiple reads can happen at the same time?
What will happen if I don't use dispatch_sync for the read operation? For example:
- (NSArray *)photos
{
    __block NSArray *array;
    array = [NSArray arrayWithArray:_photosArray];
    return array;
}
Thank you very much!
concurrentPhotoQueue exists to synchronize access to the photos array. Judging by the dispatch_barrier_async in addPhoto:, it is a concurrent queue: blocks submitted normally (the reads) may run in parallel with each other, while a barrier block (the write) runs exclusively, after all previously submitted blocks have finished and before any later ones start. As long as every access to _photosArray goes through this queue, no race conditions can occur.
Writing may be asynchronous because the writer generally does not need a result from the write operation. Reading, however, must be synchronous, because the caller has to wait for the result: if photos used dispatch_async, the block would store its result into array only after photos had already returned, so photos would always return nil.
Your unsynchronized version of photos can produce a race condition: _photosArray could be mutated while arrayWithArray: is copying its contents, so that the number of items to copy and the actual length of the array no longer match. That can lead to a crash inside arrayWithArray:.
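For reference, a sketch of the setup this reader/writer pattern assumes (the queue label is a placeholder; the key point is DISPATCH_QUEUE_CONCURRENT, which is what makes the barrier meaningful):

// In the class that owns _photosArray:
_photosArray = [NSMutableArray array];
// A concurrent queue: plain blocks (the reads) may run in parallel,
// while barrier blocks (the writes) run exclusively.
self.concurrentPhotoQueue = dispatch_queue_create("com.example.photoQueue",
                                                  DISPATCH_QUEUE_CONCURRENT);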

make a simple NSInteger counter thread safe

I define an NSInteger counter and update its value in a callback, as the following code shows (the callback runs on another thread):
-(void) myFunc {
    NSLog(@"initialise counter...");
    // I try to use volatile to make it thread safe
    __block volatile NSInteger counter = 0;
    [self addObserver:myObserver withCallback:^{
        // this is in another thread
        counter += 1;
        NSLog(@"counter = %d", counter);
    }];
}
I use the volatile keyword to try to make the counter thread safe; it is accessed in a callback block which runs on another thread.
When I invoke myFunc two times:
// 1st time call
[self myFunc];
// 2nd time call
[self myFunc];
the output is like this:
initialise counter...
counter = 1;
counter = 2;
counter = 3;
counter = 4;
counter = 1; // weird
initialise counter...
counter = 2; // weird
counter = 3;
counter = 1; // weird
counter = 4;
It looks like the 2nd call produces a counter with the wrong initial value, and the counter = 1 that appears before counter = 4 is also weird.
Is it because my code is not thread safe even with the volatile keyword? If so, how do I make my counter thread safe? If it is thread safe, why do I get this weird output?
For the simple case of an atomic counter, GCD is overkill. Use the OSAtomic functions (deprecated on current SDKs in favour of C11 <stdatomic.h>, but the approach is the same):
-(void) myFunc {
    static int64_t counter;
    [self addObserver:myObserver withCallback:^{
        // this is in another thread
        int64_t my_value = OSAtomicIncrement64Barrier(&counter);
        NSLog(@"counter = %lld", my_value);
    }];
}
Note that the code logs the result of the increment function rather than the static variable. The result gives you the atomic result of your specific operation. Using the static variable would give you a snapshot of the counter that's not atomic with respect to your increment operation.
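Since the OSAtomic functions have since been deprecated, here is a sketch of the same idea using C11 <stdatomic.h> (my adaptation, not part of the original answer; addObserver:withCallback: is the hypothetical API from the question):

#import <stdatomic.h>

- (void)myFunc {
    // Static, so it is shared across calls, like the static int64_t above.
    static atomic_llong counter = 0;
    [self addObserver:myObserver withCallback:^{
        // atomic_fetch_add returns the previous value, so add 1 for the new one.
        long long my_value = atomic_fetch_add(&counter, 1) + 1;
        NSLog(@"counter = %lld", my_value);
    }];
}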
First of all, using a local variable is broken. Because it is declared __block, the variable is moved to the heap when the block is copied, and each call to myFunc creates a fresh counter starting at 0 that is shared only with the blocks from that call. So the callbacks registered by your two calls increment two independent counters, which is why the sequence appears to start over.
If you have a shared resource like this counter, you have to funnel every access to it through one queue:
// global
dispatch_queue_t counterQueue;
int counter;

// initialize
counterQueue = dispatch_queue_create("com.yourname.counterQueue", DISPATCH_QUEUE_SERIAL);
counter = 0;

// Whenever you write to counter
dispatch_async(counterQueue, ^{
    counter++;
    NSLog(@"%d", counter);
});

// Or whenever you read counter
__block int lastValue;
dispatch_sync(counterQueue, ^{
    lastValue = counter;
});
// Do something with it.
There are lots of things wrong with your code. It looks like you're calling myFunc repeatedly. Each time you do, it creates a new instance of the counter.
Make your counter an instance variable or app-wide global.
A simple way to make incrementing (and logging) the counter thread-safe is to make the body of the observer use dispatch_async(dispatch_get_main_queue(), ^{ <your code here> }). That way the code that messes with the counter always runs on the main thread, even if it's called from other threads. This isn't the most performant way to handle it, but it's easy.
Otherwise you're going to need to use locks or some other concurrency technique. That requires a strong understanding of thread safety, which your post, frankly, shows that you don't have. (Not to be mean, it's one of the more difficult subjects in computing.)
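Concretely, that suggestion looks something like this (a sketch of my own; it assumes the counter has been made an instance variable _counter as suggested above, and uses the question's hypothetical addObserver:withCallback: API):

// counter is now an instance variable, not a local:
[self addObserver:myObserver withCallback:^{
    // Hop to the main queue so every mutation of _counter happens on one thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        self->_counter += 1;
        NSLog(@"counter = %ld", (long)self->_counter);
    });
}];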
EDIT:
As Avi points out in his comment, using the main queue to manage the counter would cause the other threads to block waiting on the main thread, and is not a very good solution. (It would work, but would take away just about all the performance benefit of using multiple threads)
It would be better to set up a single serial queue and make that a lazily loaded property of the object that manages this counter, protected with dispatch_once(). However, I don't have enough coffee on-board to write out that code in a forum post.
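For what it's worth, a sketch of what that could look like (my reading of the suggestion, not the answerer's code; class and queue names are made up):

#import <Foundation/Foundation.h>

@interface CounterOwner : NSObject
- (void)incrementCounter;
- (NSInteger)counterValue;
@end

@implementation CounterOwner {
    NSInteger _counter;
}

// Lazily created serial queue, protected with dispatch_once.
- (dispatch_queue_t)counterQueue {
    static dispatch_queue_t queue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        queue = dispatch_queue_create("com.example.counterQueue", DISPATCH_QUEUE_SERIAL);
    });
    return queue;
}

- (void)incrementCounter {
    // Writes can be asynchronous; the serial queue orders them.
    dispatch_async([self counterQueue], ^{
        self->_counter++;
    });
}

- (NSInteger)counterValue {
    // Reads are synchronous because the caller needs the result.
    __block NSInteger value;
    dispatch_sync([self counterQueue], ^{
        value = self->_counter;
    });
    return value;
}

@end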

Idiomatic way to execute an array of blocks

I have an object that can execute an arbitrary queue of updates. I use blocks to embody the updates. I add an update using my addUpdate: method.
- (void) addUpdate: (void(^)())block {
    [self.updates addObject: block];
}
Later, I want to execute all of them. I don't care if they run concurrently or not. The basic primitive way would seem to be something like:
for (NSUInteger index = 0; index < self.updates.count; index++) {
    void (^block)() = self.updates[index];
    block();
}
or with fast enumeration
for (void (^block)() in self.updates) {
    block();
}
Or is there something I should be doing with GCD to make this happen?
The most terse way I can think of to do this would be:
[self.updates makeObjectsPerformSelector: @selector(invoke)];
How "idiomatic" that is will probably be situation-dependent...
EDIT: This depends on the fact that blocks are implemented in the runtime as Objective-C objects, and respond to the selector -invoke. In other words, the expression block(); can also be expressed as [block invoke];. I'm not aware of any more succinct way to execute an array of blocks.
For non-concurrent execution, for-in is the way to go. For concurrent execution, you could use NSArray's -enumerateObjectsWithOptions:usingBlock: with the NSEnumerationConcurrent option, or use dispatch_apply() instead of a loop.
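A short sketch of the two concurrent options (assuming self.updates is the NSArray of blocks from the question):

// Concurrent enumeration via NSArray:
[self.updates enumerateObjectsWithOptions:NSEnumerationConcurrent
                               usingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    void (^update)(void) = obj;
    update();
}];

// Or dispatch_apply on a global concurrent queue:
NSArray *updates = [self.updates copy]; // snapshot the array first
dispatch_apply(updates.count,
               dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0),
               ^(size_t index) {
    void (^update)(void) = updates[index];
    update();
});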

What is the parameter that @synchronized() takes

I know what @synchronized() does, but...
sometimes we have:
1- @synchronized(self)
2- @synchronized([MyClass class])
3- @synchronized(myObj)
What is the difference, and what is the parameter I should pass to this block?
From the documentation:
The object passed to the @synchronized directive is a unique
identifier used to distinguish the protected block. If you execute the
preceding method in two different threads, passing a different object
for the anObj parameter on each thread, each would take its lock and
continue processing without being blocked by the other. If you pass
the same object in both cases, however, one of the threads would
acquire the lock first and the other would block until the first
thread completed the critical section.
So it depends on what you want to protect from being executed simultaneously,
and there are applications for all three cases.
For example, in
-(void)addToMyArray1:(id)obj
{
    @synchronized(self) {
        [self.myArray1 addObject:obj];
    }
}

-(void)addToMyArray2:(id)obj
{
    @synchronized(self) {
        [self.myArray2 addObject:obj];
    }
}
both @synchronized blocks cannot be executed simultaneously by two threads calling
the methods on the same instance (self), thus protecting against simultaneous access to the
arrays from different threads.
But it also prevents the block in the first method from being executed at the same time
as the block in the second method, because they use the same lock, self. Therefore, for
more fine-grained locking, you could use different locks:
-(void)addToMyArray1:(id)obj
{
    @synchronized(self.myArray1) {
        [self.myArray1 addObject:obj];
    }
}

-(void)addToMyArray2:(id)obj
{
    @synchronized(self.myArray2) {
        [self.myArray2 addObject:obj];
    }
}
Now the simultaneous access to self.myArray1 and self.myArray2 from different threads
is still protected, but independently of each other.
A lock on the class can be used to protect access to a global variable.
This is just a trivial example for demonstration purposes:
static int numberOfInstances = 0;

-(id)init
{
    self = [super init];
    if (self) {
        @synchronized([self class]) {
            numberOfInstances++;
        }
    }
    return self;
}
@synchronized should have the same object passed each time. So @synchronized(self) would work best.
