Using blocks with dispatch_async - iOS

Trying my hand at blocks in Objective-C, I ran into a strange problem.
Below I have created a block and submitted it for asynchronous execution on a global dispatch queue.
It doesn't print anything for me. When I replace dispatch_async with dispatch_sync, it works fine and prints the result immediately.
@implementation BlockTest
+ (void)blocksTest {
    __block int m = 0;
    void (^__block myblock)(void);
    myblock = ^(){
        NSLog(@"myblock %u ", ++m);
    };
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSLog(@"dispatch_async dispatch_get_global_queue");
        myblock();
    });
}
@end
int main(int argc, const char * argv[])
{
    [BlockTest blocksTest];
}
Can someone please help me with this problem ?

Your program is exiting before the block has a chance to run.
The nature of an asynchronous call like dispatch_async() is that it returns and allows the caller to continue on, possibly before the task that was submitted has completed. Probably before it has even been started.
So, +blocksTest returns to the call site in main(). main() continues to its end (by the way, without returning a value, which is bad for a non-void function). When main() returns, the whole process exits. Any queues, tasks, worker threads, etc. that GCD was managing are all torn down during process termination.
The process does not wait for all threads to exit or become idle.
You could solve this by calling dispatch_main() after the call to +blocksTest in main(). In that case, though, the program will never terminate unless you submit a task which calls exit() at some point. For example, you could put the call to exit inside the block you create in +blocksTest.
Actually, in this case, because the task would run on a background thread and not the main thread, anything which delays the immediate exit would be sufficient. For example, a call to sleep() for a second would do. You could also run the main run loop for a period of time. There's no period of time that's guaranteed to be enough that the global queue has had a chance to run your task to completion, but in practical terms, it would just need a fraction of a second.
There's a complication in that methods to run the run loop exit if there are no input sources or timers scheduled. So, you'd have to schedule a bogus source (like an NSPort). As you can tell, this is a kludgy approach if you're not otherwise using the run loop for real stuff.
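To make that concrete, here is a minimal sketch of the dispatch_main() approach, written as a plain C command-line program against the libdispatch API (printf stands in for NSLog, and the exit() call inside the block is the assumption that eventually ends the process):
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        printf("ran on a global queue\n");
        exit(0);         /* terminate once the asynchronous work is done */
    });

    dispatch_main();     /* parks the main thread; this call never returns */
}
Compile with clang; blocks and libdispatch are available out of the box on Apple platforms.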

Because when you used dispatch_async, the block may have started executing but not yet reached the point where it prints before the process exits. If you use dispatch_sync, however, it does not return until the entire block has finished executing. Remember, dispatch_sync blocks the calling thread, here the main thread, until the block completes.
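For comparison, a minimal sketch of the synchronous variant (again plain C against libdispatch; note that dispatch_sync from the main thread onto a global queue is fine, whereas dispatch_sync onto the main queue from the main thread would deadlock):
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        printf("runs before dispatch_sync returns\n");
    });
    printf("reached only after the block has finished\n");
    return 0;
}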

Related

How does wait succeed for a block that is to be executed on the next dispatch?

import XCTest
@testable import TestWait

class TestWait: XCTestCase {
    func testX() {
        guard Thread.isMainThread else {
            fatalError()
        }
        let exp = expectation(description: "x")
        DispatchQueue.main.async {
            print("block execution")
            exp.fulfill()
        }
        print("before wait")
        wait(for: [exp], timeout: 2)
        print("after wait")
    }
}
Output:
before wait
block execution
after wait
I'm trying to rationalize the sequence of the prints. This is what I think:
the test is run on the main thread
it dispatches a block asynchronously, but since the dispatch happens onto the main queue from the main thread, the block's execution has to wait until the current block is done
"before wait" is printed
we wait for the expectation to be fulfilled. This wait sleeps the current thread, i.e. the main thread, for up to 2 seconds.
So how in the world does wait succeed even though we still haven't dispatched off of the main thread? I mean "after wait" isn't printed yet! So we must still be on the main thread. Hence the "block execution" never has a chance to happen.
What is wrong with my explanation? I'm guessing it must be something to do with how wait is implemented.
The wait(for:timeout:) of XCTestCase is not like the GCD group/semaphore wait functions with which you are likely acquainted.
When you call wait(for:timeout:), much like the GCD wait calls, it will not return until the timeout expires or the expectations are resolved. But, in the case of XCTestCase and unlike the GCD variations, inside wait(for:timeout:), it is looping, repeatedly calling run(mode:before:) until the expectations are resolved or it times out. That means that although testX will not proceed until the wait is satisfied, the calls to run(mode:before:) will allow the run loop to continue to process events (including anything dispatched to that queue, including the completion handler closure). Hence no deadlock.
Probably needless to say, this is a feature of XCTestCase but is not a pattern to employ in your own code.
Regardless, for more information about how Run Loops work, see the Threading Programming Guide: Run Loops.
When in doubt, look at the source code!
https://github.com/apple/swift-corelibs-xctest/blob/ab1677255f187ad6eba20f54fc4cf425ff7399d7/Sources/XCTest/Public/Asynchronous/XCTWaiter.swift#L358
The whole waiting code is not simple but the actual wait boils down to:
_ = runLoop.run(mode: .default, before: Date(timeIntervalSinceNow: timeIntervalToRun))
You shouldn't think about waits in terms of threads but in terms of queues. By RunLoop.current.run() you basically tell the current code to start executing other items in the queue.
The wait function most likely uses NSRunLoop internally. A run loop doesn't block the main thread the way sleep functions do, even though execution of testX does not move on. The run loop still accepts events scheduled on the thread and dispatches them for execution.
UPDATE:
This is how I envision the work of a run loop. In a pseudocode:
while (currentDate < dateToStopRunning && someConditionIsTrue())
{
if (hasEventToDispatch()) //has scheduled block?
{
runTheEvent();// yes, we have a block, so we run it!
}
}
The block that you put up for async execution is picked up by the hasEventToDispatch() check and is executed. It fulfills the expectation, which is checked on the next iteration of the while loop in someConditionIsTrue(), so the while loop exits. testX continues execution and "after wait" is printed.
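A rough equivalent of that pseudocode, written as a C command-line sketch against CoreFoundation and libdispatch, under the assumption that running the main run loop is what drains the main dispatch queue (which is exactly what XCTest is relying on):
#include <CoreFoundation/CoreFoundation.h>
#include <dispatch/dispatch.h>
#include <stdio.h>

static int fulfilled = 0;   /* stands in for the fulfilled expectation */

int main(void)
{
    /* like DispatchQueue.main.async { ... exp.fulfill() } */
    dispatch_async(dispatch_get_main_queue(), ^{
        printf("block execution\n");
        fulfilled = 1;
    });

    printf("before wait\n");

    /* crude stand-in for wait(for:timeout:): keep spinning the main run loop
       until the flag is set or the 2 second timeout elapses */
    CFAbsoluteTime deadline = CFAbsoluteTimeGetCurrent() + 2.0;
    while (!fulfilled && CFAbsoluteTimeGetCurrent() < deadline) {
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.05, true);
    }

    printf("after wait\n");
    return 0;
}
The prints come out in the same order as in the test above: the dispatched block runs from inside the run-loop spin, not after it.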

iOS dispatch_get_global_queue nested inside dispatch_get_main_queue

I've inherited a codebase that's using the following structure for threading:
dispatch_async(dispatch_get_main_queue(), { () -> Void in
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), { () -> Void in
//Several AFNetworking Server calls...
})
})
I'm not very experienced with threading, so I'm trying to figure out what the intention behind this structure might be. Why grab the main queue only to immediately access another queue? Is this a common practice? For a little more context, this code is executed in a UIApplicationDidBecomeActiveNotification handler, making several necessary service calls.
Is this structure safe? Essentially my goal is to make the service calls without blocking the UI. Any help or input is appreciated.
I think this is an interesting few lines that somebody decided to write, so let's break down what's happening here (I may be breaking things down too much, sorry in advance; it just helps my own train of thought).
dispatch_async(dispatch_get_main_queue(), dispatch_block_t block)
This will put the block as a task on the main queue (which the code is already running on), then immediately continue executing the rest of the method (if he had wanted to wait for the block task to finish before continuing, he'd have made a dispatch_sync call instead).
The main queue is serial, so it will perform these tasks in exactly this order:
finish the current method (the end of the run loop pass for the current task)
execute any other tasks that may have been asynchronously added to the main queue before you dispatch_async'd your block task into the queue
execute the block task
Now block just dispatches another task to the high priority global queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), block2)
The DISPATCH_QUEUE_PRIORITY_HIGH global queue is a concurrent queue, so if you were to dispatch multiple tasks to it, it could potentially run them in parallel, depending on several system factors.
Your old co-worker wanted to make sure the networking calls in block2 were done ASAP
Because block is calling dispatch_async (which returns immediately), the block task finishes, allowing the main queue to execute the next task in the queue.
The net result so far is that block2 is queued into the high priority global queue. After it executes, and your network calls complete, callback methods will be called and yadayada
...So what is the order of what's happening?
dispatch_async(dispatch_get_main_queue(), { () -> Void in
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), { () -> Void in
//Several AFNetworking Server calls...
})
})
//moreCode
1) moreCode executes
2) block executes (adds block2 with network calls onto global queue)
3/4) Next task in main queue executes
4/3) Network task in global queue executes
The order of which would happen first may vary between 3 and 4, but that's concurrency for you :)
So unless your old coworker wanted moreCode to execute before the network calls were added to a global queue, you can go ahead and remove that initial dispatch_async onto the main queue.
Since it looks like they wanted the network calls done ASAP, there is probably no reason to delay adding those networking tasks to a global queue.
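In that case, dispatching straight to the global queue is equivalent and simpler. A sketch in C against the same libdispatch API (doTheNetworkCalls() is a hypothetical stand-in for the AFNetworking calls, and dispatch_main() just keeps this demo process alive):
#include <dispatch/dispatch.h>
#include <stdio.h>

/* hypothetical stand-in for the AFNetworking calls */
static void doTheNetworkCalls(void) { printf("network calls\n"); }

int main(void)
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        doTheNetworkCalls();                  /* runs off the main thread */
    });
    printf("moreCode runs immediately\n");    /* the caller is never blocked */
    dispatch_main();                          /* keep the process alive for the demo */
}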
Open to any input ^^. My experience involves reading all of the documentation on GCD today, then deciding to look at some GCD tagged questions

Restart a task in FreeRTOS

I have a specific task routine which performs some operations in a specific order, and these operations handle a few volatile variables. There is a specific interrupt which updates these volatile variables asynchronously. Hence, the task routine should restart if such an interrupt occurs. Normally FreeRTOS will resume the task, but this will result in wrong derived values, hence the requirement for restarting the routine. I also cannot keep the task routine inside a critical section, because I should not be missing any interrupts.
Is there a way in FreeRTOS with which I can achieve this? Like a vTaskRestart API. I could have deleted the task and re-created it, but this adds a lot of memory management complications, which I would like to avoid. Currently my only option is to add checks in the routine on a flag to see if a context switch has occurred and, if yes, restart, else continue.
Googling did not fetch any clue on this. It seems like people never faced such a problem, or maybe it's that this design is poor. In the FreeRTOS forum, the few who asked for a task restart didn't seem to have this same problem. Stack Overflow didn't have a result on freertos + task + restart, so this could be the first post with this tag combination ;)
Can someone please tell me if this is directly possible in FreeRTOS?
You can use a semaphore for this purpose. If you decide to use a semaphore, you should follow the steps below.
Firstly, you should create a binary semaphore.
The semaphore must be given in the interrupt routine with
xSemaphoreGiveFromISR( Example_xSemaphore, &xHigherPriorityTaskWoken );
And you must check for the semaphore being taken in the task:
void vExample_Task( void * pvParameters )
{
    for( ;; )
    {
        if (xSemaphoreTake( Example_xSemaphore, Example_PROCESS_TIME ) == pdTRUE)
        {
            /* the ISR has signalled fresh data: (re)start the processing here */
        }
    }
}
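Put together, a minimal sketch of the semaphore approach might look like this (Example_xSemaphore and Example_PROCESS_TIME come from the fragments above; the init and ISR function names and the #define are illustrative assumptions, not FreeRTOS APIs):
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

#define Example_PROCESS_TIME pdMS_TO_TICKS(100)   /* illustrative timeout */

static SemaphoreHandle_t Example_xSemaphore;

void vExample_Init(void)
{
    Example_xSemaphore = xSemaphoreCreateBinary();
}

/* the interrupt updates the volatile variables, then signals the task */
void vExample_ISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    /* ... update the volatile variables here ... */
    xSemaphoreGiveFromISR(Example_xSemaphore, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);  /* macro name varies by port */
}

void vExample_Task(void *pvParameters)
{
    for (;;)
    {
        if (xSemaphoreTake(Example_xSemaphore, Example_PROCESS_TIME) == pdTRUE)
        {
            /* the ISR fired: restart the derivation from the volatile data */
        }
        else
        {
            /* timeout: no interrupt arrived, continue normal processing */
        }
    }
}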
For this purpose you should use a queue and use the queue peek function to look at your volatile data.
I'm using it because I have a real-time timer, and this way I make the time available to all my tasks without any blocking.
Here is how it goes:
Declare the queue:
xQueueHandle RTC_Time_Queue;
Create the queue of 1 element:
RTC_Time_Queue = xQueueCreate( 1, sizeof(your volatile struct) );
Overwrite the queue every time your interrupt occurs:
xQueueOverwriteFromISR(RTC_Time_Queue, (void*) &time, NULL);
And from other task peek the queue:
xQueuePeek(RTC_GetReadQueue(), (void*) &TheTime, 0);
The 0 at the end of xQueuePeek means you don't want to wait if the queue is empty. The queue peek won't remove the value from the queue, so it will be there every time you peek and the code never blocks.
Also, you should avoid having a variable accessed directly from both an ISR and the RTOS code, as you may get unexpected corruption.
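Pulled together, the whole queue pattern might look roughly like this (RTC_Time_t and the init/ISR/consumer function names are illustrative assumptions; only the FreeRTOS calls themselves come from the answer above):
#include "FreeRTOS.h"
#include "queue.h"
#include <stdint.h>

/* illustrative payload; substitute your own volatile struct */
typedef struct { uint32_t seconds; } RTC_Time_t;

static QueueHandle_t RTC_Time_Queue;

void vRTC_Init(void)
{
    RTC_Time_Queue = xQueueCreate(1, sizeof(RTC_Time_t));
}

/* the interrupt overwrites the single slot with the latest time */
void vRTC_ISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    RTC_Time_t time = { 0 };
    /* ... read the hardware into 'time' ... */
    xQueueOverwriteFromISR(RTC_Time_Queue, &time, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

/* any task can peek at the latest value without blocking or consuming it */
void vConsumer_Task(void *pvParameters)
{
    RTC_Time_t TheTime;
    for (;;)
    {
        if (xQueuePeek(RTC_Time_Queue, &TheTime, 0) == pdTRUE)
        {
            /* use TheTime */
        }
        /* ... the rest of the task's work ... */
    }
}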

GCD global concurrent queue not always concurrent (iOS device)?

On an iOS device, I recently noticed some strange behavior.
Code1:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSLog(@"1111");
    });
    while (1) {
        sleep(1);
    }
});
Code2:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSLog(@"1111");
    });
    while (1) {
        sleep(0.5);
    }
});
The only difference between Code1 and Code2 is that Code1 sleeps 1 second every loop and Code2 sleeps 0.5.
If you run these two snippets on an iOS device with a single core, Code1 will print the @"1111", but Code2 won't.
I don't know why; the global queue is supposed to be concurrent. It should always print the number no matter what other blocks are doing. And if it is something due to the single-core device's limits, why would sleep(0.5) and sleep(1) make a difference?
I really want to know the reason for this.
EDIT
I found that using sleep(0.5) was my stupid mistake. The sleep() function takes an unsigned int parameter, so sleep(0.5) is equal to sleep(0). But does sleep(0) block the whole concurrent queue?
The reason is that your second sleep() is essentially a sleep(0) which means that you're now busy-looping the thread that GCD gave to you, and that's probably the same thread that would have executed the nested dispatch_async() if you had given it a chance to do anything else, which the first example does. During the one second sleep, GCD sees that the thread is blocked and creates a new one to service the outstanding queued request(s). In the second example, you're essentially computationally starving the enqueued work - GCD is not smart enough to know that a thread has been locked into an infinite loop, and you're not giving the system enough work to justify (in GCD's eyes) the creation of another thread, so... You've essentially discovered a bug in GCD's low-threshold of work logic, I think.
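If a half-second pause per iteration was the intent, usleep() (declared in <unistd.h>) takes an integer number of microseconds, so the worker thread really does yield and GCD can hand the nested block to another worker. A sketch of the corrected loop, replacing the while loop in Code2:
while (1) {
    usleep(500000);   /* 500,000 microseconds = 0.5 s; sleep(0.5) truncates to sleep(0) */
}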
I just checked: both the 1st and 2nd snippets print "1111" for me.
Note that the nesting of dispatch_async you use won't gain you anything, because you set the same priority (DISPATCH_QUEUE_PRIORITY_DEFAULT) everywhere: both the NSLog(@"1111") task and the while (1) { sleep(0.5); } task are added to the same target queue. As a result, I can assume that in the first case the block with the while loop is executed first, and because it never finishes, the next task in the queue (the NSLog(...)) is never called.
You can try using different priorities for the queues (e.g. DISPATCH_QUEUE_PRIORITY_LOW).

pthreads : pthread_cond_signal() from within critical section

I have the following piece of code in thread A, which blocks using pthread_cond_wait()
pthread_mutex_lock(&my_lock);
if ( false == testCondition )
    pthread_cond_wait(&my_wait,&my_lock);
pthread_mutex_unlock(&my_lock);
I have the following piece of code in thread B, which signals thread A
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_cond_signal(&my_wait);
pthread_mutex_unlock(&my_lock);
Provided there are no other threads, would it make any difference if pthread_cond_signal(&my_wait) is moved out of the critical section block as shown below ?
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_mutex_unlock(&my_lock);
pthread_cond_signal(&my_wait);
My recommendation is typically to keep the pthread_cond_signal() call inside the locked region, but probably not for the reasons you think.
In most cases, it doesn't really matter whether you call pthread_cond_signal() with the lock held or not. Ben is right that some schedulers may force a context switch when the lock is released if there is another thread waiting, so your thread may get switched away before it can call pthread_cond_signal(). On the other hand, some schedulers will run the waiting thread as soon as you call pthread_cond_signal(), so if you call it with the lock held, the waiting thread will wake up and then go right back to sleep (because it's now blocked on the mutex) until the signaling thread unlocks it. The exact behavior is highly implementation-specific and may change between operating system versions, so it isn't anything you can rely on.
But, all of this looks past what should be your primary concern, which is the readability and correctness of your code. You're not likely to see any real-world performance benefit from this kind of micro-optimization (remember the first rule of optimization: profile first, optimize second). However, it's easier to think about the control flow if you know that the set of waiting threads can't change between the point where you set the condition and send the signal. Otherwise, you have to think about things like "what if thread A sets testCondition=TRUE and releases the lock, and then thread B runs and sees that testCondition is true, so it skips the pthread_cond_wait() and goes on to reset testCondition to FALSE, and then finally thread A runs and calls pthread_cond_signal(), which wakes up thread C because thread B wasn't actually waiting, but testCondition isn't true anymore". This is confusing and can lead to hard-to-diagnose race conditions in your code. For that reason, I think it's better to signal with the lock held; that way, you know that setting the condition and sending the signal are atomic with respect to each other.
On a related note, the way you are calling pthread_cond_wait() is incorrect. It's possible (although rare) for pthread_cond_wait() to return without the condition variable actually being signaled, and there are other cases (for example, the race I described above) where a signal could end up awakening a thread even though the condition isn't true. In order to be safe, you need to put the pthread_cond_wait() call inside a while() loop that tests the condition, so that you call back into pthread_cond_wait() if the condition isn't satisfied after you reacquire the lock. In your example it would look like this:
pthread_mutex_lock(&my_lock);
while ( false == testCondition ) {
    pthread_cond_wait(&my_wait,&my_lock);
}
pthread_mutex_unlock(&my_lock);
(I also corrected what was probably a typo in your original example, which is the use of my_mutex for the pthread_cond_wait() call instead of my_lock.)
The thread waiting on the condition variable should keep the mutex locked, and the other thread should always signal with the mutex locked. This way, you know the other thread is waiting on the condition when you send the signal. Otherwise, it's possible the waiting thread won't see the condition being signaled and will block indefinitely waiting on it.
Condition variables are typically used like this:
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int go = 0;

void *threadproc(void *data) {
    printf("Sending go signal\n");
    pthread_mutex_lock(&lock);
    go = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(int argc, char *argv[]) {
    pthread_t thread;
    pthread_mutex_lock(&lock);
    printf("Waiting for signal to go\n");
    pthread_create(&thread, NULL, &threadproc, NULL);
    while (!go) {
        pthread_cond_wait(&cond, &lock);
    }
    printf("We're allowed to go now!\n");
    pthread_mutex_unlock(&lock);
    pthread_join(thread, NULL);
    return 0;
}
This is valid:
void *threadproc(void *data) {
    printf("Sending go signal\n");
    go = 1;
    pthread_cond_signal(&cond);
    return NULL;
}
However, consider what's happening in main
while (!go) {
    /* Suppose a long delay happens here, during which the signal is sent */
    pthread_cond_wait(&cond, &lock);
}
If the delay described by that comment happens, pthread_cond_wait will be left waiting—possibly forever. This is why you want to signal with the mutex locked.
Both are correct; however, for reactivity reasons, most schedulers hand control to another thread when a lock is released. If you don't signal before unlocking, your waiting thread A is not in the ready list and thus will not be scheduled until B is scheduled again and calls pthread_cond_signal().
The Open Group Base Specifications Issue 7 IEEE Std 1003.1, 2013 Edition (which as far as I can tell is the official pthread specification) says this on the matter:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be
called by a thread whether or not it currently owns the mutex that
threads calling pthread_cond_wait() or pthread_cond_timedwait() have
associated with the condition variable during their waits; however, if
predictable scheduling behavior is required, then that mutex shall be
locked by the thread calling pthread_cond_broadcast() or
pthread_cond_signal().
To add my personal experience, I was working on an application that had code where the condition variable was destroyed (and the memory containing it freed) by the thread that was woken up. We found that on a multi-core device (an iPad Air 2) pthread_cond_signal() could actually crash sometimes if it was outside the mutex lock, as the waiter woke up and destroyed the condition variable before the pthread_cond_signal() had completed. This was quite unexpected.
So I would definitely veer towards the 'signal inside the lock' version, it appears to be safer.
Here is a nice write-up about condition variables: Techniques for Improving the Scalability of Applications Using POSIX Thread Condition Variables (look under the 'Avoiding the Mutex Contention' section, point 7).
It says that the second version may have some performance benefits, because it makes it possible for the thread calling pthread_cond_wait to wait less frequently.

Resources