Need explanation for an excerpt from Apple's documentation on NSRunLoop - ios

Apple's official documentation is sometimes difficult to understand, especially for non-native speakers. This is an excerpt from Anatomy of NSRunLoop:
A run loop is very much like its name sounds. It is a loop your thread enters and uses to run event handlers in response to incoming events. Your code provides the control statements used to implement the actual loop portion of the run loop—in other words, your code provides the while or for loop that drives the run loop. Within your loop, you use a run loop object to "run" the event-processing code that receives events and calls the installed handlers.
This confuses me. My code never provides while or for loops, even for non-main threads. What is meant here? Can anyone explain?

Keep reading until the Using Run Loop Objects section; Apple's code samples there do show control statements like while loops.
Listing 3-1
NSInteger loopCount = 10;
do
{
    // Run the run loop 10 times to let the timer fire.
    [myRunLoop runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1]];
    loopCount--;
}
while (loopCount);
Listing 3-2
do
{
    // Start the run loop but return after each source is handled.
    SInt32 result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10, YES);
    // If a source explicitly stopped the run loop, or if there are no
    // sources or timers, go ahead and exit.
    if ((result == kCFRunLoopRunStopped) || (result == kCFRunLoopRunFinished))
        done = YES;
    // Check for any other exit conditions here and set the
    // done variable as needed.
}
while (!done);
The intended way to use NSRunLoop does require you to invoke the next run, again and again until a certain condition is met.
But if you start your run loop with -[NSRunLoop run], it runs indefinitely without help. That’s what the main thread does.
In case you're wondering why Apple lets (or wants) you to control every loop: NeXTSTEP shipped in the 80s, when every CPU cycle counted. Methods like -[NSRunLoop runMode:beforeDate:] let you fine-tune the frequency and behaviour of your run loops down to every single run.

Oh, you do run a loop on the main thread; you just don't know it.
Set a breakpoint on an action method and look at the stack trace. There will be something like:
#9 0x00007fff912eaa29 in -[NSApplication run] ()
That's the loop.
On another thread you very often do not need an instance of NSRunLoop. Its primary ability is to receive events and to dispatch them. But on an additional thread you usually want to run calculations straight through. To put a term on it: additional threads are usually not event-driven.
So you need a run loop (and have to run it) only rarely, typically when you have networking or file access that is dispatched using a run loop. In such cases it is a common mistake to not run the thread's run loop.
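As a minimal sketch of that rare case (the method name, the keep-alive port, and the one-second timeout are my own choices, not from the answer), a worker thread that needs run-loop-based callbacks has to schedule at least one input source and then actually run the loop:

- (void)workerThreadMain {
    @autoreleasepool {
        NSRunLoop *runLoop = [NSRunLoop currentRunLoop];

        // Without at least one input source or timer, runMode:beforeDate:
        // returns immediately; a dummy port keeps the loop alive until real
        // sources (streams, timers, ports) are scheduled.
        [runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];

        // Forgetting this part, actually running the loop, is the common
        // mistake mentioned above.
        while (![[NSThread currentThread] isCancelled]) {
            @autoreleasepool {
                [runLoop runMode:NSDefaultRunLoopMode
                      beforeDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
            }
        }
    }
}

// Started elsewhere, for example with:
// [NSThread detachNewThreadSelector:@selector(workerThreadMain) toTarget:self withObject:nil];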

Related

iOS: Handling OpenGL code running on background threads during App Transition

I am working on an iOS application that, say on a button click, launches several threads, each executing a piece of OpenGL code. These threads either have a different EAGLContext set on them, or, if they use the same EAGLContext, they are synchronised (i.e. two threads don't set the same EAGLContext in parallel).
Now suppose the app goes into background. As per Apple's documentation, we should stop all the OpenGL calls in applicationWillResignActive: callback so that by the time applicationDidEnterBackground: is called, no further GL calls are made.
I am using dispatch_queues to create background threads. For example:
__block Byte* renderedData; // some memory already allocated
dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];
    glViewport(...);
    glBindFramebuffer(...);
    glClear(...);
    glDrawArrays(...);
    glReadPixels(...); // read in renderedData
});
// use renderedData for something else
My question is: how do I handle applicationWillResignActive: so that any such background GL calls can not just be stopped, but can also resume on applicationDidBecomeActive:? Should I wait for currently running blocks to finish before returning from applicationWillResignActive:? Or should I just suspend glProcessingQueue and return?
I have also read that the case is similar when the app is interrupted in other ways, like an alert being displayed, a phone call, etc.
I can have multiple such threads at any point in time, invoked by possibly multiple ViewControllers, so I am looking for a scalable solution or design pattern.
The way I see it, you need to either pause a thread or kill it.
If you kill it, you need to ensure all resources are released, which most likely means calling OpenGL again. In that case it might actually be better to simply wait for the block to finish execution. That means the block must not take too long to finish, which is impossible to guarantee, and since you have multiple contexts and threads this may realistically present an issue.
So pausing seems better. I am not sure if there is a direct API to pause a thread, but you can make it wait. Maybe a system similar to this one can help.
The linked example seems to handle exactly what you would want; it already checks the current thread and locks that one. I guess you could pack that into some tool as a static method or a C function and wherever you are confident you can pause the thread you would simply do something like:
dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];
    [ThreadManager pauseCurrentThreadIfNeeded];
    glViewport(...);
    glBindFramebuffer(...);
    [ThreadManager pauseCurrentThreadIfNeeded];
    glClear(...);
    glDrawArrays(...);
    glReadPixels(...); // read in renderedData
    [ThreadManager pauseCurrentThreadIfNeeded];
});
You might still have an issue with the main thread if it is used. You might want to skip the pause on that one, otherwise your system may simply never wake up again (not sure though, try it).
So now the interface of your ThreadManager would look something like this:
+ (void)pause {
    __threadsPaused = YES;
}

+ (void)resume {
    __threadsPaused = NO;
}

+ (void)pauseCurrentThreadIfNeeded {
    if (__threadsPaused) {
        // TODO: insert code for locking until __threadsPaused becomes false
    }
}
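One way to fill in that TODO (this is only a sketch of my own, not part of the original answer; the condition object name is made up) is to back the flag with an NSCondition that paused threads wait on until resume is called:

// Hypothetical ThreadManager internals using NSCondition (names are illustrative).
static volatile BOOL __threadsPaused = NO;
static NSCondition *__pauseCondition = nil;

+ (void)initialize {
    if (self == [ThreadManager class]) {
        __pauseCondition = [[NSCondition alloc] init];
    }
}

+ (void)pause {
    [__pauseCondition lock];
    __threadsPaused = YES;
    [__pauseCondition unlock];
}

+ (void)resume {
    [__pauseCondition lock];
    __threadsPaused = NO;
    [__pauseCondition broadcast]; // wake every thread blocked in pauseCurrentThreadIfNeeded
    [__pauseCondition unlock];
}

+ (void)pauseCurrentThreadIfNeeded {
    [__pauseCondition lock];
    while (__threadsPaused) {
        [__pauseCondition wait]; // blocks the calling thread until resume broadcasts
    }
    [__pauseCondition unlock];
}

You would then call pause from applicationWillResignActive: and resume from applicationDidBecomeActive:, keeping in mind the caveat above about not pausing the main thread.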
Let us know what you find out.

Restart a task in FreeRTOS

I have a specific task routine which performs some operations in a specific order, and these operations handle a few volatile variables. There is a specific interrupt which updates these volatile variables asynchronously. Hence, the task routine should restart if such an interrupt occurs. Normally FreeRTOS will just resume the task, but this will result in wrong derived values, hence the requirement for restarting the routine. I also cannot keep the task routine inside a critical section, because I should not be missing any interrupts.
Is there a way in FreeRTOS with which I can achieve this, like a vTaskRestart API? I could delete the task and re-create it, but this adds a lot of memory management complications which I would like to avoid. Currently my only option is to add checks in the routine on a flag to see if a context switch has occurred and, if yes, restart, else continue.
Googling did not fetch any clue on this. It seems people have never faced such a problem, or maybe it's that this design is poor. In the FreeRTOS forum, the few who asked for a task restart didn't seem to have this same problem. Stack Overflow didn't have a result on freertos + task + restart, so this could be the first post with this tag combination ;)
Can someone please tell me if this is directly possible in FreeRTOS?
You can use a semaphore for this purpose. If you decide to use a semaphore, you should do the steps below.
Firstly, you should create a binary semaphore.
The semaphore must be given in the interrupt routine with:
xSemaphoreGiveFromISR( Example_xSemaphore, &xHigherPriorityTaskWoken );
And you must try to take the semaphore in the task:
void vExample_Task( void * pvParameters )
{
    for( ;; )
    {
        if (xSemaphoreTake( Example_xSemaphore, Example_PROCESS_TIME ) == pdTRUE)
        {
            /* the interrupt fired: restart the processing from the updated values */
        }
    }
}
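Putting those steps together, a rough sketch of the whole pattern could look like this (the Example_* names come from the snippets above; the init/ISR function names and the Example_PROCESS_TIME definition are placeholders, and the exact handle type and yield macro can vary between FreeRTOS versions and ports):

#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

#define Example_PROCESS_TIME pdMS_TO_TICKS(100) /* placeholder timeout */

static SemaphoreHandle_t Example_xSemaphore;

void vExample_Init(void)
{
    /* Step 1: create the binary semaphore once, before the task starts. */
    Example_xSemaphore = xSemaphoreCreateBinary();
}

void vExample_ISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* Step 2: update the volatile variables here, then signal the task. */
    xSemaphoreGiveFromISR(Example_xSemaphore, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

void vExample_Task(void *pvParameters)
{
    for (;;)
    {
        /* Step 3: block until the ISR gives the semaphore (or the timeout
           expires), then restart the derivation from the fresh values. */
        if (xSemaphoreTake(Example_xSemaphore, Example_PROCESS_TIME) == pdTRUE)
        {
            /* recompute the derived values here */
        }
    }
}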
For this purpose you should use a queue and the queue peek function to look at your volatile data.
I'm using it because I have a real-time timer, and this way I make the time available to all my tasks without any blocking.
Here is how it goes:
Declare the queue:
xQueueHandle RTC_Time_Queue;
Create the queue of 1 element:
RTC_Time_Queue = xQueueCreate( 1, sizeof(your volatile struct) );
Overwrite the queue every time your interrupt occurs (note that xQueueOverwriteFromISR also takes a pxHigherPriorityTaskWoken argument):
xQueueOverwriteFromISR(RTC_Time_Queue, (void*) &time, &xHigherPriorityTaskWoken);
And from other task peek the queue:
xQueuePeek(RTC_GetReadQueue(), (void*) &TheTime, 0);
The 0 at the end of xQueuePeek means you don't want to wait if the queue is empty. Peeking won't delete the value from the queue, so it will be there every time you peek and the call will never block.
Also, you should avoid having a variable accessed directly from both the ISR and the RTOS tasks, as you may get unexpected corruption.
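As a rough sketch of how those snippets fit together (the struct contents and the init/ISR/task function names are placeholders of mine, and the yield macro can differ per port):

#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

typedef struct {
    uint32_t hours;
    uint32_t minutes;
    uint32_t seconds;
} RTC_Time_t; /* placeholder for "your volatile struct" */

static xQueueHandle RTC_Time_Queue;

void RTC_Queue_Init(void)
{
    /* A length-1 queue behaves like a mailbox holding only the latest value. */
    RTC_Time_Queue = xQueueCreate(1, sizeof(RTC_Time_t));
}

void RTC_ISR_Handler(void)
{
    RTC_Time_t time;
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* ... fill 'time' from the hardware RTC here ... */

    /* Overwrite never blocks: the newest value simply replaces the old one. */
    xQueueOverwriteFromISR(RTC_Time_Queue, &time, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

void vConsumer_Task(void *pvParameters)
{
    RTC_Time_t TheTime;

    for (;;)
    {
        /* Peek copies the value without removing it, so any number of tasks
           can read it; the 0 timeout means "do not block if the queue is empty". */
        xQueuePeek(RTC_Time_Queue, &TheTime, 0);

        /* ... use TheTime, then do the rest of the task's work ... */
    }
}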

NSRunLoop's runMode:beforeDate: - the correct approach for setting the "beforeDate"

I have a doubt regarding the correct usage of NSRunLoop's runMode:beforeDate: method.
I have a secondary, background thread that processes delegate messages as they are received.
Basically, I have process intensive logic that needs to be executed on a background thread.
So, I have 2 objects, ObjectA and AnotherObjectB.
ObjectA initializes AnotherObjectB and tells AnotherObjectB to start doing its thing. AnotherObjectB works asynchronously, so ObjectA acts as AnotherObjectB's delegate. Now, the code that needs to be executed in the delegate messages needs to be done on a background thread. So, for ObjectA, I created an NSRunLoop, and have done something like this to set the run loop up:
do {
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]];
} while (aCondition);
Where aCondition is set somewhere in the "completion delegate message".
I'm getting all my delegate messages and they are being processed on that background thread.
My question being: is this the correct approach?
The reason I ask this is because [NSDate distantFuture] is a date spanning a couple of centuries. So basically, the run loop won't time out until "distantFuture" - I definitely won't be using my Mac or this version of iOS till then. >_<
However, I don't want the run loop to run that long. I want the run loop to get done as soon as my last delegate message is called, so that it can properly exit.
Also, I know that I can set repeating timers, with shorter intervals, but that is not the most efficient way since it's akin to polling. Instead, I want the thread to work only when the delegate messages arrive, and sleep when there are no messages. So, is the approach I'm taking the correct approach, or is there some other way of doing it. I read the docs and the guide, and I set this up based off what I understood from reading them.
However, when not completely sure, best to ask this awesome community for an opinion and confirmation.
So, thanks in advance for all your help!
Cheers!
The code is in the docs:
If you want the run loop to terminate, you shouldn't use this method. Instead, use one of the other run methods and also check other arbitrary conditions of your own, in a loop. A simple example would be:
BOOL shouldKeepRunning = YES; // global
NSRunLoop *theRL = [NSRunLoop currentRunLoop];
while (shouldKeepRunning && [theRL runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]]);
where shouldKeepRunning is set to NO somewhere else in the program.
After your last "message", un-set shouldKeepRunning (on the same thread as the run loop!) and it should finish. The key idea here is that you need to send the run loop an event so it knows to stop.
(Also note that NSRunLoop is not thread-safe; I think you're supposed to use -[NSObject performSelector:onThread:...].)
Alternatively, if it works for your purposes, use a background dispatch queue/NSOperationQueue (but note that code which does this shouldn't touch the run loop; things like starting an NSURLConnection from a dispatch queue/NSOperationQueue worker thread will likely cause problems).
The reason I ask this is because [NSDate distantFuture] is a date spanning a couple of centuries.
The method runMode:beforeDate: will
return NO immediately if there are no sources scheduled on the RunLoop.
return YES whenever an event has been processed.
return YES when the limitDate has been reached.
So even if the limitDate is very high, the method will return after every processed event; it will not keep running until the limitDate has been hit. It would only wait that long if no event is ever processed. limitDate is thus like a timeout after which the method gives up waiting for an event to take place. But if you want multiple events in a row to be handled, you must call this method over and over again, hence the loop.
Think of fetching packets with timeout from a network socket. The fetch call returns when a packet arrives or when the timeout has been hit. Yet if you want to process the next packet, you must call the fetch method again.
The following is unfortunately pretty bad code for two reasons:
// BAD CODE! DON'T USE!
NSDate * distFuture = NSDate.distantFuture;
NSRunLoop * runLoop = NSRunLoop.currentRunLoop;
while (keepRunning) {
    [runLoop runMode:NSDefaultRunLoopMode beforeDate:distFuture];
}
If no RunLoopSource is yet scheduled on the RunLoop, it will waste 100% CPU time, as the method will return at once just to be called again, as fast as the CPU is able to do so.
The AutoreleasePool is never renewed. Objects that are autoreleased (and even ARC does that) are added to the current pool but are never released, as the pool is never cleared, so memory consumption will rise as long as this loop is running. How much depends on what your RunLoopSources are actually doing and how they are doing it.
A better version would be:
// USE THIS INSTEAD
NSDate * distFuture = NSDate.distantFuture;
NSRunLoop * runLoop = NSRunLoop.currentRunLoop;
while (keepRunning) @autoreleasepool {
    BOOL didRun = [runLoop runMode:NSDefaultRunLoopMode beforeDate:distFuture];
    if (!didRun) usleep(1000);
}
It solves both problems:
An AutoreleasePool is created the first time the loop runs and after every run it is cleared, so memory consumption will not raise over time.
In case the RunLoop didn't really run at all, the current thread sleeps for one millisecond before trying again. This way the CPU load will be pretty low, since as long as no RunLoopSource is set, this code only runs once every millisecond.
To reliably terminate the loop, you need to do two things:
Set keepRunning to NO. Note that you must declare keepRunning as volatile! If you don't do that, the compiler may optimize the check away and turn your loop into an endless loop, since it sees no code in the current execution context that would ever change the variable, and it cannot know that some other code somewhere else (maybe on another thread) may change it in the background. This is why you usually need a memory barrier for these cases (a lock, a mutex, a semaphore, or an atomic operation), as compilers don't optimize across those barriers. In this simple case, however, using volatile is enough, as BOOL is always atomic in Obj-C, and volatile tells the compiler "always check the value of this variable, as it may change behind your back without you seeing that change at compile time".
If the variable has been changed from another thread and not from within an event handler, your RunLoop thread may be sleeping inside the runMode:beforeDate: call, waiting for a RunLoopSource event to take place which may take any amount of time or never happen at all anymore. To force this call to return immediately, just schedule an event after changing the variable. This can be done with performSelector:onThread:withObject:waitUntilDone: as shown below. Performing this selector counts as a RunLoop event and the method will return after the selector was called, see that the variable has changed and break out of the loop.
volatile BOOL keepRunning;

- (void)wakeMeUpBeforeYouGoGo {
    // Jitterbug
}

// ... In a Galaxy Far, Far Away ...
keepRunning = NO;
[self performSelector:@selector(wakeMeUpBeforeYouGoGo)
             onThread:runLoopThread withObject:nil waitUntilDone:NO];

How to open/create UIManagedDocument synchronously?

As mentioned in the title, I would like to open a UIManagedDocument synchronously, i.e., I would like my execution to wait until the open completes. I'm opening the document on the main thread only.
The current API to open it uses a block:
- (void)openWithCompletionHandler:(void (^)(BOOL success))completionHandler;
The lock-based approach mentioned at the link works well on threads other than the main thread. If I use locks on the main thread, it freezes the app.
Any advice would be helpful. Thanks.
First, let me say that I strongly discourage doing this. Your main thread just waits, and does nothing while waiting for the call to complete. Under certain circumstances, the system will kill your app if it does not respond on the main thread. This is highly unusual.
I guess you should be the one to decide when/how you should use various programming tools.
This one does exactly what you want... block the main thread until the completion handler runs. Again, I do not recommend doing this, but hey, it's a tool, and I'll take the NRA stance: guns don't kill people...
__block BOOL waitingOnCompletionHandler = YES;
[object doSomethingWithCompletionHandler:^{
    // Do your work in the completion handler block and when done...
    waitingOnCompletionHandler = NO;
}];
while (waitingOnCompletionHandler) {
    usleep(USEC_PER_SEC/10);
}
Another option is to execute the run loop. However, this isn't really synchronous, because the run loop will actually process other events. I've used this technique in some unit tests. It is similar to the above, but still allows other stuff to happen on the main thread (for example, the completion handler may invoke an operation on the main queue, which may not get executed in the previous method).
__block BOOL waitingOnCompletionHandler = YES;
[object doSomethingWithCompletionHandler:^{
    // Do your work in the completion handler block and when done...
    waitingOnCompletionHandler = NO;
}];
while (waitingOnCompletionHandler) {
    NSDate *futureTime = [NSDate dateWithTimeIntervalSinceNow:0.1];
    [[NSRunLoop currentRunLoop] runUntilDate:futureTime];
}
There are other methods as well, but these are simple, easy to understand, and stick out like a sore thumb so it's easy to know you are doing something unorthodox.
I should also note that I've never encountered a good reason to do this in anything other than tests. You can deadlock your code, and not returning from the main run loop is a slippery slope (even if you are manually executing it yourself - note that what called you is still waiting and running the loop again could re-enter that code, or cause some other issue).
Asynchronous APIs are GREAT. The condition variable approach or using barriers for concurrent queues are reasonable ways to synchronize when using other threads. Synchronizing the main thread is the opposite of what you should be doing.
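If you do move the waiting off the main thread, a minimal sketch of that kind of synchronization with a dispatch semaphore might look like the following (the document variable and the choice of queues are assumptions of mine, not from the original answer):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_semaphore_t opened = dispatch_semaphore_create(0);
    __block BOOL success = NO;

    // Kick off the open on the main queue and leave the main thread free,
    // since (as the question notes) blocking the main thread freezes the app.
    dispatch_async(dispatch_get_main_queue(), ^{
        [document openWithCompletionHandler:^(BOOL ok) {
            success = ok;
            dispatch_semaphore_signal(opened); // wake the waiting background queue
        }];
    });

    // Block *this background queue* until the completion handler has fired.
    dispatch_semaphore_wait(opened, DISPATCH_TIME_FOREVER);

    // The open has completed (successfully or not); continue the heavy work here.
});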
Good luck... and make sure you register your guns, and always carry your concealed weapons permit. This is certainly the wild west. There's always a John Wesley Hardin out there looking for a gun fight.

trouble reading from __global memory after atom_inc in OpenCL

OpenCL doesn't have a global barrier that will stop all threads, so I'm trying to create a work around with the following code:
void barrier(__global uint* scratch) {
    uint nThreads = get_global_size(0);
    atom_inc(scratch);
    /* this loop never terminates */
    while (scratch[0] < nThreads) {
        continue;
    }
}
The idea is that each thread loops until all of them increment that one piece of memory.
However, the value read from scratch[0] never changes for the threads once it's been read, and it loops forever. I know it's being incremented because it's the correct value when I read it back to the host.
Is the global memory being locally cached? What's going on here?
Found the problem: the order in which work groups are executed is implementation defined. This means that some threads might start only after others have finished.
In the code I gave, the work groups that are started first will loop forever waiting on the others to hit the 'barrier'. And the work groups that would be started later won't ever start, because they're waiting for the first ones to finish.
If the implementation (I'm on a Radeon 5750, using Stream SDK 2.2) executes all work groups concurrently, then it probably wouldn't be an issue. But that's not the case for my setup.
