Asynchronous NSStream I/O with GCD

I am working with an external device that I receive data from. I want to handle its data read/write queue asynchronously, in a thread.
I've got it mostly working: There is a class that simply manages the two streams, using the NSStreamDelegate to respond to incoming data, as well as responding to NSStreamEventHasSpaceAvailable for sending out data that's waiting in a buffer after having failed to be sent earlier.
This class, let's call it SerialIOStream, does not know about threads or GCD queues. Instead, its user, let's call it DeviceCommunicator, uses a GCD queue in which it initializes the SerialIOStream class (which in turn creates and opens the streams, scheduling them in the current runloop):
ioQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(ioQueue, ^{
    ioStreams = [[SerialIOStream alloc] initWithPath:@"/dev/tty.mydevice"];
    [[NSRunLoop currentRunLoop] run];
});
That way, SerialIOStream's stream:handleEvent: method apparently runs on that GCD queue.
However, this causes some problems. I believe I run into concurrency issues, up to and including crashes, mainly at the point of feeding pending data to the output stream. There's a critical part in the code where I pass the buffered output data to the stream, see how much data was actually accepted by the stream, and then remove that part from my buffer:
NSInteger n = self.dataToWrite.length;
if (n > 0 && stream.hasSpaceAvailable) {
    NSInteger bytesWritten = [stream write:self.dataToWrite.bytes maxLength:n];
    if (bytesWritten > 0) {
        [self.dataToWrite replaceBytesInRange:NSMakeRange(0, bytesWritten) withBytes:NULL length:0];
    }
}
The above code can get called from two places:
From the user (DeviceCommunicator)
From the local stream:handleEvent: method, after being told that there's space in the output stream.
Those may be (well, surely are) running on separate threads, and therefore I need to make sure they do not run this code concurrently.
I thought I'd solve this by using the following code in DeviceCommunicator when sending new data out:
dispatch_async(ioQueue, ^{
    [ioStreams writeData:data];
});
(writeData adds the data to dataToWrite, see above, and then runs the above code that sends it to the stream.)
However, that doesn't work, apparently because ioQueue is a concurrent queue, which may decide to use any available thread, and therefore leads to a race condition when writeData gets called by the DeviceCommunicator while there's also a call to it from stream:handleEvent:, on separate threads.
So, I guess I am mixing expectations of threads (which I'm a bit more familiar with) into my apparent misunderstandings with GCD queues.
How do I solve this properly?
I could add an NSLock, protecting the writeData method with it, and I believe that would solve the issue in that place. But I am not so sure that that's how GCD is supposed to be used - I get the impression that'd be a kludge.
Shall I rather make a separate class, using its own serial queue, for accessing and modifying the dataToWrite buffer, perhaps?
I am still trying to grasp the patterns that are involved with this. Somehow, it looks like a classic producer / consumer pattern, but on two levels, and I'm not doing this right.
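(For what it's worth, the serial-queue idea floated in the question might look roughly like the sketch below. The queue label and the flushPendingWrites method are invented here; flushPendingWrites stands for the write/trim code shown above.)
// Hypothetical sketch of the serial-queue approach; names are assumptions.
// In SerialIOStream's -init:
_writeQueue = dispatch_queue_create("com.example.serialio.write", DISPATCH_QUEUE_SERIAL);

// Every touch of dataToWrite is funnelled through _writeQueue:
- (void)writeData:(NSData *)data {
    dispatch_async(_writeQueue, ^{
        [self.dataToWrite appendData:data];
        [self flushPendingWrites];   // the write/trim code from the snippet above
    });
}

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    if (eventCode == NSStreamEventHasSpaceAvailable) {
        dispatch_async(_writeQueue, ^{
            [self flushPendingWrites];
        });
    }
}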

Long story short: Don't cross the streams! (haha)
NSStream is a RunLoop-based abstraction (which is to say that it intends to do its work cooperatively on an NSRunLoop, an approach which pre-dates GCD). If you're primarily using GCD to support concurrency in the rest of your code, then NSStream is not an ideal choice for doing I/O. GCD provides its own API for managing I/O. See the section entitled "Managing Dispatch I/O" on this page.
If you want to continue to use NSStream, you can either do so by scheduling your NSStreams on the main thread RunLoop or you can start a dedicated background thread, schedule it on a RunLoop over there, and then marshal your data back and forth between that thread and your GCD queues. (...but don't do that; just bite the bullet and use dispatch_io.)
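For illustration only, a minimal dispatch_io sketch along those lines might look like the following (the device path is taken from the question; the queue label, low-water mark, and error handling are assumptions):
#include <fcntl.h>

dispatch_queue_t ioQueue = dispatch_queue_create("com.example.device-io", DISPATCH_QUEUE_SERIAL);
dispatch_fd_t fd = open("/dev/tty.mydevice", O_RDWR | O_NONBLOCK);

dispatch_io_t channel = dispatch_io_create(DISPATCH_IO_STREAM, fd, ioQueue, ^(int error) {
    close(fd);   // cleanup handler: runs when the channel is closed or creation fails
});
dispatch_io_set_low_water(channel, 1);   // deliver bytes as soon as any arrive

// Reading: the handler runs on ioQueue every time data comes in.
dispatch_io_read(channel, 0, SIZE_MAX, ioQueue, ^(bool done, dispatch_data_t data, int error) {
    if (data) {
        dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset, const void *buffer, size_t size) {
            // ...parse `size` bytes at `buffer`...
            return true;
        });
    }
});

// Writing: hand the bytes to the channel and let it deal with partial writes.
NSData *payload = [@"hello" dataUsingEncoding:NSUTF8StringEncoding];
dispatch_data_t outData = dispatch_data_create(payload.bytes, payload.length, ioQueue, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
dispatch_io_write(channel, 0, outData, ioQueue, ^(bool done, dispatch_data_t remaining, int error) {
    if (error) NSLog(@"write failed: %d", error);
});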

Related

iOS: Handling OpenGL code running on background threads during App Transition

I am working on an iOS application that, say on a button click, launches several threads, each executing a piece of Open GL code. These threads either have a different EAGLContext set on them, or if they use same EAGLContext, then they are synchronised (i.e. 2 threads don't set same EAGLContext in parallel).
Now suppose the app goes into background. As per Apple's documentation, we should stop all the OpenGL calls in applicationWillResignActive: callback so that by the time applicationDidEnterBackground: is called, no further GL calls are made.
I am using dispatch queues to create background threads. For example:
__block Byte *renderedData; // some memory already allocated
dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];
    glViewport(...)
    glBindFramebuffer(...)
    glClear(...)
    glDrawArrays(...)
    glReadPixels(...) // read in renderedData
});
// use renderedData for something else
My question is: how do I handle applicationWillResignActive: so that any such background GL calls can not just be stopped, but also resumed in applicationDidBecomeActive:? Should I wait for currently running blocks to finish before returning from applicationWillResignActive:? Or should I just suspend glProcessingQueue and return?
I have also read that the same applies when the app is interrupted in other ways, such as an alert being displayed, a phone call, etc.
I can have multiple such threads at any point of time, invoked by possibly multiple ViewControllers, so I am looking for some scalable solution or design pattern.
The way I see it, you need to either pause a thread or kill it.
If you kill it, you need to ensure all resources are released, which most likely means calling OpenGL again. In that case it might actually be better to simply wait for the block to finish execution. That means the block must not take too long to finish, which is impossible to guarantee, and since you have multiple contexts and threads this may realistically present an issue.
So pausing seems better. I am not sure if there is a direct API to pause a thread, but you can make it wait. Maybe a system similar to this one can help.
The linked example seems to handle exactly what you would want; it already checks the current thread and locks that one. I guess you could pack that into some tool as a static method or a C function and wherever you are confident you can pause the thread you would simply do something like:
dispatch_sync(glProcessingQueue, ^{
    [EAGLContext setCurrentContext:_eaglContext];
    [ThreadManager pauseCurrentThreadIfNeeded];
    glViewport(...)
    glBindFramebuffer(...)
    [ThreadManager pauseCurrentThreadIfNeeded];
    glClear(...)
    glDrawArrays(...)
    glReadPixels(...) // read in renderedData
    [ThreadManager pauseCurrentThreadIfNeeded];
});
You might still have an issue with the main thread if it is used. You might want to skip the pause on that one, otherwise your system may simply never wake up again (not sure though, try it).
So now you are looking at an interface for your ThreadManager along these lines:
+ (void)pause {
    __threadsPaused = YES;
}

+ (void)resume {
    __threadsPaused = NO;
}

+ (void)pauseCurrentThreadIfNeeded {
    if (__threadsPaused) {
        // TODO: insert code for locking until __threadsPaused becomes false
    }
}
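One way to fill in that TODO, sketched here with an NSCondition (the __pauseCondition name and the main-thread check are assumptions, following the caveat above):
static NSCondition *__pauseCondition;
static BOOL __threadsPaused;

+ (void)initialize {
    if (self == [ThreadManager class]) {
        __pauseCondition = [[NSCondition alloc] init];
    }
}

+ (void)pause {
    [__pauseCondition lock];
    __threadsPaused = YES;
    [__pauseCondition unlock];
}

+ (void)resume {
    [__pauseCondition lock];
    __threadsPaused = NO;
    [__pauseCondition broadcast];   // wake every thread parked in pauseCurrentThreadIfNeeded
    [__pauseCondition unlock];
}

+ (void)pauseCurrentThreadIfNeeded {
    if ([NSThread isMainThread]) return;   // never park the main thread
    [__pauseCondition lock];
    while (__threadsPaused) {
        [__pauseCondition wait];
    }
    [__pauseCondition unlock];
}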
Let us know what you find out.

A proper way to wait for an event from a concurrent thread

I've got an occasional crash that has to do with an improperly finished task on a concurrent thread while the app is transitioning to the background.
So I have 3 threads:
A (main).
B (managed by GCD).
C (manually created to process intensive socket operations).
The scenario is the following:
In the applicationDidEnterBackground: handler (which is certainly executed on thread A) a long-running task is begun on thread B to complete all ongoing operations (save the application state, close a socket, etc.). In this task I need to wait until the socket properly finishes its work on thread C, and only after that continue with the long-running task.
Below is simplified code:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Some synchronous task.
    [stateManager saveState];
    // Here I need to wait until the socket finishes its task.
    ...
    // Continuing of the long-running task.
    ...
});
What is the acceptable way to accomplish this task? Is it OK if I do something like this?
while (socket.isConnected)
{
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]];
}
// Continuing of the long-running task.
Or maybe something is wrong in my current architecture and I need to use NSOperation to serialize asynchronous tasks somehow, for example?
Update: The problem has been solved by using the dispatch_semaphore APIs, as @Rob Napier suggested.
First, do not think of these as threads, and if you're creating a thread with NSThread or performSelectorInBackground: (or even worse, pthreads), don't do that. Use GCD. GCD creates queues. Queues order blocks, which eventually do run on threads, but the threads are an implementation detail and there is not a 1:1 mapping of queues to threads. See Migrating Away from Threads for more discussion on that.
To the question of how to wait for some other operation, the tool you probably want is a dispatch_semaphore. You create the semaphore and hand it to both operations. Then you call dispatch_semaphore_wait when you want to wait for something, and dispatch_semaphore_signal when you want to indicate that that something has happened. See Using Dispatch Semaphores to Regulate the Use of Finite Resources for more example code. The "finite resource" in this case is "the socket." You want to wait until the other part is done using it and returned it to the pool.
Semaphores will work even if you are using manual threading, but I can't emphasize enough that you should not be doing manual threading. All your concurrency should be managed through GCD. This is an important tool to overall concurrency management in iOS.
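A minimal sketch of that semaphore handshake (the socketHandler object and its closeWithCompletion: method are assumptions; saveState is from the question):
dispatch_semaphore_t socketDone = dispatch_semaphore_create(0);

// Thread C: the socket code signals the semaphore once it has finished closing down.
// (socketHandler and closeWithCompletion: are hypothetical names.)
[socketHandler closeWithCompletion:^{
    dispatch_semaphore_signal(socketDone);
}];

// Thread B: the long-running background task waits for that signal before continuing.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [stateManager saveState];
    dispatch_semaphore_wait(socketDone, DISPATCH_TIME_FOREVER);
    // ...continue the long-running task now that the socket is done.
});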
I would use NSOperation with dependencies.
So, you have tasks
A - main thread - aka 'entry point'
B - heavy boy to run in background
C - something else heavy to run after socket finished
Your heavy task from "B" is OperationB.
Assuming your socket framework is capable of running synchronously on the current thread, that becomes your OperationSend.
The rest of the work to do in the background becomes OperationC.
There you have a chain of operations, dependent on each other:
OperationB -> OperationSend -> OperationC
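A rough sketch of that chain with NSBlockOperation (the operation bodies and the sendSynchronously method are assumptions):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSOperation *operationB = [NSBlockOperation blockOperationWithBlock:^{
    [stateManager saveState];            // the heavy background work
}];
NSOperation *operationSend = [NSBlockOperation blockOperationWithBlock:^{
    [socket sendSynchronously];          // assumes the socket can run synchronously here
}];
NSOperation *operationC = [NSBlockOperation blockOperationWithBlock:^{
    // whatever must happen after the socket has finished
}];

[operationSend addDependency:operationB];
[operationC addDependency:operationSend];
[queue addOperations:@[operationB, operationSend, operationC] waitUntilFinished:NO];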

How to open/create UIManagedDocument synchronously?

As mentioned in the title, I would like to open a UIManagedDocument synchronously, i.e., I would like my execution to wait until the open completes. I'm opening the document on the main thread only.
The current API to open the document uses a completion block:
- (void)openWithCompletionHandler:(void (^)(BOOL success))completionHandler;
The lock-based approach mentioned at the link works well on threads other than the main thread. If I use locks on the main thread, it freezes the app.
Any advice would be helpful. Thanks.
First, let me say that I strongly discourage doing this. Your main thread just waits, and does nothing while waiting for the call to complete. Under certain circumstances, the system will kill your app if it does not respond on the main thread. This is highly unusual.
I guess you should be the one to decide when/how you should use various programming tools.
This one does exactly what you want... block the main thread until the completion handler runs. Again, I do not recommend doing this, but hey, it's a tool, and I'll take the NRA stance: guns don't kill people...
__block BOOL waitingOnCompletionHandler = YES;
[object doSomethingWithCompletionHandler:^{
    // Do your work in the completion handler block and when done...
    waitingOnCompletionHandler = NO;
}];
while (waitingOnCompletionHandler) {
    usleep(USEC_PER_SEC / 10);
}
Another option is to execute the run loop. However, this isn't really synchronous, because the run loop will actually process other events. I've used this technique in some unit tests. It is similar to the above, but still allows other stuff to happen on the main thread (for example, the completion handler may invoke an operation on the main queue, which may not get executed in the previous method).
__block BOOL waitingOnCompletionHandler = YES;
[object doSomethingWithCompletionHandler:^{
    // Do your work in the completion handler block and when done...
    waitingOnCompletionHandler = NO;
}];
while (waitingOnCompletionHandler) {
    NSDate *futureTime = [NSDate dateWithTimeIntervalSinceNow:0.1];
    [[NSRunLoop currentRunLoop] runUntilDate:futureTime];
}
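Applied to the UIManagedDocument from the question, that run-loop variant might look like the sketch below (the managedDocument variable is an assumption, and again this is only really defensible in tests):
__block BOOL waitingOnCompletionHandler = YES;
__block BOOL openedOK = NO;
[managedDocument openWithCompletionHandler:^(BOOL success) {
    openedOK = success;
    waitingOnCompletionHandler = NO;
}];
while (waitingOnCompletionHandler) {
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
}
// openedOK now tells you whether the open succeeded.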
There are other methods as well, but these are simple, easy to understand, and stick out like a sore thumb so it's easy to know you are doing something unorthodox.
I should also note that I've never encountered a good reason to do this in anything other than tests. You can deadlock your code, and not returning from the main run loop is a slippery slope (even if you are manually executing it yourself - note that what called you is still waiting and running the loop again could re-enter that code, or cause some other issue).
Asynchronous APIs are GREAT. The condition variable approach or using barriers for concurrent queues are reasonable ways to synchronize when using other threads. Synchronizing the main thread is the opposite of what you should be doing.
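As a footnote to the barrier suggestion above: barriers only work on a concurrent queue you create yourself, roughly like this sketch (the queue label and sharedState dictionary are assumptions):
NSMutableDictionary *sharedState = [NSMutableDictionary dictionary];
dispatch_queue_t stateQueue = dispatch_queue_create("com.example.state", DISPATCH_QUEUE_CONCURRENT);

// Readers may run concurrently with one another...
dispatch_async(stateQueue, ^{
    NSLog(@"read %@", sharedState[@"key"]);
});

// ...while a barrier block waits for in-flight readers, runs alone, then lets readers resume.
dispatch_barrier_async(stateQueue, ^{
    sharedState[@"key"] = @"newValue";
});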
Good luck... and make sure you register your guns, and always carry your concealed weapons permit. This is certainly the wild west. There's always a John Wesley Harden out there looking for a gun fight.

Threading NSStream

I have a TCP connection that is open continuously for communication with an external device. There is a lot going on in the communication pipe which causes the UI to become unresponsive at times.
I would like to put the communication on a separate thread. I understand detachNewThread and how it calls a @selector. My issue is that I am not sure how this would be used in conjunction with something like NSStream.
Rather than manually creating a thread and managing thread safety issues, you might prefer to use Grand Central Dispatch ('GCD'). That allows you to post blocks — which are packets of code and some state — off to be executed away from the main thread and wherever the OS thinks is most appropriate. If you create a serial dispatch queue you can even be certain that if you post a new block while an old one has yet to finish, the system will wait until it finishes.
E.g.
// you'd want to make this an instance variable in a real program
dispatch_queue_t serialDispatchQueue =
    dispatch_queue_create(
        "A handy label for debugging",
        DISPATCH_QUEUE_SERIAL);
...
dispatch_async(serialDispatchQueue,
^{
    NSLog(@"all code in here occurs on the dispatch queue ...");
});
/* lots of other things here */
dispatch_async(serialDispatchQueue,
^{
    NSLog(@"... and this won't happen until after everything already dispatched");
});
...
// cleanup, for when you're done
dispatch_release(serialDispatchQueue);
A very quick introduction to GCD is here; Apple's more thorough introduction is also worth reading.
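Tying that back to the question, a sketch of funnelling every stream write through that serial queue (outputStream and payload are assumptions, not code from the question):
dispatch_async(serialDispatchQueue, ^{
    NSInteger written = [outputStream write:(const uint8_t *)payload.bytes maxLength:payload.length];
    if (written < 0) {
        NSLog(@"write failed: %@", outputStream.streamError);
    }
});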

POSIX threading on ios

I've started experimenting with POSIX threads on the iOS platform. Coming from using NSThread, it's pretty daunting.
Basically, in my sample app I have a big array filled with type mystruct. Every so often (very frequently) I want to perform a task with the contents of one of these structs in the background, so I pass it to detachnewthread to kick things off.
I think I have the basics down but Id like to get a professional opinion before I attempt to move on to more complicated stuff.
Does what I have here seem "o.k" and could you point out anything missing that could cause problems? Can you spot any memory management issues etc....
typedef struct mystruct
{
    pthread_t thread;
    int a;
    long c;
} mystruct;
void detachnewthread(mystruct *str)
{
    // pthread_t thread;
    if (str)
    {
        int rc;
        // printf("In detachnewthread: creating thread %d\n", str->soundid);
        rc = pthread_create(&str->thread, NULL, DoStuffWithMyStruct, (void *)str);
        if (rc) {
            printf("ERROR; return code from pthread_create() is %d\n", rc);
            //exit(-1);
        }
    }
    //
    /* Last thing that main() should do */
    // pthread_exit(NULL);
}
void *DoStuffWithMyStruct(void *threadid)
{
    mystruct *sptr = (mystruct *)threadid;
    // do stuff with data in my struct
    pthread_detach(sptr->thread);
    return NULL;
}
One potential issue would be how the storage for the passed in structure mystruct is created. The lifetime of that variable is very critical to its usage in the thread. For example, if the caller of detachnewthread had that declared on the stack and then returned before the thread finished, it would be undefined behavior. Likewise, if it were dynamically allocated, then it is necessary to make sure it is not freed before the thread is finished.
In response to the comment/question: The necessity of some kind of mutex depends on the usage. For the sake of discussion, I will assume it is dynamically allocated. If the calling thread fills in the contents of the structure prior to creating the "child" thread and can guarantee that it will not be freed until after the child thread exits, and the subsequent access is read/only, then you would not need a mutex to protect it. I can imagine that type of scenario if the structure contains information that the child thread needs for completing its task.
If, however, more than one thread will be accessing the contents of the structure and one or more threads will be changing the data (writing to the structure), then you probably do need a mutex to protect it.
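A sketch of that mutex protection in plain pthreads (the lock field and the accessor functions are assumptions added for illustration):
#include <pthread.h>

struct mystruct
{
    pthread_t       thread;
    pthread_mutex_t lock;    /* hypothetical field; pthread_mutex_init it before the thread starts */
    int             a;
    long            c;
};

/* Calling thread: update the shared fields under the lock. */
void set_a(struct mystruct *str, int newValue)
{
    pthread_mutex_lock(&str->lock);
    str->a = newValue;
    pthread_mutex_unlock(&str->lock);
}

/* Worker thread: read the shared fields under the same lock. */
int get_a(struct mystruct *str)
{
    pthread_mutex_lock(&str->lock);
    int value = str->a;
    pthread_mutex_unlock(&str->lock);
    return value;
}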
Try using Apple's Grand Central Dispatch (GCD), which will manage the threads for you. GCD provides the capability to dispatch work, via blocks, to various queues that are managed by the system. Some of the queue types are concurrent, serial, and of course the main queue where the UI runs. Based upon the CPU resources at hand, the system will manage the queues and necessary threads to get the work done. A simple example, which shows how you can nest calls to different queues, looks like this:
__block MyClass *blockSelf = self;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [blockSelf doSomeWork];
    dispatch_async(dispatch_get_main_queue(), ^{
        [blockSelf.textField setStringValue:@"Some work is done, updating UI"];
    });
});
__block MyClass *blockSelf=self is used simply to avoid retain cycles associated with how blocks work.
Apple's docs:
http://developer.apple.com/library/ios/#documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html
Mike Ash's Q&A blog post:
http://mikeash.com/pyblog/friday-qa-2009-08-28-intro-to-grand-central-dispatch-part-i-basics-and-dispatch-queues.html
