iOS application does not render due to infinite while loop - ios

Hello, I am implementing a primitive echo server for iOS, written purely in C. The issue arises when I enter an infinite while loop in order to accept incoming connections. Where is the best place to put the accept loop, and is an infinite while loop (in my case, all I want) the best implementation?
Here is the flow:
Set the root controller inside the application function in the app delegate.
Call the start-server function inside the controller. (This function never returns.)
while (true)
{
    @autoreleasepool
    {
        int exchangeSocket = accept(socket, NULL, NULL);
        if (recv(exchangeSocket, buffer, sizeof(buffer), 0) == -1)
        {
            NSLog(@"%@", @"Error");
        }
        else
        {
            // do something with the data received
        }
    }
}

There are at least two concerns with your approach. First, you should not run an infinite loop on the main thread, since that thread already runs the main run loop. Blocking that run loop with your own infinite loop (no matter where you put it) will disrupt event processing, which is why your application does not render. Apple's documentation specifies:
Lengthy tasks (or potentially lengthy tasks) should always be performed on a background thread. Any tasks involving network access, file access, or large amounts of data processing should all be performed asynchronously using GCD or operation objects.
However, iOS does allow you to create threads using POSIX interfaces, e.g. it offers pthread_create and friends. You should be able to run your code in such a thread without blocking your app.
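For instance, here is a minimal sketch of moving the loop onto a POSIX thread (server_main and startServerThread are illustrative names, not from the question's code):

#include <pthread.h>
#include <stdint.h>

/* server_main is assumed to wrap the while (true) accept loop
 * shown in the question. */
static void *server_main(void *arg)
{
    int listenSocket = (int)(intptr_t)arg;
    /* ... run the accept()/recv() loop here, off the main thread ... */
    (void)listenSocket;
    return NULL;
}

/* Call this from the controller instead of running the loop inline;
 * it returns immediately, so the main run loop keeps processing events. */
static int startServerThread(int listenSocket)
{
    pthread_t serverThread;
    return pthread_create(&serverThread, NULL, server_main,
                          (void *)(intptr_t)listenSocket);
}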
Second (but perhaps of less interest to you), the POSIX networking APIs are somewhat discouraged in favor of other interfaces, because they do not automatically bring up the cellular radio. What's closest to your liking may be CFSocket (also a C interface).

Related

Does the POSIX Thread API offer a way to test if the calling thread already holds a lock?

Short question: Does the POSIX thread API offer me a way to determine if the calling thread already holds a particular lock?
Long question:
Suppose I want to protect a data structure with a lock. Acquiring and releasing the lock need to happen in different functions. Calls between the functions involved are rather complex (I am adding multithreading to a 16-year-old code base). For example:
do_dangerous_stuff() does things for which the current thread needs to hold the write lock. Therefore, I acquire the lock at the beginning of the function and release it at the end, as the caller does not necessarily hold the lock.
Another function, do_extra_dangerous_stuff(), calls do_dangerous_stuff(). However, before the call it already does things which also require a write lock, and the data is not consistent until the call to do_dangerous_stuff() returns (so releasing and immediately re-acquiring the lock might break things).
In reality it is more complicated than that. There may be a bunch of functions calling do_dangerous_stuff(), and requiring each of them to obtain the lock before calling do_dangerous_stuff() may be impractical. Sometimes the lock is acquired in one function and released in another.
With a read lock, I could just acquire it multiple times from the same thread, as long as I make sure I release the same number of lock instances that I have acquired. For a write lock, this is not an option (attempting to do so will result in a deadlock). An easy solution would be: test if the current thread already holds the lock and acquire it if not, and conversely, test if the current thread still holds the lock and release it if it does. However, that requires me to test if the current thread already holds the lock—is there a way to do that?
Looking at the man page for pthread_rwlock_wrlock(), I see it says:
If successful, the pthread_rwlock_wrlock() function shall return zero; otherwise, an error number shall be returned to indicate the error.
[…]
The pthread_rwlock_wrlock() function may fail if:
EDEADLK The current thread already owns the read-write lock for writing or reading.
As I read it, EDEADLK is never used to indicate chains involving multiple threads waiting for each other’s resources (and from my observations, such deadlocks indeed seem to result in a freeze rather than EDEADLK). It seems to indicate exclusively that the thread is requesting a resource already held by the current thread, which is the condition I want to test for.
If I have misunderstood the documentation, please let me know. Otherwise the solution would be to simply call pthread_rwlock_wrlock(). One of the following should happen:
It blocks because another thread holds the resource. When we get to run again, we will hold the lock. Business as usual.
It returns zero (success) because we have just acquired the lock (which we didn’t hold before). Business as usual.
It returns EDEADLK because we are already holding the lock. No need to reacquire, but we might want to consider this when we release the lock; that depends on the code in question, see below.
It returns some other error, indicating something has truly gone wrong. Same as with every other lock operation.
It may make sense to keep track of the number of times we have acquired the lock and got EDEADLK. Borrowing from Gil Hamilton's answer, a lock depth would work for us:
Reset the lock depth to 0 when we have acquired a lock.
Increase the lock depth by 1 each time we get EDEADLK.
Match each attempt to acquire the lock with a release that follows this logic: if the lock depth is 0, release the lock; else decrease the lock depth.
This should be thread-safe without further synchronization, as the lock depth is effectively protected by the lock it refers to (we touch it only while holding the lock).
Caveat: if the current thread already holds the lock for reading (and no others do), this will also report it as being “already locked”. Further tests will be needed to determine if the currently held lock is indeed a write lock. If multiple threads, among them the current one, hold the read lock, I do not know if attempting to obtain a write lock will return EDEADLK or freeze the thread. This part needs some more work…
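Putting this together, here is a minimal sketch of the scheme (the type and function names are mine, the rwlock is assumed to be initialized elsewhere with pthread_rwlock_init(), and it relies on the platform actually returning EDEADLK, which POSIX only lists as a "may fail" condition):

#include <errno.h>
#include <pthread.h>

typedef struct {
    pthread_rwlock_t rwlock; /* initialized with pthread_rwlock_init() */
    int depth;               /* extra acquisitions; protected by the lock itself */
} rec_wrlock_t;

/* Returns 0 on success, or an error number for genuine failures. */
int rec_wrlock_acquire(rec_wrlock_t *l)
{
    int err = pthread_rwlock_wrlock(&l->rwlock);
    if (err == 0) {
        l->depth = 0;  /* freshly acquired */
        return 0;
    }
    if (err == EDEADLK) {
        l->depth++;    /* we already held it: one level deeper */
        return 0;
    }
    return err;        /* something has truly gone wrong */
}

void rec_wrlock_release(rec_wrlock_t *l)
{
    if (l->depth == 0)
        pthread_rwlock_unlock(&l->rwlock);
    else
        l->depth--;
}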
AFAIK, there's no easy way to accomplish what you're trying to do.
On Linux, you can use the "recursive" mutex attribute to achieve your purpose (as shown here, for example: https://stackoverflow.com/a/7963765/1076479), but this is not POSIX-portable.
The only really portable solution is to roll your own equivalent. You can do that by creating a data structure containing a lock along with your own thread index (or equivalent) and an ownership/recursion count.
CAVEAT: Pseudo-code off the top of my head
Recursive lock:
// First try to acquire the lock without blocking...
if ((err = pthread_mutex_trylock(&mystruct.mutex)) == 0) {
    // Acquire succeeded. I now own the lock.
    // (I either just acquired it or already owned it)
    assert(mystruct.owner == my_thread_index || mystruct.lock_depth == 0);
    mystruct.owner = my_thread_index;
    ++mystruct.lock_depth;
} else if (mystruct.owner == my_thread_index) {
    assert(err == EBUSY);
    // I already owned the lock. Now one level deeper
    ++mystruct.lock_depth;
} else {
    // I don't own the lock: block waiting for it.
    pthread_mutex_lock(&mystruct.mutex);
    assert(mystruct.lock_depth == 0);
    // Record ownership, as in the non-blocking path above.
    mystruct.owner = my_thread_index;
    ++mystruct.lock_depth;
}
On the way out it's simpler: because you know you own the lock, you only need to determine whether it's time to release it (i.e. the last unlock). Recursive unlock:
if (--mystruct.lock_depth == 0) {
    assert(mystruct.owner == my_thread_index);
    // Last level of recursion unwound
    mystruct.owner = -1; // Mark it un-owned
    pthread_mutex_unlock(&mystruct.mutex);
}
I would want to add some additional checks and assertions, and do significant testing, before trusting this.

Avoid Data Race condition in swift

I am getting race conditions in my code when I run the TSan tool, as the same code is accessed from different queues and threads at the same time. That's why I cannot use serial queues or a barrier: a queue will only block the single queue accessing the shared resource, not the other queues.
I used objc_sync_enter(object) / objc_sync_exit(object) and locks (NSLock() or NSRecursiveLock()) to protect the shared resource, but these are not working either.
Whereas when I use the @synchronized() directive in Objective-C to protect the shared resource, it works as expected and I do not get race conditions in that particular block of code.
So, what is an alternative way to protect data in Swift, given that we cannot use the @synchronized() directive in the Swift language?
I don't understand "I cannot use serial queues or a barrier: a queue will only block the single queue accessing the shared resource, not the other queues." Using a queue is the standard solution to this problem.
class MultiAccess {
    private var _property: String = ""
    private let queue = DispatchQueue(label: "MultiAccess")

    var property: String {
        get {
            var result: String!
            queue.sync {
                result = self._property
            }
            return result
        }
        set {
            queue.async {
                self._property = newValue
            }
        }
    }
}
With this construction, access to property is atomic and thread-safe without the caller having to do anything special. Note that this intentionally uses a single queue for the class, not a queue per-property. As a rule, you want a relatively small number of queues doing a lot of work, not a large number of queues doing a little work. If you find that you're accessing a mutable object from lots of different threads, you need to rethink your system (probably reducing the number of threads). There's no simple pattern that will make that work efficiently and safely without you having to think about your specific use case carefully.
But this construction is useful for problems where system frameworks may call you back on random threads with minimal contention. It is simple to implement and fairly easy to use correctly. If you have a more complex problem, you will have to design a solution for that problem.
Edit: I haven't thought about this answer in a long time, but Brennan's comments brought it back to my attention. Because of the bug I had in the original code, my original answer was ok, but if you fixed the bug it was bad. (If you want to see my original code that used a barrier, look in the edit history, I don't want to put it here because people will copy it.) I've changed it to use a standard serial queue rather than a concurrent queue.
Don't create concurrent queues without careful thought about how threads will be generated. If you are going to have many simultaneous accesses, you're going to create a lot of threads, which is bad. If you're not going to have many simultaneous accesses, then you don't need a concurrent queue. GCD talks make promises about managing threads that it doesn't actually live up to. You definitely can get thread explosion (as Brennan mentions).

objective-c, possible to queue async NSURLRequests?

I realize this question sounds contradictory. I have several Async requests going out in an application. The situation is that the first async request is an authentication request, and the rest will use an access token returned by the successful authentication request.
The two obvious solutions would be:
Run them all synchronously, and risk blocking the UI (a bad choice).
Run them async, and put requests 2-N in the completion handler for the first one (not practical).
The trouble is that the subsequent requests may be handled anywhere in the project, at anytime. The failure case would be if the 2nd request was called immediately after the 1st authentication request was issued, and before the access token was returned.
My question thus is, is there any way to queue up Async requests, or somehow say not to issue them until the first request returns successfully?
EDIT:
Why (2) is not practical: The first is an authentication request, happening when the app loads. The 2nd+ may occur right away, in which case it is practical, but it also may occur in a completely separate class or any other part of a large application. I can't essentially put the entire application in the completion handler. Other accesses to the API requests may occur in other classes, and at anytime. Even 1-2 days away after many other things have occurred.
SOLUTION:
// Pseudo-code using a semaphore lock on the authentication call to block
// all other calls until it has returned.

// at start of auth
_semaphore = dispatch_semaphore_create(0);

// at start of API calls
if (_accessToken == nil && ![_apiCall isEqualToString:@"auth"]) {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
}

// at end of auth: store the token before signaling, so waiters see it
_accessToken = ...;
dispatch_semaphore_signal([[SFApi Instance] semaphore]);
This sounds like a case where you'd want to use NSOperation's dependencies.
From the Apple docs:
Operation Dependencies
Dependencies are a convenient way to execute operations in a specific order. You can add and remove dependencies for an operation using the addDependency: and removeDependency: methods. By default, an operation object that has dependencies is not considered ready until all of its dependent operation objects have finished executing. Once the last dependent operation finishes, however, the operation object becomes ready and able to execute.
note that in order for this to work, you must subclass NSOperation "properly" with respect to KVO-compliance
The NSOperation class is key-value coding (KVC) and key-value observing (KVO) compliant for several of its properties. As needed, you can observe these properties to control other parts of your application.
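As a minimal sketch (the operation names are illustrative; note that an NSBlockOperation counts as finished as soon as its block returns, so a truly asynchronous request inside it would need the "proper" subclassing mentioned above):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSOperation *authOp = [NSBlockOperation blockOperationWithBlock:^{
    // perform the authentication request and store the access token
}];
NSOperation *apiOp = [NSBlockOperation blockOperationWithBlock:^{
    // a later request that needs the access token from authOp
}];

// apiOp is not considered ready until authOp has finished executing
[apiOp addDependency:authOp];
[queue addOperation:authOp];
[queue addOperation:apiOp];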
You can't really have it both ways-- there's no built-in serialization for the NSURLConnection stuff. However, you are probably already funneling all of your API requests through some common class anyway (presumably you're not making raw network calls willy-nilly all over the app).
You'll need to build the infrastructure inside that class that prevents the execution of the later requests until the first request has completed. This suggests some sort of serial dispatch queue that all requests (including the initial auth step) are funneled through. You could do this via dependent NSOperations, as is suggested elsewhere, but it doesn't need to be that explicit. Wrapping the requests in a common set of entry points will allow you to do this any way you want behind the scenes.
In cases like this I always find it easiest to write the code synchronously and get it running on the UI thread first, correctly, just for debugging. Then, move the operations to separate threads and make sure you handle concurrency.
In this case the perfect mechanism for concurrency is a semaphore; the authentication operation clears the semaphore when it is done, and all the other operations are blocking on it. Once authentication is done, floodgates are open.
The relevant functions are dispatch_semaphore_create() and dispatch_semaphore_wait() from the Grand Central Dispatch documentation: https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/doc/uid/TP40008079-CH2-SW2
Another excellent solution is to create a queue with a barrier:
A dispatch barrier allows you to create a synchronization point within a concurrent dispatch queue. When it encounters a barrier, a concurrent queue delays the execution of the barrier block (or any further blocks) until all blocks submitted before the barrier finish executing. At that point, the barrier block executes by itself. Upon completion, the queue resumes its normal execution behavior.
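A sketch of that idea (the queue label is illustrative; this only helps if the authentication work completes inside the barrier block, rather than being fired off asynchronously from it):

dispatch_queue_t apiQueue = dispatch_queue_create("com.example.api",
                                                  DISPATCH_QUEUE_CONCURRENT);

dispatch_barrier_async(apiQueue, ^{
    // authenticate; blocks submitted after this one wait until it finishes
});
dispatch_async(apiQueue, ^{
    // an API call; runs only after the barrier block above completes
});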
Looks like you got it running with a semaphore, nicely done!
Use blocks... 2 ways that I do it:
First, a block inside of a block...
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        // this won't run until the first one has finished
        [myCommKit adjustSomething:thingToAdjust withCallback:^(ReturnCode returnCode, NSDictionary *successCode) {
            if (successCode) {
                // this won't run until both the first and then the second one finished
            }
        }];
    }
}];
// don't be confused... anything down here will run instantly!!!!
Second way is a method inside of a block
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        [self doNextThingAlsoUsingBlocks];
    }
}];
Either way, any time I do async communication with my server I use blocks. You have to think differently when writing code that communicates with a server. You have to force things to go in the order you want and wait for the return success/fail before doing the next thing. And getting used to blocks is the right way to think about it. It could be 15 seconds between when you start the block and when it gets to the callback and executes the code inside. It could never come back if they're not online or there's a server outage.
Bonus way.. I've also sometimes done things using stages:
switch (serverCommunicationStage) {
    case FIRST_STAGE:
    {
        serverCommunicationStage = FIRST_STAGE_WAITING;
        // either have a block in here or call a method that has a block
        [ block {
            // in the callback of this async call
            serverCommunicationStage = SECOND_STAGE;
        }];
        break;
    }
    case FIRST_STAGE_WAITING:
    {
        // this just waits for the first step to complete
        break;
    }
    case SECOND_STAGE:
    {
        // either have a block in here or call a method that has a block
        break;
    }
}
Then in your draw loop or somewhere keep calling this method. Or set up a timer to call it every 2 seconds or whatever makes sense for your application. Just make sure to manage the stages properly. You don't want to accidentally keep calling the request over and over. So make sure to set the stage to waiting before you enter the block for the server call.
I know this might seem like an older school method. But it works fine.

is there a way that the synchronized keyword doesn't block the main thread

Imagine you want to do many things in the background of an iOS application, and you code it properly so that you create threads (for example using GCD) to execute this background activity.
Now what if, at some point, you need to update a variable, but this update can occur on any of the threads you created?
You obviously want to protect that variable, and you can use the @synchronized keyword to create the locks for you, but here is the catch (extract from the Apple documentation):
The @synchronized() directive locks a section of code for use by a single thread. Other threads are blocked until the thread exits the protected code—that is, when execution continues past the last statement in the @synchronized() block.
So that means that if you synchronize on an object and two threads are writing to it at the same time, even the main thread will block until both threads are done writing their data.
An example of code that will showcase all this:
// Create the background queue
dispatch_queue_t queue = dispatch_queue_create("synchronized_example", NULL);

// Start working in new thread
dispatch_async(queue, ^
{
    // Synchronize that shared resource
    @synchronized(sharedResource_)
    {
        // Write things on that resource
        // If more than one thread accesses this piece of code:
        // all threads (even main thread) will block until task is completed.
        [self writeComplexDataOnLocalFile];
    }
});

// won't actually go away until queue is empty
dispatch_release(queue);
So the question is fairly simple: how to overcome this? How can we securely add locks on all the threads EXCEPT the main thread, which, we know, doesn't need to be blocked in that case?
EDIT FOR CLARIFICATION
As some of you commented, it does seem logical (and this was clearly what I thought at first when using @synchronized) that only the threads that are trying to acquire the lock should block until they are done.
However, tested in a real situation, this doesn't seem to be the case, and the main thread also seems to suffer from the lock.
I use this mechanism to log things in separate threads so that the UI is not blocked. But when I do intense logging, the UI (main thread) is clearly highly impacted (scrolling is not as smooth).
So there are two options here: either the background tasks are so heavy that even the main thread gets impacted (which I doubt), or @synchronized also blocks the main thread while performing the lock operations (which I'm starting to reconsider).
I'll dig a little further using the Time Profiler.
I believe you are misunderstanding the following sentence that you quote from the Apple documentation:
Other threads are blocked until the thread exits the protected code...
This does not mean that all threads are blocked; it just means that all threads trying to synchronise on the same object (the sharedResource_ in your example) are blocked.
The following quote is taken from Apple's Thread Programming Guide, which makes it clear that only threads that synchronise on the same object are blocked.
The object passed to the #synchronized directive is a unique identifier used to distinguish the protected block. If you execute the preceding method in two different threads, passing a different object for the anObj parameter on each thread, each would take its lock and continue processing without being blocked by the other. If you pass the same object in both cases, however, one of the threads would acquire the lock first and the other would block until the first thread completed the critical section.
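For instance (lockA and lockB are placeholder objects, not from the question's code):

NSObject *lockA = [NSObject new];
NSObject *lockB = [NSObject new];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    @synchronized(lockA) {
        // critical section A
    }
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    @synchronized(lockB) {
        // takes a different lock, so it never waits for the block above
    }
});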
Update: If your background threads are impacting the performance of your interface then you might want to consider putting some sleeps into the background threads. This should allow the main thread some time to update the UI.
I realise you are using GCD but, for example, NSThread has a couple of methods that will suspend the thread, e.g. +sleepForTimeInterval:. In GCD you can probably just call sleep().
Alternatively, you might also want to look at changing the thread priority to a lower priority. Again, NSThread has +setThreadPriority: for this purpose. In GCD, I believe you would just use a low-priority queue for the dispatched blocks.
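For example, a sketch reusing the names from the question's snippet:

// Run the logging work at background priority so it competes less
// with the main thread for CPU time.
dispatch_queue_t low = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
dispatch_async(low, ^{
    @synchronized(sharedResource_) {
        [self writeComplexDataOnLocalFile];
    }
});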
I'm not sure if I understood you correctly; @synchronized doesn't block all threads, but only the ones that want to execute the code inside the block. So the solution probably is: don't execute the code on the main thread.
If you simply want to avoid having the main thread acquire the lock, you can do this (and wreak havoc):
dispatch_async(queue, ^
{
    if (![NSThread isMainThread])
    {
        // Synchronize that shared resource
        @synchronized(sharedResource_)
        {
            // Write things on that resource
            // If more than one thread accesses this piece of code:
            // all threads (even main thread) will block until task is completed.
            [self writeComplexDataOnLocalFile];
        }
    }
    else
    {
        [self writeComplexDataOnLocalFile];
    }
});

A simple method for a "master" thread to monitor "slave" threads

As my first real foray into using pthreads, I'm looking to adapt an already written app of mine to use threads.
The paradigm I have in mind is basically to have one "master" thread which iterates through a list of data items to be processed, launching a new thread for each, with at most MAX_THREADS threads running at any given time (until the number of remaining tasks is less than this). Each thread performs the same task on a single data element within the list.
The master thread needs to be aware of whenever any thread has completed its task and returned (or pthread_exit()'ed), immediately launching a new thread to perform the next task in the list.
What I'm wondering about is what are people's preferred methods for working with such a design? Data considerations aside, what would be the simplest set of pthreads functions to use to accomplish this? Obviously, pthread_join() is out as a means for "checking up" on threads.
Early experiments have been using a struct, passed as the final argument to pthread_create(), which contains an element called "running" which the thread sets to true on startup and resets just before returning. The master thread simply checks the current value of this struct element for each thread in a loop.
Here are the data the program uses for thread management:
typedef struct thread_args_struct
{
    char *data;     /* the data item the thread will be working on */
    int index;      /* thread's index in the array of threads */
    int thread_id;  /* thread's actual integer id */
    int running;    /* boolean status */
    int retval;     /* value to pass back from thread on return */
} thread_args_t;

/*
 * array of threads (only used for thread creation here, not referenced
 * otherwise)
 */
pthread_t thread[MAX_THREADS];

/*
 * array of argument structs
 *
 * a pointer to the thread's argument struct will be passed to it on creation,
 * and the thread will place its return value in the appropriate struct element
 * before returning/exiting
 */
thread_args_t thread_args[MAX_THREADS];
Does this seem like a sound design? Is there a better, more standardized method for monitoring threads' running/exited status, a more "pthreads-y" way? I'm looking to use the simplest, clearest, cleanest mechanism possible which won't lead to any unexpected complications.
Thanks for any feedback.
There isn't so much a "pthreads-y" way as a (generic) multi-threading way. There is nothing wrong with what you have, but it is more complicated and inefficient than it needs to be.
A more standard design is to use a thread pool. The master thread spawns a bunch of worker threads that read a queue. The master puts work in the queue and all the workers take a shot at processing the work in the queue. This eliminates the need to constantly start and terminate threads (though more sophisticated pools can have some mechanism to increase/decrease the pool size based on the work load). If the threads have to return data or status information they can use an output queue (maybe just a pointer to the actual data) that the master can read.
This still leaves the issue of how to get rid of the threads when you are done processing. Again, it is a master-worker relationship, so it is advisable that the master tell the workers to shut themselves down. This amounts to using some program switch (such as you currently have), employing a condition variable somewhere, sending a signal, or cancelling the thread. There are a lot of questions (and good answers) on this topic here.
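A minimal sketch of such a pool in plain pthreads (the names are mine; initialization with pthread_mutex_init()/pthread_cond_init() and error handling are omitted): the master pushes jobs into a FIFO, workers block on a condition variable, and shutdown is a flag plus a broadcast.

#include <pthread.h>
#include <stdlib.h>

typedef struct job {
    char *data;             /* the item to process */
    struct job *next;
} job_t;

typedef struct {
    pthread_mutex_t lock;   /* protects everything below */
    pthread_cond_t has_work;
    job_t *head, *tail;     /* FIFO of pending jobs */
    int shutting_down;      /* set by the master when no more work is coming */
} work_queue_t;

/* Master side: enqueue one job and wake a worker. */
void queue_push(work_queue_t *q, job_t *j)
{
    pthread_mutex_lock(&q->lock);
    j->next = NULL;
    if (q->tail) q->tail->next = j; else q->head = j;
    q->tail = j;
    pthread_cond_signal(&q->has_work);
    pthread_mutex_unlock(&q->lock);
}

/* Worker side: block until a job arrives; NULL means "shut down". */
job_t *queue_pop(work_queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL && !q->shutting_down)
        pthread_cond_wait(&q->has_work, &q->lock);
    job_t *j = q->head;
    if (j) {
        q->head = j->next;
        if (q->head == NULL) q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    return j;
}

/* Master side: let the workers drain the queue, then exit. */
void queue_shutdown(work_queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    q->shutting_down = 1;
    pthread_cond_broadcast(&q->has_work);
    pthread_mutex_unlock(&q->lock);
}

/* Each of the MAX_THREADS workers runs this; after queue_shutdown()
 * the master can pthread_join() each of them. */
void *worker(void *arg)
{
    work_queue_t *q = arg;
    job_t *j;
    while ((j = queue_pop(q)) != NULL) {
        /* process j->data here */
        free(j);
    }
    return NULL;
}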
