Pthread Sync - pthread_cond_wait - pthreads

At one point my function reaches
pthread_cond_wait(&cond_state, &b_state);
When a signal is sent and wakes up this thread, does it immediately try to lock the mutex before it calls Enter again?
void Enter(int g, int timer){
    pthread_mutex_lock(&b_state);
    if (room.state == 2 || room.state == g)
    {
        pthread_mutex_unlock(&b_state);
        Leave();
    }
    else
    {
        pthread_cond_wait(&cond_state, &b_state);
        Enter(g, timer); // Try to enter again
    }
}
The problem I am having is that if a thread goes to sleep, once it wakes up it gets stuck at pthread_mutex_lock after calling Enter again.

Yes, the manpage for pthread_cond_wait says:
The waiting thread unblocks only after another thread calls
pthread_cond_signal(3), or pthread_cond_broadcast(3) with the same
condition variable, and the current thread reacquires the lock on
mutex.
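In other words, by the time pthread_cond_wait returns, the woken thread has already re-acquired b_state. The recursive call to Enter then tries to lock it a second time and, with a default (non-recursive) mutex, blocks forever, which is exactly the hang described above. A minimal sketch of the conventional fix, assuming the names and globals from the question, is to re-check the condition in a loop instead of recursing:
#include <pthread.h>

// Globals assumed from the question.
struct room_t { int state; };
extern struct room_t room;
extern pthread_mutex_t b_state;
extern pthread_cond_t cond_state;
void Leave(void);

void Enter(int g, int timer)
{
    (void)timer; // unused here, kept only to match the question's signature
    pthread_mutex_lock(&b_state);
    // pthread_cond_wait returns with b_state already re-locked, and wakeups
    // may be spurious, so re-test the predicate in a loop instead of
    // calling Enter (and pthread_mutex_lock) again.
    while (room.state != 2 && room.state != g)
        pthread_cond_wait(&cond_state, &b_state);
    pthread_mutex_unlock(&b_state);
    Leave();
}
The thread that changes room.state should do so with b_state held and then call pthread_cond_signal or pthread_cond_broadcast on cond_state.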

Related

can I call on semaphore.wait() main thread?

I have a method which gives me the photo authorization status.
func photosAuthorizationStatus() -> PHAuthorizationStatus {
    var authStatus = PHAuthorizationStatus.notDetermined
    let semaphore = DispatchSemaphore(value: 0)
    PHPhotoLibrary.requestAuthorization { (status: PHAuthorizationStatus) in
        authStatus = status
        semaphore.signal()
    }
    semaphore.wait()
    return authStatus
}
I am calling this method in viewDidAppear of a view controller, and the application does not freeze.
But if I call semaphore.wait() after explicitly dispatching to the main queue, the application freezes:
DispatchQueue.main.async {
    let semaphore = DispatchSemaphore(value: 0)
    semaphore.wait()
}
// The above code will freeze the application.
Can I know the reason?
In your title, you ask:
can I call on semaphore.wait() main thread?
You should avoid blocking the main thread for any reason, whether waiting on semaphores or dispatch groups, or even dispatching synchronously (e.g., sync) anything that takes more than a few milliseconds. You risk having the watchdog process kill your app if you do this at the wrong time, and it results in a horrible UX.
You then go on to ask:
DispatchQueue.main.async {
    let semaphore = DispatchSemaphore(value: 0)
    semaphore.wait()
}
Above code will freeze the application.
Can I know the reason
That code says “block the main thread waiting for a signal on this semaphore”. So, until that signal arrives, the main thread will be blocked. But the main thread should never be blocked because it services, amongst other things, the UI, and your app will freeze if you deadlock the main thread.
Bottom line, never block the main thread.
Create a completion closure in the method that will be called after the authorization request completes. See the following code.
Make sure you have added the permission key "Privacy - Photo Library Usage Description" to your Info.plist file.
func photosAuthorizationStatus(completion: @escaping (PHAuthorizationStatus) -> Void) {
    PHPhotoLibrary.requestAuthorization { (status: PHAuthorizationStatus) in
        completion(status)
    }
}
Use:
self.photosAuthorizationStatus { (status) in
    // use your status here
}
A bit late to the party, but it may be useful for others, since nobody explained the real issue here.
Calling semaphore.wait() decrements the semaphore's counter. If the counter becomes less than zero, wait() blocks the main queue until you signal the semaphore.
Now, you invoke semaphore.signal() in the completion closure, which happens to execute on the main queue. But the main queue is blocked, so it never reaches semaphore.signal(): wait() and signal() end up waiting for each other forever, a guaranteed classic deadlock.
Forget the semaphore, and refactor the photosAuthorizationStatus() method to return the result via a closure, as suggested by Sagar Chauhan.

Swift 2 - iOS - Dispatch back to originating thread

So I have an application that fires a series of asynchronous events and then writes the results to a buffer. The problem is that I want the buffer to be written to synchronously, in the thread that spawned the asynchronous process.
The skeleton code is as follows:
let Session = NSURLSession.sharedSession()
var TheStack = [Structure]()

// This gets called asynchronously, e.g. in threads 3, 4, 5, 6, 7
func AddToStack(The Response) -> Void {
    TheStack.insert(Structure(The Response), atIndex: 0)
    if output.hasSpaceAvailable == true {
        // This causes the stream event to be fired on multiple threads.
        // This is what I want to call back into the original thread, e.g. thread 2
        self.stream(self.output, handleEvent: NSStreamEvent.HasSpaceAvailable)
    }
}
// This is in the main loop, e.g. thread 2
func stream(aStream: NSStream, handleEvent: NSStreamEvent) {
    switch handleEvent {
    case NSStreamEvent.OpenCompleted:
        // Do some open stuff
    case NSStreamEvent.HasBytesAvailable:
        Session.dataTaskWithRequest(requestFromInput, completionHandler: AddToStack)
    case NSStreamEvent.HasSpaceAvailable:
        // Do stuff with the output
    case NSStreamEvent.CloseCompleted:
        // Close the stuff
    }
}
The problem is that the thread that calls dataTaskWithRequest is, say, thread 3. The completion handler fires on many different threads, which causes the NSStreamEvent.HasSpaceAvailable case to run on thread 3 plus all of the threads the handlers happened to run on.
My question is: how do I make it so that self.stream(self.output, handleEvent: NSStreamEvent.HasSpaceAvailable) is called on thread 3, or whatever the original thread was, so the handlers stop tripping over each other in the output phase?
Thanks in advance!
NOTE: The thread that contains the input/output handling was created with NSThread.detachNewThreadSelector
Alright, for the curious onlooker: with the aid of the comments on the question, I have figured out how to do what I originally asked (whether or not this ultimately gets rewritten to use GCD is a different question).
The solution (showing slightly more of the surrounding code) is to use performSelector with a specific thread.
final class ArbitraryConnection {
    internal var streamThread: NSThread?
    let Session = NSURLSession.sharedSession()
    var TheStack = [Structure]()

    // This gets called asynchronously, e.g. in threads 3, 4, 5, 6, 7
    func AddToStack(The Response) -> Void {
        TheStack.insert(Structure(The Response), atIndex: 0)
        if output.hasSpaceAvailable == true {
            // This causes the stream event to be fired on multiple threads.
            // This is what I want to call back into the original thread, e.g. thread 2
            // Old way:
            // self.stream(self.output, handleEvent: NSStreamEvent.HasSpaceAvailable)
            // New way, which works:
            if streamThread != nil {
                self.performSelector(Selector("startoutput"), onThread: streamThread!, withObject: nil, waitUntilDone: false)
            }
        }
    }

    func open() -> Bool {
        // Some stuff
        streamThread = NSThread.currentThread()
    }

    final internal func startoutput() {
        if output.hasSpaceAvailable && outputIdle {
            self.stream(self.output, handleEvent: NSStreamEvent.HasSpaceAvailable)
        }
    }

    // This is in the main loop, e.g. thread 2
    func stream(aStream: NSStream, handleEvent: NSStreamEvent) {
        switch handleEvent {
        case NSStreamEvent.OpenCompleted:
            // Do some open stuff
        case NSStreamEvent.HasBytesAvailable:
            Session.dataTaskWithRequest(requestFromInput, completionHandler: AddToStack)
        case NSStreamEvent.HasSpaceAvailable:
            // Do stuff with the output
        case NSStreamEvent.CloseCompleted:
            // Close the stuff
        }
    }
}
So: use performSelector on the object with the selector, and use the onThread parameter to tell it which thread to run on. I check that the output has space available both before performing the selector and inside startoutput itself, to make sure I don't trip over myself.
It won't let me comment on the thread above (this is what I get for lurking), but one thing to be aware of is that your current code could deadlock your UI if you use waitUntilDone or performBlockAndWait.
If you go that route, you need to be absolutely sure that you never call this from the main thread, or have a fallback case that spawns a new thread.

Core Audio render thread and thread signalling

Does iOS have any kind of very low level condition lock that does not include locking?
I am looking for a way to signal an awaiting thread from within the Core Audio render thread, without the usage of locks. I was wondering if something low level as a Mach system call might exist.
Right now I have a Core Audio thread that uses a non-blocking, thread-safe message queue to send messages to another thread. The other thread then polls every 100 ms to see if messages are available in the queue.
But this is very rudimentary and the timing is awful. I could use condition locks, but that involves locking, and I would like to keep any kind of locking out of the render thread.
What I am looking for is to have the message queue thread wait until the Core Audio render thread signals it, much like pthread condition variables, but without locking and without an immediate context switch: I would like the Core Audio thread to complete before the message queue thread is woken up.
Updated
A dispatch_semaphore_t works well and is more efficient than a Mach semaphore_t. The code from the original answer looks like this using a dispatch semaphore:
#include <dispatch/dispatch.h>
// Declare mSemaphore somewhere it is available to multiple threads
dispatch_semaphore_t mSemaphore;
// Create the semaphore
mSemaphore = dispatch_semaphore_create(0);
// Handle error if(nullptr == mSemaphore)
// ===== RENDER THREAD
// An event happens in the render thread- set a flag and signal whoever is waiting
/*long result =*/ dispatch_semaphore_signal(mSemaphore);
// ===== OTHER THREAD
// Check the flags and act on the state change
// Wait for a signal for up to 2 seconds
/*long result =*/ dispatch_semaphore_wait(mSemaphore, dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC));
// Clean up when finished
dispatch_release(mSemaphore);
Original answer:
You can use a mach semaphore_t for this purpose. I've written a C++ class that encapsulates the functionality: https://github.com/sbooth/SFBAudioEngine/blob/master/Semaphore.cpp
Whether or not you end up using my wrapper or rolling your own the code will look roughly like:
#include <mach/mach.h>
#include <mach/task.h>
// Declare mSemaphore somewhere it is available to multiple threads
semaphore_t mSemaphore;
// Create the semaphore
kern_return_t result = semaphore_create(mach_task_self(), &mSemaphore, SYNC_POLICY_FIFO, 0);
// Handle error if(result != KERN_SUCCESS)
// ===== RENDER THREAD
// An event happens in the render thread- set a flag and signal whoever is waiting
kern_return_t result = semaphore_signal(mSemaphore);
// Handle error if(result != KERN_SUCCESS)
// ===== OTHER THREAD
// Check the flags and act on the state change
// Wait for a signal for 2 seconds
mach_timespec_t duration = {
.tv_sec = 2,
.tv_nsec = 0
};
kern_return_t result = semaphore_timedwait(mSemaphore, duration);
// Timed out if(result == KERN_OPERATION_TIMED_OUT)
// Handle error if(result != KERN_SUCCESS)
// Clean up when finished
kern_return_t result = semaphore_destroy(mach_task_self(), mSemaphore);
// Handle error if(result != KERN_SUCCESS)

pthread: locking mutex with timeout

I am trying to implement the following logic (a kind of pseudo-code) using pthreads:
pthread_mutex_t mutex;

threadA()
{
    lock(mutex);
    // do work
    timed_lock(mutex, current_abs_time + 1 minute);
}

threadB()
{
    // do work in more than 1 minute
    unlock(mutex);
}
I expect threadA to do the work and then wait until threadB signals, but for no longer than 1 minute. I have done similar things many times in Win32 but am stuck with pthreads: the timed_lock part returns immediately (not after 1 minute) with the code ETIMEDOUT.
Is there a simple way to implement the logic above?
Even the following code returns ETIMEDOUT immediately:
pthread_mutex_t m;
// Thread A
pthread_mutex_init(&m, 0);
pthread_mutex_lock(&m);
// Thread B
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
struct timespec time = {now.tv_sec + 5, now.tv_nsec};
pthread_mutex_timedlock(&m, &time); // immediately return ETIMEDOUT
Does anyone know why? I have also tried with the gettimeofday function.
Thanks
I implemented my logic with condition variables, following the usual rules (a wrapping mutex, a bool flag, etc.).
Thank you all for the comments.
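For reference, a minimal sketch of that pattern; the names and the 60-second timeout are illustrative, not taken from my original code:
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool done = false; // the predicate that threadB sets

// threadA: wait for done to become true, but give up after 1 minute.
int wait_for_done(void)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline); // pthread_cond_timedwait takes an absolute CLOCK_REALTIME time by default
    deadline.tv_sec += 60;

    int rc = 0;
    pthread_mutex_lock(&lock);
    while (!done && rc == 0)
        rc = pthread_cond_timedwait(&cond, &lock, &deadline);
    pthread_mutex_unlock(&lock);
    return rc; // 0 on success, ETIMEDOUT if threadB took too long
}

// threadB: do the work, then publish the result and signal.
void signal_done(void)
{
    pthread_mutex_lock(&lock);
    done = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}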
For the second piece of code: AFAIK pthread_mutex_timedlock only works with CLOCK_REALTIME.
CLOCK_REALTIME counts seconds since 01/01/1970.
CLOCK_MONOTONIC typically counts seconds since boot.
Under these premises, the timeout you set is a few seconds into 1970 and therefore already in the past.
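For completeness, here is that snippet with the only substantive change being the clock (error handling still omitted):
#include <pthread.h>
#include <time.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

// Thread A
pthread_mutex_lock(&m);

// Thread B
struct timespec deadline;
clock_gettime(CLOCK_REALTIME, &deadline); // pthread_mutex_timedlock expects an absolute CLOCK_REALTIME timestamp
deadline.tv_sec += 5;
int rc = pthread_mutex_timedlock(&m, &deadline); // now blocks for about 5 seconds, then returns ETIMEDOUT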
Try something like this:
class CmyClass
{
    boost::mutex mtxEventWait;
    bool WaitForEvent(long milliseconds);
    boost::condition cndSignalEvent;
};

bool CmyClass::WaitForEvent(long milliseconds)
{
    boost::mutex::scoped_lock mtxWaitLock(mtxEventWait);
    boost::posix_time::time_duration wait_duration = boost::posix_time::milliseconds(milliseconds);
    boost::system_time const timeout = boost::get_system_time() + wait_duration;
    return cndSignalEvent.timed_wait(mtxEventWait, timeout); // wait until signal Event
}
// So, in order to wait, call the WaitForEvent method:
WaitForEvent(1000); // it will time out after 1 second

// This is how the event can be signalled:
cndSignalEvent.notify_one();

break Grand Central Dispatch run

I use:
dispatch_async(getDataQueue, ^{
    // do a lot of work: task A
    dispatch_async(mainQueue, ^{
        // do ...
    });
});
If I press the back key and GCD has not finished task A, I want to break out of the dispatch_async. How can I do that?
You could use a flag and keep working only while it is false:
// Somewhere accessible from the task's block and from the view controller
__block BOOL quit = NO;

dispatch_async(getDataQueue, ^{
    if (!quit)
    {
        // do first thing
    }
    if (!quit)
    {
        // do second thing
    }
    while (!quit)
    {
        // do lots of things
    }
    if (!quit)
    {
        dispatch_async(mainQueue, ^{
            // do the main-queue work from the question
        });
    }
});
And then you can stop the background task simply doing:
quit = YES;
This is the preferred method of stopping any background task anyway, as it allows the task to perform clean-up instead of being forced to terminate abruptly.
You cannot do this. One of the fundamental truths about GCD is that once you dispatch a block, it will run no matter what, as long as the queue is not suspended. If you need cancelable asynchronous operations, you will need to use NSOperation.

Resources