I am learning GCD now. I created two global queues with different priorities:
dispatch_queue_t q1 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t q2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
Then I dispatch a block to each queue:
dispatch_async(q1, ^{
    [self task:@"q1"];
});
dispatch_async(q2, ^{
    [self task:@"q2"];
});
- (void)task:(NSString *)taskid
{
    for (int i = 0; i < 1000; i++) {
        NSLog(@"Now executing taskid:%@ num : %d", taskid, i);
        [NSThread sleepForTimeInterval:5];
    }
}
But the result shows that the tasks on the two queues run concurrently; the higher-priority queue is not executed first. So what does priority really mean?
From the docs for DISPATCH_QUEUE_PRIORITY_HIGH:
Items dispatched to the queue run at high priority; the queue is scheduled for execution before any default priority or low priority queue.
So code from different queues can still run concurrently; it's just that the higher priority queue is scheduled for execution before lower priority queues. Since the two queues may be serviced by different threads, the two threads can run together. But it is quite possible that the higher priority queue (thread) will be given more opportunity to complete than the lower priority queue (thread).
It might be interesting to time how long (clock time) it takes task: to run (try several iterations) and see if the higher priority queue gets done faster than the lower priority queue.
Of course, the answer is in dispatch_queue_priority_t Constants, but the text is a bit misleading.
Why you don't see the behavior you expected…
This is a guess (and only a guess): you didn't crowd the CPU. In your test, the CPU has plenty of time to execute all the queues. No scheduler runs tasks based solely on priority; that would lead to low priority tasks never executing. The scheduler chooses tasks using priority as only one of the criteria.
If you set up a test with 100 or 1,000 concurrent DISPATCH_QUEUE_PRIORITY_DEFAULT and DISPATCH_QUEUE_PRIORITY_LOW tasks, you might start to see how the scheduler favors higher priority tasks.
UPDATE
Can't believe how wrong I was…
The answer is in dispatch_queue_priority_t Constants, and GCD will always pick higher priority queues over lower priority queues, just as the document says.
Here is my version of the sample code:
dispatch_queue_t q1 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t q2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
for (int i = 0; i < 50; ++i) {
    dispatch_async(q1, ^{
        for (int j = 0; j < 5; ++j) {
            NSLog(@"Now executing %@[%d:%d]", @"DISPATCH_QUEUE_PRIORITY_DEFAULT", i, j);
            [NSThread sleepForTimeInterval:0.00001];
        }
    });
}
for (int i = 0; i < 50; ++i) {
    dispatch_async(q2, ^{
        for (int j = 0; j < 5; ++j) {
            NSLog(@"Now executing %@[%d:%d]", @"DISPATCH_QUEUE_PRIORITY_LOW", i, j);
            [NSThread sleepForTimeInterval:0.00001];
        }
    });
}
The output:
2014-04-15 21:27:22.525 APP_NAME[10651:1103] Now executing DISPATCH_QUEUE_PRIORITY_DEFAULT[0:0]
2014-04-15 21:27:22.526 APP_NAME[10651:1103] Now executing DISPATCH_QUEUE_PRIORITY_DEFAULT[0:1]
2014-04-15 21:27:22.526 APP_NAME[10651:3803] Now executing DISPATCH_QUEUE_PRIORITY_DEFAULT[1:0]
…
2014-04-15 21:27:22.810 APP_NAME[10651:3b03] Now executing DISPATCH_QUEUE_PRIORITY_DEFAULT[47:3]
2014-04-15 21:27:22.812 APP_NAME[10651:3b03] Now executing DISPATCH_QUEUE_PRIORITY_DEFAULT[47:4]
2014-04-15 21:27:22.812 APP_NAME[10651:3f03] Now executing DISPATCH_QUEUE_PRIORITY_DEFAULT[39:4]
2014-04-15 21:27:22.813 APP_NAME[10651:3d07] Now executing DISPATCH_QUEUE_PRIORITY_LOW[0:0]
2014-04-15 21:27:22.813 APP_NAME[10651:3f03] Now executing DISPATCH_QUEUE_PRIORITY_LOW[1:0]
2014-04-15 21:27:22.813 APP_NAME[10651:3d07] Now executing DISPATCH_QUEUE_PRIORITY_LOW[0:1]
…
2014-04-15 21:27:22.998 APP_NAME[10651:3d07] Now executing DISPATCH_QUEUE_PRIORITY_LOW[49:3]
2014-04-15 21:27:22.999 APP_NAME[10651:3f03] Now executing DISPATCH_QUEUE_PRIORITY_LOW[48:4]
2014-04-15 21:27:22.999 APP_NAME[10651:3d07] Now executing DISPATCH_QUEUE_PRIORITY_LOW[49:4]
No DISPATCH_QUEUE_PRIORITY_LOW task was executed before the DISPATCH_QUEUE_PRIORITY_DEFAULT tasks.
Related
On an iPad, how many parallel operations can I start to get maximum performance? Each one runs a query operation, calculations, etc.
Does it depend on the iPad model (CPU)?
int count = [objects count];
if (count > 0)
{
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    for (int i = 0; i < count; i++)
    {
        dispatch_group_async(group, queue, ^{
            for (int j = i + 1; j < count; j++)
            {
                dispatch_group_async(group, queue, ^{
                    /** LOTS AND LOTS OF WORK FOR EACH OBJECT **/
                });
            }
        });
    }
    dispatch_group_notify(group, queue, ^{
        /** END OF ALL OPERATIONS */
    });
}
This is basically a UX question; it depends on the needs of the end user.
Does he really need all those computations started and finished quickly?
Can you delay some or most of them?
It's good practice to show the user the progress of each computation (a progress bar) and notify him upon completion.
Let him choose which to start / stop / pause (this is a very important feature).
If all the tasks are local and CPU intensive, and not tied to fetching resources over the network, then it depends on the device's CPU: how many threads it can run in parallel.
If we are using the GCD approach for iteration, how do we break/stop the loop once a condition matches?
queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_apply(count, queue, ^(size_t i) {
    printf("%zu\n", i);
    // doing a thread-safe (also heavy) operation here
    if (condition) {
        // exit the loop
    }
});
It is not possible to cancel dispatch_apply, because the iterations are not executed sequentially but concurrently. The purpose of dispatch_apply is to parallelize a for-loop whose iterations are all independent of each other.
However, you can use a boolean that indicates the condition has been satisfied. Iterations that have not run yet then return immediately as they are invoked.
__block BOOL stop = NO;
queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(count, queue, ^(size_t i) {
    if (stop)
        return;
    // Do stuff
    if (condition)
        stop = YES;
});
I have a problem with distributing tasks in OpenMP.
I have the following code:
#include <stdio.h>
#include <unistd.h>

int cnttotal = 0;
int cnt1 = 0, cnt2 = 0;

int main()
{
    int i;
    #pragma omp parallel
    #pragma omp single nowait
    for (i = 0; i < 60; i++) {
        if (cnttotal < 1) {
            cnttotal++;
            #pragma omp task
            {
                #pragma omp atomic
                cnt1++;
                usleep(10);
                cnttotal--;
            }
        } else {
            #pragma omp task
            {
                #pragma omp atomic
                cnt2++;
                sleep(1);
            }
        }
    }
    printf("cnt1 = %d; cnt2 = %d\n", cnt1, cnt2);
    return 0;
}
Whatever I do, cnt1 = 1 and cnt2 = 59. I think the problem is in the OpenMP scheduler, or there is something I don't catch.
My feeling is that you are confusing the instantiation of a task with its actual execution. #pragma omp task refers to the instantiation of a task, and that is extremely fast. Separately, an idle thread of the OpenMP runtime looks in a list for ready tasks and executes them.
Going into the problem you posted: in this code, a running thread (say T1) enters the first iteration (i = 0), so it takes the first if branch, sets cnttotal to 1, and instantiates the first task (cnt1). After that instantiation, T1 keeps instantiating the remaining tasks while an idle thread (say T2) executes the task cnt1, which takes approximately 10 µs and sets cnttotal back to 0.
So in brief: the thread that instantiates the tasks finishes all the instantiations faster than the 10 µs the cnt1 task takes to execute.
For instance, on my Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz, if I change the code so that the loop runs until i = 500 and each task sleeps 1 µs (usleep(1)), I get:
cnt1 = 2; cnt2 = 498
which shows that instantiation of tasks is extremely fast.
In my app I'm doing some audio processing.
In the for loop over the audio buffer, there is an NSMutableArray. The loop is called a huge number of times every second (depending on the buffer size).
As an example:
@autoreleasepool
{
    for ( int i = 0; i < tempBuffer.mDataByteSize / 2; ++i )
    {
        if ( samples[i] > trig)
        {
            [self.k_Array addObject:[NSNumber numberWithInt:k]];
            // other stuff
        }
    }
}
Then, every second, I'm calling a function for other processing.
- (void)realtimeUpdate:(NSTimer*)theTimer
{
    // Create a copy of the array
    NSMutableArray *k_ArrayCopy = [NSMutableArray arrayWithArray:k_Array]; // CRASH with EXC_BAD_ACCESS code 1 error
    // do some stuff with k_ArrayCopy
}
I sometimes receive an EXC_BAD_ACCESS error because of, I think, a locking problem with the array.
I spent a lot of time trying to get information on queues, locking, working copies, etc... but I'm lost on this specific case.
My questions :
do I have to use atomic or nonatomic for k_array ?
do I have to use a dispatch_sync function ? If so, where exactly ?
should the realtimeUpdate function be called on background ?
Thanks in advance !
Use a dispatch queue; that will solve the problem.
// create queue instance variable
dispatch_queue_t q = dispatch_queue_create("com.safearrayaccess.samplequeue", NULL);

// 1.
@autoreleasepool
{
    for ( int i = 0; i < tempBuffer.mDataByteSize / 2; ++i )
    {
        if ( samples[i] > trig)
        {
            dispatch_async(q, ^{
                // queue block
                [self.k_Array addObject:[NSNumber numberWithInt:k]];
            });
            // other stuff NOTE: if it's an operation on the array, do it in a queue block only
        }
    }
}
// 2.
- (void)realtimeUpdate:(NSTimer*)theTimer
{
    // Create a copy of the array
    __block NSMutableArray *k_ArrayCopy; // assigned inside the block, so it needs __block
    dispatch_sync(q, ^{ // sync, not async: the copy must exist before we use it below
        // queue block
        k_ArrayCopy = [NSMutableArray arrayWithArray:k_Array];
    });
    // do some stuff with k_ArrayCopy
}
Now your array add and read operations are on the same queue, so they will not conflict.
For more details on using dispatch queues, go through Apple's Grand Central Dispatch documentation.
Another way of doing this is to use NSConditionLock.
I am working on Ubuntu 12.04.2 LTS. I have a strange problem with pthread_kill(). The following program ends after writing only "Create thread 0!" to standard output. The program ends with exit status 138.
If I uncomment "usleep(1000);" everything executes properly. Why would this happen?
#include <nslib.h>

void *testthread(void *arg);

int main() {
    pthread_t tid[10];
    int i;
    for (i = 0; i < 10; ++i) {
        printf("Create thread %d!\n", i);
        Pthread_create(&tid[i], testthread, NULL);
        //usleep(1000);
        Pthread_kill(tid[i], SIGUSR1);
        printf("Joining thread %d!\n", i);
        Pthread_join(tid[i]);
        printf("Joined %d!", i);
    }
    return 0;
}

void sighandlertest(int sig) {
    printf("print\n");
    pthread_exit();
    //return NULL;
}

void *testthread(void *arg) {
    struct sigaction saction;
    memset(&saction, 0, sizeof(struct sigaction));
    saction.sa_handler = &sighandlertest;
    if (sigaction(SIGUSR1, &saction, NULL) != 0) {
        fprintf(stderr, "Sigaction failed!\n");
    }
    printf("Starting while...\n");
    while (true) {
    }
    return 0;
}
If the main thread does not sleep a bit before raising SIGUSR1, the signal handler in the newly created thread most probably has not been set up yet, so the default action for the signal applies, which is terminating the process.
Using sleep()s to synchronise threads is not recommended, as it is not guaranteed to be reliable. Use other mechanisms here; a condition/mutex pair would be suitable.
Declare a global state variable int signalhandlersetup = 0 and protect access to it with a mutex. Create the thread, make the main thread wait using pthread_cond_wait(), let the created thread set up the signal handler for SIGUSR1, set signalhandlersetup = 1, and then signal the condition the main thread is waiting on using pthread_cond_signal(). Finally, let the main thread call pthread_kill() as in your posting.