Is there any good documentation on how many threads are created by GCD?
At WWDC, they told us it's modeled around CPU cores. However, if I call this example:
for (int i = 1; i < 30000; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [NSThread sleepForTimeInterval:100000];
    });
}
it opens 66 threads, even on an iPad1. (It also opens 66 threads when called on Lion natively). Why 66?
First, 66 == 64 (the maximum GCD thread pool size) + the main thread + some other random non-GCD thread.
Second, GCD is not magic. It is optimized for keeping the CPU busy with code that is mostly CPU bound. The "magic" of GCD is that it dynamically creates more threads than CPUs when work items unintentionally and briefly wait for operations to complete.
Having said that, code can confuse the GCD scheduler by intentionally sleeping or waiting for events instead of using dispatch sources to wait for events. In these scenarios, the block of work is effectively implementing its own scheduler and therefore GCD must assume that the thread has been co-opted from the thread pool.
In short, the thread pool will operate optimally if your code prefers dispatch_after() over "sleep()" like APIs, and dispatch sources over handcrafted event loops (Unix select()/poll(), Cocoa runloops, or POSIX condition variables).
The documentation avoids mentioning the number of threads created, mostly because the optimal number of threads depends heavily on the context.
One issue with Grand Central Dispatch is that it will spawn a new thread if a running task blocks. That is, you should avoid blocking when using GCD, as having more threads than cores is suboptimal.
In your case, GCD detects that the task is inactive, and spawns a new thread for the next task.
Why 66 is the limit is beyond me.
The answer should be: 512
There are different cases:
Custom (self-built) queues, both concurrent and serial, can create up to 512 threads simultaneously.
All global queues can create up to 64 threads simultaneously.
As a general rule of thumb, if the number of threads in an app exceeds 64, the main thread will start to lag; in severe cases it may even trigger a watchdog crash.
This is because creating threads has overhead. The main costs under iOS are kernel data structures (about 1 KB per thread) and stack space (512 KB for child threads, 1 MB for the main thread). The stack size can also be set with -setStackSize:, but it must be a multiple of 4 KB, with a minimum of 16 KB. Creating a thread takes about 90 ms.
Opening a large number of threads also reduces the program's performance: the more threads there are, the more CPU overhead goes into scheduling them, and the program design becomes more complex (communication between threads, data sharing among multiple threads).
First of all, GCD has a limit on the number of threads it can create. GCD creates threads by calling _pthread_workqueue_addthreads, and the kernel limits how many threads can be added through that call. Threads created by other means do not go through this call.
Back to the question above:
All global queues can create up to 64 threads at the same time.
Global-queue threads are added via the _pthread_workqueue_addthreads method, and the kernel limits the number of threads that can be added this way.
The specific code for the restriction is shown below:
#define MAX_PTHREAD_SIZE 64*1024
The meaning of this code is as follows: the total size is limited to 64 KB, and according to Apple's Threading Programming Guide (thread creation costs), each thread allocates about 1 KB of kernel memory, so we can deduce that the result is 64 KB / 1 KB = 64 threads.
In summary, the global queue can create up to 64 threads, a limit that is hard-coded in the kernel.
A more detailed test code is as follows:
Test Environment : iOS 14.3
Test code
case 1: Global Queue - CPU Busy
In the first test case, we use dispatch_get_global_queue(0, 0) to get a default global queue and simulate CPU busy.
+ (void)printThreadCount {
    kern_return_t kr = { 0 };
    thread_array_t thread_list = { 0 };
    mach_msg_type_number_t thread_count = { 0 };
    kr = task_threads(mach_task_self(), &thread_list, &thread_count);
    if (kr != KERN_SUCCESS) {
        return;
    }
    NSLog(@"threads count:%@", @(thread_count));
    kr = vm_deallocate(mach_task_self(), (vm_offset_t)thread_list, thread_count * sizeof(thread_t));
    if (kr != KERN_SUCCESS) {
        return;
    }
}
+ (void)test1 {
    NSMutableSet<NSThread *> *set = [NSMutableSet set];
    for (int i = 0; i < 1000; i++) {
        dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
        dispatch_async(queue, ^{
            NSThread *thread = [NSThread currentThread];
            [set addObject:thread];
            dispatch_async(dispatch_get_main_queue(), ^{
                NSLog(@"start:%@", thread);
                NSLog(@"GCD threads count:%lu", (unsigned long)set.count);
                [self printThreadCount];
            });
            NSDate *date = [NSDate dateWithTimeIntervalSinceNow:10];
            long n = 0;
            while ([date compare:[NSDate date]]) {
                n++;
            }
            [set removeObject:thread];
            NSLog(@"end:%@", thread);
        });
    }
}
After testing, the number of threads is 2.
case 2: Global queue - CPU idle
For the second test case, we simulate an idle CPU with [NSThread sleepForTimeInterval:10];.
+ (void)test2 {
    NSMutableSet<NSThread *> *set = [NSMutableSet set];
    for (int i = 0; i < 1000; i++) {
        dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
        dispatch_async(queue, ^{
            NSThread *thread = [NSThread currentThread];
            [set addObject:thread];
            dispatch_async(dispatch_get_main_queue(), ^{
                NSLog(@"start:%@", thread);
                NSLog(@"GCD threads count:%lu", (unsigned long)set.count);
                [self printThreadCount];
            });
            // thread sleeps for 10 s
            [NSThread sleepForTimeInterval:10];
            [set removeObject:thread];
            NSLog(@"end:%@", thread);
        });
    }
}
After testing, the maximum number of threads is 64
Self-built concurrent and serial queues can create up to 512 threads simultaneously.
A more detailed test code is as follows:
case 1: Self-built Queues - CPU Busy
Now, let us look at self-built queues with a busy CPU. This example simulates a common app scenario, where different modules create separate queues to manage their own tasks.
+ (void)test3 {
    NSMutableSet<NSThread *> *set = [NSMutableSet set];
    for (int i = 0; i < 1000; i++) {
        const char *label = [NSString stringWithFormat:@"label-:%d", i].UTF8String;
        NSLog(@"create:%s", label);
        dispatch_queue_t queue = dispatch_queue_create(label, DISPATCH_QUEUE_SERIAL);
        dispatch_async(queue, ^{
            NSThread *thread = [NSThread currentThread];
            [set addObject:thread];
            dispatch_async(dispatch_get_main_queue(), ^{
                static NSInteger lastCount = 0;
                if (set.count <= lastCount) {
                    return;
                }
                lastCount = set.count;
                NSLog(@"begin:%@", thread);
                NSLog(@"GCD threads count:%lu", (unsigned long)set.count);
                [self printThreadCount];
            });
            NSDate *date = [NSDate dateWithTimeIntervalSinceNow:10];
            long n = 0;
            while ([date compare:[NSDate date]]) {
                n++;
            }
            [set removeObject:thread];
            NSLog(@"end:%@", thread);
        });
    }
}
After testing, the maximum number of threads created by GCD is 512
case 2: Self-built queues - CPU idle
+ (void)test4 {
    NSMutableSet<NSThread *> *set = [NSMutableSet set];
    for (int i = 0; i < 10000; i++) {
        const char *label = [NSString stringWithFormat:@"label-:%d", i].UTF8String;
        NSLog(@"create:%s", label);
        dispatch_queue_t queue = dispatch_queue_create(label, DISPATCH_QUEUE_SERIAL);
        dispatch_async(queue, ^{
            NSThread *thread = [NSThread currentThread];
            dispatch_async(dispatch_get_main_queue(), ^{
                [set addObject:thread];
                static NSInteger lastCount = 0;
                if (set.count <= lastCount) {
                    return;
                }
                lastCount = set.count;
                NSLog(@"begin:%@", thread);
                NSLog(@"GCD threads count:%lu", (unsigned long)set.count);
                [self printThreadCount];
            });
            [NSThread sleepForTimeInterval:10];
            dispatch_async(dispatch_get_main_queue(), ^{
                [set removeObject:thread];
                NSLog(@"end:%@", thread);
            });
        });
    }
}
With self-built queues and an idle CPU, the maximum number of threads created is 512.
Other test code
__block int index = 0;
// one global concurrent queue test
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
for (int i = 0; i < 1000; ++i) {
    dispatch_async(queue, ^{
        id name = nil;
        @synchronized (self) {
            name = [NSString stringWithFormat:@"gcd-limit-test-global-concurrent-%d", index];
            index += 1;
        }
        NSThread.currentThread.name = name;
        NSLog(@"%@", name);
        sleep(100000);
    });
}
// several concurrent queues test
for (int i = 0; i < 1000; ++i) {
    char buffer[256] = {};
    sprintf(buffer, "gcd-limit-test-concurrent-%d", i);
    dispatch_queue_t queue = dispatch_queue_create(buffer, DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(queue, ^{
        id name = nil;
        @synchronized (self) {
            name = [NSString stringWithFormat:@"gcd-limit-test-concurrent-%d", index];
            index += 1;
        }
        NSThread.currentThread.name = name;
        NSLog(@"%@", name);
        sleep(100000);
    });
}
// several serial queues test
for (int i = 0; i < 1000; ++i) {
    char buffer[256] = {};
    sprintf(buffer, "gcd-limit-test-%d", i);
    dispatch_queue_t queue = dispatch_queue_create(buffer, 0);
    dispatch_async(queue, ^{
        id name = nil;
        @synchronized (self) {
            name = [NSString stringWithFormat:@"gcd-limit-test-%d", index];
            index += 1;
        }
        NSThread.currentThread.name = name;
        NSLog(@"%@", name);
        sleep(100000);
    });
}
A special note here: the 512 mentioned is GCD's limit. After 512 GCD threads have been started, you can still create additional threads with NSThread.
So, in the chart above, 516 = 512 (GCD max) + the main thread + a JS thread + a web thread + a UIKit event thread.
Conclusion
After testing, GCD's global queue automatically limits the number of threads to a reasonable number, whereas self-built queues can create a much larger number of threads. Since too many threads drives up CPU scheduling costs, it is recommended that small apps manage tasks with a global queue as much as possible; large apps can choose a suitable scheme according to their actual situation.
The number of busy threads equals the number of CPU cores. Blocked threads (here you block them with sleepForTimeInterval:) are not counted.
If you change your code to this (Swift):
for _ in 1..<30000 {
DispatchQueue.global().async {
while true {}
}
}
you'll see that there are just 2 threads created (on iPhone SE):
There is a thread limit so that your app is not killed by runaway memory consumption when you have a problem with blocked threads (usually caused by a deadlock). Never block threads, and you'll have roughly as many threads as cores.
Related
I thought "end" would be printed somewhere during the for loop, but that is wrong. Can you tell me why? This is the code:
dispatch_queue_t queue = dispatch_queue_create("queue", DISPATCH_QUEUE_CONCURRENT);
for (NSUInteger i = 0; i < 1000; i++) {
    dispatch_async(queue, ^{
        NSLog(@"i:%lu", (unsigned long)i);
    });
}
dispatch_async(queue, ^{
    NSLog(@"end:%@", [NSThread currentThread]);
});
Result:
2018-03-22 19:26:33.812371+0800 MyIOSNote[96704:912772] i:990
2018-03-22 19:26:33.812671+0800 MyIOSNote[96704:912801] i:991
2018-03-22 19:26:33.812935+0800 MyIOSNote[96704:912662] i:992
2018-03-22 19:26:33.813295+0800 MyIOSNote[96704:912802] i:993
2018-03-22 19:26:33.813552+0800 MyIOSNote[96704:912766] i:994
2018-03-22 19:26:33.813856+0800 MyIOSNote[96704:912778] i:995
2018-03-22 19:26:33.814299+0800 MyIOSNote[96704:912803] i:996
2018-03-22 19:26:33.814648+0800 MyIOSNote[96704:912779] i:997
2018-03-22 19:26:33.814930+0800 MyIOSNote[96704:912759] i:998
2018-03-22 19:26:33.815361+0800 MyIOSNote[96704:912804] i:999
2018-03-22 19:26:33.815799+0800 MyIOSNote[96704:912805] end:<NSThread: 0x60400027e200>{number = 3, name = (null)}
Look at the order of execution. You first enqueue 1000 blocks to print a number. Then you enqueue the block to print "end". All of those blocks are enqueued to run asynchronously on the same concurrent background queue. All 1001 calls to dispatch_async are made in order, one at a time, on whatever thread this code is running on, which is different from the queue all of the enqueued blocks will run on.
The concurrent queue will pop each block in the order it was enqueued and run it. Since it is a concurrent queue and since each is to be run asynchronously, in theory, some of them could be a bit out of order. But in general, the output will appear in the same order because each block does exactly the same thing - a simple NSLog statement.
But the short answer is that "end" is printed last because it was enqueued last, after all of the other blocks have been enqueued.
What may help is to log each call as it is enqueued:
dispatch_queue_t queue = dispatch_queue_create("queue", DISPATCH_QUEUE_CONCURRENT);
for (NSUInteger i = 0; i < 1000; i++) {
    NSLog(@"enqueue i: %lu", (unsigned long)i);
    dispatch_async(queue, ^{
        NSLog(@"run i: %lu", (unsigned long)i);
    });
}
NSLog(@"enqueue end");
dispatch_async(queue, ^{
    NSLog(@"run end: %@", [NSThread currentThread]);
});
The for loop runs on the main queue (which is serial), so the loop finishes enqueuing everything before the "end" statement is printed. If you wrap the for loop inside the async, like this:
dispatch_queue_t queue = dispatch_queue_create("queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
    for (NSUInteger i = 0; i < 1000; i++) {
        NSLog(@"i:%lu", (unsigned long)i);
    }
});
dispatch_async(queue, ^{
    NSLog(@"end:%@", [NSThread currentThread]);
});
you will get "end" interleaved with (or even before) the loop output, because the two blocks then run concurrently on the concurrent queue.
To combine both of the previous answers, the reason you see end printed last is because you enqueue serially, but each block executes very quickly. By the time you enqueue the log of end, all the other blocks have already executed.
I have some data accumulating in a buffer, and I need to read it whenever the buffer has data, with proper thread synchronization. I've worked a little with GCD, but I'm failing to get this right. Please help me build a circular buffer with read and write threads in synchronization.
My Code:
- (void)viewDidLoad {
    [super viewDidLoad];
    readPtr = 0;
    writePtr = 0;
    currentPtr = 0;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        while (YES) {
            [self writeToBuffer:buffer[0] withBufferSize:bufferSize];
        }
    });
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        while (YES) {
            float *newBuffer;
            if (currentPtr > 512) {
                newBuffer = [self readBuffer];
            } else {
                continue;
            }
            [self UseBuffer:newBuffer];
        }
    });
}

- (void)writeToBuffer:(float *)Values withBufferSize:(int)bSize {
    [_lock lock];
    for (int i = 0; i < bSize; i++) {
        if (writePtr > 1859) {
            writePtr = 0;
        }
        globalBuffer[writePtr] = Values[i];
        writePtr++;
        currentPtr++;
    }
    NSLog(@"Writing");
    [_lock unlock];
}

- (float *)readBuffer {
    [_lock lock];
    float rBuffer[512];
    for (int i = 0; i < 512; i++) {
        if (readPtr > 1859) {
            readPtr = 0;
        }
        rBuffer[i] = globalBuffer[readPtr];
        readPtr++;
        currentPtr--;
    }
    NSLog(@"Reading");
    [_lock unlock];
    return rBuffer;
}
One of the key points of GCD is that it completely replaces the need for locks. So, if you are mixing GCD and mutex locks it is typically a sign that you're doing things wrong or sub-optimally.
A serial queue is, effectively, an exclusive lock on whatever is associated with the serial queue.
There are a bunch of problems in your code.
while (YES) {...} is going to spin, burning CPU cycles ad infinitum.
The readBuffer method is returning a pointer to a stack based buffer. That won't work.
It isn't really clear what the goal of the code is, but those are some specific issues.
So, with some help, I am now clearer on how nested GCD works in my program.
The original post is at:
Making sure I'm explaining nested GCD correctly
You don't need to go through the original post, though; basically, the code here runs database work in the background while the UI stays responsive:
- (void)viewDidLoad {
    dispatch_queue_t concurrencyQueue = dispatch_queue_create("com.epam.halo.queue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t serialQueue = dispatch_queue_create("com.epam.halo.queue2", DISPATCH_QUEUE_SERIAL);
    for (int i = 0; i < 10; i++) {
        dispatch_async(concurrencyQueue, ^() {
            NSLog(@"START insertion method%d <--", i);
            dispatch_sync(serialQueue, ^() {
                // this is to simulate writing to the database
                NSLog(@"----------START %d---------", i);
                [NSThread sleepForTimeInterval:1.0f];
                NSLog(@"--------FINISHED %d--------", i);
            });
            NSLog(@"END insertion method%d <--", i);
        });
    }
}
However, when I start refactoring them and putting them into methods and making everything look nice, the UI does not respond anymore:
// some database singleton class
// the serial queues are declared in the class's private extension and created in init
- (void)executeDatabaseStuff:(int)i {
    dispatch_sync(serialQueue, ^() {
        // this is to simulate writing to the database
        NSLog(@"----------START--------- %d", i);
        [NSThread sleepForTimeInterval:1.0f];
        NSLog(@"--------FINISHED-------- %d", i);
    });
}

- (void)testInsert:(int)i {
    dispatch_async(concurrencyQueue, ^() {
        [self executeDatabaseStuff:i];
    });
}
// ViewController.m
- (void)viewDidLoad {
    // UI is unresponsive :(
    for (int i = 0; i < totalNumberOfPortfolios; i++) {
        NSLog(@"START insertion method%d <--", i);
        [[DatabaseFunctions sharedDatabaseFunctions] testInsert:i];
        NSLog(@"END insertion method%d <--", i);
    }
}
The only way to make the refactored version work is when I put dispatch_async(dispatch_get_main_queue():
for (int i = 0; i < totalNumberOfPortfolios; i++) {
    dispatch_async(dispatch_get_main_queue(), ^() {
        NSLog(@"START insertion method%d <--", i);
        [[DatabaseFunctions sharedDatabaseFunctions] testInsert:i];
        NSLog(@"END insertion method%d <--", i);
    });
}
So my question is: I thought that using dispatch_async on the concurrencyQueue would ensure that my main thread is not touched by the dispatch_sync/serialQueue combo. Why is it that when I wrap it in an object/method, I must use dispatch_async(dispatch_get_main_queue(), ...)?
It seems that whether my main thread calls dispatch_async on a concurrent queue directly in viewDidLoad, or from within a method, does indeed matter.
I am thinking that all these testInsert calls are being pushed onto the main thread, which must then process them. So even though the dispatch_sync is not blocking the main thread, the main thread runs to the end of viewDidLoad and must wait for all the testInsert calls to be processed before it can move on to the next task in the main queue?
Notes
So I went home and tested it again with this:
for (int i = 0; i < 80; i++) {
    NSLog(@"main thread %d <-- ", i);
    dispatch_async(concurrencyQueue, ^() {
        [NSThread isMainThread] ? NSLog(@"is the main thread") : NSLog(@"not main thread");
        NSLog(@"concurrent Q thread %i <--", i);
        dispatch_sync(serialQueue, ^() {
            // this is to simulate writing to the database
            NSLog(@"serial Q thread ----------START %d---------", i);
            [NSThread sleepForTimeInterval:1.0f];
            NSLog(@"serial Q thread --------FINISHED %d--------", i);
        });
        NSLog(@"concurrent Q thread %i -->", i);
    });
    NSLog(@"main thread %d --> ", i);
} // for loop
When I run the loop from 1 - 63, the UI is not blocked. And I see my database operation processing in the background.
Then when the loop is 64, UI is blocked for 1 database operation, then returns fine.
When I use 65, UI freezes for 2 database operations, then returns fine...
When I use something like 80, it gets blocked from 64-80...so I wait 16 seconds before my UI is responsive.
At the time, I couldn't figure out why 64. Now I know that it's 64 concurrent threads allowed at once, and it has nothing to do with wrapping it in an object/method. :D
Many thanks for the awesome help from the contributors!
There is a hard limit of 64 GCD concurrent operations (per top level concurrent queue) that can be run together.
What's happening is you're submitting over 64 blocks to your concurrent queue, each of them getting blocked by the [NSThread sleepForTimeInterval:1.0f], forcing a new thread to be created for each operation. Therefore, once the thread limit is reached, it backs up and starts to block the main thread.
I have tested this with 100 "database write" operations (on device), and the main thread appears to be blocked until 36 operations have taken place (at which point only 64 operations remain, so the main thread becomes unblocked).
The use of a singleton shouldn't cause you any problems, as you're calling methods to that synchronously, therefore there shouldn't be thread conflicts.
The simplest solution to this is just to use a single background serial queue for your "database write" operations. This way, only one thread is being created to handle the operation.
- (void)viewDidLoad {
    [super viewDidLoad];
    static dispatch_once_t t;
    dispatch_once(&t, ^{
        serialQueue = dispatch_queue_create("com.epam.halo.queue2", DISPATCH_QUEUE_SERIAL);
    });
    for (int i = 0; i < 100; i++) {
        [self testInsert:i];
    }
}

- (void)executeDatabaseStuff:(int)i {
    // this is to simulate writing to the database
    NSLog(@"----------START--------- %d", i);
    [NSThread sleepForTimeInterval:1.0f];
    NSLog(@"--------FINISHED-------- %d", i);
}

- (void)testInsert:(int)i {
    NSLog(@"Start insert.... %d", i);
    dispatch_async(serialQueue, ^() {
        [self executeDatabaseStuff:i];
    });
    NSLog(@"End insert... %d", i);
}
I don't know why inserting dispatch_async(dispatch_get_main_queue(), ^() {} inside your for loop was working for you... I can only assume it was off-loading the "database writing" until after the interface had loaded.
Further resources on threads & GCD
Number of threads created by GCD?
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html#//apple_ref/doc/uid/TP40008091-CH102-SW1
https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/Multithreading/CreatingThreads/CreatingThreads.html#//apple_ref/doc/uid/10000057i-CH15-SW2
I'm transitioning a large file copy operation from NSStream to a dispatch IO implementation with GCD.
When copying two 1GB files together into a single 2GB file, the app consumes 2GB of memory with GCD. The NSStream implementation consumes just 50MB.
In Instruments, I can see start_wqthread calls allocating 1MB chunks, as I requested with my block size for the dispatch IO high water mark, but instead of being freed after being written to the output stream, they hang around.
How can I free the buffer after it has been written to the output stream?
If I create a completely new OS X Cocoa application in Xcode and paste the following code in the applicationDidFinishLaunching: method, it will consume 500-2000MB of memory. (To test, replace the temp file references with local file references.)
When creating a new project using the OS 10.9 SDK targeting OS 10.9, calls to dispatch_release() are forbidden by ARC. When targeting OS 10.6 in an older project, even with ARC enabled, calls to dispatch_release() are allowed but have no effect on the memory footprint.
NSArray *files = @[@"/1GBFile.tmp", @"/1GBFile2.tmp"];
NSString *outFile = @"/outFile.tmp";
NSString *queueName = [NSString stringWithFormat:@"%@.IO", [[NSBundle mainBundle].infoDictionary objectForKey:(id)kCFBundleIdentifierKey]];
dispatch_queue_t queue = dispatch_queue_create(queueName.UTF8String, DISPATCH_QUEUE_SERIAL);
dispatch_io_t io_write = dispatch_io_create_with_path(DISPATCH_IO_STREAM, outFile.UTF8String, (O_RDWR | O_CREAT | O_APPEND), (S_IWUSR | S_IRUSR | S_IRGRP | S_IROTH), queue, NULL);
dispatch_io_set_high_water(io_write, 1024*1024);
[files enumerateObjectsUsingBlock:^(NSString* file, NSUInteger idx, BOOL *stop) {
dispatch_io_t io_read = dispatch_io_create_with_path(DISPATCH_IO_STREAM, file.UTF8String, O_RDONLY, 0, queue, NULL);
dispatch_io_set_high_water(io_read, 1024*1024);
dispatch_io_read(io_read, 0, SIZE_MAX, queue, ^(bool done, dispatch_data_t data, int error) {
if (error) {
dispatch_io_close(io_write, 0);
return;
}
if (data) {
size_t bytesRead = dispatch_data_get_size(data);
if (bytesRead > 0) {
dispatch_io_write(io_write, 0, data, queue, ^(bool doneWriting, dispatch_data_t dataToBeWritten, int errorWriting) {
if (errorWriting) {
dispatch_io_close(io_read, DISPATCH_IO_STOP);
}
});
}
}
if (done) {
dispatch_io_close(io_read, 0);
if (files.count == (idx+1)) {
dispatch_io_close(io_write, 0);
}
}
});
}];
I believe I've worked out a solution using a dispatch group.
The code essentially copies each file in sequence synchronously (blocking the loop from processing the next file until the previous file has been completely read and written), but allows the file reading and writing operations to be queued asynchronously.
I believe the memory over-consumption was due to the fact that reads for multiple files were being queued simultaneously. I would have thought that would be fine for a serial queue, but it seems that gating progress with a dispatch group, so that only the work to read and write a single file is queued at a time, does the trick. With the following code, peak memory usage is ~7 MB.
Now, a single input file is queued to be read, each read operation queues its corresponding write operations, and the loop on the input files is blocked until all reading and writing operations are complete.
NSArray *files = @[@"/1GBFile.tmp", @"/1GBFile2.tmp"];
NSString *outFile = @"/outFile.tmp";
NSString *queueName = [NSString stringWithFormat:@"%@.IO", [[NSBundle mainBundle].infoDictionary objectForKey:(id)kCFBundleIdentifierKey]];
dispatch_queue_t queue = dispatch_queue_create(queueName.UTF8String, DISPATCH_QUEUE_SERIAL);
dispatch_group_t group = dispatch_group_create();
dispatch_io_t io_write = dispatch_io_create_with_path(DISPATCH_IO_STREAM, outFile.UTF8String, (O_RDWR | O_CREAT | O_APPEND), (S_IWUSR | S_IRUSR | S_IRGRP | S_IROTH), queue, NULL);
dispatch_io_set_high_water(io_write, 1024*1024);
[files enumerateObjectsUsingBlock:^(NSString* file, NSUInteger idx, BOOL *stop) {
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
if (*stop) {
return;
}
dispatch_group_enter(group);
dispatch_io_t io_read = dispatch_io_create_with_path(DISPATCH_IO_STREAM, file.UTF8String, O_RDONLY, 0, queue, NULL);
dispatch_io_set_high_water(io_read, 1024*1024);
dispatch_io_read(io_read, 0, SIZE_MAX, queue, ^(bool done, dispatch_data_t data, int error) {
if (error || *stop) {
dispatch_io_close(io_write, 0);
*stop = YES;
return;
}
if (data) {
size_t bytesRead = dispatch_data_get_size(data);
if (bytesRead > 0) {
dispatch_group_enter(group);
dispatch_io_write(io_write, 0, data, queue, ^(bool doneWriting, dispatch_data_t dataToBeWritten, int errorWriting) {
if (errorWriting || *stop) {
dispatch_io_close(io_read, DISPATCH_IO_STOP);
*stop = YES;
dispatch_group_leave(group);
return;
}
if (doneWriting) {
dispatch_group_leave(group);
}
});
}
}
if (done) {
dispatch_io_close(io_read, 0);
if (files.count == (idx+1)) {
dispatch_io_close(io_write, 0);
}
dispatch_group_leave(group);
}
});
}];
I'm not sure what [self cleanUpAndComplete]; is; however, it doesn't appear you ever call dispatch_io_close for the other channels you've created (only io_read).
From the dispatch_io_create documentation:
The returned object is retained before it is returned; it is your
responsibility to close the channel and then release this object when
you are done using it.
The code below is called each time a scroll view scrolls, and if the user scrolls multiple times, it crashes. How do I make sure only one block executes at a time, i.e. make this thread-safe?
[self.cv addInfiniteScrollingWithActionHandler:^{
[weakSelf loadNextPage];
}];
Here is an example:
- (void)_startExperiment {
    FooClass *foo = [[FooClass alloc] init];
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    for (int i = 0; i < 4; ++i) {
        dispatch_async(queue, ^{
            [foo doIt];
        });
    }
    [foo release];
}
Details are here.
The common pattern is to use a mutex to protect a critical section of code where the structure is accessed and/or modified.
Just go through this link:
Does @synchronized guarantee thread safety or not?