I have some data accumulating in a buffer, and I need to read it whenever the buffer has data. I need to do this with thread synchronization. I've worked a little with GCD, but I'm failing at this. Please help me build a circular buffer with read and write threads kept in synchronization.
My Code:
- (void)viewDidLoad {
[super viewDidLoad];
readPtr = 0;
writePtr = 0;
currentPtr = 0;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),^{
while(YES){
[self writeToBuffer:buffer[0] withBufferSize:bufferSize];
}
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),^{
while(YES){
float* newBuffer;
if(currentPtr>512){
newBuffer = [self readBuffer];
}else{
continue;
}
[self UseBuffer: newBuffer];
}
});
}
-(void)writeToBuffer:(float*)Values withBufferSize:(int)bSize{
[_lock lock];
for(int i=0;i<bSize;i++){
if(writePtr>1859){
writePtr = 0;
}
globalBuffer[writePtr] = Values[i];
writePtr++;
currentPtr++;
}
NSLog(@"Writing");
[_lock unlock];
}
-(float*)readBuffer{
[_lock lock];
float rBuffer[512];
for(int i=0;i<512;i++){
if(readPtr>1859){
readPtr = 0;
}
rBuffer[i] = globalBuffer[readPtr];
readPtr++;
currentPtr--;
}
NSLog(@"Reading");
[_lock unlock];
return rBuffer;
}
One of the key points of GCD is that it completely replaces the need for locks. So, if you are mixing GCD and mutex locks it is typically a sign that you're doing things wrong or sub-optimally.
A serial queue is, effectively, an exclusive lock on whatever is associated with the serial queue.
There are a bunch of problems in your code.
while (YES) {...} is going to spin, burning CPU cycles ad infinitum.
The readBuffer method is returning a pointer to a stack based buffer. That won't work.
It isn't really clear what the goal of the code is, but those are some specific issues.
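To make the producer/consumer part concrete, here is a sketch of a blocking circular buffer in plain C with pthreads (a GCD-free analogue): the reader sleeps on a condition variable instead of spinning, and reads into a caller-supplied buffer instead of returning stack memory. The capacity, element type, and names are illustrative assumptions, not taken from the question.

```c
#include <pthread.h>

/* A minimal blocking ring buffer. Capacity and names are illustrative. */
#define RING_CAPACITY 1860

typedef struct {
    float data[RING_CAPACITY];
    int readPtr, writePtr, count;
    pthread_mutex_t lock;
    pthread_cond_t notEmpty, notFull;
} RingBuffer;

void ring_init(RingBuffer *rb) {
    rb->readPtr = rb->writePtr = rb->count = 0;
    pthread_mutex_init(&rb->lock, NULL);
    pthread_cond_init(&rb->notEmpty, NULL);
    pthread_cond_init(&rb->notFull, NULL);
}

/* Blocks until there is room, instead of spinning in a while(YES) loop. */
void ring_write(RingBuffer *rb, const float *values, int n) {
    pthread_mutex_lock(&rb->lock);
    for (int i = 0; i < n; i++) {
        while (rb->count == RING_CAPACITY)
            pthread_cond_wait(&rb->notFull, &rb->lock);
        rb->data[rb->writePtr] = values[i];
        rb->writePtr = (rb->writePtr + 1) % RING_CAPACITY;
        rb->count++;
    }
    pthread_cond_broadcast(&rb->notEmpty);
    pthread_mutex_unlock(&rb->lock);
}

/* Copies into a caller-supplied buffer, so no stack memory ever escapes. */
void ring_read(RingBuffer *rb, float *out, int n) {
    pthread_mutex_lock(&rb->lock);
    for (int i = 0; i < n; i++) {
        while (rb->count == 0)
            pthread_cond_wait(&rb->notEmpty, &rb->lock);
        out[i] = rb->data[rb->readPtr];
        rb->readPtr = (rb->readPtr + 1) % RING_CAPACITY;
        rb->count--;
    }
    pthread_cond_broadcast(&rb->notFull);
    pthread_mutex_unlock(&rb->lock);
}
```

On the GCD side, the rough equivalent of the condition variable is a dispatch semaphore or a dispatch source; either way, the key point is the same: block cheaply until data is available rather than polling currentPtr in a tight loop.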
Related
So, with some help, I am now clearer on how nested GCD dispatch works in my program.
The original post is at:
Making sure I'm explaining nested GCD correctly
However, you don't need to go through the original post; basically, the code here runs the database work in the background while the UI stays responsive:
-(void)viewDidLoad {
dispatch_queue_t concurrencyQueue = dispatch_queue_create("com.epam.halo.queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_t serialQueue = dispatch_queue_create("com.epam.halo.queue2", DISPATCH_QUEUE_SERIAL);
for ( int i = 0; i < 10; i++) {
dispatch_async(concurrencyQueue, ^() {
NSLog(@"START insertion method%d <--", i);
dispatch_sync(serialQueue, ^() {
//this is to simulate writing to database
NSLog(@"----------START %d---------", i);
[NSThread sleepForTimeInterval:1.0f];
NSLog(@"--------FINISHED %d--------", i);
});
NSLog(@"END insertion method%d <--", i);
});
}
}
However, when I start refactoring them and putting them into methods and making everything look nice, the UI does not respond anymore:
//some database singleton class
//the serial queues are declared in the class's private extension. And created in init()
-(void)executeDatabaseStuff:(int)i {
dispatch_sync(serialQueue, ^() {
//this is to simulate writing to database
NSLog(@"----------START--------- %d", i);
[NSThread sleepForTimeInterval:1.0f];
NSLog(@"--------FINISHED-------- %d", i);
});
}
-(void)testInsert:(int)i {
dispatch_async(concurrencyQueue, ^() {
[self executeDatabaseStuff:i];
});
}
//ViewController.m
- (void)viewDidLoad {
//UI is unresponsive :(
for ( int i = 0; i < totalNumberOfPortfolios; i++) {
NSLog(@"START insertion method%d <--", i);
[[DatabaseFunctions sharedDatabaseFunctions] testInsert: i];
NSLog(@"END insertion method%d <--", i);
}
}
The only way to make the refactored version work is to wrap the calls in dispatch_async(dispatch_get_main_queue(), ...):
for ( int i = 0; i < totalNumberOfPortfolios; i++) {
dispatch_async(dispatch_get_main_queue(), ^() {
NSLog(@"START insertion method%d <--", i);
[[DatabaseFunctions sharedDatabaseFunctions] testInsert: i];
NSLog(@"END insertion method%d <--", i);
});
}
So my question is: I thought that using dispatch_async on the concurrencyQueue would ensure that my main thread is not touched by the dispatch_sync/serialQueue combo. Why is it that when I wrap it in an object/method, I must use dispatch_async(dispatch_get_main_queue(), ...)?
Seems that whether my main thread does dispatch_async on a concurrent queue
in viewDidLoad, or within a method, does indeed matter.
I am thinking that the main thread is getting all these testInsert methods pushed onto its thread stack. The methods must then be processed by the main thread. Hence, even though the dispatch_sync is not blocking the main thread, the main thread runs to the end of viewDidLoad and must wait for all the testInsert methods to be processed before it can move on to the next task in the main queue??
Notes
So I went home and tested it again with this:
for ( int i = 0; i < 80; i++) {
NSLog(@"main thread %d <-- ", i);
dispatch_async(concurrencyQueue, ^() {
[NSThread isMainThread] ? NSLog(@"it's the main thread") : NSLog(@"not main thread");
NSLog(@"concurrent Q thread %i <--", i);
dispatch_sync(serialQueue, ^() {
//this is to simulate writing to database
NSLog(@"serial Q thread ----------START %d---------", i);
[NSThread sleepForTimeInterval:1.0f];
NSLog(@"serial Q thread --------FINISHED %d--------", i);
});
NSLog(@"concurrent Q thread %i -->", i);
});
NSLog(@"main thread %d --> ", i);
} //for loop
When I run the loop up to 63, the UI is not blocked, and I see my database operations processing in the background.
When the loop count is 64, the UI is blocked for 1 database operation, then returns fine.
When I use 65, the UI freezes for 2 database operations, then returns fine...
When I use something like 80, it stays blocked from 64 to 80... so I wait 16 seconds before my UI is responsive.
At the time, I couldn't figure out why 64. Now I know that it's the 64 concurrent threads allowed at once... and it has nothing to do with wrapping it in an object/method. :D
Many thanks for the awesome help from the contributors!
There is a hard limit of 64 GCD concurrent operations (per top level concurrent queue) that can be run together.
What's happening is you're submitting over 64 blocks to your concurrent queue, each of them getting blocked by the [NSThread sleepForTimeInterval:1.0f], forcing a new thread to be created for each operation. Therefore, once the thread limit is reached, it backs up and starts to block the main thread.
I have tested this with 100 "database write" operations (on device), and the main thread appears to be blocked until 36 operations have taken place (there are now only 64 operations left, therefore the main thread is now un-blocked).
The use of a singleton shouldn't cause you any problems, as you're calling its methods synchronously, so there shouldn't be thread conflicts.
The simplest solution to this is just to use a single background serial queue for your "database write" operations. This way, only one thread is being created to handle the operation.
- (void)viewDidLoad {
[super viewDidLoad];
static dispatch_once_t t;
dispatch_once(&t, ^{
serialQueue = dispatch_queue_create("com.epam.halo.queue2", DISPATCH_QUEUE_SERIAL);
});
for (int i = 0; i < 100; i++) {
[self testInsert:i];
}
}
-(void)executeDatabaseStuff:(int)i {
//this is to simulate writing to database
NSLog(@"----------START--------- %d", i);
[NSThread sleepForTimeInterval:1.0f];
NSLog(@"--------FINISHED-------- %d", i);
}
-(void)testInsert:(int)i {
NSLog(@"Start insert.... %d", i);
dispatch_async(serialQueue, ^() {
[self executeDatabaseStuff:i];
});
NSLog(@"End insert... %d", i);
}
I don't know why inserting dispatch_async(dispatch_get_main_queue(), ^() {}) inside your for loop was working for you... I can only assume it was off-loading the "database writing" until after the interface had loaded.
Further resources on threads & GCD
Number of threads created by GCD?
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html#//apple_ref/doc/uid/TP40008091-CH102-SW1
https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/Multithreading/CreatingThreads/CreatingThreads.html#//apple_ref/doc/uid/10000057i-CH15-SW2
Is there any way to be notified when enumerateLinesUsingBlock has completed? Please check the code below. I am calling the createFastSearchData method chunk by chunk in a while loop, and inside it I take each line and process it. In the while condition I check the length of the main string, and I want to continue until the whole length has been covered. So I want to make sure that enumerateLinesUsingBlock has completed before the while loop triggers again.
while(<checking the length of the mainstring>){
[self createFastSearchData:string];
}
- (void)createFastSearchData:(NSString *)newChunk{
[newChunk enumerateLinesUsingBlock:^(NSString * line, BOOL * stop)
{}];
}
Added:
I am working with blocks and finding it difficult to understand the actual flow. Please check the code below. I want to call the fetchCSVData method, passing in each value from the filesToBeFetched array. I want to make sure the fetchCSVData calls do not overlap. How can I do that? Please help.
dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
@autoreleasepool {
for (int i = 0; i < filesToBeFetched.count; i++) {
[applicationDelegate fetchCSVData:[filesToBeFetched objectAtIndex:i]];
}
}
dispatch_async( dispatch_get_main_queue(), ^{
NSLog(@"Fetching is done *********************");
});
});
To answer the first part of the question:
The enumerate...UsingBlock methods don't work asynchronously: they run synchronously, and the block has been called for every line by the time the method returns.
Regarding the added part:
Assuming fetchCSVData also works synchronously, this is the preferred way to process the data:
dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
@autoreleasepool {
for (int i = 0; i < filesToBeFetched.count; i++) {
[applicationDelegate fetchCSVData:[filesToBeFetched objectAtIndex:i]];
dispatch_async( dispatch_get_main_queue(), ^{
NSLog(@"Fetching is done *********************");
});
}
}
});
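The pattern in this answer (run the jobs strictly one after another on a single background worker, then report completion) can be sketched in plain C with pthreads; all names here are illustrative assumptions, not part of the question's code.

```c
#include <pthread.h>

/* One worker thread drains the jobs in order, then fires a completion
 * callback -- the pthread analogue of dispatch_async on a serial queue. */
typedef struct {
    void (*job)(int index);  /* called once per item, strictly in order */
    void (*done)(void);      /* called after the last job finishes      */
    int job_count;
} Batch;

static void *batch_worker(void *arg) {
    Batch *b = (Batch *)arg;
    for (int i = 0; i < b->job_count; i++)
        b->job(i);           /* jobs cannot overlap: single thread      */
    b->done();
    return NULL;
}

/* Starts the batch in the background; the caller may join the thread. */
pthread_t run_batch_async(Batch *b) {
    pthread_t t;
    pthread_create(&t, NULL, batch_worker, b);
    return t;
}

/* Demo callbacks: record which jobs ran and whether completion fired. */
static int processed[16], processed_count = 0, batch_done = 0;
static void demo_job(int i) { processed[processed_count++] = i; }
static void demo_done(void) { batch_done = 1; }
```

Because a single thread runs the loop, the jobs can never overlap, and the completion callback is guaranteed to run after the last job, which is exactly what the dispatch_async-then-notify-main-queue code above relies on.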
I have two methods which run on a serial queue. Each method returns a copy of some class. I'm trying to achieve a thread-safe solution while also maintaining data integrity.
for example:
-(Users *) getAllUsers
{
__block Users *copiedUsers;
dispatch_sync(_backgroundQueue, ^{
copiedUsers = [self.users copy]; // return copy object to calling thread.
});
return copiedUsers;
}
-(Orders *) getAllOrders
{
__block Orders *copiedOrders;
dispatch_sync(_backgroundQueue, ^{
copiedOrders = [self.Orders copy]; // return copy object to calling thread.
});
return copiedOrders;
}
In addition to this two methods, I have a worker class that add/remove users and orders, all done via a serial queue backgroundQueue.
If on the main thread I call getAllUsers and then getAllOrders, one right after the other, my data integrity isn't safe, because between the two calls the worker class might have changed the model.
My question is: how can I give the caller a nice interface that allows multiple methods to run atomically?
Model is only updated from backgroundQueue serial queue.
Client talks to model via a method that receives a block that runs in the background queue.
In addition, not to freeze main thread, I create another queue and run a block that talks with the gateway method.
P.S. - note that dispatch_sync is called only in runBlockAndGetNeededDataSafely, to avoid deadlocks.
Code sample:
aViewController.m
ManagerClass *m = [ManagerClass new];
dispatch_queue_t q = dispatch_queue_create("funnelQueue", DISPATCH_QUEUE_SERIAL);
dispatch_block_t block_q = ^{
__block Users *users;
__block Orders *orders;
[manager runBlockAndGetNeededDataSafely:^
{
users = [manager getAllUsers];
orders = [manager getAllOrders];
dispatch_async(dispatch_get_main_queue(),
^{
// got data safely - no thread issues, copied objects. update UI!
[self refreshViewWithUsers:users
orders:orders];
});
}];
};
dispatch_async(q, block_q);
Manager.m implementation:
-(void) runBlockAndGetNeededDataSafely:(dispatch_block_t) block
{
dispatch_sync(self.backgroundQueue, block);
}
-(Users *) getAllUsers
{
return [self.users copy];
}
-(Orders *) getAllOrders
{
return [self.Orders copy];
}
To answer your question about how to check the current queue:
First when you create the queue, give it a tag:
static void* queueTag = &queueTag;
dispatch_queue_t queue = dispatch_queue_create("a queue", 0);
dispatch_queue_set_specific(queue, queueTag, queueTag, NULL);
and then run a block like this:
-(void)runBlock:(void(^)()) block
{
if (dispatch_get_specific(queueTag) != NULL) {
block();
}else {
dispatch_async(self.queue, block);
}
}
Your example doesn't work. I suggest using a completion callback: you need a way to know when the worker has finished its job before returning the value.
- (void)waitForCompletion:(BOOL*)conditions length:(int)len timeOut:(NSInteger)timeoutSecs {
NSDate *timeoutDate = [NSDate dateWithTimeIntervalSinceNow:timeoutSecs];
BOOL done = YES;
for (int i = 0; i < len; i++) {
done = done & *(conditions+i);
}
do {
[[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:timeoutDate];
if([timeoutDate timeIntervalSinceNow] < 0.0)
break;
//update done
done = YES;
for (int i = 0; i < len; i++) {
done = done & *(conditions+i);
}
} while (!done);
}
-(void) getAllUsers:(void(^)(User* user, NSError* error))completion
{
dispatch_async(_backgroundQueue, ^{
BOOL condition[2] = {self.userCondition, self.orderCondition};
[self waitForCompletion: &condition[0] length:2 timeOut:60];
if (completion) {
completion([self.users copy], nil);
}
});
}
The code below is called each time the scroll view scrolls, and if the user scrolls multiple times, the code crashes. How do I make sure only one block executes at a time, i.e. make this thread-safe?
[self.cv addInfiniteScrollingWithActionHandler:^{
[weakSelf loadNextPage];
}];
Here is an example:
- (void)_startExperiment {
FooClass *foo = [[FooClass alloc] init];
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (int i = 0; i < 4; ++i) {
dispatch_async(queue, ^{
[foo doIt];
});
}
[foo release];
}
Detail is Here
The common pattern is to use a mutex to protect a critical section of code where the structure is accessed and/or modified.
Just go through this link:
Does @synchronized guarantee thread safety or not?
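For reference, the mutex pattern described above looks like this in plain C with pthreads (the counter and iteration counts are illustrative); @synchronized wraps the critical section in a lock in essentially the same way.

```c
#include <pthread.h>

/* A shared counter protected by a mutex: every access to the shared
 * state happens inside the same critical section. */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *increment_many(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;               /* the critical section */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}
```

Without the lock, concurrent increments would interleave and lose updates; with it, any number of threads can safely hammer the same counter.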
Is there any good documentation on how many threads are created by GCD?
At WWDC, they told us it's modeled around CPU cores. However, if I call this example:
for (int i=1; i<30000; i++) {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[NSThread sleepForTimeInterval:100000];
});
}
it opens 66 threads, even on an iPad1. (It also opens 66 threads when called on Lion natively). Why 66?
First, 66 == 64 (the maximum GCD thread pool size) + the main thread + some other random non-GCD thread.
Second, GCD is not magic. It is optimized for keeping the CPU busy with code that is mostly CPU bound. The "magic" of GCD is that it dynamically creates more threads than CPUs when work items unintentionally and briefly wait for operations to complete.
Having said that, code can confuse the GCD scheduler by intentionally sleeping or waiting for events instead of using dispatch sources to wait for events. In these scenarios, the block of work is effectively implementing its own scheduler and therefore GCD must assume that the thread has been co-opted from the thread pool.
In short, the thread pool will operate optimally if your code prefers dispatch_after() over "sleep()" like APIs, and dispatch sources over handcrafted event loops (Unix select()/poll(), Cocoa runloops, or POSIX condition variables).
The documentation avoids mentioning the number of threads created. Mostly because the optimal number of threads depends heavily on the context.
One issue with Grand Central Dispatch is that it will spawn a new thread if a running task blocks. That is, you should avoid blocking when using GCD, as having more threads than cores is suboptimal.
In your case, GCD detects that the task is inactive, and spawns a new thread for the next task.
Why 66 is the limit is beyond me.
The answer should be: 512
There are different cases:
All queue types (concurrent and serial) can simultaneously create up to 512 threads.
All global queues can create up to 64 threads simultaneously.
As a general rule of thumb, if the number of threads in an app exceeds 64, the main thread will start to lag; in severe cases it may even trigger a watchdog crash.
This is because creating threads has overhead; under iOS the main costs are the kernel data structures (about 1 KB per thread) and stack space (512 KB for child threads, 1 MB for the main thread).
The stack size can also be set with -setStackSize, but it must be a multiple of 4 KB, with a minimum of 16 KB; creating a thread takes about 90 ms.
Opening a large number of threads reduces the performance of the program: the more threads there are, the more CPU overhead is spent scheduling them.
It also makes the program design more complex: communication between threads, and data sharing among multiple threads.
First of all, GCD has a limit on the number of threads it can create.
GCD creates its threads by calling _pthread_workqueue_addthreads, and the kernel limits the number of threads created through that call; threads created in other ways do not go through it.
Back to the question above:
All global queues together can create up to 64 threads at the same time.
Global-queue threads are added via the _pthread_workqueue_addthreads method, and the kernel limits the number of threads added this way.
The specific code for the restriction is shown below:
#define MAX_PTHREAD_SIZE 64*1024
The meaning of this code is as follows:
The total size is limited to 64 KB. According to Apple's Threading Programming Guide (thread creation costs), each thread allocates about 1 KB of kernel memory, so we can deduce a limit of 64 threads.
In summary, the global queues can create at most 64 threads; this limit is hard-coded in the kernel.
A more detailed test code is as follows:
Test Environment : iOS 14.3
Test code
case 1: Global Queue - CPU Busy
In the first test case, we use dispatch_get_global_queue(0, 0) to get a default global queue and simulate a busy CPU.
+ (void)printThreadCount {
kern_return_t kr = { 0 };
thread_array_t thread_list = { 0 };
mach_msg_type_number_t thread_count = { 0 };
kr = task_threads(mach_task_self(), &thread_list, &thread_count);
if (kr != KERN_SUCCESS) {
return;
}
NSLog(@"threads count:%@", @(thread_count));
kr = vm_deallocate( mach_task_self(), (vm_offset_t)thread_list, thread_count * sizeof(thread_t) );
if (kr != KERN_SUCCESS) {
return;
}
return;
}
+ (void)test1 {
NSMutableSet<NSThread *> *set = [NSMutableSet set];
for (int i=0; i < 1000; i++) {
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
NSThread *thread = [NSThread currentThread];
[set addObject:[NSThread currentThread]];
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"start:%@", thread);
NSLog(@"GCD threads count:%lu",(unsigned long)set.count);
[self printThreadCount];
});
NSDate *date = [NSDate dateWithTimeIntervalSinceNow:10];
long i=0;
while ([date compare:[NSDate date]]) {
i++;
}
[set removeObject:thread];
NSLog(@"end:%@", thread);
});
}
}
Tested: the number of threads is 2
case 2: Global queue - CPU idle
In the second test case, we simulate an idle CPU with [NSThread sleepForTimeInterval:10].
+ (void)test2 {
NSMutableSet<NSThread *> *set = [NSMutableSet set];
for (int i=0; i < 1000; i++) {
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
NSThread *thread = [NSThread currentThread];
[set addObject:[NSThread currentThread]];
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"start:%@", thread);
NSLog(@"GCD threads count:%lu",(unsigned long)set.count);
[self printThreadCount];
});
// thread sleep for 10s
[NSThread sleepForTimeInterval:10];
[set removeObject:thread];
NSLog(@"end:%@", thread);
return;
});
}
}
After testing, the maximum number of threads is 64
All concurrent queues and serial queues can create up to 512 threads simultaneously.
A more detailed test code is as follows:
case 1: Self-built Queues - CPU Busy
Now let us look at a self-built queue with a busy CPU. This example simulates a common app scenario, where different business modules create separate queues to manage their tasks.
+ (void)test3 {
NSMutableSet<NSThread *> *set = [NSMutableSet set];
for (int i=0; i < 1000; i++) {
const char *label = [NSString stringWithFormat:@"label-:%d", i].UTF8String;
NSLog(@"create:%s", label);
dispatch_queue_t queue = dispatch_queue_create(label, DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
NSThread *thread = [NSThread currentThread];
[set addObject:[NSThread currentThread]];
dispatch_async(dispatch_get_main_queue(), ^{
static NSInteger lastCount = 0;
if (set.count <= lastCount) {
return;
}
lastCount = set.count;
NSLog(@"begin:%@", thread);
NSLog(@"GCD threads count:%lu",(unsigned long)set.count);
[self printThreadCount];
});
NSDate *date = [NSDate dateWithTimeIntervalSinceNow:10];
long i=0;
while ([date compare:[NSDate date]]) {
i++;
}
[set removeObject:thread];
NSLog(@"end:%@", thread);
});
}
}
After testing, the maximum number of threads created by GCD is 512
case 2: Self-built queues - CPU idle
+ (void)test4 {
NSMutableSet<NSThread *> *set = [NSMutableSet set];
for (int i=0; i < 10000; i++) {
const char *label = [NSString stringWithFormat:@"label-:%d", i].UTF8String;
NSLog(@"create:%s", label);
dispatch_queue_t queue = dispatch_queue_create(label, DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
NSThread *thread = [NSThread currentThread];
dispatch_async(dispatch_get_main_queue(), ^{
[set addObject:thread];
static NSInteger lastCount = 0;
if (set.count <= lastCount) {
return;
}
lastCount = set.count;
NSLog(@"begin:%@", thread);
NSLog(@"GCD threads count:%lu",(unsigned long)set.count);
[self printThreadCount];
});
[NSThread sleepForTimeInterval:10];
dispatch_async(dispatch_get_main_queue(), ^{
[set removeObject:thread];
NSLog(@"end:%@", thread);
});
});
}
}
With self-built queues and an idle CPU, the maximum number of threads created is 512.
Other test code
__block int index = 0;
// one concurrent queue test
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
for (int i = 0; i < 1000; ++i) {
dispatch_async(queue, ^{
id name = nil;
@synchronized (self) {
name = [NSString stringWithFormat:@"gcd-limit-test-global-concurrent-%d", index];
index += 1;
}
NSThread.currentThread.name = name;
NSLog(@"%@", name);
sleep(100000);
});
}
// some concurrent queues test
for (int i = 0; i < 1000; ++i) {
char buffer[256] = {};
sprintf(buffer, "gcd-limit-test-concurrent-%d", i);
dispatch_queue_t queue = dispatch_queue_create(buffer, DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
id name = nil;
@synchronized (self) {
name = [NSString stringWithFormat:@"gcd-limit-test-concurrent-%d", index];
index += 1;
}
NSThread.currentThread.name = name;
NSLog(@"%@", name);
sleep(100000);
});
}
// some serial queues test
for (int i = 0; i < 1000; ++i) {
char buffer[256] = {};
sprintf(buffer, "gcd-limit-test-%d", i);
dispatch_queue_t queue = dispatch_queue_create(buffer, 0);
dispatch_async(queue, ^{
id name = nil;
@synchronized (self) {
name = [NSString stringWithFormat:@"gcd-limit-test-%d", index];
index += 1;
}
NSThread.currentThread.name = name;
NSLog(@"%@", name);
sleep(100000);
});
}
Special note here:
The 512 mentioned is GCD's own limit. Even after 512 GCD threads have been created, you can still create more threads with NSThread.
So, in the chart above, the current total should be 516 = 512 (the GCD maximum) + the main thread + a JS thread + a web thread + a UIKit event thread.
Conclusion
Testing shows that GCD's global queues automatically limit the number of threads to a reasonable number; by comparison, self-built queues can create far more threads.
Since too many threads increase CPU scheduling overhead, it is recommended that small apps manage their tasks with the global queues as much as possible; large apps can choose a suitable scheme according to their actual situation.
The number of busy threads is equal to the number of CPU cores; blocked threads (here you block them with sleepForTimeInterval) are not counted.
If you change your code to this (Swift):
for _ in 1..<30000 {
DispatchQueue.global().async {
while true {}
}
}
you'll see that just 2 threads are created (on an iPhone SE).
There is a thread limit so that your app is not killed by excessive memory consumption when you have a problem with blocked threads (usually because of a deadlock).
Never block threads, and you'll have no more of them than you have cores.