I know about using dispatch_barrier_async to lock a given resource, but in my case it isn't a good candidate: I'm not modifying a shared data structure but a resource on disk, and I don't want to block the whole queue, just a given key, since the action could take a long time. I'm not certain how the file system handles access to the same file (by name) from several threads simultaneously, and I couldn't find a clear answer in the documentation, just best practices. I think I would like to lock by "file name", and am missing a method like tryLock(key).
Something like:
- (void)readFileAtPath:(NSString *)path completion:(void (^)(NSData *fileData))completion
{
    dispatch_async(self.concurrentQueue, ^{
        // acquire the lock for the given key, blocking until it can be acquired
        trylock(path);
        NSData *fileData = [self dataAtPath:path];
        unlock(path);
        completion(fileData);
    });
}

- (void)writeData:(NSData *)data toPath:(NSString *)path completion:(void (^)(void))completion
{
    dispatch_async(self.concurrentQueue, ^{
        // if someone is reading the data at 'path' then this should wait - otherwise write
        trylock(path);
        [data writeToFile:path atomically:YES];
        unlock(path);
        completion();
    });
}
EDIT:
Does @synchronized do this? Is this a proper use case?
If you want to create "scoped queues", just do it. Create a serial queue for each file, and have them target your concurrent queue. It might look like this:
@interface Foo : NSObject
@property (readonly) dispatch_queue_t concurrentQueue;
@end

@implementation Foo
{
    NSMutableDictionary* _fileQueues;
    dispatch_queue_t _dictGuard;
}

@synthesize concurrentQueue = _concurrentQueue;

- (instancetype)init
{
    if (self = [super init])
    {
        _concurrentQueue = dispatch_queue_create(NULL, DISPATCH_QUEUE_CONCURRENT);
        _dictGuard = dispatch_queue_create(NULL, DISPATCH_QUEUE_SERIAL);
        _fileQueues = [[NSMutableDictionary alloc] init];
    }
    return self;
}

- (dispatch_queue_t)queueForFile: (NSString*)path
{
    __block dispatch_queue_t retVal = NULL;
    dispatch_sync(_dictGuard, ^{
        retVal = _fileQueues[path];
        if (!retVal)
        {
            retVal = dispatch_queue_create(path.UTF8String, DISPATCH_QUEUE_SERIAL);
            dispatch_set_target_queue(retVal, self.concurrentQueue);
            _fileQueues[path] = retVal;
        }
    });
    return retVal;
}

- (void)doStuff: (id)stuff withFile: (NSString*)path
{
    dispatch_queue_t fileQueue = [self queueForFile: path];
    dispatch_async(fileQueue, ^{
        DoStuff(stuff, path);
    });
}
@end
That said, this queue-per-file thing has a little bit of a "code smell" to it, especially if it's intended to improve I/O performance. Just off the top of my head, for max performance, it feels like it would be better to have a queue per physical device than a queue per file. It's not generally the case that you as the developer know better than the OS/system frameworks how to coordinate file system access, so you will definitely want to measure before vs. after to make sure that this approach is actually improving your performance. Sure, there will be times when you know something that the OS doesn't know, but you might want to look for a way to give the OS that information rather than re-invent the wheel. In terms of performance of reads and writes, if you were to use dispatch_io channels to read and write the files, you would be giving GCD the information it needed to best coordinate your file access.
It also occurs to me that you also might be trying to 'protect the application from itself.' Like, if you were using the disk as a cache, where multiple tasks could be accessing the file at the same time, you might need to protect a reader from another writer. If this is the case, you might want to look for some existing framework that might address the need better than rolling your own. Also, in this use case, you might want to consider managing your scope in-application, and just mmaping one large file, but the cost/benefit of this approach would depend on the granule size of your files.
It would be hard to say more without more context about the application.
To your follow-on question: @synchronized could be used to achieve this, but not without much of the same mechanics required for the GCD way posted above. The reason is that @synchronized(foo) synchronizes on foo by identity (pointer equality) and not by value equality (i.e. -isEqual:), so NSString and NSURL (the two most obvious objects used to refer to files), having value semantics, are poor candidates. An implementation using @synchronized might look like this:
@interface Bar : NSObject
@property (readonly) dispatch_queue_t concurrentQueue;
@end

@implementation Bar
{
    NSMutableDictionary* _lockObjects;
    dispatch_queue_t _dictGuard;
}

@synthesize concurrentQueue = _concurrentQueue;

- (instancetype)init
{
    if (self = [super init])
    {
        _concurrentQueue = dispatch_queue_create(NULL, DISPATCH_QUEUE_CONCURRENT);
        _dictGuard = dispatch_queue_create(NULL, DISPATCH_QUEUE_SERIAL);
        _lockObjects = [[NSMutableDictionary alloc] init];
    }
    return self;
}

- (id)lockForFile: (NSString*)path
{
    __block id retVal = NULL;
    dispatch_sync(_dictGuard, ^{
        retVal = _lockObjects[path];
        if (!retVal)
        {
            retVal = [[NSObject alloc] init];
            _lockObjects[path] = retVal;
        }
    });
    return retVal;
}

- (void)syncDoStuff: (id)stuff withFile: (NSString*)path
{
    id fileLock = [self lockForFile: path];
    @synchronized(fileLock)
    {
        DoStuff(stuff, path);
    }
}

- (void)asyncDoStuff: (id)stuff withFile: (NSString*)path
{
    id fileLock = [self lockForFile: path];
    dispatch_async(self.concurrentQueue, ^{
        @synchronized(fileLock)
        {
            DoStuff(stuff, path);
        }
    });
}
@end
You'll see that I made two methods to do stuff, one synchronous and the other asynchronous. @synchronized provides a mutual-exclusion mechanism, but it is not an asynchronous dispatch mechanism, so if you want parallelism you still have to get it from GCD (or something else). While you can use @synchronized to do this, it's not a good option these days: it's measurably slower than the equivalent GCD mechanisms. About the only time @synchronized is useful now is as a syntactic shortcut to achieve recursive locking, and many smart folks believe that recursive locking is an anti-pattern. (For more details on why, check out this link.) The long and short of it is that @synchronized is not the best way to solve this problem.
I have a class called Dictionary, where the init method looks like this:
- (id)init {
    self = [super init];
    if (self) {
        [self makeEmojiDictionaries];
    }
    return self;
}

- (void)makeEmojiDictionaries {
    // next line triggers an EXC_BAD_ACCESS error
    self.englishEmojiAllDictionary = @{@"hi" : @"👋"}; // this is a strong, atomic NSDictionary property
}
My issue is that the actual emoji dictionary is quite large, and I want to do all the heavy lifting on a non-main thread using GCD. However, whenever I get to the line where I set self.englishEmojiAllDictionary, I always get an EXC_BAD_ACCESS error.
I am using GCD in the most normal way possible:
dispatch_queue_t myQueue = dispatch_queue_create("My Queue", NULL);
dispatch_async(myQueue, ^{
    // Do long process activity
    Dictionary *dictionary = [[Dictionary alloc] init];
});
Are there particular nuances to GCD or non-main thread work that I am missing? Any help is much appreciated - thank you!
Edit 1:
In case you'd like to try it yourself, I have uploaded a sample project that replicates this exception. My theory is that the NSDictionary I am initializing is simply too large.
I have moved your data from code to a plist file in the form:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>arancia meccanica</key><string>⏰🍊</string>
<key>uno freddo</key><string>🍺</string>
<key>un drink</key><string>🍸</string>
...
<key>bacio</key><string>💋</string>
<key>baci</key><string>💋👐</string>
</dict>
</plist>
(I took your data and used find-replace three times: ", => </string>, then :@" => </key><string>, and @" => <key>.)
Then I have loaded the data using:
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"dictionary"
                                                     ofType:@"plist"];
dictionary = [NSDictionary dictionaryWithContentsOfFile:filePath];
That has fixed the problem. Note that you should never hardcode your data into source code.
The exact reason for the bug was pretty hard to pinpoint. The NSDictionary literal uses method +[NSDictionary dictionaryWithObjects:forKeys:count:].
My assembly knowledge is very poor, but I think that before calling this initializer, all the keys and values are put on the stack.
However, there is a difference between the stack size of the main thread and the stack size of the background thread (see Creating Threads in Thread Programming Guide).
That's why the issue can be seen when executing the code on the background thread. If you had more data, the issue would probably appear on the main thread too.
The difference between stack size on main thread and background thread can be also demonstrated by the following simple code:
- (void)makeEmojiDictionaries {
    // allocate a lot of data on the stack
    // (approximately the number of pointers we need for our dictionary keys & values)
    id pointersOnStack[32500 * 2];
    NSLog(@"%lu", sizeof(pointersOnStack));
}
First of all, I suggest you use a file (plist, txt, xml, ...) to store large data, then read it at runtime, or download it from a remote server.
For your issue, it is because of the limitation of stack size. On iOS, the default stack size for the main thread is 1 MB, and 512 KB for the secondary threads. You can check it out via [NSThread currentThread].stackSize.
Your hardcoded dictionary costs almost 1 MB of stack, which is why your app crashes on a secondary thread but is OK on the main thread.
If you want to do this work on a background thread, you must increase the stack size for that thread.
For example:
// NSThread way:
NSThread *thread = [[NSThread alloc] initWithTarget:self selector:@selector(populateDictionaries) object:nil];
thread.stackSize = 1024 * 1024;
[thread start];
Or
// POSIX way:
#include <pthread.h>

#define REQUIRED_STACK_SIZE (1024 * 1024) // assumed value; pick what your workload needs

static void *posixThreadFunc(void *arg) {
    Dictionary *emojiDictionary = [[Dictionary alloc] init];
    return NULL;
}

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    pthread_t posixThread;
    pthread_attr_t stackSizeAttribute;
    size_t stackSize = 0;
    pthread_attr_init(&stackSizeAttribute);
    pthread_attr_getstacksize(&stackSizeAttribute, &stackSize);
    if (stackSize < REQUIRED_STACK_SIZE) {
        pthread_attr_setstacksize(&stackSizeAttribute, REQUIRED_STACK_SIZE);
    }
    pthread_create(&posixThread, &stackSizeAttribute, &posixThreadFunc, NULL);
}
@end
Or
// Create mutable dictionary to prevent stack from overflowing
- (void)makeEmojiDictionaries {
    NSMutableDictionary *dict = [NSMutableDictionary dictionary];
    dict[@"arancia meccanica"] = @"⏰🍊";
    dict[@"uno freddo"] = @"🍺";
    dict[@"un drink"] = @"🍸";
    .....
    self.englishEmojiAllDictionary = [dict copy];
}
FYI:
Thread Costs
Customizing Process Stack Size
The correct pattern when you need to do something slow is to do the work privately on a background queue, and then dispatch back to the main queue to make the completed work available to the rest of the app. In this case, you don't need to create your own queue. You can use one of the global background queues.
#import "ViewController.h"
#import "Dictionary.h"
#interface ViewController ()
#property (nonatomic, strong, readonly) Dictionary *dictionary;
#end
#implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
[self updateViews];
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
Dictionary *dictionary = [[Dictionary alloc] init];
dispatch_async(dispatch_get_main_queue(), ^{
_dictionary = dictionary;
[self updateViews];
});
});
}
- (void)updateViews {
if (self.dictionary == nil) {
// show an activity indicator or something
} else {
// show UI using self.dictionary
}
}
#end
Loading the dictionary from a file is a good idea, and you can do that in the background queue and then dispatch back to the main thread with the loaded dictionary.
I want to create 3 or more different singletons to handle different store scenarios in my app using FMDB. An example of such a singleton is this:
.h
@interface MyManager : NSObject
+ (id)sharedManager;
- (BOOL)isChecked:(int)id_product;
@end
.m
@implementation MyManager

@synthesize someProperty;

#pragma mark Singleton Methods

+ (id)sharedManager {
    static MyManager *sharedMyManager = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedMyManager = [[self alloc] init];
    });
    return sharedMyManager;
}

- (id)init {
    if (self = [super init]) {
        self.databaseQueue = [FMDatabaseQueue databaseQueueWithPath:databasePath]; // or FMDatabase
    }
    return self;
}

- (BOOL)isChecked:(int)id_product
{
    __block BOOL isChecked = NO; // __block so the block below can assign to it
    [self.databaseQueue inDatabase:^(FMDatabase *db) {
        FMResultSet *product_query = [db executeQuery:@"SELECT isChecked FROM products WHERE id = ?", [NSNumber numberWithInt:id_product]];
        while ([product_query next]) {
            if ([product_query boolForColumn:@"isChecked"] == 1) {
                isChecked = YES;
            } else {
                isChecked = NO;
            }
        }
    }];
    return isChecked;
}
@end
So my question is: can I create 3 or more singletons like this, each with an FMDatabaseQueue or FMDatabase defined as a class property, and is it better to use FMDatabaseQueue or FMDatabase?
A few thoughts:
You theoretically can have three different classes of singleton objects, each with its own FMDatabase/FMDatabaseQueue instance.
Whether you should do this is a completely different question. Having three, without some very compelling argument for that, suggests some serious code smell.
Make sure that none of those three instances are trying to access the same database file, or else you defeat the entire purpose of FMDatabaseQueue. This three instance model is only plausible if you're dealing with three different database files (and even then, it seems like a curious design).
You say "i think i can use only one singleton, the idea of use different singleton was only to make code more readable, and divide the operation for type."
That is, absolutely, not an argument for three singleton classes. You should have only one.
In terms of FMDatabase vs. FMDatabaseQueue, the latter enables you to enjoy multithreaded access, so I would lean towards that. Using FMDatabase offers no significant advantages, but introduces limitations unnecessarily.
The whole purpose of FMDatabaseQueue is to manage database contention when you have multiple threads accessing the same database. So if you know, with absolute certainty, that you'll never access FMDatabase object from different threads, then you could use FMDatabase.
But why paint yourself into a corner like that? Just use FMDatabaseQueue and then you don't have to worry about it. It works fine if used from single thread and saves you from many headaches if you happen to use it from multiple threads (e.g. you use your instances from GCD blocks, inside completion handlers for asynchronous methods, etc.).
Here's a simplified version of my class:
@interface RTMovieBuilder : NSObject

@property (atomic, getter = isCancelled) volatile BOOL cancelled;
@property (nonatomic, weak) id<BuilderDelegate> delegate;

- (void)moviesFromJSON:(id)JSON;
- (Movie *)movieFromDictionary:(NSDictionary *)dict;
- (void)cancel;

@end

@implementation RTMovieBuilder

- (void)moviesFromJSON:(id)JSON
{
    // Check for errors -> If good, then do...
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        [self synchronouslyCreateMoviesFromJSON:JSON];
    });
}

- (void)synchronouslyCreateMoviesFromJSON:(id)JSON
{
    NSMutableArray *movies = [NSMutableArray array];
    for (NSDictionary *dict in JSON)
    {
        if ([self isCancelled])
            return;
        else
            [movies addObject:[self movieFromDictionary:dict]];
    }
    [self notifyDelegateCreatedObjects:movies];
}

- (Movie *)movieFromDictionary:(NSDictionary *)dict
{
    Movie *movie = [[Movie alloc] init];
    // Set movie properties based on dictionary...
    return movie;
}

- (void)cancel
{
    [self setCancelled:YES];
}

// ... Other methods omitted for brevity's sake

@end
The cancelled property is atomic and volatile because it may be accessed from other threads (i.e. the main thread may call the cancel method to stop the operation). (I believe these are needed; if not, please note why in your answer.)
I am trying to write unit tests to make sure this will work before writing the view controller class.
How can I write a unit test that will simulate a call to cancel while RTMovieBuilder is in the middle of creating movies?
Edit
Here's a unit test I have already written which tests to make sure that notifyDelegateCreatedObjects: isn't called if cancel is called first.
- (void)testIfCancelledDoesntNotifyDelegateOfSuccess
{
    // given
    RTMovieBuilder *builder = [[RTMovieBuilder alloc] init];
    builder.delegate = mockProtocol(@protocol(BuilderDelegate));

    // when
    [builder cancel];
    [builder notifyDelegateCreatedObjects:@[]];

    // then
    [verifyCount(builder.delegate, never()) builder:builder createdObjects:anything()];
}
I'm using OCHamcrest and OCMockito. This test passes.
I would avoid trying to simulate thread timing in unit tests and focus more on figuring out what all the possible end states could be regardless of where the timing falls, and write tests for code under those conditions. This avoids endless complexity in your tests, as bbum points out as well.
In your case it seems the condition you need to be testing for is if the call to notifyDelegateCreatedObjects happens after the action is canceled, because the cancel came too late. So instead just unit test the handling of that scenario downstream in your notifyDelegateCreatedObjects method, or whatever class is being notified of that aborted event because of the thread timing.
I know this is not a specific answer to your question, but I think it's a better approach to achieve the same unit-testing goal.
There is no reason to use volatile if your property is atomic and you always go through the setter/getter.
As well, this is a bit of re-inventing the wheel, as noted in the comments.
In general trying to unit test cancellation with any hope of full coverage is very hard because you can't really effectively test all possible timing interactions.
I have been thinking about a problem that seemingly would be simple to implement, yet an efficient and threadsafe solution is stymying me. What I want to do is create some sort of worker object. Several callers may ask it to work from different threads. A requirement is that requests must not queue up. In other words if somebody asks the worker to do work but sees it is already doing work, it should just return early.
A simple first pass is this:
@interface Worker : NSObject
@property (nonatomic, assign, getter = isWorking) BOOL working;
- (void)doWork;
@end

@implementation Worker
{
    dispatch_queue_t _workerQueue; // ... a private serial queue
}

- (void)doWork
{
    if ( self.isWorking )
    {
        return;
    }
    self.working = YES;
    dispatch_async(_workerQueue, ^{
        // Do time consuming work here ... Done!
        self.working = NO;
    });
}
@end
The problem with this is that the isWorking property is not threadsafe. Marking it atomic won't help here, as accesses to it need to be synchronized across a few statements.
To make it threadsafe I would need to protect the isWorking with a lock:
@interface Worker : NSObject
@property (nonatomic, assign, getter = isWorking) BOOL working;
- (void)doWork;
@end

@implementation Worker
{
    dispatch_queue_t _workerQueue; // ... a private serial queue
    NSLock *_lock; // assume this is created
}

- (void)doWork
{
    [_lock lock];
    if ( self.isWorking )
    {
        [_lock unlock];
        return;
    }
    self.working = YES;
    [_lock unlock];
    dispatch_async(_workerQueue, ^{
        // Do time consuming work here ... Done!
        [_lock lock];
        self.working = NO;
        [_lock unlock];
    });
}
@end
While I do believe this would be threadsafe, I think it's pretty crummy to have to take and give up a lock (an expensive operation) so frequently.
So, is there a more elegant solution?
dispatch_semaphore is the idiomatic way to limit access to a finite resource, if you're already using GCD.
// Add an ivar:
dispatch_semaphore_t _semaphore;
// To initialize:
_semaphore = dispatch_semaphore_create(1);
// To "do work" from any thread:
- (void)doWork
{
    if (dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_NOW) == 0) {
        // We got the semaphore without waiting, so we're first in line.
        dispatch_async(_workerQueue, ^{
            // do time consuming work here, then when done:
            dispatch_semaphore_signal(_semaphore);
        });
    } else {
        // We would have had to wait for the semaphore, so somebody must have
        // been doing work already, and we should do nothing.
    }
}
Here's a blog post explaining in more detail.
You may be able to use an atomic test-and-set operation here. GCC provides __atomic_test_and_set for this purpose. Here's how you might use it in C (untested):
static volatile bool working = false;

if (__atomic_test_and_set(&working, __ATOMIC_ACQUIRE)) {
    // Already was working.
} else {
    // Do work, possibly in another thread.
    // When done:
    __atomic_clear(&working, __ATOMIC_RELEASE);
}
Easy, huh?
For making a property thread-safe you could simply use @synchronized.
Let's say that I want to keep things nice and speedy in the main UI, so I break off slow parts into queues (using the global concurrent queues). Assume that selectedUser in this case remains static throughout.
In one View Controller I have something like this:
- (IBAction)buttonPressed:(id)sender {
    User *selectedUser = [self getSelectedUser];
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        @autoreleasepool {
            [userManager doSomething:selectedUser withForceOption:YES];
        }
    });
}
And in another class I have a singleton defined (userManager), with a method like this:
- (void)doSomething:(User *)user withForceOption:(BOOL)force {
    SAppDelegate *delegate = (SAppDelegate *)[UIApplication sharedApplication].delegate;
    dispatch_queue_t extlib_main_queue = delegate.extlib_main_queue;
    dispatch_async(extlib_main_queue, ^{
        @autoreleasepool {
            extlib_status_t status;
            user.opIsStarting = YES;
            extlib_call_id callId = -1;
            // this is the part that worries me:
            extlib_str_t uri = extlib_str((char *)[[NSString stringWithFormat:@"http:%@@%s", user.account, DOMAIN] UTF8String]);
            status = extlib_call_make_call(0, &uri, 0, NULL, NULL, &callId);
        }
    });
}
My question is: is it safe to do this, or do I need to do something else to make sure that the passed User instance's parameters remain accessible to both blocks?
The User object will be retained by both blocks as long as they are alive. The only issue is that the User object needs to actually be safe to access from different threads.
You have nothing to worry about: blocks retain the object variables they capture. The block in buttonPressed: retains selectedUser because it refers to it, and the block in doSomething:withForceOption: retains user for the same reason.
Read this section of Blocks Programming Topics for more details on how this works.