Is CFUUIDCreate thread-safe? Didn't see anything about that in the docs.
Based on the current 10.8.2 source code, it's certainly intended to be thread-safe. The body of the function uses a LOCKED() function to dispatch_sync all the real work onto a single serial GCD dispatch queue. Simultaneous calls from multiple threads would therefore be serialized.
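As an illustration of that pattern, here is a minimal Swift sketch of the same serialization idea (the names are made up; the actual CoreFoundation implementation is C):
import Foundation

// All "real work" is funneled through a single serial queue, so concurrent
// callers are serviced one at a time. Names here are illustrative only.
private let workQueue = DispatchQueue(label: "com.example.uuid-work") // serial by default

func makeUUIDString() -> String {
    var result = ""
    // Equivalent of dispatch_sync: the caller blocks until its work has run on the serial queue.
    workQueue.sync {
        result = UUID().uuidString
    }
    return result
}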
If you're interested in earlier versions of Mac OS X, you can find the code here. Unfortunately Apple doesn't release the source code of CoreFoundation on iOS, but it's probably safe to assume that it's similar to the OS X version.
Ever since iOS 11 was released, I've been experiencing a sporadic but frequent crash with the following signature:
Cannot remove an observer <CBPeripheral 0x1c010ef10> for the key path "delegate" from <CBPeripheral 0x1c010ef10> because it is not registered as an observer.
This happens in the context of a scan for Bluetooth devices, a later connection to one of them, and a final cleanup of the whole process. All of these tasks are performed on a non-main dispatch queue to reduce pressure on the main thread (for a smoother UI experience). This very code had been running without incident since the iOS 9 days and only started to crash once iOS 11 came out.
The only references I've found on the net so far regarding this behaviour are this and this post for the Estimote SDK. These references suggest that something might be going on with parallel instances of CBCentralManager in different dispatch queues; however, nothing about special care on the matter is stated in the official Programming Guide. There is also a response from an Apple staff member to another CoreBluetooth issue stating:
iOS 11 is in general going to be less forgiving for apps which don't hold a proper reference to CB objects...
That doesn't sound very encouraging. I tried profiling the app and looking for potential leaks using Xcode and its companion tools, but this didn't shed much light on it either.
Has anybody else experienced similar issues? Any suggestions on how to work around it? Ideas on where to dig next?
After some testing, in our particular case the solution was to move all Bluetooth-stack-related work to the main queue, meaning that all the related callbacks execute in main-thread territory.
This solution requires extra caution with the work performed in those callbacks (the UI runs here too), but since most CoreBluetooth actions are asynchronous by default, it has proven feasible. This workaround has been confirmed on iOS 11, and so far no issues have been reported on iOS 12 either.
The takeaway here is: handle ONLY the absolutely necessary bits on the main queue, and move the rest of the load elsewhere if necessary.
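For illustration, a minimal Swift sketch of this arrangement (the class and queue names are my own, not from the original project):
import Foundation
import CoreBluetooth

// Minimal sketch: all CoreBluetooth delegate callbacks arrive on the main queue,
// and anything heavy is handed off to a background queue from there.
final class BluetoothController: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private var discovered: [CBPeripheral] = []   // keep strong references to CB objects
    private let processingQueue = DispatchQueue(label: "com.example.ble-processing")

    override init() {
        super.init()
        // Passing DispatchQueue.main (or nil) makes the delegate callbacks run on the main queue.
        central = CBCentralManager(delegate: self, queue: DispatchQueue.main)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: nil, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // Only the absolutely necessary bits happen here, on the main queue...
        discovered.append(peripheral)
        // ...and the heavier work is pushed elsewhere.
        processingQueue.async {
            // parse advertisement data, update caches, etc.
        }
    }
}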
I am new to using the Contiki OS and I have a fundamental question.
Can I safely use a low level ISR from within a Contiki Process?
I am doing this as a quick test and it is functioning well. However, I am concerned that I may be undermining something in the OS that will fail at a later time under different conditions.
In the context of a process which is fired periodically based upon an event timer, I am calling a function which sets up an LED to blink. The LED-blinking function itself is a callback from an ISR fired by a hardware timer on an Atmel SAMD21 MCU.
Can someone please clarify for me what constraints I should be concerned about in this particular case?
Thank You.
Basically you can, but you have to understand the context in which each part of the code runs.
A process has the context of a function: Contiki's scheduler runs in the main body, and timers enqueue process wake-ups in that scheduler. In fact, think of Contiki processes as functions called one after another; notice that the PROCESS_* macros do in fact call return on the function.
When you are in an interrupt handler or callback, you are in a different context; here you can have race conditions if you share data with processes, just as you would in a bare-metal firmware where the interrupt and main() are different contexts.
I strongly recommend reading about protothreads: although they sound like threads, they are not; they are functions running in the main body. (I believe this link will enlighten you: http://dunkels.com/adam/pt/)
Regarding the problem you described, I see nothing wrong with it.
Contiki itself has some hardware abstraction modules, so you won't have to deal with the platform directly from your application code. I have written large firmwares using Contiki and found these abstractions not very usable, since they have limited applicability. What I did in that case was write my own low-level layer to touch the platform, so in the application everything is still platform-independent, but, from the OS's perspective, I had application code touching platform registers.
I'm trying to save a high score in my SpriteKit game. Based on all the tutorials I've watched and the Stack Overflow answers I've read, the following code should work:
NSUserDefaults.standardUserDefaults().setInteger(highScore, forKey: "highScore")
NSUserDefaults.synchronize()
However, I keep encountering an error with the
NSUserDefaults.synchronize()
portion. The error states "Missing argument for parameter #1 in call". All the places I've looked seem to use that code with no error. I'm aware that there was recently (or is about to be) an update to Swift. Did this update change something about the synchronize function, and how do I fix this?
You need to call synchronize on standardUserDefaults.
NSUserDefaults.standardUserDefaults().synchronize()
Keep in mind that there is no need to call synchronize.
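For reference, the question's snippet with that change applied (pre-Swift 3 API names; the highScore value here is just a placeholder):
import Foundation

let highScore = 42  // placeholder value, stands in for the game's real high score
let defaults = NSUserDefaults.standardUserDefaults()
defaults.setInteger(highScore, forKey: "highScore")
// synchronize() is an instance method, so it is called on the defaults object.
// As noted above, it is rarely necessary to call it at all.
defaults.synchronize()

// Reading the value back later:
let savedHighScore = defaults.integerForKey("highScore")
print("saved high score: \(savedHighScore)")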
synchronize is an instance method, so you should call it on an instance, not the class:
NSUserDefaults.standardUserDefaults().synchronize()
However, you shouldn't need to do this.
In the CoreFoundation release notes for OS X 10.8, it was stated:
CFPreferences Synchronization
CFPreferencesSynchronize() (and therefore CFPreferencesAppSynchronize() and -[NSUserDefaults synchronize]) is now automatic in virtually all cases. The only remaining reason to call it is if you need a separate process to be able to synchronously access the values you just set; for example if you set a preference, then post a notification which another process receives and reads the same preference. Most regular applications will never need to do this, and applications that do are encouraged to use an inter-process communication API (for example XPC) to communicate with the other process rather than using the preferences system for IPC.
CFPreferencesSynchronize() is also much faster in 10.8, and will avoid doing any work if there are no outstanding changes to read or write.
And the Foundation release notes for 10.9:
-[NSUserDefaults synchronize] is not generally useful
You should only need to call -synchronize if a separate application will be reading the default that you just set, or if a process that does not use AppKit is terminating. In most applications neither of these should ever occur, and -synchronize should not be called. Note that prior to Mac OS X 10.8.4 there was a bug that caused AppKit to automatically synchronize slightly prematurely during application termination, so preferences set in response to windows closing while the application is terminating might not be saved; this has been fixed.
Because NSOperationQueue always runs tasks on a new thread,
I'm confused about the role of isConcurrent when NSOperation runs from NSOperationQueue.
Suppose I have two subclasses of NSOperation, both running an async process, both launched from an NSOperationQueue, and in both I override isCancelled, isExecuting, isFinished and isReady.
What will the difference be if in one I override isConcurrent to always return YES and in the other to always return NO?
Who actually calls isConcurrent? How does the logic change when it is NO versus YES?
It's a legacy method, used before OS X v10.6 and before iOS 4, i.e. before the introduction of GCD for NSOperationQueue dispatch.
From the doc
Operation queues usually provide the threads used to run their operations. In OS X v10.6 and later, operation queues use the libdispatch library (also known as Grand Central Dispatch) to initiate the execution of their operations. As a result, operations are always executed on a separate thread, regardless of whether they are designated as concurrent or non-concurrent operations. In OS X v10.5, however, operations are executed on separate threads only if their isConcurrent method returns NO. If that method returns YES, the operation object is expected to create its own thread (or start some asynchronous operation); the queue does not provide a thread for it.
If you're running OS X >= 10.6 or iOS >= 4, you can safely ignore it.
As a confirmation of this, from the documentation of isConcurrent:
In OS X v10.6 and later, operation queues ignore the value returned by this method and always start operations on a separate thread.
I have added a more concise answer now; I've left this one for the time being as it was part of a discussion...
OK, I was quite happy with my use of the isConcurrent method until today!
I read the sentence:
In OS X v10.6 and later, operation queues ignore the value returned by this method and always start operations on a separate thread.
As a warning relating to QA1712, pointing out that for concurrent operations the start method can now be called on a different thread to the one that queued the operation, which is a change in 10.6 and iOS 4.
I didn't read this as indicating that the isConcurrent method is ignored completely by the queue and has no purpose at all, just that it no longer affects the thread on which start is called. I may have misunderstood this.
I also misunderstood the original question as a more general one about concurrent operations and the isConcurrent flag, and the accepted answer as effectively saying
The isConcurrent flag can be ignored since 10.6 and iOS 4
I'm not sure this is correct.
If I understand the original question now, to paraphrase:
Given a correctly built concurrent NSOperation does the isConcurrent flag itself actually alter the execution of the operation at all?
I guess it's hard to say for all possible setups, but we can say that:
It's not deprecated. It's normal for Apple to deprecate methods that are no longer useful.
The documentation consistently refers to the method as being a required override.
Perhaps isConcurrent is effectively deprecated but as it's only a single BOOL flag perhaps it's not worth the effort to deprecate it in the docs. Or perhaps it does nothing now but Apple have kept it for possible future use and expect you to override it as described.
I created a quick test project with an NSOperation that overrode isConcurrent and main only; isConcurrent was not called at any stage. It was a very simple test, though. I assume you may have tested it also? I assumed that perhaps, if the NSOperationQueue didn't call it, NSOperation's default implementation of start might.
So where does that leave us? Obviously it's no trouble to implement it and return YES to satisfy the documented requirements. However, from my perspective, I think it's too much of a leap to go from the caveat regarding 10.6 and iOS 4.0 to saying that it can be safely ignored now.
My Original Answer...
isConcurrent is not a legacy method and is not ignored by NSOperationQueue. The documentation as quoted in the other answers is a little unclear and easily misunderstood.
isConcurrent = YES means the operation provides its own means of concurrency. Or, to put it another way, the operation "isAlreadyConcurrent" and doesn't need the NSOperationQueue to provide and manage the concurrency. As NSOperationQueue is no longer providing the concurrency, you need to tell it when the operation isFinished or whether it isCancelled (etc.), hence the need to override these methods.
A common example is an NSOperation that manages an NSURLConnection. NSURLConnection has its own mechanism for running in the background, so it doesn't need to be made concurrent by NSOperationQueue.
An obvious question is then: "Why put an already concurrent operation into an NSOperationQueue?" It's so the operation can benefit from the other features of the NSOperationQueue like dependencies etc.
The misleading part of the documentation is referring only to what thread the start method of an NSOperation is called on. The change caused a problem discussed in QA1712.
Paraphrasing the original question to:
Given a correctly built concurrent NSOperation does the value returned by isConcurrent actually alter the execution of the operation at all?
Whilst we could try to understand how the isConcurrent flag is used by NSOperation, NSOperationQueue, or other parts of the operating system, it would be a mistake to rely on any information we discovered.
The purpose of the flag is described only as:
Return YES if the operation runs asynchronously with respect to the current thread or NO if the operation runs synchronously on whatever thread started it.
As long as you respond correctly, it should not be important who calls the method or how it affects any logic inside the Apple frameworks.
Additional Notes:
The documentation does make reference to a change in the way that NSOperationQueue used this value before and after OSX 10.6 and iOS 4.0.
In OS X v10.6, operation queues ignore the value returned by isConcurrent and always call the start method of your operation from a separate thread. In OS X v10.5, however, operation queues create a thread only if isConcurrent returns NO. ...
This change caused an issue described in QA1712.
This commentary is commonly interpreted to imply that the isConcurrent flag is no longer used after 10.6 and iOS 4, and can be ignored. I do not agree with this interpretation as the method has not been formally deprecated and is still documented as a required override for concurrent operations.
Also, the fact that its use has changed in the past is a reminder that it could change again in the future, so we should respond correctly even if we suspect it does not currently have any effect.
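For context, a correctly built concurrent operation of the sort discussed here might look roughly like this in Swift (the class name and the placeholder async work are mine, not from the question):
import Foundation

// A "concurrent" operation manages its own asynchronous work, so it must drive
// the isExecuting / isFinished KVO transitions itself.
final class ConcurrentSketchOperation: Operation {
    private var _executing = false
    private var _finished = false

    // The flag under discussion; Swift also exposes isAsynchronous with the same meaning.
    override var isConcurrent: Bool { return true }
    override var isAsynchronous: Bool { return true }

    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        if isCancelled {
            willChangeValue(forKey: "isFinished")
            _finished = true
            didChangeValue(forKey: "isFinished")
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        // Kick off some self-managed asynchronous work; the queue does not provide a thread for it.
        DispatchQueue.global().asyncAfter(deadline: .now() + 1) { [weak self] in
            self?.finish()
        }
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}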
There are various questions around this topic, and lots of advice saying NOT to use sendSynchronousRequest within dispatch_async, because it blocks the thread, and GCD will spawn lots of new worker threads to service all the synchronous URL requests.
Nobody seems to have a definitive answer as to what iOS 5, [NSURLConnection sendAsynchronousRequest:queue:completionHandler:] does behind the scenes.
One post I read states that it 'might' optimise, and it 'might' use the run loop - but certainly won't create a new thread for each request.
When I pause my debugger while using sendAsynchronousRequest:queue:completionHandler:, the stack trace looks like this:
...now it appears that sendAsynchronousRequest:queue:completionHandler: is actually calling sendSynchronousRequest, and I still have tons of threads created when I use the async method instead of the sync method.
Yes, there are other benefits to using the async call, which I don't want to discuss in this post.
All I'm interested in is performance / thread / system usage, and whether I'm worse off using the sync call inside dispatch_async instead of using the async call.
I don't need advice on using iOS 4 async calls either; this is purely for educational purposes.
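For concreteness, the two patterns being compared look roughly like this (an illustrative request URL; modern Swift spelling of the same NSURLConnection calls):
import Foundation

// Hypothetical request, just for illustration.
let request = URLRequest(url: URL(string: "https://example.com/data")!)

// Pattern 1: the synchronous call wrapped in dispatch_async.
// This parks a GCD worker thread for the duration of the network call.
DispatchQueue.global().async {
    var response: URLResponse?
    if let data = try? NSURLConnection.sendSynchronousRequest(request, returning: &response) {
        print("received \(data.count) bytes")
    }
}

// Pattern 2: the iOS 5 asynchronous API.
// The completion handler is delivered on the queue you pass in.
NSURLConnection.sendAsynchronousRequest(request, queue: .main) { response, data, error in
    if let data = data {
        print("received \(data.count) bytes")
    }
}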
Does anyone have any insightful answers to this?
Thanks
This is actually open source. http://libdispatch.macosforge.org/
It is very unlikely that you will be able to manage the worker threads more efficiently than Apple's implementation does. In this context "asynchronous" doesn't mean select/poll; it just means that the call will return immediately. So it is not surprising that the implementation is spawning threads.
As the previous answer stated, both GCD (libdispatch) and the libblocksruntime are open source. Internally, iOS/OS X manage a global pool of pthreads, plus any app-specific pools you create in your user code. Since there's a 1:N mapping between OS threads and dispatch tasks, you don't have to (and shouldn't) worry about thread creation and disposal under the hood. To that end, a GCD task doesn't use any time in an app's run loop after invocation as it's punted to a background thread.
Most kinds of NSURL operations are I/O-bound; there's no amount of concurrency voodoo that can disguise this, but if Apple's own async implementation uses its synchronous counterpart in the background, that probably suggests it's already quite highly optimized. Contrary to what the previous answer said, libdispatch does use scalable I/O internally (kqueue on OS X/iOS/BSD), and if you hand-rolled something yourself you'd have to deal with file-descriptor readiness to get similar performance.
While possible, the return on time invested is probably marginal. Stick with Apple's implementation and stop worrying about threads!