'OSSpinLock' was deprecated in iOS 10.0: Use os_unfair_lock() from <os/lock.h> instead - ios

I went through this question but the provided solution didn't work. Can someone please explain an alternative approach or a proper implementation using os_unfair_lock()?
When I use 'OS_UNFAIR_LOCK_INIT', it appears to be unavailable.
Thanks!

In iOS 16 (and macOS 13) and later, you should use OSAllocatedUnfairLock. As the documentation says:
it’s unsafe to use os_unfair_lock from Swift because it’s a value type and, therefore, doesn’t have a stable memory address. That means when you call os_unfair_lock_lock or os_unfair_lock_unlock and pass a lock object using the & operator, the system may lock or unlock the wrong object.
Instead, use OSAllocatedUnfairLock, which avoids that pitfall because it doesn’t function as a value type, despite being a structure. All copied instances of an OSAllocatedUnfairLock control the same underlying lock allocation.
So, if you have a counter that you want to interact with in a thread-safe manner:
import os.lock

let counter = OSAllocatedUnfairLock(initialState: 0)
...
counter.withLock { value in
    value += 1
}
...
counter.withLock { value in
    print(value)
}
For support of earlier OS versions, see my original answer below.
In Concurrent Programming With GCD in Swift 3, they warn us that we cannot use os_unfair_lock directly in Swift because “Swift assumes that anything that is struct can be moved, and that doesn't work with a mutex or with a lock.”
In that video, the speaker suggests that if you must use os_unfair_lock, that you put this in an Objective-C class (which won't move the struct). Or if you look at some of the stdlib code, they show you can stay in Swift, but use a UnsafeMutablePointer instead of the struct directly. (Thanks to bscothern for confirming this pattern.)
So, for example, you can write an UnfairLock class that avoids this problem:
import Foundation
import os.lock

final class UnfairLock: NSLocking {
    private let unfairLock: UnsafeMutablePointer<os_unfair_lock> = {
        let pointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        pointer.initialize(to: os_unfair_lock())
        return pointer
    }()

    deinit {
        unfairLock.deinitialize(count: 1)
        unfairLock.deallocate()
    }

    func lock() {
        os_unfair_lock_lock(unfairLock)
    }

    func tryLock() -> Bool {
        os_unfair_lock_trylock(unfairLock)
    }

    func unlock() {
        os_unfair_lock_unlock(unfairLock)
    }
}
Then you can do things like:
let lock = UnfairLock()
And then use lock and unlock like you would with NSLock, but using the more efficient os_unfair_lock behind the scenes:
lock.lock()
// critical section here
lock.unlock()
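If you want a non-blocking attempt, the tryLock method defined above can be used the same way; a minimal sketch:
if lock.tryLock() {
    // critical section here
    lock.unlock()
} else {
    // the lock was held elsewhere; skip the work or try again later
}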
And because this conforms to NSLocking, you can use extensions designed for that. E.g., here is a common method that we use to guarantee that our locks and unlocks are balanced:
extension NSLocking {
    func synchronized<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}
And
lock.synchronized {
    // critical section here
}
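Because synchronized is generic over its return type, it can also hand a value back out of the critical section. A minimal sketch, where protectedCount is a hypothetical property guarded by this lock:
let snapshot = lock.synchronized {
    protectedCount   // hypothetical state guarded by this lock
}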
But, bottom line, do not use os_unfair_lock from Swift without something like the above or as contemplated in that video, both of which provide a stable memory address for the lock.

You can use os_unfair_lock as follows:
var unfairLock = os_unfair_lock_s()
os_unfair_lock_lock(&unfairLock)
os_unfair_lock_unlock(&unfairLock)

Related

What is the reason behind objc_sync_enter doesn't work well with struct, but works well with class?

I have the following demo code.
struct IdGenerator {
    private var lastId: Int64
    private var set: Set<Int64> = []

    init() {
        self.lastId = 0
    }

    mutating func nextId() -> Int64 {
        objc_sync_enter(self)
        defer {
            objc_sync_exit(self)
        }
        repeat {
            lastId = lastId + 1
        } while set.contains(lastId)
        precondition(lastId > 0)
        let (inserted, _) = set.insert(lastId)
        precondition(inserted)
        return lastId
    }
}
var idGenerator = IdGenerator()
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func click(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async {
            for i in 1...10000 {
                let id = idGenerator.nextId()
                print("i : \(id)")
            }
        }
        DispatchQueue.global(qos: .userInitiated).async {
            for j in 1...10000 {
                let id = idGenerator.nextId()
                print("j : \(id)")
            }
        }
    }
}
Whenever I execute click, I get the following crash:
Thread 5 Queue : com.apple.root.user-initiated-qos (concurrent)
#0 0x000000018f58b434 in _NativeSet.insertNew(_:at:isUnique:) ()
#1 0x000000018f598d10 in Set._Variant.insert(_:) ()
#2 0x00000001001fe31c in IdGenerator.nextId() at /Users/yccheok/Desktop/xxxx/xxxx/ViewController.swift:30
#3 0x00000001001fed8c in closure #2 in ViewController.click(_:) at /Users/yccheok/Desktop/xxxx/xxxx/ViewController.swift:56
It isn't clear why the crash happens. My rough guess is that, with a struct, objc_sync_enter(self) doesn't work as expected, so two threads accessing the Set simultaneously cause this issue.
If I change the struct to a class, everything works fine.
class IdGenerator {
    private var lastId: Int64
    private var set: Set<Int64> = []

    init() {
        self.lastId = 0
    }

    func nextId() -> Int64 {
        objc_sync_enter(self)
        defer {
            objc_sync_exit(self)
        }
        repeat {
            lastId = lastId + 1
        } while set.contains(lastId)
        precondition(lastId > 0)
        let (inserted, _) = set.insert(lastId)
        precondition(inserted)
        return lastId
    }
}
May I know the reason behind this? Why does the above objc_sync_enter work well with a class, but not with a struct?
The objc_sync_enter/objc_sync_exit functions take an object instance and use its identity (i.e., address in memory) in order to allocate and associate a lock in memory — and use that lock to protect the code between the enter and exit calls.
However, structs are not objects and don't have reference semantics that would allow them to be used in this way — they're not even guaranteed to be allocated at a stable location in memory. Still, to support interoperation with Objective-C, structs must have a consistent object-like representation when used from Objective-C, or else calling Objective-C code with, say, a struct inside of an Any instance could trigger undefined behavior.
When a struct is passed to Objective-C in the guise of an object (e.g., inside of Any, or AnyObject), it is wrapped up in a temporary object of a private class type called __SwiftValue. This allows it to look like an object to Objective-C, and in some cases, be used like an object, but critically, it is not a long-lived, stable object.
You can see this with the following code:
struct Foo {}
let f = Foo()
print(f) // => Foo()
print(f as AnyObject) // => __SwiftValue
print(ObjectIdentifier(f as AnyObject)) // => ObjectIdentifier(0x0000600002595900)
print(ObjectIdentifier(f as AnyObject)) // => ObjectIdentifier(0x0000600002595c60)
The pointers will change run to run, but you can see that every time f is accessed as an AnyObject, it will have a new address.
This means that when you call objc_sync_enter on a struct, a new __SwiftValue object will be created to wrap your struct, and that object is passed in to objc_sync_enter. objc_sync_enter will then associate a new lock with the temporary object value which was automatically created for you... and then that object is immediately deallocated. This means two major things:
1. When you call objc_sync_exit, a new object will be created and passed in, but the runtime has no lock associated with that new object instance! It may crash at this point.
2. Every time you call objc_sync_enter, you're creating a new, separate lock... which means that there's effectively no synchronization at all: every thread is getting a new lock altogether.
And even that new address isn't guaranteed: depending on optimization, the object may live long enough to be reused across objc_sync_* calls, or a new object could be allocated at exactly the same address as an old one... or a new object could be allocated where a different struct used to be, and you accidentally unlock a different thread's lock...
All of this means that you should definitely avoid using objc_sync_enter/objc_sync_exit as a locking mechanism from Swift, and switch over to something like NSLock, an allocated os_unfair_lock, or even a DispatchQueue, which are well-supported from Swift. (Really, the objc_sync_* functions are primitives for use largely by the Obj-C runtime only, and probably should be un-exposed to Swift.)
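For instance, here is a minimal sketch of the generator rewritten along those lines, as a class guarding its state with an NSLock (the class name is just illustrative; DispatchQueue or an allocated os_unfair_lock would work equally well):
import Foundation

final class SafeIdGenerator {
    private let lock = NSLock()          // heap-allocated lock object with a stable identity
    private var lastId: Int64 = 0
    private var set: Set<Int64> = []

    func nextId() -> Int64 {
        lock.lock()
        defer { lock.unlock() }
        repeat {
            lastId += 1
        } while set.contains(lastId)
        precondition(lastId > 0)
        let (inserted, _) = set.insert(lastId)
        precondition(inserted)
        return lastId
    }
}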

Memory Leak Kotlin Native library in iOS

I'm building a Kotlin library to use in my iOS app using Kotlin/Native. After I call some methods in the library from Swift, which works, I also want to call methods in Swift from the library. To accomplish this I implemented an interface in the library:
class Outbound {
    interface HostInterfaceForTracking {
        fun calcFeatureVector(bitmap: Any?): Array<Array<FloatArray>>?
    }

    var hostInterface: HostInterfaceForTracking? = null

    fun registerInterface(hostInterface: HostInterfaceForTracking) {
        this.hostInterface = hostInterface
        instance.hostInterface = hostInterface
    }
}
This is implemented on the Swift side like this:
class HostInterfaceForTracking : OutboundHostInterfaceForTracking {
    var t : Outbound? = nil

    init() {
        TrackingWrapper.instance?.runOnMatchingLibraryThread {
            self.t = Outbound()
            self.t!.registerInterface(hostInterface: self)
        }
    }

    func calcFeatureVector(bitmap: Any?) -> KotlinArray<KotlinArray<KotlinFloatArray>>? {
        do {
            var test : Any? = (bitmap as! Bitmap).bitmap
            return nil
        } catch {
            return nil
        }
    }
}
The TrackingWrapper looks like this:
class TrackingWrapper : NSObject {
    static var instance: TrackingWrapper? = nil
    var inbound: Inbound? = nil
    var worker: Worker

    override init() {
        self.worker = Worker()
        super.init()
        initInboundInterface()
    }

    func initInboundInterface() {
        runOnMatchingLibraryThread {
            TrackingWrapper.instance = self
            self.inbound = Inbound()
            HostInterfaceForTracking()
        }
    }

    func runOnMatchingLibraryThread(block: @escaping () -> Void) {
        worker.enqueue {
            block()
        }
    }
}
The function runOnMatchingLibraryThread is needed because every call to the TrackingLibrary needs to be called from the exact same thread, so the Worker class initializes a thread and enqueues every method to that thread.
The Bitmap in this case is simply a wrapper for an UIImage, which I already accessed with the .bitmap call, so I've tried to access the wrapped UIImage and save it in the test variable. The library gets the current camera frame from the Swift side every few frames and sends the current image wrapped as a Bitmap to the method calcFeatureVector depicted here.
Problem: My memory load starts increasing as soon as the app starts until the point it crashes. This is not the case if I don't access the wrapped UIImage (var test : Any? = (bitmap as! Bitmap)). So there is a huge memory leak, just by accessing the wrapped variable on the Swift side. Is there anything I've missed or is there any way to release the memory?
Looks like you have a circular dependency here:
TrackingWrapper.instance?.runOnMatchingLibraryThread {
    self.t = Outbound()
    self.t!.registerInterface(hostInterface: self)
}
You are asking a property inside HostInterfaceForTracking to maintain a strong reference to the same instance of HostInterfaceForTracking. You should be using [weak self] to avoid the circular reference.
EDIT:
OK, after seeing the rest of your code, there's a lot to unpack. There is a lot of unnecessary bouncing back and forth between classes, functions, and threads.
There is no need to use runOnMatchingLibraryThread just to create an instance of something. You only need to use that for the code processing the image itself (I would assume; I haven't seen anything so far that requires being split off onto another thread). Inside TrackingWrapper, you can create a singleton more easily, matching the usual Swift pattern, by simply making this the first line:
static let shared = TrackingWrapper()
And everywhere you want to use it, you can just call TrackingWrapper.shared. This is more common and will avoid one of the levels of indirection in the code.
I'm not sure what Worker or Inbound are, but again these can and should be created inside the TrackingWrapper init, rather than branching off to another thread for Inbound's init.
Inside initInboundInterface you are creating an instance of HostInterfaceForTracking() which doesn't get stored anywhere. The only reason HostInterfaceForTracking continues to stay in memory after its creation is the internal circular dependency inside it. This is 100% causing some form of memory issue for you. It should probably also be a property on TrackingWrapper, and again, its init should not be called inside runOnMatchingLibraryThread.
Having HostInterfaceForTracking's init also use runOnMatchingLibraryThread is problematic. If we inline all the code, what's happening is this:
// inside TrackingWrapper
init() {
    self.runOnMatchingLibraryThread {
        TrackingWrapper.instance = self
        self.inbound = Inbound()
        TrackingWrapper.instance?.runOnMatchingLibraryThread {
            self.t = Outbound()
            self.t!.registerInterface(hostInterface: self)
        }
    }
}
Having all these classes unnecessarily keep coming back to TrackingWrapper is going to cause issues.
Inside HostInterfaceForTracking's init, there is no need to create Outbound on a separate thread. The first line in this class can simply be:
var t : Outbound = Outbound()
Or do it in the init if you prefer. Either way will also remove the issue of needing to unwrap Outbound before using it.
Inside Outbound you are storing 2 references to the hostInterface instance:
this.hostInterface = hostInterface
instance.hostInterface = hostInterface
I would have imagined there should only be one. If there are now multiple copies of a class that has a circular dependency and multiple calls to separate threads, this again will cause issues.
I'm still not sure of the differences between Swift and Kotlin. In Swift, when passing self into a function to be stored, the class storing it would mark the property as weak, like so:
weak var hostInterface: ......
which avoids any circular dependency from forming. A quick Google search says this isn't how things work in Kotlin. It might be better to look into the Swift side passing in a closure (a lambda on the Kotlin side) and the Kotlin side executing that; this might avoid the need to store a strong reference. Otherwise you need to look into some part of your code setting hostInterface back to null. Again, it's a bit hard to say, only seeing some of the code and not knowing how it's working.
In short, it looks like the code is very over-complicated and needs to be simplified, so that all these moving pieces can be tracked more easily.
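To make that shape concrete, here is a minimal sketch of the Swift side only, assuming Worker, Inbound, Outbound, and the generated OutboundHostInterfaceForTracking protocol exist exactly as in the question; it keeps a singleton, stores the host interface, and drops the extra thread hops for construction (it does not address the strong reference held on the Kotlin side, discussed above):
final class TrackingWrapper: NSObject {
    static let shared = TrackingWrapper()          // Swift-style singleton

    let worker = Worker()
    var inbound: Inbound?
    var hostInterface: HostInterfaceForTracking?   // stored, so it isn't kept alive only by a cycle

    private override init() {
        super.init()
        inbound = Inbound()
        hostInterface = HostInterfaceForTracking()
    }

    func runOnMatchingLibraryThread(block: @escaping () -> Void) {
        worker.enqueue { block() }
    }
}

final class HostInterfaceForTracking: OutboundHostInterfaceForTracking {
    let t = Outbound()                             // created directly, no extra thread needed

    init() {
        t.registerInterface(hostInterface: self)
    }

    func calcFeatureVector(bitmap: Any?) -> KotlinArray<KotlinArray<KotlinFloatArray>>? {
        return nil
    }
}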

How can I use the __builtin_expect in Swift?

How can I use __builtin_expect(_:_:) in Swift?
Does Swift still support this method?
I found the following definition in Dispatch, but I can't call it.
public func __builtin_expect(_: Int, _: Int) -> Int
Builtin.swift in the Swift standard library defines
/// Optimizer hint that `x` is expected to be `true`.
@_transparent
@_semantics("fastpath")
public func _fastPath(_ x: Bool) -> Bool {
    return _branchHint(x, expected: true)
}

/// Optimizer hint that `x` is expected to be `false`.
@_transparent
@_semantics("slowpath")
public func _slowPath(_ x: Bool) -> Bool {
    return _branchHint(x, expected: false)
}
and these are documented in the Standard Library Programmers Manual: Builtins:
_fastPath returns its argument, wrapped in a Builtin.expect. This informs the optimizer that the vast majority of the time, the branch will be taken (i.e. the then branch is “hot”).
[...]
_slowPath is the same as _fastPath, just with the branches swapped. Both are just wrappers around _branchHint, which is otherwise never called directly.
[...]
NOTE: these are due for a rename and possibly a redesign. They conflate multiple notions that don’t match the average standard library programmer’s intuition.
See also Does guard hint the optimiser that this is the unlikely branch in the Swift forum.
However, these functions are not part of the documented public API, and the leading underscore indicates that they are meant for library-internal use only. It is currently possible to use them in your code
if _fastPath(conditionExpectedToBeTrue) {
    // ...
}

if _slowPath(conditionExpectedToBeFalse) {
    // ...
}
but that is not guaranteed to work or to compile in the future.

In Swift, register "anywhere" to be a delegate of a protocol

I have a complicated view class,
class Snap:UIViewController, UIScrollViewDelegate
{
}
and the end result is the user can, let's say, pick a color...
protocol SnapProtocol: class
{
    func colorPicked(i: Int)
}

class Snap: UIViewController, UIScrollViewDelegate
{
    someDelegate.colorPicked(blah)
}
So who's going to handle it.
Let's say that you know for sure there is something up the responder chain, even walking through container views, which is a SnapProtocol. If so you can use this lovely code to call it
var r : UIResponder = self
repeat { r = r.nextResponder()! } while !(r is SnapProtocol)
(r as! SnapProtocol).colorPicked(x)
If you prefer, you can use this superlative extension
public extension UIResponder // walk up responder chain
{
    public func next<T>() -> T?
    {
        guard let responder = self.nextResponder()
            else { return nil }
        return (responder as? T) ?? responder.next()
    }
}
courtesy these guys and safely find any SnapProtocol above you,
(next() as SnapProtocol?)?.colorPicked(x)
That's great.
But. What if the object that wants to get the colorPicked is a knight-move away from you, down some complicated side chain, and/or you don't even know which object wants it.
My current solution is this: I have a singleton "game manager"-like class,
public class .. a singleton
{
    // anyone who wants a SnapProtocol:
    var useSnap: SnapProtocol! = nil
}
Some freaky class, anywhere, wants to eat SnapProtocol ...
class Dinosaur: NSObject, SnapProtocol
{
    ....
    func colorPicked(index: Int)
    {...}
... so, to set that as the desired delegate, use the singleton
thatSingleton.useSnap = dinosaur
Obviously enough, this works fine.
Note too that I could easily write a little system in the singleton, so that any number of users of the protocol could dynamically register/deregister there and get the calls.
But it has obvious problems: it's not very "pattern", and it seems violently non-idiomatic.
So. Am I really, doing this the right way in the Swift milieu?
Have I indeed confused myself, and is there some entirely different pattern I should be using in today's iOS to send out such "messages to anyone who wants them"? Maybe I shouldn't even be using a protocol?
"Send out messages to anyone who wants them" is pretty much the description of NSNotificationCenter.
True, it's not an API that's designed from the beginning for Swift patterns like closures, strong typing, and protocol-oriented programming. (As noted in other comments/answers, the open source SwiftNotificationCenter is a good alternative if you really want such features.)
However, NSNotificationCenter is robust and battle-hardened — it's the basis for thousands of messages that get sent around between hundreds of objects on each pass through the run loop.
Here's a very concise how-to for using NSNotificationCenter in Swift:
https://stackoverflow.com/a/24756761/294884
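As a rough sketch of that approach applied to this question (using the current NotificationCenter spelling; the notification name and userInfo key are made up for the example, and Dinosaur is the consumer class from the question):
import Foundation

extension Notification.Name {
    // hypothetical notification name for this example
    static let snapColorPicked = Notification.Name("SnapColorPicked")
}

// In Snap, instead of calling a delegate, broadcast the pick:
func broadcastColorPicked(_ index: Int, from sender: Any) {
    NotificationCenter.default.post(name: .snapColorPicked,
                                    object: sender,
                                    userInfo: ["index": index])
}

// Anywhere that wants the message registers itself:
class Dinosaur: NSObject {
    override init() {
        super.init()
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(colorPicked(_:)),
                                               name: .snapColorPicked,
                                               object: nil)
    }

    @objc func colorPicked(_ note: Notification) {
        if let index = note.userInfo?["index"] as? Int {
            print("color picked:", index)
        }
    }
}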
There is no "protocol based" notification mechanism in the Swift standard
libraries or runtime. A nice implementation can be found here https://github.com/100mango/SwiftNotificationCenter. From the README:
A Protocol-Oriented NotificationCenter which is type safe, thread safe and with memory safety.
Type Safe
No more userInfo dictionary and Downcasting, just deliver the concrete type value to the observer.
Thread Safe
You can register, notify, unregister in any thread without crash and data corruption.
Memory Safety
SwiftNotificationCenter store the observer as a zeroing-weak reference. No crash and no need to unregister manually.
It's simple, safe, lightweight and easy to use for one-to-many communication.
Using SwiftNotificationCenter, an (instance of a) conforming class could register itself, for example, like this:
class MyObserver: SnapProtocol {
    func colorPicked(i: Int) {
        print("color picked:", i)
    }
    init() {
        NotificationCenter.register(SnapProtocol.self, observer: self)
    }
}
and a broadcast notification to all conforming registered observers is done as
NotificationCenter.notify(SnapProtocol.self) {
    $0.colorPicked(x)
}

Will this static var work like C?

I want to define a static variable on a class using Swift 2 that is an NSLock.
After researching I discovered that I have to use a struct, in something like this:
class Entity: NSManagedObject {
    struct Mechanism {
        static let lock = NSLock()
    }

    func myFunction() -> NSArray {
        Mechanism.lock.lock()
        // do something
        Mechanism.lock.unlock()
    }
}
Will this work like C? I mean, the first time Mechanism is used a static lock constant will be created and subsequent calls will use the same constant?
I feel that this is not correct because the line
static let lock = NSLock()
is initializing an NSLock. So it will initialize a new one every time.
If this was not swift I would do like this:
static NSLock *lock;
if (!lock) {
    lock = ... initialize
}
How do I do the equivalent in Swift 2?
You said that "after researching, I discovered that I have to use a struct [to get a static]." You then go on to ask whether it really is a static a how it changes in Swift 2.0.
So, a couple of observations:
1. Yes, this static-within-a-struct pattern will achieve the desired behavior: only one NSLock will be instantiated.
2. The significant change in the language came in Swift 1.2 (not 2.0), which allows static properties directly in classes, eliminating the need for the struct altogether:
class Entity: NSManagedObject {
    static let lock = NSLock()

    func myFunction() -> NSArray {
        Entity.lock.lock()
        // do something
        Entity.lock.unlock()
    }
}
Seriously, nobody uses NSLock on MacOS X or iOS. In Objective-C, you use @synchronized. In Swift, you use a global function like this:
func Synchronized(obj: AnyObject, _ block: dispatch_block_t)
{
    objc_sync_enter(obj)
    block()
    objc_sync_exit(obj)
}
First, this uses a recursive lock. That alone will save you gazillions of headaches. Second, it is much more fine-grained, with a lock for one specific object. To use:
func myFunction() -> NSArray {
    Synchronized(someObject) {
        // Stuff to do.
    }
}
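If you also want the synchronized block to hand back a result, the same idea can be written as a generic variant; a minimal sketch, where someObject is whatever object you lock on, as above:
func Synchronized<T>(obj: AnyObject, _ block: () throws -> T) rethrows -> T
{
    objc_sync_enter(obj)
    defer { objc_sync_exit(obj) }
    return try block()
}

// Usage: compute a value while holding the lock associated with someObject
let names = Synchronized(someObject) {
    return NSArray(array: ["a", "b", "c"])
}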
