PJSIP random crashing in Swift - iOS

I'm building a softphone for iOS using Swift and PJSIP.
The PJSIP documentation states: "PJLIB API should be called from a registered thread, otherwise it will raise assertion such as "Calling pjlib from unknown/external thread...". With GCD, we cannot really be sure of which thread executing the PJLIB function. Registering that thread to PJLIB seems to be a simple and easy solution, however it potentially introduces a random crash which is harder to debug. Here are few possible crash scenarios:
PJLIB's pj_thread_desc should remain valid until the registered thread stopped, otherwise crash of invalid pointer access may occur, e.g: in pj_thread_check_stack().
Some compatibility problems between GCD and PJLIB, see #1837 for more info.
If you want to avoid any possibility of blocking operation by PJLIB (or any higher API layer such as PJMEDIA, PJNATH, PJSUA that usually calls PJLIB), instead of dispatching the task using GCD, the safest way is to create and manage your own thread pool and register that thread pool to PJLIB. Or alternatively, simply use PJSUA timer mechanism (with zero delay), see pjsua_schedule_timer()/pjsua_schedule_timer2() docs for more info."
So what I did was to use the class below found here on another thread in Stack Overflow:
class Worker {
    private var thread: Thread?
    private let semaphore = DispatchSemaphore(value: 0)
    private let lock = NSRecursiveLock()
    private var queue = [() -> Void]()

    public func enqueue(_ block: @escaping () -> Void) {
        locked { queue.append(block) }
        semaphore.signal()
        if thread == nil {
            thread = Thread(block: work)
            thread?.start()
        }
    }

    private func work() {
        while true {
            semaphore.wait()
            let block = locked { queue.removeFirst() }
            block()
        }
    }

    // Runs the block while holding the recursive lock.
    private func locked<T>(_ block: () -> T) -> T {
        lock.lock()
        defer { lock.unlock() }
        return block()
    }
}
So every time I have to call any function from pjsip I use
worker.enqueue { }
For example:
worker.enqueue {
    var threadError: NSError = NSError()
    if !createPjSipThread(error: &threadError) {
        print(threadError)
    }
}
worker.enqueue {
    var error = NSError()
    if !PjApp.shared().start(appDir: "", error: &error) {
        print(error)
    }
}
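The same pattern extends naturally when a PJSIP call has to hand a result back to the UI; here is a hedged sketch (call(uri:error:) is a hypothetical wrapper method used only for illustration, not one of the real APIs above):
func makeCall(to uri: String, completion: @escaping (Bool) -> Void) {
    worker.enqueue {
        var error = NSError()
        // Hypothetical PJSIP wrapper call, executed on the dedicated worker thread.
        let ok = PjApp.shared().call(uri: uri, error: &error)
        // Deliver the result back on the main queue for the UI.
        DispatchQueue.main.async {
            completion(ok)
        }
    }
}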
Without the worker thread, the app crashes constantly (especially on TestFlight/production builds).
Any ideas on what I'm missing or doing wrong? Thanks in advance.

Related

URLSession concurrency issue for async calls

I am trying to implement an upload mechanism for my application. However, I have a concurrency issue I couldn't resolve. I send my requests using async/await with the following code. In my application, an UploadService is created every time an event is fired from some part of my code. As an example, I show the creation of my UploadService in a for loop. The problem is that if I do not use NSLock, the backend service is called multiple times (5 in this case, because of the loop). But if I use NSLock, it never reaches the .success or .failure part, because of a deadlock I think. Could someone help me achieve this without firing the upload service multiple times, while still reaching the success part of my request?
final class UploadService {
/// If I use NSLock in the commented lines it never reaches to switch result so can't do anything in success or error part.
static let locker = NSLock()
init() {
Task {
await uploadData()
}
}
func uploadData() async {
// Self.locker.lock()
let context = PersistentContainer.shared.newBackgroundContext()
// It fetches data from core data to send it in my request
guard let uploadedThing = Upload.coreDataFetch(in: context) else {
return
}
let request = UploadService(configuration: networkConfiguration)
let result = await request.uploadList(uploadedThing)
switch result {
case .success:
print("success")
case .failure(let error as NSError):
print("error happened")
}
// Self.locker.unlock()
}
}
class UploadExtension {
func createUploadService() {
for i in 0...4 {
let uploadService = UploadService()
}
}
}
A couple of observations:
Never use locks (or wait for semaphores or dispatch groups, etc.) to attempt to manage dependencies between Swift concurrency tasks. This is a concurrency system predicated upon the contract that threads can make forward progress. It cannot reason about the concurrency if you block threads with mechanisms outside of its purview.
Usually you would not create a new service for every upload. You would create one and reuse it.
E.g., either:
func createUploadService() async {
let uploadService = UploadService()
for i in 0...4 {
await uploadService.uploadData(…)
}
}
Or, more likely, if you might use this same UploadService later, do not make it a local variable at all. Give it some broader scope.
let uploadService = UploadService()
func createUploadService() async {
for i in 0...4 {
await uploadService.uploadData(…)
}
}
The above only works in a simple for loop, because we can simply await the result of the prior iteration.
But what if you wanted the UploadService to keep track of the prior upload request and you couldn't just await it like above? You could keep track of the Task and have each task await the result of the previous one, e.g.,
actor UploadService {
var task: Task<Void, Never>? // change to `Task<Void, Error>` if you change it to a throwing method
func upload() {
…
task = Task { [previousTask = task] in // capture copy of previous task (if any)
_ = await previousTask?.result // wait for it to finish before starting this one
await uploadData()
}
}
}
FWIW, since this service now carries some internal state, I made it an actor (to avoid races).
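For example, a hedged usage sketch, called from some async context (uploadData() stands in for your real request code):
let service = UploadService()

// Each call returns quickly; the actor chains the Tasks internally, so the
// second upload does not start until the first one has finished.
for _ in 0..<5 {
    await service.upload()
}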
Since Task {} inherits its actor context and priority (e.g. the main actor) from the scope where it is created, try using Task.detached, which does not inherit that context, to keep the work off the scope it was created in (it may have been created on the main thread). Create the task the following way:
Task.detached(priority: .default) {
await uploadData()
}

ThreadSanitizer vs. async/await in XCTest

I'm trying to test an async function that creates a WKWebsiteDataStore with some HTTPCookies.
import Foundation
import WebKit
@MainActor
class HttpCookieUtility {
func createWebsiteDataStore(httpCookies: [HTTPCookie]) async -> WKWebsiteDataStore {
let websiteDataStore = WKWebsiteDataStore.nonPersistent()
for cookie in httpCookies {
await websiteDataStore.httpCookieStore.setCookie(cookie)
}
return websiteDataStore
}
}
The unit test code is here:
import XCTest
@testable import MyFramework
final class HttpCookieUtilityTests: XCTestCase {
func test_websiteDataStore_is_created_from_cookies() async throws {
var httpCookies = [HTTPCookie]()
for index in 1...20 {
let properties: [HTTPCookiePropertyKey: Any] = [.domain: "www.example.com", .path: ".", .name: "name-\(index)", .value: "value-\(index)"]
let httpCookie = HTTPCookie(properties: properties)!
httpCookies.append(httpCookie)
}
let websiteDataStore = await HttpCookieUtility().createWebsiteDataStore(httpCookies: httpCookies)
XCTAssertNotNil(websiteDataStore)
}
}
The unit test passes. However, when I turn on the Thread Sanitizer in the scheme's Diagnostics settings, I see a series of warnings like the following (not all output is included here, for brevity):
WARNING: ThreadSanitizer: data race (pid=47736) Read of size 8 at
0x7b6400040310 by main thread:
#0 (1) suspend resume partial function for HttpCookieUtilityTests.test_websiteDataStore_is_created_from_cookies()
HttpCookieUtilityTests.swift:18 (MyFrameworkTests:x86_64+0x1e793)
#1 swift::runJobInEstablishedExecutorContext(swift::Job*) :2 (libswift_Concurrency.dylib:x86_64+0x2a4b5)
Previous write of size 8 at 0x7b6400040310 by thread T7:
#0 HttpCookieUtilityTests.test_websiteDataStore_is_created_from_cookies()
HttpCookieUtilityTests.swift:18 (MyFrameworkTests:x86_64+0x1e613)
#1 swift::runJobInEstablishedExecutorContext(swift::Job*) :2 (libswift_Concurrency.dylib:x86_64+0x2a4b5)
Location is heap block of size 1032 at 0x7b6400040100 allocated by
thread T7:
#0 malloc :2 (libclang_rt.tsan_iossim_dynamic.dylib:x86_64+0x533ac)
#1 swift::StackAllocator<1000ul, &(swift::TaskAllocatorSlabMetadata)>::getSlabForAllocation(unsigned
long) :2 (libswift_Concurrency.dylib:x86_64+0x2f19a)
#2 swift::runJobInEstablishedExecutorContext(swift::Job*) :2 (libswift_Concurrency.dylib:x86_64+0x2a4b5)
Thread T7 (tid=397262, running) is a GCD worker thread
SUMMARY: ThreadSanitizer: data race HttpCookieUtilityTests.swift:18 in
(1) suspend resume partial function for
HttpCookieUtilityTests.test_websiteDataStore_is_created_from_cookies()
When I run the same code from a normal application, I don't see the same ThreadSanitizer warnings. I'm assuming that running XCTests that require the main thread is not supported or is somehow problematic, but I wanted to rule out issues with the actual code. I'm also just wondering if async/await is creating unexpected complications.
Also, I tried this alternate implementation which does not have the ThreadSanitizer issue, and is also significantly faster (presumably because the setCookie operations can process concurrently), but it specifically avoids using the modern concurrency support (aside from the continuation).
class HttpCookieUtility {
func createWebsiteDataStore(httpCookies: [HTTPCookie]) async -> WKWebsiteDataStore {
return await withCheckedContinuation{ continuation in
DispatchQueue.main.async {
let websiteDataStore = WKWebsiteDataStore.nonPersistent()
let waitGroup = DispatchGroup()
for cookie in httpCookies {
waitGroup.enter()
websiteDataStore.httpCookieStore.setCookie(cookie, completionHandler: {
waitGroup.leave()
})
}
waitGroup.notify(queue: DispatchQueue.main) {
continuation.resume(returning: websiteDataStore)
}
}
}
}
}
Note that I removed the @MainActor attribute from the HttpCookieUtility class declaration.
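For comparison, here is a rough structured-concurrency sketch of the same idea: a task group issues all the setCookie calls without awaiting each one in turn, so they can overlap much like the DispatchGroup version, while staying on the main actor. Treat it as a sketch only; the class name is made up for illustration, and it assumes the Swift 5 (non-strict) concurrency checking mode, since WKWebsiteDataStore is not Sendable.
import WebKit

@MainActor
class HttpCookieUtilityTaskGroupSketch {
    func createWebsiteDataStore(httpCookies: [HTTPCookie]) async -> WKWebsiteDataStore {
        let websiteDataStore = WKWebsiteDataStore.nonPersistent()
        await withTaskGroup(of: Void.self) { group in
            for cookie in httpCookies {
                // Each child task issues its setCookie call without waiting for
                // the previous one to finish.
                group.addTask { @MainActor in
                    await websiteDataStore.httpCookieStore.setCookie(cookie)
                }
            }
            // withTaskGroup waits for all child tasks before returning.
        }
        return websiteDataStore
    }
}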

GKTurnBasedMatch saveCurrentTurnWithMatchData returning an error on every other call

The player takes multiple actions before completing a turn. After each action, I call saveCurrentTurnWIthMatchData, with the match data updated.
[gameMatch saveCurrentTurnWithMatchData: matchData completionHandler: ^(NSError *error){
    if (error) {
        NSLog(@"Error updating match = %@", error);
    }
}];
On every other call I get "Error Domain=GKServerErrorDomain Code=5002 "status = 5002, Unexpected game state version expectedGameStateVersion='null'"
The GKTurnBasedMatch.state = 3 (GKTurnBasedMatchStatusMatching) in every call. I'm not changing this, I just check before the call. I have no idea if this is relevant.
Any suggestion what to try?
the "Unexpected game state version" error happens irregularly and is hard to reproduce -- although i can often reproduce it by calling saveCurrentTurn several times in rapid succession. it would be useful to have clarity from Apple on this since it appears to be server side (but i'm not sure). i wrote a unit test that does stress testing on GKTurnBasedMatch.saveCurrentTurn. it fails irregularly but often up to 20% of the time.
i have no full solution only a partial one. to partially mitigate the problem, you can wrap your saveCurrentTurn calls in a task queue, that way they wait for the previous one to finish. not a solution, but helps.
let dqt:DispatchQueueTask = {
gkTurnBasedMatch.saveCurrentTurn(withMatch:payload) { error in
//handle error
TaskQueue.completion() //step to next task
}
}
TaskQueue.add(task:dqt)
And here is the TaskQueue class I use:
import Foundation
/*
Uses the DispatchQueue to execute network commands in series
useful for server commands like GKTurnBasedMatch.saveCurrentTurn(...)
Usage:
let doSomethingThatTakesTime:DispatchQueueTask = {
...
TaskQueue.completion()
}
TaskQueue.add(task: doSomethingThatTakesTime)
*/
typealias DispatchQueueTask = () -> ()
let DispatchQueue_serial = DispatchQueue(label: "org.my.queue.serial")
class TaskQueue {
static var isRunning:Bool = false
static var tasks:[DispatchQueueTask] = []
static func add(task: @escaping DispatchQueueTask) {
tasks.append(task)
run()
}
static func run() {
guard !isRunning else { return }
guard tasks.count > 0 else { return }
let task = tasks.removeFirst()
DispatchQueue_serial.async {
TaskQueue.isRunning = true
task()
}
}
static func completion() {
TaskQueue.isRunning = false
TaskQueue.run()
}
}
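If your deployment target supports Swift concurrency, a rough alternative sketch is to chain Tasks inside an actor, the same pattern shown in the UploadService answer above. This assumes the automatically generated async overload of saveCurrentTurn(withMatch:) and the Swift 5 concurrency-checking mode (GKTurnBasedMatch is not Sendable); TurnSaver is just a name for the sketch.
import GameKit

actor TurnSaver {
    private var previousTask: Task<Void, Never>?

    func save(_ matchData: Data, to match: GKTurnBasedMatch) {
        previousTask = Task { [previousTask] in
            // Wait for the prior save to finish before starting this one.
            _ = await previousTask?.result
            do {
                try await match.saveCurrentTurn(withMatch: matchData)
            } catch {
                print("Error updating match: \(error)")
            }
        }
    }
}
Calls to save(_:to:) return almost immediately; the actor guarantees the saves reach Game Center one at a time.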

Adding condition based on previous result on DispatchQueue

Is it possible to set a condition on the next block of a DispatchQueue? Suppose there are 2 API calls that should be executed in order, callAPI1 -> callAPI2, but callAPI2 should only be executed if callAPI1 returns true. Please check the code below for a clearer picture of the situation:
let dispatchQueue: DispatchQueue = DispatchQueue(label: "queue")
let dispatchGroup = DispatchGroup()
var isSuccess: Bool = false
dispatchGroup.enter()
dispatchQueue.sync {
self.callAPI1(completion: { (result) in
isSuccess = result
dispatchGroup.leave()
})
}
dispatchGroup.enter()
dispatchQueue.sync {
if isSuccess { //--> This one always get false
self.callAPI2(completion: { (result) in
isSuccess = result
dispatchGroup.leave()
})
} else {
dispatchGroup.leave()
}
}
dispatchGroup.notify(queue: DispatchQueue.main, execute: {
completion(isSuccess) //--> This one always get false
})
Currently the above code always returns isSuccess as false despite callAPI1 returning true, which causes only callAPI1 to be called.
All non-playground code was typed directly into the answer, so expect minor errors.
It appears that you are trying to make an asynchronous call into a synchronous one, and the way you are attempting this simply will not work. Assuming callAPI1 is asynchronous then after:
self.callAPI1(completion: { (result) in
isSuccess = result
})
the completion block has (in all probability) not yet been run, you cannot test isSuccess immediately, as in:
self.callAPI1(completion: { (result) in
isSuccess = result
})
if isSuccess
{
// in all probability this will never be reached
}
Wrapping the code into a synchronous block will have no effect whatsoever:
dispatchQueue.sync
{
self.callAPI1(completion: { (result) in
isSuccess = result
})
// at this point, in all probability, the completion block
// has not yet run, therefore...
}
// at this point it has also not run
A sync dispatch just runs its block on a different queue and waits for it to complete; if that block contains asynchronous code, as yours does, then it is not magically made synchronous - it executes asynchronously as normal, the synchronously dispatched block terminates, the sync dispatch returns, and your code continues. The sync dispatch has no real effect (apart from running the block on a different queue while blocking the current one).
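A tiny standalone sketch of that behaviour (the queue label and timings are made up, purely for illustration):
let queue = DispatchQueue(label: "demo")
queue.sync {
    // The async work is merely *scheduled* here; the block itself finishes
    // immediately, so the sync dispatch returns right away.
    DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
        print("async work done")   // printed about a second later
    }
    print("block done")            // printed immediately
}
print("sync returned")             // printed right after "block done"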
If you need to sequence a number of asynchronous calls you can do it a number of ways. One method is to simply chain the calls through the completion blocks. Using this approach your code becomes:
self.callAPI1(completion: { (result) in
    if !result { completion(false) }
    else
    {
        self.callAPI2(completion: { (result) in
            completion(result)
        })
    }
})
Using Semaphores
If you have a long sequence of such calls using the above pattern then the code can become very nested, in such a case instead of nesting you can use semaphores to sequence the calls. A simple semaphore can be used to block (thread) execution, using wait(), until it is signalled (by an unblocked thread), using signal().
Notice the emphasis here on blocking, once you introduce the ability to block execution all sorts of issues have to be considered: among them are UI responsiveness - blocking the UI thread is not good; deadlock - for example if the code that will issue semaphore wait and signal operations is executing on the same thread then after a wait there will be no signal...
Here is a sample Swift Playground script to demonstrate using semaphores. The pattern follows your original code but uses a semaphore in addition to your boolean.
import Cocoa
// some convenience functions for our dummy callAPI1 & callAPI2
func random(_ range : CountableClosedRange<UInt32>) -> UInt32
{
let lower = range.lowerBound
let upper = range.upperBound
return lower + arc4random_uniform(upper - lower + 1)
}
func randomBool() -> Bool
{
return random(0...1) == 1
}
class Demo
{
// grab the global concurrent utility queue to schedule our work on
let workerQueue = DispatchQueue.global(qos : .utility)
// dummy callAPI1, just pauses and then randomly return success or failure
func callAPI1(_ completion : @escaping (Bool) -> Void) -> Void
{
// do the "work" on workerQueue, which is concurrent so other work
// can be executing, or *blocked*, on the same queue
let pause = random(1...2)
workerQueue.asyncAfter(deadline: .now() + Double(pause))
{
// produce a random success result
let success = randomBool()
print("callAPI1 after \(pause) -> \(success)")
completion(success)
}
}
func callAPI2(_ completion : @escaping (Bool) -> Void) -> Void
{
let pause = random(1...2)
workerQueue.asyncAfter(deadline: .now() + Double(pause))
{
let success = randomBool()
print("callAPI2 after \(pause) -> \(success)")
completion(success)
}
}
func runDemo(_ completion : @escaping (Bool) -> Void) -> Void
{
// We run the demo as a standard async function
// which doesn't block the main thread
workerQueue.async
{
print("Demo starting...")
var isSuccess: Bool = false
let semaphore = DispatchSemaphore(value: 0)
// do the first call
// this will asynchronously execute on a different thread
// *including* its completion block
self.callAPI1
{ (result) in
isSuccess = result
semaphore.signal() // signal completion
}
// we can safely wait for the semaphore to be
// signalled as callAPI1 is executing on a different
// thread so we will not deadlock
semaphore.wait()
if isSuccess
{
self.callAPI2
{ (result) in
isSuccess = result
semaphore.signal() // signal completion
}
semaphore.wait() // wait for completion
}
completion(isSuccess)
}
}
}
Demo().runDemo { (result) in print("Demo result: \(result)") }
// For the Playground
// ==================
// The Playground can terminate a program run once the main thread is done
// and before all async work is finished. This can result in incomplete execution
// and/or errors. To avoid this we sleep the main thread for a few seconds.
sleep(6)
print("All done")
// Run the Playground multiple times, the results should vary
// (different wait times, callAPI2 may not run). Wait until
// the "All done"" before starting next run
// (i.e. don't push stop, it confuses the Playground)
Or...
Another approach to avoid the nesting is to design functions (or operators) which take two async methods and produce a single one by implementing the nesting pattern. Long nested sequences can then be reduced to more linear sequences. This approach is left as an exercise.
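As a starting point for that exercise, here is one possible shape for such a function, sketched against the same completion-handler signatures used by callAPI1/callAPI2 above (an illustration of the idea, not a finished utility):
typealias AsyncBoolCall = (@escaping (Bool) -> Void) -> Void

// Combine two calls into one: run `first`, and only if it succeeds run `second`.
func chained(_ first: @escaping AsyncBoolCall,
             _ second: @escaping AsyncBoolCall) -> AsyncBoolCall {
    return { completion in
        first { ok in
            guard ok else { completion(false); return }
            second(completion)
        }
    }
}

// Usage, e.g. inside Demo:
// let combined = chained(callAPI1, callAPI2)
// combined { result in print("combined result: \(result)") }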
HTH

Swift 3 GCD lock variable and block_and_release error

I am using Swift 3 GCD in order to perform some operations in my code, but I'm often getting a _dispatch_call_block_and_release error. I suppose the reason behind this error is that different threads modify the same variable, but I'm not sure how to fix the problem. Here is my code and explanations:
I have one variable which is accessed and modified in different threads:
var queueMsgSent: Dictionary<Date,BTCommand>? = nil
func lock(obj: AnyObject, blk:() -> ()) {
objc_sync_enter(obj)
blk()
objc_sync_exit(obj)
}
func addMsgSentToQueue(msg: BTCommands) {
if queueMsgSent == nil {
queueMsgSent = Dictionary.init()
}
let currentDate = Date()
lock(obj: queueMsgSent as AnyObject) {
queueMsgSent?.updateValue(msg, forKey: currentDate)
}
}
func deleteMsgSentWithId(id: Int) {
if queueMsgSent == nil { return }
for (date, msg) in queueMsgSent! {
if msg.isAck() == false && msg.getId()! == id {
lock(obj: queueMsgSent as AnyObject) {
queueMsgSent?.removeValue(forKey: date)
}
}
}
}
func runSent() -> Void {
while(true) {
if queueMsgSent == nil { continue }
for (date, msg) in queueMsgSent! {
if msg.isSent() == false {
mainSearchView?.btCom?.write(str: msg.getCommand()!)
msg.setSent(val: true)
lastMsgSent = Date()
continue
}
if msg.isAck() == true {
lock(obj: queueMsgSent as AnyObject) {
queueMsgSent?.removeValue(forKey: date)
}
continue
}
}
}
}
I start runSent method as:
DispatchQueue.global().async(execute: runSent)
I need runSent to continuously check some conditions within queueMsgSent, and the other functions addMsgSentToQueue and deleteMsgSentWithId are called on the main thread if necessary. I am using a locking mechanism but it's not working properly.
I strongly suggest you use the DispatchQueue(s) provided by Grand Central Dispatch; they make multithreading management much easier.
Command
Let's start with your command class
class Command {
let id: String
var isAck = false
var isSent = false
init(id:String) {
self.id = id
}
}
Queue
Now we can build our Queue class, it will provide the following functionalities
This is our own class; it should not be confused with the concept of a DispatchQueue!
push a Command into the queue
delete a Command from the queue
start the processing of all the elements into the queue
And now the code:
class Queue {
typealias Element = (date:Date, command:Command)
private var storage: [Element] = []
private let serialQueue = DispatchQueue(label: "serialQueue")
func push(command:Command) {
serialQueue.async {
let newElement = (Date(), command)
self.storage.append(newElement)
}
}
func delete(by id: String) {
serialQueue.async {
guard let index = self.storage.index(where: { $0.command.id == id }) else { return }
self.storage.remove(at: index)
}
}
func startProcessing() {
Timer.scheduledTimer(withTimeInterval: 10, repeats: true) { timer in
self.processElements()
}
}
private func processElements() {
serialQueue.async {
// send messages where isSent == false
let shouldBeSent = self.storage.filter { !$0.command.isSent }
for elm in shouldBeSent {
// TODO: add here code to send message
elm.command.isSent = true
}
// remove from storage message where isAck == true
self.storage = self.storage.filter { !$0.command.isAck }
}
}
}
How does it work?
As you can see the storage property is an array holding a list of tuples, each tuple has 2 components: Date and Command.
Since the storage array is accessed by multiple threads, we need to make sure it is accessed in a thread-safe way.
So each time we access storage we wrap our code like this:
serialQueue.async {
// access self.storage safely
}
Each closure we write this way 👆👆👆 is added to our Serial Dispatch Queue.
The Serial Queue processes one closure at a time. That's why our storage property is accessed in a thread-safe way!
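As a side note (a sketch, not part of the original class): if you also need to read something back synchronously, dispatch the read with sync, which returns the closure's value while still funnelling the access through the same serial queue. In Swift 4 and later an extension in the same file can see the private properties; in Swift 3 you would mark them fileprivate instead.
extension Queue {
    // Thread-safe snapshot of how many commands are currently queued.
    var count: Int {
        return serialQueue.sync { storage.count }
    }
}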
Final consideration
The following block of code is evil
while true {
...
}
It uses all the available CPU time, freezes the UI (when executed on the main thread), and drains the battery.
As you can see I replaced it with
Timer.scheduledTimer(withTimeInterval: 10, repeats: true) { timer in
self.processElements()
}
which calls self.processElements() every 10 seconds, leaving the CPU plenty of time to process other threads.
Of course, it's up to you to change the number of seconds to better fit your scenario.
If you're uncomfortable with the objc mechanisms, you might take a look here. Using that, you create a PThreadMutex for the specific synchronizations you want to coordinate, then use mutex.fastsync{ *your code* } to segregate accesses. It's a simple, very lightweight mechanism using OS-level calls, but you'll have to watch out for creating deadlocks.
The example you provide depends on the object always being the same physical entity, because the objc lock uses the address as the ID of what's being synchronized. Because you seem to have to check everywhere for the existence of queueMsgSent, I'm wondering what the update value routine is doing - if it ever deletes the dictionary, expecting it to be created later, you'll have a potential race as different threads can be looking at different synchronizers.
Separately, your loop in runSent is a spin loop - if there's nothing to do, it's just going to burn CPU rather than waiting for work. Perhaps you could consider revising this to use semaphores or some more appropriate mechanism that would allow the workers to block when there's nothing to do?
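A minimal sketch of that suggestion (only the blocking part is shown; the names are illustrative): pair the queue with a semaphore so the worker sleeps until there is something to process.
let pendingWork = DispatchSemaphore(value: 0)

func runSent() {
    while true {
        // Blocks the worker thread until a producer signals; no CPU is burned
        // while there is nothing to do.
        pendingWork.wait()
        // ... pull the next message out of queueMsgSent (under the lock) and send it ...
    }
}

// Producer side, whenever a message is enqueued:
// addMsgSentToQueue(msg: command)
// pendingWork.signal()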
