Run Two Loops in Parallel in iOS Swift

Is it possible to run two while loops in parallel in Swift? Both loops are independent, but both do heavy work.
Is it possible to run two loops at the same time?

Yes, it's possible. You just have to run the two loops on separate threads.
It's a general rule: if you want two pieces of work to run at the same time, you have to run them separately.
Example code which you can copy and paste into a playground to see how it works:
// DispatchQueue(label: "com.app.queue1", qos: .background).async {
DispatchQueue.global(qos: .background).async {
    while true {
        print(1)
        sleep(1)
    }
}
// DispatchQueue(label: "com.app.queue2", qos: .background).async {
DispatchQueue.global(qos: .background).async {
    while true {
        print(2)
        sleep(1)
    }
}
... you can also use the main thread for one loop and a background thread for the other; it depends on what you need. But the main idea is the same: you have to have separate threads.

You can run them on two separate background threads.
DispatchQueue.global(qos: .background).async {
    // function 1
}
DispatchQueue.global(qos: .background).async {
    // function 2
}
Since these run on background threads, if you then want to do something on the main thread (update the UI, for example), you'll have to switch to DispatchQueue.main.
If you want to run them on two separate queues, you can create them like this:
let backgroundQueue = DispatchQueue(label: "loadQueueOne", qos: .background)
backgroundQueue.async {
    // function 1
}
You can create a second queue the same way for the other function, as sketched below.
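Putting the pieces together, a minimal sketch with two hypothetical functions heavyTask1() and heavyTask2() (assumed names, not from the question) and a hop back to the main queue when each finishes:
import Foundation

func heavyTask1() { /* hypothetical long-running work */ }
func heavyTask2() { /* hypothetical long-running work */ }

let queueOne = DispatchQueue(label: "loadQueueOne", qos: .background)
let queueTwo = DispatchQueue(label: "loadQueueTwo", qos: .background)

queueOne.async {
    heavyTask1()
    DispatchQueue.main.async {
        // back on the main thread, e.g. to update the UI for task 1
    }
}
queueTwo.async {
    heavyTask2()
    DispatchQueue.main.async {
        // back on the main thread, e.g. to update the UI for task 2
    }
}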

Related

GCD serial async queue vs serial sync queue nested in async

I have to protect a critical section of my code.
I don't want the caller to be blocked by a function that can be time-consuming, so I'm creating a serial queue with background QoS and then dispatching asynchronously:
private let someQueue = DispatchQueue(label: "\(type(of: self)).someQueue", qos: .background)

func doSomething() {
    self.someQueue.async {
        // critical section
    }
}
As I understand it, the function will return directly on the calling thread without blocking.
I've also seen somewhere the pattern of dispatching first asynchronously on the global queue, then synchronously on a serial queue:
private let someQueue2 = DispatchQueue(label: "\(type(of: self)).someQueue2")

func doSomething() {
    DispatchQueue.global(qos: .background).async {
        self.someQueue2.sync {
            // critical section
        }
    }
}
What's the difference between the two approaches?
Which is the right approach?
In the first approach, the calling thread is not blocked, and the task (the critical section) passed in the async block will be executed in the background.
In the second approach, the calling thread is not blocked either, but the "background" thread will sit waiting for the sync block (the critical section) to be executed by another thread.
I don't know what you do in your critical section, but the first approach seems the best one. Note that background QoS is quite slow; consider using the default QoS for your queue unless you know what you are doing. Also note that, by convention, the queue label starts with the bundle identifier. So something like this:
private let someQueue = DispatchQueue(label: "\(Bundle.main.bundleIdentifier ?? "").\(type(of: self)).someQueue")
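A minimal sketch of that recommended shape (one serial queue, no explicit QoS, async dispatch), assuming a hypothetical class named MyService:
import Foundation

final class MyService {
    // Serial queue; no explicit QoS (the answer suggests avoiding .background),
    // label follows the bundle-identifier convention.
    private let someQueue = DispatchQueue(label: "\(Bundle.main.bundleIdentifier ?? "").MyService.someQueue")

    func doSomething() {
        someQueue.async {
            // critical section: blocks submitted here run one at a time,
            // and the caller returns immediately without blocking
        }
    }
}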

Synchronise async tasks in a serial queue

let serialQueue = DispatchQueue(label: "Serial Queue")

func performCriticalSectionTask() {
    serialQueue.async {
        performLongRunningAsyncTask()
    }
}

func performLongRunningAsyncTask() {
    // some long running task
}
The function performCriticalSectionTask() can be called from different places many times.
I want this function to run one at a time; thus, I kept the critical section of code inside the serial async queue.
But the problem here is that the critical section itself is performLongRunningAsyncTask(), which returns immediately, so the serial queue will not wait for the current task to complete first and will start another one.
How can I solve this problem?
If performLongRunningAsyncTask ran entirely on one thread, it would only be called once at a time. In your case it delegates the work to another thread, so wrapping it in another queue doesn't help, since the work will end up on another thread anyway.
You could do checks in the method itself; the simplest way is to add a boolean. (Or you could add these checks in the class that executes this method, with a completion handler.)
Other ways are dispatch groups, semaphores, or locks.
If you still need the task to be executed later (rather than skipped), you should use a dispatch group, OperationQueue, or semaphore; see the sketch after the code below.
func performLongRunningAsyncTask() {
    // Note: a bare `return` inside serialQueue.sync only exits the closure,
    // so capture the decision and bail out of the function afterwards.
    let shouldRun: Bool = self.serialQueue.sync {
        if isAlreadyRunning { return false }
        isAlreadyRunning = true
        return true
    }
    guard shouldRun else { return }

    asyncTask { result in            // the actual long-running async work
        self.serialQueue.sync {
            self.isAlreadyRunning = false
        }
    }
}
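If each invocation must still run (just strictly one after another), a semaphore can make the serial queue wait for the async work to finish. A minimal sketch, assuming a hypothetical completion-handler variant of the task:
import Foundation

let serialQueue = DispatchQueue(label: "Serial Queue")

func performCriticalSectionTask() {
    serialQueue.async {
        let semaphore = DispatchSemaphore(value: 0)
        performLongRunningAsyncTask {
            semaphore.signal()       // called when the async work completes
        }
        semaphore.wait()             // holds the serial queue until completion
    }
}

func performLongRunningAsyncTask(completion: @escaping () -> Void) {
    DispatchQueue.global().async {
        // ... long running work ...
        completion()
    }
}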

dispatch sync vs dispatch async in main queue ios related query

So what I understood from the basics is:
an async task will not block the current queue; a sync task will block the current queue until that task finishes.
But when I run code on the main queue, just to print numbers from 0 to 100, it gives me the same output.
Use case 1:
DispatchQueue.global(qos: .default).async {
    DispatchQueue.main.async {
        for i in 0...100 {
            print(i)
            sleep(arc4random() % 5)
        }
    }
}
Use case 2:
DispatchQueue.global(qos: .default).async {
    DispatchQueue.main.sync {
        for i in 0...100 {
            print(i)
            sleep(arc4random() % 5)
        }
    }
}
What can be the reason for getting the same output for async vs sync on the main queue?
Is it because the serial queue has only one thread associated with it?
When you (the caller) say DispatchQueue.main.sync or DispatchQueue.main.async, that does not affect how the main queue behaves; it affects how you behave, i.e. whether you wait before doing whatever comes after the call. But you have nothing after the call, so there is no distinction to draw. Your "test" does not test anything.
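A minimal sketch of a test that does draw the distinction, by putting a print after each dispatch call (assumed to run off the main thread, e.g. from a global queue in an app or playground):
DispatchQueue.global(qos: .default).async {
    DispatchQueue.main.async {
        print("work on main (async)")
    }
    print("after async dispatch")    // prints without waiting for the block above

    DispatchQueue.main.sync {
        print("work on main (sync)")
    }
    print("after sync dispatch")     // prints only after the sync block has finished
}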

When to use Semaphore instead of Dispatch Group?

Assume that I am aware of how to work with DispatchGroup; to understand the issue, I've tried:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        performUsingGroup()
    }

    func performUsingGroup() {
        let dq1 = DispatchQueue.global(qos: .userInitiated)
        let dq2 = DispatchQueue.global(qos: .userInitiated)
        let group = DispatchGroup()

        group.enter()
        dq1.async {
            for i in 1...3 {
                print("\(#function) DispatchQueue 1: \(i)")
            }
            group.leave()
        }
        group.wait()

        dq2.async {
            for i in 1...3 {
                print("\(#function) DispatchQueue 2: \(i)")
            }
        }

        group.notify(queue: DispatchQueue.main) {
            print("done by group")
        }
    }
}
and the result, as expected, is:
performUsingGroup() DispatchQueue 1: 1
performUsingGroup() DispatchQueue 1: 2
performUsingGroup() DispatchQueue 1: 3
performUsingGroup() DispatchQueue 2: 1
performUsingGroup() DispatchQueue 2: 2
performUsingGroup() DispatchQueue 2: 3
done by group
For using the Semaphore, I implemented:
func performUsingSemaphore() {
    let dq1 = DispatchQueue.global(qos: .userInitiated)
    let dq2 = DispatchQueue.global(qos: .userInitiated)
    let semaphore = DispatchSemaphore(value: 1)

    dq1.async {
        semaphore.wait()
        for i in 1...3 {
            print("\(#function) DispatchQueue 1: \(i)")
        }
        semaphore.signal()
    }

    dq2.async {
        semaphore.wait()
        for i in 1...3 {
            print("\(#function) DispatchQueue 2: \(i)")
        }
        semaphore.signal()
    }
}
and called it in the viewDidLoad method. The result is:
performUsingSemaphore() DispatchQueue 1: 1
performUsingSemaphore() DispatchQueue 1: 2
performUsingSemaphore() DispatchQueue 1: 3
performUsingSemaphore() DispatchQueue 2: 1
performUsingSemaphore() DispatchQueue 2: 2
performUsingSemaphore() DispatchQueue 2: 3
Conceptually, both DispatchGroup and Semaphore serve the same purpose (unless I misunderstand something).
Honestly, I am unfamiliar with when to use a Semaphore, especially when DispatchGroup probably already handles the issue.
What is the part that I am missing?
Conceptually, both of DispatchGroup and Semaphore serve the same purpose (unless I misunderstand something).
The above is not exactly true. You can use a semaphore to do the same thing as a dispatch group but it is much more general.
Dispatch groups are used when you have a load of things you want to do that can all happen at once, but you need to wait for them all to finish before doing something else.
Semaphores can be used for the above but they are general purpose synchronisation objects and can be used for many other purposes too. The concept of a semaphore is not limited to Apple and can be found in many operating systems.
In general, a semaphore has a value, which is a non-negative integer, and two operations:
wait: if the value is not zero, decrement it; otherwise block until something signals the semaphore.
signal: if there are threads waiting, unblock one of them; otherwise increment the value.
Needless to say both operations have to be thread safe. In olden days, when you only had one CPU, you'd simply disable interrupts whilst manipulating the value and the queue of waiting threads. Nowadays, it is more complicated because of multiple CPU cores and on chip caches etc.
A semaphore can be used in any case where you have a resource that can be accessed by at most N threads at the same time. You set the semaphore's initial value to N and then the first N threads that wait on it are not blocked but the next thread has to wait until one of the first N threads has signaled the semaphore. The simplest case is N = 1. In that case, the semaphore behaves like a mutex lock.
A semaphore can be used to emulate a dispatch group: you start the semaphore at 0, start all the tasks (tracking how many you have started), and wait on the semaphore that number of times. Each task must signal the semaphore when it completes.
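A minimal sketch of that emulation, with an assumed task count and placeholder work:
import Foundation

let semaphore = DispatchSemaphore(value: 0)
let taskCount = 5

for i in 1...taskCount {
    DispatchQueue.global().async {
        // ... hypothetical work for task i ...
        print("task \(i) finished")
        semaphore.signal()           // one signal per completed task
    }
}

// Wait once per started task, i.e. until every task has signalled.
for _ in 1...taskCount {
    semaphore.wait()
}
print("all tasks done")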
However, there are some gotchas. For example, you need a separate count to know how many times to wait. If you want to be able to add more tasks to the group after you have started waiting, the count can only be updated in a mutex-protected block, and that may lead to problems with deadlocking. Also, I think the Dispatch implementation of semaphores might be vulnerable to priority inversion. Priority inversion occurs when a high priority thread waits for a resource that a low priority thread has grabbed. The high priority thread is blocked until the low priority thread releases the resource; if there is a medium priority thread running, this may never happen.
You can pretty much do anything with a semaphore that other higher level synchronisation abstractions can do, but getting it right is often tricky. The higher level abstractions are (hopefully) carefully written, and you should use them in preference to a "roll your own" implementation with semaphores, if possible.
Semaphores and groups have, in a sense, opposite semantics. Both maintain a count. With a semaphore, a wait is allowed to proceed when the count is non-zero. With a group, a wait is allowed to proceed when the count is zero.
A semaphore is useful when you want to set a maximum on the number of threads operating on some shared resource at a time. One common use is when the maximum is 1 because the shared resource requires exclusive access.
A group is useful when you need to know when a bunch of tasks have all been completed.
Use a semaphore to limit the amount of concurrent work at a given time. Use a group to wait for any number of concurrent work items to finish execution.
In case you wanted to submit three jobs per queue, it should be:
import Foundation

func performUsingGroup() {
    let dq1 = DispatchQueue(label: "q1", attributes: .concurrent)
    let dq2 = DispatchQueue(label: "q2", attributes: .concurrent)
    let group = DispatchGroup()

    for i in 1...3 {
        group.enter()
        dq1.async {
            print("\(#function) DispatchQueue 1: \(i)")
            group.leave()
        }
    }
    for i in 1...3 {
        group.enter()
        dq2.async {
            print("\(#function) DispatchQueue 2: \(i)")
            group.leave()
        }
    }

    group.notify(queue: DispatchQueue.main) {
        print("done by group")
    }
}

performUsingGroup()
RunLoop.current.run(mode: RunLoop.Mode.default, before: Date(timeIntervalSinceNow: 1))
and
import Foundation

func performUsingSemaphore() {
    let dq1 = DispatchQueue(label: "q1", attributes: .concurrent)
    let dq2 = DispatchQueue(label: "q2", attributes: .concurrent)
    let semaphore = DispatchSemaphore(value: 1)

    for i in 1...3 {
        dq1.async {
            _ = semaphore.wait(timeout: DispatchTime.distantFuture)
            print("\(#function) DispatchQueue 1: \(i)")
            semaphore.signal()
        }
    }
    for i in 1...3 {
        dq2.async {
            _ = semaphore.wait(timeout: DispatchTime.distantFuture)
            print("\(#function) DispatchQueue 2: \(i)")
            semaphore.signal()
        }
    }
}

performUsingSemaphore()
RunLoop.current.run(mode: RunLoop.Mode.default, before: Date(timeIntervalSinceNow: 1))
The replies above by Jano and Ken are correct regarding (1) the use of a semaphore to limit the amount of work happening at once and (2) the use of a dispatch group so that the group can notify you when all the tasks in the group are done. For example, you may want to download a lot of images in parallel, but since you know they are heavy images, you want to limit the downloads to two at a time, so you use a semaphore. You also want to be notified when all the downloads (say there are 50 of them) are done, so you use a DispatchGroup. Thus, it is not a matter of choosing between the two: you may use one or both in the same implementation, depending on your goals. This type of example is provided in the Concurrency tutorial on Ray Wenderlich's site:
let group = DispatchGroup()
let queue = DispatchQueue.global(qos: .utility)
let semaphore = DispatchSemaphore(value: 2)   // at most two downloads in flight at a time

let base = "https://yourbaseurl.com/image-id-"
let ids = [0001, 0002, 0003, 0004, 0005, 0006, 0007, 0008, 0009, 0010, 0011, 0012]
var images: [UIImage] = []

for id in ids {
    guard let url = URL(string: "\(base)\(id)-jpeg.jpg") else { continue }
    semaphore.wait()      // block until one of the two download "slots" is free
    group.enter()         // track this download in the group
    let task = URLSession.shared.dataTask(with: url) { data, _, error in
        defer {
            group.leave()
            semaphore.signal()
        }
        if error == nil,
           let data = data,
           let image = UIImage(data: data) {
            images.append(image)
        }
    }
    task.resume()
}
One typical semaphore use case is a function that can be called simultaneously from different threads and uses a resource that should not be accessed from multiple threads at the same time:
func myFunction() {
    semaphore.wait()
    // access the shared resource
    semaphore.signal()
}
In this case you will be able to call myFunction from different threads, but they won't be able to reach the locked resource simultaneously: one of them will have to wait until the other finishes its work.
A semaphore keeps a count, so you can actually allow a given number of threads to enter your function at the same time.
A typical shared resource is output to a file.
A semaphore is not the only way to solve such problems. You can also add the code to a serial queue, for example, as sketched below.
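A minimal sketch of that serial-queue alternative, with assumed names:
let resourceQueue = DispatchQueue(label: "com.example.sharedResource")   // serial by default

func myFunction() {
    resourceQueue.sync {
        // access the shared resource; only one caller executes this at a time
    }
}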
Semaphores are low level primitives and most likely they are used a lot under the hood in GCD.
Another typical example is the producer-consumer problem, where the signal and wait calls are actually part of two different functions. One which produces data and one which consumes them.
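A minimal producer-consumer sketch along those lines, with assumed names and a trivial integer buffer:
import Foundation

let itemsAvailable = DispatchSemaphore(value: 0)
let bufferQueue = DispatchQueue(label: "com.example.buffer")   // protects the buffer
var buffer: [Int] = []

// Producer: creates items and signals once per item.
DispatchQueue.global().async {
    for i in 1...5 {
        bufferQueue.sync { buffer.append(i) }
        itemsAvailable.signal()
    }
}

// Consumer: waits until an item is available, then takes it.
DispatchQueue.global().async {
    for _ in 1...5 {
        itemsAvailable.wait()
        let item = bufferQueue.sync { buffer.removeFirst() }
        print("consumed \(item)")
    }
}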
Generally, a semaphore can be thought of mainly as a way to solve the critical-section problem: locking a certain resource to achieve synchronisation. (Also, what happens if sleep() is invoked; can we achieve the same thing by using a semaphore?)
Dispatch groups are used when we have multiple groups of operations to carry out and we need to track them, set dependencies between them, or be notified when a group of tasks finishes executing.

concurrency (dispatchqueue) test isn't going as expected ios/swift

So I have this code in a Swift 4 playground:
//: Playground - noun: a place where people can play
import UIKit
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

DispatchQueue.main.async {
    for _ in 1...5 { print("Main") }
}
DispatchQueue.global().async {
    for _ in 1...5 { print("Background") }
}
DispatchQueue.global(qos: .userInteractive).async {
    for _ in 1...5 { print("User Interactive") }
}
Why does this print out:
User Interactive
Background
Background
User Interactive
User Interactive
Background
User Interactive
Background
User Interactive
Background
Main
Main
Main
Main
Main
I understand why "User Interactive" and "Background" print out in a random order, but why does "Main" print out after them? Shouldn't it be prioritized because it's on the main queue? I guess I still don't fully understand how GCD and QoS work in Swift 4.
EDIT
I have now added two print statements and a few more lines:
print("BEGINNING OF CODE")
DispatchQueue.main.async {
for _ in 1...5 { print("Main") }
}
DispatchQueue.global().sync {
for _ in 1...5 { print("Sync Global") }
}
DispatchQueue.global(qos: .background).async {
for _ in 1...5 { print("Background") }
}
DispatchQueue.global(qos: .userInteractive).async {
for _ in 1...5 { print("User Interactive") }
}
print("END OF CODE")
And it prints this out:
BEGINNING OF CODE
Sync Global
Sync Global
Sync Global
Sync Global
Sync Global
END OF CODE
User Interactive
Background
User Interactive
User Interactive
User Interactive
User Interactive
Background
Background
Background
Background
Main
Main
Main
Main
Main
My question now is: shouldn't the background queues print out before "END OF CODE"? If the last two blocks were enqueued and immediately started executing, why aren't they printing before it? Is it because the main queue has a higher priority? But shouldn't it run asynchronously?
The code in the playground runs on the main queue. So the first block is enqueued on the main queue and won't run until the end of your code is reached.
The other two blocks are enqueued onto background queues and can start running immediately.
The code you have running on the background queues is so simple and quick that it happens to finish before the first block has a chance to run.
Add a sleep inside one of the two background blocks and you will see the "Main" output come out sooner. Your test gives misleading results because it is too simple and too fast.
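A minimal sketch of that experiment, assuming the same playground setup as above:
DispatchQueue.main.async {
    for _ in 1...5 { print("Main") }
}
DispatchQueue.global(qos: .background).async {
    for _ in 1...5 {
        print("Background")
        sleep(1)             // artificial delay; "Main" now interleaves much sooner
    }
}
DispatchQueue.global(qos: .userInteractive).async {
    for _ in 1...5 { print("User Interactive") }
}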
Global queues are concurrent, not strictly FIFO in execution: they take a thread as soon as one is available. That's why they appear to run faster.
Tasks on the main queue run serially on the main thread; it's usually dispatched to from a background queue when the UI needs updating.
