FreeRTOS task address location

I'm trying to use FreeRTOS, and I have a question about why my two tasks do not switch back and forth in a cycle.
To find out, I need to know where the address of the current task is stored. Is it in "pxCurrentTCB"? On the stack? I could not find it anywhere.
In my case, I have two tasks with the same priority, and I expect them to be switched in a cycle by the yield ISR ("yeildISR"). In practice, though, only one switch occurs: one task switches to the other, and it stays there forever.

Related

Are Tasks automatically cancelled upon return/completion in Swift?

I am a little uncertain about task cancellation in Swift. My question is:
If a task reaches its return line (in this example, Line 4), does this mean it will be automatically canceled? (and thus free up memory + any used thread(s) previously occupied by the Task?)
someBlock {
    Task<Bool, Never> {
        await doSomeWork()
        return true // Line 4
    }
}
As a follow-up, what if we then call .task on a SwiftUI View? Does anything change?
SomeView
    .task {
        await doSomeWork()
    }
Thank you for your time!
If a task reaches its return line (in this example, Line 4), does this mean it will be automatically canceled
No. It means it will automatically end in good order. Cancellation of a task is a very different and specialized thing. They are both ways of bringing a task to an end, but within that similarity they are effectively opposites of one another.
It does mean that the thread(s) used by the Task are now free for use, but this is not as dramatically important as it might seem, because await also frees the thread(s) used by the Task. That's the whole point of async/await; it is not thread-bound. A Task is not aligned with a single thread, the way a DispatchQueue is; to put it another way, we tend to use queue and thread as loose equivalents of one another, but with a Task, that is not at all the case.
As for what memory is released, it depends on what memory was being retained. The code given is too "pseudo" to draw any conclusions about that. But basically yes, your mental picture of this is right: a Task has a life of its own, and while it lives its code continues to retain whatever it has a strong reference to, and then when it ends (whether through finishing in good order or through cancellation) the Task's life ends (if you have not retained it elsewhere) and therefore so does whatever the Task's code is retaining.
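To make the distinction concrete, here is a minimal sketch (doSomeWork() and demo() are just placeholders, as in your example): the task ends when it returns, and cancel() merely sets a flag that the task's own code has to check.
func doSomeWork() async { /* placeholder for the real work */ }

func demo() async {
    let task = Task<Bool, Never> {
        await doSomeWork()
        if Task.isCancelled { return false }  // cooperative cancellation check
        return true                           // reaching this line ends the task in good order
    }

    task.cancel()                 // only marks the task as cancelled; the body must check the flag
    let finished = await task.value
    print(finished)               // false if the check above saw the cancellation, true otherwise
}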
Does anything change
The .task modifier, according to Apple, creates "an asynchronous task with a lifetime that matches that of the ... view". So I would say, yes: the Task no longer just has an independent life of its own, but rather it is being retained behind the scenes (in order that the SwiftUI framework can cancel it if the View goes out of existence), and so whatever its code has strong references to will remain.
On the other hand this might not be terribly important; it depends on whether you're using reference types. References to value types, such as a View, don't have that sort of memory management implications.
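A small sketch of the .task case (SomeView and doSomeWork() stand in for your code): the framework holds the task behind the scenes and cancels it for you when the view goes away.
import SwiftUI

func doSomeWork() async { /* placeholder for the real work */ }

struct SomeView: View {
    var body: some View {
        Text("Hello")
            .task {
                // SwiftUI starts this task when the view appears and keeps a
                // handle to it so it can cancel the task if the view's
                // lifetime ends; anything strongly captured here is retained
                // at most that long.
                await doSomeWork()
            }
    }
}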

How does a DispatchQueue work? (specifically multithreading)

I don't understand the workings of a DispatchQueue and wanted to learn more about how they implement the foundational queueing theory requirements. I tried to inspect a queue using:
dump(DispatchQueue.global())
And this gave this output:
- <OS_dispatch_queue_global: com.apple.root.default-qos[0x10c041f00] = { xref = -2147483648, ref = -2147483648, sref = 1, target = [0x0], width = 0xfff, state = 0x0060000000000000, in-barrier}> #0
  - super: OS_dispatch_queue
    - super: OS_dispatch_object
      - super: OS_object
        - super: NSObject
I got that the label is com.apple.root.default-qos, and this is specified in the Apple docs and the class is the packaged OS_dispatch_queue_global. I understand qos is queryable on the queue itself and that makes sense as well. Width I think just means the allocated memory size.
What I don't understand are the relevances of xref, ref and sref, I think they are internal ids for the queues but I am not sure. I think they are related to fundamental queueing concepts (multithreading came to mind) but would be great to hone into this in more detail.
Is the autoreleaseFrequency hidden from this debug description? Also, what does in-barrier = 0 mean? I tried creating a custom queue and this was replaced by in-flight = 0, so I'm confused about that as well.
Any ideas on how these undocumented variables relate to queueing theory? I think these are undocumented internals of the API, so any educated and justified explanations would be fine!
Thanks.
Why ask this?
This is a fairly broad question about the internals of grand-central-dispatch. I had difficulty understanding the dumped output because the original WWDC '10 videos and slides for GCD are no longer public. I also didn't know about the open-source libdispatch repo (thanks Rob). That needn't be a problem, but there are no related QAs on SO explaining the topic in detail.
Why GCD?
According to the WWDC '10 GCD transcripts (Thanks Rob), the main idea behind the API was to simplify the boilerplate associated with using the #selector API for multithreading.
Benefits of GCD
Apple released a new block-based API instead of going with function pointers, which also enabled type-safe code that wouldn't crash if the block had the wrong type signature. Using typedefs also made code cleaner when used in function parameters, local variables and @property declarations. Queues allow you to capture code and some state as a chunk of data that gets managed, enqueued and executed automatically behind the scenes.
The same session mentions how GCD manages low-level threads under the hood. It enqueues blocks to execute on threads when they need to be executed and then releases those threads (pthreads, to be precise) when they are no longer needed. GCD manages threads automatically and doesn't expose that layer: when a DispatchWorkItem is dequeued, GCD runs it on a thread from its internal pool, spinning up a new worker only if one is needed.
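As a concrete illustration of the block-based model (the queue label and the two functions below are made up for the example):
import Dispatch

func expensiveComputation() -> Int { (0..<1_000_000).reduce(0, +) }  // stand-in for real work
func updateUI(with value: Int) { print("result: \(value)") }         // stand-in for a UI update

let worker = DispatchQueue(label: "com.example.worker", qos: .utility)

worker.async {
    let result = expensiveComputation()   // runs on a GCD-managed background thread
    DispatchQueue.main.async {
        updateUI(with: result)            // hop back to the main queue for UI work
    }
}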
Drawbacks of performSelector
performSelector:onThread:withObject:waitUntilDone: has numerous drawbacks that make it a poor fit for the modern challenges of concurrency, waiting and synchronisation, and it leads to pyramids of doom when switching threads within a function. Furthermore, the NSObject.performSelector family of threading methods is inflexible and limited:
No options, unlike GCD, for concurrent execution, initially inactive work, or synchronisation on a particular thread.
Only selectors can be dispatched onto new threads (awful).
Lots of threads for a given function leads to messy code (pyramids of doom).
No support for queueing beyond the (at the time GCD was announced in iOS 4) limited NSOperation API. NSOperation is a high-level, verbose API that became more powerful after incorporating elements of dispatch (the low-level API that became GCD) in iOS 4.
Lots of bugs related to unhandled invalid selector errors (no type safety).
DispatchQueue internals
I believe xref, ref and sref are internal counters that manage reference counts for automatic reference counting. GCD calls dispatch_retain and dispatch_release in most cases when needed, so we don't need to worry about releasing a queue after all its blocks have been executed. However, there were cases when a developer could call retain and release manually to ensure the queue stayed alive even when not directly in use. These counters allow libdispatch to crash deliberately when a queue is over-released (released more times than it was retained), for better error handling.
When calling a block with DispatchQueue.global().async or similar, I believe this increments the reference count of that queue (xref and ref).
The variables in the question are not documented explicitly, but from what I can tell:
xref counts the number of external references to a general DispatchQueue.
ref counts the total number of references to a general DispatchQueue.
sref counts the number of references to API serial/concurrent/runloop queues, sources and mach channels (these need to be tracked differently as they are represented using different types).
in-barrier looks like an internal state flag (related to DispatchWorkItemFlags) that tracks whether new work items submitted to a concurrent queue should be scheduled or held back. Only once the barrier work item finishes does the queue return to scheduling work items that were submitted after the barrier; see the small sketch after this list. in-flight means that no barrier is currently in force.
state is also not documented explicitly but I presume points to memory where the block can access variables from the scope where the block was scheduled.
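Here is a small sketch of that barrier behaviour (my own example, not taken from the dump): non-barrier reads on a concurrent queue may overlap, while a work item submitted with the .barrier flag runs alone.
import Dispatch

let queue = DispatchQueue(label: "com.example.cache", attributes: .concurrent)
var cache: [String: Int] = [:]

// Non-barrier reads may run concurrently with each other.
queue.async { _ = cache["answer"] }
queue.async { _ = cache["answer"] }

// The barrier write waits for in-flight readers, runs exclusively, and only
// then does the queue resume scheduling later items -- roughly the window the
// "in-barrier" flag appears to describe.
queue.async(flags: .barrier) { cache["answer"] = 42 }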

Does the POSIX Thread API offer a way to test if the calling thread already holds a lock?

Short question: Does the POSIX thread API offer me a way to determine if the calling thread already holds a particular lock?
Long question:
Suppose I want to protect a data structure with a lock. Acquiring and releasing the lock need to happen in different functions. Calls between the functions involved are rather complex (I am adding multithreading to a 16-year-old code base). For example:
do_dangerous_stuff() does things for which the current thread needs to hold the write lock. Therefore, I acquire the lock at the beginning of the function and release it at the end, as the caller does not necessarily hold the lock.
Another function, do_extra_dangerous_stuff(), calls do_dangerous_stuff(). However, before the call it already does things which also require a write lock, and the data is not consistent until the call to do_dangerous_stuff() returns (so releasing and immediately re-acquiring the lock might break things).
In reality it is more complicated than that. There may be a bunch of functions calling do_dangerous_stuff(), and requiring each of them to obtain the lock before calling do_dangerous_stuff() may be impractical. Sometimes the lock is acquired in one function and released in another.
With a read lock, I could just acquire it multiple times from the same thread, as long as I make sure I release the same number of lock instances that I have acquired. For a write lock, this is not an option (attempting to do so will result in a deadlock). An easy solution would be: test if the current thread already holds the lock and acquire it if not, and conversely, test if the current thread still holds the lock and release it if it does. However, that requires me to test if the current thread already holds the lock—is there a way to do that?
Looking at the man page for pthread_rwlock_wrlock(), I see it says:
If successful, the pthread_rwlock_wrlock() function shall return zero; otherwise, an error number shall be returned to indicate the error.
[…]
The pthread_rwlock_wrlock() function may fail if:
EDEADLK The current thread already owns the read-write lock for writing or reading.
As I read it, EDEADLK is never used to indicate chains involving multiple threads waiting for each other’s resources (and from my observations, such deadlocks indeed seem to result in a freeze rather than EDEADLK). It seems to indicate exclusively that the thread is requesting a resource already held by the current thread, which is the condition I want to test for.
If I have misunderstood the documentation, please let me know. Otherwise the solution would be to simply call pthread_rwlock_wrlock(). One of the following should happen:
It blocks because another thread holds the resource. When we get to run again, we will hold the lock. Business as usual.
It returns zero (success) because we have just acquired the lock (which we didn’t hold before). Business as usual.
It returns EDEADLK because we are already holding the lock. No need to reacquire, but we might want to consider this when we release the lock—that depends on the code in question, see below
It returns some other error, indicating something has truly gone wrong. Same as with every other lock operation.
It may make sense to keep track of the number of times we have acquired the lock and got EDEADLK. Borrowing from Gil Hamilton’s answer, a lock depth would work for us:
Reset the lock depth to 0 when we have acquired a lock.
Increase the lock depth by 1 each time we get EDEADLK.
Match each attempt to acquire the lock with the following: If lock depth is 0, release the lock. Else decrease the lock depth.
This should be thread-safe without further synchronization, as the lock depth is effectively protected by the lock it refers to (we touch it only while holding the lock).
Caveat: if the current thread already holds the lock for reading (and no others do), this will also report it as being “already locked”. Further tests will be needed to determine if the currently held lock is indeed a write lock. If multiple threads, among them the current one, hold the read lock, I do not know if attempting to obtain a write lock will return EDEADLK or freeze the thread. This part needs some more work…
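To make the bookkeeping concrete, here is a rough sketch of that scheme, written in Swift against the same POSIX calls (via Darwin on Apple platforms, Glibc on Linux). It assumes the implementation actually reports EDEADLK on re-acquisition by the owner, which the standard only says it may do, and it ignores the read-lock caveat above.
import Darwin  // use Glibc on Linux

final class ReentrantWriteLock {
    private let rwlock = UnsafeMutablePointer<pthread_rwlock_t>.allocate(capacity: 1)
    private var depth = 0            // protected by the lock itself

    init() { pthread_rwlock_init(rwlock, nil) }
    deinit { pthread_rwlock_destroy(rwlock); rwlock.deallocate() }

    func acquireWrite() {
        switch pthread_rwlock_wrlock(rwlock) {
        case 0:        depth = 0     // freshly acquired: reset the depth
        case EDEADLK:  depth += 1    // we already held it: one level deeper
        case let err:  fatalError("pthread_rwlock_wrlock failed: \(err)")
        }
    }

    func releaseWrite() {
        if depth == 0 {
            pthread_rwlock_unlock(rwlock)   // last level: really release
        } else {
            depth -= 1
        }
    }
}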
AFAIK, there's no easy way to accomplish what you're trying to do.
In Linux, you can use the "recursive" mutex attribute to achieve your purpose (as shown here, for example: https://stackoverflow.com/a/7963765/1076479), but this is not POSIX-portable.
The only really portable solution is to roll your own equivalent. You can do that by creating a data structure containing a lock along with your own thread index (or equivalent) and an ownership/recursion count.
CAVEAT: Pseudo-code off the top of my head
Recursive lock:
// mystruct wraps a regular (non-recursive) pthread mutex together with
// ownership bookkeeping: an `owner` field holding the owner's thread index
// (-1 if un-owned) and a `lock_depth` recursion count, both protected by the mutex.

// First try to acquire the lock without blocking...
if ((err = pthread_mutex_trylock(&mystruct.mutex)) == 0) {
    // Acquire succeeded: the lock was free and I now own it.
    assert(mystruct.lock_depth == 0);
    mystruct.owner = my_thread_index;
    mystruct.lock_depth = 1;
} else if (mystruct.owner == my_thread_index) {
    assert(err == EBUSY);
    // I already owned the lock. Now one level deeper.
    ++mystruct.lock_depth;
} else {
    // I don't own the lock: block waiting for it.
    pthread_mutex_lock(&mystruct.mutex);
    assert(mystruct.lock_depth == 0);
    mystruct.owner = my_thread_index;
    mystruct.lock_depth = 1;
}
On the way out it's simpler: because you know you own the lock, you only need to determine whether it's time to release it (i.e. whether this is the last unlock). Recursive unlock:
if (--mystruct.lock_depth == 0) {
    assert(mystruct.owner == my_thread_index);
    // Last level of recursion unwound
    mystruct.owner = -1; // Mark it un-owned
    pthread_mutex_unlock(&mystruct.mutex);
}
I would want to add some additional checks and assertions and significant testing before trusting this too.

Schedulers for network requests in RxSwift

I’ve been learning RxSwift and applying it to a project since the start. I would like your help to feel more assured about a concept.
I understand that changes to the UI should be performed on the MainScheduler, and that you should explicitly use .observeOn(MainSchedule… in case you don’t use Drivers.
My doubt is: normally, should I explicitly switch to a background thread when performing network requests?
I haven’t found much literature about exactly this, but I’ve read some projects' code and most of them don’t, though a few do. Those eventually use Drivers or .observeOn(MainSchedule… to make the changes on the UI.
In https://www.thedroidsonroids.com/blog/rxswift-examples-4-multithreading, for instance, he says
So as you may guessed, in fact everything we did was done on a MainScheduler. Why? Because our chain starts from searchBar.rx_text and this one is guaranteed to be on MainScheduler. And because everything else is by default on current scheduler – well our UI thread may get overwhelmed. How to prevent that? Switch to the background thread before the request and before the mapping, so we will just update UI on the main thread
So what he does to solve the problem he mentions, is to explicitly declare
.observeOn(ConcurrentDispatchQueueScheduler(globalConcurrentQueueQOS: .Background))
Assuming the API Request would be performed on background anyway, what this does is to perform all other computations in the background as well, right?
Is this a good practice? Should I, in every API request, explicitly change to background and then change back to Main only when necessary?
If so, what would be the best way? To observe on background and then on Main? Or to subscribe on background and observe on Main, as is done in this gist:
https://gist.github.com/darrensapalo/711d33b3e7f59b712ea3b6d5406952a4
?
Or maybe another way?
P.S.: sorry for the old code, but among the links I found, these better fit my question.
Normally, i.e. if you do not specify any schedulers, Rx is synchronous.
The rest really depends on your use case. For instance, all UI manipulations must happen on the main thread scheduler.
Background work, including network requests, should run on background schedulers. Which ones - depends on priority and preference for concurrent/serial execution.
.subscribeOn() defines where the work is being done and .observeOn() defines where the results of it are handled.
So the answer to your specific questions: in case of a network call which results will be reflected in UI, you must subscribe on background scheduler and observe on main.
You can declare schedulers like that (just an example):
static let main = MainScheduler.instance
static let concurrentMain = ConcurrentMainScheduler.instance
static let serialBackground = SerialDispatchQueueScheduler.init(qos: .background)
static let concurrentBackground = ConcurrentDispatchQueueScheduler.init(qos: .background)
static let serialUtility = SerialDispatchQueueScheduler.init(qos: .utility)
static let concurrentUtility = ConcurrentDispatchQueueScheduler.init(qos: .utility)
static let serialUser = SerialDispatchQueueScheduler.init(qos: .userInitiated)
static let concurrentUser = ConcurrentDispatchQueueScheduler.init(qos: .userInitiated)
static let serialInteractive = SerialDispatchQueueScheduler.init(qos: .userInteractive)
static let concurrentInteractive = ConcurrentDispatchQueueScheduler.init(qos: .userInteractive)
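Putting those together, a typical chain might look like this (fetchUser(id:), nameLabel and disposeBag are hypothetical placeholders for your own network call, view and bag):
fetchUser(id: 42)                                                     // hypothetical Observable<User> wrapping a network call
    .subscribeOn(ConcurrentDispatchQueueScheduler(qos: .background))  // perform the work on a background scheduler
    .observeOn(MainScheduler.instance)                                // deliver results on the main scheduler
    .subscribe(onNext: { user in
        nameLabel.text = user.name                                    // UI update happens on the main thread
    })
    .disposed(by: disposeBag)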
P.S. Some 3rd-party libraries may provide observables that are pre-configured to execute on a background scheduler. In that case explicitly calling .subscribeOn() is not necessary. But you need to know for sure whether this is the case.
And a recap:
normally, should I explicitly switch to a background thread when performing network requests? - yes, unless a library does it for you
Should I, in every API request, explicitly change to background and then change back to Main only when necessary? - yes
If so, what would be best way? [...] subscribe on background and observe on Main
You are right. Of course the actual network request, and waiting for and assembling the response, is all done on a background thread. What happens after that depends on the network layer you are using.
For example, if you are using URLSession, the response already comes back on a background thread so calling observeOn to do anything other than come back to the main thread is unnecessary and a reduction of performance. In other words, in answer to your question you don't need to change to a background thread on every request because it's done for you.
I see in the article that the author was talking in the context of Alamofire which explicitly responds on the main thread. So if you are using Alamofire, or some other networking layer that responds on the main thread, you should consider switching to a background thread if the processing of the response is expensive. If all you are doing is creating an object from the resulting dictionary and pushing it to a view the switch in context is probably overkill and could actually degrade performance considering you have already had to suffer through a context switch once.
I feel it's also important to note that calling subscribeOn is absolutely pointless for either network layer. That will only change the thread that the request is made on, not the background thread that waits for the response, nor the thread that the response returns on. The networking layer will decide what thread it uses to push the data out and subscribeOn can't change it. The best you can do is use observeOn to reroute the data flow to a different thread after the response. The subscribeOn operator is for synchronous operations, not network requests.
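For example, with RxCocoa's URLSession wrapper the response already arrives on a background thread, so the only hop you need is back to the main scheduler (User, url, nameLabel and disposeBag are placeholders):
URLSession.shared.rx.data(request: URLRequest(url: url))     // response arrives on a background thread
    .map { try JSONDecoder().decode(User.self, from: $0) }   // decoding stays on that background thread
    .observeOn(MainScheduler.instance)                        // hop to the main thread only for the UI
    .subscribe(onNext: { user in
        nameLabel.text = user.name
    })
    .disposed(by: disposeBag)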

How to make a function atomic in Swift?

I'm currently writing an iOS app in Swift, and I encountered the following problem: I have an object A. The problem is that while there is only one thread for the app (I didn't create separate threads), object A gets modified when
1) a certain NSTimer() triggers
2) a certain observeValueForKeyPath() triggers
3) a certain callback from Parse triggers.
From what I know, all three of the above cases work kind of like a software interrupt: as the code runs, if an NSTimer()/observeValueForKeyPath()/Parse callback fires, the current code gets interrupted and jumps to the corresponding code. This is not a race condition (since there is just one thread), and I don't think something like this https://gist.github.com/Kaelten/7914a8128eca45f081b3 can solve this problem.
There is a specific function B called in all three cases to modify object A, so I'm thinking if I can make this function B atomic, then this problem is solved. Is there a way to do this?
You are making some incorrect assumptions. None of the things you mention interrupt the processor. 1 and 2 both operate synchronously: the timer won't fire, and observeValueForKeyPath won't be called, until your code finishes and your app services the event loop.
Atomic properties or other synchronization techniques are only meaningful for concurrent (multi-threaded) code. If memory serves, Atomic is only for properties, not other methods/functions.
I believe Parse uses completion blocks that are run on a background thread, in which case your #3 *is* using separate threads, even though you didn't realize you were doing so. This is the only case in which you need to be worried about synchronization. In that case the simplest thing is to simply bracket your completion block code inside a call to dispatch_async(dispatch_get_main_queue()), which makes all the code in the dispatch_async closure run on the main thread, avoiding concurrency issues entirely.
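In current Swift syntax that bracketing looks like the sketch below (someParseCall and functionB are placeholders for your Parse call and your function B; DispatchQueue.main.async is the modern spelling of dispatch_async(dispatch_get_main_queue())):
someParseCall { result in
    // The Parse completion block may arrive on a background thread...
    DispatchQueue.main.async {
        // ...but everything in here runs on the main queue, alongside the
        // NSTimer and KVO callbacks, so function B can modify object A
        // without racing them.
        functionB(result)
    }
}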

Resources