ReadDirectoryChangesW: how to detect that the operation was successfully queued? - iocp

After calling ReadDirectoryChangesW (in overlapped mode) it returns 1 (TRUE) in two opposite situations: 1) no files changed; 2) one or more files changed. But in the first situation no data is passed to the IOCP, and in the second a pointer to my OVERLAPPED struct is passed.
How can I determine whether ReadDirectoryChangesW passed my OVERLAPPED struct to the IOCP? In other words, how can I determine whether ReadDirectoryChangesW found changes or not?
In one part of my code I call GetQueuedCompletionStatus and get full information about the changed files, but before that I want to know only one fact: were there changes or not?

If you're using ReadDirectoryChangesW() with an IOCP then you're using it in asynchronous mode, and a TRUE return value only tells you that the watch was successfully queued; it says nothing about whether any changes have occurred yet. After calling it you should wait on the IOCP, which will report each change as it occurs.
Once you get a completion notification from the IOCP you can process it and then call ReadDirectoryChangesW() again to request further notifications.
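The loop described above can be sketched in C roughly as follows. This is a schematic, Windows-only sketch with error handling omitted; the directory path is a placeholder, and the buffer and OVERLAPPED struct must stay alive while the watch is pending:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* "C:\\watched" is a placeholder path. */
    HANDLE hDir = CreateFileW(L"C:\\watched", FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              NULL, OPEN_EXISTING,
                              FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, NULL);
    HANDLE hIocp = CreateIoCompletionPort(hDir, NULL, 0, 0);

    BYTE buf[64 * 1024];
    OVERLAPPED ov = {0};

    /* A TRUE return here only means the watch was queued -- it says
       nothing about whether any change has happened yet. */
    ReadDirectoryChangesW(hDir, buf, sizeof buf, TRUE,
                          FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                          NULL, &ov, NULL);

    for (;;) {
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED *pov;

        /* This is the first moment you learn that changes occurred. */
        GetQueuedCompletionStatus(hIocp, &bytes, &key, &pov, INFINITE);

        /* Walk the variable-length FILE_NOTIFY_INFORMATION records. */
        FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *)buf;
        for (;;) {
            wprintf(L"%.*s\n", (int)(fni->FileNameLength / sizeof(WCHAR)),
                    fni->FileName);
            if (fni->NextEntryOffset == 0)
                break;
            fni = (FILE_NOTIFY_INFORMATION *)((BYTE *)fni + fni->NextEntryOffset);
        }

        /* Re-arm the watch so further changes are not missed. */
        ReadDirectoryChangesW(hDir, buf, sizeof buf, TRUE,
                              FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                              NULL, &ov, NULL);
    }
}
```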


What happens if the `NEPacketTunnelFlow` method `readPacketsWithCompletionHandler` is called multiple times?

When calling the method
- (void)readPacketsWithCompletionHandler:(void (^)(
NSArray<NSData *> *packets, NSArray<NSNumber *> *protocols))completionHandler;
the completionHandler is either called directly, in case packets are available at call time, or it is called at a later time when packets become available.
Yet what is nowhere documented is: What happens if I call this method again before the prior set completionHandler has ever been called?
Will the new handler replace the prior set one and the prior set one won't get called at all anymore?
Are both handler scheduled and called as data arrives? And if so, will they be called in the order I passed them, in reverse order, or in random order?
Has anyone any insights on how that method is implemented?
Of course, I can make a demo project, create a test setup, and see what results I get through testing but that is very time consuming and not necessarily reliable. The problem with unspecified behavior is that it may change at will without letting anyone know. This method may behave differently on macOS and iOS, it may behave differently with every new OS release, or depending on the day of the week.
Or is the fact that nothing is documented intentional? Do I have to interpret that as: you may call this method once, and after your callback was executed, you may call it again with the same or a new callback; everything else is undefined behavior, and you cannot and should not rely on any specific behavior if you use that API in a different manner.
As nobody has replied so far, I tried my best to figure it out myself. As testing is not good enough for me, here is what I did:
First I extracted the NetworkExtension framework binary from the dyld cache of macOS Big Sur using this utility.
Then I ran otool -Vt over the resulting binary file to get a disassembler dump of the binary.
My assembly skills are a bit rusty but from what I see the completionHandler is stored in a property named packetHandler, replacing any previous stored value there. Also a callback is created in that method and stored on an object obtained by calling the method interface.
When looking at the code of this created callback, it obtains the value of the packetHandler property and sets it to NULL after the value was obtained. Then it creates NSData and NSNumber objects, adds those to NSArray objects and calls the obtained handler with those arrays.
So it seems that calling the method again just replaces the previous completionHandler, which is then never called. So you must not rely on a scheduled handler eventually being called at some point in the future, even if the tunnel is not torn down, if there is a possibility that your code might replace it. Also, calling the method multiple times to schedule multiple callbacks has no effect, as only the last one will be kept and eventually be called.

Does the POSIX Thread API offer a way to test if the calling thread already holds a lock?

Short question: Does the POSIX thread API offer me a way to determine if the calling thread already holds a particular lock?
Long question:
Suppose I want to protect a data structure with a lock. Acquiring and releasing the lock need to happen in different functions. Calls between the functions involved are rather complex (I am adding multithreading to a 16-year-old code base). For example:
do_dangerous_stuff() does things for which the current thread needs to hold the write lock. Therefore, I acquire the lock at the beginning of the function and release it at the end, as the caller does not necessarily hold the lock.
Another function, do_extra_dangerous_stuff(), calls do_dangerous_stuff(). However, before the call it already does things which also require a write lock, and the data is not consistent until the call to do_dangerous_stuff() returns (so releasing and immediately re-acquiring the lock might break things).
In reality it is more complicated than that. There may be a bunch of functions calling do_dangerous_stuff(), and requiring each of them to obtain the lock before calling do_dangerous_stuff() may be impractical. Sometimes the lock is acquired in one function and released in another.
With a read lock, I could just acquire it multiple times from the same thread, as long as I make sure I release the same number of lock instances that I have acquired. For a write lock, this is not an option (attempting to do so will result in a deadlock). An easy solution would be: test if the current thread already holds the lock and acquire it if not, and conversely, test if the current thread still holds the lock and release it if it does. However, that requires me to test if the current thread already holds the lock—is there a way to do that?
Looking at the man page for pthread_rwlock_wrlock(), I see it says:
If successful, the pthread_rwlock_wrlock() function shall return zero; otherwise, an error number shall be returned to indicate the error.
[…]
The pthread_rwlock_wrlock() function may fail if:
EDEADLK The current thread already owns the read-write lock for writing or reading.
As I read it, EDEADLK is never used to indicate chains involving multiple threads waiting for each other’s resources (and from my observations, such deadlocks indeed seem to result in a freeze rather than EDEADLK). It seems to indicate exclusively that the thread is requesting a resource already held by the current thread, which is the condition I want to test for.
If I have misunderstood the documentation, please let me know. Otherwise the solution would be to simply call pthread_rwlock_wrlock(). One of the following should happen:
It blocks because another thread holds the resource. When we get to run again, we will hold the lock. Business as usual.
It returns zero (success) because we have just acquired the lock (which we didn’t hold before). Business as usual.
It returns EDEADLK because we are already holding the lock. No need to reacquire, but we might want to account for this when we release the lock; that depends on the code in question, see below.
It returns some other error, indicating something has truly gone wrong. Same as with every other lock operation.
It may make sense to keep track of the number of times we have acquired the lock and got EDEADLK. Borrowing from Gil Hamilton's answer, a lock depth would work for us:
Reset the lock depth to 0 when we have acquired a lock.
Increase the lock depth by 1 each time we get EDEADLK.
Match each acquisition with a release: if the lock depth is 0, actually release the lock; otherwise, decrease the lock depth.
This should be thread-safe without further synchronization, as the lock depth is effectively protected by the lock it refers to (we touch it only while holding the lock).
Caveat: if the current thread already holds the lock for reading (and no others do), this will also report it as being “already locked”. Further tests will be needed to determine if the currently held lock is indeed a write lock. If multiple threads, among them the current one, hold the read lock, I do not know if attempting to obtain a write lock will return EDEADLK or freeze the thread. This part needs some more work…
AFAIK, there's no easy way to accomplish what you're trying to do.
In linux, you can use the "recursive" mutex attribute to achieve your purpose (as shown here for example: https://stackoverflow.com/a/7963765/1076479), but this is not "posix"-portable.
The only really portable solution is to roll your own equivalent. You can do that by creating a data structure containing a lock along with your own thread index (or equivalent) and an ownership/recursion count.
CAVEAT: Pseudo-code off the top of my head
Recursive lock:
// First try to acquire the lock without blocking...
if ((err = pthread_mutex_trylock(&mystruct.mutex)) == 0) {
// Acquire succeeded: the lock was free and I just took it.
assert(mystruct.lock_depth == 0);
mystruct.owner = my_thread_index;
++mystruct.lock_depth;
} else if (mystruct.owner == my_thread_index) {
assert(err == EBUSY);
// I already owned the lock. Now one level deeper
++mystruct.lock_depth;
} else {
// I don't own the lock: block waiting for it.
pthread_mutex_lock(&mystruct.mutex);
// Now I own it: record ownership so the unlock path balances.
assert(mystruct.lock_depth == 0);
mystruct.owner = my_thread_index;
++mystruct.lock_depth;
}
On the way out it's simpler: since you know you own the lock, you only need to determine whether it's time to actually release it (i.e. whether this is the last unlock). Recursive unlock:
if (--mystruct.lock_depth == 0) {
assert(mystruct.owner == my_thread_index);
// Last level of recursion unwound
mystruct.owner = -1; // Mark it un-owned
pthread_mutex_unlock(&mystruct.mutex);
}
I would want to add some additional checks and assertions and significant testing before trusting this too.
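For reference, a compilable version of the pseudo-code above might look like this. The struct layout, the pthread_self()-based ownership test, and the function names are my own adaptation, and it inherits the pseudo-code's caveat that owner/owned are read without holding the mutex, which is formally a data race:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

/* A hypothetical concrete layout for the pseudo-code's "mystruct". */
typedef struct {
    pthread_mutex_t mutex;
    pthread_t       owner;      /* valid only while owned != 0 */
    int             owned;      /* 0 = un-owned */
    int             lock_depth; /* recursion count */
} rec_lock_t;

void rec_lock_init(rec_lock_t *l) {
    pthread_mutex_init(&l->mutex, NULL);
    l->owned = 0;
    l->lock_depth = 0;
}

void rec_lock(rec_lock_t *l) {
    int err = pthread_mutex_trylock(&l->mutex);
    if (err == 0) {
        /* Lock was free: take ownership. */
        l->owner = pthread_self();
        l->owned = 1;
        l->lock_depth = 1;
    } else if (l->owned && pthread_equal(l->owner, pthread_self())) {
        assert(err == EBUSY);
        ++l->lock_depth;        /* re-entry by the owning thread */
    } else {
        pthread_mutex_lock(&l->mutex);  /* block until free */
        l->owner = pthread_self();
        l->owned = 1;
        l->lock_depth = 1;
    }
}

void rec_unlock(rec_lock_t *l) {
    assert(l->owned && pthread_equal(l->owner, pthread_self()));
    if (--l->lock_depth == 0) {
        l->owned = 0;           /* mark it un-owned */
        pthread_mutex_unlock(&l->mutex);
    }
}
```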

Does URLSession.dataTask(with:completionHandler:) always call completionHandler only once?

After I create a new session data task with URLSession.dataTask(with:completionHandler:) and start the task by calling its resume() method, given that the app doesn't crash while the request is running, is it safe for me to assume that completionHandler (passed to URLSession.dataTask(with:completionHandler:) above) will always eventually get called only once, even if something weird happens with the network request (like if the connection drops) or with the app (like if it goes into the background)?
Note: I'm not explicitly calling cancel() or suspend() on the task. Just resume().
I want to know the answer to this question because (from my app's main thread) I'm creating and starting (one after the other) multiple asynchronous network requests and want to know when the last one has finished.
Specifically, I'm working on an app that has a custom class called Account. On launch, the app (assuming it finds an account access token stored in UserDefaults) creates only one instance of that class and stores it to a global variable (across the entire app) called account, which represents the app's currently-logged-in account.
I've added a stored var (instance) property to Account called pendingGetFooRequestCount (for example) and set it to 0 by default. Every time I make a call to Account.getFoo() (an instance method), I add 1 to pendingGetFooRequestCount (right before calling resume()). Inside completionHandler (passed to URLSession.dataTask(with:completionHandler:) and (to be safe) inside a closure passed to DispatchQueue.main.async(), I first subtract 1 from pendingGetFooRequestCount and then check if pendingGetFooRequestCount is equal to 0. If so, I know the last get-foo request has finished, and I can call another method to continue the flow.
How's my logic? Will this work as expected? Should I be doing this another way? Also, do I even need to decrement pendingGetFooRequestCount on the main thread?
URLRequest has a timeoutInterval property whose default value is 60 seconds. If there is no response by then, the completion handler is called with a non-nil error.

How to make a function atomic in Swift?

I'm currently writing an iOS app in Swift, and I encountered the following problem: I have an object A. The problem is that while there is only one thread for the app (I didn't create separate threads), object A gets modified when
1) a certain NSTimer() triggers
2) a certain observeValueForKeyPath() triggers
3) a certain callback from Parse triggers.
From what I know, all the above three cases work kind of like a software interrupt. So as the code runs, if NSTimer()/observeValueForKeyPath()/the callback from Parse happens, the current code gets interrupted and jumps to the corresponding code. This is not a race condition (since there is just one thread), and I don't think something like this https://gist.github.com/Kaelten/7914a8128eca45f081b3 can solve this problem.
There is a specific function B called in all three cases to modify object A, so I'm thinking if I can make this function B atomic, then this problem is solved. Is there a way to do this?
You are making some incorrect assumptions. None of the things you mention interrupt the processor. 1 and 2 both operate synchronously. The timer won't fire or observeValueForKeyPath won't be called until your code finishes and your app services the event loop.
Atomic properties or other synchronization techniques are only meaningful for concurrent (multi-threaded) code. If memory serves, Atomic is only for properties, not other methods/functions.
I believe Parse uses completion blocks that are run on a background thread, in which case your #3 *is* using separate threads, even though you didn't realize you were doing so. This is the only case in which you need to be worried about synchronization. In that case the simplest thing is to simply bracket your completion block code inside a call to dispatch_async(dispatch_get_main_queue(), ...), which makes all the code in the dispatch_async closure run on the main queue, avoiding concurrency issues entirely.

ReactiveCocoa - Stop the triggering of subscribeNext until another signal has been completed

I'm pretty new to FRP and I'm facing a problem:
I subscribe to an observable that triggers subscribeNext every second.
In the subscribeNext's block, I zip observables that execute asynchronous operations and in zip's completed block I perform an action with the result.
let signal: RACSignal
let asynchOperations: [RACSignal]
var val: AnyObject?
// subscribeNext is triggered every second
signal.subscribeNext {
    let asynchOperations = // several RACSignal
    // Perform asynchronous operations
    RACSignal.zip(asynchOperations).subscribeNext({
        val = $0
    }, completed: {
        // perform actions with `val`
    })
}
I would like to stop the triggering of subscribeNext for signal (that is normally triggered every second) until completed (from the zip) has been reached.
Any suggestion?
It sounds like you want an RACCommand.
A command is an object that can perform asynchronous operations, but only have one instance of its operation running at a time. As soon as you tell a command to start execute:ing, it will become "disabled," and will automatically become enabled again when the operation completes.
(You can also make a command that's enabled based on other criteria than just "am I executing right now," but it doesn't sound like you need that here.)
Once you have that, you could derive a signal that "gates" the interval signal (for example, if:then:else: on the command's enabled signal toggling between RACSignal.empty and your actual signal -- I do this enough that I have a helper for it), or you can just check the canExecute property before invoking execute: in your subscription block.
Note: you're doing a slightly weird thing with your inner subscription there -- capturing the value and then dealing with the value on the completed block.
If you're doing that because it's more explicit, and you know that the signal will only send one value but you feel the need to encode that directly, then that's fine. I don't think it's standard, though -- if you have a signal that will only send one value, that's something that unfortunately can't be represented at the type level, but is nonetheless an assumption that you can make in your code (or at least, I find myself comfortable with that assumption. To each their own).
But if you're doing it for timing reasons, or because you actually only want the last value sent from the signal, you can use takeLast:1 instead to get a signal that will always send exactly one value right at the moment that the inner signal completes, and then only subscribe in the next block.
Slight word of warning: RACCommands are meant to be used from the main thread to back UI updates; if you want to use a command on a background thread you'll need to be explicit about the scheduler to deliver your signals on (check the docs for more details).
Another completely different approach to getting similar behavior is temporal recursion: perform your operation, then when it's complete, schedule the operation to occur again one second later, instead of having an ongoing timer.
This is slightly different, as you'll always wait one second between operations, whereas currently you could be waiting anywhere between zero and one second; but if that's not a problem then this is a much simpler solution than using an RACCommand.
ReactiveCocoa's delay: method makes this sort of ad-hoc scheduling very convenient -- no manual NSTimer wrangling here.
