Is there a good way to find the server side of mach_msg? - ios

I am disassembling a lot of iOS operating system code at the moment (frameworks, system daemons). One of the common ways these components make a system call is via mach_msg.
On the client side I can see how the mach_msg call is constructed, and quite often I know which system daemon will handle the call. However, I am not sure how to find the call handler in that daemon's disassembled code.
Is there a good rule of thumb for finding the handler?

I found the following (at least in one daemon); see the sketch after this list:
a) The mach_msg_server_once function is called, and its first parameter is a callback function.
b) This callback typically checks msgh_id and looks up the address of the routine to dispatch the call to in a dispatch table.
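To make the pattern concrete, here is a minimal sketch in C of what such a server loop looks like. The subsystem base ID, routine count, and handler table are illustrative assumptions, not the layout of any particular daemon; MIG-generated *_server demux functions follow this general shape.

#include <mach/mach.h>

/* Hypothetical dispatch table, standing in for a MIG-generated
 * subsystem table. Base ID and count are illustrative. */
#define SUBSYS_BASE  500
#define SUBSYS_COUNT 1
typedef void (*routine_t)(mach_msg_header_t *request, mach_msg_header_t *reply);
static void handle_example(mach_msg_header_t *rq, mach_msg_header_t *rp) { /* ... */ }
static routine_t subsys_routines[SUBSYS_COUNT] = { handle_example };

/* The callback passed to mach_msg_server_once: switch on msgh_id and
 * forward to the matching routine, as MIG demux functions do. */
static boolean_t my_demux(mach_msg_header_t *request, mach_msg_header_t *reply)
{
    mach_msg_id_t id = request->msgh_id;
    if (id < SUBSYS_BASE || id >= SUBSYS_BASE + SUBSYS_COUNT)
        return FALSE;                    /* unknown ID: not handled */
    subsys_routines[id - SUBSYS_BASE](request, reply);
    return TRUE;
}

/* Receive one message on the service port and dispatch it. */
kern_return_t serve_once(mach_port_t service_port)
{
    return mach_msg_server_once(my_demux, 4096 /* max msg size */,
                                service_port, MACH_MSG_OPTION_NONE);
}

So when reversing a daemon, finding the function passed as the first argument to mach_msg_server_once and following its msgh_id comparisons usually leads straight to the dispatch table.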

Related

How to configure the Main Thread Checker for your own code?

I have a class method that I want to always execute in the main thread.
Is there a way I could configure the Xcode Main Thread Checker to raise a "purple warning" if that method is called on a background thread (just like it does for UI-related methods)?
There are some undocumented environment variables, like MTC_SUPPRESSION_FILE, which lets you provide a list of classes, methods & selectors to exclude from the checker - the opposite of what you're looking for. I checked libMainThreadChecker.dylib (quickly) and couldn't find more of them.
Then I got a reply from an Apple engineer - there's no documented way. That can mean anything - either there's no way, or there is a way but it's not documented.¹ He suggests just using ...
dispatchPrecondition(condition: .onQueue(.main))
... at the beginning of your method.
¹ The library is available at /Applications/Xcode-beta.app/Contents/Developer/usr/lib/libMainThreadChecker.dylib if anyone wants to dig in.
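For what it's worth, GCD exposes the same kind of check at the C level as dispatch_assert_queue (iOS 10 / macOS 10.12 and later). A minimal sketch of the suggestion outside Swift - the function and its body here are illustrative, the GCD call is real:

#include <dispatch/dispatch.h>

/* Illustrative method body: traps, like dispatchPrecondition, if we are
 * not currently running on the main queue. */
void do_main_thread_work(void)
{
    dispatch_assert_queue(dispatch_get_main_queue());
    /* ... UI-touching work ... */
}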

FreeRTOS: calling vTaskDelete from an IRQ

I have spent some time searching but can't find any information on whether it's allowed to call vTaskDelete from an IRQ handler. I know some functions have a specialized version for use in IRQ routines, but I can't find anything related to vTaskDelete. Currently it works, but I don't want to introduce some hard-to-discover bug just because I couldn't find the information.
If you are calling a callback from the IRQ then it is still in the IRQ context. Calling vTaskDelete() with a NULL parameter would delete the task that was running before the interrupt was entered, so the interrupt would then try to return to a task that was no longer running. Even if that were not the case then the rule of thumb is not to use API functions that do not end in "FromISR" from an interrupt (the separate API ensures fewer decision points in the function, faster and standard interrupt entry as it doesn't need to keep an interrupt nesting variable, no need to pass parameters that don't make sense in an interrupt context [like a block time] into an interrupt function, etc.).
I assume you are not calling vTaskDelete with a NULL argument, because there is no current task when you are in interrupt context. In any case, vTaskDelete() should not be called from interrupt context. For example, its implementation will call vPortFree() to free the TCB of the task.
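A common way around this, sketched below under the assumption that the ISR knows which task to retire (all names are illustrative), is to have the ISR signal the task with a FromISR-safe notification and let the task delete itself from task context:

#include "FreeRTOS.h"
#include "task.h"

/* Handle of the worker task we want to retire; set when the task is created. */
static TaskHandle_t xWorkerHandle;

/* ISR: never call vTaskDelete() here. Notify the task instead. */
void vMyIRQHandler(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    vTaskNotifyGiveFromISR(xWorkerHandle, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

/* Task: block until the ISR signals, then clean up and delete itself
 * from task context, which is safe. */
void vWorkerTask(void *pvParameters)
{
    ulTaskNotifyTake(pdTRUE, portMAX_DELAY);
    /* ... release any resources this task owns ... */
    vTaskDelete(NULL);
}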

Transitioning from VB6 On Error to .Net Try...Catch

Our company has recently transitioned from VB6 to VB.NET. Unfortunately, all the error handling remains as On Error GoTo. This has not made it easy to track down errors that customers send back to tech support. As of now, the blocks of code that On Error surrounds are entire subroutines - not uncommonly hundreds of lines of code for one sub, possibly making calls to other routines. My question is how best to go about converting to Try...Catch blocks. I assume I can just replace On Error GoTo Errorline with Try and Errorline with Catch. But this seems like too much for one Try...Catch block to encompass.
VB6's big weakness with error handling is that the runtime does not provide a way for your code to get at the execution stack (method A called method B called method C, etc.) when an exception occurs; even if your On Error block catches the exception, your code doesn't know "where it is". To get around that deficiency, VB6 programmers learned to enclose every method, in every module, with a catch-all On Error block, often just so that their own code could keep track of the execution stack for logging purposes. There were even third-party tools that could be used to instrument your code with On Error blocks for exactly that purpose (VB/Rig, VB-Failsafe).
.NET's Exception object, however, does provide a .StackTrace property that represents the execution stack to the point of failure, so it's no longer necessary to use On Error blocks in every method just so you can learn where your code failed, post-mortem.
Here is one simple strategy you can use, as you transition:
First, as you suggested, replace all of your "boilerplate" On Error Goto Errorline / :Errorline with Try / Catch ex As Exception blocks. But, do this only in the "top-level" methods where execution can "begin". In VB, these are usually all of the Event methods in your forms that directly handle system events (_Click, _MouseDown, _Timer, etc.)
Second, remove all the boilerplate error handling from "lower-level" methods -- methods that are merely called from the "top-level" or other "lower-level" methods.
Now you have provided a "safety net" of Try/Catch exception handling that will protect your app from dying from an unhandled exception. When an exception does occur, even deep in the execution stack, your code will unwind back to the nearest Catch, usually one of your UI event-handler methods. But you will have the ex.StackTrace property that documents the execution up to the failure, module by module, method by method, with the source code line number at each level.
One exception to the above strategy is when you find an error-handling block that is not boilerplate - one written specifically to handle certain errors and respond to them. Leave this code in place, but again replace the On Error Goto Errorline / :Errorline with Try / Catch ex As Exception.
Here's a helpful rule of thumb: In your "top-level" methods, enclose the entire method in a "boilerplate" Try/Catch. In your "lower-level" methods, only write Try/Catch blocks around code where you can anticipate that certain exceptions will occur -- ones that your code wants to specifically respond to.
It's not unreasonable, or "too much", for a Try/Catch block to encompass large chunks of code. You should always strive to keep your methods as short as possible, but there is no reason to arbitrarily truncate or partition a long method just because it's enclosed by a Try/Catch.

Is it better for an API to dispatch itself to a queue and invoke a callback, or for the API caller to do the dispatching?

Examples:
Asynchronous method with its own dispatching:
// Library
func asyncAPI(callback: Result -> Void) {
    dispatch_async(self.queue) {
        ...
        callback(result)
    }
}
// Caller
asyncAPI() { result in
    ...
}
Synchronous method with exposed dispatch queue:
// Library
func syncAPI() -> Result {
    assert(isRunningOnCorrectQueue())
    ...
    return result
}
// Caller
dispatch_async(api.queue) {
    let result = api.syncAPI()
    ...
}
These two examples behave the same, but I am looking to learn whether one of them ends up complicating a larger codebase more than the other, especially when there is a lot of asynchrony.
I would argue against both of the patterns you propose.
For the first pattern (where the API manages its own backgrounding) I see little or no benefit to doing it this way, as opposed to leaving it to the caller. If you want to use a private, serial queue to protect data (or any other sort of critical section) internal to your API, that's fine, but that queue should be private, and it should specifically not target any public, non-global-concurrent queue (note: it should especially not target the main queue). Ideally, the primary implementation of your API would also take a second parameter, so callers can specify on which queue to invoke the callback. (People can work around the lack of such a parameter by passing a callback block that re-dispatches to their desired queue, but I think that's clunkier than having an extra, optional parameter.) This puts the API consumer in complete control of the concurrency, while preserving your freedom to use queues internally to protect state.
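A minimal sketch of that shape, written against GCD's C API (all names here are illustrative, not a real library): the caller passes the queue on which the completion callback should run, while the API keeps its internal serial queue private.

#include <dispatch/dispatch.h>

typedef struct { int value; } result_t;

void async_api(dispatch_queue_t callback_queue, void (^callback)(result_t))
{
    /* Private serial queue protecting internal state; never exposed. */
    static dispatch_queue_t internal_queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        internal_queue = dispatch_queue_create("com.example.api.internal",
                                               DISPATCH_QUEUE_SERIAL);
    });
    dispatch_async(internal_queue, ^{
        result_t result = { .value = 42 };   /* ... do the real work ... */
        /* Deliver the result on the caller-specified queue. */
        dispatch_async(callback_queue, ^{ callback(result); });
    });
}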
As to the second approach, it's my opinion that we all should avoid creating new synchronous, blocking API. When you provide a synchronous, blocking API and don't provide a callback-based version, that means that you have denied consumers of your API any opportunity to avoid blocking. When you only provide synchronous, blocking API, then if someone wants to call your API in the background, at least one thread (in addition to any additional threads that your API consumes behind the scenes) will be consumed from the finite number of threads available to each process. (In the worst case this can lead to starvation conditions that are effectively deadlocks.)
Another red flag with this second example is that it vends a queue; any time an API vends a queue, something is amiss. As mentioned, if you want to use a private serial queue to protect state or other critical sections internal to your API, go for it, but don't expose that queue to the outside world. If nothing else, it unnecessarily exposes details of your implementation. Looking through the system framework headers, I couldn't find a single case where a dispatch_queue_t was vended where it wasn't immediately obvious that the intent was for the API consumer to push in the queue, not read it out.
It's also worth mentioning that these patterns are problematic regardless of whether your workload is CPU-bound or IO-bound. If it's CPU-bound, then not managing your own dispatch gives consumers of the API explicit control over how this CPU work is executed. If your workload is IO-bound, then you should use the OS- and libdispatch-provided asynchronous IO mechanisms (dispatch_io, dispatch_sources, kevent, etc) to avoid consuming a thread (or more than one) for the duration of your work.
Another answer here implied that forcing consumers to manage their own concurrency leads to "boilerplate" code. If you feel that the burden of API consumers potentially having to wrap calls to your API with dispatch_async is too great, then feel free to provide a convenience overload that dispatches to the default global concurrent queue, but please always leave the version that allows API consumers the ability to explicitly manage their own concurrency.
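In C, which has no overloading, such a convenience entry point would just be a second function forwarding to the hypothetical async_api sketched above with a default queue:

/* Convenience wrapper: defaults the callback queue to the global
 * concurrent queue, while async_api still lets callers choose. */
void async_api_default(void (^callback)(result_t))
{
    async_api(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), callback);
}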
If, on the other hand, all this is internal to the implementation, and not part of the public API, then do whatever is most expedient, knowing that you can refactor the implementation behind the public API any time in the future.
As you said, the two generally accomplish the same thing, but the first is preferable in most scenarios. There are several benefits to using the first method:
The API is simpler. You simply call the method and provide code for the callback block.
Less boilerplate code. There is no typing dispatch_async every time you want to call it, as it is included in the method itself.
Less room for bugs/errors. By wrapping the asynchronous logic inside the method itself, you ensure that it is called on the right queue internally without the caller having to worry about any of that.
Touching on the last point, you also have finer control over the queue itself. Say you are trying to perform certain tasks on a particular queue. It is far simpler to wrap the code in a GCD call on that queue a single time than to have to remember to reuse that same queue every time you want to call the method.

Difference between the WaitFor function for TMutex in Delphi and the equivalent in the Win32 API

The Delphi documentation says that the WaitFor function for TMutex and other synchronization objects waits until the object's handle is signaled. But does this function also guarantee ownership of the object for the caller?
Yes, after a successful WaitFor, the calling thread owns the mutex; the class is just a wrapper for the OS mutex object. See for yourself by inspecting SyncObjs.pas.
The same is not true for other synchronization objects, such as TCriticalSection. Any thread may call the Release method on such an object, not just the thread that called Acquire.
TMutex.Acquire is a wrapper around THandleObject.WaitFor, which will call either WaitForSingleObject or CoWaitForMultipleHandles, depending on the UseCOMWait constructor argument.
This may be very important if you use STA COM objects in your application (you may do so without knowing - dbGO/ADO is COM, for instance) and you don't want to deadlock.
It's still a dangerous idea to enter a long or infinite wait on the main thread, because the only method which correctly handles calls made via TThread.Synchronize is TThread.WaitFor, and you may stall (or deadlock) your worker threads if you use the SyncObjs objects or the WinAPI wait functions.
In commercial projects, I use a custom wait method built upon the ideas from both THandleObject.WaitFor and TThread.WaitFor, with optional alertable waiting (good for asynchronous IO, and indispensable for the ability to abort long waits).
Edit: further clarification regarding COM/OLE:
COM/OLE model (e.g. ADO) can use different threading models: STA (single-threaded) and MTA (multi or free-threaded).
By definition, the main GUI thread is initialized as STA, which means the COM objects can use window messages for their asynchronous messaging (particularly when invoked from other threads, to synchronize safely). AFAIK, they may also use APC procedures.
There is a good reason for the CoWaitForMultipleHandles function to exist - see its use in SyncObjs.pas THandleObject.WaitFor - depending on the threading model, it can process internal COM messages while blocking on the wait handle.
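To make the difference concrete, here is a minimal sketch in plain Win32/COM terms of the two waits THandleObject.WaitFor chooses between (the wrapper function and its error mapping are illustrative; the two API calls are the real ones):

#include <windows.h>
#include <objbase.h>

DWORD wait_for_mutex(HANDLE hMutex, DWORD dwTimeoutMs, BOOL bUseComWait)
{
    if (!bUseComWait) {
        /* Plain wait: on an STA thread this blocks the window-message
         * pumping COM relies on, and can deadlock cross-apartment calls. */
        return WaitForSingleObject(hMutex, dwTimeoutMs);
    }
    /* COM-aware wait: can dispatch pending COM messages while blocked. */
    HANDLE handles[1] = { hMutex };
    DWORD dwIndex = 0;
    HRESULT hr = CoWaitForMultipleHandles(COWAIT_DEFAULT, dwTimeoutMs,
                                          1, handles, &dwIndex);
    return SUCCEEDED(hr) ? WAIT_OBJECT_0 : WAIT_FAILED;
}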