While implementing the callback functionality, I bumped into the following problems.
What is a good way to deal with the situation when an ill-behaving callee doesn't return? How can I detect this situation and continue my regular program flow?
My only idea until now is to create a thread which does the actual callback and kill it (and the application) after some timeout.
The second point is that I don't want to give the callee the possibility to mess around with my stack. How can I provide a clean stack for the callee, keeping in mind that several callbacks could possibly happen at the same time?
My approach so far is the following: every time an application installs a callback procedure, it has to allocate some memory for stack usage and provide a pointer to it.
I would appreciate your constructive comments and proposals.
It depends on your security model. If your callee is trusted, you needn't worry. Otherwise, I suggest putting it into a separate process.
You can queue further callbacks until the callback function returns.
Please note that I am asking about a strictly Dart-only application; this does not concern Flutter in any way. "dartvm" refers to the Dart virtual machine.
As far as I understand, Dart's idea of reactive state is implemented through streams. Responsibility for handling the lifetime of a stream object is given to the programmer: at runtime one can manipulate the stream as one sees fit, adding to it, listening to it, or disposing of it.
My question is this: is it necessary to call the dispose() method of a stream before my application quits? If so, how do I go about accomplishing that? Hooking into the VM state isn't well documented, and using ProcessSignal listeners is not portable. If not, does the GC handle this case? What's the best practice here?
Dart streams do not have a dispose method. Therefore you don't need to call it.
But just to give a little more detail ...
Dart streams are many things. Or rather, streams are pretty simple, they're just a way to provide a connection between code which provides events and code which consumes events. After calling listen, the stream object is no longer part of the communication, events and pushback goes directly between the event source (possibly a StreamController) and the consumer (a StreamSubscription).
Event providers are many things.
Some events are triggered just by code doing things. There is no need to clean up after those; they're just Dart objects like everything else, and they will die with the program, and can be garbage collected earlier if no live code refers to them.
Some events are triggered by I/O operations on the underlying operating system. Those will usually be cleaned up when the program ends, because they are allocated through the Dart runtime system, and it knows how to stop them again.
It's still a good idea to cancel the subscription as soon as you don't need any more events. That way, you won't keep a file open too long and prevent another part of the program from overwriting it.
Some code might allocate other resources, not managed by the runtime, and you should take extra care to say when that resource is no longer needed.
You'll have to figure that out on a case-by-case basis, by reading the documentation of the stream.
For resources allocated through dart:ffi, you can also use NativeFinalizer to register a dispose function for the resource.
Generally, you should always cancel the subscription if you don't need any more events from a stream. That's the one thing you can do. If nothing else, it allows garbage collection to collect things a little earlier.
Let's say that I call MailboxProcessor.PostAndReply, which may run for a very long time due to whatever reasons. What would happen if I call MailboxProcessor.Post from some other thread while the first call has not returned yet?
What I mean is: yes, sure, I could write a test that recreates this situation. However, before I start reinventing the wheel, I wonder if anyone already knows the answer to this question.
Thanks a lot!
The short answer: no, it doesn't block.
The longer version:
Mailbox processor uses a regular Queue&lt;T&gt; rather than a ConcurrentQueue&lt;T&gt;, which means posting takes a lock to enqueue. So if Post is called from two different threads at the same time, one blocks until the other's enqueue completes; that happens very quickly, but it is technically a block.
tl;dr: Post does not block, insofar as no actual work is done on posting.
I'm using grpc in iOS with bidirectional streams.
For the stream that I write to, I subclassed GRXWriter and I'm writing to it from a background thread.
I want to be as quick as possible. However, I see that GRXWriter's status switches between started and paused, and I sometimes get an exception when I write to it during the paused state. I found that before writing, I have to wait for GRXWriter.state to become started. Is this really a requirement? Is GRXWriter only allowed to write when its state is started? It switches very often between started and paused, and this feels like it may be slowing me down.
Another issue with this state check is that my code looks ugly. Is there any other way that I can use bidirectional streams in a nicer way? In C# grpc, I just get a stream that I write freely to.
Edit: I guess the reason I'm asking is this: in my thread that writes to GRXWriter, I have a while loop that keeps checking whether state is started and does nothing if it is not. Is there a better way to do this rather than polling the state?
The GRXWriter pauses because gRPC Core only accepts one pending write operation at a time; the next one has to wait until the first completes. So the GRPCCall instance blocks the writer until the previous write has completed, by modifying the writer's state.
As for the exception, I am not sure why you are seeing it. GRXWriter is more of an abstract class, and it seems you made your own implementation by inheriting from it. If you really want to do so, it might be helpful to refer to GRXBufferedPipe, which is an internal implementation. In particular, if you want to avoid waiting in a loop before writing, writing again in the setter of GRXWriter's state should be a good option.
According to Apple's documentation on NSOperation, we have to override the main method for non-concurrent operations and the start method for concurrent operations. But why?
First, keep in mind that "concurrent" and "non-concurrent" have somewhat specialized meanings in NSOperation that tend to confuse people (and are used synonymously with "asynchronous/synchronous"). "Concurrent" means "the operation will manage its own concurrency and state." "Non-concurrent" means "the operation expects something else, usually a queue, to manage its concurrency, and wants default state handling."
start does all the default state handling. Part of that is that it sets isExecuting, then calls main, and when main returns, it clears isExecuting and sets isFinished. Since you're handling your own state, you don't want that (you don't want exiting main to finish the operation). So you need to implement your own start and not call super. Now, you could still have a main method if you wanted, but since you're already overriding start (and start is what calls main), most people just put all the code in start.
As a general rule, don't use concurrent operations. They are seldom what you mean. They definitely don't mean "things that run in the background." Both kinds of operations can run in the background (and neither has to run in the background). The question is whether you want default system behavior (non-concurrent), or whether you want to handle everything yourself (concurrent).
If your idea of handling it yourself is "spin up an NSThread," you're almost certainly doing it wrong (unless you're doing this to interface with a C/C++ library that requires it). If it's creating a queue, you're probably doing it wrong (NSOperation has all kinds of features to avoid this). If it's almost anything that looks like "manually handling doing things in the background," you're probably doing it wrong. The default (non-concurrent) behavior is almost certainly better than what you're going to do.
Where concurrent operations can be helpful is in cases where the API you're using already handles concurrency for you. A non-concurrent operation ends when main returns. So what if your operation wraps an async thing like NSURLConnection? One way to handle that is to use a dispatch group and then call dispatch_group_wait at the end of your main so it doesn't return until everything's done. That's OK. I do it all the time. But it blocks a thread that wouldn't otherwise be blocked, which wastes some resources and in some elaborate corner cases could lead to deadlock (really elaborate: Apple claims it's possible and they've seen it, but I've never been able to make it happen even on purpose).
So another way you could do it is to define yourself as a concurrent operation, and set isFinished by hand in your NSURLConnection delegate methods. Similar situations happen if you're wrapping other async interfaces like Dispatch I/O, and concurrent operations can be more efficient for that.
(In theory, concurrent operations can also be useful when you want to run an operation without using a queue. I can kind of imagine some very convoluted cases where this makes sense, but it's a stretch, and if you're in that boat, I assume you know what you're doing.)
But if you have any question at all, just use the default non-concurrent behavior. You can almost always get the behavior you want that way with little hassle (especially if you use a dispatch group), and then you don't have to wrap your brain around the somewhat confusing explanation of "concurrent" in the docs.
I would assume that concurrent vs. non-concurrent is not just a flag somewhere but a very substantial difference. By having two different methods, it is made absolutely sure that you don't use a concurrent operation where you should use a non-concurrent one or vice versa.
If you get it wrong, your code will absolutely not work because of this design. That's what you want, because you immediately fix it. If there was one method only, then using concurrent instead of non-concurrent would lead to very subtle errors that might be very hard to find. And non-concurrent instead of concurrent will lead to performance problems that you also might miss.
I created a class that handles a serial port asynchronously. I use it to communicate with a modem. I have no idea why, but sometimes when I close my application, I get a blue screen and my computer restarts. I logged my code step by step, but when the BSOD appeared and my computer restarted, the file I was logging to contained only whitespace. So I have no idea what the reason for the BSOD could be.
I looked through my code carefully and found several possible causes of the problem (I was looking for anything that could lead to accessing unallocated memory and cause access violations).
When I rethought the idea of asynchronous operations, a few things came to my mind. Please verify whether these are right:
1) WaitCommEvent() takes a pointer to the overlapped structure. Therefore, if I call WaitCommEvent() inside a function and then leave the function, the overlapped structure cannot be a local variable, right? The event mask variable and event handle too, right?
2) ReadFile() and WriteFile() also take references or pointers to variables. Therefore all these variables have to be accessible until the overlapped read or write operations finish, right?
3) I call WaitCommEvent() only once and check its result in a loop, doing other things in the meantime. Because I have no idea how to terminate asynchronous operations (is it possible?), when I destroy the class that keeps a handle to the serial port, I first close the handle and then wait for the event in the overlapped structure that was used when calling WaitCommEvent(). I do this to be sure that the thread waiting asynchronously for a comm event does not access any fields of my destroyed class. Is this a good idea, or is it stupid?
try
  CloseHandle(FSerialPortHandle);
  if Assigned(FWaitCommEvent) then
    FWaitCommEvent.WaitFor(INFINITE);
finally
  FSerialPortHandle := INVALID_HANDLE_VALUE;
  FreeAndNil(FWaitCommEvent);
end;
Before I noticed all of this, most of the variables mentioned in points one and two were local variables of the functions that called the three methods above. Could that be the reason for the BSOD, or should I look for some other mistake in my code?
When I corrected the code, the BSOD stopped occurring, but it might be a coincidence. What do you think?
Any ideas will be appreciated. Thanks in advance.
I read the CancelIo() function documentation, and it states that this method cancels all I/O operations issued by the calling thread. Is it OK to wait for FWaitCommEvent after calling CancelIo() if I know that WaitCommEvent() was issued by a different thread than the one calling CancelIo()?
if Assigned(FWaitCommEvent) and CancelIo(FSerialPortHandle) then
begin
  FWaitCommEvent.WaitFor(INFINITE);
  FreeAndNil(FWaitCommEvent);
end;
I checked what happens in such a case, and the thread calling this piece of code didn't deadlock even though it did not issue WaitCommEvent(). I tested it on Windows 7 (if that matters). May I leave the code as is, or is it dangerous? Maybe I misunderstood the documentation, which is the reason for my question. I apologize for asking so many questions, but I really need to be sure about this.
Thanks.
An application running as a standard user should never be able to cause a bug check (a.k.a. BSOD). (And an application running as an Administrator should have to go well out of its way to do so.) Either you ran into a driver bug or you have bad hardware.
By default, Windows is configured to save a minidump in %SystemRoot%\minidump whenever a bug check occurs. You may be able to determine more information about the crash by loading the minidump file in WinDbg, configuring WinDbg to use the Microsoft public symbol store, and running the !analyze -v command in WinDbg. At the very least, this should identify what driver is probably at fault (though I would guess it's your modem driver).
Yes, you do need to keep the TOverlapped structure available for the duration of the overlapped operation. You're going to call GetOverlappedResult at some point, and GetOverlappedResult says it should receive a pointer to a structure that was used when starting the overlapped operation. The event mask and handle can be stored in local variables if you want; you're going to have a copy of them in the TOverlapped structure anyway.
Yes, the buffers that ReadFile and WriteFile use must remain valid. They do not make their own local copies to use internally. The documentation for ReadFile even says so:
This buffer must remain valid for the duration of the read operation. The caller must not use this buffer until the read operation is completed.
If you weren't obeying that rule, then you were likely reading into unreserved stack space, which could easily cause all sorts of unexpected behavior.
To cancel an overlapped I/O operation, use CancelIo. It's essential that you not free the memory of your TOverlapped record until you're sure the associated operation has terminated. Likewise for the buffer you're reading or writing. CancelIo does not cancel the operation immediately, so your buffers might still be in use even after you call it.