I have a task that performs sequential logic, and I want to stop it from performing that logic from another task. Is there a way to do that without causing a rendezvous?
How can I suspend the task?
Thanks in advance.
You can use an asynchronous transfer of control (ATC) and put the part you want to stop into the abortable part, or use abort directly to kill the task.
If you use GNAT, you could have a look at the GNAT.Tasking package.
Generally it's better to structure your sequential logic with checkpoints, such as a flag in a protected object, where a brief test can be made to see whether there's been a signal to abort. Protected objects are designed to be a lightweight concurrency mechanism that supports this sort of fast test.
Does it really need to be interruptible at any point in the statement sequence? Is the cost of the few extra micro- or milliseconds needed to complete a statement block or iteration and make a flag check really that unacceptable? How often do you anticipate needing to abort the processing sequence?
Having well-defined checkpoints at which to test for a signal to prematurely terminate processing can ensure that the sequence exits in a known state, which aids verifying correct operation and debugging if something goes awry.
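The checkpoint pattern itself is language-agnostic. Here is a minimal sketch in Python, with a mutex-guarded Boolean standing in for an Ada protected object; `StopFlag` and `sequential_logic` are illustrative names, not anything from the question:

```python
import threading

# StopFlag plays the role of an Ada protected object holding a single
# Boolean; the worker polls it at well-defined checkpoints.
class StopFlag:
    def __init__(self):
        self._lock = threading.Lock()
        self._stop = False

    def signal(self):
        with self._lock:
            self._stop = True

    def is_set(self):
        with self._lock:
            return self._stop

def sequential_logic(flag, steps):
    completed = 0
    for _ in range(steps):
        if flag.is_set():   # checkpoint: a cheap test between work items
            break           # exit in a known state
        completed += 1      # stand-in for one unit of the sequential work
    return completed

# Run to completion when nobody signals...
full = sequential_logic(StopFlag(), 5)

# ...and stop at the first checkpoint once the flag is signaled.
flag = StopFlag()
flag.signal()
stopped = sequential_logic(flag, 100)
```

The point is that the worker only ever stops between units of work, so it never leaves data half-updated.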
You might look at protecting whatever this operation or data is by implementing it inside a protected object.
It sounds to me like you are looking for some kind of locking scheme. It is fairly easy to implement all kinds of different locking schemes with Ada protected objects, and this way you don't need explicit handshaking between specific tasks.
Related
In an example such as examples/allegro_hand, where a main thread advances the simulator and another sends commands to it over LCM, what's the cleanest way for each process to kill the other?
I'm struggling to kill the side process when the main process dies. I've wrapped the AdvanceTo with a try, and catch the error thrown when
MultibodyPlant's discrete update solver failed to converge
I can manually publish a boolean with drake::lcm::Publish within the catch block. In the side process, I subscribe and use something like this HandleStatus to process incoming messages. The corresponding HandleStatus isn't called unless I add a while(0 == lcm_.handleTimeout(10)) like this. When I do, the side process gets stuck waiting for a message, which doesn't come unless the simulation throws. Any advice for how to handle this case?
I'm able to kill the main process (allegro_single_object_simulation) by sending a boolean over LCM from the other (run_twisting_mug), AdvanceTo-ing to a smaller timestep within the main process, and checking the received boolean after each of the smaller AdvanceTos. This seems to work reliably, but may not be the cleanest solution.
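The small-step approach described above can be sketched independently of Drake. In this Python sketch, `FakeSimulator` is a stub standing in for the real simulator (it is not Drake API), and `stop_requested` plays the role of the flag set by the LCM subscriber:

```python
# Advance in small increments so a stop request is honored promptly,
# instead of one long AdvanceTo that can't be interrupted.
class FakeSimulator:
    def __init__(self):
        self.time = 0.0

    def AdvanceTo(self, t):
        self.time = t   # the real simulator would integrate up to t

def run_until(sim, end_time, step, stop_requested):
    t = sim.time
    while t < end_time:
        t = min(t + step, end_time)
        sim.AdvanceTo(t)
        if stop_requested():   # e.g. a boolean received over LCM
            break
    return sim.time

sim = FakeSimulator()
# Pretend the stop message arrives once the clock passes 0.3 s.
stopped_at = run_until(sim, end_time=1.0, step=0.1,
                       stop_requested=lambda: sim.time >= 0.3)
```

The trade-off is the one noted in the question: smaller steps mean a faster reaction to the stop signal but more loop overhead.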
If I'm thinking about this the wrong way and there's a better way to run an example like this, please let me know. Thanks!
We often use a process manager, like https://github.com/RobotLocomotion/libbot/tree/master/bot2-procman
to start and manage all of our processes. The ROS ecosystem has similar tools.
procman is open and available for you to use, but we don't consider it officially "supported" by the drake developers.
I'm using grpc in iOS with bidirectional streams.
For the stream that I write to, I subclassed GRXWriter and I'm writing to it from a background thread.
I want to be as quick as possible. However, I see that GRXWriter's status switches between started and paused, and I sometimes get an exception when I write to it during the paused state. I found that before writing, I have to wait for GRXWriter.state to become started. Is this really a requirement? Is GRXWriter only allowed to write when its state is started? It switches very often between started and paused, and this feels like it may be slowing me down.
Another issue with this state check is that my code looks ugly. Is there any other way that I can use bidirectional streams in a nicer way? In C# grpc, I just get a stream that I write freely to.
Edit: I guess the reason I'm asking is this: in my thread that writes to GRXWriter, I have a while loop that keeps checking whether state is started and does nothing if it is not. Is there a better way to do this rather than polling the state?
The GRXWriter pauses because the gRPC Core only accepts one write operation pending at a time. The next one has to wait until the first one completes. So the GRPCCall instance will block the writer until the previous write is completed, by modifying its state!
As for the exception, I am not sure why you are getting it. GRXWriter is more like an abstract class, and it seems you did your own implementation by inheriting from it. If you really want to do so, it might be helpful to refer to GRXBufferedPipe, which is an internal implementation. In particular, if you want to avoid waiting in a loop before writing, triggering the next write from the setter of GRXWriter's state should be a good option.
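The "write again from the state setter" idea can be sketched in a few lines of Python. `BufferedWriter` below is an illustrative stand-in, not the real GRXWriter API: messages are queued while the writer is paused and flushed when the state transitions back to started, so no thread ever polls:

```python
from collections import deque

class BufferedWriter:
    def __init__(self, transport):
        self._queue = deque()
        self._state = "paused"
        self._transport = transport   # callable that performs the real write

    def write(self, msg):
        self._queue.append(msg)
        self._flush_if_started()

    @property
    def state(self):
        return self._state

    @state.setter
    def state(self, value):
        self._state = value
        if value == "started":        # resumed: push pending writes now
            self._flush_if_started()

    def _flush_if_started(self):
        while self._state == "started" and self._queue:
            self._transport(self._queue.popleft())

sent = []
w = BufferedWriter(sent.append)
w.write("a")            # buffered: the writer is paused
w.write("b")
w.state = "started"     # resuming flushes the queue in order
```

In the real gRPC case the flush would hand over only one message at a time, since the core allows a single pending write; the mechanism, reacting in the setter instead of polling, is the same.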
According to Apple document on NSOperation, we have to override main method for non-concurrent operations and start method for concurrent operations. But why?
First, keep in mind that "concurrent" and "non-concurrent" have somewhat specialized meanings in NSOperation that tend to confuse people (and are used synonymously with "asynchronous/synchronous"). "Concurrent" means "the operation will manage its own concurrency and state." "Non-concurrent" means "the operation expects something else, usually a queue, to manage its concurrency, and wants default state handling."
start does all the default state handling. Part of that is that it sets isExecuting, then calls main, and when main returns, it clears isExecuting and sets isFinished. Since you're handling your own state, you don't want that (you don't want exiting main to finish the operation). So you need to implement your own start and not call super. Now, you could still have a main method if you wanted, but since you're already overriding start (and that's the thing that calls main), most people just put all the code in start.
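The default state handling described above is easy to model. This Python sketch uses an illustrative `Operation` class, not the NSOperation API, to show exactly what the non-concurrent start does for you, and therefore what a concurrent operation must take over itself:

```python
class Operation:
    def __init__(self):
        self.is_executing = False
        self.is_finished = False

    def main(self):
        pass  # subclasses put their work here

    def start(self):
        # Default (non-concurrent) behavior: bracket main() with the state
        # flags. A concurrent operation overrides start, does NOT call this,
        # and sets is_executing / is_finished itself when its async work
        # actually completes.
        self.is_executing = True
        try:
            self.main()
        finally:
            self.is_executing = False
            self.is_finished = True

class AddOp(Operation):
    def __init__(self, a, b):
        super().__init__()
        self.a, self.b = a, b
        self.result = None

    def main(self):
        self.result = self.a + self.b

op = AddOp(2, 3)
op.start()
```

This makes the problem with async work visible: if `main` merely kicks off a request and returns, the default `start` marks the operation finished immediately, which is precisely why wrapping async APIs pushes you toward the concurrent variant.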
As a general rule, don't use concurrent operations. They are seldom what you mean. They definitely don't mean "things that run in the background." Both kinds of operations can run in the background (and neither has to run in the background). The question is whether you want default system behavior (non-concurrent), or whether you want to handle everything yourself (concurrent).
If your idea of handling it yourself is "spin up an NSThread," you're almost certainly doing it wrong (unless you're doing this to interface with a C/C++ library that requires it). If it's creating a queue, you're probably doing it wrong (NSOperation has all kinds of features to avoid this). If it's almost anything that looks like "manually handling doing things in the background," you're probably doing it wrong. The default (non-concurrent) behavior is almost certainly better than what you're going to do.
Where concurrent operations can be helpful is in cases that the API you're using already handles concurrency for you. A non-concurrent operation ends when main returns. So what if your operation wraps an async thing like NSURLConnection? One way to handle that is to use a dispatch group and then call dispatch_wait at the end of your main so it doesn't return until everything's done. That's ok. I do it all the time. But it blocks a thread that wouldn't otherwise be blocked, which wastes some resources and in some elaborate corner cases could lead to deadlock (really elaborate. Apple claims it's possible and they've seen it, but I've never been able to get it to happen even on purpose).
So another way you could do it is to define yourself as a concurrent operation, and set isFinished by hand in your NSURLConnection delegate methods. Similar situations happen if you're wrapping other async interfaces like Dispatch I/O, and concurrent operations can be more efficient for that.
(In theory, concurrent operations can also be useful when you want to run an operation without using a queue. I can kind of imagine some very convoluted cases where this makes sense, but it's a stretch, and if you're in that boat, I assume you know what you're doing.)
But if you have any question at all, just use the default non-concurrent behavior. You can almost always get the behavior you want that way with little hassle (especially if you use a dispatch group), and then you don't have to wrap your brain around the somewhat confusing meaning of "concurrent" in the docs.
I would assume that concurrent vs. non-concurrent is not just a flag somewhere but a very substantial difference. By having two different methods, it is made absolutely sure that you don't use a concurrent operation where you should use a non-concurrent one or vice versa.
If you get it wrong, your code will simply not work because of this design. That's what you want, because you'll fix it immediately. If there were only one method, using concurrent where you meant non-concurrent would lead to very subtle errors that might be very hard to find, and non-concurrent instead of concurrent would lead to performance problems that you might also miss.
When certain conditions are met, I'd like to schedule a worker to run a particular job in 5 minutes. The thing is, if the same conditions are met again, I want to check if there's something scheduled to run. If there is such a worker scheduled to run, then, I don't want to enqueue again, but if there isn't, it should be queued. I hope you guys understood what I'm trying to do. Can it be achieved? If yes, how?
Sounds like you want to use or implement a simple persisted lock. The code that enqueues the job can first check for the availability of the lock, acquire and enqueue if available, skip if not. The enqueued job can be responsible for releasing the lock. You'll want to account for failure, like adding a lock timeout. The redis-mutex gem may be a useful implementation of this idea.
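The enqueue-guard can be sketched in a few lines. This Python sketch uses an in-memory dict in place of Redis, and `acquire_lock` / `release_lock` / `maybe_enqueue` are illustrative names, not any gem's API:

```python
import time

# A lock with a TTL: the enqueuing side acquires it before scheduling,
# and skips if it is already held. The job releases it when it runs.
_locks = {}

def acquire_lock(name, ttl_seconds, now=None):
    now = time.monotonic() if now is None else now
    expires = _locks.get(name)
    if expires is not None and expires > now:
        return False              # a job is already scheduled: skip
    _locks[name] = now + ttl_seconds   # timeout guards against lost jobs
    return True

def release_lock(name):
    _locks.pop(name, None)

enqueued = []

def maybe_enqueue(job):
    if acquire_lock(job, ttl_seconds=300):
        enqueued.append(job)      # schedule the worker; the worker (or the
                                  # TTL) is responsible for releasing the lock

maybe_enqueue("cleanup")
maybe_enqueue("cleanup")          # conditions met again: lock held, no-op
```

With Redis the same shape becomes `SET name value NX EX ttl`, which makes the check-and-acquire atomic across processes.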
Best practices promote jobs that are idempotent. This means that you should be writing them in such a way that it should be safe to run them more than once. Any subsequent call doesn't change the result of the first call. You achieve this by writing logic that does the proper checks, and acts accordingly. Since you don't provide a description of what your worker does, I can't be more specific.
For an example, here is a link to Sidekiq's FAQ: Make your workers idempotent and transactional
The benefit of this approach is that you're playing along the convenient abstraction of scheduled workers, instead of fighting against it.
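Since no description of the worker was given, here is a generic Python sketch of what idempotent means in practice; `orders`, `receipt_worker`, and `send_receipt` are invented names for illustration:

```python
# An idempotent worker checks persisted state and only acts when the work
# hasn't been done yet, so running it twice is harmless.
orders = {42: {"receipt_sent": False}}
receipts_sent = []

def send_receipt(order_id):
    receipts_sent.append(order_id)   # stand-in for the real side effect

def receipt_worker(order_id):
    order = orders[order_id]
    if order["receipt_sent"]:        # already done: a rerun changes nothing
        return
    send_receipt(order_id)
    order["receipt_sent"] = True

receipt_worker(42)
receipt_worker(42)                   # duplicate enqueue is safe
```

With workers written this way, scheduling the same job twice costs one wasted check rather than a duplicated side effect, which is why you can often skip deduplicating the queue at all.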
I am developing a WPF application with C# 4.0 where some user-specific code will be compiled at runtime and then executed inside an AppDomain. The process might take 10 ms or 5 minutes. The AppDomain is created by Task.Factory.StartNew(). Works fine.
Now I want to be able to cancel/interrupt the execution. I can press a Stop button while the code is executing, but how can I cancel the Task? I know there is the CancellationToken.IsCancellationRequested property, but I cannot loop through anything, which is why I cannot check the value while the (atomic) code is executing. And unloading the AppDomain does not stop the Task.
FYI: I took the Task class because it is easy to use. If Thread would be more useful: no problem.
Can someone help me? A short code snippet would be nice :).
Thank you.
Aborting a thread or a task is a code smell of a badly designed solution.
If that is your decision, you should consider that every line of code could be the last one executed, and release any unmanaged resources, locks, etc. whose abandonment could leave the system in an inconsistent state. In theory we should always be this careful, but in practice this doesn't hold.
If you try a Thread and its tempting .Abort() method, you should consider that ThreadAbortException is a special exception in terms of try-catch and finally blocks. Additionally, you can't even be sure that the thread will actually be aborted.
As regards using a Task, AFAIK (I'm not an expert in TPL) I'm afraid you cannot do what you want. You should somehow re-design your logic to observe the cancellation token and stop your computation cleanly.
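The cooperative shape being recommended looks the same in any language. Here is a Python sketch, with threading.Event standing in for a CancellationToken and `compile_and_run` an invented name for the user-code runner; the key change is splitting the "atomic" work into units with a checkpoint between them:

```python
import threading

def compile_and_run(chunks, cancelled: threading.Event):
    done = []
    for chunk in chunks:
        if cancelled.is_set():   # checkpoint between units of work
            break                # clean up and stop in a known state
        done.append(chunk)       # stand-in for executing one unit
    return done

token = threading.Event()
full = compile_and_run(["a", "b"], token)   # no Stop pressed: runs through

token.set()                                 # Stop button sets the token
stopped = compile_and_run(["a", "b"], token)
```

If the user code truly cannot be broken into checkpointed units, the remaining honest option is process-level isolation (run it in a separate process you can kill), since neither Task nor AppDomain unloading gives you a safe in-place abort.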