Suppose I have the following:
action_client.cancel_all_goals()
print(action_client.get_status() != 'ACTIVE')
Is the above guaranteed to print True every time?
In other words, does SimpleActionClient.cancel_all_goals() cancel all goals before returning, or does it just send a cancel request without waiting for the goals to actually be cancelled?
action_client.cancel_all_goals() just sends an instruction to cancel goals without waiting.
Since the documentation of the method is not very helpful, you need to have a look at action_client.h or action_client.py to find out what is happening.
The code shows that, to cancel all goals, only a simple cancel message is published (self.pub_cancel.publish(cancel_msg) in the Python client, with an equivalent publish in the C++ one). This means the call is asynchronous and will not block.
This means your code
action_client.cancel_all_goals()
print(action_client.get_status() != 'ACTIVE')
will typically print False, but even that is not guaranteed, because:
The goal could already be PREEMPTED, ABORTED, ... beforehand.
ROS is not deterministic, which means that in theory the goal could be cancelled between the calls to cancel_all_goals and get_status due to a thread switch.
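If you really need the print to be reliable, you have to wait until the goal reaches a terminal state yourself. Here is a minimal sketch of that idea (assuming a standard actionlib SimpleActionClient that has already sent a goal with send_goal(); cancel_and_wait is just an illustrative helper name):
import rospy
from actionlib_msgs.msg import GoalStatus

def cancel_and_wait(client, timeout=5.0):
    # cancel_all_goals() only publishes the cancel message and returns immediately
    client.cancel_all_goals()
    # wait_for_result() blocks until the client's tracked goal reaches a terminal state
    finished = client.wait_for_result(rospy.Duration(timeout))
    if not finished:
        rospy.logwarn("goal did not reach a terminal state within the timeout")
    # once the goal has gone terminal this is typically GoalStatus.PREEMPTED
    return client.get_state()

# state = cancel_and_wait(action_client)
# print(state != GoalStatus.ACTIVE)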
I am using FreeRTOS v8.2.3 on a PIC32 microcontroller. I have a case where I need to post the following three events to three corresponding queues from an ISR, in order to unblock a task that is waiting on one of these events at a time:
a) SETUP packet arrival
b) Transfer completed event 1
c) Transfer completed event 2
My execution sequence and requirements are as follows:
Case 1 (execution is blocked waiting for an event at point_1):
When a SETUP packet arrives while waiting at point_1:
i) the waiting task should be unblocked
ii) the SETUP is received from its queue and processed
Some code then runs and execution reaches point_2.
Case 2 (execution is blocked waiting for an event at point_2):
If any one of the SETUP or transfer-complete events occurs at point_2:
i) unblock the wait
ii) if it was transfer_complete_1 or transfer_complete_2, receive the event from its queue, carry out some additional transfers, and loop back to point_2
iii) if it was a SETUP queue event, do not receive it, but go back to point_1
The code does not seem to work when I try to use xQueueReceive and xQueueSelectFromSet on the SETUP queue, even when one of them is used at point_1 and the other at point_2.
But it seems to work fine if I use xQueueSelectFromSet in both places and check which queue set member handle caused the event before proceeding.
Given the requirement above, the problem with using xQueueSelectFromSet in both places is that
- the xQueueSelectFromSet calls will be placed back to back, first on a SETUP event at point_2 and then immediately at point_1, which is not intentional
- the xQueueSelectFromSet call at point_1 is also not desired
Hence, can anyone please explain whether and how to use both a queue set and xQueueReceive on the same queue? If that is not possible, how is this requirement typically implemented in FreeRTOS?
This is a duplicate of a question asked on the FreeRTOS support forum, so below is a duplicate of the answer I gave there:
I don't fully understand your usage scenario, but here are some points which may help.
1) If a queue is a member of a queue set, then the queue can only be read after its handle has been returned from the queue set. Further, if a queue's handle is returned from a queue set, then the item must be read from the queue. If either of these requirements is not met, then the state of the queue set will not match that of the queues in the set.
2) If the same task is reading from multiple queues, then it is probably not necessary to use a queue set at all. See the "alternatives to using queue sets" section on the following page: http://www.freertos.org/Pend-on-multiple-rtos-objects.html
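To make point 2 concrete: the "alternatives to using queue sets" idea boils down to posting every event to a single queue as an item carrying an event-type field, and branching on that field in the receiving task. Below is a rough sketch of that pattern, written in Python purely to show the control flow (a real FreeRTOS version would post small tagged structs to one xQueue with xQueueSendFromISR and read them with xQueueReceive):
# Illustrative only: one queue, tagged events, a single blocking wait point.
# Event names (SETUP, XFER_DONE_1, XFER_DONE_2) mirror the question.
import queue

events = queue.Queue()

def post_event(kind, payload=None):
    # in the ISR this would be a single xQueueSendFromISR() of a tagged struct
    events.put((kind, payload))

def wait_at_point_2():
    while True:
        kind, payload = events.get()   # one wait point, no queue set needed
        if kind == "SETUP":
            # with a single queue the SETUP item is consumed here and handed
            # over to the point_1 logic instead of being read a second time
            return ("SETUP", payload)
        # XFER_DONE_1 / XFER_DONE_2: carry out the additional transfers, then loop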
I've started working with CloudKit and finally started using subclassed NSOperation for most of my async stuff.
However, I have two questions.
How can I mark an operation as failed? That is, if operation A fails, I don't want its dependent operations to run. Can I just not mark it as isFinished? What happens to the unexecuted items already in the queue?
What would be the recommended route to take if I would like something like a try, catch, finally? The end goal is to have one last operation that can display some UI with information about success, or report errors back to the user.
isFinished means your operation has completed execution. You can cancel an operation, but that means your operation is cancelled, which could happen without the operation even executing; you can check for that by calling isCancelled. If you specifically want failure and success flags after executing an NSOperation, then add an isFailure property in your subclass, check it in the dependent operation before executing, and cancel that operation if isFailure is set to true.
You can add a dependency on the operations and check their status; if all are successful, just update the UI on the main thread, or otherwise report an error.
Update
You can keep an array of dependent operations, and on failure you can cancel those operations.
Add the operations to a queue. When one of them fails, call cancel on the queue. Future CKOperations will error on start with "operation was cancelled before it started", and any block operations won't even run.
One way is to use KVO on the queue's operationCount (Example on Github) and wait until it is zero. Then, if you got an error at any stage (and captured it), you can make a final callback operation that ends with the error. However, usually you will end up wanting to display a different message depending on the operation that caused the error, in which case it is best to handle errors when they happen, rather than wait until the end, when you would then need to figure out which error came from which operation.
Overview:
When an operation fails (based on your business logic) and you want to abort all dependent operations, you can cancel the dependent operations manually.
The order in which you cancel operations is important, as cancelling an operation allows its dependent operations to start (because the dependency condition has been broken).
In order to do all this, you need variables holding each of those dependent operations, so that you can cancel them in the order you intend.
Code:
var number = 0
let queue = OperationQueue()
let b = BlockOperation()
let c = BlockOperation()
let d = BlockOperation()
let a = BlockOperation {
    if number == 1 {
        // Assume number should not be 1
        // If number is 1, then it is considered a failure
        // Cancel the remaining operations manually in the reverse order
        // Reverse order is important because if you cancelled b first, it would start c
        d.cancel()
        c.cancel()
        b.cancel()
    }
}
b.addDependency(a)
c.addDependency(b)
d.addDependency(c)
queue.addOperation(a)
queue.addOperation(b)
queue.addOperation(c)
queue.addOperation(d)
E.g. suppose I have a module that implements gen_server behavior, and it has
handle_call({foo, Foo}, _From, State) ->
    {reply, result(Foo), State};
I can reach this handler by doing gen_server:call(Server, {foo, Foo}) from some other process (I guess if a gen_server tries to gen_server:call itself, it will deadlock). But gen_server:call blocks on response (or timeout). What if I don't want to block on the response?
Imaginary use-case: Suppose I have 5 of these gen_servers, and a response from any 2 of them is enough for me. What I want to do is something like this:
OnResponse = fun(Response) ->
    % blah
end,
lists:foreach(
    fun(S) ->
        gen_server:async_call(S, {foo, Foo}, OnResponse)
    end,
    Servers),
Result = wait_for_two_responses(Timeout),
lol_i_dunno()
I know that gen_server has cast, but cast has no way of providing a response, so I don't think that's what I want in this case. Also, it seems like it should not be the gen_server's concern whether the caller wants to handle the response synchronously (using gen_server:call) or asynchronously (which does not seem to exist?).
Also, the server is allowed to provide the response asynchronously by having handle_call return {noreply, State} and later calling gen_server:reply. So why not also support handling the response asynchronously on the other side? Or does that exist, and I'm just failing to find it?
gen_server:call is basically a sequence of
send a message to the server (with an identifier)
wait for the response to that particular message
wrapped in a single function.
For your example you can decompose the behavior into 2 steps: a loop that uses gen_server:cast(Server, {Message, UniqueID, self()}) with all servers, and then a receive loop that waits for a minimum of 2 answers of the form {UniqueID, Answer}. But you must take care to empty your mailbox at some point in time. A better solution would be to delegate this to a separate process which will simply die when it has received the required number of answers:
[edit] Made some corrections in the code, now it should work :o)
get_n_answers(Msg, ServerList, N) when N =< length(ServerList) ->
    spawn(?MODULE, get_n_answers, [Msg, ServerList, N, [], self()]).

get_n_answers(_Msg, [], 0, Rep, Pid) ->
    % tag the reply with this helper's pid, which is what the caller stored
    Pid ! {self(), Rep};
get_n_answers(Msg, [], N, Rep, Pid) ->
    NewRep = receive
                 Answ -> [Answ|Rep]
             end,
    get_n_answers(Msg, [], N-1, NewRep, Pid);
get_n_answers(Msg, [H|T], N, Rep, Pid) ->
    %gen_server:cast(H,{Msg,Pid}),
    H ! {Msg, self()},
    get_n_answers(Msg, T, N, Rep, Pid).
and you can use it like this:
ID = get_n_answers(Msg, ServerList, 2),
% insert some code here
Answer = receive
             {ID, A} -> A % tagged with ID so as not to catch another message in the mailbox
         end
You can easily implement that by sending each call from a separate process and waiting for responses from as many of them as required (in essence this is what async is about, isn't it? :-)
Have a look at this simple implementation of a parallel call, which is based on async_call from the rpc library in OTP.
This is how it works in plain English.
You need to make 5 calls so (in the parent process) you spawn 5 child Erlang processes.
Each process sends back to the parent process a tuple containing its PID and the result of the call.
The tuple can only be constructed and sent back when the desired call has completed.
In the parent process you loop through responses in the receive loop.
You can wait for all responses or just 2 or 3 out of the started 5.
The parent process (which spawns the worker processes) will eventually receive all responses (including those you want to ignore). You need a way to discard them if you don't want the message queue to grow indefinitely. There are two options:
The parent process itself can be a transient process, created only for the call, which spawns the other 5 child processes. Once the desired number of responses is collected, it can send the result back to the caller and die. Messages sent to the dead process will be discarded.
The parent process can continue receiving messages after it has received the desired number of responses and simply discard them.
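If it helps to see the control flow outside Erlang, here is the same "fire N requests, keep the first two answers, drop the rest" idea sketched in Python with concurrent.futures (purely illustrative; the linked implementation does this with spawned Erlang processes and selective receive, not threads):
# Illustrative sketch of "wait for the first `needed` answers out of N parallel calls".
from concurrent.futures import ThreadPoolExecutor, as_completed

def first_n_answers(call, servers, needed=2, timeout=5.0):
    pool = ThreadPoolExecutor(max_workers=len(servers))
    futures = [pool.submit(call, server) for server in servers]
    results = []
    for future in as_completed(futures, timeout=timeout):
        results.append(future.result())
        if len(results) == needed:
            break
    # like the short-lived parent process in option 1, we simply stop caring
    # about any remaining answers instead of letting them pile up
    pool.shutdown(wait=False)
    return results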
gen_server does not have a concept of async calls on the client side. It is not trivial to implement consistently, because gen_server:call is a combination of a monitor on the server process, sending the request message, and waiting for either the answer, a monitor DOWN, or a timeout. If you do something like what you mentioned, you will need to deal with 'DOWN' messages from the server somehow... so a hypothetical async_call would have to return some key for the yield, and also an internal monitor reference for the case where you are processing 'DOWN' messages from other processes and do not want to mix them up with yield errors.
A not-so-good but possible alternative is to use rpc:async_call(gen_server, call, [....])
But this approach has a limitation: the calling process will be a short-lived rex child, so if your gen_server uses the caller pid for anything other than sending it a reply, the logic will be broken.
gen_server:call to the process itself would surely block until timeout. To understand the reason, one should be aware of the fact that the gen_server framework actually combines your specific code into one single module, and gen_server:call is essentially "translated" into a "Pid ! Msg" form.
So imagine how this takes effect: the process actually stays in a loop, continually receiving messages, and when the control flow runs into a handler function, the receive is temporarily suspended. So if you call gen_server:call on the process itself, then since it is a synchronous function it waits for a response, which can never come in until the handler function returns and the process can continue to receive messages; hence the code is in a deadlock.
I have a wxLua GUI app that has a "Run" button. Depending on the selected options, Run can take a long time, so I would like to implement a "Cancel" button/feature. But it looks like everything in wxLua runs on one GUI thread, and once you hit Run, pressing Cancel does nothing; the Run always goes to completion.
Cancel basically sets a variable to true, and the running process regularly checks that variable. But the Cancel button press event never happens while Running.
I have never used co-routines; if the Run process regularly yields to a "Cancel check" process, will the Cancel event happen then?
Or is there another way?
(The following assumes that by "Run" you mean a long-running operation in the same process and not running an external process using wxExecute or wxProcess.)
The "Cancel" event is not triggered because, while your Run logic is executing, you have not given the UI a chance to handle the click event.
To avoid blocking the UI you need to do something like this. When you click the Run button, create a coroutine around the function you want to run:
coro = coroutine.create(myLongRunningFunction)
Your Run event handler is completed at this point. Then, in the EVT_IDLE event handler, you keep resuming this coroutine for as long as it's not complete. It will look something like this:
if coro then -- only if there is a coroutine to work on
    local ok, res = coroutine.resume(coro, additional, parameters)
    -- your function either yielded or returned
    -- you may check ok to see if there was an error
    -- res can tell you how far you are in the process
    -- coro can return multiple values (just give them as parameters to yield)
    if coroutine.status(coro) == 'dead' then -- finished or stopped with error
        coro = nil
        -- do whatever you need to do knowing the process is completed
    end
end
You will probably need to request more IDLE events for as long as your process is not finished, as some operating systems will not trigger IDLE events unless some other event is triggered. Assuming your handler has an event parameter, you can do event:RequestMore(true) to ask for more IDLE events (RequestMore).
Your long-running process will need to call coroutine.yield() at the right intervals (not too short, as you will waste time switching back and forth, and not too long, or users will notice delays in the UI); you will probably need to experiment with this, but something timer-based with 100ms or so between calls may work.
You can check for the Cancel value either in your IDLE event handler or in the long-running function, as you do now. The logic I described will give your application UI a chance to process the Cancel event as you expect.
I don't use wxWidgets, but the way I implement cancel buttons in my Lua scripts which use IUP is to have a cancel flag, which is set when the button is pressed and which is checked via the progress display during the run.
Usage is like this:
ProgressDisplay.Start('This is my progress box', 100)
for i = 1, 100 do
    ProgressDisplay.SetMessage(i.." %")
    fhSleep(50,40) -- Emulate performing the task
    ProgressDisplay.Step(1)
    if ProgressDisplay.Cancel() then
        break
    end
end
ProgressDisplay.Reset()
ProgressDisplay.Close()
If you want to see the definition for the ProgressDisplay see:
http://www.fhug.org.uk/wiki/doku.php?id=plugins:code_snippets:progress_bar
Suppose I use QTP's recovery scenario manager to set the playback synchronization timeout to 0. The handler would return with "continue with next statement".
I'd do that to make sure that any following playback statements don't waste their time waiting for the next non-existing/non-matching step before failing:
I have a lot of GUI tests that kind of get stuck: let's say 10 controls are missing, then their (consecutive) playback steps produce 10 timeout waits before failing. If the playback timeout is 30 seconds, I lose 10x30 seconds = 5 minutes of execution time, while it really would be sufficient to wait for 30 seconds ONCE (because the app does not change anymore -- we have already waited a full timeout period).
Now if I have 100 test cases (= action iterations), this possibly happens 100 times, wasting 500 minutes of my test execution time window.
That's why I came up with the idea of a recovery scenario function setting the timeout to 0 after/upon the first failed playback step. This would speed things up while skipping the rightly-FAILED step, yet would not compromise the precision/reliability of identifying the next matching GUI context (which creates a PASSED step).
Then of course upon the next passed playback step, I would want to restore the original timeout value. How could I do that? This is my question.
One cannot define a recovery scenario function that is called for PASSED steps.
I am currently thinking about setting a method function for Reporter.ReportEvent and "sniffing" for PASSED log entries there. I'd install that method function in the recovery scenario function which sets the timeout to 0. Then, when the "sniffer" function senses a ReportEvent call with PASSED status during one of the following playback steps, I'd reset everything (i.e. restore the original timeout and uninstall the method function). (I am 99% sure, however, that .Click and .Set methods do not call ReportEvent to write their result status... so this option probably won't work.)
Better ideas? This really bugs me.
It sounds to me like your tests aren't designed correctly; if you fail to find an object, why do you continue?
One possible (non-recovery-scenario) solution would be to use RegisterUserFunc to override the methods you are using in order to do a short obj.Exist check before running the required method.
Function MyClick(obj)
    If obj.Exist(1) Then
        obj.Click
    Else
        Reporter.ReportEvent micFail, "Click failed, no object", "Object does not exist"
    End If
End Function

RegisterUserFunc "Link", "Click", "MyClick"
RegisterUserFunc "WebButton", "Click", "MyClick"
' etc.
If you have many controls, of which some may be missing, and you know that after the timeout you mentioned (i.e. once the first timeout occurs) nothing more will show up, then you can use the Exist method with a timeout parameter.
Something like this:
timeout = 10
For Each control In controls
    If control.Exist(timeout) Then
        ' do something with the control
    Else
        timeout = 0
    End If
Next
Now only the first timeout will be 10 seconds. Each and every subsequent timeout in your collection of controls will be 0, which will save you time.