I'm looking for some clarification on task failure. As I understand it, if task 1 spawns task 2, task 2 is a child of task 1. If task 1 fails, is task 2 automatically failed and cleaned up as well?
For example, I start an I/O task on a socket like so:
spawn(proc() {
    start_new_socket(socket, socket_receiver)
});
We'll call that task 1. In task 1, I spawn another task:
fn start_new_socket(socket: Socket, receiver: Receiver<Message>) {
    // Write task
    let mut stream_write = socket.stream.clone();
    spawn(proc() {
        loop {
            let msg = receiver.recv();
            msg.send(&mut stream_write).unwrap();
        }
    });
    // Open up a blocking read on this socket
    let mut stream_read = socket.stream.clone();
    loop {
        let msg = Message::load(&mut stream_read).unwrap();
        match msg.payload {
            Text(ptr) => {
                let json_slice = (*ptr).as_slice();
                println!("Socket: {} received: {}", socket.id, json_slice);
                parse_json(json_slice, socket.clone());
            }
            Binary(ptr) => {
                // TODO - handle binary payloads
            }
        }
    }
}
If task 1, the one running start_new_socket, fails because of an EOF or something else on the stream, does the write task it started also fail?
I ran the experiment with this code:
use std::io::Timer;
use std::time::Duration;

fn main() {
    spawn(proc() {
        let mut timer = Timer::new().unwrap();
        loop {
            println!("I from subtask !");
            timer.sleep(Duration::seconds(1));
        }
    });
    let mut other_timer = Timer::new().unwrap();
    other_timer.sleep(Duration::seconds(5));
    println!("Gonna fail....");
    other_timer.sleep(Duration::seconds(1));
    fail!("Failed !");
}
The output is:
I from subtask !
I from subtask !
I from subtask !
I from subtask !
I from subtask !
Gonna fail....
I from subtask !
task '<main>' failed at 'Failed !', failing.rs:18
I from subtask !
I from subtask !
I from subtask !
I from subtask !
I from subtask !
I from subtask !
^C
So apparently not: the subtask doesn't fail when the main task does.
I have three tasks which share a binary semaphore, myBinarySemaphore. I'd like to know which task currently holds the binary semaphore. I could use a global variable to do this, but does FreeRTOS provide a method for it?
Here's the code. I'm looking for a FreeRTOS method to check which task holds the binary semaphore, in taskC for example. xTaskOwner is pure invention for illustration purposes. Thanks.
void taskA(void *pvParameters)
{
    for (;;)
    {
        if (xSemaphoreTake(myBinarySemaphore, (TickType_t) 10) == pdTRUE)
        {
            xSemaphoreGive(myBinarySemaphore);
        }
    }
}

void taskB(void *pvParameters)
{
    for (;;)
    {
        if (xSemaphoreTake(myBinarySemaphore, (TickType_t) 10) == pdTRUE)
        {
            xSemaphoreGive(myBinarySemaphore);
        }
    }
}

void taskC(void *pvParameters)
{
    for (;;)
    {
        if (xTaskOwner(myBinarySemaphore) == taskA) // <== How to check with FreeRTOS which task has the semaphore?
            printf("taskA has the semaphore");
        else if (xTaskOwner(myBinarySemaphore) == taskB)
            printf("taskB has the semaphore");
    }
}
PS & EDIT: let's assume that taskC can run simultaneously with the other tasks, because otherwise my example doesn't make sense.
I would add a queue with a simple message in it that says which task currently has the semaphore. Every time a task takes the semaphore, it overwrites the queue entry. In taskC you can do an xQueuePeek and see which task did the last overwrite.
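A minimal sketch of that idea (not tested code): it assumes a length-1 queue created alongside the semaphore, and the xOwnerQueue name and owner IDs are invented for illustration.
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"
#include "semphr.h"

/* Invented IDs for illustration. */
enum { OWNER_NONE = 0, OWNER_TASK_A, OWNER_TASK_B };

/* Length-1 queue, created e.g. with xQueueCreate(1, sizeof(int)).
   myBinarySemaphore is the semaphore from the question. */
QueueHandle_t xOwnerQueue;

void taskA(void *pvParameters)
{
    for (;;)
    {
        if (xSemaphoreTake(myBinarySemaphore, (TickType_t) 10) == pdTRUE)
        {
            int owner = OWNER_TASK_A;
            /* Record ourselves as the holder; xQueueOverwrite always
               succeeds on a length-1 queue. */
            xQueueOverwrite(xOwnerQueue, &owner);
            /* ... work while holding the semaphore ... */
            xSemaphoreGive(myBinarySemaphore);
        }
    }
}

void taskC(void *pvParameters)
{
    for (;;)
    {
        int owner;
        /* Peek rather than receive, so the record stays in the queue. */
        if (xQueuePeek(xOwnerQueue, &owner, (TickType_t) 10) == pdTRUE)
        {
            if (owner == OWNER_TASK_A)
                printf("taskA last took the semaphore\n");
            else if (owner == OWNER_TASK_B)
                printf("taskB last took the semaphore\n");
        }
    }
}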
OR
You can use event flags to signal which task has the semaphore. Each task gets its own flag in a shared event group.
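A rough, untested sketch of the event-group variant; the bit assignments and the xOwnerEvents name are made up, and the event group is assumed to be created elsewhere with xEventGroupCreate().
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"
#include "event_groups.h"
#include "semphr.h"

#define TASK_A_HAS_SEM (1 << 0)
#define TASK_B_HAS_SEM (1 << 1)

/* Created elsewhere with xEventGroupCreate();
   myBinarySemaphore is the semaphore from the question. */
EventGroupHandle_t xOwnerEvents;

void taskA(void *pvParameters)
{
    for (;;)
    {
        if (xSemaphoreTake(myBinarySemaphore, (TickType_t) 10) == pdTRUE)
        {
            /* Mark ourselves as the holder while we own the semaphore. */
            xEventGroupSetBits(xOwnerEvents, TASK_A_HAS_SEM);
            /* ... work while holding the semaphore ... */
            xEventGroupClearBits(xOwnerEvents, TASK_A_HAS_SEM);
            xSemaphoreGive(myBinarySemaphore);
        }
    }
}

void taskC(void *pvParameters)
{
    for (;;)
    {
        EventBits_t bits = xEventGroupGetBits(xOwnerEvents);
        if (bits & TASK_A_HAS_SEM)
            printf("taskA has the semaphore\n");
        else if (bits & TASK_B_HAS_SEM)
            printf("taskB has the semaphore\n");
    }
}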
I am using Twilio Flex to support a call center. I have a TaskRouter workflow set up where Task Reservation Timeout is set to 120 seconds. In its filter, I've created two routing steps. The first one finds matching workers in the main queue and has a timeout of 120 seconds. After 120 seconds, it should move to Call Forward Queue. In the call forward queue, no workers exist (target worker expression: 1==2). I'm catching all these events with a "trEventListener" function. Once a task is moved into the Call Forward queue, I call the "callForward" function which uses twiml.dial() to connect the call to an external number. I also change this task's status to "canceled" with a custom reason so I can track it in flex insights. I am using the guide in this link to form my logic: https://support.twilio.com/hc/en-us/articles/360021082934-Implementing-Voicemail-with-Twilio-Flex-TaskRouter-and-WFO.
Call forwarding is working fine, but according to Flex Insights there are some calls that get handled after 120 seconds (between 120 and 300 seconds). Ideally, these should be forwarded as well. There is also no error logged for me to track down why this happens to only a handful of calls.
Furthermore, in some cases when I try to change the task status to canceled with my custom reason, it spits out the following error: "Cannot cancel task because it is not pending or reserved." In other cases it works fine. Again, it's hard to figure out why it only works selectively and isn't consistent in its behavior.
Here is the function code.
trEventListener.js:
exports.handler = function(context, event, callback) {
  const client = context.getTwilioClient();
  let task = '';
  let workspace = '';
  console.log(`__[trEventStream]__: Event received of type: ${event.EventType}`);
  // set up an empty success response
  let response = new Twilio.Response();
  response.setStatusCode(204);
  // switch on the event type
  switch (event.EventType) {
    case 'task-queue.entered':
      // ignore events that are not entering the 'Call Forward' TaskQueue
      if (event.TaskQueueName !== 'Call Forward') {
        console.log(`__[trEventStream]__: Entered ${event.TaskQueueName} queue - no forwarding required!`);
        return callback(null, response);
      }
      console.log(`__[trEventStream]__: entered ${event.TaskQueueName} queue - forwarding call!`);
      task = event.TaskSid;
      workspace = event.WorkspaceSid;
      const ta = JSON.parse(event.TaskAttributes);
      const callSid = ta.call_sid;
      let url = `https://${context.DOMAIN_NAME}/forwardCall`;
      // redirect call to forwardCall function
      client.calls(callSid).update({
        method: 'POST',
        url: encodeURI(url),
      }).then(() => {
        console.log(`__[trEventStream]__: [SUCCESS] ~> Task with id ${task} forwarded to external DID`);
        // change task status to canceled so it doesn't appear in Flex or show up as a pending task
        client.taskrouter.workspaces(workspace)
          .tasks(task)
          .update({
            assignmentStatus: 'canceled',
            reason: 'Call forwarded'
          })
          .then(task => {
            console.log(`__[trEventStream]__: [SUCCESS] ~> Task canceled`);
            return callback(null, response);
          }).catch(err => {
            console.log(`__[trEventStream]__: [ERROR] ~> Task not marked complete: `, err);
            // doesn't warrant a 500 response since the call was still forwarded :)
            return callback(null, response);
          });
      }).catch(err => {
        console.log(`__[trEventStream]__: [ERROR] ~> Task failed to forward to external DID: `, err);
        response.setStatusCode(500);
        return callback(err, response);
      });
      break;
    default:
      return callback(null, response);
  }
};
callForward.js:
exports.handler = function(context, event, callback) {
  console.log(`forwarding call`);
  // set up the variables that this Function will use to forward a phone call using TwiML
  // REQUIRED - you must set this
  let phoneNumber = event.PhoneNumber || context.NUMBER;
  // OPTIONAL
  let callerId = event.CallerId || null;
  // OPTIONAL
  let timeout = event.Timeout || null;
  // OPTIONAL
  let allowedCallers = event.allowedCallers || [];
  let allowedThrough = true;
  if (allowedCallers.length > 0) {
    if (allowedCallers.indexOf(event.From) === -1) {
      allowedThrough = false;
    }
  }
  // generate the TwiML to tell Twilio how to forward this call
  let twiml = new Twilio.twiml.VoiceResponse();
  let dialParams = {};
  if (callerId) {
    dialParams.callerId = callerId;
  }
  if (timeout) {
    dialParams.timeout = timeout;
  }
  if (allowedThrough) {
    twiml.dial(dialParams, phoneNumber); // making the call :)
  } else {
    twiml.say('Sorry, you are calling from a restricted number. Good bye.');
  }
  // return the TwiML
  callback(null, twiml);
};
Any kind of help and/or guidance will be appreciated.
Twilio developer evangelist here.
When you redirect a call away from a task, the task is cancelled automatically with the reason "redirected", so you don't need to cancel it yourself.
Your code was occasionally failing to update the task because of a race condition between your code and the task being cancelled by Twilio.
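If you do still want to stamp the task with your custom reason for Flex Insights, one way to soften the race is to check the task's current status before attempting the cancel. This is only a sketch under that assumption, not the evangelist's code; it reuses the client, workspace and task variables from trEventListener.js above.
// Hypothetical guard: only cancel if Twilio hasn't already cancelled the task.
client.taskrouter.workspaces(workspace)
  .tasks(task)
  .fetch()
  .then(t => {
    if (t.assignmentStatus === 'pending' || t.assignmentStatus === 'reserved') {
      // Still cancellable, so apply the custom reason.
      // Note: a small race window remains between fetch() and update().
      return client.taskrouter.workspaces(workspace)
        .tasks(task)
        .update({ assignmentStatus: 'canceled', reason: 'Call forwarded' });
    }
    console.log(`Task already ${t.assignmentStatus} - skipping cancel`);
  })
  .catch(err => console.log('Could not fetch/cancel task: ', err));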
I'm trying to create an async task that will not block the request. The user makes the request, the task starts, and the controller renders "Job is running..."; this is to avoid blocking the request while waiting for the task to complete.
Once the task finishes, it should execute the onComplete closure and do something with the result of the task (for example, call a service that sends a mail to a user). Instead, I get this error:
| Error 2014-09-16 17:38:56,721 [Actor Thread 3] ERROR gpars.LoggingPoolFactory - Async execution error: null
The code is the following:
package testasync

import static grails.async.Promises.*

class TestController {

    def index() {
        // Create the job
        def job1 = task {
            println 'Waiting 10 seconds'
            Thread.sleep(10000)
            return 'Im done'
        }
        // On error
        job1.onError { Throwable err ->
            println "An error occurred ${err.message}"
        }
        // On success
        job1.onComplete { result ->
            println "Promise returned $result"
        }
        render 'Job is running...'
    }
}
Complete stacktrace:
| Error 2014-09-17 10:35:24,522 [Actor Thread 3] ERROR gpars.LoggingPoolFactory - Async execution error: null
Message: null
Line | Method
->> 72 | doCall in org.grails.async.factory.gpars.GparsPromise$_onError_closure2
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 62 | run in groovyx.gpars.dataflow.DataCallback$1
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run in java.util.concurrent.ThreadPoolExecutor$Worker
^ 745 | run . . . in java.lang.Thread
I ended up using the executor framework with the grails-executor plugin. I uploaded a very basic example here: https://github.com/agusl88/grails-async-job-queuqe
That code uses a "custom" version of the grails-executor plugin; I merged some PRs from the plugin repo and packaged it as a jar just for testing purposes. The plugin's repo is here: https://github.com/basejump/grails-executor
I was able to get rid of this exception in a controller by removing the onComplete and onError calls. I guess the exception happens because the parent thread ended when you called render.
So your:
Promise p = task {
    complexAsyncMethodCall(); // (1) do stuff
}
.onComplete { result -> println result } // (2) on success
.onError { Throwable t -> System.err.println("Error: " + t) } // (3) on error
Becomes:
Promise p = task {
    try {
        def result = complexAsyncMethodCall(); // (1) do stuff
        println result // (2) on success
    } catch (Throwable t) {
        System.err.println("Error: " + t) // (3) on error
    }
}
This adds coupling between your work (1) and the result processing (2 and 3) but you could overcome this by writing your own Closure wrapper that takes extra Closures as arguments. Something like this:
// may not work! written off the top of my head
class ProcessableClosure<V> extends Closure<V> {
    Closure<V> work
    Closure<?> onError
    Closure<?> onComplete

    // Closure has no no-arg constructor, so provide one for the map constructor
    ProcessableClosure() { super(null) }

    @Override
    public V call(Object... args) {
        try {
            V result = work.call(args)   // (1) do stuff
            onComplete.call(result)      // (2) on complete
            return result
        } catch (Exception e) {
            onError.call(e)              // (3) on error
        }
    }
}
That makes your code more readable:
Closure doWork = { complexAsyncMethodCall() } // (1) do stuff
Closure printResult = { println it } // (2) on complete
Closure logError = { Throwable t -> log.error t } // (3) on error
Closure runEverythingNicely = new ProcessableClosure(work: doWork, onComplete: printResult, onError: logError)
Promise p = task { runEverythingNicely() }
When creating a Promise async task inside a controller, you actually have to wait for the result by calling the get() method on the task, or the onError and onComplete methods will never be called. Adding
job1.get()
before your call to render will resolve the issue.
In my case, just returning a promise worked.
MyService.groovy
import static grails.async.Promises.*

def getAsync() {
    Promise p = task {
        // Long running task
        println 'John doe started digging a hole here.'
        Thread.sleep(2000)
        println 'John doe working......'
        return 'Kudos John Doe!'
    }
    p.onError { Throwable err ->
        println "Poor John" + err
    }
    p.onComplete { result ->
        println "Congrats." + result
    }
    println 'John Doe is doing something here.'
    return p
}
In my Grails service, there is a part of a method I wish to run asynchronously.
Following the docs for 2.3.x (http://grails.org/doc/2.3.0.M1/guide/async.html), I do:
public class MyService {
    public void myMethod() {
        Promise p = task {
            // Long running task
        }
        p.onError { Throwable err ->
            println "An error occurred ${err.message}"
        }
        p.onComplete { result ->
            println "Promise returned $result"
        }
        // block until the result is available
        def result = p.get()
    }
}
However, I want to execute mine without any blocking. The p.get() method blocks. How do I execute the promise without any sort of blocking? I don't care if myMethod() returns; it is kind of a fire-and-forget method.
So, according to the documentation, if you don't call .get() or .waitAll() but rather just make use of onComplete, you can run your task without blocking the current thread.
Here is a very silly example that I worked up in the console as a proof of concept.
import static grails.async.Promises.*

def p = task {
    // Long running task
    println 'Off to do something now ...'
    Thread.sleep(5000)
    println '... that took 5 seconds'
    return 'the result'
}
p.onError { Throwable err ->
    println "An error occurred ${err.message}"
}
p.onComplete { result ->
    println "Promise returned $result"
}
println 'Just to show some output, and prove the task is running in the background.'
Running the above example gives you the following output:
Off to do something now ...
Just to show some output, and prove the task is running in the background.
... that took 5 seconds
Promise returned the result
I have a bluebird promise which can be cancelled. When cancelled, I have to do some work to neatly abort the running task. A task can be cancelled in two ways: via promise.cancel() or promise.timeout(delay).
In order to be able to neatly abort the task when cancelled or timed out, I have to catch CancellationErrors and TimeoutErrors. Catching a CancellationError works, but for some reason I can't catch a TimeoutError:
var Promise = require('bluebird');

function task() {
    return new Promise(function (resolve, reject) {
        // ... a long running task ...
    })
    .cancellable()
    .catch(Promise.CancellationError, function(error) {
        // ... must neatly abort the task ...
        console.log('Task cancelled', error);
    })
    .catch(Promise.TimeoutError, function(error) {
        // ... must neatly abort the task ...
        console.log('Task timed out', error);
    });
}
var promise = task();
//promise.cancel(); // this works fine, CancellationError is caught
promise.timeout(1000); // PROBLEM: this TimeoutError isn't caught!
How can I catch the TimeoutError inside the task, given that the timeout is only set later, from outside?
When you cancel a promise, the cancellation first bubbles up to its parents, as long as parents are found that are still cancellable. This is very different from normal rejection, which only propagates to children.
.timeout() does a simple, normal rejection; it doesn't do cancellation, so that's why it's not possible to do it like this.
You can either cancel after a delay:
var promise = task();
Promise.delay(1000).then(function() { promise.cancel(); });
or set the timeout in the task function:
var promise = task(1000);

function task(timeout) {
    return new Promise(function (resolve, reject) {
        // ... a long running task ...
    })
    .timeout(timeout)
    .cancellable()
    .catch(Promise.CancellationError, function(error) {
        // ... must neatly abort the task ...
        console.log('Task cancelled', error);
    })
    .catch(Promise.TimeoutError, function(error) {
        // ... must neatly abort the task ...
        console.log('Task timed out', error);
    });
}
You can also create a method like:
Promise.prototype.cancelAfter = function(ms) {
    var self = this;
    setTimeout(function() {
        self.cancel();
    }, ms);
    return this;
};
Then
function task() {
    return new Promise(function (resolve, reject) {
        // ... a long running task ...
    })
    .cancellable()
    .catch(Promise.CancellationError, function(error) {
        // ... must neatly abort the task ...
        console.log('Task cancelled', error);
    });
}
var promise = task();
// Since it's a cancellation, it will propagate upwards so you can
// clean up in the task function
promise.cancelAfter(1000);