Twilio Enqueue / Dequeue (or conference) multiple callers

I am building an "overflow" queue of sorts for a call center. I'll spare you the logical reason and pitfalls of the current call center, but this is the task at hand.
I've taken the following steps:
(1) Create a flow in Twilio Studio (to manage some inputs, etc., as well as enqueue the caller)
(2) Handle reservations by conferencing with the outbound (call center) number.
There are two apparent issues:
(1) When a second call goes into the queue, it comes in without a reservation (because my one worker is on the "first" call?)
(2) I can essentially route folks in the queue to the call center continually until somebody picks up... however, with TaskRouter, it matches the one worker and the other calls are just stuck in the queue.
Ideally, the final functionality would be that anybody in the queue hears hold music until they are connected to the call center (which has significant capacity for concurrent calls). I might be overthinking it, but if, for example, 50 calls were in the queue and there were only one worker, what happens to the other 49 calls while the "worker" accepts the reservation? Would I need to create 50 workers? That seems like a bulky workaround, but there has to be a solution from all of you Twilio wizzes out there!
I am handling the assignment callback via a Flask app, so it is able to handle the reservation and conference it... however, it can only do this with the first call (and worker) while the other calls stack up without a reservation.
Any information is appreciated!

This was actually fairly easy once I was able to grasp the concept (for anybody who is interested).
For this specific application, TaskRouter and the 'conference' instruction (returned from the assignment callback) handle connecting the inbound call to the outbound call center: the conference is created only if the call center actually answers. Otherwise, TaskRouter finds the next available worker (i.e. call center number) and attempts the same process.
Here is a simple conference instruction for your callback URL (in response to a new task):
ret = '{"instruction": "conference", "timeout": "300"}'
resp = Response(response=ret, status=200, mimetype='application/json')
From there, you can programmatically manage workers, tasks, attributes, etc. The conference instruction handles a lot of the heavy lifting as far as ensuring the connection between the inbound and outbound calls, as well as the corner cases.
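For anyone who wants more context, a fuller version of that callback might look like the following minimal Flask sketch (the route name is made up, and a real app should also validate Twilio's request signature):

from flask import Flask, Response
import json

app = Flask(__name__)

@app.route('/assignment', methods=['POST'])
def assignment_callback():
    # Return the "conference" instruction so TaskRouter bridges the
    # queued caller and the accepted reservation into a conference.
    instruction = {'instruction': 'conference', 'timeout': '300'}
    return Response(response=json.dumps(instruction),
                    status=200,
                    mimetype='application/json')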

Related

Delaying or sorting REST API triggered outbound IVR calls outside of allowed TCPA time?

We will be sending requests for outbound calls via REST API to Twilio Studio in a batch each morning. However, the order in which they are sent is arbitrary, and some called parties will be in time zones in which calls should not be made at that time (e.g. calling PST numbers at 8:00 AM EST). How can we deal with this? I could put in a split based on the state, which would be known. However, then what? Could I include a loop based on a time check? If so, it is conceivable that the number of called parties waiting for their time zone to become eligible would exceed the number of concurrent outbound calls that are allowed. Would this then prevent normally eligible calls from being placed, or do flow executions not count towards this limit unless a call has already been placed?
I had thought about storing the queued requests in Sync, and executing them based on the State criteria in conjunction with a time check function. However, I'm not sure if this would even work.
Is there some means of sorting, or otherwise selecting queued API requests based on a criteria?
Any help would be appreciated. Thank you!
The decision on when to place the call would be determined outside of Twilio.
You would first identify which time zone the customers are in and group them into Pacific, Mountain, Central, and Eastern, say using the address within your CRM, which is safer than using their area code.
Then, once the time is appropriate for that time zone, you would make a call to the Twilio Studio Executions endpoint to place each of the calls.
You can monitor queue_time to determine how many milliseconds a call remains in the queue before being placed, in case you need to increase your CPS (calls per second) or slow down your calling, and to avoid abnormally large queues resulting in calls placed outside allowed business hours.
So, in short, the queueing logic is handled on your side rather than on Twilio's side.
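For illustration, kicking off those executions with the Twilio Python helper library could look roughly like the sketch below (the credentials, flow SID, and phone numbers are placeholders):

from twilio.rest import Client

client = Client('ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'your_auth_token')
FLOW_SID = 'FWXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

def place_calls(eligible_numbers):
    # eligible_numbers: contacts whose local time is inside the window
    for number in eligible_numbers:
        client.studio.v2.flows(FLOW_SID).executions.create(
            to=number,
            from_='+15005550006')  # your Twilio caller ID (placeholder)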
Because of our specific use case, it is desirable to have the functionality self-contained. We prioritize inbound calls and the call volume varies, so the number of concurrent calls placed by the outbound IVR is very low. This means that a call can be queued for an extended period of time, and our allowed calling window may expire. Therefore, we must make this check immediately prior to making the attempt.
I was able to resolve this with a function which checks the current time via a new Date().toISOString() and adds or subtracts the offset based on the time zone of the called party.
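For anyone doing the same check in Python rather than a JavaScript function, a minimal sketch might look like this (assuming Python 3.9+ for zoneinfo, a hypothetical state-to-timezone map, and the usual 8:00 AM-9:00 PM local TCPA window):

from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical mapping from state to IANA time zone name
STATE_TZ = {'CA': 'America/Los_Angeles', 'NY': 'America/New_York'}

def within_calling_window(state, start_hour=8, end_hour=21):
    # Run this immediately before placing each call
    local_now = datetime.now(ZoneInfo(STATE_TZ[state]))
    return start_hour <= local_now.hour < end_hour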

How to implement a multithreaded infinite-task with subthreads in Rails

Inside my Rails 4 application I need to make API calls to a web service where I can ask for stock quotes. I can ask for 1000 stocks in one request, and make four requests at a time, which is the throttle limit.
The workflow goes like this:
A user makes a request to my app and wants to get quotes for 12000 stocks.
So I chunk it into twelve requests and put them in a Queue.
At application start, I start a thread running a loop that is supposed to look at the queue, and since I am allowed to make concurrent requests, I'd like to make 4 requests in parallel.
I get stuck in several ways. First of all, I need to take into consideration that I can get multiple requests of 12000 stocks at a time, since different users can trigger the same request.
Second, I'll use the Thin web server, which is multithreaded. So I guess I have to use a Mutex.
How can this be achieved?
Ruby's Queue is already a thread-safe data structure, so you don't need a mutex to work with it.
You'd just start 4 threads at the start of your app, each of which polls the queue for work, does the work, and then does something (which is up to you) to notify the DB and/or user that the work is complete. The actual workers will be something like:
work_queue = Queue.new

4.times do
  Thread.new do
    loop do
      job = work_queue.pop  # blocks until a job is available
      # Do some work with the job
    end
  end
end
Queue#pop blocks until data is available, so you can just fire off those threads: the first thread waiting for data will get a job when it's pushed in, the next thread will get the next job, and so on. While no worker threads are available, jobs simply accumulate in the queue.
What you actually do with the output of the job is probably the more interesting question here, but you haven't defined what should happen when the quotes are retrieved.
You might also look at Sidekiq.

How to grab a list of only active calls and active queues from Twilio

How do I grab only the active queues when a call comes in to my Twilio number? Is there any way to filter only the latest queues and update the order of those queues?
Twilio evangelist here.
I guess this depends on what you mean by "active". If you mean queues that have calls waiting in them, there is not currently a way to have Twilio give you just that list.
You can ask Twilio to give you a list of Queues and then your application can filter that list using the CurrentSize parameter.
If you want to dequeue callers in an order other than the default FIFO behavior you get when you <Dial> a call into a <Queue>, you can use the REST API to dequeue a specific call:
https://www.twilio.com/docs/api/rest/member#instance-post-example-2
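For illustration, listing, filtering, and dequeuing with the Python helper library might look roughly like this (the credentials and TwiML URL are placeholders):

from twilio.rest import Client

client = Client('ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'your_auth_token')

# List all queues, then keep only those with callers waiting
active = [q for q in client.queues.list() if q.current_size > 0]

# Dequeue a specific member ('Front' targets the head of the queue)
# by redirecting them to new TwiML
client.queues(active[0].sid).members('Front').update(
    url='https://example.com/dequeue-twiml',
    method='POST')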
If that still does not give you enough control for your scenario, you can always fall back to using a simple <Conference> instead of a <Queue>. In that case you would add calls to a conference in a muted state, and your application would be responsible for tracking which calls are in the conference, how long they have waited, and in what order they arrived.
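If you go that route, a muted <Conference> like the one described could be generated with the Python helper library along these lines (the conference name is arbitrary):

from twilio.twiml.voice_response import VoiceResponse, Dial

response = VoiceResponse()
dial = Dial()
# Callers join muted; your app tracks who is waiting and in what order
dial.conference('waiting-room', muted=True)
response.append(dial)
print(str(response))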
Hope that helps. If I've misunderstood what you mean by Active, please let me know.

Should I convert my action method to an async action method?

I have a web site where user can upload a PDF and convert it to WORD doc.
It works nicely, but sometimes (5-6 times per hour) users have to wait longer than usual for the conversion to take place...
I use ASP.NET MVC and the flow is:
- User uploads file -> get the stream and convert it to Word -> save the Word file as a temp file -> return the URL to the user
I am not sure whether I have to convert this flow to be asynchronous. My flow is sequential now, but I have about 3-5 requests per second on a dual-core CPU with 4 GB of RAM.
As far as I know, maxConcurrentRequestsPerCPU is 5000, and the default value of Threads Per Processor Limit is 25, so these default settings should be more than fine, right?
Then why does my web app still have "waits" sometimes? Are there any IIS settings I need to change from their defaults, or should I just go and make my synchronous conversion method async?
PS: The conversion itself takes between 1 second and 40-50 seconds depending on the PDF file size.
UPDATE: Basically, what is not very clear to me is this: if a user uploads a file and the conversion is long, shouldn't only the current request "suffer" because of this? The next request is independent, makes another CPU call on a different thread, so there should be no wait there, right?
There are a couple of things that must be defined clearly here. An async(hronous) method and an asynchronous flow are not the same thing, at least as far as I understand.
An asynchronous method (using Task, usually also leveraging the async/await keywords) will work in the following way:
The execution starts on thread t1 until it reaches an await
The (potentially) long operation will not take place on thread t1 - sometimes not even on an app thread at all, leveraging IOCP (I/O completion ports).
Thread t1 is free and released back to the thread pool and is ready to service other requests if needed
When the (potentially) long operation returns, a thread is taken from the thread pool (it could even be the same t1 or, more probably, another one) and the rest of the code execution resumes from the last await encountered
The rest of the code executes
There's a couple of things to note here:
a. The client is blocked during the whole process. The eventual switching of threads and so on happens only on the server.
b. This approach is mainly designed to alleviate an unwanted condition called 'thread starvation'. It is not meant to reduce the total client waiting time, and it usually doesn't speed up the process.
As far as I understand, an asynchronous flow would mean, at least in this case, that after the user requests a document conversion, the client (i.e. the client's browser) would quickly receive a response informing them that this potentially long process has started on the server, that they should be patient, and that the current response page might provide progress feedback.
In your case I recommend the second approach because the first approach would not help at all.
Of course this will not be easy. You need to emulate a queue, you need a processing agent, and you need an eviction policy (most probably enforced by the same agent, if you don't want a second one).
This would work along the following lines:
a. The end user submits a file, the web server receives it
b. The web server places it in the queue and receives a job number
c. The web server returns the user a response with the job number (let's say an HTML page with a polling mechanism that would periodically receive progress from the server)
d. The agent would start processing the document when it gets the chance (i.e. when it finishes other work) and update its status in a common place for the web server to pick up this information
e. The web server would receive calls from the HTML response asking for the status of the job and would find out that the job is complete and offer a download link or start downloading it directly.
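This pattern is stack-agnostic; since other answers on this page use Flask, here is a rough, illustrative Python version of steps a-e (the routes and the conversion stub are hypothetical, and a real implementation would persist job state rather than keep it in memory):

import queue
import threading
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}                  # job id -> 'queued' | 'processing' | 'done'
work_queue = queue.Queue()

def convert_to_word(payload):
    pass  # placeholder for the actual PDF-to-Word conversion

def worker():
    # The processing agent: pull jobs and convert them one at a time
    while True:
        job_id, payload = work_queue.get()
        jobs[job_id] = 'processing'
        convert_to_word(payload)
        jobs[job_id] = 'done'

threading.Thread(target=worker, daemon=True).start()

@app.route('/convert', methods=['POST'])
def submit():
    # Steps a-c: accept the upload, queue it, return a job number
    job_id = str(uuid.uuid4())
    jobs[job_id] = 'queued'
    work_queue.put((job_id, request.data))
    return jsonify(job=job_id)

@app.route('/status/<job_id>')
def status(job_id):
    # Step e: the response page polls this until the job is done
    return jsonify(status=jobs.get(job_id, 'unknown'))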
This can be refined in some ways:
instead of the client polling the server, WebSockets or long polling could be used (SignalR, for example, covers both)
many processing agents could be used instead of one if the hardware configuration makes sense
the queue can be implemented with a simple RDBMS; Remus Rușanu has a nice article about this

Adobe Actionscript - multiple service request processing

Does anyone know of any good resources that fully explain how functions and results will fire in an Adobe AIR app where multiple things are happening at once?
As a simple test, I've created a single service whose URL I just keep changing, then issuing a send(). It seems that no matter how many send() calls I put in, all of them get executed before the 'result' event listener function gets called for the first time.
Is this how it works? I.e., the current function gets fully executed, and the async returns queue up to be processed after AIR has finished what it's currently doing.
Likewise, if the user does something while all this is going on, I presume their request goes to the back of the queue as well?
All that makes sense, but I'm just wondering if it's documented anywhere.
While I'm at it, is it recommended practice to reuse the same HTTPService this way, or is it better to create one for each concurrent transaction? Just because it works doesn't mean it's the right thing to do...
I'm not aware of any documentation that explains this, but I can confirm that code blocks get executed before async calls are made, or at least before their results are processed. If it didn't work that way, you would, for instance, not always be able to attach a responder to a service call's token, because the result might already have been processed.
var token:AsyncToken = myService.someMethod();
token.addResponder(new Responder(resultHandler, faultHandler));
Developers coming from other platforms find this strange, as they would expect the responder to be assigned too late.
So while I don't have an official explanation about the technical details inside the Flash Player, I can assure that it works this way.
If the user does something while a call is pending, the new request will indeed just be added as a new asynchronous call. Note that we can't really speak of a queue, as there is no guarantee that the response of the first call comes in before the response of the second call. That depends on how much time the actual requests take.
You can perfectly reuse an HTTPService instance.
PS: Based on this, we were able to build the Operation API in Spring ActionScript. It is basically an API that allows you to execute asynchronous processes in a uniform way, without having to worry about the details of the actual async process.
The following code executes an async process and attaches a handler to it. This is also something that puzzles many developers at first, for reasons similar to the AsyncToken situation.
var operation:IOperation = doSomeOperation();
operation.addCompleteListener(aCompleteHandler);
operation.addErrorListener(anErrorHandler);
