Limiting concurrent outbound API-triggered calls in Twilio Studio?

We are building an outbound IVR that receives a payload in bulk via REST API and places an outbound call to each recipient. We are attempting to limit the number of concurrent flow executions (calls placed at a time) to prevent transfers by the called parties from flooding the shared inbound queue. Is there any way to accomplish this within Twilio itself?
If my assumptions are correct, the limiting factors when placing outbound API calls via Twilio Studio are the inbound API queue, the number of concurrent flow executions, and the number of Calls Per Second (CPS).
My understanding is that queued API requests are executed 30 at a time on a FIFO basis: as one execution completes, another begins.
Each execution can then place a call at no more than 1 CPS, so it takes 30 seconds for all 30 calls to be sent.
Is this correct?
Is there any means of throttling these executions, or outbound calls?
A CPS limitation would be ideal; however, the minimum is 1 CPS, which is still 3,600 calls per hour, far too many for this call center to handle. Can this be lowered to less than 1 CPS?

Your assumptions are correct, as far as I can tell, but within those limits Twilio is built to place the calls you want to make as fast as it can.
If you don't want Twilio to place the outbound calls yet, you need to hold off on sending the API request that starts each call until you are ready. You can do this by knowing the concurrency your own application can handle and by subscribing to call status webhooks, so that you know when a call is complete and can then place a new outbound call.
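As a minimal sketch of that approach, using the plain Calls API rather than Studio executions for simplicity, and assuming the twilio-ruby helper library; PENDING_NUMBERS, the URLs, and the environment variables are all placeholders you would supply:

require 'twilio-ruby'

MAX_CONCURRENT = 5  # tune to what your inbound queue can absorb
CLIENT = Twilio::REST::Client.new(ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'])

# PENDING_NUMBERS is a hypothetical list you build from the bulk payload.
def place_next_call(pending)
  to = pending.shift or return
  CLIENT.calls.create(
    to: to,
    from: ENV['TWILIO_NUMBER'],
    url: 'https://example.com/ivr',                      # your IVR entry point
    status_callback: 'https://example.com/call_status',  # fires when the call ends
    status_callback_event: ['completed']
  )
end

# Seed the first batch, then call place_next_call again from your
# /call_status webhook handler each time Twilio reports a call completed.
pending = PENDING_NUMBERS.dup
MAX_CONCURRENT.times { place_next_call(pending) }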

Related

What are various techniques to control rate of API based webhooks sent (From Server to Client)

I have a system as shown in the diagram. Multiple clients send requests to the web pod, and the web pod pushes each request onto a shared queue.
Messages are dequeued at the rate at which the process_task_worker can process them.
Once processing is done, the result is pushed onto a shared queue. These messages are then dequeued by the webhook_worker, which fires a webhook (API call) to the respective client.
I would like to add a rate limiter to limit the number of webhooks sent by the webhook_worker per client. The rate-limiting factor could be a generic count across all clients.
But if the rate limit is breached, the message should be re-enqueued rather than discarded.
What techniques can I employ to solve this? And what is the term for this approach, given that we re-enqueue rather than discard?
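A minimal sketch of that re-enqueue-on-breach pattern, assuming an in-process token bucket per client; fire_webhook, message.client_id, and queue.enqueue are hypothetical stand-ins for your own objects:

# Token bucket: refills at a fixed rate, allows bursts up to a cap.
class TokenBucket
  def initialize(rate_per_sec, burst)
    @rate, @burst = rate_per_sec, burst
    @tokens, @last = burst.to_f, Time.now
  end

  def allow?
    now = Time.now
    @tokens = [@tokens + (now - @last) * @rate, @burst].min
    @last = now
    return false if @tokens < 1
    @tokens -= 1
    true
  end
end

# One bucket per client.
BUCKETS = Hash.new { |h, client| h[client] = TokenBucket.new(1.0, 5) }

def handle(message, queue)
  if BUCKETS[message.client_id].allow?
    fire_webhook(message)             # the outbound API call to the client
  else
    queue.enqueue(message, delay: 5)  # re-enqueue with a delay, don't discard
  end
end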

Twilio function metrics

According to the docs:
Functions are limited to 30 concurrent invocations - meaning that if you have more than 30 Functions being invoked at the same time, you will begin to see new Function invocations return with a 429 status code. To keep below the threshold optimize Functions to return as fast as possible - avoid artificial timeouts and use asynchronous calls to external systems rather than waiting on a large number of API calls to complete.
-- https://www.twilio.com/docs/runtime/functions/faq
How can I obtain execution-time metrics for my Twilio Functions? To ensure my Functions stay under the 30-concurrent-invocations limit, I need to estimate the number of concurrent invocations from the execution time and the number of requests. I need to know the execution times (and ideally the number of invocations, but I can get that elsewhere). Does Twilio provide any metrics for Twilio Functions?
Twilio developer evangelist here.
I don't believe that information is available either in the response headers or via the API right now.
If you are concerned about breaching those limits, I would recommend raising this with your account executive at Twilio. If you don't have an account executive, that sort of usage suggests you should, so get in touch with sales or, failing that, support.
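In the meantime, a back-of-the-envelope estimate is possible with Little's Law (average concurrency = request rate × average execution time). A trivial sketch, where both input numbers are assumptions you would measure yourself:

# Little's Law: L = λ * W (concurrency = arrival rate * service time).
requests_per_second = 10.0  # assumed, taken from your own request logs
avg_execution_secs  = 2.0   # assumed, measured by timing your Function code
concurrent = requests_per_second * avg_execution_secs
puts concurrent             # => 20.0, safely under the limit of 30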

How to ring multiple workers in Twilio Flex

In Twilio Flex, when a call comes in, I want all workers in a given queue to ring.
They should all be able to pick up the call; the first one to do so is connected to the customer while the call disappears for the others.
For now, TaskRouter seems to select a single eligible worker to send the call to.
How can I get TaskRouter to simulring (ring all eligible workers at once) instead of ringing them one by one?
Alan's comment pointed me in the right direction.
To have multiple workers ring at the same time, just increase the MAX RESERVED WORKERS property in the TaskQueue.
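If you would rather set it via the API than in the console, a sketch with the twilio-ruby helper library; the workspace and queue SIDs below are placeholders:

require 'twilio-ruby'

client = Twilio::REST::Client.new(ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'])

# Reserve (ring) up to 5 eligible workers at once; the first to accept
# the reservation gets the call and the other reservations are canceled.
client.taskrouter.v1
      .workspaces('WSxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
      .task_queues('WQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
      .update(max_reserved_workers: 5)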

How to control Rails app requests to an external API?

I am building a Rails 4 (Postgres) app on the back of a third-party API. For now, the third-party API allows 100 requests per minute.
The round trip for the user takes about 2,000 ms, so I want to move this work into a background worker.
I considered using Sidekiq, but with each new user and each new background thread comes the possibility that I'll exceed my API quota.
What is the best way to control my application's interaction with the third-party API? Do I need a single serial queue to enforce the rate limit effectively?
I assume you'll get an error (an exception, say) when you exceed the 100-request limit. If all API requests are made in a Sidekiq worker, the worker will automatically retry on error. The first retry happens quite soon, but you can override the retry delay with something like:
# Inside the worker class: wait 60 to 75 seconds before each retry
sidekiq_retry_in do
  rand(60..75)
end
In this way each retry will happen 60 to 75 seconds after the error.
You can read more about Sidekiq's error handling here: https://github.com/mperham/sidekiq/wiki/Error-Handling

What would be the right approach: SQS or SNS?

I am building a Rails application that integrates Amazon's cloud services.
I have explored Amazon's SNS service, which delivers each notification to every subscriber of a topic; that's not what I want. I want to notify only a particular subscriber.
For example, if I have 5 subscribers on one topic, a given notification should go to only one particular subscriber.
I have also explored Amazon's SQS, for which I would have to write a poller that monitors the queue for messages. SQS also has a lock mechanism, but since it is distributed there is a chance of receiving the same message again from another copy of the queue while it is being processed.
I want to know what the best approach would be.
SQS sounds like what you want.
You can run multiple "worker" processes that compete over messages in the queue. Each message is only consumed once. The logic behind the "lock" / timeout that you mention is as follows: if one of your workers were to die after downloading a message, but before processing it, then you want that message to eventually time out and be re-downloaded for processing on another node.
Yes, SQS is built on a polling model. For example, I have a number of use cases in which I use a minutely cron job to poll for new messages in the queue and take action on any messages found. This pattern is stupid simple to build and works wonders for a bunch of use cases -- a handy little "client" script that pushes a message into the queue, and the cron activated script that will process that message within a minute or so.
If your message pattern is extremely sparse -- eg, only a few messages a day -- it may seem wasteful to poll constantly while the queue is empty. It hardly matters.
My original calculation was that a minutely cron job would cost $0.04 (now $0.02) per month. Since then, SQS added a "Long-Polling" feature that lets you achieve sub-second latency on processing new messages by sending 1 "long-poll" message every 20 seconds to poll an idle queue. Plus, they dropped the price 50%. So per month, that's 131k messages (~$0.06), a little bit more expensive, but with near realtime request processing.
Keep in mind that a minutely cron job I described only costs ~$0.04 / month in request load (30d*24h*60m * 1c / 10k msgs). So at a minutely clip, cost shouldn't really be a concern here. Even polling every second, the price rises only to $2.59 / mo, not exactly a bank buster.
However, it is possible to avoid frequent polling using a webservice that takes an SNS HTTP message. Such an architecture would work as follows: client pushes message to SNS, which pushes message to SQS and routes an HTTP request to your webservice, triggering it to drain the queue. You'd still want to poll the queue hourly or daily, just in case an HTTP request was dropped. In the end though, I'm not sure I can think of any scenario which really justifies such complexity. I'd much rather pay $0.04 a month to have a dirt simple cron job polling my queue.
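For completeness, a minimal long-polling consumer like the one described above, using the aws-sdk-sqs gem; the region, queue URL, and process handler are placeholders:

require 'aws-sdk-sqs'

sqs = Aws::SQS::Client.new(region: 'us-east-1')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

loop do
  # Long poll: the call blocks up to 20 seconds waiting for a message,
  # so an idle queue costs only ~3 requests per minute.
  resp = sqs.receive_message(queue_url: queue_url, wait_time_seconds: 20,
                             max_number_of_messages: 10)
  resp.messages.each do |msg|
    process(msg.body)  # your handler (hypothetical)
    # Delete only after successful processing; otherwise the visibility
    # timeout expires and another worker picks the message up again.
    sqs.delete_message(queue_url: queue_url, receipt_handle: msg.receipt_handle)
  end
end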
