Twilio Studio Flow - Adding a Retry/Delay - twilio

I'm trying to figure out how I can implement a retry policy in a Twilio Studio Flow. I see that they have an example, but it only supports a delay of no more than 10 seconds.
I want something I can use to retry when my webhooks service is down. I did set up the sample from the Twilio docs, but it only seems to work for delays of 10 seconds or less, whereas I need it to pause for an hour or two. Say the HTTP Post step fails because the webhooks service is offline; I want the flow to pause for an hour and try again, then pause for two hours, then three, then four, and so on. The point being, I don't want to lose the user's response.
What I am trying to do is avoid losing any of the user responses from a survey if my webhooks application goes down. We saw this happen in production for a couple of hours and we lost survey responses from 200 users.
If this is not possible, is there a way I can reach back out to the Twilio logs and get access to the responses that failed while the webhooks service was down? I recall running into something that lets you pull back the logs, which could then be used to identify the ones that failed.

This kind of logic isn't really built into Studio. Ten-second waits are typically the most you will see, because both Twilio Functions and the HTTP Request widget time out at that point.
If you wish to include this kind of wait, then you will need some sort of workaround where you go into a Send & Wait For Reply widget (with some additional logic so that it ignores responses from your customers) and set its timeout to the amount of time you want to wait. You can then transition to the webhook request again and re-attempt.
Alternatively, you can create a utility which uses the Execution resource to find all the failed executions for a given time period, so you can choose how best to move forward.
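As a rough illustration of that last option, here is a minimal sketch using the Twilio Python helper library. It assumes you know the Flow SID and that the HTTP Request widget's failure transition is literally named "failed"; adjust the names to match your flow.

```python
# Sketch: list Studio executions in a time window and flag the ones that hit a
# "failed" transition. The Flow SID, dates, and transition name are placeholders.
import os
from datetime import datetime, timezone

from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
FLOW_SID = "FWXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder

window_start = datetime(2024, 1, 1, 15, 0, tzinfo=timezone.utc)
window_end = datetime(2024, 1, 1, 18, 0, tzinfo=timezone.utc)

executions = client.studio.v2.flows(FLOW_SID).executions.list(
    date_created_from=window_start,
    date_created_to=window_end,
)

for execution in executions:
    steps = client.studio.v2.flows(FLOW_SID).executions(execution.sid).steps.list()
    # Assumption: the HTTP Request widget's failure path is a transition named "failed".
    if any(step.transitioned_to == "failed" for step in steps):
        print("Possible lost response:", execution.sid, execution.contact_channel_address)
```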

Related

Handling 3 call queues in twilio in an elegant way

I'd love some advice on my Twilio setup for a problem I'm trying to solve.
Overview
A client can call our Twilio number and have one of three cases to handle:
- is our client and has a question - should be transferred to the CC queue (2 ppl),
- wants to buy our services - should be transferred to the Sales queue (7 ppl),
- has some other case - should be transferred to a different queue, let's call it Other (1 person)
Current solution
The client calls:
- welcome message
- we gather his digit input
- enqueue a call to the appropriate queue
- assign a task to an available worker with the conference instruction
Problem with the current solution
If there are no workers in the office to handle the call, the client will wait forever until he hangs up by himself. The worker who answers doesn't know the client's phone number, so he isn't able to follow up if necessary.
Questions
1. After the client picks a queue I would like to check whether I have any workers in the office in that queue (i.e. not in Offline mode). If everybody is in Offline mode, the call is redirected to voicemail and an email is sent to a specified email address with the caller's phone number and the voicemail recording URL.
2. After the worker picks up (accepts the reservation), send him a message with the client's phone number.
3. If no worker answers within a specified amount of time (for example 5 minutes), the call gets redirected to voicemail and an email is sent to a specified email address with the caller's phone number and the voicemail recording URL.
Twilio developer evangelist here.
Answers in order; parts 1 and 3 need to talk about voicemail, which I'll cover at the bottom:
You can use a skip_if expression to skip a queue if there are no available workers (a configuration sketch follows these three points).
I assume this is using the reservation.conference instruction in the JavaScript SDK. You can actually inspect the reservation object at this stage too, check out reservation.task.attributes for all the task attributes, which should include the call attributes. You can use this to show your agent on screen or send them a message some other way.
For this, you should set a timeout on your queue. When the timeout fires the task should drop through to the next queue in the workflow.
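To illustrate points 1 and 3, here is a minimal sketch of posting a Workflow configuration with skip_if and a target timeout through the TaskRouter REST API, using the Python helper library. The SIDs, the routing expression, and the queue names are placeholders for your own.

```python
# Sketch: skip an empty queue and time out into the next target (e.g. voicemail).
# All SIDs and the routing expression below are placeholders.
import json
import os

from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

configuration = {
    "task_routing": {
        "filters": [
            {
                "expression": "selected_queue == 'cc'",
                "targets": [
                    {
                        "queue": "WQXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # CC queue
                        "skip_if": "workers.available == 0",  # point 1: skip when nobody is online
                        "timeout": 300,  # point 3: after 5 minutes, fall through
                    },
                    {"queue": "WQYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY"},  # voicemail queue
                ],
            }
        ],
        "default_filter": {"queue": "WQZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"},
    }
}

client.taskrouter.v1.workspaces("WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").workflows(
    "WWXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
).update(configuration=json.dumps(configuration))
```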
Voicemail
For parts 1 and 3 we are ejecting a task from one queue, but it needs to go somewhere else to be dealt with. You want to send the calls to voicemail, which doesn't require an agent to deal with it. Currently, the best way to deal with this is direct the task to a queue that has one bot worker in it. The job of the worker is to redirect incoming reservations straight to some TwiML. You achieve this by instantly responding to an assignment callback with the redirect instruction.
To create voicemail, you can combine <Say> and <Record>. Make sure you set the recordingStatusCallback attribute to a URL in your application, you can then use the results to email the link to the voicemail recording.
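As a rough sketch of that bot-worker pattern: the assignment callback for the voicemail queue responds with the redirect instruction, pointing at a TwiML endpoint that plays a prompt and records. This assumes a Flask app, and the URLs are placeholders for your own.

```python
# Sketch: answer the TaskRouter assignment callback with a "redirect" instruction
# that sends the caller to voicemail TwiML. URLs below are placeholders.
import json

from flask import Flask, jsonify, request
from twilio.twiml.voice_response import VoiceResponse

app = Flask(__name__)

@app.route("/assignment", methods=["POST"])
def assignment():
    task_attributes = json.loads(request.form["TaskAttributes"])
    return jsonify({
        "instruction": "redirect",
        "call_sid": task_attributes["call_sid"],  # voice tasks carry the call SID
        "url": "https://example.com/voicemail",
        "accept": True,  # accept the reservation at the same time
    })

@app.route("/voicemail", methods=["GET", "POST"])
def voicemail():
    response = VoiceResponse()
    response.say("Nobody is available right now. Please leave a message after the tone.")
    # The recording callback is where you pick up the recording URL and send the email.
    response.record(recording_status_callback="https://example.com/recording-done")
    return str(response)
```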
Let me know if this helps at all.
Thank you for taking the time to answer my questions. Below please find my reply:
1. It seems that this does not work in the console - I found this in the documentation: "skip_if cannot be configured through the console - it must be posted on the workflow API". As I am not using the workflow API, this is probably not a solution for me.
2. I am using this tutorial: https://www.twilio.com/docs/quickstart/php/taskrouter/twiml-dequeue-call but instead of using the dequeue instruction I use conference. I don't quite get how to "inspect the reservation" - maybe you have a tutorial on that? While looking for other solutions I came across the workspace event callback, but I am not sure if this would work.
3. How can I do that in the console?

Too many concurrent connections opened Microsoft Graph API

I'm currently running a web application that uses the Microsoft Graph API, and we encountered the following message today, which severely impacted our application for a whole day:
"error": {
    "code": "ErrorTooManyObjectsOpened",
    "message": "Too many concurrent connections opened., The process failed to get the correct properties.",
    "innerError": {
        "request-id": "removed",
        "date": "2017-12-13T17:01:14"
    }
}
Please note that the request-id was removed.
Let me summarize what our web application does.
Basically, we have 2 email folders that we are actively subscribed to, Junk and Folder A.
If anything hits Folder A, we strip the body of the email message and then move the message to Folder B. The subscription on our Junk folder also strips the body and sends them over to Folder B.
Sometimes the webhook subscription service skips messages that arrive at the same time, so we have 2 cron jobs on our server that run a script and check Junk/Folder A for any messages every 5 minutes; my assumption is that the cron jobs run about 288*2 = 576 times per day. Not counting our subscription to the folders, we usually get around 200-300 email messages per day.
Unfortunately Microsoft's Graph error codes page does not provide any explanation of this code. I would really appreciate it if anyone could explain what this means and how to keep it from happening.
This is occurring because your application is exceeding the throttling thresholds.
There are several different throttling metrics that can affect Microsoft Graph requests. For a high-level overview, see the Microsoft Graph throttling guidance. Since in this case you're hitting Exchange Online via Graph, you can find more specific information from What throttling values do I need to take into consideration? in the Exchange documentation.
Architecturally, you are making a lot of unnecessary calls into the API. Rather than having both a subscription and a scheduled job, you should use just the webhook subscription and the /delta endpoint.
Each call to the /delta endpoint gives you a token that can be used to fetch any changes to a given resource since the token was originally issued. So regardless of whether 1 email came in or 1,000, you only get the new emails.
Once you're using the /delta to find your changes, you then use a webhook only as a "trigger". When you receive the webhook, you can ignore the contents and instead issue a request to /delta. This ensures that you capture every incoming email even if you didn't necessarily receive separate webhook notifications.
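A rough sketch of that pattern is below. It assumes you already have an OAuth access token and that you persist the deltaLink somewhere durable; the folder id and the process_and_move helper are placeholders for your own code.

```python
# Sketch: use the Graph /delta endpoint as the source of truth for new mail,
# and treat webhook notifications purely as a trigger to run a delta sync.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
FOLDER_ID = "FOLDER_A_ID"  # placeholder

def fetch_changes(access_token, delta_link=None):
    """Return (new_messages, next_delta_link)."""
    url = delta_link or f"{GRAPH}/me/mailFolders/{FOLDER_ID}/messages/delta"
    headers = {"Authorization": f"Bearer {access_token}"}
    messages = []
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        messages.extend(data.get("value", []))
        # Page through @odata.nextLink until @odata.deltaLink appears.
        url = data.get("@odata.nextLink")
        delta_link = data.get("@odata.deltaLink", delta_link)
    return messages, delta_link

def on_webhook_notification(access_token, stored_delta_link):
    # Ignore the notification contents; just sync, so a missed notification
    # can never mean a missed message.
    new_messages, new_delta_link = fetch_changes(access_token, stored_delta_link)
    for message in new_messages:
        process_and_move(message)  # your existing strip-body-and-move logic
    return new_delta_link
```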
There is a bug. After making 500 message move requests, a "cannot copy/move error" occurs. Subsequently, a "429: Too many concurrent connections opened" error occurs. Most applications miss the first error because you continually get the 429 error afterwards.
If you let the application "rest" for 30 minutes, the throttle resets itself and you can continue on. I do not think there is a time limit for hitting the 500 moves. My application did 500 moves after 6.5 hours and then we started getting the error.
And, if you keep trying your move call before the 30 min rest period, it never resets. Also, in the response, the retry-after is null... so, that doesn't help you.
If you find a workaround, please let me know. We are trying a few things, like setting the category and then manually moving the messages. I am also investigating creating a rule that moves them for us, or some other job. I cannot find a way to execute a rule from the Graph API.
See this link for more information. Also, the more people who report having this issue, the sooner it can hopefully be resolved: Outlook API Throttling documentation #144

Shopify not calling on my fulfillment /fetch_tracking_numbers

I am developing a new warehouse integration for the company I work for, as there was no existing solution.
I have gotten almost every feature to work, including fulfillment requests and stock requests, and I even registered a carrier service for real-time shipping rates. However, for some reason I cannot get the /fetch_tracking_numbers call to fire from Shopify. According to the documentation:
"Once per hour Shopify will make a request to this endpoint if there are any completed fulfillments awaiting tracking numbers from the remote fulfillment service."
I have added logs to the call so I can troubleshoot it, but it seems that Shopify never makes this call to the server.
If I visit the URL myself I can fire the code (logs and all), but it doesn't seem like Shopify is doing so.
In the install I made sure to provide a valid callback URL (that's why fetch stock works fine) and set the tracking support field to true, but still nothing.
One way to be sure would be to ensure that a product's variant is set to your custom fulfillment company. Then complete a bogus order for the product and fulfill it. Once you have fulfilled it, Shopify will poll your endpoint for orders and their tracking numbers. It works fine for me... but I am thinking maybe you're waiting on Shopify and nothing is actually fulfilled.
I am probably too late to answer this, but Shopify will make calls to this endpoint only if:
1) you have "completed" fulfillments, and
2) the tracking numbers for those completed fulfillments are still pending.
You need to mark a fulfillment "complete" after shipping all the order line items.
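For reference, here is a minimal sketch of what such an endpoint could look like in Flask. The response shape (a tracking_numbers map keyed by order id, plus message/success fields) follows the legacy fulfillment-service callback format, so double-check it against the current docs; lookup_tracking_number is a hypothetical helper into your warehouse system.

```python
# Sketch of a /fetch_tracking_numbers endpoint for a custom fulfillment service.
# Shopify is expected to call it with order_ids[] query parameters.
from flask import Flask, jsonify, request

app = Flask(__name__)

def lookup_tracking_number(order_id):
    # Placeholder: query your warehouse system; return None if still pending.
    return None

@app.route("/fetch_tracking_numbers", methods=["GET"])
def fetch_tracking_numbers():
    order_ids = request.args.getlist("order_ids[]")
    tracking = {}
    for order_id in order_ids:
        number = lookup_tracking_number(order_id)
        if number is not None:
            tracking[order_id] = number
    return jsonify({
        "tracking_numbers": tracking,
        "message": "Successfully received the tracking numbers",
        "success": True,
    })
```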

Triggering a SWF Workflow based on SQS messages

Preamble: I'm trying to put together a proposal for what I assume to be a very common use-case, and I'd like to use Amazon's SWF and SQS to accomplish my goals. There may be other services that will better match what I'm trying to do, so if you have suggestions please feel free to throw them out.
Problem: The need at its most basic is for a client (mobile device, web server, etc.) to post a message that will be processed asynchronously without a response to the client - very basic.
The intended implementation is for the client to post a message to a pre-determined SQS queue. At that point, the client is done. We would also have a defined SWF workflow responsible for picking the message up off the queue and (after some manipulation) placing it in DynamoDB - again, all fairly straightforward.
What I can't seem to figure out though, is how to trigger the workflow to start. From what I've been reading a workflow isn't meant to be an indefinite process. It has a start, a middle, and an end. According to the SWF documentation, a workflow can run for no longer than a year (Setting Timeout Values in SWF).
So, my question is: If I assume that a workflow represents one message-processing flow, how can I start the workflow whenever a message is posted to the SQS?
Caveat: I've looked into using SNS instead of SQS as well. This would allow me to run a server that could subscribe to SNS, and then start the workflow whenever a notification is posted. That is certainly one solution, but I'd like to avoid setting up a server for a single web service which I would then have to manage / scale according to the number of messages being processed. The reason I'm looking into using SQS/SWF in the first place is to have an auto-scaling system that I don't have to worry about.
Thank you in advance.
I would create a worker process that listens to the SQS queue. Upon receiving a message it calls into SWF API to start a workflow execution. The workflow execution id should be generated based on the message content to ensure that duplicated messages do not result in duplicated workflows.
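A minimal sketch of that poller using boto3 is below; the domain, workflow type, task list, and queue URL are placeholders. The workflowId is derived from the message body, so a redelivered message cannot start a second workflow.

```python
# Sketch: poll SQS and start one SWF workflow execution per message, using a
# content-derived workflowId for idempotency. All names are placeholders.
import hashlib

import boto3

sqs = boto3.client("sqs")
swf = boto3.client("swf")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/incoming-messages"

def poll_forever():
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, WaitTimeSeconds=20, MaxNumberOfMessages=10
        )
        for msg in resp.get("Messages", []):
            body = msg["Body"]
            try:
                swf.start_workflow_execution(
                    domain="my-domain",
                    workflowId=hashlib.sha256(body.encode()).hexdigest(),
                    workflowType={"name": "ProcessMessage", "version": "1.0"},
                    taskList={"name": "process-message-tasks"},
                    input=body,
                )
            except swf.exceptions.WorkflowExecutionAlreadyStartedFault:
                pass  # duplicate delivery; the workflow already exists
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```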
You can use AWS Lambda for this purpose. A Lambda function can be invoked by an SQS event, so you don't have to write a queue poller explicitly. The Lambda function can then call SWF's StartWorkflowExecution API to initiate the workflow.
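A sketch of that Lambda handler, using the same placeholder names and the same content-derived workflowId for idempotency as the poller above:

```python
# Sketch: Lambda handler wired to an SQS event source; one workflow per record.
import hashlib

import boto3

swf = boto3.client("swf")

def handler(event, context):
    for record in event["Records"]:
        body = record["body"]
        try:
            swf.start_workflow_execution(
                domain="my-domain",
                workflowId=hashlib.sha256(body.encode()).hexdigest(),
                workflowType={"name": "ProcessMessage", "version": "1.0"},
                taskList={"name": "process-message-tasks"},
                input=body,
            )
        except swf.exceptions.WorkflowExecutionAlreadyStartedFault:
            pass  # redelivered message; workflow already started
```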

How to make sure that timed out request was not carried out? ios

Hey I'm developing an iOS application which communicates with an external web service in order to make various kinds of requests.
I'm aware of Murphy's Law, "Anything that can go wrong, will go wrong", and that made me think about timeouts. Currently my application does not handle the situation where a request completes and times out simultaneously. How should I handle such situations?
Without cooperation from the service provider there's not a lot you can do. If your app sees a timeout, it cannot deduce from that whether the request actually completed or not. It could be that it worked and something in the infrastructure failed to deliver the response, or that it failed and hence you saw no timely response.
You do have some actions you can take that will help the user. I assume that you have available to you the details of the request you attempted to send; your app should keep those locally. You are then in a position to do some useful things (a sketch follows the list below):
- Some service authors allow you to safely submit the same request twice. So just resubmit: if it previously worked, the service will just say "yep, already done that, here are the details"; if not, it will just do the work as normal.
- Some service authors allow you to query the status of a previous request, so you can determine what has been done and what has not.
- In some cases there is no IT-system way to deal with the problem, and the user will need to contact a help desk or call centre. Here, having the details of what was previously attempted can be very useful.
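As a rough illustration of the first two options, here is a sketch in Python. The endpoints, the client-generated request id, and the idempotent behaviour are assumptions about a hypothetical service, not a real API.

```python
# Sketch: keep the request details locally with a client-generated id, then either
# resubmit safely or query its status after a timeout. Everything about the
# service (URLs, dedup header, status endpoint) is assumed, not real.
import uuid

import requests

BASE_URL = "https://api.example.com"  # hypothetical service

def submit_request(payload, store):
    request_id = str(uuid.uuid4())
    store[request_id] = payload  # keep the details locally before sending
    requests.post(
        f"{BASE_URL}/orders",
        json=payload,
        headers={"Idempotency-Key": request_id},  # assumed dedup mechanism
        timeout=10,
    )
    return request_id

def recover_after_timeout(request_id, store):
    # Option 2: ask the service what happened to this request id.
    status = requests.get(f"{BASE_URL}/orders/status/{request_id}", timeout=10)
    if status.ok and status.json().get("state") == "completed":
        return "already done"
    # Option 1: safe to resubmit because the service dedupes on the same id.
    requests.post(
        f"{BASE_URL}/orders",
        json=store[request_id],
        headers={"Idempotency-Key": request_id},
        timeout=10,
    )
    return "resubmitted"
```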
