How to get Twilio Studio Flow execution via call sid

We're trying to include the IVR steps in our UI, but to get the steps I have to make several API calls. That's fine, except the only way I can seem to get the relevant info is to load all flow executions.
If I could pass the flow.sid via the HTTP Request widget then I could go fetch the info I need later instead of having to iterate through all the previous executions. I tried passing {{flow.data}} as the request body, thinking it was JSON, but it ends up being empty.
Here's a spike that someone wrote for us, modified to just work with a single execution.
require "httparty"
STUDIO_FLOW_SID = "FW***"
AUTH = {username: ENV["TWILIO_ACCOUNT_SID"], password: ENV["TWILIO_AUTH_TOKEN"]}
DATE_CREATED_FROM = "2019-09-01T000000Z"
DATE_CREATED_TO = "2019-10-01T000000Z"
# Retrieves all executions in the given date range
executions_url = "https://studio.twilio.com/v1/Flows/#{STUDIO_FLOW_SID}/Executions?DateCreatedFrom=#{DATE_CREATED_FROM}&DateCreatedTo=#{DATE_CREATED_TO}"
response = HTTParty.get(executions_url, basic_auth: AUTH)
# If I can get the individual execution from the IVR {{flow.data}}
# that would be ideal
execution = response.parsed_response["executions"].first
execution_context_url = execution["links"]["execution_context"]
execution_context = HTTParty.get(execution_context_url, basic_auth: AUTH)
# Or, if I could work backwards and get the execution context ID from
# the call somehow, that would work too.
call_sid = execution_context.parsed_response["context"]["trigger"]["call"]["CallSid"]
steps = HTTParty
.get(execution["links"]["steps"], basic_auth: AUTH)
.parsed_response["steps"]
.sort_by { |step| step["date_created"] }
.map { |step| step["transitioned_to"] }
.select { |step| step.include?("option") || step.include?("menu") }
puts [call_sid, steps].inspect
tl;dr -
I either need the Flow execution info passed in an HTTP Request widget or I need to work backwards from a CallSid to get the execution steps.

Twilio developer evangelist here.
The execution SID can be accessed in the flow's data, under {{flow.sid}}.
This was missing in the documentation, but I've just added it here: https://www.twilio.com/docs/studio/user-guide#context-variables
Note: {{flow.sid}} doesn't currently appear in Studio's autocomplete, but it's there, I promise!
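For example, you can add {{flow.sid}} as a parameter on the HTTP Request widget and then have your endpoint fetch just that one execution instead of listing them all. Here's a minimal sketch of that follow-up lookup, reusing the constants from the question (the execution_sid argument is assumed to be whatever value your widget posted):
require "httparty"

STUDIO_FLOW_SID = "FW***"
AUTH = { username: ENV["TWILIO_ACCOUNT_SID"], password: ENV["TWILIO_AUTH_TOKEN"] }

# `execution_sid` is assumed to be the {{flow.sid}} value that the
# HTTP Request widget sent to your endpoint.
def execution_steps(execution_sid)
  base = "https://studio.twilio.com/v1/Flows/#{STUDIO_FLOW_SID}/Executions/#{execution_sid}"

  # The execution context holds the trigger data, including the CallSid.
  context = HTTParty.get("#{base}/Context", basic_auth: AUTH).parsed_response

  # The steps for just this execution, so no paging through old executions.
  steps = HTTParty.get("#{base}/Steps", basic_auth: AUTH)
                  .parsed_response["steps"]
                  .sort_by { |step| step["date_created"] }
                  .map { |step| step["transitioned_to"] }
                  .select { |name| name.include?("option") || name.include?("menu") }

  [context.dig("context", "trigger", "call", "CallSid"), steps]
end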

I found a way to get the execution from the call:
Get the list of incoming phone numbers: https://www.twilio.com/docs/phone-numbers/api/incomingphonenumber-resource#read-multiple-incomingphonenumber-resources
Search that list for the number the call was made to (the call's To number)
Then you can get the Flow SID by parsing its voice_url
With this Flow SID and the date the call arrived, you can get the list of executions: https://www.twilio.com/docs/studio/rest-api/v2/execution#read-a-list-of-executions
If you have more than one, you can use the calling number to search in the list.
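Putting those steps together, a rough sketch in the same HTTParty style as the question (the to/from numbers and the call date are assumed to come from the Call resource you already looked up with the CallSid):
require "httparty"

ACCOUNT_SID = ENV["TWILIO_ACCOUNT_SID"]
AUTH = { username: ACCOUNT_SID, password: ENV["TWILIO_AUTH_TOKEN"] }

# `to` / `from` are the call's destination and caller numbers, and
# `call_date` is a Date for the day the call arrived.
def executions_for_call(to:, from:, call_date:)
  # 1. Find the incoming phone number the call hit.
  numbers_url = "https://api.twilio.com/2010-04-01/Accounts/#{ACCOUNT_SID}/IncomingPhoneNumbers.json"
  number = HTTParty.get(numbers_url, basic_auth: AUTH, query: { PhoneNumber: to })
                   .parsed_response["incoming_phone_numbers"]
                   .first

  # 2. Pull the Flow SID out of that number's voice_url
  #    (Studio-attached numbers point at .../Flows/FWxxxx...).
  flow_sid = number["voice_url"][/FW[0-9a-f]{32}/]

  # 3. List the executions around the day the call arrived.
  executions_url = "https://studio.twilio.com/v1/Flows/#{flow_sid}/Executions"
  query = {
    DateCreatedFrom: call_date.strftime("%Y-%m-%dT00:00:00Z"),
    DateCreatedTo:   (call_date + 1).strftime("%Y-%m-%dT00:00:00Z")
  }
  executions = HTTParty.get(executions_url, basic_auth: AUTH, query: query)
                       .parsed_response["executions"]

  # 4. If there is more than one, narrow it down by the calling number
  #    (contact_channel_address should be the caller for inbound calls).
  executions.select { |execution| execution["contact_channel_address"] == from }
end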
Anyway, if you are calling the function from Studio, philnash's answer is better :)

Related

Slack Conversations API conversations.kick returning "channel_not_found" for a public channel

I am writing a Slack integration that can boot certain users out of public channels when certain conditions are met. I have added several OAuth scopes to the bot token, including the following:
channels:history
channels:manage
channels:read
chat:write
chat:write.public
groups:write
im:write
mpim:write
users:read
I am writing my bot in Python using the slack-bolt library and asyncio. However, when I try to invoke this code:
await app.client.conversations_kick(channel=channel_id, user=user_id)
I get the following error:
slack_sdk.errors.SlackApiError: The request to the Slack API failed. (url: https://www.slack.com/api/conversations.kick)
The server responded with: {'ok': False, 'error': 'channel_not_found'}
I know for a fact that both the channel_id and user_id arguments I'm passing in are valid. The channel ID I'm using is the string C01PAE3DB0A. I know it is valid because I can use the very same value for channel_id in the following API call:
response = await app.client.conversations_info(channel=channel_id)
And when I call conversations_info like that I get all of the information about my channel. (The same is true for calling users_info with the user_id - it returns successfully.) So why is it that when I pass my valid channel_id parameter to conversations_kick I consistently receive this channel_not_found error? What am I missing?
So I got in touch directly with Slack support about this and they confirmed that there is a bug on their end. Specifically, the bug is that I should have received a restricted_action error response instead of a channel_not_found response. Apparently this is a known issue that is on their backlog.
The reason the API call would (try to) return this restricted_action error is simply because there is a workspace setting that, by default, prevents non-admins from kicking people out of public channels. Furthermore, this setting can only be changed by the workspace owner - one tier above admins.
But assuming you are the owner of the Slack workspace, you simply have to log into the Settings & Permissions page and change the setting labeled "People who can remove members from public channels" from "Workspace admins and owners only (default)" to "Everyone, except guests."
Once I made that change, my API calls started succeeding.

Unauthorized API calls JwtCreator

I'm playing around with DocuSign's Ruby Quickstart app and I've done the following:
have an Admin account
have an organization
created an Integration (Connected App) for which I've granted the signature and impersonation scopes in the Admin Dashboard (made RSA keys, added callback URLs, etc.)
even though I've done the above, I've also made the request to the consent URL in a browser: SERVER/oauth/auth?response_type=code&scope=signature%20impersonation&client_id=CLIENT_ID&redirect_uri=REDIRECT_URI
Integration appears to have everything enabled
Then in the JwtCreator class, check_jwt_token returns true and updates the account info correctly.
But when I try the following (or any other API call):
envelope_api = create_envelope_api(@args)
options = DocuSign_eSign::ListStatusChangesOptions.new
options.from_date = (Date.today - 30).strftime('%Y/%m/%d')
results = envelope_api.list_status_changes(@args[:account_id], options)
The API call raises an exception with DocuSign_eSign::ApiError (Unauthorized):
The args are:
@args = {
  account_id: session[:ds_account_id],
  base_path: session[:ds_base_path],
  access_token: session[:ds_access_token]
}
All with correct info.
What am I missing?
For clarity, I was using some classes from the Quickstart app (like JwtCreator, ApiCreator, etc.) alongside my code.
Not sure at this point if it's my mistake or part of the Quickstart app, but in this call:
results = envelope_api.list_status_changes(@args[:account_id], options)
the account_id was something like "82xxxx-xxxx-xxxx-xxxx-xxxxxxxx95e" and I was always getting Unauthorized responses.
In a medium.com tutorial the author used the numeric account ID (the 1xxxxxx form) and in that form, it worked.
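If you run into the same thing, it may help to print which accounts (and base URIs) your JWT token can actually see. A rough sketch using the same gem, assuming the Quickstart's api_client (a DocuSign_eSign::ApiClient) and a fresh access_token are in scope:
# Sketch: list the accounts visible to the access token so you can check
# which account_id / base_uri pair you should be passing to the API calls.
user_info = api_client.get_user_info(access_token)

user_info.accounts.each do |account|
  puts "#{account.account_name}: account_id=#{account.account_id} base_uri=#{account.base_uri}"
end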

Rails - multiple threads to avoid the Slack 3 second API response rule

I am working with the Slack API. My script does a bunch of external processing and in some cases it can take around 3-6 seconds. What is happening is that the Slack API expects a 200 response within 3 seconds, and because my function is not finished within 3 seconds, it retries and ends up posting the same automated responses 2-3 times.
I confirmed this by commenting out all the functions and I had no issue; it posted the responses to Slack fine. I then added sleep 10 and it did the same responses 3 times, so the only thing different was that it took longer.
From what I read, I need to have threaded responses. I need to first respond to the Slack API in thread 1 and then go about processing my functions.
Here is what I tried:
def events
  Thread.new do
    json = {
      "text": "Here is your 200 response immediately slack",
    }
    render(json: json)
  end

  puts "--------------------------------Json response started----------------------"
  sleep 30
  puts "--------------------------------Json response completed----------------------"
  puts "this is a successful response"
end
When I tested it, the same issue happened. I tried using an online API tester and it hits the page, waits 30 seconds and then returns the 200 response, but I need it to respond immediately with the 200, THEN process the rest, otherwise I will get duplicates.
Am I using threads properly, or is there another way to get around this Slack API 3 second response limit? I am new to both Rails and the Slack API so I'm a bit lost here.
Appreciate the eyes :)
I would recommend using ActiveJob to run the code in the background if you don't need to use the result of the code in the response. First, create an ActiveJob job by running:
bin/rails generate job do_stuff
And then open up the file created in app/jobs/do_stuff_job.rb and edit the #perform method to include your code (so the puts statements and sleep 30 in your example). Finally, from the controller action you can call DoStuffJob.perform_later and your job will run in the background! Your final controller action will look something like this:
def events
  DoStuffJob.perform_later # this schedules DoStuffJob to run later, in
                           # the background, so it will return immediately
                           # and continue to the next line.
  json = {
    "text": "Here is your 200 response immediately slack",
  }
  render(json: json)
end
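And the generated job file might end up looking something like this (a sketch that just moves the slow work from the question's example into #perform):
# app/jobs/do_stuff_job.rb
class DoStuffJob < ApplicationJob
  queue_as :default

  def perform
    # The slow work from the question, now running outside the request cycle.
    puts "--------------------------------Json response started----------------------"
    sleep 30
    puts "--------------------------------Json response completed----------------------"
    puts "this is a successful response"
  end
end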
As an aside, I'd highly recommend never using Thread.new in Rails. It can create some really confusing behavior, especially in test scripts, for a number of reasons, but usually because of how it interacts with open connections and specifically ActiveRecord.

YouTube reporting API reports are all blank

I have created a set of YouTube reporting jobs for a YouTube channel. The jobs were created and run every day as scheduled. However, when I go to download the reports, they are all blank.
This is how I authenticate with the API:
# Imports implied by this snippet
import os
from oauth2client import client
from googleapiclient.discovery import build

def authenticate_from_credentials(API_SERVICE_NAME, API_VERSION):
    youtube_client_id = os.environ['youtube_client_id']
    youtube_client_secret = os.environ['youtube_client_secret']
    youtube_refresh_token = os.environ['youtube_refresh_token']

    credentials = client.OAuth2Credentials(
        access_token=None,
        client_id=youtube_client_id,
        client_secret=youtube_client_secret,
        refresh_token=youtube_refresh_token,
        token_expiry=None,
        token_uri='https://oauth2.googleapis.com/token',
        user_agent=None,
        revoke_uri=None
    )

    youtube_reporting = build(API_SERVICE_NAME, API_VERSION, credentials=credentials)
    return youtube_reporting
This is the method I have been using to create the jobs:
# Call the YouTube Reporting API's jobs.create method to create a job.
def create_reporting_job(youtube_reporting, report_type_id, name):
    # Provide keyword arguments that have values as request parameters.
    reporting_job = youtube_reporting.jobs().create(
        body=dict(
            reportTypeId=report_type_id,
            name=name
        ),
    ).execute()
    print('Reporting job "%s" created for reporting type "%s" at "%s"'
          % (reporting_job['name'], reporting_job['reportTypeId'],
             reporting_job['createTime']))
I authenticate like this:
youtube_reporting = authenticate_from_credentials('youtubereporting', 'v1')
And I will create a job like this:
create_reporting_job(youtube_reporting, "channel_combined_a2", "Channel Combined a2")
I am not sure what the problem is here. The channel does have content and subscribers, so the reports shouldn't be empty. I think there could be an issue with the credentials, or perhaps the wrong channel is associated with the report, since the developer's Google account is different from the content owner's. But I checked the channels associated with the OAuth credentials I am using and it was the right channel.
Why might my reports be empty and how can I fix this?
I hit the same issue; the problem is that you need to wait several hours for the report to be generated on the backend, at which point re-querying for reports will show results.
There is a subtle mention about this delay on https://developers.google.com/youtube/reporting/v1/reports under Step 3:
The API response to the jobs.create method contains a Job resource, which specifies the ID that uniquely identifies the job. You can start retrieving the report within 48 hours of the time that the job is created, and the first available report will be for the day that you scheduled the job.
This was quite confusing.

Twilio PCI Compliant <Gather> in Function Widget with Studio

I've been stalking around here and have gotten most of my answers as I make my way through this new tool, but I'm now stuck and need some direct advice.
The Gather widget in Studio is not PCI compliant, so I have to shift my call to a Function and return the parsed data (I finally figured out how to do that one). However, I've found that I cannot call the web service from within that single Function, so I had to send the digits, via event.Digits, to another Function that makes the web service call to my token provider. This works, but it has led to a strange result: my token is read back as TTS and then the call is hung up. I have no TTS action in play. Below are my two sets of code:
Initial function called from Studio:
const got = require('got');

exports.handler = function(context, event, callback) {
  let twiml = new Twilio.twiml.VoiceResponse();
  twiml.gather({
    input: 'dtmf',
    finishOnKey: '#',
    timeout: 10,
    action: 'paymenttest',
    method: 'GET'
  }).say('Enter CC');
  console.log(twiml);
  callback(null, twiml);
};
This successfully calls my function with the digits entered:
const got = require('got');

exports.handler = function(context, event, callback) {
  let twiml = new Twilio.twiml.MessagingResponse();
  const url = 'my payment gateway' + event.Digits + '&EXPDATE=1220&CARDTYPE=VI';
  got.get(url, {
    headers: {
      'content-Type': 'application/x-www-form-urlencoded'
    }
  }).then(function(response) {
    // Check the response and ask your second question here
    callback(null, response.body);
  }).catch(function(error) {
    // Boo, there was an error.
    callback(error);
  });
};
This successfully returns the token... but, as mentioned above, it's read back out to me instead of being included in the data returned to Studio.
Twilio developer evangelist here.
Right now Studio is not well set up for using TwiML from a Twilio Function and then continuing the flow. In your case, when you return the token from your second Function, Twilio deals with it as if you had just returned text to a regular TwiML webhook. When this happens Twilio defaults to assuming you meant <Say> and reads out the text.
While the team works on redirecting calls back into Studio flows, there is a workaround.
Instead of returning the token in the second Function, return some TwiML that includes a <Redirect> to your Studio flow's webhook URL with ?FlowEvent=audioComplete appended. You will also need to add a dummy Say/Play widget after your Function widget (it becomes the next part in the flow that can trigger an "audio complete" message, so it exists to collect that and send the flow on to the next widget).
The only thing that we haven't handled in this workaround is sending the token to the flow. I don't believe we can do this via the redirect workaround, so instead I'd recommend storing the token in your own database or something like a Twilio Sync object. This way you can use it outside of Studio however you like. If you need it within the Studio flow then you can create one more Function that returns the token as JSON, and that will then be stored within the flow's variables.
If you would prefer to use <Pay>, as this would be a lot easier, I also recommend requesting the pay connector you need.
I think philnash's answer here has gotten old, even though it still works.
Right now, you can call the first function using the TwiML Redirect widget.
In the second function, you then add a redirect to the flow's webhook, appending ?FlowEvent=return&foo=bar (where foo=bar should be replaced with the info you actually want to return).
