Twilio Studio - Send and Wait for Reply - Multiple

I am trying to define a period of time (let's say 10 minutes) that collects all responses sent after a certain widget, and then save all those replies into one variable. Is this possible with Twilio Studio?
Example:
BOT: [sends message]
//start time - 0 min
USER: [reply1]
USER: [reply2]
USER: [reply3]
//end time - 10 min
finalString = reply1+reply2+reply3...reply i
Then I'd like to send that via an HTTP POST request (this part seems easy once the values are all stored), but I'd want the HTTP request to execute after the 10 minutes, and only if there is at least one reply.
Any Twilio Evangelists that could help me out?

The Twilio Send & Wait For Reply widget has a parameter called "Stop Gathering After", which waits X seconds for a response before heading down the No Reply path. The parameter does not accept Liquid syntax, so it isn't possible to subtract the elapsed time from your 10-minute budget (should you receive a response to the first widget, say, 3 minutes in) to ensure all the responses captured so far are sent to your server at the 10-minute mark.

Twilio developer evangelist here.
That's not something Studio is set up to do. It is intended for back-and-forth conversation, not a waiting period.
If you want to collect replies like this, you will need to build the solution yourself, as sketched below.
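For what it's worth, here is a minimal sketch of that DIY approach, assuming you point your number's inbound SMS webhook at a small service of your own rather than at Studio. Everything here (the /sms route, TARGET_URL, the window handling) is illustrative, not a Twilio API:

import threading

import requests
from flask import Flask, request

app = Flask(__name__)
buffers = {}  # sender number -> list of reply bodies
timers = {}   # sender number -> running window timer
TARGET_URL = "https://example.com/replies"  # hypothetical endpoint
WINDOW_SECONDS = 600  # the 10-minute collection window

def flush(sender):
    # Fires when the window closes; POST only if at least one reply arrived
    replies = buffers.pop(sender, [])
    timers.pop(sender, None)
    if replies:
        requests.post(TARGET_URL, json={"from": sender, "finalString": "".join(replies)})

@app.route("/sms", methods=["POST"])
def inbound_sms():
    sender = request.form["From"]
    buffers.setdefault(sender, []).append(request.form["Body"])
    if sender not in timers:
        # Start the 10-minute window at the first reply
        timers[sender] = threading.Timer(WINDOW_SECONDS, flush, args=[sender])
        timers[sender].start()
    return "", 204

The window here starts at the first reply; starting it when the bot sends its message would just mean scheduling the timer at send time instead.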

Related

Getting Locust to send a predefined distribution of requests per second

I previously asked this question about using Locust as the means of delivering a static, repeatable request load to the target server (n requests per second for five minutes, where n is predetermined for each second), and it was determined that it's not readily achievable.
So, I took a step back and reformulated the problem into something that you probably could do using a custom load shape, but I'm not sure how – hence this question.
As in the previous question, we have a 5-minute period of extracted Apache logs, where each second, anywhere from 1 to 36 GET requests were made to an Apache server. From those logs, I can get a distribution of how often each requests-per-second rate appeared; e.g. there's a 1/4000 chance of 36 requests being processed in any given second, 1/50 for 18 requests, etc.
I can model the distribution of request rates as a simple Python list: each number between 1 and 36 appears in it as many times as that requests-per-second rate occurred in the 5-minute period captured in the Apache logs. Then, in the tick() method of a custom load shape, I just randomly draw a number from the list to inform the (user count, spawn rate) calculation.
Additionally, by using a predetermined random seed, I can make the test runs repeatable to within an acceptable level of variation to be useful in testing my API server configuration changes, since the same random list elements should be retrieved each time.
The problem is that I'm not yet able to "think in Locust", to think in terms of user counts and spawn rates instead of rates of requests received by the server.
The question becomes this:
How do you implement the tick() method of a custom load shape in such a way that the (user count, spawn rate) tuple results in a roughly known distribution of requests per second to be sent, possibly with the help of other configuration options and plugins?
You need to create a Locust User with the tasks you want it to run (e.g. making your HTTP calls). You can define the time between tasks to roughly control the requests per second: if you have a task that makes a single HTTP call and you define wait_time = constant(1), each user makes roughly 1 request per second. Locust's spawn_rate is a per-second unit. Since you already have the data you want to reproduce, in 1-second intervals, you can then create a LoadTestShape class with a tick() method somewhat like this:
from locust import LoadTestShape

class MyShape(LoadTestShape):
    repro_data = [...]  # the per-second request counts extracted from the logs
    last_user_count = 0

    def tick(self):
        if len(self.repro_data) > 0:
            requests_per_second = self.repro_data.pop(0)
            # spawn_rate must be large enough to move the user count all
            # the way to the new target within this one-second tick
            requests_per_second_diff = abs(self.last_user_count - requests_per_second)
            self.last_user_count = requests_per_second
            return (requests_per_second, requests_per_second_diff)
        return None  # returning None stops the test when the data runs out
If your first data point is 10 requests, you need requests_per_second=10 and requests_per_second_diff=10 to make Locust spin up all 10 users in a single second. If the next second is 25, you'd have requests_per_second=25 and requests_per_second_diff=15. In a load shape, spawn_rate also works for decreasing the number of users, so if the next value is 16, requests_per_second=16 and requests_per_second_diff=9.
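For completeness, a minimal sketch of the companion user class this assumes, with a placeholder host and path:

from locust import HttpUser, constant, task

class MyUser(HttpUser):
    wait_time = constant(1)  # each user makes roughly one request per second
    host = "http://localhost:8080"  # placeholder target

    @task
    def hit_endpoint(self):
        # placeholder path; replay whatever requests your Apache logs contain
        self.client.get("/some/path")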

How to handle Google Ads API rate limit when calling REST API?

I am using the Google Ads REST API to pull Ads data. I am not using a client library.
One question: how do you programmatically check current API usage when making requests, so you can stop and wait before continuing? Other APIs, like the Facebook Marketing API, include a header in the response that tells you how many requests you have left, so I could stop and wait. Is there similar info in the Google Ads REST API?
Thank you for reading this.
I've seen nothing in the documentation so far to suggest that there is :(
(There is, separately, a RateExceeded error, which includes a retryAfterSeconds field, if you're going too fast / the API is overloaded.)
Ultimately, I tried this method. So far, I haven't hit the limit with it:
The basic developer token for the Google Ads API allows 15,000 requests per day as of this answer (link: https://developers.google.com/google-ads/api/docs/access-levels). That's 15,000 / 24 = 625 requests per hour.
Further division shows that I can make 625 / 60 ≈ 10.4 requests per minute, so 1 request every 6 seconds ensures I won't hit the rate limit.
So my solution is:
Measure the time it takes to complete a request call and its subsequent processing.
If the total time is over 6 seconds, perform the next request immediately. Otherwise, wait until 6 seconds have elapsed, then perform the next request.
The below code is what I used to perform this. Hope it helps you guys.
import time
from math import ceil

waiting_seconds = 6

start_time = time.time()

############### PERFORM API REQUEST HERE

# Measure how long it took; the full cycle should last at least
# 6 seconds to stay under the API limit
end_time = time.time()
elapsed = end_time - start_time
if elapsed < waiting_seconds:
    remaining = ceil(waiting_seconds - elapsed)
    time.sleep(remaining)
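If you have many calls to make, the same throttle can be wrapped around a loop; a sketch, where fetch_page is a stand-in for your actual request-and-process code:

import time
from math import ceil

WAITING_SECONDS = 6

def run_throttled(fetch_page, pages):
    # Issue one request per page, at most one every 6 seconds
    for page in pages:
        start_time = time.time()
        fetch_page(page)  # the API request plus subsequent processing
        elapsed = time.time() - start_time
        if elapsed < WAITING_SECONDS:
            time.sleep(ceil(WAITING_SECONDS - elapsed))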

Given Twilio states 1 sms per second - do I need to modify my bulk sending sms function? - Django

On their pricing page, Twilio states one SMS per second for "clean local numbers". Does this mean I cannot send more than one SMS per second from that number? https://www.twilio.com/sms/pricing/us
I'm using the following code and it seems to be working in terms of sending sms to subscriber phone numbers:
for subscriber in subscribers:
    subscriber_num = subscriber.phone_number
    my_sms = "My message to subscribers"
    client.messages.create(
        to=subscriber_num,
        from_=reviewnum,
        body=my_sms,
    )
Should I modify my code to take into account one SMS per second for "clean local numbers"?
Twilio will queue SMS segments if you exceed 1 segment per second; up to 4 hours' worth of segments can be queued, and the queue drains at 1 segment per second.
You can set the maximum lifetime a segment will remain in the queue with the validity period (the default is 4 hours), keeping in mind you don't want SMS segments delivered past normal business hours.
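With the Python helper library, the validity period is set per message via the validity_period argument (in seconds); a sketch reusing the variables from the question:

client.messages.create(
    to=subscriber_num,
    from_=reviewnum,
    body=my_sms,
    validity_period=3600,  # drop segments still queued after 1 hour
)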

Grafana Alerting when there is no change in data for x minutes

Been rolling around the web and forums, cannot find a resource on this.
What I aim to achieve is an alert for when there is no change in the data for a period of time.
We are monitoring open files for our webserver(s), so this number fluctuates rather often. We noticed that when the number is stagnant, it points to an issue on the server. So what we want is: if openfiles remains at X for 2 minutes, alert us.
I made such an alert through a small succession of steps:
I have a dedicated 'alerting dummy board' for all the alerts, since I can only have one alert per graph (Grafana version 6.6.0).
I use the following query: avg_over_time(delta(Sensor_Data[1m])[20s:]) - this calculates the 20s average of 'last_value - first_value over a 1-minute interval'.
My data-gathering program feeds into Prometheus, which in turn feeds Grafana. If this program freezes, it might keep sending the last value to Prometheus, and the above query will drop to strictly zero.
So I have an alert which fires if the above query stays within the range (-0.01, 0.01) for a minute (a typical value of the query with the system running is abs(query) > 0.18).
Thus, Grafana sends an alert if the Sensor_Data value does not change within about 2-3 minutes.
If you use Prometheus and Alertmanager, there is a nice function that worked for me:
changes
Using something like this in an alerting rule will trigger if there are no changes over the time interval:
changes(metric_name[5m]) == 0
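As a sketch, a Prometheus alerting rule built on that expression might look like this (metric_name and the durations are placeholders to adapt):

groups:
  - name: stagnation
    rules:
      - alert: MetricStagnant
        expr: changes(metric_name[5m]) == 0
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "metric_name has not changed in the last 5 minutes"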
This has worked for me. Make sure you're using a rate or increase function (no change means it will drop to zero) and filter the query like the following (increase needs a range selector, e.g. [5m]):
increase(metric_name[5m]) > 0
Then, in Alert Config, set "If no data or all values are null" to "Alerting". That way, when there's no data, the alert will be triggered.

Maximum number of custom events in Flurry analytics?

What is the maximum number of custom events you can report per session with Flurry analytics?
The number of events you can report per session in Flurry is 1000. I asked Flurry support, as I couldn't find the answer elsewhere (and none of the answers here really answered the question). They replied and also sent me a short document titled "Flurry Methodology and Best Practices" that contained, among other things, this summary:
300 unique events per app
1000 events max per session
100 unique event names per session
10 params max per event
Key/Value for params must be Strings
Key max length 255 chars
Value max length 255 chars
As the definition of "session" is important, I quote, from the same document:
Flurry analytics is based on a session model that only “phones home”
at the launch and backgrounding of a session. This prevents
“talkiness” from the SDK, conserves battery by not always running the
radios and allows data to travel in a coherent package.
(...)
One addition to the Flurry session model is the concept that a user
may bounce out of an app for a very brief time and reenter the app and
still be within the original session. A time may be set, in millis,
that is colloquially referred to as the “session timeout”. It is
configurable at app launch (see setContinueSessionMillis for more
details) within a range of 5 seconds to 1 minute with a default of 10
seconds. If, when the user comes back into the app, the “session
timeout” has not been exceeded, then the SDK will treat the “new”
session as a continuation of the former session.
Upon this new launch, if there are any sessions not sent, they will be
sent. At that time, as well, the SDK will make a decision on whether
or not to continue a session or start a new one.
The document is here. Flurry support sent it to me in late February, 2015.
The limit appears to be 300 different event ids, and therefore 300 custom events. Quoting: http://support.flurry.com/index.php?title=Analytics/GettingStarted/TechnicalQuickStart
Your application is currently limited to counting occurrences for 300
different Event ids (maximum length 255 characters).
Additional details from here:
Yes, there is a limit of 300 Events for each application. Each event
can have up to 10 parameters, and each parameter can have any number
of values.
I believe it is infinite:
Each Event can have up to 10 parameters, and each parameter can have
an infinite number of values associated with it. For example, for the
‘Author’ parameter, there may be 1,000 possible authors who wrote an
article. We can keep track of each author via this single parameter.
So if you can have an infinite number of values, you could have 10 million authors. Since they are all just values, each one can be tracked (via the parameter). If they "can keep track of each author via this single parameter", then I don't think your event count would be affected. This assumes you set up your event types properly, as in their example:
NSDictionary *articleParams =
    [NSDictionary dictionaryWithObjectsAndKeys:
     @"John Q", @"Author",          // Capture author info
     @"Registered", @"User_Status", // Capture user status
     nil];
[Flurry logEvent:@"Article_Read" withParameters:articleParams];
One event with a maximum of 10 dictionary items, with an infinite number of possible values... I think it would be safe to say you aren't limited here.
There is a limit of 300 Events for each app. Each event can have up to 10 parameters, and each parameter can have any number of values. Please check all the details here.
