Twilio rate limit implementation does not seem to be working - twilio

Doc: https://www.twilio.com/docs/verify/api/service-rate-limit-buckets
The doc does not explain the questions below well.
1. Created a rate limit with the unique name 'phone_number':
twilio api:verify:v2:services:rate-limits:create \
--service-sid VAxxxxxxxxxxxx \
--description "Limit verifications by End User phone_number" \
--unique-name "phone_number"
Note: I passed --unique-name as the static string 'phone_number', not the user's actual phone number, because '+' is not allowed, and https://www.twilio.com/docs/verify/api/programmable-rate-limits?code-sample=code-start-a-verification-with-a-rate-limit&code-language=Node.js&code-sdk-version=3.x#selecting-properties-to-rate-limit states that it should be a static text combination.
Previously I had used my phone number without the + symbol.
2. Created a bucket with max 4 and interval 60:
twilio api:verify:v2:services:rate-limits:buckets:create \
--service-sid VAxxxxxxxxxxxxxxxxxxxxxxxxxx \
--rate-limit-sid RK7xxxxxxxxxxxxxxxxxxxxx \
--max 4 \
--interval 60
After creating this bucket I expected SMS sending to be limited for every user, based on the user's phone number (only 4 SMS sent per user in any 60-second interval).
But what I actually experience is that I receive every SMS created via message.create(); the rate limit does not seem to be working.
Q1. Is there any issue with using the static text 'phone_number' as the unique name? What I can see is that there is no combination of 'phone_number' and 'phone_number_country_code', e.g.:
--unique-name "phone_number_country_code_and_phone_number"
Or is unique-name a parameter in which we need to send the user's actual mobile number?
Q2. If I create one bucket with max 4 and interval 60 seconds, will only 4 SMS be sent per user in any 60-second interval?
Q3. Are one bucket and one rate limit enough for an application to handle all users?
Q4. If the steps above are wrong, what is the correct flow for implementing these APIs?
Q5. The third API shown in the doc is the send-verifications API. Is that API necessary when implementing rate limits? (I assumed the rate-limit create and bucket create APIs alone were sufficient to enable rate limiting.)
Q6. If I set the bucket to max 4 and interval 60, what happens to the 5th SMS scheduled for that user (e.g. the 5th OTP)? Will it be delivered in the same interval, in the next interval, or discarded?
Alternatively, please let me know the actual sequential flow of rate-limit APIs needed to apply the limit to all users: 4 SMS per user per 60-second interval.

Did you implement the send-verification API with the rate-limit option, using the unique-name property of the rate limit?
twilio api:verify:v2:services:verifications:create \
--service-sid VAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
--rate-limits "{\"phone_number\": \"1234567890\"}" \
--to to \
--channel channel
As I understand it, the purpose of the rate limit is to limit the verification messages. I think the idea is that you create a rate limit and then apply it to a bucket, as you demonstrated. THEN you apply that limit to the verification request using the rate limit's uniqueName and the phone number.
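To make this concrete, here is a minimal sketch of the same flow using the Twilio Python helper library. This is illustrative, not a drop-in implementation: all SIDs, credentials, and numbers are placeholders, it assumes the helper-library parameter names mirror the CLI flags, and in practice steps 1 and 2 are done once per service, not per send. The key point is step 3: rate limits only apply to verifications started through Verify with the rate_limits parameter; plain message.create() calls bypass them entirely.

```python
# Sketch of the end-to-end Verify rate-limit flow (Twilio Python helper).
# All SIDs, tokens, and numbers are placeholders.

def rate_limit_payload(phone_number: str) -> dict:
    """Build the rate_limits payload for a verification.

    The key is the rate limit's static unique_name ('phone_number');
    the value is the end user's actual number, with '+' stripped
    because Verify rejects it in rate-limit values.
    """
    return {"phone_number": phone_number.lstrip("+")}

def send_limited_verification(account_sid, auth_token, service_sid, to_number):
    from twilio.rest import Client  # requires the `twilio` package

    client = Client(account_sid, auth_token)
    service = client.verify.v2.services(service_sid)

    # Step 1: create the rate limit (done once per service, not per user).
    rate_limit = service.rate_limits.create(
        unique_name="phone_number",
        description="Limit verifications by end-user phone number",
    )

    # Step 2: attach a bucket: at most 4 requests per 60-second interval.
    service.rate_limits(rate_limit.sid).buckets.create(max=4, interval=60)

    # Step 3: start the verification through Verify (not message.create())
    # and pass the rate_limits payload so the bucket is actually applied.
    return service.verifications.create(
        to=to_number,
        channel="sms",
        rate_limits=rate_limit_payload(to_number),
    )
```

Without step 3 on every send, the bucket is never consulted, which matches the behavior described in the question.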

Related

Fetch Twilio inbound and outbound prices of a Twilio phone number when calling/receiving

I am trying to find out the call prices before making/receiving calls.
Let's say I have a Twilio phone number ABC and a customer phone number XYZ. I want to know the Twilio pricing in the scenarios below:
When calling from ABC to XYZ, what would be the outgoing price per minute (outbound call from the Twilio number)?
When calling from XYZ to ABC, what would be the incoming price per minute (inbound call to the Twilio number)?
I have checked this voice pricing article. It mentions a REST call like the one below.
from twilio.rest import Client

client = Client(account_sid, auth_token)  # setup omitted in the original snippet
number = client.pricing.v2 \
    .voice \
    .numbers('+15017122661') \
    .fetch(origination_number='+15108675310')
Response:
{
  "country": "United States",
  "destination_number": "+15017122661",
  "inbound_call_price": {
    "base_price": null,
    "current_price": null,
    "number_type": null
  },
  "iso_country": "US",
  "origination_number": "+15108675310",
  "outbound_call_prices": [
    {
      "base_price": "0.013",
      "current_price": "0.013",
      "origination_prefixes": [
        "ALL"
      ]
    }
  ],
  "price_unit": "USD",
  "url": "https://pricing.twilio.com/v2/Voice/Numbers/+18001234567"
}
But I couldn't figure out what exactly this response means.
Does outbound_call_prices mean the price to make calls from +15108675310 to +15017122661?
Does inbound_call_price mean the price to receive calls from +15017122661 to +15108675310?
Can someone help me understand this?
In the response to the REST call you provided, the outbound_call_prices array shows the current and base prices per minute for making outbound calls from the origination_number (+15108675310 in this case) to the destination_number (+15017122661 in this case).
The inbound_call_price object, on the other hand, shows the current and base prices per minute for receiving inbound calls on the destination_number (+15017122661 in this case) from any phone number.
Note that the outbound_call_prices array may include multiple entries for different calling destinations or number types, depending on the Twilio phone number you are querying and the countries and number types you are calling.
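As a sketch of how those fields map to the two prices, here is a small plain-Python helper that pulls the per-minute rates out of a response dict shaped like the sample above. The field names come from that sample response; the helper itself is illustrative and not part of the Twilio library.

```python
# Parse the per-minute prices out of a Voice Pricing API response dict
# shaped like the sample shown above.
def summarize_call_prices(pricing: dict) -> dict:
    outbound = [
        {
            "price_per_minute": float(p["current_price"]),
            "origination_prefixes": p["origination_prefixes"],
        }
        for p in pricing.get("outbound_call_prices", [])
        if p.get("current_price") is not None
    ]
    inbound = pricing.get("inbound_call_price") or {}
    return {
        # Cost per minute to call destination_number from origination_number.
        "outbound": outbound,
        # Cost per minute to receive calls on destination_number; None means
        # Twilio returned no inbound price for this number.
        "inbound_price_per_minute": (
            float(inbound["current_price"])
            if inbound.get("current_price") is not None
            else None
        ),
        "currency": pricing.get("price_unit", "USD"),
    }

# Trimmed copy of the sample response from the question.
sample = {
    "outbound_call_prices": [
        {"base_price": "0.013", "current_price": "0.013",
         "origination_prefixes": ["ALL"]}
    ],
    "inbound_call_price": {"base_price": None, "current_price": None,
                           "number_type": None},
    "price_unit": "USD",
}
print(summarize_call_prices(sample))
```

With the sample above this yields an outbound rate of 0.013 USD/minute and no inbound price, matching the null inbound_call_price fields.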

what happens if the number of users is less than the hatch rate in Locust

I am doing a load test for an application using Locust, but I need to test my app using a single user. The scenario is that I have a single user with which I need to execute some APIs multiple times.
In Locust the two parameters are number of users and hatch rate, so in my case what would be the values of these two?
If I keep number of users = 1, what should my hatch rate value be?
Also, if number of users = 1 and hatch rate = 10, what does that mean from Locust's point of view?
from locust_base import LocustBase
from locust import HttpUser, task, between

class LoadTestSummary(LocustBase):
    @task
    def get_org_summary(self):
        # "/org/summary" is a placeholder path; the original snippet
        # referenced an undefined `url` variable.
        response = self.client.get("/org/summary")
        if response.status_code != 200:
            raise Exception('Failure in org summary call. {}: {}'.format(
                response.status_code, response.text))

class TestScenario(HttpUser):
    tasks = [LoadTestSummary]
    wait_time = between(5, 9)
    host = "https://google.com"
    # Note: user count and hatch rate are command-line options (-u / -r),
    # not class attributes, so `users = 1` and `hatch-rate = 10` do not
    # belong here (`hatch-rate` is not even a valid Python identifier).
The number of users is the maximum, and it is not affected by the hatch rate.
If you run, for example, locust -u 1 -r 10000, you will still only get 1 user. In your case you don't need to set the hatch rate at all.

How can I implement a windowed rolling average with KSQL KTable correctly?

I am trying to implement a rolling average of volume in KSQL.
Kafka currently ingests data from a producer into the topic "KLINES". This data spans multiple markets with a consistent format. I then create a stream from that data like so:
CREATE STREAM KLINESTREAM (market VARCHAR, open DOUBLE, high DOUBLE, low DOUBLE, close DOUBLE, volume DOUBLE, start_time BIGINT, close_time BIGINT, event_time BIGINT) \
WITH (VALUE_FORMAT='JSON', KAFKA_TOPIC='KLINES', TIMESTAMP='event_time', KEY='market');
I then create a table which calculates the average volume over the last 20 minutes for each market like so:
CREATE TABLE AVERAGE_VOLUME_TABLE_BY_MARKET AS \
SELECT CEIL(SUM(volume) / COUNT(*)) AS volume_avg, market FROM KLINESTREAM \
WINDOW HOPPING (SIZE 20 MINUTES, ADVANCE BY 5 SECONDS) \
GROUP BY market;
SELECT * FROM AVERAGE_VOLUME_TABLE_BY_MARKET LIMIT 1;
For clarity, this produces:
1560647412620 | EXAMPLEMARKET : Window{start=1560647410000 end=-} | 44.0 | EXAMPLEMARKET
What I wish to have is a KSQL table that represents the current "kline" state of each market while also including the rolling average volume calculated in the "AVERAGE_VOLUME_TABLE_BY_MARKET" KTable, so I can compare the current volume against the rolling average.
I have tried to join like so:
SELECT K.market, K.open, K.high, K.low, K.close, K.volume, V.volume_avg \
FROM KLINESTREAM K \
LEFT JOIN AVERAGE_VOLUME_TABLE_BY_MARKET V \
ON K.market = V.market;
But obviously this results in an error, because the "AVERAGE_VOLUME_TABLE_BY_MARKET" key includes the time window as well as the market:
A serializer (key:
org.apache.kafka.streams.kstream.TimeWindowedSerializer) is not compatible to
the actual key type (key type: java.lang.String). Change the default Serdes in
StreamConfig or provide correct Serdes via method parameters.
Am I approaching this problem correctly?
What I want to achieve is:
Windowed aggregate KTable + kline stream ->
    KTable representing the current market state,
    including the average volume from the aggregate KTable
Is this possible in KSQL, or must I use Kafka Streams or another library to accomplish it?
A great aggregation example is here: https://www.confluent.io/stream-processing-cookbook/ksql-recipes/aggregating-data
Applying that example here, how would I use the aggregate to compare against fresh data as it arrives in the KSQL table?
Cheers, James
I believe what you're looking for may be LATEST_BY_OFFSET:
CREATE TABLE AVERAGE_VOLUME_TABLE_BY_MARKET AS
SELECT
market,
LATEST_BY_OFFSET(volume) AS volume,
CEIL(SUM(volume) / COUNT(*)) AS volume_avg
FROM KLINESTREAM
WINDOW HOPPING (SIZE 20 MINUTES, ADVANCE BY 5 SECONDS)
GROUP BY market;
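As a rough illustration of what that table computes per market (note that LATEST_BY_OFFSET requires a reasonably recent ksqlDB version), here is a plain-Python model: the latest volume alongside CEIL(SUM(volume) / COUNT(*)) over a 20-minute window. It uses a simple sliding window, so it only approximates ksqlDB's hopping-window semantics.

```python
import math
from collections import deque

class MarketWindow:
    """Plain-Python model of the per-market aggregation above: the latest
    volume (LATEST_BY_OFFSET) plus the rolling average over a 20-minute
    window. A sliding window stands in for ksqlDB's hopping windows."""

    def __init__(self, window_ms: int = 20 * 60 * 1000):
        self.window_ms = window_ms
        self.events = deque()  # (event_time_ms, volume)

    def add(self, event_time_ms: int, volume: float) -> dict:
        self.events.append((event_time_ms, volume))
        # Evict events that have fallen out of the window.
        cutoff = event_time_ms - self.window_ms
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()
        volumes = [v for _, v in self.events]
        return {
            "volume": volumes[-1],  # analogous to LATEST_BY_OFFSET(volume)
            "volume_avg": math.ceil(sum(volumes) / len(volumes)),
        }
```

For example, feeding volumes 40.0 then 50.0 within one window yields a latest volume of 50.0 with a rolling average of 45, which is exactly the current-vs-average comparison the question is after.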

Google Analytics Measurement Protocol - how to split e-commerce payload?

I have a problem with the Enhanced E-commerce Measurement Protocol ( docs ). Sometimes a client buys about 100 different products in one transaction. That exceeds the size limit of 8192 bytes ( reference ) and the request doesn't go through.
I tried to split it into small packs:
transaction details + one item always at index 1 (every item sent as pr1id)
I also tried to split it with an incrementing index:
transaction details + one item with an incrementing index (e.g. first I send the transaction + pr1id, then the transaction + pr2id, etc.)
I always end up with only one item in Google Analytics. Is there any correct, working way to split it? I couldn't find a solution on Google or in the docs.
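The 8192-byte limit itself is at least easy to check before sending. A small sketch (the pr{N}id / pr{N}nm parameter names come from the Enhanced E-commerce docs; the sample transaction values are made up for illustration, and this only detects the problem rather than solving the splitting question):

```python
from urllib.parse import urlencode

MAX_PAYLOAD_BYTES = 8192  # Measurement Protocol body limit cited above

def payload_size(params: dict) -> int:
    """Byte length of the URL-encoded Measurement Protocol body."""
    return len(urlencode(params).encode("utf-8"))

def exceeds_limit(params: dict) -> bool:
    return payload_size(params) > MAX_PAYLOAD_BYTES

# Hypothetical transaction hit with 100 products (pr1id..pr100id).
hit = {"v": "1", "tid": "UA-XXXXX-Y", "cid": "555", "t": "transaction",
       "ti": "T1000"}
for i in range(1, 101):
    hit[f"pr{i}id"] = f"SKU-{i:04d}"
    hit[f"pr{i}nm"] = "Some fairly long product name " * 3
print(exceeds_limit(hit))  # → True
```

A 100-product hit like this comfortably blows past the limit, which is consistent with the request being rejected as described.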

Is there a limit on the number of followers we can access through the twitter API?

Is there a limit on the number of followers that I can get for a 'user' (who is not me) from the Twitter API?
I have written a Python script to hit this
URL: https://api.twitter.com/1/followers/ids.json?screen_name=userid
and it returns a maximum of 5,000 IDs, even for Twitter users who have more than 2,000,000 followers.
I'm trying to build a recommendation system, so I require all the followers of a particular user.
There is only a limit on the number of followers you can retrieve per page, which is 5,000, as you have discovered.
To retrieve the next 5,000 followers and so on, pass the cursor parameter in your requests, set to the next_cursor value from the previous response.
response example:
{
  "previous_cursor": 0,
  "ids": [
    143206502,
    143201767,
    777925
  ],
  "previous_cursor_str": "0",
  "next_cursor": 0,
  "next_cursor_str": "0"
}
example taken from https://dev.twitter.com/docs/api/1/get/followers/ids
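The cursor loop described above can be sketched like this. fetch_page is a hypothetical stand-in for the real HTTP call to followers/ids.json; cursored paging conventionally starts at cursor=-1 and ends when next_cursor comes back as 0.

```python
def collect_follower_ids(fetch_page) -> list:
    """Walk the cursored followers/ids pages described above.

    fetch_page(cursor) stands in for the real HTTP request to
    https://api.twitter.com/1/followers/ids.json?screen_name=...&cursor=...
    and must return the parsed JSON dict for that page.
    """
    ids, cursor = [], -1
    while cursor != 0:
        page = fetch_page(cursor)
        ids.extend(page["ids"])
        cursor = page["next_cursor"]
    return ids

# Fake two-page response set for illustration.
pages = {
    -1: {"ids": [143206502, 143201767], "next_cursor": 1234},
    1234: {"ids": [777925], "next_cursor": 0},
}
print(collect_follower_ids(pages.__getitem__))  # → [143206502, 143201767, 777925]
```

In a real script, each fetch_page call counts against the request rate limits described in the next answer, so a user with millions of followers still takes many requests to enumerate.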
There is no limit on the number of followers you can retrieve, but the number of requests you can send to the API is limited.
You can see it in the headers of the response.
Example after making 5 calls:
x-rate-limit-limit: 900
x-rate-limit-remaining: 895
Here you can check the limits for each token type (user or app) and the request:
https://dev.twitter.com/rest/public/rate-limits
Here you have information about API Rate Limits:
https://dev.twitter.com/rest/public/rate-limiting
