Header "x-ms-throttle-limit-percentage" not coming in response - microsoft-graph-api

My application makes a lot of calls to the Graph API to get the properties I need, and it is impossible to reduce the number of requests in my case. Because of this, I need to know when the number of requests is approaching the limit, so that I can stop making them and avoid getting a 429.
The documentation says that the "x-ms-throttle-limit-percentage" header should be returned once usage reaches 0.8 of the limit. As I understand it, 0.8 is a fraction of 1, where 1 is the upper bound of the limit:
https://learn.microsoft.com/en-us/graph/throttling?view=graph-rest-1.0#regular-responses-requests
But I never receive this header, even though I do get Retry-After together with a 429 TooManyRequests response.
How can I get this header in the response? Do I need to pass additional parameters, or configure something in the tenant?
Or is there another way to see how close I am to the throttle limit?
Thanks in advance for your reply.

If you are not getting the "x-ms-throttle-limit-percentage" header in the response, it means you have not yet consumed more than 0.8 of your limit; this is mentioned in the docs.
You can check the service-specific throttle limits in the docs.
Out of curiosity, which service were you hitting?
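For reference, a minimal sketch of how a client might watch for that header and back off before hitting 429 - the endpoint URL and the two-second pause are placeholders, not part of the original question:

```python
import time
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/users"  # example endpoint, adjust as needed

def call_graph(session, url):
    """Call Graph and slow down when the throttle headers say we are close to the limit."""
    resp = session.get(url)

    # Only sent once you have consumed more than 0.8 of your limit.
    pct = resp.headers.get("x-ms-throttle-limit-percentage")
    if pct is not None and float(pct) >= 0.8:
        time.sleep(2)  # proactively back off before a 429 is returned

    # If throttling already happened, honour Retry-After and retry once.
    if resp.status_code == 429:
        retry_after = int(resp.headers.get("Retry-After", "10"))
        time.sleep(retry_after)
        resp = session.get(url)

    return resp
```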

Related

Is there a way in Grafana to get the number of requests at a given time instant?

I have one endpoint for which I would like to see the number of requests at a given time (not over a period). For instance, how many requests were received at 9:30 a.m.
The metric I believe I can make use of is echo_requests_total, but it only accumulates the count, and the increase() and rate() functions do not produce the expected output either, which makes sense given that it is a counter.
I am not even sure whether what I want is possible.
Any help would be appreciated.
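If the metric lives in Prometheus, one way to read it is to evaluate increase() over a short window at the exact timestamp you care about, via the HTTP query API. A rough sketch, assuming Prometheus is reachable at localhost:9090 and the metric is the echo_requests_total mentioned above:

```python
import requests

PROM = "http://localhost:9090"  # assumed Prometheus address

# A counter like echo_requests_total only ever accumulates, so "requests at 9:30"
# really means "requests during some window ending at 9:30". increase() over a
# short window, evaluated at that timestamp, gives exactly that.
resp = requests.get(
    f"{PROM}/api/v1/query",
    params={
        "query": "increase(echo_requests_total[1m])",
        "time": "2024-01-15T09:30:00Z",  # evaluation instant
    },
)
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])  # value = requests in the minute ending 09:30
```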

Why does the Twitter API search endpoint only show a max of 15 results when sorting by popular tweets?

When using the search endpoint, I am only getting a maximum of 15 results, no matter how popular a search query I use. Setting count to 100 does not make a difference; however, it does when sorting by most recent. Does anybody else experience this? Is it a possible bug, or is it on purpose?
Here's an example call:
https://api.twitter.com/1.1/search/tweets.json?q=pluto&result_type=popular&count=100
Docs: https://dev.twitter.com/rest/public/search
I actually have the same problem. What I can tell you is that if your query has more than 15 results, you can "repeat" the request by checking the last block, "search_metadata", in the JSON response. It gives you the next request to make directly under "next_results". If there are no more results, this part will not be present.
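A rough sketch of that pagination loop, assuming the OAuth side is already handled (the bearer token below is a placeholder):

```python
import requests

BASE = "https://api.twitter.com/1.1/search/tweets.json"
auth_headers = {"Authorization": "Bearer <token>"}  # assumes you already have app auth

url = BASE + "?q=pluto&result_type=popular&count=100"
tweets = []
while url:
    data = requests.get(url, headers=auth_headers).json()
    tweets.extend(data.get("statuses", []))
    # "next_results" holds the ready-made query string for the next page;
    # it is simply absent when there is nothing more to fetch.
    next_query = data.get("search_metadata", {}).get("next_results")
    url = BASE + next_query if next_query else None

print(len(tweets), "tweets collected")
```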

Rate Limit Twitter API

I'm somewhat confused by the Twitter API guide on rate limiting here: https://dev.twitter.com/docs/rate-limiting/1.1
In their guide, Twitter mentions that the following fields will be present in the response headers and can be used to determine how many API calls are allowed, how many are left, and when the limit will reset:
X-Rate-Limit-Limit: the rate limit ceiling for that given request
X-Rate-Limit-Remaining: the number of requests left for the 15 minute window
X-Rate-Limit-Reset: the remaining window before the rate limit resets in UTC epoch seconds
They have also provided a rate limit status API to query:
https://dev.twitter.com/docs/api/1.1/get/application/rate_limit_status
Now I'm confused about which of the above values I should use to see how many API calls are still available to me before the limit is reached.
Both return the same information. /get/application/rate_limit_status is an API call that returns rate limits for all resources, while the X-Rate-Limit headers are set on the response for the resource you just called.
Use /get/application/rate_limit_status to cache the number of API calls remaining and refresh it at periodic intervals, rather than making a call and then parsing the header info to check whether you've exceeded the rate limits.
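A small sketch of both approaches, with a placeholder bearer token:

```python
import requests

headers = {"Authorization": "Bearer <token>"}  # assumes an existing bearer token

# Option 1: read the per-resource headers from a call you are already making.
resp = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    params={"q": "pluto"},
    headers=headers,
)
print(resp.headers.get("X-Rate-Limit-Limit"),
      resp.headers.get("X-Rate-Limit-Remaining"),
      resp.headers.get("X-Rate-Limit-Reset"))

# Option 2: fetch the limits for a whole resource family in one call and cache the result.
status = requests.get(
    "https://api.twitter.com/1.1/application/rate_limit_status.json",
    params={"resources": "search"},
    headers=headers,
).json()
print(status["resources"]["search"]["/search/tweets"]["remaining"])
```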

When caching a response, what would be a reasonable value for the maxage header when the response does not change?

I have a YQL query that returns data that I know for sure will not ever change.
In order to avoid rate limits, I was thinking of adding a maxage header to the YQL response.
Now I'm wondering what a reasonable value would be (in the case where I know for certain that the response will never ever change): a year? 10 years? More?
Are there any specifics as to the way YQL treats the maxage header?
Nice article on maxAge and how to use it: http://www.yqlblog.net/blog/2010/03/12/avoiding-rate-limits-and-getting-banned-in-yql-and-pipes-caching-is-your-friend/ . This should answer most of your queries about max age.
For your second question: if the response will never ever change, why even make an API call in the first place? You could eliminate the network latency altogether and keep a conf/property file containing the response on your server itself.
I'm not quite sure I understood what you meant by whether there are any specifics to the way YQL treats the header, but I will try to answer to the best of my knowledge. From the link I shared earlier, here are a few lines:
Secondly you can just ask YQL to cache the response to a statement for longer – just append the _maxage query parameter to your call and the result will be stored in cache for that length of time (but not shorter than it would have been originally):
http://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where location=90210&_maxage=3600
This is really useful when you’re using output from a table that’s not caching enough or an XML source without having to do any open table work
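A minimal sketch of building such a call, mirroring the URL from the quote above (the one-year _maxage is just an example value):

```python
import requests

# The _maxage parameter from the quoted docs, URL-encoded here for safety.
params = {
    "q": "select * from weather.forecast where location=90210",
    "format": "json",
    "_maxage": 3600 * 24 * 365,  # cache for a year; pick whatever horizon you trust
}
resp = requests.get("http://query.yahooapis.com/v1/public/yql", params=params)
print(resp.json())
```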
Hope this helps.

Ajax Security Question: Supplying Available usernames dynamically

I am designing a simple registration form in ASP.net MVC 1.0
I want to allow the username to be validated while the user is typing (as per the related questions linked to below)
This is all easy enough. But what are the security implications of such a feature?
How do I avoid abuse from people scraping this to determine the list of valid usernames?
some related questions: 1, 2
To protect against "malicious" activity on some of my internal Ajax endpoints, I add two GET variables: one is the date (usually as an epoch timestamp); then I take that date, add a salt, SHA1 it, and send that as well. If the date, when rehashed, does not match the hash, I drop the request; otherwise I fulfill it.
Of course, I compute the hash before the page is rendered and pass the hash and date to the JS; otherwise it would be meaningless.
The problem with using IP/cookie based limits is that both can be bypassed.
Using a token method with a good, cryptographically strong salt (say, something like one of Steve Gibson's "Perfect Passwords": https://www.grc.com/passwords.htm), it would take a HUGE amount of time (on the scale of decades) before the method could reliably be predicted, and it therefore ensures a certain amount of security.
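A minimal server-side sketch of that date-plus-salt token scheme (the salt value and the five-minute expiry window are placeholders):

```python
import hashlib
import time

SECRET_SALT = "replace-with-a-long-random-secret"  # server-side only, never sent to the client

def issue_token():
    """Run while rendering the page; embed both values in the JS that makes the Ajax call."""
    issued_at = int(time.time())
    token = hashlib.sha1(f"{issued_at}{SECRET_SALT}".encode()).hexdigest()
    return issued_at, token

def verify(issued_at, token, max_age=300):
    """Run on every username-check request; drop anything stale or tampered with."""
    expected = hashlib.sha1(f"{issued_at}{SECRET_SALT}".encode()).hexdigest()
    return token == expected and (time.time() - int(issued_at)) <= max_age
```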
You could limit the number of requests to maybe 2 per 10 seconds or so (a real user may enter a name that is taken, modify it a bit, and try again), kind of like how SO doesn't let you comment more than once every 30 seconds.
If you're really worried about it, you could take one of the methods above, count how many times they tried in a certain time period, and if it goes above a threshold, kick them to another page.
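A rough in-memory sketch of that kind of throttle, using the 2-per-10-seconds figure above (in production you would key it on whatever identifies a client for you, and likely store the timestamps somewhere shared):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 2  # "2 per 10 seconds or so", as suggested above

_recent = defaultdict(deque)  # client key (e.g. IP) -> timestamps of recent checks

def allow_username_check(client_key):
    """Return False once a client exceeds the per-window quota."""
    now = time.time()
    window = _recent[client_key]
    # Forget anything older than the window before counting.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True
```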
Validated as in "This username is already taken"? If you limit the number of requests per second, it should help.
One common way to solve this is simply to add a delay to the request. When the request is sent to the server, wait one (or more) seconds before responding, then respond with the result (whether the name is available or not).
Adding a time barrier doesn't really affect users who aren't trying to scrape, and you get a 60-requests-per-minute limit for free.
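As a sketch, that is little more than a sleep in the handler (the lookup function here is a stand-in for your real availability check):

```python
import time

def check_username_available(name, lookup):
    """Answer the Ajax call, but never faster than one second."""
    available = lookup(name)  # real database check, assumed to exist elsewhere
    time.sleep(1)             # hard floor of ~60 answers per minute per client
    return {"available": available}
```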
Building on the answer provided by UnkwnTech, which is some pretty solid advice:
You could go a step further and make the client perform some of the calculation to create the return value - this could just be some simple arithmetic like subtracting a few numbers, adding the date, and multiplying by 2.
The added arithmetic means an out-of-the-box username-scraping script is unlikely to work, and it forces the client to use up more CPU.
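A server-side sketch of what verifying such a challenge could look like, reusing the token idea from UnkwnTech's answer. The specific arithmetic here is invented for illustration, and the page's JS would have to mirror it exactly:

```python
import hashlib
import time

SECRET_SALT = "replace-with-a-long-random-secret"

def expected_answer(issued_at):
    """What the client-side arithmetic should produce for a token issued at issued_at."""
    digest = hashlib.sha1(f"{issued_at}{SECRET_SALT}".encode()).hexdigest()
    # Mirror the "simple arithmetic" done in the page's JS: take the first 8 hex
    # digits as a number, add the timestamp, multiply by 2. An off-the-shelf
    # scraper that only replays the raw hash will fail this check.
    return (int(digest[:8], 16) + int(issued_at)) * 2

def verify_challenge(issued_at, client_answer, max_age=300):
    """Accept the request only if the token is fresh and the client did the maths."""
    fresh = (time.time() - int(issued_at)) <= max_age
    return fresh and int(client_answer) == expected_answer(issued_at)
```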
