I have a project on Google Cloud Platform that uses the YouTube Data API v3. Everything was going well until earlier this month: after I received several emails alerting me that I had to complete an audit, my queries per day dropped to zero. They were completely zeroed out.
I followed the link to perform the audit and successfully completed all the changes that were requested in my application, strictly following the regulations. The audit went well, and no further changes were required from me.
But the issue is that my queries per day remain at zero, and I can't edit the quota. It occurred to me that enabling the Google Cloud trial might change something, but no: I'm still unable to increase the limits, not even using the credit they give you as a gift.
The project used somewhere between 25,000 and 300,000 queries per day. I have requested 500,000 queries per day by filling in the quota extension form, to have a little more margin.
Meanwhile, the project has been stopped for almost a month. If anyone knows anything about this, or how I should proceed, please let me know.
Thank you very much.
Have a nice day,
I got this error:
The request cannot be completed because you have exceeded your quota.
I cannot understand this. Does YouTube limit the number of requests? That is, can I not build my project on the API using my own channel? If so, what is the point of the YouTube Data API? If I am already limited at the development stage, what will happen when users come in? Will my project exhaust its quota within 5 minutes?
I also cannot understand how I was able to make 10,000 requests per day, given that I only worked on localhost for about 3 hours. Is that possible?
Indeed, the Google Developers Console shows text like Queries per day, but that is quite misleading (and may well be worth reporting to Google as a web UI bug).
You have to understand that the YouTube Data API's quota system does not count the number of endpoint calls you make during a day; it counts the cumulative number of quota units consumed by those calls.
For example, if you have 10,000 units of quota allocated for daily usage, you can very easily exceed that upper bound after only 100 calls to the Search.list endpoint, since each Search.list call costs 100 units (100 calls × 100 units = 10,000 units).
Many API users find the default quota allocation of 10,000 units quite constraining, even during the development stage of their apps. To tackle this issue, I recommend two things:
Develop your app to cache the API responses it receives from the endpoints it calls. That way, during development (and later even in production, albeit with different logic), repeated calls to an endpoint are served from the app's local cache instead of generating actual API requests; see the sketch after this list.
Apply for a quota extension using Google's official form. Be aware that, going by the experience of users on this forum, Google's answer usually does not arrive quickly.
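To illustrate the first suggestion, here is a minimal caching sketch in Python, assuming the official google-api-python-client package; the in-memory dict is a stand-in for whatever persistent cache (a file, Redis, etc.) suits your app:

```python
from googleapiclient.discovery import build

# In-memory cache; replace with a persistent store for real use.
_cache = {}

def cached_search(youtube, query, max_results=25):
    """Serve repeated Search.list calls from the cache.

    Each real Search.list request costs 100 quota units, so cache
    hits directly preserve the daily quota.
    """
    key = ("search", query, max_results)
    if key not in _cache:
        _cache[key] = youtube.search().list(
            part="snippet", q=query, maxResults=max_results
        ).execute()
    return _cache[key]

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")
first = cached_search(youtube, "python tutorials")   # real call: 100 units
again = cached_search(youtube, "python tutorials")   # cache hit: 0 units
```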
For the past week or so, we've been experiencing 504 Gateway Timeout errors while fetching email messages from the MS Graph API. For over a month of running before that, the same application did not experience this error, at least not with any significant frequency.
We are using V1.0 of the MS Graph API
Our query is fairly simple:
$top=100&$orderBy=lastModifiedDateTime desc&$filter=lastModifiedDateTime lt 2019-09-09T19:27:55Z and parentFolderId ne 'JunkEmail'
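Concretely, the request we issue looks roughly like this (a Python sketch using the requests library; the token acquisition and user ID are placeholders, not our actual code):

```python
import requests

token = "eyJ..."              # acquired via OAuth, omitted here
user_id = "user@contoso.com"  # placeholder

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{user_id}/messages",
    headers={"Authorization": f"Bearer {token}"},
    params={
        "$top": 100,
        "$orderBy": "lastModifiedDateTime desc",
        "$filter": "lastModifiedDateTime lt 2019-09-09T19:27:55Z "
                   "and parentFolderId ne 'JunkEmail'",
    },
)
resp.raise_for_status()  # this is where the 504s surface
messages = resp.json()["value"]
```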
We get the timeout for users who have large volumes of data (over 100K email messages), but occasionally also for users with smaller volumes (around 18K email messages). Volume has not changed much between the time when the system was working and now, when we see many timeouts.
We've tried simplifying the query, reducing the number of messages we request, and so on, but that seems to have had only a limited and intermittent impact.
My question: what can we do to eliminate, or at least significantly reduce, the chance of getting the 504 Gateway Timeout error from the MS Graph API?
I suspect that since we are asking for messages without a folder filter, we may be stressing the query engine. That is just a hunch; if anyone has real insight into the MS Graph API, I'd love to know whether that is plausible. Also, any information that helps us better understand what is going on under the hood would be much appreciated.
Update 1 (2019-09-13 15:44:00 EST): here is a visualization of a set of fetch requests made by the app over a roughly 12-hour period. The pink bars are the number of successful fetches, and the light blue ones are the failed requests (all with 504 Gateway Timeout as the failure code). As you can see, when the app starts it has a number of failures, which eventually taper off and go away. Then from around 4:30 AM to 9:30 AM there are a number of failures, which eventually subside. Almost all failures happen while fetching messages for one user, who has a very large mailbox (over 220K messages). I realize this is a small data set, and I am happy to generate one that runs for a longer period if that helps. Also, the app in question runs on our Azure tenant, as part of an Azure Function app, in the East US location.
Update 2 (2019-09-16 09:32:00 EST): we ran the system for the last 3 days, and here is a visualization of the fetch requests made by the app during that time. The blue bars are successful fetches, and the pink bars are failed fetches (all with 504 Gateway Timeout as the failure code). In summary: except for a small window from 11 PM to 2 AM on the first night, no request succeeded for the one particular user with a large mailbox. In effect, that means that in spite of retry logic and the like, we are unable to process that user's data.
Microsoft Graph can be slow at times and will throttle occasionally.
I'd advise you to let the Graph SDK do the hard work, to save you from writing code to handle all of this yourself.
Use the Microsoft Graph client library version 1.17.0+, as it introduced automatic retry on 504 errors. It also handles throttling (code 429) when it occurs.
The point I am trying to make is that you can retry on a 504 or 429 yourself, or delegate that responsibility to an SDK.
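If you do handle it yourself, the retry loop amounts to something like this minimal Python sketch (using requests for illustration; this is roughly what the SDK automates for you, and the header handling assumes Retry-After is present on throttled responses):

```python
import time
import requests

def get_with_retry(url, headers, params=None, max_retries=5):
    """GET with exponential backoff on 429 (throttled) and 504 (timeout)."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, headers=headers, params=params)
        if resp.status_code not in (429, 504):
            return resp
        # Honor the server's Retry-After hint when present,
        # otherwise fall back to exponential backoff.
        time.sleep(float(resp.headers.get("Retry-After", delay)))
        delay *= 2
    return resp  # last response after exhausting retries
```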
Good to hear that the retry is helping. I've got a couple of options to try:
1) Change your query and move the ordering responsibility to the client. $orderBy=lastModifiedDateTime desc combined with the filter requires indices to be created, and this increases the load on the mailbox. Doing client-side ordering may be better for these large mailboxes; see the sketch after this list.
2) Use delta query (with your filter) to sync and get incremental changes. You will have to add a folder hierarchy sync. You may be able to make parallel calls. I suspect that this will give you much better performance after the initial sync.
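For option 1, client-side ordering could look as simple as this (a Python sketch; it assumes each message carries lastModifiedDateTime as returned by Graph):

```python
# Request pages without $orderBy, then order locally.
# ISO 8601 timestamps compare correctly as plain strings.
messages = resp.json()["value"]  # one page of messages from Graph
messages.sort(key=lambda m: m["lastModifiedDateTime"], reverse=True)
```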
I encountered the same issue: 504 errors while trying to get all messages. After a thorough inspection, I figured out that in our case the problem was draft items; in some cases they were throwing errors. After adding the filter isDraft eq false, the 504s stopped and we're getting all messages. It turns out that some drafts are broken: they won't show up in OWA or Outlook, and in our case the one that was breaking the query was stored under a parentFolderId that was non-existent, which is a huge problem in and of itself, in my opinion.
I have a standard ASP.NET MVC project, and I need to calculate application availability to find out our SLA level. So I need to get something like the following for our web application.
Information from my hosting provider
System Availability: 99.9860%
Total Uptime: 30d 10h:22m:44s
Total Downtime: 0d 0h:6m:9s
Total Reboots: 3
Mean Time Between Reboots: 10.15 days
But I need to calculate availability for the application itself. So, the question is:
How do I calculate ASP.NET MVC application availability in a proper way?
Maybe someone has already implemented this, or has suggestions on how to do it; any help will be appreciated.
Where to start?
The first thing I considered is Application Insights and its availability tests. The problem is that the minimum test frequency is 5 minutes, and I need more precise measurements.
Next idea: create some tool that calls my app every second and collects the results. Downside: a very large number of requests.
Also, I could get some performance counters from IIS or something like that; I need to investigate whether that is possible.
I know the question is possibly too broad, but I didn't find any information about implementing application availability measurement. What do you think?
It would take too long to explain all the parts that could be done, so I'll keep it short.
Usually you define all these details in a Service Level Agreement, where you also define the availability target (e.g. 99%), which also accounts for planned downtime. A 99% availability target means the app and its functionality, as described in that document, may be unavailable for at most approximately 87.6 hours per year. Here is a SLA uptime calculator.
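The arithmetic behind that 87.6-hour figure, as a quick sketch:

```python
# Allowed downtime for a 99 % availability target over one year.
hours_per_year = 365 * 24                        # 8760 h
allowed_downtime = (1 - 0.99) * hours_per_year   # 87.6 h per year
print(f"{allowed_downtime:.1f} h")               # -> 87.6 h
```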
The normal interval is 5 minutes, as you say, but if you can prove by using an external site or service that the suppliers are not meeting the requirements, you calculate your loss (lost revenue, labor costs, etc.) and claim the money from them. I assume you already have a Business Impact Analysis (BIA); otherwise you should do one.
OK, now for the programming / DevOps part. I usually develop applications and services with this in mind and report their status to a third-party service like New Relic, Uptrends, or similar. I also use a self-made service for this, because of strict requirements on delivering data at least once per second with a hard deadline. In my solution I use WebSockets to send data in both directions, following a schedule, on events, or when needed. A benefit is that you can send a status (good or bad) every 500 ms, say, and you will then know within one second if the app has failed (≈ 499 ms + 500 ms).
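As a minimal sketch of such a heartbeat sender (Python for illustration; the endpoint, message format, and the websockets dependency are my assumptions, not the actual service):

```python
import asyncio
import time
import websockets  # third-party: pip install websockets

async def heartbeat(uri="wss://monitor.example.com/status", interval=0.5):
    """Send a status ping every 500 ms; the receiving side can then
    detect a failure within roughly one second of it happening."""
    async with websockets.connect(uri) as ws:
        while True:
            await ws.send(f"alive {time.time():.3f}")
            await asyncio.sleep(interval)

asyncio.run(heartbeat())
```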
Using a service like this, you can measure uptime, custom events of interest, and possible errors within a second, plus a ton of other metrics; usually within 5-100 ms, although WCET/WCRT is hard to estimate.
To answer your question: you cannot calculate application availability from so few measurement points. A check once every 5 minutes covers approximately 12 seconds per hour, and you cannot build any reliable calculation from that. You can assume everything was OK between the measurement points, but that is called guessing. I have built implementations with 14,400 measurement points per hour in order to provide 500 ms accuracy (for banks).
I hope you got an answer that helps you with your problem.
This question is regarding the Office 365 Management Activity API.
We are using the API to retrieve audit log notifications from multiple channels (AzureAD, Outlook, SharePoint, etc.) for very large tenants, meaning that we need to retrieve potentially millions of notifications over a relatively short timespan.
O365 gathers audit notifications into a series of "blobs", each containing a number of individual notifications (JSON messages). To my understanding, which comes in part from correspondence with the API's dev team and from reading the docs, these blobs should contain a "considerable" number of notifications, so as to function as a sort of batching when doing the actual web requests.
In our approach, we request the blob URLs for a one-hour interval and then do a request for each individual blob.
However, we have tested with a number of different tenants and different PublisherIdentifiers, but we only seem to get around 2.5 messages per blob on average, no matter the total number of notifications waiting to be fetched.
This becomes a major issue for larger tenants, as it puts a strain on the SIEM solution running the fetcher logic (a Python service), due to the number of requests needed, and it also gives us throttling issues with the API itself.
In effect, we simply cannot fetch the audit notifications fast enough to keep up within the retention period. Had the blobs contained more notifications each, we would be fine, as the total amount of data (in MB) is not that large.
A "funny" thing is, that if we use the visual query tool within the Admin Center of the tenant, it searches and retrieves the notifications very fast.
My questions
Has anyone had any experience with this issue, or perhaps had a better "batch performance"?
Does anyone have any ideas as to what we could try to get a better performance?
As mentioned, we have been in direct contact with the dev team and the program manager in Redmond. They have been very helpful with other issues we had, but for this specific issue they referred us to support, who in turn referred us to the forums / community. We currently do not have access to premium support...
Example request for content blobs for an hour
https://manage.office.com/api/v1.0/{tenantid}/activity/feed/subscriptions/content?contentType=Audit.Exchange&PublisherIdentifier={pub.id}&startTime=2017-12-03T10:31:24&endTime=2017-12-03T11:31:24
When retrieving the individual blobs, we just use the URLs given to us by the above request.
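In code, our two-step flow looks roughly like this (a Python sketch with requests; token acquisition and paging via the NextPageUri response header are omitted):

```python
import requests

token = "eyJ..."          # AAD app token, acquisition omitted
tenant_id = "{tenantid}"  # placeholder
headers = {"Authorization": f"Bearer {token}"}

# Step 1: list the content blobs available for a one-hour window.
listing = requests.get(
    f"https://manage.office.com/api/v1.0/{tenant_id}/activity/feed/subscriptions/content",
    headers=headers,
    params={
        "contentType": "Audit.Exchange",
        "PublisherIdentifier": tenant_id,  # throttling quotas are keyed per publisher
        "startTime": "2017-12-03T10:31:24",
        "endTime": "2017-12-03T11:31:24",
    },
).json()

# Step 2: fetch each blob; every blob is a JSON array of notifications.
for blob in listing:
    notifications = requests.get(blob["contentUri"], headers=headers).json()
```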
You can avoid throttling by appending "?PublisherIdentifier={Tenant ID}" to the contentUri in the retrieve-content GET request.
I have been working with the Office 365 Management Activity API for the past 6 months, and I faced this kind of issue before too. It occurs if you try to get all the audit log content from your Office 365 tenant at a given interval; that results in throttling. For your information, it is not possible to entirely avoid throttling (resource overuse) for large, active tenants.
To overcome these issues, you can create and deploy a web application in the cloud and register it as an Office 365 Management Activity API webhook.
Whenever the Office 365 tenant wraps activity logs into an Azure blob, it will immediately send the blob details to your registered web application. You can refer to this link to learn how to enable a webhook for a web application. Once you receive the blob details from the Office 365 tenant, extract the logs from the Azure blob and save them in your own blob storage, or store them in a SQL or NoSQL database.
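A sketch of starting such a webhook subscription (Python with requests; I am going from the documented /subscriptions/start shape, so verify the body against the current docs before relying on it):

```python
import requests

token = "eyJ..."          # AAD app token, acquisition omitted
tenant_id = "{tenantid}"  # placeholder

resp = requests.post(
    f"https://manage.office.com/api/v1.0/{tenant_id}/activity/feed/subscriptions/start",
    headers={"Authorization": f"Bearer {token}"},
    params={"contentType": "Audit.Exchange"},
    # Office 365 will POST new blob details to this address as they appear.
    json={"webhook": {
        "address": "https://yourapp.example.com/o365/notifications",
        "authId": "opaque-string-echoed-back-to-you",
        "expiration": "",
    }},
)
resp.raise_for_status()
```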
I had a similar issue. Pulling down logs would take longer than the interval of time allotted to the Python script, and the script would start overlapping itself or fall behind when trying to pull logs for a SIEM implementation.
https://github.com/IntegralDefense/o365_log_fetch
I'm a little late to this post, but by using asyncio in Python 3.5+ together with aiohttp, you can make concurrent calls to the O365 Management API and pull down the logs much faster. I did some testing and retrieved logs for a 13-hour window (Audit.Exchange, Audit.AzureActiveDirectory, and Audit.SharePoint). It took around 20 minutes using requests and making the API calls sequentially. After implementing asyncio/aiohttp, the same time frame took just under 2 minutes (500,000+ individual events pulled from several thousand content blobs/locations).
I've been running the script at 10-minute intervals, and usually it completes in under 10 seconds.
The script I linked above also supports pagination, so if you get a content list that was truncated in the response from Microsoft, the script will keep reaching out and pulling down more content locations.
At this time, the documentation isn't up to speed, but hopefully that will be caught up soon.
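For reference, the core of the asyncio/aiohttp approach is roughly this stripped-down sketch (auth, throttling, and the pagination handling from the full script are omitted):

```python
import asyncio
import aiohttp

async def fetch_blob(session, url, headers):
    async with session.get(url, headers=headers) as resp:
        return await resp.json()

async def fetch_all(blob_urls, headers):
    # One session, many concurrent GETs: the speedup comes from
    # overlapping network waits instead of fetching sequentially.
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_blob(session, url, headers) for url in blob_urls]
        return await asyncio.gather(*tasks)

# blob_urls comes from the subscriptions/content listing call, e.g.:
# results = asyncio.run(fetch_all(blob_urls, {"Authorization": "Bearer ..."}))
```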
Is there a way in JIRA to run a report showing how many issues were "resolved" by which users, and how quickly after each issue was reported? It needs to be per user.
Thanks
You can build arbitrary reports yourself with a Report Plugin Module, but in my experience it's quite a hassle. Note that plugins only work in self-hosted Jira installations, not in Atlassian's hosted service.
Another way would be to leverage the REST API in order to fetch worklogs and process them externally.
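As a sketch of that REST route (Python with requests; the JQL, field list, and credentials are illustrative assumptions, and grouping by user is left to your external processing):

```python
import requests

JIRA = "https://jira.example.com"         # placeholder instance
auth = ("user", "api-token-or-password")  # placeholder credentials

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    auth=auth,
    params={
        "jql": "resolution is not EMPTY ORDER BY resolutiondate DESC",
        "fields": "assignee,created,resolutiondate",
        "maxResults": 100,
    },
)
for issue in resp.json()["issues"]:
    f = issue["fields"]
    assignee = f["assignee"]["name"] if f["assignee"] else "unassigned"
    # created -> resolutiondate gives time-to-resolution per issue.
    print(assignee, f["created"], f["resolutiondate"])
```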
I think your requirement needs some clarification. It seems you want to see the number of issues that were moved from one status to another, or perhaps the last time the resolution field was set to a value (any value?), and then group those results by JIRA user.
A second requirement is to track the time from issue creation to the last time the resolution field was set, again grouped by user.
I'd try using the Vertygo SLA plugin from Valiantsys to do this. It lets you define custom fields that track the time between two JIRA events, such as a field update or a status change. I believe it can sum those fields and display grouped results in the JIRA statistics and two-dimensional gadgets.
Reports that group by user often become quite large as the number of users increases.