I'm using the Google Safe Browsing API to get full hashes for the prefixes of hashes of URLs present in threat lists. When I use 5 threads, the job completes but the latency is high, so I tried increasing the thread count to 7 and more, but then I get the following error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 504 Gateway Time-out
{
"code" : 504,
"errors" : [ {
"domain" : "global",
"message" : "Deadline expired before operation could complete.",
"reason" : "backendError"
} ],
"message" : "Deadline expired before operation could complete.",
"status" : "DEADLINE_EXCEEDED"
}
But I'm sure that my daily quota has not been exceeded.
Looking at the console, I can see that the number of requests per second does not exceed the default quota (3,000 requests per 100 seconds).
What other reason could there be for this error?
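Since DEADLINE_EXCEEDED is a transient backend error rather than a quota rejection, one common mitigation is to retry the request with exponential backoff and jitter. A minimal sketch, assuming a hypothetical helper (the class name, attempt limit, and backoff constants are my own, not part of the Safe Browsing client):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical helper: retry a request with exponential backoff plus jitter
// when the backend returns a transient error such as 504 DEADLINE_EXCEEDED.
public class BackoffRetry {
    public static <T> T callWithBackoff(Callable<T> call, int maxAttempts) throws Exception {
        long delayMs = 200; // initial backoff; tune for your workload
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up after the last attempt
                }
                // sleep for the delay plus random jitter, then double the delay
                Thread.sleep(delayMs + ThreadLocalRandom.current().nextLong(100));
                delayMs *= 2;
            }
        }
    }
}
```

Wrapping each fullHashes lookup in such a helper (and lowering the thread count slightly) often smooths out sporadic 504s, since the errors tend to clear on a later attempt.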
I'm currently working on migrating from the Google My Business v4.9 API to the Business Profile Performance API.
For a number of credentials the changes work as expected, but for others the request fails with this error:
GET https://businessprofileperformance.googleapis.com/v1/locations/****:getDailyMetricsTimeSeries?dailyMetric=CALL_CLICKS&dailyRange.endDate.day=8&dailyRange.endDate.month=1&dailyRange.endDate.year=2023&dailyRange.startDate.day=31&dailyRange.startDate.month=12&dailyRange.startDate.year=2022
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "The caller does not have permission",
"reason" : "forbidden"
} ],
"message" : "The caller does not have permission",
"status" : "PERMISSION_DENIED"
}
It's worth mentioning that for the same credentials and scope everything works with the older endpoints.
I'm currently using https://www.googleapis.com/auth/plus.business.manage; could this be the problem, since it has been deprecated (though it can still be used) and has been replaced by https://www.googleapis.com/auth/business.manage?
UPDATE
It seems that before, the API simply returned an empty list of reports for this location, and now it throws an exception.
I am getting an error when trying to connect my DV360 account to Datorama Stream. I know my credentials aren't the problem, because I am granted access when using Google Analytics or AdWords with the same account, but with DV360 specifically I get this:
Provider's Internal Error
403 Forbidden GET https://displayvideo.googleapis.com/v1/advertisers?pageSize=100&partnerId=4611731
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "No permission for attempted operation on PARTNER with ID \"4611731\".",
"reason" : "forbidden"
} ],
"message" : "No permission for attempted operation on PARTNER with ID \"4611731\".",
"status" : "PERMISSION_DENIED"
}
Does anyone know what the problem might be? Could it be something on the web-app side?
Cheers,
In my Java Spring Boot application, I am using the Google Sheets API, but sometimes (very rarely) I get the exception below from the batchUpdate API.
Here is the code:
BatchUpdateSpreadsheetRequest requestBody = new BatchUpdateSpreadsheetRequest();
requestBody.setRequests(requests);
Sheets sheetService = googleConfigConstant.getSheetService();
Sheets.Spreadsheets.BatchUpdate request =
    sheetService.spreadsheets().batchUpdate(spreadsheetId, requestBody);
request.execute(); // the request is not sent until execute() is invoked
The exception I am getting:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 409
Conflict
POST https://sheets.googleapis.com/v4/spreadsheets/dummy-spreadsheet-id:batchUpdate?
quotaUser=testuser
{
"code" : 409,
"errors" : [ {
"domain" : "global",
"message" : "The operation was aborted.",
"reason" : "aborted"
} ],
"message" : "The operation was aborted.",
"status" : "ABORTED"
}
After a lot of searching I have not been able to find the root cause; any help would be appreciated.
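ABORTED (HTTP 409) usually indicates a transient conflict, for example concurrent modifications to the same spreadsheet, so it is generally treated as retryable, unlike hard 4xx client errors. A minimal sketch of that distinction (the helper class and the exact set of codes are my own assumptions, not an official client API):

```java
// Hypothetical policy: classify HTTP status codes from a failed Sheets
// call as transient (worth a bounded retry) or permanent.
public class RetryPolicy {
    public static boolean isRetryable(int httpCode) {
        switch (httpCode) {
            case 409: // ABORTED: e.g. concurrent edits to the spreadsheet
            case 429: // RESOURCE_EXHAUSTED: rate limit hit
            case 503: // UNAVAILABLE: transient backend trouble
            case 504: // DEADLINE_EXCEEDED: backend timed out
                return true;
            default:
                return false; // other codes (403, 404, ...) are not transient
        }
    }
}
```

In practice you would check `GoogleJsonResponseException.getStatusCode()` against such a policy and retry the batchUpdate a bounded number of times, ideally with a short backoff between attempts.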
{
"code" : 403,
"errors" : [ {
"domain" : "youtube.quota",
"message" : "The request cannot be completed because you have exceeded your quota.",
"reason" : "quotaExceeded"
} ]
}
quota calculator
test quota usage
The queries/requests quota is 50k; that is not the video-uploads quota. The video-uploads quota is 25-50 at most. You can refer to this thread for more information.
Hope this helps
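The reason uploads run out long before reads is that quota is counted in units, not requests, and a `videos.insert` is far more expensive than a read. A small sketch of the arithmetic (the per-method costs mirror the public quota calculator, but treat the exact numbers as assumptions to verify there):

```java
import java.util.Map;

// Approximate per-call unit costs from the YouTube Data API quota
// calculator: uploads are ~1600x the cost of a simple read.
public class QuotaCost {
    public static final Map<String, Integer> COST = Map.of(
            "videos.insert", 1600, // one video upload
            "search.list", 100,    // one search request
            "videos.list", 1);     // one simple read

    // Total units consumed by a mix of calls against the daily quota.
    public static int unitsUsed(int uploads, int searches, int reads) {
        return uploads * COST.get("videos.insert")
             + searches * COST.get("search.list")
             + reads * COST.get("videos.list");
    }
}
```

So a day with 3 uploads, 20 searches, and 500 reads consumes 3*1600 + 20*100 + 500 = 7,300 units, which is why a handful of uploads can exhaust a default daily quota while thousands of reads would not.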
One of our jobs started throwing the following warning/error:
856573 [main] WARN com.google.cloud.dataflow.sdk.runners.DataflowPipelineJob - There were problems getting current job status:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 429 Too Many Requests
{
"code" : 429,
"errors" : [ {
"domain" : "global",
"message" : "Resource has been exhausted (e.g. check quota).",
"reason" : "rateLimitExceeded"
} ],
"message" : "Resource has been exhausted (e.g. check quota).",
"status" : "RESOURCE_EXHAUSTED"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:145)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:321)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1056)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineJob.getState(DataflowPipelineJob.java:188)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineJob.waitToFinish(DataflowPipelineJob.java:126)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:86)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.run(BlockingDataflowPipelineRunner.java:47)
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:145)
at com.tls.cdf.job.AbstractCloudDataFlowJob.execute(AbstractCloudDataFlowJob.java:100)
at com.tls.cdf.CloudDataFlowJobExecutor.main(CloudDataFlowJobExecutor.java:44)
At first we thought this was an error allocating the desired resources (VMs) for the job, but in fact the job ran fine and was able to scale up as needed. The problem seems to occur when retrieving the job status.
Interestingly, each time the error was thrown in the application (multiple were reported while the job ran), the developer console would also fail with this:
The job id is: 2015-05-04_20_49_53-2521857061976321751
What is this warning/error related to?
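As a client-side mitigation while a 429 like this persists, status polls can be spaced out with a capped, growing interval so monitoring traffic stays under the rate limit. A minimal sketch (the helper, multiplier, and cap are my own assumptions, not part of the Dataflow SDK):

```java
// Hypothetical polling schedule: double the wait between status checks
// after each throttled response, up to a fixed cap.
public class PollInterval {
    public static long next(long currentMs, long capMs) {
        return Math.min(currentMs * 2, capMs);
    }
}
```

Starting at, say, 1 second and capping at 60 seconds keeps the job-status endpoint from being hammered without losing track of the job.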
There was an issue on our end reporting monitoring information. We've rolled back the change, and you should be good now. Please let us know if you're still having issues. Sorry for the trouble!