I have a Celery worker scheduling requests to many MS Graph resources. After a few hours I constantly get the following response:
{'error': {'code': 'UnknownError', 'message': '', 'innerError': {'date': '2020-11-27T08:19:26', 'request-id': '714b14e1-e082-4aa9-8ea1-ddc38e84c4b4', 'client-request-id': 'xxx'}}}
This is the real request-id
I schedule requests every 5 minutes and avoid overlapping requests to the same resource.
I've never received a 429, so I assume it is not a throttling issue.
Ask for any extra information you need.
Thanks in advance.
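Not part of the original post, but since the error only carries a request-id and a timestamp, a hedged workaround is to log those innerError fields (they are what Microsoft support asks for) and retry the call with a small backoff when the code is UnknownError. A minimal sketch, assuming a bearer token and a resource path; get_with_retry, GRAPH_BASE, and the retry parameters are all hypothetical:

import time
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"  # assumed base URL

def get_with_retry(path, token, retries=3, backoff=5):
    # GET a Graph resource, logging the innerError details and retrying
    # transient UnknownError / 5xx responses with a growing delay.
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(retries):
        resp = requests.get(GRAPH_BASE + path, headers=headers)
        if resp.ok:
            return resp.json()
        error = resp.json().get("error", {})
        inner = error.get("innerError", {})
        # Keep the request-id and date for any support ticket.
        print("Graph error", error.get("code"),
              "request-id:", inner.get("request-id"),
              "date:", inner.get("date"))
        if error.get("code") != "UnknownError" and resp.status_code < 500:
            break  # not a transient error, stop retrying
        time.sleep(backoff * (attempt + 1))
    resp.raise_for_status()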
We have a POST endpoint in our serverless API which listens to a Magento 2 integration activation callback and processes the payload. The Content-Type of this callback request is application/x-www-form-urlencoded. However, when the callback hits the endpoint, the Lambda function finishes execution immediately, skipping the entire function body. All we see in the CloudWatch logs is the following; not even console.logs are printed. (The endpoint only prints a string to the console. No async operations are in place. Yet the problem persists.)
2020-12-12T12:24:47.012+05:30 START RequestId: 4afba03d-54ef-4b5e-bd44-157b0b7a9f9b Version: $LATEST
2020-12-12T12:24:47.050+05:30 END RequestId: 4afba03d-54ef-4b5e-bd44-157b0b7a9f9b
2020-12-12T12:24:47.050+05:30 REPORT RequestId: 4afba03d-54ef-4b5e-bd44-157b0b7a9f9b Duration: 37.83 ms Billed Duration: 38 ms Memory Size: 128 MB Max Memory Used: 109 MB Init Duration: 893.79 ms
When we try to hit the same endpoint from POSTMAN with Content-Type: application/json, the endpoint works as expected.
Therefore we thought the problem might be the Content-Type header and read somewhere that adding a request mapping template would solve it. So we added a mapping template for content type application/x-www-form-urlencoded in the integration request of the Lambda method, trying the following contents one at a time. Unfortunately our problem was not solved.
{ "body": "$util.base64Decode($input.body)" }
{
"formparams" : $input.json('$')
}
{
"body" : $input.json('$')
}
My question is: how can we get the endpoint to print the POST request payload instead of exiting immediately?
We have been searching for a solution to this problem for a week. It would be a great help if someone could offer their valuable suggestions. Thanks in advance.
Since the Content-Type of the Magento 2 Integration activation callback is application/x-www-form-urlencoded, the lambda event for that POST request was something like this.
console.log(event) -> {body: "a=var&b=other_var&c=another_var"}
The endpoint didn't print anything because I had put console.log(JSON.parse(event.body)). This results in a JSON parse error and the Lambda immediately finishes execution.
When I started parsing the event body as query parameters instead of using JSON.parse(), the problem was solved.
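The handler in the question is Node.js, but the same parsing step is easy to sketch in Python: a minimal example, assuming a Lambda-style event dict whose body field holds the raw url-encoded string (the handler name and return shape are placeholders):

from urllib.parse import parse_qs

def handler(event, context):
    # event["body"] is the raw url-encoded payload, e.g. "a=var&b=other_var&c=another_var"
    params = parse_qs(event["body"])
    # parse_qs returns lists of values: {"a": ["var"], "b": ["other_var"], "c": ["another_var"]}
    print(params)
    return {"statusCode": 200, "body": "ok"}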
I'm working on generating Google Slides presentations and sometimes batchUpdate throws this error:
{
"error": {
"code": 500,
"message": "Internal error encountered.",
"status": "INTERNAL"
}
}
Here's an example of the request body:
Issue:
Your request body is huge. You are requesting many updates to the presentation in a single call. Since you are getting a 500 error, the server is most likely having problems processing this large batch of requests.
It's certainly not a question of write request limits, since you are only making one single (large) write request (and the HTTP status would not be the appropriate one either).
Solution:
In any case, I would suggest splitting your call into as many parts as necessary so that you never hit this error. Group the requests into different request bodies and call batchUpdate several times in succession, as in the sketch below. This should fix your problem.
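A minimal sketch of that idea, assuming service is a Slides client built with google-api-python-client and update_requests is the full list of update requests (both names and the chunk size are placeholders, not from the original post):

CHUNK_SIZE = 50  # shrink this until the 500 errors stop

def chunked_batch_update(service, presentation_id, update_requests):
    # Split the updates into smaller batches and send them one after another.
    for i in range(0, len(update_requests), CHUNK_SIZE):
        chunk = update_requests[i:i + CHUNK_SIZE]
        service.presentations().batchUpdate(
            presentationId=presentation_id,
            body={"requests": chunk},
        ).execute()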
Reference:
presentations.batchUpdate
Slides API: Usage Limits
500 Internal Server Error
When searching for a keyword ("Test email") using the Outlook API, and the keyword exists in my mailbox, I retrieve a result within a second. However, if the keyword ("blablabla") does not exist in the mailbox, I do not get a reply; after almost 2 minutes I get a runtime error.
HTTP GET on: https://outlook.office365.com:443/api/v2.0/users/user#sub.onmicrosoft.com/messages?$search=blablablablabla&$top=100&$select=Sender,ToRecipients,CcRecipients,Weblink,Subject,ReceivedDateTime,BodyPreview
HTTP headers:
Authorization: bearer theToken
x-AnchorMailbox: theEmailAddress
I expected to get an HTTP response that says something like "Could not find any result".
Instead I get no reply for up to two minutes and then a runtime error with that yellow background that instructs me to turn on
<customErrors mode="Off" />
even though the HTTP status code is 200.
I think as an API consumer I should not get such replies.
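For what it's worth, a minimal sketch of the same search call with a client-side timeout, so a keyword with no matches fails fast instead of hanging for two minutes (the token, mailbox address, and timeout value are placeholders, not from the original post):

import requests

def search_messages(keyword, token, mailbox, timeout=10):
    # Search the mailbox and treat an empty "value" array as "no results".
    url = f"https://outlook.office365.com/api/v2.0/users/{mailbox}/messages"
    params = {"$search": keyword, "$top": "100",
              "$select": "Sender,ToRecipients,Subject,ReceivedDateTime,BodyPreview"}
    headers = {"Authorization": f"Bearer {token}", "x-AnchorMailbox": mailbox}
    resp = requests.get(url, params=params, headers=headers, timeout=timeout)
    resp.raise_for_status()
    messages = resp.json().get("value", [])
    if not messages:
        print("Could not find any result")
    return messages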
I'm trying to call the Google Assistant API using Protocol Buffers (protobuf) over HTTP. Refer to:
https://googleapis.github.io/HowToRPC#grpc-fallback-experimental
My problem is that I frequently get HTTP error code 502 when sending request to the back-end service.
To test the problem, I wrote a Python script to send (through HTTP POST) the pre-built protobuf binaries and check the response. The test results are:
32KB audio data (about 1 second of audio): 20 posts, 0 times 502 received, failure rate 0%
2*32KB: 20 posts, 0 failures, failure rate 0%
3*32KB: 20 posts, 3 failures, failure rate 15%
4*32KB: 20 posts, 10 failures, failure rate 50%
6*32KB: 20 posts, 19 failures, failure rate 95%
The HTTP status code is 200 for successful requests and 502 for failed ones.
As we can see, the larger the audio payload, the higher the failure rate.
The Python code to post the pre-built protobuf binary is below; the content of the file fl is just the protobuf binary.
import requests

def postData(fl, repToken):
    # Post a pre-built protobuf binary to the Assist endpoint and save the response.
    url = "https://embeddedassistant.googleapis.com/$rpc/google.assistant.embedded.v1alpha2.EmbeddedAssistant/Assist"
    header = {"Content-Type": "application/x-protobuf",
              "Accept": "text/plain",
              "Connection": "keep-alive",
              "Authorization": repToken}
    # Open in binary mode so the protobuf bytes are sent unmodified.
    with open(fl, "rb") as f:
        r = requests.post(url, data=f, headers=header)
    print(r.status_code)
    with open(fl + "_out", "wb") as fd:
        fd.write(r.content)
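A usage example (the file name and token value are hypothetical placeholders):

postData("assist_request_96k.pb", "Bearer ya29.<access-token>")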
I also tried posting binary files containing invalid protobuf, e.g. an mp3 file,
and in that case, no matter the size of the file, the HTTP status code returned is always 400 with the following message, which is expected.
Invalid PROTO payload received. Invalid request data in stream body, unexpected message type: 7a
It seems the back-end service sets some kind of limit on data-transfer latency, which makes it work poorly over low-bandwidth connections?
When attempting to list the drives for a SharePoint site, I recently began receiving a Microsoft.SharePoint.Client.UnknownError response.
The request and response are as follows:
Request:
https://graph.microsoft.com/v1.0/sites/<site_id>/drives?select=*,system
Response:
request-id: 050894cc-a435-498a-ae0a-ead3d46924f9
client-request-id: 050894cc-a435-498a-ae0a-ead3d46924f9
x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"East US","Slice":"SliceB","Ring":"NA","ScaleUnit":"000","Host":"AGSFE_IN_21","ADSiteName":"EST"}}
{
"error":{
"code":"-1, Microsoft.SharePoint.Client.UnknownError",
"message":"Unknown Error",
"innerError":{
"request-id":"050894cc-a435-498a-ae0a-ead3d46924f9",
"date":"2017-12-06T12:38:44"
}
}
}
The same call was working earlier this week without any changes on my end.
I found a few posts from about a month ago that indicate this is possibly a regression on the MS side of things.
Microsoft Graph Exception code -1 starting this week
Microsoft Graph API for SharePoint in Python: Microsoft.SharePoint.Client.UnknownError
MS Graph API Unknown Error when trying to get folder's children
Is this indeed a regression on the MS side? Or is it a change in API behavior that I need to adjust for?
This was indeed a regression that hit a limited number of scenarios. We've since fixed it so you shouldn't see the unexpected behavior any more.