AWS Lambda Serverless endpoint exits without executing function - POST

We have a POST endpoint in our serverless API which listens for a Magento 2 integration activation callback and processes the payload. The Content-Type of this callback request is application/x-www-form-urlencoded. However, when the callback arrives, the Lambda function finishes execution immediately, skipping the entire function body. All we see in the CloudWatch logs is the following; not even the console.log output is printed (the endpoint only prints a string to the console and has no async operations, yet the problem persists):
2020-12-12T12:24:47.012+05:30 START RequestId: 4afba03d-54ef-4b5e-bd44-157b0b7a9f9b Version: $LATEST
2020-12-12T12:24:47.050+05:30 END RequestId: 4afba03d-54ef-4b5e-bd44-157b0b7a9f9b
2020-12-12T12:24:47.050+05:30 REPORT RequestId: 4afba03d-54ef-4b5e-bd44-157b0b7a9f9b Duration: 37.83 ms Billed Duration: 38 ms Memory Size: 128 MB Max Memory Used: 109 MB Init Duration: 893.79 ms
When we hit the same endpoint from Postman with Content-Type: application/json, the endpoint works as expected.
We therefore suspected that the problem might be the Content-Type header, and read that adding a request mapping template could solve it. So we added a mapping template for the content type application/x-www-form-urlencoded to the integration request of the Lambda method, trying each of the following templates in turn. Unfortunately, our problem was not solved.
{ "body": "$util.base64Decode($input.body)" }
{
  "formparams": $input.json('$')
}
{
  "body": $input.json('$')
}
My question is: how can we get the endpoint to print the POST request payload instead of exiting immediately?
We have been searching for a solution to this problem for a week. Any suggestions would be a great help. Thanks in advance.

Since the Content-Type of the Magento 2 integration activation callback is application/x-www-form-urlencoded, the Lambda event for that POST request looked something like this:
console.log(event) -> {body: "a=var&b=other_var&c=another_var"}
The endpoint didn't print anything because I had written console.log(JSON.parse(event.body)). On a form-encoded body this throws a JSON parse error, so the function finishes execution immediately.
When I started parsing the event body as a query string instead of calling JSON.parse(), the problem was solved.
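For illustration, a minimal sketch of that fix, shown here in Python (the original handler is Node.js; the handler name, return shape, and event layout are assumptions based on the log output above):

# Parse an application/x-www-form-urlencoded Lambda event body as a query
# string instead of treating it as JSON. Hypothetical handler for illustration.
from urllib.parse import parse_qs

def handler(event, context):
    raw_body = event.get("body") or ""
    # "a=var&b=other_var&c=another_var" -> {"a": "var", "b": "other_var", ...}
    params = {key: values[0] for key, values in parse_qs(raw_body).items()}
    print(params)  # JSON.parse / json.loads would throw on this body
    return {"statusCode": 200}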

Related

FastAPI RedirectResponse gets {"message": "Forbidden"} when redirecting to a different route

Please bear with me for a question for which it's nearly impossible to create a reproducible example.
I have an API setup with FastAPI using Docker, Serverless and deployed on AWS API Gateway. All routes discussed are protected with an api-key that is passed into the header (x-api-key).
I'm trying to accomplish a simple redirect from one route to another using fastapi.responses.RedirectResponse. The redirect works perfectly fine locally (though, this is without api-key), and both routes work perfectly fine when deployed on AWS and connected to directly, but something is blocking the redirect from route one (abc/item) to route two (xyz/item) when I deploy to AWS. I'm not sure what could be the issue, because the logs in CloudWatch aren't giving me much to work with.
To illustrate my issue let's say we have route abc/item that looks like this:
@router.get("/abc/item")
async def get_item(item_id: int, request: Request, db: Session = Depends(get_db)):
    if False:
        redirect_url = f"/xyz/item?item_id={item_id}"
        logging.info(f"Redirecting to {redirect_url}")
        return RedirectResponse(redirect_url, headers=request.headers)
    else:
        execution = db.execute(text(items_query))
        return convert_to_json(execution)
So, we check if some value is True/False and if it's False we redirect from abc/item to xyz/item using RedirectResponse(). We pass the redirect_url, which is just the xyz/item route including query parameters and we pass request.headers (as suggested here and here), because I figured we need to pass along the x-api-key to the new route. In the second route we again try a query in a different table (other_items) and return some value.
I have also tried passing status_code=status.HTTP_303_SEE_OTHER and status_code=status.HTTP_307_TEMPORARY_REDIRECT to RedirectResponse() as suggested by some tangentially related questions I found on StackOverflow and the FastAPI discussions, but that didn't help either.
@router.get("/xyz/item")
async def get_item(item_id: int, db: Session = Depends(get_db)):
    execution = db.execute(text(other_items_query))
    return convert_to_json(execution)
As I said, when deployed I can successfully connect directly to abc/item and get a return value when the condition is True, and I can also connect to xyz/item directly and get a correct value from it, but when I pass a value to abc/item that makes the condition False (and thus it should redirect), I get {"message": "Forbidden"}.
In case it can be of any help, I tried debugging this with curl, and the headers that come back give the following info:
Content-Type: application/json
Content-Length: 23
Connection: keep-alive
Date: Wed, 27 Jul 2022 08:43:06 GMT
x-amzn-RequestId: XXXXXXXXXXXXXXXXXXXX
x-amzn-ErrorType: ForbiddenException
x-amz-apigw-id: XXXXXXXXXXXXXXXX
X-Cache: Error from cloudfront
Via: 1.1 XXXXXXXXXXXXXXXXXXXXXXXXX.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: XXXXX
X-Amz-Cf-Id: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
So, this is hinting at a CloudFront error. Unfortunately, I don't see anything even slightly hinting at this API when I look in my CloudFront dashboard on AWS; there is literally nothing there (I do have permissions to view the contents, though).
The API logs in CloudWatch look like this:
2022-07-27T03:43:06.495-05:00 Redirecting to /xyz/item?item_id=1234...
2022-07-27T03:43:06.495-05:00 [INFO] 2022-07-27T08:43:06.495Z Redirecting to /xyz/item?item_id=1234...
2022-07-27T03:43:06.496-05:00 2022-07-27 08:43:06,496 INFO sqlalchemy.engine.Engine ROLLBACK
2022-07-27T03:43:06.496-05:00 [INFO] 2022-07-27T08:43:06.496Z ROLLBACK
2022-07-27T03:43:06.499-05:00 END RequestId: 6f449762-6a60189e4314
2022-07-27T03:43:06.499-05:00 REPORT RequestId: 6f449762-6a60189e4314 Duration: 85.62 ms Billed Duration: 86 ms Memory Size: 256 MB Max Memory Used: 204 MB
I have been wondering if my issue could be related to something I need to add to somewhere in my serverless.yml, perhaps in the functions: part. That currently looks like this for these two routes:
events:
  - http:
      path: abc/item
      method: get
      cors: true
      private: true
      request:
        parameters:
          querystrings:
            item_id: true
  - http:
      path: xyz/item
      method: get
      cors: true
      private: true
      request:
        parameters:
          querystrings:
            item_id: true
Finally, it's probably good to note that I have added custom middleware to FastAPI to handle the two different database connections I need for connecting to other_items and items tables, though I'm not sure how relevant this is, considering this functions fine when redirecting locally. For this I implemented the solution found here. This custom middleware is the reason for the redirect in the first place (we change connection URI based on route with that middleware), so I figured it's good to share this bit of info as well.
Thanks!
As noted here and here, it is not possible to redirect to a page with custom headers set. A redirection in the HTTP protocol doesn't support adding any headers to the target location; it is basically just a header in itself and only allows for a URL (a redirect response could also include body content, if needed; see this answer). When you add the authorization header to the RedirectResponse, you only send that header back to the client.
As suggested here, you could instead use the Set-Cookie HTTP response header:
The Set-Cookie HTTP response header is used to send a cookie from the server to the user agent (client), so that the user agent can send it back to the server later.
In FastAPI—documentation can be found here and here—this can be done as follows:
from fastapi import Request
from fastapi.responses import RedirectResponse

@app.get("/abc/item")
def get_item(request: Request):
    redirect_url = request.url_for('your_endpoints_function_name')  # e.g., 'get_item'
    response = RedirectResponse(redirect_url)
    response.set_cookie(key="fakesession", value="fake-cookie-session-value", httponly=True)
    return response
Inside the other endpoint, the one you are redirecting the user to, you can extract that cookie to authenticate the user. The cookie can be found in request.cookies, which should return, for example, {'fakesession': 'fake-cookie-session-value'}, and you can retrieve it using request.cookies.get('fakesession').
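Continuing the example above, a minimal sketch of the receiving endpoint (the function name and the equality check are assumptions; only the route, cookie name, and cookie value come from the snippet above):

@app.get("/xyz/item")
def get_xyz_item(request: Request):
    # Read the cookie set by the redirecting endpoint; returns None if absent.
    session_value = request.cookies.get("fakesession")
    if session_value != "fake-cookie-session-value":
        return {"message": "Not authenticated"}
    return {"detail": "authenticated via redirect cookie"}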
On a different note, the request.url_for() function accepts only path parameters, not query parameters (such as item_id in your /abc/item and /xyz/item endpoints). Thus, you can either create the URL in the way you already do, or use the CustomURLProcessor suggested here, here and here, which allows you to pass both path and query parameters.
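A small sketch of the first option, building the URL by hand, as a variant of the /abc/item endpoint above (the function names and the use of urlencode are illustrative, not part of the original answer):

from urllib.parse import urlencode

@app.get("/abc/item")
def redirect_to_xyz(request: Request, item_id: int):
    # request.url_for() resolves path parameters only, so query parameters
    # are appended to the resulting URL manually.
    base_url = request.url_for("get_xyz_item")  # endpoint function name from the sketch above
    return RedirectResponse(f"{base_url}?{urlencode({'item_id': item_id})}")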
If the redirection takes place from one domain to another (e.g., from abc.com to xyz.com), please have a look at this answer.

Micropython: https request blocks further requests

I'm on an M5Stack Atom Lite running MicroPython, making POST requests to a given endpoint with a JSON payload. The following code leads to suspicious behaviour:
if (pin1.value()) == True:
    if uart1.any():
        try:
            req = urequests.request(method='POST', url='https://my-server.com/my-endpoint', json={'requestCode': 'yadayada'})
            if req.status_code == 200:
                rgb.setColorAll(0x00ff00)
                rgb.setBrightness(100)
                wait_ms(1500)
                rgb.setBrightness(0)
            else:
                rgb.setColorAll(0xff0000)
                rgb.setBrightness(100)
                wait_ms(1500)
                rgb.setBrightness(0)
        except:
            pass
wait_ms(2)
The first request succeeds and the correct payload is sent to the endpoint. Yet, all subsequent requests fail.
The same holds true for GET requests to https endpoints.
If I change to http, both GET and POST requests work fine, one after another.
Defining the content type in the headers has no effect.
Neither does closing the session right after the request (using headers).
From the second request to an https endpoint onwards, I get this exception:
OSError(-17040, 'MBEDTLS_ERR_RSA_PUBLIC_FAILED+MBEDTLS_ERR_MPI_ALLOC_FAILED')
Does anyone see what I'm doing wrong with these https-requests? Thanks in advance for any hints!
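One detail that may be worth ruling out on a memory-constrained board like this (an assumption on my part, not something stated in the post): a urequests response keeps its socket and TLS buffers allocated until it is explicitly closed. A minimal sketch of closing it before the next request:

import gc
import urequests

def post_event():
    # Hypothetical helper: close the response so its socket and TLS buffers are
    # freed before the next HTTPS request, then collect garbage to reduce
    # heap fragmentation.
    req = None
    try:
        req = urequests.request(method='POST',
                                url='https://my-server.com/my-endpoint',
                                json={'requestCode': 'yadayada'})
        return req.status_code
    finally:
        if req is not None:
            req.close()
        gc.collect()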

HTTP POST returns 21201 - No 'To' Number Specified - MS Flow

Looking to send an HTTP POST through Microsoft Flow/Power Automate to make a voice call in Twilio. I feel like I've tried every iteration possible, but keep getting the error 21201:
{
  "code": 21201,
  "message": "No 'To' number is specified",
  "more_info": "https://www.twilio.com/docs/errors/21201",
  "status": 400
}
Screenshot of Power Automate HTTP Action
I've seen videos of people using Azure Functions with C#, and it feels like I should be able to do what I need here. But I'm not a developer, so maybe I'm way off. Would appreciate any direction on this.
Thanks!
The issue appears to be that you are sending a content type of application/json, whereas Twilio requires application/x-www-form-urlencoded.
Creating or Updating Resources with the HTTP POST and PUT Methods
Also found this:
Custom connector action with x-www-form-urlencoded content-type
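For illustration only, this is what the equivalent form-encoded request looks like in Python with the requests library (the account SID, auth token, and phone numbers are placeholders); the key point is that Twilio's Calls endpoint expects the parameters as application/x-www-form-urlencoded body fields, not JSON:

import requests

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

resp = requests.post(
    f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Calls.json",
    auth=(ACCOUNT_SID, AUTH_TOKEN),
    # Passing a dict via data= sends it as application/x-www-form-urlencoded.
    data={
        "To": "+15558675310",
        "From": "+15017122661",
        "Url": "http://demo.twilio.com/docs/voice.xml",
    },
)
print(resp.status_code, resp.text)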

Create time out scenario in SOAPUI

I am working on a web project in which I have created an HttpURLConnection. I have to test the code path for the InterruptedIOException that should be thrown on a time-out, but even after setting the time-out to 1 ms, my test case completes successfully.
How can I introduce a delay from SoapUI, so that the time-out actually occurs?
If you want to test how a client will react to a timeout, create a mockservice in SoapUI, and have it execute an OnRequest script prior to returning the (usually pre-determined) response. The script can be as simple as:
sleep(60000)
This would give you a 60-second delay before responding.
Select your response in the tree. Then, at the bottom in "MockResponse Properties", use the HTTP Status property of the Response Message and set its value to 408.
If you need to simulate a timeout thrown from the HTTP connectivity layer, it is better to use the script
mockRequest.getHttpResponse().sendError(408)
This will generate an HTML response as well. You may set any HTTP status code you desire.
You can put it in the "OnRequest Script", or in the "Script" of an existing MockResponse.

Supporting the "Expect: 100-continue" header with ASP.NET MVC

I'm implementing a REST API using ASP.NET MVC, and a little stumbling block has come up in the form of the Expect: 100-continue request header for requests with a post body.
RFC 2616 states that:
Upon receiving a request which
includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request. It MUST NOT
perform the requested method if it returns a final status code.
This sounds to me like I need to make two responses to the request, i.e. it needs to immediately send an HTTP 100 (Continue) response, then continue reading from the original request stream (i.e. HttpContext.Request.InputStream) without ending the request, and finally send the resultant status code (for the sake of argument, let's say it's a 204 No Content result).
So, questions are:
Am I reading the specification right, that I need to make two responses to a request?
How can this be done in ASP.NET MVC?
w.r.t. (2) I have tried using the following code before proceeding to read the input stream...
HttpContext.Response.StatusCode = 100;
HttpContext.Response.Flush();
HttpContext.Response.Clear();
...but when I try to set the final 204 status code I get the error:
System.Web.HttpException: Server cannot set status after HTTP headers have been sent.
The .NET Framework by default sends the Expect: 100-continue header for every HTTP 1.1 POST. This behavior can be controlled programmatically per request via the System.Net.ServicePoint.Expect100Continue property, like so:
HttpWebRequest httpReq = GetHttpWebRequestForPost();
httpReq.ServicePoint.Expect100Continue = false;
It can also be globally controlled programmatically:
System.Net.ServicePointManager.Expect100Continue = false;
...or globally through configuration:
<system.net>
  <settings>
    <servicePointManager expect100Continue="false"/>
  </settings>
</system.net>
Thank you Lance Olson and Phil Haack for this info.
100-continue should be handled by IIS. Is there a reason why you want to do this explicitly?
IIS handles the 100.
That said, no it's not two responses. In HTTP, when the Expect: 100-continue comes in as part of the message headers, the client should be waiting until it receives the response before sending the content.
Because of the way ASP.NET is architected, you have little control over the output stream. Any data that gets written to the stream is automatically put in a 200 response with chunked encoding whenever you flush, whether you're in buffered mode or not.
Sadly, all of this is hidden away in internal methods all over the place, and the result is that if you rely on ASP.NET, as MVC does, you're pretty much unable to bypass it.
Wait till you try and access the input stream in a non-buffered way. A whole load of pain.
Seb
