Request URI too long via POST request - ruby-on-rails

I'm sending a POST request via Net::HTTP as follows:
http = Net::HTTP.new(mixpanel_endpoint.host, mixpanel_endpoint.port)
request = Net::HTTP::Post.new(mixpanel_endpoint.request_uri)
http.request(request)
The issue is that the request_uri exceeds the maximum length; it contains a Base64-encoded string.
Does anybody know what to do about this?
<Net::HTTPRequestURITooLong 414 Request URI Too Long readbody=true>

Net::HTTPRequestURITooLong is a 414 HTTP status code from the server; you will need to change the request to conform to what the endpoint allows.
10.4.15 414 Request-URI Too Long
The server is refusing to service the request because the Request-URI
is longer than the server is willing to interpret. This rare condition
is only likely to occur when a client has improperly converted a POST
request to a GET request with long query information, when the client
has descended into a URI "black hole" of redirection (e.g., a
redirected URI prefix that points to a suffix of itself), or when the
server is under attack by a client attempting to exploit security
holes present in some servers using fixed-length buffers for reading
or manipulating the Request-URI.
reference: https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

Are you adding the data directly to the URL?
Try splitting out the endpoint URL from the data. For example:
request = Net::HTTP::Post.new(request_endpoint)
request.set_form_data("whatever_param_value" => base64_encoded_data)
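For instance, here is a minimal sketch of that idea against a Mixpanel-style tracking endpoint; the URL, the data field name, and the payload are illustrative assumptions, not taken from the question:

require "net/http"
require "uri"
require "base64"
require "json"

# Illustrative endpoint and payload; substitute your own values.
uri = URI("https://api.mixpanel.com/track")
payload = Base64.strict_encode64({ event: "signup" }.to_json)

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = (uri.scheme == "https")

# Keep the request line short; the encoded data travels in the POST body.
request = Net::HTTP::Post.new(uri.path)
request.set_form_data("data" => payload)

response = http.request(request)
puts response.code

Because the encoded payload now lives in the request body rather than the URI, the request line stays short and the 414 should go away.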

Related

Is there any way to get the redirect URI?

Background:
Suppose we have a WebAssembly (wasm) module built from .NET code.
This wasm uses HttpClient and HttpClientHandler to access a backend API at https://api.uri.
The actual backend API location might change over time (e.g. https://api.uri/version-5), but the fixed endpoint remains and redirects (3xx response) to the current location (within the same domain).
The API allows CORS, i.e. it sends headers such as Access-Control-Allow-Origin: * in its responses.
In the normal (non-wasm) world, one simply (a minimal HttpClient sketch follows these steps):
Plainly GETs https://api.uri with no additional headers (CORS-safe).
Retrieves the Location: header (containing e.g. https://api.uri/version-5) from the 3xx response as the final URI.
GETs/POSTs the final URI with additional headers as needed (e.g. custom, auth, etc.).
Note: In an ideal world, the redirection would be handled transparently and the first two steps could simply be omitted.
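For reference, a minimal HttpClient sketch of that normal-world flow might look like this (the URIs and header values are the illustrative ones from the question); per the restrictions below, this is exactly what breaks under wasm:

using System.Net.Http;
using System.Net.Http.Headers;

// Manual redirect handling, as described in steps 1-3 above.
var handler = new HttpClientHandler { AllowAutoRedirect = false };
using var client = new HttpClient(handler);

// 1) Plain GET with no extra headers (CORS-safe), expecting a 3xx.
var probe = await client.GetAsync("https://api.uri");

// 2) Take the Location header of the 3xx response as the final URI.
var finalUri = probe.Headers.Location; // e.g. https://api.uri/version-5

// 3) GET/POST the final URI with the full set of headers.
var request = new HttpRequestMessage(HttpMethod.Get, finalUri);
request.Headers.Authorization = new AuthenticationHeaderValue("Basic", "BlaBlaBase64=");
request.Headers.Add("Custom", "Cool-Value");
var response = await client.SendAsync(request);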
In the wasm world, however:
You are not allowed to (let the wasm/browser) send OPTIONS pre-flight requests to a redirecting endpoint (https://api.uri).
You can't send any non-CORS-safelisted headers if you want to avoid pre-flight requests (the reason for the two stages, plain and full, described above).
You can't see the Location: header value (like https://api.uri/version-5) when attempting manual redirection (HttpClientHandler.AllowAutoRedirect = false), because the response is artificially crafted with an HTTP status code of 0 and ReasonPhrase == "opaqueredirect" (an adaptation to the browser's Fetch API). What nonsense! #1...
You can't see the auto-followed Location: header value in response.RequestMessage?.RequestUri when using the (default) automatic redirection (HttpClientHandler.AllowAutoRedirect = true), because it still contains the original URI (https://api.uri) instead of the expected auto-followed one (https://api.uri/version-5). What nonsense! #2...
You can't send the full-blown request with all the headers and rely on automatic redirection, because that would trigger a pre-flight, which is still not allowed on the redirecting endpoint.
So, the obvious question is:
Is there ANY way to handle such a simple scenario from WebAssembly (without crashing on CORS)?
GET https://api.uri => 3xx, Location: https://api.uri/version-5
GET https://api.uri/version-5, Authorization: Basic BlaBlaBase64= ; Custom: Cool-Value => 200
Note: All of this was discovered with the Uno Platform wasm head, but I believe it applies to any .NET wasm.
Note: I also guess that "disabled" CORS (on the request side, via Sec-Fetch-Mode: no-cors) wouldn't help either, since such a request is then not allowed to carry additional headers/methods, right?

MicroPython: HTTPS request blocks further requests

I'm on an M5Stack Atom Lite running MicroPython, making POST requests to a given endpoint with a JSON payload. The following code leads to suspicious behaviour:
if (pin1.value()) == True:
    if uart1.any():
        try:
            req = urequests.request(method='POST', url='https://my-server.com/my-endpoint', json={'requestCode':'yadayada'})
            if req.status_code == 200:
                rgb.setColorAll(0x00ff00)
                rgb.setBrightness(100)
                wait_ms(1500)
                rgb.setBrightness(0)
            else:
                rgb.setColorAll(0xff0000)
                rgb.setBrightness(100)
                wait_ms(1500)
                rgb.setBrightness(0)
        except:
            pass
wait_ms(2)
The first request succeeds and the correct payload is sent to the endpoint, yet all subsequent requests fail.
The same holds true for GET requests to HTTPS endpoints.
If I change to HTTP, both GET and POST requests work fine, one after another.
Defining the content type in the headers has no effect.
Neither does closing the session right after the request (using headers).
From the second request to an HTTPS endpoint onward, I get the exception:
OSError(-17040, 'MBEDTLS_ERR_RSA_PUBLIC_FAILED+MBEDTLS_ERR_MPI_ALLOC_FAILED')
Does anyone see what I'm doing wrong with these https-requests? Thanks in advance for any hints!
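Not an authoritative answer, but one common culprit on memory-constrained boards is that each urequests response keeps its (TLS) socket and buffers allocated until it is explicitly closed, so the second HTTPS handshake runs out of heap and fails with MBEDTLS_*_ALLOC_FAILED errors. A sketch of that workaround, assuming this is indeed the cause (URL and payload taken from the question):

import gc
import urequests

def post_json(url, payload):
    # Explicitly close each response (and collect garbage) so the TLS socket
    # and buffers are freed before the next HTTPS request.
    req = None
    try:
        req = urequests.request(method='POST', url=url, json=payload)
        return req.status_code
    finally:
        if req:
            req.close()   # releases the underlying (TLS) socket and buffers
        gc.collect()      # reclaim heap before the next handshake

status = post_json('https://my-server.com/my-endpoint', {'requestCode': 'yadayada'})

It may also help to replace the bare except: pass with some logging, so failures like this one are not silently swallowed.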

AWS API Gateway issue for HTTP Method

I created an AWS API Gateway resource for an HTTP PUT method. When I test it in the API Gateway console, it works fine, but when I call it from a REST client, I get 404 / bad request / missing authentication token errors. I didn't set authorization to true or require an API key.
I passed these query parameters to a REST client:
auth_id : 8798iuyiu123123
time_stamp :1231231
test_json : [{"id"=>"1","value"=>"mount"},{"id"=>"2","value"=>"chart"}]
HEADER
content-type : application/json
When I change the test_json value to %5B%7B%22id%22:%221%22,%22value%22:%22test%22%7D,%7B%22id%22:%222%22,%22value%22:%2213+%D8%B4%D8%A7%D8%B1%D8%, I get a response.
I am new to React and am calling the API from React:
Request.put('https://api-gateway.sqwdwed123.com/eretw/update-chart')
.set('Content-Type', 'application/json')
.query({ auth_id: localStorage.auth_id})
.query({ time_stamp:this.props.time_stamp})
.query({ test_json:JSON.stringify(newadd)})
Should I pass this test_json through the body?
Am I doing anything wrong?
This is usually related to requesting a URL that doesn't exist. Please make sure you're using the correct HTTP method and resource path to a valid resource (the sample invoke URL does not include any resource path). If this still doesn't work, make sure you actually deployed your API.
The Bad Request response is because your query parameters are not URL-encoded. There are two things you can do:
Pass test_json as a query param, making sure it is URL-encoded. This puts a restriction on the size of the string and is hence not recommended.
Pass test_json in the request body (recommended), as in the sketch below.
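For example, assuming the client in the question is superagent, a sketch of the recommended variant would move test_json into the body with .send() while keeping the small identifiers in the query string (the field names are the ones from the question):

Request.put('https://api-gateway.sqwdwed123.com/eretw/update-chart')
  .set('Content-Type', 'application/json')
  .query({ auth_id: localStorage.auth_id })
  .query({ time_stamp: this.props.time_stamp })
  .send({ test_json: newadd })           // in the body: no URL-encoding, no length limit
  .then(res => console.log(res.status))
  .catch(err => console.error(err));

Note that the API Gateway integration (and the backend) must then read test_json from the request body rather than from the query string.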

Maximum length of URL fragments (hash)

Is there a length limit for the fragment part of a URL (also known as the hash)?
The hash is client side only, so the rules for HTTP may not apply to it.
It depends on the browser.
I found that in Safari, Chrome, and Firefox, a URL with a long hash is legal, but if the same data is sent to the server as a parameter, the browser or server will return a 414 or 413 error.
For example:
A URL like http://www.stackoverflow.com/?abc#{hash value with 100 thousand characters} will be fine, and you can use location.hash to read the value in JavaScript. But a URL like http://www.stackoverflow.com/?abc&{query with 100 thousand characters} will be rejected: if you paste it into the address bar, a 413 error is returned with the message "the client issued a request that was too long", and if it is a link in a web page, on my machine Nginx responds with a 414 error.
I don't know the situation in IE.
So I think the URL length limit only applies to what is transmitted to the HTTP server; the browser checks it sometimes, but not always, and a long value is always allowed as a hash.
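A quick illustration of that point, runnable in a browser console (the length is arbitrary): the fragment stays entirely on the client and is never sent with the HTTP request.

// Build a very long fragment and attach it to the current URL.
const longHash = 'x'.repeat(100000);
location.hash = longHash;              // updates the URL; no request is made
console.log(location.hash.length);     // 100001 (includes the leading '#')
// Anything after '#' is stripped before the request line is sent,
// so the server (and its 413/414 limits) never sees it.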
There is definitely a limit on the length of the whole URL. See:
RFC2616 - Hypertext Transfer Protocol
Maximum URL length is 2,083 characters in Internet Explorer

Supporting the "Expect: 100-continue" header with ASP.NET MVC

I'm implementing a REST API using ASP.NET MVC, and a little stumbling block has come up in the form of the Expect: 100-continue request header for requests with a post body.
RFC 2616 states that:
Upon receiving a request which includes an Expect request-header field with the "100-continue" expectation, an origin server MUST either respond with 100 (Continue) status and continue to read from the input stream, or respond with a final status code. The origin server MUST NOT wait for the request body before sending the 100 (Continue) response. If it responds with a final status code, it MAY close the transport connection or it MAY continue to read and discard the rest of the request. It MUST NOT perform the requested method if it returns a final status code.
This sounds to me like I need to make two responses to the request: it needs to immediately send an HTTP 100 Continue response, then continue reading from the original request stream (i.e. HttpContext.Request.InputStream) without ending the request, and finally send the resultant status code (for the sake of argument, let's say it's a 204 No Content result).
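Concretely, the exchange the RFC describes looks roughly like this on the wire; the 100 is an interim (1xx) response sent on the same connection before the final status (the path and body length here are illustrative):

POST /resource HTTP/1.1
Host: example.com
Content-Length: 1234
Expect: 100-continue
                          (client pauses before sending the body)
HTTP/1.1 100 Continue
                          (client now sends the 1234-byte body)
HTTP/1.1 204 No Content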
So, questions are:
Am I reading the specification right, that I need to make two responses to a request?
How can this be done in ASP.NET MVC?
w.r.t. (2) I have tried using the following code before proceeding to read the input stream...
HttpContext.Response.StatusCode = 100;
HttpContext.Response.Flush();
HttpContext.Response.Clear();
...but when I try to set the final 204 status code I get the error:
System.Web.HttpException: Server cannot set status after HTTP headers have been sent.
The .NET Framework by default sends the Expect: 100-continue header for every HTTP 1.1 POST. This behavior can be controlled programmatically per request via the System.Net.ServicePoint.Expect100Continue property, like so:
HttpWebRequest httpReq = GetHttpWebRequestForPost();
httpReq.ServicePoint.Expect100Continue = false;
It can also be globally controlled programmatically:
System.Net.ServicePointManager.Expect100Continue = false;
...or globally through configuration:
<system.net>
<settings>
<servicePointManager expect100Continue="false"/>
</settings>
</system.net>
Thank you Lance Olson and Phil Haack for this info.
100-continue should be handled by IIS. Is there a reason why you want to do this explicitly?
IIS handles the 100.
That said, no, it's not two responses. In HTTP, when Expect: 100-continue comes in as part of the message headers, the client should wait until it receives the 100 (Continue) interim response before sending the content.
Because of the way ASP.NET is architected, you have little control over the output stream. Any data written to the stream is automatically put into a 200 response with chunked encoding whenever you flush, whether you're in buffered mode or not.
Sadly, all of this is hidden away in internal methods, so if you rely on ASP.NET, as MVC does, you're pretty much unable to bypass it.
Wait till you try and access the input stream in a non-buffered way. A whole load of pain.
Seb
