I have an application which makes a GET request whose URL is about 18k characters long. If this request goes through HAProxy, I immediately get a 400. If I hit my service directly, everything is fine. Is there a parameter in HAProxy which sets the maximum length of a request URL?
Thanks in advance
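A hedged pointer on where that limit usually comes from: HAProxy must fit the entire request line plus headers into a single buffer, and rejects anything larger with an immediate 400. A minimal sketch of the global tuning involved (the value is an example, not a recommendation):
global
    # The request line plus all headers must fit into tune.bufsize bytes
    # (default 16384), minus the tune.maxrewrite reserve kept for header
    # rewriting - so an 18k URL overflows the default buffer.
    tune.bufsize 32768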
I have been scouring the internet with no luck. I have a basic Lua script for HAProxy, which looks like this:
core.Info("LUA script for parsing request ID element - loaded");
function parseId(txn, salt)
local payload = txn.sf:req_body()
-- parses hex value from element named "ID". Example payload: {"Platform":"xyz.hu","RecipientId":"xyz.hu","Channel":"xyz","CallbackURL":"http://x.x.x.x:123","ID":"5f99453d000000000a0c5164233e0002"}
local value = string.sub(string.match(payload, "\"ID\":\"[0-9a-f]+\""), 8, -2)
core.Info("ID : " .. value)
return value
end
-- register HAProxy "fetch"
core.register_fetches("parseId", parseId)
It does what it says: it takes a 32-character ID from an incoming request. In the HAProxy config file, the result is used for sticky-session handling:
stick-table type string len 32 size 30k expire 30m
stick on "lua.parseId" table gw_back
This produces two log lines for each request:
ID: xyz, logged from the Lua script
The detailed request data, logged from the HAProxy config file using "log-format", e.g.:
Jan 20 22:13:52 localhost haproxy[12991]: Client IP:port = [x.x.x.x:123], Start Time = [20/Jan/2022:22:13:52.069], Frontend Name = [gw_front], Backend Name = [gw_back], Backend Server = [gw1], Time to receive full request = [0 ms], Response time = [449 ms], Status Code = [200], Bytes Read = [308], Request = ["POST /Gateway/init HTTP/1.1"], ID = [""], Request Body = [{"Platform":"xyz.hu","RecipientId":"xyz.hu","Channel":"xyz","CallbackURL":"http://x.x.x.x:123","ID":"61e9d03e000000000a0c5164233e0002"}]
I wanted to extend the logging because of some strange issues that happen occasionally, so I wanted to try one (or both) of the approaches below:
Pass the "ID" value back from the LUA script into the HAProxy config as a variable, and log it along with the request details. I can log the full request body, but don't want to due to GDPR and whatnot.
Get some request details in the LUA script itself, and log it along with the ID.
So, basically, I want to be able to connect the ID with the request details: if multiple requests hit the same URL in quick succession, it is difficult to tell which of them belongs to a specific ID. However, I couldn't get either approach to work.
For the first one, I added this line to the Lua script before returning the "value" variable:
txn:set_var("req_id", value)
I was hoping this would create a variable in HAProxy called "req_id" that I could log with "log-format", but all I got was an empty string:
ID = [""]
For the second one, I'm at a complete loss: I'm not able to find ANY documentation on these functions. E.g. txn.sf:req_body(), which I know works, I simply cannot find documented anywhere, so I'm not sure what other functions are available to fetch request details.
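For reference, a hedged sketch of how request details can be read inside the script. This rests on an assumption inferred from the pattern that makes txn.sf:req_body() work: txn.sf exposes HAProxy's sample fetches as string-returning methods, with dots in fetch names replaced by underscores (req.body becomes req_body), so the standard fetches should be reachable the same way:
local method = txn.sf:method()  -- e.g. "POST"
local path = txn.sf:path()      -- e.g. "/Gateway/init"
local client = txn.sf:src()     -- client IP address
core.Info("ID : " .. value .. " " .. method .. " " .. path .. " from " .. client)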
Any ideas for either or both of my approaches? I'm attaching my full HAProxy config here at the end, just in case:
global
    log 127.0.0.1 len 10000 local2 debug
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    lua-load /opt/LUA/parseId.lua
    stats socket /etc/haproxy/haproxysock level admin

defaults
    log global
    option httplog
    option dontlognull
    mode http
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    # Request body is temporarily logged in test environment
    log-format "Client IP:port = [%ci:%cp], Start Time = [%tr], Frontend Name = [%ft], Backend Name = [%b], Backend Server = [%s], Time to receive full request = [%TR ms], Response time = [%Tr ms], Status Code = [%ST], Bytes Read = [%B], Request = [%{+Q}r], ID = [%{+Q}[var(txn.req_id)]], Request Body = [%[capture.req.hdr(0)]]"

frontend gw_front
    bind *:8776
    option http-buffer-request
    declare capture request len 40000
    http-request capture req.body id 0
    http-request track-sc0 src table gw_back
    use_backend gw_back

backend gw_back
    balance roundrobin
    stick-table type string len 32 size 30k expire 30m
    stick on "lua.parseId" table gw_back
    # Use HTTP check mode with /ping interface instead of TCP-only check
    option httpchk POST /Gateway/ping
    server gw1 x.x.x.x:8080 check inter 10s
    server gw2 y.y.y.y:8080 check inter 10s

listen stats
    bind *:8774 ssl crt /etc/haproxy/haproxy.cer
    mode http
    maxconn 5
    stats enable
    stats refresh 10s
    stats realm Haproxy\ Statistics
    stats uri /stats
    stats auth user:password
I'm on an M5Stack Atom Lite running MicroPython, making POST requests to a given endpoint with a JSON payload. The following code leads to suspect behaviour:
if pin1.value():  # input pin is high
    if uart1.any():  # data is waiting on the UART
        try:
            req = urequests.request(method='POST',
                                    url='https://my-server.com/my-endpoint',
                                    json={'requestCode': 'yadayada'})
            if req.status_code == 200:
                rgb.setColorAll(0x00ff00)  # green: request succeeded
                rgb.setBrightness(100)
                wait_ms(1500)
                rgb.setBrightness(0)
            else:
                rgb.setColorAll(0xff0000)  # red: non-200 response
                rgb.setBrightness(100)
                wait_ms(1500)
                rgb.setBrightness(0)
        except:
            pass  # errors are swallowed silently
wait_ms(2)
The first request succeeds and the correct payload is sent to the endpoint. Yet, all subsequent requests fail.
The same holds true for GET requests to https endpoints.
If I change to http, both GET and POST requests work fine, one after another.
Defining the content type in the headers has no effect.
Neither does closing the session right after the request (using headers).
From the second request to an https endpoint onwards, I get the exception:
OSError(-17040, 'MBEDTLS_ERR_RSA_PUBLIC_FAILED+MBEDTLS_ERR_MPI_ALLOC_FAILED')
Does anyone see what I'm doing wrong with these https-requests? Thanks in advance for any hints!
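One hedged thing worth ruling out: MBEDTLS_ERR_MPI_ALLOC_FAILED is mbedTLS failing to allocate memory, so the usual suspect is a heap too small or too fragmented for a second TLS handshake. Explicitly closing the previous response object (which is different from sending a close header) and running the garbage collector before each request frees those buffers. A minimal sketch, assuming the same endpoint and payload as in the question:
import gc
import urequests

def post_once():
    req = None
    try:
        req = urequests.request(method='POST',
                                url='https://my-server.com/my-endpoint',
                                json={'requestCode': 'yadayada'})
        return req.status_code
    finally:
        if req is not None:
            req.close()  # releases the socket and SSL buffers
        gc.collect()     # frees dead objects before the next handshake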
I'm sending a POST request via Net::HTTP like this:
http = Net::HTTP.new(mixpanel_endpoint.host, mixpanel_endpoint.port)
request = Net::HTTP::Post.new(mixpanel_endpoint.request_uri)
http.request(request)
The issue is that the request_uri is over the maximum length the server accepts; it's a Base64-encoded string.
Does anybody know what to do about this?
<Net::HTTPRequestURITooLong 414 Request URI Too Long readbody=true>
Net::HTTPRequestURITooLong is a 414 HTTP status code from the server; you will need to change the request to conform to what the endpoint allows.
10.4.15 414 Request-URI Too Long
The server is refusing to service the request because the Request-URI
is longer than the server is willing to interpret. This rare condition
is only likely to occur when a client has improperly converted a POST
request to a GET request with long query information, when the client
has descended into a URI "black hole" of redirection (e.g., a
redirected URI prefix that points to a suffix of itself), or when the
server is under attack by a client attempting to exploit security
holes present in some servers using fixed-length buffers for reading
or manipulating the Request-URI.
reference: https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
Are you adding the data directly to the URL? Try moving it out of the URL and into the request body instead. Note that the second argument of Net::HTTP::Post.new is a hash of headers, not the body, so the form data has to be set separately. For example:
request = Net::HTTP::Post.new(mixpanel_endpoint.path)
request.set_form_data("whatever_param_value" => base64_encoded_data)
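A shorter equivalent, if a one-shot request is enough (a sketch using the same placeholder parameter name as above):
res = Net::HTTP.post_form(mixpanel_endpoint, "whatever_param_value" => base64_encoded_data)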
I am using JMeter to get the number of bytes, response timings, and request status for a series of requests.
Some requests get called/executed a repeated number of times; how can I make sure that my JMeter test plan repeats those network requests the correct number of times (N)?
In your Thread Group, set the loop count to the number of times (N) you want the request to be repeated.
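In case it helps to see what that field writes to disk, a hedged sketch of the matching fragment of a saved .jmx file (an assumption about the layout, with attributes trimmed for brevity; the LoopController.loops property is what the Loop Count field in the Thread Group GUI sets):
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <stringProp name="LoopController.loops">5</stringProp> <!-- repeat N=5 times -->
  </elementProp>
  <stringProp name="ThreadGroup.num_threads">1</stringProp>
</ThreadGroup>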
Is there a length limit for the fragment part of a URL (also known as the hash)?
The hash is client side only, so the rules for HTTP may not apply to it.
It depends on the browser.
I found that in Safari, Chrome, and Firefox, a URL with a long hash is legal, but if the same data is sent to the server as a parameter, the browser will display a 414 or 413 error.
For example, a URL like http://www.stackoverflow.com/?abc#{hash value with 100 thousand characters} will be fine, and you can use location.hash to read the value in JavaScript. But a URL like http://www.stackoverflow.com/?abc&{query with 100 thousand characters} is not: if you paste that link into the address bar, a 413 status code is returned with the message "the client issued a request that was too long", and if it is a link in a web page, on my machine Nginx responds with the 414 error.
I don't know the situation in IE.
So I think the length limitation comes from the transmission or the HTTP server; the browser checks it sometimes, but not every time, and a long value is always allowed as a hash.
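A quick sketch to see this split for yourself in a browser console (any long string works as the hash):
// The fragment is readable and writable client-side...
location.hash = "#" + "a".repeat(100000);  // the browser accepts it
console.log(location.hash.length);         // 100001
// ...but it is never transmitted: the request line the server saw for
// this page contains no fragment - hashes stay in the browser.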
There is definitely a limit on the length of the whole URL.
Read RFC2616 - Hypertext Transfer Protocol.
For example, the maximum URL length in Internet Explorer is 2,083 characters.