Limiting per-user backend resource usage with nginx proxy - ruby-on-rails

I am using nginx to proxy to a unicorn upstream running a Ruby on Rails application. I want to be able to limit the total amount of backend resources a single user (IP address) can consume. By backend resources, I mean the number of active requests a user can have running on the upstream unicorn processes at once.
So for example, if an IP address already has 2 writing connections to a particular upstream, I want any further requests to be queued by nginx until one of the previously opened connections completes. Note that I don't want requests to be dropped: they should just wait until the number of writing connections drops below 2 for that user.
This way, I can ensure that even if one user fires off many requests for a very time-consuming action, they don't consume all of the available upstream unicorn workers, and some unicorn workers are still available to serve other users.
It seems like ngx_http_limit_conn_module might be able to do this, but the documentation is not clear enough for me to be sure.
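For reference, a minimal limit_conn sketch (the zone name peraddr and the limit of 2 are illustrative): limit_conn caps concurrent requests per key, but when the cap is hit it rejects the excess with a 503 by default rather than queueing, so on its own it doesn't give the wait-in-line behavior described here.
http {
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;
    server {
        location / {
            # at most 2 concurrent requests per client IP;
            # request number 3 gets a 503, it is not queued
            limit_conn peraddr 2;
        }
    }
}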
Another way to think about the problem: I want to protect against DoS (but not DDoS, i.e. I only care about DoS from one IP at a time) by making the server appear to any one IP address as if it can process N simultaneous requests, when in reality it can process 10*N. In other words, I am limiting the simultaneous requests from any one IP to 1/10th of the server's real capacity. Just like a normal server behaves, when the number of simultaneous workers is exceeded, requests are queued until previous requests have completed.

You can use the limit_req module:
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
It doesn't limit the number of connections, but it limits requests per second. Just use a large burst to delay requests rather than drop them.
Here's an example:
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
    ...
    server {
        ...
        location / {
            limit_req zone=one burst=50;
        }
    }
}
Say you know that the average request processing time is 1 second; setting the limit to 2r/s then keeps only about two workers busy with this particular IP address (approximately, of course: by Little's law, in-flight requests ≈ rate × processing time, so 2 r/s × 1 s ≈ 2). If a request takes 0.5 seconds to complete, you can set 4r/s.

If you know the time-consuming URLs, you can use the limit_req module to enforce 2r/s for the long requests and no limit for short requests.
http {
    ...
    #
    # Limit the request processing rate per IP.
    # If the key is an empty string (here: when $custom_remote_addr is 127.0.0.1),
    # limit_req_zone does not count the request.
    #
    geo $custom_remote_addr $custom_limit_ip {
        default   $binary_remote_addr;
        127.0.0.1 "";
    }
    limit_req_zone $custom_limit_ip zone=perip:10m rate=2r/s;
    ...
    server {
        ...
        # By default, do not enforce the rate limit for the remote IP
        set $custom_remote_addr 127.0.0.1;
        # If the URI matches a super time-consuming request, limit to 2r/s
        if ($uri ~* "^/super-long-requests") {
            set $custom_remote_addr $remote_addr;
        }
        limit_req zone=perip burst=50;
        ...
    }
}
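As a side note (my variation, not from the original answer), the same keying can be written with a map block instead of geo + if, which avoids if in server context; the zone name perip is reused from above:
map $uri $custom_limit_ip {
    default                   "";
    ~*^/super-long-requests   $binary_remote_addr;
}
limit_req_zone $custom_limit_ip zone=perip:10m rate=2r/s;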

Related

HAProxy 2.0 LUA Fetches API - how to get request details and how to pass variable back to HAProxy

I have been scouring the internet with no luck. I have a basic Lua script for HAProxy, which looks like this:
core.Info("LUA script for parsing request ID element - loaded");
function parseId(txn, salt)
local payload = txn.sf:req_body()
-- parses hex value from element named "ID". Example payload: {"Platform":"xyz.hu","RecipientId":"xyz.hu","Channel":"xyz","CallbackURL":"http://x.x.x.x:123","ID":"5f99453d000000000a0c5164233e0002"}
local value = string.sub(string.match(payload, "\"ID\":\"[0-9a-f]+\""), 8, -2)
core.Info("ID : " .. value)
return value
end
-- register HAProxy "fetch"
core.register_fetches("parseId", parseId)
What it does is what it says: it takes a 32-character-long ID from an incoming request. In the HAProxy config file, the result is used for sticky-session handling:
stick-table type string len 32 size 30k expire 30m
stick on "lua.parseId" table gw_back
This produces two lines of log for each request:
ID: xyz, which is logged from the Lua script
The detailed request data, which is logged from the HAProxy config file using "log-format", e.g.:
Jan 20 22:13:52 localhost haproxy[12991]: Client IP:port = [x.x.x.x:123], Start Time = [20/Jan/2022:22:13:52.069], Frontend Name = [gw_front], Backend Name = [gw_back], Backend Server = [gw1], Time to receive full request = [0 ms], Response time = [449 ms], Status Code = [200], Bytes Read = [308], Request = ["POST /Gateway/init HTTP/1.1"], ID = [""], Request Body = [{"Platform":"xyz.hu","RecipientId":"xyz.hu","Channel":"xyz","CallbackURL":"http://x.x.x.x:123","ID":"61e9d03e000000000a0c5164233e0002"}]
I wanted to extend logging due to some strange issues happening sometimes, so I wanted to try one (or both) of the approaches below:
Pass the "ID" value back from the LUA script into the HAProxy config as a variable, and log it along with the request details. I can log the full request body, but don't want to due to GDPR and whatnot.
Get some request details in the LUA script itself, and log it along with the ID.
So, basically, I want to be able to connect the ID with the request details. If multiple requests come to the same URL very quickly, it is difficult to find which of them belongs to a specific ID. However, I couldn't accomplish either of these.
For the first one, I added this line into the Lua script before returning the "value" variable:
txn:set_var("req_id", value)
I was hoping this would create a variable in HAProxy called "req_id" that I could log with "log-format", but all I got was an empty string:
ID = [""]
For the second one, I'm at a complete loss, and I'm not able to find any documentation on these. For example, the txn.sf:req_body() function, which I know works, I simply cannot find documented anywhere, so I'm not sure what other functions are available to get request details.
Any ideas for either or both of my approaches? I'm attaching my full HAProxy config here at the end, just in case:
global
    log 127.0.0.1 len 10000 local2 debug
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    lua-load /opt/LUA/parseId.lua
    stats socket /etc/haproxy/haproxysock level admin

defaults
    log global
    option httplog
    option dontlognull
    mode http
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    # Request body is temporarily logged in test environment
    log-format "Client IP:port = [%ci:%cp], Start Time = [%tr], Frontend Name = [%ft], Backend Name = [%b], Backend Server = [%s], Time to receive full request = [%TR ms], Response time = [%Tr ms], Status Code = [%ST], Bytes Read = [%B], Request = [%{+Q}r], ID = [%{+Q}[var(txn.req_id)]], Request Body = [%[capture.req.hdr(0)]]"

frontend gw_front
    bind *:8776
    option http-buffer-request
    declare capture request len 40000
    http-request capture req.body id 0
    http-request track-sc0 src table gw_back
    use_backend gw_back

backend gw_back
    balance roundrobin
    stick-table type string len 32 size 30k expire 30m
    stick on "lua.parseId" table gw_back
    # Use HTTP check mode with /ping interface instead of TCP-only check
    option httpchk POST /Gateway/ping
    server gw1 x.x.x.x:8080 check inter 10s
    server gw2 y.y.y.y:8080 check inter 10s

listen stats
    bind *:8774 ssl crt /etc/haproxy/haproxy.cer
    mode http
    maxconn 5
    stats enable
    stats refresh 10s
    stats realm Haproxy\ Statistics
    stats uri /stats
    stats auth user:password

Set Maximum content-length Accepted

Is there a way in Rails to specify the maximum allowed content-length so that requests that exceed this value are rejected immediately?
I have a login form on my application that is the only POST available to an unauthenticated user. This has been identified as a potential vulnerability to a slow POST DoS attack. One of the mitigations is to limit the allowed request size.
I cannot seem to find the knob to turn which will allow me to automatically reject the request if the content-length exceeds a particular value.
We're using the Puma web server if that affects the answer.
Puma actually has two parameters: the number of threads and the number of workers. If we slightly change the default puma.rb, it will look like this:
workers Integer(ENV['WORKERS_NUMBER'] || 1)
max_threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 1)
min_threads_count = max_threads_count
threads min_threads_count, max_threads_count
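Note that the threads/workers knobs above don't reject oversized bodies by themselves. As a sketch of the content-length check itself (my illustration; the class name and the 1 MB limit are arbitrary), a small Rack middleware can refuse such requests before they reach Rails:
class RejectLargeRequests
  MAX_BYTES = 1_048_576  # 1 MB; size this for your login form

  def initialize(app)
    @app = app
  end

  def call(env)
    # CONTENT_LENGTH is absent for bodyless requests; nil.to_i is 0
    if env['CONTENT_LENGTH'].to_i > MAX_BYTES
      # 413 Payload Too Large; the body is never read
      [413, { 'Content-Type' => 'text/plain' }, ['Payload Too Large']]
    else
      @app.call(env)
    end
  end
end

# config/application.rb: insert it at the front of the stack
#   config.middleware.insert_before 0, RejectLargeRequests
Bear in mind this trusts the declared Content-Length header; a slow POST that trickles the body still needs a server-level or proxy-level timeout in front of Puma.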

DocumentDB returns "Request rate is large", Parse on Azure

I'm running Parse on Azure (Parse Server on managed Azure services).
It includes DocumentDB as the database, which has a limit on requests per second.
Some Parse cloud functions are large and the request rate is too high (even for the S3 tier), so I'm getting this error (seen using Visual Studio Team Services (was Visual Studio Online) and streaming logs):
error: Uncaught internal server error. { [MongoError: Message: {"Errors":["Request rate is large"]}
ActivityId: a4f1e8eb-0000-0000-0000-000000000000, Request URI: rntbd://10.100.99.69:14000/apps/f8a35ed9-3dea-410f-a89a-28650ff41381/services/2d8e5320-89e6-4750-a06f-174c12013c69/partitions/53e8a085-9fed-4880-bd90-f6191765f625/replicas/131091039101528218s]
name: 'MongoError',
message: 'Message: {"Errors":["Request rate is large"]}\r\nActivityId: a4f1e8eb-0000-0000-0000-000000000000, Request URI: rntbd://10.100.99.69:14000/apps/f8a35ed9-3dea-410f-a89a-28650ff41381/services/2d8e5320-89e6-4750-a06f-174c12013c69/partitions/53e8a085-9fed-4880-bd90-f6191765f625/replicas/131091039101528218s' } MongoError: Message: {"Errors":["Request rate is large"]}
ActivityId: a4f1e8eb-0000-0000-0000-000000000000, Request URI: rntbd://10.100.99.69:14000/apps/f8a35ed9-3dea-410f-a89a-28650ff41381/services/2d8e5320-89e6-4750-a06f-174c12013c69/partitions/53e8a085-9fed-4880-bd90-f6191765f625/replicas/131091039101528218s
at D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:673:34
at handleCallback (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:159:5)
at setCursorDeadAndNotified (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:501:3)
at nextFunction (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:672:14)
at D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:585:7
at queryCallback (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:241:5)
at Callbacks.emit (D:\home\site\wwwroot\node_modules\mongodb-core\lib\topologies\server.js:119:3)
at null.messageHandler (D:\home\site\wwwroot\node_modules\mongodb-core\lib\topologies\server.js:397:23)
at TLSSocket.<anonymous> (D:\home\site\wwwroot\node_modules\mongodb-core\lib\connection\connection.js:302:22)
at emitOne (events.js:77:13)
How to handle this error?
TL;DR:
1. Upgrade the old S3 collection to a new single collection under the new pricing scheme. This can support up to 10K RU (up from 2500 RU).
2. Delete the old S3 collection and create a new partitioned collection. This will require support for partitioned collections in Parse.
3. Implement a back-off strategy in line with the x-ms-retry-after-ms response header.
Long answer:
Each request to DocumentDB returns an HTTP header with the request charge for that operation. The number of request units is configured per collection. As per my understanding, you have 1 collection of size S3, so this collection can only handle 2500 Request Units per second.
DocumentDB scales by adding multiple collections. With the old configuration using S1 -> S3 you must do this manually, i.e. you must distribute your data over the collections using an algorithm such as consistent hashing, a map, or perhaps date. With the new pricing in DocumentDB you can use partitioned collections; by defining a partition key, DocumentDB will shard your data for you. If you see sustained rates of RequestRateTooLarge errors, I recommend scaling out the partitions. However, you will need to investigate whether Parse supports partitioned collections.
When you receive an HTTP 429 RequestRateTooLarge, there's also a header called x-ms-retry-after-ms: ###, where ### denotes the number of milliseconds to wait before you retry the operation. What you can do is implement a back-off strategy that retries the operation. Do note that if you have clients hanging on the server during retries, you may build up request queues and clog the server. I recommend adding a queue to handle such bursts. For short bursts of requests this is a nice way to handle it without scaling up the collections.
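A minimal sketch of such a back-off, in Ruby for illustration (the with_backoff helper is my invention, not Parse or DocumentDB client API):
require 'net/http'

# Retry a request while the service answers HTTP 429, waiting as
# instructed by the x-ms-retry-after-ms response header.
def with_backoff(max_retries: 5)
  retries = 0
  loop do
    response = yield
    return response unless response.code.to_i == 429 && retries < max_retries

    wait_ms = (response['x-ms-retry-after-ms'] || '1000').to_i
    sleep(wait_ms / 1000.0)  # header value is in milliseconds
    retries += 1
  end
end

# Usage sketch:
#   response = with_backoff { Net::HTTP.get_response(uri) }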
I used mLab as an external MongoDB database and configured the Parse app in Azure to use it instead of DocumentDB.
I have no will to pay so much for a "performance" increase.

Connection in RabbitMQ server auto lost after 600s

I'm using RabbitMQ server with amq.
I am having a difficult problem: after leaving the server alone for about 10 minutes, the connection is lost.
What could be causing this?
If you look at the Erlang client documentation http://www.rabbitmq.com/erlang-client-user-guide.html you will see a section titled Connecting To A Broker
This gives you a few different options that you can specify when setting up your connection to the RabbitMQ server. One of the options is the heartbeat; as you can see, the default is 0, so no heartbeat is specified.
I don't know the exact Erlang notation, but you will need to do something like:
{ok, Connection} = amqp_connection:start(#amqp_params_network{heartbeat = 5})
The heartbeat timeout is specified in seconds, so this would cause your consumer to heartbeat back to the server every 5 seconds.
Also take a look at this discussion: https://groups.google.com/forum/?fromgroups=#!topic/rabbitmq-discuss/u227xzvqOr8
The default connection timeout for the RabbitMQ connection factory is 600 seconds (at least in the Java client API), hence your 10 minutes. You can change this by specifying to the connection factory your timeout of choice.
It is good practice to ensure your connection is released and recreated after a specific amount of time, to prevent eventual leaks and excessive resource usage. Your code should ensure that it seeks a valid connection that is not close to being timed out, and re-establishes a new connection for the ones that did time out. Overall, adopt a connection-pooling approach.
- Java example:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost(this.serverName);
factory.setPort(this.serverPort);
factory.setUsername(this.userName);
factory.setPassword(this.userPassword);
factory.setConnectionTimeout( YOUR-TIMEOUT-IN-SECONDS );
Connection connection = factory.newConnection();

How to check server connection

I want to check my server connection to know if it's available or not, so I can inform the user.
So, how do I send a packet or message to the server (it's not SQL Server; it's a server containing some services)?
Thanks in advance.
With all the possibilities for firewalls blocking ICMP packets or specific ports, the only way to guarantee that a service is running is to do something that uses that service.
For instance, if it were a JDBC server, you could execute a non-destructive SQL query, such as select * from sysibm.sysdummy1 for DB2. If it's an HTTP server, you could issue a GET request for index.htm.
If you actually have control over the service, it's a simple matter to create a special sub-service to handle these requests (such that you send a CHECK packet and get back an OKAY response).
That way, you avoid all the possible firewall issues and the test is a true end-to-end one. PINGs and traceroutes will be able to tell if you can get to the machine (firewalls permitting) but they won't tell you if your service is functioning.
Take this from someone who's had to battle the network gods in a corporate environment where machines are locked up as tight as the proverbial fishes ...
If you can open a port but don't want to use ping (I don't know why, but hey), you could use something like this:
import socket

host = ''    # empty string = listen on all interfaces
port = 55555

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(1)

# answer every connection with "alive", then close it
while 1:
    try:
        clientsock, clientaddr = s.accept()
        clientsock.sendall(b'alive')
        clientsock.close()
    except Exception:
        pass
which is nothing more than a simple Python socket server listening on port 55555 and returning "alive".
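As a usage sketch from the client side (my addition; the host, port, and 2-second timeout are placeholders), the same check could look like this in Ruby:
require 'socket'
require 'timeout'

# Returns true if the probe service above answers "alive" within 2 seconds.
def server_alive?(host, port)
  Timeout.timeout(2) do
    TCPSocket.open(host, port) do |sock|
      sock.read(5) == 'alive'
    end
  end
rescue StandardError
  false
end

puts server_alive?('example.com', 55555)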
