Caddy ratelimit for generated resources - rate-limiting

I am trying to deploy a Caddy server with the ratelimit plugin (https://caddyserver.com/v1/docs/http.ratelimit) for my API, which has multiple resources, e.g.:
http://myapi.com/resourceA
http://myapi.com/resourceB
http://myapi.com/resourceC
These resources are created by users, so I don't know their names in advance. What I need is a rate limit of ~5 requests/minute per resource per IP.
So if a user sends 5 requests to http://myapi.com/resourceA, they should still be able to send another 5 requests to http://myapi.com/resourceB.
Is there any way to do this with Caddy? Or does anyone have a good idea how to do it?
Thanks!
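The v1 http.ratelimit plugin keys on fixed paths, but with Caddy v2 and the third-party mholt/caddy-ratelimit module you can build the rate-limit key from request placeholders, so every distinct resource path gets its own bucket per client IP. A hedged sketch (the module's syntax may differ between versions, and the domain and backend address are placeholders):

```caddyfile
{
    # rate_limit comes from the third-party github.com/mholt/caddy-ratelimit
    # module and has no default position in the directive order.
    order rate_limit before reverse_proxy
}

myapi.com {
    rate_limit {
        # One bucket per client IP per resource path:
        # 5 events per 1-minute sliding window.
        zone per_resource {
            key    {remote_host}{uri}
            events 5
            window 1m
        }
    }
    reverse_proxy localhost:8080
}
```

Because the key concatenates the client IP and the request path, exhausting the quota on /resourceA leaves the /resourceB bucket untouched, which matches the requirement above.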

Related

Pointing an external URL to always go to same instance of a Docker container

I have the following use case that I would appreciate any input on. I have a Docker Swarm running with Traefik pointed at it for ingress and routing. Currently it is a single service defined with 6 replicas of the same image, which works out to two containers on each of three nodes.
The containers host a GraphQL server, and my requirement is that, depending on which client a request comes from, it always goes to the same specific container (i.e., replica). As an example, say I have user1 and user2 at client1 and user3 and user4 at client2: if user1 makes a request and it goes to replica1, then any request from user2 MUST go to replica1 as well. Basically, I could take a numeric hash of the client ID (client1), mod it by 6, and use that to decide which replica the request goes to, ensuring that any subsequent call from any user of that client goes to the same replica. The client the call is coming from is encoded in a JWT token that the user sends with the request.
Any idea how I would go about changing my Docker Swarm to implement this? My best guess is to change the swarm so that, instead of 6 replicas, each container is defined as a separate service with its own port. Then I could point Traefik at nginx or something similar, which would receive the request, grab the JWT, decode it to find the client ID, take a hash, and then internally route to the appropriate node:port combination.
But I feel like there must be a more elegant and simpler way of doing this. Maybe Traefik can facilitate this directly somehow, or Docker Swarm has some configuration I don't know about. Any ideas?
Edit: Just to clarify my use case: I'm not just looking for the same user to always go to the same container, but for the same type of user (i.e., all users of one client) to always go to the same container.
For this kind of routing you need to set up Traefik for sticky sessions.
Traefik's load balancer then adds a cookie to responses that is used on subsequent requests to route them to the same backend.
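In Swarm mode, sticky sessions are enabled with service labels on the load balancer; a minimal sketch, assuming hypothetical service, image, host, and cookie names:

```yaml
# docker-compose.yml (Swarm mode) - service name, image, host,
# port, and cookie name are all placeholders.
services:
  graphql:
    image: myorg/graphql-server:latest
    deploy:
      replicas: 6
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.graphql.rule=Host(`api.example.com`)"
        - "traefik.http.services.graphql.loadbalancer.server.port=4000"
        # Sticky sessions: Traefik sets this cookie on the first response
        # and routes later requests bearing it to the same replica.
        - "traefik.http.services.graphql.loadbalancer.sticky.cookie=true"
        - "traefik.http.services.graphql.loadbalancer.sticky.cookie.name=client_affinity"
```

Note that the cookie pins an individual browser session, not a whole client: two users of the same client may still land on different replicas, so routing by a JWT claim would still need a hashing layer in front, as described in the question.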

Twilio IP Messaging token issue

I'm setting up an iOS app to use the IP Messaging and video calling APIs. I'm able to connect, create channels, and set up a video call if I manually create hard-coded tokens for the app. However, if I try to use the PHP server (as described here: https://www.twilio.com/docs/api/ip-messaging/guides/quickstart-ios) then I always get an error and can't connect anymore.
I'm attaching a screenshot of what I see when I hit the http://localhost:8080 address; it seems to produce a 500 Internal Server Error on this URL: https://cds.twilio.com/v2/Streams
Thanks so much!
After much time spent on this I decided to try the Node backend instead (listed under the other server-side languages alongside PHP) and I had it running in 2 minutes! I used the exact same credentials as the ones in the PHP config file, so either my PHP environment has something strange or the PHP backend needs some fixing. In any case, I'm able to move forward using the Node backend, so if you run into the same issue, just try Node instead of PHP. Woohoo!

Is it possible to increase CloudFlare time-out?

Is it possible to increase CloudFlare's time-out? If yes, how?
My code takes a while to execute and I wasn't planning on Ajaxifying it in the coming days.
No, CloudFlare only offers that kind of customisation on Enterprise plans.
CloudFlare will time out if it fails to establish a TCP handshake with your server within 15 seconds.
CloudFlare will also wait 100 seconds for an HTTP response from your server before you see a 524 timeout error.
Beyond these, there can also be timeouts on your origin web server.
It sounds like you need inter-process communication. HTTP should not be used as a mechanism for performing blocking tasks without sending responses; that kind of activity should instead be delegated to a non-HTTP service on the server. By using RabbitMQ (or any other message queue) you can pass messages from the HTTP layer of your server over to the processing service on your web server.
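The decoupling pattern can be sketched with Python's standard-library queue standing in for RabbitMQ (the handler and worker names are hypothetical; a real deployment would use a broker client such as pika, and the worker would run as a separate process):

```python
import queue
import threading
import time

# Stand-in for a message broker such as RabbitMQ: the HTTP handler only
# enqueues work and returns immediately; a separate worker drains the queue.
jobs: "queue.Queue[dict]" = queue.Queue()
results: dict = {}

def http_handler(job_id: str, payload: str) -> str:
    """Called by the web layer: enqueue the job and respond without blocking."""
    jobs.put({"id": job_id, "payload": payload})
    return "202 Accepted"          # the client polls for the result later

def worker() -> None:
    """Long-running process on the server, outside any HTTP timeout."""
    while True:
        job = jobs.get()
        if job is None:            # shutdown sentinel
            break
        time.sleep(0.1)            # simulate slow report generation
        results[job["id"]] = job["payload"].upper()
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
status = http_handler("job-1", "quarterly report")
jobs.join()                        # in real life the client polls instead
print(status, results["job-1"])
```

The key point is that the HTTP response is sent before the heavy work starts, so no proxy in front of the server ever sees a long-held connection.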
I was in communication with Cloudflare about the same issue, and also with the technical support of RabbitMQ.
RabbitMQ suggested using Web STOMP, which relies on WebSockets. However, Cloudflare suggested:
WebSockets would create a persistent connection through Cloudflare and there's no timeout as such, but the best way of resolving this would be just to process the request in the background and respond asynchronously, serving a 'Loading...' page or similar, rather than making the user wait for 100 seconds. That would also give a better user experience.
UPDATE:
For completeness, I will also record here that I asked CloudFlare about running the report via a subdomain and "grey-clouding" it, and they replied as follows:
I would suggest verifying why the reports take more than 100 seconds. Disabling Cloudflare on the sub-domain would let attackers discover your origin IP and attack it directly, bypassing Cloudflare.
FURTHER UPDATE
I finally solved this problem by running the report using a thread and using AJAX to "poll" whether the report had been created. See Bypassing CloudFlare's time-out of 100 seconds
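The thread-plus-polling workaround can be sketched as follows (a minimal sketch with hypothetical names; in the real app the status check is an AJAX endpoint the browser hits repeatedly):

```python
import threading
import time

report_state = {"status": "pending", "result": None}

def generate_report() -> None:
    # Stand-in for the slow report; in the real app this ran well past
    # Cloudflare's 100-second limit.
    time.sleep(0.2)
    report_state["result"] = "report-contents"
    report_state["status"] = "done"

# The HTTP request that used to time out now just starts the thread
# and returns immediately.
threading.Thread(target=generate_report).start()

def poll_status() -> str:
    """Cheap status endpoint; each poll finishes well inside the timeout."""
    return report_state["status"]

# The browser polls until the report is ready, then fetches the result.
while poll_status() != "done":
    time.sleep(0.05)
print(report_state["result"])
```

Each poll is a fast, independent request, so no single connection ever approaches the 100-second limit.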
Cloudflare doesn't trigger 504 errors on timeout
504 is a timeout triggered by your server - nothing to do with Cloudflare.
524 is a timeout triggered by Cloudflare.
See: https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
524 error? There is a workaround:
As @mjsa mentioned, Cloudflare only offers timeout settings to Enterprise clients, which is not an option for most people.
However, you can disable Cloudflare proxying for that specific (sub)domain by turning the orange cloud into grey:
Before: orange cloud (proxied through Cloudflare). After: grey cloud (DNS only).
Note: this will disable extra functionality for that specific (sub)domain, including IP masking and Cloudflare-issued SSL certificates.
As Cloudflare state in their documentation:
If you regularly run HTTP requests that take over 100 seconds to complete (for example large data exports), consider moving those long-running processes to a subdomain that is not proxied by Cloudflare. That subdomain would have the orange cloud icon toggled to grey in the Cloudflare DNS Settings. Note that you cannot use a Page Rule to circumvent Error 524.
I know this cannot be treated as a real solution, but there are two ways of avoiding the timeout:
1) Since this timeout is often related to generating something that takes a long time, this kind of work can be done through crontab, or, if you have SSH access, by running the PHP script directly from the command line (e.g. /usr/bin/php /direct/path/to/file.php). In that case the connection is not served through Cloudflare, so the script runs as long as your server configuration allows.
2) You can create a subdomain that is not added to Cloudflare, move your scripts there, and run them directly through a URL, an Ajax call, or whatever.
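For the crontab route in option 1), a hedged sketch (the script path is the placeholder from above, and the schedule and log path are made-up examples):

```
# m h dom mon dow  command
# Run the report generator every 10 minutes, entirely outside Cloudflare;
# adjust the schedule and paths to your setup.
*/10 * * * * /usr/bin/php /direct/path/to/file.php >> /var/log/report.log 2>&1
```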
There is a good answer on Cloudflare community forums about this:
If you need to have scripts that run for longer than around 100 seconds without returning any data to the browser, you can't run these through Cloudflare. There are a couple of options: run the scripts via a grey-clouded subdomain, or change the script so that it kicks off a long-running background process and quickly returns a status which the browser can poll until the background process has completed, at which point the full response can be returned. This is how most people handle this type of action, as keeping HTTP connections open for a long time is unreliable and can also be very taxing.
This topic on Stack Overflow is high in SERPs, so I decided to write down this answer for those who will find it useful.
https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
Cloudflare 524 error results from a web page taking more than 100 seconds to completely respond.
This can be raised to up to 600 seconds if you move to an "Enterprise" Cloudflare account. The cost of Enterprise is roughly $40k per year (annual contract required).
If you are fetching your results with curl, you can use the --resolve option to connect directly to your origin IP instead of the Cloudflare proxy IP:
For example:
curl --max-time 120 -s -k --resolve lifeboat.com:443:127.0.0.1 -L https://lifeboat.com/blog/feed
The simplest way to do this on your own server is to increase your proxy's read timeout.
If you are using Nginx, for instance, you can add this directive in /etc/nginx/sites-available/your_domain:
location / {
    ...
    proxy_read_timeout 600s; # sets the timeout to 10 minutes; adjust to your needs
    ...
}
If the issue persists, make sure you use Let's Encrypt to secure your server alongside Nginx, and then disable the orange cloud on that specific subdomain in Cloudflare.
Here are some resources you can check to help with that:
Installing Nginx on Ubuntu Server
Secure Nginx with Let's Encrypt

Is it possible for NGINX resend request to Unicorn?

I want to know: if my NGINX sends a request to Unicorn and doesn't receive an answer, can NGINX send the request to Unicorn again?
Not normally, no. I can think of a workaround, but I'm not sure that I would advise it. Using the Nginx upstream module you can define multiple upstream backends. Nginx has no way of knowing whether the different upstream servers listed are actually the same backend, nor does it care. The docs for the upstream module say: "If an error occurs during communication with a server, the request will be passed to the next server."
So if you arrange to have your app available at two different addresses (sockets, domain names, IPs, or ports), you can set Nginx to use the upstream module to try the same request twice.
If the app did not respond properly in time the first time the request was made, there's a good chance it won't respond successfully on the follow-up request either. Using the upstream module in the intended way is recommended instead: use two distinct app servers that share state through a backend resource such as a database.
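The workaround described above might look like this in Nginx configuration (a hedged sketch; the socket paths and server_name are placeholders, and the two upstream entries would point at the same Unicorn app bound to two addresses):

```nginx
upstream unicorn_retry {
    server unix:/tmp/unicorn_a.sock;
    server unix:/tmp/unicorn_b.sock;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://unicorn_retry;
        # Retry on connection errors and timeouts. Note that retrying
        # after a timeout re-sends the request, which is unsafe for
        # non-idempotent (e.g. POST) requests.
        proxy_next_upstream error timeout;
        proxy_next_upstream_tries 2;
    }
}
```

The proxy_next_upstream directive controls which failures trigger a retry, which is exactly the "passed to the next server" behaviour quoted from the docs.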

Apache server getting two accesses every 5 seconds

I have an issue with my Apache server: it is getting two accesses from an unknown source every 5 seconds (exactly 5 seconds). A Rails app is running on Apache, and the accesses show up in both the Apache and Rails logs. I'm using Amazon AWS as the server, with a load balancer sharing traffic between two servers under one domain address, and both of them receive these repeated accesses.
Is there anything you know about this? (Sorry about my bad English.)
When you allocate an Elastic IP, it is drawn from AWS's pool of available addresses, and there is a chance it was used in the past, so it is not unusual to see traffic coming from bots, crawlers, old clients, etc.
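To confirm this and identify the visitor, you can count requests per remote address and user agent in the access log; a minimal sketch, assuming Apache's default "combined" log format and using made-up sample lines (in practice you would read something like /var/log/apache2/access.log, whose path varies by distro):

```python
import collections
import re

# Hypothetical sample lines in Apache's "combined" log format.
sample_log = """\
203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 512 "-" "SomeBot/1.0"
203.0.113.7 - - [10/Oct/2023:13:55:41 +0000] "GET / HTTP/1.1" 200 512 "-" "SomeBot/1.0"
198.51.100.2 - - [10/Oct/2023:13:55:42 +0000] "GET /app HTTP/1.1" 200 734 "-" "Mozilla/5.0"
"""

# Count (client IP, user agent) pairs; the first capture is the remote
# address, the second is the final quoted field (the user agent).
hits = collections.Counter()
for line in sample_log.splitlines():
    m = re.match(r'(\S+) .*?"[^"]*" \d+ \S+ "[^"]*" "([^"]*)"', line)
    if m:
        hits[(m.group(1), m.group(2))] += 1

# The top entry is the candidate for the every-5-seconds visitor.
for (ip, agent), count in hits.most_common():
    print(count, ip, agent)
```

If the top source turns out to be the AWS load balancer's own address, those are likely its health checks rather than outside traffic, which would also explain the exact 5-second interval.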
