I'm adding a 503 header statement to my PHP script. In all the examples I've seen, it is followed by some variation of:
header('Retry-After: 300');
Is it necessary to include the Retry-After header? I'd prefer not to.
No, it is not necessary.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
"The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response."
The Retry-After header is a measure you can adopt to tell clients to back off.
Services that handle lots of traffic usually have loop-detection logic to spot clients that call expensive APIs too often. For example, a misbehaving client might ask an authentication service for auth tokens every few seconds instead of caching them until they expire.
Of course, there is no guarantee that a client will obey the Retry-After hint, in which case you can throttle it harder.
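As a sketch of how a server might emit that back-off signal, here is a simple sliding-window throttle in Python. The limit, window, and delay values are illustrative, not taken from any particular service:

```python
# Decide whether to serve a request or tell the client to back off.
# The 60-requests-per-minute limit and 120 s Retry-After are made-up values.
import time

WINDOW = 60          # seconds in the sliding window
LIMIT = 60           # requests allowed per window
RETRY_AFTER = 120    # seconds the client should wait

hits = {}            # client id -> timestamps of recent requests

def handle(client_id, now=None):
    now = time.time() if now is None else now
    # Keep only timestamps still inside the window.
    recent = [t for t in hits.get(client_id, []) if now - t < WINDOW]
    if len(recent) >= LIMIT:
        # Over the limit: 503 plus the Retry-After hint.
        return 503, {"Retry-After": str(RETRY_AFTER)}
    recent.append(now)
    hits[client_id] = recent
    return 200, {}
```

A client that ignores the 503 keeps accumulating hits and stays throttled, which is the "throttle them harder" case.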
We've been dealing with constant attacks on our authentication URL: millions of requests per day, my guess being that they are trying to brute-force passwords.
Whenever we blocked an IP with the server firewall, the attacks would resume a few seconds later from a different IP.
We ended up implementing a combination of throttling through rack-attack plus custom code to dynamically block the IPs in the firewall. But as we improved our software's security, so did the attackers, and now every request they make comes from a different IP: one call per IP, still several per second. Not as many as before, but still an issue.
Now I'm trying to figure out what else I can do to prevent this. We tried reCAPTCHA, but we quickly ran out of the monthly quota, and then nobody could log in.
I'm looking into the Nginx rate limiter, but from what I can see it also keys on the IP. Considering that they now rotate IPs for each request, is there a way this could work?
Any other suggestions on how to handle this? Maybe one of you has been through the same thing?
Stack: Nginx and Rails 4, Ubuntu 16.
For your situation, the most effective way to prevent this attack is a CAPTCHA. You mentioned that you used reCAPTCHA and that its quota runs out quickly, but if you develop the CAPTCHA yourself, you have an unlimited supply of CAPTCHA images.
As for other prevention methods, such as blocking IPs: this is usually useless when the attackers use an IP pool. There are so many IPs (including the IPs of compromised IoT devices) that you cannot identify and block them all, even with a commercial threat-intelligence feed.
So my suggestions are:
Develop the CAPTCHA yourself and implement it on your API.
Identify and block the IPs you consider malicious.
Set rules that inspect the User-Agent and Cookie headers of each request (a normal request usually differs from attack traffic).
Use a WAF (if you have enough budget).
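Since the source IPs rotate, another option is to key the throttle on the target account rather than the source address: a distributed brute force still has to hit the same usernames repeatedly. A minimal sketch, with hypothetical thresholds and names:

```python
# Sliding-window failed-login counter keyed on the *target account*,
# not the source IP, so rotating IPs does not reset the counter.
# The window and threshold are illustrative.
import time

WINDOW = 300       # look at the last 5 minutes
MAX_FAILURES = 10  # failed attempts allowed per account per window

failures = {}      # username -> timestamps of recent failed logins

def record_failure(username, now=None):
    now = time.time() if now is None else now
    recent = [t for t in failures.get(username, []) if now - t < WINDOW]
    recent.append(now)
    failures[username] = recent

def is_locked(username, now=None):
    """True when the account has seen too many recent failures."""
    now = time.time() if now is None else now
    recent = [t for t in failures.get(username, []) if now - t < WINDOW]
    return len(recent) >= MAX_FAILURES
```

A locked account can then be answered with a CAPTCHA challenge or a 429 instead of hitting the expensive password check.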
I'm using NSURLSession to download an EPUB file from a server, and I need to support pausing and resuming downloads.
Reading the Apple docs, I found some conditions that need to be handled on the server side in order to use cancelByProducingResumeData.
Of the conditions in the Apple docs, the ones below are not handled by my server:
The server provides either the ETag or Last-Modified header (or both) in its response
The server supports byte-range requests
Is there any workaround that can be done on the client side for pause/resume without making changes to the response headers? Any help is much appreciated.
At least for background downloads, there is no alternative to that server feature. However, if you're able to write CGI scripts or similar, it is easy enough to add those headers yourself. That said, I would be concerned about the security of any server that lacks such basic functionality in this day and age; most major servers have supported it for a decade at this point. Unless, of course, the reason for the lack of support is that you are serving the file through a CGI script that lacks it, in which case the solution is to fix the script.
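For illustration, here is roughly what that server-side support amounts to: a validator header (ETag) plus byte-range handling. This is a hedged sketch in Python rather than a drop-in CGI script; the function name is made up, and it only parses a single open-ended range:

```python
# Minimal sketch of the server-side support resumable downloads need:
# an ETag validator plus byte-range handling.
import hashlib

def serve(data, range_header):
    # A content-derived ETag lets the client verify the file is unchanged
    # before resuming a partial download.
    etag = '"%s"' % hashlib.sha1(data).hexdigest()
    headers = {"ETag": etag, "Accept-Ranges": "bytes"}
    if range_header:  # e.g. "bytes=4-" (only open-ended ranges handled here)
        start = int(range_header.split("=")[1].split("-")[0])
        headers["Content-Range"] = "bytes %d-%d/%d" % (start, len(data) - 1, len(data))
        return 206, headers, data[start:]
    return 200, headers, data
```

A resuming client sends the saved ETag (in If-Range) plus a Range header and gets a 206 with only the missing tail.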
For foreground downloads, you can use chunking as a rather crude alternative. Break the file up into shorter pieces and assemble them in your client code. That way, you only have to refetch a single chunk if it fails or the user pauses the download.
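The chunked approach above can be sketched as follows; fetch() stands in for whatever download call you use, and the piece-numbering scheme is assumed:

```python
# Reassemble a file published as numbered pieces, resuming after the
# pieces already saved by a previous, paused run.
def assemble(piece_count, fetch, already_have):
    """already_have: dict of index -> bytes kept from an earlier attempt."""
    pieces = dict(already_have)
    for i in range(piece_count):
        if i not in pieces:        # only re-fetch what is missing
            pieces[i] = fetch(i)
    return b"".join(pieces[i] for i in range(piece_count))
```

On pause or failure, you persist the pieces dict; the next run skips everything already on disk, so at most one chunk is refetched.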
That said, most EPUB files are small enough that I have to wonder if it is even worth the effort. Most users are going to be on a fast LTE network while doing downloads, where an average EPUB will download in a fraction of a second.
I want to implement a simple health check and make it available via http.
Up to now I have only experience writing nagios plugins. Nagios has this API spec
Is there already a common way how to write vendor-neutral health checks?
If not, what should a sane health check return to make it portable to many different monitoring server implementations?
Although there is no standard format for health checks, you should consider the major monitoring tools and what they expect from your endpoint.
In most cases they react to specific HTTP status codes.
For example, Amazon Route 53:
waits for an HTTP status code of 200 or greater and less than 400
Another tool, Consul, has a more specific definition:
The status of the service depends on the HTTP response code: any 2xx code is considered passing, a 429 Too Many Requests is a warning, and anything else is a failure.
So you may want to check the few top tools you might integrate with later and choose an approach that works for all of them.
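For example, an endpoint that returns 200 when every internal check passes and 503 otherwise satisfies both tools quoted above: 2xx is "passing" for Consul, and 200 falls inside Route 53's 200-399 window. A sketch with made-up check names:

```python
# A health endpoint body both tools above can consume:
# 200 when every check passes, 503 otherwise.
def health_status(checks):
    """checks: dict of name -> callable returning True when healthy."""
    results = {name: bool(check()) for name, check in checks.items()}
    status = 200 if all(results.values()) else 503
    return status, results
```

You would wire this into your HTTP framework of choice and serialize the results dict as the response body, so humans get detail while monitors only look at the status code.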
Let's say we have an app that displays some kind of dashboard, and this dashboard should be updated extremely often (say, every 500 ms). I'm familiar with long-polling requests and know how I could implement them with NSURLConnection on a background thread. However, it seems this will lead to two big problems: request/response concurrency and the overhead of long-poll requests at such short intervals. Although the first problem can be solved with some techniques, I think such frequent requests to a server are a general problem.
So after some research I found the NSStream class and its subclasses NSInputStream and NSOutputStream. My idea is to open a connection to the server and keep it alive the whole time, then at 500 ms intervals send a GET request on the output stream and read data from the input stream.
So here are my questions:
Am I on the right track for implementing this?
Should the server be prepared in some special way to deal with this kind of connection (i.e., won't it drop the connection after some timeout)?
Is there a real benefit to skipping connection establishment, in terms of app performance and a lower refresh time for the dashboard?
UPDATE
I've implemented the classic way. When the request method fires and the previous request has not yet finished, I cancel it, so I only ever have one active connection at a time, which prevents the concurrency problem. Also, if I haven't received a response within 500 ms, I don't need it at all, as it will be outdated anyway. I'm getting pretty neat results on both Wi-Fi and 3G. As expected, on EDGE a response is dropped every three to four requests.
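That cancel-the-stale-request logic reduces to tracking a sequence number and discarding any response that doesn't belong to the newest poll. A language-neutral sketch (in Python for brevity):

```python
# Keep only the newest in-flight request: responses tagged with an older
# sequence number than the latest poll are discarded as outdated.
class Poller:
    def __init__(self):
        self.latest = 0

    def start_poll(self):
        self.latest += 1
        return self.latest        # tag the outgoing request with this id

    def accept(self, request_id):
        # Only the response to the most recent poll is still useful.
        return request_id == self.latest
```

The same idea works whether the transport cancels the old request or merely ignores its late response.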
I'm still wondering about the streams, however. I tried to follow this Apple ref, but when I send an HTTP GET via the output stream, my input stream returns 403 Forbidden from the server. This could be entirely a server problem, but I'm not sure whether this is the right track and whether it's worth changing the server side.
Q1) Am I on the right track for implementing this?
A1) I'd suggest WebSockets.
Q2) Should the server be prepared in some special way to deal with this kind of connection (i.e., won't it drop the connection after some timeout)?
A2) Even though you could configure persistent (keep-alive) connections on the web server to do this easily, I'd suggest WebSockets.
Q3) Is there a real benefit to skipping connection establishment, in terms of app performance and a lower refresh time for the dashboard?
A3) Yes. Opening and closing connections is a costly process, which is why keep-alive connections exist and why Google introduced SPDY for web apps. So sockets would solve this problem for you.
WebSockets are a good way to go.
Frequent polling is not the way to go, because you would contact the server very frequently (every 0.5 seconds).
WebSocket provides full-duplex communication. Additionally, WebSocket enables streams of messages on top of TCP; TCP alone deals with streams of bytes and has no inherent concept of a message.
The WebSocket protocol was standardized by the IETF as RFC 6455 in 2011, and the WebSocket API in Web IDL was standardized by the W3C.
WebSocket is designed to be implemented in web browsers and web servers, but it can be used by any client or server application. The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request. The WebSocket protocol makes more interaction between a browser and a website possible, facilitating live content and the creation of real-time games. This is made possible by providing a standardized way for the server to send content to the browser without being solicited by the client, and allowing for messages to be passed back and forth while keeping the connection open. In this way a two-way (bi-directional) ongoing conversation can take place between a browser and the server
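That Upgrade handshake is cheap to verify: the server proves it speaks WebSocket by echoing a digest of the client's Sec-WebSocket-Key concatenated with a fixed GUID. The worked example in the test is the one given in RFC 6455 itself:

```python
# Compute the Sec-WebSocket-Accept value for a handshake response,
# as specified in RFC 6455, section 1.3.
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed by the RFC

def websocket_accept(key):
    """key: the client's Sec-WebSocket-Key header value."""
    digest = hashlib.sha1((key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")
```

Any HTTP server that can run this one computation and then hand the socket over to a frame parser can host WebSocket endpoints.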
You can find more about WebSockets here
Here are some good WebSocket client libraries in Objective-C:
SocketRocket and
UnittWebSocketClient
Note: these libraries use NSStream.
Hope this helps
As long as your server is a plain HTTP server, it disconnects you after returning the result.
So if you want to keep the connection alive long enough, you must implement your own protocol based on NSStream/sockets on both iOS and the server.
You may choose a well-known socket-based protocol like WebSocket; Square's SocketRocket is a popular iOS library for it, based on NSStream.
If your dashboard needs real-time updates, I think it's worth deploying an NSStream/socket-based protocol.
I've built a few native iPhone applications that required uploading large/hi-res image files to a remote server. I've found that on 3G networks a request can get blocked/dropped if it uses too much bandwidth in a certain amount of time; I believe that limit to be about 1 MB/min.
reference:
Max payload size for http request and response, iphone
How is this being handled in trigger.io's API call: request.ajax(options)?
Along the same lines, I've run into trouble with connections retrying multiple times after a failure. By default, is there any connection retrying going on behind the scenes, or will the error callback fire on the first connection failure?
Also! Is there a way to set the timeout of the request from the client side?
Currently, we don't offer any bandwidth throttling in the request module. The HTTP library we are using doesn't support it (and the ASIHTTPRequest wrapper is no longer maintained, so we can't use that, unfortunately).
If we find an alternative HTTP library which does support what we need and throttling, then we'd certainly consider switching to use it!
FWIW, we've not had any customers report problems with app store rejection due to bandwidth throttling (or lack of it).
Any connection or HTTP error will result in the error callback being called; you have control over whatever retry logic you want.
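Since the error callback fires on the first failure, retry logic belongs in your own code. Here is a generic sketch (in Python for brevity; max_tries and the backoff delays are arbitrary, and the actual sleep is left out):

```python
# Retry a failing operation a bounded number of times with exponential
# backoff. attempt is a callable that returns a result or raises.
def with_retries(attempt, max_tries=3):
    delay = 0.5
    for n in range(max_tries):
        try:
            return attempt()
        except Exception:
            if n == max_tries - 1:
                raise            # out of retries: surface the error
            delay *= 2           # in real code, sleep(delay) before retrying
```

In the trigger.io case, the attempt wrapper would issue request.ajax and re-raise from its error callback.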
For timeouts, timeout is a supported parameter in the options hash; see http://docs.trigger.io/en/v1.4/modules/request.html#ajax