Grace period issue for cached POST requests on Varnish 6

We are using Varnish 6 and have successfully cached POST requests sent to our GraphQL endpoint.
TTL: 120 seconds
Grace: 1 hour
When the TTL is over but the object is still within its grace period, Varnish does not trigger the POST request to the backend server behind the scenes. Instead it asks the backend again synchronously, and meanwhile the end user is waiting for this process to finish.
How can we make sure Varnish auto-triggers cached POST requests in the background during the grace period as well?

An expired object that has some grace time left will still trigger a backend fetch; this is by design.
The only way to avoid any backend fetch is to increase the TTL. The only reason grace mode exists is to ensure users don't have to wait for expired content while Varnish fetches it asynchronously.
See https://www.youtube.com/watch?v=hlPNEnQny7o for a video about the lifetime of an object in Varnish (which includes TTL, grace & keep).
See https://www.youtube.com/watch?v=51WUTB1cUeM for a dedicated video about grace mode.
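For reference, a minimal VCL sketch of how those lifetimes are set (the 120-second TTL and one-hour grace mirror the values in the question):

sub vcl_backend_response {
    # Serve fresh from cache for two minutes.
    set beresp.ttl = 120s;
    # After that, keep serving the stale object for up to one hour
    # while Varnish refreshes it from the backend asynchronously.
    set beresp.grace = 1h;
}

A request that arrives during the grace window should receive the stale object immediately while the refresh happens in the background; only a longer TTL avoids the backend fetch altogether.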

Related

First http request in iOS networking is slow, subsequent requests are much faster

I'm experiencing slow response times for my first http POST request to my server.
This happens both in Android and iOS networking libraries. (Volley on Android, and Alamofire in iOS).
First response is roughly 0.7s-0.9s, whereas subsequent requests are 0.2s.
I'm guessing this is due to the session being kept alive by the server, eliminating the need to establish a new session on each request.
I figure I could make a dummy request when the app starts in order to open the session, but that doesn't seem very elegant.
I also control the server side (Node.js) so if any configuration needs to be done there I can also try it.
Investigating a little further, I tried sending an HTTPS CONNECT request before issuing the first "real" POST request, and the behavior replicates.
After 30 seconds or so, the connection is dropped (probably at the iOS URLSession level; the load balancer is configured to keep connections alive for 60 seconds).
In theory this makes sense, because setting up an HTTPS connection takes several packets (12 in total) and I'm on an intercontinental connection.
So my solution is to send a CONNECT request when I expect the user to send a regular request.
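A minimal Swift sketch of that warm-up idea, assuming a hypothetical endpoint (any cheap request to the same host would do, since URLSession keeps the resulting connection alive for the real POST to reuse):

import Foundation

// Fire a throwaway request so the TCP/TLS handshake completes before
// the user's first real POST. The URL here is hypothetical.
func warmUpConnection(session: URLSession = .shared) {
    var request = URLRequest(url: URL(string: "https://api.example.com/ping")!)
    request.httpMethod = "HEAD" // cheap: no response body to transfer
    session.dataTask(with: request) { _, _, _ in
        // The response is irrelevant; the established connection is the point.
    }.resume()
}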

Atomically move redis key on expiration

Is there a way to atomically move a Redis key from one place to another when it expires? There are ways of doing this on the client by subscribing to Redis expire notifications, but if no client is running when the notification fires, the event is missed.
But if there's a way to do it on the server (through a Lua script, maybe), then it can be atomic: the key exists in one place before the expiry and in the other place after it.
The expiration keyspace notification isn't fired at the moment the key expires; it's not guaranteed to happen when you might expect (see "Timing of expired events" in the Redis docs). Redis generates the expired event in two ways:
When the key is accessed by a command and is found to be expired.
Via a background system that looks for expired keys incrementally, in order to also collect keys that are never accessed.
IMHO, you should go with another approach: use an external task scheduler that automatically starts a task to move keys some seconds or minutes before they are due to expire, checking with the TTL command that the target keys are still alive.
For me, key expiration is a good approach to automatically free up memory, but you shouldn't use it to trigger actions based on expiration events, since it's unreliable for such use cases.
Lua scripts cannot be triggered by a keyspace notification.
You must do this on the client side.
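Even with a client-side scheduler driving it, the move itself can still be atomic on the server: the scheduler can run a script via EVAL shortly before the key is due to expire. A sketch (the two-key layout is hypothetical):

-- KEYS[1] = source key, KEYS[2] = destination key
-- Atomically move the key, but only while it is still alive.
if redis.call('TTL', KEYS[1]) > 0 then
  redis.call('RENAME', KEYS[1], KEYS[2])
  redis.call('PERSIST', KEYS[2]) -- drop the pending expiry on the new key
  return 1
end
return 0

TTL returns a positive number only while the key is alive with a pending expiry, so a key that has already expired is simply skipped.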

Objective-C - How to prevent session id reuse when the app is terminated?

My main question is how to detect that the application has been terminated by the end user while it was in the background (suspended), so that I can send a logout request to the server.
We already have a timeout interval on the server to kill the session, but assume the interval is 5 minutes: that means the session stays alive for 5 minutes after the user terminated the app, and anyone can sniff the data and reuse it.
Notes:
We use an HTTPS connection and SSL certificate pinning.
We also implemented a heartbeat web service that the client app calls at a fixed interval to tell the server to keep the session alive; if this web service isn't called for a specific session, the server kills that session.
Once your app is suspended, you don't get any further notice before you are terminated. There is no way to do exactly what you want.
Plus, the user could suspend your app to do something else (like play a game) and then not go back to your app for days.
If you want to log out when the user leaves your app, do it when the app is about to be suspended: ask for more background time and send a logout right then and there.
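A Swift sketch of that pattern, assuming a hypothetical logout endpoint; it would be called when the app enters the background, with a background task keeping the app alive long enough for the request to finish:

import UIKit

// Hypothetical endpoint; call this when the app enters the background.
func sendLogout() {
    var taskID = UIBackgroundTaskIdentifier.invalid
    // Ask the system for extra background time so the request can complete.
    taskID = UIApplication.shared.beginBackgroundTask {
        UIApplication.shared.endBackgroundTask(taskID)
    }
    var request = URLRequest(url: URL(string: "https://api.example.com/logout")!)
    request.httpMethod = "POST"
    URLSession.shared.dataTask(with: request) { _, _, _ in
        UIApplication.shared.endBackgroundTask(taskID)
    }.resume()
}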
Mohamed Amer,
Here is an approach used by the Quickblox server, and I feel it's pretty solid, though it involves a little overhead.
Once the client application (iOS or Android) establishes a session with the Quickblox server, the server expects the client application to keep sending it presence information at a regular interval.
Sending the presence information is pretty simple: they provide an API which the client keeps hitting every 5 minutes with the session id it holds. They validate the session id, and once it's found valid they extend the expiration time for the user associated with that id by another 5 minutes.
What I believe they do on the server is one of the following:
Approach 1: they maintain the last hit time, and for every subsequent request they check whether it arrived within the 5-minute window. If yes, they simply process it; if the request comes after 5 minutes, they delete the session id for the user and respond that the session has timed out.
Approach 2: because they provide online/offline information as well, they can't simply depend on incoming requests to delete the session id from the server, so they probably run a background thread that sweeps over the DB, finds entries whose last hit time is more than 5 minutes old, removes them from the DB, and declares those user sessions expired.
Though this involves client apps continuously hitting the server and increases the load on it, for something like a chat application, where presence information is so vital, this overhead is still fine, I believe.
Hope I have provided you with some idea at least :)
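A minimal Swift sketch of such a heartbeat on the client side (the endpoint, parameter name, and 5-minute interval are all assumptions):

import Foundation

// Periodically tells the server the session is still in use, so the
// server keeps extending its expiry. All names here are hypothetical.
final class HeartbeatService {
    private var timer: Timer?

    func start(sessionID: String) {
        timer = Timer.scheduledTimer(withTimeInterval: 300, repeats: true) { _ in
            var request = URLRequest(url: URL(string: "https://api.example.com/presence")!)
            request.httpMethod = "POST"
            request.httpBody = "session_id=\(sessionID)".data(using: .utf8)
            URLSession.shared.dataTask(with: request).resume()
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }
}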

POST request taking longer than timeout causing duplicates

I have an iOS application where I POST transactions to an API each time a transaction is completed. Once I get a 200 response code from the server I update an attribute on the transaction:
newTransaction.Synced = true
In case the network connection ever drops, I also POST every transaction where Synced = false whenever Reachability detects a network connection.
In perfect network conditions this works well. However, when I enable the Network Link Conditioner on my iPad and set packet loss to, say, 40%, I start to see duplicated transactions on my server. What I assume is happening is that, due to the high packet loss, it takes longer than 30 seconds (the client-side timeout on the request) to send my request and get the response from the server.
To confirm this, I made my API sleep for 40 seconds on each web request and disabled the Network Link Conditioner. As expected, the iOS app never set the Synced attribute to true, as it timed out before it got the response. However, the server still created the entity for each POST request generated each time the iOS app launched or regained network connectivity.
What's the best way to handle this situation so that duplicates never occur? I did think of adding a GUID to the transaction and coding the API not to re-add a transaction whose GUID already exists. The flip side, though, is that the iOS app would still never know the transaction had successfully synced. Is there a better way to handle this? Perhaps a timeout on the request which the server also adheres to?
Your idea of assigning a GUID to the transaction is good, but you might need to maintain a table on the client side (browser memory) which holds a record of all the calls you made to the server and never heard back about.
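A sketch of that GUID idea in Swift, using a client-generated idempotency key (the endpoint, header name, and fields are assumptions). If the server recognises a replayed key and returns the original result instead of creating a duplicate, the app also learns that the earlier attempt succeeded:

import Foundation

// A transaction carries a client-generated UUID from the moment it is
// created, so every retry of the same POST carries the same key.
struct Transaction: Codable {
    let id: UUID
    let amount: Int
}

func post(_ transaction: Transaction) {
    var request = URLRequest(url: URL(string: "https://api.example.com/transactions")!)
    request.httpMethod = "POST"
    request.setValue(transaction.id.uuidString, forHTTPHeaderField: "Idempotency-Key")
    request.httpBody = try? JSONEncoder().encode(transaction)
    URLSession.shared.dataTask(with: request) { _, response, _ in
        if (response as? HTTPURLResponse)?.statusCode == 200 {
            // Safe to mark Synced = true: the server either created the
            // transaction or recognised the key and returned the original.
        }
    }.resume()
}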

Sending SPDY requests results in "The request timed out" errors with NSUrlSession in iOS

My iOS app loads images from an nginx HTTP server. After I send 400+ such requests, the networking 'gets stuck': all subsequent HTTP requests result in "The request timed out" errors. I can make the images load again only by restarting the app.
Details:
I am using NSURLSession.sharedSession().dataTaskWithURL to send four hundred HTTP GET requests for JPEG files.
The requests are sent sequentially, one after another, with an interval of 10 ms between them.
Each previous unfinished request is cancelled with the cancel() method of its NSURLSessionDataTask object.
Interestingly:
The issue only occurs with HTTPS requests, and only when SPDY is enabled on the server.
Non-secure HTTP requests work fine.
Non-SPDY HTTPS requests work fine; I tested this by turning SPDY off on the server side, in the nginx config.
The problem appears on both iOS 8 and iOS 9, on a physical device and in the simulator, on both Wi-Fi and LTE.
When I look at the nginx access logs, I can still see the 'stuck' requests coming in. Important nuance: the log record appears at the exact moment the iOS app gives up on the request, when its timeout period ends.
I was hoping to analyze the HTTP requests with Charles Proxy, but the problem cures itself when requests go through Charles. That is, everything works with Charles, much like the observer effect in quantum mechanics, where the act of looking influences the outcome.
I was able to reproduce the issue when the iOS app connected to two different servers with vastly different nginx configurations. This probably means that the issue is not related to a particular nginx setup.
I analyzed the app using the "Activity Monitor" instrument. The number of threads it uses during the bulk HTTP requests jumps from 5 to 10; in comparison, when I send just a single HTTP request, the number of threads jumps to 8. CPU load rarely goes above 30%.
What can be the cause of the issue? Can anyone recommend other ways or tools for analysing and debugging it?
Analysing with the scheduling instrument
Demo app
This demo app reproduces the issue 100% of the time for me.
https://github.com/exchangegroup/ImageLoadDemo
Versions and settings
My nginx config: http://pastebin.com/pYYjdxfP
OS X: 10.10.4 (14E46), iOS: 8 and 9, Xcode: 7.0 (7A218), nginx: 1.9.4
Not ideal workaround
I managed to keep requests working only by creating a new NSURLSession for each individual request and clearing the previous session with finishTasksAndInvalidate or invalidateAndCancel:
// Hypothetical image URL, standing in for the real one.
let imageURL = NSURL(string: "https://example.com/image.jpg")!

// Request 1
let configuration = NSURLSessionConfiguration.defaultSessionConfiguration()
let session = NSURLSession(configuration: configuration)
session.dataTaskWithURL(imageURL) { data, response, error in
    // handle the image data
}.resume()

// Request 2
// Clear the previous session before creating a fresh one.
session.finishTasksAndInvalidate()
let session2 = NSURLSession(configuration: configuration)
session2.dataTaskWithURL(imageURL) { data, response, error in
    // handle the image data
}.resume()
One possibility is that iOS started sending the request, and then packet loss prevented the headers and request body from being fully delivered.
Another possibility that comes to mind is that your server may not be logging the request until it actually finishes trying to deliver it, which would make the time stamps in the server logs line up with when the connection was closed, rather than when it was opened. (IIRC, that's what Apache does; I haven't worked with nginx, so I can't speak for its behavior.) If that's the case, then this is just a simple connection stall. As for why it is stalling, I couldn't guess.
Does the problem occur exclusively for HTTPS traffic? If you can reproduce it with HTTP, you don't need Charles Proxy; just use OS X's "Internet Sharing" feature, and capture the packets with tcpdump or wireshark, listening on the bridge interface. If you can't reproduce it with HTTP, my money would be on a problem with fetching the CRLs or performing the OCSP check while validating the server's certificate.
Is your app ending up with a huge number of threads as a result of excessive async dispatching to new queues, by any chance? Because that could easily cause all sorts of odd misbehavior.
How long is the timeout? If it is too short, your app might simply be running up against performance limitations of the hardware while processing the results of 400 requests delivered in only four seconds.
Also, are you trying to schedule these requests simultaneously? Because I seem to recall reading about a bug that causes NSURLSession to hit a brick wall if you start too many tasks in a single session at the same time. You might try adding tasks only after the number of tasks in a session drops below some threshold and see if that fixes the problem.
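As an illustration of that last suggestion, a throttling sketch in modern Swift (the window size of 8 is arbitrary):

import Foundation

// Keeps at most `maxInFlight` tasks running in one session, adding the
// next URL only as earlier tasks complete.
final class ThrottledFetcher {
    private let session = URLSession(configuration: .default)
    private let queue = DispatchQueue(label: "fetcher") // serialises the counters
    private var pending: [URL]
    private var inFlight = 0
    private let maxInFlight = 8

    init(urls: [URL]) { pending = urls }

    func start() { queue.async { self.fill() } }

    private func fill() {
        while inFlight < maxInFlight, let url = pending.popLast() {
            inFlight += 1
            session.dataTask(with: url) { _, _, _ in
                self.queue.async {
                    self.inFlight -= 1
                    self.fill() // top the window back up
                }
            }.resume()
        }
    }
}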
