Changing allowsCellularAccess on an existing NSURLSession (iOS)

Is it possible to change the value for allowsCellularAccess on an existing NSURLSession by modifying the underlying NSURLSessionConfiguration?
I want to honor any changes in a user's settings for my application without cancelling existing requests if their device is currently connected to WiFi.

No. A session copies its configuration. It does not retain it. What I would do in your situation is:
Make a copy of the session's existing configuration and change that flag.
Create a new session with the modified configuration.
If the user is on Wi-Fi, call finishTasksAndInvalidate on the old session. This will keep the session around long enough to finish any existing requests, after which it will go away.
If the user is on cellular, call invalidateAndCancel, then wait to restart those tasks until the user is on Wi-Fi.
Additionally, you may be able to call cancelByProducingResumeData: on a task and then recreate (resume) it in a different session with a different configuration. The task will still report the allowsCellularAccess value from its original configuration, but it will behave according to the configuration of the new session. (The stale reporting might be considered a bug.)
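A minimal Objective-C sketch of that approach (assuming self.session holds the current session and isOnWiFi stands in for whatever reachability check you already use):
// Sketch: rebuild the session with a new allowsCellularAccess value.
- (void)updateCellularAccessAllowed:(BOOL)allowed {
    NSURLSessionConfiguration *config = [self.session.configuration copy];
    config.allowsCellularAccess = allowed;

    NSURLSession *oldSession = self.session;
    self.session = [NSURLSession sessionWithConfiguration:config
                                                 delegate:self
                                            delegateQueue:nil];

    if ([self isOnWiFi]) {
        // Let in-flight tasks finish on the old configuration, then tear it down.
        [oldSession finishTasksAndInvalidate];
    } else {
        // On cellular: cancel now and re-create the tasks once Wi-Fi is back.
        [oldSession invalidateAndCancel];
    }
}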

Related

How to manage and reload multiple QuickFIX/J sessions independently?

I can configure multiple sessions in a single QuickFIX/J settings file and then start them all with a single SocketInitiator. But I would like to be able to modify the configuration of one or more sessions and then restart just those sessions without affecting any others.
I could do this by having multiple settings files and using one SocketInitiator per session. But it seems as though QuickFIX/J is not intended to be used this way. Would it cause me any problems?
It is perfectly fine to start up one Initiator per session; it is a matter of taste. A separate Initiator per session is independent and will not affect the other sessions.
If you want to follow the approach with a single Initiator, then you could add/remove sessions dynamically via createDynamicSession()/removeDynamicSession(). There is still some manual work involved, though; the steps are outlined below and sketched in code after the list.
Find the Session that you want to reload. logout() and close() it.
Call removeDynamicSession() for that Session.
Get the settings for the SessionID that you want to reload from the running Initiator. Remove these from the running Initiator via removeSetting().
Then reload the settings for that Session from the settings file and add them to the Initiator's settings.
Finally, call createDynamicSession() for the SessionID.
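A rough Java outline of those steps, using the method names mentioned above; exact names and signatures vary between QuickFIX/J versions, so treat this as a sketch rather than drop-in code:
import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionSettings;
import quickfix.SocketInitiator;

// Outline only -- adjust to your QuickFIX/J version and settings handling.
public class SessionReloader {
    public void reload(SocketInitiator initiator, SessionID sessionID) throws Exception {
        // 1. Log out and close the running session.
        Session session = Session.lookupSession(sessionID);
        session.logout();
        session.close();

        // 2. Detach it from the initiator.
        initiator.removeDynamicSession(sessionID);

        // 3. Drop the old entries for this SessionID from the initiator's settings
        //    (removeSetting(...)), re-read the settings file, and copy the fresh
        //    values for this SessionID back in.
        SessionSettings settings = initiator.getSettings();
        // ... update `settings` here ...

        // 4. Re-create the session with the updated settings.
        initiator.createDynamicSession(sessionID);
    }
}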

URLSession caching even after app restart

I've just come across something that has entirely changed my mental image of URLSession caching in iOS.
We were hitting an endpoint that only ever got hit once.
Restarting the app wouldn't hit the endpoint again.
Deleting the app would cause it to hit the endpoint again... but only once.
The header of the response contains...
Cache-Control: public, max-age=1800
So it is down to caching. By manually telling the URLSession to ignore the cache, it would hit the endpoint again.
The docs show the caching policy and how it works as a workflow diagram:
https://developer.apple.com/documentation/foundation/nsurlrequestcachepolicy/nsurlrequestuseprotocolcachepolicy
But where is the cached data stored once the app is terminated? Surely the app and everything to do with it is removed from memory?
URLSession uses URLCache as its caching system, and it is used for all network resources. You can access it directly or set your own through URLSessionConfiguration. The underlying storage for the URLCache is on the file system rather than in memory, which is why it survives an app restart. There is a way to manage the cache yourself, though. Say, for instance, your responses should be encrypted on the device. Slightly bad example, but you get the point. ;)
Here's an article on how to manage the cache programmatically if you need more control over caching.
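As a hedged sketch, giving a session its own on-disk cache through the configuration might look like this in Objective-C (the capacities and the diskPath name are arbitrary):
// Create a dedicated cache: 4 MB in memory, 50 MB on disk.
NSURLCache *cache = [[NSURLCache alloc] initWithMemoryCapacity:4 * 1024 * 1024
                                                   diskCapacity:50 * 1024 * 1024
                                                       diskPath:@"myAppURLCache"];

NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.URLCache = cache;
config.requestCachePolicy = NSURLRequestUseProtocolCachePolicy; // honour Cache-Control headers

NSURLSession *session = [NSURLSession sessionWithConfiguration:config];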

Thread-safe way of changing the connection search_paths

I want to be able to switch between different DB schemas in a Rails 4 app.
The plan is to add a new middleware in the very beginning of the stack that will do that for me.
The only way to do it is by setting ActiveRecord::Base.connection.schema_search_path = '"$user",my_schema'.
The problem I have with this is that this connection will go to the pool and all the following requests will use the schema that was set in the first one (basically leaking it through).
So the solution I see is to always reset the search path to what it was before and always set it on each request.
But I don't want to do this because:
99% of the requests will go to the default (public) schema, so executing set search_path to '"$user",my_schema' would be an additional query that could have been avoided
higher risk of leaking (other middleware may establish the connection earlier, or changes to Rails or gems outside of my control may do the same)
All that especially applies to threaded servers, like Puma.
So are there any better alternatives to my solution with a middleware?
Thanks.
When you return connections to the pool, you must ensure the pool runs DISCARD ALL; to reset the connection state.
That will clear any SET ROLE, SET SESSION AUTHORIZATION, session variables, search_path setting, etc.
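A minimal sketch of the combined idea as Rack middleware for the Rails app above (schema_for is a hypothetical placeholder for your schema-picking logic):
# Sketch only: set the schema for this request, then reset the connection
# before it goes back to the pool.
class SchemaSwitcher
  def initialize(app)
    @app = app
  end

  def call(env)
    conn = ActiveRecord::Base.connection
    conn.schema_search_path = %Q{"$user",#{schema_for(env)}}
    @app.call(env)
  ensure
    # Reset role, session variables and search_path before the connection is reused.
    # Caveat: DISCARD ALL also deallocates prepared statements, which can conflict
    # with ActiveRecord's prepared-statement cache.
    conn.execute("DISCARD ALL")
  end

  def schema_for(env)
    "my_schema" # placeholder: derive from subdomain, header, etc.
  end
end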

iOS: Data Sessions vs Ephemeral Sessions

My app's webView loads a page and I inject some javascript which automates a click for me and adds an item to my wish list. For something like this, would you recommend using a data session or an ephemeral session to load the page? Speed is important to me, and I'm trying to optimize it as much as I can in Objective-C (yup, even milliseconds).
The page basically loads a product page so everything but the actual product is always going to be the same (background view, website menu bar, button images, etc). Right now I'm using NSURLConnection, and I'm trying to update my code to use NSURLSession instead.
Default sessions behave similarly to other Foundation methods for downloading URLs. They use a persistent disk-based cache and store credentials in the user's keychain. This configuration uses the global/shared cookie, cache and credential storage objects, so behaviour is similar to NSURLConnection.
The shared session uses the global singleton credential, cache and cookie storage objects. It can be used in place of existing code that uses +[NSURLConnection sendAsynchronousRequest:queue:completionHandler:].
Ephemeral sessions do not store any data to disk; all caches, credential stores, and so on are kept in RAM and tied to the session. Thus, when your app invalidates the session, they are purged automatically. As the name indicates, this is a private, short-lived configuration: cookies, cache and credentials are not persisted and are deleted when the session is invalidated.
Background sessions are similar to default sessions, except that a separate process handles all data transfers, so uploads and downloads can continue even while the application is suspended, within certain constraints (described in "Background Transfer Considerations").
Reference from Apple Doc
//Default session
+ (NSURLSessionConfiguration *)defaultSessionConfiguration;
//Ephemeral
+ (NSURLSessionConfiguration *)ephemeralSessionConfiguration;
//Background
+ (NSURLSessionConfiguration *)backgroundSessionConfiguration:(NSString *)identifier;
NSURLSession Tasks and Delegates
[Diagram in the original answer: the types of NSURLSession tasks and their hierarchy]
I think you'd use a default session, since you want it to cache data to disk, which is something an ephemeral session doesn't do.
The bottleneck is almost always I/O, so you want caching when the data doesn't change anyway.
For rapidly changing data this wouldn't be worth it, but you explicitly say that the data won't change.
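A small sketch of what that difference looks like in code (variable names are just illustrative):
// Default configuration: uses the shared, on-disk URLCache, cookie and credential stores.
NSURLSessionConfiguration *defaultConfig = [NSURLSessionConfiguration defaultSessionConfiguration];
NSURLSession *cachingSession = [NSURLSession sessionWithConfiguration:defaultConfig];

// Ephemeral configuration: everything stays in RAM and is purged when the session is invalidated.
NSURLSessionConfiguration *ephemeralConfig = [NSURLSessionConfiguration ephemeralSessionConfiguration];
NSURLSession *privateSession = [NSURLSession sessionWithConfiguration:ephemeralConfig];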

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way for identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, else open a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the concept of replication lag, which basically means that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
I worked with an application which used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves and went to the master only in selected cases.
As an example: when a user updated their password, the application would always ask the master during the authentication prompt. When changing non-time-sensitive data (like user preferences), it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on your environment:
After an update, compute the entity checksum, store it in the application cache, and when fetching the data always check it against that checksum
Use browser storage or a cookie to store the delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind it is subject to the CAP theorem.
This is a hard problem, and there are lots of potential solutions. One option is to look at what Facebook did.
TL;DR: read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all your reads go to the writable master; there is a rough sketch of that idea after the quoted explanation below.
The other main problem we had to address was that only our master databases in California could accept write operations. This fact meant we needed to avoid serving pages that did database writes from Virginia because each one would have to cross the country to our master databases in California. Fortunately, our most frequently accessed pages (home page, profiles, photo pages) don't do any writes under normal operation. The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California?
This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location.
There is another wrinkle to this problem, however. Let's say you go to editprofile.php to change your hometown. This page isn't marked as safe so it gets routed to California and you make the change. Then you go to view your profile and, since it is a safe page, we send you to Virginia. Because of the replication lag we mentioned earlier, however, you might not see the change you just made! This experience is very confusing for a user and also leads to double posting. We got around this concern by setting a cookie in your browser with the current time whenever you write something to our databases. The load balancer also looks for that cookie and, if it notices that you wrote something within 20 seconds, will unconditionally send you to California. Then when 20 seconds have passed and we're certain the data has replicated to Virginia, we'll allow you to go back for safe pages.
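A hedged sketch of that write-stickiness idea as Rack-style middleware (the cookie name, the 20-second window and the write heuristic are illustrative, not Facebook's actual implementation):
require "rack"

# Sketch: after any write, pin the user to the master for STICKY_SECONDS
# by stamping a cookie; reads within that window skip the replicas.
class ReplicaRouter
  STICKY_SECONDS = 20
  COOKIE = "last_write_at"

  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)
    last_write = request.cookies[COOKIE].to_i
    # Downstream code would read this flag to pick the master or a replica.
    env["db.use_master"] = write_request?(request) ||
                           (Time.now.to_i - last_write) < STICKY_SECONDS

    status, headers, body = @app.call(env)

    if write_request?(request)
      response = Rack::Response.new(body, status, headers)
      response.set_cookie(COOKIE, value: Time.now.to_i.to_s, path: "/")
      return response.finish
    end
    [status, headers, body]
  end

  def write_request?(request)
    !%w[GET HEAD].include?(request.request_method) # placeholder heuristic
  end
end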
