How to manage and reload multiple QuickFIX/J sessions independently? - quickfixj

I can configure multiple sessions in a single QuickFIX/J settings file and then start them all with a single SocketInitiator. But I would like to be able to modify the configuration of one or more sessions and then restart just those sessions without affecting any others.
I could do this by having multiple settings files and using one SocketInitiator per session. But it seems as though QuickFIX/J is not intended to be used this way. Would it cause me any problems?

It is perfectly fine to start one Initiator per session; it is largely a matter of taste. In any case, a separate Initiator per session is independent and will not affect the other sessions.
If you want to stick with a single Initiator, you could add and remove sessions dynamically via createDynamicSession()/removeDynamicSession(). There is still some manual work involved, though (a sketch follows these steps):
Find the Session that you want to reload. logout() and close() it.
Call removeDynamicSession() for that Session.
Get the settings for that SessionID from the running Initiator and remove them via removeSetting().
Reload the settings for that Session from the settings file and put them into the Initiator's settings.
Finally, call createDynamicSession() for the SessionID.
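Putting those steps together, a minimal sketch might look like the following. It assumes a QuickFIX/J version whose initiator exposes createDynamicSession()/removeDynamicSession() as described above; the surrounding class and method names are placeholders and error handling is omitted:

```java
import java.util.Properties;

import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionSettings;
import quickfix.SocketInitiator;

// Sketch only: reload the configuration of a single session on a running
// SocketInitiator, leaving all other sessions untouched.
public final class SessionReloader {

    public static void reloadSession(SocketInitiator initiator,
                                     SessionID sessionID,
                                     String settingsFile) throws Exception {
        // 1. Find the session, log it out and close it.
        Session session = Session.lookupSession(sessionID);
        if (session != null) {
            session.logout();
            session.close();
        }

        // 2. Remove the dynamic session from the running initiator.
        initiator.removeDynamicSession(sessionID);

        // 3. Drop the old settings for this SessionID from the initiator's settings.
        SessionSettings running = initiator.getSettings();
        Properties oldProperties = running.getSessionProperties(sessionID);
        for (String key : oldProperties.stringPropertyNames()) {
            running.removeSetting(sessionID, key);
        }

        // 4. Re-read the settings file and copy the section for this SessionID
        //    back into the running initiator's settings.
        SessionSettings fresh = new SessionSettings(settingsFile);
        Properties freshProperties = fresh.getSessionProperties(sessionID);
        for (String key : freshProperties.stringPropertyNames()) {
            running.setString(sessionID, key, freshProperties.getProperty(key));
        }

        // 5. Re-create the session from the updated settings.
        initiator.createDynamicSession(sessionID);
    }
}
```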

Related

Aren't PWAs user unfriendly if the service worker is not immediately active?

I posted another question as a brute-force solution to this one (Angular: fully install service worker before anything else) but I thought I'd make a separate one to discuss the use case for when a service worker is used as intended.
According to the service worker life cycle (https://developers.google.com/web/fundamentals/primers/service-workers/lifecycle), the SW is installed but it only becomes active once you reload the page (you can claim() the page, but that only applies to calls that happen after the service worker is installed). The reasoning is that if an existing version is updated, the old and new versions should not mix state and caches. I can agree with that decision.
What I have trouble understanding is why it is not immediately active once it is initially installed. Instead, it requires a page reload unless you explicitly define precaching rules in the SW. If you define caching rules with wildcards, it's not possible to precache those, so you need the reload.
Given a single-page PWA (like Angular), a user will discover the site and browse around on it, but the page will never be reloaded during that session. If they then want to use the site offline later, they need to have refreshed or re-opened the tab at least one other time. That seems like a pretty big pitfall to me.
Am I missing something here?
Your understanding of the service worker lifecycle is correct but I do not think the pitfall you mentioned is as severe as you think it is.
If I understand you correctly, the user experience will only be negatively affected if the user loses connectivity during the initial browsing of the page (before the service worker is active) and is missing an offline asset. If this is truly a scenario you want to account for then that offline asset can be pre-cached in the browser-side javascript. Alternatively, as you mentioned, you can skipWaiting() and claim() to make the service worker active without the user refreshing the page.

Changing allowsCellularAccess on existing NSURLSession

Is it possible to change the value for allowsCellularAccess on an existing NSURLSession by modifying the underlying NSURLSessionConfiguration?
I want to honor any changes in a user's settings for my application without cancelling existing requests if their device is currently connected to WiFi.
No. A session copies its configuration. It does not retain it. What I would do in your situation is:
Make a copy of the session's existing configuration and change that flag.
Create a new session with the modified configuration.
If the user is on Wi-Fi, call finishTasksAndInvalidate on the old session. This will keep the session around long enough to finish any existing requests, after which it will go away.
If the user is on cellular, call invalidateAndCancel, then wait to restart those tasks until the user is on Wi-Fi.
Additionally, you may be able to call cancelByProducingResumeData: on a task, and then recreate (resume) it in a different session with a different configuration. The task will still report its original configuration for allowsCellularAccess, but will behave according to the configuration of the new session. (The stale reporting might be considered a bug.)

Thread-safe way of changing the connection search_paths

I want to be able to switch between different DB schemas in a Rails 4 app.
The plan is to add a new middleware in the very beginning of the stack that will do that for me.
The only way to do it is by setting ActiveRecord::Base.connection.schema_search_path = '"$user",my_schema'.
The problem I have with this is that this connection will go to the pool and all the following requests will use the schema that was set in the first one (basically leaking it through).
So the solution I see is to always reset the search path to what it was before and always set it on each request.
But I don't want to do this because:
99% of the requests will go to the default (public) schema, so executing set search_path to '$user$,my_schema' would be an additional query that could have been avoided
higher risk of leaking (other middleware may establish the connection earlier, or changes to Rails or to gems outside of my control)
All that especially applies to threaded servers, like Puma.
So are there any better alternatives to my solution with a middleware?
Thanks.
When you return connections to the pool, you must ensure the pool runs DISCARD ALL; to reset the connection state.
That will clear any SET ROLE, SET SESSION AUTHORIZATION, session variables, search_path setting, etc.
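The question is Rails-specific, but the reset itself is a database-level concern. As a minimal, framework-neutral sketch (shown here with plain JDBC; where exactly you hook it in depends on your connection pool's check-in mechanism):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Minimal sketch: run PostgreSQL's DISCARD ALL just before a connection goes
// back to the pool, so SET ROLE / search_path / session variables cannot leak
// to the next borrower. The hook point is pool-specific and not shown here.
public final class ConnectionStateReset {

    public static void resetSessionState(Connection connection) throws SQLException {
        try (Statement statement = connection.createStatement()) {
            statement.execute("DISCARD ALL");
        }
    }
}
```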

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way for identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, else open a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the replication lag concept, which basically says that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
I worked with an application which used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves and went to the master only in selected cases.
As an example: when the user updated their password, the application would always ask the master during authentication. When changing non-time-sensitive data (like user preferences), it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on the environment:
After an update, compute an entity checksum, store it in the application cache, and when fetching the data always check it against that checksum (a rough sketch follows this list)
Use browser storage or a cookie to store the delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind it is subject to the CAP theorem.
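As a rough illustration of the first idea (checksum comparison), here is a hedged sketch; the class, method names, and the way the entity is serialized are all hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: after a write, remember a hash of the entity; when a replica
// read returns, compare hashes and fall back to the primary if they differ.
public class ChecksumGuard {

    private final Map<String, String> expected = new ConcurrentHashMap<>();

    public void recordWrite(String entityId, String serializedEntity) throws Exception {
        expected.put(entityId, sha256(serializedEntity));
    }

    /** Returns true if the replica copy matches the last known write. */
    public boolean isFresh(String entityId, String serializedReplicaCopy) throws Exception {
        String want = expected.get(entityId);
        return want == null || want.equals(sha256(serializedReplicaCopy));
    }

    private static String sha256(String value) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(value.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```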
This is a hard problem, and there are lots of potential solutions. One potential solution is to look at what Facebook did (a sketch of the idea follows the quote below).
TL;DR: read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all your reads go to the writable master.
The other main problem we had to address was that only our master databases in California could accept write operations. This fact meant we needed to avoid serving pages that did database writes from Virginia because each one would have to cross the country to our master databases in California. Fortunately, our most frequently accessed pages (home page, profiles, photo pages) don't do any writes under normal operation. The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California?
This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location.
There is another wrinkle to this problem, however. Let's say you go to editprofile.php to change your hometown. This page isn't marked as safe so it gets routed to California and you make the change. Then you go to view your profile and, since it is a safe page, we send you to Virginia. Because of the replication lag we mentioned earlier, however, you might not see the change you just made! This experience is very confusing for a user and also leads to double posting. We got around this concern by setting a cookie in your browser with the current time whenever you write something to our databases. The load balancer also looks for that cookie and, if it notices that you wrote something within 20 seconds, will unconditionally send you to California. Then when 20 seconds have passed and we're certain the data has replicated to Virginia, we'll allow you to go back for safe pages.
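This is not Facebook's load balancer code, only a hedged sketch of the same "stick to the master for 20 seconds after a write" idea, written as a servlet filter. The cookie name, the usePrimary request attribute consumed by the data layer, and the GET-means-read simplification are all assumptions:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: after any write, set a timestamp cookie; for the next
// STICKY_WINDOW_MS, route the user's reads to the primary instead of a replica.
public class StickyAfterWriteFilter implements Filter {

    private static final long STICKY_WINDOW_MS = 20_000; // 20 seconds, as in the quote
    private static final String COOKIE_NAME = "last_write_ts";

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        boolean recentlyWrote =
                System.currentTimeMillis() - readLastWrite(request) < STICKY_WINDOW_MS;

        // Hypothetical attribute read by the data-access layer to pick a connection.
        request.setAttribute("usePrimary", recentlyWrote || isWrite(request));

        chain.doFilter(request, response);

        if (isWrite(request)) {
            // Remember the write so the next requests stay on the primary for a while.
            Cookie cookie = new Cookie(COOKIE_NAME, Long.toString(System.currentTimeMillis()));
            cookie.setPath("/");
            response.addCookie(cookie);
        }
    }

    private boolean isWrite(HttpServletRequest request) {
        // Simplification: treat anything other than GET as a write.
        return !"GET".equalsIgnoreCase(request.getMethod());
    }

    private long readLastWrite(HttpServletRequest request) {
        if (request.getCookies() == null) {
            return 0L;
        }
        for (Cookie cookie : request.getCookies()) {
            if (COOKIE_NAME.equals(cookie.getName())) {
                try {
                    return Long.parseLong(cookie.getValue());
                } catch (NumberFormatException e) {
                    return 0L;
                }
            }
        }
        return 0L;
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}
```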

Clever way to put azure MVC app into maintenance mode

Does anybody have any quick and clever ways to flip an MVC app running on Windows Azure into a "maintenance mode"?
I don't have a huge need for this because I use the Azure staging environment a lot, but occasionally I do need to make sure there are no users in the production instance of the application (mainly for database updates).
I'd like to be able to do this on the fly without uploading new code or swapping deployment slots. Any suggestions?
The friendliest way to do it is on login. When a user authenticates, check a maintenance-mode flag in the database and don't let them log in. Let active users continue to use the application until they log out or their session times out. Keep an activity log so you know when all user sessions have expired.
Of course this means it will take some time from when you put the app into maintenance mode to when it is effectively in it, but it's not nice to boot out an active user.
If your app's usage pattern means this approach will not guarantee zero activity within a reasonable time, you can add a timeout on top of it: check the same maintenance flag every so often. It doesn't have to be on every request, but every five minutes or so. If necessary you can also cache the maintenance-mode value locally for a reasonable period of time (a few minutes).
I would use routing for this: have the flag inspected during routing configuration, and if it is on, route to "Maintenance" screens.
I would suggest adding a Global Action Filter that respects your maintenance-mode flag (a rough sketch combining these answers follows).
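The question targets an ASP.NET MVC app, where this would be a C# global action filter; purely to illustrate the combined idea (block new logins, let active sessions continue, cache the flag for a few minutes), here is a hedged sketch in servlet-filter form with placeholder names:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch only: reject new visitors while in maintenance mode, let already
// authenticated sessions keep working, and cache the flag so it is not read
// from the database on every request. isMaintenanceModeEnabled() stands in
// for your own flag lookup (database row, config setting, ...).
public class MaintenanceModeFilter implements Filter {

    private static final long CACHE_TTL_MS = 5 * 60 * 1000; // re-check roughly every 5 minutes
    private volatile boolean cachedFlag;
    private volatile long cachedAt;

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        HttpSession session = request.getSession(false);
        boolean alreadyLoggedIn = session != null && session.getAttribute("user") != null;

        if (inMaintenanceMode() && !alreadyLoggedIn) {
            // New visitors get the maintenance response; active users keep working.
            response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
                    "The application is down for maintenance.");
            return;
        }
        chain.doFilter(request, response);
    }

    private boolean inMaintenanceMode() {
        long now = System.currentTimeMillis();
        if (now - cachedAt > CACHE_TTL_MS) {
            cachedFlag = isMaintenanceModeEnabled();
            cachedAt = now;
        }
        return cachedFlag;
    }

    // Placeholder for the real flag source.
    private boolean isMaintenanceModeEnabled() {
        return false;
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}
```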
