Access Split Database Problem with Simultaneous Users - ms-access-2016

I have a split database that was created in Access 2016 and has some users using Access 2016 and others using Access 365.
It works fine when only one person is using it. When two people access it at the same time, it will sometimes generate a local copy of the back-end file for one of the users instead of saving that user's data to the networked back-end file. The problem usually occurs for the second user to open the front end, but not always.
That user gets a message like 'unable to sync ***_be.accdb' and a copy of the back end is generated. It doesn't matter whether the user is on 2016 or 365.
Another symptom: when this occurs, if the user whose file was copied opens one of the forms, they see the screen (form) of the user who is correctly linked to the networked back-end file.
When I monitor the networked back-end file, sometimes it updates quickly with changes and other times it takes a while. Basically, it's not consistent in how it allows access and transfers data to the back-end file from user to user.
I've tried one user on VPN and the other on the network, both users on the network, and both users on VPN, with no obvious difference.
Has anyone run into this?

Related

Can I transfer files from a local server to my Nextcloud server without using the internet and allow users to access them? (same computer)

I used Docker to set up a Nextcloud server for myself and my family.
Can I transfer files from a local server to my Nextcloud server without using the internet and allow users to access them?
I ask because I have discovered two strange things:
1. Placing files directly under a specific user's file path on the server does not let the user access those files.
2. As long as I don't delete the files added by the user, even if I directly change the content of those files on the server, the user still reads the original content, accurately and correctly.
Or is the user file path I have in mind incorrect?
I think it's /var/www/html/data/"USERID"/files
I would like to know how to solve this, but at the same time I also want to know what causes the two problems above.
Thank you so much.

How can I ensure a persistent connection to a specific GCP Cloud Run instance?

I've built an app (with flask, flask-login and dash) on GCP Cloud Run. The app allows users to login, look at some fancy dashboards and leave comments on certain pages. It works great performance-wise: instances spin up quickly for users with minimal lag, the BigQuery interface I built works great and pub/sub messages sent from user interactions do exactly what they're supposed to do.
The only issue I'm having right now is that there's something weird about which container instance a user connects to. What often happens is that a user logs in to my app via their browser successfully, and then, when navigating to another password-protected page, receives a 401 error (seemingly at random).
My belief is that this happens because the navigation request (clicking a link to another password-protected page) spins up another Cloud Run instance. Is there any way to force Cloud Run to keep a given user's requests on a specific instance of my container, so that if a user logs in and then navigates, GCP doesn't hand the next request to a freshly autoscaled instance?
I've experimented with setting the maximum number of concurrent requests for the app's frontend container to 1, but it doesn't seem to improve this behavior, which happens sporadically throughout a given user's session.
To clarify, the frontend part of the app is still usable, but it is an annoying user experience to constantly have to log in again.
Any help or guidance is appreciated!
The answer was as simple as turning on session affinity, per @DazWilkin's comment.
What I did:
Went to the Cloud Run dashboard on GCP and selected the service of interest
Clicked "Edit and Deploy New Revision"
Went to the "Connections"
Checked the box next to the "Session affinity" preview feature
Clicked deploy
This ended up completely solving the problem!

Is it possible to upload a docker image from the web interface?

Good morning everyone.
I've been having a problem for some time. I have some users in my Rails application; let's suppose that each user has a resource in their panel where they can register certain parameters in a form.
Now comes the part I imagine is not possible: through the web interface, after registering their parameters, the user would have access to a button that starts a new Docker container, either on the same server (droplet) as the application or on another server (droplet), and of course the container would be started from the image with the parameters the user registered in the resource.
It sounds confusing, but it's the only solution I have for giving users a kind of live monitor, because within this container, requests will be triggered under certain conditions and at certain times, depending on the parameters submitted by the users.
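For what it's worth, a minimal sketch of how the button's backend endpoint could start such a container with the registered parameters, assuming the Docker SDK for Python (a Rails app would typically use the docker-api gem or shell out to the Docker CLI instead); the image name and parameter names here are hypothetical:

# Minimal sketch, assuming the Docker SDK for Python (pip install docker).
# The image name and the parameter names passed as environment variables
# are hypothetical illustrations, not something from the question.
import docker

def start_monitor_container(user_id, params):
    """Start a per-user monitor container using the user's registered parameters."""
    client = docker.from_env()  # connects to the local Docker daemon
    return client.containers.run(
        "myapp/live-monitor:latest",      # hypothetical image
        detach=True,
        name=f"monitor-{user_id}",
        environment={
            "POLL_INTERVAL": str(params["interval"]),
            "TARGET_URL": params["url"],
        },
        restart_policy={"Name": "on-failure"},
    )

# The button in the panel would hit a controller/endpoint that calls
# start_monitor_container() with the parameters the user registered.

To start the container on a different droplet, the same SDK can be pointed at a remote Docker daemon (for example via DOCKER_HOST or an ssh:// base URL) instead of the local one.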

SignalR notifications based on non-web-originating events

Our system has two servers: one (S1) runs processing and data storage (basically the DB) and the other is a web server (WS).
There are two types of events that can happen in the system:
User A pings User B. In this case we check whether User B is logged in and push a notification to User B's client through SignalR. It works.
Services constantly run on S1 and generate new data that concerns multiple users. My goal is that as soon as new data important for User A is generated, I immediately dispatch a SignalR notification to User A's client, provided he/she is logged in.
It is not quite clear to me how to design this second part. My thought right now is to start an indefinite process on the web server that monitors our database, checks whether new records have been generated for a given user, and then pushes a SignalR message.
That would be fine, but we now have 10k users logged in, and I don't think running 10k threads to monitor activity would be the right decision.
Basically, my question is: what would be a proper way to design a SignalR-based notification mechanism driven by events that do not originate on our web server?
I would use a service bus or message queue, for example the free RabbitMQ: https://www.rabbitmq.com/
You can proxy the messages directly to the clients using this proxy library (I'm the author).
Docs here: https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki
Demo: https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4
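To make the queue idea concrete: the services on S1 publish an event whenever relevant data is generated, and a single consumer on the web server forwards each event to the connected SignalR client(s) for that user, so no per-user polling threads are needed. A minimal sketch of the publishing side, written in Python with pika purely for illustration (the broker host, queue name, and payload shape are assumptions; your S1 services would more likely use the .NET RabbitMQ client):

# Minimal sketch of the producer side, assuming pika (pip install pika).
# Broker host, queue name, and payload shape are assumptions.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq-host"))
channel = connection.channel()
channel.queue_declare(queue="user-notifications", durable=True)

def publish_new_data_event(user_id, payload):
    """Called by an S1 service whenever data relevant to a user is generated."""
    channel.basic_publish(
        exchange="",
        routing_key="user-notifications",
        body=json.dumps({"userId": user_id, "data": payload}),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

# A single background consumer on the web server reads this queue and pushes
# each message to the matching SignalR connection(s), instead of 10k polling threads.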
You can also set up a SQL dependency that triggers a message to your SignalR clients:
http://techbrij.com/database-change-notifications-asp-net-signalr-sqldependency
This link is the one that I based my code on.
A couple of things to watch for in the setup of the query: you cannot use three-part table names, so the FROM clause must use a two-part name such as [dbo].[Case]:
"SELECT [CMRID],
        [SolutionID],
        [CreateDT],
        [ModifyDT]
 FROM [dbo].[Case]
 WHERE [ModifyDT] > " + LastExecutionDateTime;
Also, and this is very important: you MUST reset the event handler every time the dependency fires; if you don't, it will work the first time and then stop working.
I hope this helps you.

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way for identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, else open a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the concept of replication lag, which basically means that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
I worked with an application which used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves, and from the master only in selected cases (a rough sketch of this routing follows below).
As an example: when a user updated their password, the application would always ask the master when authenticating. When changing non-time-sensitive data (like user preferences), it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on the environment:
After an update, compute an entity checksum, store it in the application cache, and when fetching the data always check it against the checksum
Use browser storage/cookies for storing a delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind it's subject to the CAP theorem.
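As a rough illustration of that routing, here is a minimal sketch assuming SQLAlchemy with one master engine and one replica engine; the connection URLs and the write/time_sensitive flags are assumptions, not part of the original answer:

# Minimal sketch, assuming SQLAlchemy; the connection URLs and the
# write/time_sensitive flags are hypothetical.
from sqlalchemy import create_engine, text

master = create_engine("postgresql://app@master-host/app")
replica = create_engine("postgresql://app@replica-host/app")

def run_query(sql, params=None, *, write=False, time_sensitive=False):
    """Route writes and time-sensitive reads to the master, other reads to a replica."""
    engine = master if (write or time_sensitive) else replica
    with engine.begin() as conn:  # commits on success, rolls back on error
        result = conn.execute(text(sql), params or {})
        return result.fetchall() if result.returns_rows else None

# Example: a password check is pinned to the master so it never sees a stale replica.
# run_query("SELECT password_hash FROM users WHERE id = :id", {"id": 42}, time_sensitive=True)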
This is a hard problem, and there are lots of potential solutions. One potential solution is to look at what Facebook did.
TL;DR: read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all your reads go to the writable master.
The other main problem we had to address was that only our master databases in California could accept write operations. This fact meant we needed to avoid serving pages that did database writes from Virginia because each one would have to cross the country to our master databases in California. Fortunately, our most frequently accessed pages (home page, profiles, photo pages) don't do any writes under normal operation. The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California?
This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location.
There is another wrinkle to this problem, however. Let's say you go to editprofile.php to change your hometown. This page isn't marked as safe so it gets routed to California and you make the change. Then you go to view your profile and, since it is a safe page, we send you to Virginia. Because of the replication lag we mentioned earlier, however, you might not see the change you just made! This experience is very confusing for a user and also leads to double posting. We got around this concern by setting a cookie in your browser with the current time whenever you write something to our databases. The load balancer also looks for that cookie and, if it notices that you wrote something within 20 seconds, will unconditionally send you to California. Then when 20 seconds have passed and we're certain the data has replicated to Virginia, we'll allow you to go back for safe pages.
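A minimal sketch of that cookie trick, assuming a Flask app; the cookie name and the g flags are hypothetical, and the 20-second window is the one from the description above:

# Minimal sketch of "pin reads to the master for 20 seconds after a write",
# assuming Flask; the cookie name and g flags are hypothetical.
import time
from flask import Flask, request, g

app = Flask(__name__)
WRITE_COOKIE = "last_write_ts"
STICKY_SECONDS = 20

@app.before_request
def choose_database():
    last_write = float(request.cookies.get(WRITE_COOKIE, 0))
    # Use the master if this user wrote anything within the last 20 seconds;
    # otherwise a read replica is acceptable despite replication lag.
    g.use_master = (time.time() - last_write) < STICKY_SECONDS

@app.after_request
def remember_write(response):
    # Any request that performed a write stamps the cookie with the current time.
    if getattr(g, "performed_write", False):
        response.set_cookie(WRITE_COOKIE, str(time.time()), max_age=STICKY_SECONDS)
    return response

View functions would set g.performed_write = True after committing to the master and pick the master or a replica engine based on g.use_master; the load-balancer variant Facebook describes does the same check in front of the web tier instead.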
