I have an application that uses a SQL Server Express database. When the application runs locally the data loads fast, but when I run it remotely there is always some delay populating grids and so on.
I'm looking to add some kind of preloader, or an alert box with a progress bar, indicating that data is being loaded into the application, and to prevent the user from clicking on the form while that happens.
Can you point me to a tutorial, or give a general idea of how to accomplish that?
Forget optimisations like background threads and async dataset loading until you've got the basic workflow of your app correct. Generally, the thing to do with datasets is to open the minimum number necessary to permit the current user operation, and to open others, e.g. those needed for drilling down into a selected patient's details, only as needed. In each case you open the dataset before the related form is shown; that way, the opportunity for the user to try working with only partially loaded data never arises.
So in a situation like this one apparently is, where the user browses a collection of patients in a Patients table, start out with a form containing a DBGrid connected to a dataset component that delivers the Patients rows. Don't show the form until after you've opened the Patients table, in read-only mode, and don't open any other datasets yet.
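To make that concrete, here is a minimal sketch of the idea and nothing more; the component and form names (PatientsQuery, PatientListForm, TMainForm) are hypothetical, and it assumes an ADO query dataset like the AdoQuery mentioned further down:

// Minimal sketch only: open the top-level Patients dataset (read-only,
// single SELECT) before the browsing form is shown.
procedure TMainForm.ShowPatientList;
begin
  PatientsQuery.LockType := ltReadOnly;   // browse-only, as recommended above
  PatientsQuery.SQL.Text := 'SELECT PatientID, Surname, Forename FROM Patients';
  PatientsQuery.Open;                     // fetch the rows first...
  PatientListForm.Show;                   // ...only then let the user at the grid
end;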
Presumably there's a collection of patient detail tables that need to be opened to show the data of a given patient on one or more forms. I imagine there might be a top-level Patient ID Details form, and maybe a number of drill-down ones which can be invoked from it. Again, don't show these form(s) until the tables needed to supply the patient data are open. The easiest way to make the user aware that they should wait while something completes is to surround the code involved with something like this:
Screen.Cursor := crSQLWait;
Screen.ActiveForm.Update;  // repaint the active form so the new cursor shows on-screen
try
  // Open the patient detail table(s) and create the related form(s) here
finally
  Screen.Cursor := crDefault;
end;
// Now, show whichever is the principal patient detail form
Once the user has finished with a patient's details, close the form(s) that were opened to do it and close the related datasets.
SQL Server and Delphi are quite capable of populating a top-level DBGrid with outline info for several thousand patients with hardly any perceptible delay, as long as the data is all retrieved into one dataset (e.g. an AdoQuery) using one SQL SELECT statement. Don't take my word for it, try it with your own data. If it seems too slow, you're doing something wrong.
The key is not to attempt to do more than you need to at the time. As I've explained, only retrieve patient-specific data once the user has selected a top-level patient record to work on. Until the app knows which patient the user is working on, it's pointless trying to retrieve patient-specific data of the type you mentioned in comments; doing so would only slow down the app and generate needless network traffic.
I have created a main UserForm that incorporates a MultiPage, many fields and buttons, and those link to various other UserForms, worksheets and fields. I have reached a point where, when I press F5, I get a "Compile Error - Out of Memory".
I'm new to troubleshooting these kinds of issues and, granted, I did not have a plan when I started in terms of structuring the forms and modules, or for what this would grow into.
This specific issue came from having a page with a scroll feature that looks at a worksheet and pulls records into various combo boxes based on a status of Open, Closed, Hold, etc. Each record retrieves approximately 7-8 fields, and each page can display roughly 50 records, except Closed, which has to have enough for all of them.
I have read a couple of things about setting objects to Nothing and enabling some advanced Windows settings to allow more memory allocation. I feel like maybe it's a combination of structure and not clearing memory when I move around the tool. Any advice, help or resources you could point me towards?
Attached are the error, the VBA project tree and a screenshot of one of the MultiPage items being pulled into the UserForm from the worksheet. (There will be multiple pages besides "Open" that could have 100 or more records.)
Thanks again,
Update: I was able to move past this. My issue was that I had a very bulky form that created a lot of text boxes and combo boxes upon initialization. This obviously required a lot of memory to render all those fields at once, hence the error.
Solution: I rethought the form and decided to use a list box; upon selecting a record from the list, the fields I need are populated in the box below the list. This allowed me to go from hundreds of boxes to 12. This was also coupled with the fact that I had a MultiPage within a page. Sometimes you just need to take a step back and rethink and restructure your plan.
At the moment we are migrating the database layer of our Delphi 7 application from the BDE components to the AnyDAC version 8.0.5 components.
The BDE TTable has the following behaviour before editing a record that has been changed from another application instance (session):
The record is refreshed and the changes made by other instances become visible. The record is refreshed in the method TBDEDataSet.InternalEdit.
The dataset is set into edit mode (DataSet.State = dsEdit)
Using the corresponding AnyDAC component (TADTable), the record does not reflect the changes made by other instances.
No special changes to TADConnection and TADTable are made.
Any help appreciated.
I cannot speak for the BDE as I don't want to touch it anymore, but what you've described reads to me like:
Why doesn't AnyDAC refresh the tuple before editing starts?
If that is so, and correct me if I'm wrong, that would be quite bad UX. Imagine that you were a user of your own application and wanted to edit a certain tuple in a data grid view. You would click an edit button to enter editing mode, and the whole row would suddenly change in front of your eyes (or the editor would be filled with different data than you had seen). Would you like this to happen?
If that is what you want, then I'm afraid you'll need to perform such a refresh manually with AnyDAC (or FireDAC). The point here is that the engine either locks the tuple with a transaction, or tracks the changes in its internal storage whilst you're in editing mode.
In neither case does it refresh the tuple before editing starts (no matter which locking options you use). And I'm personally fine with this behaviour, as refreshing could lead to what I've described above.
So how can I refresh the active tuple before editing starts then?
To refresh the particular tuple to which the dataset cursor points before dataset editing starts, you can call e.g. RefreshRecord from the BeforeEdit event, for example:
procedure TForm1.ADTable1BeforeEdit(DataSet: TDataSet);
begin
  // re-fetch the current record from the server just before it enters edit mode
  TADTable(DataSet).RefreshRecord;
end;
But then your database editing capability becomes a moving target (well, maybe it is already).
I'm developing an application that displays information in a DBGrid via a TSimpleDataSet (dbExpress components).
The software in question is used on 2 different computers by 2 different people.
They both view and edit the same information at different times.
I'm trying to figure out a way to automatically update the DBGrid (or rather, the DataSet, right?) on Computer B once Computer A makes a change to a row (edits something/whatever) and vice-versa.
Currently I've set up a TButton named Refresh that once clicked executes the following code:
procedure TForm2.actRefreshDataExecute(Sender: TObject);
begin
dbmodule.somenameDataSet.MergeChangeLog;
dbmodule.somenameDataSet.ApplyUpdates(-1);
dbmodule.somenameDataSet.Refresh;
dbmodule.somename1DataSet.MergeChangeLog;
dbmodule.somename1DataSet.ApplyUpdates(-1);
dbmodule.somename1DataSet.Refresh;
dbmodule.somename2DataSet.MergeChangeLog;
dbmodule.somename2DataSet.ApplyUpdates(-1);
dbmodule.somename2DataSet.Refresh;
dbmodule.somename3DataSet.MergeChangeLog;
dbmodule.somename3DataSet.ApplyUpdates(-1);
dbmodule.somename3DataSet.Refresh;
end;
This is fine and works as intended, once clicked.
I'd like an auto-update feature for this; for example, when Computer A edits information in a row, Computer B's DBGrid should update its display accordingly, without the need to click the refresh button.
I figured I would use a TTimer set to a specific interval, in the application on both PCs.
My actual question is:
Is there a better way than a TTimer for this? If so, please elaborate.
Also, if the TTimer route is the way to go, any further info you might find useful to state would be appreciated (pros and cons and so on).
I'm using RAD Studio 10 Seattle and dbExpress components; the datasets connect to a MySQL database on the hosting where my website is.
Thanks!
Well, Ken White and Sertac Akyuz are certainly correct that using a server-originated notification to determine when to refresh your local dataset is preferable to continually re-reading all the data you are using from the server.
The problem AFAIK is that there is no Emba-supplied notification system which works with MySQL. See this list of databases supported by FireDAC's Database Alerts:
http://docwiki.embarcadero.com/RADStudio/XE8/en/Database_Alerts_(FireDAC)
and note that it does not list MySQL.
Luckily, I think there is a work-around which should be viable for a very small system like yours currently is. As I understand it, your and your colleague's PCs are on a LAN, and the MySQL server is outside your LAN and on the internet. In that situation, it doesn't need a round trip to the server for one of you to get a notification that the other has changed something in the database. Using an analogy akin to Ken's, you can, as it were, lean over the desk and say to your colleague "Hey, I've changed something, so you need to refresh your data."
A very low-tech way of implementing that would be to have, somewhere on your LAN, a resource that both of you can easily get at, which you update when you make a change to the DB that means the other of you should update your data from the server. One way to do that is to have a small, shared data file with a number of records in it, one per server db table, each with some sort of timestamp or version-ID number which gets updated when you update the corresponding server table. Then, you can periodically check (poll) this data file to see whether a given table has changed since you last checked; obviously, if it has, you then re-read the data you want from the server and update your local record of the info you read from the shared file.
You can update the shared file using handlers for the events of your Delphi client-side datasets.
There are a number of variations on this theme that I'm sure will be apparent to you; the implementational details really don't matter.
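As a rough illustration only: the UNC path, the section and table names, the FLastSeenVersion field, the PollTimer and the actRefreshData action are all made-up examples, and the sketch assumes the IniFiles unit and a TTimer doing the polling. The shared file could be as simple as one version number per server table:

// Hypothetical sketch of polling a shared "version" file on the LAN.
const
  VersionFile = '\\SHARED\dbversions\versions.ini';  // made-up path

function TForm2.TableVersion(const TableName: string): Integer;
var
  Ini: TIniFile;
begin
  Ini := TIniFile.Create(VersionFile);
  try
    Result := Ini.ReadInteger('Versions', TableName, 0);
  finally
    Ini.Free;
  end;
end;

procedure TForm2.PollTimerTimer(Sender: TObject);
var
  Current: Integer;
begin
  Current := TableVersion('somename');
  if Current <> FLastSeenVersion then   // FLastSeenVersion: an Integer field on the form
  begin
    FLastSeenVersion := Current;
    actRefreshData.Execute;             // re-read from the server only when something changed
  end;
end;

The writing side works the same way in reverse: in the AfterPost/AfterDelete handlers of your client-side datasets, increment the corresponding number in the shared file.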
To update the shared file I'm talking about, you will need to lock it while writing to it. This answer:
How do I get the handle for locking a file in Delphi?
will show you how to do that.
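For the writing side, a hedged alternative sketch (not necessarily the handle/LockFile approach from the linked answer; it assumes the shared file already exists and uses only standard Classes/SysUtils calls) is to open the file with an exclusive share mode and retry briefly if the other client happens to have it open:

// Open the shared file for exclusive read/write access, retrying briefly
// if the other client currently holds it open. Caller frees the stream.
// Requires the Classes and SysUtils units.
function OpenSharedFileExclusively(const FileName: string): TFileStream;
var
  Attempt: Integer;
begin
  for Attempt := 1 to 10 do
  try
    Result := TFileStream.Create(FileName, fmOpenReadWrite or fmShareExclusive);
    Exit;
  except
    on EFOpenError do
      Sleep(100); // locked by the other client; wait and try again
  end;
  raise Exception.CreateFmt('Could not obtain exclusive access to %s', [FileName]);
end;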
Of course, the shared local resource doesn't have to be a data file. One alternative would be to use a Microsoft Message Queue service, which is sometimes used for this kind of thing, but has a steeper learning curve than a shared data file.
By the way, this kind of thing is far easier to do (at least on a small scale like yours) if you use 3-tier database access (e.g. using DataSnap).
In a three-tier system, only the middle tier (a Delphi DataSnap server which you write yourself, but it's not that hard) talks to the database server, and the clients only talk to the middle tier. This makes it easy for the middle-tier server to notify the other client(s) when one of them changes the db data.
The three-tier arrangement also helps minimise the security problems with accessing a database server via the internet, because you only need one secure connection to the server, not one per client. But that's straying a bit far from your immediate problem.
I hope all this is clear, if not, ask.
Just use a timer and make it refresh the dataset every 5 min. No big deal.
If the usage is not frequent then you can set it to fire every 10 or 15 min.
There is nothing wrong with a timer if it is set to longer intervals.
Today's broadband connections can easily handle the traffic, and so can Access.
If the table is not huge, of course.
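For what it's worth, here is a minimal sketch of that approach, reusing the refresh code from the question; the timer name RefreshTimer and the action name actRefreshData are assumptions based on the handler names shown above:

procedure TForm2.FormCreate(Sender: TObject);
begin
  RefreshTimer.Interval := 5 * 60 * 1000; // five minutes, in milliseconds
  RefreshTimer.Enabled  := True;
end;

procedure TForm2.RefreshTimerTimer(Sender: TObject);
begin
  actRefreshData.Execute; // run the same action the Refresh button uses
end;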
In this document describing the lifecycle of a Windows 10 UWP app, it states:
Users now expect your app to remember its state as they multitask on their device. For example, they expect the page to be scrolled to the same position and all of the controls to be in the same state as before. By understanding the application lifecycle of launching, suspending, and resuming, you can provide this kind of seamless behavior.
However, there doesn't appear to be much documentation on how this is actually achieved. I gather that everything is to be manually saved by the app developer, and then recreated from scratch on resume using whatever data you stashed away when the app was suspending, all in order to create the illusion that the exact memory state of the app never changed.
I'm trying to puzzle through this using just a minimal example, a XAML page containing nothing other than a TextBox. Even in this situation, though, I'm struggling a bit to understand how to achieve the goal. I'll provide more general thoughts, but my concrete question is simply: how do you save and then restore a simple TextBox for resume from termination? I'm working in C++/CX but will take any help I can get.
Here are my thoughts on this so far:
At minimum, obviously the text of the TextBox has to be saved.
This could be saved into the ApplicationData::Current->LocalSettings.
One issue I see immediately is that the document I cited above on lifecycles states that apps must take care of their saving within 5 seconds of the suspend signal or face termination. A TextBox could potentially hold a lot of data, causing a save to be cut off in the face of busy I/O, particularly if we start scaling beyond the trivial single-TextBox situation.
Fortunately, the document states, "We recommended that you use the application data APIs for this purpose because they are guaranteed to complete before the app enters the Suspended state. For more info, see Accessing app data with the UWP app." Unfortunately, when you follow that link, there is nothing relevant there providing any more detail, and I can't find anything documenting this behavior in the APIs. By saving into ApplicationData::Current->LocalSettings, are we safe from being cut off with corrupted or lost data?
Once the minimum has been taken care of, next we'll probably need extras like cursor and window position.
We can get the cursor position with TextBox->SelectionStart which, as far as I can tell, is undocumented in the API for this usage of returning the current cursor position. This seems an easy fit to also store as an int in ApplicationData::Current->LocalSettings.
How can we get, save, and restore the scroll position of the TextBox window?
Now that we've got the extras, how about the hard stuff, like undo history? I'm assuming this is impossible, as my question on Stack Overflow about how to access the TextBox's undo facility has gotten no answers. Nonetheless, it does seem like a poor user experience if the user swaps to another app, comes back thinking the app never closed thanks to the beautiful and seamless restore from termination we implemented, and finds their undo history has been wiped out.
Is there anything else that would need to be saved for the TextBox to create the ideal user experience? Am I missing something, or is there an easier way to do all this? How would something like Microsoft's Edge browser handle the complex case where there are dozens of tabs, form inputs, scroll positions, etc. that all need to be saved in 5 seconds?
The app lifecycle document you reference has been updated for Windows 10, but seems to have lost some of the important bits that you are wondering about.
I found an old blog post, Managing app lifecycle so your apps feel "always alive", that seems to be the inspiration for your link.
In the blog post, there is a paragraph towards the end that reads:
Save the right data at the right time
Always save important data incrementally throughout the life of your app. Because your app has only up to five seconds to run suspending event handler code, you need to ensure that your important app data has been saved to persistent storage by the time it is suspended.
There are two types of data for you to manage as you write your app: session data and user data. Session data is temporary data that is relevant to the user’s current experience in your app. For example, the current stock the user is viewing, the current page the user is reading in an eBook or the scroll position in a long list of items. User data is persistent and must always be accessible to the user, no matter what. For example, an in-progress document that the user is typing, a photo that was taken in the app or the user’s progress in your game.
Given the above, I'll attempt to answer your questions:
how do you save and then restore a simple TextBox for resume from termination?
As the end user is typing in the TextBox, the app saves the contents in the background to the data store. To borrow from how word processing software works, you auto-save the textbox "document". I would consider the textbox content to be what the blog post above describes as "user data". Since the save is done outside of suspension, there is no time window to worry about.
When your app resumes from termination, it checks the data store and loads any data into the textbox.
Once the minimum has been taken care of, next we'll probably need extras like cursor and window position.
I would consider these items "session data" and would save them during suspension. After all there is no need to keep track of this info while the app is active. The user doesn't care where the cursor was 10 minutes ago when he started typing - he only cares about the cursor position at the time of suspension.
how about the hard stuff, like undo history?
I would consider undo history to be "user data" and would save it while it is happening (outside of suspension). In other words, as the user types in content, your app should be saving the information necessary to undo.
I'm attempting to build a multipeer network app and I've got everything working for a single conversation between two users. However, the intention was to build a master -> detail app like WhatsApp, where you have a list of conversations and tapping one takes you to the conversation. The problem I'm having is all the housekeeping involved in maintaining multiple sessions.
My structure is that I have a 'conversation manager' which has an array of 'conversations', which are wrappers for an MCSession and have an array of messages. When a conversation is started (either by inviting a recipient or by accepting an invitation), the conversation object (session) is added to the array, which is the data source for the master table view. When a conversation is selected from the list, in prepareForSegue I pass the conversation object to the detail view controller, and its array of messages becomes the data source for the detail screen.
I’m having numerous issues trying to get this working, such as messages not being delivered in conversations currently not on screen, keeping all the sessions active, not allowing multiple separate conversations between the same two people etc.
My specific question is that most of the examples and tutorials, including Apple's sample app, focus on one conversation and one active session at a time. Am I wasting my time trying to get this working? I.e., was the framework only designed to accommodate a single active session at a time?
I ran into this curious about the same thing!
I realize this is nearly 3 years old but here's a thought:
If you're using MPCF then you're accepting that these chats are within Wi-Fi/Bluetooth range. Well, you could accept the limitation of one session at a time and the limitation of up to 7 active chats at any time. You and seven others, 1:1? Then you can just pair those chats up; each peer could have 7 threads. Can we assume that only 8 people have your app open and are within range? I realize this doesn't completely solve your problem, but hopefully it gives some direction, since I'm not sure another option is possible.
And no answer/help for three years kinda stinks so I'm hoping to pitch in!
If you did find a better answer I'd love to know what you found!