Avoiding overly long active transactions with IBX components - Delphi

My question is very simple: what is the best practice for avoiding overly long active transactions in an application that uses many TIBDataSet components? I want to avoid ending up with a very old OAT (Oldest Active Transaction) and, as a consequence, very bad performance.
My application has several datasets that must stay open for as long as the application is running. I would like to avoid closing and reopening the transaction, because that would mean reopening all of those datasets.
Do I have to replace these components?
And if so, what is the best choice:
a ClientDataSet with a DataSetProvider, or switching to the IBO components (bearing in mind that I would rather not install additional components in my IDE)?

Read-only transactions don't affect the performance of the Firebird server. In our project we use a single, project-wide, read-only, always-open transaction for data fetching and multiple short-lived transactions for data modification.
We use modified IBX components to which a second, separate transaction for data reading was added.
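With stock IBX you can get close to the same pattern without modifying the components. A minimal sketch, assuming two TIBTransaction instances named trRead and trWrite and a couple of datasets (all component names are placeholders); long-lived SELECT datasets point at trRead, every modification runs through a short-lived trWrite:

// trRead: read-only, read-committed, opened once and left open.
// Per the point above, a read-only transaction does not hurt the server,
// so it can stay open without dragging the OAT behind it.
trRead.Params.Clear;
trRead.Params.Add('read');
trRead.Params.Add('read_committed');
trRead.Params.Add('rec_version');
trRead.Params.Add('nowait');
trRead.StartTransaction;
qryCustomers.Transaction := trRead;   // long-lived SELECT datasets use trRead

// trWrite: started just before a modification and ended right after it.
qryUpdate.Transaction := trWrite;     // qryUpdate: a TIBQuery holding the DML
trWrite.StartTransaction;
try
  qryUpdate.ExecSQL;
  trWrite.Commit;
except
  trWrite.Rollback;
  raise;
end;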

Related

Auto refresh a TDataSet / DBGrid

I'm developing software that displays information in a DBGrid via a TSimpleDataSet (dbExpress components).
The software in question is used on 2 different computers by 2 different people.
They both view and edit the same information at different times.
I'm trying to figure out a way to automatically update the DBGrid (or rather, the DataSet, right?) on Computer B once Computer A makes a change to a row (edits something/whatever) and vice-versa.
Currently I've set up a TButton named Refresh that once clicked executes the following code:
procedure TForm2.actRefreshDataExecute(Sender: TObject);
begin
  dbmodule.somenameDataSet.MergeChangeLog;
  dbmodule.somenameDataSet.ApplyUpdates(-1);
  dbmodule.somenameDataSet.Refresh;

  dbmodule.somename1DataSet.MergeChangeLog;
  dbmodule.somename1DataSet.ApplyUpdates(-1);
  dbmodule.somename1DataSet.Refresh;

  dbmodule.somename2DataSet.MergeChangeLog;
  dbmodule.somename2DataSet.ApplyUpdates(-1);
  dbmodule.somename2DataSet.Refresh;

  dbmodule.somename3DataSet.MergeChangeLog;
  dbmodule.somename3DataSet.ApplyUpdates(-1);
  dbmodule.somename3DataSet.Refresh;
end;
This is fine and works as intended, once clicked.
I'd like an auto-update feature for this: for example, when Computer A edits information in a row, Computer B's DBGrid should update its display accordingly, without the need to click the refresh button.
I figured I would use a TTimer set to a specific interval, in the software on both PCs.
My actual question is:
Is there a better way than a TTimer for this? If so, please elaborate.
Also, if the TTimer route is the way to go, any further info you find useful would be appreciated (pros and cons, and so on).
I'm using Rad Studio 10 Seattle and dbExpress components, the datasets connect to a MySQL database on my hosting where my website is.
Thanks!
Well, Ken White and Sertac Akyuz are certainly correct that using a server-originated notification to determine when to refresh your local dataset is preferable to continually re-reading all the data you are using from the server.
The problem AFAIK is that there is no Emba-supplied notification system which works with MySql. See this list of databases supported by FireDAC's Database Alerts:
http://docwiki.embarcadero.com/RADStudio/XE8/en/Database_Alerts_(FireDAC)
and note that it does not list MySql.
Luckily, I think there is a work-around which should be viable for a v. small system like yours currently is. As I understand it, you and your colleague's PCs are on a LAN and the MySql Server is outside your LAN and on the internet. In that situation, it doesn't need a round trip to the server for one of you to get a notification that the other has changed something in the database. Using an analogy akin to Ken's, you can, as it were, lean over the desk and say to your colleague "Hey, I've changed something, so you need to refresh your data."
A very low-tech way of implementing that would be to have, somewhere on your LAN, a resource that both of you can easily get at, and which you update whenever you make a change to the DB that means the other of you should refresh their data from the server. One way to do that is to have a small, shared data file with a number of records in it, one per server DB table, each carrying some sort of timestamp or version-ID number that gets updated when you update the corresponding server table. Then you can periodically check (poll) this data file to see whether a given table has changed since you last checked; obviously, if it has, you re-read the data you want for that table from the server and update your local record of what you last read from the shared file.
You can update the shared file using handlers for the events of your Delphi client-side datasets.
There are a number of variations on this theme that I'm sure will be apparent to you; the implementational details really don't matter.
To update the shared file I'm talking about, you will need to lock it while writing to it. This answer:
How do I get the handle for locking a file in Delphi?
will show you how to do that.
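To make the idea concrete, here is a minimal sketch of the shared-file variant, using a plain INI file on a LAN share polled from a TTimer. Every name in it (VERSION_FILE, BumpTableVersion, PollTimer, FSomenameVersion) is a placeholder, and a production version should lock the file while writing, as per the answer linked above:

uses
  SysUtils, IniFiles;

const
  VERSION_FILE = '\\myserver\share\dbversions.ini';  // hypothetical LAN path

// Writer side: call this from the AfterPost/AfterDelete events of a dataset.
procedure BumpTableVersion(const TableName: string);
var
  Ini: TIniFile;
begin
  Ini := TIniFile.Create(VERSION_FILE);
  try
    Ini.WriteInteger('Versions', TableName,
      Ini.ReadInteger('Versions', TableName, 0) + 1);
  finally
    Ini.Free;
  end;
end;

// Reader side: poll from a TTimer and refresh only what actually changed.
procedure TForm2.PollTimerTimer(Sender: TObject);
var
  Ini: TIniFile;
  V: Integer;
begin
  Ini := TIniFile.Create(VERSION_FILE);
  try
    V := Ini.ReadInteger('Versions', 'somename', 0);
    if V <> FSomenameVersion then          // FSomenameVersion: an Integer field on the form
    begin
      FSomenameVersion := V;
      dbmodule.somenameDataSet.Refresh;    // hit the server only when something changed
    end;
  finally
    Ini.Free;
  end;
end;

The poll itself only touches the LAN file; the server is only re-queried when one of the version numbers has moved.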
Of course, the shared local resource doesn't have to be a data file. One alternative would be to use a Microsoft Message Queue service, which is sometimes used for this kind of thing, but has a steeper learning curve than a shared data file.
By the way, this kind of thing is far easier to do (at least on a small scale like you have) if you use 3-tier database access (e.g. using datasnap).
In a three tier system, only the middle tier (a Delphi datasnap server which you write, but it's not that hard) talks to the server, and the clients only talk to the middle tier. This makes it easy for the middle tier server to notify the other client(s) when one of them changes the db data.
The three-tier arrangement also helps minimise the security problems with accessing a database server via the internet, because you only need one secure connection to the server, not one per client. But that's straying a bit far from your immediate problem.
I hope all this is clear, if not, ask.
Just use a timer and make it refresh the dataset every 5 min. No big deal.
If the usage is not frequent then you can set it to fire every 10 or 15 min.
There is nothing wrong with a timer if it is set to longer intervals.
Today's broadband connections can easily handle the traffic, and so can Access, provided the table is not huge, of course.
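If you do go the timer route, the wiring is tiny. A minimal sketch, assuming a TTimer named RefreshTimer dropped on the form next to the existing refresh action, with its OnTimer event assigned to RefreshTimerTimer:

procedure TForm2.FormCreate(Sender: TObject);
begin
  RefreshTimer.Interval := 5 * 60 * 1000;   // five minutes, in milliseconds
  RefreshTimer.Enabled  := True;
end;

procedure TForm2.RefreshTimerTimer(Sender: TObject);
begin
  actRefreshDataExecute(Sender);            // same code the Refresh button runs
end;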

IBTransaction and Firebird for a multi-user program

I have a multi-user Delphi program with a Firebird database behind it.
I want two users to be able to insert two records at the same time, each record getting an automatically generated number for one of its fields.
On the other hand, I am not sure Firebird can handle this without one user committing and closing the table and the other one refreshing it...
I have heard bad things about CommitRetaining and I don't know what to do now. For example:
Which transaction setting is best for me?
Wait or no-wait? And if I have to use CommitRetaining, how can I do that safely?
Use GENERATORS. With generators you always get unique numbers. It doesn't matter how many transactions are active; generators live outside of transaction control.
See Firebird Generator Guide
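For example, with IBX you can pull the next value straight from the generator, independently of the insert that will use it. A minimal sketch, assuming a generator named GEN_ORDER_ID already exists on the server and qryGenId is a TIBSQL wired to the database and to a dedicated short-lived transaction (all names are placeholders):

function TdmMain.NextOrderId: Integer;
begin
  qryGenId.SQL.Text := 'SELECT GEN_ID(GEN_ORDER_ID, 1) FROM RDB$DATABASE';
  qryGenId.Transaction.StartTransaction;     // short transaction just for this call
  try
    qryGenId.ExecQuery;
    Result := qryGenId.Fields[0].AsInteger;  // the freshly reserved number
    qryGenId.Close;
  finally
    qryGenId.Transaction.Commit;
  end;
end;

Generator increments are never rolled back, which is exactly why the numbers stay unique across concurrent transactions (and also why you may see gaps). IBX's TIBDataSet also has a GeneratorField property that can fill a field from a generator automatically when a record is inserted.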

Interbase transaction monitoring

I have a very strange problem with transactions in Interbase 7.5 which seem to be stuck.
I can track the problem with IBConsole -> right click DB -> Performance Monitor -> Transactions
Usually this list should show only a few active transactions, but I get several hundred active transactions when I start my application (a web module for an Apache web server using Delphi 7 InterBase components, e.g. IBQuery, IBTransaction, ...).
Transaction type is always listed as snapshot, if this is of relevance.
I have already triple checked all sql statements and cannot find anything that should produce such problems...
Is there any way to get the SQL statements of a specific transaction?
Any other suggestion how to find such a problem would be very welcome.
Is there any way to get the SQL statements of a specific transaction?
Yes, you can SELECT from TMP$STATEMENTS WHERE TRANSACTION_ID = .... That's from memory, but should get you started.
In IB Performance Monitor, you can locate the transaction from the statements tab, using the button on the toolbar. Can't remember if you can go the other way in that app. It's been a long time since I wrote it!
Active IBX datasets require an active transaction at all times. If you don't have active datasets, just don't forget to commit all the active transactions.
If you do have active datasets, you can configure all your components to use the same TIBTransaction object, and you can also configure that single TIBTransaction to commit or roll back after an idle time-out period via its IdleTimer and DefaultAction properties.
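A minimal sketch of that configuration; trMain and the dataset names are placeholders, while IdleTimer and DefaultAction are the actual TIBTransaction properties:

// One shared transaction for all the datasets of the module.
qryCustomers.Transaction := trMain;
qryOrders.Transaction    := trMain;

// If trMain stays idle long enough, IBX applies DefaultAction to it.
trMain.IdleTimer     := 60;        // idle period before DefaultAction fires (seconds)
trMain.DefaultAction := TACommit;  // or TARollback / TACommitRetaining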
Terminating the transaction (by committing or rolling back, whether manually or automatically) will close all the linked datasets (TIBQuery, TIBTable and the like).
You may be tempted to use the CommitRetaining or RollbackRetaining methods to terminate the transaction without closing the related datasets, but this can affect the performance of the server, and my advice is to always avoid them.
If you want to improve your application, you should consider changing your database connection layer or introducing an in-memory-capable dataset on top of IBX, for example Delphi's TClientDataSet, which lets you retrieve data and keep it in memory while closing all the underlying datasets (and transactions), while still allowing you to use the traditional Insert/Append/Edit/Delete methods to modify the data and then apply those changes to the database in a new, short-lived transaction.
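A minimal sketch of that arrangement, using the usual TClientDataSet / TDataSetProvider pairing over an IBX dataset (every component name here is a placeholder):

// Design-time wiring:
//   prvCustomers.DataSet      := qryCustomers;    // TDataSetProvider over a TIBQuery
//   cdsCustomers.ProviderName := 'prvCustomers';  // TClientDataSet

procedure TdmMain.LoadCustomers;
begin
  trMain.StartTransaction;
  try
    cdsCustomers.Open;           // the provider pulls the rows into memory
  finally
    trMain.Commit;               // the transaction ends; the data stays in cdsCustomers
  end;
end;

procedure TdmMain.SaveCustomers;
begin
  if cdsCustomers.ChangeCount = 0 then
    Exit;                                // nothing to write back
  trMain.StartTransaction;               // new short-lived transaction for the update
  try
    cdsCustomers.ApplyUpdates(0);        // MaxErrors = 0: stop at the first failed record
    trMain.Commit;
  except
    trMain.Rollback;
    raise;
  end;
end;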

Is it possible to take a snapshot of a dataset?

In my application I use DB-aware components almost exclusively (except in a few places).
I have a scenario in which I create a master dataset (e.g. customers), a detail dataset (e.g. orders) and a sub-detail dataset (e.g. order items). Typically I allow users to make changes (the datasets are in Browse mode) and then I post. Simple.
Anyway, for editing the sub-detail dataset I want to add a simple undo feature: a form is opened to edit the dataset (it is built with DB-aware components, so changes made on the form directly change the dataset), and if the user cancels the operation I would like to restore the dataset to how it was before the form was opened.
To implement this I can think of making a copy of the dataset in a TClientDataSet or a similar component, but are there other techniques? For example, is there an easy way in Delphi to create a "snapshot" of the data? In pseudocode:
MySubDetailDataSet.SaveSnapShot;
SubDetailForm.ShowModal;
if ModalResult = mrCancel then MySubDetailDataSet.RestoreSnapShot;
Is something like that possible "off the shelf" with Delphi components?
By the way, I use DevArt's SDAC components, so a technique that is available only with those components and not with the standard Delphi ones is welcome too!
In a client dataset changes are stored in a delta - you can call CancelUpdates to clear the delta and revert to the original dataset. There are other more granular approaches. See "Undoing changes" in the help.
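Applied to the pseudocode in the question, and assuming the sub-detail data lives in a TClientDataSet named cdsSubDetail whose change log is empty when the form opens, the sketch becomes:

SubDetailForm.ShowModal;
if ModalResult = mrCancel then
  cdsSubDetail.CancelUpdates;   // clears the delta: every edit made on the form is undone

For something more granular, TClientDataSet also has a SavePoint property that works much like the AnyDAC example in the answer further down, as well as UndoLastChange for stepping back one edit at a time.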
If you're using an RDBMS and you're properly inside a transaction, you can roll back the transaction. Some databases offer savepoints, so you can roll back to a given savepoint instead of rolling back a whole transaction, but that's database specific. Transactions are usually per session, not per single table or query, so you have to be sure that only the changes you may need to roll back are performed in a given transaction.
A client dataset may be a "lighter" approach because it manages data client-side only and doesn't require database resources. While you're in a transaction inside the database, some resources are needed to keep track of it and of the changed data. Transactions should be as long as required, but no longer.
Be also aware that transactions may imply some locks. Lock management can be very different from database to database, and some may escalate locks, blocking more users than needed. Always test with a sufficient number of concurrent users to ensure transactions are used properly.
In AnyDAC you can do:
var
  iPrevSP: Integer;
...
  iPrevSP := MySubDetailDataSet.SavePoint;
  SubDetailForm.ShowModal;
  if ModalResult = mrCancel then
    MySubDetailDataSet.SavePoint := iPrevSP;
A similar technique is available with TClientDataSet and kbmMemTable. Probably not the answer, though, as you are using a DevArt product.
With DevArt I managed it by copying the data to a TVirtualTable (DevArt's version of a TClientDataSet); however, a SavePoint feature as easy as AnyDAC's is not available there.
You can use a TClientDataSet, save the original data to a file or stream, and every time you want to roll back, reload it from that original data.
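A minimal sketch of that approach, assuming the data sits in a TClientDataSet named cdsSubDetail (all names are placeholders):

procedure TForm1.EditSubDetail;
var
  Snapshot: TMemoryStream;
begin
  Snapshot := TMemoryStream.Create;
  try
    cdsSubDetail.SaveToStream(Snapshot);      // take the snapshot before editing
    if SubDetailForm.ShowModal = mrCancel then
    begin
      Snapshot.Position := 0;
      cdsSubDetail.LoadFromStream(Snapshot);  // roll back to the saved copy
    end;
  finally
    Snapshot.Free;
  end;
end;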

What's the difference between Jet OLEDB:Transaction Commit Mode and Jet OLEDB:User Commit Sync?

Although both Jet/OLE DB parameters are relatively well documented, I fail to understand the difference between these two connection parameters:
The first one:
Jet OLEDB:Transaction Commit Mode (DBPROP_JETOLEDB_TXNCOMMITMODE)
Indicates whether Jet writes data to disk synchronously or asynchronously when a transaction is committed.
The second one:
Jet OLEDB:User Commit Sync (DBPROP_JETOLEDB_USERCOMMITSYNC)
Indicates whether changes that were made in transactions are written in synchronous or asynchronous mode.
What's the difference? When to use which?
This is very long, so here's the short answer:
Don't set either of these. The default settings for these two options are likely to be correct. The first, Transaction Commit Mode, controls Jet's implicit transactions, and applies outside of explicit transactions, and is set to YES (asynchronous). The second controls how Jet interacts with its temporary database during an explicit transaction and is set to NO (synchronous). I can't think of a situation where you'd want to override the defaults here. However, you might want to set them explicitly just in case you're running in an environment where the Jet database engine settings have been altered from their defaults.
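For what it's worth, if you do decide to pin them down explicitly, the two names in the question are the provider-specific keywords themselves, so they can be supplied in the OLE DB connection string. A minimal, heavily hedged sketch from Delphi with ADO (the path and the two values are placeholders; given the Yes/No transposition discussed at the end of this page, verify which value actually means synchronous for the Jet/OLE DB build you are targeting before relying on it):

uses
  ADODB;

procedure OpenJetConnection(Conn: TADOConnection);
begin
  Conn.ConnectionString :=
    'Provider=Microsoft.Jet.OLEDB.4.0;' +
    'Data Source=C:\data\mydb.mdb;' +
    'Jet OLEDB:Transaction Commit Mode=1;' +   // implicit-transaction commit mode
    'Jet OLEDB:User Commit Sync=Yes;';         // explicit-transaction commit mode
  Conn.LoginPrompt := False;
  Conn.Open;
end;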
Now, the long explanation:
I have waded through a lot of Jet-related resources to see if I can find out what the situation here is. The two OLEDB constants seem to map onto these two members of the SetOptionEnum of the top-level DAO DBEngine object (details here for those who don't have the Access help file available):
dbImplicitCommitSync
dbUserCommitSync
These options are there to override the default registry settings for the Jet database engine at runtime for a particular connection, or to permanently alter the stored settings for it in the registry. If you look in the registry under HKLM\Software\Microsoft\Jet\X.X\ you'll find, under the key for the Jet version you're using, a number of entries, two of which are these:
ImplicitCommitSync
UserCommitSync
The Jet 3.5 Database Engine Programmer's Guide defines these:
ImplicitCommitSync: A value of Yes indicates that Microsoft Jet will wait for commits to finish. A value other than Yes means that Microsoft Jet will perform commits asynchronously.
UserCommitSync: When the setting has a value of Yes, Microsoft Jet will wait for commits to finish. Any other value means that Microsoft Jet will perform commits asynchronously.
Now, this is just a restatement of what you'd already said. The frustrating thing is that the first has a default value of NO while the second defaults to YES. If they really were controlling the same thing, you'd expect them to have the same value, or that conflicting values would be a problem.
But the key actually turns out to be in the name, and it reflects the history of Jet in regard to how data writes are committed within and outside of transactions. Before Jet 3.0, Jet defaulted to synchronous updates outside of explicit transactions, but starting with Jet 3.0, IMPLICIT transactions were introduced, and were used by default (with caveats in Jet 3.5 -- see below). So, one of these two options applies to commits OUTSIDE of transactions (dbImplicitCommitSync) and the other for commits INSIDE of transactions (dbUserCommitSync). I finally located a verbose explanation of these in the Jet Database Engine Programmer's Guide (p. 607-8):
UserCommitSynch
The UserCommitSynch setting determines whether changes made as part of an explicit transaction...are written to the database in synchronous mode or asynchronous mode. The default value...is Yes, which specifies asynchronous mode. It is not recommended that you change this value because in synchronous mode, there is no guarantee that information has been written to disk before your code proceeds to the next command.
ImplicitCommitSync
By default, when performing operations that add, delete, or update records outside of explicit transactions, Microsoft Jet automatically performs internal transactions called implicit transactions that temporarily save data in its memory cache, and then later write the data as a chunk to the disk. The ImplicitCommitSync setting determines whether changes made by using implicit transactions are written to the database in synchronous mode or asynchronous mode. The default value...is No, which specifies that these changes are written to the database in asynchronous mode; this provides the best performance. If you want implicit transactions to be written to the database in synchronous mode, change the value...to Yes. If you change the value...you get behavior similar to Microsoft Jet versions 2.x and earlier when you weren't using explicit transactions. However, doing so can also impair performance considerably, so it is not recommended that you change the value of this setting.
Note: There is no longer a need to use explicit transactions to improve the performance of Microsoft Jet. A database application using Microsoft Jet 3.5 should use explicit transactions only in situations where there may be a need to roll back changes. Microsoft Jet can now automatically perform implicit transactions to improve performance whenever it adds, deletes or changes records. However, implicit transactions for SQL DML statements were removed in Microsoft Jet 3.5...see "Removal of Implicit Transactions for SQL DML Statements" later in this chapter.
That section:
Removal of Implicit Transactions for SQL DML Statements
Even with all the work in Microsoft Jet 3.0 to eliminate transactions in order to obtain better performance, SQL DML statements were still placed in an implicit transaction. In Microsoft Jet 3.5, SQL DML statements are not placed in an implicit transaction. This substantially improves performance when running SQL DML statements that affect many records of data.
Although this change provides a substantial performance improvement, it also introduces a change to the behavior of SQL DML statements. When using Microsoft Jet 3.0 and previous versions that use implicit transactions for SQL DML statements, an SQL DML statement rolls back if any part of the statement is not completed. When using Microsoft Jet 3.5, it is possible to have some of the records committed by an SQL DML statement while others are not. An example of this would be when the Microsoft Jet cache is exceeded. The data in the cache is written to disk and the next set of records is modified and placed in the cache. Therefore, if the connection is terminated, it is possible that some of the records were saved to disk, but others were not. This is the same behavior as using DAO looping routines to update data without an explicit transaction in Microsoft Jet 3.0. If you want to avoid this behavior, you need to add explicit transactions around the SQL DML statement to define a set of work and you must sacrifice the performance gains.
Confused yet? I certainly am.
The key point seems to me to be that dbUserCommitSync controls the way Jet writes to the TEMPORARY database it uses for staging EXPLICIT transactions, while dbImplicitCommitSync relates to how Jet uses its implicit transactions OUTSIDE of an explicit transaction. In other words, dbUserCommitSync controls the behavior of the engine while inside a BeginTrans/CommitTrans loop, while dbImplicitCommitSync controls how Jet behaves in regard to async/sync outside of explicit transactions.
Now, as to the "Removal of Implicit Transactions" section: my reading is that implicit transactions apply to updates when you're looping through a recordset outside of a transaction, but no longer apply to a SQL UPDATE statement outside a transaction. It stands to reason that an optimization that improves the performance of row-by-row updates would be good and wouldn't actually help so much with a SQL batch update, which is already going to be pretty darned fast (relatively speaking).
Also note that the fact that it is possible to do it both ways is what enables DoCmd.RunSQL to make incomplete updates. That is, a SQL command that would fail with CurrentDB.Execute strSQL, dbFailOnError can run to completion if executed with DoCmd.RunSQL. If you turn off the warnings with DoCmd.SetWarnings, you don't get a report of an error, and you don't get the chance to roll back to the initial state (or to be informed of the errors and decide to commit anyway).
So, what I think is going on is that SQL executed through the Access UI is wrapped in a transaction by default (that's how you get a confirmation prompt), but if you turn off the prompts and there's an error, you get the incomplete updates applied. This has nothing to do with the DBEngine settings -- it's a matter of the way the Access UI executes SQL (and there's an option to turn it off/on).
This contrasts to updates in DAO, which were all wrapped in the implicit transactions starting with Jet 3.0, but starting with Jet 3.5, only sequential updates were wrapped in the implicit transactions -- batch SQL commands (INSERT/UPDATE/DELETE) are not.
At least, that's my reading.
So, in regard to the issue in your actual question, in setting up your OLEDB connection, you'd set the options for the Jet DBEngine for that connection according to what you were doing. It seems to me that the default Jet DBEngine settings are correct and shouldn't be altered -- you want to use implicit transactions for edits where you're walking through a recordset and updating one row at a time (outside of an explicit transaction). On the other hand, you can wrap the whole thing in a transaction and get the same result, so really, this only applies to cases where you're walking a recordset and updating and have not used an explicit transaction, and the default setting seems quite correct to me.
The other setting, UserCommitSync, seems to me to be something you'd definitely want to leave alone as well, as it seems to me to apply to the way Jet interacts with its temp database during an explicit transaction. Setting it to asynchronous would seem to me to be quite dangerous as you'd basically not know the state of the operation at the point that you committed the data.
You'd think that USERCOMMITSYNC=YES would be the option to commit synchronously. And that is the cause of the confusion.
I spent ages googling this topic because I found that the behavior I was getting with old VB6 applications was not the same as I get in .NET with OLE DB/Jet 4.
Now I really should back up what I'm going to say with a link to the actual page(s) I read but I can't find those pages now.
Anyway, I was browsing the MSDN website and found a page that described a 'by design' error in Jet 3 which transposed the functionality of USERCOMMITSYNC, meaning that a value of NO gets you a synchronous commit.
Therefore MS set the default to NO and we get synchronous commits by default, exactly as described above by David Fenton. A behavior we've all come to accept.
But the document then went on to explain that the behavior in OLE DB/Jet 4 has been changed. Basically, MS fixed their bug and now a setting of USERCOMMITSYNC=YES does what it says.
But did they change the default? I think not, because now my explicit transactions are NOT committing synchronously in .NET applications using OLE DB/Jet 4.
