FireDAC ApplyUpdates without clearing the Delta - delphi

Is it possible to call ApplyUpdates on a FireDAC query in cached-updates mode without clearing its Delta?
The reason is that I have 4 FDQuerys that I want to save together, or cancel together if any of them raises an error. I could use a single transaction to roll back all changes if a problem happens, but that would leave the Delta empty for every FDQuery whose ApplyUpdates was successful.
So I would like some kind of ApplyUpdates that doesn't clear the Delta. Only if ApplyUpdates succeeds for all the FDQuerys would I commit the transaction and call CommitUpdates on every FDQuery to clear their Deltas. But if one of them fails, the changes of every FDQuery would still remain in their Deltas, so I roll back the transaction and the user can fix the data and try to save again.
Update: As @Brian has commented, setting the property UpdateOptions.AutoCommitUpdates to False does the trick and doesn't clear the Delta.

Setting UpdateOptions.AutoCommitUpdates to False will leave the deltas alone when ApplyUpdates is called. The current help for AutoCommitUpdates is lacking, however, and describes the False setting incorrectly and confusingly. It should read more like:
AutoCommitUpdates controls the automatic committing of cached updates.
Specifies whether the automatic committing of updates is enabled or
disabled.
If AutoCommitUpdates is True, then all the successfully updated records applied by the call to ApplyUpdates are automatically marked as unchanged. You do not need to explicitly call CommitUpdates.
If AutoCommitUpdates is False, then your application must explicitly call CommitUpdates to mark all changed records as unchanged.
I put in a ticket to fix the help: RSP-31141
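With AutoCommitUpdates set to False, the all-or-nothing save the question asks for can be sketched like this (a minimal sketch; FDConnection1 and qry1..qry4 are hypothetical component names, and the dynamic-array literal needs a reasonably recent Delphi):

procedure TForm1.SaveAll;
// Assumes each query has CachedUpdates = True and
// UpdateOptions.AutoCommitUpdates = False, all sharing FDConnection1.
var
  Q: TFDQuery;
  Queries: TArray<TFDQuery>;
begin
  Queries := [qry1, qry2, qry3, qry4];
  FDConnection1.StartTransaction;
  try
    for Q in Queries do
      if Q.ApplyUpdates(0) > 0 then  // 0 = stop at the first error
        raise Exception.Create('ApplyUpdates failed for ' + Q.Name);
    FDConnection1.Commit;
    // Only now, with everything safely on the server, clear the deltas:
    for Q in Queries do
      Q.CommitUpdates;
  except
    FDConnection1.Rollback;  // deltas stay intact, so the user can fix and retry
    raise;
  end;
end;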

Related

Error at Delphi : Dataset not in edit or insert mode

Right now I'm struggling to solve a bug caused by the dataset mode in Delphi (using an ADODataSet).
Details below for the add-button mechanism:
IDMain := self.DBTextIDUser.Caption + '-' + self.DBEditWorkingDate.Text;
datamodule1.ADODataSetWorkingDetails.Append;
with datamodule1.ADODataSetWorkingDetails do
begin
  dbgridworkinghours.Fields[0].AsString  := IDMain;
  dbgridworkinghours.Fields[7].AsString  := self.DBTextIDUser.Caption;
  dbgridworkinghours.Fields[8].AsString  := self.DBTextName.Caption;
  dbgridworkinghours.Fields[9].AsString  := self.DBEditWorkingDate.Text;
  dbgridworkinghours.Fields[11].AsString := self.DBTextPeriod.Caption;
  dbgridworkinghours.Fields[10].AsString := self.DBTextToday.Caption;
end;
The save button then posts the record:
datamodule1.ADODataSetWorkingDetails.Post;
When I click the save button, an error appears:
The ADODataSet is not in edit or insert mode
I already used this mechanism at the other form and it works
Note: I already tried setting the ADODataSet mode to insert, but still faced the same error.
What @kobik said.
Your problem is most likely being caused by something you haven't told us in your q.
I think the important thing is for you to find out how to debug this sort of thing
yourself, so that even if you don't understand the cause, you can at least isolate
it and provide better information when you ask for help here. So I'm going to
outline how to do that.
In your Project Options, check the box "Use Debug DCUs".
Set up two event handlers, for your ADODataSetWorkingDetails's
AfterPost and AfterScroll events, and put some "do nothing" code in both of them
(to stop the IDE removing them). Put a debugger breakpoint on the first line
inside the AfterPost handler, but not (yet) the AfterScroll one.
Compile and run your program.
You should find that somewhere after you call Append but before you click your
Save button, the debugger stops on your AfterPost breakpoint.
When it does,
go to View | Debug windows | Call stack. This will show you a list of
program lines, the one at the top being the one closest to where the breakpoint
tripped. This will likely be deep inside the VCL's run-time code (which is why
I said to check "Use Debug DCUs"). Scroll down the list towards the bottom, and
eventually you should come to a line which is the cause of why Post was called.
If it isn't obvious to you why the AfterPost event was called, put a breakpoint
on your Append line and run the program again. When this breakpoint trips,
put another breakpoint inside your AfterScroll event, resume the program
by pressing F9 and see if the AfterScroll breakpoint is hit. If
it is, again view the Call stack and that should show you why it was called -
if it isn't obvious, then add the contents of the Call stack window to your q.
If the cause is obvious, then change your code to avoid it.
The reason I've gone on about the AfterScroll event is that what isn't obvious
is that when your code causes a dataset to scroll, any pending change (because
the dataset is in dsInsert or dsEdit state) will be posted,
and you will then get the error you've quoted if you try to call Post on the
dataset again. Calling Append initially sets a dataset into dsInsert state, btw.
See if you can at least identify what is causing your dataset to post before
it is supposed to, and let us know in a comment to your q or this answer.
Btw, I strongly recommend that you get out of the habit of using the with construct in your code. Although it may save you a bit of typing, in the long term it will likely make bugs far more likely to happen and far harder to find.
Update: TDataSet and its descendants have a State property which is of type TDataSetState (see
DB.Pas). Normally, for browsing data and navigating around the dataset, the
dataset is in dsBrowse state. If you call Edit or Append (or Insert), the dataset
is temporarily put in dsEdit or dsInsert state, respectively. Various routines in DB.Pas
check the dataset state before certain operations are performed and raise an exception if the
DataSet is not in the correct state for the operation to go ahead. It is very, very likely
that it is one of these checks that is giving you the exception.
My original hunch was that your error was occurring because something was happening which
was causing Post to be called, because if Post succeeds, it puts the dataset back into
dsBrowse state, so that by the time clicking your Save button calls Post, the dataset is already
in dsBrowse state. You can, of course, put a breakpoint in TDataSet.Post in DB.Pas
to check which state the dataset is actually in when it is called.
There are two other main possibilities for the cause of your exception, namely that
either TDataSet.Cancel or the general Abort method is being called. To investigate
these, put breakpoints on the first lines inside TDataSet.Cancel (in DB.Pas) and
Abort (in SysUtils.Pas). If either of these breakpoints trips between you calling
Append and Post, then you can use the Call Stack view to try and figure
out why execution has reached there.
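As a stop-gap while you hunt for the real cause, the Save handler can check State before calling Post, which at least turns the exception into something observable (a sketch; btnSaveClick is a hypothetical handler name):

procedure TForm1.btnSaveClick(Sender: TObject);
var
  DS: TDataSet;
begin
  DS := datamodule1.ADODataSetWorkingDetails;
  if DS.State in dsEditModes then  // dsEdit, dsInsert or dsSetKey (see DB.Pas)
    DS.Post
  else
    // The pending change was already posted (or cancelled) behind your back.
    ShowMessage('Dataset is not in edit/insert mode - nothing to post');
end;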

TClientDataSet.ApplyUpdates() doesn't apply updates

My Delphi project has a TAdoQuery accessing data on an MS Sql Server 2014 server, and a TClientDataSet that receives the AdoQuery data via a TDataSetProvider. This is created from a Project Template I set up.
Normally, I've found this set-up to work faultlessly, but with this particular project I'm having a problem: ApplyUpdates() fails silently and the Sql Server data is not updated. In my stripped down debugging project, the only code I have, apart from a button-click handler which calls it, is:
procedure TForm1.ApplyUpdates;
var
  Errors : Integer;
begin
  Errors := ClientDataSet1.ApplyUpdates(0);
  Caption := IntToStr(Errors) + '/' + IntToStr(ClientDataSet1.ChangeCount);
end;
After this executes, the form's caption should be 0/0, of course, but what it actually says is 0/1. So on the face of it, no errors occurred, but the CDS's ChangeCount hasn't been reset to zero as it should be. My q is: how can ApplyUpdates return no errors while the server dataset doesn't get updated?
Fwiw, I added the ChangeCount display as part of my effort to debug the problem. But I'm afraid I haven't been able to follow what's supposed to be going on in the details of the "conversation" between the DataSetProvider and its DataSet to apply the updates on the server.
I ran into this problem recently on a quick project I rustled up without the precaution of setting an OnReconcileError handler, as queried by @mjn.
Once I'd set up the OnReconcileError handler, it was obvious that the problem was that the provider's TSqlResolver wasn't able to identify the row to update. Iirc, the message on the ReconcileError pop-up form was words to the effect of "Unable to locate record. No key specified."
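For reference, wiring up that handler takes only a few lines once the stock reconcile-error dialog unit (RecError, from Delphi's own object repository) is added to the project (a sketch):

procedure TForm1.ClientDataSet1ReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  // Pops up the standard "Update Error" dialog, so failed resolutions
  // are no longer silent; HandleReconcileError comes from the RecError unit.
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;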
So, first thing I tried was to include this in my CDS's AfterOpen:
CDS1.Fields[0].ProviderFlags := [pfInKey];
(CDS1.Fields[0] is the PK field of the dataset)
Contrary to my expectations, that didn't fix it. After scratching my head for a while, I had a careful look on the server and discovered that the recently-recreated table I was using did not have a primary key index.
Once I'd created the primary key index on the server, the ApplyUpdates problem went away.
However, what puzzles me about this is that prompted by your q, I dropped the primary key index on my server table and the problem hasn't started occurring again (!). I'm guessing this is due to some kind of cacheing effect on my machine but I don't really want to reboot it right now to investigate.

Progress OpenEdge how to prevent someone from updating a record

I need an alternative way to prevent someone from accessing a particular piece of code.
I'll explain the scenario.
There are two programs.
In the first program an end-user creates a proforma invoice. When he/she then views the details on the invoice, the code displays the details with the main table's record in EXCLUSIVE-LOCK. This is to prevent other end-users from changing anything while the first user is busy viewing the details. So even when the proforma invoice is completed and can no longer be changed, the main table's record is still in EXCLUSIVE-LOCK, which is wrong, but it prevents other users from messing with it while the first user is still busy updating it. However, the people who work in this program leave it sitting in the detail view; they don't exit.
The problem is when the second program is used to dispatch the items on the proforma invoice. It uses the same main table's record. And therefore can't do anything because the first program still has it in EXCLUSIVE-LOCK.
My question is...
How can I prevent users changing data in the first program as if the main table's record was in EXCLUSIVE-LOCK, but without actually having it in exclusive-lock? Over multiple sessions...
This might be better a comment, but I don't have enough reputation points to make comments. Sorry.
Some notes:
Optimistic locking -- if it is viable for your situation -- is almost certainly the best solution.
If you are going to add an isLocked field to the table, you will probably want several other fields:
Date/Time the record was locked -- or else an expiration timestamp
LockHolder -- so you know whether or not you've got it.
The expiration can be automatic (as with a cron sweep), or can be ignored unless someone else wants the record. The program which sets the lock must also be smart enough to check to see if it still holds it. It gets complicated.
There are times when it is not convenient to make schema changes to a table, or there are too many tables that need changes. In those cases, you can add these fields to a separate LockIt table. One table can handle these locks for all your other tables.
Aside:
We also use our LockIt table for another purpose: to make sure that only one copy of a given program can run at a time. (Usually this is for cron jobs or a batch daemon.) The program exclusive-locks a particular record in the LockIt table (but DOES NOT start a transaction!), and it holds that lock as long as the program is running.
Most likely, your transaction scope is wrong. I'll go ahead and assume you are trying to dispatch with the invoicing program still open. Then you can't, because the record is still locked. Chances are your whole program is a transaction, and the record will stay locked for as long as the screen is running. Try to revisit your updates: enclose your real update operations in a DO TRANSACTION block, put some MESSAGE TRANSACTION statements in different places, and see what the results are. This will help you find the points at which Progress is being led to "believe" the record still needs to be locked.
I assume you're not using appserver since this behaviour most likely wouldn't be a problem in that case.
One solution could be to change to an "optimistic locking" approach. This means that you start out with NO-LOCK and, once you need to change the record, you upgrade the lock to EXCLUSIVE-LOCK. This approach will work, but you will need to make sure that the record still exists and hasn't been changed by some other user.
Depending on how often your invoices actually change this might (or might not) be a solution. If a mishap happens "once in a while" this might be a viable solution. If it happens often (every day or so) you need to do something else.
Basic pseudo code for an optimistic approach:
FIND FIRST record WHERE somethingsomething NO-LOCK.

/* Here goes code for displaying the record */
/* .... */

/* Here's the updates */
IF userWantsToSave THEN DO:
    FIND CURRENT record EXCLUSIVE-LOCK NO-ERROR NO-WAIT.
    IF AVAILABLE record THEN DO:
        IF CURRENT-CHANGED record THEN DO:
            MESSAGE "Changed!" VIEW-AS ALERT-BOX ERROR.
            /* Your code goes here */
        END.
        ELSE DO:
            /* Your code for updating goes here */
            MESSAGE "Success!" VIEW-AS ALERT-BOX INFORMATION.
        END.
    END.
    ELSE IF NOT AVAILABLE record THEN DO:
        IF LOCKED record THEN DO:
            MESSAGE "Locked!" VIEW-AS ALERT-BOX ERROR.
            /* Your code goes here */
        END.
        ELSE DO:
            MESSAGE "Deleted!" VIEW-AS ALERT-BOX ERROR.
            /* Your code goes here */
        END.
    END.
END.
Here's an example from the knowledgebase that goes more into depth with this.

TDBGrid doesn't update when multiple users are editing it

I am developing an application which has a simple database. All of the functions are going well, but when a user edits the database from the program, other users cannot see the change immediately. The other user needs to close the program and reopen it for the data to appear and for their DBGrid to be updated with the changes made from the other computers. I am using Delphi 7 for this and ZeosLib to access my Firebird database. I tried using the refresh button on the DBNavigator but it doesn't work.
The components I used to connect to the database are:
ZConnection
ZQuery
DataSource
DBGrid
DBNavigator
This is the code for my ZConnection and ZQuery.
object ZConnection1: TZConnection
  ControlsCodePage = cGET_ACP
  UTF8StringsAsWideField = False
  Connected = True
  Port = 3051
  Database = '192.168.254.254:test'
  User = 'test'
  Password = 'test'
  Protocol = 'firebird-2.5'
  Left = 96
  Top = 8
end
object ZQuery1: TZQuery
  Connection = ZConnection1
  Active = True
  SQL.Strings = (
    'select * from "test"')
  Params = <>
  Left = 128
  Top = 8
  object ZQuery1ID: TStringField
    FieldName = 'ID'
    Required = True
    Size = 8
  end
end
Sounds like you're running afoul of ACID. This is a basic guarantee of SQL-style databases, that all database updates will be Atomic, Consistent, Isolated, and Durable, and is accomplished through transactions.
Specifically, you're having trouble with the Consistency and the Isolation, which ensure that an external viewer never sees an update before it's finished, even if that update contains more than one change. (The classic example is a bank transfer, which requires subtracting money from one account and adding it to another. If you only see one of these two actions but not the other one, you have bad data.)
You can think of a transaction as an independent view of the state of the database. Every database connection has its own transaction, and any changes it makes are invisible to anyone else (Isolated) until they Commit (finalize) the transaction. Depending on the transactions' isolation settings, they may remain invisible to other users even after that, if they have an ongoing transaction, until they commit their transaction and begin a new one. It sounds like your code isn't taking this into account.
If you need updates to become visible immediately, you'll want to ensure that the transaction's isolation mode is READ COMMITTED, and set up database events to send notifications to connected clients when various things get updated, so the clients can perform refreshes of their data. You'll also want to ensure that the user updates result in a Commit action right away, so that the isolated data will become available.
Since I don't use ZeosLib, I can't explain all the details of how you'll need to set this all up, but this is enough to get you on the right track.
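For what it's worth, in ZeosLib the isolation level is a property of the connection component; something along these lines should be the starting point (a sketch; verify the property and enum names against your ZeosLib version):

// READ COMMITTED lets this connection see other users' committed changes
// on each new transaction; AutoCommit finalizes each posted change right away.
ZConnection1.TransactIsolationLevel := tiReadCommitted;
ZConnection1.AutoCommit := True;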
I suggest that you add a timer to the form which displays the grid. Set the timer so that it fires its OnTimer event once a minute (or longer). In this event, close the query then reopen it. This way, everyone always gets current information (albeit a minute late).
with qWhatever do  // this is the query which is connected to the grid
try
  DisableControls;
  Close;
  Open;
finally
  EnableControls;
end;
For a multi-user application, where clients need to receive notifications, one option is to use Firebird events to send a 'broadcast' message for every data change (SQL INSERT, UPDATE or DELETE).
Clients can 'register' (listen) for a specific message type, and whenever the Firebird server sends a message with this type, they will receive it, and run client application code, which in your case would refresh the user interface (grid).
While this can be a sufficient solution in many simple use cases, there are also some restrictions. I recently blogged about this topic here:
Firebird Database Events and Message-oriented Middleware
(I am author of middleware libraries for Delphi and Free Pascal)
I solved this problem by adding this before the query:
IBDatabase1.Close;
IBDatabase1.Open;

TClientDataSet and processing records with StatusFilter

I'm using a TClientDataSet as a local dataset, without the provider concept. After working with it, a method is called that should generate the corresponding SQL statements, using the StatusFilter to resolve the changes (generate SQL, basically).
This looked easy initially after reading the documentation (set StatusFilter to [usInserted] and process all the insert SQL, set StatusFilter to [usModified] and process all the updates, the same with deletes), but after a few tests it now looks far from trivial. For example:
If I add a record, then edit it: setting the StatusFilter to [usInserted] displays it, but with the original data.
If I add a record, then edit it, then delete it: the record appears with StatusFilter set to [usInserted] and also with [usModified].
And other similar situations..
1) I know that if I first process all inserts, then all updates, then all deletes, the database will end up in the correct state, but this approach looks far from right (it generates useless SQL statements).
2) I've tried to access the PRecInfo(ClientDataSet.ActiveBuffer + ClientDataSet.RecordSize).Attribute information (dsRecNew, dsRecOrg, etc.) but still haven't managed to resolve the logic.
3) I can program the logic to resolve it, for example before processing an insert, set StatusFilter to [usDeleted] and locate the record by primary key to see if it was deleted afterwards; the same with edits: before inserting, check whether the record was updated afterwards, so the insert SQL uses the updated version, and so on. But it should be easier than this.
Has anyone solved this in an elegant and straightforward way? Am I missing something? Thanks
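For concreteness, the per-status pass described in (1) looks roughly like this (a sketch; GenerateDeleteSQL etc. are hypothetical helpers, and the loop inherits the ordering problems discussed above):

procedure GenerateDeltaSQL(CDS: TClientDataSet);
begin
  // One pass per change kind; StatusFilter restricts which rows are visible.
  CDS.StatusFilter := [usDeleted];
  CDS.First;
  while not CDS.Eof do
  begin
    GenerateDeleteSQL(CDS);  // hypothetical helper
    CDS.Next;
  end;

  CDS.StatusFilter := [usInserted];
  CDS.First;
  while not CDS.Eof do
  begin
    GenerateInsertSQL(CDS);  // hypothetical helper
    CDS.Next;
  end;

  CDS.StatusFilter := [usModified];
  CDS.First;
  while not CDS.Eof do
  begin
    GenerateUpdateSQL(CDS);  // hypothetical helper
    CDS.Next;
  end;

  CDS.StatusFilter := [];  // back to the normal view
end;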