Saving deleted records to another table - Delphi

I am deleting records from one table (based on a condition), like this:
procedure TForm3.AdvGlowButton1Click(Sender: TObject);
begin
  if MessageDlg('Are you sure???', mtConfirmation, [mbYes, mbNo], 0) = mrNo then
    Abort
  else
    case cxRadioGroup1.ItemIndex of
      0:
        begin
          Form1.ABSQuery1.Close;
          Form1.ABSQuery1.SQL.Text := 'delete from LOG where status="YES"';
          Form1.ABSQuery1.ExecSQL;
          Form1.ABSTable1.Refresh;
        end;
    end;
end;
However, I want to save these deleted records in another table that I have created for this purpose (LOG_ARCHIVE), which is identical to the LOG table. How do I save the deleted records there?

If you were using a database that supported it, you could use a BEFORE DELETE trigger. However, according to a search of the Absolute Database documentation, there is no support for CREATE TRIGGER, and a search for triggers on the same site returns nothing about them either.
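For illustration only, in a database that does support them (Firebird, say), such a trigger would look roughly like this. This is a hypothetical sketch, since Absolute Database cannot run it, and only the status column of LOG is known from the question:

/* BEFORE DELETE fires once per row about to be removed; OLD.* holds its values */
CREATE TRIGGER LOG_ARCHIVE_BD FOR LOG
ACTIVE BEFORE DELETE
AS
BEGIN
  INSERT INTO LOG_ARCHIVE (status /* , ...remaining columns... */)
  VALUES (OLD.status /* , ...remaining OLD values... */);
END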
The lack of trigger support probably just leaves you with performing an INSERT into the other table first, before doing the DELETE from your LOG table. According to the documentation again, a query can be used as the source of data for an INSERT (see the second example on the linked page). This means you can do something like this:
ABSQuery1.SQL.Text := 'insert into LOG_ARCHIVE'#13 +
  '(select * from LOG where status = ''YES'')';
ABSQuery1.ExecSQL;
ABSQuery1.Close;
{
No need to use SQL.Clear here. Setting the SQL.Text replaces
what was there before with new text.
}
ABSQuery1.SQL.Text :='delete from LOG where status=''YES''';
ABSQuery1.ExecSQL;
You really should wrap this entire operation in a transaction (Delphi example here), so that if something fails, both the INSERT and the DELETE can be undone. (For instance, if the INSERT works, putting the rows into LOG_ARCHIVE, but the DELETE then fails for some reason, you would otherwise have no way to remove the rows you inserted into the archive.) Start a transaction before the INSERT, roll it back if either the INSERT or the DELETE fails, and commit it if both succeed.
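As a sketch only - assuming the TABSDatabase component (named ABSDatabase1 here as a placeholder) exposes the usual StartTransaction/Commit/Rollback methods - the whole operation would look something like this:

// Hedged sketch: archive-then-delete as a single atomic unit.
ABSDatabase1.StartTransaction;
try
  ABSQuery1.SQL.Text := 'insert into LOG_ARCHIVE'#13 +
    '(select * from LOG where status = ''YES'')';
  ABSQuery1.ExecSQL;
  ABSQuery1.SQL.Text := 'delete from LOG where status=''YES''';
  ABSQuery1.ExecSQL;
  ABSDatabase1.Commit;   // both statements succeeded
except
  ABSDatabase1.Rollback; // undo the INSERT if the DELETE failed
  raise;
end;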

Related

Is there any way to run multiple procedures back-to-back in PL/SQL so that a table is created and then filled in as structured in my code?

I am working with Oracle SQL Developer and I am trying to get the code below to work, and I just can't figure it out. I have tried multiple different methods, including for loops, EXECUTE IMMEDIATE calls, scheduling, and recompiling.
BEGIN
ORDER_STATUS_1_DROP_TABLE; -- If the table exist, drop it
ORDER_STATUS_2_CREATE_TABLE; -- Create the table
GRANT_NEWANALYTICS; -- Grant users select access
ORDER_STATUS_3_SCRIPT; -- Run script to insert data into table
END;
What the code is trying to do is this:
Procedure 1: Drop the table if it exists, otherwise skip. I don't want to see any warning errors stating that no table exists when this procedure is run and there is no table. This procedure by itself works as intended.
create or replace PROCEDURE ORDER_STATUS_1_DROP_TABLE IS
  table_does_not_exist EXCEPTION;
  PRAGMA EXCEPTION_INIT(table_does_not_exist, -942);
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE <Table Name>';
EXCEPTION
  WHEN table_does_not_exist THEN
    dbms_output.put_line('table does not exist');
END ORDER_STATUS_1_DROP_TABLE;
Procedure 2: Once the table is dropped, this procedure recreates it with the correct structure. I don't want to see any "this table already exists" errors, which is partly why procedure 1 exists. This by itself works as intended.
create or replace PROCEDURE ORDER_STATUS_2_CREATE_TABLE IS
  v_sql LONG;
BEGIN
  v_sql := 'create table <Table Name>
            (<parameters>)';
  EXECUTE IMMEDIATE v_sql;
END ORDER_STATUS_2_CREATE_TABLE;
Procedure 3: This just gives users select access to the table created in the last procedure. This procedure works as it was intended.
create or replace PROCEDURE GRANT_NEWANALYTICS IS
BEGIN
  EXECUTE IMMEDIATE
    'GRANT SELECT ON <Table Name> TO <UserID>';
END;
Procedure 4: This is a complicated query: an INSERT ... SELECT * FROM (a table left-joined to a few other tables on various fields and conditions, etc.). After procedures 1-3 have been run, this procedure has no issue running, but only by itself.
create or replace PROCEDURE ORDER_STATUS_3_SCRIPT IS
BEGIN
  DELETE FROM <Table Name>;
  INSERT INTO <Table Name>
    SELECT * FROM (<Multiple Table Joins>);
END ORDER_STATUS_3_SCRIPT;
When I run the procedures like this:
BEGIN
ORDER_STATUS_1_DROP_TABLE; -- If the table exist, drop it
ORDER_STATUS_2_CREATE_TABLE; -- Create the table
GRANT_NEWANALYTICS; -- Grant users select access
ORDER_STATUS_3_SCRIPT; -- Run script to insert data into table
END;
I get the following error report:
Error report -
ORA-04068: existing state of packages has been discarded
ORA-04065: not executed, altered or dropped stored procedure "<user>.ORDER_STATUS_3_SCRIPT"
ORA-06508: PL/SQL: could not find program unit being called: "<user>.ORDER_STATUS_3_SCRIPT"
ORA-06512: at line 5
04068. 00000 - "existing state of packages%s%s%s has been discarded"
*Cause: One of errors 4060 - 4067 when attempt to execute a stored procedure.
*Action: Try again after proper re-initialization of any application's state.
Now, if I run these separately, it works. So if I first run this:
BEGIN
ORDER_STATUS_1_DROP_TABLE; -- If the table exist, drop it
ORDER_STATUS_2_CREATE_TABLE; -- Create the table
GRANT_NEWANALYTICS; -- Grant users select access
END;
<OUTPUT> PL/SQL procedure successfully completed.
And then this:
BEGIN
ORDER_STATUS_3_SCRIPT; -- Run script to insert data into table
END;
<OUTPUT> <Query runs>
I have no issues. I want to run this set of procedures in one sweep and could use some help getting there. Does anyone have any ideas?
If you want to run all these procedures as part of a single PL/SQL block, then every reference to your table would need to be via dynamic SQL. So ORDER_STATUS_3_SCRIPT would need to use dynamic SQL to build the insert statement(s) that populate the table, rather than using plain static SQL. That's obviously possible, but it does increase the complexity of the script, potentially substantially.
Having two PL/SQL blocks, which you've demonstrated works, seems much simpler.
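That said, if a single block is a hard requirement, here is a rough sketch of what ORDER_STATUS_3_SCRIPT would have to look like, keeping the question's placeholders:

create or replace PROCEDURE ORDER_STATUS_3_SCRIPT IS
BEGIN
  -- Dynamic SQL is parsed at run time, so this procedure no longer becomes
  -- invalid when the table is dropped and recreated earlier in the block.
  EXECUTE IMMEDIATE 'DELETE FROM <Table Name>';
  EXECUTE IMMEDIATE
    'INSERT INTO <Table Name> SELECT * FROM (<Multiple Table Joins>)';
END ORDER_STATUS_3_SCRIPT;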

Delphi DBGrid date format for Firebird timestamp field

I display the contents of a Firebird database in a TDBGrid. The database has a TIMESTAMP field that I would like to display in the date/time format 'YYYY/MM/DD HH:mm:ss'. (Right now it is displayed as 'YYMMDD HHmmss'.)
How can I achieve this?
I tried this:
procedure TDataModule1.IBQuery1AfterOpen(DataSet: TDataSet);
begin
  TDateTimeField(IBQuery1.FieldByName('timestamp_')).DisplayFormat :=
    'YYYY/MM/DD HH:mm:ss';
end;
But this causes side effects in other parts of the program, so it's not an option. For example, at the IBQuery1.Open statement I get a '...timestamp_ not found...' debugger message in the method I clear the database with (shown after the sketch below).
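One way to keep the DisplayFormat assignment but avoid that error would be to guard the handler so it only touches the field when the opened result set actually contains it. A minimal sketch - FindField returns nil instead of raising, unlike FieldByName:

procedure TDataModule1.IBQuery1AfterOpen(DataSet: TDataSet);
var
  F: TField;
begin
  // Only format the column when this particular result set contains it
  F := DataSet.FindField('timestamp_');
  if F is TDateTimeField then
    TDateTimeField(F).DisplayFormat := 'YYYY/MM/DD HH:mm:ss';
end;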
function TfrmLogger.db_events_clearall: integer;
begin
  result := -1;
  try
    with datamodule1.IBQuery1 do begin
      Close;
      with SQL do begin
        Clear;
        Add('DELETE FROM MEVENTS')
      end;
      if not Prepared then
        Prepare;
      Open; // Exception here
      Close;
      Result := 1;
    end;
  except
    on E: Exception do begin
      ShowMessage(E.ClassName);
      ShowMessage(E.Message);
      Datamodule1.IBQuery1.Close;
    end;
  end;
end;
I get the same exception message when trying to open the query for writing into the database.
EDIT:
I have modified the database-clearing function as follows:
function TfrmLogger.db_events_clearall: integer;
var
  IBQuery: TIBQuery;
  IBTransaction: TIBTransaction;
  DataSource: TDataSource;
begin
  result := -1;
  // Implicit local db objects creation
  IBQuery := TIBQuery.Create(nil);
  IBQuery.Database := datamodule1.IBdbCLEVENTS;
  DataSource := TDataSource.Create(nil);
  DataSource.DataSet := IBQuery;
  IBTransaction := TIBTransaction.Create(nil);
  IBTransaction.DefaultDatabase := datamodule1.IBdbCLEVENTS;
  IBQuery.Transaction := IBTransaction;
  try
    IBQuery.SQL.Text := 'DELETE FROM MSTEVENTS';
    IBQuery.ExecSQL;
    IBTransaction.Commit;
    result := 1;
  except
    on E: Exception do
    begin
      ShowMessage(E.ClassName + ^M^J + E.Message);
      IBTransaction.Rollback;
    end;
  end;
  FreeAndNil(IBQuery);
  FreeAndNil(DataSource);
  FreeAndNil(IBTransaction);
end;
Even after clearing the database I can still load the records into the DBGrid; it seems the database has not been updated. After a program restart I can see that all the records have been deleted.
The whole function TfrmLogger.db_events_clearall seems very dubious.
You do not provide SQL_DELETE_ROW, but judging by the name this does not seem to be a SELECT request returning a result set. So most probably it should NOT be run by .Open but instead by .Execute or .ExecSQL or something like that.
UPD: SQL_DELETE_ROW = 'DELETE FROM MEVENTS' was added, confirming my prior and further expectations. Almost. The constant name suggests you want to delete ONE ROW, while the query text says you delete ALL ROWS. Which is correct, I wonder?..
Additionally, since there is no result set, there is nothing to .Close after .Exec... - but you may check .RowsAffected, if there is such a property in IBX, to see how many rows were actually scheduled to be deleted.
Additionally, no, this function DOES NOT delete rows; it only schedules them to be deleted. When dealing with SQL you do have to invest time and effort into learning about TRANSACTIONS, otherwise you will soon drown in side effects.
In particular, here you have to COMMIT the deleting transaction. For that you either have to explicitly create, start, and bind a transaction to the IBQuery, or find out which transaction was implicitly used by IBQuery1 and .Commit it - and .Rollback it on exceptions.
Yes, boring, and all that. And you may hope for IBX to be smart enough to do commits for you once in a while. But without isolating data changes with transactions you will be bound to get hardly reproducible "side effects" coming from all kinds of race conditions.
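For illustration, committing the transaction the query is implicitly bound to can be as short as this; CommitRetaining (a standard method on IBX's TIBTransaction) keeps the transaction open for subsequent statements:

// Minimal sketch: commit the very transaction this query runs in.
Datamodule1.IBQuery1.ExecSQL;
Datamodule1.IBQuery1.Transaction.CommitRetaining; // or .Commit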
Example:
  FieldDefs.Clear; // frankly, I do not quite recall if IBX has those, but probably it does
  Fields.Clear;    // forget the customizations to the fields, and the fields as well
  Open;            // no exception here
  Close;
  Halt;            // << insert this line
  Result := 1;
Try this, and I bet your table would not get cleared, even though the query was "opened" and "closed" without error.
The whole with SQL do begin monster can be replaced with the one-liner SQL.Text := SQL_DELETE_ROW;. Learn what the TStrings class is in Delphi - it is used in very many places in Delphi's libraries, so knowing its services and features will save you a lot of time.
There is no point in Preparing a one-time query that you execute and forget. Preparation pays off for queries where you DO NOT CHANGE the SQL.Text but only change the PARAMETERS and then re-open the query with THE SAME TEXT but different values.
Okay, sometimes I do use (misuse?) explicit preparation to make sure the library fetches parameter datatypes from the server, but in your example there is neither: your code does not use parameters, and you do not perform many opens with the same never-changing SQL.Text. Thus it becomes noise, making the code longer to type and harder to read.
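For contrast, this is the kind of query where Prepare does pay off - a sketch, with SEVERITY being a made-up column used purely for illustration:

// One SQL text prepared once, re-opened with different parameter values.
IBQuery1.SQL.Text := 'SELECT * FROM MEVENTS WHERE SEVERITY = :SEV'; // SEVERITY is hypothetical
IBQuery1.Prepare; // parse once; parameter types come from the server
IBQuery1.ParamByName('SEV').AsInteger := 1;
IBQuery1.Open;
// ... consume rows ...
IBQuery1.Close;
IBQuery1.ParamByName('SEV').AsInteger := 2;
IBQuery1.Open; // same prepared statement, different parameter value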
Try ShowMessage(E.ClassName + ^M^J + E.Message) or just Application.ShowException(E) - there is no point in showing TWO stopping modal windows instead of one.
Datamodule1.IBQuery1.Close; - this is actually the place to roll back the transaction, rather than merely closing queries that were not open anyway.
Now, the very idea of pushing TWO (or more?) SQL requests through ONE Delphi query object is questionable per se. If you make customizations to a query, such as setting DisplayFormat or fields' event handlers, then that query is well worth being left persistently customized. You may even set DisplayFormat at design time - why not?
There is little point in jockeying on one single TIBQuery object - have as many as you need. As it stands, you have to pervasively and accurately reason about WHICH text is inside IBQuery1 in every function of your program.
That again creates the potential for future side effects. Imagine you have some place where you do function1; function2; and later you decide you need to swap them and do function2; function1;. Can you do it? What if function2 changes IBQuery1.SQL.Text and function1 depends on the prior text? What then?
So, basically, sort your queries. Queries that live across function calls deserve a dedicated query object and should not be reused for different SQL. And then there are "one time" queries that are only used inside one function and never outside it, like SQL_DELETE_ROW - those you may reuse, if done with care. But it would still be better to remake those functions so that their queries are local variables, visible to nothing but the function itself.
PS. It seems you've got stuck with the IBX library, so I suggest you take a look at this extension: http://www.loginovprojects.ru/download.php?getfilename=uploads/other/ibxfbutils.zip
Among other things, it provides generic insert/delete functions, which create and delete temporary query objects internally, so you don't have to think about them.
Transaction management is still yours to keep in mind and control.

Using FireDAC's OnUpdateRecord and optionally executing default statements

I have a query that returns rows with an outer join. This causes records to exist in the results that don't really exist in the table. When those rows are changed, FireDAC sees the change as an Update instead of an Insert. That behavior makes sense from FireDAC's side, because it has no way to tell the difference.
I am overriding the OnUpdateRecord event to catch the rows that are marked wrong and perform the insert myself. That part is working great. What I can't figure out is how to tell FireDAC to perform its normal processing on the other records. I thought I could set AAction to eaDefault and on return FireDAC would continue to process the row as normal. However, that does not seem to be the case: once OnUpdateRecord is in place, it looks like FireDAC never does any updates to the server.
Is there a way to tell FireDAC to update the current row - either by calling something in the OnUpdateRecord function, or via a different return value that I missed?
Otherwise, is there a different way to change these updates into inserts? I looked at changing the UpdateStatus, but that is read-only. I looked at TFDUpdateSQL, but I could not figure out a way to turn the update into an insert only some of the time.
I am using CachedUpdates, if that makes any difference.
Here is what I have for an OnUpdateRecord function:
procedure TMaintainUserAccountsData.QueryDrowssapRolesUpdateRecord(
  ASender: TDataSet; ARequest: TFDUpdateRequest; var AAction: TFDErrorAction;
  AOptions: TFDUpdateRowOptions);
begin
  if (ARequest = arUpdate) and VarIsNull(ASender.FieldByName('Username').OldValue) then
  begin
    ASender.Edit;
    ASender.FieldByName('RoleTypeID').Value := ASender.FieldByName('RealRoleTypeID').Value;
    ASender.Post;
    MGRDataAccess.ExecSQL(
      'INSERT INTO DrowssapRoles (Username, RoleTypeID, HasRole) VALUES (:Username, :RoleTypeID, :HasRole)',
      [ASender.FieldByName('Username').AsString,
       ASender.FieldByName('RoleTypeID').AsInteger,
       ASender.FieldByName('HasRole').AsBoolean]);
    AAction := eaApplied;
  end
  else
  begin
    // What do I do here to get the default FireDAC actions?
  end;
end;

Refresh Nested DataSet with poFetchDetailsOnDemand

Is there a way to refresh only the detail dataset without reloading the whole master dataset?
This is what I've tried so far:
DM.ClientDataSet2.Refresh;
DM.ClientDataSet2.RefreshRecord;
I have also tried:
DM.ClientDataSet1.Refresh;
But the method above refreshes the entire Master dataset, not just the current record.
Now, the following code doesn't seem to do anything:
DM.ClientDataSet1.RefreshRecord;
Is there a workaround or a proper way to do what I want? (maybe an interposer...)
Additional Info:
ClientDataSet1 = Master Dataset
ClientDataSet2 = Detail DataSet, which is the following:
object ClientDataSet2: TClientDataSet
Aggregates = <>
DataSetField = ClientDataSet1ADOQuery2
FetchOnDemand = False
.....
end
Provider properties:
object DataSetProvider1: TDataSetProvider
DataSet = ADOQuery1
Options = [poFetchDetailsOnDemand]
UpdateMode = upWhereKeyOnly
Left = 24
Top = 104
end
Googling finds numerous articles saying that it isn't possible at all with nested ClientDataSets without closing and re-opening the master CDS, which the OP doesn't want to do in this case. However ...
The short answer to the question is yes, in the reasonably simple case I've tested, and it's quite straightforward, if a bit long-winded; getting the necessary steps right took a while to figure out.
The code is below and includes comments explaining how it works and a few potential problems and how it avoids or works around them. I have only tested it with TAdoQueries feeding the CDSs' Provider.
When I started looking into all this, it soon became apparent that with the usual master + detail set-up, although Providers + CDSs are happy to refresh the master data from the server, they simply will not refresh the detail records once they've been read from the server for the first time since the cdsMaster was opened. This may be by design, of course.
I don't think I need to post a DFM to go with the code. I simply have AdoQueries set up in the usual master-detail way (with the detail query having the master's PK as a parameter), a DataSetProvider pointed at the master AdoQuery, a master CDS pointed at the provider, and a detail CDS pointed at the DataSetField of the cdsMaster. To experiment and see what's going on, there are DBGrids and DBNavigators for each of these datasets.
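For concreteness, the detail AdoQuery's SQL is just the usual parameterized kind; the names here are placeholders, not from the original code:

SELECT * FROM DETAIL_TABLE WHERE MASTER_ID = :MASTER_ID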
In brief, the way the code below works is to temporarily filter the master AdoQuery and the cdsMaster down to the current row and then force a refresh of their data and of the detail data for the current master row. Doing it this way, unlike any other way I tried, results in the detail rows nested in the cdsMaster's DataSetField getting refreshed.
Btw, the other blind alleys I tried included with and without poFetchDetailsOnDemand set to True, ditto cdsMaster.FetchDetailsOnDemand. Evidently "FetchDetailsOnDemand" doesn't mean ReFetchDetailsOnDemand!
I ran into a problem or two getting my "solution" working, the stickiest one being described in this SO question:
Refreshing a ClientDataSet nested in a DataSetField
I've verified that this works correctly with a Sql Server 2000(!) back-end, including picking up row data changes fired at the server from ISQL/W. I've also verified, using Sql Server's Profiler, that the network traffic in a refresh involves only the single master row and its details.
Delphi 7 + Win7 64-bit, btw.
procedure TForm1.cdsMasterRowRefresh(MasterPK : Integer);
begin
  // The following operations will cause the cursor on the cdsMaster to scroll,
  // so we need to check and set a flag to avoid re-entrancy
  if DoingRefresh then Exit;
  DoingRefresh := True;
  try
    // Filter the cdsMaster down to the single row which is to be refreshed.
    cdsMaster.Filter := MasterPKName + ' = ' + IntToStr(MasterPK);
    cdsMaster.Filtered := True;
    cdsMaster.Refresh;
    Inc(cdsMasterRefreshes); // just a counter to assist debugging
    // release the filter
    cdsMaster.Filtered := False;
    // clearing the filter may cause the cdsMaster cursor to move, so ...
    cdsMaster.Locate(MasterPKName, MasterPK, []);
  finally
    DoingRefresh := False;
  end;
end;

procedure TForm1.qMasterRowRefresh(MasterPK : Integer);
begin
  try
    // First, filter the AdoQuery master down to the cdsMaster current row
    qMaster.Filter := MasterPKName + ' = ' + IntToStr(MasterPK);
    qMaster.Filtered := True;
    // At this point Ado is happy to refresh only the current master row from the server
    qMaster.Refresh;
    // NOTE:
    // The reason for the following operations on the qDetail AdoQuery is that I noticed
    // during testing situations where this dataset would not be up-to-date at this point
    // in the refreshing operations, so we update it manually. The reason I do it manually
    // is that simply calling qDetail's Refresh provoked the Ado "Insufficient key column
    // information for updating or refreshing" error, despite its query not involving a join
    // and the underlying table having a PK.
    qDetail.Parameters.ParamByName(MasterPKName).Value := MasterPK;
    qDetail.Close;
    qDetail.Open;
    // With the master and detail rows now re-read from the server, we can update
    // the cdsMaster
    cdsMasterRowRefresh(MasterPK);
  finally
    // Now, we can clear the filter
    qMaster.Filtered := False;
    qMaster.Locate(MasterPKName, MasterPK, []);
    // Obviously, if qMaster were filtered in the first place, we'd need to reinstate that later on
  end;
end;

procedure TForm1.RefreshcdsMasterAndDetails;
var
  MasterPK : Integer;
begin
  if cdsMaster.ChangeCount > 0 then
    raise Exception.Create(Format('cdsMaster has %d change(s) pending.', [cdsMaster.ChangeCount]));
  MasterPK := cdsMaster.FieldByName(MasterPKName).AsInteger;
  cdsDetail.DisableControls;
  cdsMaster.DisableControls;
  qDetail.DisableControls;
  qMaster.DisableControls;
  try
    try
      qMasterRowRefresh(MasterPK);
    except
      // Add exception handling here according to taste;
      // I haven't encountered any during debugging/testing, so:
      raise;
    end;
  finally
    qMaster.EnableControls;
    qDetail.EnableControls;
    cdsMaster.EnableControls;
    cdsDetail.EnableControls;
  end;
end;

procedure TForm1.cdsMasterAfterScroll(DataSet: TDataSet);
begin
  RefreshcdsMasterAndDetails;
end;

procedure TForm1.cdsMasterAfterPost(DataSet: TDataSet);
// NOTE: The reason that this, in addition to cdsMasterAfterScroll, calls
// RefreshcdsMasterAndDetails is that RefreshcdsMasterAndDetails only refreshes
// the master + detail AdoQueries for the current cdsMaster row. Therefore, in
// the case where the current cdsMaster row or its detail(s) have been updated,
// this row needs the refresh treatment before we leave it.
begin
  cdsMaster.ApplyUpdates(-1);
  RefreshcdsMasterAndDetails;
end;

procedure TForm1.btnRefreshClick(Sender: TObject);
begin
  RefreshcdsMasterAndDetails;
end;

procedure TForm1.cdsDetailAfterPost(DataSet: TDataSet);
begin
  cdsMaster.ApplyUpdates(-1);
end;

Delphi ADO (mdb) update records

I'm trying to copy data from one master table and two child tables. When I select a record in the master table, I copy all of its fields to the other table (Table1 receives a copy of the record selected in ADOQuery).
procedure TForm1.copyButton7Click(Sender: TObject);
begin
  with ADoquery do
  begin
    SQL.Clear;
    SQL.Add('SELECT * from ADoquery');
    SQL.Add('Where numeracao LIKE ''%' + NInterv.Text + ''''); // locate the record selected in Table1 (NInterv.Text)
    Open;
  end;
  // start copying records
  while not ADoquery.Eof do
  begin
    Table1.Append; // how to append if necessary!!!!!!!!!!
    Table1.FieldByName('C').Value := ADoquery.FieldByName('C').Value;
    Table1.FieldByName('client').Value := ADoquery.FieldByName('client').Value;
    Table1.FieldByName('Cnpj_cpf').Value := ADoquery.FieldByName('Cnpj_cpf').Value;
    Table1.Post;
    ADoquery.Next; // advance the source query, not another table
  end;
end;
// How can I update TableChield and TableChield1 from the TableChield_1 and TableChield_2 fields at the same time?
Do the same for the child tables:
TableChield <= TableChield_1
TableChield1 <= TableChield_2
thanks
The fields will all be updated at the same time. The actual update is performed when you call Post (or not even then; it depends on whether batch updates are on or off).
But please reconsider your logic. It would be far more efficient to use an SQL INSERT statement to insert the data into the other table:
SQL.Clear;
SQL.Add('INSERT INTO TABLE_1 (C, client, Cnpj_cpf)');
SQL.Add('VALUES (:C, :client, :Cnpj_cpf)');
Then just fill in the values in a loop:
Parameters.ParamByName('C').Value := ADoquery.FieldByName('C').Value;
Parameters.ParamByName('client').Value := ADoquery.FieldByName('client').Value;
Parameters.ParamByName('Cnpj_cpf').Value := ADoquery.FieldByName('Cnpj_cpf').Value;
ExecSQL;
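Putting the pieces together, here is a hedged sketch of the whole copy loop; ADOQueryInsert is a hypothetical second TADOQuery holding the INSERT text:

// One parameterized INSERT per source row.
ADOQueryInsert.SQL.Text :=
  'INSERT INTO TABLE_1 (C, client, Cnpj_cpf) VALUES (:C, :client, :Cnpj_cpf)';
ADoquery.First;
while not ADoquery.Eof do
begin
  ADOQueryInsert.Parameters.ParamByName('C').Value := ADoquery.FieldByName('C').Value;
  ADOQueryInsert.Parameters.ParamByName('client').Value := ADoquery.FieldByName('client').Value;
  ADOQueryInsert.Parameters.ParamByName('Cnpj_cpf').Value := ADoquery.FieldByName('Cnpj_cpf').Value;
  ADOQueryInsert.ExecSQL;
  ADoquery.Next;
end;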
You can also use the update-then-insert pattern if the data may already be in the target table, like this:
if ExecSQL = 0 then
begin
  // no records were updated, so do an insert
end;
Also, the fact that you are copying data from table 1 to table 2 could be a sign of a design flaw, but I can't say that for sure without knowing more. In any case, data duplication is never good.
I believe the asker was thinking about data integrity, that is, ensuring that either all the tables are updated or none of them are...
The way I know to achieve this safely is to execute all these updates (or inserts, and so on) using SQL commands inside a transaction.
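With ADO, that means the connection-level transaction methods; a minimal sketch, with ADOConnection1 standing in for your TADOConnection:

// Make the whole copy atomic: either every table is updated or none.
ADOConnection1.BeginTrans;
try
  // ... INSERTs/UPDATEs for the master and both child tables go here ...
  ADOConnection1.CommitTrans;   // everything succeeded
except
  ADOConnection1.RollbackTrans; // undo all changes on any failure
  raise;
end;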
