Append/Insert detail records (NestedDataSet) in DataSetProvider OnUpdateData - delphi

I would like to insert detail records in an OnUpdateData event of the DataSetProvider and have the changes updated to the database along with the master record.
What is the best way to achieve this?
I have tried inserting records into the NestedDataSet, but they are not sent to the database along with the delta.
Using Delphi 7 or Delphi 2010 with MySQL and dbExpress:
Master: InvoicePayment (SQLDataSet, DataSetProvider, ClientDataSet)
Detail: InvoicePaymentLine (NestedDataSet)
The user inputs payment amounts, and the program loops through the Delta in an OnUpdateData event, processing the invoices to be paid and inserting them into the detail table (InvoicePaymentLine) for each master record.
I would prefer not to use the BeforeUpdateRecord event, but instead to process all records at once in a loop.
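For reference, here is a minimal sketch of the kind of OnUpdateData loop described above. The nested dataset field name (InvoicePaymentLine) and the detail columns (InvoiceId, Amount) are assumptions for illustration, and the invoice-selection logic is omitted; this only shows walking the delta and appending to the nested dataset, which - as noted - does not by itself guarantee the new rows are applied with the update.
procedure TForm1.DataSetProvider1UpdateData(Sender: TObject;
  DataSet: TCustomClientDataSet);
var
  Lines: TClientDataSet;   // cursor over the nested detail (uses DB, DBClient)
begin
  Lines := TClientDataSet.Create(nil);
  try
    // DataSet is the delta; visit every changed master (InvoicePayment) record
    DataSet.First;
    while not DataSet.Eof do
    begin
      // attach the cursor to the nested detail field of the current delta row
      Lines.DataSetField :=
        DataSet.FieldByName('InvoicePaymentLine') as TDataSetField;
      if not Lines.Active then
        Lines.Open;
      // append one detail line per invoice to be paid (selection logic omitted)
      Lines.Append;
      Lines.FieldByName('InvoiceId').AsInteger := 0;  // hypothetical column
      Lines.FieldByName('Amount').AsCurrency := 0;    // hypothetical column
      Lines.Post;
      DataSet.Next;
    end;
  finally
    Lines.Free;
  end;
end;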

Related

Envers and batch loading scripts

I originally loaded some data through a Liquibase script, and this has resulted in the Envers audit table missing the insert records. So while I have update records, I do not have the original insert records.
I've written a data script to re-insert this data with the create records, but it is a fairly heavyweight script as you have to consider four scenarios:
Data with existing insert records - no migration
Data with update/delete records but no insert records - need insert records
Data with no audit entries - this is a bulk upload without any subsequent changes
Reset existing records
The script's written, but is there an easier way to do this? Or did I just mess up by not creating the initial insert records?
I'm guessing you don't need the update/delete records - but I am using the audit table in a view.
Thanks
Do you need to keep the old update/delete audit records? If not - and it's OK to "start over" in auditing - you can simply remove all present audit history, then "move" all present state into audit records as inserts pointing to revision 1.

Amazon Kinesis Firehose - How to pause a stream?

I need the ability to pause a stream in AWS Kinesis Firehose.
I need it when I have to perform a schema change that requires re-creation of the table (for example, a change in the sort key).
Such changes usually require creating a new table, inserting the rows into the new table, then dropping the original table and renaming the new table to the original name. Doing this will result in the loss of rows that were streamed during the process.
I can think of two workarounds:
Renaming the original table at the beginning of the process, then forcing Firehose to fail and retry until the change is made and the table is renamed back. I am not sure whether the retry mechanism is bulletproof enough for this.
Defining a time interval of a few hours (as needed) between the loads, watching the COPY queries, and doing the same as #1 just after the COPY. This is just a bit safer than #1.
Neither workaround feels like a best practice, to put it mildly.
Is there a better solution?
How bulletproof are my solutions?
I encountered the same issue and did the following. Note: for this method to work, you must have timestamps (created_at in the answer below) on the events you are ingesting into Redshift from Kinesis.
Assume table1 is the table you already have, and Kinesis is dumping events into it from firehose1.
Create a new firehose, firehose2, that dumps events to a new table, table2, which has the same schema as table1.
Once you can confirm that events are landing in table2, and max(created_at) in table1 is less than min(created_at) in table2, delete firehose1. We can now be sure we will not lose any data because there is already an overlap between table1 and table2.
Create a table table3 that has the same schema as table1. Copy all events from table1 into table3.
Drop table1 and recreate it, this time with the sort key.
Recreate firehose1 to continue dumping events into table1.
Once events start landing in table1 again, confirm that min(created_at) in table1 is less than max(created_at) in table2. When this is true, delete firehose2.
Copy all events from table2 with created_at strictly greater than max(created_at) in table3 and strictly less than min(created_at) in table1 into table1. If your system allows events with the same timestamp, there may be duplicates introduced in this step.
Copy all events from table3 back into the new table1.
EDIT: You can avoid using table3 if you use ALTER TABLE to rename table1 to table1_old and then make the table2 described above the new table1.
Since an AWS Kinesis stream can store data (by default for 1 day and for up to a year, though retention beyond 24 hours incurs additional charges), I recommend deleting the delivery stream (Kinesis Firehose) and, once you are done with the upgrade/maintenance work, simply re-configuring a new delivery stream.

capture/store when fields in database records have been changed/edited

I have a multi-user database system which stores records with various fields, e.g. text, date-time, etc.
Does anyone know of a way to capture when fields of a record have been changed/modified by a user? A bit like an audit history which displays all the events that have happened against the record.
I connect to the database via TDataSource and TADQuery (FireDAC).
Thanks,
I have seen a solution where every important table has a number of triggers that fire when inserting, updating, deleting records. Those triggers save the old and the new state of the record to a corresponding "history" table.
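Purely as a hedged illustration of that idea - table, column, and trigger names here are invented, MySQL trigger syntax is assumed, and in practice you would usually create such triggers directly in the database rather than from the Delphi client - a single update-audit trigger issued once through a FireDAC connection could look like this:
// One-off DDL, e.g. run from an admin or migration routine.
// OLD and NEW are the before/after row images available inside the trigger body.
FDConnection1.ExecSQL(
  'CREATE TRIGGER trg_customer_after_update AFTER UPDATE ON customer ' +
  'FOR EACH ROW ' +
  'INSERT INTO customer_history (customer_id, old_name, new_name, changed_at) ' +
  'VALUES (OLD.id, OLD.name, NEW.name, NOW())');
Matching AFTER INSERT and AFTER DELETE triggers would complete the history for the other two cases.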

Editable detail records in Delphi DBGrid

I have a database (held in an Access .MDB file) that records staff members and any absence they have, e.g. holiday, sickness, or a training course, together with the start and end dates and the hours of productive time lost.
I then have a DBGrid bound to a "master" ADO query that finds all staff meeting the selected criteria of date range, department, and a search string for name, summing up the hours of productive time lost.
I have another dbgrid bound to a "detail" ADO table containing the absence records.
The desired effect is that the detail dbgrid should only contain those records from the Absence table that match the row selected in the master record (both "master" Staff and "detail" Absence tables contain a common EmployeeID field).
Though I can achieve this using ADO queries created on the fly, changing the query each time the user moves to a different master staff record, I was hoping to use the detail DBGrid as my main method of deleting, updating, and adding absence records, complete with in-grid lookups, so the user can select record types without having to remember the code for each type.
I would also like the changes in this detail grid to be reflected in the summaries in the master dbgrid.
I have achieved this using a detail ADOTable linked as master-detail to the Staff query, but I need to have Filtered set to True and control the OnFilterRecord event in code; as the database increases in size this is getting slower and slower.
Is there anything I can do to improve this performance, or will I be forced to have the detail dbgrid as purely read-only, and all Absence records entered through another form or panel?
More information on Making the Table a Detail of Another Dataset
ADOTable2.MasterSource := DataSource1;
ADOTable2.MasterFields := 'EmployeeID';
"I would also like the changes in this detail grid to be reflected in the summaries in the master dbgrid." After editing the detail table and posting any change, you may use the AfterPost event to recalculate the summaries.
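A minimal sketch of that suggestion, assuming the master summary query is called ADOQuery1 (the detail table name ADOTable2 matches the snippet above):
procedure TForm1.ADOTable2AfterPost(DataSet: TDataSet);
begin
  // Re-run the summing master query so its totals reflect the edit just posted.
  ADOQuery1.DisableControls;
  try
    ADOQuery1.Requery;  // the current master row position may need restoring afterwards
  finally
    ADOQuery1.EnableControls;
  end;
end;
Hooking the same logic to the detail table's AfterDelete event keeps the summaries in step when absence records are removed.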

Transaction lifecycle tracking in data warehouse

How do you store facts in which the data is related, and how do you configure the measure?
For example, I have a data warehouse that tracks the lifecycle of an order, which changes state: ordered, then shipped, then refunded. A state like 'refunded' is not always present.
In my model I am employing the transaction store model, so every time the order changes state there is another row in the fact table. So, for an order that was placed in April and refunded in May, there will be two rows: one with a state of 'ordered' and another with a state of 'refunded'.
If the user wanted to see all the orders placed/ordered in April, and wanted to see how many of those orders got refunded, how would they do that? Is this an MDX query that will be run at runtime? Is this a calculated measure I can store in the cube? How would I do that?
My thought process is that it should be a fact that the user can use in a PivotTable, but I'm not sure.
One way to model this would be to create a factless fact table to model events. Your ORDERS fact table models the transaction amount, customer information etc, while the factless fact table (perhaps called ORDER_STATUS) models any events that occur in relation to a specific order.
With this model, it's easy to count or add all transactions based on their order status by checking for existence of records in the factless fact table.
