How to highlight changed cells when updating a DBgrid? - delphi

Let's say I am showing stock prices, or sports scores, or movie attendance or something.
Periodically, I will refresh the grid by Close() and then Open() of a query linked to its associated datasource.
I know how to owner draw a cell with OnDrawCell() - what I can't figure out is how to know if the new value is the same as or different from the previous value for a given cell.
I suppose there are two use cases here, one where the number of rows is fixed and they remain in the same row order and one where rows can change (insert/delete or reorder).
For the former, I can take a snapshot before updating and compare after the update, but that might be a lot of data. I am not sure whether I want to restrict the operation to the currently visible rows; I think a user might want to scroll down and still be notified of any rows which changed during the last update.
For the latter, I am stumped, unless, of course, each row has a unique key.
How can I do this (efficiently)? A solution for TDbGrid would help everyone, a solution with TMS Software's TAdvDbGrid would be fine by me (as would a (preferably free) 3rd party component).

TDBGrid reads the data currently contained in its assigned dataset. It has no capacity to remember prior values, perform calculations, or anything else. If you want to track changes, you have to do it yourself. You can do it by multiple means (a prior value column, a history table, or whatever), but it can't be done by the grid itself. TDBGrid is for presenting data, not analyzing or storing it.
One suggestion would be to track it in the dataset using the BeforePost event, where you can store the field's OldValue into a LastValue column, and then use that in your TDBGrid.OnDrawColumnCell event to see whether the value has changed and alter the drawing/coloring as needed. Something like if LastValue <> CurrValue then... should work.
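A minimal sketch of that idea, assuming a hypothetical numeric Price field plus a LastValue column that you maintain yourself (both names are placeholders, not anything TDBGrid provides):

procedure TForm1.QueryBeforePost(DataSet: TDataSet);
begin
  // Remember the previous value before the edit is written.
  // VarIsNull lives in the Variants unit.
  if not VarIsNull(DataSet.FieldByName('Price').OldValue) then
    DataSet.FieldByName('LastValue').AsFloat :=
      DataSet.FieldByName('Price').OldValue;
end;

procedure TForm1.DBGrid1DrawColumnCell(Sender: TObject; const Rect: TRect;
  DataCol: Integer; Column: TColumn; State: TGridDrawState);
var
  DS: TDataSet;
begin
  DS := DBGrid1.DataSource.DataSet;
  // Color the cell background when the value differs from the remembered one.
  if SameText(Column.FieldName, 'Price') and
     (DS.FieldByName('Price').AsFloat <> DS.FieldByName('LastValue').AsFloat) then
    DBGrid1.Canvas.Brush.Color := clYellow;
  DBGrid1.DefaultDrawColumnCell(Rect, DataCol, Column, State);
end;

Because LastValue travels with the row rather than with the grid position, this also copes with the insert/delete/reorder case, provided each row keeps its identity across the refresh.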

Related

Vaadin grid user column re-ordering and saving per user

How do people tend to let users re-order the grid columns and save that ordering for later?
The only way I can think of to do it, at least in Vaadin 7, is:
Listen for column re-ordering via addColumnReorderListener(…)
When a re-order is triggered, if user-initiated, get the columns from getColumns() and save them to the DB with any identifying information
When pulling the Grid back up, read the column ordering from the DB and apply the same order with setColumnOrder(columns)
So is there a better way to do this? I just checked the Directory and could not find anything obvious to make this easier; I'm just looking for how others have addressed this user requirement. If Vaadin 14 already supports such actions a little more easily, that would be good to know as well, as it might give me some ideas on how to get that ability short term, before I can upgrade to Vaadin 14.
For a more customizable grid you can (in addition to what you've already done) add a button that opens a dialog that lists all possible column names, together with a checkbox.
Unchecking the checkbox removes the column, checking the checkbox adds the column.
Even more comfortable is a dialog that lists all available columns in a Grid with draggable rows and editable checkboxes, so that the user can show, hide and sort all columns in one place. Afterwards you reorder all columns by calling grid.setColumnOrder.
Just so people know how I solved this issue, based on the comments:
When loading data into the Grid, first check the database for a saved column order for this Grid/user combination. If such a column order is found, call setColumnOrder(userColumns).
Added 2 buttons at the top: one to save the column order, one to reset it.
The "Save" button is only enabled after at least one column has been moved.
The "Reset" button is only enabled if at least one column was moved, either because an order was loaded from the DB or because the user just moved a column.
On save, save to DB. On reset, clear from DB, and reset Grid to original column order.
We chose not to save the column order each time the order changed, directly in the addColumnReorderListener, because we realized users might sometimes move columns around temporarily and wouldn't really want that column order saved for the future. That said, the saving inside the addColumnReorderListener worked well.
We don't currently need to save the column sizes, as suggested by @Simon Martinelli, but we are keeping it as an idea for the future. I fully expect it would work.

Automatically updating Data Validation lists based on user input

I have a very large data set (about 16k rows). I have 10 higher-level blocks, and within each block I have 4 categories (10 rows each) which use Data Validation lists to show the items available in each category. The lists should update automatically based on user input. What I need your help with is that I want to use the same data set for each block, preferably with the least calculation- and size-intensive approach possible. I have put together a sample file that outlines the issue with examples.
Sample File
Thank you for your help in advance.
Okay, I've found something, but it can be quite time-consuming to do.
Select each range of cells. For instance, for the first one, select B3:B18, right-click on the selection, find "Name a Range..." and give it the name "_FIN_CNY". Repeat for all the other ranges, changing the name where necessary.
Select the first range of cells to get the data validation, and click on "Data validation", pick the option "Allow: List" (you already have it) and then in the source, put the formula:
=INDIRECT($G$4&"_CNY")
$G$4 is where the user will input. This changes as you change blocks.
_CNY is the category. Change it to _CNY2 for the second category.
Click "OK" and this should be it. Repeat for the other categories.
I have put an updated file on Dropbox where you can see I already did it for the _FIN data for categories CNY, CNY2 and INT, and did the one for _GER as well. You'll notice the INT category for _GER doesn't work; that's because the named range _GER_INT doesn't exist yet.

Using bookmarks with filtered query

I need help with the following problem. I have a DBGrid whose underlying query is filtered. I want to apply a new filter but stay at the same row number in the DBGrid. Here is my code:
with qrProperties do
begin
  ...
  MyPoint := GetBookmark;
  Filter := 'N<>' + IntToStr(ResultPropertyN);
  Filtered := True;
  GotoBookmark(MyPoint);
end;
When it is executed, an EDBEngineError is raised with the message "Could not find record". My explanation is that the bookmark functions do not take the filter into account, so GotoBookmark searches for a record that is not present in the DBGrid (due to the filter applied). Is there any way to use bookmarks with filters?
Here are a few more details. In my application, when I double-click on a row in the DBGrid, it disappears (due to the filter applied), but as a result of the filtering the cursor moves to the first row (if I do not use bookmarks). I want it to stay at the same row number, that is, to go to the record shown immediately after the one that has been removed.
Your assumption is correct; the record is no longer 'there'.
Wrap GotoBookmark in a try/except and decide what to do in the exception handler, e.g. go to the first record, as in the sketch below.
Alternatively you could go to the 'nearest' record you can find. That depends on what you consider 'nearest', and then you would not need bookmarks at all; you could use e.g. FindNearest.
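A minimal sketch of the try/except variant, reusing qrProperties and ResultPropertyN from the question; falling back to the first record is just one possible choice:

var
  MyPoint: TBookmark;
begin
  MyPoint := qrProperties.GetBookmark;
  try
    qrProperties.Filter := 'N<>' + IntToStr(ResultPropertyN);
    qrProperties.Filtered := True;
    try
      qrProperties.GotoBookmark(MyPoint);
    except
      // EDatabaseError (DB unit) is the ancestor of EDBEngineError,
      // so this catches the "Could not find record" case.
      on EDatabaseError do
        qrProperties.First;
    end;
  finally
    qrProperties.FreeBookmark(MyPoint); // don't leak the bookmark
  end;
end;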

using triggers to update Values

I'm trying to enhance the performance of a SQL Server 2000 job. Here's the scenario:
Table A has a maximum of 300,000 rows. If I update/delete the 100th row (based on insertion time), all the rows which were added after that row should update their values: row no. 101 should update its value based on row no. 100, and row no. 102 should update its value based on row no. 101's updated value. E.g.
Old table:

ID     Value
100    220
101    220/2 = 110
102    110/2 = 55
...

Row no. 100 is updated with a new value: 300.

New table:

ID     Value
100    300
101    300/2 = 150
102    150/2 = 75
...
The actual value calculation is more complex; the formula here is simplified.
Right now, a trigger is defined for update/delete statements. When a row is updated or deleted, the trigger adds the row's data to a log table. Also, after the update/delete a SQL job is created in code-behind, which fires a stored procedure that iterates through all the subsequent rows of table A and updates their values. The process takes ~10 days to complete for 300,000 rows.
When the SP fires, it updates the subsequent rows' values. I think this causes the trigger to run again for each row the SP updates and to add those rows to the log table too. Also, the task should be done DB-side, as requested by the customer.
To solve the problem:
Modify the stored procedure and call it directly from the trigger. The stored procedure then drops the trigger, updates the subsequent rows' values, and creates the trigger again.
But there will be multiple instances of the program running simultaneously; if another user modifies a row while the SP is being executed, the system will not fire the trigger and I'll be in trouble! Is there any workaround for this?
What's your opinion about this solution? Is there any better way to achieve this?
Thank you.
First, about the update process. As I understand it, your procedure simply calls itself when it comes to updating the next row. With 300K rows this is certainly not going to be very fast, even without the logging (though it would most probably take much fewer days to accomplish). But what is absolutely beyond me is how it is possible to update more than 32 rows that way without reaching the maximum nesting level. Maybe I've got the sequence of actions wrong.
Anyway, I would probably do that differently, with just one instruction:
UPDATE yourtable
SET @value = Value = CASE ID
                       WHEN @id THEN @value
                       ELSE @value / 2  /* basically, your formula */
                     END
WHERE ID >= @id
OPTION (MAXDOP 1);
The OPTION (MAXDOP 1) bit of the statement limits the degree of parallelism for the statement to 1, thus making sure the rows are updated sequentially and every value is based on the previous one, i.e. on the value from the row with the preceding ID value. Also, the ID column should be made a clustered index, which it typically is by default, when it's made the primary key.
The other functionality of the update procedure, i.e. dropping and recreating the trigger, should probably be replaced by disabling and re-enabling it:
ALTER TABLE yourtable DISABLE TRIGGER yourtabletrigger
/* the update part */
ALTER TABLE yourtable ENABLE TRIGGER yourtabletrigger
But then, you are saying the trigger shouldn't actually be dropped/disabled, because several users might update the table at the same time.
All right then, we are not touching the trigger.
Instead, I would suggest adding a special column to the table, one the users shouldn't be aware of, or at least shouldn't care much about, and should somehow be made sure never to touch. That column should only ever be updated by your 'cascading update' process. By checking whether that column is being updated, you would know whether you should call the update procedure and the logging.
So, in your trigger there could be something like this:
IF NOT UPDATE(SpecialColumn) BEGIN
  /* assuming that without SpecialColumn only one row can be updated */
  SELECT TOP 1 @id = ID, @value = Value FROM inserted;
  EXEC UpdateProc @id, @value;
  EXEC LogProc ...;
END
In UpdateProc:
UPDATE yourtable
SET @value = Value = @value / 2,
    SpecialColumn = SpecialColumn  /* basically, just anything, since it can
                                      only be updated by this procedure */
WHERE ID > @id
OPTION (MAXDOP 1);
You may have noticed that the UPDATE statement is slightly different this time. As I understand it, your trigger is FOR UPDATE (= AFTER UPDATE), which means that the @id row has already been updated by the user. So the procedure should skip it and start from the very next row, and the update expression can now be just the formula.
In conclusion I'd like to say that my test update involved 299,995 of my table's 300,000 rows and took approximately 3 seconds on my not so very fast system. No logging, of course, but I think that should give you the basic picture of how fast it can be.
There's a big theoretical problem here. It is always extremely suspicious when updating one row REQUIRES updating 299,900 other rows. It suggests a deep flaw in the data model. Not that it is never appropriate, just that it is required far less often than people think. When things like this are absolutely necessary, they are usually done as a batch operation.
The best you can hope for, in some miraculous situation, is to turn those 10 days into 10 minutes, but never into 10 seconds. I would suggest explaining thoroughly WHY this seems necessary, so that another approach can be explored.

Benefit of using DBComboBox over ComboBox?

So I'm messing around with a new project in Delphi 2009 and the default components that can be dropped onto a form for accessing data consist of a SQLConnection, DataSource and SQLQuery. If I add a simple select to the query component, say:
select name from customers
and then drop a DBComboBox on the form and link it up with the DataSource, I get a single record in the combo box. After using Google for half an hour to figure out what I was doing wrong, it looks like you have to manually add some code to your project which loops through the dataset and adds all the records to the drop-down box. Something like:
while not SQLQuery.Eof do
begin
  DBComboBox.Items.Add(SQLQuery.FieldByName('name').AsString);
  SQLQuery.Next;
end;
And that actually sort of works, but then you get a list in the drop-down from which you can't actually select anything. Regardless of the result, though, I'm wondering why you would even use a DBComboBox if you have to manually add the result of your query to it. It seems to me that if it doesn't automatically populate the combo box with the result of the query, we might as well be using a non-data-aware component like TComboBox.
I guess what I'm asking is: why does it work this way? Isn't the purpose of data-aware drag-and-drop controls to minimize the amount of code actually written and speed up development? Is there a method that I'm missing that is supposed to make this easier?
A TDBComboBox doesn't get its list of values from the database; it gets its current value from the database. Link it to a field in your dataset, and when you change the active record, the combo box's current value will change. Change the combo box's current value, and the corresponding field's value will change.
If you want to get the list of values from the database as well, then use a TDBLookupComboBox.
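For example, sketched here as runtime assignments with hypothetical component and field names (srcCustomers and srcOrders are TDataSource components; 'id', 'name' and 'customer_id' are made-up fields):

// List side: where the choices come from.
DBLookupComboBox1.ListSource := srcCustomers;  // points at the customers dataset
DBLookupComboBox1.ListField  := 'name';        // what the user sees in the list
DBLookupComboBox1.KeyField   := 'id';          // what actually gets stored
// Data side: the record being edited.
DBLookupComboBox1.DataSource := srcOrders;     // points at the dataset being edited
DBLookupComboBox1.DataField  := 'customer_id'; // field that receives the key value

The same properties can of course be set in the Object Inspector at design time.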
This is all covered in the help:
Using TDBListBox and TDBComboBox
Displaying and Editing Data in Lookup List and Combo Boxes
Defining a Lookup List Column
I think you want the TDBLookupComboBox because that allows you to lookup from a list of items where the list comes from a dataset.
In the TDBComboBox, the list is just a TStrings manually filled with data.
--jeroen
TDBComboBox is a data-aware version of the standard combo box component.
