Sometimes cxGrid column loses RepositoryItem property - Delphi

I am using Delphi XE2 (fully updated) and ExpressQuantumGrid Suite 13.2.2. I have many columns on the grid and I've set a RepositoryItem for some of them. The EditRepository component is on another form. Sometimes those columns' RepositoryItem property gets cleared, seemingly at random. I think something is triggering this, but I couldn't find out what it is or how it happens.
Thanks for your help.

This problem of component values becoming "lost" at design time is a known phenomenon, even with EMBA's own components. Usually, it manifests itself when forms are first opened in the IDE.
In my experience, ymmv, it nearly always happens with a property of some component of form A which references a component on form B, and it seems to happen more frequently if form A is opened in the IDE before form B.
Anyway, there are things you can do to try and identify the problem and at least one work-around you can use until you do. But, before you start, the very first thing to do, if you haven't already, is to ask Devex whether they know about the problem. No disrespect to the readership here but they are more likely to know, and it may turn out that you've missed a maintenance update that fixed it.
When I've had it happen with components I've written myself, usually it has been caused by some error in my coding of the component's initialization and/or property setters. In my own components' cases, I've always been lucky in that although at first the behaviour seems random, in fact there has turned out to be a specific sequence of actions in the IDE that triggers it. If you can identify a reproducible sequence of actions, you're 90% of the way to getting the problem fixed.
The best place to start is to make a reference back-up of your code in its pre-problem state. Then try out various sequences of actions in the IDE, rolling back to your reference in between, until you find one that provokes the problem. If this sounds tedious, it is, but you may get lucky and spot a pattern early on. If you don't, then keep reminding yourself that the problem only seems random because you haven't spotted the pattern yet.
However, I have the impression (though no proof) that another misbehaving component can disturb the setting of the properties of the component which is losing the value. So, one thing to look at is what other components are on the same form as your affected one. Not all have the same pedigree as the Quantum Grid and its siblings from Devex.
Things I've found effective to isolate the problem with components I've written myself are:
Removing all the other components from the form.
Seeing if I can find reproducible sequences of actions (e.g. what order forms are opened in) that trigger the problem.
Editing the DFM so that the affected component appears last in it. Ditto, first.
Running the IDE in another instance of itself. The main initial reason to do this is to see if you, or rather the debugger, can unmask a normally silent exception occurring in some design-time component code that may be involved in the loss of the property value.
Devex's Quantum Grid is widely used (I use it myself), has a long lineage, and their code is usually top quality. Although I don't imagine it's perfect, I would start by assuming that the problem is caused by something else.
As you may have noticed, one of the most troublesome things about this problem is that if the component is on a rarely-used form, often the first you hear of it is when a user reports it.
Anyway, with all that said, if you can come up with a reproducible test case involving only Devex components and the standard ones, that can be submitted to them for investigation, I'm sure it won't take them long to find and fix the problem. And I'm sure they will fix it if it's in their own code (I wish the same were true of EMBA themselves).
However, without a reproducible test case, I think the best you can hope to do is to add explicit code to your form's creation to set the component value at run-time, e.g. when the form is first created. With my own problem components, once or twice I've found that careful tracing into the code I've added to do this has led me to the cause of the problem.
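As a minimal sketch of that run-time work-around (the form, view, column and repository-item names here are invented - substitute your own, and note the repository form must already exist when this runs):

    procedure TGridForm.FormCreate(Sender: TObject);
    begin
      // Restore the cross-form reference in case the DFM lost it.
      // EditRepositoryFormUnit must be in this unit's uses clause, and
      // EditRepositoryForm must be created before this form.
      cxGridDBTableView1Amount.RepositoryItem := EditRepositoryForm.eriAmountSpin;
    end;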

Related

Replacing non-visual components with code

Is "Replacing non-visual components with code" a proven optimization technique in Delphi 7. Mainly with respect to Database Access.
The Web site you cite talks about replacing a dialog-box component with code that displays the dialog box without using any component at all: instead of dropping the component on the form at design time, you write a couple of lines of code to set up and display the dialog whenever you need one, and skip the component altogether. It's not really an optimization in speed or size, though. It's not a speed optimization, since your code does exactly what the component would have done anyway, and it's not a size optimization, because the space any one component occupies in a program is negligible.
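For instance, instead of dropping a TOpenDialog on the form at design time, you can write something like this (a sketch with made-up control names; Dialogs must be in the uses clause):

    procedure TMainForm.btnOpenClick(Sender: TObject);
    var
      Dlg: TOpenDialog;
    begin
      Dlg := TOpenDialog.Create(nil);
      try
        Dlg.Filter := 'Text files (*.txt)|*.txt';
        if Dlg.Execute then
          Memo1.Lines.LoadFromFile(Dlg.FileName);
      finally
        Dlg.Free;
      end;
    end;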
Database components aren't so easily replaceable as dialog-box components. Nearly everything in Delphi is designed to use descendants of the standard database components. If you don't use the components, then you won't be using any of Delphi's database capabilities at all. You can use the database libraries' native APIs if you wish, but I think that would be foolish if your goal is really optimization and you haven't identified the components as the source of your program's non-optimal behavior. Consider how much time and effort it would take you to rewrite your program without the database components.
I don't see how a form-based dataset/query/table/etc. would be faster or slower than one created in code. However, I like to put them in code as it's easier to maintain. I've seen screens with SQL embedded in a component, and then overridden in the code. Then I have to stop and investigate to determine which SQL is actually in effect. Sometimes the SQL in the form is good, sometimes it's used for a while and then trumped by the code, sometimes it's never active and the SQL is trumped in the FormCreate. So I have to determine whether this is by design, or just sloppy leftovers. Also, it's easy to miss SQL changes in code reviews if they're in the .DFM and not the .PAS - I don't always look at the .DFM because I'm not interested in whether a label caption changed or a button moved.
So while it's nice for prototyping, when it comes to production code, you're better off having all of your database logic (SQL, table and field definitions) in the .pas file.
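As an illustration, this is roughly what it looks like with the dataset built entirely in the .pas file - I'm assuming ADO and made-up names here; the same idea applies to whatever data-access components you use:

    procedure TCustomerForm.FormCreate(Sender: TObject);
    begin
      // FCustomers: TADOQuery is a field declared on the form, not dropped in the
      // designer, so the SQL lives in the .pas file where code reviews will see it.
      FCustomers := TADOQuery.Create(Self);
      FCustomers.Connection := dmMain.ADOConnection1;
      FCustomers.SQL.Text :=
        'SELECT ID, NAME, CITY FROM CUSTOMER WHERE CITY = :CITY';
      FCustomers.Parameters.ParamByName('CITY').Value := 'London';
      FCustomers.Open;
      dsCustomers.DataSet := FCustomers;  // hook up the existing TDataSource/grid
    end;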
Update: I have finally given CnPack a try. Among the dozens of goodies, is a brilliant tool called "convert selected components to code". Form Design Wizard | More... | Convert Selected Components To Code. It does it all for you.
This is not a matter of being a component or not being a component. When it comes to database access, the BDE is extremely slow, so changing it for something else is a good move.
By the way - optimization is not about 'proven techniques' - it's about identifying a problem and solving it. If the problem happens to be slow db access then this is what you have to change.
Generally no. There is no additional overhead in using a non-visual component. It is created very quickly and works at runtime exactly at the same speed as one "created in code".

How to hunt a Heisenbug

Recently, we received a bug report from one of our users: something on the screen was displayed incorrectly in our software. Somehow, we could not reproduce this in our development environment (Delphi 2007).
After some further study, it appears that this bug only manifests itself when "Code optimization" is turned on.
Are there any people here with experience in hunting down such a Heisenbug? Any specific constructs or coding bugs that commonly cause such an issue in Delphi software? Any places you would start looking?
I'll also just start debugging the whole thing in the usual way, but any tips specific to Optimization-related bugs (*) would be more than welcome!
(*) Note: I don't mean to say that the bug is caused by the optimizer; I think it's much more likely some wonky construct in the code is somehow pushed "over the edge" by the optimizer.
Update
It seems the bug boils down to a record being fully initialized with zeros when there's no code optimization, and the same record containing some random data when there is optimization. In this case, the random data seems to cause an enum type to contain invalid data (to my great surprise!).
Solution
The solution turned out to involve an uninitialized local record variable somewhere deep in the code. Apparently, without optimization the record was reset (heap?), and with optimization turned on, the record was filled with the usual garbage. Thanks to you all for your contributions - I learned a lot along the way!
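In code terms, the culprit looks something like this (names invented for illustration):

    type
      TDisplayMode = (dmNormal, dmZoomed);
      TDisplaySettings = record
        Mode: TDisplayMode;
        ItemCount: Integer;
      end;

    procedure BuildDisplay;
    var
      Settings: TDisplaySettings;  // local record: lives on the stack, content undefined
    begin
      // With optimization off, the stack slot happened to contain zeros, so
      // Settings.Mode looked like dmNormal; with optimization on it held garbage,
      // i.e. a value outside the enum's range. Explicit initialization fixes it:
      FillChar(Settings, SizeOf(Settings), 0);
      Settings.ItemCount := 10;
      // ... use Settings ...
    end;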
Typically bugs of this form are caused by invalid memory access (reading uninitialised data, reading off the end of a buffer...) or thread race conditions.
The former will be affected by optimisations causing data layout to be rearranged in memory, and/or possibly by debug code that initialises newly allocated memory to some value; causing the incorrect code to "accidentally work".
The latter will be affected due to timings changing between optimisation levels. The former is generally much more likely.
If you have some automated way of making freshly allocated memory be filled with some constant value before it is passed to the program, and this makes the crash go away or become reproducible in the debug build, that'll provide a good point to start chasing things.
Could very well be a memory vs register issue: your program running fine while relying on memory persistence after a free.
I would recommend running your application with FastMM4 in full debug mode to be sure of your memory management.
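Setting that up is just a matter of putting FastMM4 first in the project's uses clause and defining FullDebugMode (in FastMM4Options.inc or as a project conditional), with FastMM_FullDebugMode.dll next to the executable - roughly (project and unit names are placeholders):

    program MyApp;  // the .dpr file

    uses
      FastMM4,      // must be the very first unit so it installs before any allocations
      Forms,
      MainFormUnit in 'MainFormUnit.pas' {MainForm};

    {$R *.res}

    begin
      Application.Initialize;
      Application.CreateForm(TMainForm, MainForm);
      Application.Run;
    end.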
Another (not free) tool which can be very useful in a case like this is EurekaLog.
Another thing that I've seen: a crash with the FPU registers being botched when calling some outside code (DLL, COM...) while with the debugger everything was OK.
A record that contains different data according to different compiler settings tells me one thing: That the record is not being explicitly initialised.
You may find that the setting of the compiler optimization flag is only one factor that might affect the content of that record - with any uninitialised data structures the one thing that you can rely on is that you can't rely on the initial content of the structure.
In simple terms:
class member data is initialised (to zeros) for new instances of the class, and unit-level variables are likewise zero-initialised
local variables (in functions and procedures) are NOT initialised, except in a few specific cases: interface references, dynamic arrays and strings are initialised, and I think (but would need to check) so are any fields of those types inside a record, while the record's other fields are not
The question as stated is now a little misleading because it seems you found your Heisenbug easily enough. Now the issue is how to deal with it, and the answer is simply to explicitly initialise your record so that you aren't reliant on whatever behaviour or side-effect of the compiler is sometimes taking care of that for you and sometimes not.
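A small illustration of those rules, using made-up types:

    type
      TMixed = record
        Name: string;    // managed field: initialised to '' even in a local variable
        Count: Integer;  // plain field: stack garbage in a local variable
      end;

      TOwner = class
      private
        FMixed: TMixed;  // instance fields are zero-filled when the object is constructed
      end;

    procedure Demo;
    var
      M: TMixed;
    begin
      // M.Name is '' here, but M.Count is undefined until you assign it.
      M.Count := 0;
      M.Name := 'example';
      Writeln(M.Name, ' = ', M.Count);
    end;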
Especially in purely native languages, like Delphi, you should be more than careful not to abuse the freedom to be able to cast anything to anything.
IOW: one thing I have seen is that someone copies the definition of a class (e.g. from the implementation section of the RTL or VCL) into his own code and then casts instances of the original class to his copy.
Now, after upgrading the library the original class came from, you might experience all kinds of weird stuff, like jumping into the wrong methods, or buffer overflows.
There's also the habit of using signed integers as pointers and vice versa (instead of Cardinal).
This works perfectly fine as long as your process has only 2GB of address space. But boot with the /3GB switch and you will see a lot of apps start acting crazy. Those made the assumption "pointer = signed integer" at least somewhere.
Does your customer use 64-bit Windows? Chances are they have a larger address space for 32-bit apps. Pretty tough to debug without such a test system available.
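A tiny sketch of how that assumption breaks:

    program PointerDemo;
    {$APPTYPE CONSOLE}
    var
      P: Pointer;
      I: Integer;   // signed: addresses above $7FFFFFFF come out negative
      C: Cardinal;  // unsigned, covers the full 32-bit address range
    begin
      // $80000000 is a perfectly valid address in a /3GB or large-address-aware 32-bit process.
      P := Pointer($80000000);
      I := Integer(P);   // -2147483648: any "if I > 0" style check now misbehaves
      C := Cardinal(P);  // 2147483648: round-trips correctly
      Writeln(I, ' vs ', C);
    end.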
Then, there's race conditions.
Like having two threads where one is very, very slow, so that you instinctively assume it will always finish last, and so there's no code handling the scenario where "Captain Slow" finishes first.
Changes in the underlying technologies can make these assumptions very wrong, very fast indeed.
Take a look at the upcoming breed of Flash-based super-mega-fast server storage.
Systems that can read and write Gigabytes per second. Applications that assume the IO stuff to be significantly slower than some calculations on in-memory values will easily fail on this kind of fast storage.
I could go on and on, but I gotta run right now...
Cheers
Code optimization does not necessarily mean that debug symbols have to be left out. Do a debug build with code optimization on; then you can still debug the program, and maybe the error occurs now.
One easy thing to do is to turn on compiler warnings and hints, rebuild the project, and then fix all the warnings and hints.
Cheers
If this is Delphi business code, with data-aware components etc., the following might not apply.
However, I'm writing machine vision code, which is fairly computational. Most of the unit tests are console based. I'm also involved with FPC, and over the years have tested a lot with FPC - partially as a hobby, partially in desperate situations where I wanted any hunch I could get.
Some standard tricks that I tried (decreasing usefulness)
use -gv and valgrind the code (practically this means applications are required to run on Linux/FreeBSD, but for computational code and unit tests that can be doable)
compile using fpc param -gt (=trash local vars, randomize local vars on procedure init)
modify the heap manager to randomize the data of the blocks it hands out (also applicable to Delphi code; a sketch follows below)
Try FPC's range/overflow checking and compiler hints.
run on a Mac Mini (powerpc) or win64. Due to totally different rules and memory layouts it can catch pretty funky things.
Tricks 2 and 3 together allow you to find most, if not all, initialization problems.
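For trick 3 in Delphi, here's a rough sketch of a "trashing" wrapper around the default memory manager (put the unit first in the .dpr's uses clause; FreeMem and ReallocMem are left untouched for brevity):

    unit TrashingMemMgr;

    interface

    implementation

    var
      OldMgr, NewMgr: TMemoryManager;

    function TrashGetMem(Size: Integer): Pointer;
    begin
      Result := OldMgr.GetMem(Size);
      if Result <> nil then
        FillChar(Result^, Size, $CD);  // fill fresh blocks with a recognisable pattern
    end;

    initialization
      GetMemoryManager(OldMgr);
      NewMgr := OldMgr;
      NewMgr.GetMem := TrashGetMem;
      SetMemoryManager(NewMgr);

    finalization
      SetMemoryManager(OldMgr);

    end.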
Try to find any clues, and then go back to Delphi and search in a more focused way, debug, etc.
I do realize this is not easy. I have a lot of FPC experience and didn't have to figure everything out from scratch for these cases. Still, it might be worth a try, and might be a motivation to start setting up non-visual systems and unit tests to be FPC compatible and platform independent. Most of this work will be needed anyway, seeing the Delphi roadmap.
With problems like this I always advise using log files.
Question: can you somehow detect the incorrect display in the source code?
If not, my answer won't help you.
If yes, check for the incorrectness, and as soon as you find it, dump the stack to a log file (see post-mortem debugging for details about dumping and resymbolizing the stack).
If you see that some data has been corrupted, but you don't know how and when it happened, extract a function that tests the data for validity (logging when the test fails), and call this function from more and more places over the program's execution (e.g. after each menu call). If you iterate this approach a few times, you have a good chance of finding the problem.
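Here's a sketch of such a validity probe with logging (all names invented; the suspect data is assumed to be a global enum value):

    uses SysUtils;

    type
      TDisplayMode = (dmNormal, dmZoomed);

    var
      GDisplayMode: TDisplayMode;  // the value that sometimes ends up corrupted

    procedure CheckDisplayState(const Where: string);
    const
      LogName = 'corruption.log';
    var
      Log: TextFile;
    begin
      if not (GDisplayMode in [dmNormal, dmZoomed]) then
      begin
        AssignFile(Log, LogName);
        if FileExists(LogName) then Append(Log) else Rewrite(Log);
        try
          Writeln(Log, DateTimeToStr(Now), ' invalid display mode detected at: ', Where);
        finally
          CloseFile(Log);
        end;
      end;
    end;

    // Call it from more and more places until the corrupting step is bracketed, e.g.:
    //   CheckDisplayState('after menu File|Open');
    //   CheckDisplayState('before repaint');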
Is this a local variable inside a procedure or function?
If so, then it lives on the stack, and will contain garbage. Depending on the execution path and compiler settings the garbage will change, potentially pushing your logic 'over the edge'.
--jeroen
Given your description of the problem I think you had uninitialized data that you got away with without the optimizer but which blew up with the optimization on.

Best way of validating modal dialog fields?

I often need to have modal dialogs for editing properties or application configuration settings, but I'm never really happy about how to validate these, and present the validation results to the user.
Choices and tools are typically:
1. Design the UI so that invalid choices are simply impossible - i.e. use "mask edits", range limits on spin-edits, ...
2. Try and trap errors as they're found - immediate dialogs or feedback when a user has an invalid value entered somewhere (although, because this may be due to an incomplete entry, this can be visually distracting).
3. Detect errors on change of control focus.
4. Validate the entire dialog when OK is pressed, and present message box(es) showing what's wrong.
No.4 is typically the easiest and quickest to code, but I'm never really happy with it.
What good techniques have you found to handle this?
While this question is fairly generic, an ideal answer would be easily implementable in Delphi for Win32...
As with everything, it depends. :) I try to look at some of these from the user's perspective.
Number 1. I don't like mask edits personally, but things like range limits on spin edits, pre-populated combo boxes etc make a lot of sense for general sanity checking and it makes the user's life easier.
I think number 2 could make using the dialog painful for the user. They might not enter the information in the order you think they will, or might leave an incomplete field and come back to it at the end.
For validation, I use a combination of 3 and 4.
Depending on the field (e.g. required value), I might validate it on each key press and disable the OK button if it's invalid. You could get fancy and change the colour of the bad field or use some other kind of visible validator control. It's obvious to the user and doesn't interrupt their "flow".
Things that aren't as easy to check on the fly (e.g. calls to the server) are done once when the user hits OK.
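A minimal sketch of that combination, with made-up control names (every edit's OnChange points at the same handler, and btnOK.ModalResult is left at mrNone so the handler decides when the dialog may close; ServerAcceptsSettings is a hypothetical helper):

    procedure TSettingsDialog.AnyEditChange(Sender: TObject);
    var
      Port: Integer;
    begin
      Port := StrToIntDef(edtPort.Text, -1);
      // Enable OK only when the cheap, local checks pass.
      btnOK.Enabled := (Trim(edtServerName.Text) <> '') and (Port >= 1) and (Port <= 65535);
      // Optionally flag the offending field as well as (or instead of) disabling OK.
      if (Port < 1) or (Port > 65535) then
        edtPort.Color := clInfoBk
      else
        edtPort.Color := clWindow;
    end;

    procedure TSettingsDialog.btnOKClick(Sender: TObject);
    begin
      // Expensive checks (e.g. a round trip to the server) happen once, here.
      if ServerAcceptsSettings(edtServerName.Text, StrToInt(edtPort.Text)) then
        ModalResult := mrOk
      else
        ShowMessage('The server rejected these settings.');
    end;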
Just an observation, but I have watched a lot of users populate dialog boxes (especially complex ones) and they DO NOT use the TAB key. They tend to click in and on edits, combos and radio buttons as they "think through" the answers or read from disparate documentation. This order will not be the same one you thought it would be! We as programmers are hopefully logical (captain, said Spock), but users, well...
One way that is nice (but requires effort) is to have each editor validate itself, either on change or on exit, and it simply changes colour if it is invalid. Your routine in the "OK button" code is then a simple matter of iterating through the control list and setting the focus to the first one that reports itself as "invalid" until none do.
I do work for the airline industry with focus on credit card stuff and I have TTicketNumberEdit, TCardNumberEdit, TExpiryDateEdit, TFormOfPaymentEdit etc. works well because in some of these the validation is not simple. As mentioned, you need to put effort in early on but it pays off in complex dialogs.
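Sketched out, the OK-button routine becomes something like this; TValidatingEdit and its IsValid are stand-ins for your own editor classes (TCardNumberEdit and friends), and the loop only scans the form's direct children:

    procedure TPaymentDialog.btnOKClick(Sender: TObject);
    var
      I: Integer;
    begin
      // Each editor validates itself and changes its own colour; here we just
      // look for the first control still reporting itself as invalid.
      for I := 0 to ControlCount - 1 do
        if (Controls[I] is TValidatingEdit) and
           not TValidatingEdit(Controls[I]).IsValid then
        begin
          TValidatingEdit(Controls[I]).SetFocus;
          ModalResult := mrNone;  // keep the dialog open (btnOK.ModalResult is mrNone too)
          Exit;
        end;
      ModalResult := mrOk;
    end;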
I think No. 4 is the best way to do the validation. In addition to being the easiest and quickest to code, you have all your validation logic in the same place, so if you need to connect to a database, compare two or more inputs, etc., everything is done only once.
While:
No. 1: this may be a nightmare to implement in some cases, and to update.
No. 2/3: you have to be aware of all the UI events related to validation - input changes, focus changes, etc. - which means heavy coding and is hard to debug.
The JVCL offers a component set for validating input (TJvValidators etc.). It marks fields that have no valid input and shows a hint to the user when the mouse moves over that marker. (I think I read about similar functionality in .NET, but I have never used it.)
While I like the concept and have actually used these components in a number of dialogs, I don't like the implementation much: it is a hog on CPU usage and the pre-defined validators that come with the JVCL are not really useful. Of course, having access to the JVCL SVN repository, I could just stop complaining and start improving the components...
Don't forget to take a look at Jim's great CodeRage session: Stop Annoying Your Users!
He has a section on input validation...
IMO, option #1 should be done as a matter of course, not optional, and the interface simplified as far as you can take it while still allowing the user to input the detail needed for the application. I don't like using masked edits, though. If I want a user to enter a number, for example, I'll just use a textbox, then try to parse the number when I go to save the field value.
For direct validation, I use #4 exclusively, unless there's a special case that calls for using one of the other methods. I like to let my users modify their inputs if they change their mind, so they can make a mistake and go back and fix it on their own because they already know there's an error in their input. I do help them out if possible, though (i.e., if a form field is empty or invalid and they hit OK, I will focus/select the offending field after showing an error message).
Doing #2 in a Windows Forms application is rarely pulled off well on its own, so I would just avoid it altogether as a primary means of validation. It could, however, be combined with #4 effectively, but I think in most cases, that would be overkill.

Is there a better multi-select than the default TDBGrid in Delphi?

First off, this applies to Delphi 5 Enterprise, as this is what we use at work. There's no view to upgrading any time soon, as this version "does what we need", apparently.
After setting the dgRowSelect and dgMultiSelect options on a TDBGrid, the behavior does not conform to standard Windows UI conventions.
I don't think we've ever needed this option before, or else I would have noticed how poor the default implementation in Delphi's TDBGrid is. I want Ctrl-Click for single rows (which works OK; not great, but OK) but also Shift-Click for a range selection (which doesn't work).
I suspect I could trap the WM_LBUTTONDOWN message and process it manually in a subclass, but are there any pitfalls that await me down that path?
I'm hoping someone has already had to go through these motions, as I can't imagine people being happy with the poor default effort offered.
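For what it's worth, here's a rough sketch of that subclassing route, using a MouseDown override rather than handling WM_LBUTTONDOWN directly (dgMultiSelect must be on; the bookmark walk is deliberately simplified and only handles ranges selected downwards):

    // In a unit that uses Classes, Controls, Grids, DB and DBGrids.
    type
      TRangeSelectDBGrid = class(TDBGrid)
      private
        FAnchor: TBookmarkStr;  // row of the last plain (non-shift) click
      protected
        procedure MouseDown(Button: TMouseButton; Shift: TShiftState;
          X, Y: Integer); override;
      end;

    procedure TRangeSelectDBGrid.MouseDown(Button: TMouseButton;
      Shift: TShiftState; X, Y: Integer);
    var
      DS: TDataSet;
      Clicked: TBookmarkStr;
    begin
      inherited MouseDown(Button, Shift, X, Y);  // the grid has now moved to the clicked row
      if (Button <> mbLeft) or (DataSource = nil) or (DataSource.DataSet = nil) then
        Exit;
      DS := DataSource.DataSet;
      if (ssShift in Shift) and (FAnchor <> '') then
      begin
        Clicked := DS.Bookmark;
        DS.DisableControls;
        try
          SelectedRows.Clear;
          DS.Bookmark := FAnchor;
          // Walk from the anchor towards the clicked row, selecting each row.
          // Simplification: assumes the anchor is above the clicked row; a real
          // version must also walk upwards when it isn't.
          while not DS.Eof do
          begin
            SelectedRows.CurrentRowSelected := True;
            if DS.Bookmark = Clicked then
              Break;
            DS.Next;
          end;
        finally
          DS.Bookmark := Clicked;
          DS.EnableControls;
        end;
      end
      else
        FAnchor := DS.Bookmark;  // a plain click resets the range anchor
    end;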
The Infopower library, available from Woll2Woll [http://www.woll2woll.com], contains an extended datagrid which includes properties (msoAutoUnselect,msoShiftSelect) that will provide the behavior you want.
These properties were introduced very early in Infopower's history, so even the cheapest version you can find should be adequate. Infopower costs less than three hundred dollars in any case.
I am not affiliated with Woll2Woll in any way; I just use their product.
-Al.

What is the difference between a bug and a change request in MSF for CMMI?

I'm currently evaluating the MSF for CMMI process template under TFS for use on my development team, and I'm having trouble understanding the need for separate bug and change request work item types.
I understand that it is beneficial to be able to differentiate between bugs (errors) and change requests (changing requirements) when generating reports.
In our current system, however, we only have a single type of change request and just use a field to indicate whether it is a bug, requirement change, etc (this field can be used to build report queries).
What are the benefits of having a separate workflow for bugs?
I'm also confused by the fact that developers can submit work against a bug or a change request, I thought the intended workflow was for bugs to generate change requests which are what the developer references when making changes.
@Luke
I don't disagree with you, but this difference is typically the explanation given for why there are two different processes available for handling the two types of issue.
I'd say that if the color of the home page was originally designed to be red, and for some reason it is blue, that's easily a quick fix and doesn't need to involve many people or man-hours to do the change. Just check out the file, change the color, check it back in and update the bug.
However, if the color of the home page was designed to be red, and is red, but someone thinks it needs to be blue, that is, to me anyway, a different type of change. For instance, has anyone thought about the impact this might have on other parts of the page, like images and logos overlaying the blue background? Could there be borders of things that look bad? Link underlining is blue - will that show up?
As an example, I am red/green color blind; changing the color of something is, for me, not something I take lightly. There are enough web pages out there that give me problems. Just to make the point that even the most trivial change can be nontrivial if you consider everything.
The actual implementation change is probably much the same, but to me a change request is a different beast, precisely because it needs to be thought about more to make sure it will work as expected.
A bug, however, is that someone said this is how we're going to do it and then someone did it differently.
A change request is more like but we need to consider this other thing as well... hmm....
There are exceptions of course, but let me take your examples apart.
If the server was designed to handle more than 300,000,000,000 pageviews, then yes, it is a bug that it doesn't. But designing a server to handle that many pageviews is more than just saying "our server should handle 300,000,000,000 pageviews"; it should include a very detailed specification for how it can do that, right down to processing-time guarantees and average disk-access times. If the code is then implemented exactly as designed, and is unable to perform as expected, then the question becomes: did we design it incorrectly or did we implement it incorrectly?
I agree that in this case, whether it is to be considered a design flaw or an implementation flaw depends on the actual reason why it fails to live up to expectations. For instance, if someone assumed disks were 100 times as fast as they actually are, and this is deemed to be the reason the server fails to perform as expected, I'd say this is a design bug, and someone needs to redesign. If the original requirement of that many pageviews still holds, a major redesign with more in-memory data and similar might have to be undertaken.
However, if someone has just failed to take into account how RAID disks operate and how to correctly benefit from striped media, that's a bug and might not need that big a change to fix.
Again, there will of course be exceptions.
In any case, the original difference I stated is the one I have found to be true in most cases.
Keep in mind that part of a Work Item Type definition for TFS is the definition of its "workflow", meaning the states the work item can be in and the transitions between those states. These can be secured by security role.
So - generally speaking - a "Change Request" would be initiated and approved by someone relatively high up in an organization (someone with "sponsorship" rights related to spending the resources to make a possibly very large change to the system). Ultimately this person would be the one to approve that the change was made successfully.
For a "Bug" however, ANY user of the application should be able to initiate a Bug.
At one organization where I implemented TFS, only department heads could originate a "Change Request", but "Bugs" were created from help-desk tickets (not automated, just through process...).
Generally, though I can't speak for CMM, change requests and bugs are handled and considered differently because they typically refer to different pieces of your application lifecycle.
A bug is a defect in your program implementation. For instance, if you design your program to be able to add two numbers and give the user the sum, a defect would be that it does not handle negative numbers correctly, and thus a bug.
A change request is when you have a design defect. For instance, you might have specifically said that your program should not handle negative numbers. A change request is then filed in order to redesign and thus reimplement that part. The design defect might not be intentional, but could easily be because you just didn't consider that part when you originally designed your program, or new cases that didn't exist at the time when the original design was created have been invented or discovered since.
In other words, a program might operate exactly as designed, but need to be changed. This is a change request.
Typically, fixing a bug is considered a much cheaper action than executing a change request, as the bug was never intended to be part of your program. The design, however, was.
And thus a different workflow might be necessary to handle the two different scenarios. For instance, you might have a different way of confirming and filing bugs than you have for change requests, which might require more work to lay out the consequences of the change.
A bug is something that is broken in a requirement which has already been approved for implementation.
A change request needs to go through a cycle in which the impact and effort has to be estimated for that change, and then it has to be approved for implementation before work on it can begin.
The two are fundamentally different under CMM.
Is my assumption incorrect then that change requests should be generated from bugs? I'm confused because I don't think all bugs should be automatically approved for implementation -- they may be trivial and at least in our case will go through the same review process as a change request before being assigned to a developer.
Implementation always comes from a requirement. It may come from the product manager, it may come from one of your random thoughts. It may be documented, it may come from a conversation. At the end of the day, even for something as simple as a := a + 1, the "real" implementation depends on the compiler, linker, CPU, etc., which in turn depend on the physical laws of the real world.
A bug is something that was implemented contrary to the ORIGINAL requirement. Anything else is a change request.
If the requirement is changed and the implementation needs to be changed as well, it's a change request.
If the dependency has been changed, for example web browser stopped supporting some tags and you need to make some change, it's a change request.
In the real world, anything that is not properly documented should be treated as a change request. Did the product manager forget to put something in the story? Sorry, that's a change request.
All change requests should be properly estimated and pointed. Developers get paid for implementing change requests, not for creating bugs and then fixing the ones they made.
