Delphi exceptions not letting me see local variables

When debugging in Delphi, an exception will correctly tell me the line of code causing the fault, but I cannot get access to any local variables. Is this a limitation in the debugger? Or am I missing something simple? At present, I have to mirror all local variables to a global on the line before the fault, recompile the program and hope to be able to repeat the same exception.
For example
MyArray[I]:=Foo(...);
If I is out of bounds (with bounds checking turned on), I cannot see what the variable I is, unless I mirrored it to a globally scoped debug variable on the previous line.
Or if I have
MyInteger:=Trunc(MyFloat);
and MyFloat is 6.1E+17, I have no idea what its value is.

You can see the values of local variables when you select the proper line in the call stack window. It is usually one or two lines before the one where the exception is raised.
I don't remember exactly in which version this was implemented, but it is definitely one of the newer ones.

The "problem" is caused by the compiler as far as I know. The optimization feature of the compiler acts like a garbage collector, it frees the variables declared within a function when not used any more.
To overcome the problem, write a exception handler and make a fake use of the variable within the exception catch block.
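Here is a minimal sketch of that "fake use" idea; the array, the out-of-range index, and the no-op comparison are all made up for illustration:

procedure DoWork;
var
  I: Integer;
  MyArray: array[0..9] of Integer;
begin
  I := 10; // deliberately out of bounds
  try
    MyArray[I] := 0; // raises ERangeError when range checking ($R+) is on
  except
    // The "fake use": referencing I in the handler stops the compiler from
    // discarding it, so the debugger can still show its value here.
    if I = MaxInt then ;
    raise; // let the exception propagate after you have had a look
  end;
end;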

Related

Mainframe CEE3DD abend - CEE3501S - Module not found in COBOL Dynamic Call

I have encountered an issue recently while processing a CICS transaction. My CICS transaction calls a chain of dynamically linked COBOL modules. The transaction runs fine the first time after the PGM-A load module is new-copied into the region. When I try to process the transaction a second time, I keep getting a CEE3DD abend saying the module was not found for PGM-B, which is called from PGM-A. If I do a NEWCOPY for PGM-A in CICS, the transaction runs fine again.
Something is wrong with the CICS setup or memory, but I am not able to figure it out. PGM-A works fine in batch processing. PGM-B has no issues when it is called from any program other than PGM-A.
Can someone share some thoughts on what may be wrong with this?
To invoke your program via CICS, it must be compiled with the NODYNAM option.
It admittedly seems counter-intuitive, but using the DYNAM option will cause CICS stubs to be loaded, instead of your intended programs, and result in the CEE3501S condition.
So, compile your programs with the NODYNAM option to avoid this error condition.
See the following links for additional info:
https://www.ibm.com/support/knowledgecenter/en/SSGMCP_5.3.0/com.ibm.cics.ts.applicationprogramming.doc/topics/dfhp3_cobol_subprog_rules.html
http://www-01.ibm.com/support/docview.wss?uid=swg21054079
Does PGM-A use "CALL VARIABLE" to invoke PGM-B? If so check the contents of VARIABLE on the second run (the contents of that variable will probably be reported in the error message. The contents of the variable may be overwritten by a bug in PGM-A. That might explain why the program always fails after the (seemingly) succesful run and after a newcopy.
Converting this from dynamic to static worked. But the question remains why it was not working with dynamic linking.

IfThen(Assigned(Widget), Widget.Description, 'No Widget') doesn't crash. Should it?

In code that I help maintain, I have found multiple examples of code that looks like this:
Description := IfThen(Assigned(Widget), Widget.Description, 'No Widget');
I expected this to crash when Widget was nil, but when I tested it, it worked perfectly.
If I recompile it with "Code inlining control" turned off in Project - Options - Compiler, I do get an Access Violation.
It seems that, because IfThen is marked as inline, the compiler is normally not evaluating Widget.Description if Widget is nil.
Is there any reason that the code should be "fixed", as it doesn't seem to be broken? They don't want the code changed unnecessarily.
Is it likely to bite them?
I have tested it with Delphi XE2 and XE6.
Personally, I hate to rely on a behavior that isn't contractual.
The inline directive is a suggestion to the compiler.
If I understand correctly what I read, your code would also crash if you build using runtime packages.
inlining never occurs across package boundaries
Like Uli Gerhardt commented, it could be considered a bug that it works in the first place. Since the behavior isn't contractual, it can change at any time.
If I were to make any recommendation, I would flag that as a low priority "fix". I'm pretty sure some would argue that if the code works, it doesn't need fixing; there is no bug. At that point, it becomes more of a philosophical question (if a tree falls in a forest and no one is around to hear it, does it make a sound?).
Is there any reason that the code should be "fixed", as it doesn't seem to be broken?
That's really a question that only you can answer. However, to answer it then you need to understand fully the implications of reliance on this behaviour. There are two main issues that I perceive:
Inlining of functions is not guaranteed. The compiler may choose not to inline, and in the case of runtime packages or DLLs, a function in another package cannot be inlined.
Skipping evaluation of an argument only occurs when the compiler is sure that there are no side effects associated with evaluation of the argument. For instance, if the argument involved a function call, the compiler will ensure that it is always evaluated.
To expand on point 2, consider the statement in your question:
Description := IfThen(Assigned(Widget), Widget.Description, 'No Widget');
Now, if Widget.Description is a field, or is a property with a getter that reads a field, then the compiler decides that evaluation has no side effects. This evaluation can safely be skipped.
On the other hand, if Widget.Description is a function, or property with a getter function, then the compiler determines that there may be side effects. And so it ensures that Widget.Description is evaluated exactly once.
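To make the distinction concrete, here is a sketch of the two shapes of Description discussed above; TWidget and its members are assumptions based on the question, not code taken from it:

type
  TWidget = class
  private
    FDescription: string;
  public
    // Field-backed property: the compiler sees no side effects, so an
    // inlined IfThen may skip evaluating Widget.Description when Widget is nil.
    property Description: string read FDescription;
    // If Description instead read a getter function, e.g.
    //   property Description: string read GetDescription;
    // evaluation could have side effects, so the argument would always be
    // evaluated, and a nil Widget would then raise an access violation.
  end;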
So, armed with this knowledge, here are a couple of ways for your code to fail:
You move to runtime packages, or the compiler decides not to inline the function.
You change the Description property getter from a field getter to a function getter.
If it were me, I would not like to rely on this behaviour. But as I said right at the top, ultimately it is your decision.
Finally, the behaviour has been changed from XE7. All arguments to inline functions are evaluated exactly once. This is in keeping with other languages and means that observable behaviour is no longer affected by inlining decisions. I would regard the change in XE7 as a bug fix.
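If you decide not to rely on it, a straightforward replacement is the explicit test (a sketch, reusing the names from the question):

if Assigned(Widget) then
  Description := Widget.Description
else
  Description := 'No Widget';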
It has already been fixed in XE7, and it was confirmed that the old behavior was considered wrong.
See https://quality.embarcadero.com/browse/RSP-11531

Lazarus Free Pascal / Delphi - RunError 211

I'm trying to connect my Windows XP program (Lazarus) to my Ubuntu postgres server.
When the Lazarus program runs, it seems to compile fine but I get this error:
Project ... raised exception class 'RunError(211)'.
Then it terminates execution (and I don't see any output), and opens up a file customform.inc. In that file, it shows a procedure TCustomForm.DoCreate; where it highlights a line: if Assigned(FOnCreate) then FOnCreate(Self);
I believe this is one of the system's files.
I never get to see any output.
What could this be? Thanks!
MORE INFO:
I've narrowed down the error to this line:
dbQuery_Menu.SQL.Text:='Select * From "tblMenus"';
dbQuery_Menu.Open;
The exception is triggered when the Open statement is executed.
BTW, dbQuery_Menu is defined as a TSQLQuery component.
Clueless! :(
Run error 211 appears when you try to call an abstract method. Check this link for more information on FreePascal/Lazarus runtime errors.
Since you say all is done by code and you have no visual components, the problem probably lies in your code trying to use an ancestor component which has not overridden the Open method. You should be able to solve this by using the correct descendant component.
Another possibility, although I would strongly recommend avoiding this one, is to override the Open method yourself. It should be avoided because if you are using an ancestor component then you would probably have to override more abstract methods.
HTH
After nearly 5 days I found the answer. Many thanks to all those who have contributed with their ideas, ESPECIALLY RRUZ, RBA and Guillem Vicens. There are other related posts, all connected to getting the FIRST Lazarus program working with PostgreSQL.
Summary.
The biggest mistake I made here was that I used the TSQLConnection component. Don't do this. Instead use the TPQConnection.
Everything is done through code. We're not using any draggable components from the top tab.
Don't rely on the Lazarus docs (wiki), at least for working with PG DBs. It is outdated, and some of the examples can be pretty misleading.
Make sure that fields have some default values. For example, if a Boolean field has no true or false (t/f) set, this may lead to errors.
And that's it! I hope many postgres+Lazarus newbies will find this useful.
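As a rough illustration of the working setup described above, here is a minimal console-style sketch using the standard sqldb and pqconnection units; the host, database name and credentials are placeholders, and exact property names can vary slightly between FPC versions:

program PGTest;

{$mode objfpc}{$H+}

uses
  sqldb, pqconnection;

var
  Conn: TPQConnection;
  Trans: TSQLTransaction;
  Query: TSQLQuery;
begin
  Conn := TPQConnection.Create(nil);
  Trans := TSQLTransaction.Create(nil);
  Query := TSQLQuery.Create(nil);
  try
    Conn.HostName := '192.168.0.10'; // placeholder server address
    Conn.DatabaseName := 'mydb';     // placeholder database name
    Conn.UserName := 'postgres';
    Conn.Password := 'secret';

    Trans.Database := Conn;          // link transaction, connection and query
    Query.Database := Conn;
    Query.Transaction := Trans;

    Query.SQL.Text := 'Select * From "tblMenus"';
    Query.Open;                      // the statement that raised RunError 211 before
    Writeln('Query opened OK');
    Query.Close;
  finally
    Query.Free;
    Trans.Free;
    Conn.Free;
  end;
end.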
From http://www.network-theory.co.uk/docs/postgresql9/vol2/SQLSTATEvsSQLCODE.html: -211 (ECPG_CONVERT_BOOL) means the host variable is of type bool and the datum in the database is neither 't' nor 'f' (SQLSTATE 42804).

Why does execution jump to the end of a proc after an exception?

When an unhandled exception happens while debugging some code in any procedure/function/method, the debugger stops there and shows the message.
If I now continue debugging step by step, the execution jumps directly from the line that created the exception to the end of the current procedure (if there is no finally block).
Wouldn't it be just as good to continue with the next line of the current procedure?
Why jump to the end of the proc and continue with the calling procedure?
Is this just by design or is there a good reason for it?
An exception is an unexpected situation; that is why processing is stopped.
The jump to the end of the procedure is an invisible finally block that releases any locally "allocated" memory, such as strings, interfaces, records, etc.
If you want to handle an exception, you have to wrap the call that can raise it in a try..except statement, and use an "on" clause to handle only the particular exception you want to handle.
In the except block you can inspect variables in the debugger, and in code you can re-raise the exception if needed.
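A small sketch of that pattern, reusing the Trunc example from the first question; the concrete exception class raised by an out-of-range Trunc depends on compiler settings, so the sketch catches Exception, and you should narrow it to the specific class you see in practice:

procedure TruncExample;
var
  MyFloat: Double;
  MyInteger: Integer;
begin
  MyFloat := 6.1E+17; // far too large to fit in an Integer
  try
    MyInteger := Trunc(MyFloat);
  except
    on E: Exception do // Exception and E.Message come from SysUtils
    begin
      // MyFloat and MyInteger are still in scope, so they can be inspected
      // in the debugger (or logged) before deciding what to do.
      Writeln('Trunc failed for ', MyFloat, ': ', E.Message);
      raise; // re-raise if this level cannot handle the problem
    end;
  end;
end;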
In general, an uncaught exception will execute a hidden "finally" in each function on the stack, as it "unwinds" up to an exception handler. This cleans up local variables in each stackframe. In languages like C++ with the Resource Acquisition Is Initialization paradigm, it will also cause destructors to run.
Eventually, somewhere up the callstack, the exception will be caught by a handler. If there's no explicit one, the system-provided one will kill the process, because what else can it reasonably do?
Throwing an exception is a way of saying "Something unexpected is happening; I don't know how to handle this." In such a case it is better not to do anything (other than throwing the exception) than to try to continue, not knowing whether what you're doing is correct.
In real life you have the same kind of thing: If someone asks you to count to 10 in Hebrew (or some language you don't know), you just say you don't know. You don't go ahead and try anyway.
I would expect it to jump to the end of the proc, and then jump to the except or finally block of the calling proc. Does it really continue in the calling proc as if nothing had happened? What does it use as the return value (if it's a function call)?
Continuing with the next line in the original proc/function would be a very bad thing: it would mean the code executing radically differently in the debugger than in a release build (where the exception would indeed cause execution to exit that proc/function unless there's an except/finally block). Why should debugging let you ignore exceptions completely?
It has to unwind the stack to find a handler.
I do agree it's very annoying behavior. Continuing isn't an option but it sure would make life easier if the debugger was pointing to the spot that threw it with the local variables still intact. Obviously, the next step would be to the hidden finally, not to the next line.
I just want to be able to examine everything I can about what caused it. I was fighting this not long ago. Text parsing: I KNEW the offending string contained no non-numeric characters (sanity limits mean the overflow case should never happen; it's my data file, and all I'm worried about is oopses), so I didn't put an exception handler around the StrToInt. What's this nonsense about it not being a valid number? Yup, the routine wouldn't call it with anything non-numeric in the buffer, but an empty string has nothing non-numeric in it!
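The trap in that anecdote in miniature; a sketch, where TryStrToInt from SysUtils is the standard way to avoid needing a handler at all:

// Requires SysUtils for TryStrToInt and EConvertError.
procedure ParseExample;
var
  S: string;
  V: Integer;
begin
  S := ''; // contains no non-numeric characters, yet is not a valid number either
  if TryStrToInt(S, V) then
    Writeln('Parsed: ', V)
  else
    Writeln('Not a valid number: "', S, '"'); // StrToInt(S) would raise EConvertError here
end;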

In Delphi: How to skip sections of code while debugging?

I often accidentally step into code that I'm not interested in while debugging in Delphi.
Let's start by saying that I know that you can step over with F8, and that you can run to a certain line with F4.
Example:
function TMyClass.DoStuff(): Integer;
begin
  // do some stuff
  bla();
  Result := 0;
end;

procedure TMyClass.Foo();
begin
  if DoStuff() = 0 then // press F7 when entering this line
    Beep;
end;
Example: I want to step into method DoStuff() by pressing F7, but instead of going there, I first end up in FastMM4.FastGetMem(), which is a massive blob of assembly code that obviously I'm not interested in at the moment.
There are several ways to go about this, and I don't like any of them:
Add a breakpoint on "bla" (almost useless if you only want to step into DoStuff on special occasions, like iteration 23498938);
Instead of pressing F7, manually move the cursor to "bla", and press F4 (Works for this simple example. In practice, it doesn't);
In case of FastMM: temporarily disable fastmm;
Is there any way to hint the IDE that I'm never interested into stepping into a certain block of code, or do I always have to set extra breakpoints or use F4 to try to avoid this?
I'm hoping for some magic compiler directive like {$NODEBUG BEGIN/END} or something like that.
In most cases being able to exclude entire units would be fine-grained enough for me, but being able to avoid certain methods or even lines of code would be even better.
Update: Maybe codegear should introduce something like skip-points (as opposed to break-points) :-)
There is a "magic nodebug switch". {$D-} will disable the generation of debug code. Place that at the top of your FastMM unit and you won't end up tracing into it. And if you do end up in a function you don't want to be in, SHIFT-F8 will get you out very quickly. (WARNING: Don't use SHIFT-F8 from inside an assembly-code routine that plays around with the stack. Unpredictable behavior can result. F4 to the bottom of it instead.)
If you're jumping into FastMM code, then there are memory operations occurring. The code you've shown doesn't have any memory operations, so your question is incomplete. I'll try to guess at what you meant.
When a subroutine has local variables of compiler-managed types (such as strings, interfaces, or dynamic arrays), the function prologue has non-trivial work to do. The prologue is also where reference counts of input parameters are adjusted. The debugger represents the prologue in the begin line of the function. If the current execution point is that line, and you "step into" it, you'll be taken to the RTL code for managing the special types. (I wouldn't expect FastMM to be involved there, either, but maybe things have changed from what I'm used to.) One easy thing to do in that situation is to "step over" the begin line instead of into it; use F8.
If you're really pressing F7 when entering your highlighted line, then you're doing it wrong. That's stepping into the begin line, not the line where DoStuff is called. So whether you get taken to the FastMM code has nothing to do with the implementation of DoStuff. To debug the call to DoStuff, the current execution point should already be the line with the call on it.
If you only want to debug DoStuff on iteration 23498938, then you can set a conditional breakpoint in that function. Click in the gutter to make a normal breakpoint, and then right-click it to display its properties. There you can define a condition that will be evaluated every time execution reaches that point. The debugger will only stop there when the condition is true. Press F8 to "step over" the DoStuff call, and if the condition is true, the debugger will stop there as though you'd pressed F7 instead.
You can toggle the "use debug DCUs" option to avoid stepping into most RTL and VCL units. I don't know whether FastMM is included in that set. The key difference is whether the DCUs you've linked to were compiled with debug information. The setting alters the library path to include or exclude the subdirectory where the debug DCUs are. I think you can configure the set of included or excluded debug directories so that a custom set of directories is added or removed based on the "debug DCUs" setting.
Back to breakpoints. You can set up breakpoint groups by assigning names to your breakpoints. You can use an advanced breakpoint to enable or disable a named group of breakpoints when you pass it. (Breakpoint groups can have just one breakpoint, if you want.) So, for example, if you only want to break at location X if you've also passed some other location Y in your program, you could set a disabled breakpoint at X and a non-breaking breakpoint at Y. Set the "enable groups" setting at Y to enable group X.
You can also take advantage of disabled breakpoints without automatic enabling and disabling. Your breakpoints appear in the "breakpoints" debugger window. If you're stepping through DoStuff and you decide you want to inspect bla this time, go to the breakpoint window and enable the breakpoint at bla. No need to navigate to bla's implementation to set the breakpoint there.
For more about advanced breakpoints, see Using Non-Breaking Breakpoints in Delphi, an article by Cary Jensen from a few years ago.
I may have missed something with your post, but with FastMM4 you can edit the FastMM4Options.Inc include file and remove the '.' from the following define:
From FastMM4Options.inc:
{Enable this option to suppress the generation of debug info for the
FastMM4.pas unit. This will prevent the integrated debugger from stepping into
the memory manager code.}
{$.define NoDebugInfo}
After recompiling (a full build may be needed), the debugger will (should) no longer step into the FastMM code.
Use a precompiled non-debug DCU of FastMM
In the project dpr file, I use
uses
  {$IFNDEF DEBUG} FastMM4, {$ENDIF}
  ... // other units
to exclude FastMM4 during debug mode. Requires no change in FastMM4 so I don't have to remember to add {$D-} in FastMM when I change to a different version.
AFAIK, the debugger is only aware of the files in Browsing Path that you can modify in Options. So if you exclude the paths of modules you're not interested in debugging that will give the effect of what you want to do.
One caveat: code completion also relies on the Browsing Path, so you might run into occasions where code completion falls short when needed.
Although it isn't a direct answer to your question, you could modify your first suggested solution by putting a breakpoint at bla that is only enabled when a breakpoint at Foo is passed (or some other condition of your choosing, such as iteration count). Then it will only break when you want it to.
As an aside, I am finding more and more that I am not halting execution at break points, but rather dumping variable values or stack dumps to the message log. This allows more careful analysis than on-the-fly inspection of variables, etc. FWIW.
No. I don't believe there is a way to tell the debugger to never stop in a certain section of code. There is no magic directive.
The best you can do when you get into a routine you don't want to be in is to use Shift+F8 which will Run until the Return. Then do a F7 or F8 to exit the procedure.
Hmmm. Now I see Mason's answer. Learned something. Thanks, Mason. +1