IJVM: Is IRETURN equal to HALT? Because both stop the interpreter - ijvm

I think IRETURN and HALT are the same command in IJVM, because I tried both and both stopped the interpreter.

They are not the same (what would be the point of having two instructions doing the same thing?).
See the description in https://en.wikipedia.org/wiki/IJVM:
HALT is described as "Halt the simulator"
IRETURN is described as "Return from method with integer value"
If your code is running in the top-level method, they may appear to have the same effect.
If your top-level method calls other methods, you will see the distinction when the instructions are placed within the called methods:
HALT will still halt the simulator, thereby aborting any ongoing calculations
IRETURN will return from the called method to the caller
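To see why, here is a minimal sketch (in C, not actual IJVM/Mic-1 simulator code; the opcode values and helpers are invented for illustration) of how an interpreter's dispatch loop typically treats the two instructions:
#include <stdio.h>

enum { OP_IRETURN = 1, OP_HALT = 2 };   /* illustrative values, not the real encodings */

/* Toy dispatch loop: only the part relevant to HALT vs IRETURN is sketched. */
static void run(const unsigned char *code)
{
    int pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_IRETURN:
            printf("pop the current frame, push the result, resume in the caller\n");
            break;                /* the interpreter keeps running in the caller */
        case OP_HALT:
            printf("simulator halted\n");
            return;               /* the whole interpreter stops, wherever it is */
        default:
            break;                /* other instructions omitted */
        }
    }
}

int main(void)
{
    const unsigned char program[] = { OP_IRETURN, OP_HALT };
    run(program);
    return 0;
}
In the top-level method there is no caller frame left, so an IRETURN there ends the program just like HALT would, which is why the two looked identical in your test.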

Related

Can I have a custom function stop a Lua script in Delphi without exiting the application?

I have an application that periodically runs a Lua script. Within the script, on occasion, I have created a custom registered Lua function to check some parameters and decide whether the Lua script should continue or exit. The logic ideally should not be part of the script, and I can think of ways to work around this within the Lua script, but I'm wondering if it is possible to stop the execution of a Lua script without ending the application.
I have a custom function written in Delphi and exposed to Lua scripts using Lua 5.1. The Lua script looks something like the one shown below, and it is started using luaL_loadbuffer.
io.write("Script starting\n");
--Custom Function
ExitIfFound();
io.write("Script continuing\n");
My custom function looks something like this; below is one of my attempts, where I tried to use lua_error to stop the script:
function ExitIfFound(LuaState: TLuaState): Integer;
var
  s: AnsiString;
begin
  s := 'ExitIfFound ending script, next Lua script line not called';
  lua_pushstring(LuaState, PAnsiString(s));
  lua_error(LuaState);
end;
When my custom function is called, I'm unsure how to exit the Lua script without any further evaluation. I have seen posts referring to Lua and using setjmp and longjmp in C, but I'm curious how these may translate to Delphi.
In the example above, when I use lua_error, the entire program crashes, with Windows showing its typical [luarun.exe has stopped working] dialog ...
With all of this, I am still pretty new to integrating Lua with Delphi and hoping that I can find some cleaner options to explore.
There is no clean way to entirely abort a Lua script. The lua_error function is the correct way to signal an error. It is the caller's responsibility to catch the error and propagate it to the next caller.
If you cannot rely on the caller to cooperate, then you can try to exert more control by installing debug hooks. Then the host program will be consulted before continuing to run the script. However, the script can still avoid exiting by using pcall to catch any errors.
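As a sketch of the debug-hook approach (in C, since the Lua API is a C API; the flag and function names are invented for illustration, and the same calls should be available through whatever Delphi binding you use):
#include <lua.h>
#include <lauxlib.h>

static volatile int AbortRequested = 0;   /* set by the host when the script should stop */

static void abort_hook(lua_State *L, lua_Debug *ar)
{
    (void)ar;                              /* the activation record is not needed here */
    if (AbortRequested)
        luaL_error(L, "script aborted by host");   /* unwinds out of the running script */
}

/* Call this before luaL_loadbuffer/lua_pcall: Lua will then invoke abort_hook
   every 1000 VM instructions, giving the host a chance to stop the script. */
static void install_abort_hook(lua_State *L)
{
    lua_sethook(L, abort_hook, LUA_MASKCOUNT, 1000);
}
As noted above, a script can still swallow the resulting error with pcall, so this is cooperative rather than bulletproof.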
The crash in your program is probably not simply from setting an error. Rather, it's likely from using the wrong calling convention on your ExitIfFound function. It needs to be cdecl, but Delphi's default, if you don't specify anything else, is register. Using the wrong calling convention will give you unpredictable parameter values and can lead to a corrupted stack. If you type-casted the function or used the @ operator when you called lua_register, then you might have hidden the calling-convention mismatch from the compiler's type checker, which would otherwise have alerted you to the problem at compile time.
When compiled as C++, lua_error will use an exception instead of longjmp, but either way, the caller always catches the error. Exceptions are important, though, when your Delphi code uses compiler-managed types like string, or exception-sensitive constructs like try-finally blocks. In C mode, lua_error calls longjmp to jump directly to the waypoint set by a previous call to setjmp. That jump will skip over any exception handlers, like the ones the Delphi compiler sets up to ensure the finally block runs and the string gets cleaned up.
A further headache is that since the compiler cleans up the string while exiting the function, the pointer you put on the Lua stack might not be valid by the time it's used; that depends on whether lua_pushstring makes a copy of its argument.
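For comparison, here is roughly what the same function looks like as a plain C lua_CFunction, where the calling convention is cdecl by definition and lua_pushstring is documented to copy the message (the function name and message are just the ones from the question):
#include <lua.h>

static int ExitIfFound(lua_State *L)
{
    /* lua_pushstring copies the text into Lua's own memory, so the C string
       does not need to stay alive after this call. */
    lua_pushstring(L, "ExitIfFound ending script, next Lua script line not called");
    return lua_error(L);   /* longjmps (or throws) back to lua_pcall; never actually returns */
}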

iOS: LLDB multiline breakpoint commands don't work as expected

I'm trying to do something a little fancy here, but the docs suggest it should be possible. Maybe LLDB is still too new, but I'm getting a lot of debugger crashes and deadlocks, and even when that doesn't happen, it doesn't seem to work the way I expected.
I'm trying to put together a debug wrapper around all selector calls, to extract the message call graph inside a certain chunk of code. (I could explain why if you really want to know, but it isn't really relevant to the debugger issue.)
I start out with an Xcode breakpoint on the line where I want to start tracking things (for bonus points, this is happening on a secondary thread, but before you ask: no, nothing on any other thread is accessing this object or anything in its property subgraph):
[myObject startProcessing];
The breakpoint triggers, and I run "bt", just to extract:
* thread #5: tid = 0x2203, 0x000277d2 .........
I then do something mildly evil: I put a breakpoint in objc_msgSend, right at the instruction where it calls out to the real object selector. objc_msgSend looks like:
libobjc.A.dylib`objc_msgSend:
...(instructions)...
0x37babfa4: bx r12
...(more instructions)...
(Actually there are two bx calls but let's keep things simple.) I run:
breakpoint set -a 0x37babfa4 -t 0x2203
(TID included because I'm having enough trouble tracking this one thread and I don't need irrelevant stuff interfering.)
Here's where the scripting comes in. The setup described above works exactly as I'd like it to. If I resume execution until the breakpoint triggers, I can run:
frame select 0
thread step-inst -m this-thread 5
frame info
continue
and the effect will be that the debugger:
moves to the objc_msgSend frame
steps by one instruction, advancing it into the object selector frame it was pointing at
displays relevant details (object type, selector called)
resumes execution
at which point I can keep pasting in those four commands over and over and copying the output until I hate myself.
If, on the other hand, I run:
breakpoint command add -s command
and paste in those exact same commands, everything breaks. It does not advance to the object selector frame. It doesn't show the frame details, or at least not the correct ones -- depending on various tweaks (see below), it may or may not show "objc_msgSend" as being the current function. It doesn't resume execution.
In this case, if I could get that example working, I'd be mostly happy. But for even more bonus points, I've also tried this with python, which I would prefer because it would allow for much more sophisticated logging:
breakpoint command add -s python
> thread = frame.GetThread()
> thread.StepInstruction(1)
> newFrame = thread.GetFrameAtIndex(0)
> print " " * thread.GetNumFrames() + newFrame.GetFunctionName()
> process = thread.GetProcess()
> process.Continue()
> DONE
Again no good. Again depending on tiny details, this may or may not print something (usually objc_msgSend), but it never prints the correct thing. It never steps the instruction forward. It never resumes execution afterwards.
And again, the python version works fine if I do it by hand: if I wait till the breakpoint fires, then run "script" and enter those exact same lines, it works as expected. Some parts will even work in isolation; e.g. if I remove everything except the parts that get the process and call process.Continue() and trigger those automatically, that "works" (meaning I see the lldb prompt flashing rapidly as it suspends and resumes execution; usually I regret this because it becomes unresponsive and crashes shortly after).
So: Any ideas? Is the technology Not Ready Yet, or am I just missing some clever piece of the puzzle that will fix everything? Or should I give up entirely and just live with the fact that there are some parts of object internals that I will never understand?...
Breakpoint commands cannot resume execution and then get control back again, at least today. There are a lot of unresolved questions about what would happen if breakpoint 1 is running the process and then breakpoint 2 is hit. Besides the whole question of whether the code base can really handle nested breakpoints correctly (it was designed to...), what does it mean if breakpoint 2 decides execution should stop? Is breakpoint 1's state thrown away?
It seems a little esoteric to worry about a breakpoint hitting another breakpoint while stepping the inferior process but unless all the details have been worked out, it's easy for the user to shoot themselves in the foot. So for today, breakpoint commands can either stop when the breakpoint is hit or continue to run - but there isn't any ability to run a little bit and do more processing. I know this would be a really useful capability for certain tasks but there are a lot of gotchas that need to be thought out before it could be done.
For some cases, it is possible to handle it the other way around ... if you want to stop in function parser() only when it has been called by function lexer(), it is easy to put a breakpoint on parser() with a few python commands that go one stack frame up and see what the calling function is. If it's not lexer(), continue. I don't think this will apply to what you're trying to do, though.

What's the point of NSAssert, actually?

I have to ask this because the only thing I notice is that if the assertion fails, the app crashes. Is that the reason to use NSAssert, or is there some other benefit? And is it right to put an NSAssert above any assumption I make in code, like a function that should never receive a -1 as a parameter but may receive a -0.9 or -1.1?
Assert is to make sure a value is what it's supposed to be. If an assertion fails, that means something went wrong, and so the app quits. One reason to use assert: if you have some function that will misbehave or create very bad side effects when one of its parameters is not exactly some value (or within a range of values), you can put an assert to make sure that value is what you expect it to be; if it's not, then something is really wrong, and so the app quits. Assert can be very useful for debugging and unit testing, and also when you provide frameworks, to stop the users from doing "evil" things.
I can't really speak to NSAssert, but I imagine that it works similarly to C's assert().
assert() is used to enforce a semantic contract in your code. What does that mean, you ask?
Well, it's like you said: if you have a function that should never receive a -1, you can have assert() enforce that:
void gimme_positive_ints(int i) {
    assert(i > 0);
}
And now you'll see something like this in the error log (or STDERR):
Assertion i > 0 failed: file example.c, line 2
So not only does it safeguard against potentially bad inputs, but it logs them in a useful, standard way.
Oh, and at least in C assert() was a macro, so you could redefine assert() as a no-op in your release code. I don't know if that's the case with NSAssert (or even assert() any more), but it was pretty useful to compile out those checks.
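For reference, the standard mechanism in C is the NDEBUG macro: when it is defined before <assert.h> is included, assert() expands to nothing. A minimal sketch:
/* Compile release builds with -DNDEBUG (or #define NDEBUG before the include)
   and the check below disappears entirely from the generated code. */
#include <assert.h>

void gimme_positive_ints(int i) {
    assert(i > 0);   /* active in debug builds, compiled away under NDEBUG */
}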
NSAssert gives you more than just crashing the app. It tells you the class, method, and the line where the assertion occurred, and all the assertions can be easily deactivated using NS_BLOCK_ASSERTIONS, which makes it more suitable for debugging. Throwing an NSException, on the other hand, only crashes the app; it does not tell you the location of the exception, nor can it be disabled so simply.
The app crashes because an assertion also raises an exception, as the NSAssert documentation states:
When invoked, an assertion handler prints an error message that includes the method and class names (or the function name). It then raises an NSInternalInconsistencyException exception.
Apart from what everyone said above, the default behaviour of NSAssert() (unlike C’s assert()) is to throw an exception, which you can catch and handle. For instance, Xcode does this.
Just to clarify, as somebody mentioned but not fully explained, the reason for having and using asserts instead of just creating custom code (doing ifs and raising an exception for bad data, for instance) is that asserts SHOULD be disabled for production applications.
While developing and debugging, asserts are enabled for you to catch errors. The program will halt when an assert is evaluated as false.
But, when compiling for production, the compiler omits the assertion code and actually MAKES YOUR PROGRAM RUN FASTER. By then, hopefully, you have fixed all the bugs.
In case your program still has bugs while in production (when assertions are disabled and the program "skips over" the assertions), your program will probably end up crashing at some other point.
From NSAssert's help: "Assertions are disabled if the preprocessor macro NS_BLOCK_ASSERTIONS is defined."
So, just put the macro in your distribution target [only].
NSAssert (and its stdlib equivalent assert) are to detect programming errors during development. You should never have an assertion that fails in a production (released) application. So you might assert that you never pass a negative number to a method that requires a positive argument. If the assertion ever fails during testing, you have a bug. If, however, the value that's passed is entered by the user, you need to do proper validation of the input rather than relying on the assertion in production (you can set a #define for release builds that disables NSAssert*).
Assertions are commonly used to enforce the intended use of a particular method or piece of logic. Let's say you were writing a method which calculates the sum of two greater than zero integers. In order to make sure the method was always used as intended you would probably put an assert which tests that condition.
Short answer: They enforce that your code is used only as intended.
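In C, that sum example might look like the sketch below (the function name is made up for illustration):
#include <assert.h>

/* Both operands are required to be greater than zero; the assert documents
   and enforces that intended use. */
int sum_of_positives(int a, int b) {
    assert(a > 0 && b > 0);
    return a + b;
}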
It's worthwhile to point out that, aside from run-time checking, assert programming is an important facility used when you design your code by contract.
More info on the subject of assertion and design by contract can be found below:
Assertion (software development)
Design by contract
Programming With Assertions
Design by Contract, by Example [Paperback]
To fully answer the question: the point of any type of assert is to aid debugging. It is more valuable to catch errors at their source than to catch them in the debugger when they cause crashes.
For example, you may pass a value to a function that expects values in a certain range. The function may store the value for later use, and on that later use the application crashes. The call stack seen in this scenario would not show the source of the bad value. It's better to catch the bad value as it comes in, to find out who's passing the bad value and why.
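A sketch of that scenario in C (the names are invented; the point is that the assert fires at the call site, where the stack still shows who passed the bad value):
#include <assert.h>

static int stored_percent;   /* stored now, used much later and far away */

void set_percent(int percent) {
    /* Catch the bad value where it enters, not later when it finally causes a crash. */
    assert(percent >= 0 && percent <= 100);
    stored_percent = percent;
}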
NSAssert makes the app crash when the condition is not met. If the condition is met, the next statements will execute. Look at the example below:
I just created an app to test what the task of NSAssert is:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    [self testingFunction:2];
}

- (void)testingFunction:(int)anNum {
    // If anNum < 2, the app will crash here and the NSLog statement below
    // will not execute, so you will not see its string in the Xcode console window.
    NSAssert(anNum >= 2, @"number you enter is less than 2");
    // If anNum >= 2, the app will not crash and the statement below will execute.
    NSLog(@"This statement will execute when anNum >= 2");
}
With the value 2 passed in my code above, the app will not crash. The test cases are:
anNum >= 2 -> the app will not crash and you can see the log string "This statement will execute when anNum >= 2" in the output console window
anNum < 2 -> the app will crash and you will not see that log string

Why do conditional breakpoints slow my program down so much?

When I'm debugging something that goes wrong inside a loop, say on the 600th iteration, it can be a pain to have to break for every one. So I tried setting a conditional breakpoint, to only break if I = 600. That works, but now it takes almost a full minute to reach that point, where before it was almost instantaneous. What's going on, and is there any way to fix it?
When you hit a breakpoint, Windows stops the process and notifies the debugger. It has to switch contexts, evaluate the condition, decide that no, you don't want to be notified about it, restart the process and switch back. That can take a lot of processor cycles. If you're doing it in a tight loop, it'll take a couple orders of magnitude more processor cycles than one iteration of the loop takes.
If you're willing to mess with your code a little, there's a way to do conditional breakpoints without incurring all this overhead.
if <condition here> then
  asm int 3 end;
This is a simple assembly instruction that manually sends a breakpoint notification to the OS. Now you can evaluate your condition inside the program, without switching contexts. Just make sure to take it out again when you're done with it. If an int 3 goes off inside a program that's not connected to a debugger, it'll raise an exception.
It slows it down because every time you reach that point, it has to check your condition.
What I tend to do is to temporarily create another variable like this (in C but should be doable in Delphi).
int xyzzynum = 600;
while (true) {
    doSomething();
    if (--xyzzynum == 0)
        xyzzynum = xyzzynum;
}
then I put a non-conditional breakpoint on the "xyzzynum = xyzzynum;" line.
The program runs at full speed until it's been through the loop 600 times, because the debugger is just doing a normal breakpoint interrupt rather than checking conditions every time.
You can make the condition as complicated as you want.
Further to Mason's answer, you could make the int 3 assembler only be compiled in if the program is built with the debug conditional defined:
{$ifdef debug}
  {$message warn 'debug breakpoint present in code'}
  if <condition here> then
    asm int 3 end;
{$endif}
So, when you are debugging in the IDE, you have the debug conditional defined in the project options. When you build the final product for your customers (with your build script?), you wouldn't include that symbol, so it won't get compiled in.
I also included the $message compiler directive, so you will see a warning when you compile letting you know that the code is still there. If you do that everywhere you use int 3, you will then have a nice list of places which you can double click on to take you straight to the offending code.
Mason's explanations are quite good.
His code could be made a bit more secure by testing that you run under the debugger:
if (DebugHook <> 0) and <your specific condition here> then
  asm int 3 end;
This will not do anything when the application is running normally and will stop if it's running under the debugger (whether launched from the IDE or attached to the debugger).
And with boolean shortcut <your specific condition here> won't even be evaluated if you're not under the debugger.
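For what it's worth, a rough C equivalent of the same idea on Windows (assuming MSVC; IsDebuggerPresent and __debugbreak play the roles of DebugHook and int 3):
#include <windows.h>
#include <intrin.h>

void do_work(int i)
{
    /* Break only on the interesting iteration, and only when a debugger is attached. */
    if (IsDebuggerPresent() && i == 600)
        __debugbreak();

    /* ... the actual work ... */
}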
Conditional breakpoints in any debugger (I'm just surmising here) require the process to flip back and forth between your program and the debugger every time the breakpoint is hit. This process is time-consuming, but I do not think there is anything you can do about it.
Normally, conditional breakpoints work by inserting the appropriate break instruction into the code and then checking for the conditions you have specified. The check happens at every iteration, and it may well be that the way the check is implemented is responsible for the delay, since it's unlikely that the debugger compiles and inserts the complete check-and-breakpoint code into the existing code.
A way that you might be able to accelerate this is if you put the condition followed by an op with no side effect into the code directly and break on that op. Just remember to remove the condition and the op when you're done.
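For example, something along these lines (a sketch; the names are invented, and volatile keeps the otherwise pointless write from being optimized away):
static volatile int debug_anchor;

void process_item(int i)
{
    if (i == 600)
        debug_anchor = i;   /* put a plain, unconditional breakpoint on this line */

    /* ... the real work for item i ... */
}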

In Delphi: How to skip sections of code while debugging?

I often accidentally step into code that I'm not interested in while debugging in Delphi.
Let's start by saying that I know you can step over with F8, and that you can run to a certain line with F4.
Example:
function TMyClass.DoStuff(): Integer;
begin
  // do some stuff
  bla();
end;

procedure TMyClass.Foo();
begin
  if DoStuff() = 0 then // press F7 when entering this line
    beep;
end;
Example: I want to step into method DoStuff() by pressing F7, but instead of going there, I first end up in FastMM4.FastGetMem(), which is a massive blob of assembly code that obviously I'm not interested in at the moment.
There are several ways to go about this, and I don't like any of them:
Add a breakpoint on "bla" (almost useless if you only want to step into DoStuff on special occasions, like iteration 23498938);
Instead of pressing F7, manually move the cursor to "bla", and press F4 (Works for this simple example. In practice, it doesn't);
In case of FastMM: temporarily disable fastmm;
Is there any way to hint the IDE that I'm never interested into stepping into a certain block of code, or do I always have to set extra breakpoints or use F4 to try to avoid this?
I'm hoping for some magic compiler directive like {$NODEBUG BEGIN/END} or something like that.
In most cases being able to exclude entire units would be fine-grained enough for me, but being able to avoid certain methods or even lines of code would be even better.
Update: Maybe codegear should introduce something like skip-points (as opposed to break-points) :-)
There is a "magic nodebug switch". {$D-} will disable the generation of debug code. Place that at the top of your FastMM unit and you won't end up tracing into it. And if you do end up in a function you don't want to be in, SHIFT-F8 will get you out very quickly. (WARNING: Don't use SHIFT-F8 from inside an assembly-code routine that plays around with the stack. Unpredictable behavior can result. F4 to the bottom of it instead.)
If you're jumping into FastMM code, then there are memory operations occurring. The code you've shown doesn't have any memory operations, so your question is incomplete. I'll try to guess at what you meant.
When a subroutine has local variables of compiler-managed types (such as strings, interfaces, or dynamic arrays), the function prologue has non-trivial work to do. The prologue is also where reference counts of input parameters are adjusted. The debugger represents the prologue in the begin line of the function. If the current execution point is that line, and you "step into" it, you'll be taken to the RTL code for managing the special types. (I wouldn't expect FastMM to be involved there, either, but maybe things have changed from what I'm used to.) One easy thing to do in that situation is to "step over" the begin line instead of into it; use F8.
If you're really pressing F7 when entering your highlighted line, then you're doing it wrong. That's stepping into the begin line, not the line where DoStuff is called. So whether you get taken to the FastMM code has nothing to do with the implementation of DoStuff. To debug the call to DoStuff, the current execution point should already be the line with the call on it.
If you only want to debug DoStuff on iteration 23498938, then you can set a conditional breakpoint in that function. Click in the gutter to make a normal breakpoint, and then right-click it to display its properties. There you can define a condition that will be evaluated every time execution reaches that point. The debugger will only stop there when the condition is true. Press F8 to "step over" the DoStuff call, and if the condition is true, the debugger will stop there as though you'd pressed F7 instead.
You can toggle the "use debug DCUs" option to avoid stepping into most RTL and VCL units. I don't know whether FastMM is included in that set. The key difference is whether the DCUs you've linked to were compiled with debug information. The setting alters the library path to include or exclude the subdirectory where the debug DCUs are. I think you can configure the set of included or excluded debug directories so that a custom set of directories is added or removed based on the "debug DCUs" setting.
Back to breakpoints. You can set up breakpoint groups by assigning names to your breakpoints. You can use an advanced breakpoint to enable or disable a named group of breakpoints when you pass it. (Breakpoint groups can have just one breakpoint, if you want.) So, for example, if you only want to break at location X if you've also passed some other location Y in your program, you could set a disabled breakpoint at X and a non-breaking breakpoint at Y. Set the "enable groups" setting at Y to enable group X.
You can also take advantage of disabled breakpoints without automatic enabling and disabling. Your breakpoints appear in the "breakpoints" debugger window. If you're stepping through DoStuff and you decide you want to inspect bla this time, go to the breakpoint window and enable the breakpoint at bla. No need to navigate to bla's implementation to set the breakpoint there.
For more about advanced breakpoints, see Using Non-Breaking Breakpoints in Delphi, an article by Cary Jensen from a few years ago.
I may have missed something with your post, but with FastMM4 you can edit the FastMM4Options.Inc include file and remove the '.' from the following define:
From FastMM4Options.inc:
{Enable this option to suppress the generation of debug info for the
FastMM4.pas unit. This will prevent the integrated debugger from stepping into
the memory manager code.}
{$.define NoDebugInfo}
After recompiling (you might need a full build), the debugger will (should) no longer step into the FastMM code.
Use a precompiled non-debug DCU of FastMM
In the project dpr file, I use
uses
  {$IFNDEF DEBUG} FastMM4, {$ENDIF}
  ... // other units
to exclude FastMM4 during debug mode. Requires no change in FastMM4 so I don't have to remember to add {$D-} in FastMM when I change to a different version.
AFAIK, the debugger is only aware of the files in the Browsing Path, which you can modify in Options. So if you exclude the paths of modules you're not interested in debugging, that will give the effect of what you want to do.
One caveat: code completion also relies on the Browsing Path, so you might run into occasions where code completion falls short when you need it.
Although it isn't a direct answer to your question, you could modify your first suggested solution by putting a breakpoint at bla that is only enabled when a breakpoint at Foo is passed (or some other condition of your choosing, such as an iteration count). Then it will only break when you want it to.
As an aside, I am finding more and more that I am not halting execution at breakpoints, but rather dumping variable values or stack dumps to the message log. This allows more careful analysis than on-the-fly inspection of variables, etc. FWIW.
No. I don't believe there is a way to tell the debugger to never stop in a certain section of code. There is no magic directive.
The best you can do when you get into a routine you don't want to be in is to use Shift+F8 which will Run until the Return. Then do a F7 or F8 to exit the procedure.
Hmmm. Now I see Mason's answer. Learned something. Thanks, Mason. +1

Resources