run-time evaluation of values in DelphiWebScript

My delphi application runs scripts using JvInterpreter (from the Jedi project).
A feature I use is runtime evaluation of expressions.
Script Example:
[...]
ShowMessage(X_SomeName);
[...]
JvInterpreter doesn't know X_SomeName.
When X_SomeName's value is required, the scripter calls its OnGetValue callback.
This points to a function I handle. There I look up X_SomeName's value and return it.
Then JvInterpreter calls ShowMessage with the value I provided.
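For context, the handler looks roughly like this (the exact event signature is written from memory and may differ between JVCL versions; HostKnowsVariable and LookupHostValue are hypothetical host-side helpers):

procedure TMyForm.InterpreterGetValue(Sender: TObject; Identifier: string;
  var Value: Variant; Args: TJvInterpreterArgs; var Done: Boolean);
begin
  if HostKnowsVariable(Identifier) then   // hypothetical host-side check
  begin
    Value := LookupHostValue(Identifier); // hypothetical host-side lookup
    Done := True;                         // tell JvInterpreter the identifier was resolved
  end;
end;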
Now I consider switching to DelphiWebScript since it has a proper debug-interface and should also be faster than JvInterpreter.
Problem: I didn't find any obvious way to implement what JvInterpreter does with its OnGetValue/OnSetValue functions, though.
X_SomeName should be considered (and actually is, most of the time) a variable which is handled by the host application.
Any Ideas?
Thanks!

You can do that through the language extension mechanism, which has a FindUnknownName method that allows you to register symbols on the spot.
It is used in the asm lib module demo, and you can also check the new "AutoExternalValues" test case in ULanguageExtensionTests, which should be closer to what you're after.

Related

Why does Erlang have arity in its imports?

I find Erlang's module import with arity, /n where n is the number of arguments, rather bizarre.
In Java and various other languages you can do something like:
import static com.stuff.Blah.myFunction;
Which will import all overloaded Blah.myFunction(..) regardless of parameters.
Besides being explicit, I guess, why did the language designers decide this was a good idea? (I'm not trying to criticize the language... just curious.)
Does it have to do with code swapping?
Or does it have to do with hiding guard methods for recursion? If so, why not allow arity on export but not require it on import?
Why would I want to be that explicit? That is, import the two-argument function but not the three-argument version of myFunction?
You should be aware of what importing functions in Erlang really does. It is a pure textual transformation. If I do an -import(foo, [bar/1,baz/2]). it means that when I write a call like bar(5) or baz(a, 3) the compiler transforms these to foo:bar(5) and foo:baz(a, 3). That is all it does, nothing else. It doesn't check anything:
It doesn't check if the module foo contains the functions bar/1 or baz/2.
It doesn't even check if the module foo exists.
Really all it does is hide that you are calling a function in another module. That is why the recommendation from experienced Erlangers is "don't use it". It was a mistake. Unfortunately it is much easier to add stupid things than to get rid of them so we were never able to remove it.
"Does it have to do with code swapping?"
Yes, sort of. The unit of all code handling in Erlang is the module. So you compile modules, load modules, purge and delete modules. This means that there are no inter-module dependencies at all in the system, and the compiler makes no assumptions about other modules when it is compiling a module. No assumptions are made that the environment in which a module is compiled will be the same in which it is run. That is why it is at runtime that the system checks whether the function you are trying to call in another module exists, or even whether the module itself exists. That is why import is a purely textual transformation.
Erlang was originally developed in Prolog.
In Prolog, the arity adds meaning beyond what you would consider the 'arguments of a function' in a procedural programming language. But that model does not apply here.
The clauses 'married(X,Y).' and 'married(X,Y,Z).' imply different kinds of 'married' relationships, which can be declared as married/2 and married/3.
In procedural programming, 'add(a,b)' and 'add(a,b,c)' are both expected to add their arguments, just different numbers of them. That's not necessarily the case in Prolog, where the relationships 'a and b, added' and 'a, b and c, added' can mean something else entirely. Needless to say, Prolog allows you to declare 'add' to behave the way you would expect a function to. But it allows for more, and more available meaning means more need to control it.
And as in any module system, selecting what you want to expose to external clients makes sense: hence the declaration of arity.
Does it have to do with code swapping?
Kind of. The modules in Erlang are compiled separately (which is part of what allows code swapping), unlike Java classes, so the compiler doesn't know how many versions of the imported function with different arities exist. It could assume that all calls of a function with the given name come from the same module, of course, but the designers likely decided it wasn't particularly useful.
In fact, you rarely want to use imports at all, at least in my experience, just as you rarely use static imports in Java. Just write module:function, like Class.staticMethod.
Or does it have to do with hiding guard methods for recursion?
No, since not importing functions doesn't hide them in any way.

Can I have a custom function stop a Lua script in Delphi without exiting the application?

I have an application that periodically runs a Lua script. Within the script, on occasion, I call a custom registered Lua function to check some parameters and decide whether the Lua script should continue or exit. Ideally the logic should not be part of the script itself; I can think of ways to work around this within the Lua script, but I'm wondering if it is possible to stop the execution of a Lua script without ending the application.
I have a custom function written in Delphi and exposed to Lua scripts using Lua 5.1. The Lua script looks something like the one shown below, and it is started using luaL_loadbuffer.
io.write("Script starting\n");
--Custom Function
ExitIfFound();
io.write("Script continuing\n");
My custom function looks something like this; below is one of my attempts, where I tried to use lua_error to stop the script...
function ExitIfFound(LuaState: TLuaState): Integer;
var
s: AnsiString;
begin
s := 'ExitIfFound ending script, next Lua script line not called';
lua_pushstring(LuaState, PAnsiString(s));
lua_error(LuaState);
end;
When my custom function is called, I'm unsure how to exit the Lua script without any further evaluation. I have seen posts referring to Lua's use of setjmp and longjmp in C, but I'm curious how these translate to Delphi.
In the example above, when I use lua_error, the entire program crashes, with Windows showing its typical [luarun.exe has stopped working] dialog...
With all of this, I am still pretty new to integrating Lua with Delphi and hoping to find some cleaner options to explore.
There is no clean way to entirely abort a Lua script. The lua_error function is the correct way to signal an error. It is the caller's responsibility to catch the error and propagate it to the next caller.
If you cannot rely on the caller to cooperate, then you can try to exert more control by installing debug hooks; the host program is then consulted periodically while the script runs (a rough sketch follows). However, the script can still avoid exiting by using pcall to catch any errors.
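For illustration, a hook-based abort might look roughly like this. TLuaState is the type from your binding, and lua_sethook, LUA_MASKCOUNT and Plua_Debug are assumed to exist in your Lua 5.1 header translation (names vary between translations):

var
  StopScript: Boolean = False; // set by the host when the script should be aborted

procedure AbortHook(LuaState: TLuaState; ar: Plua_Debug); cdecl;
begin
  if StopScript then
  begin
    lua_pushstring(LuaState, 'script aborted by host');
    lua_error(LuaState); // unwinds back to lua_pcall, unless the script traps it with pcall
  end;
end;

// install before running the script, e.g. check every 100 VM instructions:
// lua_sethook(LuaState, @AbortHook, LUA_MASKCOUNT, 100);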
The crash in your program is probably not simply from setting an error. Rather, it's likely from using the wrong calling convention on your ExitIfFound function. It needs to be cdecl, but Delphi's default, if you don't specify anything else, is register. Using the wrong calling convention will give you unpredictable parameter values and can lead to a corrupted stack. If you type-cast the function or used the @ operator when you called lua_register, then you might have hidden the calling-convention mismatch from the compiler's type checker, which would have otherwise alerted you to the problem at compile time.
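For illustration, a corrected declaration might look like this; a sketch assuming a typical Lua 5.1 header translation where C functions are declared cdecl, and using the TLuaState and lua_* names from the question's binding:

function ExitIfFound(LuaState: TLuaState): Integer; cdecl;
var
  s: AnsiString;
begin
  s := 'ExitIfFound ending script, next Lua script line not called';
  lua_pushstring(LuaState, PAnsiChar(s)); // Lua 5.1 copies the string into its own memory
  Result := 0;                            // only to satisfy the compiler; lua_error below does not return
  lua_error(LuaState);
end;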
When compiled as C++, lua_error will use an exception instead of longjmp, but either way, the caller always catches the error. Exceptions are important, though, when your Delphi code uses compiler-managed types like string, or exception-sensitive constructs like try-finally blocks. In C mode, lua_error calls longjmp to jump directly to the point set by a previous call to setjmp. That jump will skip over any exception handlers, like the ones the Delphi compiler sets up to ensure the finally block runs and the string gets cleaned up.
A further headache is that since the compiler cleans up the string while exiting the function, the pointer you put on the Lua stack might not be valid by the time it's used; that depends on whether lua_pushstring makes a copy of its argument.

Creating macros using DWScript

I read this paragraph from the Delphi Tools Site
Changes since the last SVN update are:
Added support for FreePascal-like compile-time $INCLUDE “macros”:
%FILE% and %LINE% insert the current filename and line number into the source
%FUNCTION% inserts the current function name, or class.method name into the source
%DATE% and %TIME% allow inserting the compile date/time
Is there a way we can define macros in DWScript (other than these functions), just like people define macros in Excel (using VBScript), in a simple way where the name of the script will be the name of the function used later, without adding {$Include XXX} in the executed script?
N.B.: I know this can be done by saving the written script to a certain file, called 'functions' for example, then saving the added function under the name to be used (Add), so the user can then write Add(1,2) to get the result; but my boss at work wants something that looks like VBScript in Excel.
I'm not sure I understand the question, so I'll list various answers to various possible interpretations...
If you want to declare functions that are implicitly supported by the scripting engine without having to "{$include}" or "uses" them, you can declare them via a TdwsUnit component and attach it to the script component. If you don't have the "coExplicitUses" option set, they'll be available automatically, and you get design-time support in the IDE (a rough sketch of this approach follows at the end of this answer).
If you want to add internal functions (that are always there), use one of the RegisterInternalFunction overloads, you can check any of the "dwsXxxxFunctions.pas" units for examples. That's potentially more efficient, but also more cumbersome.
If you want to pre-process custom source-level macros in the source code (ala C's macros), you can use the filters functionality (check the HTML or JS filters as example of how a filter can be implemented).
If you want to react dynamically to "unknown" names, so you can declare them on the spot or bind them to something dynamically, you can use TdwsLanguageExtension.FindUnknownName; that's how the RTTI environment works, for instance (see TRTTIEnvironment in dwsRTTIConnector).
If you want to parse completely custom areas of code in a completely custom way, you can use language extensions too, override ReadInstr and check how asmLib & the JSLibModule do it to support "asm".
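For the first option above (TdwsUnit), a rough sketch of registering an Add function at runtime, so a script can simply call Add(1,2), might look like the following. The collection and event names (Functions.Add, Parameters.Add, OnEval, the TProgramInfo accessors) are written from memory and may differ between DWScript versions, and TMyHost is a hypothetical host class; check the DWScript samples for the exact API:

procedure TMyHost.RegisterAddFunction(aUnit: TdwsUnit);
var
  fn: TdwsFunction;
  p: TdwsParameter;
begin
  fn := aUnit.Functions.Add;
  fn.Name := 'Add';
  fn.ResultType := 'Integer';
  p := fn.Parameters.Add;  p.Name := 'a';  p.DataType := 'Integer';
  p := fn.Parameters.Add;  p.Name := 'b';  p.DataType := 'Integer';
  fn.OnEval := AddEval;
end;

procedure TMyHost.AddEval(info: TProgramInfo);
begin
  // called whenever a script evaluates Add(...)
  info.ResultAsInteger := info.ValueAsInteger['a'] + info.ValueAsInteger['b'];
end;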

How can I tell if a module is being called dynamically or statically?

How can I tell if a module is being called dynamically or statically?
If you are operating on z/OS, you can accomplish this, but it is non-trivial.
First, you must trace up the save area chain and use CSVQUERY to find out which program owns each save area. Every other program will be a Cobol runtime module, like IGZCPAC. Under IMS, CICS, TSO, et al, those modules might be different. That is the easy part.
Once you know who owns all the relevant save areas, you can use the OS LOADER / BINDER / LINKER utilities to discover what artifacts are in the same modules. This is the non-easy part.
The ONLY way is to look at the output of the linkage editor (IEWL) or the load module itself. If the module is being called DYNAMICALLY, then it will not exist in the main module; if it is being called STATICALLY, then it will be seen in the load module. Calling a working storage variable, containing a program name, does not make a DYNAMIC call. This type of call is known as IMPLICIT calling, as the name of the module is implied by the contents of the working storage variable. Calling a program name literal.
"Calling a working storage variable, containing a program name, does not make a DYNAMIC call."
Yes it does. Call variablename is always DYNAMIC.
Call 'literal' is dynamic or static according to the DYNAM/NODYNAM compiler option.
Caveat: This applies for IBM mainframe COBOL and I believe it is also part of the standard. It may not apply to other non-standard versions of COBOL.
For Micro Focus COBOL, static linking is controlled via the call convention on the call (bit 3) or via the LITLINK compiler directive.
When linking statically the case of the program-id/entry-point and the call itself is important, so you may want to ensure it is exact and use the CASE directive.
The reverse of the LITLINK directive is NOLITLINK, or a call convention without bit 3 set.
On Windows you can see the exported symbols in your .dll with the "dumpbin /exports" utility, and on Unix with the 'nm' utility.
An import .lib for a .dll created via "cbllink" can be generated with the '-K' command-line option of cbllink.
Look at the CALL statement. If the called program is given as a literal, then it's a static call. It's a dynamic call if the called program is determined at runtime:
* Static call
call "THEPROGRAM"
* Dynamic call
call wsProgramName

Replace function units

I am writing a unit test infrastructure for a large Delphi code base. I would like to redirect calls to pure functions such as SysUtils.FileExists to a "MockSysUtils.FileExists" instead.
Creating a SysUtils unit with the same interface is not appreciated by the compiler.
What I am thinking of is to hook in my mock function at runtime. Is this possible nowadays?
Any other suggestions?
Regards,
Peter
Replacing a function at runtime is difficult but usually technically possible. "All" you need to do is the following (a rough Delphi sketch follows the list):
take the address of the function in question
disassemble the first 5 bytes or so (to check for a RET instruction - very small routines may abut another routine, preventing you from replacing it)
change its page protection (with VirtualProtect) to be writable
rewrite the first 5 bytes with a JMP rel32 instruction (i.e. E9 <offset-to-your-func>)
implement your replacement function as normal, making sure it has the same arguments and calling convention as the function you are mocking
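A minimal 32-bit-only sketch of steps 3-5 (it omits the disassembly check from step 2, and RedirectFunction/MockFileExists are illustrative names; Windows and SysUtils are assumed to be in the uses clause):

procedure RedirectFunction(OrgProc, NewProc: Pointer);
var
  OldProtect: DWORD;
begin
  // make the first 5 bytes of the original routine writable
  if not VirtualProtect(OrgProc, 5, PAGE_EXECUTE_READWRITE, OldProtect) then
    RaiseLastOSError;
  // write "JMP rel32" (E9 xx xx xx xx) pointing at the replacement
  PByte(OrgProc)^ := $E9;
  PInteger(PAnsiChar(OrgProc) + 1)^ := PAnsiChar(NewProc) - (PAnsiChar(OrgProc) + 5);
  VirtualProtect(OrgProc, 5, OldProtect, OldProtect);
  FlushInstructionCache(GetCurrentProcess, OrgProc, 5);
end;

// in the test setup, assuming MockFileExists has exactly the same
// signature and calling convention as SysUtils.FileExists:
//   RedirectFunction(@SysUtils.FileExists, @MockFileExists);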
An easier approach would be to link against a different version of SysUtils.pas. That will require you to also recompile all the units in the RTL and VCL that depend on SysUtils.pas, but it is likely quite a bit easier than the function instrumentation approach described above.
The easiest approach is the language-level one, where either you don't directly rely on SysUtils at all (and so can switch at a higher level), or you modify the uses declaration to conditionally refer to a different unit.
You can do it with MadCodeHook. Use the HookCode function, give it the address of the function you want to replace and the address of the function you want to be called instead. It will give you back a function pointer that you can use for calling the original and for unhooking afterward. In essence, it implements the middle three steps of Barry's description.
I think MadCodeHook is free for personal use. If you're looking for something freer than that, you can try to find an old version of the Tnt Unicode controls. It used the same hooking technique to inject Unicode support into some of the VCL's code. You'll need an old version because more recent releases aren't free anymore. Find the OverwriteProcedure function in TntSystem.pas, which is also where you'll find examples of how to use it.
Code-hooking is nice because it doesn't require you to recompile the RTL and VCL, and it doesn't involve conditional compilation to control which functions are in scope. You can hook the code from your unit-test setup procedure, and the original code will never know the difference. It will think it's calling the original FileExists function (because it is), but when it gets there, it will immediately jump to your mocked version instead.
You could also just add a unit that only contains the functions you want to mock to the test unit's uses clause. Delphi will always use the function from the unit that is listed last. Unfortunately this would require you to change the unit you want to test.
Your Mock-Sysutils unit:
unit MockSysUtils;
interface
function FileExists(const FileName: string): Boolean;
implementation
function FileExists(const FileName: string): Boolean;
begin
  Result := True; // or whatever behaviour the test needs
end;
end.
The unit you want to test:
unit UnitToTest;
interface
uses
  SysUtils,
  MockSysUtils;
...
if FileExists(...) then
FileExists will now call the version from MockSysutils rather than from Sysutils.
Thanks,
Yes, it would be great to have, for example, a TSysUtils class that I could inherit from with my MockSysUtils. But that is not the case, and the code base is huge. It will be replaced bit by bit, but I wondered if there was a quick-start solution.
The first approach is OK for one function perhaps, but not in this case, I guess.
I will go for the second approach.
This is slightly out there, but here is another alternative.
When building your unit tests (and the main codebase to go with them), you could grep for all the functions you wish to replace and prefix them with the unit to use.
Instead of
fileexists(MyFilename);
you could grep for fileexists and replace it with
MockTests.fileexists(MyFileName);
If you did this at build time (using automated build tools), it would be easy to do and would give you the greatest flexibility. You could simply have a config file that listed all the functions to be replaced.
