Loading pre-compiled script in RemObjects Pascal Script (Delphi)

I am trying to load a pre-compiled RemObjects Pascal Script in Delphi at run-time.
However, when I try to load it, Delphi raises an exception: 'Cannot Import VALUE_TEAMCODE.' Value_TeamCode is a function in my Delphi app that I have already registered with Pascal Script.
Here is what I am doing. Rough pseudocode below - the actual code is split over multiple files. Also, the SetCompiled call below occurs much later in the app, when the script is required to run.
Note regarding code:
FPascalScript is a TPSScriptDebugger
...
//Register custom functions with Pascal Script
FuncsRegister;
//Load script
FPascalScript.Script.AddStrings(AContent);
//Compile script
FPascalScript.Compile;
//Get compiled script
FPascalScript.GetCompiled(sCompiledScript);
//Try and set script back in - ERROR Here 'Cannot Import VALUE_TEAMCODE'
FPascalScript.PascalScript.Debugger.SetCompiled(sCompiledScript);
...
Maybe I am going about this wrong. I am not sure if it is even possible to load a pre-compiled script.
I searched the RemObjects website wiki, but the Pascal Script help has been deleted.
I also searched various topics here on StackOverflow but none appear to be related to this issue.
Just another note. I already have scripts compiling and executing at run-time with no issues. I need to pre-compile for performance reasons.
Any help appreciated.
Update:
Current workaround is to have one script engine per script in my system. These engines then stay in memory after pre-compilation. This removes the 30 ms per-script compilation overhead I have otherwise. It also uses a bit more memory, but not enough to be a concern.
I would still rather use just the one script engine, though (hence the need to load a pre-compiled script).
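For reference, a minimal sketch of that workaround (the names and the FuncsRegisterOn helper are hypothetical; my real code differs):
...
//FEngines: TStringList whose Objects[] hold one pre-compiled engine per script
function TMyClass.GetEngine(const AName, AContent: string): TPSScriptDebugger;
var
  i: Integer;
begin
  i := FEngines.IndexOf(AName);
  if i >= 0 then
    Result := TPSScriptDebugger(FEngines.Objects[i]) //already compiled, reuse it
  else
  begin
    Result := TPSScriptDebugger.Create(nil);
    FuncsRegisterOn(Result); //hypothetical: registers the custom functions on this engine
    Result.Script.Text := AContent;
    Result.Compile;          //pay the ~30 ms once, then keep the engine
    FEngines.AddObject(AName, Result);
  end;
end;
...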

Thanks to a response over on the RemObjects Connect beta forum I have a solution.
(For post see http://connect.remobjects.com/discussion/comment/13540#Comment_13540)
Thanks go to poster vovanl.
I had to import my functions via the OnExecImport event as follows:
...
FPascalScript.OnExecImport := OnExecImport;
FPascalScript.SetCompiled(sCompiledScript);
...
procedure TMyClass.OnExecImport(Sender: TObject; se: TPSExec; x: TPSRuntimeClassImporter);
begin
  se.RegisterDelphiFunction(@Value_TeamCode, 'Value_TeamCode', cdRegister);
end;
...
It appears SetCompiled clears all existing registrations, so you MUST hook OnExecImport to re-register functions, procedures, methods, etc.
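For completeness, here is roughly how the two halves pair up. This is only a sketch - the signature of Value_TeamCode and the body of FuncsRegister are my assumptions, since neither is shown above:
...
//Hypothetical signature - the real Value_TeamCode may differ
function Value_TeamCode(const ATeam: string): Integer;
begin
  Result := 0; //look the team code up here
end;

//Compile-time half: gives the compiler the prototype so scripts can call it
procedure TMyClass.FuncsRegister;
begin
  FPascalScript.AddFunction(@Value_TeamCode,
    'function Value_TeamCode(const ATeam: string): Integer;');
end;
...
The compile-time half is what lets Compile succeed; the OnExecImport half is what SetCompiled needs, presumably because the compiled byte code carries only the import names, not the addresses - which would explain why the error says it cannot import the name.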
Note that loading a pre-compiled script (i.e. swapping one script out for another) does appear to add some extra time overhead. I have found my initial workaround is in fact around 6 times faster.

Rather than the how of this, I am going to counter with: why?
Compiled scripts will likely be version-bound, probably even platform/target-bound - and they usually compile fast enough that you never notice the time hit. Are you really using the scripts intensively enough that compile time is an issue?
Sometimes the best answer is "do you really need to do this at all?"

Related

Dart: Possible to exchange source code on the fly in a running system?

In this article it says: "The Dart VM reads and executes source code, which means there is no compile step between edit and run.". Does that mean that you can exchange source-code on the fly in a running Dart system like in Erlang? Maybe the compiler is removed from the runtime system and then this is no longer possible. So that's why I'm asking.
Dart is run "natively" only in Dartium, which is a flavour of Chrome with the DartVM. When you develop an application you still need to compile it to JavaScript. This way you get a fast development lifecycle, and in the end you compile the code to JS. Because it's compiled code, there is a lot more room for the compiler to run optimisations on the code. So from my perspective, the compiler is still there, and I don't think you would be able to replace code at runtime.
You can send around source code and run it, but it would need to be in a separate isolate. Isolates do have some relationship to Erlang concepts.
The Dart VM doesn't support hot swapping (called LiveEdit in V8). However, based on mailing-list discussions, it sounds like this is something the authors do want to support in the future.
However, as the others have mentioned, it is possible to dynamically load code into another isolate.

Delphi Code Completion Fails

We use Embarcadero Delphi 2010 and recently a change was made to one of the units of a medium-sized project, causing code completion to stop working completely -- but only inside this project, it still works fine in other projects. Puzzled, I searched the interwebs for clues on what exactly could make this happen, but my search wasn't too successful.
From what I gathered, it looks like the IDE has a few parsers/compilers that work completely separately from one another, which makes it entirely possible for the faster code-completion compiler to fail where the main compiler would not. Which is exactly what's happening in my project.
My question: Is there a way to find out WHERE exactly the Code Insight/Code Completion compiler is failing? Does the IDE keep a log of on-the-fly parsing/compilations anywhere?
Is there a way to find out WHERE exactly the Code Insight/Code Completion compiler is failing?
Not readily, not without debugging the IDE.
Does the IDE keep a log of on-the-fly parsing/compilations anywhere?
No.
I suggest that you install Andy Hausladen's IDEFixPack. If that does not help then use your revision control to isolate the code change that causes the problem. And find a different way to write that code that happens not to bork code completion. Trial and error is likely to be the most productive method here, much as I hate to say that.
I recently had the same problem using Delphi 10.2. After a lot of research I found that I had inadvertently declared a variable in a type section, with an end; following on the next line. Removing the errors restored code completion. So I would recommend combing the interface section for an error, or restoring a backup from the directory history.
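For illustration, a hypothetical reconstruction of the kind of declaration described (the real code was not posted):
...
type
  TSettings = class(TObject)
  private
    FCount: Integer;
  end;

  Buffer: Integer;  //variable accidentally declared in the type section
  end;              //stray end; on the next line
...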

Is there a way to force Delphi to leave unused code in the executable?

The Delphi Linker strips out any functions that aren't actually used, thus reducing the executable size.
Is there any way to stop the Delphi Linker doing this? e.g. a compiler switch?
To those wondering "why?"...
I am trying to use the delphi-code-coverage tool, but it only reports on code that is actually compiled into the executable. Which makes it not very useful. If I could get Delphi to include all code, I'm hoping I could then get some useful code coverage statistics.
I should mention that I have DUnit tests in a separate project to my application. So even though the code is "unused" in the DUnit project, it is used in the actual application.
See here for more details.
Your code-coverage tool is measuring the wrong thing. It works off the map file instead of the source code, so it will only report on live code instead of on all code in a project. The linker already filters out the dead code, and in a blank unit-test project, all code is dead code. There is no way to tell Delphi to include dead code in an EXE.
Run the code-coverage tool on your application to get a list of functions that need testing. Then, write code in your unit-test project that mentions all those functions. (It doesn't have to call everything yet, and it certainly doesn't have to test it all. We're just making sure it's linked into the unit-test project.) Now the coverage tool can get an accurate measurement of what's been executed and what hasn't.
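A sketch of such a "mention everything" unit, with hypothetical unit and routine names:
...
unit ForceLink;
//Taking each routine's address is enough to keep the linker from
//stripping it, so the coverage tool can see it - nothing is called.

interface

implementation

uses
  CustomerRules, PricingRules; //hypothetical application units

procedure TouchAll;
var
  p: Pointer;
begin
  p := @ValidateCustomer;   //hypothetical routines needing coverage
  p := @CalculateDiscount;
  if p = nil then ;         //silence the "value never used" hint
end;

initialization
  TouchAll;
end.
...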

How can I load a package and keep the debugger working?

I'm using TJvPluginManager in the JVCL to create and load BPL-based plugins for my program. Problem is, one of the plugins isn't loading properly, and I can't debug it. Every time I try to trace into the loading sequence, it gets as far as the LoadLibrary API call, and then the debugger seems to forget what it's there for. It completely loses the ability to associate program code with source lines, give meaningful data in a call stack, or display local variables. It will still stop at breakpoints, but it breaks to the CPU window, with all the inline source code stripped out.
This happens on Delphi 2007 and 2009, and it's driving me nuts. Does anyone know how to load a plugin without it breaking the debugger? Does anyone even know why it's breaking it in the first place?
NOTE: I'm not looking for alternative methods of debugging. I know all about tracing and logging and all the rest. What I want is to understand what's going wrong and how to fix it. Surely I'm not the only person who's ever used TJvPluginManager?
Not quite the answer to your question: Have you tried to debug the package project, by setting the host application and putting a breakpoint into the package's startup code?
I've found Ray Kanopka's (Raize) CodeSite to be invaluable for debugging in situations where the integrated debugger is acting up. Thinking about the things I want to monitor using CodeSite actually helps me focus on what's important - it enforces good habits.
Another alternative to Codesite is Overseer which is part of the nexus project, but stands alone so does not require you to use their framework. Codesite is by far the better option, but in a pinch Overseer would work just as well.
I found that using packages for plugins can be problematic, and many years ago I switched to a completely COM-based implementation for plugins and never had any problems. There are other advantages to COM-based plugins: they don't require Delphi to write, they do not need to be recompiled when the main app switches to a new version of the compiler (my plugins compiled with Delphi 5 still run fine against the main application compiled in Delphi 2009!), and it is easier to write test applications to assist in debugging.
The only side effect I notice, is that shared code ends up in both executables and the plugins require registration into the registry.
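For the curious, a minimal sketch of the kind of compiler-independent contract this approach relies on (the interface and GUID here are hypothetical):
...
type
  IAppPlugin = interface(IUnknown)
    ['{8A4D7E60-2B3F-4C59-9D21-6E5B0A7C3F18}']
    function GetName: WideString; safecall;
    procedure Execute; safecall;
  end;
...
Because only binary-stable types cross the boundary (interfaces, WideString, safecall), a plugin compiled with Delphi 5 can indeed serve a host compiled with Delphi 2009.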
Hmmmm... This is a stupid question, but I have to ask: does the initialization function have the EXACT same declaration syntax as the other plugins that work? (From your question, I deduced you have made some others that work.)
Check your dependencies. Make sure each unit is compiled into one package only. Whenever a package needs to reference a unit from another package, use the requires clause to do so. Watch for compiler warnings about implicitly linked units.
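A sketch of how that looks in a package source file (hypothetical package and unit names):
...
package MyPlugin;

requires
  rtl,
  vcl,
  AppCore;   //hypothetical shared package that owns the common units

contains
  PluginMain in 'PluginMain.pas';  //only units unique to this plugin

end.
...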

Delphi: How to organize source code to increase compiler performance?

I'm working on a large Delphi 6 project with quite a lot of dependencies. It takes several minutes to compile the whole project. Recompilation after a few changes is sometimes much longer - so much so that it is quicker to terminate Delphi, erase all dcu files and recompile everything.
Does anyone know a way to identify what makes the compiler slower and slower? Any tips on how to organize the code to improve compiler performance?
I have already tried following things:
Explicitly include most of the units in the dpr instead of relying on the search path: It didn't improve anything.
Use the command line compiler dcc32: it isn't faster.
Try to see what the compiler does (using Process Explorer from SysInternals): apparently it spends most of its time in a function called 'KibitzGetOverloads'. But I can't do anything with this information...
EDIT, Summary of the answers until now:
The answer that worked best in my case:
The function "Clean unused units references" from cnpack. It almost automatically cleaned more than 1000 references, making a "cold" compilation about twice faster. ("cold" compilation = erase all dcu files before compiling). It gets the reference list from the compiler. So if you have some {$IFDEF } check that all your configurations still compile.
The next thing I would like to try:
Refactoring the unit references manually (possibly using an abstract class),
but it is much more work, since I first need to identify where the problems are. Some tools that might help:
GExperts adds a project dependencies browser to the Delphi IDE (but unfortunately it cannot show the size of each branch).
Delphi Unit Dependency Viewer V1.0 does about the same thing but without Delphi. It can calculate some simple statistics (which unit is the most referenced, ...).
Icarus, which is referenced in a link in one of the answers.
Things that didn't change anything in my case:
Putting every file from my program and all components in one folder without subfolders.
Defragmenting the disk (I tried with a ramdisk)
Using a ramdisk for the code source and output folders.
Turning off the live scanning antivirus
Listing all the units in the dpr file instead of relying on the search path.
Using the command line compiler dcc32 or ecc32.
Things that didn't apply to my case:
Avoiding having dependencies on network shares.
Using DelphiSpeedUp, because I already had it.
Using a single folder for all dcu (I always do it)
Things that I didn't try:
Upgrading to another Delphi version.
Using dcc32speed.exe
Using a solid-state drive (I didn't try it, but I did try a ramdisk where I put all the source code. Maybe I should have installed Delphi on the ramdisk too).
Some things that could slow down the compiler
Redundant units in your uses clause. See this question for a link to CnPack.
Not explicitly adding units to your project file. You already seem to have covered that.
Changed compiler settings, most notably including TD32 debug info.
Try to get rid of unused units in your uses clause and see if it makes a difference.
Using Delphi 7 and 2009: last week I went from almost 2 minutes of compiling, plus another 45 seconds between hitting F9 and getting the main form of my app, down to 20 seconds compiling and running. This thing had driven me crazy for about 6 months and nothing I tried seemed to work. Using Filemon from SysInternals, I realized that every unit (mostly components) the compiler required was searched for in every folder in the Search Path; yes, this produces a LOT of FileOpen, FileExists and FileNotFound operations, etc. What I did was put every DCU, DFM, RES, etc. from components all in a single folder, and keep just this folder in the search path, plus the couple of other folders required by the project; the results were amazing. Another problem prior to the fix was debugging: it took almost 40 seconds on each F7 or F8 key press while debugging; this has been fixed too. Hope this info can help you. Greetings from Isla de Margarita, Venezuela. Excuse my English, if any errors ;)
Check whether there are any paths in the search path that aren't on your local machine.
I.e. don't link to binaries on network shares, and check that the search path isn't referencing any network shares.
I haven't seen the compiler get slower over time, but it's been a long time since we used Delphi 6.
It seems to be generally agreed upon in the Delphi community that, if you don't want to upgrade to the latest and greatest (Delphi 2007 or 2009), then Delphi 7 is the best/fastest/most stable. You might consider upgrading.
KibitzGetOverloads sounds like something from the kibitz compiler -- the "background" compiler that gives you code-completion, background error highlighting, code tooltips, etc. Sounds like you'd be better off checking the call stack of the command-line compiler, not the IDE; you'd get something more helpful.
I have never found compiles to be faster after deleting the DCUs. DCUs are there to make the build incremental, therefore faster. If you're seeing faster compiles after deleting all DCUs, check your hardware. Have you defragged your hard disk lately? How much free space do you have on the drive?
Have you set a single folder for the DCUs? If not, they will be scattered all over.
Put all the units and their implicitly called units (except installed components from the library path) in the dpr. To be sure you did not miss any, empty your search path; it should still compile.
After reducing the search path, you can try to reduce your library path by installing your components into fewer folders.
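A sketch of what the .dpr then looks like (hypothetical unit names and paths):
...
program MyApp;

uses
  Forms,
  MainForm in 'src\MainForm.pas' {frmMain},
  OrderLogic in 'src\OrderLogic.pas',
  OrderData in 'src\db\OrderData.pas';
  //every project unit listed with an explicit path, so the compiler
  //never has to probe the search path for it

begin
  Application.Initialize;
  Application.CreateForm(TfrmMain, frmMain);
  Application.Run;
end.
...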
Although only partly relevant to your exact question, I hear that using a solid-state drive vastly improves compile times with Delphi - Nick Hodges said this himself on the Delphi Podcast a couple of weeks ago.
Brian
You can automatically get rid of unnecessary unit references, which is a very effective optimization for compiling speed.
In your situation, dividing your project into packages can improve compiling speed. That way, each recompilation only regenerates the modified package(s), not a single massive binary. Working with packages can also make deploying updates of your project easier.
Turn off your live scanning antivirus
We had the same (or similar) problem.
One of our packages had a compilation time of about 12 minutes.
After the changes, we are now down to 32 seconds.
After many tests we found that the "problematic situation" was the following:
In a single package:
Unit A uses a large number of units: U1, U2, U3, U4, ... U100 (uses in the interface section), all in the same package. This is an important unit that centralizes all the initialization work.
All units of the package, U1, U2, U3, ..., U100, use unit A (uses in the implementation section).
This "circular reference" does not give compilation errors, because the uses clauses are in different sections, but it caused a very long compile time.
SOLUTION:
Eliminate the reference to unit A in each of the units U1, U2, U3, ..., U100.
Now unit A still uses the large number of units U1, U2, ..., U100, but the units U1, U2, ..., U100 no longer use unit A.
After this change the compile time dropped drastically.
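Schematically (hypothetical unit names; a sketch of the shape of the change, not compilable as a single file - where the U-units still need something from A, I assume it gets moved to a smaller shared unit):
...
//Before (slow):
unit A;
interface
uses
  U1, U2, {...} U100;   //A pulls in the whole package...

unit U1;
interface
//...
implementation
uses
  A;                    //...and every unit pulls A back in

//After (fast): the "uses A" lines in U1..U100 are removed; A keeps
//its interface uses of U1..U100, and the cycle is gone.
...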
If you have a similar situation, you can try this.
Excuse my bad English.
Greetings.
Neftalí -Germán Estévez-
I had the same problem, and I can come up with (2) reasons it affected me.
Circular references. The gentleman who stated that one was correct. I would have certain LARGE projects that would compile fast, and SMALL projects that compiled slowly. I could not figure it out until I restructured the code, and then I got the faster compile speeds. Also: lots of small units. It's easy to build monolithic units, but there are many penalties for it.
I've heard it a 1000 times, develop on a slow machine like your users might be using. Hey, that's for the testing department. I can't waste time with compiling, Delphi load speeds, packages, etc. I went out and bought a "GAMERS" computer (WOW) with the Solid State Drives (as mentioned earlier), 12GB RAM, OVERCLOCKED "i7" Intel chip, triple video cards (linked), all on Vista64 (Vista is not bad once it is finally running with all installed parts). It was a real pain to get it all set up. But, I am not waiting anymore on my computer. Pure compile speed, load speed, plus a new fresh machine without all of the crap that was installed on the last one over the last 2 years. I even unloaded DelphiSpeedUp. Did not need it. And I don't need to turn off AntiVirus, since I did that one as well, and got penalized with the internet crap. So AntiVirus stays on. Pure and simple, get a BALLS OUT machine. Your time is worth more than what you will spend on a new computer.
Try to install a ram disk and set your dcu output path to point there. This more than halved my compilation time with Delphi 2007 on top of DelphiSpeedUp.
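For the command-line compiler, assuming the classic -N unit-output switch and a ramdisk mounted as R:, the equivalent would be something like:
...
dcc32 -B -NR:\dcu MyProject.dpr
...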
The compiler will only compile units that have changed. If you have changed the code in the interface section, all units that depend on the changed unit are recompiled. If only code in the implementation section is changed, the compiler will compile only that unit, but presumably link all the modules. This implies a good design of interfaces up front, but if you restructure the code to restrict changes to the implementation section, compile times might drop. I have no idea by how much. This fact is mentioned in the Delphi help files under "Multiple and indirect unit references" in Delphi 7 "Using Delphi".
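A small illustration of that rule (hypothetical unit):
...
unit Prices;

interface

function NetPrice(Gross: Currency): Currency; //editing this line recompiles
                                              //every unit that uses Prices

implementation

function NetPrice(Gross: Currency): Currency;
begin
  Result := Gross / 1.19; //editing only this body recompiles just Prices
end;

end.
...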
Do not compile on network drives. Seek time is dramatically worse.
Consider pointing your dcu ("unit output") directory to a ramdrive.
Limit the number of include/unit directories.
Try to avoid minor circular references that the compiler still accepts, especially for large units (e.g. generated ORM units for your OPF). It might cause large units to be compiled twice. (Does Delphi allow minor mutual circular references, or is that an FPC-only feature?)
I never tried it, but hardcoding all files with full/relative paths in the central .dpr might also help (with a script to regenerate/update it?). (You mention that above, but was it with the xx in '\path\yyy' notation?)
Other long shots:
Use Kylix (file/dir I/O under Linux is dramatically better in my experience (though that is from FPC experience)). Maybe we need a reversed cross-kylix :-)
Use a separate (windows) build machine, and tweak NTFS over the registry to be less "safe". (which you don't care for, since everything is a revision system to begin with). Afaik these options can only be done global for all filesystems, hence the separate system. Throw in a raid array or Raptor too.
Forget solid state. Nice buzz atm, but the high write ratio will kill it eventually (both life and performance when it gets fuller and can't optimally allocate anymore), and you need the expensive intel ones to beat two $75 HD's in RAID.
P.s. Sorry for the FPC references. I do both, and I sometimes don't know anymore what belongs to what.
What I do is always make sure to have very few directories in the library path, holding most of the components and static code. I also make sure that NO source code is available in the library path, only .dcu/.res etc. Only the browse path has the source code; special circumstances are handled through the search path for the project.
Just limit what you compile in any situation.
A few years later I am struggling again with increasing compiling times. I am currently using Delphi XE4 and I am at a point where I absolutely need to refactor the units references. I thought about a new way to identify where are the problems:
I’m using Process Monitor from Microsoft/SysInternals to monitor the compiler:
I start Process Monitor with a filter to show only dcc32.exe (or bds.exe when working from the IDE).
I build my project from the command line.
At the end I look at the CreateFile operations in the log of Process Monitor.
For each unit there will be an entry for the .PAS file (when the compiler starts working on this unit) and one for the .DCU file (when the compiler is completely done with this unit). By working on the log with a text editor and/or with Excel, I can extract this kind of information:
A kind of “tree”, where you recursively see in which order the units have been compiled.
For each unit the delay between “.PAS file opened“ and “.DCU file written”.
Then I try to interpret the results to find places where doing some refactoring would speed the compile time. It is not so easy, but I’m getting some encouraging results.
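In case it helps someone, here is a rough console-mode sketch of the log analysis. This is entirely my own reconstruction: it assumes a CSV export from Process Monitor with the default columns "Time of Day","Process Name","PID","Operation","Path", and uses naive CSV handling.
...
program UnitTimes;

{$APPTYPE CONSOLE}

//Pairs each unit's ".pas opened" with its ".dcu written" from a
//Process Monitor CSV export, as described above. Usage:
//  UnitTimes procmon-log.csv > times.csv

uses
  SysUtils, Classes;

function Field(const Line: string; Index: Integer): string;
var
  Parts: TStringList;
begin
  Result := '';
  Parts := TStringList.Create;
  try
    Parts.StrictDelimiter := True; //keep quoted commas together
    Parts.Delimiter := ',';
    Parts.QuoteChar := '"';
    Parts.DelimitedText := Line;
    if Index < Parts.Count then
      Result := Parts[Index];
  finally
    Parts.Free;
  end;
end;

var
  Log, Opened: TStringList;
  i: Integer;
  Path, UnitName, Ext: string;
begin
  Log := TStringList.Create;
  Opened := TStringList.Create; //unit name -> time its .pas was opened
  try
    Log.LoadFromFile(ParamStr(1));
    for i := 1 to Log.Count - 1 do //row 0 is the header
    begin
      if Field(Log[i], 3) <> 'CreateFile' then
        Continue;
      Path := Field(Log[i], 4);
      Ext := LowerCase(ExtractFileExt(Path));
      UnitName := ChangeFileExt(ExtractFileName(Path), '');
      if Ext = '.pas' then
        Opened.Values[UnitName] := Field(Log[i], 0)
      else if Ext = '.dcu' then
        //unit;pas-opened;dcu-written - ready to paste into Excel
        Writeln(UnitName, ';', Opened.Values[UnitName], ';', Field(Log[i], 0));
    end;
  finally
    Opened.Free;
    Log.Free;
  end;
end.
...
The semicolon-separated output can be pasted straight into Excel to compute the .PAS-to-.DCU delay per unit.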
