At one point I had a nice little compression utility that smashed my Delphi compiled EXE's to a smaller download size, but now I can't find it. Any recommendations?
Also, are there any downsides to using these kinds of utilities? (I mainly use them to shorten download times for rural / dial-up users).
Related question: Are there any downsides to using UPX to compress a Windows executable?
Years ago I looked into compressing my executable to make the download smaller.
What I ended up doing, and what I recommend for you, is to use an installer program like Inno Setup instead. Not only does it create a single EXE that will install/uninstall your program, but it also compresses that EXE practically as much as a separate compressor would do to your executable alone.
When the program is installed it gets decompressed, so it never appears to be a virus and does not increase load times.
So I get the benefits of smaller download size, and a professional-looking installation script at the same time.
p.s. Inno Setup is free.
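For illustration, a minimal Inno Setup script might look like this (the app name and file names are hypothetical; Compression and SolidCompression are standard script directives):

[Setup]
AppName=MyApp
AppVerName=MyApp 1.0
DefaultDirName={pf}\MyApp
Compression=lzma
SolidCompression=yes

[Files]
Source: "MyApp.exe"; DestDir: "{app}"

With solid LZMA compression the resulting setup EXE is about as small as a stand-alone EXE compressor would get you, which is the point made above.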
The recommendation is that you don't use one:
EXE compressors can make your application look like a virus (self-modifying code)
gzip/zip are just as effective at compressing and don't tinker with your app
EXE compressors increase your app's load time (unless you're just talking about the setup program, which is a different matter)
This crazy-looking site brings up an argument I heard in the distant past (whether it's still true today I'm not sure; modern packers probably use a different strategy). Note that the article references Win32! :)
http://topic.csdn.net/t/20000408/08/6785.html
Modern multitasking OSes such as Windows 95/98 and NT use what is called a "virtual memory" system. When programs start, all of their code is not loaded into memory immediately upon startup, as was the case with DOS programs. Instead, only portions of the code being actively executed are stored into memory. For example, say your program has a Print option on its menu, and code behind it that handles the printing. This code will only be loaded into memory once the Print feature is first selected by the user. And if after the code is loaded into memory the Print feature is not used for a while, the system will "discard" the code, freeing the memory it occupied, if another application desperately needs memory. This is part of a process called "paging" and is completely transparent to the program.

Another way paging under Win32 conserves memory is it causes multiple instances of a program (or DLL) to share the same memory for code. In other words, under normal circumstances there is no real difference in the amount of physical memory allocated for code between starting 100 instances of a program and starting one instance.

If all Win32 programs behaved like DOS programs, loading everything into memory and keeping it there until the program terminated and also not sharing any memory between multiple instances, you can probably imagine how quickly physical memory could run out on systems with a limited amount, causing disk swapping to start.

Yet this is precisely what current Win32 EXE compressors do to your EXE's/DLL's! They completely go against the OS's paging system by decompressing all code into memory and keeping it there until termination. And because code isn't stored in a "raw" format in the EXE file (i.e. the same way it is stored in memory), the OS is unable to share code between multiple instances.
I don't know of any that are specifically for Delphi, but UPX is very popular for this sort of thing. The only downside is that the executable has to be decompressed when it's launched, and that can eat some time. It seems to be very fast for sanely sized executables, though.
The one you are probably thinking of is ASPack - it is an EXE compressor written in Delphi, but it will compress any EXE. It might do extra well on Delphi EXEs, though. I agree with the other answers that you should not use an EXE compressor just to save on download times. There may be specific situations where an EXE compressor is a good idea, but generally it is not.
Instead use a good installation builder, especially if you can find one that uses 7zip compression. I know InstallAware uses 7zip internally for maximum compression. Depending on which versions of Delphi you own you may have an InstallAware license too.
If nothing else you can build a self-extracting archive with basic install behavior using 7zip for free. The SFX module for installers is a separate download.
Use UPX with the lzma option for max compression.
upx --lzma yourfile.exe
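If you want to verify or undo the compression later, UPX's standard flags cover that too:

upx -t yourfile.exe
upx -d yourfile.exe

(-t tests the compressed file, -d decompresses it back to the original.)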
The main inconvenience of a compressed EXE or DLL is that the OS cannot share the code amongst multiple instances.
So you're wasting memory, you have to decompress each time you start an instance, and you exhibit virus-like behavior, all without even a download advantage over a compressed installer.
The only case where it's a win is when launching directly from a network drive.
I believe terminal servers (like Citrix) will use the same memory for your application binary if it is uncompressed, meaning a compressed EXE could spell a small disaster in a Citrix environment.
UPX should work, although it's not Delphi specific.
I use PEtite: http://un4seen.com/petite/
I would also vote for UPX. Besides the downsides which were mentioned, it also protects from basic reverse engineering and those lame "resource hacker" tools - which, by the way, are plentiful, and most of which fail to open a compressed executable.
I asked a question about the cons of using UPX on Delphi executables here on SO a while back, and I got some great responses.
Are there any downsides to using UPX to compress a Windows executable?
You can use PECompact, since people can't decrypt it easily, and as the tests show (on its main page, just scroll down a bit) it does better than ASPack or UPX. I've used it on my previous Delphi projects.
Related
I understand from other posts here that "IMAGE_FILE_LARGE_ADDRESS_AWARE" may work to effectively expand memory availability in e.g. Delphi 2007.
I can't get this to work in Delphi 6 - is this indeed the case, or should it work? Or is there an alternative command that does the same thing?
If not, I may need to migrate to a later version of Delphi. Then, does anyone know what the most recent version of Delphi is that would easily allow me to migrate my existing code (ideally, my existing code, which is fairly simple Turbo Pascal-type code, would just work as is) AND would support the "IMAGE_FILE_LARGE_ADDRESS_AWARE" 'trick' to expand memory?
Many thanks!
Remco
You can apply the IMAGE_FILE_LARGE_ADDRESS_AWARE PE flag to a Delphi 6 application (a minimal example of setting it follows the list below), but you must beware of the following issues:
The default memory manager for Delphi 6, the Borland memory manager, does not support memory allocations with addresses above 2GB. You must replace the memory manager with one that supports large addresses. For instance FastMM.
Your code may well contain pointer truncation bugs that will need to be found and fixed.
The same goes for any third party software that you use. This includes the Borland RTL and VCL libraries. I did not encounter many problems with these libraries, but it may be that your program uses different parts of the runtime libraries that have pointer truncation bugs.
In order to stress test your program under large address conditions you should turn on top down memory allocation. Do not be surprised if your anti-malware software (or indeed other system level software) has to be disabled whilst you operate in top down memory allocation mode. This type of software is notoriously poor at operating in top down memory allocation mode.
Finally, it is worth pointing out that large address aware cannot solve all out of memory problems. All it does is open up the top half of the 32 bit address space. Your program might require even more address space than that. In which case you'd need to either re-design your program, or move to a 64 bit compiler.
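For reference, a minimal sketch of setting the flag from the project file; $SetPEFlags is available from Delphi 6 onwards and IMAGE_FILE_LARGE_ADDRESS_AWARE is declared in the Windows unit (the project name is hypothetical):

program MyLargeApp;

uses
  Windows; // declares IMAGE_FILE_LARGE_ADDRESS_AWARE

// Stamps the flag into the PE header at compile time; it must appear
// after the uses clause so the constant is in scope.
{$SETPEFLAGS IMAGE_FILE_LARGE_ADDRESS_AWARE}

begin
  // your application code
end.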
And if so, how? I'm talking about this 4GB Patch.
On the face of it, it seems like a pretty nifty idea: on Windows, each 32-bit application normally only has access to 2GB of address space, but if you have 64-bit Windows, you can enable a little flag to allow a 32-bit application to access the full 4GB. The page gives some examples of applications that might benefit from it.
HOWEVER, most applications seem to assume that memory allocation is always successful. Some applications do check if allocations are successful, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications are from extremely rare to essentially non-existent in the realm of desktop computers. With this in mind, it would seem reasonable to assume that any such application would be programmed to not exceed 2GB memory usage under normal conditions, and those few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
So, have I made some incorrect assumptions? If not, how does this tool help in practice? I don't see how it could, yet I see quite a few people around the internet claiming it works (for some definition of works).
Your troublesome assumptions are these ones:
Some applications do check if allocations are successful, but even then can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications are from extremely rare to essentially non-existent in the realm of desktop computers.
There do exist applications that do better than "quit gracefully" on failure. Yes, functionality will be impacted (after all, there wasn't enough memory to continue with the requested operation), but many apps will at least be able to stay running - so, for example, you may not be able to add any more text to your enormous document, but you can at least save the document in its current state (or make it smaller, etc.)
With this in mind, it would seem reasonable to assume that any such application would be programmed to not exceed 2GB memory usage under normal conditions, and those few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
The trouble with this assumption is that, in general, an application's memory usage is determined by what you do with it. So, as over the past years storage sizes have grown, and memory sizes have grown, the sizes of files that people want to operate on have also grown - so an application that worked fine when 1GB files were unheard of may struggle now that (for example) high definition video can be taken by many consumer cameras.
Putting that another way: applications that used to fit comfortably within 2GB of memory no longer do, because people want to do more with them now.
I think the following extract from your link about the 4 GB Patch pretty much explains how and why it works.
Why things are this way on x64 is easy to explain. On x86 applications have 2GB of virtual memory out of 4GB (the other 2GB are reserved for the system). On x64 these two other GB can now be accessed by 32bit applications. In order to achieve this, a flag has to be set in the file's internal format. This is, of course, very easy for insiders who do it every day with the CFF Explorer. This tool was written because not everybody is an insider, and most probably a lot of people don't even know that this can be achieved. Even I wouldn't have written this tool if someone didn't explicitly ask me to.
And to expand on CFF Explorer,
The CFF Explorer was designed to make PE editing as easy as possible, but without losing sight of the portable executable's internal structure. This application includes a series of tools which might help not only reverse engineers but also programmers. It offers a multi-file environment and a switchable interface.
And to quote Larry Miller, a Microsoft MCSA, from a blog post about patching games using the tool,
Under 32 bit windows an application has access to 2GB of VIRTUAL memory space. 64 bit Windows makes 4GB available to applications. Without the change mentioned an application will only be able to access 2GB.

This was not an arbitrary restriction. Most 32 bit applications simply can not cope with a larger than 2GB address space. The switch mentioned indicates to the system that it is able to cope. If this switch is manually set most 32 bit applications will crash in a 64 bit environment.

In some cases the switch may be useful. But don't be surprised if it crashes.
And finally to add from MSDN - Migrating 32-bit Managed Code to 64-bit,
There is also information in the PE that tells the Windows loader if the assembly is targeted for a specific architecture. This additional information ensures that assemblies targeted for a particular architecture are not loaded in a different one. The C#, Visual Basic .NET, and C++ Whidbey compilers let you set the appropriate flags in the PE header. For example, C# and Visual Basic .NET have a /platform:{anycpu, x86, Itanium, x64} compiler option.
Note: While it is technically possible to modify the flags in the PE header of an assembly after it has been compiled, Microsoft does not recommend doing this.
Finally, to answer your question - how does this tool help in practice?
Since you have malloc in your tags, I believe you are working with unmanaged memory. The patch does not change the size of anything - pointers are still 32 bits wide - but they can now hold addresses above 2GB, so any code that truncates a pointer or stores it in a signed integer will silently break.
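A minimal sketch (a hypothetical console program, not from the original answer) of the kind of truncation bug that becomes visible once allocations can land above 2GB:

program TruncationDemo;

{$APPTYPE CONSOLE}

var
  P: Pointer;
  Addr: Integer; // signed 32-bit: storing a pointer here is the bug
begin
  GetMem(P, 1024);
  Addr := Integer(P); // addresses above $7FFFFFFF become negative here
  if Addr < 0 then
    Writeln('Pointer truncated to a negative Integer - bug exposed');
  FreeMem(P);
end.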
But for managed code, since all of this is handled by the CLR in .NET, the patch can be genuinely helpful and should not cause many problems, unless you are dealing with any of the following:
Invoking platform APIs via p/invoke
Invoking COM objects
Making use of unsafe code
Using marshaling as a mechanism for sharing information
Using serialization as a way of persisting state
To summarize: as a programmer I would not use the tool to convert my application; I would rather migrate it myself by changing build targets. That being said, if I have an EXE that can do well with more RAM, like a game, then this is worth a try.
If I compile my entire Delphi application to a single exe, that file will grow to 5MB, 10MB, maybe more. When is that too big? What are the issues with this? This is a commercial application, currently on Delphi XE.
I'm aware of the option to Build with Runtime Packages. That sounded like a good idea, but I see comments here noting that there are some issues and disadvantages.
A Delphi application is never really too big.
However the larger the exe is, the harder it will be to redistribute the file.
Also if the executable is located on a network-disk start-up time may suffer.
A number of factors make the exe grow:
enabling debug info (will more or less double the exe size). Disable the inclusion of debug info in the final exe.
including bitmaps (in an imagelist or likewise component) will also grow the exe substantially.
including resources (using a custom *.res) file will grow the size.
I would advise against putting resources in a separate dll.
This will complicate your application, whilst not reducing the loading time and distribution issues.
Turning off debug info in production code is a must.
If you have Delphi 2010 or newer you can choose to include images in the PNG format.
This will take up much less space than old-skool bitmaps.
As long as your app is below 30 MB I would not really worry overmuch about the file size though.
Strip RTTI info
David suggests stripping RTTI info (this will disable live-bindings and some other advanced stuff), see: Reduce exe file
According to David it saves about 30% in exe size.
Exe-size will only increase loading time
Far more important is the amount of data your application allocates as storage.
The amount of space you use (or waste) here will have a far greater impact on the performance of your application than the raw exe size.
Strategy or tools to find "non-leak" memory usage problems in Delphi?
A better way to optimize is to make sure you don't leak resources
How to activate ReportMemoryLeaksOnShutdown only in debug mode?
Windows API calls memory leak detection
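To make the ReportMemoryLeaksOnShutdown link above concrete, a common pattern is to guard the flag with a conditional in the project file (a sketch; it assumes your Debug build configuration defines DEBUG, which Delphi XE's default Debug configuration does):

{$IFDEF DEBUG}
ReportMemoryLeaksOnShutdown := True;
{$ENDIF}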
Use smart datastructures and algorithms
It gets too general to really narrow it down here, but use algorithms with O(slowly increasing) over O(wasteful increase).
Big-O for Eight Year Olds?
And try and limit memory usage by only fetching the data that you need instead of all the data you might need but probably never will.
Delphi data structures
Etc etc.
I don't know of any issues with the exe size of an application. I'm currently working on an application where the exe is around 60MB and there is no problem.
The only limitation I know of is the available memory. An application that uses runtime packages will consume more working memory, because all runtime packages are loaded at application start, and the packages contain a lot of code which is probably not used in your application.
I really like the idea of runtime packages, but I don't like the implementation in Delphi. One main disadvantage is that you have to ship your app with a bunch of packages, which makes it hard to maintain.
Use a RELEASE build to reduce executable size and increase performance. You can also use runtime packages to reduce the EXE size, but runtime packages will increase the size of your setup package.
I see the following means of debugging and wonder if there are others or which FOSS tools a small company can use (we don't do much Windows programming).
1 Debug in the IDE, by setting breakpoints, using watches, etc
2 Debug in the IDE, by using the Event Log
I got some good info from this page and tweaked it to add timestamps and indent/outdent on procedure call/return, so that I can see nested calls more quickly. Does anyone know of anything better?
3 Using a profiler
4 Any others?
Such as MadExcept, etc?
(I am currently using Delphi 7)
The Delphi integrated debugger is powerful enough, even in Delphi 7, to handle most debugging tasks. It can also debug an application remotely. Anyway, there are situations where you may need to track different kinds of issues:
To check for memory leaks, you can switch to a memory manager like FastMM4, which has good memory leak reporting. Profilers like AQTime also have memory allocation profilers to identify this kind of issue.
To investigate performance problems, you need a performance profiler. There are sampling profilers (less invasive, although they may be less precise) and standard profilers (AQTime again - not cheap but very good - and others).
To trace exceptions, especially in deployed applications, you may need tools like JCL/JVCL (free), MadExcept, EurekaLog or SmartInspect.
To obtain a log of what the application does, you can use OutputDebugString() and the IDE event viewer, or the standalone DebugView application; a small helper sketch follows this answer. There are also dedicated tools like SmartInspect.
You can also convert Delphi 7 .map files to .dbg files and use an external debugger such as WinDbg from the Windows SDK, and look at application calls in tools like Process Explorer.
Some debugging tools also offer extra features like code coverage checks (which code was actually executed and which never ran), platform compliance (checking that API calls are supported by a given platform), resource use and so on, which may be useful for larger developments.
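As a sketch of the OutputDebugString approach mentioned above (the helper name DebugLog is mine, not from the answer; output appears in the IDE Event Log while debugging, or in DebugView otherwise):

uses
  Windows, SysUtils;

procedure DebugLog(const Msg: string);
begin
  // Prefix each message with a timestamp so interleaved events can be ordered
  OutputDebugString(PChar(FormatDateTime('hh:nn:ss.zzz', Now) + ' ' + Msg));
end;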
Delphi 7's IDE is pretty good to start with; only look at 3rd party tools if you run into something you can't fix with what you've got:
Its error messages are informative and not excessively verbose.
The debugger is pretty good: you've got lots of options for inspecting variables - breakpoints, conditional breakpoints, data breakpoints, address breakpoints, module load breakpoints. Its call-stack view is good, and it has some support for multi-threaded debugging.
Good step-by-step execution: step into, step over, run until return, etc.
3rd party tools help when you need to diagnose a problem on the client's computer (where you have no Delphi IDE). If you can get the problem to manifest on your computer, you can get away with the IDE alone - no need for any addition, free or paid for.
Profiler: that's not a debugging tool. You use a profiler when you need to find bottlenecks in your application, or when you need to do some speed optimization.
Logging 3rd party frameworks: the good ones are not cheap, and you can do minimal logging without a tool (even ShowMessage works sometimes).
MadExcept and other tools that log exceptions: they usually require debugging information to be present in the EXE, and that's not a good idea because it makes the program slower AND easier to hack. Again, if you can get the exception on your machine, you don't need the logger.
I'm not saying 3rd party debugging aids are not useful: they are, but I'd wait until I can clearly see the benefit of any tool before I commit to it. And in my opinion there's no such thing as free software: Even the software you don't pay for requires time to learn how to use it and requires changes to your programs and workflow.
For the bigger work, there is AQTime.
A cheaper solution for selected code is running it through Free Pascal (with the "randomize local variables" option) and then through Valgrind. I've validated most of my streaming code (which is heavy on backwards-compatibility constructs) that way.
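A sketch of that workflow with standard FPC switches (-gv emits Valgrind-compatible debug info, -gt trashes/randomizes local variables):

fpc -gv -gt yourprog.pas
valgrind ./yourprog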
Another such interesting switch is -CR, which verifies object method calls. It basically turns every
TXXX(something).callsomething
into
if something is TXXX then
  TXXX(something).callsomething
else
  raise some exception;
Especially in code with complex trees this can give some precious information.
Normal Pascal language checking (Range, I/O, Overflow, sTack aka -Criot) can be useful too, and is also available in Delphi.
Some range check errors (often loop boundaries) that can be detected statically will result in compile-time errors in (beta) FPC 3.0.x+.
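On the Delphi side, those same checks are enabled with per-unit compiler directives (or in Project Options); a minimal sketch:

{$R+}  // range checking
{$Q+}  // integer overflow checking
{$I+}  // I/O result checking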
You can try the "Process Stack Viewer" of my (open source) sampling profiler:
http://code.google.com/p/asmprofiler/wiki/ProcessStackViewer
(you need some debug info: a .map or .jdbg file)
You can watch the stack (also the raw stack, with "false positives" but useful when normal stack walking is not possible) of all threads, and do some simple sampling profiling.
Note: my (older) instrumenting profiler, which does exact profiling, is on the same site.
Not sure why you would want to upgrade to debug a problem. Yes, the newer IDEs provide more features to help you debug something, but taking into consideration your previous question on how to debug your program when it hangs, I'd sooner suggest a good logging solution like CodeSite or SmartInspect. They provide way more flexibility and features than any home-grown solution based around the event log, and do not require you to step through the code like the IDE does (which affects timings in multi-threaded problems).
Update
Sorry, didn't get that FOSS stands for Free and Open Source Software. CodeSite and SmartInspect are neither. For a free solution, you could have a look though at the logging features within the Jedi family of tools.
Rad Studio XE includes a light version of CodeSite, and AQTime, which together are both compelling improvements.
You could do a lot with JCL Debug, MadExcept, and other profiling and logging tools, but CodeSite and AQTime are the two best for their respective tasks.
I recently upgraded from Delphi 4 to Delphi 2009. With Delphi 4 I had been using GpProfile by Primoz Gabrijelcic as a profiler and Memory Sleuth by Turbo Power for memory analysis and leak debugging. Both worked well for me. But I now need new tools that will work with Delphi 2009.
The leader in Profiling/Analysis tools for Delphi by a wide margin is obviously AQTime by AutomatedQA. They recently even gobbled up Memproof by Atanas Soyanov, which I understood was an excellent and free memory analysis tool, and incorporated its functionality into AQTime. But AQTime is very expensive for an individual programmer. It actually costs more than the upgrade to Delphi 2009 cost!
So my question is: Are there other less expensive options to do profiling and memory analysis in current versions of Delphi that you are happy with and recommend, or should I bite the bullet and pay the big bucks for AQTime?
Addendum: It seems the early answerers are indicating that the FastMM manager already included in Delphi is very good for finding memory leaks.
So then, are there any good alternatives for source code profiling?
One I'm curious about is ProDelphi by Michael Adolph which is less than one sixth the cost of AQTime. Do you use it? Is AQTime worth paying six times as much?
Addendum 2: I downloaded trial versions of both AQTime and ProDelphi.
AQTime was a bit overwhelming and a little confusing at first. It took a few hours to find some of the tricks needed to hook it up.
ProDelphi was very much like the GpProfile that I was used to. But its windows are cluttered and confusing and it's not quite as nice as GpProfile.
To me the big differences seem to be:
ProDelphi changes your code. AQTime does not. Changing code may corrupt your data if something goes wrong, but my experience with GpProfile was that it never happened to me. Plus one for AQTime.
ProDelphi requires you turn optimization off. But what you want to profile is your program with optimization on, the way it will be run. Plus one for AQTime.
ProDelphi only can profile down to the function or procedure. AQTime can go down to individual lines. Plus 2 for AQTime.
ProDelphi has a free version that will profile 20 routines, and its pro version costs less than $100 USD. AQTime is $600 USD. Plus 4 for ProDelphi.
The score is now 4-4. What do you think?
Addendum 3: Primoz Gabrijelcic is planning to get GpProfile working again. See his comments on some of the responses below. He's on StackOverflow as Gabr.
Addendum 4: It seems like there may be a profiler solution after all. See Andre's open source asmprofiler, described below.
For the price, you cannot beat FastMM4 as a memory tracker. It's simple to use yet powerful and well integrated with Delphi.
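One detail worth knowing (the project and unit names below are hypothetical): FastMM4 only tracks everything reliably when it is the very first unit in the .dpr, so it installs itself before any allocations happen:

program MyApp;

uses
  FastMM4, // must come first so it replaces the memory manager early
  Forms,
  MainUnit in 'MainUnit.pas'; // hypothetical

begin
  Application.Initialize;
  // Application.CreateForm(TMainForm, MainForm); // hypothetical form
  Application.Run;
end.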
I guess that you know that, without downloading, installing or changing anything else, just putting this line
ReportMemoryLeaksOnShutDown := True;
anywhere in your code will enable basic reporting of memory leaks.
If you need more, like crash information, EurekaLog is a very good product that we use. MadExcept also has a good reputation...
For profiling specifically, we have AQTime.
As for GpProfile, you can try and bug gabr on SO for an update... or go and update GpProfile yourself, as it is open source. ;-)
I've made an open source profiler for Delphi:
http://code.google.com/p/asmprofiler/
It's not perfect, but it's free and open source :-).
The main reason I made it was that I missed an exact call tree. ProDelphi, for example, only stores a summary and total counts of all calls; you cannot see what calls a specific procedure made at a specific time (or their durations).
And it has a time chart, so you can see how the call duration changed over time.
Also take a look at Eric Grange's Sampling Profiler
I've been very happy with AQtime for profiling.
Having used both GpProfile and AQTime, I have found both to be effective at finding which method call is causing a bottleneck.
However AQTime can also tell me what line of code is causing this, without making any changes to my source code (although it works best with TD32 debugging and debug dcus).
I recently used it to speed up a routine by about 30x (due to bad use of an internal library function).
However I didn't have to pay for it myself!
We use AQTime Pro and are happy with it. SmartBear have recently released a completely free AQTime Standard edition. Most of the features are still there, but they have of course removed a bit.
I agree with you about the interface of ProDelphi, but it does a good enough job that we're happy to stay with it. We only need to profile very occasionally when we have a significant performance issue, and it's always helped us find the problem pretty quickly. Very good value for money, and Michael seems pretty good about keeping it updated for new versions.
One thing I would suggest is that because it does require code to be inserted, having all the relevant code in some kind of VCS is invaluable. When we need to profile, we:
Check all relevant files in
Check them all out
Do the profiling we need, then
Cancel all checkouts, effectively rolling back to where we were.
Has anyone tried the Profiler component at Delphi Area? It is freeware with source and it's writeup says:
If you are looking for an easy and accurate way to measure execution time of your code for free, TProfiler is what you need. TProfiler is a non-visual debugging component that enables you to create named timers in your code.

Each timer of TProfiler provides the following information:

The number of times that the timer was activated (Hit Count)
The total execution time
The average execution time on each hit
Execution time on the first hit
Execution time on the last hit
The hit with minimum execution time
The hit with maximum execution time
It's true, for profiling I miss Primoz' GpProfile, and haven't found a good replacement. I once tried AQTime, but wasn't too happy with it for the price.
For tracking of memory leaks and dodgy memory accesses however I couldn't be happier than I am with FastMM4.
I've been using ProDelphi for a long time & find it meets my needs.
I've been able to achieve stunning results in system performance improvements by using the data it provides.
For small projects the free version is fine.
For larger projects, you'll need the (Paid) pro version.
For a profiler you might try SmartInspect from Gurock Software. I never used GpProfile, but quickly glancing at its feature set reminded me of SmartInspect. Interestingly it doesn't claim to be a profiler, but it seems to be as much of one as GpProfile (unless I am missing something). It supports Delphi 2009, has a free trial, and is a little cheaper than AQTime.
Note: SmartInspect is a logger rather than a profiler.
The FastMM4 memory manager mentioned in this older answer ("How to monitor or visualize memory fragmentation of a delphi application") keeps a list of all allocations which can be queried at run time (and displayed in a grid using the included demo application). It does not exactly show which object leaks, as the statistics are per block size. But it can be useful for long-time monitoring of applications in production, for example servers or services. I am currently integrating it in a (commercial) web application server framework as the 'VisualMM' add-on.