Is there a way to group a bunch of DLLs together and still use them at run time (not zipped up)? Sorry if this question sounds terse and stupid, but I'm not sure what more to ask.
I'll explain the situation though:
We had two standalone Windows applications, and now one of them has swelled to such ungainly proportions that the other can no longer run outside the scope of the first. We want to maintain some of the encapsulation we had while letting the smaller program in on some of the bigger program's features.
There is no problem in running the application, other than that we don't want to ship all the 20-30 DLLs that the smaller project uses.
It is possible to do this by adding startup code which checks whether the DLLs are present on the target system and, if not, extracts them from the resources section (or from data simply tagged onto the end of the exe). A good example of this being done is Process Explorer - it's distributed as a single binary, but when run it extracts and installs a driver.
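For the resources route, here is a minimal Delphi sketch of that startup check, assuming the DLL was linked in as an RCDATA resource (the resource name 'MYLIB' and the target path are hypothetical):

    uses
      Windows, Classes, SysUtils;

    function LoadEmbeddedDll(const ResName, TargetPath: string): HMODULE;
    var
      Res: TResourceStream;
    begin
      // Extract the DLL to disk only if it is not already present.
      if not FileExists(TargetPath) then
      begin
        Res := TResourceStream.Create(HInstance, ResName, RT_RCDATA);
        try
          Res.SaveToFile(TargetPath);
        finally
          Res.Free;
        end;
      end;
      Result := LoadLibrary(PChar(TargetPath));
    end;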
If you have a situation where most, or all, of those assemblies have to be kept together, then I would highly recommend just merging the code files into the same project and recompiling. This would leave you with one assembly.
Of course there are other considerations, like compile time, the overall size of the final DLL, how often the various pieces change, and whether each component is ever deployed without the others.
One example of a company that did this is Telerik. Their dev components are all compiled into the same assembly. This makes deployment an absolute breeze. Contrast that with DevExpress, which puts just about each control into its own assembly. Because of this, just maintaining, much less deploying, a DevExpress project is not something for the faint of heart.
(I don't work for either of those companies. However, I have a lot of experience with both toolkits.)
You could store the DLLs as Resources, and use BTMemoryModule, which essentially allows you to LoadLibrary on a Stream.
That way you could compile the multiple DLLs straight into the EXE, or into a single resource DLL.
See http://www.jasontpenny.com/blog/2009/05/01/using-dlls-stored-as-resources-in-delphi-programs/
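A rough sketch of that approach follows. Hedged assumptions: the resource name 'MYLIB' and the exported 'Register' routine are made up, and BTMemoryModule's API (including the famously misspelled BTMemoryLoadLibary) may differ slightly between versions, so check the unit you actually download:

    uses
      Windows, Classes, BTMemoryModule;

    type
      TRegisterProc = procedure; stdcall; // hypothetical export in the embedded DLL

    procedure RunFromResource;
    var
      Data: TMemoryStream;
      Module: PBTMemoryModule;
      Proc: TRegisterProc;
    begin
      Data := TMemoryStream.Create;
      try
        // Copy the embedded DLL out of the resource section into memory.
        with TResourceStream.Create(HInstance, 'MYLIB', RT_RCDATA) do
        try
          SaveToStream(Data);
        finally
          Free;
        end;
        // Map the image directly from memory - nothing is written to disk.
        Module := BTMemoryLoadLibary(Data.Memory, Data.Size); // sic: the unit spells it this way
        if Module <> nil then
        try
          @Proc := BTMemoryGetProcAddress(Module, 'Register');
          if Assigned(Proc) then
            Proc;
        finally
          BTMemoryFreeLibrary(Module);
        end;
      finally
        Data.Free;
      end;
    end;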
I have a large application to manage, consisting of three or four executables and as many as fifty DLLs. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, versions, copyrights, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat) (easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side:
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of StackOverflow forbid me from asking for recommended software, but if anyone has any positive experiences of continuous integration tools that would help - great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
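A sketch of how reading the unit list might be automated in Delphi. Hedged: this assumes the "Detailed map" linker option is enabled and that each segment line names its unit after an "M=" marker; the exact column layout varies between Delphi versions, so treat it as a starting point:

    uses
      Classes, SysUtils, StrUtils;

    // Collect the unit names recorded in a Delphi .map file.
    procedure UnitsFromMapFile(const MapFile: string; Units: TStrings);
    var
      Lines: TStringList;
      Line: string;
      I, P, E: Integer;
    begin
      Units.Clear;
      Lines := TStringList.Create;
      try
        Lines.LoadFromFile(MapFile);
        for I := 0 to Lines.Count - 1 do
        begin
          Line := Lines[I];
          P := Pos(' M=', Line);
          if P > 0 then
          begin
            E := PosEx(' ', Line, P + 3);
            if E = 0 then
              E := Length(Line) + 1;
            // Duplicates occur (one line per segment); pass in a sorted
            // TStringList with Duplicates := dupIgnore to collapse them.
            Units.Add(Copy(Line, P + 3, E - (P + 3)));
          end;
        end;
      finally
        Lines.Free;
      end;
    end;

Cross-referencing the resulting per-executable unit lists against the output of svn diff --summarize between two revisions then gives you the per-binary change report you are after.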
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label, or some such. But it is asking too much to expect a computer to make such a judgement. At some point a human needs to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years in large projects, for commercial in-house and freelance work, I can recommend trying Apache Ant. It is a build tool which supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo etc.
Another suggestion - based on experience with Maven - is to design the general software architecture to be as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself, the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing file sizes is a bad idea, since the size can stay the same despite lots of changes (as long as a .pas file contains the same amount of text, its size won't change). Instead you could check each file's last modification time, or compute a hash value like MD5 for comparison (which can be quite slow).
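For the hash route, a minimal sketch using Indy's message-digest class (assuming Indy 10 is on the path; newer Delphi versions also ship a System.Hash unit that would do the same job):

    uses
      Classes, SysUtils, IdHashMessageDigest;

    // Return the MD5 hash of a file as a hex string, for change detection.
    function FileMD5(const FileName: string): string;
    var
      Hasher: TIdHashMessageDigest5;
      Stream: TFileStream;
    begin
      Hasher := TIdHashMessageDigest5.Create;
      try
        Stream := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
        try
          Result := Hasher.HashStreamAsHex(Stream);
        finally
          Stream.Free;
        end;
      finally
        Hasher.Free;
      end;
    end;

Comparing the hash stored at the last build against the current one tells you which files really changed, regardless of timestamps.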
Then you need to generate a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with this approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
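A sketch of what such a tree could look like with generics (Delphi 2009 or later; hedged: a real tool would fill the mapping by parsing uses clauses or .map files rather than by hand):

    uses
      Classes, SysUtils, Generics.Collections;

    var
      // Maps a unit file name to the projects that compile it in.
      UnitToProjects: TObjectDictionary<string, TStringList>;

    procedure AddDependency(const UnitName, Project: string);
    var
      Projects: TStringList;
    begin
      if not UnitToProjects.TryGetValue(UnitName, Projects) then
      begin
        Projects := TStringList.Create;
        UnitToProjects.Add(UnitName, Projects);
      end;
      Projects.Add(Project);
    end;

    // Given the units that changed, list the projects needing a rebuild.
    procedure AffectedProjects(ChangedUnits, Affected: TStrings);
    var
      UnitName, Project: string;
      Projects: TStringList;
    begin
      Affected.Clear;
      for UnitName in ChangedUnits do
        if UnitToProjects.TryGetValue(UnitName, Projects) then
          for Project in Projects do
            if Affected.IndexOf(Project) < 0 then
              Affected.Add(Project);
    end;

    initialization
      UnitToProjects := TObjectDictionary<string, TStringList>.Create([doOwnsValues]);
    finalization
      UnitToProjects.Free;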
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that a proper integration of Git into the project manager itself could be quite powerful, due to Git's support for branching and sub-branching (each project is its own branch, and each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, done through SVN, but this unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.
We have an old VB6 project that uses ActiveX controls, some of which we build and others we get from third-party vendors.
Currently, we use a .csproj project which does the following:
Execute regsvr32 to register the OCXs
Execute vb6 to build the VB6 project
Execute regsvr32 to unregister the OCXs
This registering/unregistering is ugly and is a bit of a pain for local developer builds with UAC enabled. Is it at all possible to build a VB6 project without having to register any controls?
I apologize if this has already been asked before. The only similar questions I was able to find were about how to build VB6 projects, and answers to these mention the same solution of register, build, unregister.
It sounds like these people are merely working on clients of these OCXs rather than modifying and recompiling the OCXs themselves.
If so, you should be administering the installation of these libraries just as you administer the VB6 development system itself. This means each workstation needs to have the control suites you are using installed once (well, and maintained when new releases are placed into use). Installers for developer libraries deploy things like .DEP files as well as design-time license key registry entries, so using regsvr32 shouldn't be considered a viable strategy anyway.
If you set the developer workstations up properly and maintain them there isn't any reason to be registering and unregistering such things.
It means the original developers probably did not set "binary compatibility" correctly, which means the VB6 DLLs get a new COM GUID every time they are built.
Which means your original VB6 developers were probably a bunch of hacks.
You can read the section here on Binary Compatibility.
http://support.microsoft.com/kb/161137
Get in a time machine, go back, and punch the person in the face who said "We don't need to work out the binary compatibility issues now, we'll just unregister and re-register the components... Easy Peezey!"
If I'm wrong, please let me know. But every time I've seen "unregister the COM" and "re-register the COM", it goes back to that brainiac decision.
Here is a longer discussion on it:
http://www.techrepublic.com/article/demystifying-version-compatibility-settings-in-visual-basic/5030274
EDIT:
If the OCXs are not changing, then you should only have to register them once on the build machine.
The direct answer is no, it is not possible to compile a VB6 project with OCX dependencies without those dependencies being registered.
Furthermore, the act of compilation itself involves VB6 attempting to register what it has just built (unless you are compiling to an EXE). This generally requires the VB6 IDE and/or its compiler to run with "admin" permissions, so the permissions issue is hard to avoid regardless.
I believe these issues can be obscured by the fact that VB6 itself (the IDE and/or the runtime) will sometimes try to automatically register certain things for you, but will keep silent when it does so.
You should probably create a process for setting up a development PC that is separate from the build process you use for deployment. This may "feel" wrong, especially if you have experience with other programming environments, but I would stress that VB6 can be very painful and problematic to work with, so pragmatism is generally in order.
On the development PCs: Setup all the unchanging dependencies once (and document them) and then leave them alone (as noted in another answer.) When weird dependency problems occur, verify the PC is setup correctly before doing anything else.
If you have all the sources to your dependencies, then I would consider whether you can actually run them all in a VB6 project group (VBG) and not compile them at all. (A VBG is akin to a .NET solution, though far less powerful.) I do this often and it cuts out a lot of wasted time. Developers don't necessarily need code compiled to EXE / DLL / OCX - they often just need to be able to run it in the IDE.
On the build PC: If you can always start with a clean environment, like in a virtual machine, then I think it's actually a good idea to register everything from scratch in an automated fashion, as this helps to verify nothing is missing or mismatched. Re-using the same build environment without doing this can mask problems when some dependency has changed in source control but still exists on the build machine. On a VM, permissions generally aren't a limiting factor.
Notes:
If you are building an EXE, VB6 does not require any elevated permissions, as far as I can recall.
Running code in the VB6 IDE does not either.
[Caveat 1]:
It may technically be possible to create a side-by-side application manifest file for VB6.exe itself and include in that manifest whatever dependencies you need, thereby avoiding having to register them.
But this would fall well outside the normal ways of using the VB6 tools - it's a hack - and is possibly not worth the potentially large effort. I don't think I've ever seen a working example, so I don't recommend this as a practical solution, but I mention it for completeness.
Maybe in some locked-down corporate IT scenario this could pay off... maybe. In that scenario doing dev work in a VM might be a better option though.
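For completeness, a hypothetical fragment of what such a registration-free COM manifest could look like (every file name, ProgID, and GUID below is a placeholder, and this is untested with the VB6 IDE):

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <assemblyIdentity type="win32" name="VB6.reg-free" version="1.0.0.0"
                        processorArchitecture="x86"/>
      <file name="ThirdPartyGrid.ocx">
        <!-- clsid, tlbid and progid are placeholders -->
        <comClass clsid="{00000000-0000-0000-0000-000000000000}"
                  progid="ThirdParty.Grid"
                  threadingModel="Apartment"/>
        <typelib tlbid="{00000000-0000-0000-0000-000000000001}"
                 version="1.0" helpdir=""/>
      </file>
    </assembly>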
tldr; at bottom.
Ok, so once again an interesting problem and I'm looking for a fun and interesting solution.
My current project needs to be very modular, meaning the program's functionality will be easily changed based on different modules, and the program will adapt.
So I started out with the typical route, which is using DLL plugins. But this is just way too normal; I want to think outside the box a bit.
The modules included in my program are long running campaigns that may take weeks to finish, and there will be many running at a time. So stability is a big issue, so I thought about what Google Chrome does. Processes, not DLLs or threads.
So I have a framework going and I need a way to get some information about each module (which are now EXEs). Now for my DLL framework I was exporting a "Register" function that would fill in some information.
So I thought to myself, hey, EXEs can export functions, let's see if that actually works... It doesn't. I did some research into how Windows handles these things, and I don't feel like hacking the PE headers on the fly (though that's the out-of-the-box kind of thinking I'm going for).
I'm planning on using named pipes and CLI parameters to transfer data between the main program and the module EXEs. I could use that in a register fashion, but I want to hear other people's thoughts.
tldr: I'm using EXEs instead of DLLs for plugins. Looking for a way to easily export one-time information, like an exported "Register" function would on a DLL. Thoughts?
You might still consider having the modules written as DLLs with defined entrypoints (e.g., the Register function). Then you write the executable that loads the specified DLL. Your main application would fire off the driver executable and give it a name of a plugin DLL.
That way it is still easy to define and export the set of APIs that must be provided yet still run it as a separate process. The one executable that you write can load the specified DLL and then handle the necessary IPC with the main app.
You could define a protocol via the stdin/stdout, named pipes, sockets, etc.
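As one concrete (made-up) convention: the host launches each module EXE with a --register switch and captures whatever it prints to stdout. The plugin side could be as small as this Delphi sketch; the switch name and the fields are assumptions, not part of any established protocol:

    program SampleModule;

    {$APPTYPE CONSOLE}

    uses
      SysUtils;

    begin
      if FindCmdLineSwitch('register', ['-', '/'], True) then
      begin
        // One field per line; the host reads these from redirected stdout.
        Writeln('name=CampaignRunner');
        Writeln('version=1.0');
        Writeln('capabilities=email,sms');
        Halt(0);
      end;
      // ...otherwise fall through to the normal long-running campaign work...
    end.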
I have successfully used 'plain' COM for several projects, with objects inheriting from TAutoObject. The bonuses here are IDL; the interoperability with .NET, VBA and other non-Delphi things; and the fact that implementors can still choose whether to supply a DLL, an EXE, or an NT service, and optionally run hosted over the network (COM+/DCOM). There may be several considerations to handle around multi-threading and locking, but I found all that I needed to know online.
You cannot, of course, use symbols exported by a (running) EXE, since it runs within a separate process boundary. But you can load an EXE as an image (as you would a library) using LoadLibrary(Ex) and then use the functions exported by the EXE. I tested this (just for fun) while debugging PeStudio; the snapshot showed chrome.exe loaded into the process space of PeStudio.exe using LoadLibrary.
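A minimal sketch of that trick. Heavy hedging applies: the 'plugin.exe' name and its 'Register' export are made up, DONT_RESOLVE_DLL_REFERENCES maps the image without running its entry point or resolving its imports, and calling into code that relies on those imports will crash - treat this as an experiment, not a supported technique:

    uses
      Windows;

    const
      // Declared in newer Windows units; defined here for older Delphi versions.
      DONT_RESOLVE_DLL_REFERENCES = $00000001;

    type
      TRegisterProc = function: Integer; stdcall; // hypothetical export

    procedure CallExeExport;
    var
      Module: HMODULE;
      Proc: TRegisterProc;
    begin
      Module := LoadLibraryEx('plugin.exe', 0, DONT_RESOLVE_DLL_REFERENCES);
      if Module <> 0 then
      try
        @Proc := GetProcAddress(Module, 'Register');
        if Assigned(Proc) then
          Proc; // only safe if the routine needs none of the unresolved imports
      finally
        FreeLibrary(Module);
      end;
    end;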
This is a continuation of the discussion I started here. I would like to find the best way to modularize Delphi source code, as I'm not experienced in this field. I will be grateful for all your suggestions.
Let me post what I have already written there.
The software developed by the company I work for consists of more than 100 modules (most of them being something like drivers for different devices). Most of them share the same code - in most cases classes. The problem is that those classes are not always put into separate, standalone PAS units. I mean that the shared code is often put into units containing code specific to a module. This means that when you fix a bug in a shared class, it is not enough to copy the PAS unit it is defined in into all the software modules and recompile them. Unfortunately, you have to copy and paste the fixed pieces of code into each module, one by one, into the proper unit and class. This takes a lot of time, and it is what I would like to eliminate in the near future by choosing a correct approach - please help me.
I thought that using BPLs distributed with EXEs would be a good solution, but it has some downsides, as some mentioned during the previous discussion. The worst problem is that if each EXE needs several BPLs, our technical support people will have to know which EXE needs which BPLs and then provide end users with the proper files. As long as we don't have a software updater, this will be a real ordeal for both our technicians and our end users. They will certainly get lost and angry :-/.
Also, compatibility issues may occur - if one BPL is shared by many EXEs, a modification of that BPL can be good for one EXE and bad for some other ones.
What should I do then to make bug fixes quicker in so many projects? I think of one of the following approaches. If you have better ideas, please let me know.
Put shared code into separate and standalone PAS units, so when there is a bug fix in one of them, it is enough to copy it to all projects (overwriting the old files) and recompile all of them. This means that each unit is copied as many times as there are projects using it.
This solution seems to be OK as far as rarely modified code is concerned. But we also have PAS units with general-use functions and procedures, which often undergo modifications. It would be impossible to repeat the same procedure (of copying and recompiling so many projects) every time someone adds a new function to such a file.
Create BPLs for all the shared code, but link them into EXEs, so that EXEs are standalone.
To me this seems the best solution right now, but there are some cons. If I make a bug fix in a BPL, each programmer will have to update the BPL on their computer. What if they forget to do that? However, I think it is a minor problem. If we take care of informing each other about changes, everything should be fine. What do you think?
And the last idea, suggested by CodeInChaos (I don't know if I understood it properly): sharing PAS files between projects. It probably means that we would have to store the shared code in a separate folder and make all projects search for that code there, right? And whenever it is necessary to modify a project, it would have to be downloaded from SVN together with the shared files folder, I guess. Each change in the shared code would require recompilation of each project that uses that code.
Please help me choose a good solution. I just don't want the company to lose much more time and money than necessary on bugfixes, just because of a stupid approach to software development. So far nobody has cared about it and you can imagine how many problems it causes.
Thank you very much.
You say:
Create BPLs for all the shared code, but link them into EXEs, so that EXEs are standalone.
You can't link BPLs into an executable. You are simply linking in the separate units that are also in the BPL. That way you don't actually use or even need the BPL at all.
BPLs are meant to be used as shared code, i.e. you put the code that is shared into one or several BPLs and use that from each of the .exes, .dlls or other .bpls. Bugfixes (if they don't change the public interface of the BPL) merely require the redistribution of that one fixed BPL.
As I said, decide on the public interface of a DLL and then don't change it. You can add routines, types and classes, but you should not modify the public interfaces of any existing classes, types, interfaces, constants, global variables, etc. that are already in use. That way, a fixed version of the BPL can easily be distributed.
But note that BPLs are highly compiler version dependent. If you use a new version of the compiler, you will have to recompile the BPL too. That is why it makes sense to give BPLs suffixes like 100, 110, etc., depending on the compiler version. An executable compiled with compiler version 15.0 will then be told to use the BPL with suffix 150, and an executable compiled with version 14.0 will use the BPL with suffix 140. That way, different versions of the BPLs can peacefully co-exist. The suffix can be set in the project options.
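In the package source, that suffix is a single directive (a sketch; 'MySharedCode' and the contained unit are made-up names):

    package MySharedCode;

    {$LIBSUFFIX '150'} // compiler version 15.0 -> MySharedCode150.bpl
    {$RUNONLY}         // runtime-only package, no design-time registration

    requires
      rtl;

    contains
      SharedUnit in 'Common\SharedUnit.pas'; // hypothetical shared unit

    end.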
How do you manage different versions? Make a directory with a structure like I have for my ComponentInstaller BPL (this is the expert you can see in the Delphi/C++Builder/RAD Studio XE IDE under menu Components -> Install Component):
Projects
  ComponentInstaller
    Common
    D2007
    D2009
    D2010
    DXE
The Common directory contains the .pas files and resources (bitmaps, etc.) shared by each version, and each of the Dxxxx directories contains the .dpk, .dproj, etc. for that particular version of the BPL. Each of the packages uses the files in the Common directory. This can of course be done for several BPLs at once.
A versioning system might make this a lot easier, BTW. Just be sure to give each version of the BPL a different suffix.
If you actually want standalone executables, you don't use BPLs and simply link in the separate units. The project option "Build with runtime packages" governs this.
From my point of view, in trying to manage artifacts like Delphi units, libraries and executable files, you are searching in the wrong place. I suggest you turn around and start with refactoring the code, based on design pattern implementations.
E.g. all common functions can be placed into one Singleton class, instances of common classes can be constructed with an Abstract Factory, classes can interact through the native Delphi implementation of interfaces instead of direct usage, and so on. You could even choose to implement a Facade for all the common parts of the projects. (A small sketch of the interface-plus-factory idea follows below.)
Of course, the concrete choice of patterns and the details of implementation depend on the project's specifics, and only you can decide what is applicable in your case.
I suppose that after looking at the project in this vein you will find more natural ways of organizing the code and solutions for your problems.
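Here is the promised sketch. All names are made up, and the GUID is a placeholder you would regenerate (Ctrl+Shift+G in the IDE):

    unit SharedDevices;

    interface

    type
      // Shared contract that lives in ONE unit used by every module.
      IDevice = interface
        ['{8F6C2A44-1B7E-4C1D-9A2B-3D4E5F607182}'] // placeholder GUID
        procedure Connect;
        procedure Disconnect;
      end;

    // Abstract-factory-style entry point: modules ask for a device by kind
    // and never touch the concrete classes directly.
    function CreateDevice(const Kind: string): IDevice;

    implementation

    uses
      SysUtils;

    type
      TSerialDevice = class(TInterfacedObject, IDevice) // hypothetical driver
      public
        procedure Connect;
        procedure Disconnect;
      end;

    procedure TSerialDevice.Connect;
    begin
      // real connection logic would go here
    end;

    procedure TSerialDevice.Disconnect;
    begin
    end;

    function CreateDevice(const Kind: string): IDevice;
    begin
      if SameText(Kind, 'serial') then
        Result := TSerialDevice.Create
      else
        raise Exception.CreateFmt('Unknown device kind: %s', [Kind]);
    end;

    end.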
Some other things:
Of course, you should follow CodeInChaos' suggestion and share one copy of each source file between all projects instead of copying it to each project manually. It may also be useful to adopt some standard for the build environment, mandatory for all developers (same folder structure, location of libraries, environment settings).
Try to analyze the build and deployment process: to me it looks abnormal when a solution is not built with the latest version of the code and not tested before deployment (this is about your "If I make a bug fix in a BPL, each programmer ..." phrase).
The variant with standalone executable files looks better, because it significantly simplifies the organization of the testing environment and project deployment. Just choose adequate conventions for versioning.
Maybe this applies to other Delphi versions too (I've only used 7). We've got our code broken up so that nearly every DLL in our fairly massive app is in a different folder.
99% of the open source stuff I've downloaded to plug into Delphi has had all its source munged into one folder.
It seems like this was an assumption that the developers of Delphi made about the coding practices of its users that may be non-obvious.
I don't think so. In fact, in more recent versions they've added features to the project manager to make it easier to deal with the fact that code is spread around different directories (such as the flatten directories option), so I think it is accepted that this is how many people organize their code.
I suspect it's more to do with projects growing organically over time, and whether anyone takes the time to tidy up.
I for one definitely do not put all the sources into one directory, but rather keep them in groups that have something in common. For example, I use Subversion externals quite extensively
(see http://www.dummzeuch.de/delphi/subversion/english.html, the section about externals).
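A typical externals setup looks like this (a sketch; the local directory and the repository URL are made up):

    svn propset svn:externals "libs/common https://svn.example.com/repos/common/trunk" .
    svn update

After the update, libs/common is populated from the shared repository location, so every project that declares the external picks up fixes from one place.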
I prefer different modules to be hosted in different folders, with a common folder for units shared among the modules; it makes management easy. E.g.:
myClientServerApp (parent)
  Client (child)
  Server (child)
  Lib (child)
Back in Delphi 7 I also had all files in one folder. It was easy for small projects, but very hard for medium to big ones.
So I began to create a folder structure for all Delphi projects, small or big.
Over the years I have been trying to improve this folder structure, and with every new project I make a small improvement so that it is simpler, more logical, and better organized.
These days I am trying to make some parts of it sharable between several projects. It's a work in progress.
It would seem that having all the units in one folder would save you headaches from doubly named units. On the other hand, it might be handier to keep your projects in different folders when checking in and out of version control. Then again, it really doesn't promote code reuse to have them separated out like that.