How to re-use existing CMake variables with new generator - opencv

I need to build OpenCV for both 32-bit and 64-bit in VS2015.
I'm aware that I need a separate build tree for each generator.
OpenCV's CMake configuration has approximately 300 user-configurable variables, which I have finally got set to my satisfaction. Now I want to use the exact same set of decisions to build the 64-bit version.
Is there a way to transfer the variable values that represent my decisions to the new build tree? (Other than opening two CMake-GUIs side by side and checking that all ~300 values correspond.)
BTW, if the generator is changed, CMakeCache.txt must be deleted, according to the CMake mailing list [ http://cmake.3232098.n2.nabble.com/Changing-the-the-current-generator-in-CMake-GUI-td7587876.html ]. Manually editing it is very risky and will likely lead to undefined behaviour.
Thanks

Turning my comment into an answer
You can use a partial CMakeCache.txt in the new directory (CMake will just pre-load the values that are there and reevaluate the rest).
So you can use a grep-like approach and do
findstr "OpenCV_" CMakeCache.txt > \My\New\Path\CMakeCache.txt
Just tested it and seems to work as expected.
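Putting it together, the whole transfer might look like this (the variable filter and the OpenCV source path are illustrative; "Visual Studio 14 2015 Win64" is the 64-bit VS2015 generator name):
REM Copy over just the variables you care about into the new (64-bit) build tree
findstr "OpenCV_ WITH_ BUILD_" CMakeCache.txt > \My\New\Path\CMakeCache.txt
REM Configure the new tree with the 64-bit generator; anything missing is re-evaluated
cd \My\New\Path
cmake -G "Visual Studio 14 2015 Win64" \Path\To\opencv\sources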
Reference
What are good grep tools for Windows?

Related

With Bazel, how do I make sure objects taken from the cache have been built for the right system/libraries?

I got some strange glibc-related linker errors for builds with a distributed build cache configured, on build nodes running different Linux distributions.
Now I somehow suspect that build artifacts from those machines with different glibc versions are getting mixed up, but I don't know how to investigate this.
How do I find out what Bazel takes into account when building the hash for a certain build artifact?
I know I can explicitly set environment variables which then will affect the hash. But how can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
And how do I check/compare what's been taken into account?
This is a complex topic and a multi-faceted question. I am going to answer in the following order:
How do I check/compare what's been taken into account?
How to investigate against which glibc a build linked?
How can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
How do I check/compare what's been taken into account?
To answer this, you should look into the execution log; specifically, you can read up on https://bazel.build/remote/cache-remote#compare-logs. The *.json execution log should contain everything you need to know (granted, it might be a bit verbose) and is a little easier to process with shell magic or your editor.
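As a rough illustration (the target label and file paths are placeholders; --execution_log_json_file is the flag documented on that page), you could capture one log per environment and diff them:
# Capture a JSON execution log for the same target in each environment
bazel build //your:target --execution_log_json_file=/tmp/exec_log_a.json
bazel build //your:target --execution_log_json_file=/tmp/exec_log_b.json
# Diff the logs to see which inputs, environment variables or digests differ
diff /tmp/exec_log_a.json /tmp/exec_log_b.json | less
The linked page also describes an execution-log parser that sorts the entries, which makes the diff considerably less noisy.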
How to investigate against which glibc a build linked?
From the execution log, you can get all the required hashes to retrieve cached artifacts/binaries from your remote cache. Given these files, you should be able to use standard tools to get to the glibc version (ldd -r -v binary | grep GLIBC).
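For example, if your remote cache speaks the plain HTTP protocol (as bazel-remote does), fetching a binary by the digest you found in the execution log and inspecting it could look roughly like this (host, port and digest are placeholders):
# Fetch the cached artifact from the CAS endpoint by its digest
curl -o cached_binary http://your-cache-host:8080/cas/<digest-from-execution-log>
chmod +x cached_binary
# Inspect which glibc symbol versions it requires
ldd -r -v cached_binary | grep GLIBC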
How can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
This depends on how you have set up your compilation toolchain. The best case is a fully hermetic compilation toolchain, where all necessary files are declared using attributes like https://bazel.build/reference/be/c-cpp#cc_toolchain.compiler_files.
But this also means locking down the compiler sysroot. It should include all libraries you are linking against if you want full hermeticity. If you want to use some system libraries, you need to tell Bazel where to find them and factor their hashes in: https://stackoverflow.com/a/43419786/20546409 or https://www.stevenengelhardt.com/2021/09/22/practical-bazel-depending-on-a-system-provided-c-cpp-library/
If you use the auto-detected compiler toolchain, some tricks are used to lock down the sysroot paths, but expect some non-hermeticity. https://github.com/limdor/bazel-examples/tree/master/linux_toolchain is a nice write-up on how to move from the auto-detected toolchain to something more hermetic.
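To see what the auto-detected toolchain actually recorded on a given machine, one thing you can do (assuming a standard Bazel setup; the repository name local_config_cc may vary between Bazel versions) is inspect the generated toolchain configuration in the output base:
# Where the auto-configured C++ toolchain was written for this workspace
ls "$(bazel info output_base)/external/local_config_cc/"
# The generated files contain the detected compiler paths and default flags
cat "$(bazel info output_base)/external/local_config_cc/BUILD"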
The hack
Of course, you can hack around this (a sketch follows the list below). Note, this is inherently a bad idea:
create a script that inspects the system and determines everything important, like the glibc version and maybe the Linux distribution (flavor)
create a string describing this variation and hash it
use that hash as the instance key/name for your remote cache
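A minimal sketch of that hack, assuming a POSIX shell and a cache setup that honours --remote_instance_name (the target label is a placeholder):
#!/bin/sh
# Sketch only: fingerprint the local environment and use it to namespace the
# remote cache, so artifacts built against different glibc/distro combinations
# can never be mixed up.
glibc_version="$(ldd --version | head -n 1)"
distro="$(. /etc/os-release && echo "${ID}-${VERSION_ID}")"
fingerprint="$(printf '%s|%s' "$glibc_version" "$distro" | sha256sum | cut -c1-16)"
bazel build //your:target --remote_instance_name="env-${fingerprint}"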

how to minimize edit-build-test cycle for developing a single clang tool?

I am trying to build a clang static checker that will help me identify path(s) to a line in code where variables of type "struct in_addr" (on Linux) are being modified in C/C++ programs. So I started to build a tool that will find lines where "struct in_addr" variables are being modified (and will then attempt to trace paths to such places). For my first step I felt I only needed to work with the AST; I would deal with paths as step 2.
I started with the "LoopConvert" example. I understand it and am making some progress, but I can't find out how to build only the "LoopConvert" example in the CMake build ecosystem. I had used
cmake -DCMAKE_BUILD_TYPE=Debug -G "Unix Makefiles"
when I started. I find that when I edit my example and rebuild (by typing "make" in the build directory), the build system checks everything and seems to rebuild quite a bit even though nothing has changed but one line in LoopConvert.cpp, and it takes forever.
How can I rebuild only the one tool I am working on? If I can shorten my edit-compile-test cycle, I feel I can learn more quickly.
In a comment you say that switching to Ninja helped. I'll add my advice when using make.
First, as explained in an answer to Building clang taking forever, when invoking cmake, pass -DCMAKE_BUILD_TYPE=Release instead of -DCMAKE_BUILD_TYPE=Debug. The "debug" build type produces an enormous clang executable (2.2 GB) whereas "release" produces a more reasonable size (150 MB). The large size causes the linker to spend a long time and consume a lot of memory (of order 10 GB?), possibly leading to swapping (it did on my 16 GB VM).
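In other words, the configure step from the question becomes:
cmake -DCMAKE_BUILD_TYPE=Release -G "Unix Makefiles"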
Second, and this more directly addresses the excessive recompilation, what I do is save the original output of make VERBOSE=1 to a file, then copy the command lines that are actually relevant to what I am changing (compile, archive, link) into a little shell script that I run thereafter. This is of course fragile, but IMO the substantially increased speed is worth it during my edit-compile-debug inner loop.
I have not tried Ninja yet, but have no reason to doubt your claim that it works much better. Although, again, I think you want the build type to be Release for faster link times.
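For reference, both ideas combined might look like this (the target name loop-convert is a guess; substitute whatever your tool's CMake target is actually called):
# Build only the one tool's target instead of everything
make loop-convert
# Capture the exact compile/link commands once, at full verbosity
make VERBOSE=1 loop-convert > full-build.log 2>&1
# Then copy the relevant lines from full-build.log into a small rebuild script
# and run that script during the edit-compile-test loop.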

How to identify what projects have been affected by a code change

I have a large application to manage, consisting of three or four executables and as many as fifty .dlls. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, version numbers, copyrights, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat)(easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of StackOverflow forbid me from asking for recommended software, but if anyone has any positive experiences of continuous integration tools that would help - great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
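As a sketch of that, assuming your build script invokes the command-line compiler, you can ask it for a detailed map file and then query it per changed unit (the project and unit names are placeholders):
REM -GD asks the compiler for a detailed map file alongside the executable
dcc32 -GD MyProject.dpr
REM Check whether a particular changed unit actually ended up in this binary
findstr /I "MyChangedUnit" MyProject.map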
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label, or some such. But it is asking too much to expect a computer to make such a judgement. At some point you need a human to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years for commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool which supports dependencies, and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo etc.
Another suggestion - based on experience with Maven - is to design the general software architecture to be as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing the file size is a bad idea, as the size can stay the same despite lots of changes (as long as there is the same amount of text in a .pas file, its size won't change). So instead you could check the last modification time of each file, or compute a hash value such as an MD5 hash for comparison (which can be quite slow); a sketch of this step follows below.
Then you need to build a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with such an approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed from it.
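Since your build is already driven by a .bat script, the change-detection step could be sketched with built-in Windows tools like this (folder and file names are placeholders):
REM Hash every unit for the current revision (certutil ships with Windows)
for %%F in (src\*.pas) do certutil -hashfile "%%F" MD5 >> hashes_new.txt
REM Compare against the hashes recorded for the previously tested revision
fc hashes_old.txt hashes_new.txt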
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that a proper integration of Git into the project manager itself could be quite powerful, due to Git's support for branching/sub-branching (each project is its own branch, each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, but it is done through SVN, which unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.

Why is the executable produced by Delphi 2009 IDE different to that produced on the command line?

I'm producing builds using MSBuild and build configurations set up in the .dproj, on the command line. It's slightly disconcerting that the size of the executables thus produced is different (not by much, but still!) from what an IDE build produces. Any ideas why? I would have thought the same compiler is used?
The main strength of building with the Delphi command-line compiler is standardization - you explicitly identify the options (on the command line, in the .cfg files, etc.), and the compiler follows the options provided exclusively. In contrast, the IDE has many other behaviors that are not clear and explicit - for example, it may search library paths not specified in the Project Options. My guess is that something is happening in the IDE build of which you're not entirely aware - and this is why standardized builds are done from the command line.
To see what the IDE is doing, check
Tools | Options | Environment Options | Compiling and Running | Show Command Line
And you can check the compiler messages.
The first answer on using the command line for build consistency is right on and it is probably something you needn't worry about if you are relying on a build system where production files are always sourced from the console builds.
On the other hand, if you really do want to figure out what is going on, you should turn on map files (at the full detail level) and compare/diff them. If there are differences between the two, they will show up there. Any other differences that may exist are likely the result of a command-line option being different (such as a conditional flag that may be set in the IDE settings).
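For example, with detailed map files enabled for both builds, a straight comparison will point at the units or segments where they diverge (paths are placeholders):
REM Compare the map files produced by the IDE build and the command-line build
fc /N IdeBuild\MyProject.map CmdBuild\MyProject.map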
This behavior has existed in every version of Delphi I've used (5 - 2006). I wouldn't worry too much about it. When I first discovered it I spent a lot of time trying to resolve the difference. Did I miss a compiler flag? Is there a discrepancy between the IDE and the command line compiler's supported options?
In the end I decided it wasn't that big of an issue. Both consistently produced functionally equivalent executables.
If you supply exactly the same parameters to the command-line compiler, the produced executables will be virtually identical.
In fact, the IDE just calls the command-line compiler. Compile your project in the IDE and look at the Messages window; you will see the full dcc32.exe call ...

Why does every build change the exe-file?

Building the same project (without any changes) produces binarily different exe files: some small regions of them differ. This is an empty project, with version information (and auto-increment on every build) turned off.
Why does this happen? And is it possible to make Delphi produce binary-identical files for the same project?
The various structures in the PE executable file format used by Windows include timestamps that are set by the compiler and linker.
It is possible to post-process the file to reset these values to a defined constant (I wrote a tool to do exactly this for a secure product that needed exact hash values), but this should only be done on ready-to-ship executables, as some debuggers rely on the timestamps for source lookup, etc.
Try changing the problem into "How do I avoid compiling if there are no changes to the source?" - that might be easier to deal with.
I suspect the compiler inserts an encoded timestamp, special ordinal numbers (for versioning), and maybe other things into the *.exe :)
It's impossible to force Delphi to produce equal binary output.
It may be that some actual timestamps are compiled into the exe file.
