Does qmake support running sanity checks on project dependencies?

I'm working on a legacy Qt project which relies heavily on qmake, and I've experienced a fair share of subtle build errors which could be easily avoided with some sanity checks, such as verifying if a library or a file is actually where it's expected to be.
Does anyone know if qmake supports running sanity checks at the prebuild stage?

qmake supports test functions which can be used to test preconditions. See the qmake manual's page on test functions.
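For example, a minimal sketch (library name and path are illustrative) that aborts qmake with a clear message when an expected file is missing:

# Fail the qmake run if the prebuilt library is not where we expect it.
!exists($$PWD/libs/libfoo.a) {
    error("libfoo.a not found in $$PWD/libs - check your dependencies.")
}

error() stops qmake immediately; message() or warning() can be used instead for non-fatal checks.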

Handling complex and large dependencies

Problem
I've been developing a game in C++ in my spare time, and I've opted to use Bazel as my build tool since I have never had a ton of luck (or fun) working with make or CMake. I also have dependencies in other languages (Python for some of the high-level scripting). I'm using glfw for basic window handling and high-level graphics support, and that works well enough, but now comes the problem: I'm uncertain how I should handle dependencies like glfw in a Bazel world.
For some of my dependencies (like gtest and fruit) I can just reference them in my WORKSPACE file and Bazel handles them automagically, but glfw hasn't adopted Bazel. All of this leads me to ask: what should I do about dependencies that don't use Bazel inside a Bazel project?
Current approach
For many of the simpler dependencies I have, I simply created a new_git_repository entry in my WORKSPACE file and created a BUILD file for the library. This works great until you get to really complicated libraries like glfw that have a number of dependencies on their own.
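For illustration, such an entry looks roughly like this (the tag and the BUILD file label are made up):

load("@bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")

new_git_repository(
    name = "glfw",
    remote = "https://github.com/glfw/glfw.git",
    tag = "3.2.1",                            # illustrative version
    build_file = "//third_party:glfw.BUILD",  # the hand-written BUILD file
)

(The load line is needed in newer Bazel versions; older releases had new_git_repository built in.)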
When building glfw for a Linux machine running X11, you now have a dependency on X11, which means adding X11 to my Bazel setup. X11 comes with its own set of dependencies (the X11 libraries like Xcursor), and so on.
glfw also tries to provide basic joystick support, which is provided by default in Linux. Great! Except that it is provided by the kernel, which means the kernel is also a dependency of my project. While I shouldn't need anything more than the kernel headers, this still seems like a lot to bring in.
Alternative Options
The reason I took the approach I've taken so far is to make the dependencies required to spin up a machine that can successfully build my game very minimal. In theory they just need a C/C++ compiler, Java 8, and Bazel and they're off to the races. This is great since it also means I can create a Docker container that has Bazel installed and do CI/CD really easily.
I could sacrifice this ease and just say that you need libraries like glfw installed before attempting to compile the game, but that brings back the whole "which version is installed and how is it configured" problem that Bazel is supposed to help solve.
Surely there is a simpler solution and I'm overthinking this?
If the glfw project has no BUILD files, then you have the following options:
Build glfw inside a genrule.
If glfw supports some other build system like make, you could create a genrule that runs the tool. This approach has obvious drawbacks, like the not-to-be-underestimated impracticality of having to declare all inputs of that genrule, but it'd be the simplest way of Bazel'izing glfw.
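A rough sketch of what that might look like (target names, paths and CMake flags are all illustrative; glfw actually builds with CMake):

genrule(
    name = "glfw_cmake_build",
    srcs = glob(["glfw-src/**"]),  # the vendored glfw sources
    outs = ["libglfw3.a"],
    cmd = """
        src=$$(dirname $(location glfw-src/CMakeLists.txt))
        build=$$(mktemp -d)
        cmake -S $$src -B $$build -DGLFW_BUILD_EXAMPLES=OFF -DGLFW_BUILD_TESTS=OFF
        cmake --build $$build
        cp $$build/src/libglfw3.a $(location libglfw3.a)
    """,
)

The srcs glob is what makes the "declare all inputs" drawback concrete: anything the inner build reads must appear there, or the sandboxed genrule will not see it.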
Pre-build glfw.o and check it into your source tree.
You can create a cc_library rule for it, and put the .o file in the srcs. Even though this solution is the least flexible of all because you not only restrict the target platform to whatever the .o was built for, but also make it harder to reproduce the whole build, the benefits are sometimes worth the costs.
I view this approach as a last resort. Even in Bazel's own source code there's one cc_library.srcs that includes a raw object file, because it was worth it, as the commit message of 92caf38 explains.
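If you go this route, the wrapper rule can be as small as this (names and paths are illustrative):

cc_library(
    name = "glfw",
    srcs = ["prebuilt/libglfw3.a"],    # the checked-in, pre-built artifact
    hdrs = glob(["include/GLFW/*.h"]),
    includes = ["include"],
)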
Require that glfw be installed.
You already considered this option. Some people may prefer this to the other approaches.

Dealing with shared helpers in Common Test suites?

I've got an Erlang project comprising a bunch of different applications. I'm using Common Test to do some of the testing.
apps/foo/suites/foo_SUITE.erl
apps/bar/suites/bar_SUITE.erl
I'm starting to see duplication of utility code in those suites.
Where should I put my utility code so that it can be shared between the two suites?
I've considered adding another application:
apps/test_stuff
...but I can't make the CT suites depend on test_stuff without making the application under test depend on it (or can I?). I don't want to do that, because test_stuff is only needed when testing.
I have a similar problem with my eunit tests, both between applications (apps/foo/test vs. apps/bar/test), and where I'm using similar functionality between the eunit and CT tests in the same application (apps/bar/suites vs apps/bar/test). Can I use the same solution for this case as well? Or do I need to ask another question about that?
Do you think ct:require/1,2 could help, so that foo_SUITE and bar_SUITE would require test_stuff before they get executed? For more information see http://www.erlang.org/doc/man/ct.html#require-1
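If that fits, the requirement can be declared in the suite's info function, along the lines of this sketch (the configuration key test_stuff is hypothetical):

%% In foo_SUITE.erl: Common Test checks the requirement before
%% running the suite and skips it if the key is not configured.
suite() ->
    [{require, test_stuff}].

Note this gives you a precondition check on configuration data rather than a code-sharing mechanism.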
It depends on how you are packaging your final releases. For example, I use rebar for release management. I have Cowboy fetched along with other dependencies for testing purposes, but I omit it in my reltool.config, so it doesn't get packaged with the final product. I use rebar to run Common Test, and it's able to add Cowboy to the path without having it bundled as a lib with everything else or added as a dependency to the app I'm testing.
However, if you have another process which infers your release configuration from your dependencies, you'll have to find a way to exclude your test code when you generate a release.

Is it possible to run erlang without compilation?

Is there any VM for Erlang that allows you to do compilation on the fly instead of compiling beforehand?
UPDATE: It is indeed possible to compile from the shell, thanks Martin. For example, from the Erlang shell:
1> compile:file("mymod.erl").
{ok,mymod}
2> mymod:myfun().
Hello Joe
Are there any pros or cons to doing this?
Will you still be able to hot swap code?
Is this the regular way to handle code?
What benefits does the compiler give you in the end then?
From the Erlang shell, you can compile a module on the fly using c("path/to/module.erl"). You can also access this functionality through the compile module, specifically the compile:file/{1,2} functions.
For example, suppose we have a file mymod.erl:
-module(mymod).
-export([myfun/0]).
myfun() -> io:format("Hello Joe~n").
Now, from the Erlang shell (or some other module!):
1> compile:file("mymod.erl").
{ok,mymod}
2> mymod:myfun().
Hello Joe
See Erldocs on the compile module for more information.
You can do a great deal with the Erlang compiler in runtime. For example, you can dynamically generate code for a module (use erl_syntax!) and then compile it without even writing it to a file using compile:forms/{1,2}.
(Insert standard speech on great power and great responsibility.)
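A minimal sketch of that round trip (module and function names are made up): build the abstract forms for a one-function module, compile them to a binary, and load it without touching the file system.

%% Generate, compile and load a module entirely in memory.
Forms = [erl_syntax:revert(F) || F <- [
    erl_syntax:attribute(erl_syntax:atom(module), [erl_syntax:atom(dynmod)]),
    erl_syntax:attribute(erl_syntax:atom(export),
        [erl_syntax:list([erl_syntax:arity_qualifier(erl_syntax:atom(hello),
                                                     erl_syntax:integer(0))])]),
    erl_syntax:function(erl_syntax:atom(hello),
        [erl_syntax:clause([], none, [erl_syntax:atom(world)])])]],
{ok, dynmod, Bin} = compile:forms(Forms, []),
{module, dynmod} = code:load_binary(dynmod, "dynmod.erl", Bin),
world = dynmod:hello().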
Will you still be able to hot swap code?
Yes.
Is this the regular way to handle code?
No. Normally Erlang code is compiled ahead of time into BEAM bytecode. Depending on whether Erlang was started in embedded or interactive mode, the modules are either loaded on startup, or dynamically as they are referenced. If you are building a release, you basically have to compile ahead of time.
What benefits does the compiler give you in the end then?
Well, for one thing, we can build compact releases without unnecessary components like the compiler. Of course, we also get all the traditional benefits of ahead-of-time compilation, particularly that of not having to waste time compiling all the time.
To sum it up, unless you fully understand the implications and have a very good reason not to compile your code ahead of time, please follow the standard practices.
The Erlang VM can only run compiled code! If you want to interpret Erlang code then you need an interpreter. The module erl_eval implements an Erlang interpreter and is part of the standard Erlang/OTP distribution. It is used by the Erlang shell to interpret the expressions entered.
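For the curious, this is roughly what interpreting one expression looks like, using the standard scanner, parser and erl_eval (a sketch of what the shell does under the hood):

%% Scan, parse and interpret "1 + 2 * 3." at runtime.
{ok, Tokens, _} = erl_scan:string("1 + 2 * 3."),
{ok, [Expr]} = erl_parse:parse_exprs(Tokens),
{value, 7, _Bindings} = erl_eval:expr(Expr, erl_eval:new_bindings()).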
All code handling in the Erlang VM, whether compiling, loading or updating, is done at the module level, so it is impossible to compile or load just one function. The Erlang compiler is written in Erlang, is always available, and can compile to either a file or a binary which can be immediately loaded into the system. As @MartinTörnwall has pointed out, compiling a module from the shell using c(module) is in essence compiling on the fly.
So there would be no problems in automatically compiling code on the fly when it is used, at the module level. It is just that the current system is not designed to work that way and by default when it tries to load a module it only looks for the pre-compiled object file, the .beam file.
Erlang has an interpreter, escript. An entire Erlang program can be written as a script. Almost all language features are available.
By default, the script will be interpreted. You can force it to be compiled by including -mode(compile). in the script.
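A minimal escript sketch (the file name is arbitrary; escript requires a main/1):

#!/usr/bin/env escript
%% hello.escript -- interpreted by default.
%% Uncomment the next line to have the VM compile it at startup instead:
%% -mode(compile).
main(_Args) ->
    io:format("Hello Joe~n").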
Though it depends on the way you design your application, the regular practice is to have .erl files which are compiled and run, rather than escript files.
So now you have many options.
Compile the .erl file to .beam using c(my_module); this automatically loads the .beam file, so the running VM can use it on the fly. Or, in code, you can use functions from the compile and code modules (such as compile:file, code:purge and code:load_file) to compile, load and run it on the fly.
Compile the .erl files ahead of time using erlc, erl -make, rebar, etc. (Erlang has rich support) and then run the result. You can build archives, boot scripts, releases, etc. to manage the running and release of the Erlang software. This is usually the practice for production.
Use escript and run everything in interpreted mode.
Use escript with the -mode(compile). directive to tell the Erlang VM to compile the code at runtime (when the escript starts) and run the compiled code (in memory).
Are there any pros or cons to doing this?
Compiled code is faster than interpreted code. I don't see any other difference right now, as pretty much everything is supported in both. Erlang even supports a combination of the two (calling compiled code from interpreted code).
Will you still be able to hot swap code?
Yes, in all cases. Your code should also be written to handle this.
Is this the regular way to handle code?
Option 2 for production. Option 1 for learning and simple development. Options 3 and 4 on an as-needed basis for specific requirements (maybe one-time runs).
What benefits does the compiler give you in the end then?
To make it clear: the erlc program provides a common way to run all compilers in the Erlang system, and the compile module gives a programmatic interface to the Erlang compiler. The compiler produces an intermediate binary .beam file, which runs faster than its interpreted counterpart. The compilers also catch syntax errors (compilation errors) ahead of time.
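The usual ahead-of-time workflow, for contrast (paths are illustrative):

$ erlc -o ebin src/mymod.erl                        # produces ebin/mymod.beam
$ erl -noshell -pa ebin -s mymod myfun -s init stop # prints "Hello Joe"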

How to force prebuild script to run at each compile

We currently use Delphi 2009 and Git to develop an application. We have set up a prebuild script to generate a version number and build ID using information from Git, and compile this as a resource that is included in the project. The problem is that this script doesn't run on a regular compile. This means that the other developers can end up with a discrepancy between the actual version number and the number in the resource (especially when switching branches in Git).
Since we use our software to make some critical calculations, we would like to use this build id to reproduce calculations and track down problems.
Other than trying to force my developers to frequently press Shift-F9, how can I ensure that the prebuild script gets run when necessary (ideally at each compile)?
UPDATE: It's true that the pre-build script gets run at each compile. The problem was that I expected a different result when pressing F9 after making a tag in Git, even though no code had changed.
We solved a similar issue by writing a custom IDE plugin which uses the IOTAIDENotifier50 interface, specifically its BeforeCompile method, to test some required project settings and also generate dynamic version information (a VERSIONINFO resource). It gets called for every type of build (compile and build). We also generate a unique exe serial number and log everything, which helps us track down issues and is similar to your script. For completeness, we have only done this in Delphi 2007 and Delphi XE.
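The core of such a notifier is quite small. A sketch (the class name and the GenerateVersionInfo helper are ours, purely illustrative) using the ToolsAPI unit; the remaining interface methods are empty stubs:

uses
  ToolsAPI;

type
  TVersionNotifier = class(TNotifierObject, IOTAIDENotifier, IOTAIDENotifier50)
  public
    { IOTAIDENotifier }
    procedure FileNotification(NotifyCode: TOTAFileNotification;
      const FileName: string; var Cancel: Boolean);
    procedure BeforeCompile(const Project: IOTAProject;
      var Cancel: Boolean); overload;
    procedure AfterCompile(Succeeded: Boolean); overload;
    { IOTAIDENotifier50 }
    procedure BeforeCompile(const Project: IOTAProject;
      IsCodeInsight: Boolean; var Cancel: Boolean); overload;
    procedure AfterCompile(Succeeded: Boolean; IsCodeInsight: Boolean); overload;
  end;

procedure TVersionNotifier.BeforeCompile(const Project: IOTAProject;
  IsCodeInsight: Boolean; var Cancel: Boolean);
begin
  // Skip Code Insight background compiles; regenerate the version
  // resource for real compiles and builds.
  if not IsCodeInsight then
    GenerateVersionInfo(Project.FileName); // hypothetical helper
end;

// Registered once, e.g. in the package initialization:
//   FIndex := (BorlandIDEServices as IOTAServices)
//               .AddNotifier(TVersionNotifier.Create);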
Pre build actions do run before every compile.
You state in a comment that the actions sometimes don't run when you press F9. That makes sense because F9, or Run, only invokes a compile if source is deemed to have changed.
A BeforeCompile notifier plug-in will behave in exactly the same way. The solution is to make sure that you compile before running, using Ctrl+F9.

Managing/Using libraries with Debug builds vs Release builds

I'm curious about everyone's practices when it comes to using or distributing libraries for an application that you write.
First of all, when developing your application do you link the debug or release version of the libraries? (For when you run your application in debug mode)
Then when you run your app in release mode just before deploying, which build of the libraries do you use?
How do you perform the switch between your debug and release versions of the libraries? Do you do it manually, do you use macros, or what else do you do?
I would first determine what requirements are needed from the library:
Debug/Release
Unicode support
And so on..
With that determined you can then create configurations for each combination required by yourself or other library users.
When compiling and linking, it is very important that you keep the libraries and executable consistent with respect to the configurations used, i.e. don't mix release & debug when linking.
I know on the Windows/VS platform this can cause subtle memory issues if debug & release libs are mixed within an executable.
As Brian has mentioned, in Visual Studio it's best to use the Configuration Manager to set up how you want each required configuration to be built.
For example our projects require the following configurations to be available depending on the executable being built.
Debug+Unicode
Debug+ASCII
Release+Unicode
Release+ASCII
The users of this particular project use the Configuration Manager to match their executable requirements with the project's available configurations.
Regarding the use of macros, they are used extensively in implementing compile-time decisions for requirements such as whether the debug or release version of a function is to be linked. If you're using VS, you can view the preprocessor definitions attribute to see how the various macros are defined (e.g. _DEBUG, _RELEASE); this is how the configuration controls what's compiled.
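As a small illustration (the library names are made up), the same mechanism can even select the library from within the code using MSVC's #pragma comment:

// Link against the matching build of a third-party library,
// driven by the configuration's preprocessor definitions.
#ifdef _DEBUG
#pragma comment(lib, "third-party-d.lib")  // debug build of the library
#else
#pragma comment(lib, "third-party.lib")    // release build
#endif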
What platform are you using to compile/link your projects?
EDIT: Expanding on your updated comment:
If the Configuration Manager option is not available to you then I recommend using the following properties from the project:
Linker->Additional Library Directories or Linker->Input
Use the macro $(ConfigurationName) to link with the appropriate library configuration e.g. Debug/Release.
$(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.lib
Build Events or Custom Build Step configuration property
Execute a copy of the required library file(s) from the dependent project before (or after) the build occurs.
xcopy $(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.dll $(IntDir)
The macro $(ProjectDir) will be substituted for the current project's location and causes the operation to occur relative to the current project.
The macro $(ConfigurationName) will be substituted for the currently selected configuration (default is Debug or Release) which allows the correct items to be copied depending on what configuration is being built currently.
If you use a regular naming convention for your project configurations it will help, as you can use the $(ConfigurationName) macro, otherwise you can simply use a fixed string.
I use VS. The way that I do it is that I add the libraries I need through the references of the project, which basically just says in what folder to look for a specific library at project load time. I develop my libraries to be as project-independent and reusable as possible, so they are all projects of their own. For the libraries that I need for a specific project, I create a "3rdParty" or "libs" folder at the same level as my "src" folder in my svn folder tree. I tend to only use release libraries, but when I get some unknown issues and want to switch to debug, I manually copy a debug version of the files into the "lib" folder and reload the project.
I am unsure whether I should be keeping both debug and release versions in my svn tree. Since they are projects of their own, keeping them in the svn tree of another project doesn't feel right, and they can be rebuilt without a hitch at any moment.
And then I wanted to find a way of making the switch more... hmmm... well, basically automatic, if you will, but that's not what I really mean. It just feels that switching the files manually between release and debug isn't right. Maybe I haven't found it yet, but what I would like is an option that would do something like:
For library "stack.dll" look in "......\3rdParty\" for release and "......\3rdPartyD\" for debug.
Anything that does something like that, really. What do you suggest?
Remember, libraries are external projects; their built files live entirely elsewhere. In fact, think of it as having to check out another project, build it, and copy the built library if you want another copy. How would you set that up?
