Where do compiler flags come from when using qmake? - qmake

I have a qmake project that I cannot debug, because something appends -O2 -g to the end of the compiler flags in debug mode, overriding all my debug and optimization flags. I have grepped the whole project for -O2 and there is none (I removed the one I had for release). Deleting the build folder and running qmake again didn't help. I'm trying to track down what adds compiler flags, but I'm missing something.
Known things that can add compiler flags:
QMAKE_CXXFLAGS - Adds flags as given in all builds.
QMAKE_CXXFLAGS_DEBUG - Adds flags as given in debug builds.
QMAKE_CXXFLAGS_RELEASE - Adds flags as given in release builds.
CONFIG - Adds flags that are difficult to trace. With CONFIG += strict_c++ and CONFIG += c++17 my -std=c++17 is no longer overridden, but I can't tell what other flags they add. Also, the qmake call contains CONFIG+=debug, which may or may not add other flags; I can't tell from the documentation.
mkspec - In Projects->Build, Qt Creator lists the effective qmake call, which includes for example -spec linux-g++. I think that pulls in /usr/lib/x86_64-linux-gnu/qt5/mkspecs/linux-g++/qmake.conf, which includes more files that add platform-dependent flags. Removing the spec flag didn't remove the undesired -O2 flag, though. Also it works for other projects, so it's probably not the culprit.
TEMPLATE - Specifies how the project is organized. Normally it's just app, but this one uses subdirs, which may override flags since all sub-projects need to have the same flags.
An ideal answer would list all the ways to add compiler flags, the order in which they are added, how to check what flags they add, and how to change them.
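For reference, a minimal .pro sketch showing where these variables typically live (the flag values are just examples, not a recommendation):

# applies to all builds
QMAKE_CXXFLAGS += -Wall -Wextra
# applies only to debug / release builds respectively
QMAKE_CXXFLAGS_DEBUG += -O0 -g3
QMAKE_CXXFLAGS_RELEASE += -O3
# pulls in additional flags via qmake's feature files
CONFIG += c++17 strict_c++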

What qmake does is simply produce a Makefile. A generated Makefile only uses the compiler flags from the CXXFLAGS (plus DEFINES) and INCPATH make variables, unless you have some handcrafted rules. This is clearly visible in the generated Makefile.
These make variables come directly from qmake variables such as QMAKE_CXXFLAGS, DEFINES and INCLUDEPATH. (This mapping is done internally in qmake; it can be more complicated on some platforms, so refer to the qmake source code as well.)
Now, QMAKE_CXXFLAGS is just a qmake variable, so in principle it can be modified on any line of any qmake script. Given that these scripts depend on the OS, architecture, compiler, Qt build options, application options and so on, your expectation of "an ideal answer" stretches a bit too far.
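Before digging further, it helps to check what actually ended up in the generated Makefile (run from the build directory; assuming a g++ mkspec):

# the flags every compile rule will use
grep -n '^CXXFLAGS' Makefile
# or dry-run make to see the full compiler command lines
make -n | grep 'g++'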
But, roughly speaking, qmake sources its scripts in the following order (hint: see full dependency list in a generated makefile):
features/spec_pre.prf
<QMAKE-SPEC>/qmake.conf (usually includes features/qt_config.prf and a ton of Qt-related stuff)
features/spec_post.prf
features/default_pre.prf
<user project>
features/default_post.prf
all features/xxx.prf according to the final CONFIG value (note: order reversed!)
So if you miss some flag in your project, it probably originates either from default_post.prf (like release flags for release build), or from CONFIG (i.e. features/xxx.prf).
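Two practical ways to chase a stray flag down, sketched here (yourproject.pro is a placeholder; note that -= only removes values already present when your .pro is processed, so a flag added later by a feature file will survive it):

# trace which script assigns which variable; each extra -d increases verbosity
qmake -d CONFIG+=debug yourproject.pro 2>&1 | grep -- '-O2'

# in the .pro: drop the unwanted optimization flag and set your own for debug builds
QMAKE_CXXFLAGS_DEBUG -= -O2
QMAKE_CXXFLAGS_DEBUG += -O0 -g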

Related

gcov is invoking gtest sources and unit-tests. How can I avoid this?

I am working on creating a Jenkins pipeline for unit testing, probably with GTest.
My plan is to use the following tools:
GTest for unit testing, gcov for generating the gcda and gcno files, and gcovr for XML or HTML output of the coverage results.
It's working well so far, with help from the internet and particularly Stack Overflow.
But I am struggling with three issues.
gcov creates gcda and gcno files for the gtest sources and my unit tests. gcovr then processes them and I see them in the HTML files. How can I avoid this? I only want my production code in the HTML files.
I can only see code coverage for template classes if gcov generates gcda and gcno files for my unit tests. So I need a simple idea for issue 1). Maybe I can use an exclude flag in gcovr?
Unused functions in template classes (inline functions) are not reported at all, so code coverage always shows 100%. I tried different flags, but nothing helped.
-fprofile-abs-path --coverage -fno-inline -fno-inline-small-functions -fno-default-inline -fkeep-inline-functions
I added a picture to show what I am talking about. The UnitTests and GTest coverage results should not appear in the gcovr HTML...
You can filter out unwanted coverage data, but you can't create data that doesn't exist.
1. gcov creates gcda and gcno files for the gtest sources and my unit tests. gcovr then processes them and I see them in the HTML files. How can I avoid this? I only want my production code in the HTML files.
Use gcovr --exclude GoogleTest/ --exclude UnitTests/
Gcovr has a per-file filtering system that allows you to specify which source code files to include/exclude. For a file to be included in the coverage report,
any --filter pattern must match, and
no --exclude pattern must match.
Or phrased in reverse: a file is excluded if it doesn't match any --filter or if it matches any --exclude pattern.
If you don't provide an explicit --filter, then the default filter is the --root directory, which in turn defaults to the current working directory.
These patterns are regexes. Usually, these are used to match paths relative to the current working directory. For example, you can limit the reports to a src/ directory with gcovr --filter src/. Or you can exclude the GoogleTest/ directory with gcovr --exclude GoogleTest/.
Gcovr also has a way to filter gcda/gcno files (search_paths and --gcov-filter), but that is mostly useful as a performance optimization.
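Putting that together, a typical invocation could look like this (GoogleTest/, UnitTests/ and src/ are the directory names from the question; adjust to your layout):

# report only on production code, excluding the test framework and the tests themselves
gcovr --root . --filter 'src/' --exclude 'GoogleTest/' --exclude 'UnitTests/' --html --html-details -o coverage.html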
2. I can only see code coverage for template classes if gcov generates gcda and gcno files for my unit tests. So I need a simple idea for issue 1). Maybe I can use an exclude flag in gcovr?
This is by design. As explained above, you can solve this via gcovr's exclude flag.
You get one gcda/gcno file per compilation unit. Header files are included into multiple compilation units, so their coverage information is essentially split across all the compilation units that include them.
So, if you want coverage for code in header files, and you include these headers into your unit tests, then gcovr must also process the gcda/gcno files from those unit tests.
3. Unused functions in template classes (inline functions) are not reported at all, so code coverage always shows 100%. I tried different flags, but nothing helped. -fprofile-abs-path --coverage -fno-inline -fno-inline-small-functions -fno-default-inline -fkeep-inline-functions
The gcov coverage data model works on an assembly-code level. Counters are inserted by the compiler itself, but only for functions for which the compiler actually generates machine code. Thus, as far as gcov is concerned, inline functions, optimized-out code, and non-instantiated templates simply do not exist.
This is quite annoying, and it is difficult to work around cleanly.
The most reliable way to avoid it is to make sure that every function for which you want coverage data is referenced from your unit tests. It is not necessary to actually invoke the function; merely referencing it should be sufficient. For example, I'd write a function to ignore() arbitrary values despite optimizations, then:
ignore(&some_inline_function);
Possible implementation: template<class T> void ignore(T const& t) { volatile T sinkhole = t; }
Your suggested options like -fno-inline do not work because the code for these functions isn't generated in the first place.
With GCC, when using C++ (but not C), the -fkeep-inline-functions option should work, but only for non-templated inline functions.
If a non-templated inline function is only used within one file and isn't provided in a header to multiple files, then it should instead be declared static (in C) or in an anonymous namespace (in C++11 or later), so that -Wunused-function or -Wall notify you if it isn't referenced.
Templates are more tricky in general. Each distinct instantiation of a template results in separate functions. Gcovr does aggregate coverage data across them, but in order for the template to appear in the coverage data it must be instantiated at least once. You will have to do this manually.
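A small sketch of both tricks in one test translation unit (all names here are made up for illustration):

#include <vector>

// production header (hypothetical): an inline function and a class template
inline int twice(int x) { return 2 * x; }          // emitted only if referenced somewhere
template <class T>
struct Stack {
    void push(T v) { data.push_back(v); }          // only instrumented once instantiated
    std::vector<T> data;
};

// test-side helper: keeps a reference alive despite optimization
template <class T>
void ignore(T const& t) { volatile T sinkhole = t; }

template struct Stack<int>;                        // explicit instantiation, so gcov sees the template

void reference_for_coverage() {
    ignore(&twice);                                // referencing is enough, no call needed
}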

How to verify a Bazel project for correctness?

How can I verify that my entire project does not contain errors (say, references to targets which are not declared anywhere)?
In a static language, whenever my code references something that doesn't exist, I get compiler errors. Is there a way to perform an equivalent check with bazel?
bazel build --nobuild //... has a similar effect: it evaluates all the rules (and fails with any errors), but doesn't actually build anything.
Add any additional flags you would use with the full build you're checking against. Most flags make rules evaluate differently, so you might see different errors depending on which flags you use.
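For example (the flag values are only illustrative):

# analyze every target without compiling or linking anything
bazel build --nobuild //...
# repeat for the configurations you actually ship, since errors can be configuration-dependent
bazel build --nobuild --cpu=aarch64 -c opt //...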
A set of Bazel targets can build correctly for some configurations but not others. For example, if there's a select without a default like this:
cc_library(
    name = "something",
    srcs = select({
        ":cpu_k8": ["something_k8.cc"],
    }),
)
then it will build with --cpu=k8 but not --cpu=aarch64. This means you have to specify the same set of flags when checking as with a full build.
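If you want the target to evaluate (and build) under every configuration, the usual fix is to add a default branch to the select, for example (something_generic.cc is hypothetical):

cc_library(
    name = "something",
    srcs = select({
        ":cpu_k8": ["something_k8.cc"],
        "//conditions:default": ["something_generic.cc"],
    }),
)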

AST of a project by Clang

I use the Clang Python binding to extract the AST of C/C++ files. It works perfectly for a simple program I wrote. The problem is when I want to use it on a big project like openssl. I can run clang on any single file of the project, but clang seems to miss some headers of the project and gives me the AST of only a few of the functions in the file, not all of them. I set the include folder with -I, but I still get only part of the functions.
This is my code:
import clang.cindex as cl
cl.Config.set_library_path(clang_lib_dir)  # directory containing libclang
index = cl.Index.create()
lib = 'Path to include folder'
args = ['-I{}'.format(lib)]
translation_unit = index.parse(source_file, args=args)
my_get_info(translation_unit.cursor)  # my own AST walker
I receive many "header file not found" errors.
UPDATE
I used Make to compile openssl with clang. I can pass the -emit-ast option to clang to dump the AST of each file, but I cannot read the result with the clang Python binding.
Any clues how I can save the serialized representation of the translation units so that I will be able to read it back with index.read()?
Thank you!
You would "simply" need to provide the right args. But be aware of two possible issues.
Different files may require different arguments for parsing. The easiest solution is to obtain a compilation database and then extract the compile commands from it. If you go this way, be aware that you need to filter the arguments a bit and remove things like -c FooBar.cpp (and potentially some others), otherwise you may get something like an ASTReadError.
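clang.cindex can read the compilation database directly; a rough sketch (build_dir must contain compile_commands.json, clang_lib_dir and source_file are placeholders, and the argument filtering is deliberately crude):

import clang.cindex as cl

cl.Config.set_library_path(clang_lib_dir)
index = cl.Index.create()

db = cl.CompilationDatabase.fromDirectory(build_dir)
cmd = db.getCompileCommands(source_file)[0]               # first compile command for this file
args = list(cmd.arguments)[1:]                            # drop the compiler executable itself
args = [a for a in args if a not in ('-c', source_file)]  # crude filtering; -o <file> may also need removing
translation_unit = index.parse(source_file, args=args)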
Another issue is that the include paths (-I ...) may be relative to the source directory. That is, if a file main.cpp is compiled from the directory /opt/project/ with an -I include/path argument, then before calling index.parse(source_file, args=args) you need to step (chdir) into /opt/project, and when you are done you will probably need to go back to the original working directory. The code may look like this (pseudocode):
from os import getcwd, chdir
cwd = getcwd()
chdir('/opt/project')
translation_unit = index.parse(source_file, args=args)
chdir(cwd)
I hope it helps.

waf does not correctly detect C++ #include dependencies

I have C++ header file dependencies that I specify in my waf script with the includes=... parameter to bld.program().
I know the waf build configuration sees the includes because my program compiles correctly.
However, when I change a header file, waf does not detect the change. That is, when I run waf build after changing the contents of an included header, nothing gets recompiled.
Isn't waf supposed to determine #include "..." dependencies automatically?
How can I troubleshoot this?
I have looked in the build/c4che directory to see if I could make sense of the configuration files stored there. Mention of "include" in the waf generated .py files is suspiciously absent.
I am using waf version 1.9.0.
I have also tried this with waf 1.8.19 and got the same result.
EDIT: I replaced my original complicated wscript with the much simpler one listed below, and I still get the same behavior.
Here is my wscript:
top = '.'
out = 'build'
CXXFLAGS = ['-fopenmp', '-Wall', '-Werror', '-std=c++11', '-Wl,--no-as-needed']
def options(ctx):
    ctx.load('compiler_cxx')

def configure(ctx):
    ctx.load('compiler_cxx')
    ctx.env.CXXFLAGS = CXXFLAGS

def build(ctx):
    ctx.program(source="test_config_parser.cpp", target="test_config_parser", includes=["../include"], lib=['pthread', 'gomp'])
Your problem is that your include directory is outside of the project's directory. By default, waf does not track external includes (like system includes) as dependencies, to speed things up. Solutions I know of:
1/
Organize your project to have your include directory under the waf top directory:
top_dir/
    wscript
    include/
        myinclude.h
    sources/
        mysource.cpp
2/
Change the top directory. I think top = '..' should work (not tested).
3/
Tell waf to go absolute by adding these lines at the beginning of build():
import waflib.Tools.c_preproc
waflib.Tools.c_preproc.go_absolute = True
waflib.Tools.c_preproc.standard_includes = []
4/
Use gcc dependencies by loading the gccdeps waf module.
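If you go the gccdeps route, the module is loaded in configure(); a minimal sketch based on the wscript above (assuming the gccdeps extra is shipped with your waf binary):

def configure(ctx):
    ctx.load('compiler_cxx')
    ctx.load('gccdeps')   # let gcc emit dependency info (-MD) and feed it back to waf
    ctx.env.CXXFLAGS = CXXFLAGS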
Solution 1/ is probably the best.
By the way, I prefer to have my build directory out of the source tree. Use out = '../build' in your wscript if you want to build out of the source tree.
my2c

Strip names after linking a static library

In a static library project for iOS 6, some functions in a .c file are referenced by others and are therefore global symbols, but they should not be exposed to the users of this library.
How can I strip these function names out? Also, how can I hide the object file names so that nobody can see the .o names in the nm output?
I have tried to enable/set:
Deployment Postprocessing
Strip Debug Symbols During Copy
Strip Linked Product
Strip Style: either 'Non-Global Symbols' or 'Debugging Symbols'
Use Separate Strip
EDIT:
I see that there is another build setting, 'Additional Strip Flags'.
By adding the flag -R /path/to/symbol_list_file there, the strip command removes the symbols listed in the file; alternatively, -s /path/to/exported_symbol_list_file -u keeps only the listed interface symbols while leaving undefined symbols in place.
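On the command line the equivalent would look roughly like this (paths and the library name are placeholders; whether strip accepts the .a archive directly or has to be run per object file may depend on the toolchain version):

# remove exactly the symbols listed in the file
strip -R /path/to/symbol_list_file libMyLib.a
# or: keep only the listed symbols, leaving undefined symbols alone
strip -s /path/to/exported_symbol_list_file -u libMyLib.a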
No, you can't. A static library is simply a collection of object files and the object files within the static library have no special privileges over those using the static library.
You can obviously strip the final binary.
If you must hide symbols then they need to be static, which forces you to use fewer implementation files so the symbols can be shared, which is inconvenient.
