How to make qmake do a clean rebuild if DEFINES are changed

... which it should do, but does not.
This is one of my major frustrations with qmake. qbs is our Qt future, but for now we are stuck with qmake. So, what can be done?

I abuse QMAKE_EXTRA_COMPILERS to accomplish this. I have to use it because I need the DEFINES value after all features have been processed.
# in this function all the work is done
defineReplace(checkDefinesForChanges) {
    old_def = $$cat($$OUT_PWD/defines.txt)
    curr_def = $$DEFINES
    curr_def -= $$old_def
    old_def -= $$DEFINES
    diff = $$old_def $$curr_def
    # delete all files in OUT_PWD if macros were changed
    # (the del command is Windows-specific)
    !isEmpty(diff) {
        A = $$system(del /F /Q /S $$system_path($${OUT_PWD}/*.*))
        message(DEFINES WERE CHANGED)
    }
    write_file($$OUT_PWD/defines.txt, DEFINES)
    return(???)
}
# use QMAKE_EXTRA_COMPILERS to launch checkDefinesForChanges
# after all feature processing
_defines_check_ = ???
defines_check.name = check on defines being changed
defines_check.input = _defines_check_
defines_check.CONFIG += no_link ignore_no_exist
defines_check.depends = ???
defines_check.commands = ???
defines_check.output_function = checkDefinesForChanges
defines_check.clean = 333
QMAKE_EXTRA_COMPILERS += defines_check

# make sure qmake is rerun if defines.txt is deleted
recompile_on_defines_txt_not_existing.target = $(MAKEFILE)
recompile_on_defines_txt_not_existing.depends = $$OUT_PWD/defines.txt
recompile_on_defines_txt_not_existing2.target = $$OUT_PWD/defines.txt
recompile_on_defines_txt_not_existing2.depends = qmake
QMAKE_EXTRA_TARGETS += recompile_on_defines_txt_not_existing recompile_on_defines_txt_not_existing2
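Note that the del command inside checkDefinesForChanges is Windows-only. A cross-platform variant of that line might look like this (a sketch, assuming a POSIX shell on non-Windows hosts, not part of the original recipe):
win32: A = $$system(del /F /Q /S $$system_path($${OUT_PWD}/*.*))
else:  A = $$system(rm -rf $${OUT_PWD}/*)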
Source in Russian

Related

Custom C++ rule with the cc_common API

I'm trying to write a custom rule to compile C++ code using the cc_common API. Here's my current attempt at an implementation:
load("#bazel_tools//tools/cpp:toolchain_utils.bzl", "find_cpp_toolchain")
load("#bazel_tools//tools/build_defs/cc:action_names.bzl", "C_COMPILE_ACTION_NAME")
def _impl(ctx):
cc_toolchain = find_cpp_toolchain(ctx)
feature_configuration = cc_common.configure_features(
cc_toolchain = cc_toolchain,
unsupported_features = ctx.disabled_features,
)
compiler = cc_common.get_tool_for_action(
feature_configuration=feature_configuration,
action_name=C_COMPILE_ACTION_NAME
)
compile_variables = cc_common.create_compile_variables(
feature_configuration = feature_configuration,
cc_toolchain = cc_toolchain,
)
compiler_options = cc_common.get_memory_inefficient_command_line(
feature_configuration = feature_configuration,
action_name = C_COMPILE_ACTION_NAME,
variables = compile_variables,
)
outfile = ctx.actions.declare_file("test.o")
args = ctx.actions.args()
args.add_all(compiler_options)
ctx.actions.run(
outputs = [outfile],
inputs = ctx.files.srcs,
executable = compiler,
arguments = [args],
)
return [DefaultInfo(files = depset([outfile]))]
However, this fails with the error "execvp(external/local_config_cc/wrapped_clang, ...): No such file or directory". I assume this is because get_tool_for_action returns a string representing a path, not a File object, so Bazel doesn't add wrapped_clang to the sandbox. Executing the rule with sandboxing disabled seems to confirm this, as it completes successfully.
Is there a way to implement this custom rule without disabling the sandbox?
If you use ctx.actions.run_shell, you can add the files associated with the toolchain to the inputs (ctx.attr._cc_toolchain.files). You'll also want to add the compiler environment variables, e.g.:
srcs = depset(ctx.files.srcs)
tools = ctx.attr._cc_toolchain.files
...
compiler_env = cc_common.get_environment_variables(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
    variables = compile_variables,
)
...
args = ctx.actions.args()
args.add_all(compiler_options)
ctx.actions.run_shell(
    outputs = [outfile],
    inputs = depset(transitive = [srcs, tools]),  # merge srcs and tools depsets
    command = "{compiler} $*".format(compiler = compiler),
    arguments = [args],
    env = compiler_env,
)
Bazel doesn't add files as action inputs automatically; you have to do it explicitly, as in the run_shell approach above (ctx.attr._cc_toolchain.files). With that, plain ctx.actions.run should work just fine.
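For what it's worth, a minimal sketch of that point, assuming the rule declares a _cc_toolchain attribute with the conventional default "@bazel_tools//tools/cpp:current_cc_toolchain" so that cc_toolchain.all_files is available:
ctx.actions.run(
    outputs = [outfile],
    # srcs plus all toolchain files, so wrapped_clang is present in the sandbox
    inputs = depset(ctx.files.srcs, transitive = [cc_toolchain.all_files]),
    executable = compiler,
    arguments = [args],
    env = compiler_env,  # from cc_common.get_environment_variables, as above
)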

Filter source files for custom rule

I used a Bazel macro to run a Python test on a subset of source files, similar to this:
def report(name, srcs):
    source_labels = [file for file in srcs if file.startswith("a")]
    if len(source_labels) == 0:
        return
    source_filenames = ["$(location %s)" % x for x in source_labels]
    native.py_test(
        name = name + "_report",
        srcs = ["report_tool.py"],
        data = source_labels,
        main = "report_tool.py",
        args = source_filenames,
    )

report("foo", ["foo.hpp", "afoo.hpp"])
This worked fine until one of my source files started using a select, and now I get this error:
File "/home/david/foo/report.bzl", line 47, in report
[file for file in srcs if file.startswith("a")]
type 'select' is not iterable
I tried to move the code to a Bazel rule, but then I get a different error: py_test cannot be used in the analysis phase.
The reason the select causes the error is that macros are evaluated during the loading phase, whereas selects are not evaluated until the analysis phase (see the Bazel Extension Overview).
Similarly, py_test can't be used in a rule implementation because rule implementations are evaluated in the analysis phase, whereas the py_test would need to have been loaded during the loading phase.
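To make the phase distinction concrete: a select can be passed straight through to a rule attribute, because attributes are resolved during the analysis phase; it just can't be iterated in macro code. A hedged sketch (the config_setting label is made up):
# BUILD file: legal, the select is resolved at analysis time inside the rule.
filegroup(
    name = "report_srcs",
    srcs = ["foo.hpp"] + select({
        "//config:with_extras": ["afoo.hpp"],  # hypothetical config_setting
        "//conditions:default": [],
    }),
)

# Macro code: fails at load time with "type 'select' is not iterable".
# source_labels = [f for f in srcs if f.startswith("a")]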
One way past this is to create a separate Starlark rule that takes a list of labels and just creates a file listing each label's filename. The py_test then takes that file as data and reads the other filenames from it. Something like this:
def report(name, srcs):
    file_locations_label = "_" + name + "_file_locations"
    _generate_file_locations(
        name = file_locations_label,
        labels = srcs,
    )
    native.py_test(
        name = name + "_report",
        srcs = ["report_tool.py"],
        data = srcs + [file_locations_label],
        main = "report_tool.py",
        args = ["$(location %s)" % file_locations_label],
    )

def _generate_file_locations_impl(ctx):
    paths = []
    for l in ctx.attr.labels:
        f = l.files.to_list()[0]
        if f.basename.startswith("a"):
            paths.append(f.short_path)
    ctx.actions.write(ctx.outputs.file_paths, "\n".join(paths))
    return DefaultInfo(runfiles = ctx.runfiles(files = [ctx.outputs.file_paths]))

_generate_file_locations = rule(
    implementation = _generate_file_locations_impl,
    attrs = {"labels": attr.label_list(allow_files = True)},
    outputs = {"file_paths": "%{name}_files"},
)
This has one disadvantage: because the py_test depends on all the sources, it will be rerun even if the only files that changed are the ignored ones. (If this is a significant drawback, there is at least one way around it, which is to have _generate_file_locations filter the files too and make the py_test depend only on _generate_file_locations. This could perhaps be accomplished through runfiles symlinks.)
Update:
Since the test report tool comes from an external repository and can't easily be modified, here's another approach that might work better. Rather than creating a rule that writes a params file (a file containing the paths to process) as above, the Starlark rule can itself be a test rule that uses the report tool as the test executable:
def _report_test_impl(ctx):
    filtered_srcs = []
    for src in ctx.attr.srcs:
        f = src.files.to_list()[0]
        if f.basename.startswith("a"):
            filtered_srcs.append(f)
    report_tool = ctx.attr._report_test_tool
    ctx.actions.write(
        output = ctx.outputs.executable,
        content = "{report_tool} {paths}".format(
            report_tool = report_tool.files_to_run.executable.short_path,
            paths = " ".join([f.short_path for f in filtered_srcs]),
        ),
        is_executable = True,  # the test runner must be able to execute this script
    )
    runfiles = ctx.runfiles(files = filtered_srcs).merge(report_tool.default_runfiles)
    return DefaultInfo(runfiles = runfiles)

report_test = rule(
    implementation = _report_test_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_report_test_tool": attr.label(default = "//:report_test_tool"),
    },
    test = True,
)
This requires that the test report tool be a py_binary somewhere so that the test rule above can depend on it:
py_binary(
    name = "report_test_tool",
    srcs = ["report_tool.py"],
    main = "report_tool.py",
)
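For completeness, a hypothetical BUILD usage of the test rule (file and target names assumed):
load("//:report.bzl", "report_test")

report_test(
    name = "foo_report",
    srcs = ["foo.hpp", "afoo.hpp"],  # only afoo.hpp survives the startswith("a") filter
)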

How to split a Torch class into several files in a Lua rock

I recently aided in the development of a Dataframe package for Torch. As the code base quickly doubled in size, there was a need to split the class into several files for better organization and follow-up (issue #8).
A simple test class would be a test.lua file in the root folder of the test package:
test = torch.class('test')

function test:__init()
    self.data = {}
end

function test:a()
    print("a")
end

function test:b()
    print("b")
end
Now the rockspec for this would simply be:
package = "torch-test"
version = "0.1-1"
source = {
url = "..."
}
description = {
summary = "A test class",
detailed = [[
Just an example
]],
license = "MIT/X11",
maintainer = "Jon Doe"
}
dependencies = {
"lua ~> 5.1",
"torch >= 7.0",
}
build = {
type = 'builtin',
modules = {
["test"] = 'test.lua',
}
}
In order to get multiple files to work for a single class, the class object created initially must be returned and passed on to the subsections. The above example can be put into the file structure:
\init.lua
\main.lua
\test-0.1-1.rockspec
\Extensions\a.lua
\Extensions\b.lua
The luarocks install/make copies the files according to the require syntax, where each dot signifies a directory and the .lua extension is left out, i.e. we need to change the rockspec to:
package = "torch-test"
version = "0.1-1"
source = {
url = "..."
}
description = {
summary = "A test class",
detailed = [[
Just an example
]],
license = "MIT/X11",
maintainer = "Jon Doe"
}
dependencies = {
"lua ~> 5.1",
"torch >= 7.0",
}
build = {
type = 'builtin',
modules = {
["test.init"] = 'init.lua',
["test.main"] = 'main.lua',
["test.Extensions.a"] = 'a.lua',
["test.Extensions.b"] = 'b.lua'
}
}
The above thus creates a test folder where the package files and subdirectories reside. The class initialization now resides in main.lua, which returns the class object:
test = torch.class('test')

function test:__init()
    self.data = {}
end

return test
The extension files now need to pick up the class object that is passed to them via loadfile() (see the init.lua file below). a.lua should now look like this:
local params = {...}
local test = params[1]

function test:a()
    print("a")
end
and similarly for b.lua:
local params = {...}
local test = params[1]

function test:b()
    print("b")
end
To glue everything together we have the init.lua file. The following is probably a little over-complicated, but it takes care of:

- finding all available extensions and loading them (note: this requires LuaFileSystem, which you should add to the rockspec; you still need to list each extension file in the rockspec or it won't end up in the Extensions folder)
- identifying the package path
- loading main.lua
- working in a pure testing environment, without the package installed
The code for init.lua:
require 'lfs'

local file_exists = function(name)
    local f = io.open(name, "r")
    if f ~= nil then io.close(f) return true else return false end
end

-- If we're in development mode the default path should be the current directory
local test_path = "./?.lua"
local search_4_file = "Extensions/a"  -- a file known to ship in Extensions
if (not file_exists(string.gsub(test_path, "?", search_4_file))) then
    test_path = nil
    -- split all paths according to ;
    for path in string.gmatch(package.path, "[^;]+;") do
        -- remove trailing ;
        path = string.sub(path, 1, string.len(path) - 1)
        if (file_exists(string.gsub(path, "?", "test/" .. search_4_file))) then
            test_path = string.gsub(path, "?", "test/?")
            break
        end
    end
    if (test_path == nil) then
        error("Can't find package files in search path: " .. tostring(package.path))
    end
end

local main_file = string.gsub(test_path, "?", "main")
local test = assert(loadfile(main_file))()

-- Load all extensions, i.e. .lua files in the Extensions directory
local ext_path = string.gsub(test_path, "[^/]+$", "") .. "Extensions/"
for extension_file in lfs.dir(ext_path) do
    if (string.match(extension_file, "[.]lua$")) then
        local file = ext_path .. extension_file
        assert(loadfile(file))(test)
    end
end

return test
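For reference, a quick usage sketch once the rock is installed (assuming the package is required as test):
local test = require 'test'  -- runs init.lua, which loads main.lua plus all extensions
local t = test()             -- torch.class constructor calls test:__init()
t:a()                        -- prints "a"
t:b()                        -- prints "b"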
I hope this helps if you run into the same problem and find the documentation a little too sparse. If you happen to know a better solution, please share.

How to tell qmake NOT to create a folder?

I want to configure qmake so that my executables go under ./build/debug (or release). I've done that successfully with the following code:
CONFIG(debug, debug|release) {
    DESTDIR = ./build/debug
    TARGET = mShareLibd
}
CONFIG(release, debug|release) {
    DESTDIR = ./build/release
    TARGET = mShareLib
}
Everything works fine apart from the fact that qmake still creates two folders, namely "debug" and "release", in the project's root directory, so I end up with a "build", a "debug" (always empty) and a "release" (always empty) folder.
How can I tell qmake NOT to create these two folders? I asked this question on the QtCentre forum, but the approach suggested there didn't seem reasonable to me. Isn't there a more reasonable approach, such as a command that simply tells qmake not to create these folders?
Thanks,
Momergil
EDIT
Bill asked me to copy and paste my .pro file here. Here is an abridged version (most of the header and source files are not included):
#qmake defines
MSHARE_REPO = $${PWD}/..
MSHARE_COMMON = $${MSHARE_REPO}/Common
MSHARE_LIB = $${MSHARE_REPO}/mShareLib
MLOGGER = $${MSHARE_REPO}/../Classes/mLogger
#inclusion
QT += core gui network multimedia sql
qtHaveModule(printsupport): QT += printsupport
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
CONFIG += qwt
#CONFIG *= precompile_header
#PRECOMPILED_HEADER = stdafx.h
#HEADERS += stdafx.h
TARGET = mShare
TEMPLATE = app
VER_MAJ = 0
VER_MIN = 0
VER_PAT = 7
VERSION = $${VER_MAJ}.$${VER_MIN}.$${VER_PAT}
INCLUDEPATH += $${MSHARE_REPO} \
    $${MSHARE_COMMON} \
    C:\Qt\Qwt-6.1.0\include
LIBS += $${PWD}/SMTPEmail.dll
DEFINES += MGENERALDEFINES_GUI \
    MGENERALDEFINES_DEBUG \
    MGENERALDEFINES_GENERAL \
    QWT_INCLUDED \
    APP_VERSION=\\\"$$VERSION\\\"
win32 {
    LIBS += -lpsapi
    CONFIG(debug, debug|release) {
        LIBS += C:/Qt/Qwt-6.1.0/lib/qwtd.dll \
            $${MLOGGER}/build/debug/mLogger.dll \
            $${MSHARE_LIB}/build/debug/mShareLibd.dll
        DEFINES += DEBUG
        DESTDIR = ./build/debug
    }
    CONFIG(release, debug|release) {
        LIBS += C:/Qt/Qwt-6.1.0/lib/qwt.dll \
            $${MLOGGER}/build/release/mLogger.dll \
            $${MSHARE_LIB}/build/release/mShareLib.dll
        DEFINES += RELEASE \
            QT_NO_DEBUG \
            QT_NO_DEBUG_OUTPUT
        DESTDIR = ./build/release
    }
} # win32
#others
MOC_DIR = $${DESTDIR}/.moc
OBJECTS_DIR = $${DESTDIR}/.obj
UI_DIR = $${DESTDIR}/.ui
RCC_DIR = $${DESTDIR}/.rcc
########################################################################
HEADERS += AppDefines.hpp \
    mreadwrite.hpp \
    system/appbrain.hpp \
    ...
SOURCES += main.cpp \
    mreadwrite.cpp \
    system/appbrain.cpp \
    ...
FORMS += \
    interface/entracedialog.ui \
    interface/validationdialog.ui \
    ...
OTHER_FILES += Files/CandlePatternProbabilities.txt \
    Project_Files/Readme.txt \
    ...
RESOURCES += \
    Icons.qrc \
    Setups.qrc \
    GeneralFiles.qrc
RC_FILE = icone.rc
#TRANSLATIONS += DEFAULT_THEME_PATH/translations/app_pt.ts \
# DEFAULT_THEME_PATH/translations/app_de.ts
I think I've found the solution by looking at qmake's source code: set the PRECOMPILED_DIR variable.
It works with Qt 5. Since the qmake source code doesn't change much, I think it also works with Qt 4.
CONFIG(debug, debug|release) {
    DESTDIR = ./build/debug
    PRECOMPILED_DIR = ./build/debug
    TARGET = mShareLibd
}
CONFIG(release, debug|release) {
    DESTDIR = ./build/release
    PRECOMPILED_DIR = ./build/release
    TARGET = mShareLib
}
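For reference, here is the same fix combined with the intermediate-file variables already used in the .pro above, so everything lands under build/ (a sketch, not verified on every Qt version):
CONFIG(debug, debug|release) {
    DESTDIR = ./build/debug
    PRECOMPILED_DIR = ./build/debug
}
CONFIG(release, debug|release) {
    DESTDIR = ./build/release
    PRECOMPILED_DIR = ./build/release
}
MOC_DIR = $${DESTDIR}/.moc
OBJECTS_DIR = $${DESTDIR}/.obj
UI_DIR = $${DESTDIR}/.ui
RCC_DIR = $${DESTDIR}/.rcc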

How can I use qmake to copy files recursively

In my source tree I have a bunch of resources, and I want make install to copy them to my defined target path. Since the resource tree has many, many subdirectories, I want qmake to find all the files recursively.
I tried:
resources.path = /target/path
resources.files += `find /path/to/resources`
INSTALLS += resources
and:
resources.path = /target/path
resources.files += /path/to/resources/*
resources.files += /path/to/resources/*/*
resources.files += /path/to/resources/*/*/*
resources.files += /path/to/resources/*/*/*/*
INSTALLS += resources
Neither has the result I was hoping for.
I have done it like this:
res.path = $$OUT_PWD/targetfolder
res.files = sourcefolder
INSTALLS += res
This copies "wherever this qmake script is"/sourcefolder into buildfolder/targetfolder,
so you end up with targetfolder/sourcefolder/"all other subfolders and files..."
Example:
#(My .pro file's dir) $$PWD = /mysources/
#(My Build dir) $$OUT_PWD = /project_build/
extras.path = $$OUT_PWD
extras.files += extras
src.path = $$OUT_PWD
src.files += src
INSTALLS += extras src
This would copy /mysources/extras to /project_build/extras and /mysources/src to /project_build/src.
It appears that directories are installed with 'cp -r -f', so this does the trick:
resources.path = /target/path
resources.files += /path/to/resources/dir1
resources.files += /path/to/resources/dir2
resources.files += /path/to/resources/dir3
resources.files += /path/to/resources/dirn # and so on...
resources.files += /path/to/resources/* # don't forget the files in the root
INSTALLS += resources
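As an aside, newer qmake versions (Qt 5.4 or later, if I recall correctly) can also build the file list explicitly with the recursive form of $$files(), which avoids guessing the directory depth:
resources.path = /target/path
resources.files += $$files(/path/to/resources/*, true)  # true = recurse into subdirectories
INSTALLS += resources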
