I have some code in Delphi that has Assert statements all over it. I know that there is a compiler directive {$C-}, but there are too many units to add it to. Is there a way to have it done by the compiler command line or somewhere in the dpr file?
You can pass the -$C- switch on the command line as well, or configure it in 'Project->Options->Compiler' from the IDE (which stores the setting in the .dproj file).
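For example, a command-line build that disables assertion code for every unit might look like this (the project name is a placeholder):
dcc32 -$C- MyProject.dpr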
There's a list of command line switches and options available by typing dcc32 from the command line. It can be redirected to a text file using command redirection (as in dcc32 > dccCommands.txt), which produces the following output with XE5's version of dcc32:
Embarcadero Delphi for Win32 compiler version 26.0
Copyright (c) 1983,2013 Embarcadero Technologies, Inc.
Syntax: dcc32 [options] filename [options]
-A<unit>=<alias> = Set unit alias
-B = Build all units
-CC = Console target
-CG = GUI target
-D<syms> = Define conditionals
-E<path> = EXE/DLL output directory
-F<offset> = Find error
-GD = Detailed map file
-GP = Map file with publics
-GS = Map file with segments
-H = Output hint messages
-I<paths> = Include directories
-J = Generate .obj file
-JPHNE = Generate C++ .obj file, .hpp file, in namespace, export all
-JL = Generate package .lib, .bpi, and all .hpp files for C++
-K<addr> = Set image base addr
-LE<path> = package .bpl output directory
-LN<path> = package .dcp output directory
-LU<package> = Use package
-M = Make modified units
-NU<path> = unit .dcu output directory
-NH<path> = unit .hpp output directory
-NO<path> = unit .obj output directory
-NB<path> = unit .bpi output directory
-NX<path> = unit .xml output directory
-NS<namespaces> = Namespace search path
-O<paths> = Object directories
-P = look for 8.3 file names also
-Q = Quiet compile
-R<paths> = Resource directories
-TX<ext> = Output name extension
-U<paths> = Unit directories
-V = Debug information in EXE
-VR = Generate remote debug (RSM)
-VT = Debug information in TDS
-VN = TDS symbols in namespace
-W[+|-|^][warn_id] = Output warning messages
-Z = Output 'never build' DCPs
-$<dir> = Compiler directive
--help = Show this help screen
--version = Show name and version
--codepage:<cp> = specify source file encoding
--default-namespace:<namespace> = set namespace
--depends = output unit dependency information
--doc = output XML documentation
--drc = output resource string .drc file
--no-config = do not load default dcc32.cfg file
--description:<string> = set executable description
--inline:{on|off|auto} = function inlining control
--legacy-ifend = allow legacy $IFEND directive
--zero-based-strings[+|-] = strings are indexed starting at 0
--peflags:<flags> = set extra PE Header flags field
--peoptflags:<flags> = set extra PE Header optional flags field
--peosversion:<major>.<minor> = set OS Version fields in PE Header (default: 5.0)
--pesubsysversion:<major>.<minor> = set Subsystem Version fields in PE Header (default: 5.0)
--peuserversion:<major>.<minor> = set User Version fields in PE Header (default: 0.0)
Compiler switches: -$<letter><state> (defaults are shown below)
A8 Aligned record fields
B- Full boolean Evaluation
C+ Evaluate assertions at runtime
D+ Debug information
G+ Use imported data references
H+ Use long strings by default
I+ I/O checking
J- Writeable structured consts
L+ Local debug symbols
M- Runtime type info
O+ Optimization
P+ Open string params
Q- Integer overflow checking
R- Range checking
T- Typed @ operator
U- Pentium(tm)-safe divide
V+ Strict var-strings
W- Generate stack frames
X+ Extended syntax
Y+ Symbol reference info
Z1 Minimum size of enum types
Stack size: -$M<minStackSize[,maxStackSize]> (default 16384,1048576)
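Assuming you drive builds with dcc32 directly, another option is to put the switch into the dcc32.cfg file that the compiler loads by default (note the --no-config option above), so it applies to every compile without touching individual units, i.e. a dcc32.cfg containing the line:
-$C-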
I am using Doxygen for documentation of my project. I have a compile_commands.json file describing the source code of my project inside the directory C:\dev\project_dir. I set the following variables:
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir
So how does this work? Do I also have to set the variables INPUT and INCLUDE_PATH? It seems that all the files and the instructions to compile them, including where to get header files from, are written in the compilation database.
And if I do have to set the variables INPUT and INCLUDE_PATH as well, what should I set them to? The compilation database lists the source and header files of the project, which are scattered among multiple different directories. How should I proceed in this situation?
I found the answer.
So I set the following variables in the Doxyfile.
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir
And I set the following variables as blank:
INPUT =
INCLUDE_PATH =
INPUT specifies the paths to the source files (*.c, *.cpp) and/or directories to be processed. INCLUDE_PATH specifies the paths to the header files (*.h, *.hpp) and/or directories to be processed.
CLANG_ASSISTED_PARSING = YES enables the clang compiler as the parser instead of the default doxygen parser. So if INPUT and INCLUDE_PATH are not set, doxygen gets the source and header files from the compilation database itself. CLANG_DATABASE_PATH specifies the directory in which the compilation database is stored. Doxygen looks for a file named compile_commands.json in that directory, which means the name of the compilation database is fixed: if you name it anything other than compile_commands.json, doxygen won't be able to find it.
So when a clang compilation database JSON file is used, all the *.c and *.cpp files being compiled end up in INPUT, and all the header locations end up in INCLUDE_PATH. The clang parser used by doxygen reads the JSON, and every time it encounters a -I compiler flag it treats that path as a header location and adds it to INCLUDE_PATH. This means that setting INPUT and INCLUDE_PATH is not mandatory: if the compilation database is properly formed and all the header directories are explicitly passed with -I, setting CLANG_DATABASE_PATH alone is sufficient.
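For reference, a compilation database is just a JSON array of entries like the following (paths here are made up for illustration):
[
  {
    "directory": "C:/dev/project_dir",
    "command": "clang++ -IC:/dev/project_dir/include -c src/code.cpp -o src/code.o",
    "file": "src/code.cpp"
  }
]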
But there is one situation in which INCLUDE_PATH also needs to be set explicitly: for example, when a source file includes a header file which in turn includes another header file.
first.h
int one(int);
second.h
#include "first.h"
int two(int);
code.cpp
#include "second.h"
int main(void) {}
And the command in the compilation database is this:
clang -I path/to/second.h -c code.cpp
In this case doxygen reads that entry and internally sets the variables as follows:
INPUT = code.cpp
INCLUDE_PATH = path/to/second.h
This means that although doxygen will index second.h, it will miss first.h, since that header's location isn't explicitly provided via -I in the compilation database. That would be an error. So we need to list it explicitly in the Doxyfile as an additional include path.
INPUT =
INCLUDE_PATH = path\to\first.h
CLANG_ASSISTED_PARSING = YES
CLANG_DATABASE_PATH = C:\dev\project_dir
What is the best way to refer to an external package's path in any arbitrary files processed by Bazel?
I'm trying to understand how Bazel preprocesses BUILD and .bzl files. I see instances where strings contain calls to package() and I am wondering how it works (and could not find any relevant documentation). Here is an example of this:
I have a toolchain whose BUILD file contains the following expression:
cc_toolchain_config(
    name = "cc-toolchain-config",
    abi_libc_version = "glibc_" + host_gcc8_bundle()["pkg_version"]["glibc"],
    abi_version = "gcc-" + host_gcc8_bundle()["version"],
    compiler = "gcc-" + host_gcc8_bundle()["version"],
    cpu = "x86_64",
    cxx_builtin_include_directories = [
        "%package(@host_gcc8_toolchain//include/c++/8)%",
        "%package(@host_gcc8_toolchain//lib64/gcc/x86_64-unknown-linux-gnu/8/include-fixed)%",
        "%package(@host_gcc8_kernel_headers//include)%",
        "%package(@host_gcc8_glibc//include)%",
    ],
    host_system_name = "x86_64-unknown-linux-gnu",
    target_libc = "glibc_" + host_gcc8_bundle()["pkg_version"]["glibc"],
    target_system_name = "x86_64-unknown-linux-gnu",
    toolchain_identifier = "host_linux_gcc8",
)
From my understanding, cxx_builtin_include_directories defines a list of strings to serve as the --sysroot option passed to GCC, as detailed in https://docs.bazel.build/versions/0.23.0/skylark/lib/cc_common.html. These strings are in the %sysroot% format.
Since package(@host_gcc8_toolchain//include/c++/8), for example, does not mean anything to GCC, Bazel has to somehow expand this expression to produce the actual path to the files in the package before passing them to the compiler driver.
But how can it determine that this needs to be expanded and that it is not a regular string? How does Bazel preprocess the BUILD file? Is it because of the % ... % pattern? Where is this documented?
Is "%package(@external_package//target)%" a pattern that can be used elsewhere? In any BUILD file? Where do I find Bazel documentation showing how this works?
These directives are expanded by cc_common.create_cc_toolchain_config_info within the cc_toolchain_config rule implementation, not by any preprocessing of the BUILD file (i.e., "%package(@host_gcc8_glibc//include)%" is passed literally into the cc_toolchain_config rule). I'm not aware that these special expansions are fully documented anywhere but the source.
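To make that concrete, here is a minimal, simplified sketch (not the actual toolchain's code; the hard-coded values and attribute names are placeholders) of a cc_toolchain_config-style rule. The %package(...)% strings arrive as ordinary attribute strings and are only expanded inside cc_common.create_cc_toolchain_config_info:
def _impl(ctx):
    # At this point the "%package(@repo//pkg)%" entries are plain strings;
    # create_cc_toolchain_config_info is what expands them.
    return cc_common.create_cc_toolchain_config_info(
        ctx = ctx,
        toolchain_identifier = "host_linux_gcc8",
        host_system_name = "x86_64-unknown-linux-gnu",
        target_system_name = "x86_64-unknown-linux-gnu",
        target_cpu = "x86_64",
        target_libc = "glibc_2.28",
        compiler = "gcc-8",
        abi_version = "gcc-8",
        abi_libc_version = "glibc_2.28",
        cxx_builtin_include_directories = ctx.attr.cxx_builtin_include_directories,
    )

cc_toolchain_config = rule(
    implementation = _impl,
    attrs = {
        "cxx_builtin_include_directories": attr.string_list(),
    },
)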
Starting with Bazel v0.19, if you have Starlark (formerly known as "Skylark") code that references @bazel_tools//tools/jdk:jar, you see messages like this at build time:
WARNING: <trimmed-path>/external/bazel_tools/tools/jdk/BUILD:79:1: in alias rule @bazel_tools//tools/jdk:jar: target '@bazel_tools//tools/jdk:jar' depends on deprecated target '@local_jdk//:jar': Don't depend on targets in the JDK workspace; use @bazel_tools//tools/jdk:current_java_runtime instead (see https://github.com/bazelbuild/bazel/issues/5594)
I think I could make things work with @bazel_tools//tools/jdk:current_java_runtime if I wanted access to the java command, but I'm not sure what I'd need to do to get the jar tool to work. The contents of the linked GitHub issue didn't seem to address this particular problem.
I stumbled across a commit to Bazel that makes a similar adjustment to the Starlark java rules. It uses the following pattern: (I've edited the code somewhat)
# in the rule attrs:
"_jdk": attr.label(
    default = Label("//tools/jdk:current_java_runtime"),
    providers = [java_common.JavaRuntimeInfo],
),

# then in the rule implementation:
java_runtime = ctx.attr._jdk[java_common.JavaRuntimeInfo]
jar_path = "%s/bin/jar" % java_runtime.java_home
ctx.action(
    inputs = ctx.files._jdk + other_inputs,  # other_inputs: whatever files go into the jar
    outputs = [deploy_jar],
    command = "%s cmf %s" % (jar_path, input_files),  # input_files: placeholder left over from editing the original
)
Additionally, java is available at str(java_runtime.java_executable_exec_path) and javac at "%s/bin/javac" % java_runtime.java_home.
See also, a pull request with a simpler example.
Because my reference to the jar tool is inside a genrule within a top-level macro, rather than inside a rule, I was unable to use the approach from Rodrigo's answer. Instead, I explicitly referenced the current_java_runtime toolchain and was then able to use the JAVABASE make variable as the base path for the jar tool.
native.genrule(
    name = genjar_rule,
    srcs = [<rules that create files being jar'd>],
    cmd = "some_script.sh $(JAVABASE)/bin/jar $@ $(SRCS)",
    tools = ["some_script.sh", "@bazel_tools//tools/jdk:current_java_runtime"],
    toolchains = ["@bazel_tools//tools/jdk:current_java_runtime"],
    outs = [<some outputs>],
)
I'm building ARM Cortex-M firmware from Bazel with a custom CROSSTOOL. I'm successfully building elf files and manually objcopying them to binary files with the usual:
path/to/my/objcopy -O binary hello.elf hello.bin
I want to make a Bazel macro or rule called cc_firmware that:
Adds the -Wl,-Map=hello.map flag to generate a map file
Changes the output elf name from hello to hello.elf
Invokes path/to/my/objcopy to convert the elf to a bin.
I don't know how to get the name of a CROSSTOOL tool (objcopy) to invoke it, and it feels wrong to have the rule know the path to the tool executable.
Is there a way to use the objcopy that I've already told Bazel about in my CROSSTOOL file?
You can actually access this from a custom rule. Basically you need to tell Bazel that you want access to the cpp configuration information (fragments = ["cpp"]) and then access its path via ctx.fragments.cpp.objcopy_executable, e.g.,:
def _impl(ctx):
    print("path: {}".format(ctx.fragments.cpp.objcopy_executable))
    # TODO: actually do something with the path...

cc_firmware = rule(
    implementation = _impl,
    fragments = ["cpp"],
    attrs = {
        "src" : attr.label(allow_single_file = True),
        "map" : attr.label(allow_single_file = True),
    },
    outputs = {"elf" : "%{name}.elf"},
)
Then create the output you want with something like (untested):
def _impl(ctx):
    src = ctx.attr.src.files.to_list()[0]
    m = ctx.attr.map.files.to_list()[0]
    ctx.action(
        command = "{objcopy} -Wl,-Map={map} -O binary {elf_out} {cc_bin}".format(
            objcopy = ctx.fragments.cpp.objcopy_executable,
            map = m.path,
            elf_out = ctx.outputs.elf.path,
            cc_bin = src.path,
        ),
        outputs = [ctx.outputs.elf],
        inputs = [src, m],
    )
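For reference, a BUILD file might then instantiate the rule along these lines (target and file names here are hypothetical):
cc_firmware(
    name = "hello_firmware",
    src = ":hello_elf",
    map = ":hello.map",
)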
As described in Undocumented qmake, I declared an extra compiler in my qmake project file:
TEST = somefile.h
test_c.output = ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_BASE}_1.cpp
test_c.input = TEST
test_c.commands = C:/somedir/some.exe /i ${QMAKE_FILE_IN} /o ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_BASE}_1.cpp
test_c.variable_out = SOURCES
test_c.name = MyTestC
QMAKE_EXTRA_COMPILERS += test_c
And this works fine. But I also want to generate a header file. I could easily make a second custom tool to parse this file (or files, if more than one ends up in TEST), but I don't want to parse each file twice. I tried:
test_c.output = ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_BASE}_1.cpp \
${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_BASE}_2.cpp
Just to test whether the extra compiler can produce two files per run. I expected an error like "file somefile_2.cpp doesn't exist", but the project compiles without errors and the second output file is ignored; somefile_2.cpp does not appear in the Makefile.
Now I'm thinking about two variants:
Make an extra compiler that produces an archive in which all the needed output files are saved at once. Set tool1.variable_out = TOOL_1_OUT, and add two more extra compilers with toolN.input = TOOL_1_OUT that just "unzip" the archived files (one per tool) and append them to the appropriate variables.
In this case three executables are invoked per input file. This is not optimal, but at least the parser runs only once per file.
Experiment with the .output_function option: make a qmake function that returns the same name as .output does now, but also appends the second filename to HEADERS.
P.S. I am using MinGW x32 4.7, QtCreator 2.7.1, Qt 5.1.0, C++11.
Your variant #2 is the right idea. This works for me:
defineReplace(addToHeaders) {
source = $$1
source_split = $$split(source, ".")
source_without_extension = $$first(source_split)
HEADERS += ${QMAKE_VAR_OBJECTS_DIR}$${source_without_extension}_1.h
return(${QMAKE_VAR_OBJECTS_DIR}$${source_without_extension}_1.cpp)
}
defineReplace(FILE_IN_addToHeaders) {
# qmake emits a warning unless this function is defined; not sure why.
}
TEST = somefile.h
test_c.output_function = addToHeaders
test_c.input = TEST
test_c.commands = cp ${QMAKE_FILE_IN} ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_BASE}_1.cpp ; cp ${QMAKE_FILE_IN} ${QMAKE_VAR_OBJECTS_DIR}${QMAKE_FILE_BASE}_1.h
test_c.variable_out = SOURCES
test_c.name = MyTestC
QMAKE_EXTRA_COMPILERS += test_c
It produces a Makefile which builds somefile_1.cpp and somefile_1.h, with somefile_1.cpp added to SOURCES and somefile_1.h added to HEADERS.
This works ok (variant #1):
MY_COMP = src/precompiled.h \
src/file2.h
GENERATE_FOLDER = generated/
# build package file
my_build.clean = $${GENERATE_FOLDER}gen_${QMAKE_FILE_BASE}.pack
my_build.depends = [somepath]/my_precompiler.exe
my_build.output = $${GENERATE_FOLDER}gen_${QMAKE_FILE_BASE}.pack
my_build.input = MY_COMP
my_build.commands = [somepath]/my_precompiler.exe /i ${QMAKE_FILE_IN} /o $${GENERATE_FOLDER}gen_${QMAKE_FILE_BASE}.pack
my_build.variable_out = MY_PACKAGES
my_build.name = "package build"
# unpack cpp
my_unpack_cpp.clean = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.cpp
my_unpack_cpp.depends = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.h
my_unpack_cpp.output = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.cpp
my_unpack_cpp.input = MY_PACKAGES
my_unpack_cpp.commands = [somepath]/my_precompiler.exe /unpack cpp /i ${QMAKE_FILE_IN} /o $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.cpp
my_unpack_cpp.variable_out = GENERATED_SOURCES
my_unpack_cpp.dependency_type = TYPE_C
my_unpack_cpp.name = "unpack code"
my_unpack_cpp.CONFIG = no_link
# unpack header
my_unpack_h.clean = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.h
my_unpack_h.output = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.h
my_unpack_h.input = MY_PACKAGES
my_unpack_h.commands = [somepath]/my_precompiler.exe /unpack h /i ${QMAKE_FILE_IN} /o $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.h
my_unpack_h.variable_out = HEADERS
my_unpack_h.name = "unpack header"
my_unpack_h.CONFIG = no_link
QMAKE_EXTRA_COMPILERS += my_build my_unpack_h my_unpack_cpp
With this technique the number of output files produced per parse can vary, though of course it is the same for every file in the project.
In my_precompiler I parse the file when the /unpack option is not present, and build the two outputs (cpp + h) into two QBuffers. After that I simply write the built data to a QFile:
QDataStream ds(&file);                   // file: an already-opened QFile for the .pack output
ds.setVersion(QDataStream::Qt_5_1);
ds << qCompress(output_cpp.data(), 9);   // compressed cpp buffer
ds << qCompress(output_h.data(), 9);     // compressed header buffer
file.close();
In fact, qCompress isn't worthwhile at the moment, because the generated files are so small that the zlib header overhead exceeds the savings from compression: sizeof(.pack) > size(.cpp + .h).
Unpacking:
QByteArray ba;
QDataStream ds(&file);
ds.setVersion(QDataStream::Qt_5_1);
ds >> ba;                 // first entry in the stream: the cpp data
if(unpack != "cpp")
{
    ds >> ba;             // read again to get the second entry: the header data
}
file.close();
ba = qUncompress(ba);
file.setFileName(output);
if(!file.open(QFile::WriteOnly | QFile::Truncate)) return 1;
file.write(ba);
file.close();
When generating:
Write #include "original header" in begin of generated header
Write #include "generated header" in begin of generated code
Therefore I set this:
my_unpack_cpp.depends = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.h
So /unpack cpp (and therefore compilation of the generated source) runs after the needed header file has been built. And this:
my_build.depends = [somepath]/my_precompiler.exe
This makes the built pack (and therefore the generated cpp + h) depend on my_precompiler, so everything will be rebuilt if I modify and rebuild the precompiler.
P.S. IMHO these lines should work as cleaners before rebuilding:
my_build.clean = $${GENERATE_FOLDER}gen_${QMAKE_FILE_BASE}.pack
my_unpack_cpp.clean = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.cpp
my_unpack_h.clean = $${GENERATE_FOLDER}${QMAKE_FILE_BASE}.h
But they don't :( For now I ignore that, but if building the .pack fails, the previously built pack file gets used.