Bazel adds -MD -MF -frandom-seed flags by default - Why?

I have a cc_toolchain configuration for a proprietary C compiler, and I have verified from Bazel's output (using the -s flag) that the compilation commands are correct.
However, Bazel adds three compilation flags, -MD, -MF, and -frandom-seed, on top of what I have specified.
My compiler does not recognize the -MD and -MF flags. There are no issues with -frandom-seed.
How can I tell Bazel NOT to add these flags?

To stop Bazel from adding the random seed, disable the corresponding feature by defining:
random_seed_feature = feature(
    name = "random_seed",
    enabled = False,
)
and add random_seed_feature to the list of features you pass to cc_common.create_cc_toolchain_config_info().
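A minimal sketch of where that list goes (feature comes from @bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl; the identifier and compiler values below are illustrative, and a real toolchain config will need its other arguments as well):

def _cc_toolchain_config_impl(ctx):
    return cc_common.create_cc_toolchain_config_info(
        ctx = ctx,
        toolchain_identifier = "my-proprietary-cc",  # illustrative
        compiler = "my-cc",                          # illustrative
        features = [random_seed_feature],            # the disabled feature from above
        # ...plus whatever else your toolchain configuration needs
    )

cc_toolchain_config = rule(
    implementation = _cc_toolchain_config_impl,
    provides = [CcToolchainConfigInfo],
)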
For -MD -MF it gets more complicated. You could disable the dependency_file feature in a similar manner, but the headers check (hdrs_check) would then fail because it expects to find a dependency dump, and I do not believe you can actually disable that check for a C++ action with cc_toolchain based on the current implementation (at least no readily available method comes to mind).
The question is: does your compiler still support dumping dependencies, just via different flag(s)? If so, you can (and even should) redefine the feature. For reference, in https://github.com/bazelbuild/rules_cc it currently looks like this for U*X-like systems:
dependency_file_feature = feature(
    name = "dependency_file",
    enabled = True,
    flag_sets = [
        flag_set(
            actions = [
                ACTION_NAMES.assemble,
                ACTION_NAMES.preprocess_assemble,
                ACTION_NAMES.c_compile,
                ACTION_NAMES.cpp_compile,
                ACTION_NAMES.cpp_module_compile,
                ACTION_NAMES.objc_compile,
                ACTION_NAMES.objcpp_compile,
                ACTION_NAMES.cpp_header_parsing,
                ACTION_NAMES.clif_match,
            ],
            flag_groups = [
                flag_group(
                    flags = ["-MD", "-MF", "%{dependency_file}"],
                    expand_if_available = "dependency_file",
                ),
            ],
        ),
    ],
)
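If, for example, your compiler wrote the dependency dump via a single hypothetical -deps <file> flag (purely illustrative, not a real flag), the redefinition could look like:

dependency_file_feature = feature(
    name = "dependency_file",
    enabled = True,
    flag_sets = [
        flag_set(
            actions = [ACTION_NAMES.c_compile],  # extend to whatever actions your compiler handles
            flag_groups = [
                flag_group(
                    flags = ["-deps", "%{dependency_file}"],  # hypothetical vendor flag
                    expand_if_available = "dependency_file",
                ),
            ],
        ),
    ],
)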
If your compiler does not produce this file at all, I am afraid that on top of disabling the feature you'd need to wrap the compiler call and dump an empty file where dependency_file is expected (essentially use a flag the wrapper understands, have it extract the file name and strip both from the compiler invocation, then write an empty file for the check). You'd lose the header checking for dependencies being correctly declared by bypassing it, but it would allow the build to proceed.
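A rough sketch of such a wrapper as a Python script (everything here is an assumption: the compiler path, and that Bazel is configured to invoke this script as the compiler tool):

#!/usr/bin/env python3
import os
import sys

REAL_CC = "/opt/vendor/bin/cc"  # assumed path to the real proprietary compiler

args = []
dep_file = None
argv = iter(sys.argv[1:])
for arg in argv:
    if arg == "-MD":
        continue               # drop the flag the compiler does not understand
    if arg == "-MF":
        dep_file = next(argv)  # remember where Bazel expects the dependency dump
        continue
    args.append(arg)

if dep_file:
    open(dep_file, "w").close()  # write an empty file to satisfy the check

os.execv(REAL_CC, [REAL_CC] + args)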
Alternatively, a new cc_toolchain written from scratch, with its own actions that do not incorporate the header checking, could be an option.

Related

Bazel fetch remote file not as a WORKSPACE rule?

In Bazel, how do I fetch a remote file as a build rule, not as a WORKSPACE rule?
I want to use a build rule because WORKSPACE rules are not loaded transitively.
E.g. this fails:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
http_file(
name = "foo",
urls = [ "https://example.com" ],
sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
executable = True,
)
Error in repository_rule: 'repository rule http_file' can only be called during workspace loading
If you really want to do that, you have to implement your own rule. A naïve, trivial example relying on curl to fetch could be:
def _impl(ctx):
    args = ctx.actions.args()
    args.add("-o", ctx.outputs.out)
    args.add(ctx.attr.url)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        executable = "curl",
        arguments = [args],
    )

get_stuff = rule(
    _impl,
    attrs = {
        "url": attr.string(
            mandatory = True,
        ),
    },
    outputs = {"out": "%{name}.out"},
)
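A BUILD file could then use it like this (assuming the rule above lives in a fetch.bzl at the workspace root; both names are illustrative):

load("//:fetch.bzl", "get_stuff")

get_stuff(
    name = "foo",
    url = "https://example.com",
)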
But, especially in such a trivial form, it comes with problems. Apart from anything else: do you want to step out of the sandbox during the build? And do you want to talk to someone across the network during the build (out of the sandbox)? It bypasses the repository_cache, and possibly gets remote_cache involved (networked caching of networked fetching). Specifically in this example, if the content of the file pointed to by url changes... the build has no idea and only fetches it when it either hasn't done so yet or the url itself has changed. I.e. the implementation would need to be more robust (mimic that of http_file, for instance).
But it actually sounds like you're trying to address a different problem (transitive external dependencies), for which there is another solution. One trick used for that is to define a macro in your first-level dependency that declares the next hop; after declaring that first-level dependency as an external dependency in your parent project, you load that macro and call it from the parent project's WORKSPACE, as sketched below. This too has a price, though: the first-level dependency always has to be present (fetched or already cached), even if the build target asked for does not actually need it (as that load and macro call will always pull it in).
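A minimal sketch of that pattern (repository names and the deps.bzl file name are illustrative):

# deps.bzl inside the first-level dependency:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_dep_repositories():
    # Declares the second-level (transitive) dependencies.
    http_archive(
        name = "transitive_dep",
        urls = ["https://example.com/transitive_dep.tar.gz"],
    )

# WORKSPACE of the parent project:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "my_dep",
    urls = ["https://example.com/my_dep.tar.gz"],
)

load("@my_dep//:deps.bzl", "my_dep_repositories")

my_dep_repositories()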

Bazel external dependency rebuilt unnecessarily in rule used as a tool from within another rule

I am working on a set of Bazel rules where one test rule is also executed as a tool from within another executable rule. The test rule depends on an external tool which is built by rules_foreign_cc.
symbiyosys_test = rule(
    implementation = _symbiyosys_test_impl,
    doc = "Formal verification of (System) Verilog.",
    attrs = {
        ...,
        "_yosys_toolchain": attr.label(
            doc = "Yosys toolchain.",
            default = Label("@rules_symbiyosys//symbiyosys/tools:yosys"),
        ),
        "_yices_toolchain": attr.label(
            doc = "Yices toolchain.",
            default = Label("@rules_symbiyosys//symbiyosys/tools:yices"),
        ),
    },
    test = True,
)

symbiyosys_trace = rule(
    implementation = _symbiyosys_trace_impl,
    doc = "View VCD trace from Symbiyosys.",
    attrs = {
        "test": attr.label(
            doc = "Symbiyosys test target to produce VCD file.",
            mandatory = True,
            executable = True,
            cfg = "exec",
        ),
        ...,
    },
    executable = True,
)
With a virgin Bazel cache, when an instance of the test rule is run with bazel test //examples:counter_fail the external tool is built. The external tool is also built when an instance of the executable rule (which utilizes the test rule) is run with bazel run //examples:counter_fail_trace. Once the external tool has been built in these two contexts, subsequent tests or runs use the cached outputs.
Building the external tool twice seems unnecessary, as both the test and the executable rule have the same configuration ("exec"). I have a hunch that this may have to do with bazel test and bazel run invoking different command-line options, causing a cache miss on the external dependency.
My question is primarily: what is causing this rebuild, and how do I get rid of it? And short of answering that, what are some techniques for digging into what is causing this rebuild? I have tried some basic Bazel queries, but haven't had much luck.
EDIT
I still haven't cracked this one. I do suspect a subtle difference between bazel test and bazel run, but unfortunately the documentation offers limited information about how specifically the two differ.

How to Write Out of Tree LLVM LTO Pass?

I'm aware of similar questions here and here; however, the LLVM codebase changes so quickly that I'm here to ask whether the state of things has changed since then.
So, currently I'm trying to write an out-of-tree pass that works on the whole-program CFG (hence the need for the merged bitcode). I would prefer to use the legacy PassManager, as opposed to the mixin-based new pass manager (NPM), due to some other legacy passes my current pass relies on.
clang is called with these args:
clang -flto -Xclang -O0 -Xclang -load -Xclang ./mvxaa.so -fuse-ld=gold -o ./tests/target_app $(TARGET_SOURCES)
Will this register the pass as an LTO (full) pass? The pass never runs.
static void registerGlobalCollectionPass(const PassManagerBuilder &PB,
                                         legacy::PassManagerBase &PM) {
  PM.add(new CollectGlobals());
}

static RegisterStandardPasses
    RegisterMyPass(PassManagerBuilder::EP_FullLinkTimeOptimizationEarly,
                   registerGlobalCollectionPass);
Looking deeper, it seems that PassManagerBuilder::addExtensionsToPM is called by the individual populateXPassManager functions, and they look through the GlobalExtensions list to call the respective callback functions. For other non-LTO ExtensionPointTy values like EP_EnabledOnOptLevel0 there are entries, but by the time populateLTOPassManager is called, there are no longer any entries in the GlobalExtensions SmallVector. Why is this the case?
Is it because LTO occurs at a later point, after the linker runs, and the -load argument given to dlopen the shared library only loads it during the compilation phase?

Bazel - can a Skylark action read a command-line flag (strict_java_deps)?

I'm working on implementing a feature like Strict Java Deps for rules_scala.
I'd really like the ability to configure at runtime whether this warns or errors.
I seem to recall Skylark rules can't create and access command-line flags, but I don't recall whether they can access existing ones.
The main difference is that existing flags are already parsed, so maybe they are also passed in some ctx.
The flag you want (strict_java_deps) isn't available through Skylark at the moment. There's no reason we can't add it, though; filed #3295 to track.
For other flags, the context can access the configuration fragments, which can access some of the parsed command-line flags. I think what you'd want is ctx.fragments: use the fragments to get the Java fragment, then get default_javac_flags from that:
# rules.bzl
def _impl(ctx):
    print("flags: %s" % ctx.fragments.java.default_javac_flags)
    ...

frag = rule(
    implementation = _impl,
    fragments = ["java"],  # Declare that this rule uses java fragments
)
Then:
$ bazel build --javacopt="-g:source,lines" :x
WARNING: /home/kchodorow/test/a/tester.bzl:2:3: flags: ["-g:source,lines"].

Contiki Timer Synchronization

I have recently started programming in Contiki. Currently I want to synchronize the timers of a set of nodes. To achieve this, I am using the functions of the header file timesynch.h, which I include as a normal .h file at the top of my code. But whenever I try to use the functions declared in it (timesynch_init() for instance), I get the following error message:
xmas_sync.co: In function `process_thread_example_collect_process':
/home/user/contiki-2.7/examples/rime/xmas_sync.c:108: undefined reference to `timesynch_init'
collect2: ld returned 1 exit status
make: *** [xmas_sync.native] Error 1
I do not know if I need to do something else to use these synchronization functions. My code is based on the collect example.
You need to enable timesynch in the application's or platform's configuration. On most platforms, it is not enabled by default.
To enable it in the application's configuration, there are two options:
Modify the Makefile to include this define in CFLAGS:
CFLAGS += -DTIMESYNCH_CONF_ENABLED=1
Create a project-conf.h file and change the Makefile to include this file:
CFLAGS += -DPROJECT_CONF_H=\"project-conf.h\"
and enable timesynch in project-conf.h by adding this line:
#define TIMESYNCH_CONF_ENABLED 1
Also, it looks like you're trying to build the example for the native (x86) platform. To build for sensor node hardware, set the TARGET variable, e.g. run make TARGET=sky example-collect.
