How to integrate C/C++ analysis tooling in Bazel?

I have a code analysis tool that I'd like to run for each cc_library (and cc_binary; implied for the rest of the question). The tool has a CLI interface taking:

- A tool project file
  - Compiler specifics, such as type sizes, built-ins, macros, etc.
  - Files to analyze
    - File path, includes, defines
  - Rules to (not) apply
  - Files to add to the project
- Options for synchronizing files with build data
  - JSON compilation database
  - Parse build log
- Analyze and generate analysis report
I've been looking at how to integrate this in Bazel so that the files to analyze AND the associated includes and defines are updated automatically, and so that any analysis result is properly cached. Generating a JSON compilation database (using a third-party lib) or parsing the build log both require separate runs and updating the source tree; for this question I consider those workarounds I'm trying to remove.
What I've tried so far is using aspects, adding an analysis aspect to any library. The general idea is having a base project file holding library-invariant configuration, appended with the cc_library files to analyze, and finally an analysis is triggered that generates the report. But I'm having trouble executing this, and I'm not sure it's even possible.
This is my aspect implementation so far, trying to iterate through cc_library attributes and target compilation context:
def _print_aspect_impl(target, ctx):
    # Make sure the rule has a srcs attribute
    if hasattr(ctx.rule.attr, 'srcs'):
        # Iterate through the files
        for src in ctx.rule.attr.srcs:
            for f in src.files.to_list():
                if f.path.endswith(".c"):
                    print("file: ")
                    print(f.path)
                    print("includes: ")
                    print(target[CcInfo].compilation_context.includes)
                    print("quote_includes: ")
                    print(target[CcInfo].compilation_context.quote_includes)
                    print("system_includes: ")
                    print(target[CcInfo].compilation_context.system_includes)
                    print("defines: ")
                    print(ctx.rule.attr.defines)
                    print("local_defines: ")
                    print(ctx.rule.attr.local_defines)
                    print("") # empty line to separate file prints
    return []
What I cannot figure out is how to get ALL includes and defines used when compiling the library:

- From libraries depended upon, recursively: copts, defines, includes
- From the toolchain: features, cxx_builtin_include_directories

Questions:

- How do I get the missing flags, continuing with the presented technique?
- Can I somehow retrieve the compile action command string, to be appended to the analysis project using the build log API?
- Some other solution entirely? Perhaps there is something one can do with cc_toolchain instead of aspects...

Aspects are the right tool to do that. The information you're looking for is contained in the providers, fragments, and toolchains of the cc_* rules the aspect has access to. Specifically, CcInfo has the target-specific pieces, the cpp fragment has the pieces configured via command-line flags, and CcToolchainInfo has the parts from the toolchain.
CcInfo in target tells you if the current target has that provider, and target[CcInfo] accesses it.
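For example, a minimal guard you could put at the top of the aspect implementation (just a sketch of that check, not tied to your exact code):

if CcInfo not in target:
    return []
compilation_context = target[CcInfo].compilation_context
# defines/includes from this target and, transitively, its deps
print(compilation_context.defines)
print(compilation_context.includes)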
The rules_cc my_c_compile example is where I usually look for pulling out a complete compiler command based on a CcInfo. Something like this should work from the aspect:
load("#rules_cc//cc:action_names.bzl", "C_COMPILE_ACTION_NAME")
load("#rules_cc//cc:toolchain_utils.bzl", "find_cpp_toolchain")
[in the impl]:

cc_toolchain = find_cpp_toolchain(ctx)
feature_configuration = cc_common.configure_features(
    ctx = ctx,
    cc_toolchain = cc_toolchain,
    requested_features = ctx.features,
    unsupported_features = ctx.disabled_features,
)
c_compiler_path = cc_common.get_tool_for_action(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
)
[in the loop]:

c_compile_variables = cc_common.create_compile_variables(
    feature_configuration = feature_configuration,
    cc_toolchain = cc_toolchain,
    user_compile_flags = ctx.fragments.cpp.copts + ctx.fragments.cpp.conlyopts,
    source_file = src.path,
)
command_line = cc_common.get_memory_inefficient_command_line(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
    variables = c_compile_variables,
)
env = cc_common.get_environment_variables(
    feature_configuration = feature_configuration,
    action_name = C_COMPILE_ACTION_NAME,
    variables = c_compile_variables,
)
That example only handles C files (not C++); you'll have to change the action names and which parts of the fragment it uses accordingly.
You have to add toolchains = ["@bazel_tools//tools/cpp:toolchain_type"] and fragments = ["cpp"] to the aspect() call to use those. Also see the note in find_cc_toolchain.bzl about the _cc_toolchain attr if you're using legacy toolchain resolution.
The information coming from the rules and the toolchain is already structured. Depending on what your analysis tool wants, it might make more sense to extract it directly instead of generating a full command line. Most of the provider, fragment, and toolchain is well-documented if you want to look at those directly.
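For instance, a sketch of pulling the merged pieces straight out of the provider, the toolchain, and the fragment (this assumes the aspect declares the cpp fragment and the C++ toolchain as described above; find_cpp_toolchain comes from the loads shown earlier):

compilation_context = target[CcInfo].compilation_context
cc_toolchain = find_cpp_toolchain(ctx)

# Includes and defines from the target and, transitively, its dependencies
defines = compilation_context.defines.to_list()
includes = compilation_context.includes.to_list()
quote_includes = compilation_context.quote_includes.to_list()
system_includes = compilation_context.system_includes.to_list()

# Built-in include directories from the toolchain
builtin_includes = cc_toolchain.built_in_include_directories

# copts passed on the command line (--copt / --conlyopt)
copts = ctx.fragments.cpp.copts + ctx.fragments.cpp.conlyopts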
You might also pass required_providers = [CcInfo] to aspect() to limit it to rules that advertise that provider, depending on how you want to manage propagation of your aspect.
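Putting it together, the aspect declaration could look roughly like this (a sketch; _print_aspect_impl is the implementation above):

print_aspect = aspect(
    implementation = _print_aspect_impl,
    attr_aspects = ["deps"],
    required_providers = [CcInfo],
    fragments = ["cpp"],
    toolchains = ["@bazel_tools//tools/cpp:toolchain_type"],
)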
The Integrating with C++ Rules documentation page also has some more info.

Related

Getting runtime path to a package in Bazel toolchain configuration files

What is the best way to refer to an external package's path in any arbitrary files processed by Bazel?
I'm trying to understand how Bazel preprocesses BUILD and .bzl files. I see instances where strings contain calls to package() and I am wondering how it works (and could not find any relevant documentation). Here is an example of this:
I have a toolchain whose BUILD file contains the following expression:
cc_toolchain_config(
    name = "cc-toolchain-config",
    abi_libc_version = "glibc_" + host_gcc8_bundle()["pkg_version"]["glibc"],
    abi_version = "gcc-" + host_gcc8_bundle()["version"],
    compiler = "gcc-" + host_gcc8_bundle()["version"],
    cpu = "x86_64",
    cxx_builtin_include_directories = [
        "%package(@host_gcc8_toolchain//include/c++/8)%",
        "%package(@host_gcc8_toolchain//lib64/gcc/x86_64-unknown-linux-gnu/8/include-fixed)%",
        "%package(@host_gcc8_kernel_headers//include)%",
        "%package(@host_gcc8_glibc//include)%",
    ],
    host_system_name = "x86_64-unknown-linux-gnu",
    target_libc = "glibc_" + host_gcc8_bundle()["pkg_version"]["glibc"],
    target_system_name = "x86_64-unknown-linux-gnu",
    toolchain_identifier = "host_linux_gcc8",
)
From my understanding, cxx_builtin_include_directories defines a list of strings to serve as the --sysroot option passed to GCC, as detailed in https://docs.bazel.build/versions/0.23.0/skylark/lib/cc_common.html. These strings are in the format %sysroot%.
Since package(@host_gcc8_toolchain//include/c++/8), for example, does not mean anything to GCC, Bazel has to somehow expand this function to produce the actual path to the files included in the package before passing them to the compiler driver.
But how can it determine that this needs to be expanded and that it is not a regular string? So how does Bazel preprocess the BUILD file? Is it because of the % ... % pattern? Where is this documented?
Is "%package(@external_package//target)%" a pattern that can be used elsewhere? In any BUILD file? Where do I find Bazel documentation showing how this works?
These directives are expanded by cc_common.create_cc_toolchain_config_info within the cc_toolchain_config rule implementation, not by any sort of preprocessing of the BUILD file (i.e., "%package(@host_gcc8_glibc//include)%" is passed literally into the cc_toolchain_config rule). I'm not aware of these special expansions being completely documented anywhere but the source.
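To make that concrete, here is a rough sketch of where those attribute values end up (not the real rule, which declares the matching attrs and passes more parameters):

def _cc_toolchain_config_impl(ctx):
    return cc_common.create_cc_toolchain_config_info(
        ctx = ctx,
        toolchain_identifier = ctx.attr.toolchain_identifier,
        host_system_name = ctx.attr.host_system_name,
        target_system_name = ctx.attr.target_system_name,
        target_cpu = ctx.attr.cpu,
        target_libc = ctx.attr.target_libc,
        compiler = ctx.attr.compiler,
        abi_version = ctx.attr.abi_version,
        abi_libc_version = ctx.attr.abi_libc_version,
        # The "%package(@...)%" strings arrive here as ordinary attribute
        # values; create_cc_toolchain_config_info is what expands them.
        cxx_builtin_include_directories = ctx.attr.cxx_builtin_include_directories,
    )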

Have all Bazel packages expose their documentation files (or any file with a given extension)

Bazel has been working great for me recently, but I've stumbled upon a question for which I have yet to find a satisfactory answer:
How can one collect all files bearing a certain extension from the workspace?
Another way of phrasing the question: how could one obtain the functional equivalent of doing a glob() across a complete Bazel workspace?
Background
The goal in this particular case is to collect all markdown files to run some checks and generate a static site from them.
At first glance, glob() sounds like a good idea, but will stop as soon as it runs into a BUILD file.
Current Approaches
The current approach is to run the collection/generation logic outside of the sandbox, but this is a bit dirty, and I'm wondering if there is a way that is both "proper" and easy (i.e., not requiring that each BUILD file explicitly expose its markdown files).
Is there any way to specify, in the workspace, some default rules that will be added to all BUILD files?
You could write an aspect for this to aggregate markdown files in a bottom-up manner and create actions on those files. There is an example of a file_collector aspect here. I modified the aspect's extensions for your use case. This aspect aggregates all .md and .markdown files across targets on the deps attribute edges.
FileCollector = provider(
    fields = {"files": "collected files"},
)

def _file_collector_aspect_impl(target, ctx):
    # This function is executed for each dependency the aspect visits.

    # Collect files from the srcs
    direct = [
        f
        for f in ctx.rule.files.srcs
        if ctx.attr.extension == f.extension
    ]

    # Combine direct files with the files from the dependencies.
    files = depset(
        direct = direct,
        transitive = [dep[FileCollector].files for dep in ctx.rule.attr.deps],
    )

    return [FileCollector(files = files)]

markdown_file_collector_aspect = aspect(
    implementation = _file_collector_aspect_impl,
    attr_aspects = ["deps"],
    attrs = {
        "extension": attr.string(values = ["md", "markdown"]),
    },
)
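To consume the aspect, one option is a small rule that applies it along its deps and re-exposes the collected files. This is a sketch; the rule name is made up, and the extension attribute on the rule is what feeds the aspect's parameter of the same name:

def _collect_markdown_impl(ctx):
    files = depset(transitive = [dep[FileCollector].files for dep in ctx.attr.deps])
    return [DefaultInfo(files = files)]

collect_markdown = rule(
    implementation = _collect_markdown_impl,
    attrs = {
        # Aspect parameters are filled from a rule attribute with the same name.
        "extension": attr.string(default = "md", values = ["md", "markdown"]),
        "deps": attr.label_list(aspects = [markdown_file_collector_aspect]),
    },
)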
Another way is to do a query on file targets (input and output files known to the Bazel action graph), and process these files separately. Here's an example querying for .bzl files in the rules_jvm_external repo:
$ bazel query //...:* | grep -e ".bzl$"
//migration:maven_jar_migrator_deps.bzl
//third_party/bazel_json/lib:json_parser.bzl
//settings:stamp_manifest.bzl
//private/rules:jvm_import.bzl
//private/rules:jetifier_maven_map.bzl
//private/rules:jetifier.bzl
//:specs.bzl
//:private/versions.bzl
//:private/proxy.bzl
//:private/dependency_tree_parser.bzl
//:private/coursier_utilities.bzl
//:coursier.bzl
//:defs.bzl

What is the relationship between DefaultInfo and PyInfo

It's not clear to me what the difference between DefaultInfo's runfiles transitive_files and PyInfo's transitive_sources is. Are they redundant, or is there an important difference?
For example, I have a custom Starlark rule which I want to provide PyInfo, but I also want to add an additional provider, so I can't use the native py_library rule.
transitive_sources = [dep[PyInfo].transitive_sources for dep in ctx.attr.deps]

return struct(providers = [
    DefaultInfo(
        files = depset(sources + outs),
        runfiles = ctx.runfiles(files = sources + outs, transitive_files = transitive_sources),
    ),
    PyInfo(
        transitive_sources = depset(direct = sources + outs, transitive = transitive_sources),
        imports = depset(
            direct = [_path_join(ctx.workspace_name, ctx.label.package, im) for im in ctx.attr.imports],
            transitive = [dep[PyInfo].imports for dep in ctx.attr.deps],
        ),
    ),
    _EggLibraryInfo(aditional_info = "other stuff"),
])
I'm creating redundant depsets to satisfy these providers, which makes me think maybe I'm doing it wrong.
I have also tried another method of looping over all the default_runfiles of the deps, and using runfiles.merge for DefaultInfo. For simple cases, these methods appear equivalent, but I don't know if there are other scenarios where the approaches would diverge.
The PyInfo documentation could use a section on how transitive_sources fits into DefaultInfo, and why additional mechanisms outside of runfiles need to be provided. https://docs.bazel.build/versions/master/skylark/lib/PyInfo.html
DefaultInfo is a known type to Bazel:
files controls which files are built when you bazel build the target,
runfiles defines which files need to be present in the sandbox when executing the target.
PyInfo is exclusively used by Python rules and is used to propagate metadata to consuming targets.
My guess is that the duplication is necessary because the values may differ, so removing the duplication will either mean Bazel doesn't build/include the right files, or consuming Python rules are missing information.
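If the contents really are the same in your case, you can at least build each depset once and hand the same object to both providers, which avoids re-walking the deps. A sketch reusing the names from the question:

py_sources = depset(
    direct = sources + outs,
    transitive = [dep[PyInfo].transitive_sources for dep in ctx.attr.deps],
)
imports = depset(
    direct = [_path_join(ctx.workspace_name, ctx.label.package, im) for im in ctx.attr.imports],
    transitive = [dep[PyInfo].imports for dep in ctx.attr.deps],
)
return [
    DefaultInfo(
        files = depset(sources + outs),
        runfiles = ctx.runfiles(files = sources + outs, transitive_files = py_sources),
    ),
    PyInfo(transitive_sources = py_sources, imports = imports),
    _EggLibraryInfo(aditional_info = "other stuff"),
]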

How can I use the JAR tool with Bazel v0.19+?

Starting with Bazel v0.19, if you have Starlark (formerly known as "Skylark") code that references @bazel_tools//tools/jdk:jar, you see messages like this at build time:

WARNING: <trimmed-path>/external/bazel_tools/tools/jdk/BUILD:79:1: in alias rule @bazel_tools//tools/jdk:jar: target '@bazel_tools//tools/jdk:jar' depends on deprecated target '@local_jdk//:jar': Don't depend on targets in the JDK workspace; use @bazel_tools//tools/jdk:current_java_runtime instead (see https://github.com/bazelbuild/bazel/issues/5594)

I think I could make things work with @bazel_tools//tools/jdk:current_java_runtime if I wanted access to the java command, but I'm not sure what I'd need to do to get the jar tool to work. The contents of the linked GitHub issue didn't seem to address this particular problem.
I stumbled across a commit to Bazel that makes a similar adjustment to the Starlark java rules. It uses the following pattern: (I've edited the code somewhat)
# in the rule attrs:
"_jdk": attr.label(
    default = Label("//tools/jdk:current_java_runtime"),
    providers = [java_common.JavaRuntimeInfo],
),

# then in the rule implementation:
java_runtime = ctx.attr._jdk[java_common.JavaRuntimeInfo]
jar_path = "%s/bin/jar" % java_runtime.java_home
ctx.action(
    inputs = ctx.files._jdk + other inputs,
    outputs = [deploy_jar],
    command = "%s cmf %s" % (jar_path, input_files),
)
Additionally, java is available at str(java_runtime.java_executable_exec_path) and javac at "%s/bin/javac" % java_runtime.java_home.
See also, a pull request with a simpler example.
Because my reference to the jar tool is inside a genrule within a top-level macro, rather than a rule, I was unable to use the approach from Rodrigo's answer. I instead explicitly referenced the current_java_runtime toolchain and was then able to use the JAVABASE make variable as the base path for the jar tool.
native.genrule(
    name = genjar_rule,
    srcs = [<rules that create files being jar'd>],
    cmd = "some_script.sh $(JAVABASE)/bin/jar $@ $(SRCS)",
    tools = ["some_script.sh", "@bazel_tools//tools/jdk:current_java_runtime"],
    toolchains = ["@bazel_tools//tools/jdk:current_java_runtime"],
    outs = [<some outputs>],
)

Unmanaged C# calls to a static library

I'm using SWIG to generate C# wrappers for some C code base to be used from C#. When I run SWIG, it generates a wrapper C file that exposes all the functionality to the generated PInvoke C# file. For example:
// This is in KodLogic_wrap.c
SWIGEXPORT void SWIGSTDCALL CSharp_DMGameMode_timeLimit_set(void * jarg1, unsigned short jarg2) { ... }
// This is in KodLogicPInvoke.cs
[global::System.Runtime.InteropServices.DllImport("KodLogic", EntryPoint="CSharp_DMGameMode_timeLimit_set")]
This works great when I am building a dynamic library. However, I need to support iOS now, so I've prepared a static library, and passed in the -dllimport '__Internal' option to swig for that to work.
Unfortunately, I am getting linking errors such as:
"_DMGameMode_timeLimit_set", referenced from:
RegisterMonoModules() in RegisterMonoModules.o
(maybe you meant: _CSharp_DMGameMode_timeLimit_set)
Indeed, I did mean "CSharp_DMGameMode_timeLimit_set", but isn't that the point of the "EntryPoint" argument?
So, since this error is thrown by the Xcode project Unity generated, I am not quite sure what the source of the failure is. Does it fail for static libraries? Is this something to be fixed on the Unity side or the SWIG side?
Update: After digging more into this, I think I have a slight idea of what's going on here.
The main issue seems to be from the AOT compiler, which tries to compile all the CS code to an ARM assembly. This seems to be required for iOS, so during Unity's AOT compilation, it generates a file RegisterMonoModules.cpp, which attempts to define access functions to the native code. RegisterMonoModules.cpp doesn't honor the entrypoint parameter, which causes undefined symbol errors to be thrown...
Still attempting to find a proper workaround.
The main issue seems to be from Unity, not SWIG or Mono. As mentioned above, Unity performs AOT compilation that doesn't honor the entry point argument. This produces cpp code that calls the function name, not the entry point name.
I've confirmed this by switching the scripting backend to IL2cpp, and the entry point name was honored there.
Let's switch over to callbacks. Not exactly related to the question, but it definitely fits the context of Unity + Native plugins + iOS.
AFAIK, you can't have a managed method marshaled to native land on iOS using Mono 2x. I previously had to delete all the string callbacks and exception handlers from the SWIG-generated files. Fortunately, IL2CPP supports callbacks, after a little tweaking:
Add using AOT;
Decorate callbacks with [MonoPInvokeCallback(typeof(method_signature))]
You can use this script, just use it to process the generated swig files:
import regex as re  # third-party 'regex' module; match.captures() is not in the stdlib 're'

def process_csharp_callbacks(pinvoke_file):
    """Process PInvoke file by fixing the decorators for callback methods to use:
    [MonoPInvokeCallback(typeof(method_signature))]
    """
    # prepare requirements
    with open(pinvoke_file) as f:
        content = f.read()
    callback_methods_regex = re.compile(r"( +)static (?:void|string) (?:SetPending|CreateString)\w*\([\s\w\,]+\)")
    callback_decorator = "[MonoPInvokeCallback(typeof(ExceptionDelegate))]"
    callback_arg_decorator = "[MonoPInvokeCallback(typeof(ExceptionArgumentDelegate))]"
    callback_str_decorator = "[MonoPInvokeCallback(typeof(SWIGStringDelegate))]"

    # add using AOT
    content = content.replace("\n\n", "\nusing AOT;\n", 1)

    # fix callback methods
    def method_processor(match):
        match_string = match.group()
        indentation = match.captures(1)[0]
        if match_string.find(",") != -1:
            fix = callback_arg_decorator
        elif match_string.find("static string") != -1:
            fix = callback_str_decorator
        else:
            fix = callback_decorator
        return indentation + fix + "\n" + match_string

    content = callback_methods_regex.sub(method_processor, content)

    # write it back
    with open(pinvoke_file, "w+") as f:
        f.write(content)
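Then call it on the generated file after SWIG runs, for example (the file name here just follows the question's project):

process_csharp_callbacks("KodLogicPINVOKE.cs")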
For anyone looking for help converting their generated SWIG C# PInvoke file to something the Mono 2x scripting backend will allow, stick this somewhere in your build process, after the C# files are generated:
pinvoke_template = """{extern_prefix} CSharp_{method_signature};
{normal_prefix} {method_signature} {{
{return_statement}CSharp_{method_name}({method_args});
}}"""
def process_csharp_wrapper(csharp_dir):
"""Reads the PINVOKE csharp file, and performs the following:
1. Remove EntryPoint="xxx" from the decorators
2. Make the methods match their native counterpart name
3. Add a C# method with the original name, for compatability
"""
# prepare requirements
pinvoke_file = os.path.join(csharp_dir, "KodLogicPINVOKE.cs")
with open(pinvoke_file) as f:
content = f.read()
decorator_regex = re.compile(r', EntryPoint=".*?"')
method_regex = re.compile(r"(public static extern \w+[\w:\.]+)\s(([^S]\w+)\((?:([\w:\. ]+)\,?)*\));")
# fix decorators
content = decorator_regex.sub("", content)
# fix method definitions
def method_processor(match):
extern_prefix = match.captures(1)[0]
return pinvoke_template.format(
extern_prefix=extern_prefix,
normal_prefix=extern_prefix.replace("extern ", ""),
method_signature=match.captures(2)[0],
return_statement=("return " if extern_prefix.find("void") == -1 else ""),
method_name=match.captures(3)[0],
method_args=", ".join(map(lambda s: s.strip().split()[1], match.captures(4)))
)
content = method_regex.sub(method_processor, content)
# write it back
with open(pinvoke_file, "w+") as f:
f.write(content)
