Facing Bazel Build error while using Select() function - bazel

I have a simple BUILD.bazel that selects between targets, shown below. This works; however, when I apply the same method to a bigger project with multiple .cpp and .h files, I get an error. Can you please suggest a fix?
Simple BUILD.bazel:
cc_library(
    name = "testlib",
    srcs = select({
        ":pc": ["source/t1.c"],
        ":ecu": ["source/t2.c"],
    }),
)
config_setting(
    name = "pc",
    define_values = {
        "target": "pc",
    },
)
config_setting(
    name = "ecu",
    define_values = {
        "target": "ecu",
    },
)
Commands that work:
Build for PC:
bazel build --define target=pc //...
Build for ECU:
bazel build --define target=ecu //...
However, the following error appears when I apply the above method to the large project:
failed: configurable "srcs" triggers an implicit .so output even though there are no sources to compile in this configuration caused by.....
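For reference, one common mitigation (an assumption on my part, not confirmed for this project) is to make sure the configurable srcs never resolve to an empty list, for example by adding a //conditions:default branch, so no configuration leaves the library without sources to compile:
cc_library(
    name = "testlib",
    srcs = select({
        ":pc": ["source/t1.c"],
        ":ecu": ["source/t2.c"],
        # Hypothetical fallback so no configuration ends up source-less.
        "//conditions:default": ["source/t1.c"],
    }),
)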

Related

Using a custom Java toolchain in shell binary target

I have a simple shell script that executes a prebuilt binary. The build file looks like this:
filegroup(
    name = "generator_srcs",
    srcs = glob([
        "configuration/**",
        "features/**",
        "plugins/**",
    ]) + [
        "commonapi-generator-linux-x86_64.ini",
        "artifacts.xml",
    ],
)
filegroup(
    name = "generator_binary",
    srcs = ["commonapi-generator-linux-x86_64"],
    data = [":generator_srcs"],
)
sh_binary(
    name = "generator",
    srcs = ["@//tools:generator.sh"],
    data = [":generator_binary"],
    toolchains = ["@//toolchain:jre_toolchain_definition"],
    args = ["$(location :generator_binary)", "$(JAVA)"],
)
However the prebuilt binary depends on a specific Java Runtime Environment. Therefore I simply defined a custom java_runtime that fits the requirements of the binary. The corresponding build file looks like this:
java_runtime(
    name = "jre8u181-b13",
    srcs = glob([
        "jre/**",
    ]),
    java_home = "jre",
    licenses = [],
    visibility = [
        "//visibility:public",
    ],
)
config_setting(
    name = "jre_version_setting",
    values = {"java_runtime_version": "1.8"},
    visibility = ["//visibility:private"],
)
toolchain(
    name = "jre_toolchain_definition",
    target_settings = [":jre_version_setting"],
    toolchain_type = "@bazel_tools//tools/jdk:runtime_toolchain_type",
    toolchain = ":jre8u181-b13",
    visibility = ["//visibility:public"],
)
When I try to build and run the generator target, bazel throws the error:
//toolchain:jre_toolchain_definition does not have mandatory providers: 'TemplateVariableInfo'
This is the point where I am a little bit lost. As stated in this post, the rule should provide toolchain-specific make variables. I therefore looked around in the bazel GitHub repository and found the rule java_runtime_alias, which seems to provide some useful variables that I could use in my sh_binary target. But in that rule, automatic toolchain resolution happens. I would like to rewrite the rule so that I can hand over my custom toolchain target as an argument, but I don't know how. Should I define an attribute?
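To sketch the attribute idea (this is an assumption, not a confirmed fix; the rule name and file name are hypothetical), a small rule can take the java_runtime target as a label attribute and re-export its paths as make variables via TemplateVariableInfo, similar to what java_runtime_alias does but without automatic toolchain resolution:
# java_runtime_vars.bzl (hypothetical) -- minimal sketch of the attribute-based approach.
def _java_runtime_vars_impl(ctx):
    runtime = ctx.attr.runtime[java_common.JavaRuntimeInfo]
    return [
        # Export the make variables that $(JAVA) / $(JAVABASE) expand to.
        platform_common.TemplateVariableInfo({
            "JAVA": runtime.java_executable_exec_path,
            "JAVABASE": runtime.java_home,
        }),
        DefaultInfo(runfiles = ctx.runfiles(transitive_files = runtime.files)),
    ]

java_runtime_vars = rule(
    implementation = _java_runtime_vars_impl,
    attrs = {
        # The custom runtime is handed over explicitly instead of being
        # picked up by toolchain resolution.
        "runtime": attr.label(
            mandatory = True,
            providers = [java_common.JavaRuntimeInfo],
        ),
    },
)
The sh_binary could then list a java_runtime_vars target (pointing at :jre8u181-b13) in its toolchains attribute; the JRE files may still need to be added to data so they are present at runtime.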

How to specify output name of cc_binary target

I'd like to build a simple application and specify its output name: for the debug version, a 'd' suffix should be added. I tried to use genrule for this, but I don't know how to change the option in different compilation modes. However, I suppose that using genrule is not the correct way.
So, how do I specify the output name?
cc_binary(
    name = "TestApp",
    srcs = [
        "src/TestApp/main.cpp",
    ],
)
genrule(
    name = "output_name_rule",
    srcs = [":TestApp"],
    outs = ["TestAppd.exe"],
    output_to_bindir = True,
    cmd_bat = "rename $(location TestApp) TestAppd.exe",
)
config_setting(
    name = "release_build",
    values = {
        "compilation_mode": "opt",
    },
)
config_setting(
    name = "debug_build",
    values = {
        "compilation_mode": "dbg",
    },
)
genrule.outs is documented as nonconfigurable, so you can't use it directly. I would create a genrule that copies the file that exists in all configurations, and then choose between the genrule and the original cc_binary based on the configuration. Something like this:
genrule(
    name = "copy_with_d",
    srcs = [":TestApp"],
    outs = ["TestAppd.exe"],
    output_to_bindir = True,
    cmd_bat = "rename $(location TestApp) TestAppd.exe",
)
alias(
    name = "maybe_with_d",
    actual = select({
        ":debug_build": "TestAppd.exe",
        "//conditions:default": "TestApp.exe",
    }),
)
Some notes on things to tweak for your use case:
On Linux, I would write cmd = "cp $< $@". I'm not sure how Windows rename handles the input not being writable. You can definitely write $< and $@ instead of repeating the filenames, though; they are Bazel "Make" variables.
I put the non-d version as //conditions:default so it applies in fastbuild too, up to you which way you want as the default.
Up to you whether you use TestApp.exe or :TestApp.exe or :TestApp in alias.actual to refer to that version, they're all equivalent. Same with TestAppd.exe vs copy_with_d or :copy_with_d, etc.
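As a quick usage sketch (assuming the targets live in the workspace's root package; adjust the label otherwise):
bazel build -c dbg //:maybe_with_d
bazel build -c opt //:maybe_with_d
The first resolves the alias to TestAppd.exe, the second to TestApp.exe.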

How to reference the source directory of a `bazel` BUILD file?

I have something like the following:
sh_binary(
    …
    args = [
        "path/to/this/build/file/relative/to/workspace/root",
    ],
    …
)
Is there a way to compute/generate "path/to/this/build/file/relative/to/workspace/root" so that if the BUILD file is moved, args wouldn't have to be changed? Something similar to $(location) (I haven't gotten $(location) to work since it would introduce a circular dependency)?
Adding the BUILD file as a data dependency allows you to get its $(location):
sh_binary(
    …
    args = ["$(location BUILD.bazel)"],
    data = ["BUILD.bazel"],
    …
)

Passing output of custom bazel rule to a *_binary rule as runtime data

I have a rule that creates a folder (it untars a tar.gz), and I would like to use this folder directly as data for a cc_binary. The only way I could find to do so is to create an intermediate filegroup with the created folder as its source.
Though it works, it requires the users of the rule to create an intermediate filegroup and introduces some naming issues, as the name of the filegroup cannot be the same as the created folder name.
This is what I have:
# rules.bzl
def _untar_impl(ctx):
    tree = ctx.actions.declare_directory(ctx.attr.out)
    ctx.actions.run_shell(
        inputs = [ctx.file.src],
        outputs = [tree],
        command = "tar -C '{out}' -xf '{src}'".format(src = ctx.file.src.path, out = tree.path),
    )
    return [DefaultInfo(files = depset([tree]))]

untar = rule(
    implementation = _untar_impl,
    attrs = {
        "src": attr.label(mandatory = True, allow_single_file = True),
        "out": attr.string(mandatory = True),
    },
)
# BUILD
untar(
    name = "media",
    src = "media.tar.gz",
    out = "media",
)
filegroup(
    name = "mediafiles",
    srcs = ["media"],
    data = [":media"],
)
cc_binary(
    name = "main",
    srcs = ["main.cpp"],
    data = [":mediafiles"],
)
Is there any way to avoid having the intermediate filegroup?
Based on the discussion that ensued, I see I've misunderstood the problem statement a bit. You probably still want to look into handling that tarball (and its entire processing) as an external dependency with a repository_rule, but here is a fix for your immediate problem, the need for an intermediate filegroup.
Notice that you've defined both srcs and data of the filegroup to point to your :media label; that is exactly the missing bit needed to make the data available when your *_binary rule is run. The untar rule only returned a depset of files, so when :media is used in data directly, it resolves to empty runfiles.
If you replace this line in your rule definition:
return [DefaultInfo(files = depset([tree]))]
with:
return [DefaultInfo(runfiles = ctx.runfiles([tree]))]
You can then say in your BUILD file:
cc_binary(
    name = "main",
    srcs = ["main.cpp"],
    data = [":media"],
)
This works because the untar rule now provides runfiles in its DefaultInfo, which is exactly what wrapping it in a filegroup and adding :media through the data attribute used to accomplish.
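If the untarred tree should also remain usable where default outputs are expected (for example in srcs of other rules), the rule can return both; this is a small variation on the answer above, not something it requires:
return [DefaultInfo(
    files = depset([tree]),
    runfiles = ctx.runfiles([tree]),
)]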

How do I load `config_setting()` into my `.bzl` file?

My motivation: our codebase is scattered across at least 20 git repos. I want to consolidate everything into a single git repo with a single build system. Currently we use SBT, but we think the build would take too long, so I am examining the possibility of using Bazel instead.
Most of our codebase uses Scala 2.12, some of our codebase uses Scala 2.11, and the rest needs to build under both Scala 2.11 and Scala 2.12.
I'm trying to use bazelbuild/rules_scala.
With the following call to scala_repositories in my WORKSPACE, I can build using Scala 2.12:
scala_repositories(("2.12.6", {
    "scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
    "scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
    "scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa",
}))
If I have the following call instead, I can build using Scala 2.11:
scala_repositories(("2.11.12", {
    "scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
    "scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
    "scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04",
}))
However, it is not possible to specify in my BUILD files on a package level which version(s) of Scala to build with. I must specify this globally in my WORKSPACE.
To work around this, my plan is to set up configurable attributes, so I can specify --define scala=2.11 to build with Scala 2.11 and --define scala=2.12 to build with Scala 2.12.
First I tried by putting this code in my WORKSPACE:
config_setting(
    name = "scala-2.11",
    define_values = {
        "scala": "2.11",
    },
)
config_setting(
    name = "scala-2.12",
    define_values = {
        "scala": "2.12",
    },
)
scala_repositories(
    select({
        "scala-2.11": "2.11.12",
        "scala-2.12": "2.12.6",
    }),
    select({
        "scala-2.11": {
            "scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
            "scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
            "scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04",
        },
        "scala-2.12": {
            "scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
            "scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
            "scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa",
        },
    }),
)
But this gave me the error config_setting cannot be in the WORKSPACE file.
So then I tried moving code into a Starlark file.
In tools/build_rules/scala.bzl:
config_setting(
    name = "scala-2.11",
    define_values = {
        "scala": "2.11",
    },
)
config_setting(
    name = "scala-2.12",
    define_values = {
        "scala": "2.12",
    },
)

def scala_version():
    return select({
        "scala-2.11": "2.11.12",
        "scala-2.12": "2.12.6",
    })

def scala_machinery():
    return select({
        "scala-2.11": {
            "scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
            "scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
            "scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04",
        },
        "scala-2.12": {
            "scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
            "scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
            "scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa",
        },
    })
And back in my WORKSPACE:
load("//tools/build_rules:scala.bzl", "scala_version", "scala_machinery")
scala_repositories(scala_version(), scala_machinery())
But now I get this error:
tools/build_rules/scala.bzl:1:1: name 'config_setting' is not defined
This confuses me, because I thought config_setting() was built in. I can't find where I should load it in from.
So, my questions:
How do I load config_setting() into my .bzl file?
Or, is there a better way of controlling from the command line which arguments get passed to scala_repositories()?
Or, is this just not possible?
$ bazel version
Build label: 0.17.2-homebrew
Build target: bazel-out/darwin-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Fri Sep 28 10:42:37 2018 (1538131357)
Build timestamp: 1538131357
Build timestamp as int: 1538131357
If you call a native rule from a .bzl file, you must use the native. prefix, so in this case you would call native.config_setting.
However, this is going to lead to the same error: config_setting is a BUILD rule, not a WORKSPACE rule.
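To illustrate the native. part only (a sketch under the assumption that the macro is called from a BUILD file, not from WORKSPACE; the macro and file names are hypothetical):
# tools/build_rules/scala_settings.bzl (hypothetical)
def scala_version_settings():
    """Declares the --define scala=... config_settings in the calling package."""
    native.config_setting(
        name = "scala-2.11",
        define_values = {"scala": "2.11"},
    )
    native.config_setting(
        name = "scala-2.12",
        define_values = {"scala": "2.12"},
    )
And in a BUILD file:
load("//tools/build_rules:scala_settings.bzl", "scala_version_settings")
scala_version_settings()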
If you want to change the build tool used for a particular target, you can change the toolchain, and this seems to be supported via scala_toolchain. And I believe you can use a config to select the toolchain.
I'm unfamiliar with what scala_repositories does. I hope it defines the toolchain with a properly versioned name, so that you can reference the wanted toolchain correctly, and I hope you can invoke it twice in the same workspace; otherwise I think there is no solution.
