I would like to be able to do the following in a Bazel BUILD file:
alpha(
    name = "hello world",
    color = "blue",
)
beta(
    name = "hello again",
)
where alpha and beta are custom rules. I want beta to be able to access the color attribute of the alpha rule without adding a label attribute. With Bazel query, I can do something like this:
bazel query 'kind(beta, siblings(kind(alpha, //...)))'
which gives me the beta which is side by side to alpha. Can I achieve the same somehow from within the implementation function of the beta rule?
def _beta_rule_impl(ctx):
    # This does not exist, I wish it did: ctx.siblings(kind='alpha')
I've seen this done with a label, like this:
beta(
    name = "hello again",
    alpha_link = ":hello world",  # explicitly linking
)
but I find this a bit verbose, especially since there is sibling query support.
The way the question is formulated, the answer is no. It is not possible.
Bazel's design philosophy is to be explicit about target dependencies, and the providers mechanism is meant to give access to dependency graph information during the analysis phase.
It is difficult to tell what the actual use case is; using aspects might be the answer.
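For reference, the explicit pattern the question mentions, wired up through a provider, might look something like this (a minimal sketch; ColorInfo and the implementation names are illustrative, not from the original post):
ColorInfo = provider(fields = ["color"])

def _alpha_impl(ctx):
    # Export the color attribute so dependants can read it during analysis.
    return [ColorInfo(color = ctx.attr.color)]

alpha = rule(
    implementation = _alpha_impl,
    attrs = {"color": attr.string()},
)

def _beta_impl(ctx):
    # Read the provider from the explicitly declared dependency.
    print(ctx.attr.alpha_link[ColorInfo].color)

beta = rule(
    implementation = _beta_impl,
    attrs = {"alpha_link": attr.label(providers = [ColorInfo])},
)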
In my scenario, I'm trying to get a genrule to call a test rule before proceeding:
genrule(
    name = "generate_buf_image",
    srcs = [":protos", "cookie"],
    outs = ["buf-image.json"],
    cmd = "$(location //third_party/buf:cas_buf_image) //example-grpc/proto/v1:proto_backwards_compatibility_check $(SRCS) >$(OUTS)",
    tools = [
        "//third_party/buf:cas_buf_image",
        "@buf",
    ],
)
If cas_buf_image.sh has ls -l "example-grpc/proto/v1" >&2, it shows:
… cookie -> …/example-grpc/proto/v1/cookie
… example.proto -> …/example-grpc/proto/v1/example.proto
In other words, examining what example-grpc/proto/v1/cookie is linked to, cd-ing to its directory, and then performing the git commands should work.
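A sketch of that idea (hypothetical; the actual git commands are whatever the compatibility check needs, and readlink -f assumes GNU coreutils):
# Inside cas_buf_image.sh: resolve the symlink, work from the source directory.
real_cookie=$(readlink -f "example-grpc/proto/v1/cookie")
cd "$(dirname "$real_cookie")"
# ... run the git commands from here ...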
Related
In Bazel, how do I fetch a remote file as a build rule not as a WORKSPACE rule?
I want to use a build rule because WORKSPACE rules are not loaded transitively.
e.g. this fails
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
http_file(
name = "foo",
urls = [ "https://example.com" ],
sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
executable = True,
)
Error in repository_rule: 'repository rule http_file' can only be called during workspace loading
If you really want to do that, you have to implement your own rule. A naïve, trivial example relying on curl to fetch could be:
def _impl(ctx):
    args = ctx.actions.args()
    args.add("-o", ctx.outputs.out)
    args.add(ctx.attr.url)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        executable = "curl",
        arguments = [args],
    )

get_stuff = rule(
    _impl,
    attrs = {
        "url": attr.string(
            mandatory = True,
        ),
    },
    outputs = {"out": "%{name}.out"},
)
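A hypothetical BUILD usage of this rule (file and target names made up):
load(":get_stuff.bzl", "get_stuff")

get_stuff(
    name = "example_page",
    url = "https://example.com",
)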
But, especially in such a trivial form, it comes with problems. Apart from anything else: do you want to step out of the sandbox during the build? Do you want to talk to someone across the network during the build (outside the sandbox)? This bypasses the repository cache and possibly gets the remote cache involved (networked caching of networked fetching). Specifically in this example, if the content of the file pointed to by url changes, the build has no idea and only fetches it when it either hasn't done so yet or the url itself has changed. That is, the implementation would need to be more robust (mimic that of http_file, for instance).
But it actually sounds like you're trying to address a different problem (transitive external dependencies), for which there could be another solution. One trick used for that is to define a macro in your first-level dependency that declares the next hop; after declaring that first level as an external dependency in your parent project, load that macro and call it from the parent project's WORKSPACE (see the sketch below). This too has a price, though: the first-level dependency always has to be present (fetched or already cached), even if the build target asked for does not actually need it (as that load and macro call will always pull it in).
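A minimal sketch of that trick (repository names and URLs are made up for illustration):
# deps.bzl, shipped inside the first-level dependency @my_dep
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_dep_dependencies():
    # Declares the second hop: the repositories @my_dep itself needs.
    http_archive(
        name = "transitive_dep",
        urls = ["https://example.com/transitive_dep.tar.gz"],
    )

# Parent project's WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "my_dep",
    urls = ["https://example.com/my_dep.tar.gz"],
)

load("@my_dep//:deps.bzl", "my_dep_dependencies")

my_dep_dependencies()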
I am working on a set of Bazel rules where one test rule is also executed as a tool from within another executable rule. The test rule depends on an external tool which is built by rules_foreign_cc.
symbiyosys_test = rule(
    implementation = _symbiyosys_test_impl,
    doc = "Formal verification of (System) Verilog.",
    attrs = {
        ...,
        "_yosys_toolchain": attr.label(
            doc = "Yosys toolchain.",
            default = Label("@rules_symbiyosys//symbiyosys/tools:yosys"),
        ),
        "_yices_toolchain": attr.label(
            doc = "Yices toolchain.",
            default = Label("@rules_symbiyosys//symbiyosys/tools:yices"),
        ),
    },
    test = True,
)

symbiyosys_trace = rule(
    implementation = _symbiyosys_trace_impl,
    doc = "View VCD trace from Symbiyosys.",
    attrs = {
        "test": attr.label(
            doc = "Symbiyosys test target to produce VCD file.",
            mandatory = True,
            executable = True,
            cfg = "exec",
        ),
        ...,
    },
    executable = True,
)
With a virgin Bazel cache, when an instance of the test rule is run with bazel test //examples:counter_fail, the external tool is built. The external tool is also built when an instance of the executable rule (which utilizes the test rule) is run with bazel run //examples:counter_fail_trace. Once the external tool has been built in these two contexts, subsequent tests or runs use the cached outputs.
Building the external tool twice seems unnecessary, as both the test and the executable rule have the same configuration ("exec"). I have a hunch that this may have to do with bazel test and bazel run passing different command-line options, causing a cache miss on the external dependency.
My question is primarily: what is causing this rebuild, and how do I get rid of it? Short of answering that, what are some techniques for digging into what causes it? I have tried some basic Bazel queries but haven't had much luck.
EDIT
I still haven't cracked this one. I do suspect a subtle difference between bazel test and bazel run, but unfortunately the documentation has limited information about how specifically the two differ.
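One technique for digging in (these are standard Bazel flags, worth verifying against your version): ask Bazel to explain its rebuild decisions, and record the execution log of both invocations for comparison:
$ bazel test //examples:counter_fail --explain=explain.log --verbose_explanations
$ bazel test //examples:counter_fail --execution_log_json_file=test.json
$ bazel run //examples:counter_fail_trace --execution_log_json_file=run.json
Diffing test.json and run.json should show which action inputs or configuration differ between the two commands.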
I want to download and build ruby within a workspace. I've been trying to implement this by mimicking rules_go. I have that part working. The issue I'm having is that it rebuilds the openssl and ruby artifacts each time ruby_download_sdk is invoked. In the code below, the downloaded artifacts are cached, but the builds of openssl and ruby are always executed.
def ruby_download_sdk(name, version = None):
    # TODO detect os and arch
    os, arch = "osx", "x86_64"
    _ruby_download_sdk(
        name = name,
        version = version,
    )
    _register_toolchains(name, os, arch)

def _ruby_download_sdk_impl(repository_ctx):
    # TODO detect platform
    platform = ("osx", "x86_64")
    _sdk_build_file(repository_ctx, platform)
    _remote_sdk(repository_ctx)

_ruby_download_sdk = repository_rule(
    _ruby_download_sdk_impl,
    attrs = {
        "version": attr.string(),
    },
)

def _remote_sdk(repository_ctx):
    _download_openssl(repository_ctx, version = "1.1.1c")
    _download_ruby(repository_ctx, version = "2.6.3")
    openssl_path, ruby_path = "openssl/build", ""
    _build(repository_ctx, "openssl", openssl_path, ruby_path)
    _build(repository_ctx, "ruby", openssl_path, ruby_path)

def _build(repository_ctx, name, openssl_path, ruby_path):
    script_name = "build-{}.sh".format(name)
    template_name = "build-{}.template".format(name)
    repository_ctx.template(
        script_name,
        Label("@rules_ruby//ruby/private:{}".format(template_name)),
        substitutions = {
            "{ssl_build}": openssl_path,
            "{ruby_build}": ruby_path,
        },
    )
    repository_ctx.report_progress("Building {}".format(name))
    res = repository_ctx.execute(["./" + script_name], timeout = 20 * 60)
    if res.return_code != 0:
        print("res %s" % res.return_code)
        print(" -stdout: %s" % res.stdout)
        print(" -stderr: %s" % res.stderr)
Any advice on how I can make Bazel aware of these build artifacts so that it doesn't rebuild them every time?
The problem is that Bazel isn't really building your ruby and openssl. When it prepares your build tree and runs the repository rule, it just executes a shell script as instructed, which happens to build them, but that fact is essentially opaque to Bazel (and it also happens before Bazel itself would even start building).
There might be others, but off the top of my head I see the following as your options:
Pre-build your ruby environment and consume its results as an external dependency. The obvious downside (which may or may not be quite a lot of pain) is that you need to do so for all platforms you need to support (including making sure detection and the corresponding download are correct). The upside is that you really only build once (per platform) and also have control over the tooling used across all hosts. This would likely be my primary choice.
Build ssl and ruby like any other C sources, making them just another Bazel target. This, however, means you'd need to bazelify their builds (describe and maintain a Bazel build of an otherwise Bazel-unaware project).
You can continue further along the path you've started and (sort of) leave Bazel out of it. That is, for these builds, extend the magic in the build scripts: use a deterministic location, and perhaps manifest files of what is around (also to make corruption less likely), so that the scripts can determine that the build has already taken place and simply collect its previous results; a sketch of this follows.
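A rough sketch of that last option (not from the original answer; the cache location and helper name are made up, and a real version would need to key the cache on versions and platform):
# Hypothetical variant of _build that reuses results from a deterministic
# cache directory outside the repository tree.
def _build_cached(repository_ctx, name, openssl_path, ruby_path):
    cache_dir = repository_ctx.os.environ.get("HOME", "/tmp") + "/.cache/rules_ruby/" + name
    marker = repository_ctx.path(cache_dir + "/BUILT")
    if marker.exists:
        # Previous results exist; copy them in instead of rebuilding.
        repository_ctx.execute(["cp", "-r", cache_dir + "/out", name])
        return
    _build(repository_ctx, name, openssl_path, ruby_path)
    # Stash the results for the next fetch.
    repository_ctx.execute(["mkdir", "-p", cache_dir])
    repository_ctx.execute(["cp", "-r", name, cache_dir + "/out"])
    repository_ctx.execute(["touch", str(marker)])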
Is it possible to use bazel query to get a list of all remote repositories (e.g. @com_google_protobuf) that are available?
I don't know of any way to get exactly this, but you can get an overapproximation by querying the synthetic //external package, which contains one target for every external repository (along with some other targets by default).
Example:
$ cat WORKSPACE
local_repository(name = "a", path = "a")
maven_jar(name = "b", artifact = "com.google.guava:guava:19.0")
$ bazel query //external:all
//external:local_jdk
//external:local_config_xcode
//external:local_config_cc
//external:jre-default
//external:jre
//external:jni_md_header-linux
//external:jni_md_header-freebsd
//external:jni_md_header-darwin
//external:jni_header
//external:jdk-default
//external:jdk
//external:javac
//external:java
//external:jar
//external:has_androidsdk
//external:extdir
//external:extclasspath
//external:cc_toolchain
//external:bootclasspath
//external:bazel_tools
//external:bazel_j2objc
//external:b
//external:android_sdk_for_testing
//external:android_ndk_for_testing
//external:android/sdk
//external:android/dx_jar_import
//external:android/crosstool
//external:a
Note that //external:a and //external:b appear in the results.
I'm working on implementing a feature like Strict Java Deps for rules_scala.
I'd really like the ability to configure at runtime whether this uses warn or error.
I seem to recall that Skylark rules can't create and access command-line flags, but I don't recall whether they can access existing ones.
The main difference is that existing flags are already parsed, so maybe they are also passed in some ctx.
The flag you want (strict_java_deps) isn't available through Skylark at the moment. There's no reason we can't add it, though; I filed #3295 to track this.
For other flags, the context can access configuration fragments, which expose some of the parsed command-line flags. I think what you want is ctx.fragments: use it to get the Java fragment, then read default_javac_flags from that:
# rules.bzl
def _impl(ctx):
    print("flags: %s" % ctx.fragments.java.default_javac_flags)
    ...

frag = rule(
    implementation = _impl,
    fragments = ["java"],  # Declare that this rule uses java fragments.
)
Then:
$ bazel build --javacopt="-g:source,lines" :x
WARNING: /home/kchodorow/test/a/tester.bzl:2:3: flags: ["-g:source,lines"].