How do I disable PIC for a toolchain?

I have a custom toolchain:

load("@rules_cc//cc:defs.bzl", "cc_toolchain")
load("@bazel_tools//tools/cpp:unix_cc_toolchain_config.bzl", "cc_toolchain_config")

cc_toolchain_config(
    name = "config-gba",
    # ... omitted ...
)

cc_toolchain(
    name = "cc-compiler-gba",
    toolchain_config = ":config-gba",
    # ... omitted ...
)
This toolchain works.
I just migrated to unix_cc_toolchain_config.bzl, but I need to disable PIC for this toolchain. How can I do that?
I can disable the feature globally from the command line,
# This works.
$ bazel build --features=-pic
I can also disable it on a per-target or per-package basis,
# This works.
cc_library(
    name = "...",
    features = ["-pic"],
    # ...
)
Is there a way to disable it for the whole toolchain? Adding the features to the cc_toolchain doesn't do anything:
# This doesn't disable PIC compilation for targets compiled
# with this toolchain.
cc_toolchain(
    name = "cc-compiler-gba",
    toolchain_config = ":config-gba",
    features = ["-pic"],
    # ... omitted ...
)

Bazel decides whether a toolchain supports PIC by checking whether the supports_pic toolchain feature is enabled. unix_cc_toolchain_config.bzl unconditionally enables supports_pic, so the only way to remove it is to fork unix_cc_toolchain_config.bzl.
(Setting features on a cc_toolchain rule is a no-op. Every rule implicitly has a features attribute, but more or less the only rules that use it are the C++-building rules such as cc_library and cc_binary.)
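For reference, the relevant definition inside unix_cc_toolchain_config.bzl looks roughly like the following; in a fork you would flip it off. The surrounding structure is paraphrased here and may differ between Bazel versions:

# In your fork of unix_cc_toolchain_config.bzl (feature() is already
# loaded there from cc_toolchain_config_lib.bzl):
supports_pic_feature = feature(
    name = "supports_pic",
    enabled = False,  # upstream hardcodes True; False disables PIC for the toolchain
)
# supports_pic_feature is then included in the `features` list that the
# config rule passes to cc_common.create_cc_toolchain_config_info(...).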

Related

Platform constraints rejecting the host (matching) platform

I'm trying to use platform constraints on a target:
cc_test(
    name = "library_test",
    srcs = ["library_test.cc"],
    deps = [":library"],
    target_compatible_with = [
        "@platforms//cpu:x86_64",
        "@platforms//os:linux",
    ],
)
But I'm getting this error:
Dependency chain:
    //platforms:library_test (1f7c4b) <-- target platform (@local_config_platform//:host) didn't satisfy constraints [@platforms//cpu:x86_64, @platforms//os:linux]
I find this confusing, since those constraints should describe my machine. How do I print the constraint values that @local_config_platform//:host actually has?
I do have the --incompatible_enable_cc_toolchain_resolution flag turned on.
You can find this in bazel-out/../../../external/local_config_platform/constraints.bzl (relative to your Bazel workspace).
Similar paths also work for any other external repository (@something lives in bazel-out/../../../external/something). Repository rules create these folders via various mechanisms, and being able to look at the result is very helpful for debugging.
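On a Linux x86_64 host, for example, the generated file looks roughly like this (the exact values depend on what Bazel detected on your machine):

# external/local_config_platform/constraints.bzl (auto-generated)
HOST_CONSTRAINTS = [
    '@platforms//cpu:x86_64',
    '@platforms//os:linux',
]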

How to query sibling rules from a Bazel rule

I would like to be able to do the following in a Bazel BUILD file:
alpha(
    name = "hello world",
    color = "blue",
)

beta(
    name = "hello again",
)
where alpha and beta are custom rules. I want beta to be able to access the color attribute of the alpha rule without adding a label attribute. With bazel query I can do something like this:
bazel query 'kind(beta, siblings(kind(alpha, //...)))'
which gives me the beta that sits next to the alpha. Can I achieve the same from within the implementation function of the beta rule?
def _beta_rule_impl(ctx):
    # This does not exist, I wish it did: ctx.siblings(kind = "alpha")
I've seen this done with a label, like this:

beta(
    name = "hello again",
    alpha_link = ":hello world",  # explicitly linking
)
but I find this a bit verbose, especially since bazel query already supports sibling queries.
As the question is formulated, the answer is no; it is not possible. Bazel's design philosophy is to be explicit about target dependencies, and the provider mechanism exists to give access to dependency-graph information during the analysis phase. It is difficult to tell what the actual use case is; aspects might be the answer.
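For completeness, here is a minimal sketch of the explicit style the question is trying to avoid: alpha exports its color through a provider, and beta reads it through an ordinary label attribute (all names are illustrative):

# rules.bzl
ColorInfo = provider(fields = ["color"])

def _alpha_impl(ctx):
    # Export the color attribute so dependents can read it.
    return [ColorInfo(color = ctx.attr.color)]

alpha = rule(
    implementation = _alpha_impl,
    attrs = {"color": attr.string()},
)

def _beta_impl(ctx):
    # Read the provider from the explicitly declared dependency.
    print(ctx.attr.alpha_link[ColorInfo].color)

beta = rule(
    implementation = _beta_impl,
    attrs = {"alpha_link": attr.label(providers = [ColorInfo])},
)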
In my scenario, I'm trying to get a genrule to call a test rule before proceeding:
genrule(
    name = "generate_buf_image",
    srcs = [":protos", "cookie"],
    outs = ["buf-image.json"],
    cmd = "$(location //third_party/buf:cas_buf_image) //example-grpc/proto/v1:proto_backwards_compatibility_check $(SRCS) >$(OUTS)",
    tools = [
        "//third_party/buf:cas_buf_image",
        "@buf",
    ],
)
If cas_buf_image.sh has ls -l "example-grpc/proto/v1" >&2, it shows:
… cookie -> …/example-grpc/proto/v1/cookie
… example.proto -> …/example-grpc/proto/v1/example.proto
In other words, examining what example-grpc/proto/v1/cookie links to, cd-ing to that directory, and then running the git commands should work.
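A minimal sketch of that idea (the target name and git command are placeholders; readlink -f is GNU-specific, so macOS hosts may need an alternative):

genrule(
    name = "inspect_cookie",
    srcs = ["cookie"],
    outs = ["cookie-info.txt"],
    cmd = """
        out=$$PWD/$@                                   # remember the output path before cd-ing
        real=$$(readlink -f $(location cookie))        # resolve the sandbox symlink
        (cd $$(dirname $$real) && git status) > $$out  # run git in the real source directory
    """,
)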

How to not rebuild artifacts on every invocation

I want to download and build Ruby within a workspace. I've been trying to implement this by mimicking rules_go, and I have that part working. The issue I'm having is that the OpenSSL and Ruby artifacts are rebuilt every time ruby_download_sdk is invoked. In the code below, the downloaded artifacts are cached, but the builds of OpenSSL and Ruby always execute.
def ruby_download_sdk(name, version = None):
    # TODO detect os and arch
    os, arch = "osx", "x86_64"
    _ruby_download_sdk(
        name = name,
        version = version,
    )
    _register_toolchains(name, os, arch)

def _ruby_download_sdk_impl(repository_ctx):
    # TODO detect platform
    platform = ("osx", "x86_64")
    _sdk_build_file(repository_ctx, platform)
    _remote_sdk(repository_ctx)

_ruby_download_sdk = repository_rule(
    _ruby_download_sdk_impl,
    attrs = {
        "version": attr.string(),
    },
)

def _remote_sdk(repository_ctx):
    _download_openssl(repository_ctx, version = "1.1.1c")
    _download_ruby(repository_ctx, version = "2.6.3")
    openssl_path, ruby_path = "openssl/build", ""
    _build(repository_ctx, "openssl", openssl_path, ruby_path)
    _build(repository_ctx, "ruby", openssl_path, ruby_path)

def _build(repository_ctx, name, openssl_path, ruby_path):
    script_name = "build-{}.sh".format(name)
    template_name = "build-{}.template".format(name)
    repository_ctx.template(
        script_name,
        Label("@rules_ruby//ruby/private:{}".format(template_name)),
        substitutions = {
            "{ssl_build}": openssl_path,
            "{ruby_build}": ruby_path,
        },
    )
    repository_ctx.report_progress("Building {}".format(name))
    res = repository_ctx.execute(["./" + script_name], timeout = 20 * 60)
    if res.return_code != 0:
        print("res %s" % res.return_code)
        print(" -stdout: %s" % res.stdout)
        print(" -stderr: %s" % res.stderr)
Any advice on how I can make Bazel aware of these build artifacts so that it doesn't rebuild them every time?
The problem is that Bazel isn't really building your Ruby and OpenSSL. When it prepares your build tree and runs the repository rule, it just executes a shell script as instructed; that script happens to build things, but this fact is essentially opaque to Bazel (and it also happens before Bazel itself even starts building). There may be others, but off the top of my head I see the following options:

1. Pre-build your Ruby environment and consume the results as an external dependency (see the sketch after this list). The obvious downside (which may or may not be a lot of pain) is that you need to do this for every platform you support, including correct platform detection and the corresponding downloads. The upside is that you really do build only once (per platform) and also control the tooling used across all hosts. This would likely be my primary choice.

2. Build OpenSSL and Ruby like any other C sources, making them just another Bazel target. This, however, means you'd need to bazelify their builds (describe and maintain a Bazel build for otherwise Bazel-unaware projects).

3. Continue along the path you've started and (sort of) leave Bazel out of it: extend the magic in the build scripts, for instance by using a deterministic location and perhaps manifest files of what is around (which also makes corruption less likely), so that the scripts themselves can detect that the build has already taken place and simply collect the previous results.
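A sketch of the first option, fetching a prebuilt archive instead of compiling inside the repository rule (the URL, checksum placeholder, and build-file label are all hypothetical; you would publish one archive per platform):

# WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "ruby_sdk_osx_x86_64",
    urls = ["https://example.com/ruby-2.6.3-osx-x86_64.tar.gz"],
    sha256 = "<checksum of the prebuilt archive>",
    build_file = "@rules_ruby//ruby/private:BUILD.sdk.bazel",
)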

Bazel - can a Skylark action read a command-line flag (strict_java_deps)?

I'm working on implementing a feature like Strict Java Deps for rules_scala.
I'd really like to be able to configure at runtime whether this warns or errors.
I seem to recall Skylark rules can't create and access command-line flags, but I don't recall whether they can access existing ones. The main difference is that existing flags are already parsed, so maybe they are passed in through some ctx.
The flag you want (strict_java_deps) isn't available through Skylark at the moment. There's no reason we can't add it, though; filed #3295 to track.
For other flags, the context can access configuration fragments, which expose some of the parsed command-line flags. I think what you want is ctx.fragments: use it to get the java fragment, then read default_javac_flags from it:
# rules.bzl
def _impl(ctx):
    print("flags: %s" % ctx.fragments.java.default_javac_flags)
    ...

frag = rule(
    implementation = _impl,
    fragments = ["java"],  # Declare that this rule uses java fragments
)
Then:
$ bazel build --javacopt="-g:source,lines" :x
WARNING: /home/kchodorow/test/a/tester.bzl:2:3: flags: ["-g:source,lines"].

Get the value of export_includes for a target

I have a target in waf defined like this:

bld(...,
    target='asdf',
    export_includes='.'
)
I want to get the value of the export_includes for that target (for use in some custom commands).
How do I get it?
Use a custom rule with the features of whatever language you're processing. For example, let's say I'm processing C:
def print_includes(t):
    print(t.env.INCPATHS)

bld(features='c', use='asdf', rule=print_includes)
The task t's env will contain all the relevant environment variables derived from bld.env, plus the additional flags stemming from use-ing the target.
I.e. if I originally had bld.env.INCPATHS == ['old-path', 'other-old-path'], it'll end up being printed as ['old-path', 'other-old-path', 'export_include_for_asdf_here'].
