get value of specific attribute of given bazel target - bazel

I am sure this is documented somewhere but unable to find the answer anywhere.
if I have:
```
bazel_rule(
    name = "foo",
    srcs = ["foo.cpp"],
    attr_bar = "bar",
)
```
If I have a reference to this rule (//src:foo) in a Starlark (.bzl) file, how can I query the target to get the value of a specific attribute? E.g. get_attribute("//src:foo", "attr_bar") should return "bar" in this example.

It depends on whether you're trying to read the attribute from a macro, a rule, or an aspect.
Short answers:
A macro can't read attributes of a target (roughly, macros are evaluated at BUILD file loading time, and attributes are evaluated later at analysis time). You can have the macro take the attributes you care about and create the rule (bazel_rule in your example) itself, so that the macro has the attribute value, but this usually quickly becomes messy and hard to follow.
A Starlark rule also can't directly read attribute values from dependencies (it can read its own attributes though, of course). The rule you're interested in (bazel_rule here) has to put the information in a provider and the Starlark rule reads the provider from its dependencies.
An aspect can read the attributes of the rule it's being evaluated on directly through ctx.rule.attr.<attr_name> (the example here does this; see also the sketch below).
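For the aspect case, a minimal sketch; the file name defs.bzl, the aspect name print_attr, and the hasattr guard are my own choices, not from the linked example:

def _print_attr_impl(target, ctx):
    # ctx.rule.attr exposes the attributes of the rule the aspect is visiting.
    if hasattr(ctx.rule.attr, "attr_bar"):
        print("%s: attr_bar = %s" % (target.label, ctx.rule.attr.attr_bar))
    return []

print_attr = aspect(
    implementation = _print_attr_impl,
)

It can then be applied from the command line with something like bazel build //src:foo --aspects=//:defs.bzl%print_attr.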

Related

Consuming contents of declare_directory

I have rule A implemented with a macro that uses declare_directory to produce a set of files:
output = ctx.actions.declare_directory("selected")
Names of those files are not known in advance. The implementation returns the directory created by declare_directory with the following:
return DefaultInfo(
    files = depset([output]),
)
Rule A is included in the "srcs" attribute of rule B. Rule B is also implemented with a macro. Unfortunately the list of files passed to B's implementation through the "srcs" attribute only contains the "selected" directory created by rule A, not the files residing in that directory.
I know that Args class supports expansion of directories so I could pass names of all files in "selected" directory to a single action. What I need, however, is a separate action for every individual file for parallelism and caching. What is the best way to achieve that?
This is one of the intended use cases of directory outputs (called TreeArtifacts in the implementation), and it's implemented using ActionTemplate:
https://github.com/bazelbuild/bazel/blob/c2100ad420618bb53754508da806b5624209d9be/src/main/java/com/google/devtools/build/lib/actions/ActionTemplate.java#L24-L57
However, this is not exposed to Starlark, and it currently has only a couple of usages, in the Android rules (AndroidBinary.java) and the C++ rules (CcCompilationHelper.java). The Android and C++ rules are going to be migrated to Starlark, so this functionality might eventually be made available in Starlark, but I'm not aware of any concrete timelines. It would probably be good to file a feature request on GitHub.
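In the meantime, the single-action fallback the question mentions (letting Args expand the directory) looks roughly like the sketch below; the rule name process_all, the srcs attribute, and the tool //tools:process are assumptions:

def _process_all_impl(ctx):
    out = ctx.actions.declare_directory(ctx.label.name + "_out")
    args = ctx.actions.args()
    # Directories passed to Args are expanded into their individual files at
    # execution time, so the tool sees every file inside "selected".
    args.add_all(ctx.files.srcs, expand_directories = True)
    args.add(out.path)
    ctx.actions.run(
        executable = ctx.executable._tool,
        arguments = [args],
        inputs = ctx.files.srcs,
        outputs = [out],
    )
    return [DefaultInfo(files = depset([out]))]

process_all = rule(
    implementation = _process_all_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_tool": attr.label(
            default = "//tools:process",  # hypothetical tool target
            executable = True,
            cfg = "exec",
        ),
    },
)

This still produces one action for the whole directory, so it does not give the per-file parallelism and caching you're after.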

How can I create a link to a random text in the same document? [duplicate]

I'm using :class: and getting a lot of warnings
WARNING: py:class reference target not found: mypkg.submodule.class.
I can't find anywhere in the documentation what exactly the requirements are for a correct cross-reference.
This is currently an incomplete list of requirements I think there are:
The module of the object needs to be importable
The object needs to exist inside of the module
The object needs to be documented somewhere else in the build with a .. py:class::, .. py:function:: or similar directive
This directive can be generated by the autodoc extension, in which case the object needs to have a docstring associated with it.
For something to be cross-referenced it has to first be "declared". There are 2 cases to consider:
domain directives (.. domain:directive_name::), which declare objects, and
roles (:domain:role_name:), which reference them.
The Python domain (name py) provides both. The :class: you are using is actually the shortened syntax of the role :py:class:, not to be confused with the directive declaration .. py:class::.
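For example (a sketch, with an assumed module and class name), a declaration and a reference to it look like:

.. py:class:: mypkg.submodule.MyClass

   Declared with the domain directive; it can now be cross-referenced.

Elsewhere in the build, reference it with the role :py:class:`mypkg.submodule.MyClass`.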
The directive declarations are done implicitly by autodoc, but for objects without docstrings to be declared by autodoc you must use the :undoc-members: option with the autodoc directives.
Members without docstrings will be left out, unless you give the undoc-members flag option:
.. automodule:: noodle
   :members:
   :undoc-members:
One effect of declaring an object is that it is inserted in the index. So you can check the index to make sure it has been declared and inserted. (However note that labels used in referencing arbitrary locations are not inserted in the index.)

Bazel - best documentation for which providers are used by any given rule?

I am writing a custom rule that takes inputs from cc_library, cc_binary, apple_static_library, and a few other platform-specific rules. I'd like to view each API given to me via referencing ctx.attr.foo inside the custom rule's implementation function.
There is a list of providers here https://docs.bazel.build/versions/master/skylark/lib/skylark-provider.html but it doesn't say which rules are using them.
Is there a best practice for viewing what these rules are providing me, or does it require going through the source for each one?
This is how you get all providers and output groups from a target:
bazel cquery my_target --output=starlark --starlark:expr="providers(target)"
You can get a list of providers for a given target with dir. Something like this is helpful for debugging:
def _print_attrs_impl(ctx):
    for target in ctx.attr.targets:
        print('%s: %s' % (target.label, dir(target)))
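To make that snippet runnable, a minimal rule declaration might look like this (the attribute name targets is an assumption):

print_attrs = rule(
    implementation = _print_attrs_impl,
    attrs = {"targets": attr.label_list()},
)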
Printing from inside a rule you're developing is often helpful too, to verify targets are actually what you expect them to be.
You can also apply dir to the providers themselves, to see what fields they have.

How to pass an array from Bazel cli to rules?

Let's say I have a rule like this.
foo(
    name = "helloworld",
    myarray = [
        ":bar",
        "//path/to:qux",
    ],
)
In this case, myarray is static.
However, I want it to be given by cli, like
bazel run //:helloworld --myarray=":bar,//path/to:qux,:baz,:another"
How is this possible?
Thanks
To get exactly what you're asking for, Bazel would need to support LABEL_LIST in Starlark-defined command line flags, which are documented here:
https://docs.bazel.build/versions/2.1.0/skylark/lib/config.html
and here: https://docs.bazel.build/versions/2.1.0/skylark/config.html
Unfortunately that's not implemented at the moment.
If you don't actually need a list of labels (i.e., to create dependencies between targets), then maybe STRING_LIST will work for you.
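As a rough sketch of that STRING_LIST route, a Starlark build setting could look like the following; the file, rule, and provider names are assumptions, and note the values arrive as plain strings, not as label dependencies:

# myarray_flag.bzl (assumed file name)
MyArrayInfo = provider(fields = ["values"])

def _myarray_flag_impl(ctx):
    # ctx.build_setting_value holds the list passed on the command line.
    return [MyArrayInfo(values = ctx.build_setting_value)]

myarray_flag = rule(
    implementation = _myarray_flag_impl,
    build_setting = config.string_list(flag = True),
)

In a BUILD file you would instantiate it with myarray_flag(name = "myarray", build_setting_default = []), set it with bazel run //:helloworld --//:myarray=":bar,//path/to:qux", and have the consuming rule depend on //:myarray and read MyArrayInfo from it.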
If you do need a list of labels, and the different possible values are known, then you can use --define, config_setting(), and select():
https://docs.bazel.build/versions/2.1.0/configurable-attributes.html
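For that last option, a sketch of what the BUILD file might look like (the config_setting name and define key are assumptions, and the possible extra values have to be known in advance):

config_setting(
    name = "with_extras",
    define_values = {"extras": "true"},
)

foo(
    name = "helloworld",
    myarray = [
        ":bar",
        "//path/to:qux",
    ] + select({
        ":with_extras": [":baz", ":another"],
        "//conditions:default": [],
    }),
)

The extra labels are then selected with bazel run //:helloworld --define=extras=true.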
The question is what you are really after. Passing a variable or array into bazel build/run isn't really possible as such, and mostly not without (very likely unwanted) side effects. Aren't you perhaps just looking to pass arguments directly to what is being run by the run, i.e. to the executable itself rather than to bazel?
There are a few ways you could sneak stuff in (in most cases you'd also need to come up with a syntax to pass the data on the CLI and unpack the array in a rule), but many come at a relatively substantial price.
You can define your array in a .bzl file and load it from where the rule uses it. You can then regenerate the .bzl content to rewrite your build/run configuration (which also keeps the change obvious and traceable), and only the rules loading and using the variable are affected. E.g., BUILD file:
load(":myarray.bzl", "myarray")
foo(
name = "helloworld",
myarray = myarray,
],
)
And you can then call your build:
$ echo 'myarray=[":bar", "//path/to:qux", ":baz", ":another"]' > myarray.bzl
$ bazel run //:helloworld
Which you can of course put in a single wrapper script. If this really needs to be a bazel array, this one is probably the cleanest way to do that.
--workspace_status_command: you can collect information about your environment, add one or both of the resulting files as a dependency of your rule (depending on whether the inputs are meant to invalidate the rule results or not, you could use the volatile or the stable status file), and process the incoming file in whatever is executed by the rule (at which point one would wonder why not pass the data as command line arguments directly). If you use the stable status file, every other rule depending on it is also invalidated by any change.
You can do a similar thing with --action_env. From within the executable/tool/script underpinning the rule you can directly access the defined environment variable. However, this means the environment of every rule is affected (not just the one you're targeting); and again, why should it parse the information from the environment instead of accepting it as command line arguments?
There is also --define, but you would not really get direct access to its value; rather, you can select() a choice out of the possible options.

Declared include source C++ compile action invalidation

(From https://groups.google.com/d/msg/bazel-discuss/HEpui0DLvnA/RzuwICDmBgAJ)
Forgive me if this has been asked and answered by the group/devs.
The list of "Declared include source" files is a component of the action key for C++ compiles.
This means that the addition of a header-extension file to srcs or hdrs of a cc_* target results in the invalidation of all compile actions which can see the declared list contents (in the hdrs case, transitively).
Can anyone explain how this could be necessary, when include pruning should be providing the minimal set of possible invalidation sources for a compile?
Reputation prevents me from commenting on your answer and the repost means that I don't own the question, but there is more to the problem than just a 're-validation':
Declared inclusion sources are transitively derived, resulting in invalidation and re-execution (not simply re-validation, the definition of which is fuzzy at best to me) of all compile actions in dependent targets.
The point of this post was to discuss (hence bazel-discuss) whether previously compiled outputs could logistically be affected by adding (not removing) the definition of a header file, without changing any source related to the previous compilation. The input set, which would have been pruned to match the used header files, should (must) be an accurate enough depiction of the only possible triggers for action re-execution. The capacity of any compile to depend upon the newly added header is nil without further changes to the actual content of the action's input set.
When the rule's definition changes (you add a file to srcs/hdrs), Bazel must assume that change may affect the compilation result, even if none of the other files changed. (For example you just added a header that was missing before.)
If you rebuild the target Bazel reruns the compilation action. If the output of that (the object file) is the same as the last time Bazel ran the action, it's going to re-validate downstream actions without re-executing them.
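For instance (a sketch with assumed names), adding new.h to hdrs below changes the definition of :lib, so its compile actions rerun; if the resulting object files come out identical, actions in targets depending on :lib are re-validated rather than re-executed:

cc_library(
    name = "lib",
    srcs = ["lib.cpp"],
    hdrs = [
        "lib.h",
        "new.h",  # newly added header: changes the rule definition
    ],
)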
