How to propagate data down the build chain using Bazel aspects

Let's say I have a simple Java program including two classes, Example and Example2, and another class that uses both of them, ExamplesUsage. I have corresponding Bazel build targets of kind java_library: example, example2 and examples_usage, so example and example2 need to be compiled before examples_usage is built.
I want to accumulate information from all three targets using the Bazel aspects propagation technique. How do I go about doing that?

Here's an example that accumulates the number of source files along this build chain:
def _counter_aspect_impl(target, ctx):
    sources_count = len(ctx.rule.attr.srcs)
    print("%s: own amount - %s" % (target.label.name, sources_count))
    for dep in ctx.rule.attr.deps:
        sources_count = sources_count + dep.count
    print("%s: including deps: %s" % (target.label.name, sources_count))
    return struct(count = sources_count)

counter_aspect = aspect(
    implementation = _counter_aspect_impl,
    attr_aspects = ["deps"],
)
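Assuming the aspect lives in a file like defs.bzl at the workspace root (the file location is my assumption, not stated in the original), it can be applied from the command line like so:

bazel build //:examples_usage --aspects=//:defs.bzl%counter_aspect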
If we run it on the hypothetical Java program, we get the following output:
example2: own amount - 1
example2: including deps: 1
example: own amount - 1
example: including deps: 1
examples_usage: own amount - 1
examples_usage: including deps: 3
As you can see, the dependency targets' aspects were run first, and only then was the dependent target's aspect run.
Of course, in order to actually utilize the information, some action (ctx.action or ctx.file_action in older Bazel versions, ctx.actions.run or ctx.actions.write today) needs to be registered to persist the gathered data.
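For instance, here is a minimal sketch of my own (assuming a Bazel version with the ctx.actions API that still accepts the legacy struct return shown above) that writes each target's accumulated count to a file and exposes it through an output group:

def _counter_aspect_impl(target, ctx):
    sources_count = len(ctx.rule.attr.srcs)
    for dep in ctx.rule.attr.deps:
        sources_count += dep.count

    # Persist the accumulated count so the data survives the build.
    out = ctx.actions.declare_file(target.label.name + ".count")
    ctx.actions.write(out, str(sources_count))

    return struct(
        count = sources_count,
        # Expose the file so --output_groups=source_counts requests it.
        output_groups = {"source_counts": depset([out])},
    )

counter_aspect = aspect(
    implementation = _counter_aspect_impl,
    attr_aspects = ["deps"],
)

Building with bazel build //:examples_usage --aspects=//:defs.bzl%counter_aspect --output_groups=source_counts would then materialize one .count file per target in the chain.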

Related

How to list the output groups of a Bazel rule?

From https://stackoverflow.com/a/59455700/6162120:
cc_library produces several outputs, which are separated by output groups. If you want to get only .so outputs, you can use a filegroup with the dynamic_library output group.
Where can I find the list of all the output groups produced by cc_library? And more generally, how can I list all the output groups of a Bazel rule?
In the next Bazel release (after 3.7), or using Bazel built at HEAD as of today, you can use cquery --output=starlark and its providers() function to do this:
$ bazel-dev cquery //:java-maven \
--output=starlark \
--starlark:expr="[p for p in providers(target)]"
["InstrumentedFilesInfo", "JavaGenJarsProvider", "JavaInfo", "JavaRuntimeClasspathProvider", "FileProvider", "FilesToRunProvider", "OutputGroupInfo"]
This isn't a replacement for documentation, but it's possible to get the output groups of targets using an aspect:
defs.bzl:
def _output_group_query_aspect_impl(target, ctx):
    for og in target.output_groups:
        print("output group " + str(og) + ": " + str(getattr(target.output_groups, og)))
    return []

output_group_query_aspect = aspect(
    implementation = _output_group_query_aspect_impl,
)
Then on the command line:
bazel build --nobuild Foo --aspects=//:defs.bzl%output_group_query_aspect
(--nobuild runs just the analysis phase and avoids running the execution phase if you don't need it)
For a java_binary this returns e.g.:
DEBUG: defs.bzl:3:5: output group _hidden_top_level_INTERNAL_: depset([<generated file _middlemen/Foo-runfiles>])
DEBUG: defs.bzl:3:5: output group _source_jars: depset([<generated file Foo-src.jar>])
DEBUG: defs.bzl:3:5: output group compilation_outputs: depset([<generated file Foo.jar>])
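To follow up on the filegroup approach quoted in the question, once you know the name of the output group you want, you can expose it as an ordinary target. A small sketch (target names are my own placeholders):

filegroup(
    name = "mylib_shared",
    srcs = [":mylib"],
    output_group = "dynamic_library",
)

bazel build //:mylib_shared then yields only the .so outputs of the cc_library.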

`knitr_in`, `file_out` and `vis_drake_graph` usage in R drake

I'm trying to understand how to use knitr_in, file_out and vis_drake_graph properly in drake.
I have three questions.
Q1: Usage of knitr_out and file_out to create markdown reports
While code like this works correctly for one of my smaller projects:
make_hyp_data_aggregated_report <- function() {
  render(
    input = knitr_in("rmd/hyptest-is-data-being-aggregated.Rmd"),
    output_file = file_out("~/projectname/reports/01-hyp-test.html"),
    quiet = TRUE
  )
}
plan <- drake_plan(
  ...
  ...
  hyp_data_aggregated_report = make_hyp_data_aggregated_report(),
  ...
  ...
)
Very similar code in my large project (with 10+ reports) doesn't work quite right: while the reports get built, the knitr_in objects don't get displayed as blue squares in the graph drawn by drake::vis_drake_graph().
Both projects use drake::loadd(...) within the R Markdown files to get objects from the cache.
Is there some code in vis_drake_graph that removes these squares once the graph gets busy?
Q2: file_out objects in vis_drake_graph
Is there a way to display the file_out objects themselves as circles/squares in vis_drake_graph?
Q3: packages showing up in vis_drake_graph
Is there a way to keep vis_drake_graph from printing the packages explicitly? (Basically anything with a ::.)
Q1
Every literal file path needs its own knitr_in() or file_out(). If you have one function with one knitr_in(), even if you use the function multiple times, that still only counts as one file path. I recommend writing these keywords at the plan level, e.g.
plan <- drake_plan(
r1 = render(knitr_in("report1.Rmd"), output_file = file_out("report1.html")),
r2 = render(knitr_in("report2.Rmd"), output_file = file_out("report2.html")),
r3 = render(knitr_in("report3.Rmd"), output_file = file_out("report3.html"))
)
Q2
They should appear unless you set show_output_files = FALSE in vis_drake_graph().
Q3
No, but if it's any consolation, I do regret the decision to track namespaced functions and objects at all in drake. drake's approach is fundamentally suboptimal for tracking packages, and I plan to get rid of it if there ever comes time for a round of breaking changes. Otherwise, the only way to get rid of it is vis_drake_graph(targets_only = TRUE), which also removes all the imports from the graph.
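To illustrate the two parameters mentioned in the answers above, a hypothetical pair of calls (assuming a recent drake where vis_drake_graph() accepts drake_config() arguments such as the plan directly):

library(drake)

# file_out() files appear as nodes by default; this hides them:
vis_drake_graph(plan, show_output_files = FALSE)

# Drop imports, including namespaced functions, keeping only targets:
vis_drake_graph(plan, targets_only = TRUE)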

Abaqus reads input file very slowly with many materials

EDIT: It turned out that this is not what causes the slow import after all. Nevertheless, the answer given explains a better way to implement different densities with one material, so I'll leave the question up. (The slow import was caused by running the scripts from the Abaqus PDE instead of using 'Run script' from the file menu. Special thanks to droooze for finding the problem.)
I'm trying to optimize the porosity distribution of a certain material. Therefore I'm performing Abaqus FEA simulations with roughly 500 different materials in one part. The simulation itself only takes about 40 seconds, but reading the input file takes more than 3 minutes. (I used a Python script to generate the .inp file.)
I'm using these commands to generate my materials in the input file:
*SOLID SECTION, ELSET=ES_Implant_MAT0, MATERIAL=Implant_material0
*ELSET, ELSET=ES_Implant_MAT0
6,52,356,376,782,1793,1954,1984,3072
*MATERIAL, NAME=Implant_material0
*DENSITY
4.43
*ELASTIC
110000, 0.3
Any idea why this is so slow, and is there a more efficient way to do this to reduce the input-file loading time?
If your ~500 materials are all of the same kind (e.g. all linear elastic isotropic with a mass density), then you can collapse them into a single material and define distribution tables that map the properties directly onto the instance element labels.
Syntax:
(somewhere in the Part definition, under section)
*SOLID SECTION, ELSET=ES_Implant_MAT0, MATERIAL=Implant_material0
(somewhere in the Assembly definition; part= should reference the name of the part above)
**
**
** ASSEMBLY
**
*Assembly, name=Assembly
**
*Instance, name=myinstance, part=mypart
*End Instance
**
*Elset, elset=ES_Implant_MAT0, instance=myinstance
1,2,...
(somewhere in the Materials definition; see Abaqus Keywords Reference Guide for the keywords *DISTRIBUTION TABLE and *DISTRIBUTION)
**
** MATERIALS
**
*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_ELASTIC_TABLE
MODULUS,RATIO
*DISTRIBUTION, NAME=Implant_material0_elastic, LOCATION=element, TABLE=IMPLANT_MATERIAL0_ELASTIC_TABLE
** The first data line (leading comma) is the default value.
,110000,0.3
** Syntax of the following lines: instance name [dot] instance element label.
** These elements currently use the material properties assigned to
** ELSET=ES_Implant_MAT0. You can define the properties belonging to other
** element sets in this same table, making sure you reference the element
** labels correctly.
myinstance.1,110000,0.3
myinstance.2,110000,0.3
...
*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_DENSITY_TABLE
DENSITY
*DISTRIBUTION, NAME=Implant_material0_density, LOCATION=element, TABLE=IMPLANT_MATERIAL0_DENSITY_TABLE
** Default value
,4.43
myinstance.1,4.43
myinstance.2,4.43
...
** The data lines under *Elastic and *Density reference the distributions by name.
*Material, name=Implant_material0
*Elastic
Implant_material0_elastic
*Density
Implant_material0_density

How to refresh a Nix derivation (Nix Pills 7.5) in nix-repl?

Looking at the example given in the 7th Nix Pill, typing :b simple builds the derivation for a short C program. In nix-repl,
simple = derivation { name = "simple"; builder = "${bash}/bin/bash"; args = [ ./simple_builder.sh ]; gcc = gcc; coreutils = coreutils; src = ./simple.c; system = builtins.currentSystem; }
:b simple
compiles the source and creates the output directory containing the simple executable.
If the C source changes a bit, say we now want it to print "Simple?", rebuilding the same derivation simple with the same arguments doesn't seem to pick up the change.
Why is that, and does it mean that even for minor changes in the C file a new name argument has to be given to the derivation?
If so, how do I get rid of the old derivations made within nix-repl with :b?
The problem is that simple is a value/constant rather than a function. Given that Nix is purely functional, it doesn't matter how many times you evaluate simple: it will always produce the same output (within the same instance of nix-repl). On the other hand, putting the expression in an external file (e.g. simple.nix) and using nix-build will pick up changes to the derivation's inputs, including simple.c.
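For example, a minimal sketch of that external-file approach (assuming the same simple_builder.sh and simple.c as in the Nix Pills example):

# simple.nix
with import <nixpkgs> {};

derivation {
  name = "simple";
  builder = "${bash}/bin/bash";
  args = [ ./simple_builder.sh ];
  inherit gcc coreutils;
  src = ./simple.c;
  system = builtins.currentSystem;
}

Running nix-build simple.nix again after editing simple.c produces a fresh output path, because the hash of the src input changes with the file contents.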
As for cleaning up derivations, you can use nix-collect-garbage.

How to use the result of SMEMBERS as input for SUNION in a Lua script

I'm trying to produce a Lua script that takes the members of a set (each member being the name of another set) and returns the union of those sets.
This is a concrete example with these 3 sets:
smembers u:1:skt:n1
1) "s2"
2) "s3"
3) "s1"
smembers u:1:skt:n2
1) "s4"
2) "s5"
3) "s6"
smembers u:1:skts
1) "u:1:skt:n1"
2) "u:1:skt:n2"
So the set u:1:skts contains the names of the other two sets, and I want to produce the union of u:1:skt:n1 and u:1:skt:n2, as follows:
1) "s1"
2) "s2"
3) "s3"
4) "s4"
5) "s5"
6) "s6"
This is what I have so far:
local indexes = redis.call("smembers", KEYS[1])
return redis.call("sunion", indexes)
But I get the following error:
(error) ERR Error running script (call to f_c4d338bdf036fbb9f77e5ea42880dc185d57ede4):
#user_script:1: #user_script: 1: Lua redis() command arguments must be strings or integers
It seems it doesn't like the indexes table being passed as an argument to the sunion command. Any ideas?
Do not do this, or you'll have trouble moving to Redis Cluster. From the documentation:
All Redis commands must be analyzed before execution to determine which keys the command will operate on. In order for this to be true for EVAL, keys must be passed explicitly. This is useful in many ways, but especially to make sure Redis Cluster can forward your request to the appropriate cluster node.
Note this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster.
If you still decide to go against the rules, use Lua's unpack to expand the table into individual string arguments:
local indexes = redis.call("smembers", KEYS[1])
return redis.call("sunion", unpack(indexes))
