How to disable remote cache for specific Bazel rules - bazel

In my bazelrc, I have the following configuration:
build:remote --spawn_strategy=remote
build:remote --remote_local_fallback
build:remote --genrule_strategy=sandboxed
build:remote --remote_rest_cache=http://HOST/PATH
This works quite well for all rules with 'bazel build -c opt --config=remote //...'. However, for some types of rules I would like to disable the remote cache, because they rarely get cache hits and their outputs are often quite large.
Let's say we have a rule with mnemonic MNEMONIC; how can I turn off the remote cache for just this rule? Thanks a lot!

We finally got this problem resolved with '--modify_execution_info':
build:remote --modify_execution_info MNEMONIC=+no-cache
The mnemonic can be queried with 'bazel aquery'.
We have verified that this method works on Bazel 2.2.0.
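For reference, the mnemonics of a target's actions can be listed with an action graph query (the target label below is just a placeholder); each action entry in the output contains a Mnemonic line:
bazel aquery //path/to:target | grep Mnemonic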

Not sure if you can do it for an entire mnemonic, but you can add a tag to each target that you don't want to cache. There is a tag called no-cache for exactly this.
https://docs.bazel.build/versions/master/be/common-definitions.html#common.tags
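For example, a hypothetical genrule whose large output should stay out of the cache could be tagged like this (for genrules and tests, tags such as no-cache are applied to the generated actions):
genrule(
    name = "big_output",
    outs = ["big_output.bin"],
    # Produce a large, non-deterministic output that is not worth caching.
    cmd = "dd if=/dev/urandom of=$@ bs=1M count=100",
    tags = ["no-cache"],
)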

Related

Don't discard analysis cache when --action_env changes

I have an --action_env variable I'm passing into Bazel sometimes, but each time I remove it or add it back, it discards the analysis cache, which triggers a re-analysis that takes several minutes because I'm working in a large repo. Is there a way to prevent this? I'm already using --trim_test_configuration
Answer
Not really. You can think of the set of environment variables as a file that almost every Bazel action depends on. As a simple example, let's say you have a build file that looks something like this:
genrule(
    name = "foo_header",
    outs = ["foo.h"],
    # $$ keeps Bazel from expanding FOO_FROM_ENV as a make variable; the shell expands it instead.
    cmd = "echo '#define FOO '$$FOO_FROM_ENV > $@",
)

cc_library(
    name = "my_library_that_everything_depends_on",
    hdrs = [":foo_header"],
)
In this simple case it's not hard to see that if you change --action_env=FOO_FROM_ENV=7 to another value, everything that depends on foo.h now has to be completely re-analysed and rebuilt. So, while frustrating, it's probably a good thing that Bazel does this; otherwise you'd end up with an inconsistent build.
Partial workarounds
Remove usage of use_default_shell_env from your rules/actions (this is far from trivial as most of the standard rules do this)
Use a centralised cache to prevent Bazel from deleting the artifact cache, e.g. I add these to ~/.bazelrc so that the action/repository cache is shared between all my projects. This only helps if you are switching between env values on a regular basis rather than using new values each time. Also, be careful with this, as Bazel doesn't presently do any garbage collection, so the cache directories can end up very large.
# ~/.bazelrc
common --disk_cache=~/.cache/shared_bazel_action_cache
common --repository_cache=~/.cache/shared_bazel_repository_cache
Add --incompatible_strict_action_env to your project bazelrc; this prevents changes in your user shell from triggering Bazel to discard the analysis cache.
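For example, as a one-line addition to whichever bazelrc your project already uses:
# project .bazelrc
build --incompatible_strict_action_env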

In Fish, how do you tweak things around to match special key bindings?

Context
So I finally gave Fish a try, and as one would expect I encountered some friction due to differences from my usual routines.
The most astonishing for me, as for many others, was the absence of the bang operator. I'm fine with the loss of sudo !!, as the suggested function replacement seems even better to me; I named it gar, which means "to make, compel (someone to do something); to cause (something to be done)". However I'll need a replacement for !<abc><enter>, which grabs the last history line starting with <abc> and runs it without further ado; suggestions are welcome.
Now, for the more personal things:
- I use a Typematrix 2030 keyboard
- I use a bépo layout
- I like to configure default finger position keys with the most used actions
Aims
As <enter> is well positioned on my keyboard and semantically relevant for that purpose, ideally I would like to achieve the following key bindings:
ctrl-enter: accept the whole suggestion and run it without further confirmation
ctrl-tab: accept the whole suggestion and wait for further edit
alt-enter: redo the last command without further confirmation
But according to xev it appears that, at least with Gnome-terminal, these combinations are not recognized. Are there terminals that support them? For now I have remapped these three to <ctrl>-i, <alt>-i and <alt>-I respectively:
bind --preset \ci forward-char execute
bind --preset \ei forward-char
bind --preset \eI forward-word
This works as expected, but it seems that the tab key now also triggers the first binding. I guess that tab maps to <ctrl>-i at some point in the shell stack. I wasn't aware of that, so I don't know yet whether it is possible for Fish to tell them apart.
To manage jobs, I also came with
bind --preset \es fg
bind --preset \eS bg
The first works as expected, but the second one doesn't. With applications like vim, the binding should of course be handled in the application configuration itself. But for things as trivial as yes, <alt>-S won't work as expected, while <ctrl>-z continues to operate normally.
I would also like to bind commands like ls -alh and git status --short to keys that run them immediately, showing the result below the currently edited line so I can keep typing seamlessly, but I haven't found a way to do that yet.
Summary of remaining question
So here are my more precise questions summarised:
how do I bind the suspend signal to <alt>-S?
is there a terminal I can use where <alt>-<enter> and <ctrl>-<enter> work?
how do I seamlessly run a command while keeping the current command line edit in place?
can you bind something to <alt>-i without altering <tab>?
how do I bind the suspend signal to <alt>-S?
What you are doing with bind \es fg is to alter a binding inside the shell.
But when you execute yes, the shell isn't currently in the foreground, so shell bindings don't apply.
What you'd have to do instead is change the terminal settings via stty susp \cs,
but fish resets the terminal settings when executing commands (so you can't accidentally break them and end up in an unusable environment), so there currently is no way to do this in fish.
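(For illustration only: in a shell that leaves terminal settings alone you could remap the suspend character with stty susp '^S', but as said above fish undoes such changes before running commands.)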
can you bind something to <alt>-i without altering <tab>?
Sure. You bind \ei. Which is escape+i, which is alt-i (because in a terminal alt is escape).
Your problem is with ctrl-i, which in the way terminals encode control+character is tab. The application receives an actual tab character, and at that point the information has been lost.
is there a terminal I can use where <alt>-<enter> and <ctrl>-<enter> work?
Most terminals should send \e\r for alt-enter. ctrl-enter again is unencodable with the usual code (because \r is ctrl-m), just like ctrl-tab is.
Any fix to this requires the terminal to encode these combinations differently.
how do I seamlessly run a command while keeping the current command line edit in place?
I don't know what you mean by this. I'm guessing you want fish to remain open and editable while a command also runs in the foreground. That can't work. There's no way to synchronize output from two commands to a terminal, not with cursor movement being what it is.

Where to get a list of known repositories like @bazel_tools, @rules_jvm_external, etc.?

Sometimes I see extensions loading from the internet or built-in ones.
Canonical example:
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
However, I cannot distinguish a local repo from a known repo just by looking at the load expression.
How can I check the source (location) of any repo which I see in my WORKSPACE/BUILD files?
If the Bazel label is sufficient as a source, you might try listing the build files of the repos involved with bazel query 'buildfiles(//...)'.
Otherwise, you could run bazel clean --expunge and then run a build with --experimental_execution_log_file=<FILENAME>. This creates a protobuf-based log of the actions run by Bazel, and because of clean --expunge all internet repos are downloaded anew and show up in it.
Check https://github.com/bazelbuild/bazel/tree/master/src/tools/execlog for a parser.
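For example, the two steps might look like this (the log file name is arbitrary):
bazel clean --expunge
bazel build //... --experimental_execution_log_file=exec_log.pb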
It is super inconvenient that this information is not available another way, as far as I know. I really hope someone swings by and corrects me, but this way you at least know the available sources you can correlate.
I'm new to Bazel, but as far as I understand:
Copy the name of the repo. E.g. io_bazel_rules_docker
Search it through the codebase
Look at how it's being loaded
E.g. if you see
http_archive(
    name = "io_bazel_rules_docker",
    ...
)
http_file(
    name = "io_bazel_rules_docker",
    ...
)
From that you can conclude where it's coming from.
bazel query --output=build //external:repo_name works just fine.
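For instance, for the repo from the example above, something like this prints the repository rule that defines it (e.g. the http_archive declaration including its urls):
bazel query --output=build //external:io_bazel_rules_docker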

How to pass an array from Bazel cli to rules?

Let's say I have a rule like this.
foo(
    name = "helloworld",
    myarray = [
        ":bar",
        "//path/to:qux",
    ],
)
In this case, myarray is static.
However, I want it to be given by cli, like
bazel run //:helloworld --myarray=":bar,//path/to:qux,:baz,:another"
How is this possible?
Thanks
To get exactly what you're asking for, Bazel would need to support LABEL_LIST in Starlark-defined command line flags, which are documented here:
https://docs.bazel.build/versions/2.1.0/skylark/lib/config.html
and here: https://docs.bazel.build/versions/2.1.0/skylark/config.html
Unfortunately that's not implemented at the moment.
If you don't actually need a list of labels (i.e., to create dependencies between targets), then maybe STRING_LIST will work for you.
If you do need a list of labels, and the different possible values are known, then you can use --define, config_setting(), and select():
https://docs.bazel.build/versions/2.1.0/configurable-attributes.html
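Here is a minimal sketch of the STRING_LIST route, assuming a user-defined flag named //:myarray (the flag name, file names and default value are all made up):
# flags.bzl
BuildSettingInfo = provider(fields = ["value"])

def _string_list_flag_impl(ctx):
    # Expose the command-line value through a provider so rules can depend on it.
    return [BuildSettingInfo(value = ctx.build_setting_value)]

string_list_flag = rule(
    implementation = _string_list_flag_impl,
    build_setting = config.string_list(flag = True),
)

# BUILD
load(":flags.bzl", "string_list_flag")

string_list_flag(
    name = "myarray",
    build_setting_default = [],
)
A rule implementation can then read the values through the flag target's provider, and the list is set on the command line before the target, e.g. bazel run --//:myarray=bar,baz //:helloworld.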
The question is what you are really after. Passing a variable or array into bazel build/run isn't really possible, not as such and mostly not without (very likely unwanted) side effects. Aren't you perhaps really just looking to pass arguments directly to what is being run by run, i.e. to the executable itself rather than to bazel?
There are a few ways you could sneak stuff in (in most cases you'd also need to come up with a syntax for passing the data on the CLI and unpack the array in a rule), but many come at a relatively substantial price.
You can define your array in a .bzl file and load it from where the rule uses it. You can then regenerate the .bzl content to rewrite your build/run configuration (which also keeps it obvious and traceable) and load the bits from the BUILD file (affecting only the rules that load and use the variable). E.g. BUILD file:
load(":myarray.bzl", "myarray")
foo(
name = "helloworld",
myarray = myarray,
],
)
And you can then call your build:
$ echo 'myarray=[":bar", "//path/to:qux", ":baz", ":another"]' > myarray.bzl
$ bazel run //:helloworld
Which you can of course put in a single wrapper script. If this really needs to be a bazel array, this one is probably the cleanest way to do that.
--workspace_status_command: you can collect information about your environment and add either or both of the resulting files (depending on whether the inputs are meant to invalidate the rule results or not, you could use the volatile or the stable status file) as a dependency of your rule, then process the incoming file in whatever is executed by the rule (at which point one would wonder why not pass the data to it as command line arguments directly). If you use the stable status file, every other rule depending on it is also invalidated by any change.
You can do a similar thing by using --action_env. From within the executable/tool/script underpinning the rule, you can directly access the defined environment variable. However, this also means the environment of every rule is affected (not just the one you're targeting); and again, why would it parse the information from the environment rather than accept arguments on the command line?
There is also --define, but you would not really get direct access to its value so much as you could select() a choice out of possible options.
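A minimal sketch of that last, --define-based approach, assuming the possible label sets are known in advance (the define name and values below are made up):
config_setting(
    name = "with_extras",
    define_values = {"myarray_variant": "extras"},
)

foo(
    name = "helloworld",
    myarray = select({
        ":with_extras": [":bar", "//path/to:qux", ":baz", ":another"],
        "//conditions:default": [":bar", "//path/to:qux"],
    }),
)
Invoked as bazel run --define=myarray_variant=extras //:helloworld.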

How does TYPO3 Neos decide which Settings.yaml to choose?

I use one Neos installation for multiple domains with different content.
Duplicating the package TYPO3.NeosDemoTypo3Org, removing the node identifier and doing some replacements got me nearly everything I need.
But only the first Settings.yaml found in Packages/Sites/ seems to be parsed. All changes to the Settings.yaml found in the other packages (Test1 and Test2 in the following example) are ignored.
Packages/Sites/TYPO3.NeosDemoTypo3Org/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://TYPO3.NeosDemoTypo3Org/Private/Form/'
Packages/Sites/UDF.Test1/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'
Packages/Sites/UDF.Test2/Configuration/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test2/Private/Form/'
When I delete the first Settings.yaml (Packages/Sites/UDF.Test2/Configuration/Settings.yaml), the next Settings.yaml in alphabetical order (Packages/Sites/UDF.Test1/Configuration/Settings.yaml) is used for all 3 site packages. When I also delete this file, the Settings.yaml from UDF.Test2 is used, and so on.
It would be awesome if somebody could enlighten me. I am new to Flow and Neos and any help is welcome. RTFM, I know, but as described here I have to believe that it should work the way I did it?
alternative way?
Is it possible not to set the savePath in the site package configuration but in the common settings ./Packages/Application/TYPO3.Form/Configuration/Settings.yaml?
I see a {@package} placeholder in
### BASE ELEMENTS ###
# NAMING: base class for everything is RENDERABLE
'TYPO3.Form:Base':
  renderingOptions:
    templatePathPattern: 'resource://{@package}/Private/Form/{@type}.html'
but this doesn't work here
TYPO3:
  Form:
    yamlPersistenceManager:
      #savePath: '%FLOW_PATH_DATA%Forms/'
      savePath: 'resource://{@package}/Private/Form/'
As you see I am not really experienced with this stuff, but I am very motivated.
All Settings.yaml files are used, but the settings are merged in the order of package loading.
The loading order of packages in turn is based on their dependencies.
All three packages probably have the same dependencies, so they are loaded one after the other (one would need to check in which order): the third Settings.yaml is loaded, then the second is loaded and overwrites the third, then the first is loaded and again overwrites the second. Each settings path ends up with a single value, which is why only one savePath survives.
In any case, what you are trying to achieve probably won't work. This is one of the things we have to fix (site package dependent configuration).
A possible workaround is either using a common package with the form configuration and just setting the savePath to this package, or using different subcontexts (like Production/Domain1, Production/Domain2) and setting this setting differently per subcontext; you could then select the subcontext by domain (as the sites are resolved by domain anyway).
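A rough sketch of the subcontext variant (the subcontext name and target package are made up; the file goes into the installation's global Configuration directory and is only loaded when FLOW_CONTEXT is set to Production/Domain1):
# Configuration/Production/Domain1/Settings.yaml
TYPO3:
  Form:
    yamlPersistenceManager:
      savePath: 'resource://UDF.Test1/Private/Form/'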

Resources