I have an action_listener:
action_listener(
    name = "foo_listen",
    mnemonics = [
        "Foo",  # Foo might usually take several minutes
    ],
    extra_actions = [
        "foo_action_pre",   # Start some processing
        "foo_action_post",  # Finish the parts of processing that need the action output
    ],
)
In foo_action_pre, I set
out_templates = [
    "foo_action_pre_data",
],
in order to pass information to foo_action_post.
Now when I add $(location foo_action_pre_data) to the cmd of foo_action_post, Bazel complains that it is not a prerequisite.
No matter whether I add it to tools or to data, it is never detected as a prerequisite. How can I declare the correct dependency?
You have to use $(output foo_action_pre_data) instead of $(location foo_action_pre_data).
See extra_action.cmd.
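As a rough sketch, the resulting BUILD wiring might look like the following. This only mirrors the setup from the question plus the substitution above; run_pre_step and run_post_step are placeholder commands, not real tools:
extra_action(
    name = "foo_action_pre",
    out_templates = ["foo_action_pre_data"],
    # Write the intermediate data to the declared output template.
    cmd = "run_pre_step > $(output foo_action_pre_data)",
)

extra_action(
    name = "foo_action_post",
    # Reference the intermediate file with $(output ...), not $(location ...).
    cmd = "run_post_step $(output foo_action_pre_data)",
)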
I have a Bazel target with an attribute that must be a list.
However, I need to selectively add elements to the list based on the outcome of a select.
glob_tests(
    # some stuff
    exclude = [
        "a.foo",
    ] + if_A([
        "x.foo",
    ]) + if_B([
        "y.foo",
    ]),
)
In the above code snippet, the functions if_A and if_B return select objects.
But when I run this as is, I get an error stating that a sequence object was expected but a select object was encountered instead.
How can I convert the select objects to sequence objects?
(I assume glob_tests is a macro that calls the builtin function glob.) Globs are evaluated when a BUILD file is loaded, which is before any configuration is known. This means glob cannot take select objects as inputs, because the knowledge needed to turn select objects into lists is not available yet.
The way to solve this is to lift the select calls above the globs, like this:
some_test(
    name = "some_test",
    srcs = select({
        "//cond1": glob(["t*", "s*"], exclude = ["thing"]),
        "//cond2": glob(["t*", "s*"], exclude = ["something else"]),
    }),
)
instead of
some_test(
    name = "some_test",
    srcs = glob(
        ["t*", "s*"],
        exclude = select({
            "//cond1": ["thing"],
            "//cond2": ["something else"],
        }),
    ),
)
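For context, concatenating a plain list with a select is fine when the result is handed directly to a configurable rule attribute, because rule attributes resolve selects during analysis; only glob cannot accept them, since it runs at load time. A hypothetical definition of an if_A-style helper (an assumption about how such macros are commonly written; the //config:A label is a placeholder) would look like this:
def if_A(values):
    return select({
        "//config:A": values,        # placeholder condition label
        "//conditions:default": [],  # contribute nothing otherwise
    })
Used as deps = ["//base"] + if_A(["//extra"]), this works because the select reaches the rule attribute unevaluated; passing it into glob's exclude does not.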
https://bazel.googlesource.com/bazel/+show/master/CHANGELOG.md mentions that there are cpu tags. The question for me now is where else these tags are taken into account.
Posting the commit message here as I think it answers the question perfectly:
TLDR: You can increase the CPU reservation for tests by adding a "cpu:<n>" tag (e.g. "cpu:4" for four cores) to their rule in a BUILD file. This can be used if tests would otherwise overwhelm your system when there's too much parallelism.
This lets users specify that their test needs a minimum number of CPU cores
to run and not be flaky. Example for a reservation of 4 CPUs:
sh_test(
    name = "test",
    size = "large",
    srcs = ["test.sh"],
    tags = ["cpu:4"],
)
This could also be used by remote execution strategies to tune their
resource adjustment.
As of 2017-06-21 the following alternative options are possible:
genrule: Set tags the same way as in sh_test.
Example:
genrule(
    name = "foo",
    srcs = [],
    outs = ["foo.h"],
    cmd = "./$(location create_foo.pl) > \"$@\"",
    tools = ["create_foo.pl"],
    tags = ["cpu:4"],
)
Skylark rules: This can work as long as you do NOT use workers.
For Skylark rules, cpu can be set manually for each created action individually. This is accomplished by setting execution_requirements.
Example:
ctx.action(
    execution_requirements = {
        "cpu:4": "",  # This is no mistake: you really encode the value in the dict key and leave the dict value an empty string
    },
)
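In newer Bazel versions ctx.action has been replaced by ctx.actions.run, which accepts the same kind of execution_requirements dict. A hedged sketch under that assumption follows; the rule name, the srcs attribute, and the //tools:my_tool label are placeholders, and only the execution_requirements key mirrors the convention above:
def _cpu_hungry_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.run(
        outputs = [out],
        inputs = ctx.files.srcs,
        executable = ctx.executable._tool,  # placeholder tool
        arguments = [out.path],
        execution_requirements = {
            "cpu:4": "",  # same key/value convention as in the snippet above
        },
    )
    return [DefaultInfo(files = depset([out]))]

cpu_hungry_rule = rule(
    implementation = _cpu_hungry_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_tool": attr.label(
            default = "//tools:my_tool",  # placeholder label
            executable = True,
            cfg = "exec",
        ),
    },
)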
When creating an OL3 build based on https://github.com/openlayers/ol3/blob/master/config/ol.json
I am able to access the ol.Map#renderSync prototype method. However, if I use the following custom "exports": [...] array (to trim library size), #renderSync is obfuscated (or perhaps removed):
[
  "ol.Map",
  "ol.View",
  "ol.control.*",
  "ol.interaction.*",
  "ol.style.*",
  "ol.layer.Tile",
  "ol.layer.Group",
  "ol.source.XYZ",
  "ol.layer.Layer",
  "ol.layer.Vector",
  "ol.format.GeoJSON",
  "ol.source.Vector",
  "ol.Overlay",
  "ol.has.*",
  "ol.events.condition.*",
  "ol.inherits"
]
How can I export a custom, trimmed-down build without losing access to ol.Map#renderSync while (ideally) retaining Closure ADVANCED optimization?
Add any ol.Map method you want to use to the exports section:
"exports": [
"ol.Map",
"ol.Map#updateSize",
"ol.Map#renderSync",
"ol.View",
"ol.View#*",
...
]
Or use an asterisk to export all methods:
"exports": [
"ol.Map",
"ol.Map#*",
...
]
I am trying to run code dynamically in Groovy. I have the value someNode[0] in the variable var1.
I then added double quotes around it, like this:
var2 = "\""+var1+"\""
Then I tried to run this:
request.abc."$var2"=Value
I saw here that something of this sort can be done with properties and methods, but the above code is not working. It gives me this error:
An error occurred [Cannot set property '"someNode[0]"' on null object], see error log for details
Any help is appreciated. Thanks.
Edit
Here's a snippet of my request:
{
    "app":{
        "bundle":"531323947",
        "cat":[
            "IAB1",
            "IAB9",
            "IAB9-30",
            "entertainment",
            "games"
        ],
        "id":"agltb3B1Yi1pbmNyDAsSA0FwcBitsL4UDA",
        .
        .
The field I am trying to manipulate is cat[0], which is IAB1. (I just used abc and someNode[0] in the code that I wrote above, but they are actually app and cat[0].)
Also, I parsed the request with JsonSlurper before running the above code.
Thank you for your help
One way to do this is with Eval:
def request = [
    "app": [
        "bundle": "531323947",
        "cat": [
            "IAB1",
            "IAB9",
            "IAB9-30",
            "entertainment",
            "games"
        ],
    ]
]
assert request.app.cat[0]=='IAB1'
def var = 'request.app.cat[0]'
Eval.me('request', request, "$var = 'new value'")
assert request.app.cat[0]=='new value'
You are accessing/updating values from a map and a list. The request.app node will be a map, the request.app.cat node will be a list. Getting and setting the values in a map can be done in many different ways:
1. Use the put & get methods directly.
2. Use brackets [].
3. Use missing properties as map keys (i.e. the way you are using it).
For what you want to achieve, i.e. to access values from variable keys, it is much easier to use method 1 or 2 instead of method 3 with a variable inside a GString.
Example using brackets:
import groovy.json.JsonBuilder
import groovy.json.JsonSlurper
def request = new JsonSlurper().parseText '''{
    "app":{
        "bundle":"531323947",
        "cat":[
            "IAB1",
            "IAB9",
            "IAB9-30",
            "entertainment",
            "games"
        ],
        "id":"agltb3B1Yi1pbmNyDAsSA0FwcBitsL4UDA"
    }
}'''
def level0 = 'app'
def level1 = 'cat'
def node = request[level0][level1]
assert request instanceof Map
assert node instanceof List
assert node[0] == 'IAB1'
node[0] = 'new value'
assert node[0] == 'new value'
println new JsonBuilder(request).toPrettyString()
Output:
{
    "app": {
        "cat": [
            "new value",
            "IAB9",
            "IAB9-30",
            "entertainment",
            "games"
        ],
        "id": "agltb3B1Yi1pbmNyDAsSA0FwcBitsL4UDA",
        "bundle": "531323947"
    }
}
I am looking at a situation where I'd like to bring some structure to what would be a string in a typical language, and I am wondering how to use Rebol's parts box to do it.
So let's say I've got a line that looks like this in the original language I'm trying to dialect:
something = ("/foo/mumble" "/foo/${BAR}/baz")
I want to use Rebol's primitives, so certainly a file path. Here is a random example of what I thought of off the top of my head:
something: [%/foo/mumble [%/foo/ BAR %/baz]]
If it were code you'd use REJOIN or COMBINE. But this is not designed to be executed, it's more like a configuration file. You're not supposed to be running arbitrary code, just getting a list of files.
I'm not sure how feasible it is to stick with strings and yet still have these typed as FILE!. Not all characters work in a FILE!, for instance:
>> load "%/foo/${BAR}/baz"
== [%/foo/$ "BAR" /baz]
It makes me wonder what my options are in Rebol data that's supposed to represent a configuration file. I can use plain old strings and do substitutions like other things do. Maybe REWORD with an OBJECT block to represent the environment?
What is the 'reword' function in Rebol and how do I use it?
In any case, I want to know how to represent a filename in a declarative context with environment variable substitutions like this.
You should use file!. Your example needs "" after the %:
f: load {%"/foo/${BAR}/baz"}
replace f "${BAR}" "MYVALUE" ;== %/foo/MYVALUE/baz
You could use path! with parens.
The only issue is the root, for which you can use another character to replace the "%" used for files... let's use '! (note this should be a character that is valid in a word).
When calling to-block on a path! type, it returns each part as its own token... useful.
to-block '!/path/(foo)/file.txt
== [! path (foo) file.txt]
Here is a little script which loads three paths, uses parens as constructed parts of the path, and uses tags to escape path-illegal characters (like a space!):
environments: make object! [
    foo: "FU"
    bar: "BR"
]

paths: [
    !/path/(foo)/file.txt
    !/root/<escape weird chars $>/(bar ".txt")
    !/("__" foo)/path/(bar)
]

parse paths [
    some [
        (print "------")
        set data path! here: (insert/only here to-block data to-block data)
        (out-path: copy %"")
        into [
            path-parts: (?? path-parts)
            '!
            some [
                [set data [word! | tag! | number!] (
                    append out-path rejoin ["/" to-string data]
                )]
                |
                into [
                    (append out-path "/")
                    some [
                        set data word! (append out-path rejoin [to-string get in environments data])
                        | set data skip (append out-path rejoin [to-string data])
                    ]
                ]
                | here: set data skip (to-error rejoin ["invalid path token (" type? data ") here: " mold here])
            ]
        ]
        (?? out-path)
    ]
]
Note this works in both Rebol3 and Rebol2.
The output is as follows:
------
path-parts: [! path (foo) file.txt]
out-path: %/path/FU/file.txt
------
path-parts: [! root <escape weird chars $> (bar ".txt")]
out-path: %/root/escape%20weird%20chars%20$/BR.txt
------
path-parts: [! ("__" foo) path (bar)]
out-path: %/__FU/path/BR
------