Passing an argument list to a Gradle Exec task?

I want to do the following in my Exec task:
commandLine = [ 'my_executable_path\\' + 'executable.exe',
                argument1,
                argument2,
                argument3 ]
Is it possible to do something like this instead?
// ...dynamic creation of a List/Array/whatever
commandLine = [ 'my_executable_path\\' + 'executable.exe',
                myArgumentsList ]

I'm baffled why there are so many questions like this. Is the DSL reference too hard to find or make sense of? What can we improve to allow you to answer such questions yourself?
Anyway, the cleanest solution is:
task exec(type: Exec) {
    executable = "/path/to/executable"
    args = myArgumentsList
}
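If you prefer the commandLine form from the question, building the list dynamically and concatenating it should also work; a minimal sketch (the task name, path, and arguments are placeholders):
def myArgumentsList = ['arg1', 'arg2', 'arg3']

task runTool(type: Exec) {
    // equivalent to setting executable + args, assuming plain string arguments
    commandLine = ['/path/to/executable'] + myArgumentsList
}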

Related

How do I create a configuration point in a BUILD file with Bazel?

I would like to pass a variable to some of my build rules, e.g. this Webpack step:
load("#npm//webpack-cli:index.bzl", webpack = "webpack_cli")
webpack(
name = "bundle",
args = [
"--config",
"$(execpath webpack.config.js)",
"--output-path",
"$(#D)",
],
data = [
"index.html",
"webpack.config.js",
"#npm//:node_modules",
] + glob([
"src/**/*.js",
]),
env = {
"FOO_BAR": "abc",
},
output_dir = True,
)
Some builds will be done with FOO_BAR=abc and others with a different value. I don't know the full set of possible values!
I don't think that --action_env is applicable here since it is not a genrule.
I would also like to be able to set a default value in my BUILD script.
How can I accomplish this with Bazel?
If you knew the set of values, the usual tools would be config_setting() and select(), but since you don't know the possible values, that won't work here.
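For reference, that pattern would look roughly like the sketch below (the names are made up, it only helps when every possible value gets its own config_setting, and whether a given attribute such as env accepts select() depends on the rule):
# Hypothetical sketch -- only viable when the set of values is known up front.
config_setting(
    name = "foo_bar_abc",
    # matches builds invoked with --define=FOO_BAR=abc
    define_values = {"FOO_BAR": "abc"},
)

webpack(
    name = "bundle",
    env = select({
        ":foo_bar_abc": {"FOO_BAR": "abc"},
        "//conditions:default": {"FOO_BAR": "some-default"},
    }),
    # ... args, data, output_dir as in the question ...
)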
It looks like webpack is actually an npm_package_bin or nodejs_binary underneath:
# Generated helper macro to call webpack-cli
def webpack_cli(**kwargs):
    output_dir = kwargs.pop("output_dir", False)
    if "outs" in kwargs or output_dir:
        npm_package_bin(tool = "@npm//webpack-cli/bin:webpack-cli", output_dir = output_dir, **kwargs)
    else:
        nodejs_binary(
            entry_point = { "@npm//:node_modules/webpack-cli": "bin/cli.js" },
            data = ["@npm//webpack-cli:webpack-cli"] + kwargs.pop("data", []),
            **kwargs
        )
and in both cases env will do make-variable substitution:
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#nodejs_binary-env
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#npm_package_bin-env
So if you at least know what the variable names will be, you can do something like
env = {
    "FOO_BAR": "$(FOO_BAR)",
},
and use --define=FOO_BAR=123 from the Bazel command line.
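The question also mentioned wanting a default value. That can't live in the BUILD file itself with this approach, but one hedged option (a general Bazel pattern, not something specific to rules_nodejs) is to put the default --define in a .bazelrc and override it per invocation:
# .bazelrc -- hypothetical default value applied to every build
build --define=FOO_BAR=abc

# override it for a particular build; the later --define for the same key should win:
# bazel build //:bundle --define=FOO_BAR=xyz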
nodejs_binary has additional attributes related to environment variables:
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#nodejs_binary-configuration_env_vars
https://bazelbuild.github.io/rules_nodejs/Built-ins.html#nodejs_binary-default_env_vars
If the number of environment variables to set, or the name of the environment variable itself, is not known, then you might need to open a feature request with rules_nodejs.

Conditionally create a Bazel rule based on --config

I'm working on a problem in which I only want to create a particular rule if a certain Bazel config has been specified (via '--config'). We have been using Bazel since 0.11 and have a bunch of build infrastructure that works around former limitations in Bazel. I am incrementally porting us up to newer versions. One of the features that was missing was compiler transitions, and so we rolled our own using configs and some external scripts.
My first attempt at solving my problem looks like this:
load("#rules_cc//cc:defs.bzl", "cc_library")
# use this with a select to pick targets to include/exclude based on config
# see __build_if_role for an example
def noop_impl(ctx):
pass
noop = rule(
implementation = noop_impl,
attrs = {
"deps": attr.label_list(),
},
)
def __sanitize(config):
if len(config) > 2 and config[:2] == "//":
config = config[2:]
return config.replace(":", "_").replace("/", "_")
def build_if_config(**kwargs):
config = kwargs['config']
kwargs.pop('config')
name = kwargs['name'] + '_' + __sanitize(config)
binary_target_name = kwargs['name']
kwargs['name'] = binary_target_name
cc_library(**kwargs)
noop(
name = name,
deps = select({
config: [ binary_target_name ],
"//conditions:default": [],
})
)
This almost gets me there, but the problem is that if I want to build a library as an output, then it becomes an intermediate dependency, and therefore gets deleted or never built.
For example, if I do this:
build_if_config(
    name = "some_lib",
    srcs = [ "foo.c" ],
    config = "//:my_config",
)
and then I run
bazel build --config my_config //:some_lib
Then libsome_lib.a does not make it to bazel-out, although if I define it using cc_library, then it does.
Is there a way that I can just create the appropriate rule directly in the macro instead of creating a noop rule and using a select? Or another mechanism?
Thanks in advance for your help!
As I noted in my comment, I was misunderstanding how Bazel figures out its dependencies. The "create a file" section of the Rules Tutorial explains some of the details, and I followed along with it for part of my solution.
Basically, the problem was not that the built files were not sticking around; it was that they were never getting built. Bazel did not know to look in the deps attribute and build those things: it seems I had to create an action that uses the deps, and then declare its output by returning a (list containing a) DefaultInfo.
Below is my new noop_impl function
def noop_impl(ctx):
    if len(ctx.attr.deps) == 0:
        return None

    # ctx.attr has the attributes of this rule
    dep = ctx.attr.deps[0]

    # DefaultInfo is apparently some sort of globally available
    # class that can be used to index Target objects
    infile = dep[DefaultInfo].files.to_list()[0]

    outfile = ctx.actions.declare_file('lib' + ctx.label.name + '.a')
    ctx.actions.run_shell(
        inputs = [infile],
        outputs = [outfile],
        command = "cp %s %s" % (infile.path, outfile.path),
    )

    # we can also instantiate a DefaultInfo to indicate what output
    # we provide
    return [DefaultInfo(files = depset([outfile]))]
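With that implementation, building the wrapper target that the macro generates (its name comes from __sanitize, so for the example above it should work out to roughly some_lib__my_config) now runs the copy action and produces the declared lib<name>.a output:
bazel build --config my_config //:some_lib__my_config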

How can I build custom rules using the output of workspace_status_command?

The Bazel build flag --workspace_status_command supports calling a script to retrieve e.g. repository metadata. This is also known as build stamping and is available in rules like java_binary.
I'd like to create a custom rule using this metadata.
I want to use this for a common support function. It should receive the git version and some other attributes and create a version.go output file usable as a dependency.
So I started a journey looking at rules in various bazel repositories.
Rules like rules_docker support stamping with stamp in container_image and let you reference the status output in attributes.
rules_go supports it in the x_defs attribute of go_binary.
This would be ideal for my purpose and I dug in...
It looks like I can get what I want with ctx.actions.expand_template, using the entries in ctx.info_file or ctx.version_file as a dictionary for substitutions. But I didn't figure out how to get a dictionary out of those files. And those two files seem to be "unofficial"; they are not part of the ctx documentation.
Building on what I found out already: How do I get a dict based on the status command output?
If that's not possible, what is the shortest/simplest way to access workspace_status_command output from custom rules?
I've been exactly where you are, and I ended up following the path you've started exploring. I generate a JSON description (which also includes information collected from git) to package with the result, and I ended up doing something like this:
def _build_mft_impl(ctx):
    args = ctx.actions.args()
    args.add('-f')
    args.add(ctx.info_file)
    args.add('-i')
    args.add(ctx.files.src)
    args.add('-o')
    args.add(ctx.outputs.out)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        inputs = ctx.files.src + [ctx.info_file],
        arguments = [args],
        progress_message = "Generating manifest: " + ctx.label.name,
        executable = ctx.executable._expand_template,
    )

def _get_mft_outputs(src):
    return {"out": src.name[:-len(".tmpl")]}

build_manifest = rule(
    implementation = _build_mft_impl,
    attrs = {
        "src": attr.label(mandatory=True,
                          allow_single_file=[".json.tmpl", ".json_tmpl"]),
        "_expand_template": attr.label(default=Label("//:expand_template"),
                                       executable=True,
                                       cfg="host"),
    },
    outputs = _get_mft_outputs,
)
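For illustration, a BUILD usage of this rule might look roughly like the following (the .bzl location and file names are placeholders; per _get_mft_outputs, the declared output would then be manifest.json):
load("//:build_manifest.bzl", "build_manifest")  # hypothetical location of the rule

build_manifest(
    name = "manifest",
    src = "manifest.json.tmpl",
)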
//:expand_template is a label in my case pointing to a py_binary performing the transformation itself. I'd be happy to learn about a better (more native, fewer hops) way of doing this, but (for now) I went with: it works. A few comments on the approach and your concerns:
AFAIK you cannot read the file in (and perform operations on its contents) in Skylark itself...
...speaking of which, it's probably not a bad thing to keep the transformation (tool) and the build description (bazel) separate anyways.
It could be debated what constitutes the official documentation, but while ctx.info_file may not appear in the reference manual, it is documented in the source tree. :) That is the case for other areas as well (and I hope it is not because those interfaces are considered not committed to yet).
For the sake of completeness, in src/main/java/com/google/devtools/build/lib/skylarkbuildapi/SkylarkRuleContextApi.java there is:
@SkylarkCallable(
    name = "info_file",
    structField = true,
    documented = false,
    doc =
        "Returns the file that is used to hold the non-volatile workspace status for the "
            + "current build request."
)
public FileApi getStableWorkspaceStatus() throws InterruptedException, EvalException;
EDIT: a few extra details, as asked for in the comment.
In my workspace_status.sh I would have for instance the following line:
echo STABLE_GIT_REF $(git log -1 --pretty=format:%H)
In my .json.tmpl file I would then have:
"ref": "${STABLE_GIT_REF}",
I've opted for shell-like notation for the text to be replaced, since it's intuitive for many users as well as easy to match.
As for the replacement, the relevant portion of the actual code (with CLI handling left out) would be:
import re

def get_map(val_file):
    """
    Return dictionary of key/value pairs from ``val_file``.
    """
    value_map = {}
    for line in val_file:
        (key, value) = line.split(' ', 1)
        value_map.update(((key, value.rstrip('\n')),))
    return value_map

def expand_template(val_file, in_file, out_file):
    """
    Read each line from ``in_file`` and write it to ``out_file`` replacing all
    ${KEY} references with values from ``val_file``.
    """
    def _substitute_variable(mobj):
        return value_map[mobj.group('var')]

    re_pat = re.compile(r'\${(?P<var>[^} ]+)}')
    value_map = get_map(val_file)
    for line in in_file:
        out_file.write(re_pat.subn(_substitute_variable, line)[0])
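For completeness, a hypothetical main() for the same expand_template.py file (the author's real CLI handling was deliberately left out above) could wire get_map/expand_template to the -f/-i/-o flags that _build_mft_impl passes:
import argparse

def main():
    # Flag names mirror what the build_manifest rule passes via ctx.actions.args().
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', dest='values', required=True)   # ctx.info_file
    parser.add_argument('-i', dest='infile', required=True)   # the *.json.tmpl source
    parser.add_argument('-o', dest='outfile', required=True)  # the declared output
    opts = parser.parse_args()
    with open(opts.values) as val_file, open(opts.infile) as in_file, \
            open(opts.outfile, 'w') as out_file:
        expand_template(val_file, in_file, out_file)

if __name__ == '__main__':
    main()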
EDIT2: This is how I expose the Python script to the rest of Bazel:
py_binary(
    name = "expand_template",
    main = "expand_template.py",
    srcs = ["expand_template.py"],
    visibility = ["//visibility:public"],
)
Building on Ondrej's answer, I now use something like this (adapted in the SO editor, so it might contain small errors):
tools/bazel.rc:
build --workspace_status_command=tools/workspace_status.sh
tools/workspace_status.sh:
echo STABLE_GIT_REV $(git rev-parse HEAD)
version.bzl:
_VERSION_TEMPLATE_SH = """
set -e -u -o pipefail
while read line; do
    export "${line% *}"="${line#* }"
done <"$INFILE" \
&& cat <<EOF >"$OUTFILE"
{ "ref": "${STABLE_GIT_REV}"
, "service": "${SERVICE_NAME}"
}
EOF
"""
def _commit_info_impl(ctx):
    ctx.actions.run_shell(
        outputs = [ctx.outputs.outfile],
        inputs = [ctx.info_file],
        progress_message = "Generating version file: " + ctx.label.name,
        command = _VERSION_TEMPLATE_SH,
        env = {
            'INFILE': ctx.info_file.path,
            'OUTFILE': ctx.outputs.outfile.path,
            'SERVICE_NAME': ctx.attr.service,
        },
    )
commit_info = rule(
    implementation = _commit_info_impl,
    attrs = {
        'service': attr.string(
            mandatory = True,
            doc = 'name of versioned service',
        ),
    },
    outputs = {
        'outfile': 'manifest.json',
    },
)
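A BUILD usage might then look roughly like this (the load path and the service name are placeholders):
load("//tools:version.bzl", "commit_info")  # hypothetical location of version.bzl

commit_info(
    name = "commit_info",
    service = "my-service",
)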

Incorrect scope for eval

I have this code
eval(script);
where script is:
var script = "console.log('xwin:', xwin);";
However, it keeps telling me that xwin is undefined. Does anyone know why eval is not picking up the function's scope, and how to make it do so?
The problem is not with scopes or eval but that elsewhere in your code you call your "blah" function without passing a value for xwin.
See line: 109

Better parameter error handling; and better parameter parsing/handling?

I have a script like this:
param(
    [Alias('a')]
    [string]$aval,
    [Alias('b')]
    [switch]$bval,
    [Alias('c')]
    [string]$cval
)

if($aval.length -gt 1)
{
    Do-Something
}
elseif($bval)
{
    Do-Something-Else
}
elseif($cval.length -gt 1)
{
    Do-Another-Thing
}
else
{
    Do-This
}
If someone calls my script like so, an ugly error is displayed saying it is missing an argument for parameter 'aval/bval/cval':
PS C:\> .\MyScript.ps1 -a
C:\MyScript.ps1 : Missing an argument for parameter 'aval'. Specify a
parameter of type 'System.String' and try again.
At line:1 char:18
+ .\MyScript.ps1 -n <<<<
+ CategoryInfo : InvalidArgument: (:) [MyScript.ps1], ParameterBindingException
+ FullyQualifiedErrorId : MissingArgument,MyScript.ps1
Is there any way to make a cleaner, possibly one-line, error appear instead? Also, is there a better way to handle parameters than a list of elseif statements (my actual script has ~10 parameters)?
The script is sometimes called with an argument for a parameter as well:
EX:
PS C:\> .\MyScript.ps1 -b ServerName
Thanks for any help!
There are a few things that you can look at here. First, if the parameter will never have an associated value and you just want to know if the script was called with the parameter or not, then use a [switch] parameter instead of a string.
Here is a very simple example of using a switch parameter:
param(
    [switch]$a
)

if($a){
    'Switch was present'
}else{
    'No switch present'
}
Save that as a script and run it with and without the -a parameter.
If you will sometimes have the parameter present with some value being passed in but other times without the value, then give the parameter a default value when you define it:
[Alias('a')]
[string]$aval = '',
Then in your logic if something was passed in, the length of the string will be gt 1.
As for the if-then structure that you have, there are a plethora of options for handling this sort of logic. With the little bit of information that you have shared, I suspect that using a switch structure will be the best plan:
Get-Help about_Switch
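For example, a rough sketch of that approach for the parameters in the question (the actions are placeholders) might look like:
switch ($true)
{
    # each condition is a script block; the first truthy one runs, then break exits the switch
    { $aval.Length -gt 1 } { Do-Something; break }
    { $bval }              { Do-Something-Else; break }
    { $cval.Length -gt 1 } { Do-Another-Thing; break }
    default                { Do-This }
}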
