I want to run unit tests on qemu. I have created a custom rule that invokes qemu with arguments specified in the rule. One of those arguments is the ELF file (rule attribute "target"), which qemu uses as the kernel.
When I invoke my custom rule with the following command, the ELF file ("kernel.elf") does not get compiled:
bazel build //test:custom_rule
This happens even though bazel query 'deps(//test:custom_rule)' lists the target ":kernel.elf" as a dependency.
Furthermore, I have another problem with the custom rule. When I manually build ":kernel.elf" and invoke the custom rule afterwards, qemu tells me that it could not load the kernel file. Manually invoking the qemu command in the shell does work, so I guess the problem does not lie within the "kernel.elf" file itself.
Does anybody have an answer to my problems?
Thanks in advance!
run_tests.bzl
def _impl(ctx):
    qemu = ctx.attr.qemu
    machine = ctx.attr.machine
    cpu = ctx.attr.cpu
    target = ctx.file.target.path
    output = ctx.outputs.out
    # The command may only access files declared in inputs.
    ctx.actions.run_shell(
        arguments = [qemu, machine, cpu, target],
        outputs = [output],
        command = "$1 -M $2 -cpu $3 -nographic -monitor null " +
                  "-serial null -semihosting -kernel $4 > %s" % output.path,
    )
run_tests = rule(
    implementation = _impl,
    attrs = {
        "qemu": attr.string(),
        "machine": attr.string(),
        "cpu": attr.string(),
        "target": attr.label(allow_files = True, single_file = True,
                             mandatory = True),
    },
    outputs = {"out": "run_tests.log"},
)
BUILD
load("//make:run_tests.bzl", "run_tests")
run_tests(
name = "custom_rule",
qemu = "qemu-system-arm",
machine = "xilinx-zynq-a9",
cpu = "cortex-a9",
target = ":kernel.elf"
)
cc_binary(
name = "kernel.elf",
srcs = glob(["*.cc"]),
deps = ["//src:portos",
"#unity//:unity"],
copts = ["-Isrc",
"-Iexternal/unity/src",
"-Iexternal/unity/extras/fixture/src"]
)
The issue is probably that the inputs need to be specified to the action; see
https://docs.bazel.build/versions/master/skylark/lib/actions.html#run_shell.inputs
You'll also probably need to make qemu a label and make that an input to the action as well (and machine too, if that's a file that qemu needs).
E.g. something like:
def _impl(ctx):
    qemu = ctx.file.qemu
    machine = ctx.attr.machine
    cpu = ctx.attr.cpu
    target = ctx.file.target
    output = ctx.outputs.out
    # The command may only access files declared in inputs.
    ctx.actions.run_shell(
        inputs = [qemu, target],
        outputs = [output],
        arguments = [qemu.path, machine, cpu, target.path],
        command = "$1 -M $2 -cpu $3 -nographic -monitor null " +
                  "-serial null -semihosting -kernel $4 > %s" % output.path,
    )
run_tests = rule(
    implementation = _impl,
    attrs = {
        "qemu": attr.label(allow_files = True, single_file = True,
                           mandatory = True),
        "machine": attr.string(),
        "cpu": attr.string(),
        "target": attr.label(allow_files = True, single_file = True,
                             mandatory = True),
    },
    outputs = {"out": "run_tests.log"},
)
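With qemu as a label, the call site then passes a target rather than a bare program name. A minimal sketch, assuming a hypothetical label //tools:qemu-system-arm pointing at the emulator binary:

run_tests(
    name = "custom_rule",
    qemu = "//tools:qemu-system-arm",  # hypothetical label for the qemu binary
    machine = "xilinx-zynq-a9",
    cpu = "cortex-a9",
    target = ":kernel.elf",
)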
Related
Suppose I am writing a custom Bazel rule for foo-compiler.
The user provides a list of source-files to the rule:
foo_library(
    name = "hello",
    srcs = ["A.foo", "B.foo"],
)
To build this without Bazel, the steps would be:
Create a config file config.json that lists the sources:
{
    "srcs": ["./A.foo", "./B.foo"]
}
Place the config alongside the sources:
$ ls .
A.foo
B.foo
config.json
Call foo-compiler in that directory:
$ foo-compiler .
Now, in my Bazel rule implementation I can declare a file like this:
config_file = ctx.actions.declare_file("config.json")

ctx.actions.write(
    output = config_file,
    content = json_for_srcs(ctx.files.srcs),
)
The file is created and it has the right content.
However, Bazel does not place config.json alongside the srcs.
Is there a way to tell Bazel where to place the file?
Or perhaps I need to copy each source-file alongside the config?
You can do this with ctx.actions.symlink, e.g.:
srcs = []

# Declare a symlink for each src file in the same directory as the
# declared config file, then write that symlink.
for f in ctx.files.srcs:
    src = ctx.actions.declare_file(f.basename)
    srcs.append(src)
    ctx.actions.symlink(
        output = src,
        target_file = f,
    )

config_file = ctx.actions.declare_file("config.json")

ctx.actions.write(
    output = config_file,
    content = json_for_srcs(ctx.files.srcs),
)

# Run the compiler.
ctx.actions.run(
    inputs = srcs + [config_file],
    outputs = [],  # TODO: Up to you.
    executable = ctx.executable._compiler,  # TODO: Update this to match your rule.
    arguments = ["."],
    # ...
)
Note that when you return your provider, you should only return the result of your compilation, not the srcs. Otherwise you'll likely run into problems with duplicate outputs.
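For example, the returned provider might look like this (a sketch, where compiled_out stands for whatever output file your compiler action declares):

# Propagate only the compiler's output, not the symlinked srcs,
# to avoid duplicate-output problems downstream.
return [DefaultInfo(files = depset([compiled_out]))]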
I'm trying to package a bundle for uploading to Google Cloud. I have the output of pkg_web from an Angular build which, when passed into the custom rule I'm writing, is a File object that is a directory of files. The custom rule takes the app.yaml, etc., and the bundle, and uploads them.
However, the bundle stays a directory, and I need the files of that directory expanded next to the other files at the root where the command runs.
For example:
- bundle/index.html <-- bundle directory
- bundle/main.js
- app.yaml
and I need:
- index.html
- main.js
- app.yaml
My rule:
deploy(
    name = "deploy",
    srcs = [":bundle"],  # <-- pkg_web rule
    yaml = ":app.yaml",
)
Rule implementation:
def _deploy_pkg(ctx):
    inputs = []
    inputs.append(ctx.file.yaml)
    inputs.extend(ctx.files.srcs)

    script_template = """#!/bin/bash
gcloud app deploy {yaml_path}
"""
    script_content = script_template.format(yaml_path = ctx.file.yaml.short_path)

    script = ctx.actions.declare_file("%s-deploy" % ctx.label.name)
    ctx.actions.write(script, script_content, is_executable = True)

    runfiles = ctx.runfiles(files = inputs, transitive_files = depset(ctx.files.srcs))
    return [DefaultInfo(executable = script, runfiles = runfiles)]
Thank you for your ideas!
It seems a bit excessive, but I ended up using a custom shell command to accomplish this:
def _deploy_pkg(ctx):
    inputs = []
    out = ctx.actions.declare_directory("out")
    yaml_out = ctx.actions.declare_file(ctx.file.yaml.basename)
    inputs.append(out)

    ctx.actions.run_shell(
        outputs = [yaml_out],
        inputs = [ctx.file.yaml],
        arguments = [ctx.file.yaml.path, yaml_out.path],
        progress_message = "Copying yaml to output directory.",
        command = "cp $1 $2",
    )

    for f in ctx.files.srcs:
        if f.is_directory:
            ctx.actions.run_shell(
                outputs = [out],
                inputs = [f],
                arguments = [f.path, out.path],
                progress_message = "Copying %s to output directory." % f.basename,
                command = "cp -a -R $1/* $2",
            )
        else:
            out_file = ctx.actions.declare_file(f.basename)
            inputs.append(out_file)
            ctx.actions.run_shell(
                outputs = [out_file],
                inputs = [f],
                arguments = [f.path, out_file.path],
                progress_message = "Copying %s to output directory." % f.basename,
                # This is what we're all about here. Just a simple 'cp' command.
                # Copy the input to CWD/f.basename, where CWD is the package where
                # the rule is invoked. (To be clear, the files aren't copied right
                # to where your BUILD file sits in source control. They are copied
                # to the 'shadow tree' parallel location under `bazel info bazel-bin`.)
                command = "cp -a $1 $2",
            )
    # ...
New to Bazel, so please bear with me :) I have a genrule which basically downloads and unpacks a package:
genrule(
    name = "extract_pkg",
    srcs = ["@deb_pkg//file:pkg.deb"],
    outs = ["pkg_dir"],
    cmd = "dpkg-deb --extract $< $(@D)/pkg_dir",
)
Naturally, pkg_dir here is a directory. There is another rule which uses this rule as input to create an executable, but the main point is that I now need to add a rule (or something) which will allow me to use some headers from that package. That rule is used as an input to a cc_library, which is then used in other parts of the repository to get access to the headers. I tried it like this:
genrule(
    name = "pkg_headers",
    srcs = [":extract_pkg"],
    outs = [
        "pkg_dir/usr/include/pkg/h1.h",
        "pkg_dir/usr/include/pkg/h2.h",
    ],
)
But it seems Bazel doesn't like the fact that both rules use the same directory as output, even though the second one doesn't do anything (?):
output file 'pkg_dir' of rule 'extract_pkg' conflicts with output file 'pkg_dir/usr/include/pkg/h1.h' of rule 'pkg_headers'
It works fine if I use a different "root" directory for both rules, but I think there must be some better way to do this.
EDIT
I tried to use declare_directory as follows (compiled from different sources):
unpack_deb.bzl:
def _unpack_deb_impl(ctx):
    input_deb_file = ctx.file.deb
    output_dir = ctx.actions.declare_directory(ctx.attr.name + ".cc")
    print(input_deb_file.path)
    print(output_dir.path)
    ctx.actions.run_shell(
        inputs = [input_deb_file],
        outputs = [output_dir],
        arguments = [input_deb_file.path, output_dir.path],
        progress_message = "Unpacking %s to %s" % (input_deb_file.path, output_dir.path),
        command = "dpkg-deb --extract \"$1\" \"$2\"",
    )
    return [DefaultInfo(files = depset([output_dir]))]

unpack_deb = rule(
    implementation = _unpack_deb_impl,
    attrs = {
        "deb": attr.label(
            mandatory = True,
            allow_single_file = True,
            doc = "The .deb file to be unpacked",
        ),
    },
    doc = """
Unpacks a .deb file and returns a directory.
""",
)
BUILD.bazel:
load(":unpack_deb.bzl", "unpack_deb")
unpack_deb(
name = "pkg_dir",
deb = "#deb_pkg//file:pkg.deb"
)
cc_library(
name = "headers",
linkstatic = True,
srcs = [ "pkg_dir" ],
hdrs = ["pkg_dir.cc/usr/include/pkg/h1.h",
"pkg_dir.cc/usr/include/pkg/h2.h"],
strip_include_prefix = "pkg_dir.cc/usr/include",
)
The trick of appending .cc so the input can be accepted by cc_library was stolen from this answer. However, the command fails with
ERROR: missing input file 'blah/blah/pkg_dir.cc/usr/include/pkg/h1.h'
From the library.
When I run with debug, I can see the command being "executed" (the strange thing is that I don't always see this printout):
SUBCOMMAND: # //blah/pkg:pkg_dir [action 'Unpacking tmp/deb_pkg/file/pkg.deb to blah/pkg/pkg_dir.cc', configuration: xxxx]
(cd /home/user/.../execroot/src && \
exec env - \
/bin/bash -c 'dpkg-deb --extract "$1" "$2"' '' tmp/deb_pkg/file/pkg.deb bazel-out/.../pkg/pkg_dir.cc)
After execution, bazel-out/.../pkg/pkg_dir.cc exists but is empty. If I run the command manually, it extracts the files correctly. What might be the reason? Also, is it correct that there's an empty string directly after the bash command string?
Bazel's genrule doesn't work very well with directory outputs. See https://docs.bazel.build/versions/master/be/general.html#general-advice
Bazel mostly works with individual files, although there's some support for working with directories in Starlark rules with https://docs.bazel.build/versions/master/skylark/lib/actions.html#declare_directory
Your best bet is probably to extract all the files you're interested in in the genrule, then create filegroups for the different groups of files:
genrule(
    name = "extract_pkg",
    srcs = ["@deb_pkg//file:pkg.deb"],
    outs = [
        "pkg_dir/usr/include/pkg/h1.h",
        "pkg_dir/usr/include/pkg/h2.h",
        "pkg_dir/other_files/file1",
        "pkg_dir/other_files/file2",
    ],
    cmd = "dpkg-deb --extract $< $(@D)/pkg_dir",
)

filegroup(
    name = "pkg_headers",
    srcs = [
        ":pkg_dir/usr/include/pkg/h1.h",
        ":pkg_dir/usr/include/pkg/h2.h",
    ],
)

filegroup(
    name = "pkg_other_files",
    srcs = [
        ":pkg_dir/other_files/file1",
        ":pkg_dir/other_files/file2",
    ],
)
If you've seen glob, you might be tempted to use glob(["pkg_dir/usr/include/pkg/*.h"]) or similar for the srcs of the filegroup, but note that glob works only with "source files", which means files already on disk, not with the outputs of other rules.
There are rules for creating debs, but I'm not aware of rules for importing them. It's possible to write such rules using Starlark:
https://docs.bazel.build/versions/master/skylark/repository_rules.html
With repository rules, it's possible to avoid having to explicitly write out all the files you want to extract, among other things. Might be more work than you want to do though.
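As a rough, untested sketch (assuming dpkg-deb is available on the host machine, and with made-up rule and target names throughout), such a repository rule could look like:

def _deb_repository_impl(repository_ctx):
    deb = repository_ctx.path(repository_ctx.attr.deb)
    # Extract the package into the external repository's root.
    result = repository_ctx.execute(["dpkg-deb", "--extract", str(deb), "."])
    if result.return_code != 0:
        fail("dpkg-deb failed: " + result.stderr)
    # glob() works here because, by loading time, the extracted files
    # really exist on disk in the external repository, unlike the
    # outputs of ordinary build rules.
    repository_ctx.file("BUILD.bazel", """
cc_library(
    name = "headers",
    hdrs = glob(["usr/include/**/*.h"]),
    strip_include_prefix = "usr/include",
    visibility = ["//visibility:public"],
)
""")

deb_repository = repository_rule(
    implementation = _deb_repository_impl,
    attrs = {
        "deb": attr.label(mandatory = True, allow_single_file = True),
    },
)

You'd then instantiate it in the WORKSPACE and depend on, e.g., @pkg//:headers from any cc_library.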
I'm trying to run qemu on the output of a cc_binary rule. For that I have created a custom rule, which is pretty similar to this example, but instead of running the cat command on the txt file, I want to invoke qemu on the output ELF file (":test_portos.elf") of the cc_binary rule. My files are the following:
run_tests.bzl
def _impl(ctx):
    # The value of ctx.file.target.path is:
    # 'bazel-out/cortex-a9-fastbuild/bin/test/test_portos.elf'
    target = ctx.file.target.path
    command = ("qemu-system-arm -M xilinx-zynq-a9 -cpu cortex-a9 -nographic "
               "-monitor null -serial null -semihosting "
               "-kernel %s" % target)
    ctx.actions.write(
        output = ctx.outputs.executable,
        content = command,
        is_executable = True,
    )
    return [DefaultInfo(
        runfiles = ctx.runfiles(files = [ctx.file.target]),
    )]
execute = rule(
    implementation = _impl,
    executable = True,
    attrs = {
        "command": attr.string(),
        "target": attr.label(cfg = "data", allow_files = True,
                             single_file = True, mandatory = True),
    },
)
BUILD
load("//make:run_tests.bzl", "execute")
execute(
name = "portos",
target = ":test_portos.elf"
)
cc_binary(
name = "test_portos.elf",
srcs = glob(["*.cc"]),
deps = ["//src:portos",
"#unity//:unity"],
copts = ["-Isrc",
"-Iexternal/unity/src",
"-Iexternal/unity/extras/fixture/src"]
)
The problem is that the command (of the custom rule) uses the build-tree location of ":test_portos.elf" and not the location of the runfile. I have also tried, as shown in the example, to use $(location :test_portos.elf) together with ctx.expand_location, but the result was the same.
How can I get the location of the "test_portos.elf" runfile and insert it into the command of my custom rule?
It seems that the runfiles are laid out according to the short_path of the File, so this was all I needed to change in my run_tests.bzl file:
target = ctx.file.target.short_path
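In context, the start of the implementation then becomes (a sketch of the same rule as above):

def _impl(ctx):
    # short_path is the runfiles-relative path (e.g.
    # 'test/test_portos.elf'), which is the path the generated
    # script sees when Bazel runs it from the runfiles tree.
    target = ctx.file.target.short_path
    ...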
I'm looking for a good recipe to run "checks" or "verify" steps in Bazel, like go vet, gofmt, pylint, cppcheck. These steps don't create any output file. The only thing that matters is the return code (like a test).
Right now I'm using the following recipe:
sh_test(
    name = "verify-pylint",
    srcs = ["verify-pylint.sh"],
    data = ["//:all-srcs"],
)
And verify-pylint.sh looks like this:
find . -name '*.py' | xargs pylint
This has two problems:
- The verify logic is split between the shell script and the BUILD file. Ideally I would like to have both in the same place (in the BUILD file).
- Anytime one of the source files (in //:all-srcs) changes, bazel test verify-pylint re-runs pylint on every single file (and that can be expensive/slow).
What is the idiomatic way in bazel to run these steps?
There is more than one solution.
The cleanest way is to do the verification at build time: you create a genrule for each file (or batch of files) you want to verify. If verification succeeds, the genrule outputs something; if it fails, the rule outputs nothing, which automatically fails the build as well.
Since the success of verification depends on the files' contents, and the same input should yield the same output, the genrules should produce an output file that depends on the contents of the input(s). The most convenient approach is to write the digest of the file(s) to the output if verification succeeds, and no output if it fails.
To make the verifier reusable, you could create a Skylark macro and use it in all your packages.
To put this all together, you'd write something like the following.
Contents of //tools:py_verify_test.bzl:
def py_verify_test(name, srcs, visibility = None):
    rules = {"%s-file%d" % (name, hash(s)): s for s in srcs}
    for rulename, src in rules.items():
        native.genrule(
            name = rulename,
            srcs = [src],
            outs = ["%s.md5" % rulename],
            cmd = "$(location //tools:py_verifier) $< && md5sum $< > $@",
            tools = ["//tools:py_verifier"],
            visibility = ["//visibility:private"],
        )
    native.sh_test(
        name = name,
        srcs = ["//tools:build_test.sh"],
        data = rules.keys(),
        visibility = visibility,
    )
Contents of //tools:build_test.sh:
#!/bin/true
# If the test rule's dependencies could be built,
# then all files were successfully verified at
# build time, so this test can merely return true.
Contents of //tools:BUILD:
# I just use sh_binary as an example; this could
# be a more complicated rule, of course.
sh_binary(
    name = "py_verifier",
    srcs = ["py_verifier.sh"],
    visibility = ["//visibility:public"],
)
Contents of any package that wants to verify files:
load("//tools:py_verify_test.bzl", "py_verify_test")
py_verify_test(
name = "verify",
srcs = glob(["**/*.py"]),
)
A simple solution.
In your BUILD file:
load(":gofmt.bzl", "gofmt_test")
gofmt_test(
name = "format_test",
srcs = glob(["*.go"]),
)
In gofmt.bzl:
def gofmt_test(name, srcs):
    cmd = """
export TMPDIR=.
out=$$(gofmt -d $(SRCS))
if [ -n "$$out" ]; then
    echo "gofmt failed:"
    echo "$$out"
    exit 1
fi
touch $@
"""
    native.genrule(
        name = name,
        cmd = cmd,
        srcs = srcs,
        outs = [name + ".out"],
        tools = ["gofmt.sh"],
    )
Some remarks:
- If your wrapper script grows, you should put it in a separate .sh file.
- In the genrule command, we need $$ instead of $ due to escaping (see the documentation).
- gofmt_test is actually not a test and will run with bazel build :all. If you really need a test, see Laszlo's example and call sh_test.
- I call touch to create a file because a genrule requires an output in order to succeed.
- export TMPDIR=. is needed because by default the sandbox prevents writing to other directories.
- To cache results for each file (and avoid rechecking a file that hasn't changed), you'll need to create multiple actions. See the for loop in Laszlo's answer.
- To simplify the code, we could provide a generic rule, as sketched below. Maybe this is something we should put in a standard library.
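For instance, a generic macro along these lines might work (a sketch only; lint_test, its tool attribute, and the reuse of //tools:build_test.sh from Laszlo's answer are assumptions, not an existing API):

def lint_test(name, srcs, tool):
    # Hypothetical generic macro: runs `tool` on each src at build
    # time, producing one marker file per input so that unchanged
    # files are not rechecked.
    outs = []
    for s in srcs:
        out = "%s.%s.lint" % (name, s.replace("/", "_"))
        outs.append(out)
        native.genrule(
            name = out.replace(".", "_"),
            srcs = [s],
            outs = [out],
            cmd = "$(location %s) $< && touch $@" % tool,
            tools = [tool],
            visibility = ["//visibility:private"],
        )
    native.sh_test(
        name = name,
        srcs = ["//tools:build_test.sh"],
        data = outs,
    )

A package would then call, e.g., lint_test(name = "pylint", srcs = glob(["**/*.py"]), tool = "//tools:py_verifier").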