Running a Bazel executable target as a Bazel test?

I have a Bazel executable target (of type fsharp_binary, but I don't think it should matter) that I can run using bazel run.
bazel run //my_app.exe
I would like to use this executable as a test, so that when I call bazel test it gets built and executed, and a non-zero exit code is considered a test failure.
bazel test //...
What I am looking for is something like this:
test_of_executable(
    name = "my_test",
    executable = "//my_app.exe",
    success_codes = [0],
)
Then:
bazel test //:my_test
How can I achieve this in Bazel?

Just wrap your app as a sh_test. See for example https://github.com/bazelbuild/bazel/issues/1969.
What I use in my codebase is:
BUILD.bazel:
sh_test(
    name = "test",
    srcs = ["test.sh"],
    data = [
        "//:some_binary",
    ],
)
test.sh
some_project/some_subdir/some_binary
See here for a real example.
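If you want the exact test_of_executable shape from the question, a small macro can generate the wrapper script for you. A minimal sketch, assuming a Linux/macOS host and only exit code 0 as success (the macro and file names are illustrative, not a real Bazel API):
# test_of_executable.bzl (illustrative name)
def test_of_executable(name, executable, **kwargs):
    # Generate a one-line wrapper that execs the binary from its runfiles
    # location; sh_test then treats any non-zero exit code as a failure.
    script = name + "_wrapper.sh"
    native.genrule(
        name = name + "_gen",
        outs = [script],
        cmd = "echo 'exec $(rootpath %s) \"$$@\"' > $@" % executable,
        tools = [executable],
        executable = True,
    )
    native.sh_test(
        name = name,
        srcs = [script],
        data = [executable],
        **kwargs
    )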

Related

Deploy py_binary without container

I am attempting to create a py_binary in Bazel and then copy it to a remote machine. For now I am using a dummy app to make sure the pipeline works. I cannot use a Docker image, as I need lower-level control of the hardware than Docker can provide.
My goal is to do the following:
build the software with bazel
create a tar package that contains the py_binary
copy and run that binary on another computer not connected to bazel
To do this I made a simple binary (which, for context, just makes some RPC calls to a server I am working on as a side project), and the build file is as follows:
py_binary(
    name = "rosecore_cli",
    srcs = glob([
        "src/*.py",
    ]),
    deps = [
        "//rosecore/proto:project_py_pb2_grpc",
        "//rosecore/proto:project_py_pb2",
    ],
)
pkg_files(
    name = "binary",
    srcs = [
        ":rosecore_cli",
    ],
    prefix = "/usr/share/rosecore/bin",
)

pkg_filegroup(
    name = "arch",
    srcs = [
        ":binary",
    ],
    visibility = ["//visibility:public"],
)

pkg_tar(
    name = "rosecore_tar",
    srcs = [
        ":arch",
    ],
    include_runfiles = True,
)
When I build it, copy the tar file, and extract it, I get the following error:
AssertionError: Cannot find .runfiles directory for ./usr/share/rosecore/bin/rosecore_cli
Any help would be appreciated :)
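A hedged note rather than a confirmed fix: the error says the extracted binary cannot locate its .runfiles tree, which the pkg_files/pkg_tar layout above does not reproduce. Bazel's --build_python_zip flag builds a py_binary as a self-contained executable zip that carries its runfiles inside it, which may suit copying to another machine (the package path below is an assumption):
# Build the binary as a self-contained zip that embeds its runfiles:
bazel build --build_python_zip //rosecore:rosecore_cli
# The output under bazel-bin can be copied to the remote machine and run
# directly, or with a Python interpreter:
python3 rosecore_cli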

How to create a rule which, when run, will also run a dependent target first

I am trying to create a Bazel rule which executes a docker-compose command and spins up all Docker images from a docker-compose.yaml file. I was able to get this going, but my next step is to make my rule depend on another container_image target from my build files.
I would like to first run this container_image target and then my own rule. I need to run the container_image rule, as that is the only way the rule will actually load the built image into Docker. I need this because I am planning to inject the name of the newly built image into my docker-compose.yaml file.
Code for my rule is:
def _dcompose_up_impl(ctx):
    toolchain_info = ctx.toolchains["@com_unfold_backend//rules:toolchain_type"].dc_info
    test_image = ctx.attr.test_image
    docker_up = ctx.actions.declare_file(ctx.label.package + "-" + ctx.label.name + ".yaml")
    image_name = "bazel/%s:%s" % (test_image.label.package, test_image.label.name)
    ctx.actions.expand_template(
        output = docker_up,
        template = ctx.file.docker_compose,
        substitutions = {"{IMAGE}": image_name},
    )
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.run(executable = ctx.executable.test_image, outputs = [out])
    runfiles = ctx.runfiles(files = ctx.files.data + [docker_up, out])
    cmd = """echo Running docker compose for {file}
{dockerbin} -f {file} up -d
""".format(file = docker_up.short_path, dockerbin = toolchain_info.path)
    exe = ctx.actions.declare_file("docker-up.sh")
    ctx.actions.write(exe, cmd)
    return [DefaultInfo(
        executable = exe,
        runfiles = runfiles,
    )]
dcompose_up = rule(
    implementation = _dcompose_up_impl,
    attrs = {
        "docker_compose": attr.label(allow_single_file = [".yaml", ".yml"], mandatory = True),
        "data": attr.label_list(allow_files = True),
        "test_image": attr.label(
            executable = True,
            cfg = "exec",
            mandatory = True,
        ),
    },
    toolchains = ["//rules:toolchain_type"],
    executable = True,
)
The problem is the out file I declare for running the container_image target test_image. I get this error from Bazel:
Loaded image ID: sha256:c15a1b44d84dc5d3f1ba5be852e6a5dfbdc11e24ff42615739e348bdb0522813
Tagging c15a1b44d84dc5d3f1ba5be852e6a5dfbdc11e24ff42615739e348bdb0522813 as bazel/images/test:image_test
ERROR: /monogit/test/BUILD:18:22: output '/test/integration.up.out' was not created
ERROR: /monogit/test/BUILD:18:22: Action test/integration.up.out failed: not all outputs were created or valid
If I remove the out file from the runfiles of my docker-compose execution, then my test_image does not get loaded into Docker. In the example above it gets loaded, but docker-compose then fails.
It is obvious to me that the container_image rule does not create the output file I declared. In that case, how could I make Bazel run, not just build, the container_image executable before my own?
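One direction that might work, sketched here as an assumption rather than a verified answer: a build action cannot load images into the host's Docker daemon, so instead of calling ctx.actions.run on test_image, chain the loader into the generated script so both run at bazel run time. This assumes test_image is built for the target configuration (cfg = "target") and its executable and runfiles are carried along:
    # In _dcompose_up_impl: drop the `out` output and the ctx.actions.run call,
    # and let the wrapper script run the image loader first.
    cmd = """#!/bin/bash
set -e
# Load the freshly built image into the local Docker daemon...
{loader}
# ...then bring the compose stack up against it.
{dockerbin} -f {file} up -d
""".format(
        loader = ctx.executable.test_image.short_path,
        dockerbin = toolchain_info.path,
        file = docker_up.short_path,
    )
    runfiles = ctx.runfiles(files = ctx.files.data + [docker_up, ctx.executable.test_image])
    runfiles = runfiles.merge(ctx.attr.test_image[DefaultInfo].default_runfiles)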

How to pass test args in Skylark Bazel?

I'm writing some bazel tests where I need to be able to provide the full path to some file.
bazel test //webservice:checkstyle-test --test_arg="path_to_some_file"
My question is: how can you parse the input arguments in your bazel test? Is there anything like ctx.arg?
BUILD
load("//:checkstyle.bzl", "checkstyle_test")
checkstyle_test(
    name = "",
    src = [],
    config = "",
)
checkstyle.bzl
def _checkstyle_test_impl(ctx):
    # How can I get my input parameter here?
checkstyle_test = rule(
    implementation = _checkstyle_test_impl,
    test = True,
    attrs = {
        "_classpath": attr.label_list(default = [
            Label("@com_puppycrawl_tools_checkstyle//jar"),
        ]),
        "config": attr.label(allow_single_file = True, default = "//config:checkstyle"),
        "suppressions": attr.label(allow_single_file = True, default = "//config:suppressions"),
        "license": attr.label(allow_single_file = True, default = "//config:license"),
        "properties": attr.label(allow_single_file = True),
        "opts": attr.string_list(),
        "string_opts": attr.string_dict(),
        "srcs": attr.label_list(allow_files = True),
        "deps": attr.label_list(),
    },
)
The value of --test_arg is passed to the test executable as program arguments when Bazel runs it during bazel test; see https://docs.bazel.build/versions/main/command-line-reference.html#flag--test_arg
For example:
def _impl(ctx):
    ctx.actions.write(
        output = ctx.outputs.executable,
        content = "echo test args: ; echo $@",
    )

my_test = rule(
    implementation = _impl,
    test = True,
)
load(":defs.bzl", "my_test")
my_test(name = "foo")
$ bazel test foo --test_arg=--abc=123 --test_output=streamed
WARNING: Streamed test output requested. All tests will be run locally, without sharding, one at a time
INFO: Analyzed target //:foo (24 packages loaded, 277 targets configured).
INFO: Found 1 test target...
test args:
--abc=123
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.559s, Critical Path: 0.13s
INFO: 5 processes: 3 internal, 2 linux-sandbox.
INFO: Build completed successfully, 5 total actions
//:foo PASSED in 0.0s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 5 total actions
I don't think there's currently a way to get the value of --test_arg in a Starlark rule implementation (it's not exposed under ctx.fragments, for example).
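If the file is known at build time, there is an alternative that side-steps --test_arg entirely: model it as an attribute and expand its path into the test script during analysis. A minimal sketch (the config attribute name is illustrative):
def _impl(ctx):
    # Bake the file's runfiles path into the generated test script at
    # analysis time, instead of reading it from --test_arg at run time.
    ctx.actions.write(
        output = ctx.outputs.executable,
        content = "echo checking: %s" % ctx.file.config.short_path,
    )
    return [DefaultInfo(runfiles = ctx.runfiles(files = [ctx.file.config]))]

my_test = rule(
    implementation = _impl,
    test = True,
    attrs = {
        "config": attr.label(allow_single_file = True, mandatory = True),
    },
)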

Is there a way to execute a repository_rule with local=True on every bazel invocation?

I have a repository_rule which queries the local system to see if a database is running and is fully migrated.
# impl
result = ctx.execute(["mysql", "-u", "root", "--protocol", "tcp", "-e", "select * from %s.flyway_schema_history" % ctx.attr.dbname])
ctx.file(
    "local_checksum",
    """
{RETURN_CODE}
{STDERR}
{STDOUT}
""".format(
        RETURN_CODE = result.return_code,
        STDERR = result.stderr,
        STDOUT = result.stdout,
    ),
)
...
# Rule Def
local_database = repository_rule(
    implementation = _local_database,
    local = True,
    configure = True,
    attrs = {
        "datasource_configuration": attr.label(providers = [DataSourceConnectionInfo]),
        "dbname": attr.string(doc = """
If omitted, will be the name of the repository.
"""),
        "migrations": attr.label_list(allow_files = True),
    },
)
The local_checksum file is recalculated, and does its job, whenever the dependency graph changes (as stated in the docs: https://docs.bazel.build/versions/master/skylark/repository_rules.html#when-is-the-implementation-function-executed).
But since the database is not managed by Bazel, is there any way to force this specific rule to run every time Bazel is invoked, to ensure all dependencies are available?
After some sleep I cobbled something together. I'm still looking for a better answer; I would think there is a first-class way to solve this.
I created a bazel wrapper at tools/bazel:
#!/bin/bash
set -e
echo "`date`
Generated by tools/bazel" > .bazelexec.stamp
# from https://github.com/grpc/grpc/blob/master/tools/bazel
exec -a "$0" "${BAZEL_REAL}" "$@"
and then I added an attribute to the rule for reading that file:
local_database = repository_rule(
    implementation = _local_database,
    local = True,
    configure = True,
    attrs = {
        "datasource_configuration": attr.label(providers = [DataSourceConnectionInfo]),
        "dbname": attr.string(doc = """
If omitted, will be the name of the repository.
"""),
        "migrations": attr.label_list(allow_files = True),
        "recalculate_when": attr.label_list(allow_files = True, doc = """
Files to watch which will trigger the repository to run when they change.
You can add a tools/bazel script to your local repository, and write a file with a date
every time bazel is executed, in order to get the migrator to check on each bazel run
whether someone changed the database.
"""),
    },
)
and lastly, I resolve paths for those files in the rule implementation so the repository believes its dependency graph has changed:
# If you don't do something with the file, then the rule does not recalculate.
[ctx.path(file) for file in ctx.attr.recalculate_when]

# Total mysql hack for now... need a java tool which dumps the content of a table for different databases
result = ctx.execute(
    ["mysql", "-u", "root", "--protocol", "tcp", "-e", "show databases"],
)
ctx.file(
    "local_database_checksum",
    """
{RETURN_CODE}
{STDERR}
{STDOUT}
""".format(
        RETURN_CODE = result.return_code,
        STDERR = result.stderr,
        STDOUT = result.stdout,
    ),
)
Now, every time I run a build: if the databases change, the local_database_checksum file changes and can trigger other rules to rebuild (in my case I'm generating jOOQ classes, so my query goes into the tables); if the databases and tables are stable, it doesn't trigger a rebuild.
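For reference, a usage sketch tying the pieces together; the workspace and target names here are illustrative assumptions. The stamp file written by tools/bazel is exposed to the rule via recalculate_when, so the repository re-runs on every Bazel invocation:
# WORKSPACE (names are illustrative)
load("//rules:local_database.bzl", "local_database")

local_database(
    name = "local_db",
    dbname = "myapp",
    migrations = ["//db:migrations"],
    recalculate_when = ["//:.bazelexec.stamp"],
)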

Using an if condition in a Dockerfile

RUN if [ "$AUTH_MS_PROFILE" = "test" ]; then RUN ["mvn", "verify"]; fi
The situation: I am trying to have two images, one for prod and one for test, since I don't need to run integration tests in prod, so I am using a build arg to set the dev or test profile.
I need an if condition: if the input is test it should run the tests, otherwise it shouldn't.
I would move all such conditions to a build_internal.sh file:
if [ "$AUTH_MS_PROFILE" = "test" ]; then
mvn verify
fi
Copy this file into the image and run it from the Dockerfile. If you want to keep your approach, then you just need to use:
RUN if [ "$AUTH_MS_PROFILE" = "test" ]; then mvn verify ; fi
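For this to work, the arg has to be declared in the Dockerfile before the RUN that reads it, and supplied at build time. A minimal sketch (the image tags and default value are assumptions):
# Dockerfile excerpt: declare the build arg with a non-test default
ARG AUTH_MS_PROFILE=dev
RUN if [ "$AUTH_MS_PROFILE" = "test" ]; then mvn verify ; fi

# Then build the two variants from the same Dockerfile:
docker build --build-arg AUTH_MS_PROFILE=test -t auth-ms:test .
docker build -t auth-ms:prod .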
