How to show all gtest cases with bazel without the "test" command - bazel

I want to query all gtest cases with bazel.
The parameter "--gtest_filter" can only be used with the "bazel test" command,
and I have tried "bazel query //xxx:all", but that only shows the test targets defined in the BUILD file. I want to get the list of test cases from the xxx.cc files.

This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through to the test runner; Bazel does not know what it represents.)
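If you only need the list of test cases, one workaround is to ask the test binary itself, since googletest binaries understand --gtest_list_tests. For example (the target //xxx:some_test is illustrative):

bazel run //xxx:some_test -- --gtest_list_tests

# or list the cases of every cc_test in the package:
for t in $(bazel query 'kind("cc_test", //xxx:all)'); do
    bazel run "$t" -- --gtest_list_tests
done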

If you use CLion with the Bazel plugin, you get the following view for googletest tests:
This works even with Catch2 (although for Catch2 the view is not as nice). I guess there is some IDE magic involved here; nevertheless, it gives you what you want. I assume you could also come up with some kind of Bazel aspect that produces this information for you (see the sketch below).
I also tested this with Lavender (with minor modifications) and with Visual Studio, which likewise shows a list of all tests in the test overview.
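A rough sketch of what such an aspect could look like is shown below. This is an untested illustration (the file name list_tests.bzl and all identifiers are made up), and it assumes the test binary can be executed in a build action to print its own cases:

# list_tests.bzl - hypothetical aspect that asks each cc_test binary for its cases
def _list_gtest_cases_impl(target, ctx):
    if ctx.rule.kind != "cc_test":
        return []
    out = ctx.actions.declare_file(target.label.name + ".gtest_cases.txt")
    exe = target[DefaultInfo].files_to_run
    ctx.actions.run_shell(
        outputs = [out],
        tools = [exe],
        command = "%s --gtest_list_tests > %s" % (exe.executable.path, out.path),
        mnemonic = "ListGtestCases",
    )
    return [OutputGroupInfo(gtest_cases = depset([out]))]

list_gtest_cases = aspect(implementation = _list_gtest_cases_impl)

It could then be invoked with something like: bazel build //xxx:all --aspects=//:list_tests.bzl%list_gtest_cases --output_groups=gtest_cases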

Related

Can I add static analysis to a py_binary or py_library rule?

I have a repo which uses bazel to build a bunch of Python code. I would like to introduce various flavors of static analysis into the build and have the build fail if these static analyses throw errors. What is the best way to do this?
For example, I'd like to declare something like:
py_library_with_static_analysis(
name = "foo",
srcs = ["foo.py"],
)
py_library_with_static_analysis(
name = "bar",
srcs = ["bar.py"],
deps = [":foo"],
)
In a build file and have it error out if there are mypy/flake/etc errors in foo.py. I would like to be able to do this gradually, converting libraries/binaries to static analysis one target at a time. I'm not sure if I should do this via a new rule, a macro, an aspect or something else.
Essentially, I think I'm asking how to run an additional command while building a py_binary/py_library and fail if that command fails.
I could create my own version of a py_library rule and have it run static analysis within the implementation but that seems like something which is really easy to get wrong (my guess is that native.py_library is quite complex?) and there doesn't seem to be a way to instantiate a native.py_library within a custom rule.
I've also played around with macros a bit, but haven't been able to get that to work either. I think my issue there is that a macro doesn't actually specify new commands, only new targets and I can't figure out how to make the static analysis target get force built along with the py_library/py_binary I'm interested in.
A macro that adds implicit test targets is not such a bad idea: The test targets will be picked up automatically when you run bazel test //..., which you could do in a gating CI to prevent imperfect code from merging.
Bazel supports a BUILD prelude (which is underdocumented) that you could use to replace all py_binary, py_library, and even py_test with your test-adding wrapper macros with minimal changes to existing code.
If you instead fail the build, it will make it harder to prototype things quickly. Sometimes you want to just try something out, and you don't care about any pydoc violations yet.
In case you do want to fail the build, you might be able to use the Validations Output Group of a rule that you implement to wrap or replace your py_libraries.
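To make the wrapper-macro idea more concrete, here is a minimal, untested sketch; the macro name, the //tools:run_mypy.sh helper and its argument handling are all assumptions, not existing rules:

# lint_wrappers.bzl - minimal sketch of a wrapper macro that adds an implicit lint test
def py_library_with_static_analysis(name, srcs = [], deps = [], **kwargs):
    # The real library target, unchanged.
    native.py_library(
        name = name,
        srcs = srcs,
        deps = deps,
        **kwargs
    )

    # An implicit test target that fails when the analyzer reports errors.
    # It is picked up automatically by `bazel test //...`.
    native.sh_test(
        name = name + "_mypy_test",
        srcs = ["//tools:run_mypy.sh"],  # hypothetical wrapper script around mypy
        data = srcs,
        args = ["$(location %s)" % s for s in srcs],
    )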

Determine if files are part of any package

Given I have a list of files, e.g. foo/src/main.cpp, foo/src/bar.cpp, foo/README.md, is it possible to determine which of those files are part of a bazel package?
In my example, the output would be foo/src/main.cpp, foo/src/bar.cpp, since the README.md is not part of the build.
One way to do this would be to call bazel query on each file and see if it produces any output, but that is quite inefficient, so I was wondering if there is an easier way.
Background: I am trying to determine whether changes in a set of files have an impact on a target, and I want to use bazel query somepath(//some/target, set($FILES)) for that, but this will fail if any of the files in $FILES is not part of a BUILD file.
How about flipping it around and querying for all the source files of the target with:
bazel query 'kind("source file", deps(//some:target))'
and then checking whether the result contains any of the files in your set.
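A rough way to do that intersection on the command line (changed_files.txt and the label-to-path conversion are simplifications; labels from external repositories would need extra handling):

bazel query 'kind("source file", deps(//some:target))' \
    | sed -e 's|^//||' -e 's|:|/|' \
    | sort > /tmp/target_srcs.txt
# changed_files.txt holds one path per line, e.g. foo/src/main.cpp
sort changed_files.txt | comm -12 - /tmp/target_srcs.txt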

How to use bazel query to get all test rule types in BUILD file?

A bazel query like bazel query 'kind(".*_test", //path/to/package:*)' will give the names of all test rules in the BUILD file.
But how would one get the actual test rule types in the BUILD file, i.e., find out which of these rule types (py_test, cc_test, ...) the targets actually are?
Try using the --output label_kind flag, e.g.:
https://docs.bazel.build/versions/master/query-how-to.html#what-rules-are-defined-in-the-foo-package
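For example, combining it with the kind filter from above (the output lines are illustrative):

bazel query 'kind(".*_test", //path/to/package:*)' --output label_kind
# cc_test rule //path/to/package:foo_test
# py_test rule //path/to/package:bar_test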

Skylark - How to execute a jar from a repository rule

Context
I am writing a repository rule that invokes another Bazel project. My current approach is to build the additional project as a deploy jar. I would like a user to be able to instantiate the rule like:
jar_path = some/relative/path
my_rule(name = "something", p_arg="m_arg", binary=jar_path)
and then given the jar_path and the arguments, I would like the repository rule to execute the following command in the shell:
java -jar $(SOME_JAR) $(ARGUMENTS_PROVIDED_BY_RULE)
Problem
First, it's unclear how best to accomplish the deploy jar approach. So far, I have attempted two different approaches, with varying levels of success. As examples, I have skimmed through the scala_rules, the maven_rules, and the Skylark cookbook.
Second, and more importantly, I am not sure whether the deploy jar is the best route to accomplishing my goals. Again, my interest is to invoke a target from an external Bazel project that is currently hosted on GitHub. (So feasibly, I could try to fetch the project using the http_archive rule.)
Below, I describe the attempts I have made.
Approach 1
My first approach involved trying to execute the command using the command field in ctx.action. I tried various variations of
java -jar {computed_absolute_path_of_deploy_jar} {args_passed_from_instantiation}.
My biggest issue here was determining the absolute path of the deploy jar. The file's root path would contain some additional information. For example, it would look something like this:
/abs/olute/path[ something ]/rela/tive/path
As a side note, I'm not sure if this is a bug/nit, but File.root.path evaluated to None, despite the File itself not being None.
Approach 2
Next thing I tried was to mimic the input binary example from the docs. This was also unsuccessful. The issue was that the actual binary could not be found. Here is how I configured it.
First, I relaxed the repository rule into a regular skylark rule.
def _test_binary(ctx):
    ctx.action(
        ....
        arguments = [ctx.attr.p_arg],
        executable = ctx.executable.binary,
    )

test_binary = rule(
    ...
    attrs = {
        "binary": attr.label(mandatory=True, cfg="host", allow_files=True, executable=True),
        ...
    },
)
Then, in my external project, I loaded the skylark rule into the WORKSPACE file. Finally, I called the macro from one of my BUILD files as follows:
load("#something_rule//:something_rule.bzl", "test_binary")
test_binary(name = "hello", p_arg = "hello", binary = "script.sh")
The script is a one line java -jar something_deploy.jar -- -arg:$1, and is in the same directory as the BUILD file.
Bazel complains that src/script.sh does not exist. I presume because it is looking for the file in /private/var/tmp/-bazel_username/somehash/relative_path. In response, I tried to pass the absolute path, which is not allowed.
Cheers.
It looks like you're mixing up repository rules with build extensions ("normal" rules). A good rule of thumb is:
Repository rules are for getting sources onto your system or symlinking them to a place Bazel can see them.
Build extensions are for everything else: compiling, copying files, running binaries, etc.
I don't actually think you need to use either, for this. You say that the other project is on GitHub, so you can add the following to your WORKSPACE file:
http_archive(
    name = "other_project",
    ...
)
Then, in your BUILD file:
genrule(
    name = "run-a-jar",
    srcs = ["@other_project//some/relative:path"],
    cmd = "java -jar $(location @other_project//some/relative:path) -- arg1 arg2 > $@",
    outs = ["jar-output"],
)
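Building the genrule then runs the jar as part of the build, and the captured stdout lands in the declared output (the exact output directory depends on the Bazel version):

bazel build //:run-a-jar
cat bazel-genfiles/jar-output   # or bazel-bin/jar-output on newer Bazel versions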
You shouldn't need to use the _deploy.jar target, since you're not moving the jar out of its project (_deploy.jar is useful when you need to relocate it).
Other things from your question:
I'm not sure if this is a bug/nit, but the File.root.path, evaluated to None,
Are you sure it didn't evaluate to ""? The path is relative to the execution root, so for sources, it will always be "" (for outputs, it'll be bazel-out/local-fastbuild/bin or similar).
Bazel complains that src/script.sh does not exist.
Passing -s to Bazel can really help debugging Skylark rules. You can see exactly where it is looking.
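For instance (the target name is illustrative):

bazel build -s //src:something
# prints each command Bazel spawns, together with its working directory,
# which shows where it expected script.sh to be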

Sample TFS 2010 Build Process Template for NCover [duplicate]

I was wondering if any of you guys had any experience generating code coverage reports in TFS Build Server 2010 while running NUnit tests.
I know it can be easily done with the packaged alternative (MSTest + enabling coverage on the testrunconfig file), but things are a little more involved when using NUnit. I've found some info here and there pointing to NCover, but it seems outdated. I wonder if there are other alternatives and whether someone has actually implemented this or not.
Here's more info about our environment/needs:
- TFS Build Server 2010
- Tests are in plain class libraries (not Test libraries - i.e., no testrunconfig files associated), and are implemented in NUnit. We have no MSTests.
- We are interested in running coverage reports as part of each build and if possible setting coverage threshold requirements for pass/fail criteria.
We've done it with NUnit-NCover and are pretty happy with our results. NUnit execution is followed by NUnitTfs execution in order to get our test results published in the Build Log. Then NCover kicks in, generating our code coverage results.
Two things could pose as disadvantages:
- NUnitTfs doesn't work well with NCover (at least I couldn't find a way to execute both in the same step), so (since NCover invokes NUnit) I have to run the unit tests twice: (1) to get the test results and (2) to get coverage results via NCover. Naturally, that makes my builds last longer.
- Setting up the arguments for properly invoking NCover wasn't trivial. But since I installed it, I never had to maintain it.
In any case, the resulting reporting (especially the Trend aspect) is very useful in monitoring how our code evolves over time. Especially if you're working on a Platform (as opposed to short-lived Projects), Trend reports are of great value.
EDIT
I'll try to present, in a quick & dirty manner, how I've implemented this; I hope it can be useful. We currently have NCover 3.4.12 on our build server.
Our simple naming convention regarding our NUnit assemblies is that if we have a production assembly "123.dll", then another assembly named "123_nunit.dll" exists that implements its tests. So, each build has several *_nunit.dll assemblies that are of interest.
The part of the build process template under "If not disable tests" is the one that has been reworked in order to achieve our goals, in particular the section that was named "Run MSTest for Test Assemblies". The whole implementation is shown here, after some cleanup to make the flow easier to understand (the pic was too large to be inserted directly here).
First, some additional Arguments are implemented in the Build Process Template and are then available to be set in each build definition:
We then form the NUnit args in "Formulate nunitCommandLine":
String.Format("{0} /xml={1}\\{2}.xml", nunitDLL, TestResultsDirectory, Path.GetFileNameWithoutExtension(nunitDLL))
This is then used in the "Invoke NUnit" activity.
If this succeeds and we have enabled coverage for this build, we move to "Generate NCover NCCOV" (the coverage file for this particular assembly). For this we invoke NCover.Console.exe with the following Args:
String.Format("""{0}"" ""{1}"" //w ""{2}"" //x ""{3}\{4}"" //literal //ias {5} //onlywithsource //p ""{6}""",
NUnitPath,
Path.GetFileName(nunitDLL),
Path.GetDirectoryName(nunitDLL),
Path.GetDirectoryName(Path.GetDirectoryName(nunitDLL)),
Path.GetFileName(nunitDLL).Replace("_nunit.dll", ".nccov"),
Path.GetFileNameWithoutExtension(nunitDLL).Replace("_nunit", ""),
BuildDetail.BuildNumber)
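With illustrative values substituted, the argument string handed to NCover.Console.exe would look roughly like this (paths and the build number are made up):

"C:\Program Files\NUnit\bin\nunit-console.exe" "123_nunit.dll" //w "C:\Builds\Sources\bin" //x "C:\Builds\Sources\123.nccov" //literal //ias 123 //onlywithsource //p "MyBuild_20101231.1"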
All these run in the foreach loop "For all nunit dlls". When we exit the loop, we enter "Final NCover Activities", starting with the part "Merge NCCovs", where NCover.Console.exe is executed again, this time with different args:
String.Format("""{0}\*.nccov"" //s ""{0}\{1}.nccov"" //at ""{2}\{3}\{3}.trend"" //p {1} ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject
)
When this has run, we have reached the point where all NCCOV files of this build are merged into one NCCOV file named after the build, and the Trend file (which monitors the builds throughout their life) has been updated with the elements of this current build.
Now we only have to generate the final HTML report. This is done in "Generate final NCover rep", where we invoke NCover.Reporting with the following args:
String.Format(" ""{0}\{1}.nccov"" //or FullCoverageReport //op ""{2}\{1}_NCoverReport.html"" //p ""{1}"" //at ""{3}\{4}\{4}_{5}.trend"" ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
PathForNCoverResults,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject,
BuildType
)
