While migrating an existing build to Bazel, I have a submodule mod1 whose JUnit tests read files from a "testdata" directory. When loading those files, I have to use "mod1/testdata/test.txt" instead of "testdata/test.txt", i.e. the unit tests have to be aware of their corresponding Bazel module directory.
(1) Is this the correct behaviour for Bazel 0.23.2#debian and 0.23.2-homebrew?
(2) Is there a way to use the .java tests without changes, and to remove the need for the "mod1" prefix in Bazel data/runfiles?
My sample project is here: https://gitlab.com/jhinrichsen/bazel-data-test. I am looking for a way to use the same path "testdata/test.txt" for both the root module and the submodule. In my example project, bazel test AllTests succeeds, while bazel test mod1/AllTests fails because I need to prepend "mod1/" to "testdata/test.txt".
I am not looking for a resources/classpath-based solution, as I cannot modify the existing test sources.
The behavior that you are seeing is indeed the correct behavior, and there is no way to strip the "mod1" prefix with the native Java rules. Anything you include with data will be scoped to its own package in the way you're seeing.
The reason for this is pretty straightforward. Let's say that your test target, //mod1:AllTests, also depended on a hypothetical //mod2:tests library. And let's say that hypothetical library also had a testdata/test.txt as a data dependency. The multiple test.txt files would conflict unless they were namespaced to their packages.
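Concretely, the runfiles tree for //mod1:AllTests would then contain both files side by side (a sketch, with <workspace> standing in for your actual workspace name):
AllTests.runfiles/<workspace>/mod1/testdata/test.txt
AllTests.runfiles/<workspace>/mod2/testdata/test.txt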
If you absolutely cannot modify the test source at all, then you are pretty much stuck. Here's a previous discussion about this:
https://groups.google.com/forum/#!topic/bazel-discuss/w6TDwSZvN0k
If you're trying to work with Bazel, I would recommend accepting the concept of runfiles and modifying your tests to either work with the runfiles structure, or accept a command-line argument for where to find the test data.
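For illustration, here is a minimal sketch of the runfiles approach, assuming a workspace named my_workspace and a java_test that depends on the runfiles library at @bazel_tools//tools/java/runfiles (the helper class itself is made up for illustration):

import com.google.devtools.build.runfiles.Runfiles;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper: resolves the test file through Bazel's runfiles
// library instead of a hard-coded relative path. Note that the package
// prefix ("mod1/") remains part of the runfiles path.
public class TestDataHelper {
  public static String readTestData() throws IOException {
    Runfiles runfiles = Runfiles.create();
    String path = runfiles.rlocation("my_workspace/mod1/testdata/test.txt");
    return new String(Files.readAllBytes(Paths.get(path)));
  }
}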
I would like a set of rules from my_package.bzl to be accessible to all BUILD files of a workspace without having to load my_package.bzl in the BUILD files. Basically I want the rules in the package to look like native rules. How can I achieve this?
I was thinking maybe there's a line I could add to one of the .bazelrcs or to the WORKSPACE file of the project.
This can be achieved by adding a prelude_bazel file at //tools/build_rules:prelude_bazel (this must be a package, so tools/build_rules must contain a BUILD file).
This will be loaded and prepended to all BUILD files loaded by Bazel.
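For example, a hypothetical //tools/build_rules/prelude_bazel could consist of nothing but load statements, whose symbols then become available in every BUILD file:

# tools/build_rules/prelude_bazel (hypothetical sketch; assumes
# my_package.bzl lives in the workspace root)
# Everything loaded here is implicitly in scope in all BUILD files.
load("//:my_package.bzl", "my_rule", "my_other_rule")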
However, there are a few things to consider before going this route. It's currently undocumented, and from the little information that searching turns up, it's unclear whether it will remain a part of Bazel.
It may also have performance / scaling problems. If the prelude were to change (or any of its dependencies), every BUILD file would have to be reloaded, and this may take some time depending on the size of the build graph.
The code_builder (https://pub.dartlang.org/packages/code_builder) package provides a solution to generate classes, with constructors, fields, and methods for each class.
My ultimate goal is to generate Flutter (https://flutter.io) Widgets based on a given JSON structure, but I don't know how to do this with code_builder or another package.
So help would be appreciated!
The general way to write something which outputs Dart code is to wrap up the functionality in a Builder and to perform the code generation with build_runner.
At a high level you'd write a Builder that (see the sketch after this list):
Has buildExtensions of {".json": [".dart"]}.
Reads in the buildStep.inputId asset and parses the json.
Uses code_builder to build up a String and then write it to the output asset.
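A minimal sketch of such a Builder, under the assumption of a trivial mapping from top-level JSON keys to String fields (the names JsonWidgetBuilder and GeneratedWidget are made up for illustration):

import 'dart:convert';

import 'package:build/build.dart';
import 'package:code_builder/code_builder.dart';

// Hypothetical sketch: turns foo.json into foo.dart containing one class
// with a String field per top-level JSON key.
class JsonWidgetBuilder implements Builder {
  @override
  final buildExtensions = const {
    '.json': ['.dart'],
  };

  @override
  Future<void> build(BuildStep buildStep) async {
    // Read and parse the input .json asset.
    final json = jsonDecode(await buildStep.readAsString(buildStep.inputId))
        as Map<String, dynamic>;

    // Use code_builder to assemble the class declaration.
    final generatedClass = Class((b) => b
      ..name = 'GeneratedWidget'
      ..fields.addAll(json.keys.map((key) => Field((f) => f
        ..name = key
        ..type = refer('String')))));

    // Emit the code and write it to the matching .dart output asset.
    await buildStep.writeAsString(
      buildStep.inputId.changeExtension('.dart'),
      generatedClass.accept(DartEmitter()).toString(),
    );
  }
}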
Then you'd configure the builder in build.yaml, and either apply it manually to your package, or, if you'd like to publish it as a utility, have it apply automatically to packages that depend on it.
Your package would have a dev_dependency on build_runner and then you can execute builds with flutter packages run build_runner build.
There are more docs at https://github.com/dart-lang/build/tree/master/docs
You can see an example of a package which does something similar (it starts with YAML files and outputs Dart files using code_builder) at https://github.com/natebosch/message_builder.
There is now an online tool which will generate the Dart classes from a JSON payload if you're only looking to structure your model classes. It won't do it dynamically at runtime, but it's super helpful when you're first building your program.
https://javiercbk.github.io/json_to_dart/
I have an external library ace.so.
cc_library(
    name = "ace",
    hdrs = glob(["path/to/ace/**"]),
    srcs = ["path/to/ace.so"],
)
How do I go about linking to that library with Bazel? I know a colon can be used when invoking gcc/g++ directly, but I'm not sure how to get the same behavior from Bazel.
I tried adding -l:ace.so (also -Wl,-l:ace.so) to copts, but it seems Bazel doesn't pass that on to gcc or add it to the params file (the @-file) used for linker args.
I tried nocopts='-lace.so' in combination with linkopts=['-l:ace.so']. No luck.
I also tried cc_import instead of cc_library, but that didn't work either.
I've read the Importing precompiled C++ libraries doc, but I didn't see anything about using libs with an arbitrary prefix - or with no prefix.
As a temporary fix, I've added a symlink libace.so pointing to ace.so and changed the srcs line to match. While this works, I'd much rather convince Bazel to use the lib as is.
Looking around at how information about libraries is collected and passed around, I am afraid this assumption (that "plain" dynamic libraries are prefixed with lib, so that libfoo.so can be given as -lfoo) is fairly hard-coded at the moment. The same would not be true if it was considered a "versioned" dynamic library (one matching the pattern "^.+\\.so(\\.\\d+)+$"), which would be passed as -l:foo.so.1. But unfortunately that does not really help you, because you'd still need to employ a similar workaround, and create a fiction of versioning to boot. That said, as long as your solib filenames are as given, the symlink sounds like a reasonably sane workaround.
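For illustration, the symlink/copy could also live inside the build itself rather than in the source tree; a hypothetical sketch (target names made up):

# Hypothetical sketch: give the library a lib-prefixed name within the
# build, so the toolchain can link it conventionally.
genrule(
    name = "ace_solib",
    srcs = ["path/to/ace.so"],
    outs = ["libace.so"],
    cmd = "cp $< $@",
)

cc_library(
    name = "ace",
    hdrs = glob(["path/to/ace/**"]),
    srcs = [":libace.so"],
)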
I wrote my own ORM framework (something along the lines of CoreData or Realm), and also wrote quite a few tests in Xcode for it.
Now I want to introduce an additional encoding format used for storing data on disk, but I also want to keep supporting the original encoding format.
Is there a good strategy to run all my existing -test* methods for both encoding formats without duplicating the existing test code?
The easiest way I have found is to just create a new test target and add all the same test classes to it. If you want them run in one go, create a target that has both of these test targets as dependencies (or just run them manually).
How you parametrize the tests for your different targets is up to you; we've successfully used two implementations of a category that contains the definition that varies.
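A minimal sketch of that category trick (all names hypothetical): the category is declared once in the shared test sources, and each test target compiles exactly one of the two implementation files, so the same -test* methods run against both formats.

// ORMTestCase+Encoding.h (shared by both test targets)
@interface ORMTestCase (Encoding)
- (ORMEncoding)encoding;
@end

// ORMTestCase+EncodingV1.m (member of the "v1" test target only)
@implementation ORMTestCase (Encoding)
- (ORMEncoding)encoding { return ORMEncodingV1; }
@end

// ORMTestCase+EncodingV2.m (member of the "v2" test target only)
@implementation ORMTestCase (Encoding)
- (ORMEncoding)encoding { return ORMEncodingV2; }
@end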
Is there a way to specify optional dependencies in Bazel?
I'd like to make a rule to somewhat mirror Kitware's ExternalData, but I would like to see if I can enable workflows where the developer edits the file in-tree, ideally without needing to modify the BUILD file.
Ideal Workflow
Define a rule, external_data, which can fetch a file from a given server given its SHA-512.
If the file already exists, check its SHA-512.
If that is what is requested, symlink / copy this file (ensuring that no tests can modify the original file).
If it is different, print a warning, but proceed as normal, to allow for developers to quickly modify the large files as they need.
I would like to do this such that Bazel can switch between the file being present and not, and be robust to false-positives on caching. An example scenario that I would like to avoid, if I were to not include it as an optional dependency:
In a prior run, the file was in the workspace, Bazel built the target, everything's fine and dandy.
Developer removes the file from the workspace after uploading, satisfied with their changes and wanting to test the download process.
When running the downstream target, Bazel doesn't care about the change in the workspace since it's not an explicit dependency; the symlink is now invalid, and the test crashes and burns.
To me, it seems like I'd run into this if I tried to implement a repository_rule which manually checks for the file's existence and conditionally executes (I'm not sure whether analysis would re-trigger this rule's evaluation if Step 2 happens).
Workaround
My current thought for an alternative workflow is to have an explicit option for external_data, use_workspace: if False, it will download the file; if True, it will just mirror exports_files([]). The developer can then set this when modifying files.
(Ideally, I'd like to optionally include a file which indicates the SHA (${file}.sha512), but this seems to go back to the original ask.)
One workaround is to use Bazel's glob(...) method to effectively check for file existence.
If you have a file, say basic.bin.sha512, and you want a rule to switch modes based on that file's existence, you can use glob(["basic.bin.sha512"]), which will either match the package file exactly or return an empty list.
I had tinkered around with using this on larger sets of files, and it appears to work. However, for the time being, I've erred toward having a sort-of explicit "development" mode in the target definition, to keep the Bazel build relatively consistent regardless of which files may be checked out.
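For instance, a hypothetical BUILD snippet switching an external_data target's mode on the file's presence (external_data and its mode attribute are the rule sketched in the question, not a built-in):

# Hypothetical sketch: glob() returns ["basic.bin.sha512"] if the file
# is checked in and [] otherwise, so it can drive a conditional.
external_data(
    name = "basic_bin",
    mode = "download" if glob(["basic.bin.sha512"]) else "devel",
)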
Here's an example usage:
https://github.com/EricCousineau-TRI/external_data_bazel/blob/4bf1dff/WORKFLOWS.md#edit-files-in-a-sha512-group