I have a main project that implements a Bazel rule and a subproject that simulates the end-user experience. In the subproject I'd like to load an archive that is created in the main project, as if it were loaded using http_archive. Here's an example repo layout:
root/
|- WORKSPACE
|- BUILD
|- rules.bzl
\- integration-tests/
   |- WORKSPACE
   \- BUILD
The root/BUILD file has a :release target which creates a tar.gz file. I would like to load this file inside integration-tests/WORKSPACE as if it were loaded using http_archive. Is there a way to do this?
The simplest way I've found is to use:

http_archive(
    urls = [
        "file://path_to_archive",
    ],
)
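For reference, a fuller version of that workaround might look like the following sketch. The repository name, strip_prefix, and the absolute path are illustrative assumptions; note that file:// URLs need an absolute path, and the archive must have been built before Bazel fetches it:

```python
# integration-tests/WORKSPACE (sketch; names and path are placeholders)
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "my_rules",  # hypothetical repository name
    # Built beforehand with e.g. `bazel build //:release` in the root project.
    urls = ["file:///absolute/path/to/bazel-bin/release.tar.gz"],
    strip_prefix = "release",  # adjust to the tarball's top-level directory
)
```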
Is there a way to do this?
Strictly speaking, no, there is not a way to do this. Repository rules like http_archive are executed during the loading phase of a build, while build outputs are created during the execution phase. A repository rule cannot depend on a build target, since that target won't have been built yet.
This is true even, and especially, across workspace boundaries. There is no way for your sub-project's WORKSPACE to directly depend on a build target from the parent project, or from any other project.
In this case, I'd think about whether you actually need to load the release tarball in the WORKSPACE. Is the tarball actually a Bazel repository? If so, you might want to look into techniques for testing Bazel extensions.
You can override dependencies on the command line, and use a local checkout of a repository instead of the http_archive.
For example, given the following WORKSPACE.bazel
workspace(name = "cli_cpp")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# Exposes everything in the archive as one target (used via build_file_content below).
all_content = """filegroup(name = "all", srcs = glob(["**"]), visibility = ["//visibility:public"])"""

http_archive(
    name = "qpid-cpp",
    build_file_content = all_content,
    strip_prefix = "qpid-cpp-main",
    url = "https://github.com/apache/qpid-cpp/archive/main.zip",
)
You can check out the qpid-cpp repository (or download and unzip that archive), make your changes in it, add an empty WORKSPACE.bazel file, and also add the following BUILD.bazel there:
filegroup(name = "all", srcs = glob(["**/*"]), visibility = ["//visibility:public"])
Now, run Bazel with
--override_repository=qpid-cpp=/path/to/qpid-cpp
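If you use the override regularly, it can also be sketched into a .bazelrc so every invocation picks it up (the path is a placeholder):

```shell
# .bazelrc (sketch; path is a placeholder)
build --override_repository=qpid-cpp=/path/to/qpid-cpp
```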
Related
I have a Java project that has multiple subprojects. It currently uses Gradle; however, we are now trying to move to Bazel.
How can I create a WAR file using Bazel build?
Could not find any example online.
The only thing I found is this repo:
https://github.com/bmuschko/rules_java_war
However it hasn't had any activity in over 3 years. I am not sure if it is still valid.
In Bazel, you can create a WAR (Web Application Archive) file by combining a few targets in your BUILD file. Note that Bazel has no built-in war rule, so the packaging step comes from an external ruleset (such as the rules_java_war repo you found) or a custom rule. Here are the general steps:
Define a Java library target: If your WAR project contains Java code, you will need to define a Java library target in your BUILD file. This target specifies the location of your Java code and its dependencies.
java_library(
    name = "my_java_library",
    srcs = glob(["src/main/java/**/*.java"]),
    deps = ["//third_party/library:library"],
)
Define a filegroup target: If your WAR project contains any web application resources (such as HTML, JavaScript, and CSS files), you will need to define a filegroup target in your BUILD file. This target specifies the location of your web application resources.
filegroup(
    name = "my_web_resources",
    srcs = glob(["src/main/webapp/**/*"]),
)
Define a war target: Finally, define a war target in your BUILD file. This target combines your Java library and web application resources and creates the WAR file. The exact rule name and attribute names depend on the ruleset you load it from; the following is illustrative:

war(
    name = "my_war_file",
    libs = [":my_java_library"],
    resources = [":my_web_resources"],
    webxml = "src/main/webapp/WEB-INF/web.xml",
)
These are the basic steps for creating a WAR in Bazel. You can find additional information and best practices for creating WAR files in Bazel in the Bazel documentation. Note that the exact steps for creating a WAR in Bazel will depend on the specific architecture and technology stack of your project.
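If you'd rather not depend on an external ruleset, a WAR is just a zip archive with a specific layout (WEB-INF/web.xml, jars under WEB-INF/lib), so one alternative is a plain genrule. This is a sketch under assumptions: the target names and webapp path match the examples above, `zip` is available on the build machine, and static resources are not copied here:

```python
# BUILD sketch: package the library jar and web.xml into a WAR with zip.
genrule(
    name = "my_war_file",
    srcs = [
        ":my_java_library",
        "src/main/webapp/WEB-INF/web.xml",
    ],
    outs = ["my_app.war"],
    cmd = """
        staging=$$(mktemp -d)
        mkdir -p $$staging/WEB-INF/lib
        # $(locations ...) expands to the java_library's output jar(s)
        cp $(locations :my_java_library) $$staging/WEB-INF/lib/
        cp $(location src/main/webapp/WEB-INF/web.xml) $$staging/WEB-INF/
        (cd $$staging && zip -qr out.war .)
        cp $$staging/out.war $@
    """,
)
```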
How do you enumerate and copy multiple files to the source folder in Bazel?
I'm new to Bazel and I am trying to replace a non-Bazel build step that is effectively cp -R with an idiomatic Bazel solution. Concrete use cases are:
copying .proto files to a sub-project where they will be picked up by a non-Bazel build system. There are N .proto files in N Bazel packages, all under one protos/ directory of the repository.
copying numerous .gotmpl template files to a different folder where they can be picked up in a docker volume for a local docker-compose development environment. There are M template files in one Bazel package in a small folder hierarchy. Example code below.
copying those same .gotmpl files to a gitops-type repo for a remote terraform to send to prod.
All sources are regular, checked in files in places where Bazel can enumerate them. All target directories are also Bazel packages. I want to write to the source folder, not just to bazel-bin, so other non-Bazel tools can see the output files.
Currently when adding a template file or a proto package, a script must be run outside of bazel to pick up that new file and add it to a generated .bzl file, or perform operations completely outside of Bazel. I would like to eliminate this step to move closer to having one true build command.
I could accomplish this with symlinks but it still has an error-prone manual step for the .proto files and it would be nice to gain the option to manipulate the files programmatically in Bazel in the build.
Some solutions I've looked into and hit dead ends:
glob seems to be relative to the current package, and I don't see how it can be exported, since it needs to be called from a BUILD file. A filegroup solves the export issue but doesn't seem to allow enumeration of the underlying files in a way that a bazel run target can take as input.
Rules like cc_library that happily input globs as srcs are built into the Bazel source code, not written in Starlark
genquery and aspects seem to have powerful meta-capabilities but I can't see how to actually accomplish this task with them.
The "bazel can write to the source folder" pattern and write_source_files from aspect-build/bazel-lib might be great if I could programmatically generate the files parameter.
Here is the template example which is the simpler case. This was my latest experiment to bazel-ify cp -R. I want to express src/templates/update_templates_bzl.py in Bazel.
src/templates/BUILD:
# [...]
exports_files(glob(["**/*.gotmpl"]))
# [...]
src/templates/update_templates_bzl.py:
#!/usr/bin/env python
from pathlib import Path
parent = Path(__file__).parent
template_files = [str(f.relative_to(parent)) for f in list(parent.glob('**/*.gotmpl'))]
as_python = repr(template_files).replace(",", ",\n ")
target_bzl = Path(__file__).parent / "templates.bzl"
target_bzl.write_text(f""""Generated template list from {Path(__file__).relative_to(parent)}"
TEMPLATES = {as_python}""")
src/templates/copy_templates.bzl:
"""Utility for working with this list of template files"""

load("@aspect_bazel_lib//lib:write_source_files.bzl", "write_source_files")
load(":templates.bzl", "TEMPLATES")

def copy_templates(name, prefix):
    files = {
        "%s/%s" % (prefix, f): "//src/templates:%s" % f
        for f in TEMPLATES
    }
    write_source_files(
        name = name,
        files = files,
        visibility = ["//visibility:public"],
    )
other/module:
load("//src/templates:copy_templates.bzl", "copy_templates")
copy_templates(
    name = "write_template_files",
    prefix = "path/to/gitops/repo/templates",
)
One possible method to do this would be to use google/bazel_rules_install.
As mentioned in the project README.md you need to add the following to your WORKSPACE file;
# file: WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "com_github_google_rules_install",
    urls = ["https://github.com/google/bazel_rules_install/releases/download/0.3/bazel_rules_install-0.3.tar.gz"],
    sha256 = "ea2a9f94fed090859589ac851af3a1c6034c5f333804f044f8f094257c33bdb3",
    strip_prefix = "bazel_rules_install-0.3",
)

load("@com_github_google_rules_install//:deps.bzl", "install_rules_dependencies")

install_rules_dependencies()

load("@com_github_google_rules_install//:setup.bzl", "install_rules_setup")

install_rules_setup()
Then in your src/templates directory you can add the following to bundle all your templates into one target.
# file: src/templates/BUILD.bazel
load("@com_github_google_rules_install//installer:def.bzl", "installer")

installer(
    name = "install_templates",
    data = glob(["**/*.gotmpl"]),
)
Then you can use the installer to install into your chosen directory like so.
bazel run //src/templates:install_templates -- path/to/gitops/repo/templates
It's also worth checking out bazelbuild/rules_docker for building your development environments using only Bazel.
I want to make a build for a C++ project so that developers can seamlessly build both in MacOS and Linux, but they need different pre-requisites.
Can I configure Bazel to run different commands depending on the architecture, as a pre-requisite to compiling C++ files?
You certainly can, there are two approaches I will suggest.
Approach 1 (recommended)
The best way to do this is to include all of your deps for each arch/os in your WORKSPACE file e.g.
# WORKSPACE
http_archive(
    name = "foo_repo_mac",
    # ...
)

http_archive(
    name = "foo_repo_linux",
    # ...
)
NOTE: While the example here uses http_archive, this approach works with other repository rules as well e.g. local_repository (for system deps), git_repository (for git deps) etc.
Then depend on the mac/linux version of these libs using a select statement, e.g.
# //:BUILD.bazel
cc_library(
name = "foo_cross_platform",
deps = select({
"#platforms//os:macos": ["#foo_repo_mac//:foo"],
"#platforms//os:linux": ["#foo_repo_linux//:foo"],
}) + ["//other:deps"],
)
Now what happens here is that when you build the target //:foo_cross_platform, Bazel first evaluates the deps in the select statement. Say we are running on Linux: it will select "@foo_repo_linux//:foo" as a dep. Because Bazel now knows there is a dependency on an external repository, it will go ahead and download the @foo_repo_linux repository. But because nothing depends on @foo_repo_mac, it won't download/configure that dependency. So in the case of a build on Linux, the following will work:
bazel build //...
bazel build //:foo_cross_platform
However, it is unlikely that running the following on Linux would work:
bazel build @foo_repo_mac//:foo
By default you can select based on cpu/os/distro. A full list of the configurations included in default Bazel is listed here. For more complex configurable builds, take a look over the Bazel docs on configuration.
Approach 2 (not recommended)
If approach 1 does not meet your needs, there is a second approach you could take: write your own repository_rule (https://docs.bazel.build/versions/main/skylark/repository_rules.html) that uses the repository_ctx.os field with a Starlark if statement, e.g.
# my_custom_repository.bzl
def _my_custom_repository_rule_impl(rctx):
    if rctx.os.name == "linux":
        rctx.download_and_extract(
            url = "http://www.my_linux_dep.com/foo",
            # ...
        )
    elif rctx.os.name.startswith("mac"):  # reported as "mac os x"
        rctx.download_and_extract(
            url = "http://www.my_macos_dep.com/foo",
            # ...
        )
    else:
        fail("No dependency for this os")
    # ... Generate a BUILD.bazel file etc.

my_custom_repository = repository_rule(_my_custom_repository_rule_impl)
Then add the following to your WORKSPACE;

# WORKSPACE
load("//:my_custom_repository.bzl", "my_custom_repository")

my_custom_repository(
    name = "foo",
)
Then depend on it directly in your build file (This depends on how you generate your BUILD.bazel file).
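To make the "Generate a BUILD.bazel file" step concrete, it might look like this inside the implementation function. This is a sketch under assumptions: the filegroup name and glob pattern are placeholders for whatever the dependency actually needs:

```python
# Sketch: inside _my_custom_repository_rule_impl, after download_and_extract.
rctx.file("BUILD.bazel", content = """
filegroup(
    name = "foo",
    srcs = glob(["**/*"]),
    visibility = ["//visibility:public"],
)
""")
```

With something like that in place, other targets can depend on @foo//:foo.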
The reason I don't recommend this approach is that repository_ctx.os.name is not particularly stable, and in general configuration using selects is far more expressive.
In its C++ unit testing tutorial, Bazel suggests adding a root level gtest.BUILD file to the workspace root in order to properly integrate Google Test into the test project.
https://docs.bazel.build/versions/master/cpp-use-cases.html
Why would one create a new BUILD file and add gtest prefix to it rather than adding a new build rule to an existing BUILD file in the workspace? Is it just a minor style preference?
Because if you added a BUILD file somewhere in the workspace (e.g. under //third_party/gtest/BUILD) then that file would create a package there.
Then, if you had targets declared in that BUILD file, would their files exist under //third_party/gtest, or would they exist in the zip file that the http_archive downloads? If the former, then there's no need for a http_archive because the files are already in the source tree; if the latter, then the BUILD file references non-existent files in its own package. Both scenarios are flawed.
Better to call gtest's BUILD-file-to-be something that doesn't create a package, but that's descriptive of its purpose.
The build_file attribute of http_archive can reference any file, there's no requirement of the name. The name gtest.BUILD is mostly stylistic, yes, but it also avoids creating a package where it shouldn't. You could say it's an "inactive" BUILD file that will be "active" when Bazel downloads the http_archive, extracts it somewhere, and creates in that directory a symlink called BUILD which points to gtest.BUILD.
Another advantage of having such "inactive" BUILD files is that you can have multiple of them within one package, for multiple http_archives.
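For context, the wiring the tutorial describes can be sketched like this; the URL, strip_prefix, and version are illustrative, not taken from the tutorial:

```python
# WORKSPACE sketch: gtest.BUILD lives in the main workspace, but only
# becomes an active BUILD file inside the downloaded repository.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "gtest",
    url = "https://github.com/google/googletest/archive/release-1.10.0.zip",  # example version
    strip_prefix = "googletest-release-1.10.0",
    build_file = "@//:gtest.BUILD",
)
```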
In my repository I have some files with the name "build" (automatically generated and/or imported, spread around elsewhere from where I have my bazel build files). These seem to be interpreted by Bazel as its BUILD files, and fail the full build I try to run with bazel build //...
Is there some way I can tell Bazel in a settings configuration file to ignore certain directories altogether? Or perhaps specify the build file names as something other than BUILD, like BUILD.bazel?
Or are my options:
To banish the name build from the entire repository.
To add a gigantic --deleted_packages=<...> to every run of build.
To not use full builds, instead specifying explicit targets.
I think this is a duplicate of the two questions you linked, but to expand on what you asked about in your comment:
You don't have to rename them BUILD.bazel, my suggestion is to add an empty BUILD.bazel to those directories. So you'd end up with:
my-project/
  BUILD
  src/
    build/
      stuff-bazel-shouldn't-mess-with
      BUILD.bazel  # Empty
Then Bazel will check for targets in BUILD.bazel, see that there are none, and won't try to parse the build/ directory.
And there is a distressing lack of documentation about BUILD vs. BUILD.bazel, at least that I could find.
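Worth noting: newer Bazel versions also support a .bazelignore file at the workspace root, which may answer the "settings configuration file" part of the question directly. The directory names below are placeholders:

```
# .bazelignore at the workspace root: one directory per line,
# relative to the workspace root. Bazel skips these entirely.
src/build
third_party/generated
```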