I want to use rules_pkg
I have the following setup: Windows 10 x64 (version 2004), Bazel 3.7.0, Visual Studio 16 2019, MSYS2 x86_64.
My minimal setup looks like this:
WORKSPACE.bazel
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# rules_pkg
http_archive(
    name = "rules_pkg",
    sha256 = "6b5969a7acd7b60c02f816773b06fcf32fbe8ba0c7919ccdc2df4f8fb923804a",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.3.0/rules_pkg-0.3.0.tar.gz",
        "https://github.com/bazelbuild/rules_pkg/releases/download/0.3.0/rules_pkg-0.3.0.tar.gz",
    ],
)
BUILD.bazel
load("#bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")
cc_binary(
name = "HelloWorld",
srcs = ["main.cpp"],
)
pkg_tar(
name = "deploy_HelloWorld",
srcs = [
":HelloWorld",
],
extension = "tar.gz",
)
main.cpp
#include <iostream>
int main() {
std::cout << "Hello World!" << std::endl;
}
When I try to build, e.g. bazel build //..., I get:
PS G:\dev\BazelDemos\HelloWorld> bazel build //...
INFO: Analyzed 2 targets (20 packages loaded, 143 targets configured).
INFO: Found 2 targets...
ERROR: G:/dev/bazeldemos/helloworld/BUILD.bazel:8:8: PackageTar deploy_HelloWorld.tar.gz failed (Exit 9009): build_tar.exe failed: error executing command bazel-out/host/bin/external/bazel_tools/tools/build_defs/pkg/build_tar.exe --flagfile bazel-out/x64_windows-fastbuild/bin/deploy_HelloWorld.args
INFO: Elapsed time: 0.642s, Critical Path: 0.29s
INFO: 8 processes: 7 internal, 1 local.
FAILED: Build did NOT complete successfully
I can build without problems on two other Windows 10 machines with the same/similar setup. Any ideas?
More details about the setup:
Path contains C:\msys64\usr\bin. BAZEL_SH is set to C:\msys64\usr\bin\bash.exe.
Python 3 was not installed and therefore not in my Path variable. Exit 9009 usually refers to an error triggered by a batch script that fails to call a specific command.
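For completeness: the BUILD file above still loads pkg_tar from the deprecated @bazel_tools//tools/build_defs/pkg location, whose build_tar helper is implemented in Python, while the WORKSPACE already fetches rules_pkg. A sketch of what loading the rules from the fetched archive would look like is below; the exact load paths follow my reading of the rules_pkg 0.3.0 setup instructions and are an assumption, not something verified on this machine:

# WORKSPACE.bazel (sketch, after the http_archive above)
load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()

# BUILD.bazel (sketch)
load("@rules_pkg//:pkg.bzl", "pkg_tar")  # instead of @bazel_tools//tools/build_defs/pkg:pkg.bzl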
I am trying to use an old version of the JDK (7) in Bazel in order to run a java_binary output as a tool in the compilation process.
Starting from the example code and following the Bazel documentation on configuring Java toolchains,
I created a WORKSPACE that fetches the remote JDK:
load("#bazel_tools//tools/jdk:remote_java_repository.bzl", "remote_java_repository")
remote_java_repository(
name = "remotejdk",
prefix = "remotejdk", # Can be used with --java_runtime_version=openjdk_canary_11
version = "7", # or --java_runtime_version=11
exec_compatible_with = [ # Specifies constraints this JVM is compatible with "#platforms//cpu:arm",
"#platforms//os:linux",
"#platforms//cpu:x86_64"
],
urls=["https://download.java.net/openjdk/jdk7u75/ri/openjdk-7u75-b13-linux-x64-18_dec_2014.tar.gz"],
sha256 = "56d84d0bfc8c1194d501c889765a387e949d6a063feef6608e5e12b8152411fb")
and a BUILD file that defines the toolchain:
load("#rules_java//java:defs.bzl", "java_binary", "java_library")
load("//:create_file.bzl", "call_java_binary")
load(
"#bazel_tools//tools/jdk:default_java_toolchain.bzl",
"default_java_toolchain", "VANILLA_TOOLCHAIN_CONFIGURATION"
)
default_java_toolchain(
name = "repository_default_toolchain",
configuration = VANILLA_TOOLCHAIN_CONFIGURATION, # One of predefined configurations
java_runtime = "#remotejdk//:jdk", # JDK to use for compilation and toolchain's tools execution
jvm_opts = [],
source_version = "7",
)
call_java_binary(
name = "CreateFile",
)
java_binary(
name = "ProjectRunner",
srcs = ["src/main/java/com/example/ProjectRunner.java"],
main_class = "com.example.ProjectRunner",
deps = [":greeter"],
)
java_library(
name = "greeter",
srcs = ["src/main/java/com/example/Greeting.java"],
visibility = ["//src/main/java/com/example/cmdline:__pkg__"],
)
When I try to specify //:repository_default_toolchain and the language version as described in the documentation:
bazel build //:CreateFile --extra_toolchains=//:repository_default_toolchain_definition --java_language_version=7 --java_runtime_version=7
the toolchain is discarded with the error:
Type @bazel_tools//tools/jdk:runtime_toolchain_type: target platform @local_config_platform//:host: Rejected toolchain @remotejdk//:jdk; mismatching config settings: prefix_version_setting
and it falls back to openjdk11_linux again.
With bazel query --output=build "@remotejdk_toolchain_config_repo//:toolchain" 2>/dev/null I looked up the value of prefix_version_setting, and it is remotejdk_7.
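Given that value, and since the setting name appears to be just the prefix joined to the version, one spelling of the runtime flag that seems plausible (an assumption on my part, not something the documentation I followed confirms) would be:

bazel build //:CreateFile --extra_toolchains=//:repository_default_toolchain_definition --java_language_version=7 --java_runtime_version=remotejdk_7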
What is the proper way to set up JDK 7 as a toolchain?
The code is the Java example from bazel/examples.
Best regards
We are seeing duplicate builds of the same target in Bazel and wondering what could cause this.
Here is a sample output:
[52,715 / 55,135] 12 action running
Bazel package: some-pkg - Target: a_target - Generating files at bazel-out/host/bin/some-pkg/a_target_generate [for host]; 264s remote-cache, processwrapper-sandbox
Bazel package: some-pkg - Target: a_target - Generating files at bazel-out/k8-fastbuild/bin/some-pkg/a_target_generate; 264s remote-cache, processwrapper-sandbox
...
We have not been able to identify the issue. It looks like this is only happening on Linux but not on Macs.
The target a_target is a custom_rule target. It should be platform independent.
custom_rule = rule(
    attrs = dict(
        ...
        _custom_rule_java_binary = attr.label(
            cfg = "host",
            default = Label("//tools/bazel/build/rules/custom-rule:custom_rule_bin"),
            executable = True,
        ),
        _singlejar = attr.label(
            cfg = "host",
            default = Label("@bazel_tools//tools/jdk:singlejar"),
            executable = True,
            allow_files = True,
        ),
    ),
    implementation = ...,
)
custom_rule_bin is defined as follows:
java_library(
    name = "custom_rule",
    srcs = glob(["src/main/java/**/*.java"]),
    deps = [
        ...,
    ],
)
java_binary(
    name = "custom_rule_bin",
    visibility = ["//visibility:public"],
    main_class = "...",
    runtime_deps = [
        "@org_slf4j_simple",
        ":custom_rule",
        "//some-pkg:some_pkg",  # same some-pkg where a_target is built twice
    ],
)
The difference is that one says "for host" and the other doesn't. Does anyone know what the extra "for host" build is?
I do have a feeling that it's somehow related to the cfg attribute on the custom rule, which likely came from some example code. We use the same value on all our rules that generate code. This custom rule is special because it requires code from the application being built by Bazel in order to run and generate additional code.
Any insight into why host would be wrong and what the correct value would be is appreciated.
Any ideas/tips on how to debug this?
First, one note is that the host configuration is being mostly deprecated, and "exec" is usually preferred. Some info about that is here: https://bazel.build/rules/rules#configurations.
What's happening is that the target is being depended upon in multiple configurations, so Bazel will build it once per configuration. You can use cquery to figure out what's going on.
As a very simple example:
genrule(
    name = "gen_bin",
    outs = ["bin"],
    srcs = [":gen_lib"],
    exec_tools = [":gen_tool"],
    cmd = "touch $@",
)
genrule(
    name = "gen_tool",
    outs = ["tool"],
    srcs = [":gen_lib"],
    cmd = "touch $@",
)
genrule(
    name = "gen_lib",
    outs = ["lib"],
    cmd = "touch $@; sleep 10",
)
When building bin, Bazel runs the gen_lib genrule twice (in parallel):
$ bazel build bin
INFO: Analyzed target //:bin (5 packages loaded, 16 targets configured).
INFO: Found 1 target...
[1 / 5] 2 actions running
Executing genrule //:gen_lib; 1s linux-sandbox
Executing genrule //:gen_lib; 1s linux-sandbox
bazel config gives the configurations that are currently in the in-memory build graph:
$ bazel config
Available configurations:
5b39bc31deb1f1d37f1f858e7eec3964394eacce5bede4456dd59d417af4a6e9 (exec)
723da02ae6d0c5577e98242c8f06ca1bd1c6d7b295c97345ac31b844bfe8f79c
8960923b9e7dc13418be101268efd8e57d80283213d18174705345598b699c6b
fd805cc1de357c04c7abac1b40bae600e3d9ee56a8d17af0c28b5031ca09bfb9 (host)
then cquery:
$ bazel cquery "rdeps(//..., gen_lib)"
INFO: Analyzed 3 targets (0 packages loaded, 1 target configured).
INFO: Found 3 targets...
//:gen_lib (5b39bc3)
//:gen_lib (8960923)
//:gen_tool (5b39bc3)
//:gen_bin (8960923)
//:gen_tool (8960923)
INFO: Elapsed time: 0.052s
INFO: 0 processes.
INFO: Build completed successfully, 0 total actions
(cquery gives the first 7 digits of the configuration hash)
--output=graph gives a dot graph which is a little more useful:
$ bazel cquery "rdeps(//..., gen_lib)" --output=graph > /tmp/graph
$ xdot /tmp/graph
So gen_bin is in the target configuration (8960923), and it depends on gen_lib, so gen_lib will also be built in the target configuration.
gen_bin also depends on gen_tool via the exec_tools attribute, and exec_tools builds everything in the exec configuration (5b39bc3).
gen_tool also depends on gen_lib, and since gen_tool is in the exec configuration, a version of gen_lib is built in the exec configuration.
(There's also another version of gen_tool in the target configuration in the output, and that's an artifact of using //... in the "universe" argument of rdeps(), since //... will capture every target. Similarly, doing bazel build //... would cause gen_tool to be built twice.)
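Translating that back to the custom rule from the question, a minimal sketch of the change implied by the deprecation note would be to declare the tool attributes with cfg = "exec" instead of cfg = "host" (assuming the Bazel version in use accepts the "exec" string here; whether the duplicate build disappears still depends on whether //some-pkg:some_pkg is also needed in the target configuration):

_custom_rule_java_binary = attr.label(
    # Build the code generator for the execution platform rather than the
    # legacy host configuration.
    cfg = "exec",
    default = Label("//tools/bazel/build/rules/custom-rule:custom_rule_bin"),
    executable = True,
),
_singlejar = attr.label(
    cfg = "exec",
    default = Label("@bazel_tools//tools/jdk:singlejar"),
    executable = True,
    allow_files = True,
),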
I'm using Bazel on my CI server for building and testing my C++ library, but I can't retrieve the generated report/log files.
I wonder if there is a way to refer to those generated files for further use inside a genrule, which would let me post-process them (generate HTML, ...)?
Bazel execution:
$ bazel test //unit:tests
INFO: Analyzed 2 targets (21 packages loaded, 400 targets configured).
INFO: Found 2 test targets...
INFO: Elapsed time: 29,326s, Critical Path: 6,86s
INFO: 22 processes: 22 darwin-sandbox.
INFO: Build completed successfully, 29 total actions
//unit:tests_a PASSED in 0.7s
//unit:tests_b PASSED in 0.7s
Executed 2 out of 2 tests: 2 tests pass.
INFO: Build completed successfully, 29 total actions
Generated reports:
$ find bazel-out/ -name '*.xml' -or -name '*.log'
bazel-out//darwin-fastbuild/testlogs/unit/tests_a/test.log
bazel-out//darwin-fastbuild/testlogs/unit/tests_a/test.xml
bazel-out//darwin-fastbuild/testlogs/unit/tests_b/test.log
bazel-out//darwin-fastbuild/testlogs/unit/tests_b/test.xml
WORKSPACE:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
    name = "gtest",
    url = "https://github.com/google/googletest/archive/release-1.10.0.zip",
    sha256 = "94c634d499558a76fa649edb13721dce6e98fb1e7018dfaeba3cd7a083945e91",
    strip_prefix = "googletest-release-1.10.0",
)
unit/BUILD:
load("@rules_cc//cc:defs.bzl", "cc_test")
test_suite(name = "tests", tests = glob(["tests_*"]))
cc_test(name = "tests_a", srcs = ["ut.cc"], deps = ["@gtest//:gtest_main"])
cc_test(name = "tests_b", srcs = ["ut.cc"], deps = ["@gtest//:gtest_main"])
unit/ut.cc:
#include "gtest/gtest.h"
TEST(HelloTest, GetGreet) {
  EXPECT_EQ(1, 1);
}
I think you can save the report generated by bazel test via the path indicated by the env var TEST_UNDECLARED_OUTPUTS_DIR. bazel test will save your report in bazel-out/k8-fastbuild/testlogs////<test_binary_target>/test.outputs/outputs.zip
Check this out: https://docs.bazel.build/versions/master/test-encyclopedia.html#initial-conditions
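As a sketch of how a test can put extra files there (the file name extra_report.txt is made up for illustration): Bazel exports TEST_UNDECLARED_OUTPUTS_DIR to every test process and afterwards zips that directory's contents into test.outputs/outputs.zip next to test.log and test.xml, so the unit test above could do something like:

#include <cstdlib>
#include <fstream>
#include <string>

#include "gtest/gtest.h"

TEST(HelloTest, GetGreet) {
  EXPECT_EQ(1, 1);
  // TEST_UNDECLARED_OUTPUTS_DIR points at a writable directory; anything
  // written here ends up in test.outputs/outputs.zip after the run.
  const char* dir = std::getenv("TEST_UNDECLARED_OUTPUTS_DIR");
  if (dir != nullptr) {
    std::ofstream out(std::string(dir) + "/extra_report.txt");
    out << "post-processable report contents\n";
  }
}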
I am trying to run a "hello world" server in Spark building it with Bazel, but I am getting this error:
$ bazel run //:app
INFO: Analysed target //:app (0 packages loaded).
INFO: Found 1 target...
Target //:app up-to-date:
bazel-bin/app.jar
bazel-bin/app
INFO: Elapsed time: 0.201s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
at spark.Service.<clinit>(Service.java:56)
at spark.Spark$SingletonHolder.<clinit>(Spark.java:51)
at spark.Spark.getInstance(Spark.java:55)
at spark.Spark.<clinit>(Spark.java:61)
at io.app.server.Main.main(Main.java:7)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 5 more
BUILD:
java_binary(
    name = "app",
    main_class = "io.app.server.Main",
    srcs = ["src/main/java/io/app/server/Main.java"],
    deps = [
        "@org_slf4j_slf4j_simple//jar",
        "@com_sparkjava_spark_core//jar",
    ],
)
The same error happens if I don't include slf4j, and it should not be a required dependency of spark.
WORKSPACE:
maven_jar(
    name = "com_sparkjava_spark_core",
    artifact = "com.sparkjava:spark-core:2.7.2",
)
maven_jar(
    name = "org_slf4j_slf4j_simple",
    artifact = "org.slf4j:slf4j-simple:1.7.21",
)
And finally, src/main/java/io/app/server/Main.java:
package io.app.server;

import static spark.Spark.*;

public class Main {
    public static void main(String[] args) {
        port(3000);
        get("/", (req, res) -> "Hello World");
    }
}
Any idea of what I could be doing wrong here?
Found what I was missing. It seems that maven_jar does not automatically fetch the "transitive dependencies" that the library itself has, see this.
Bazel only reads dependencies listed in your WORKSPACE file. If your
project (A) depends on another project (B) which list a dependency on
a third project (C) in its WORKSPACE file, you'll have to add both B
and C to your project's WORKSPACE file. This requirement can balloon
the WORKSPACE file size, but hopefully limits the chances of having
one library include C at version 1.0 and another include C at 2.0.
Large WORKSPACE files can be generated using the tool
generate_workspace. For details, see Generate external dependencies
from Maven projects.
So the solution seems to be to write a pom.xml and use generate_workspace.
EDIT: generate_workspace seems to be deprecated, use bazel_deps instead.
Another solution might be to use maven_install:
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "rules_jvm_external",
    commit = "22b463c485f31b240888c89d17e67c460d7e68c0",
    remote = "https://github.com/bazelbuild/rules_jvm_external.git",
)
load("#rules_jvm_external//:defs.bzl", "maven_install")
maven_install(
artifacts = [
"org.apache.spark:spark-core_2.12:3.1.2",
"org.apache.spark:spark-sql_2.12:3.1.2",
],
repositories = [
"https://repo.maven.apache.org/maven2/",
]
)
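With maven_install in place, the java_binary's deps would then point at targets in the generated @maven repository. As an illustration, using the question's original artifacts (com.sparkjava:spark-core and org.slf4j:slf4j-simple) rather than the Apache Spark ones listed above, and assuming rules_jvm_external's usual group_artifact target naming:

java_binary(
    name = "app",
    main_class = "io.app.server.Main",
    srcs = ["src/main/java/io/app/server/Main.java"],
    deps = [
        # maven_install resolves transitive dependencies, so slf4j's API jar
        # is pulled in alongside these.
        "@maven//:com_sparkjava_spark_core",
        "@maven//:org_slf4j_slf4j_simple",
    ],
)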
Problem
I'm trying to modify this tutorial for compiling a TensorFlow network for use with C++. It currently requires you to copy your network file into the TensorFlow source code so the dependencies can be found, but I'd rather not do that. Note that TensorFlow is also built with Bazel.
Here is my BUILD file:
cc_binary(
    name = "mnistpredict_keras",
    srcs = ["mnist_keras.cc", "MNIST.h"],
    deps = [
        "//tensorflow/core:tensorflow",
    ],
)
When I try to run $ bazel build :mnistpredict_keras I get the error:
ERROR: /home/saubin/git/tf-keras-speed-test/loadgraph/BUILD:17:1: no such package 'tensorflow/core': BUILD file not found on package path and referenced by '//:mnistpredict_keras'.
ERROR: Analysis of target '//:mnistpredict_keras' failed; build aborted.
INFO: Elapsed time: 0.105s
Obviously, the problem is that I'm trying to compile something in my folder ~/git/tf-keras-speed-test/loadgraph, but it can't find the dependency //tensorflow/core:tensorflow. How do I properly give the path to the dependency? The documentation for deps appears to be non-existent.
Attempted Solutions
Adding a local_repository to my WORKSPACE:
local_repository(
    name = "tensorflow",
    path = "/home/saubin/src/tensorflow",
)
This changed nothing.
Trying to pass the full path:
cc_binary(
    name = "mnistpredict_keras",
    srcs = ["mnist_keras.cc", "MNIST.h"],
    deps = [
        "/home/saubin/src/tensorflow/tensorflow/core:tensorflow",
    ],
)
But I get the same error:
ERROR: /home/saubin/git/tf-keras-speed-test/loadgraph/BUILD:17:1: no such package 'tensorflow/core': BUILD file not found on package path and referenced by '//:mnistpredict_keras'.
ERROR: Analysis of target '//:mnistpredict_keras' failed; build aborted.
INFO: Elapsed time: 0.287s
Marking TensorFlow as an external repository dependency:
cc_binary(
    name = "mnistpredict_keras",
    srcs = ["mnist_keras.cc", "MNIST.h"],
    deps = [
        "@tensorflow//tensorflow/core:tensorflow",
    ],
)
But this gives me this error:
WARNING: /home/saubin/.cache/bazel/_bazel_saubin/74f664e7cf53364557da8b57a716c919/external/tensorflow/WORKSPACE:1: Workspace name in /home/saubin/.cache/bazel/_bazel_saubin/74f664e7cf53364557da8b57a716c919/external/tensorflow/WORKSPACE (@org_tensorflow) does not match the name given in the repository's definition (@tensorflow); this will cause a build error in future versions.
ERROR: /home/saubin/git/tensorgraph/loadgraph/BUILD:1:1: error loading package '@tensorflow//tensorflow/core': Encountered error while reading extension file 'sycl/build_defs.bzl': no such package '@local_config_sycl//sycl': error loading package 'external': The repository named 'local_config_sycl' could not be resolved and referenced by '//:mnistpredict'.
ERROR: Analysis of target '//:mnistpredict' failed; build aborted.
INFO: Elapsed time: 0.326s
The documentation for external dependencies is https://bazel.build/versions/master/docs/external.html.
The syntax that you are looking for is @tensorflow//tensorflow/core:tensorflow.
Labels that start with // refer to the current repository. Labels that start with @reponame// refer to the reponame repository.
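Putting the two together for this setup, a sketch might look like the following (the repository name org_tensorflow is taken from the workspace-name warning above; whether a plain local_repository of the TensorFlow checkout resolves all of TensorFlow's own external dependencies, such as local_config_sycl, is a separate issue):

# WORKSPACE
local_repository(
    name = "org_tensorflow",  # matches the name declared in TensorFlow's own WORKSPACE
    path = "/home/saubin/src/tensorflow",
)

# BUILD
cc_binary(
    name = "mnistpredict_keras",
    srcs = ["mnist_keras.cc", "MNIST.h"],
    deps = [
        "@org_tensorflow//tensorflow/core:tensorflow",
    ],
)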