As the Bazel test encyclopedia says, UTC is Bazel's time zone for tests. Is there any way I can make Bazel use the system time zone, i.e. the local one?
You can use --action_env to pass in environment values, e.g.,
bazel test --test_output=all --action_env=TZ=Local :tz_test
INFO: From Testing //:tz_test:
==================== Test output for //:tz_test:
timezone: Local
================================================================================
Target //:tz_test up-to-date:
bazel-bin/tz_test
INFO: Elapsed time: 1.034s, Critical Path: 0.05s
//:tz_test PASSED in 0.0s
Note that this won't "un-cache" the test result, so you have to run bazel clean first to get bazel to pick up the environment variable change. I filed a bug about that.
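To see what --action_env=TZ looks like from the test's point of view, here is a minimal sketch in Python (not Bazel-specific, just an illustration of the mechanism): the variable simply appears in the test process's environment, and libc-based time functions pick it up after tzset().

```python
import os
import time

# Simulate what --action_env=TZ=... does: the variable is present in
# the test process's environment. On Unix, tzset() makes the C library
# re-read TZ, so local-time functions follow it from then on.
os.environ["TZ"] = "UTC"
time.tzset()
print("timezone:", time.tzname[0])
```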
If the test is a java_test, you can use the user.timezone JVM flag as an attribute on your java_test rule. E.g.,
java_test(
    name = "Test",
    test_class = "TZTest",
    srcs = ["TZTest.java"],
    jvm_flags = ["-Duser.timezone=EST"],
)
You can confirm that this works with TimeZone.getDefault().getDisplayName().
I know that in a Dockerfile I can extend an existing Docker image using:
FROM python/python
RUN pip install request
But how to extend it in bazel?
I am not sure if I should use container_import, but with that I am getting the following error:
container_import(
    name = "postgres",
    base_image_registry = "some.artifactory.com",
    base_image_repository = "/existing-image:v1.5.0",
    layers = [
        "//docker/new_layer",
    ],
)
root@ba5cc0a3f0b7:/tcx# bazel build pkg:postgres-instance --verbose_failures --sandbox_debug
ERROR: /tcx/docker/postgres-operator/BUILD.bazel:12:17: in container_import rule //docker/postgres-operator:postgres:
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/2f47bbce04529f9da11bfed0fc51707c/external/io_bazel_rules_docker/container/import.bzl", line 98, column 35, in _container_import_impl
"config": ctx.files.config[0],
Error: index out of range (index is 0, but sequence has 0 elements)
ERROR: Analysis of target '//pkg:postgres-instance' failed; build aborted: Analysis of target '//docker/postgres-operator:postgres' failed
INFO: Elapsed time: 0.209s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded, 2 targets configured)
container_import is the correct rule to import an existing image. However, all it does is import; it doesn't pull the image from anywhere. I think you're looking for container_pull instead, which pulls an image from a repository and then automatically uses container_import to translate it for other rules_docker rules.
To add a new layer, use container_image, with base set to the imported image and tars set to the additional files you want to add. Or, if you want to add things in other formats, see the docs for alternatives to tars (like debs or files).
Putting it all together, something like this in your WORKSPACE:
container_pull(
    name = "postgres",
    registry = "some.artifactory.com",
    repository = "existing-image",
    tag = "v1.5.0",
)
and then this in a BUILD file:
container_image(
    name = "postgres_plus",
    base = "@postgres//image",
    tars = ["//docker/new_layer"],
)
The specific problem you're running into is that container_import.layers isn't for adding new layers; it's for specifying the layers of the image you're importing. If you're doing something unusual, you could fetch those layers some other way (http_archive, checked-in tar files, etc.) and specify them all by hand with container_import instead of using container_pull.
I am trying to use Bazel with Pybind, and it requires that I set the following variables:
"""Repository rule for Python autoconfiguration.
`python_configure` depends on the following environment variables:
* `PYTHON_BIN_PATH`: location of python binary.
* `PYTHON_LIB_PATH`: Location of python libraries.
"""
https://github.com/pybind/pybind11_bazel/blob/master/python_configure.bzl
I don't want to have to pass them in manually when building my libraries; how can I hardcode these env vars in my WORKSPACE?
To (always) set an environment variable for repository rule consumption, you can use the --repo_env command line option. And if you want to include it with every invocation in your workspace, you can add these flags to the .bazelrc file therein.
Now, the wisdom of doing that could be questioned. If it's actually project (repo) configuration and not build-host configuration, it would probably make more sense, and be more targeted and explicit, if it were an attribute of the given rule, checked in with the rest of the build configuration.
And looking at the name, there may be another question about specifying the Python configuration from outside the Bazel build instead of using a correctly resolved Python toolchain (but I have no background in what the given rule is trying to accomplish, so this is just a general comment).
To address your comment... I don't know what other factors make it "not accept" the variable or what exactly that looks like, but if I have this mini-example:
.
├── BUILD
├── WORKSPACE
└── customrule.bzl
Where customrule.bzl reads:
def _run_me(repo_ctx):
    repo_ctx.file(
        "WORKSPACE",
        'workspace(name = "{}")\n'.format(repo_ctx.name),
        executable = False,
    )
    repo_ctx.file(
        "BUILD",
        'exports_files(["var.sh"], visibility=["//visibility:public"])',
        executable = False,
    )
    repo_ctx.file(
        "var.sh",
        "echo {}\n".format(repo_ctx.os.environ.get("var1")),
        executable = True,
    )
wsrule = repository_rule(
    implementation = _run_me,
    environ = ["var1"],
)
The WORKSPACE is:
load(":customrule.bzl", "wsrule")
wsrule(
    name = "extdep",
)
And BUILD:
sh_binary(
    name = "tgt",
    srcs = ["@extdep//:var.sh"],
)
Then I do get:
$ bazel run --repo_env var1=val1 tgt
val1
and:
$ bazel run --repo_env var1=val2 tgt
val2
I.e. this is a way to pass variables to a repo rule and it does (as such) work.
If you absolutely know you must call a build with some variable set to a certain value (which, as mentioned above, is itself a requirement worth closer examination) and you want it associated with the project/repo, you can always check in a build.sh or similar wrapper that makes your bazel call exactly what it must be. But again, this is likely not entirely "The Right Thing" to do or want.
When running bazel test, the output contains only a summary of all the tests, including the total run time.
Running Bazel with performance profiling does not help either, because it does not report each test's time.
So how do I get the execution time of each test?
UPD:
I have a sample repo to reproduce my problem:
$ git clone https://github.com/MikhailTymchukFT/bazel-java
$ cd bazel-java
$ bazel test //:AllTests --test_output=all --test_summary=detailed
Starting local Bazel server and connecting to it...
INFO: Analyzed 2 targets (20 packages loaded, 486 targets configured).
INFO: Found 2 test targets...
INFO: From Testing //:GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:56 --
================================================================================
INFO: From Testing //:MainTest:
==================== Test output for //:MainTest:
JUnit4 Test Runner
.
Time: 0.016
OK (1 test)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:57 --
================================================================================
INFO: Elapsed time: 21.009s, Critical Path: 6.68s
INFO: 10 processes: 6 darwin-sandbox, 4 worker.
INFO: Build completed successfully, 18 total actions
Test cases: finished with 3 passing and 0 failing out of 3 test cases
INFO: Build completed successfully, 18 total actions
I can see the combined execution time of both tests in GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
but I cannot see the execution time of each individual test in this class/rule.
With --test_summary=short (the default value), the end of the output looks like this (lines for the other 325 tests truncated):
INFO: Elapsed time: 148.326s, Critical Path: 85.71s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%]
INFO: 680 processes: 666 linux-sandbox, 14 worker.
INFO: Build completed successfully, 724 total actions
//third_party/GSL/tests:no_exception_throw_test (cached) PASSED in 0.4s
//third_party/GSL/tests:notnull_test (cached) PASSED in 0.5s
//aos/events:shm_event_loop_test PASSED in 12.3s
Stats over 5 runs: max = 12.3s, min = 2.4s, avg = 6.3s, dev = 3.7s
//y2018/control_loops/superstructure:superstructure_lib_test PASSED in 2.3s
Stats over 5 runs: max = 2.3s, min = 1.3s, avg = 1.8s, dev = 0.4s
Executed 38 out of 329 tests: 329 tests pass.
INFO: Build completed successfully, 724 total actions
Confusingly, --test_summary=detailed doesn't include the times, even though the name sounds like it should have strictly more information.
For sharded tests, that output doesn't quite have every single test execution, but it does give statistics about them as shown above.
If you want to access the durations programmatically, the build event protocol has a TestResult.test_attempt_duration_millis field.
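For example, if you run bazel test with --build_event_json_file=bep.json, each testResult event carries that duration. A minimal sketch of extracting it (the field name follows the JSON rendering of the protocol, and a sample event is inlined here for illustration rather than read from disk):

```python
import json

# Sketch: pull per-test durations out of a build event JSON file
# produced with `bazel test --build_event_json_file=bep.json`.
# A sample event is inlined; in practice, iterate over the lines
# of bep.json instead.
sample_lines = [
    '{"id": {"testResult": {"label": "//:GreetingTest", "run": 1}},'
    ' "testResult": {"testAttemptDurationMillis": "17"}}',
]

for line in sample_lines:
    event = json.loads(line)
    result = event.get("testResult")
    if result and "testAttemptDurationMillis" in result:
        label = event["id"]["testResult"]["label"]
        print(label, "took", result["testAttemptDurationMillis"], "ms")
```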
Alternatively, using --test_output=all will print all the output from your actual test binaries, including the ones that pass. Many testing frameworks print a total execution time there.
There is a testlogs folder where you can find .xml files with the execution times of each test case.
The bazel-testlogs symlink points to the same location.
For my example, these files will be located at /private/var/tmp/_bazel_<user>/<some md5 hash>/execroot/<project name>/bazel-out/<kernelname>-fastbuild/testlogs/GreetingTest/test.xml
The content of that file is like this:
<?xml version='1.0' encoding='UTF-8'?>
<testsuites>
  <testsuite name='com.company.core.GreetingTest' timestamp='2020-04-07T09:58:28.409Z' hostname='localhost' tests='2' failures='0' errors='0' time='0.01' package='' id='0'>
    <properties />
    <testcase name='sayHiIsString' classname='com.company.core.GreetingTest' time='0.01' />
    <testcase name='sayHi' classname='com.company.core.GreetingTest' time='0.0' />
    <system-out />
    <system-err />
  </testsuite>
</testsuites>
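A small script can pull the per-testcase times out of such a file. Here is a sketch using Python's standard library (the XML content from the example above is inlined; in practice you would parse the file under bazel-testlogs instead):

```python
import xml.etree.ElementTree as ET

# Sketch: list per-testcase execution times from a bazel-testlogs
# test.xml file. The XML below mirrors the example content above.
xml_text = """<testsuites>
  <testsuite name='com.company.core.GreetingTest' tests='2' failures='0' errors='0' time='0.01'>
    <testcase name='sayHiIsString' classname='com.company.core.GreetingTest' time='0.01' />
    <testcase name='sayHi' classname='com.company.core.GreetingTest' time='0.0' />
  </testsuite>
</testsuites>"""

root = ET.fromstring(xml_text)
for case in root.iter("testcase"):
    print("{}.{}: {}s".format(case.get("classname"), case.get("name"), case.get("time")))
```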
I have configured a toolchain for distcc; compiler parameters are forwarded to distcc through a wrapper script.
distcc_wrapper_gcc.sh:
#!/bin/bash
set -euo pipefail
ccache distcc g++ "$@"
I want to start 240 parallel tasks, as I did before with 'make -j240'. The build command is:
bazel build --action_env=HOME --action_env=DISTCC_HOSTS="***" --config=joint_compilation --jobs=240 target
But the output is:
238 actions, 24 running
If I set --jobs lower than 24, the number of running actions equals it; otherwise, at most 24 processes run no matter what value I pass.
It really takes a long time to compile if there are only 24 actions running.
Is there a hard limit on the number of running actions? (This computer has 12 CPU cores with 2 threads per core.)
Is there a way to break or ignore this limit?
Below is the config content.
.bazelrc
# Create a new CROSSTOOL file for our toolchain.
build:joint_compilation --crosstool_top=//toolchain:distcc
# Use --cpu as a differentiator.
build:joint_compilation --cpu=joint_compilation
# Specify a "sane" C++ toolchain for the host platform.
build:joint_compilation --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
toolchain/BUILD
package(default_visibility = ["//visibility:public"])

cc_toolchain_suite(
    name = "distcc",
    toolchains = {
        "joint_compilation": ":joint_compilation_toolchain",
        "distcc|joint_compilation": ":joint_compilation_toolchain",
    },
)

filegroup(name = "empty")

filegroup(
    name = "all",
    srcs = ["distcc_wrapper_gcc.sh"],
)

cc_toolchain(
    name = "joint_compilation_toolchain",
    toolchain_identifier = "joint_compilation-toolchain",
    all_files = ":all",
    compiler_files = ":all",
    cpu = "distcc",
    dwp_files = ":empty",
    dynamic_runtime_libs = [":empty"],
    linker_files = ":all",
    objcopy_files = ":empty",
    static_runtime_libs = [":empty"],
    strip_files = ":empty",
    supports_param_files = 0,
)
toolchain/CROSSTOOL
major_version: "1"
minor_version: "0"
default_target_cpu: "joint_compilation"
toolchain {
  toolchain_identifier: "joint_compilation-toolchain"
  host_system_name: "i686-unknown-linux-gnu"
  target_system_name: "joint_compilation-unknown-distcc"
  target_cpu: "joint_compilation"
  target_libc: "unknown"
  compiler: "distcc"
  abi_version: "unknown"
  abi_libc_version: "unknown"
  tool_path {
    name: "gcc"
    path: "distcc_wrapper_gcc.sh"
  }
  tool_path {
    name: "g++"
    path: "distcc_wrapper_gcc.sh"
  }
  tool_path {
    name: "ld"
    path: "/usr/bin/ld"
  }
  tool_path {
    name: "ar"
    path: "/usr/bin/ar"
  }
  tool_path {
    name: "cpp"
    path: "distcc_wrapper_gcc.sh"
  }
  tool_path {
    name: "gcov"
    path: "/usr/bin/gcov"
  }
  tool_path {
    name: "nm"
    path: "/usr/bin/nm"
  }
  tool_path {
    name: "objdump"
    path: "/usr/bin/objdump"
  }
  tool_path {
    name: "strip"
    path: "/usr/bin/strip"
  }
  cxx_builtin_include_directory: "/usr/lib/gcc/"
  cxx_builtin_include_directory: "/usr/local/include"
  cxx_builtin_include_directory: "/usr/include"
}
Unless there's some integration between distcc and Bazel that I'm not aware of, Bazel thinks it is executing everything on the local machine and is therefore limited by the local machine's resources. There are flags to override Bazel's resource estimate (e.g. --local_cpu_resources), but instead I strongly recommend using Bazel as intended: when building remotely, that means using a REAPI-capable build farm.
At least two exist:

https://github.com/bazelbuild/bazel-buildfarm
- the official implementation, started by Uber, used by many
- two components in the architecture: server (scheduler) and worker
- cache can be stored on the workers, or in a gRPC-based cache (the latter seems to have seen little use so far)
- written in Java

https://github.com/EdSchouten/bazel-buildbarn
- written in Go
- somewhat different architecture (frontend/scheduler/worker)
- more flexible caching

I've tried the former a little, and am about to try the latter, partially due to the caching and partially due to the language: I find Go far easier to read (and write) than Java.
One of my Jenkins jobs executes MSTest. I am passing the following command to "Execute Windows batch command":
del TestResults.trx
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\MSTest.exe" /testcontainer:D:\Projects\Jenkins\TestResultVerificationFromJenkins\TestResultVerificationFromJenkins\bin\Debug\TestResultVerificationFromJenkins.dll /resultsfile:TestResults.trx /nologo /detail:stdout
At the time of execution, Console Output is displaying the following values:
Starting execution...
Results               Top Level Tests
-------               ---------------
Passed                TestResultVerificationFromJenkins.UnitTest1.PassTest
[stdout] = Test is passed*
1/1 test(s) Passed
Summary
Test Run Completed.
Passed 1
Total 1
Results file: C:\Program Files (x86)\Jenkins\jobs\JenkinsTestResultReader\workspace\TestResults.trx
Test Settings: Default Test Settings
In the post-build step, I have to pass the MSTest result "Test is passed" to an HTTP request.
Is there any way to save this result in a Jenkins variable so that I can pass it to the HTTP request?
Regards,
Umesh
Since you are in the postbuild step, would parsing the console output for the test result and sending it off to the HTTP Request be an option for you?
For example, using Groovy Postbuild plugin, you could write a small script that could do this.
Perhaps something like:
if (manager.build.logFile.text.indexOf("Test Run Completed. Passed") >= 0) {
    manager.listener.logger.println(new URL("http://localhost?parameter=Test+is+passed").getText())
}
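The same log-parsing idea, sketched in Python for illustration (the log text is inlined from the console output above; in a real job you would read the build log and then make the HTTP request with the extracted value):

```python
import re

# Sketch: extract the test's stdout message from the MSTest console
# output. The log text is inlined here for illustration.
log = """Starting execution...
Passed                TestResultVerificationFromJenkins.UnitTest1.PassTest
[stdout] = Test is passed*
1/1 test(s) Passed
Test Run Completed.
"""

match = re.search(r"\[stdout\] = (.+)", log)
if match and "Test Run Completed." in log:
    result = match.group(1).rstrip("*")
    print(result)  # value to pass on to the HTTP request
```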