I'm naively passing some variable test metadata to some py_test targets in order to inject that metadata into test result artifacts that are later uploaded to the cloud. I'm doing so with either --test_env or --test_arg on the bazel test invocation.
Would this variable data negatively affect how test results are cached, such that running the same test back to back would effectively disturb the Bazel cache?
Command Line Inputs
Command line inputs can indeed disturb cache hits. Consider the following sequence of executions.
BUILD file
py_test(
    name = "test_inputs",
    srcs = ["test_inputs.py"],
    deps = [
        ":conftest",
        "@pytest",
    ],
)

py_library(
    name = "conftest",
    srcs = ["conftest.py"],
    deps = [
        "@pytest",
    ],
)
Test module
import sys

import pytest

def test_pass():
    assert True

def test_arg_in(request):
    assert request.config.getoption("--metadata")

if __name__ == "__main__":
    args = sys.argv[1:]
    ret_code = pytest.main([__file__, "--log-level=ERROR"] + args)
    sys.exit(ret_code)
First execution
$ bazel test //bazel_check:test_inputs --test_arg --metadata=abc
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.40s
INFO: Critical path 0.57s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.72s (preparation 0.12s, execution 0.60s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.4s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Second execution: same argument value, cache hit!
$ bazel test //bazel_check:test_inputs --test_arg --metadata=abc
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 1 process: 1 internal (100.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.00s
INFO: Critical path 0.47s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.61s (preparation 0.12s, execution 0.49s)
INFO: Build completed successfully, 1 total action
//bazel_check:test_inputs (cached) PASSED in 0.4s
Executed 0 out of 1 test: 1 test passes.
INFO: Build completed successfully, 1 total action
Third execution: new argument value, no cache hit
$ bazel test //bazel_check:test_inputs --test_arg --metadata=kk
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 93 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.30s
INFO: Critical path 0.54s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.71s (preparation 0.14s, execution 0.57s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Fourth execution: reused same argument as first two runs
Interestingly enough, there is no cache hit even though this result was cached after the first run; the earlier entry did not persist. This is consistent with Bazel's local action cache keeping only the most recent result per action: the --metadata=kk run replaced the --metadata=abc entry, so switching back re-executes the test.
$ bazel test //bazel_check:test_inputs --test_arg --metadata=abc
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.34s
INFO: Critical path 0.50s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.71s (preparation 0.17s, execution 0.55s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
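A plausible mental model for the runs above (a toy sketch in Python, not Bazel's actual implementation): the test action's cache key digests its inputs together with the test args and env, and the local cache keeps only the latest entry per action, which is why flipping the argument back does not restore the earlier hit.

```python
import hashlib

def action_key(srcs_digest, test_args, test_env):
    """Toy model: the cache key mixes source digests with args and env."""
    h = hashlib.sha256()
    h.update(srcs_digest.encode())
    for arg in sorted(test_args):
        h.update(arg.encode())
    for k, v in sorted(test_env.items()):
        h.update(f"{k}={v}".encode())
    return h.hexdigest()

# One entry per action (keyed here by target name), mirroring a cache
# that only remembers the most recent execution.
cache = {}

def run_test(srcs_digest, args, env):
    key = action_key(srcs_digest, args, env)
    if cache.get("test_inputs") == key:
        return "cached"
    cache["test_inputs"] = key  # overwrite the previous entry
    return "executed"

print(run_test("d1", ["--metadata=abc"], {}))  # executed
print(run_test("d1", ["--metadata=abc"], {}))  # cached
print(run_test("d1", ["--metadata=kk"], {}))   # executed
print(run_test("d1", ["--metadata=abc"], {}))  # executed again: kk evicted abc
```

The four calls reproduce the four bazel invocations: same value hits, a new value misses, and returning to the old value misses again because the single slot was overwritten.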
Environment Inputs
The exact same behavior applies to --test_env inputs. (Note in the logs below that changing --test_env additionally discards the analysis cache.)
import os
import sys

import pytest

def test_pass():
    assert True

def test_env_in():
    assert os.environ.get("META_ENV")

if __name__ == "__main__":
    args = sys.argv[1:]
    ret_code = pytest.main([__file__, "--log-level=ERROR"] + args)
    sys.exit(ret_code)
First execution
$ bazel test //bazel_check:test_inputs --test_env META_ENV=33
INFO: Build option --test_env has changed, discarding analysis cache.
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 7285 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.29s
INFO: Critical path 0.66s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 1.26s (preparation 0.42s, execution 0.84s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Second execution: same env value, cache hit!
$ bazel test //bazel_check:test_inputs --test_env META_ENV=33
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 1 process: 1 internal (100.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.00s
INFO: Critical path 0.49s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.67s (preparation 0.15s, execution 0.52s)
INFO: Build completed successfully, 1 total action
//bazel_check:test_inputs (cached) PASSED in 0.3s
Executed 0 out of 1 test: 1 test passes.
INFO: Build completed successfully, 1 total action
Third execution: new env value, no cache hit
$ bazel test //bazel_check:test_inputs --test_env META_ENV=44
INFO: Build option --test_env has changed, discarding analysis cache.
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 7285 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.29s
INFO: Critical path 0.62s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 1.22s (preparation 0.39s, execution 0.83s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Fourth execution: reused same env value as first two runs
$ bazel test //bazel_check:test_inputs --test_env META_ENV=33
INFO: Build option --test_env has changed, discarding analysis cache.
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 7285 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.28s
INFO: Critical path 0.66s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 1.25s (preparation 0.40s, execution 0.85s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
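Since both mechanisms perturb the cache key, one way to keep back-to-back runs cached (a sketch of an approach, not something from the question) is to attach the variable metadata to the result artifacts in a post-processing step after bazel test, outside the test action entirely. A hypothetical example that annotates a JUnit-style test.xml before upload:

```python
import xml.etree.ElementTree as ET

def annotate(xml_path, metadata):
    """Append run metadata as <properties> to an existing test.xml."""
    tree = ET.parse(xml_path)
    root = tree.getroot()
    props = ET.SubElement(root, "properties")
    for key, value in metadata.items():
        ET.SubElement(props, "property", name=key, value=value)
    tree.write(xml_path)
```

Because the test action never sees the metadata, its cache key stays stable; only the upload step reads the annotated file.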
Related
I'm running Sonar in a Jenkins job. The analysis stage ends successfully, but after that the job gets stuck: there is nothing in the log, and after a few minutes I get an out-of-memory error and the job fails.
My sonar property file:
sonar.language=javascript
# sources
sonar.sources=src
sonar.exclusions=**/node_modules/**
# tests
sonar.tests=src
sonar.test.inclusions=**/*.test.js
# tests reports
sonar.testExecutionReportPaths=reports/test-reporter.xml
sonar.javascript.lcov.reportPaths=coverage/lcov.info
sonar.verbose=true
The log:
13:31:10 13:31:10.683 INFO: Analysis report generated in 522ms, dir size=4 MB
13:31:12 13:31:12.797 INFO: Analysis report compressed in 2114ms, zip size=2 MB
13:31:12 13:31:12.797 INFO: Analysis report generated in /my_reports_loc
13:31:12 13:31:12.797 DEBUG: Upload report
13:31:12 13:31:12.955 DEBUG: POST 200 http://my-sonar/api/ce/submit?projectKeymyProjt&projectName=projectNamet | time=157ms
13:31:12 13:31:12.958 INFO: Analysis report uploaded in 161ms
13:31:12 13:31:12.959 DEBUG: Report metadata written to /my_reports_loc
13:31:12 13:31:12.959 INFO: ANALYSIS SUCCESSFUL, you can browse http://my-sonar/dashboard?id=my-project
13:31:12 13:31:12.959 INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
13:31:12 13:31:12.959 INFO: More about the report processing at http://my-sonar/api/ce/task?id=my_id
13:31:12 13:31:12.964 DEBUG: eslint-bridge server will shutdown
13:31:13 13:31:13.208 DEBUG: stylelint-bridge server will shutdown
13:31:13 13:31:13.209 INFO: Analysis total time: 42.940 s
13:31:13 13:31:13.230 INFO: ------------------------------------------------------------------------
13:31:13 13:31:13.230 INFO: EXECUTION SUCCESS
13:31:13 13:31:13.230 INFO: ------------------------------------------------------------------------
13:31:13 13:31:13.230 INFO: Total time: 44.469s
13:31:13 13:31:13.369 INFO: Final Memory: 43M/1106M
13:31:13 13:31:13.369 INFO: ------------------------------------------------------------------------
[Pipeline] }
[Pipeline] //
[Pipeline] }
[Pipeline] //
13:33:47 java.lang.OutOfMemoryError: GC overhead limit exceeded
Finished: FAILURE
You need to increase the heap size.
The GC overhead error implies that Jenkins is thrashing in garbage collection: it is probably spending more time collecting garbage than doing useful work. This situation normally comes about when the heap is too small for the application.
This post will help if you need to know how to increase the heap size for Jenkins.
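For reference, a hypothetical example (the size is an assumption; tune it to your build node): the SonarScanner JVM heap can be raised via the SONAR_SCANNER_OPTS environment variable before the scan step, independently of the Jenkins JVM's own -Xmx setting.

```shell
# Hypothetical value; pick a size your build node can afford
export SONAR_SCANNER_OPTS="-Xmx2048m"
```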
During a bazel build, there's a bunch of text flying by that is temporarily displayed and then erased from the screen. This happens all across the build. I've tried a couple of redirection techniques, redirecting stderr to standard output, to no avail. I've also experimented with Bazel's verbose flags.
Question: is there any way to capture this fleeting console output Bazel generates? I'd like to at least study what information is being presented before it's taken away, more as a learning exercise and to gain familiarity.
These options should allow you to expand all the log messages generated by actions/tasks and redirect them to a file.
# .bazelrc
common --color=no
common --curses=yes
build --show_progress_rate_limit=0
build --show_task_finish
build --show_timestamps
build --worker_verbose
Setting --color=no and --show_progress_rate_limit=0 causes the progress messages to be expanded (and kept) in the terminal.
--curses=yes affects redirection (at least on my machine). The other flags just add more information to the log.
Example output (bash, bazel 1.0.0)
$> bazel build :my_project >& /tmp/bazel_build.log
$> cat /tmp/bazel_build.log
(11:22:46) INFO: Writing tracer profile to '.../command.profile.gz'
(11:22:46) INFO: Current date is 2019-11-01
(11:22:46) Loading: loading...
(11:22:46) Loading:
(11:22:46) Loading: 0 packages loaded
(11:22:46) Loading: 0 packages loaded
Fetching @bazel_tools; fetching
(11:22:46) Loading: 0 packages loaded
Fetching @bazel_tools; fetching
(11:22:46) Loading: 0 packages loaded
currently loading: path/to/my/project
(11:22:46) Analyzing: target //path/to/my/project:my_project (1 packages loaded)
[...]
(11:22:46) INFO: Analyzed target //path/to/my/project:my_project (14 packages loaded, 670 targets configured).
(11:22:46)
(11:22:46) INFO: Found 1 target...
(11:22:46)
(11:22:46) [0 / 1] [Prepa] BazelWorkspaceStatusAction stable-status.txt
(11:22:46) [1 / 13] [Prepa] //path/to/my/project:my_project
(11:22:46) [5 / 12] 3 actions, 0 running
[Prepa] @deps//:my_dependency
(11:22:46) [10 / 12] [Scann] Compiling path/to/my/project/main.cc
(11:22:46) [10 / 12] [Prepa] Compiling path/to/my/project/main.cc
(11:22:46) [10 / 12] .../project:my_project; 0s processwrapper-sandbox
(11:22:46) [11 / 12] [Prepa] Linking path/to/my/project/my_project
Target //path/to/my/project:my_project up-to-date:
(11:22:46) [12 / 12] checking cached actions
bazel-bin/path/to/my/project/my_project
(11:22:46) [12 / 12] checking cached actions
(11:22:46) INFO: Elapsed time: 0.493s, Critical Path: 0.29s
(11:22:46) [12 / 12] checking cached actions
(11:22:46) INFO: 2 processes: 2 processwrapper-sandbox.
(11:22:46) [12 / 12] checking cached actions
(11:22:46) INFO: Build completed successfully, 12 total actions
(11:22:46) INFO: Build completed successfully, 12 total actions
Hope this helps.
bazel build //... &> log.txt
&> does the job; it is bash shorthand for redirecting both stdout and stderr (the POSIX-portable spelling is >log.txt 2>&1).
On top of @dms's excellent suggestions, the --subcommands flag can be used to persist the exact command line Bazel invokes for each action execution.
I have a Jenkins job that is triggered when a PR is made to a Bitbucket repository. The job runs a SonarQube scan of the repository.
Then a SonarQube plugin called sonar-for-bitbucket posts a PR comment summarizing the analysis.
This part happens successfully.
This is my sonarqube configuration in jenkins.
This is the console output
...
INFO: ANALYSIS SUCCESSFUL
INFO: Executing post-job Sonar Plug-in for Bitbucket Cloud
INFO: [sonar4bitbucket] Plug-in is active and will analyze pull request with #199...
INFO: Task total time: 1:25.408 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 1:27.292s
INFO: Final Memory: 64M/827M
INFO: ------------------------------------------------------------------------
WARN: Unable to locate 'report-task.txt' in the workspace. Did the SonarScanner succedeed?
Finished: SUCCESS
However, when I go to the SonarQube UI, it doesn't show any analysis result.
I also tried running this from my MacBook with sonar-scanner and got the same result.
INFO: ANALYSIS SUCCESSFUL
INFO: Task total time: 35.944 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 2:50.449s
INFO: Final Memory: 28M/813M
INFO: ------------------------------------------------------------------------
I am trying to integrate Sonar with Jenkins locally. The build and the analysis both report success, but the job does not finish successfully: it shows the error below and reports out of memory.
INFO: ANALYSIS SUCCESSFUL, you can browse http://localhost:9000/dashboard/index/al-config-server
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at http://localhost:9000/api/ce/task?id=AWWunlqCAbj3m24yQM89
INFO: Task total time: 6.254 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 8.008s
INFO: Final Memory: 57M/313M
INFO: ------------------------------------------------------------------------
****** B A T C H R E C U R S I O N exceeds STACK limits ******
Recursion Count=593, Stack Usage=90 percent
****** B A T C H PROCESSING IS A B O R T E D ******
WARN: Found multiple 'report-task.txt' in the workspace. Taking the first one.
C:\Users\sreenath.reddy\.jenkins\workspace\al-config-server\.scannerwork\report-task.txt
C:\Users\sreenath.reddy\.jenkins\workspace\al-config-server\target\sonar\report-task.txt
WARN: Found multiple 'report-task.txt' in the workspace. Taking the first one.
C:\Users\sreenath.reddy\.jenkins\workspace\al-config-server\.scannerwork\report-task.txt
C:\Users\sreenath.reddy\.jenkins\workspace\al-config-server\target\sonar\report-task.txt
ERROR: SonarQube scanner exited with non-zero code: 255
Finished: FAILURE
A Bazel binary that I am building fails during the analysis phase. What flags and tools can I use to debug why it fails during analysis?
Currently, clean builds return the following output:
ERROR: build interrupted
INFO: Elapsed time: 57.819 s
FAILED: Build did NOT complete successfully (133 packages loaded)
If I retry the build after the failed completion, I receive the following output:
ERROR: build interrupted
INFO: Elapsed time: 55.514 s
FAILED: Build did NOT complete successfully (68 packages loaded)
What flags can I use to identify
what packages are being loaded
what package the build is being interrupted on
whether the interruption is coming from a timeout or an external process.
Essentially, something similar to --verbose_failures, but for the analysis phase rather than the execution phase.
So far I have run my build through the build profiler and have not been able to glean any insight. Here is the output of my build:
WARNING: This information is intended for consumption by Blaze developers only, and may change at any time. Script against it at your own risk
INFO: Loading /<>/result
INFO: bazel profile for <> at Mon Jun 04 00:10:11 GMT 2018, build ID: <>, 49405 record(s)
INFO: Aggregating task statistics
=== PHASE SUMMARY INFORMATION ===
Total launch phase time 9.00 ms 0.02%
Total init phase time 91.0 ms 0.16%
Total loading phase time 1.345 s 2.30%
Total analysis phase time 57.063 s 97.53%
Total run time 58.508 s 100.00%
=== INIT PHASE INFORMATION ===
Total init phase time 91.0 ms
Total time (across all threads) spent on:
Type Total Count Average
=== LOADING PHASE INFORMATION ===
Total loading phase time 1.345 s
Total time (across all threads) spent on:
Type Total Count Average
CREATE_PACKAGE 0.67% 9 3.55 ms
VFS_STAT 0.69% 605 0.05 ms
VFS_DIR 0.96% 255 0.18 ms
VFS_OPEN 2.02% 8 12.1 ms
VFS_READ 0.00% 5 0.01 ms
VFS_GLOB 23.74% 1220 0.93 ms
SKYFRAME_EVAL 24.44% 3 389 ms
SKYFUNCTION 36.95% 8443 0.21 ms
SKYLARK_LEXER 0.19% 31 0.29 ms
SKYLARK_PARSER 0.68% 31 1.04 ms
SKYLARK_USER_FN 0.03% 5 0.27 ms
SKYLARK_BUILTIN_FN 5.91% 349 0.81 ms
=== ANALYSIS PHASE INFORMATION ===
Total analysis phase time 57.063 s
Total time (across all threads) spent on:
Type Total Count Average
CREATE_PACKAGE 0.30% 138 3.96 ms
VFS_STAT 0.05% 2381 0.03 ms
VFS_DIR 0.19% 1020 0.35 ms
VFS_OPEN 0.04% 128 0.61 ms
VFS_READ 0.00% 128 0.01 ms
VFS_GLOB 0.92% 3763 0.45 ms
SKYFRAME_EVAL 31.13% 1 57.037 s
SKYFUNCTION 65.21% 32328 3.70 ms
SKYLARK_LEXER 0.01% 147 0.10 ms
SKYLARK_PARSER 0.03% 147 0.39 ms
SKYLARK_USER_FN 0.20% 343 1.08 ms
As for my command, I am running
bazel build src:MY_TARGET --embed_label MY_LABEL --stamp --show_loading_progress
Use the --host_jvm_debug startup flag to debug Bazel itself during a build.
From https://bazel.build/contributing.html:
Debugging Bazel
Start by creating a debug configuration for both C++ and
Java in your .bazelrc with the following:
build:debug -c dbg
build:debug --javacopt="-g"
build:debug --copt="-g"
build:debug --strip="never"
Then you can rebuild Bazel with bazel build --config debug //src:bazel and use your favorite debugger to start debugging.
For debugging the C++ client you can just run it from gdb or lldb as
you normally would. But if you want to debug the Java code, you must
attach to the server using the following:
Run Bazel with debugging option --host_jvm_debug before the command (e.g., bazel --batch --host_jvm_debug build //src:bazel).
Attach a debugger to port 5005. With jdb, for instance, run jdb -attach localhost:5005. From within Eclipse, use the remote
Java application launch configuration.
Our IntelliJ plugin has built-in debugging support.