How to analyze unsuccessful builds in the analysis phase? - bazel

A Bazel binary that I am building fails during the analysis phase. What flags and tools can I use to debug why it fails during analysis?
Currently, clean builds return the following output:
ERROR: build interrupted
INFO: Elapsed time: 57.819 s
FAILED: Build did NOT complete successfully (133 packages loaded)
If I retry the build after it fails, I receive the following output:
ERROR: build interrupted
INFO: Elapsed time: 55.514 s
FAILED: Build did NOT complete successfully (68 packages loaded)
What flags can I use to identify:
what packages are being loaded,
what package the build is being interrupted on, and
whether the interruption is coming from a timeout or an external process?
Essentially, I am looking for something similar to --verbose_failures, but for the analysis phase rather than the execution phase.
So far I have run my build through the build profiler and have not been able to glean any insight. Here is the profiler output for my build:
WARNING: This information is intended for consumption by Blaze developers only, and may change at any time. Script against it at your own risk
INFO: Loading /<>/result
INFO: bazel profile for <> at Mon Jun 04 00:10:11 GMT 2018, build ID: <>, 49405 record(s)
INFO: Aggregating task statistics
=== PHASE SUMMARY INFORMATION ===
Total launch phase time 9.00 ms 0.02%
Total init phase time 91.0 ms 0.16%
Total loading phase time 1.345 s 2.30%
Total analysis phase time 57.063 s 97.53%
Total run time 58.508 s 100.00%
=== INIT PHASE INFORMATION ===
Total init phase time 91.0 ms
Total time (across all threads) spent on:
Type Total Count Average
=== LOADING PHASE INFORMATION ===
Total loading phase time 1.345 s
Total time (across all threads) spent on:
Type Total Count Average
CREATE_PACKAGE 0.67% 9 3.55 ms
VFS_STAT 0.69% 605 0.05 ms
VFS_DIR 0.96% 255 0.18 ms
VFS_OPEN 2.02% 8 12.1 ms
VFS_READ 0.00% 5 0.01 ms
VFS_GLOB 23.74% 1220 0.93 ms
SKYFRAME_EVAL 24.44% 3 389 ms
SKYFUNCTION 36.95% 8443 0.21 ms
SKYLARK_LEXER 0.19% 31 0.29 ms
SKYLARK_PARSER 0.68% 31 1.04 ms
SKYLARK_USER_FN 0.03% 5 0.27 ms
SKYLARK_BUILTIN_FN 5.91% 349 0.81 ms
=== ANALYSIS PHASE INFORMATION ===
Total analysis phase time 57.063 s
Total time (across all threads) spent on:
Type Total Count Average
CREATE_PACKAGE 0.30% 138 3.96 ms
VFS_STAT 0.05% 2381 0.03 ms
VFS_DIR 0.19% 1020 0.35 ms
VFS_OPEN 0.04% 128 0.61 ms
VFS_READ 0.00% 128 0.01 ms
VFS_GLOB 0.92% 3763 0.45 ms
SKYFRAME_EVAL 31.13% 1 57.037 s
SKYFUNCTION 65.21% 32328 3.70 ms
SKYLARK_LEXER 0.01% 147 0.10 ms
SKYLARK_PARSER 0.03% 147 0.39 ms
SKYLARK_USER_FN 0.20% 343 1.08 ms
As for my command, I am running:
bazel build src:MY_TARGET --embed_label MY_LABEL --stamp --show_loading_progress
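(For reference, the profile above was produced roughly as follows, using Bazel's --profile flag plus the analyze-profile command; the output path mirrors the redacted placeholder in the log:)
bazel build src:MY_TARGET --embed_label MY_LABEL --stamp --profile=/<>/result
bazel analyze-profile /<>/result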

Use the --host_jvm_debug startup flag to debug Bazel itself during a build.
From https://bazel.build/contributing.html:
Debugging Bazel
Start creating a debug configuration for both C++ and
Java in your .bazelrc with the following:
build:debug -c dbg
build:debug --javacopt="-g"
build:debug --copt="-g"
build:debug --strip="never"
Then you can rebuild Bazel with bazel build --config debug //src:bazel and use your favorite debugger to start debugging.
For debugging the C++ client you can just run it from gdb or lldb as
you normally would. But if you want to debug the Java code, you must
attach to the server using the following:
Run Bazel with debugging option --host_jvm_debug before the command (e.g., bazel --batch --host_jvm_debug build //src:bazel).
Attach a debugger to the port 5005. With jdb for instance, run jdb -attach localhost:5005. From within Eclipse, use the remote
Java application launch configuration.
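Putting those two steps together, a minimal session looks roughly like this (jdb is just one option; any remote JVM debugger attached to port 5005 works):
bazel --host_jvm_debug build //src:bazel   # starts the build with the server JVM listening for a debugger
jdb -attach localhost:5005                 # run in a second terminal to attach to port 5005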
Our IntelliJ plugin has built-in debugging support

Related

Effect of --test_env and --test_arg on bazel cache

I'm naively passing some variable test metadata to some py_test targets in order to inject that metadata into test result artifacts that later get uploaded to the cloud. I'm doing so using either --test_env or --test_arg at the bazel test invocation.
Would this variable data negatively affect the way test results are cached such that running the same test back to back would effectively disturb the bazel cache?
Command Line Inputs
Command line inputs can indeed disturb cache hits. Consider the following set of executions:
BUILD file
py_test(
    name = "test_inputs",
    srcs = ["test_inputs.py"],
    deps = [
        ":conftest",
        "#pytest",
    ],
)

py_library(
    name = "conftest",
    srcs = ["conftest.py"],
    deps = [
        "#pytest",
    ],
)
Test module
import sys

import pytest

def test_pass():
    assert True

def test_arg_in(request):
    assert request.config.getoption("--metadata")

if __name__ == "__main__":
    args = sys.argv[1:]
    ret_code = pytest.main([__file__, "--log-level=ERROR"] + args)
    sys.exit(ret_code)
First execution
$ bazel test //bazel_check:test_inputs --test_arg --metadata=abc
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.40s
INFO: Critical path 0.57s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.72s (preparation 0.12s, execution 0.60s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.4s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Second execution: same argument value, cache hit!
$ bazel test //bazel_check:test_inputs --test_arg --metadata=abc
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 1 process: 1 internal (100.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.00s
INFO: Critical path 0.47s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.61s (preparation 0.12s, execution 0.49s)
INFO: Build completed successfully, 1 total action
//bazel_check:test_inputs (cached) PASSED in 0.4s
Executed 0 out of 1 test: 1 test passes.
INFO: Build completed successfully, 1 total action
Third execution: new argument value, no cache hit
$ bazel test //bazel_check:test_inputs --test_arg --metadata=kk
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 93 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.30s
INFO: Critical path 0.54s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.71s (preparation 0.14s, execution 0.57s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Fourth execution: reused same argument as first two runs
Interestingly enough, there is no cache hit even though this exact argument value produced a cached result earlier; that earlier entry did not persist.
$ bazel test //bazel_check:test_inputs --test_arg --metadata=abc
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.34s
INFO: Critical path 0.50s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.71s (preparation 0.17s, execution 0.55s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
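To confirm that the --test_arg value is part of the test action itself (and therefore part of its cache key), one option is to dump the execution log for two invocations and diff them. This is only a sketch, assuming a Bazel version that supports --execution_log_json_file; the file paths are arbitrary:
$ bazel test //bazel_check:test_inputs --test_arg --metadata=abc --execution_log_json_file=/tmp/exec_abc.json
$ bazel test //bazel_check:test_inputs --test_arg --metadata=kk --execution_log_json_file=/tmp/exec_kk.json
$ diff /tmp/exec_abc.json /tmp/exec_kk.json   # the differing --metadata value shows up in the test action's arguments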
Environment Inputs
The exact same behavior applies to --test_env inputs:
import os
import sys

import pytest

def test_pass():
    assert True

def test_env_in():
    assert os.environ.get("META_ENV")

if __name__ == "__main__":
    args = sys.argv[1:]
    ret_code = pytest.main([__file__, "--log-level=ERROR"] + args)
    sys.exit(ret_code)
First execution
$ bazel test //bazel_check:test_inputs --test_env META_ENV=33
INFO: Build option --test_env has changed, discarding analysis cache.
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 7285 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.29s
INFO: Critical path 0.66s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 1.26s (preparation 0.42s, execution 0.84s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Second execution: same env value, cache hit!
$ bazel test //bazel_check:test_inputs --test_env META_ENV=33
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
INFO: 1 process: 1 internal (100.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.00s
INFO: Critical path 0.49s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 0.67s (preparation 0.15s, execution 0.52s)
INFO: Build completed successfully, 1 total action
//bazel_check:test_inputs (cached) PASSED in 0.3s
Executed 0 out of 1 test: 1 test passes.
INFO: Build completed successfully, 1 total action
Third execution: new env value, no cache hit
$ bazel test //bazel_check:test_inputs --test_env META_ENV=44
INFO: Build option --test_env has changed, discarding analysis cache.
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 7285 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.29s
INFO: Critical path 0.62s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 1.22s (preparation 0.39s, execution 0.83s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions
Fourth execution: reused same env value as first two runs
$ bazel test //bazel_check:test_inputs --test_env META_ENV=33
INFO: Build option --test_env has changed, discarding analysis cache.
INFO: Analyzed target //bazel_check:test_inputs (0 packages loaded, 7285 targets configured).
INFO: Found 1 test target...
INFO: 2 processes: 1 internal (50.00%), 1 local (50.00%).
INFO: Cache hit rate for remote actions: -- (0 / 0)
INFO: Total action wall time 0.28s
INFO: Critical path 0.66s (setup 0.00s, action wall time 0.00s)
INFO: Elapsed time 1.25s (preparation 0.40s, execution 0.85s)
INFO: Build completed successfully, 2 total actions
//bazel_check:test_inputs PASSED in 0.3s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 2 total actions

Thingsboard Performance Test - Gatling

I'm trying to do a performance test using the https://github.com/thingsboard/gatling-mqtt project.
Although the current versions of the tools are a bit different, I managed to get it running.
I've tested a couple of basic scripts with Gatling using HTTP, and Gatling worked as expected.
But when using the scripts provided with the tool, which use the MQTT plugin, it does not work. In fact it runs, but it doesn't do anything: no connections, no logs, no reports, no errors.
In the other tests the global OK count increments as the test progresses, and logs and reports are generated. But as you can see below, when running with the MQTT plugin it doesn't increment the count.
Simulation MqttSimulation_localhost started...
================================================================================
2021-10-14 17:55:02 5s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=0 KO=0 )
---- MQTT Test -----------------------------------------------------------------
[--------------------------------------------------------------------------] 0%
waiting: 0 / active: 10 / done:0
================================================================================
================================================================================
2021-10-14 17:55:07 10s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=0 KO=0 )
---- MQTT Test -----------------------------------------------------------------
[--------------------------------------------------------------------------] 0%
waiting: 0 / active: 10 / done:0
================================================================================
================================================================================
2021-10-14 17:55:12 15s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=0 KO=0 )
---- MQTT Test -----------------------------------------------------------------
[--------------------------------------------------------------------------] 0%
waiting: 0 / active: 10 / done:0
================================================================================
================================================================================
2021-10-14 17:55:17 20s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=0 KO=0 )
... and it goes on forever
Any ideas on what could be happening?
I would certainly appreciate any help on this!
Thank you!

SonarQube Scanner not sending result to SonarQube after executing in a Jenkins job

After SonarQube Scanner was executed by a Jenkins job, the analysis result is not being uploaded to the SonarQube server:
00:15:23.616 INFO: ANALYSIS SUCCESSFUL
00:15:23.621 DEBUG: Post-jobs : GitHub Pull Request Issue Publisher (wrapped)
00:15:23.621 INFO: Executing post-job GitHub Pull Request Issue Publisher (wrapped)
In one of my other Jenkins jobs, the result is actually uploaded to SonarQube, with log output like the following, which is missing in the case above:
INFO: Sensor CPD Block Indexer (done) | time=0ms
INFO: 20 files had no CPD blocks
INFO: Calculating CPD for 46 files
INFO: CPD calculation finished
INFO: Analysis report generated in 312ms, dir size=1 MB
INFO: Analysis reports compressed in 227ms, zip size=561 KB
INFO: Analysis report uploaded in 256ms
Is there anything I can do to fix this?

Benchmarking with Siege always returns zero hits and zero failed transactions

All,
I'm learning to do benchmark testing using the Siege tool against our Rails app. I'm running Siege on my OS X box against a website hosted on another server. When I run it I always get zero hits and zero failed transactions no matter which site I run it against.
Because of the limitations of OS X, I've configured more ports with sudo sysctl -w net.inet.tcp.msl=1000 and more open files with launchctl limit maxfiles 10000 10000. I've also configured proxy variables in .siegerc since I'm running behind a proxy. Sample command line is:
siege -c1 -b -t10S 'www.google.com'
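(For context, the proxy settings referred to above are the proxy entries in .siegerc, along these lines; the exact directive names should be checked against your siegerc template, and the host/port values here are placeholders:)
proxy-host = proxy.example.com
proxy-port = 3128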
But no matter which website I hit, I always get zero hits, zero failures and zero successful transactions. The Siege log file does not show any errors. What am I doing wrong?
siege -c1 -b -t10S 'www.google.com'
** SIEGE 3.0.5
** Preparing 1 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 9.84 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 0
Longest transaction: 0.00
Shortest transaction: 0.00

rails performance test warmup time

I am using a Rails performance test run as rake test:benchmark. The result gives me a warmup time.
I can't find the meaning of the 211 ms warmup time, and some of the tests take a longer warmup time. I know what wall_time, user_time, etc. mean.
.ApiTest#test_license_pool (211 ms warmup)
wall_time: 167 ms
user_time: 47 ms
memory: 6.2 MB
gc_runs: 0
gc_time: 0 ms
