bazel test after build still takes very long - bazel

I ran the following commands:
bazel test ...
bazel build ...
bazel test ...
It appears that the second test run did not take advantage of caching at all. What could be the reason?
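One hedged diagnostic sketch (the //... target pattern is a placeholder; the flags are standard Bazel options): check that test result caching is enabled and that all three invocations use identical options, since any difference in build or test flags invalidates the cached results.
# --cache_test_results=auto is the default; 'no' forces every test to rerun.
# Cached tests are reported as "(cached) PASSED" in the test summary.
bazel test //... --cache_test_results=auto --test_summary=short
# If the second run still re-executes everything, compare the exact flags of
# the build and test invocations - a changed option re-triggers the actions.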

Related

Optimized alternative to stashing files in jenkins

The Jenkins pipeline currently does a build and deploy, stashes some files, unstashes them, and runs end-to-end tests on these files, like so:
// build and deploy code
stash([
    name: 'end-to-end-tests',
    includes: <a bunch of files>
])
unstash('end-to-end-tests')
// code to run tests using npm run test:end-to-end-tests
In the interest of speeding up this pipeline, is there a way to get around the stash? I need end-to-end-tests in order to run my tests with the appropriate npm command later on, but how can I make those files available without stashing (if possible)?

Build step 'Publish Performance test result report' changed build result to FAILURE Finished: FAILURE

Whenever I run the script, I always face this: Cannot detect file type because of error and Failed to copy, followed by Build step 'Publish Performance test result report' changed build result to FAILURE and Finished: FAILURE.
Take a look at the Jenkins Console Output - it should give you the reason for the failure.
Most probably the Performance Plugin fails to find JMeter's .jtl results file in the Jenkins workspace: either the .jtl results file is missing or you're pointing the Performance Plugin to the wrong location.
If you have a Script step to run a JMeter test like:
jmeter -n -t test.jmx -l result.jtl
You should be able to simply use result.jtl in the Performance Plugin.
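To rule out the missing-file case before the plugin runs, a quick check in the same shell step can confirm the file really is in the workspace (a minimal sketch; result.jtl mirrors the command above and WORKSPACE is the standard Jenkins environment variable):
# Sanity check: confirm the JMeter results file exists where the
# Performance Plugin will look for it.
ls -l "${WORKSPACE}/result.jtl" || echo "result.jtl not found - check the -l path passed to jmeter"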
Check out the following articles for more details on various aspects of Jenkins Performance Plugin use cases:
Performance Trend Reporting
How to Use the Jenkins Performance Plugin
Running Performance Tests

Fail Jenkins build when xUnit tests do not pass

I have Jenkins building my C# .NET Core API project. I added some xUnit tests and included a PowerShell script in my Jenkins build with the "dotnet test" command to execute the tests.
That all works well: the tests run and I can see the output in the Jenkins console.
The problem is that if I have failing tests nothing happens - Jenkins goes merrily along, finishes up the build process, and reports it as a success.
How can I get it to fail the build?
Is there a response from the 'dotnet test' command?
I know there are xUnit Jenkins plugins, but they all seem to revolve around "display the results of xUnit tests", which is not really what I am after. I want to ACT on the results of the tests, not just see them in fancy HTML.
You should check the return code from the dotnet test command. It returns 0 if all tests were successful and 1 if any of the tests failed. Unfortunately it's not documented, but it was confirmed in this issue.
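For example, in an "Execute shell" step you can fail the build explicitly on a non-zero exit code (a minimal sketch; the project path is a placeholder, and in a PowerShell step the same idea applies via $LASTEXITCODE):
# Fail the Jenkins step when any test fails; the .csproj path is a placeholder.
dotnet test MyApi.Tests/MyApi.Tests.csproj
rc=$?
if [ "$rc" -ne 0 ]; then
  echo "dotnet test exited with code $rc - failing the build"
  exit "$rc"
fi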

Rerun flaky JUnit test in case they failed

I have a job A in Jenkins for my automated testing that is triggered if another job B builds successfully. Job A runs several tests. Some of the tests are flaky, so I would like to run them again a few times and give them a chance to pass so my build won't be unstable/failed.
Is there any plugin I can use?
I would suggest fixing your tests or rewriting them so they will only fail if something is broken. Maybe you can mock away the things that tend to fail. If you are depending on a database connection, maybe you could use SQLite or something else that is local.
But there is also a plugin which can retry a build:
https://wiki.jenkins-ci.org/display/JENKINS/Naginator+Plugin
Simply install the plugin, and then check the Post-Build action "Retry build after failure" on your project's configuration page.
If you want to rerun tests in a JUnit context, take a look here: SO: How to Re-run failed JUnit tests immediately?
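If the project happens to build with Maven rather than Ant (an assumption; the answer below uses Ant), the Surefire plugin can retry failing tests on its own, without an extra Jenkins plugin:
# Hedged sketch: Surefire 2.18+ reruns each failing test up to two more times;
# the build fails only if a test never passes, and flaky passes are flagged in the report.
mvn test -Dsurefire.rerunFailingTestsCount=2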
I don't know of any plugin that reruns just the flaky/failed tests; the ones I know of only retry the whole build. It should be possible, I just have not found one (and don't have enough time on my hands to write one). Here's what we did on a large Java project where the build was Ant-based:
The build itself was pretty simple (using xml as the formatter inside the junit Ant task):
ant clean compile test
The build also accepted a single class name as a parameter (using a batchtest include section inside the junit Ant task):
ant -Dtest.class.pattern=SomeClassName test
At the end of the Jenkins job, we used an "Execute shell" build step. The idea was to search for all test result files that contained errors or failures, figure out the name of the test class, and run that particular test class again. The result file containing the failure gets overwritten, so the test collector at the end of the build will not see the flaky test failure during the post-build steps.
#!/bin/bash +x
cd "${WORKSPACE}"
# Up to three passes: each pass reruns only the classes that still show failures.
for i in $(seq 1 3); do
    echo "Running failed tests $i time(s)"
    # Find JUnit result files whose errors or failures attribute is non-zero.
    for file in $(find . -path '*/TEST-*.xml' | xargs grep 'errors\|failures' | grep '\(errors\|failures\)="[1-9]' | cut -d ':' -f 1); do
        # Strip the directory, the .xml extension and the package prefix to get the bare class name.
        class=$(basename "${file}" .xml | rev | cut -d '.' -f 1 | rev)
        # Rerun just that test class; its TEST-*.xml result file gets overwritten.
        ant -Dtest.class.pattern="${class}" test
    done
done
After getting the build back under control, you definitely need to address the flaky tests. Don't let the green build fool you; there's still work to be done.

What are the differences between the {before_,}{install,script} .travis.yml options?

Inside the .travis.yml configuration file, what is the practical difference between the before_install, install, before_script and script options?
I have found no documentation explaining the differences between these options.
You don't need to use these sections, but if you do, you communicate the intent of what you're doing:
before_install:
  # execute all of the commands which need to be executed
  # before installing dependencies
  - composer self-update
  - composer validate
install:
  # install all of the dependencies you need here
  - composer install --prefer-dist
before_script:
  # execute all of the commands which need to be executed
  # before running actual tests
  - mysql -u root -e 'CREATE DATABASE test'
  - bin/doctrine-migrations migrations:migrate
script:
  # execute all of the commands which
  # should make the build pass or fail
  - vendor/bin/phpunit
  - vendor/bin/php-cs-fixer fix --verbose --diff --dry-run
See, for example, https://github.com/localheinz/composer-normalize/blob/0.8.0/.travis.yml.
The difference is in the state of the job when something goes wrong.
Git 2.17 (Q2 2018) illustrates that in commit 3c93b82 (08 Jan 2018) by SZEDER Gábor (szeder).
(Merged by Junio C Hamano -- gitster -- in commit c710d18, 08 Mar 2018)
That commit illustrates the practical difference between the before_install, install, before_script and script options:
travis-ci: build Git during the 'script' phase
Ever since we started building and testing Git on Travis CI (522354d: Add Travis CI support, 2015-11-27, Git v2.7.0-rc0), we build Git in the
'before_script' phase and run the test suite in the 'script' phase
(except in the later introduced 32 bit Linux and Windows build jobs,
where we build in the 'script' phase).
Contrarily, the Travis CI practice is to build and test in the
'script' phase; indeed Travis CI's default build command for the
'script' phase of C/C++ projects is:
./configure && make && make test
The reason why Travis CI does it this way, and why it's a better approach than ours, lies in how unsuccessful build jobs are categorized. After something went wrong in a build job, its state can be:
- 'failed', if a command in the 'script' phase returned an error. This is indicated by a red 'X' on the Travis CI web interface.
- 'errored', if a command in the 'before_install', 'install', or 'before_script' phase returned an error, or the build job exceeded the time limit. This is shown as a red '!' on the web interface.
This makes it easier, both for humans looking at the Travis CI web
interface and for automated tools querying the Travis CI API, to
decide when an unsuccessful build is our responsibility requiring
human attention, i.e. when a build job 'failed' because of a compiler
error or a test failure, and when it's caused by something beyond our
control and might be fixed by restarting the build job, e.g. when a
build job 'errored' because a dependency couldn't be installed due to
a temporary network error or because the OSX build job exceeded its
time limit.
The drawback of building Git in the 'before_script' phase is that one
has to check the trace log of all 'errored' build jobs, too, to see
what caused the error, as it might have been caused by a compiler
error.
This requires additional clicks and page loads on the web interface and additional complexity and API requests in automated tools.
Therefore, move building Git from the 'before_script' phase to the
'script' phase, updating the script's name accordingly as well.
'ci/run-builds.sh' now becomes basically empty, remove it.
Several of our build job configurations override our default 'before_script' to do nothing; with this change our default 'before_script' won't do
anything, either, so remove those overriding directives as well.
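As a side note on the "automated tools querying the Travis CI API" point above, this is roughly what such a query might look like (a hedged sketch: the API host, repository slug, and token handling are assumptions; adjust for travis-ci.com vs travis-ci.org):
# List recent build states so 'failed' (needs human attention) can be told
# apart from 'errored' (often fixed by restarting the job).
# The repository slug and token are placeholders.
curl -s \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token ${TRAVIS_API_TOKEN}" \
  "https://api.travis-ci.org/repo/git%2Fgit/builds?limit=10" \
  | jq -r '.builds[] | "\(.number)\t\(.state)"'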
