How to create test report files for Bitbucket Pipelines?

How does one create test result files for bitbucket pipelines?
My bitbucket-pipelines.yml contains:
options:
  docker: true
pipelines:
  default:
    - step:
        name: test bitbucket pipelines stuff..
        script: # Modify the commands below to build your repository.
          - /bin/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
and when running this pipeline I get:
+ /bin/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
.
(... censored/irrelevant stuff here)
./test-results
./test-results/test1.txt
Then I get the "Build teardown" saying it can't find test-results/test1.txt:
Build teardown
Searching for test report files in directories named
[test-results, failsafe-reports, test-reports, TestResults, surefire-reports] down to a depth of 4
Finished scanning for test reports. Found 0 test report files.
Merged test suites, total number tests is 0, with 0 failures and 0 errors.
I am surprised that it failed to find the ./test-results/test1.txt file; hence the question.

Usually, each language/framework has some kind of utility to automatically produce such files as an outcome of a test suite run.
E.g. in Python you could simply run
pytest --junitxml=test-results/pytest.xml
See https://docs.pytest.org/en/latest/how-to/output.html#creating-junitxml-format-files
Manually crafting the XML yourself feels brittle and tedious. Better to find whatever library/option is available for your language/framework.
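To make the pytest route concrete, a minimal test module (the file name test_sample.py is made up) is all that's needed for pytest to emit a report:

```python
# test_sample.py -- a minimal test module. Running
#   pytest --junitxml=test-results/pytest.xml
# against it makes pytest write a JUnit-style XML report,
# which Bitbucket's teardown scan can then pick up.

def test_addition():
    assert 1 + 1 == 2

def test_string():
    assert "bitbucket".upper() == "BITBUCKET"
```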

Per https://support.atlassian.com/bitbucket-cloud/docs/test-reporting-in-pipelines/ it seems the reports have to be XML files, and in JUnit XML format; an example of that format can be found here: https://www.ibm.com/docs/en/developer-for-zos/9.1.1?topic=formats-junit-xml-format
So try changing bitbucket-pipelines.yml to
options:
  docker: true
pipelines:
  default:
    - step:
        name: test bitbucket pipelines stuff..
        script: # Modify the commands below to build your repository.
          - export IMAGE_NAME2=easyad/easyad_nginx:$BITBUCKET_COMMIT
          - /bin/bash bitbucket_pipeline_tests.sh
and in bitbucket_pipeline_tests.sh add
#!/bin/bash
mkdir test-results;
echo '<?xml version="1.0" encoding="UTF-8" ?>
<testsuites id="20140612_170519" name="New_configuration (14/06/12 17:05:19)" tests="225" failures="1262" time="0.001">
<testsuite id="codereview.cobol.analysisProvider" name="COBOL Code Review" tests="45" failures="17" time="0.001">
<testcase id="codereview.cobol.rules.ProgramIdRule" name="Use a program name that matches the source file name" time="0.001">
<failure message="PROGRAM.cbl:2 Use a program name that matches the source file name" type="WARNING">
WARNING: Use a program name that matches the source file name
Category: COBOL Code Review – Naming Conventions
File: /project/PROGRAM.cbl
Line: 2
</failure>
</testcase>
</testsuite>
</testsuites>' >>./test-results/test1.xml
Then the pipeline run should report 17 of 45 tests failed, as indicated by the sample XML above...
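If you do end up generating the XML yourself, Python's standard library can at least keep it well-formed instead of echoing a raw string. A minimal sketch (the suite and test names below are invented for illustration, not part of any real suite):

```python
import xml.etree.ElementTree as ET

def write_junit_report(path, suite_name, results):
    """Write a minimal JUnit-style XML report.

    results is a list of (test_name, failure_message_or_None) pairs.
    """
    failures = sum(1 for _, msg in results if msg is not None)
    suites = ET.Element("testsuites", tests=str(len(results)),
                        failures=str(failures))
    suite = ET.SubElement(suites, "testsuite", name=suite_name,
                          tests=str(len(results)), failures=str(failures))
    for name, msg in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if msg is not None:
            # A failed test gets a <failure> child element.
            ET.SubElement(case, "failure", message=msg).text = msg
    ET.ElementTree(suites).write(path, encoding="utf-8",
                                 xml_declaration=True)

# Example: one passing and one failing test case.
write_junit_report("test1.xml", "demo-suite",
                   [("test_ok", None), ("test_bad", "expected 1, got 2")])
```

Dropping the resulting file into a test-results/ directory should satisfy the teardown scan the same way the hand-written XML above does.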

Related

Artifact not being published in bitbucket pipeline

I'm doing a quite trivial Java build in a Bitbucket Pipeline. The only twist is that it is in a repository subdirectory.
my bitbucket-pipelines.yml:
pipelines:
  default:
    - step:
        caches:
          - gradle
        script: # Modify the commands below to build your repository.
          # You must commit the Gradle wrapper to your repository
          # https://docs.gradle.org/current/userguide/gradle_wrapper.html
          - bash "./foo bar/gradlew" -p "./foo bar" distTar
          - ls ./foo\ bar/build -R
          - echo 'THE END'
        artifacts:
          - ./foo bar/build/distributions/xxx.tar
My ls confirms that xxx.tar is in the expected location
....
./foo bar/build/distributions:
brigitte.tar
....
, but artifact page is empty.
Found it! It should be
# ...
artifacts:
  - foo bar/build/distributions/brigitte.tar
Artifact paths are not real paths, so the "dot slash" at the beginning was invalidating my path. Shame that it was not raised as a warning!
Extending the existing answer, I'd like to highlight the docs fragment that speaks about this
https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/#Introduction
You can use glob patterns to define artifacts. Glob patterns that start with a * will need to be put in quotes. Note: As these are glob patterns, path segments “.” and “..” won’t work. Use paths relative to the build directory.
So that's it: artifacts can't be outside the build directory and artifact definitions in the pipelines must not contain . and .. path segments.
Absolute paths starting with / won't work either!
It is truly obscure; why wouldn't the pipelines throw an error if the declaration is invalid? Damn, Bitbucket!
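Bitbucket's matcher isn't documented as being Python's fnmatch, but plain string-based glob matching illustrates why the "dot slash" prefix fails: the relative path being tested simply never contains a "./" segment. A sketch using the artifact path from this question:

```python
from fnmatch import fnmatchcase

# The artifact the build actually produced, relative to the build directory:
artifact = "foo bar/build/distributions/brigitte.tar"

# A "./"-prefixed pattern never matches, because the candidate string
# starts with "foo bar/...", not "./foo bar/...":
assert not fnmatchcase(artifact, "./foo bar/build/distributions/*.tar")

# Drop the "dot slash" and the glob matches:
assert fnmatchcase(artifact, "foo bar/build/distributions/*.tar")
```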

How can I configure Travis CI to test correct loading a library repo under PlatformIO?

I have a library used by a number of Arduino projects. I use PlatformIO as my build system, so I've created a library.json file in the root of the library to identify dependent libraries that should be loaded when I include this library in a project. All good.
Sometimes the dependent libraries get changed - PlatformIO is particularly sensitive to renaming them in the Arduino library.properties file. It is a pain when I discover that my library is broken only when I try to build a project that uses it.
I'd like to configure Travis to run periodically (thanks, Travis cron jobs!) and confirm that I can load all dependent libraries.
pio ci does not really apply to libraries. pio test requires a PlatformIO subscription (highly recommended, but not always an option).
Put the following in .travis.yml:
```
# PlatformIO dependency test
language: python
python: 2.7
install:
  - pip install -U platformio
script:
  - mkdir test_platformio_deps
  - cd test_platformio_deps
  - echo "[env:adafruit_feather_m0]" > platformio.ini
  - echo "platform = atmelsam" >> platformio.ini
  - echo "board = adafruit_feather_m0" >> platformio.ini
  - echo "framework = arduino" >> platformio.ini
  - if [ "${TRAVIS_PULL_REQUEST_SLUG}" = "" ]; then echo "lib_deps = SPI, https://github.com/${TRAVIS_REPO_SLUG}" ; else echo "lib_deps = SPI, https://github.com/${TRAVIS_PULL_REQUEST_SLUG}#${TRAVIS_PULL_REQUEST_BRANCH}" ; fi >> platformio.ini
  - cat platformio.ini
  - mkdir src
  - echo "int main() {}" > src/main.cpp
  - platformio run
cache:
  directories:
    - "~/.platformio"
```
It will create a simple project that depends on your library and then attempt to build it. If all dependencies load, it will succeed.
The tricky line with TRAVIS_PULL_REQUEST_SLUG handles running the test within a PR.
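That branching can be sketched in Python for readability; the environment variable names are Travis's own, and the repo slug below is a placeholder:

```python
def lib_deps_line(env):
    """Mirror the .travis.yml one-liner: when building a pull request,
    depend on the PR's fork and branch; otherwise depend on the main repo."""
    pr_slug = env.get("TRAVIS_PULL_REQUEST_SLUG", "")
    if pr_slug == "":
        return "lib_deps = SPI, https://github.com/" + env["TRAVIS_REPO_SLUG"]
    return ("lib_deps = SPI, https://github.com/%s#%s"
            % (pr_slug, env["TRAVIS_PULL_REQUEST_BRANCH"]))

# Regular (non-PR) build:
print(lib_deps_line({"TRAVIS_REPO_SLUG": "user/mylib"}))
# PR build from a fork:
print(lib_deps_line({"TRAVIS_REPO_SLUG": "user/mylib",
                     "TRAVIS_PULL_REQUEST_SLUG": "fork/mylib",
                     "TRAVIS_PULL_REQUEST_BRANCH": "fix-deps"}))
```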

How to use file parameter in jenkins

I am executing a parameterised build in Jenkins to count the number of lines in a file; the job has one file parameter whose file location is pqr. The script file, linecount.sh, is saved on a remote server. When I tried to execute it using the command sh linecount.sh filename, it worked perfectly from Jenkins. But when I remove filename from the argument and execute the same script as a parameterised build, it shows the error below on the console:
Started by user Prasoon Gupta
[EnvInject] - Loading node environment variables.
Building in workspace users/Prasoon/sample_programs
Copying file to pqr
[sample_programs] $ /bin/sh -xe /tmp/hudson3529902665956638862.sh
+ sh linecount.sh
PRASOON4
linecount.sh: line 15: parameterBuild.txt: No such file or directory
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I am uploading the file (parameterBuild.txt) from my local machine. Why is it giving this error?
In the shell script I used the argument as $1. How can I refer to it when I am taking a file as a parameter?
The uploaded file will not retain the same name as it has on your local computer. It will be named after the File location argument specified in the file parameter settings:
In this example I will get a file called file.txt in my workspace root, regardless of what I call it on my computer.
So if I now build my job and enter the following in the parameter dialog (note that my local filename is table.html):
Then I get the following in the log (I have a build step which does ls -l):
Building on master in workspace /var/lib/jenkins/workspace/fs
Copying file to file.txt
[fs] $ /bin/sh -xe /tmp/hudson845437350739055843.sh
+ ls -l
total 4
-rw-r--r-- 1 jenkins jenkins 292 Feb 15 07:23 file.txt
Finished: SUCCESS
Note that table.html is now called file.txt, i.e. what I entered as File location.
So in your case the command should be:
sh linecount.sh pqr
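linecount.sh itself isn't shown in the question; purely for illustration, the core of such a line-count script in Python, taking the workspace-relative file name ($1 in shell, sys.argv[1] here) as its argument:

```python
import sys

def count_lines(path):
    """Count lines in a file, like wc -l."""
    with open(path) as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    # The argument is the file parameter's "File location", e.g. "pqr".
    print(count_lines(sys.argv[1]))
```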
There has been a bug for ages that makes it impossible to use the file parameter:
Handle file parameters
file parameter not working in pipeline job
There is a workaround for this issue https://github.com/janvrany/jenkinsci-unstashParam-library
and in a pipeline script you do:
library "jenkinsci-unstashParam-library"

node {
    def file_in_workspace = unstashParam "file"
    sh "cat ${file_in_workspace}"
}
If it's a Freestyle job and your configuration looks similar to this: https://i.stack.imgur.com/vH7mQ.png then you can simply run sh linecount.sh ${pqr} to get what you are looking for.

Break Travis-CI build on dotnet test failure

Doing some proof of concept, I have a simple .NET Core repo with some xUnit tests at NetCoreXunit that I've got building on both AppVeyor and Travis. I've put in a failing test and AppVeyor fails the build, but I'm struggling to get Travis to do the same. It executes the tests happily and reports that one of the tests fails, but it still passes the build.
I've Googled to death and been trying to pipe and parse the output in a script step in the yaml configuration but my script knowledge is not great.
If anyone could help me get Travis to fail the build I'd be grateful. There's a link from the GitHub repo to both my Appveyor and Travis builds and if you commit to the repo it should build automatically.
--UPDATE--
So I got it as far as parsing the output of two test assemblies and correctly identifying whether there has been a test failure, but I need to create a variable so both assemblies get tested before throwing the exit. I've had to jump through silly hoops to get this far; one was that I can't seem to define a variable without Travis complaining. It's also hardcoded, and I'd like to extend it to finding all test assemblies, not just the hardcoded ones.
after_success:
  # Run tests
  - dotnet test ./src/NetCoreXunit -xml ./out/NetCoreXunit.xml;
    if grep -q 'result="Fail"' ./out/NetCoreXunit.xml ; then
      echo 'Failed tests detected.';
    else
      echo 'All tests passed.';
    fi;
  - dotnet test ./src/NetCoreXunitB -xml ./out/NetCoreXunitB.xml;
    if grep -q 'result="Fail"' ./out/NetCoreXunitB.xml ; then
      echo 'Failed tests detected.';
    else
      echo 'All tests passed.';
    fi;
Any advice appreciated: how do I get a list of all test assemblies, and how do I declare and set a bool that I can then exit with?
Spent way too long trying to get .travis.yml to work; I should have just gone straight down the Python route. It works as follows, called out to from the yml.
import os
import sys
import re
from subprocess import call

root_directory = os.getcwd()
print(root_directory)
regexp = re.compile(r'src[\/\\]NetCoreXunit.?$')
result = False
for child in os.walk(root_directory):
    print(child[0])
    if regexp.search(child[0]) is not None:
        print("Matched")
        test_path = os.path.join(root_directory, child[0])
        if os.path.isdir(test_path):
            print("IsDir")
            print(test_path)
            os.chdir(test_path)
            call(["dotnet", "test", "-xml", "output.xml"])
            if 'result="Fail"' in open("output.xml").read():
                print(test_path + ": Failed tests detected")
                result = True
            else:
                print(test_path + ": All tests passed")
    os.chdir(root_directory)
if result:
    print("Failed tests detected")
    sys.exit(1)
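One caveat with the substring check: 'result="Fail"' would also match if that text happened to appear inside a test name or message. A slightly sturdier variant parses the XML and inspects the result attribute directly, assuming the report's test elements carry one (as the grep above implies); the sample document below is invented for illustration:

```python
import xml.etree.ElementTree as ET

def has_failures(xml_text):
    """Return True if any element in the report has result="Fail"."""
    root = ET.fromstring(xml_text)
    return any(el.get("result") == "Fail" for el in root.iter())

sample = ('<assembly><collection>'
          '<test name="t1" result="Pass"/>'
          '<test name="t2" result="Fail"/>'
          '</collection></assembly>')
print(has_failures(sample))  # True
```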

Run a shell script before build in Xcode

I need to adjust my build and version number for my project before build/archiving.
I tried multiple things, but so far to no avail.
I added a target with the script to update the numbers and added that as the first dependency of my main target. But because I have multiple dependencies (I have extensions in my app) and all dependencies are built by Xcode in parallel (or at least in random order), this does not work.
I added a pre-action to my scheme with the same result. Xcode is not waiting for my pre-action to complete before continuing with the build (I added a sleep 100 to test).
As I'm altering build numbers it is crucial that the script completes before anything else starts, but there is also one more side effect: the build even stops because the plist files have been altered while building the related target.
What makes it more difficult is that I would like to use agvtool to set my version & build number. This obviously starts background processes, out of my control, that alter the plists.
Disclaimer: I have searched for other answers, didn't help.
agvtool just does not work in an Xcode build. It will always stop the build.
What works fine is PlistBuddy, although the setup is not as nice and neat.
I added a Pre-Action to the build in my main scheme to call a new target in my project:
xcodebuild -project "${SRCROOT}/MAIN_APP.xcodeproj" -scheme BuildNumberPreProcess
In the target BuildNumberPreProcess I have a Run Script:
VERSION=$(head -n 1 version.txt)
BUILD=`git rev-list $(git rev-parse --abbrev-ref HEAD) | wc -l | awk '{ print $1 }'`
echo "${VERSION} (${BUILD})"
SCRIPT="${SRCROOT}/CLIENT/Supporting Files/set-version-in-plist.sh"
"${SCRIPT}" "${SRCROOT}/MAIN_APP/Supporting Files/Info.plist" ${VERSION} ${BUILD}
"${SCRIPT}" "${SRCROOT}/EXTENSION/Info.plist" ${VERSION} ${BUILD}
...
set-version-in-plist.sh:
#!/bin/sh
# set-version-in-plist.sh
#
# usage:
# set-version-in-plist LIST VERSION BUILD
# LIST: Info.plist path & name
# VERSION: version number xxx.xxx.xxx
# BUILD: build number xxxxx
#
# Location of PlistBuddy
PLISTBUDDY="/usr/libexec/PlistBuddy"
echo "$1: $2 ($3)"
${PLISTBUDDY} -c "Set :CFBundleShortVersionString $2" "$1";
${PLISTBUDDY} -c "Set :CFBundleVersion $3" "$1";
Xcode has command line tools for build/archiving: https://developer.apple.com/library/ios/technotes/tn2339/_index.html
So, you can write a shell script that first runs your script to adjust the build/version number and then runs the Xcode build/archive via the command-line tools.