Break Travis-CI build on dotnet test failure - travis-ci

As a proof of concept I have a simple .NET Core repo with some xUnit tests at NetCoreXunit that I've got building on both AppVeyor and Travis. I've put in a failing test and AppVeyor fails the build, but I'm struggling to get Travis to do the same: it executes the tests happily, reports that one of the tests fails, and still passes the build.
I've Googled this to death and have been trying to pipe and parse the output in a script step in the YAML configuration, but my shell scripting knowledge is not great.
If anyone could help me get Travis to fail the build I'd be grateful. There are links from the GitHub repo to both my AppVeyor and Travis builds, and if you commit to the repo it should build automatically.
--UPDATE--
So I got it as far as parsing the output of two test assemblies and correctly identifying whether there has been a test failure, but I need a variable so that both assemblies get tested before throwing the exit code. I've had to jump through silly hoops to get this far; one is that I can't seem to define a variable without Travis complaining. It's also hardcoded, and I'd like to extend it to find all test assemblies rather than just the two named ones.
after_success:
  # Run tests
  - dotnet test ./src/NetCoreXunit -xml ./out/NetCoreXunit.xml;
    if grep -q 'result="Fail"' ./out/NetCoreXunit.xml ; then
      echo 'Failed tests detected.';
    else
      echo 'All tests passed.';
    fi;
  - dotnet test ./src/NetCoreXunitB -xml ./out/NetCoreXunitB.xml;
    if grep -q 'result="Fail"' ./out/NetCoreXunitB.xml ; then
      echo 'Failed tests detected.';
    else
      echo 'All tests passed.';
    fi;
Any advice appreciated: how do I get a list of all test assemblies and how do I declare and set a bool that I can then exitcode with?

Spent way too long trying to get .travis.yml to work; I should have just gone straight down the Python route. The following works, called out to from the yml.
import os
import sys
import re
from subprocess import call

root_directory = os.getcwd()
print(root_directory)
regexp = re.compile(r'src[\/\\]NetCoreXunit.?$')
result = False
for child in os.walk(root_directory):
    print(child[0])
    if regexp.search(child[0]) is not None:
        print("Matched")
        test_path = os.path.join(root_directory, child[0])
        if os.path.isdir(test_path):
            print("IsDir")
            print(test_path)
            os.chdir(test_path)
            call(["dotnet", "test", "-xml", "output.xml"])
            if 'result="Fail"' in open("output.xml").read():
                print(test_path + ": Failed tests detected")
                result = True
            else:
                print(test_path + ": All tests passed")
            os.chdir(root_directory)
if result:
    print("Failed tests detected")
    sys.exit(1)
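For what it's worth, here is a shorter sketch of the same idea. It rests on two assumptions that may not hold on older tooling: that the dotnet test command in use returns a nonzero exit code when tests fail, and that every test project lives under src/ with a name starting with NetCoreXunit.

import glob
import os
import subprocess
import sys

failed = False
for project in sorted(glob.glob(os.path.join("src", "NetCoreXunit*"))):
    print("Running tests in " + project)
    # Rely on dotnet test's own exit code instead of grepping the XML output.
    if subprocess.call(["dotnet", "test", project]) != 0:
        failed = True

# A nonzero exit code is what makes Travis mark the build as failed.
sys.exit(1 if failed else 0)

Either way, as far as I know the command has to run in Travis's script phase, since exit codes from after_success do not affect the build result.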

Related

how to create test report files for bitbucket pipelines?

How does one create test result files for bitbucket pipelines?
My Bitbucket bitbucket-pipelines.yml contains:
options:
  docker: true
pipelines:
  default:
    - step:
        name: test bitbucket pipelines stuff..
        script: # Modify the commands below to build your repository.
          - /bin/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
and when running this pipeline I get
/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
<1s
+ /bin/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
.
(... censored/irrelevant stuff here)
./test-results
./test-results/test1.txt
Then I get the "Build teardown" saying it can't find test-results/test1.txt:
Build teardown
<1s
Searching for test report files in directories named
[test-results, failsafe-reports, test-reports, TestResults, surefire-reports] down to a depth of 4
Finished scanning for test reports. Found 0 test report files.
Merged test suites, total number tests is 0, with 0 failures and 0 errors.
I am surprised that it failed to find the ./test-results/test1.txt file, hence the question.
Usually, each language/framework has some kind of utility to automatically produce such files as an outcome of a test suite run.
E.g. in Python you could simply run
pytest --junitxml=test-results/pytest.xml
See https://docs.pytest.org/en/latest/how-to/output.html#creating-junitxml-format-files
Manually crafting the XML yourself feels brittle and tedious. Better to find whatever library/option is available for your language/framework.
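As a purely illustrative example (hypothetical file and test names), a minimal pytest module run with pytest --junitxml=test-results/pytest.xml produces a report in one of the directories that Bitbucket's teardown step scans:

# test_sample.py -- run with: pytest --junitxml=test-results/pytest.xml
def test_addition_passes():
    assert 1 + 1 == 2

def test_subtraction_fails():
    # A deliberately failing test so the report contains a <failure> entry.
    assert 5 - 3 == 1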
Per https://support.atlassian.com/bitbucket-cloud/docs/test-reporting-in-pipelines/ it seems the reports have to be XML files, and in JUnit XML format; an example of that format can be found at https://www.ibm.com/docs/en/developer-for-zos/9.1.1?topic=formats-junit-xml-format
So try changing bitbucket-pipelines.yml to
options:
  docker: true
pipelines:
  default:
    - step:
        name: test bitbucket pipelines stuff..
        script: # Modify the commands below to build your repository.
          - export IMAGE_NAME2=easyad/easyad_nginx:$BITBUCKET_COMMIT
          - /bin/bash bitbucket_pipeline_tests.sh
and in bitbucket_pipeline_tests.sh add
#!/bin/bash
mkdir test-results;
echo '<?xml version="1.0" encoding="UTF-8" ?>
<testsuites id="20140612_170519" name="New_configuration (14/06/12 17:05:19)" tests="225" failures="1262" time="0.001">
  <testsuite id="codereview.cobol.analysisProvider" name="COBOL Code Review" tests="45" failures="17" time="0.001">
    <testcase id="codereview.cobol.rules.ProgramIdRule" name="Use a program name that matches the source file name" time="0.001">
      <failure message="PROGRAM.cbl:2 Use a program name that matches the source file name" type="WARNING">
WARNING: Use a program name that matches the source file name
Category: COBOL Code Review – Naming Conventions
File: /project/PROGRAM.cbl
Line: 2
      </failure>
    </testcase>
  </testsuite>
</testsuites>' >> ./test-results/test1.xml
Then the pipeline run should say 17 of 45 tests failed, as indicated by the sample XML above.
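If you do generate the file by hand rather than from a test framework, a small script is a bit less brittle than echoing a literal string. A rough sketch in Python (hypothetical suite and test names), using only the standard library:

# make_report.py -- write a minimal JUnit-style XML report into test-results/
import os
import xml.etree.ElementTree as ET

os.makedirs("test-results", exist_ok=True)

suites = ET.Element("testsuites", tests="2", failures="1")
suite = ET.SubElement(suites, "testsuite", name="smoke", tests="2", failures="1")
ET.SubElement(suite, "testcase", name="passing_check", time="0.001")
failing = ET.SubElement(suite, "testcase", name="failing_check", time="0.001")
failure = ET.SubElement(failing, "failure", message="expected 2, got 3", type="ERROR")
failure.text = "details of the failure go here"

ET.ElementTree(suites).write("test-results/test1.xml", encoding="utf-8", xml_declaration=True)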

Powershell: Issue redirecting output from error stream when using docker

I am working on a set of build scripts which are called from an Ubuntu-hosted CI environment. The PowerShell build script calls Jest via react-scripts via npm. Unfortunately Jest doesn't use stderr correctly and writes non-errors to the stream.
I have redirected the error stream using 3>&1 2>&1 and this works fine from plain PowerShell Core ($LASTEXITCODE is 0 after running, and no content from stderr is written in red).
However, when I introduce Docker via docker run, the build script misbehaves and prints the line that should have been redirected away from the error stream in red (and crashes), i.e. something like: docker : PASS src/App.test.js. Error: Process completed with exit code 1.
Can anyone suggest what I am doing wrong? I'm a bit stumped. I include the sample PowerShell call below:
function Invoke-ShellExecutable
{
    param (
        [ScriptBlock]
        $Command
    )
    $Output = Invoke-Command $Command -NoNewScope | Out-String
    if ($LASTEXITCODE -ne 0) {
        $CmdString = $Command.ToString().Trim()
        throw "Process [$($CmdString)] returned a failure status code [$($LASTEXITCODE)]. The process may have outputted details about the error."
    }
    return $Output
}

Invoke-ShellExecutable {
    ($env:CI = "true") -and (npm run test:ci)
} 3>&1 2>&1

In Jenkins job, behave tests stops after any failure

I have created a Jenkins freestyle job in which I am trying to run multiple BDD testing processes. Following are the commands I have put in the Jenkins/Build/Execute shell section:
cd ~/FEXT_BETA_BDD
rm -rf allure_reports allure-reports allure-results
pip install behave
pip install selenium
pip install -r features/requirements.txt
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature
What I have found is that in Jenkins, if there is any intermittent test case failure, a message like the following is shown in the Console Output:
"
...
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
3 steps passed, 1 failed, 1 skipped, 0 undefined
Took 2m48.770s
Build step 'Execute shell' marked build as failure
"
And the leftover test cases are skipped. But if I run the behave command on my local host directly, I don't get this behaviour: the failure is detected and the remaining test cases continue until all are finished.
So how may I work around this issue in Jenkins?
Thanks,
Jack
You may try the following syntax:
set +e
# execute features in plan section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/plan/*.feature || echo 'ALERT: Build failed while running the plan section'
# execute features in blueprint section
behave -f allure_behave.formatter:AllureFormatter -f pretty -o ./allure-reports ./features/blueprint/*.feature || echo 'ALERT: Build failed while running the blueprint section'
# Restoring original configuration
set -e
Note:
The goal of set -e is to cause the shell to abort any time an error occurs. If you look at your log output, you will notice sh -xe at the start of execution, which confirms that Execute Shell in Jenkins uses the -e option. So, to disable it, you can use set +e instead. However, it's good to restore it once your purpose is fulfilled so that subsequent commands produce the expected result.
Ref: https://superuser.com/questions/1113014/what-would-set-e-and-set-x-commands-do-in-the-context-of-a-shell-script
The console output from the SummaryReporter above indicates that you have only one feature with one scenario (which fails). behave does not simply stop when the first scenario fails.
An early abort of the test run can only occur if critical things happen:
A failure/exception in the before_all() hook occurs
A critical exception is raised (SystemExit, KeyboardInterrupt) to end the test run
Your implementation tells behave to abort the test run (this makes sense on critical failures when all other tests will also fail; why waste the time)
BUT: If the test run is aborted early, all the features/scenarios that are not executed yet are reported as untested counts in the SummaryReporter.
...
0 features passed, 1 failed, 0 skipped, 2 untested
0 scenarios passed, 1 failed, 0 skipped, 3 untested
0 steps passed, 1 failed, 0 skipped, 0 undefined, 6 untested
HINT: Untested counts are normally hidden. They are only shown if the counter is not zero (greater than zero).
This is not the case in your description.
SEE ALSO:
behave: features/runner.abort_by_user.feature
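To make the first bullet above concrete, here is a hedged sketch (the check and userdata key are made up) of an environment.py hook whose exception aborts the run, after which the features that never ran show up in the untested counts:

# features/environment.py
def before_all(context):
    # If this raises, behave aborts the whole test run.
    base_url = context.config.userdata.get("base_url")
    if not base_url:
        raise RuntimeError("base_url was not provided -- aborting the test run")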

Getting error for DataStage compare command line tool

I am using a utility provided with DataStage 9.1, diffapicmdline.exe, to compare two jobs from different environments. I am using the following batch script to read the job names from a text file in a loop:
#echo off
SET var=
for /f "delims=" %%i in (grm_deploy.txt) do (C:\IBM\InformationServer91\Clients\Classic\diffapicmdline.exe /lhscd "/d=cs1cd:9080 /h=cs1i04 /u=user/p=password cs1cdhbi04/IST_GRM %%i" /rhscd "/d=cs1cd:9080 /h=cs1i04 /u=test /p=Pass cs1cdhbi04/TEST_NIMISH %%i" /t job /ot html /ol E:\compare_output.html)
echo this is the end
However I am getting the following error:
D:\dataStage Components\Scripts>read_file.bat
Validating syntax of /lhscd.
Unknown flag specified 'jbLoadStgARxAR'
this is the end
Can anyone let me know what is going wrong over here?

how to use execute() in groovy to run any command

I usually build my project using these commands from the command line (DOS):
G:\> cd c:
C:\> cd c:\my\directory\where\ant\exists
C:\my\directory\where\ant\exists> ant -Mysystem
...
.....
build successful
What if I want to do the above from Groovy instead? Groovy has an execute() method but the following does not work for me:
def cd_command = "cd c:"
def proc = cd_command.execute()
proc.waitFor()
It gives this error:
Caught: java.io.IOException: Cannot run program "cd": CreateProcess error=2, The
system cannot find the file specified
at ant_groovy.run(ant_groovy.groovy:2)
Or more explicitly, I think binil's solution should read
"your command".execute(null, new File("/the/dir/which/you/want/to/run/it/from"))
According to this thread (the 2nd part), "cd c:".execute() tries to run a program called cd which is not a program but a built-in shell command.
The workaround would be to change directory as below (not tested):
System.setProperty("user.dir", "c:")
"your command".execute(null, /the/dir/which/you/want/to/run/it/from)
should do what you wanted.
Thanks Noel and Binil, I had a similar problem with a Maven build.
projects = ["alpha", "beta", "gamma"]
projects.each { project ->
    println "*********************************************"
    println "now compiling project " + project
    println "cmd /c mvn compile".execute(null, new File(project)).text
}
I managed to fix the issue by running the commands as below. I wanted to run git commands from the git folder, so below is the code which worked for me.
println(["git","add","."].execute(null, new File("/Users/xyz/test-org/groovy-scripts/$GIT_REPOS/")).text)
println(["git","commit","-m","updated values.yaml"].execute(null, new File("/Users/xyz/test-org/groovy-scripts/$GIT_REPOS/")).text)
println(["git","push","--set-upstream","origin","master"].execute(null, new File("/Users/xyz/test-org/groovy-scripts/$GIT_REPOS/")).text)
