I have a declarative pipeline, and I have two groups of tests:
Failures from the first group are actual errors, and they must stop the build.
Failures from the second group are only warnings: ideally I'd like the stage to fail, but the overall build status to remain successful.
Here is a simplified version of my attempt at the warnings group. I have two steps:
steps {
    bat label: "Compliance Test - Warnings",
        script: "${env.WARNTESTCMD} -report compliance_report.xml"
    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
        junit testResults: 'compliance_report.xml',
              allowEmptyResults: true
    }
}
I've tried using catchError to give me the result I want, but no dice. The status of the build is set to UNSTABLE, and I don't seem to have any say in that.
I guess the first thing I should ask is whether that's possible at all in Jenkins. Can I keep my test results in JUnit format so they are recognised by Jenkins as tests, or do I need to store them as artefacts rather than official tests?
Related
I am using the JUnit plugin for the test report. My problem is that most builds are marked as unstable when they should be failed. Is there any way to mark the build as failed when more than one test has failed?
As of now, there is no directly configurable threshold option for the JUnit plugin in Jenkins (there is an open feature request for it). However, you can add an if condition after reporting the test results and set the build status yourself.
// The junit step returns a TestResultSummary exposing failCount, totalCount, skipCount and passCount
def junitTestSummary = junit testResults: "**/*.xml"
if (junitTestSummary.failCount > 0) {
    error("Failing the pipeline because ${junitTestSummary.failCount} tests failed")
}
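Tying this back to the original question above, here is a sketch of how the two ideas could combine so that only the stage goes red while the build stays green. It assumes a JUnit plugin recent enough to support the skipMarkingBuildUnstable option; treat that option as an assumption to verify against your installation.
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
    // skipMarkingBuildUnstable stops the junit step itself from flipping the
    // build to UNSTABLE, so the overall result stays under our control
    def summary = junit testResults: 'compliance_report.xml',
                        allowEmptyResults: true,
                        skipMarkingBuildUnstable: true
    if (summary.failCount > 0) {
        // error() is trapped by catchError: the stage shows FAILURE,
        // the build result remains SUCCESS
        error("${summary.failCount} compliance warnings")
    }
}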
I have a Jenkinsfile that contains a few logical checks for a commit to GitHub; only after certain criteria are met will it trigger a downstream build job. The relevant parts of the Jenkinsfile are below:
script {
    if (... a bunch of conditions) {
        echo 'Building because of on-demand job!'
        build job: '/my/downstream/job',
              parameters: [gitParameter(name: 'BRANCH', value: env.BRANCH_NAME),
                           gitParameter(name: 'GIT_BRANCH', value: env.GIT_LOCAL_BRANCH)],
              wait: true, propagate: true
    } else {
        echo 'Skipping'
        currentBuild.result = 'NOT_BUILT'
    }
}
However, in my GitHub UI, any time a job is skipped it is rendered as a failure. For instance, when a commit is made that does not satisfy the condition, Jenkins correctly skips the build, yet the GitHub commit history shows it as a failure.
I know this is somewhat trivial and literally only for appearances, but it is quite aggravating to see so many red Xs. Is the best solution just to switch currentBuild.result to SUCCESS? I am somewhat hesitant to do so, since it's not technically a success (nothing was built), but I don't see another way to stop GitHub from marking it as failed.
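If preserving NOT_BUILT matters, one possible workaround (not from the question itself) is to overwrite the commit status explicitly. The sketch below assumes the GitHub Notify plugin is installed; shouldBuild, the context and the credentialsId are placeholders.
script {
    if (shouldBuild) {   // hypothetical stand-in for "a bunch of conditions"
        build job: '/my/downstream/job', wait: true, propagate: true
    } else {
        currentBuild.result = 'NOT_BUILT'
        // Explicitly report SUCCESS to GitHub so the commit doesn't get a red X,
        // while the Jenkins build itself stays NOT_BUILT
        githubNotify context: 'ci/jenkins',
                     status: 'SUCCESS',
                     description: 'Skipped: build conditions not met',
                     credentialsId: 'github-token'
    }
}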
I am trying to set up various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, the acceptance tests and test data (much of which is shared) for all products are checked into the same repository, which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a "build" step from each pipeline, with the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product.)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
1. Trigger the job as normal. But this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the main pipeline's build. Both pipelines start with "agent { label 'master' }", but that seems to mean "allocate a new executor on a node matching 'master'".
2. Trigger the job with the "wait: false" argument. This doesn't block an executor, but it does mean I can't report the results of the tests in the main pipeline; it gives the impression that the test stage has always succeeded.
Is there a better way?
I seem to have solved this by adding "agent none" at the top of my main pipeline and moving "agent { label 'master' }" into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define it in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but that seems not to be the case, which is lucky for this use case...
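For illustration, a minimal sketch of the layout that answer describes; the stage names and the build command are placeholders of mine.
pipeline {
    agent none                         // no executor is held for the whole run
    stages {
        stage('Build') {
            agent { label 'master' }   // an executor is occupied only during this stage
            steps {
                sh 'make build'        // placeholder for the real build
            }
        }
        stage('AcceptanceTest') {
            // no agent here: the downstream build waits without blocking an executor
            steps {
                build job: 'run-tests', wait: true
            }
        }
    }
}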
I don't think there's another way for a declarative pipeline.
On the other hand, for a scripted pipeline you could execute this outside of node {}, and it would then just hold onto one executor on the master, releasing the one on the slave.
stage("some") {
build job: 'test'
node {
...
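Fleshed out a little (a sketch of my own; the label, job name and build command are placeholders):
stage('Build') {
    node('linux') {                 // heavyweight executor held only here
        checkout scm
        sh 'make build'             // placeholder build
    }
}
stage('AcceptanceTest') {
    // outside node {}: only the lightweight pipeline task waits on the result
    build job: 'run-tests', wait: true
}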
Related question: Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"
Background
After a lot of hard work we finally got a Jenkins CI server pulling code from our GitHub repositories and are now doing Continuous Integration as well as Deployment.
We get the code and only deploy it if all the tests pass, as usual.
Now, I have seen that there are a number of plugins for Java that, besides running the tests, also report test coverage, like Cobertura.
But we don't use Java. We use Elixir.
In the Elixir world we have excoveralls, which is a facade for the coveralls API. The coveralls API supports Jenkins, so it stands to reason I would find a Coveralls plugin for Jenkins.
I was wrong. There is nothing.
Questions
So now I have a test coverage metric that is basically useless because I can't integrate it with Jenkins.
Are there any Erlang/Elixir plugins one can use with Jenkins for code coverage?
I also created an issue in the project (which seems to be abandoned): https://github.com/parroty/excoveralls/issues/167
I have a stage in my Jenkinsfile to publish the coverage. I'm not sure if that is the metric you want, but...
stage('Publish Coverage') {
    when {
        branch 'master'
    }
    steps {
        publishHTML target: [
            allowMissing: true,
            alwaysLinkToLastBuild: true,
            keepAll: true,
            reportDir: 'cover',
            reportFiles: 'excoveralls.html',
            reportName: 'Coverage Report'
        ]
    }
}
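Note that publishHTML only publishes an existing report; in an ExCoveralls project the HTML report would come from an earlier step, something like the following (cover/excoveralls.html is, to my knowledge, the default output path of mix coveralls.html):
steps {
    sh 'MIX_ENV=test mix coveralls.html'   // writes cover/excoveralls.html by default
}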
I have found two ways of doing this:
1. Using the Hex package JUnit formatter together with the junit post pipeline step
2. Using covertool together with the Cobertura Jenkins plugin
Option 1
This solution works and is quite nice. It forces me to change test_helper.exs, but that is a minor inconvenience overall. However, it only offers the most basic of reports, and for me that is where it falls short.
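For illustration, the Jenkins side of option 1 might look roughly like this; the report glob is an assumption of mine and depends on how JUnitFormatter's report_dir is configured in config/test.exs:
stage('Test') {
    steps {
        sh 'MIX_ENV=test mix test'
    }
    post {
        always {
            // The glob below is an assumption; point it at JUnitFormatter's report_dir
            junit testResults: '_build/test/**/test-junit-report.xml',
                  allowEmptyResults: true
        }
    }
}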
Option 2
The option I decided to go with. Yes, making the Jenkinsfile work for Cobertura was a nightmare, especially because in previous versions it was not even possible, and because there is contradictory information scattered all over the place.
However, once you get that Jenkinsfile going, you get to reap those sweet Cobertura reports. Cobertura was made with Java in mind, there are no two ways about it: in the reports you see things like class coverage and such, but you can easily translate that to modules. The interface offers a lot more information and tracks coverage over time, which is something I actually want.
For future reference, here is my Jenkinsfile:
pipeline {
    agent any

    environment {
        SOME_VAR = "/home/deployer"
    }

    stages {
        stage("Build") {
            steps {
                sh "MIX_ENV=test mix do deps.get, deps.compile"
            }
        }

        stage("Test") {
            steps {
                sh "mix test --cover"
            }
        }

        stage("Credo") {
            steps {
                sh "mix credo --strict"
            }
        }

        stage("Deploy") {
            when {
                expression { env.BRANCH_NAME == "master" }
            }
            steps {
                sh '''
                    echo "Deploy with AWS or GCP or whatever"
                '''
            }
        }
    }

    post {
        always {
            cobertura coberturaReportFile: "coverage.xml"
        }
    }
}
Of note:
1. I am extremely strict with my code, so I also use Credo. You can configure it further so it doesn't blow up the entire pipeline because you missed a newline at the end of a file, but as I said, I am quite strict.
2. The Deploy stage only runs if the pushed branch is master. There are other ways of doing this, but I found this approach good enough for a small project.
Overall I like covertool for now, but I don't know if the first solution has the same potential; at least I didn't see it.
Hope this post helps!
Original thread:
https://elixirforum.com/t/excoveralls-plugin-for-jenkins-ci/18842
Another way to post coverage from Jenkins for an Elixir project is to use the ExCoveralls task mix coveralls.post. This allows you to post the coverage from any host, including your Jenkins server. Based on the example on this Jenkins tutorial page, you can write the Jenkinsfile like this:
pipeline {
    agent any
    stages {
        // Assuming all environment variables are set beforehand
        stage('run unit test') {
            steps {
                sh 'echo "Run Unit Test and Post coverage"'
                sh '''
                    MIX_ENV=test mix coveralls.post --token $COVERALLS_REPO_TOKEN \
                        --sha $GIT_COMMIT --branch $GIT_BRANCH --name "jenkins" \
                        --message "$GIT_COMMIT_MSG"
                '''
            }
        }
    }
}
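One caveat: GIT_COMMIT_MSG is not an environment variable Jenkins sets by default, so it has to be provided before the coveralls.post call, for example (a sketch of my own):
script {
    // Derive the last commit message from git; GIT_COMMIT_MSG is not built in
    env.GIT_COMMIT_MSG = sh(script: 'git log -1 --pretty=%B', returnStdout: true).trim()
}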
In the old configuration we had two jobs, test and build.
The build job ran after the test job had run successfully, but we could manually trigger build if we wanted to skip the tests.
After we switched to pipelines defined in a Jenkinsfile, we had to put those two jobs into the same file:
stage('Running tests') {
    ...
}
stage('Build') {
    ...
}
So now the build stage is only triggered after the tests run successfully, and we cannot manually trigger the build without commenting out the test steps and committing that to the repository.
I am wondering if there is a better approach or practice to utilise the Jenkinsfile and overcome this limitation?
Using pipelines and Jenkinsfiles is becoming the standard and preferred way of running jobs on Jenkins nowadays, so using a Jenkinsfile is certainly the way to go.
One way to solve the problem is to make the job parameterized:
// Set the parameter properties; this will be done on the first run so that we can
// trigger with parameters manually afterwards
properties([parameters([booleanParam(defaultValue: true, description: 'Testing will be done if this is checked', name: 'DO_TEST')])])

stage('Running tests') {
    // Putting the check inside of the stage step so that we don't confuse the stage view
    if (params['DO_TEST']) {
        ...
    }
}

stage('Build') {
    ...
}
The first time the job runs, it will add a parameter to the job. After that we can trigger manually and select whether tests should run. The default value will be used when it's triggered by SCM.
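For a declarative pipeline, the same idea can be expressed with a parameters block and a when condition (a sketch of my own, not part of the original answer; the shell commands are placeholders):
pipeline {
    agent any
    parameters {
        booleanParam(name: 'DO_TEST', defaultValue: true,
                     description: 'Testing will be done if this is checked')
    }
    stages {
        stage('Running tests') {
            // Skipped stages still appear in the stage view, marked as skipped
            when { expression { params.DO_TEST } }
            steps {
                sh 'make test'    // placeholder
            }
        }
        stage('Build') {
            steps {
                sh 'make build'   // placeholder
            }
        }
    }
}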