I am executing tests concurrently on Jenkins using Gradle:
test {
    useTestNG {
        setParallel('classes')
        setThreadCount(4)
    }
}
Everything runs great, but when a test fails I see the related Jenkins log messed up with output from other threads. Is there a way to get separate output for every test?
UPD
It has nothing to do with Jenkins; the output logs are already interleaved in Gradle itself.
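A possible direction, not from the original thread: Gradle's Test task exposes onOutput and afterSuite hooks, so output can be buffered per test class and printed as one block when the class finishes. A rough, untested sketch, assuming the TestNG runner reports each class as a suite:

test {
    // collect each test class's output instead of streaming it interleaved
    def buffers = [:].withDefault { new StringBuilder() }
    onOutput { descriptor, event ->
        buffers[descriptor.className] << event.message
    }
    afterSuite { descriptor, result ->
        // className is null for the synthetic root and worker suites
        if (descriptor.className != null) {
            logger.lifecycle("--- output of ${descriptor.className} ---")
            logger.lifecycle(buffers.remove(descriptor.className)?.toString() ?: '')
        }
    }
}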
From time to time our Jenkins pipeline is marked as unstable. After researching it I found that it originates from the JUnit plugin, which is publishing test results.
The weird thing is that all the tests pass successfully (in the logs, and the whole pipeline proceeds), yet the exported test results show that there are some errors.
Can anyone explain this to me?
All tests are passing (logs):
The test results exported by JUnit are showing some failures:
The whole build is marked as unstable (yellow):
I'm experiencing some odd behavior with a Jenkins build (the Jenkins project is a multi-branch pipeline with the Jenkinsfile provided by the source repository). The last step is to deploy the application, which involves replacing an artifact on a remote host and then restarting the process that runs it.
Everything works perfectly except for one problem: the service is no longer running after the build completes. I even added some debugging messages after the restart script to prove with the build output that it really was working. But for some reason, after the build exits, the service is no longer running. I've done extensive testing to ensure Jenkins connects to the remote host as the correct user, has the right environment variables set, and so on. Plus, the restart script output is very detailed in the first place; there would be no way to get the successful output if it didn't actually work. So I am assuming the process that runs the deploy steps on the remote host is doing something else after the build completes execution.
Here is where it gets weird: if I run the same exact deploy commands using the Script Console for the same exact remote host, it works. And the service isn't stopped after successfully starting up.
By "same exact" I mean the script is the same, but the DSL is different between the Script Console and the pipeline. For example, in the Script Console, I use
println "deployscript.sh <args>".execute().text
Whereas in the pipeline I use
pipeline {
    agent {
        node { label 'mynode' }
    }
    stages {
        /* other stages commented out for testing */
        stage('Deploy') {
            steps {
                script {
                    sh 'deployscript.sh <args>'
                }
            }
        }
    }
}
I also don't have any issues running the commands manually via SSH.
Does anyone know what is going on here? Is there a difference in how the Script Console and the Build Agent connect to the remote host? Does either of these processes run other commands? I understand that the SSH session is controlled by a Java process, but I don't know much else about the Jenkins implementation.
If anyone is curious about the application itself, it is a Progress Application Server for OpenEdge (PASOE) instance. The deploy process involves un-deploying the old WAR file, deploying the new one, and then stopping/starting the instance.
UPDATE:
I added a 60-second sleep to the end of the deploy script to give me time to test the service before the Jenkins process ended. This was successful, so I am certain that the service goes down at the moment the Jenkins build process exits. I am not sure if this is an issue with Jenkins owning a process, but again, the Script Console handles this fine...
Found the issue. It's buried away in some low-level Jenkins documentation, but Jenkins builds have a default behavior of killing any processes spawned by the build. This confirms that Jenkins was the culprit and the build was indeed running correctly; the service was just being killed after the build completed.
The fix is to set the value of the BUILD_ID environment variable (JENKINS_NODE_COOKIE for pipeline, like in my situation) to "dontKillMe".
For example:
pipeline {
    agent { /* set agent */ }
    environment {
        JENKINS_NODE_COOKIE = 'dontKillMe'
    }
    stages { /* set build stages */ }
}
See here for more details: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller
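For a scripted pipeline, the same mechanism can presumably be applied with withEnv; a minimal sketch, reusing the deploy script from above:

node('mynode') {
    // the ProcessTreeKiller spares processes whose environment carries this cookie
    withEnv(['JENKINS_NODE_COOKIE=dontKillMe']) {
        sh 'deployscript.sh <args>'
    }
}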
I'm working on the workflow-durable-task-step Jenkins plugin and I want to debug one of the plugin's tests. To understand the problem I need to see the Jenkins logs. By default, INFO-level logs are shown during tests, but I need the FINE level.
How can I show all possible logs from the internal Jenkins process during the mvn test command?
I've tried running the tests like mvn -Djava.util.logging.loglevel=FINEST test, but this option changes the log level for the test itself, not for the Jenkins internal process. I mean that if I write something like LOGGER.log(Level.FINE, "Hello world"); in the body of my test, it will be shown, but no FINE-level logs from the Jenkins process started by my test will be displayed.
I think you are looking for ${JENKINS_URL}/log/levels
Documentation: Logger Configuration
Also see: Viewing Logs
Within your test add something like the following:
// LoggerRule comes from the Jenkins test harness (org.jvnet.hudson.test.LoggerRule)
@Rule
public LoggerRule logs = new LoggerRule()
        .recordPackage(YourClass.class, Level.FINE);
The LoggerRule is ultimately what you want.
I have Jenkins building my C# .NET Core API project. I added some xUnit tests and included a PowerShell script inside my Jenkins build with the "dotnet test" command to execute the tests.
That all works well: the tests run and I can see the output in the Jenkins console.
The problem is that if I have failing tests, nothing happens; Jenkins goes merrily along, finishes up the build process, and reports it as a success.
How can I get it to fail the build?
Is there a response from the 'dotnet test' command?
I know there are xUnit Jenkins plugins, but they all seem to revolve around displaying the results of xUnit tests, which is not really what I am after. I want to ACT on the results of the tests, not just see them in fancy HTML.
You should check the return code from the dotnet test command. It returns 0 if all tests were successful and 1 if any test failed. Unfortunately this is not documented, but it was confirmed in this issue.
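For example, a minimal pipeline sketch; the test project name is a placeholder, and the powershell step could equally be sh or bat:

script {
    // returnStatus: true hands back the exit code instead of aborting the step
    def rc = powershell(returnStatus: true, script: 'dotnet test MyApi.Tests')
    if (rc != 0) {
        error("dotnet test failed with exit code ${rc}")
    }
}

Note that if the script simply ends with the dotnet test call and propagates its exit code, the step already fails the build on a non-zero code; the explicit check just produces a clearer failure message.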
On my Jenkins pipeline I run unit tests in two configurations: debug and release. Each test configuration generates a separate JUnit XML results file. The test names are the same in the debug and release configurations. Currently I use the following junit command to show the test results:
junit allowEmptyResults: true, healthScaleFactor: 0.0, keepLongStdio: true, testResults: 'Test-Dir/Artifacts/test_xml_reports_*/*.xml'
The problem is that in the Jenkins UI the debug and release test results are shown together, and it is not possible to tell whether a failed test came from the debug or the release configuration.
Is it possible to show the debug and release test results separately? If yes, how can I do that?
We run the same integration tests against two different configurations with different DB types. We use Maven and the failsafe plugin, so I take advantage of -Dsurefire.reportNameSuffix to tell the two runs apart.
The following is an example block of our Jenkinsfile:
stage('Integration test MySql') {
    steps {
        timeout(75) {
            sh("mvn -e verify -DskipUnitTests=true -DtestConfigResource=conf/mysql-local.yaml " +
               "-DintegrationForkCount=1 -DdbInitMode=migrations -Dmaven.test.failure.ignore=false " +
               "-Dsurefire.reportNameSuffix=MYSQL")
        }
    }
    post {
        always {
            junit '**/failsafe-reports/*MYSQL.xml'
        }
    }
}
In the report, the integration tests run against MySQL then show up with MYSQL appended to their names.
It looks like there is no built-in solution for my question.
As a workaround I changed the JUnit XML report format and included the build variant name (debug/release) as the package name.
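For illustration, a rough sketch of that kind of workaround in a pipeline; the per-variant report directories and the sed rewrite of the classname attribute are assumptions, not the exact script used:

post {
    always {
        // prefix class names with the variant so debug and release
        // are grouped under separate packages in the JUnit report
        sh '''
            for f in Test-Dir/Artifacts/test_xml_reports_debug/*.xml; do
                sed -i 's/classname="/classname="debug./g' "$f"
            done
            for f in Test-Dir/Artifacts/test_xml_reports_release/*.xml; do
                sed -i 's/classname="/classname="release./g' "$f"
            done
        '''
        junit allowEmptyResults: true, testResults: 'Test-Dir/Artifacts/test_xml_reports_*/*.xml'
    }
}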