Missing log output from static logger class field - Jenkins

When executing JUnit tests in a Maven build on Jenkins some logs are not being written to the console.
class Foo {
    static final Logger log = LoggerFactory.getLogger(Foo.class);

    static Logger log() {
        return LoggerFactory.getLogger(Foo.class);
    }

    void baz() {
        log.error("error 1");
        log().error("error 2");
    }
}
Method baz is called from my JUnit test and error 2 is logged when executing the build on Jenkins, but error 1 is missing. Locally I cannot reproduce the problem. Executing the same Maven build on my machine I see the output of both logging statements.
I run the same Java version on Jenkins and locally:
OpenJDK Runtime Environment Temurin-11.0.16.1+1 (build 11.0.16.1+1)
Any pointers on what could be causing this, or on how I could drill down further to find the root cause?
Update: I can see that the Logger instance from the first logging statement is a different one than the one from the second logging statement.

It turns out some other test in our suite ends up calling LogManager.shutdown at some point, which causes the static final logger reference to become stale. Apparently this only started showing up after unrelated changes altered the test execution order, so the shutdown of the logging system now happens before the relevant logger is used in another test.
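The failure mode can be reproduced outside Log4j with plain java.util.logging, where LogManager.reset() plays the role of LogManager.shutdown (a minimal sketch; the logger name and messages are ours, not from the original build):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class StaleLoggerDemo {
    // captured once, like the static final field in the question
    static final Logger LOG = Logger.getLogger("demo");
    static final List<String> captured = new ArrayList<>();

    // attach an in-memory handler so we can observe what gets published
    static void install() {
        LOG.setUseParentHandlers(false);
        LOG.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { captured.add(r.getMessage()); }
            @Override public void flush() {}
            @Override public void close() {}
        });
    }

    public static void main(String[] args) {
        install();
        LOG.info("error 1");                // reaches the handler
        LogManager.getLogManager().reset(); // analogous to a test calling shutdown
        LOG.info("error 2");                // silently dropped: handlers are gone
        System.out.println(captured);       // prints [error 1]
    }
}
```

The second message is lost not because the call throws but because the cached logger's handlers were closed underneath it, which matches the observation that the two Logger instances differed.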

Related

Jenkins Script Console vs Build Agent

I'm experiencing some odd behavior with a Jenkins build (Jenkins project is a multi-branch pipeline with the Jenkinsfile provided by the source repository). The last step is to deploy the application which involves replacing an artifact on a remote host and then restarting the process that runs it.
Everything works perfectly except for one problem - the service is no longer running after the build completes. I even added some debugging messages after the restart script to prove with the build output that it really was working. But for some reason, after the build exits the service is no longer running. I've done extensive testing to ensure Jenkins connects to the remote host as the correct user, has the right env vars set, etc. Plus, the restart script output is very detailed in the first place - there would be no way to get the successful output if it didn't actually work. So I am assuming the process that runs the deploy steps on the remote host is doing something else after the build completes execution.
Here is where it gets weird: if I run the same exact deploy commands using the Script Console for the same exact remote host, it works. And the service isn't stopped after successfully starting up.
By "same exact" I mean the script is the same, but the DSL is different between the Script Console and the pipeline. For example, in the Script Console, I use
println "deployscript.sh <args>".execute().text
Whereas in the pipeline I use
pipeline {
    agent { label 'mynode' }
    stages {
        /* other stages commented out for testing */
        stage('Deploy') {
            steps {
                script {
                    sh 'deployscript.sh <args>'
                }
            }
        }
    }
}
I also don't have any issues running the commands manually via SSH.
Does anyone know what is going on here? Is there a difference in how the Script Console vs the Build Agent connects to the remote host? Do either of these processes run other commands? I understand that the SSH session is controlled by a Java process, but I don't know much else about the Jenkins implementation.
If anyone is curious about the application itself, it is a Progress Application Server for OpenEdge (PASOE) instance. The deploy process involves un-deploying the old WAR file, deploying the new one, and then stopping/starting the instance.
UPDATE:
I added a 60-second sleep to the end of the deploy script to give me time to test the service before the Jenkins process ended. This was successful, so I am certain that the service goes down at the moment the Jenkins build process exits. I am not sure if this is an issue with Jenkins owning a process, but again, the Script Console handles this fine...
Found the issue. It's buried away in some low-level Jenkins documentation, but Jenkins builds have a default behavior of killing any processes spawned by the build. This confirms that Jenkins was the culprit and the build indeed was running correctly. It was just being killed after the build completed.
The fix is to set the value of the BUILD_ID environment variable (JENKINS_NODE_COOKIE for pipeline, like in my situation) to "dontKillMe".
For example:
pipeline {
    agent { /* set agent */ }
    environment {
        JENKINS_NODE_COOKIE = 'dontKillMe'
    }
    stages { /* set build stages */ }
}
See here for more details: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller
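The mechanism behind this: the ProcessTreeKiller identifies a build's descendant processes by exactly this cookie variable, which children inherit from their parent's environment. A plain-Java illustration of that inheritance (nothing here is Jenkins-specific; it assumes a POSIX shell on the PATH, and the value is the one from the fix):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CookieDemo {
    // Launch a child shell with the cookie set and report what it sees.
    static String childSees() throws Exception {
        ProcessBuilder pb = new ProcessBuilder("sh", "-c", "echo $JENKINS_NODE_COOKIE");
        pb.environment().put("JENKINS_NODE_COOKIE", "dontKillMe");
        pb.redirectErrorStream(true);
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            p.waitFor();
            return line;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(childSees()); // prints dontKillMe
    }
}
```

Because every process the build spawns carries the cookie, Jenkins can reap the whole tree at build end; overriding the value to "dontKillMe" is what opts a process out of that cleanup.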

How to set log level for Jenkins in plugin tests

I'm working on workflow-durable-task-step Jenkins plugin and I want to debug one of the tests of this plugin. To understand the problem I need to see Jenkins logs. By default INFO level logs are shown during tests, but I need FINE level.
How to show all possible logs for internal Jenkins process during mvn test command?
I've tried running the tests with mvn -Djava.util.logging.loglevel=FINEST test, but this option changes the log level only for the test itself, not for the Jenkins process under test. That is, if I write something like LOGGER.log(Level.FINE, "Hello world"); in the body of my test, it is shown, but no FINE-level logs from the Jenkins process started by my test are displayed.
I think you are looking for ${JENKINS_URL}/log/levels
Documentation: Logger Configuration
Also see: Viewing Logs
Within your test add something like the following:
@Rule
public LoggerRule logs = new LoggerRule()
        .recordPackage(YourClass.class, Level.FINE);
The LoggerRule is ultimately what you want.
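If you want to raise a package to FINE programmatically instead, a sketch with plain java.util.logging (the package names here are placeholders; note that both the logger level and the handler level filter records, so both must be set to FINE or the records never reach the console):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class FineLogging {
    // Raise a package's loggers to FINE and attach a console handler
    // that also passes FINE records through.
    public static void enableFine(String packageName) {
        Logger pkg = Logger.getLogger(packageName);
        pkg.setLevel(Level.FINE);
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINE);
        pkg.addHandler(handler);
    }

    public static void main(String[] args) {
        enableFine("org.acme.demo");
        // Child loggers inherit the FINE level from the package logger.
        Logger.getLogger("org.acme.demo.Worker").fine("now visible");
    }
}
```

This works because JUL loggers form a dot-separated hierarchy: any logger under the package inherits the effective FINE level and publishes up to the package logger's handler.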

Jenkins plugin development - how to check if plugin is executing on Jenkins Master

I have implemented a custom Pipeline-compatible Jenkins plugin which extends SynchronousNonBlockingStepExecution class. I would like to implement particular logic around execution on Jenkins Master vs Jenkins Slave. How can I check from within the plugin code if the step is running on Master?
Usually, the plugin execution happens on a build executor on a Jenkins slave. But to verify which part of the plugin code executes on the master versus a slave, you can use the following.
Find out an environment variable that is specific to your slave. I use NODE_NAME env var for this purpose. You can find out all available environment variables in your Jenkins instance via http://localhost:8080/env-vars.html (replace the host name to match your instance). In there, you'll find the NODE_NAME:
NODE_NAME
Name of the agent if the build is on an agent, or "master" if run on master
Log/print the environment variable. The following snippet shows how you can print it, using the setUp method of Hudson's BuildWrapper as an example:
@Override
public Environment setUp(final AbstractBuild build, final Launcher launcher,
        final BuildListener listener) throws IOException, InterruptedException {
    String node = System.getenv("NODE_NAME");
    String msg = "I'm executing on node: " + node;
    listener.getLogger().println(msg); // prints to the build log
    logger.info(msg); // slf4j logger - prints to the catalina/jenkins log
    return new Environment() {}; // no-op environment so the method compiles
}
Alternatively, you can also write the value into a file, and read the value from there.
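The check from the answer can also be pulled into a small testable helper (a sketch; the method name and the "built-in" alias, which newer Jenkins versions use in place of "master", are our additions):

```java
public class NodeCheck {
    // Jenkins sets NODE_NAME to "master" (or "built-in" on newer
    // versions) for the controller, and to the agent's name otherwise.
    static boolean isController(String nodeName) {
        return "master".equals(nodeName) || "built-in".equals(nodeName);
    }

    public static void main(String[] args) {
        System.out.println(isController(System.getenv("NODE_NAME")));
    }
}
```

Keeping the comparison in one place avoids scattering string literals through the plugin and makes the master/agent branch easy to unit-test without a running Jenkins.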

How to statically check the code of a Jenkins pipeline that uses shared libraries?

I'm coding Jenkins pipelines but my development process is extremely inefficient. For each modification, I must commit and push my code, and manually run the pipeline. A simple typo makes me do it all again. My version control log is a mess.
I'm trying to use the Pipeline Linter, but it fails since it doesn't recognize the Shared Libraries that I'm using.
Here is a simplified version of my code that I'll try to lint. This code works when I run it from the interface:
// importing class MyClass defined in src/com/company/MyClass.groovy
import com.company.MyClass.*
// importing src/com/company/helper/Log.groovy
import com.company.helper.Log;

def call(String env) {
    def mud
    pipeline {
        agent none
        stages {
            stage('Checkout') {
                agent any
                steps {
                    mud = new MyClass(script: this)
                }
            }
        }
    }
}
I run the pipeline linter with this command:
ssh -p 8222 jenkins declarative-linter < myPipeline.groovy
And, although it works fine when I run the pipeline in Jenkins, I get the following lint validation error:
Errors encountered validating Jenkinsfile:
WorkflowScript: 2: unable to resolve class com.company.helper.Log
@ line 2, column 1.
import com.company.helper.Log;
^
WorkflowScript: 25: unable to resolve class MyClass
@ line 25, column 35.
mud = new MyClass(script: this)
How do I use the pipeline linter with shared libraries?
I also welcome any help to streamline my development process!
I couldn't find a good solution for that, so I created a pipeline job that contains all the relevant functions from the shared library.
Once I have this flow, I can play with it without committing anything until it works.
The answer is that it isn't possible to check this, and Jenkins pipeline developers are doomed to a very inefficient development process.
I've just found that there is an issue about this in the Jenkins bug database. I've tried some of the solutions, but nothing worked.
I'd still like any tips about how to efficiently code Jenkins pipelines.

Gradle/TestNG: split log for concurrent tests

I am executing tests concurrently on Jenkins using Gradle.
test {
    useTestNG {
        setParallel('classes')
        setThreadCount(4)
    }
}
Everything runs great, but when a test fails I see the related Jenkins log interleaved with output from other threads. Is there a way to get separate output for every test?
UPD
It has nothing to do with Jenkins; the output is already interleaved in Gradle itself.
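One mitigation worth trying (a sketch, not from the original thread, assuming a reasonably recent Gradle): have Gradle record each test case's stdout/stderr in the JUnit XML report, so a report consumer such as the Jenkins JUnit publisher can attribute output per test even though the console interleaves:

```groovy
test {
    useTestNG {
        setParallel('classes')
        setThreadCount(4)
    }
    reports {
        junitXml {
            // capture stdout/stderr per test case in the XML report,
            // rather than as one interleaved stream per suite
            outputPerTestCase = true
        }
    }
}
```

The console stream stays interleaved either way; the per-test attribution lives in the XML report that Jenkins reads.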
