I run a Jenkins job that is supposed to output log artifacts.
post {
    always {
        archiveArtifacts artifacts: 'logs/my-job-name_*.log', fingerprint: true
    }
}
I got this error in the Console Output:
Error in Logging Configuration. Using default configs.
Unable to configure handler 'debug_file_handler'
And the artifacts are not created.
Someone ran the Jenkins job from the VM terminal as the root user.
The root user then owned the logs,
so subsequent Jenkins runs (as the jenkins user) couldn't write to them.
I deleted the log folder and it worked.
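To guard against this happening again, a minimal sketch like the following could be added before the job writes its logs (this assumes the logs live in a logs/ directory under the workspace and the agent runs as the jenkins user; the stage name is made up):
pipeline {
    agent any
    stages {
        stage('Check logs dir') {
            steps {
                // Fail fast with a clear message if logs/ exists but is not writable
                // by the current user, instead of letting the logging configuration
                // silently fall back to defaults later.
                sh '''
                    if [ -d logs ] && [ ! -w logs ]; then
                        ls -ld logs
                        echo "logs/ is not writable by $(whoami); fix its ownership or delete it as root" >&2
                        exit 1
                    fi
                '''
            }
        }
    }
}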
I have a Jenkins Job DSL job that worked well until about January (it is not used that often). Last week, the job failed with the error message ERROR: java.io.IOException: Failed to persist config.xml (no stack trace, just that message). There were no changes to the job since the last successful execution in January.
[...]
13:06:22 Processing provided DSL script
13:06:22 New run name is '#15 (Branch_B20_2_x)'
13:06:22 ERROR: java.io.IOException: Failed to persist config.xml
13:06:22 [WS-CLEANUP] Deleting project workspace...
13:06:22 [WS-CLEANUP] Deferred wipeout is used...
13:06:22 [WS-CLEANUP] done
13:06:22 Finished: FAILURE
I thought that maybe some plugin had been updated between January and now and the DSL script was now wrong, so I changed my DSL script to the simplest one I could imagine (the example from the job-dsl plugin page):
job('example') {
    steps {
        shell('echo Hello World!')
    }
}
But the job still fails with the exact same error.
I checked the Jenkins logs, but there was nothing to see.
I am running Jenkins in a Docker Swarm container and each job is executed in its own build agent container using the docker-swarm-plugin (no changes to that either; it worked in January).
The Docker daemon logs also show no errors.
The filesystem for the Jenkins workspace is not full, and the user in the build agent container has write access to that filesystem.
It does not even work when I mount an empty tmpfs to the workspace.
Does anyone have an idea what goes wrong or at least a hint where to continue searching for that error?
Jenkins version: 2.281
job-dsl plugin version: 1.77
Docker version: 20.10.4
The problem was solved by updating Jenkins to 2.289.
It seems there was some problem with that particular combination of versions. I will keep you updated if any of the next updates changes anything.
I'm experiencing some odd behavior with a Jenkins build (Jenkins project is a multi-branch pipeline with the Jenkinsfile provided by the source repository). The last step is to deploy the application which involves replacing an artifact on a remote host and then restarting the process that runs it.
Everything works perfectly except for one problem - the service is no longer running after the build completes. I even added some debugging messages after the restart script to prove with the build output that it really was working. But for some reason, after the build exits the service is no longer running. I've done extensive testing to ensure Jenkins connects to the remote host as the correct user, has the right env vars set, etc. Plus, the restart script output is very detailed in the first place - there would be no way to get the successful output if it didn't actually work. So I am assuming the process that runs the deploy steps on the remote host is doing something else after the build completes execution.
Here is where it gets weird: if I run the same exact deploy commands using the Script Console for the same exact remote host, it works. And the service isn't stopped after successfully starting up.
By "same exact" I mean the script is the same, but the DSL is different between the Script Console and the pipeline. For example, in the Script Console, I use
println "deployscript.sh <args>".execute().text
Whereas in the pipeline I use
pipeline {
    agent {
        node { label 'mynode' }
    }
    stages {
        /* other stages commented out for testing */
        stage('Deploy') {
            steps {
                script {
                    sh 'deployscript.sh <args>'
                }
            }
        }
    }
}
I also don't have any issues running the commands manually via SSH.
Does anyone know what is going on here? Is there a difference in how the Script Console vs the Build Agent connects to the remote host? Do either of these processes run other commands? I understand that the SSH session is controlled by a Java process, but I don't know much else about the Jenkins implementation.
If anyone is curious about the application itself, it is a Progress Application Server for OpenEdge (PASOE) instance. The deploy process involves un-deploying the old WAR file, deploying the new one, and then stopping/starting the instance.
UPDATE:
I added a 60-second sleep to the end of the deploy script to give me time to test the service before the Jenkins process ended. This was successful, so I am certain that the service goes down when the Jenkins build process exits. I am not sure if this is an issue with Jenkins owning a process, but again, the Script Console handles this fine...
Found the issue. It's buried away in some low-level Jenkins documentation, but Jenkins builds have a default behavior of killing any processes spawned by the build. This confirms that Jenkins was the culprit and the build indeed was running correctly. It was just being killed after the build completed.
The fix is to set the value of the BUILD_ID environment variable (JENKINS_NODE_COOKIE for pipeline, like in my situation) to "dontKillMe".
For example:
pipeline {
    agent { /* set agent */ }
    environment {
        JENKINS_NODE_COOKIE = "dontKillMe"
    }
    stages { /* set build stages */ }
}
See here for more details: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller
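If you would rather not disable the ProcessTreeKiller for the whole pipeline, the same cookie can be scoped to just the deploy step with withEnv; a minimal sketch, reusing the deployscript.sh call from above:
stage('Deploy') {
    steps {
        // Setting the cookie only for this step tells the ProcessTreeKiller to
        // leave any processes spawned here running after the build finishes
        withEnv(['JENKINS_NODE_COOKIE=dontKillMe']) {
            sh 'deployscript.sh <args>'
        }
    }
}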
I'm new to Jenkins and I am trying to play around with it.
I'm trying to run a pipeline with a command that will run a simple dir on a remote Windows file server (with a UNC path provided).
pipeline {
    agent any
    stages {
        stage('Read File') {
            steps {
                bat 'whoami'
                bat label: 'check directory', script: 'dir \\\\filesrv\\C$\\NewUser'
            }
        }
    }
}
The whoami command returns the Jenkins AD user I configured to run the service on the slave,
but after that I get an Access is denied error.
I tried giving the Jenkins AD service user local admin permissions on the Jenkins master and slave servers and also on the file server; that didn't help.
I also tried explicitly giving that user full control permissions on the folder I'm trying to access (located on the file server); that didn't help either.
I also tried giving permissions to the computer accounts, as many threads suggest, pointing to this link: https://serverfault.com/questions/135867/how-to-grant-network-access-to-localsystem-account. That also didn't help.
I would appreciate some assistance in understanding which permission is missing.
Thanks in advance
I'll post the solution for future reference, for those who are using a Windows environment.
The thing I was missing was making the target folder a shared folder.
So instead of \\\\filesrv\\C$\\NewUser, the path that worked is \\\\filesrv\\NewUser,
where NewUser is the name of the share.
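For completeness, the corrected step from the pipeline above would look roughly like this (with NewUser being the share name, as described):
steps {
    bat 'whoami'
    // Use the share name instead of the administrative C$ share
    bat label: 'check directory', script: 'dir \\\\filesrv\\NewUser'
}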
I have set up "Enable Artifactory trigger" on my Jenkins job. The job gets triggered automatically whenever there is an update in the Artifactory path.
The triggered job's output shows the artifact that triggered the build.
What I need is to parse that artifact URL and use it as an environment variable.
Jenkins console log of the Artifactory-triggered job:
The build was triggered by a change in Artifactory.
The artifact which triggered the build is: http://xxx.xxx.com/artifactory/api/storage/artifactory-snapshot/ABC/exp-abc-12-5-br/abc-12-5-99-015/up-image.tgz
[EnvInject] - Loading node environment variables.
Building remotely on slave in workspace /data/engit-private-jenkins/workspace/Tools/test_artifactory_trigger
[test_artifactory_trigger] $ /bin/sh -xe /tmp/jenkins6159521155154940953.sh
+ echo 'hello world'
hello world
Finished: SUCCESS
How do I access that Artifactory URL from the Jenkins console log above within the same job? I don't see it being stored in the environment.
We can get the URL of the file in Artifactory that triggered the job using:
environment {
    RT_TRIGGER_URL = "${currentBuild.getBuildCauses('org.jfrog.hudson.trigger.ArtifactoryCause')[0]?.url}"
}
Build trigger documentation link
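As a minimal sketch of how that variable could then be consumed later in the same pipeline (the stage name and echo step are only illustrative):
pipeline {
    agent any
    environment {
        // URL of the artifact that triggered this build (null if triggered some other way)
        RT_TRIGGER_URL = "${currentBuild.getBuildCauses('org.jfrog.hudson.trigger.ArtifactoryCause')[0]?.url}"
    }
    stages {
        stage('Show trigger') {
            steps {
                sh 'echo "Triggered by: $RT_TRIGGER_URL"'
            }
        }
    }
}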
Jenkins is configured to run with Jenkins Service Log On Account user: Domain1\User1
My job runs the command:
echo %USERDOMAIN%\%USERNAME%
And it prints: Domain1\User1
Now I change the Jenkins Service Log On Account user to: Domain1\User2
Restart Jenkins service.
Run the job again, but it still prints: Domain1\User1
Why isn't %USERNAME% refreshed?
The %USERNAME% environment variable shows the username of the account that runs the Jenkins service, not the one that is currently logged in.
I've found some bug reports regarding this issue:
https://issues.jenkins-ci.org/browse/JENKINS-27739
Should be solved in Jenkins version 1.617
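To double-check which account a build actually runs under, a small sketch like this can be dropped into a job; whoami asks Windows for the real account of the running process, while %USERNAME% may just echo a stale, cached value:
pipeline {
    agent any
    stages {
        stage('Who am I') {
            steps {
                // Reported directly by Windows for the current process
                bat 'whoami'
                // Taken from the (possibly cached) service environment
                bat 'echo %USERDOMAIN%\\%USERNAME%'
            }
        }
    }
}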