Jenkins: Logs from jenkins (System log -> All Jenkins logs) are not reflected in the jenkins.log or slave.log files. How can I fix this?
Related
I used "apt install jenkins" installed jenkins and it is started successful. but when it build a item it throw a error as below:
+ whoami
jenkins
+ supervisorctl restart Zonr.RuifWu.WebApi
error: <class 'PermissionError'>, [Errno 13] Permission denied: file: /usr/lib/python3/dist-packages/supervisor/xmlrpc.py line: 560
Build step 'Execute shell' marked build as failure
Finished: FAILURE
So I opened /etc/default/jenkins in vi and modified the config:
#JENKINS_USER=$NAME
#JENKINS_GROUP=$NAME
JENKINS_USER=root
JENKINS_GROUP=root
I executed "systemctl restart jenkins" and built the item again, but it still shows the same error message as above; the current operating user is still jenkins, not root.
I then intentionally put some invalid lines into the config file, and Jenkins still restarted successfully! This is my config file (the ,,,,, and ---- lines are just for that test):
# pulled in from the init script; makes things easier.
NAME=jenkins
# arguments to pass to java
# Allow graphs etc. to work even when an X server is present
JAVA_ARGS="-Djava.awt.headless=true"
#JAVA_ARGS="-Xmx256m"
# make jenkins listen on IPv4 address
#JAVA_ARGS="-Djava.net.preferIPv4Stack=true"
PIDFILE=/var/run/$NAME/$NAME.pid
# user and group to be invoked as (default to jenkins)
#JENKINS_USER=$NAME
#JENKINS_GROUP=$NAME
JENKINS_USER=root
JENKINS_GROUP=root
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
------------------------------------------------------------
# location of the jenkins war file
JENKINS_WAR=/usr/share/java/$NAME.war
# jenkins home location
JENKINS_HOME=/var/lib/$NAME
My Jenkins version is 2.346.2. So my questions are:
How can I use the jenkins user to run "supervisorctl restart xxxxx"? I would prefer the jenkins user rather than root.
Is /etc/default/jenkins the Jenkins config file? How do I change the running user to root?
Could anyone help me please? Thank you very much!
I have found the config here: /usr/lib/systemd/system/jenkins.service
I changed User=root and Group=root and then ran systemctl restart jenkins.
It works well! (This also explains why editing /etc/default/jenkins had no effect: recent Jenkins packages, around 2.332.x and later, start Jenkins through systemd and apparently no longer read that file.)
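If you would rather keep Jenkins running as the jenkins user (as the question prefers), a common alternative is to grant that single command through sudo instead of changing the service user. A minimal sketch, assuming supervisorctl is installed at /usr/bin/supervisorctl and reusing the program name from the error above:
# /etc/sudoers.d/jenkins-supervisor (edit with: visudo -f /etc/sudoers.d/jenkins-supervisor)
# allow the jenkins user to restart this one supervisor program, and nothing else
jenkins ALL=(root) NOPASSWD: /usr/bin/supervisorctl restart Zonr.RuifWu.WebApi
The Execute Shell step then calls "sudo supervisorctl restart Zonr.RuifWu.WebApi". Also note that direct edits to /usr/lib/systemd/system/jenkins.service can be overwritten by package upgrades; "systemctl edit jenkins" creates a drop-in override that survives them.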
I run a Jenkins job that is supposed to output the logs as artifacts.
post {
    always {
        archiveArtifacts artifacts: 'logs/my-job-name_*.log', fingerprint: true
    }
}
I got this error in the Console Output
Error in Logging Configuration. Using default configs.
Unable to configure handler 'debug_file_handler'
And the artifacts aren't created.
Someone had run the Jenkins job from the VM terminal as the root user.
The root user then owned the log files,
so subsequent Jenkins runs (as the jenkins user) couldn't write to them.
I deleted the log folder and it worked.
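A less destructive alternative (a sketch, assuming the logs/ layout from the archiveArtifacts pattern above) is to hand the files back to the jenkins user instead of deleting them:
# see who currently owns the log files
ls -l logs/
# return ownership to the jenkins user so the job can write to them again
sudo chown -R jenkins:jenkins logs/
After that, the job's logging handler should be able to open its file again and the artifacts should archive normally.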
I have a Jenkins Job DSL job that worked well until about January (it is not used that often). Last week, the job failed with the error message ERROR: java.io.IOException: Failed to persist config.xml (no stack trace, just that message). There were no changes to the job since the last successful execution in January.
[...]
13:06:22 Processing provided DSL script
13:06:22 New run name is '#15 (Branch_B20_2_x)'
13:06:22 ERROR: java.io.IOException: Failed to persist config.xml
13:06:22 [WS-CLEANUP] Deleting project workspace...
13:06:22 [WS-CLEANUP] Deferred wipeout is used...
13:06:22 [WS-CLEANUP] done
13:06:22 Finished: FAILURE
I thought that maybe between January and now some plugin was updated and the DSL script is now invalid, so I changed my DSL script to the simplest one I could imagine (the example from the job-dsl plugin page):
job('example') {
    steps {
        shell('echo Hello World!')
    }
}
But the job still fails with the exact same error.
I checked the Jenkins logs, but there was nothing to see.
I am running Jenkins in a Docker Swarm container and each job is executed in its own build agent container using the docker-swarm-plugin (no changes there either; it worked in January).
The Docker daemon logs also show no errors.
The filesystem for the Jenkins workspace is not full, and the user in the build agent container has write access to that filesystem.
It even does not work, when I mount an empty tmpfs to the workspace.
Does anyone have an idea what goes wrong or at least a hint where to continue searching for that error?
Jenkins version: 2.281
job-dsl plugin version: 1.77
Docker version: 20.10.4
The problem was solved by updating Jenkins to 2.289.
It seems there was some problem with that combination of versions. I will keep you updated if any of the next updates changes anything.
I have a Jenkins master-slave setup using JNLP connections. Everything is working fine except that I cannot find any logs on the slave nodes. There are logs on the master in $JENKINS_HOME/logs/slaves but none on the slave node.
Can you tell me on which path the log is or if there is even logging on the slave node?
Thank you very much!
Jenkins stores all logs on the master only; that's why you cannot find any logs on the nodes.
On Windows, the slave stores error logs in the same folder as the slave.jar file.
This reports things like:
Dec 19, 2018 2:38:14 PM hudson.remoting.Engine waitForServerToBack
"INFO: Failed to connect to the master. Will retry again".
That message will never be uploaded to the master.
I would like to see a similar log file on the other slaves.
Build logs, on the other hand, are mostly transferred to the master over TCP.
For example, when a step such as a shell task starts, the agent does something like this:
# your shell content is turned into a script file, then executed with its output captured
script.sh > jenkins-log.txt
# ... running ...
# after running, the exit code is recorded
echo $? > jenkins-result.txt
While the job runs, the data is transported over TCP (pull or push):
jenkins-log.txt -> FileStream -> RemoteStream -> master
and on the master you will see a single log like:
jobs/xxx/branch/master/<id>/log
When the job is done, the master sends a command to clean the temp dir on the agent, so you can't see anything of the logs there.
One more thing: in our company we faced the problem of too many logs being sent to the master, like a DDoS, so a simple fix is to add a pipe after the shell command.
Limit the size with tail:
xxx | tail -c 512k
Or limit it with head:
xxx | (head -n 1000; dd status=none of=/dev/null)
(The dd part drains the rest of the stream so the producing command isn't killed by SIGPIPE after head exits.)
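If you want a log on the agent machine itself, one option (a sketch; the URL, node name, secret, and paths are placeholders) is to capture the agent process's own output when launching it, and to pass -workDir, which makes the remoting layer keep its own logs under <workdir>/remoting/logs:
# launch the JNLP agent with a work directory and capture its output locally
nohup java -jar agent.jar \
    -jnlpUrl https://jenkins.example.com/computer/my-node/slave-agent.jnlp \
    -secret <secret> \
    -workDir /home/jenkins/agent \
    > /home/jenkins/agent/agent.log 2>&1 &
This only covers the connection/remoting side; the build logs themselves are still streamed to the master as described above.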
I am trying to push our Jenkins build logs to S3.
I used the Groovy plugin and the following script in the Build phase:
// This script should be run in a system groovy script build step.
// The FilePath class understands what node a path is on, not just the path.
import hudson.FilePath
// Get path to console log file on master.
logFile = build.getLogFile()
// Turn this into a FilePath object.
logFilePathOnMaster = new FilePath(logFile)
logFileName = build.envVars["JOB_BASE_NAME"] + build.envVars["RT_RELEASE_STAGING_VERSION"] + '.txt'
// Create remote file path obj to build agent.
remoteLogFile = new FilePath(build.workspace, logFileName)
// Copy contents of master's console log to file on build agent.
remoteLogFile.copyFrom(logFilePathOnMaster)
And then I am using the S3 plugin to push the .txt files to S3.
But this script fetches the build log file from the master node.
How are the build logs transferred from the slave to the master node?
Can I access the build log file on my slave node without the master's involvement at all?
The slave node must be preserving the build logs somewhere while building? I can't seem to find them.
I am not very familiar with Groovy, but here is the solution that worked for me using a shell script.
I am using the Jenkins 'Node and Label parameter plugin' to run our Java process on a slave node. The job is triggered using the 'Build >> Execute shell' option. The log is collected into a file as below (note that 2>&1 must come before the pipe so that stderr is captured as well):
sudo java -jar xxx.jar 2>&1 | sudo tee -a ${JOB_NAME}/${BUILD_NUMBER}.log
This log file is then pushed to S3:
sudo aws --region ap-south-1 s3 cp ${JOB_NAME}/${BUILD_NUMBER}.log s3://bucket/JenkinsLogs/${JOB_NAME}/${BUILD_NUMBER}.log
It's working perfectly for us. Hope it helps you too.
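A small refinement of the same approach (a sketch; the jar name, bucket, and region are the placeholders used above): create the per-job directory first, since tee does not create missing directories, and quote the variables in case the job name contains spaces:
# hypothetical single Execute shell step combining both commands
mkdir -p "${JOB_NAME}"
sudo java -jar xxx.jar 2>&1 | sudo tee -a "${JOB_NAME}/${BUILD_NUMBER}.log"
sudo aws --region ap-south-1 s3 cp "${JOB_NAME}/${BUILD_NUMBER}.log" "s3://bucket/JenkinsLogs/${JOB_NAME}/${BUILD_NUMBER}.log"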