jenkins workspace dir issue - docker

My pipeline had been working fine until today.
Jenkins dynamically spins up a slave container (Docker cloud) from which all my steps are run. The error is below; I'm just wondering why Jenkins creates a tmp dir in the workspace dir.
[Pipeline] sh
[xxx_root_proj] Running shell script
+ cd ./xxx_root_proj
/home/jenkins/workspace/xxx_root_proj#tmp/durable-b532c37c/script.sh: 3: cd: can't cd to ./xxx_root_proj
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 2
Finished: FAILURE
Just wondering if anyone has come across this before.
I think the "/home/jenkins/workspace/xxx_root_proj#tmp" is the problem, not sure how jenkins uses this.
Thanks in advance

The #tmp folder is created by Jenkins in the workspace for shared library components and the like. It is basically a temporary working directory for the pipeline.
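Since the failing step is a relative cd, one workaround (my own sketch, not from the original answer; xxx_root_proj is just the directory name from the question) is to anchor the path with the dir() step instead of relying on the shell's current directory:

node {
    // dir() with a relative path resolves against the current directory,
    // which at the top of node {} is the workspace root, not the #tmp dir
    dir('xxx_root_proj') {
        sh 'pwd'
    }
}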

Related

Jenkins console output not printing anything between start and end of pipeline

I have created a new job in Jenkins using a pipeline. After this I provided the GitLab project URL and the Jenkinsfile path in SCM. While building the pipeline I am not able to see any messages between the start and the end of the pipeline.
When I put invalid code in the Jenkinsfile the build fails, but when running a simple command like echo, nothing is printed to the console.
Is there anything I am missing?
Console Output
[Pipeline] Start of Pipeline
[Pipeline] End of Pipeline
Finished: SUCCESS
Jenkinsfile
pipeline {
    agent any
    stages {
        stage ('Build') {
            steps {
                echo 'Running build phase. '
            }
        }
    }
}
I would suggest installing all the required plugins and then restarting your Jenkins server; if you are running this locally, a system restart might also help.
For testing, try the same echo in a scripted pipeline block:
steps {
    script {
        echo 'Running build phase. '
    }
}

jenkins pipeline running inside docker image just hangs

I have a very simple script to test running inside a docker container.
The container starts and I can connect to the container.
node('docker') {
    docker.image('python:3').inside() {
        sh "python --version"
    }
}
In the end the job fails. Any ideas what is wrong?
Update 1:
I have added the environment variable to Jenkins and now see the following. It looks like some strange variables are passed to docker.
Any idea how I can examine the command passed to sh?
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] echo
I'm here
[Pipeline] sh
invalid argument "=" for "-e, --env" flag: invalid environment variable: =
See 'docker exec --help'.
process apparently never started in /var/lib/jenkins-
slave/workspace/SYSTEM/clean-artifactory#tmp/durable-4d51de81
[Pipeline] }
[Pipeline] // stage
This was a bug in the Durable Task plugin and has been fixed by the latest release (1.33).
See JENKINS-59903
I had the same problem, and after a long wait this error message was logged to the console:
Cannot contact : java.io.FileNotFoundException: File '/var/lib/jenkins/workspace/myproject#2#tmp/durable-1a2d497f/output.txt' does not exist
The problem is the Durable Task plugin. In my case I downgraded it from the latest version (1.31) to 1.30 and that solved the problem.
I'm using Docker Pipeline version 1.21
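If you are unsure which version you are running, a quick way to check (a sketch of my own using the Jenkins script console, not something from these answers) is:

import jenkins.model.Jenkins

// Print the installed version of the Durable Task plugin
def plugin = Jenkins.instance.pluginManager.getPlugin('durable-task')
println "${plugin.getShortName()} ${plugin.getVersion()}"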

Disable displaying '[Pipeline]*' lines in Jenkins pipeline logs?

Jenkins scripted pipeline logs are littered with lines that begin with [Pipeline], which give insight into the pipeline flow. Is there a way, using Groovy scripted pipelines, to not include them in the logs?
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/jobs/pipe/jobs/pd.oh52a1/branches/dev/workspace
[Pipeline] {
[Pipeline] sh
+ echo /var/jenkins_home/jobs/pipe/jobs/pd.oh52a1/branches/dev/workspace
[Pipeline] }
[Pipeline] // node
[Pipeline] echo
I am hoping I can programmatically enable/disable these at the beginning of a run so I can see them if I want to but turn them off by default.

Different working directories in jenkins pipeline

Here's a simple Jenkins pipeline job that highlights the different working directories seen by sh vs script.
pipeline {
    agent any
    stages {
        stage('Stage1') {
            steps {
                sh """
                    pwd
                """
                script {
                    echo "cwd--pwd: " + "pwd".execute().text
                }
            }
        }
    }
}
Here's how the Jenkins instance was launched:
/Users/MyJenkinsUser/dirJenkinsLaunched$ java -jar /Applications/Jenkins/jenkins.war --httpPort=8080
Here's the console output of the job...
Started by user MyJenkinsUser
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /Users/MyJenkinsUser/.jenkins/jobs/TestPipeline/workspace
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Stage1)
[Pipeline] sh
[workspace] Running shell script
+ pwd
/Users/MyJenkinsUser/.jenkins/jobs/TestPipeline/workspace
[Pipeline] script
[Pipeline] {
[Pipeline] echo
cwd--pwd: /Users/MyJenkinsUser/dirJenkinsLaunched
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
I find it curious that they use different working directories: the sh shell step uses the workspace as its working directory, while the Groovy script step uses the directory from which the Jenkins process was launched.
Question: how can I make my Jenkins scripted pipeline steps (script) use the workspace as the working directory by default?
I guess it makes sense once you realize that Groovy is a Java thing: we launch the Jenkins WAR file from Java, and that launch imposes a particular working directory. I do wonder about the origins of this design in Jenkins. It tripped me up with a bunch of file-not-found errors as I ported some sh commands into Groovy, which I did to avoid the nested-escaping craziness you can fall into in the shell, especially when paths contain spaces.
You should not use execute() in Jenkins pipelines. Use the pipeline DSL's steps instead of arbitrary Groovy code.
As you noticed, such "native" code is executed on the Jenkins master, with no relation to the current job.
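For example, a rough equivalent of the script block above that stays inside the node's workspace (my own sketch, using the sh step's returnStdout option instead of "pwd".execute()):

steps {
    script {
        // sh runs on the node, in the workspace, unlike "...".execute(),
        // which runs in the JVM of the Jenkins master process
        def cwd = sh(script: 'pwd', returnStdout: true).trim()
        echo "cwd--pwd: ${cwd}"
    }
}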
Unfortunately this may not be possible. I'll have to redesign the script code to explicitly use the workspace variable instead of relying on the current working directory Java uses.
Changing the current working directory in Java?

configuring nodes for Jenkins build slaves with docker

I've configured my Jenkins master to use Docker and I can connect to it. I've got a simple pipeline to test this:
node ('docker-build-slave') {
    stage ('On slave') {
        sh 'ls -l'
        sh 'uname -a'
    }
}
When I instigate a build and look at what's being written to the console, I get:
Started by user chris adkin
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘docker-build-slave’ are offline
and it just hangs. I'm wondering if there is something really obvious I have missed; do I need to create a node for my Docker build slaves?
If I go onto the machine hosting Jenkins, I can see that the build slave container has been started.
The docker-build-slave that you supply is a label filtering the available Jenkins agents (master/slaves). If you do not have this label assigned to the master or to any of the (available) slaves, this job cannot be built. Read more about labels.
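To illustrate the difference (a minimal sketch of my own, not from the answer): a plain node block can run on any available agent, while a labelled one waits until an agent carrying that label is online.

node {                        // runs on any available agent
    sh 'uname -a'
}
node('docker-build-slave') {  // waits for an agent carrying this label
    sh 'uname -a'
}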
To use Docker from within a Jenkins pipeline, use the docker global variable, e.g. as described in this example:
node {
    checkout scm
    /*
     * In order to communicate with the MySQL server, this Pipeline explicitly
     * maps the port (`3306`) to a known port on the host machine.
     */
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
        /* Run some tests which require MySQL */
        sh 'make check'
    }
}
So, after some digging around (and I am a bit shamefaced to be answering my own question), I came across this Jenkins issue:
https://issues.jenkins-ci.org/browse/JENKINS-44859
I had built my image using JDK 7. The issue states, and I quote from the comment added by Vinson Lee:
Jenkins 2.54+ requires Java 8.
I modified the Dockerfile for my image to install OpenJDK 8 and everything is now working.
If you use node {} with a specific label and don't have any nodes with that label set up, the build will be stuck forever, as mentioned in StephenKing's answer. You also need to make sure you have at least 2 executors set up when using a single node (like 'master'), otherwise pipeline builds will usually get stuck, as they consist of a root build and several sub-builds for the steps.
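One way to bump the executor count on the built-in node (my own sketch, assuming you apply it from the script console or an init.groovy.d script rather than the Manage Jenkins UI):

import jenkins.model.Jenkins

// Give the built-in (master) node two executors so pipeline builds
// are not starved of execution slots
Jenkins.instance.setNumExecutors(2)
Jenkins.instance.save()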
The fix worked; this is the console output from successfully running the build job:
Started by user chris adkin
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘docker-build-slave’ are offline
Running on docker-13b5a18eb067 in /home/jenkins/workspace/Pipeline With Docker Slave
[Pipeline] {
[Pipeline] stage
[Pipeline] { (On slave)
[Pipeline] sh
[Pipeline With Docker Slave] Running shell script
+ ls -l
total 0
[Pipeline] sh
[Pipeline With Docker Slave] Running shell script
+ uname -a
Linux localhost 4.9.49-moby #1 SMP Wed Sep 27 00:36:29 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
