Jenkins pipeline working in different folders - jenkins

Alright, so I am just learning about pipelines in Jenkins and I've run into a small problem.
It is building my war file in one directory but trying to build the Docker image in another one, which will of course fail.
A shortened log describes the problem quite well:
[Pipeline] stage
[Pipeline] { (build war)
[Pipeline] node
Running on Jenkins in /root/.jenkins/workspace/Wunderbaren#2
[Pipeline] {
[Pipeline] stage
[Pipeline] { (build dockerimage)
[Pipeline] script
[Pipeline] {
[Pipeline] dir
Running in /root/.jenkins/workspace/Wunderbaren/backend
[Pipeline] {
The Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('build war') {
            agent {
                docker { image 'gradle:latest' }
            }
            steps {
                sh 'gradle war -b backend/build.gradle'
            }
        }
        stage('build dockerimage') {
            steps {
                script {
                    dir('backend/') {
                        def image = docker.build("munhunger/wunderbaren")
                        docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
                            image.push("${env.BUILD_NUMBER}")
                            image.push("latest")
                        }
                    }
                }
            }
        }
    }
}
What I find odd is that I have a similar project with pretty much the exact same configuration; it only differs in folder names and Docker tag, and that one seems to work 100% of the time, so I feel quite lost on this one!

Turns out you need to reuse the node:
stage('build war') {
    agent {
        docker {
            image 'gradle:latest'
            reuseNode true
        }
    }
    steps {
        sh 'gradle war -b backend/build.gradle'
    }
}
From the documentation I found at https://go.cloudbees.com/docs/cloudbees-documentation/use/reference/pipeline/
reuseNode
A boolean, false by default. If true, run the container in the node specified at the top-level of the Pipeline, in the same workspace, rather than on a new node entirely.
This option is valid for docker and dockerfile, and only has an effect when used on an agent for an individual stage.
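For completeness, here is roughly how the whole Jenkinsfile from the question looks with that single change applied (a sketch; everything besides reuseNode is unchanged):
pipeline {
    agent any
    stages {
        stage('build war') {
            agent {
                docker {
                    image 'gradle:latest'
                    reuseNode true   // build the war in the top-level workspace
                }
            }
            steps {
                sh 'gradle war -b backend/build.gradle'
            }
        }
        stage('build dockerimage') {
            steps {
                script {
                    dir('backend/') {   // same workspace, so the war from the previous stage is here
                        def image = docker.build("munhunger/wunderbaren")
                        docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
                            image.push("${env.BUILD_NUMBER}")
                            image.push("latest")
                        }
                    }
                }
            }
        }
    }
}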

From Jenkins Pipeline Documentation
The agent section specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins environment depending on where the agent section is placed. The section must be defined at the top-level inside the pipeline block, but stage-level usage is optional.
I believe this means the 'build war' stage will execute in a separate environment from the 'build dockerimage' stage. As for the similar syntax working in a different job, perhaps the same agent is defined for both stages there?

Related

Using withEnv in a declarative pipeline

I'm trying to run a docker command in my declarative pipeline. To install the docker environment on my slave machine I'm trying to use the Docker Commons plugin (https://plugins.jenkins.io/docker-commons/), but with no success.
On further research I found the link below, which describes how to use this plugin:
https://automatingguy.com/2017/11/06/jenkins-pipelines-simple-delivery-flow/
I have configured docker in Manage Jenkins -> Global Tool Configuration, but I can't figure out how to use the section below in my declarative pipeline; I think this structure/syntax only works for a scripted Jenkins pipeline:
def dockerTool = tool name: 'docker', type: 'org.jenkinsci.plugins.docker.commons.tools.DockerTool'
withEnv(["DOCKER=${dockerTool}/bin"]) {
    stages{}
}
Can someone please help me with how I can use the Docker Commons tool in a declarative Jenkins pipeline?
Note: I cannot switch to a scripted pipeline due to standardization with other projects.
Here is a working example:
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    test_env = "this is test env"
                    withEnv(["myEnv=${test_env}"]) {
                        echo "${env.myEnv}"
                    }
                }
            }
        }
    }
}
I have a feeling that you don't need to use either withEnv or Docker Commons. Have you seen this? https://www.jenkins.io/doc/book/pipeline/docker/
There are plenty of good examples there of how to use docker with a Jenkinsfile.
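For instance, the declarative docker agent from that guide runs your steps inside a container without any extra plumbing (a minimal sketch, using the maven image from the Jenkins documentation):
pipeline {
    agent {
        docker { image 'maven:3-alpine' }   // every step below runs inside this container
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn --version'   // resolves to the Maven bundled in the image
            }
        }
    }
}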
My attempt to answer your question (if I got it right): if you are asking about a declarative equivalent of the scripted withEnv, then you are probably looking for environment {}. Something like this:
pipeline {
agent any
environment {
DOCKER = "${dockerTool}/bin"
}
stages {
stage('One') {
steps {
// steps here
}
}
}
}
Here is a working declarative pipeline solution as of Docker Commons v1.17.
Note: the tool name dockerTool is a keyword, and docker-19.03.11 is the name I gave my installation on the Jenkins > Manage Jenkins > Global Tool Configuration page.
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.11'
    }
    stages {
        stage('build') {
            steps {
                sh '''
                    echo 'FROM mongo:3.2' > Dockerfile
                    echo 'CMD ["/bin/echo", "HELLO WORLD...."]' >> Dockerfile
                '''
                script {
                    docker.withRegistry('http://192.168.99.100:5000/v2/') {
                        def image = docker.build('test/helloworld2:$BUILD_NUMBER')
                    }
                }
            }
        }
    }
}

Will a Jenkins pipeline compile twice when building a tag?

I want to set up a Jenkins pipeline which builds a Docker image whenever Jenkins is building a tag, so I used buildingTag() in the when condition. This works fine, but I have some trouble understanding Jenkins at this point.
Every commit triggers the "Compile" stage. If a tag is built, will the "Compile" stage be executed twice? First in a run on e.g. the master branch, and a second time when the "Tag" build job is started explicitly? If so, how can this be avoided?
pipeline {
    agent any
    environment {
        APP_NAME = 'myapp'
    }
    stages {
        stage('Compile') {
            steps {
                echo "Start compiling..."
            }
        }
        stage('Build Docker Image') {
            when { buildingTag() }
            steps {
                echo "Building a Docker image..."
            }
        }
    }
}
For a multibranch project, branch builds are separate from tag builds, so yes, each build would run the Compile stage. They also have separate workspaces, so they should not affect each other.
If you don't want a stage to run on tag builds, just add a when { not { buildingTag() } } expression to that stage.
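Applied to the pipeline above, the Compile stage would then look like this (only the when block is new):
stage('Compile') {
    when { not { buildingTag() } }   // skip compilation when a tag is being built
    steps {
        echo "Start compiling..."
    }
}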

Jenkins Docker pipeline stuck on "Waiting for next available executor"

In my project I have a Jenkins pipeline which should execute two stages on a provided Docker image, and a third stage on the same machine but outside the container. Running this third stage on the same machine is crucial, because the previous stages produce some output that is needed later. These files are stored on the machine through mounted volumes.
In order to be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload the built target from the Docker container; it has to be done from the physical machine.
And here is my problem: while the first two stages work perfectly fine, the third stage cannot start, because "Waiting for next available executor" suddenly appears in the logs. Obviously the node is waiting for itself, and I cannot use another machine. It looks like Docker is blocking something and Jenkins thinks the node is busy, so it waits forever.
I'm looking for a solution that will allow me to run stages both inside and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it's only available since version 1.3 of the Declarative Pipeline plugin, and my node has 1.2.9.
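For reference, once the plugin is upgraded to 1.3+, a sequential-stages layout along these lines should keep everything on one node while running only the build stages inside the container (a sketch based on the pipeline above; reuseNode keeps the container in the node's workspace):
pipeline {
    agent none
    stages {
        stage('On jenkins-worker-1') {
            agent { label 'jenkins-worker-1' }
            stages {
                stage('Test and Package') {
                    agent {
                        docker {
                            image 'custom-image:1.0'
                            registryUrl 'https://example.com/registry'
                            args '-v $HOME/.m2:/root/.m2'
                            reuseNode true   // stay in the same workspace on this node
                        }
                    }
                    steps {
                        sh 'mvn test'
                        sh 'mvn package'
                        sh 'mv target workdir/'
                    }
                }
                stage('Upload') {
                    steps {
                        sh 'uploader.sh workdir'   // runs on the machine itself, outside the container
                    }
                }
            }
        }
    }
}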

Different working directories in jenkins pipeline

Here's a simple Jenkins pipeline job that highlights the different working directories seen by sh vs script.
pipeline {
    agent any
    stages {
        stage('Stage1') {
            steps {
                sh """
                    pwd
                """
                script {
                    echo "cwd--pwd: " + "pwd".execute().text
                }
            }
        }
    }
}
Here's how the Jenkins instance was launched
/Users/MyJenkinsUser/dirJenkinsLaunched$ java -jar /Applications/Jenkins/jenkins.war --httpPort=8080
Here's the console output of the job...
Started by user MyJenkinsUser
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /Users/MyJenkinsUser/.jenkins/jobs/TestPipeline/workspace
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Stage1)
[Pipeline] sh
[workspace] Running shell script
+ pwd
/Users/MyJenkinsUser/.jenkins/jobs/TestPipeline/workspace
[Pipeline] script
[Pipeline] {
[Pipeline] echo
cwd--pwd: /Users/MyJenkinsUser/dirJenkinsLaunched
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
I find it curious that they would have different working directories: the shell command sh step uses the workspace as the working directory, while the Groovy script step uses the directory the Jenkins process was launched from.
Question: how can I make my Jenkins scripted pipeline steps (script) use the workspace as the working directory by default?
I guess it makes sense once you realize that Groovy is a Java thing, we launch the Jenkins war file from Java, and that launch imposes a certain working directory; still, I wonder about the origins of this design decision in Jenkins. It made me go a bit wonky with a bunch of file-not-found errors as I ported some sh commands into the more substantive Groovy syntax, which I did to avoid all the double-nesting escaping craziness one can fall into in the shell, especially when spaces show up in paths and whatnot.
You shall not use execute() in Jenkins pipelines. Use the pipeline DSL's steps instead of arbitrary Groovy code.
As you noticed, such "native" code is executed on the Jenkins master, without any relation to the current job.
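If all you need is the workspace path, the pipeline DSL already exposes it, so the process call can usually be replaced with something like this (a sketch):
script {
    // pwd() is a pipeline step and resolves inside the node's workspace
    echo "cwd--pwd: " + pwd()
    // env.WORKSPACE holds the same path and can be handed to Groovy file APIs
    echo "workspace: ${env.WORKSPACE}"
}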
Unfortunately that may not be possible. I'll have to redesign the script code to explicitly use the workspace variable instead of relying on the current working directory Java uses.
Changing the current working directory in Java?

How to set specific workspace folder for jenkins multibranch pipeline projects

I have an external tool that should be called as a build step in one of my Jenkins jobs. Unfortunately, this tool has some issues with quoting commands to avoid problems with whitespace in the path it is called from.
Jenkins is installed in C:\Program Files (x86)\Jenkins, hence I'm having trouble with Jenkins calling the external tool.
What I tried is to set "Workspace Root Directory" in Jenkins -> Configuration to C:\jenkins_workspace in order to avoid any whitespace. This works for Freestyle projects, but my Multibranch Pipeline project is still checked out and built under C:\Program Files (x86)\Jenkins\workspace.
One solution would be to move the whole Jenkins installation to e.g. C:\jenkins, which I would like to avoid. Is there a proper way to just tell Jenkins Pipeline jobs to use the "Workspace Root Directory" as well?
Thanks for any help
The ws instruction sets the workspace for the commands inside it. For declarative pipelines, it looks like this:
ws("C:\jenkins") {
echo "awesome commands here instead of echo"
}
You can also call a script to build the customWorkspace to use:
// if the current branch is master, this helpfully sets your workspace to /tmp/ma
partOfBranch = sh(returnStdout: true, script: 'echo $BRANCH_NAME | sed -e "s/ster//g"').trim()   // trim the trailing newline from sh output
path = "/tmp/${partOfBranch}"
sh "mkdir ${path}"
ws(path) {
    sh "pwd"
}
You can also set it globally by using the agent block (generally at the top of the pipeline block), by applying it to a node at that level:
pipeline {
    agent {
        node {
            label 'my-defined-label'
            customWorkspace '/some/other/path'
        }
    }
    stages {
        stage('Example Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}
Another node instruction later on might override it. Search for customWorkspace at https://jenkins.io/doc/book/pipeline/syntax/. You can also use it with the docker and dockerfile instructions.
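For instance, with a docker agent it might look like this (a sketch; the image and path are placeholders):
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            customWorkspace '/some/other/path'   // checkout and build happen here
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}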
Try this syntax instead:
pipeline {
    agent {
        label {
            label 'EB_TEST_SEL'
            customWorkspace "/home/jenkins/workspace/ReleaseBuild/${BUILD_NUMBER}/"
        }
    }
}
