Build from parent directory with Jenkins - docker

We are adopting a monorepo for our project and organized the code into three folders, one of which is shared code used by the other two projects.
When a change is made in one of the projects or in the shared code, a build is triggered for the affected parts and a Docker image is built and pushed to a Docker registry.
project1
-Dockerfile
-Jenkinsfile
project2
-Dockerfile
-Jenkinsfile
shared_code
-Jenkinsfile
Since we are using Docker with shared code, the build has to run from the parent directory, because Docker does not allow files outside the build context to be included in the image.
Something like this should be done:
docker build -t project1:tag -f project1/Dockerfile .
This is all still fine, but I defined the Jenkinsfile inside project1, so when I build project1 from Jenkins I run into the error that Docker cannot access the parent directory. At the same time I don't want to move the Jenkinsfile to the parent folder, in order to keep the repository organized.
Is there a way to configure, in the Jenkinsfile, the directory Jenkins uses as the build context?

I came back to answer my question in case someone is facing the same challenge.
This is what helped me to build the image from Jenkins:
stage("Build image") {
// Build the docker image with a tag
steps {
script {
dockerImage = docker.build("project1:${env.BUILD_ID}", "-f ./project1/Dockerfile .")
}
}
}
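For reference, the same build can be done with a plain shell step instead of the Docker Pipeline plugin; this is a minimal sketch, assuming the job checks out the whole repository so the workspace root is the parent directory:
stage("Build image") {
    steps {
        // Run docker build from the workspace root so the shared_code folder
        // is inside the build context, while pointing at project1's Dockerfile.
        sh "docker build -t project1:${env.BUILD_ID} -f project1/Dockerfile ."
    }
}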

Related

How to update pipeline to run specific stage in a separate container?

I have a pod template with two separate container templates. One is the JNLP (master) container and the other (build) is for a specific stage of the pipeline. What code changes are required in that specific (build) stage to run it in a separate container?
I have a similar scenario in my CI/CD setup.
Instead of running the specific step, you can build a target stage (using docker build --target), tag it with a different name, and then run it.
Ex.:
Say your image name is just image; you can build the target stage you want to run (let's say the test stage) with:
docker build --target test-step -t image-test .
Then run it with:
docker run image-test
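In a Jenkinsfile that could look roughly like the sketch below; the stage name, the target name test-step, and the image tag are just placeholders matching the commands above:
stage('Run test target') {
    steps {
        // Build only the "test-step" stage of the multi-stage Dockerfile,
        // tag it separately, and run it.
        sh 'docker build --target test-step -t image-test .'
        sh 'docker run --rm image-test'
    }
}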

Creating Docker image and running as service in Jenkins

I have a JSP website. I am building a DevOps pipeline. I am looking for help integrating Jenkins with Docker.
I already have a Dockerfile that deploys the WAR file to the Tomcat server.
Through the command line I can build the Dockerfile and create an image (command 1). I can run the created image as a service and browse the website (command 2).
I want to do these two steps in Jenkins. I need help integrating these two commands into Jenkins so that I don't have to run them manually one after the other.
I think that you can use the "Docker Pipeline Plugin" for that.
For the first command, you can have a stage that runs:
myImage = docker.build("my-image:my-tag")
If you need to, you can have another stage where you run some tests inside the image with:
myImage.inside {
    sh './run-test.sh'
}
Finally, you can push the image to your registry with:
docker.withRegistry('https://your-registry.com', 'credentials_id') { // the second parameter is only needed if your registry requires authentication
    myImage.push('new_tag') // you can push it with a new tag
}
Please note that if you want to use the docker.* methods in a declarative pipeline, you must do it inside a script step or in a function.
(More info in the plugin's user guide)
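Put together, a declarative pipeline could look roughly like this sketch; the image name, registry URL, credentials ID, and test script are the placeholders from the snippets above:
pipeline {
    agent any
    stages {
        stage('Build, test and push') {
            steps {
                script {
                    // docker.* methods must be wrapped in a script step in declarative pipelines
                    def myImage = docker.build('my-image:my-tag')
                    // Run the test script inside a container based on the freshly built image
                    myImage.inside {
                        sh './run-test.sh'
                    }
                    // Push the image to the registry under a new tag
                    docker.withRegistry('https://your-registry.com', 'credentials_id') {
                        myImage.push('new_tag')
                    }
                }
            }
        }
    }
}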
For the second command, you only have to update the running image on the server. For that you have plenty of options (docker service update if you're using Docker Swarm, for example), but I think that part is outside the scope of this post.
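For the Docker Swarm case, that could be one more stage along these lines; my-service is a placeholder for your existing Swarm service, and the image reference is assembled from the placeholders above:
stage('Deploy') {
    steps {
        // Point the running Swarm service at the newly pushed tag.
        sh 'docker service update --image your-registry.com/my-image:new_tag my-service'
    }
}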

Why does Jenkins shell file copy not work as expected (does not overwrite existing files)

I have a step in a Jenkins pipeline to copy some source files to the workspace.
stage('Copy Files') {
    script {
        echo 'Staging files'
        sh "cp -ar /home/dev/src/ ${env.WORKSPACE}"
    }
}
Yet, when I rerun the build it uses the old code. The only solution is to delete the workspace prior to the copy. In a normal Linux file system a copy will overwrite the destination. Why does Jenkins behave differently--i.e., old files are not overwritten? From the syntax it seems like it is just running a shell command, so why does this not have the expected behavior?
It is because Jenkins runs the pipeline on the master node while the workspace lives on the worker node.
When the checkout scm and sh "" code blocks are in different stages, files are not carried over from the first stage to the next. You should use stash and unstash: when you stash a directory path, the files in that directory become available to the unstash step in later stages.
See the Jenkins documentation on stash/unstash.
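A minimal sketch of that pattern, reusing the copy step from the question; the stash name and include pattern are placeholders:
stage('Copy Files') {
    steps {
        sh "cp -ar /home/dev/src/ ${env.WORKSPACE}"
        // Record the copied sources so a later stage, possibly on another node,
        // can restore exactly these files.
        stash name: 'sources', includes: 'src/**'
    }
}
stage('Build') {
    steps {
        // Restore the stashed files into this stage's (possibly fresh) workspace.
        unstash 'sources'
        sh 'make -C src'
    }
}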

Copy file from Jenkins master to slave in Pipeline

I have some Windows slaves in my Jenkins, so I need to copy files to them in a pipeline. I heard about the Copy To Slave and Copy Artifact plugins, but they don't have a pipeline syntax manual, so I don't know how to use them in a pipeline.
Direct copy doesn't work.
def inputFile = input message: 'Upload file', parameters: [file(name: 'parameters.xml')]
new hudson.FilePath(new File("${ENV:WORKSPACE}\\parameters.xml")).copyFrom(inputFile)
This code returns an error:
Caused: java.io.IOException: Failed to copy /var/lib/jenkins/jobs/_dev/jobs/(TEST)job/builds/107/parameters.xml to d:\Jenkins\workspace\_dev\(TEST)job\parameters.xml
Is there any way to copy file from master to slave in Jenkins Pipeline?
As I understand it, copyFrom is executed on your Windows node, so the source path on the master is not accessible.
I think you want to look into the stash/unstash steps (Jenkins Pipeline: Basic Steps), which work across different nodes. Also this example might be helpful.
The Pipeline DSL context runs on the master node even if you write node('someAgentName') in your pipeline.
You can use stash/unstash, but it is a poor fit for large files.
Try the External Workspace Manager Plugin. It has pipeline steps and handles large files well.
Or use intermediate storage: archive() and sh("wget $url") will be helpful.
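A rough sketch of the intermediate-storage option, assuming parameters.xml is already in the master workspace, the build's artifact URL is readable from the slave, and curl is available on the Windows node (the node labels are placeholders):
node('master') {
    // Make the file reachable over HTTP by archiving it on the master.
    archiveArtifacts artifacts: 'parameters.xml'
}
node('windows-slave') {
    // Pull the archived file down onto the Windows node from the artifact URL.
    bat "curl -o parameters.xml ${env.BUILD_URL}artifact/parameters.xml"
}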
If the requirement is to copy an executable to the test slave and to publish the test results, this is easy to do without the Copy to Slave plugin.
A shared folder should be created on each test slave (normal Windows shared folder).
After the build: the build script copies the executable to the shared directory on each slave. A simple batch script using the copy command is sufficient for this.
stage('Copy to slaves') {
    steps {
        bat 'call "copy-to-slave.bat"'
    }
}
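If you would rather not maintain a separate batch file, the copy can also be done inline; the executable path and share name below are placeholders:
stage('Copy to slaves') {
    steps {
        // /Y overwrites an existing copy in the shared folder without prompting.
        bat 'copy /Y build\\myapp.exe \\\\test-slave-1\\builds\\'
    }
}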
During test: The test script copies the executable to another directory and runs it.
After test: Post-build action "Publish Robot Framework test results" can be used to report the test results. It is not necessary to copy the test result files back to the master first.
I recommend the Pipeline: Phoenix AutoTest plugin.
Jenkins plugin website:
https://plugins.jenkins.io/phoenix-autotest/#documentation
GitHub repository of plugin:
https://github.com/jenkinsci/phoenix-autotest-plugin

Jenkins Pipeline push Docker image

My Jenkins job is a Pipeline that runs in Docker:
node('docker') {
    // Git checkout
    git url: 'ssh://blah.blah:29411/test.git'
    // Build
    sh 'make'
    // Verify/Run
    sh './runme'
}
I'm working on a kernel and my sources take a long time to fetch from Git (about 2 GB). I'm looking into how I can push the Docker image so it can be used for the next build and will already contain most of the sources. I probably need to do:
docker push blahdockergit.blah/myjenkinsslaveimage
but it should run outside of the container.
I found in the pipeline syntax reference that the following class can be used for building external jobs
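As a rough sketch, the push the question asks about could also be scripted with the Docker Pipeline plugin, assuming the node labelled docker can reach the Docker daemon, the repository contains a Dockerfile that copies the checked-out sources into the image, and the credentials ID is a placeholder:
node('docker') {
    git url: 'ssh://blah.blah:29411/test.git'
    sh 'make'
    sh './runme'
    // Bake the checked-out sources into a new slave image and push it,
    // so the next build can start from an image that already contains them.
    def slaveImage = docker.build('blahdockergit.blah/myjenkinsslaveimage')
    docker.withRegistry('https://blahdockergit.blah', 'registry-credentials-id') {
        slaveImage.push('latest')
    }
}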
