My Jenkins job is a Pipeline that runs in Docker:
node('docker') {
    // Git checkout
    git url: 'ssh://blah.blah:29411/test.git'
    // Build
    sh 'make'
    // Verify/Run
    sh './runme'
}
I'm working with the kernel, and fetching my sources from Git takes a long time (they're about 2 GB). I'm looking for a way to push the Docker image so it can be reused for the next build and will already contain most of the sources. I probably need to run:
docker push blahdockergit.blah/myjenkinsslaveimage
but it should run outside of the container.
I also found in the Pipeline syntax documentation that there is a class that can be used for building external jobs.
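A minimal sketch of how that could look, assuming a scripted pipeline and a host node labelled 'host' with direct access to the Docker daemon; the container name build-container is hypothetical:
node('host') {
    // Snapshot the build container's filesystem, fetched sources included
    sh 'docker commit build-container blahdockergit.blah/myjenkinsslaveimage:latest'
    // Push it to the registry so the next build can start from it
    sh 'docker push blahdockergit.blah/myjenkinsslaveimage:latest'
}
The next build can then use that image for its agent, so the initial Git fetch only has to transfer new commits.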
Step 1: I deployed Jenkins, Rancher, and GitLab servers in my local environment. I want to set up a CI/CD pipeline.
Step 2: My project is a web-based management system built with Vue and Gin. The project source code was pushed to the GitLab repository. I wrote a Dockerfile and a Jenkinsfile in my local IDE. Here is the important part: my Jenkinsfile contains a statement like the following:
steps {
    sh "kubectl set image deployment/gin-vue gin-vue=myhost/containers/gin-vue.${BUILD_NUMBER} -n web"
}
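For context, a fuller version of that stage might look like the sketch below. The withKubeConfig wrapper comes from the Kubernetes CLI plugin; the credentialsId and serverUrl values are assumptions, not taken from the original post:
stage('Deploy') {
    steps {
        // Hypothetical credentials: kubectl on the Jenkins agent must be
        // authenticated against the Rancher-managed cluster
        withKubeConfig([credentialsId: 'rancher-kubeconfig',
                        serverUrl: 'https://rancher.local:6443']) {
            sh "kubectl set image deployment/gin-vue gin-vue=myhost/containers/gin-vue.${BUILD_NUMBER} -n web"
        }
    }
}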
Then I ran git commit and git push.
Step 3: I created a job in Jenkins and clicked the build button to run the corresponding pipeline script.
Step 4: After the run finished, the new image was not rolled out to Rancher.
But when I executed the same command, "kubectl set image deployment/gin-vue gin-vue=myhost/containers/gin-vue.${BUILD_NUMBER} -n web", in the Rancher kubectl client terminal, it succeeded.
So what is the cause of this problem? It has confused me for many days and I have not found a solution. Thanks a lot!
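One way to check from the Jenkins side whether the change actually reached the cluster (standard kubectl commands, not part of the original post):
# Did the deployment finish rolling out the new image?
kubectl rollout status deployment/gin-vue -n web
# Which image is the deployment currently configured with?
kubectl get deployment/gin-vue -n web -o jsonpath='{.spec.template.spec.containers[0].image}'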
When running jobs from a Jenkinsfile with Pipeline syntax and a Docker agent, the pipeline fails with "docker: command not found". I understand this to mean that either (1) Docker is not installed, or (2) Jenkins is not pointing to the correct Docker installation path. My situation is very similar to this issue: Docker command not found in local Jenkins multi branch pipeline. Jenkins is installed on macOS and running off of localhost:8080. Docker is also installed (v18.06.0-ce-mac70).
That user's solution involved switching from declarative pipeline syntax to scripted node syntax. However, I want to resolve the issue while retaining the declarative syntax.
Jenkinsfile
#!groovy
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
        }
    }
    stages {
        stage('Unit') {
            steps {
                sh 'node -v'
                sh 'npm -v'
            }
        }
    }
}
Error message
docker inspect -f . node:7-alpine
docker: command not found
docker pull node:7-alpine
docker: command not found
In Jenkins Global Tool Configuration, for Docker installations I tried both (1) install automatically (from docker.com); and (2) local installation with installation root /usr/local/.
All of the relevant plugins appear to be installed as well.
I solved this problem here: https://stackoverflow.com/a/58688536/8160903
(Add Docker's path to Homebrew Jenkins plist /usr/local/Cellar/jenkins-lts/2.176.3/homebrew.mxcl.jenkins-lts.plist)
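A sketch of that fix using PlistBuddy (the plist path matches the answer above; the PATH value is a typical Homebrew default and may differ on your machine):
# Add an EnvironmentVariables dict with a PATH that includes Docker's location
/usr/libexec/PlistBuddy \
  -c 'Add :EnvironmentVariables dict' \
  -c 'Add :EnvironmentVariables:PATH string /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin' \
  /usr/local/Cellar/jenkins-lts/2.176.3/homebrew.mxcl.jenkins-lts.plist
Restart the service afterwards (brew services restart jenkins-lts) so the new environment takes effect.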
I would check the user who is running the jenkins process and make sure they are part of the docker group.
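On a Linux host that would typically be (assuming the service runs as the jenkins user):
# Add the jenkins user to the docker group, then restart Jenkins
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins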
You can try adding the full path of docker executable on your machine to Jenkins at Manage Jenkins > Global Tool Configuration.
I've sometimes seen that the user who started Jenkins doesn't have the executable's location on $PATH.
I want to checkout a git repo and then run its build so I tried:
sh "git clone --depth 1 -b master git#github.com:user/repo.git"
build './repo'
but that yields:
ERROR: No item named ./repo found
I've tried to use dir('repo'), but apparently that errors when you run it from within Docker (because Kubernetes is stuck on an old version of Docker that doesn't support this).
Any idea on how to run the build pipeline from the checked out repo?
The 'build' pipeline step expects a job name, not a pipeline folder with a Jenkinsfile in its root.
The correct way to do this is to set the pipeline job with the Jenkinsfile, as described here ('In SCM' section), and call it by its Job name from your pipeline.
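A minimal sketch of that approach (the job name 'repo-build' is hypothetical; it must be an existing Pipeline job configured with "Pipeline script from SCM" pointing at the repository's Jenkinsfile):
// Trigger the configured job by name and wait for its result
build job: 'repo-build', wait: true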
Pipelines are not built for chaining unless you use shared libraries, where you put the Pipeline code in a Groovy class or as a step, but that is a subject for a full article.
I'm creating a Jenkins pipeline which uses a build.gradle script to build the project.
One of the first things Gradle does is check out some Git repos. I need to do this over SSH, so I thought I could wrap the code in sshagent like this:
sshagent(['c6f7cd1b-9bb3-4b33-9db0-cbd1f62cd0ba']) {
    sh 'git clone git@Repo.git'
}
The ID is mapped to a global Jenkins credential containing the private key; I also use it in another pipeline to tag a repo and push it to master, using the same credentialsId.
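For comparison, the working usage in the other pipeline looks roughly like this (a sketch; the tag name and remote are hypothetical):
sshagent(['c6f7cd1b-9bb3-4b33-9db0-cbd1f62cd0ba']) {
    // Tag the current revision and push the tag over SSH
    sh 'git tag -a v1.0 -m "release"'
    sh 'git push origin v1.0'
}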
However, I get the following output when trying to run the pipeline:
FATAL: [ssh-agent] Could not find specified credentials
I have no idea why I get this, since it's a copy-paste from the other pipeline.
Anyone who can point me towards the right direction?
Thx
I have a pipeline job which loads Jenkinsfile from git repository. My Jenkinsfile looks like this:
#!groovy
@Library('global-utils-lib') _
node("mvn") {
    stage('build') {
        checkout scm
    }
    stage('merge-request') {
        mergeRequest()
    }
}
global-utils-lib is a shared library loaded via Global Pipeline Libraries from another Git repo with the following structure:
vars/mergeRequest.groovy
mergeRequest.groovy:
def call() {
    sh "ip addr"
    def workspacePath = env.WORKSPACE
    new File(workspacePath + "/file.txt").text
}
The job runs in a Docker container (Docker plugin).
When I run this job, the Docker container is provisioned correctly and the scm is checked out, but I get a FileNotFoundException.
It looks like the code from the shared library is executed on the Jenkins master, not the slave:
the printed IP comes from the master
the file is loaded correctly when I pass the correct path to the scm on the master
How can I run the library code on the slave? What am I missing?
It's generally not a good idea to try and do things like new File() instead of using existing Pipeline steps.
Your Pipeline script is interpreted and executed by the Jenkins master so, as you're seeing, the attempt to use the File API doesn't work as you might expect.
Sticking to Pipeline steps helps ensure that your pipeline is durable (i.e. survives restarts), is pausable, and doesn't block the execution thread, preventing parallel steps from working, for example.
In this case, the existing readFile step can be used.
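A sketch of the library step rewritten with readFile (assuming file.txt sits at the workspace root):
def call() {
    sh "ip addr"
    // readFile resolves against the agent's workspace, unlike new File(),
    // which reads from the master's filesystem
    def text = readFile "file.txt"
}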
I don't know how well the Docker Plugin interacts with Pipeline (though I imagine it should be transparent), and without knowing which agents have the "mvn" label, or whether you can reproduce this outside of a shared library, it's unclear why your sh step would appear to be running on the master.
The Docker Pipeline Plugin is explicitly designed for Pipeline, so it might give better results.
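A sketch of what that could look like (the image name is hypothetical):
node("mvn") {
    checkout scm
    // docker.image(...).inside runs the enclosed steps in a container
    // on the same agent, so workspace-relative steps keep working
    docker.image('maven:3-jdk-8').inside {
        mergeRequest()
    }
}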