How to configure Gradle cache when running Jenkins with Docker

I'm working on a Jenkins pipeline for building a project with Gradle.
Jenkins has several slaves. All the slaves are connected to a NAS.
Some of the build steps run Gradle inside Docker containers while others run directly on the slaves.
The goal is to use as much cache as possible, but I have also run into deadlock issues such as:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
> Timeout waiting to lock file hash cache (/home/slave/.gradle/caches/4.2/fileHashes). It is currently in use by another Gradle instance.

Due to the Gradle issue mentioned in the comment above, I do something like this: copy the Gradle cache into the container at startup, and write any changes back at the end of the build:
pipeline {
    agent {
        docker {
            image '…'
            // Mount the Gradle cache in the container
            args '-v /var/cache/gradle:/tmp/gradle-user-home:rw'
        }
    }
    environment {
        HOME = '/home/android'
        GRADLE_CACHE = '/tmp/gradle-user-home'
    }
    stages {
        stage('Prepare container') {
            steps {
                // Copy the Gradle cache from the host, so we can write to it
                sh "rsync -a --include /caches --include /wrapper --exclude '/*' ${GRADLE_CACHE}/ ${HOME}/.gradle || true"
            }
        }
        …
    }
    post {
        success {
            // Write updates to the Gradle cache back to the host
            sh "rsync -au ${HOME}/.gradle/caches ${HOME}/.gradle/wrapper ${GRADLE_CACHE}/ || true"
        }
    }
}
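For context, a build stage in this layout could look like the hypothetical sketch below (the ./gradlew call and the task name are assumptions, not part of the original pipeline). Since HOME is set to /home/android, Gradle resolves its user home to /home/android/.gradle, which is exactly where the 'Prepare container' stage rsyncs the cache to:

        stage('Build') {
            steps {
                // Gradle picks up the cache copied into ${HOME}/.gradle above;
                // --no-daemon avoids leaving a long-lived daemon holding cache locks
                sh './gradlew --no-daemon build'
            }
        }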

Related

Use Docker Pipeline Plugin without interactive mode

I'm trying to use Docker with a Jenkins scripted pipeline and have run into several problems.
If I use it directly in a step like sh "docker ...", it results in an error:
command not found docker
I tried to fix it by changing the install settings in the Global Tool Configuration, but did not succeed.
I'm now trying to use the Docker Pipeline plugin.
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
where cmd is --user=\$UID --rm -t -v ./build/:/home/user/build 192.168.1.33:5000/my-img
I use this code for parallel stages (the list of stages is generated dynamically), and I get this error:
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
What is the proper usage of this plugin?
I found a lot of examples using withRun and other methods from docker, but I don't need to run any commands inside the image; the command is in the Dockerfile (so it is built into my container).
The error itself has the answer :).
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
You are missing the protocol in the custom registry URL. Refer to https://jenkins.io/doc/book/pipeline/docker/#custom-registry
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("https://192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
You are missing the protocol; the registry URL must be https://192.168.1.33:5000
I also had a problem with a relative path, but prefixing the relative build path with pwd fixed it, as sketched below.
Thanks @yzT
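For illustration, a hedged sketch of that relative-path fix (not from the original post; pwd() is the standard Pipeline step that returns the current workspace directory):

def workspace = pwd()
// The Docker daemon rejects relative host paths like ./build in -v mounts,
// so prefix the relative build directory with the absolute workspace path
sh "docker run --user=\$UID --rm -t -v ${workspace}/build/:/home/user/build 192.168.1.33:5000/my-img"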

How do I use Jenkins to build a private GitHub Rust project with a private GitHub dependency?

I have a private GitHub Rust project that depends on another private GitHub Rust project, and I want to build the main one with Jenkins. In the code below I have called the organization Organization and the dependency package subcrate.
My Jenkinsfile looks something like this:
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh "cargo build"
            }
        }
        etc...
    }
}
I have tried the following in Cargo.toml to reference the dependency; it works fine on my machine:
[dependencies]
subcrate = { git = "ssh://git@ssh.github.com/Organization/subcrate.git", tag = "0.1.0" }
When Jenkins runs, I get the following error:
+ cargo build
Updating registry `https://github.com/rust-lang/crates.io-index`
Updating git repository `ssh://git@github.com/Organization/subcrate.git`
error: failed to load source for a dependency on `subcrate`
Caused by:
Unable to update ssh://git@github.com/Organization/subcrate.git?tag=0.1.0#0623c097
Caused by:
failed to clone into: /usr/local/cargo/git/db/subcrate-3e391025a927594e
Caused by:
failed to authenticate when downloading repository
attempted ssh-agent authentication, but none of the usernames `git` succeeded
Caused by:
error authenticating: no auth sock variable; class=Ssh (23)
script returned exit code 101
How can I get Cargo to access this GitHub repository? Do I need to inject the GitHub credentials onto the slave? If so, how can I do this? Is it possible to use the same credentials Jenkins uses to check out the main crate in the first place?
I installed the ssh-agent plugin and updated my Jenkinsfile to look like this:
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "ssh -vvv -T git@github.com"
                    sh "cargo build"
                }
            }
        }
        etc...
    }
}
I get the error:
+ ssh -vvv -T git@github.com
No user exists for uid 113
script returned exit code 255
Okay, I figured it out. The No user exists for uid error is caused by a mismatch between the users in the host's /etc/passwd and the container's /etc/passwd. This can be fixed by mounting the host's /etc/passwd into the container.
agent {
    docker {
        image 'rust:latest'
        args '-v /etc/passwd:/etc/passwd'
    }
}
Then
sshagent(credentials: ['id-of-github-credentials']) {
    sh "cargo build"
}
works just fine.
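Putting the pieces together, the complete agent and build stage would look roughly like the sketch below (assembled from the fragments above; the image, credential ID, and /etc/passwd mount are unchanged from the answer):

pipeline {
    agent {
        docker {
            image 'rust:latest'
            // Map the host's users into the container so the build user's uid resolves
            args '-v /etc/passwd:/etc/passwd'
        }
    }
    stages {
        stage('Build') {
            steps {
                // The ssh-agent plugin exposes the GitHub key to cargo's git fetch
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "cargo build"
                }
            }
        }
    }
}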

How to save a Docker volume from within a CloudBees Pipeline in case of failure

I run a set of API tests in a Docker container that is started by a Jenkins pipeline stage (CloudBees plugin).
I would like to save the test logs in case the stage (see below) fails.
I tried to do it with a post action in a later stage, but by then I no longer have access to the image.
How would you approach this problem? How can I save the image in case of a failure?
stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
            sh 'docker rmi --force apitestimage'
        }
    }
}
Use a post { failure { .. } } block to archive the data of the failing stage directly within that stage, not in a later one; see the sketch below.
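A hedged sketch of what that could look like for the stage above (the archive path testing/**/*.log is an assumption about where ctest writes its output, not something from the original post; adjust it to the real log location):

stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
            sh 'docker rmi --force apitestimage'
        }
    }
    post {
        failure {
            // Archive whatever the tests wrote into the workspace before leaving the stage;
            // the path pattern is an assumption, adjust it to where ctest puts its logs
            archiveArtifacts artifacts: 'testing/**/*.log', allowEmptyArchive: true
        }
    }
}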

Jenkins Pipeline: Executing a shell script

I have created a pipeline like the one below. Please note that the script files, namely backup_grafana.sh and gitPush.sh, are in the source code repository where the Jenkinsfile is present. But I am unable to execute the scripts because of the following error:
/home/jenkins/workspace/grafana-backup#tmp/durable-52495dad/script.sh:
line 1: backup_grafana.sh: not found
Please note that I am running the Jenkins master on Kubernetes in a pod, so copying the script files onto the master as suggested by the error is not possible: the pod may be destroyed and recreated dynamically, and with a new pod my scripts would no longer be available on the Jenkins master.
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh 'backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh 'gitPush.sh'
            }
        }
    }
}
Can you please suggest how I can run these scripts from the Jenkinsfile? I would also like to mention that I want to run these scripts on a Python slave that I have already created.
If the command sh 'backup_grafana.sh' fails to execute when it actually should have executed successfully, here are two possible solutions.
1) Maybe you need a dot-slash in front of those executable commands to tell your shell where they are. If they are not in your $PATH, you need to tell your shell that they can be found in the current directory. Here's the fixed Jenkinsfile with four non-whitespace characters added:
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh './backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh './gitPush.sh'
            }
        }
    }
}
2) Check whether you have declared your script as a bash or sh script by putting one of the following as the first line of the script:
#!/bin/bash
or
#!/bin/sh
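For example, the first lines of a hypothetical backup_grafana.sh would then look like this (the body is a placeholder; the file also needs the executable bit, e.g. set with chmod +x, for ./backup_grafana.sh to work):

#!/bin/sh
# backup_grafana.sh (hypothetical contents): the shebang above must be the very
# first line so the kernel knows which interpreter to run the script with
set -e
echo "backing up grafana dashboards..."   # placeholder for the real backup commands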

How to configure a Jenkinsfile to build a Docker image and push it to a private registry

I have two questions, but I wasn't sure which tags fit, so I added tags for both. The first question is about using the Jenkins Docker Pipeline plugin to bake and push a Docker image using this. Below is my Jenkinsfile. I have finished building the jar file in the target directory, and now I want to run the Docker plugin to bake an image with this artifact. As you know, a Dockerfile is needed, so I put one in the root directory where git cloned the source. How do I configure this? I don't know how to do it. If I run the pipeline below, Jenkins tells me that there are no steps.
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                git branch: 'master', credentialsId: 'e.joe-gitlab', url: 'http://70.121.224.108/gitlab/cicd/spring-petclinic.git'
                sh 'mvn clean package'
            }
        }
        stage('verify') {
            steps {
                sh 'ls -alF target'
            }
        }
        stage('build-docker-image') {
            steps {
                docker.withRegistry('https://sds.redii.net/', 'redii-e.joe') {
                    def app = docker.build("sds.redii.net/e-joe/spring-pet-clinic-demo:v1", '.')
                    app.push()
                }
            }
        }
    }
}
UPDATE
This is another snippet generated with the Jenkins Pipeline Syntax generator, but it doesn't work either.
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                git branch: 'master', credentialsId: 'e.joe-gitlab', url: 'http://70.121.224.108/gitlab/cicd/spring-petclinic.git'
                sh 'mvn clean package'
            }
        }
        stage('verify') {
            steps {
                sh 'ls -alF target'
            }
        }
        stage('docker') {
            withDockerRegistry([credentialsId: 'redii-e.joe', url: 'https://sds.redii.net']) {
                def app = docker.build("sds.redii.net/e-joe/spring-pet-clinic-demo:v1", '.')
                app.push()
            }
        }
    }
}
The Dockerfile is shown below. When I try baking the image locally, I get the following error:
container_linux.go:247: starting container process caused "chdir to cwd (\"/usr/myapp\") set in config.json failed: not a directory"
oci runtime error: container_linux.go:247: starting container process caused "chdir to cwd (\"/usr/myapp\") set in config.json failed: not a directory"
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Dockerfile
FROM openjdk:7
COPY ./target/spring-petclinic-1.5.1.jar /usr/myapp
WORKDIR /usr/myapp
RUN java spring-petclinic-1.5.1.jar
You are writing your .jar to /usr/myapp, which means that /usr/myapp will be the jar file and not a directory, resulting in that error. Change your COPY line to COPY ./target/spring-petclinic-1.5.1.jar /usr/myapp/ (with the trailing slash) and your Dockerfile should work.
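A hedged sketch of the corrected Dockerfile follows. The trailing slash on COPY is the fix from the answer; replacing RUN java ... with a CMD is an additional assumption, since the original line would try to launch the application at image build time and is missing the -jar flag:

FROM openjdk:7
# Copy the jar into the /usr/myapp directory (trailing slash), not onto a file named /usr/myapp
COPY ./target/spring-petclinic-1.5.1.jar /usr/myapp/
WORKDIR /usr/myapp
# Assumed launch command (not part of the accepted answer); the question's
# RUN java ... would run during the build instead of when the container starts
CMD ["java", "-jar", "spring-petclinic-1.5.1.jar"]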
