Use Docker Pipeline Plugin without interactive mode - docker

I'm trying to use Docker with a Jenkins scripted pipeline and have run into several problems.
If I call it directly with sh "docker ..." it results in an error:
docker: command not found
I tried to fix it by changing the installation setting in the Global Tool Configuration, but that didn't succeed.
So now I'm trying to use the Docker Pipeline plugin.
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
where cmd is --user=\$UID --rm -t -v ./build/:/home/user/build 192.168.1.33:5000/my-img
I use this code for parallel stages (the list of stages is generated dynamically), and I get this error:
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
What is the proper usage of this plugin?
I found a lot of examples with withRun and other methods of the docker object, but I don't need to run any commands inside the image; the command is defined in the Dockerfile (so it's built into my container).

The error itself has the answer :).
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
You are missing the protocol in the custom registry URL. Refer to https://jenkins.io/doc/book/pipeline/docker/#custom-registry
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("https://192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}

You are missing the protocol; the registry URL must be https://192.168.1.33:5000.

Also I had a problem with a relative path, but a simple fix of prepending pwd to the relative build path solved it.
Thx @yzT
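For example, a minimal sketch of that fix (based on the cmd string from the question; pwd() is the standard Pipeline step that returns the workspace path on the agent):
def ws = pwd()
// Mount an absolute workspace path instead of the relative ./build/
def cmd = "--user=\$UID --rm -t -v ${ws}/build/:/home/user/build 192.168.1.33:5000/my-img"
sh "docker run ${cmd}"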

Related

How do I use Jenkins to build a private GitHub Rust project with a private GitHub dependency?

I have a private GitHub Rust project that depends on another private GitHub Rust project, and I want to build the main one with Jenkins. I have called the organization Organization and the dependency package subcrate in the code below.
My Jenkinsfile looks something like:
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh "cargo build"
            }
        }
        etc...
    }
}
I have tried the following in Cargo.toml to reference the dependency; it works fine on my machine:
[dependencies]
subcrate = { git = "ssh://git@ssh.github.com/Organization/subcrate.git", tag = "0.1.0" }
When Jenkins runs I get the following error
+ cargo build
Updating registry `https://github.com/rust-lang/crates.io-index`
Updating git repository `ssh://git@github.com/Organization/subcrate.git`
error: failed to load source for a dependency on `subcrate`
Caused by:
Unable to update ssh://git@github.com/Organization/subcrate.git?tag=0.1.0#0623c097
Caused by:
failed to clone into: /usr/local/cargo/git/db/subcrate-3e391025a927594e
Caused by:
failed to authenticate when downloading repository
attempted ssh-agent authentication, but none of the usernames `git` succeeded
Caused by:
error authenticating: no auth sock variable; class=Ssh (23)
script returned exit code 101
How can I get Cargo to access this GitHub repository? Do I need to inject the GitHub credentials onto the slave? If so, how can I do this? Is it possible to use the same credentials Jenkins uses to check out the main crate in the first place?
I installed the ssh-agent plugin and updated my Jenkinsfile to look like this
pipeline {
    agent {
        docker {
            image 'rust:latest'
        }
    }
    stages {
        stage('Build') {
            steps {
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "ssh -vvv -T git@github.com"
                    sh "cargo build"
                }
            }
        }
        etc...
    }
}
I get the error
+ ssh -vvv -T git@github.com
No user exists for uid 113
script returned exit code 255
Okay, I figured it out. The No user exists for uid error is caused by a mismatch between the users in the host's /etc/passwd and the container's /etc/passwd. It can be fixed by mounting /etc/passwd into the container:
agent {
    docker {
        image 'rust:latest'
        args '-v /etc/passwd:/etc/passwd'
    }
}
Then
sshagent(credentials: ['id-of-github-credentials']) {
    sh "cargo build"
}
Works just fine
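For reference, the two fixes combined give roughly this pipeline (a sketch assembled from the fragments above; the credentials ID is the placeholder from the question):
pipeline {
    agent {
        docker {
            image 'rust:latest'
            // Mount the host's /etc/passwd so the agent user's uid resolves inside the container
            args '-v /etc/passwd:/etc/passwd'
        }
    }
    stages {
        stage('Build') {
            steps {
                // Forward the GitHub key from the Jenkins credentials store so cargo can fetch the private dependency
                sshagent(credentials: ['id-of-github-credentials']) {
                    sh "cargo build"
                }
            }
        }
    }
}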

How to configure Gradle cache when running Jenkins with Docker

I'm working on building a Jenkins pipeline for building a project with Gradle.
Jenkins has several slaves. All the slaves are connected to a NAS.
Some of the build steps run Gradle inside Docker containers while others run directly on the slaves.
The goal is to reuse as much of the cache as possible, but I have also run into deadlock issues such as:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
> Timeout waiting to lock file hash cache (/home/slave/.gradle/caches/4.2/fileHashes). It is currently in use by another Gradle instance.
Due to the Gradle issue mentioned in the comment above, I do something like the following: copy the Gradle cache into the container at startup and write any changes back at the end of the build:
pipeline {
    agent {
        docker {
            image '…'
            // Mount the Gradle cache in the container
            args '-v /var/cache/gradle:/tmp/gradle-user-home:rw'
        }
    }
    environment {
        HOME = '/home/android'
        GRADLE_CACHE = '/tmp/gradle-user-home'
    }
    stages {
        stage('Prepare container') {
            steps {
                // Copy the Gradle cache from the host, so we can write to it
                sh "rsync -a --include /caches --include /wrapper --exclude '/*' ${GRADLE_CACHE}/ ${HOME}/.gradle || true"
            }
        }
        …
    }
    post {
        success {
            // Write updates to the Gradle cache back to the host
            sh "rsync -au ${HOME}/.gradle/caches ${HOME}/.gradle/wrapper ${GRADLE_CACHE}/ || true"
        }
    }
}

Jenkins 2.0: Running SBT in a docker container

I have the following Jenkinsfile:
def notifySlack = { String color, String message ->
    slackSend(color: color, message: "${message}: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}] (${env.BUILD_URL})")
}

node {
    try {
        notifySlack('#FFFF00', 'STARTED')
        stage('Checkout project') {
            checkout scm
        }
        scalaImage = docker.image('<myNexus>/centos-sbt:2.11.8')
        stage('Test project') {
            docker.withRegistry('<myNexus>', 'jenkins-nexus') {
                scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
                    sh 'sbt clean test'
                }
            }
        }
        if (env.BRANCH_NAME == 'master') {
            stage('Release new version') {
                docker.withRegistry('<myNexus>', 'jenkins-nexus') {
                    scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
                        sh 'sbt release'
                    }
                }
            }
        }
        notifySlack('#00FF00', 'SUCCESSFUL')
    } catch (e) {
        currentBuild.result = "FAILED"
        notifySlack('#FF0000', 'FAILED')
        throw e
    }
}
Unfortunately when I reach the sbt clean test line I end up with the following error:
java.lang.IllegalArgumentException: URI has a query component
at java.io.File.<init>(File.java:427)
at sbt.IO$.uriToFile(IO.scala:160)
at sbt.IO$.toFile(IO.scala:135)
at sbt.Classpaths$.sbt$Classpaths$$bootRepository(Defaults.scala:1942)
at sbt.Classpaths$$anonfun$appRepositories$1.apply(Defaults.scala:1912)
at sbt.Classpaths$$anonfun$appRepositories$1.apply(Defaults.scala:1912)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at sbt.Classpaths$.appRepositories(Defaults.scala:1912)
at sbt.Classpaths$$anonfun$58.apply(Defaults.scala:1193)
at sbt.Classpaths$$anonfun$58.apply(Defaults.scala:1190)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.EvaluateSettings$MixedNode.evaluate0(INode.scala:175)
at sbt.EvaluateSettings$INode.evaluate(INode.scala:135)
at sbt.EvaluateSettings$$anonfun$sbt$EvaluateSettings$$submitEvaluate$1.apply$mcV$sp(INode.scala:69)
at sbt.EvaluateSettings.sbt$EvaluateSettings$$run0(INode.scala:78)
at sbt.EvaluateSettings$$anon$3.run(INode.scala:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
If I run a plain docker run ... followed by docker exec I get what I want, but I would like to use the built-in Jenkins functionality.
So this seems to be an SBT issue. I use version 0.13.16 inside the Docker image. From what I understand, the classpath contains a query parameter that SBT:
doesn't like
doesn't know how to handle
considers illegal
I didn't put any such query parameters there myself, so I thought the .inside method does that. I checked the environment in the container and found a single entry containing a query string: RUN_CHANGES_DISPLAY_URL=<my_ip>/job/scheduler/job/fix-jenkins-pipeline/23/display/redirect?page=changes. I tried to unset it but didn't manage to.
I'm out of ideas and not really sure I'm heading in the right direction. Any help would be appreciated.
After long and tedious searching, what finally worked for me was explicitly setting the .sbt and .ivy2 folders inside the Docker container like this:
sbt -Dsbt.global.base=.sbt -Dsbt.boot.directory=.sbt -Dsbt.ivy.home=.ivy2 clean test
That somehow prevents sbt from generating the ? folder and puts the aforementioned folders directly in the root of the checkout directory.
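In the pipeline from the question, that would look roughly like this (a sketch; only the sbt invocation changes):
scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
    // Keep sbt's global base, boot directory and Ivy home inside the workspace checkout
    sh 'sbt -Dsbt.global.base=.sbt -Dsbt.boot.directory=.sbt -Dsbt.ivy.home=.ivy2 clean test'
}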
I spent a lot of time tracing this through the code.
It looks like the easiest solution is to pass -Duser.home=<path> to sbt, or to set it in the SBT_OPTS environment variable; then all the remaining directories will be resolved as if <path> were the user's home directory.
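For example, a sketch of the SBT_OPTS variant inside the container (the /home/jenkins path is an assumption; use any directory that is writable in your image):
scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
    // Give sbt an explicit, writable home so it does not derive one from a URI with a query string
    withEnv(['SBT_OPTS=-Duser.home=/home/jenkins']) {
        sh 'sbt clean test'
    }
}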
I fixed this by setting Ivy's cache directory -> How to override the location of Ivy's Cache?
The problem was that it wasn't set, and by default sbt creates a ? folder, which in turn can't be handled by sbt itself.
I created a custom Dockerfile to have more control over sbt.
Here are the steps I executed to solve the issue:
I created a file called ivysettings.xml with the following contents:
<ivysettings>
    <properties environment="env" />
    <caches defaultCacheDir="/home/jenkins/.ivy2/cache" />
</ivysettings>
and a Dockerfile:
FROM openjdk:8
RUN wget -O- "http://downloads.lightbend.com/scala/2.11.11/scala-2.11.11.tgz" \
| tar xzf - -C /usr/local --strip-components=1
RUN curl -Ls https://git.io/sbt > /usr/bin/sbt && chmod 0755 /usr/bin/sbt
RUN adduser -u 1000 --disabled-password --gecos "" jenkins
ADD ./files/ivysettings.xml /home/jenkins/.ivy2/ivysettings.xml
RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
CMD ["sbt"]
I then pushed the image to our private docker repository and our pipeline finally works!
The problem is that Jenkins runs as a specific user inside the container, but overriding it does the trick:
withDockerContainer(args: "-u root -v ${HOME}/.sbt:/root/.sbt -v ${HOME}/.ivy2:/root/.ivy2 -e HOME=/root",
                    image: 'xyz/sbt:v') {
    sh 'sbt clean test'
}

How to save a Docker volume from within a CloudBees Pipeline in case of failure

I run a set of API tests in a Docker container that is started by a Jenkins pipeline stage (CloudBees plugin).
I would like to save the test logs in case the stage (see below) fails.
I tried to do it with a post action in a later stage, but by then I no longer have access to the image.
How would you approach this problem? How can I save the image in case of a failure?
stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
            sh 'docker rmi --force apitestimage'
        }
    }
}
Use a post { failure { ... } } block to archive the data of the failing stage directly within that stage, not later.
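A rough sketch of that approach for the stage above (assuming the test output ends up under testing/ in the workspace, which the inside block mounts; the archive pattern is illustrative):
stage('build Dockerimage and run API-tests') {
    steps {
        script {
            def apitestimage = docker.build('apitestimage', '--no-cache=true dockerbuild')
            apitestimage.inside('-p 5800:5800') {
                dir('testing') {
                    sh 'ctest -V'
                }
            }
            sh 'docker rmi --force apitestimage'
        }
    }
    post {
        failure {
            // Archive whatever the tests wrote into the workspace before leaving the stage
            archiveArtifacts artifacts: 'testing/**', allowEmptyArchive: true
        }
    }
}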

Jenkins Pipeline: Executing a shell script

I have created a pipeline like the one below. Please note that the script files, namely "backup_grafana.sh" and "gitPush.sh", are in the source code repository where the Jenkinsfile is present. But I am unable to execute the scripts because of the following error:
/home/jenkins/workspace/grafana-backup@tmp/durable-52495dad/script.sh:
line 1: backup_grafana.sh: not found
Please note that I am running the Jenkins master on Kubernetes in a pod, so copying the script files onto the master is not possible, because the pod may be destroyed and recreated dynamically (and with a new pod my scripts would no longer be available on the Jenkins master).
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh 'backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh 'gitPush.sh'
            }
        }
    }
}
Can you please suggest how I can run these scripts in the Jenkinsfile? I would also like to mention that I want to run these scripts on a Python slave that I have already created.
If the command sh 'backup_grafana.sh' fails to execute when it actually should have executed successfully, here are two possible solutions.
1) Maybe you need a dot slash in front of those executable commands to tell your shell where they are. If they are not in your $PATH, you need to tell your shell that they can be found in the current directory. Here's the fixed Jenkinsfile with four non-whitespace characters added:
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh './backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh './gitPush.sh'
            }
        }
    }
}
2) Check whether you have declared your file as a bash or sh script by putting one of the following as the first line of your script:
#!/bin/bash
or
#!/bin/sh
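One more thing worth checking: if the scripts were committed without the execute bit set, the dot-slash form will fail with a permission error rather than "not found". In that case either invoke them through the shell or set the execute bit first, for example (a sketch using the script name from the question):
steps {
    // Option A: run the script through the shell so the execute bit is not required
    sh 'sh ./backup_grafana.sh'
    // Option B: set the execute bit once, then call it directly
    // sh 'chmod +x backup_grafana.sh && ./backup_grafana.sh'
}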
