Jenkins 2.0: Running SBT in a Docker container

I have the following Jenkinsfile:
def notifySlack = { String color, String message ->
    slackSend(color: color, message: "${message}: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}] (${env.BUILD_URL})")
}
node {
    try {
        notifySlack('#FFFF00', 'STARTED')
        stage('Checkout project') {
            checkout scm
        }
        scalaImage = docker.image('<myNexus>/centos-sbt:2.11.8')
        stage('Test project') {
            docker.withRegistry('<myNexus>', 'jenkins-nexus') {
                scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
                    sh 'sbt clean test'
                }
            }
        }
        if (env.BRANCH_NAME == 'master') {
            stage('Release new version') {
                docker.withRegistry('<myNexus>', 'jenkins-nexus') {
                    scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
                        sh 'sbt release'
                    }
                }
            }
        }
        notifySlack('#00FF00', 'SUCCESSFUL')
    } catch (e) {
        currentBuild.result = "FAILED"
        notifySlack('#FF0000', 'FAILED')
        throw e
    }
}
Unfortunately when I reach the sbt clean test line I end up with the following error:
java.lang.IllegalArgumentException: URI has a query component
at java.io.File.<init>(File.java:427)
at sbt.IO$.uriToFile(IO.scala:160)
at sbt.IO$.toFile(IO.scala:135)
at sbt.Classpaths$.sbt$Classpaths$$bootRepository(Defaults.scala:1942)
at sbt.Classpaths$$anonfun$appRepositories$1.apply(Defaults.scala:1912)
at sbt.Classpaths$$anonfun$appRepositories$1.apply(Defaults.scala:1912)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at sbt.Classpaths$.appRepositories(Defaults.scala:1912)
at sbt.Classpaths$$anonfun$58.apply(Defaults.scala:1193)
at sbt.Classpaths$$anonfun$58.apply(Defaults.scala:1190)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.EvaluateSettings$MixedNode.evaluate0(INode.scala:175)
at sbt.EvaluateSettings$INode.evaluate(INode.scala:135)
at sbt.EvaluateSettings$$anonfun$sbt$EvaluateSettings$$submitEvaluate$1.apply$mcV$sp(INode.scala:69)
at sbt.EvaluateSettings.sbt$EvaluateSettings$$run0(INode.scala:78)
at sbt.EvaluateSettings$$anon$3.run(INode.scala:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
If I run a plain docker run ... followed by docker exec I get what I want, but I would like to use the built-in Jenkins functionality.
So this seems to be an SBT issue. I use version 0.13.16 inside the Docker image. From what I understand, the classpath contains a query parameter that SBT:
doesn't like
doesn't know how to handle
considers illegal
I didn't add any such query parameters myself, so I thought this .inside method does it. I checked the env in the container and found a single entry RUN_CHANGES_DISPLAY_URL=<my_ip>/job/scheduler/job/fix-jenkins-pipeline/23/display/redirect?page=changes. I tried to unset it but didn't manage to.
I'm out of ideas and am not even sure I'm looking in the right direction. Any help would be appreciated.

So after long and tedious searching, what finally worked for me was explicitly setting the .sbt and .ivy2 folders inside the Docker container, like this:
sbt -Dsbt.global.base=.sbt -Dsbt.boot.directory=.sbt -Dsbt.ivy.home=.ivy2 clean test
That somehow prevents sbt from generating the ? folder and instead puts the aforementioned folders directly in the root of the checkout.

I spent a lot of time tracking this down through the code.
It looks like the easiest solution is to just pass -Duser.home=<path> to sbt, or to set it in the SBT_OPTS environment variable; all the remaining directories will then be derived as if <path> were the user's home directory.
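A minimal sketch of the SBT_OPTS variant (the directory is a throwaway placeholder; sbt itself is not invoked here — in a Jenkinsfile you would point this at the workspace or a mounted volume):

```shell
# Point Java's user.home at a writable directory via SBT_OPTS, so sbt
# derives its .sbt and .ivy2 folders under it instead of under "?"
fake_home=$(mktemp -d)
export SBT_OPTS="-Duser.home=$fake_home"
echo "sbt would now use $fake_home/.sbt and $fake_home/.ivy2"
```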

I fixed this by setting Ivy's cache directory -> How to override the location of Ivy's Cache?
The problem was that it wasn't set, and by default a ? folder is created, which in turn sbt itself can't handle.
I created a custom Dockerfile to have more control over sbt.
Here are the steps I executed to solve the issue:
I created a file called ivysettings.xml with the following contents:
<ivysettings>
    <properties environment="env" />
    <caches defaultCacheDir="/home/jenkins/.ivy2/cache" />
</ivysettings>
and a Dockerfile:
FROM openjdk:8
RUN wget -O- "http://downloads.lightbend.com/scala/2.11.11/scala-2.11.11.tgz" \
| tar xzf - -C /usr/local --strip-components=1
RUN curl -Ls https://git.io/sbt > /usr/bin/sbt && chmod 0755 /usr/bin/sbt
RUN adduser -u 1000 --disabled-password --gecos "" jenkins
ADD ./files/ivysettings.xml /home/jenkins/.ivy2/ivysettings.xml
RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
CMD ["sbt"]
I then pushed the image to our private docker repository and our pipeline finally works!

The problem is that Jenkins runs the container as a specific user. But overriding that user does the trick:
withDockerContainer(args: "-u root -v ${HOME}/.sbt:/root/.sbt -v ${HOME}/.ivy2:/root/.ivy2 -e HOME=/root",
                    image: 'xyz/sbt:v') {
    // build steps, e.g. sh 'sbt clean test'
}

Related

Jenkins - shell script - find parameter format not correct

Hope one of you can help. I'm running a script in a Jenkins pipeline so that I can upload source maps to Rollbar, which means I need to loop through the minified JS files. I'm trying to do this with the find command, but it keeps giving me the error find: parameter format not correct. Script below:
stages {
    stage('Set correct environment link') {
        steps {
            script {
                buildTarget = params.targetEnv
                if (params.targetEnv.equals("prj1")) {
                    linkTarget = "http://xxx.xxx.xx/xxx/"
                } else if (params.targetEnv.equals(.......)..etc.
    stage('Uploading source maps to Rollbar') {
        steps {
            sh ''' #!/bin/bash
                # Save a short git hash, must be run from a git
                # repository (or a child directory)
                version=$(git rev-parse --short HEAD)
                # Use the post_server_item access token, you can
                # find one in your project access token settings
                post_server_item=e1d04646bf614e039d0af3dec3fa03a7
                echo "Uploading source maps for version $version!"
                # We upload a source map for each resulting JavaScript
                # file; the path depends on your build config
                for path in $(find dist/renew -name "*.js"); do
                    # URL of the JavaScript file on the web server
                    url=${linkTarget}/$path
                    # a path to a corresponding source map file
                    source_map="@$path.map"
                    echo "Uploading source map for $url"
                    curl --silent --show-error https://api.rollbar.com/api/1/sourcemap \
                        -F access_token=$post_server_item \
                        -F version=$version \
                        -F minified_url=$url \
                        -F source_map=$source_map \
                        > /dev/null
                done
            '''
        }
    }
    stage('Deploy') {
        steps {
            echo 'Deploying....'
From Bash CLI
With the help of @jeb I managed to resolve this find issue on Jenkins by using the absolute path to find in the shell script - initially it was picking up the Windows FIND command, so I needed to point at the Cygwin find command.
before: for path in $(find dist/renew -name "*.js"); do
and after: for path in $(/usr/bin/find dist/renew -name "*.js"); do
Thanks to all who commented.
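The before/after difference can be sanity-checked locally, assuming a Unix find at /usr/bin/find (file names below are made up for the demo):

```shell
# Build a small tree and loop over the .js files with an absolute path
# to the Unix find, exactly as in the fixed script
mkdir -p dist/renew/sub
touch dist/renew/app.js dist/renew/sub/vendor.js dist/renew/readme.txt
count=0
for path in $(/usr/bin/find dist/renew -name '*.js'); do
    count=$((count + 1))
done
echo "found $count .js files"
```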

Use Docker Pipeline Plugin without interactive mode

I'm trying to use Docker with a Jenkins scripted pipeline and have run into several problems.
If I use it in sh "docker ..." it results in an error:
docker: command not found
I tried to fix it by changing the install settings in the Global Tool Configuration, but didn't succeed.
I'm now trying to use the Docker plugin.
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
where cmd is --user=\$UID --rm -t -v ./build/:/home/user/build 192.168.1.33:5000/my-img
I use this code for parallel stages (the list of stages is generated dynamically), and got this error:
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
What is proper usage of this plugin?
I found a lot of examples with withRun and other methods from docker, but I don't need to run any commands inside this image, I have command in Dockerfile (so it built-in for my container).
The error itself has the answer :).
java.net.MalformedURLException: no protocol: 192.168.1.33:5000
You are missing the protocol in the custom registry URL. Refer to https://jenkins.io/doc/book/pipeline/docker/#custom-registry
def run_my_stage(String name, String cmd, String commit) {
    return {
        stage(name) {
            node("builder") {
                docker.withRegistry("https://192.168.1.33:5000") {
                    def myimg = docker.image("my-img")
                    sh "docker pull ${myimg.imageName()}"
                    sh "docker run ${cmd}"
                }
            }
        }
    }
}
You are missing the protocol, the registry must be https://192.168.1.33:5000
I also had a problem with a relative path, but simply prepending pwd to the relative build path fixed it.
Thx @yzT

Jenkins pipeline script to build a module in a subdirectory

I have a Maven project from a git URL, and I want to build only one of its submodules.
In my pipeline script I write:
...
stage("mvn build") {
    steps {
        script {
            sh "mvn package -DskipTests=true"
        }
    }
}
An error arose: The goal you specified requires a project to execute but there is no POM in this directory (/xx/jenkins/workspace/biz-commons_deploy). So I added the commands:
sh "cd cmiot-services/comm" # subdir of biz-commons_deploy
def PWD = pwd();
echo "##=${PWD} "
sh "mvn package -DskipTests=true"
It doesn't work: it prints ##=/root/.jenkins/workspace/biz-commons_deploy, and the error is the same as before.
How can I solve this problem, and why do the echo and the error report different workspace paths?
I made it work using sh "mvn -f cmiot-services/comm/pom.xml package -DskipTests=true", but I still don't know where these two paths come from and why sh "cd ..." does not work.
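The reason sh "cd ..." has no effect can be reproduced outside Jenkins: each sh step runs in a fresh shell, which behaves like a subshell (the directory name below is made up for the demo):

```shell
# A cd inside a subshell (one "sh step") is gone by the time the next
# subshell (the next "sh step") starts
mkdir -p demo_subdir
step1=$( cd demo_subdir && pwd )   # like: sh "cd cmiot-services/comm"
step2=$( pwd )                     # like the next sh step: still in the workspace
echo "first step saw $step1, second step saw $step2"
```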
steps {
    sh '''
        # list items in the current directory to see where your pom.xml is
        ls -l
        # if you don't know the exact relative path of the folder
        # containing pom.xml, run the job once with the next two lines
        # commented out
        cd <folder containing pom.xml>
        mvn package -DskipTests=true
    '''
}
As Yong answered, sh steps are independent of one another; imagine Jenkins opening a new ssh connection to your slave for each one.
For your script, instead of working around it with sh, why not use the built-in dir step?
Something like this should do it:
stage("mvn build") {
    steps {
        script {
            dir('cmiot-services/comm') {
                sh "mvn package -DskipTests=true"
            }
        }
    }
}
When you execute a Jenkins pipeline, the current directory is the Jenkins workspace directory.
You can add a step to clone the repo your code is in (granted that the environment your Jenkins instance runs in can connect to your repo and clone it).
You can then navigate into the directory that contains the pom.xml and finally execute the Maven command.
...
stage("Clone Repo") {
    steps {
        script {
            sh "git clone ssh://git@bitbucket.org:repo/app.git"
        }
    }
}
stage("mvn build") {
    steps {
        script {
            // each sh step starts a fresh shell, so chain cd and mvn in one step
            sh "cd app/ && pwd && mvn package -DskipTests=true"
        }
    }
}

How to configure Gradle cache when running Jenkins with Docker

I'm working on building a Jenkins pipeline for building a project with Gradle.
Jenkins has several slaves. All the slaves are connected to a NAS.
Some of the build steps run Gradle inside Docker containers while others run directly on the slaves.
The goal is to use as much cache as possible, but I have also run into deadlock issues such as:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
> Timeout waiting to lock file hash cache (/home/slave/.gradle/caches/4.2/fileHashes). It is currently in use by another Gradle instance.
Due to the Gradle issue mentioned in the comment above, I do something like this: copy the Gradle cache into the container at startup, and write any changes back at the end of the build:
pipeline {
    agent {
        docker {
            image '…'
            // Mount the Gradle cache in the container
            args '-v /var/cache/gradle:/tmp/gradle-user-home:rw'
        }
    }
    environment {
        HOME = '/home/android'
        GRADLE_CACHE = '/tmp/gradle-user-home'
    }
    stages {
        stage('Prepare container') {
            steps {
                // Copy the Gradle cache from the host, so we can write to it
                sh "rsync -a --include /caches --include /wrapper --exclude '/*' ${GRADLE_CACHE}/ ${HOME}/.gradle || true"
            }
        }
        …
    }
    post {
        success {
            // Write updates to the Gradle cache back to the host
            sh "rsync -au ${HOME}/.gradle/caches ${HOME}/.gradle/wrapper ${GRADLE_CACHE}/ || true"
        }
    }
}
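The rsync filter list can be exercised standalone. This sketch uses made-up directory names (hostcache stands in for ${GRADLE_CACHE}, workhome for ${HOME}) and guards against rsync being absent; the point is that --include '/caches' --include '/wrapper' --exclude '/*' copies only those two top-level directories and everything beneath them:

```shell
if command -v rsync >/dev/null 2>&1; then
    mkdir -p hostcache/caches hostcache/wrapper hostcache/daemon workhome/.gradle
    touch hostcache/caches/seed.bin
    # only /caches and /wrapper survive the top-level filter; /daemon does not
    rsync -a --include '/caches' --include '/wrapper' --exclude '/*' \
        hostcache/ workhome/.gradle/
    [ -f workhome/.gradle/caches/seed.bin ] && [ ! -e workhome/.gradle/daemon ] \
        && result=filtered-copy-ok || result=unexpected
else
    result=rsync-missing
fi
echo "$result"
```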

Jenkinsfile custom docker container "could not find FROM instruction"

I have just started working with Jenkinsfiles and Docker so apologies if this is something obvious.
I have a repo containing a Dockerfile and a Jenkinsfile.
The Dockerfile simply extends a base Ubuntu image (ubuntu:trusty) by adding several dependencies and build tools.
The Jenkinsfile currently only builds the Docker container for me:
node('docker') {
    stage "Prepare environment"
    checkout scm
    docker.build('build-image')
}
When I run the Jenkins build, the output log shows the Docker container being successfully created, but just before it should finish successfully, I get:
Successfully built 04ba77c72c74
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
ERROR: could not find FROM instruction in /home/emackenzie/jenkins/workspace/001_test-project_PR-1-ROWUV6YLERZKDQWCAGJK5MQHNKY7RJRHC2TH4DNOZSEKE6PZB74A/Dockerfile
Finished: FAILURE
I have been unable to find any guidance on the internet about why I am getting this error, so any help would be greatly appreciated.
Dockerfile:
FROM ubuntu:trusty
MAINTAINER Ed Mackenzie
# setup apt repos
RUN echo "deb http://archive.ubuntu.com/ubuntu/ trusty multiverse" >> /etc/apt/sources.list \
    && echo "deb-src http://archive.ubuntu.com/ubuntu/ trusty multiverse" >> /etc/apt/sources.list \
    && apt-get update
# python
RUN apt-get install -y python python-dev python-openssl
It's because your FROM line uses a tab for whitespace, instead of space(s). This is a bug in the Jenkins CI Docker workflow plugin, which expects the line to begin with FROM followed by a space.
From the jenkinsci/docker-workflow-plugin source on Github:
String fromImage = null;
// ... other stuff
if (line.startsWith("FROM ")) {
    fromImage = line.substring(5);
    break;
}
// ... other stuff ...
if (fromImage == null) {
    throw new AbortException("could not find FROM instruction in " + dockerfile);
}
If you use spaces instead of tabs, it should work fine.
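The plugin's prefix test can be mimicked with grep: FROM followed by a tab does not match FROM followed by a space (throwaway file names):

```shell
# Two minimal Dockerfiles: one with "FROM<tab>", one with "FROM<space>"
printf 'FROM\tubuntu:trusty\n' > Dockerfile.tab
printf 'FROM ubuntu:trusty\n'  > Dockerfile.space
# grep -c '^FROM ' is the same check as line.startsWith("FROM ")
tab_hits=$(grep -c '^FROM ' Dockerfile.tab || true)
space_hits=$(grep -c '^FROM ' Dockerfile.space)
echo "tab: $tab_hits match, space: $space_hits match"
```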
I just ran into the same problem and it was a similar solution. Check whether the file is encoded with a BOM at the beginning (this can be done with something like Notepad++). If so, save it without the marker and the plugin will stop complaining.
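A BOM can also be detected and stripped from the command line, without Notepad++ (throwaway file name; the BOM bytes are written with octal escapes):

```shell
# Write a Dockerfile with a UTF-8 BOM (0xEF 0xBB 0xBF), then inspect it
printf '\357\273\277FROM ubuntu:trusty\n' > Dockerfile.bom
bom=$(head -c 3 Dockerfile.bom | od -An -tx1 | tr -d ' \n')
echo "leading bytes: $bom"   # efbbbf is the UTF-8 BOM
# Drop the three BOM bytes so the line starts with a literal FROM
tail -c +4 Dockerfile.bom > Dockerfile.clean
head -c 4 Dockerfile.clean
```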
The error can be solved by changing the "from" statement to "FROM"
