I am running Jenkins on Kubernetes and want to build a project using a Docker image. Here is the pipeline:
pipeline {
    agent { node { label 'kubeagent' } }
    stages {
        stage('Example Build') {
            agent {
                docker {
                    image 'node'
                    args "-u root"
                }
            }
            steps {
                echo 'Hello'
            }
        }
    }
}
The build runs on a build agent that is dynamically created on Kubernetes. When I run this pipeline I get a message like "There are no nodes with the label ‘docker’". The main Jenkins image I deployed doesn't have Docker installed. I want to use Docker dynamically from Jenkins. I have configured the Docker plugins as required, but still no luck. Is there any way to use the tools section in the pipeline and then use Docker dynamically?
tools {
    maven 'maven'
    dockerTool 'docker'
}
Any lead here?
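For what it's worth, here is a rough sketch of how the tools-based approach is usually wired up, assuming a Docker installation named 'docker' exists under Global Tool Configuration. Note that dockerTool only provides the docker client; a daemon still has to be reachable (for example a dind sidecar in the pod, or DOCKER_HOST pointing at one):
pipeline {
    agent { node { label 'kubeagent' } }
    tools {
        // 'docker' must match the name of a Docker installation in Global Tool Configuration
        dockerTool 'docker'
    }
    stages {
        stage('Example Build') {
            steps {
                // The docker client from the tool is on the PATH, but it still needs a daemon to talk to
                sh 'docker version'
            }
        }
    }
}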
I am running a CI pipeline for a repo in Jenkins using a declarative pipeline.
The repo now contains its own Dockerfile at .docker/php/Dockerfile, which I want to use to build a container and run the pipeline in.
Normally, I get the code in the container using a volume in docker-compose.yaml:
volumes:
- .:/home/wwwroot/comms-and-push
...So I set up my Jenkinsfile like this:
pipeline {
    agent {
        dockerfile {
            dir ".docker/php"
            args "-v .:/home/wwwroot/comms-and-push"
        }
    }
    stages {
        ...
However, this results in an error when running the pipeline:
Error: docker: Error response from daemon: create .: volume name is too short, names should be at least two alphanumeric characters.
I cannot specify the full path because I don't know it in this context -- it's running in some Jenkins workspace.
What I've tried so far:
Using the WORKSPACE variable
args "-v ${WORKSPACE}:/home/wwwroot/comms-and-push"
results in error:
No such property: WORKSPACE for class: groovy.lang.Binding
Setting an environment variable before the pipeline:
environment {
    def WORKSPACE = pwd()
}
pipeline {
    agent {
        dockerfile {
            dir '.docker/php'
            args "-v ${env.WORKSPACE}/:/home/wwwroot/comms-and-push"
        }
    }
...
results in ${env.WORKSPACE} resolving to null.
The standard Jenkins Docker integration already knows how to mount the workspace directory into the container. The workspace has the same filesystem path inside different containers and directly on a worker outside a container, so you don't need to supply a docker run -v argument yourself.
pipeline {
    agent {
        dockerfile {
            dir ".docker/php"
            // No args
        }
    }
    stages {
        stage('Diagnostics') {
            steps {
                sh "pwd"   // Prints the WORKSPACE
                sh "ls"    // Shows the build tree contents
                sh "ls /"  // Shows the image's root directory
            }
        }
    }
}
If you look at the extended Jenkins logs, you'll see that it provides the -v option itself.
I would suggest using the docker.image(...).inside(...) method, so in your case it is going to be something like docker.image('your-image').inside('-v /home/wwwroot/comms-and-push:/home/wwwroot/comms-and-push:rw') { ... }.
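A minimal scripted-pipeline sketch of that approach, under the assumption that the image is built from the repo's .docker/php/Dockerfile; the comms-and-push-ci tag is only illustrative:
node {
    checkout scm
    // Build the image from the repo's Dockerfile directory (illustrative tag name)
    def img = docker.build('comms-and-push-ci', '.docker/php')
    // Jenkins mounts the workspace into the container automatically; the extra -v
    // only adds the path the application expects inside the container
    img.inside("-v ${env.WORKSPACE}:/home/wwwroot/comms-and-push:rw") {
        sh 'ls /home/wwwroot/comms-and-push'
    }
}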
I am using Docker BuildKit to build a Docker image for each microservice.
./build.sh
export DOCKER_BUILDKIT=1
# ....
docker build -t ....
# ...
This works on my machine with Docker (18.09.2).
However, it does not work with Jenkins, which I set up as follows:
EKS is provisioned with a Terraform module
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "5.0.0"
# ....
}
Jenkins is deployed on EKS (v1.12.10-eks-ffbd9, docker://18.6.1) via this Helm Chart.
Jenkins plugins as defined in Values of the helm release:
kubernetes:1.18.1
workflow-job:2.33
workflow-aggregator:2.6
credentials-binding:1.19
git:3.11.0
blueocean:1.19.0
bitbucket-oauth:0.9
Jenkins Pipeline is declarative, and it uses a Pod template where the container image is docker:18-dind and the container name is dind.
This is my Jenkinsfile
pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yamlFile 'jenkins-pod.yaml'
        }
    }
    stages {
        stage('Build Backends') {
            steps {
                container('dind') {
                    sh 'chmod +x *sh'
                    sh './build.sh -t=dev'
                }
                containerLog 'dind'
            }
        }
    }
}
When Jenkins executes this pipeline, it shows this error:
buildkit not supported by daemon
I am not sure which piece of software I should upgrade to make Docker BuildKit work, and to which version:
The Terraform EKS module, which is now 5.0.0?
Or
the docker:18-dind image, which serves as the environment of the ephemeral Jenkins slaves?
Or
the Jenkins plugin kubernetes:1.18.1?
As per the docker-ce sources, there are two requirements for the isSessionSupported check to pass so that a BuildKit session can be started:
dockerCli.ServerInfo().HasExperimental
versions.GreaterThanOrEqualTo(dockerCli.Client().ClientVersion(), "1.31")
So:
check the version of your docker CLI library,
and check whether the HasExperimental option is enabled.
To check whether it has Experimental support, run:
docker version -f '{{.Server.Experimental}}'
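A quick way to check both conditions from the pipeline itself, for example as extra sh steps inside the 'Build Backends' stage (a sketch, assuming the dind container from the pod template above):
container('dind') {
    // Negotiated client API version; BuildKit requires at least 1.31
    sh "docker version --format '{{.Client.APIVersion}}'"
    // Whether the daemon runs with experimental features enabled (required before 18.09)
    sh "docker version --format '{{.Server.Experimental}}'"
}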
Docker BuildKit support came out of experimental in 18.09, so you may need to upgrade Docker inside EKS:
EKS (v1.12.10-eks-ffbd9, docker://18.6.1)
Or perhaps you have an old dind image (the 18-dind tag should be new enough, but an older version of that tag pointing to 18.06 or 18.03 would not). You can try 18.09-dind or 19-dind, which should both work if the build is actually happening inside dind.
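For example, a rough sketch of pinning a newer dind image directly in the pod definition (adjust it to whatever your jenkins-pod.yaml already contains; the privileged flag is required for dind):
agent {
    kubernetes {
        defaultContainer 'jnlp'
        yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dind
    image: docker:18.09-dind   # 18.09+ ships BuildKit as a non-experimental feature
    securityContext:
      privileged: true
'''
    }
}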
I'd like to automate the Flyway migrations for our MariaDB database. For testing purposes I added the following service to my docker-compose.yml running only the info command.
flyway:
  image: boxfuse/flyway:5.2.4
  command: -url=jdbc:mariadb://mariadb_service -schemas=$${MYSQL_DATABASE} -table=schema_version -connectRetries=60 info
  volumes:
    - ./db/migration:/flyway/sql
  depends_on:
    - mariadb_service
This seems to be working, i.e. I can see the output of info.
Now I'd like to take this idea one step further and integrate this into our Jenkins build pipeline. This is where I get stuck.
If I deployed the Docker stack with the above docker-compose.yml in my Jenkinsfile, would the corresponding stage fail upon errors during the migration? In other words, would Jenkins notice that error?
If not, how can I integrate the Flyway migration into my Jenkins pipeline? I found that there is a Flyway Runner plugin, but I couldn't tell whether it can connect to a database in a Docker stack deployed by the Jenkinsfile.
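For reference, a minimal sketch of how the compose-based variant can surface failures, assuming docker-compose is available on the build agent and the docker-compose.yml above sits in the workspace: unlike docker stack deploy, docker-compose run returns the exit code of the flyway container, so a failed migration fails the sh step and therefore the stage.
stage('DB migration') {
    steps {
        // A non-zero exit code from the flyway container fails this step and the stage
        sh 'docker-compose run --rm flyway'
    }
}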
You can use Jenkins' built-in support for Docker. Then your pipeline script may contain a stage like this:
stage('Apply DB changes') {
    agent {
        docker {
            image 'boxfuse/flyway:5.2.4'
            args '-v ./db/migration:/flyway/sql --entrypoint=\'\''
        }
    }
    steps {
        sh "/flyway/flyway -url=jdbc:mariadb://mariadb_service -schemas=${MYSQL_DATABASE} -table=schema_version -connectRetries=60 info"
    }
}
This way the steps will be executed within a temporary Docker container created by the Jenkins agent from the boxfuse/flyway image. If the command fails, the entire stage will fail as well.
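If MYSQL_DATABASE is not already defined on the agent, it can be supplied via an environment block at the pipeline or stage level (a sketch; the value below is only a placeholder):
environment {
    // Placeholder value; use whatever your existing setup defines for MYSQL_DATABASE
    MYSQL_DATABASE = 'mydb'
}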
I have created a Dockerfile (for a Node JNLP slave which can be used with the Kubernetes plugin of Jenkins). I am extending from the official image jenkinsci/jnlp-slave:
FROM jenkinsci/jnlp-slave
USER root
MAINTAINER Aryak Sengupta <aryak.sengupta#hyland.com>
LABEL Description="Image for NodeJS slave"
COPY cert.crt /usr/local/share/ca-certificates
RUN update-ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash \
&& apt-get install -y nodejs
ENTRYPOINT ["jenkins-slave"]
I have this image saved inside my Pod template (in the K8s plugin configuration). Now, when I try to run a build on this slave, I find that two containers get spawned inside the Pod (a screenshot to prove the same).
My Pod template looks like this:
And my Kubernetes configuration looks like this:
Now if I do a simple docker ps, I find that there are two containers that have started up (why?):
Now, inside the Jenkins job configuration, whatever I add in the build step gets executed in the first container.
Even if I use the official Node container inside my PodTemplate, the result is still the same:
I have tried to print the Node version inside my Jenkins job, and the output is "Node not found". Also, to verify my hunch, I did a docker exec into my second container and tried to print the Node version there. In this case, it works absolutely fine.
This is what my build step looks like:
So, to boil it down, I have two major questions:
Why do two separate containers (one for JNLP and one with all my custom changes) start up whenever I fire up the Jenkins job?
Why is my job running on the first container where Node isn't installed? How do I achieve the desired behaviour of building my project with Node using this configuration?
What am I missing?
P.S. - Please do let me know if the question turns out to be unclear in some parts.
Edit: I understand that this can be done using the Pipeline Jenkins plugin where I can explicitly mention the container name, but I need to do this from the Jenkins UI. Is there any way to specify the container name along with the slave name which I am already doing like this:
The Jenkins kubernetes plugin will always create a JNLP slave container inside the pod that is created to perform the build. The podTemplate is where you define the other containers you need in order to perform your build.
In this case it seems you would want to add a Node container to your podTemplate. You would then have the build run inside the named Node container.
You shouldn't really care where the Pod runs. All you need to do is make sure you add a container that has the resources you need (like Node in this case). You can add as many containers as you want to a podTemplate. I have some with 10 or more containers for steps like PMD, Maven, curl, etc.
I use a Jenkinsfile with pipelines.
podTemplate(cloud: 'k8s-houston', label: 'api-hire-build',
    containers: [
        containerTemplate(name: 'maven', image: 'maven:3-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'pmd', image: 'stash.company.com:8443/pmd:pmd-bin-5.5.4', alwaysPullImage: false, ttyEnabled: true, command: 'cat')
    ],
    volumes: [
        persistentVolumeClaim(claimName: 'jenkins-pv-claim', mountPath: '/mvn/.m2nrepo')
    ]
)
{
    node('api-hire-build') {
        stage('Maven compile') {
            container('maven') {
                sh "mvn -Dmaven.repo.local=/mvn/.m2nrepo/repository clean compile"
            }
        }
        stage('PMD SCA (docker)') {
            container('pmd') {
                sh 'run.sh pmd -d "$PWD"/src -f xml -reportfile "$PWD"/target/pmd.xml -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
                sh 'run.sh pmd -d "$PWD"/src -f html -reportfile "$PWD"/target/pmdreport.html -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
                sh 'run.sh cpd --files "$PWD"/src --minimum-tokens 100 --failOnViolation false --language java --format xml > "$PWD"/target/duplicate-code.xml'
            }
            archive 'target/duplicate-code.xml'
            step([$class: 'PmdPublisher', pattern: 'target/pmd.xml'])
        }
    }
}
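For completeness, a declarative sketch of the same idea applied to the Node case from the question (image tag and stage name are only examples):
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: node
    image: node:8        # example tag; use whatever version you need
    command: ['cat']     # keep the container alive so build steps can run in it
    tty: true
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('node') {
                    sh 'node --version'
                }
            }
        }
    }
}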
Alright, so I've figured out the solution. mhang li's answer was the clue, but he didn't explain it one bit.
Basically, you need to take the official Jenkins slave image found here and modify it to include the changes for your slave as well. Essentially, you are combining the JNLP and slave containers into one and building a combined image.
The modified Dockerfile will look something like this (picking up from the Dockerfile linked):
FROM jenkins/slave:3.27-1
MAINTAINER Oleg Nenashev <o.v.nenashev#gmail.com>
LABEL Description="This is a base image, which allows connecting Jenkins agents via JNLP protocols" Vendor="Jenkins project" Version="3.27"

COPY jenkins-slave /usr/local/bin/jenkins-slave

# Include the code for your slave here, e.g. install Node, Java, whatever you need

# Make sure you include the jenkins-slave script copied above as well
ENTRYPOINT ["jenkins-slave"]
Now, name the slave container jnlp (reason: a known bug; see the issue linked at the end). So now, only one container will spawn, and it will be your JNLP + slave combined. All in all, your Kubernetes Plugin Pod Template will look something like this. Notice the custom URL to the Docker image I have put in. Also, make sure you don't include a Command To Run unless you need one.
Done! Your builds should now run within this container and should function exactly as you programmed the Dockerfile!
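If you ever need the same trick from a pipeline podTemplate instead of the UI, a rough sketch looks like this (the image name is a placeholder for your combined image; the args value is the standard pair of JNLP arguments the plugin substitutes):
podTemplate(label: 'node-build', containers: [
    containerTemplate(
        name: 'jnlp',                                    // overrides the default JNLP container
        image: 'registry.example.com/jnlp-node:latest',  // placeholder for your combined image
        args: '${computer.jnlpmac} ${computer.name}'
    )
]) {
    node('node-build') {
        // Runs in the jnlp container, i.e. your combined JNLP + Node image
        sh 'node --version'
    }
}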
Set the Container Template -> Name to jnlp.
https://issues.jenkins-ci.org/browse/JENKINS-40847