Jenkinsfile docker

I'm running a Jenkins instance on GCE inside a Docker container and would like to execute a multibranch pipeline from a Jenkinsfile hosted on GitHub. I'm following the GCE Jenkins tutorial for this. Here is my Jenkinsfile:
node {
    def project = 'xxxxxx'
    def appName = 'gceme'
    def feSvcName = "${appName}-frontend"
    def imageTag = "eu.gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

    checkout scm

    sh("echo Build image")
    stage 'Build image'
    sh("docker build -t ${imageTag} .")

    sh("echo Run Go tests")
    stage 'Run Go tests'
    sh("docker run ${imageTag} go test")

    sh("echo Push image to registry")
    stage 'Push image to registry'
    sh("gcloud docker push ${imageTag}")

    sh("echo Deploy Application")
    stage "Deploy Application"
    switch (env.BRANCH_NAME) {
        // Roll out to canary environment
        case "canary":
            // Change deployed image in canary to the one we just built
            sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/canary/*.yaml")
            sh("kubectl --namespace=production apply -f k8s/services/")
            sh("kubectl --namespace=production apply -f k8s/canary/")
            sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
            break
        // Roll out to production
        case "master":
            // Change deployed image in production to the one we just built
            sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
            sh("kubectl --namespace=production apply -f k8s/services/")
            sh("kubectl --namespace=production apply -f k8s/production/")
            sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
            break
        // Roll out a dev environment
        default:
            // Create namespace if it doesn't exist
            sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
            // Don't use public load balancing for development branches
            sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
            sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
            sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
            sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
            echo 'To access your environment run `kubectl proxy`'
            echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
    }
}
I always get a "docker: not found" error:
[apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ] Running shell script
+ docker build -t eu.gcr.io/xxxxx/apiservice:master.1 .
/var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ#tmp/durable-b4503ecc/script.sh: 2: /var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ#tmp/durable-b4503ecc/script.sh: docker: not found
What do I have to change to make docker work inside Jenkins?

That looks like DinD (Docker-in-Docker), which this recent issue points out as problematic.
See "Using Docker-in-Docker for your CI or testing environment? Think twice."
That same issue recommends running in privileged mode.
Also make sure the Docker container you are executing in actually has docker installed.

You need the docker client installed in the Jenkins agent image used for that node (e.g. cloudbees/java-with-docker-client),
and the docker socket mounted in the agent.
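In practice, the socket approach means starting the Jenkins container with the host's /var/run/docker.sock mounted, so the docker client inside the container talks to the host's daemon rather than a nested one. A minimal sketch, assuming the stock jenkins/jenkins:lts image and default ports (adjust to your setup):
docker run -d \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts
# The docker CLI itself must still be installed in the image (for example, a
# custom image built FROM jenkins/jenkins:lts that adds docker-ce-cli), and the
# jenkins user needs permission on the socket (e.g. via a matching docker group).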

Related

Terraform ignore parallelism flag when running through Jenkins

I am running a Terraform job through a Jenkins pipeline. terraform refresh is taking too long (~10m); locally, with -parallelism=60, Terraform runs much faster (~2.5m).
When running the same config through a Jenkins slave with parallelism set, I don't see any improvement in running time.
Jenkins ver. 2.154
Jenkins Docker plugin 1.1.6
SSH Agent plugin 1.19
Flow: Jenkins master creates the job -> Jenkins slave runs a Docker image of Terraform
def terraformRun(String envName, String terraformAction, String dirName = 'env') {
    sshagent(['xxxxxxx-xxx-xxx-xxx-xxxxxxxx']) {
        withEnv(["ENV_NAME=${envName}", "TERRAFORM_ACTION=${terraformAction}", "DIR_NAME=${dirName}"]) {
            sh '''#!/bin/bash
                set -e
                ssh-keyscan -H "bitbucket.org" >> ~/.ssh/known_hosts
                AUTO_APPROVE=""
                echo terraform "${TERRAFORM_ACTION}" on "${ENV_NAME}"
                cd "${DIR_NAME}"
                export TF_WORKSPACE="${ENV_NAME}"
                echo "terraform init"
                terraform init -input=false
                echo "terraform refresh"
                # Refresh works, but it seems to ignore parallelism
                terraform apply -refresh-only -auto-approve -parallelism=60 -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars
                echo "terraform ${TERRAFORM_ACTION}"
                if [ "${TERRAFORM_ACTION}" = "apply" ]; then
                    AUTO_APPROVE="-auto-approve"
                fi
                terraform ${TERRAFORM_ACTION} -refresh=false -var-file=tfvars/"${ENV_NAME}".tfvars -var-file=../variables.tfvars ${AUTO_APPROVE}
                echo "terraform ${TERRAFORM_ACTION} on ${ENV_NAME} finished successfully."
            '''
        }
    }
}
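For reference, a sketch of how this helper might be invoked from the pipeline; the node label and environment name are assumptions, not from the original job:
// Hypothetical invocation of terraformRun; 'terraform' is an assumed agent label
node('terraform') {
    stage('Checkout') {
        checkout scm
    }
    stage('Terraform apply') {
        terraformRun('staging', 'apply')   // dirName defaults to 'env'
    }
}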

Docker Multi Stage Build access Test Reports in Jenkins when Tests Fail

I am doing a multi-stage build in Docker to separate the app from testing. At some point in my Dockerfile I run:
RUN pytest --junitxml=test-reports/junit.xml
In my Jenkinsfile I accordingly do:
def app = docker.build("app:${BUILD_NUMBER}", "--target test .")
app.inside {
    sh "cp -r /app/test-reports test-reports"
}
junit 'test-reports/junit.xml'
Now if my tests fail, the build fails, which is good. But the rest of the stage is not executed, i.e. I don't have access to the test-reports folder. How can I manage that?
I resolved a similar task by using an always block after the build stage.
Please check whether the code below helps.
always {
    script {
        sh '''#!/bin/bash
            docker create -it --name test_report app:${BUILD_NUMBER} /bin/bash
            docker cp test_report:/app/test-reports ./test-reports
            docker rm -f test_report
        '''
    }
    junit 'test-reports/junit.xml'
}
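For context, in a declarative pipeline this always block lives in the post section, so the reports are copied and published whether or not the docker build (and therefore pytest) succeeded. A minimal sketch around the question's build step, assuming the same image tag:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // --target test stops the multi-stage build at the test stage
                    docker.build("app:${BUILD_NUMBER}", "--target test .")
                }
            }
        }
    }
    post {
        always {
            // copy the reports out of a throwaway container, then publish them
            sh '''
                docker create -it --name test_report app:${BUILD_NUMBER} /bin/bash
                docker cp test_report:/app/test-reports ./test-reports
                docker rm -f test_report
            '''
            junit 'test-reports/junit.xml'
        }
    }
}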

Build and Run Docker Container in Jenkins

I need to run a Docker container in Jenkins so that installed libraries like pycodestyle are available in the following steps.
I successfully built the Docker image (see the Dockerfile).
How do I get at the container so that I can use it in the next step? (Please look for the >> << marked line in the Build stage below.)
Thanks
stage('Build') {
    // Install python libraries from requirements.txt (check Dockerfile for more detail)
    sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
    sh "docker build \
        --tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
        --build-arg HTTPS_PROXY=${PIP_PROXY} ."
    >> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<
}
stage('Linting') {
    sh '''
        awd=$(pwd)
        echo '===== Linting START ====='
        for file in $(find . -name '*.py'); do
            filename=$(basename $file)
            if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]]; then
                echo "perform PEP8 lint (python pylint blah) for $filename"
                cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
            fi
        done
        echo '===== Linting END ====='
    '''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume into your container (see the docker run -v option) and then run the "next step" build step inside this container. You can do this by providing a shell script as part of your project's source code that performs the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script, part of your project's workspace, which performs the "next step".
$WORKSPACE is the folder used by your Jenkins job (normally /var/jenkins_home/jobs//workspace); it is provided by Jenkins as a build variable.
Please note: this solution requires that the Docker daemon runs on the same host as Jenkins! Otherwise the workspace will not be available to your container.
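For illustration, a minimal build.sh along these lines; the file and its contents are hypothetical, and it assumes pycodestyle was installed into the image via requirements.txt:
#!/bin/sh
# Hypothetical build.sh: the "next step" that runs inside the container
set -e
cd /workspace   # the mount point from the docker run -v option above
echo '===== Linting START ====='
# -exec ... + runs pycodestyle once over all matches; a non-zero exit fails the build
find . -name '*test*.py' -exec pycodestyle {} +
echo '===== Linting END ====='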
Another solution would be to run Jenkins itself as a Docker container, so you can easily share the Jenkins home/workspaces with the containers you run within your build jobs, as described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase

aws ecs ec2 continuous deployment with jenkins

I am using Jenkins for continuous deployment from GitLab into an AWS ECS EC2 container instance, driven by a Jenkinsfile. To register the task definition on each push, I have placed the task definition JSON file in an aws folder in GitLab. Is it possible to keep the task definition JSON file in Jenkins, so that only the Jenkinsfile has to live in GitLab?
There is a workspace folder in Jenkins, /var/lib/jenkins/workspace/jobname, which is created after the first build. Can we place the task definition there?
My Jenkinsfile is pasted below:
stage 'Checkout'
git 'git@gitlab.xxxx.com:repo.git'
stage ("Docker build") {
    sh "docker build --no-cache -t xxxx:${BUILD_NUMBER} ."
}
stage("Docker push") {
    docker.withRegistry('https://xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com', 'ecr:regopm:ecr-credentials') {
        docker.image("xxxx:${BUILD_NUMBER}").push(remoteImageTag)
    }
}
stage ("Deploy") {
    sh "sed -e 's;BUILD_TAG;${BUILD_NUMBER};g' aws/task-definition.json > aws/task-definition-${remoteImageTag}.json"
    sh " \
        aws ecs register-task-definition --family ${taskFamily} \
        --cli-input-json ${taskDefile} \
    "
    def taskRevision = sh(
        returnStdout: true,
        script: "aws ecs describe-task-definition --task-definition ${taskFamily} | egrep 'revision' | tr ',' ' ' | awk '{print \$2}'"
    ).trim()
    sh " \
        aws ecs update-service --cluster ${clusterName} \
        --service ${serviceName} \
        --task-definition ${taskFamily}:${taskRevision} \
        --desired-count 1 \
    "
}
Following the very same approach, but putting together some reusable logic, we just open-sourced our "gluing" tool, which we're using from Jenkins as well (please see the Extra section for templates for Jenkins pipelines):
https://github.com/GuccioGucci/yoke
Alternatively, use ecs-cli if you do not want to use a task definition: install ecs-cli on the Jenkins node and run it there, but that still requires a docker-compose file in the git repository.
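To keep only the Jenkinsfile in GitLab, another option (a sketch, not taken from the answers above) is to embed the task definition in the Jenkinsfile itself and materialize it with the built-in writeFile step before registering it; all field values below are placeholders:
// Hypothetical: the task definition template lives in the Jenkinsfile, not the repo
def taskDefJson = """{
    "family": "${taskFamily}",
    "containerDefinitions": [{
        "name": "app",
        "image": "xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/xxxx:${BUILD_NUMBER}",
        "memory": 512,
        "essential": true
    }]
}"""
writeFile file: "task-definition-${remoteImageTag}.json", text: taskDefJson
sh "aws ecs register-task-definition --cli-input-json file://task-definition-${remoteImageTag}.json"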

Jenkins on GCE not building

I'm trying to get my head around Jenkins CD and k8s on GCE. I'm following the tutorial on GCE: https://cloud.google.com/solutions/continuous-delivery-jenkins-container-engine
For some reason the app won't build:
This is the Jenkins console output.
This is my Jenkins file:
node {
    def project = 'xxxxxx'
    def appName = 'gceme'
    def feSvcName = "${appName}-frontend"
    def imageTag = "eu.gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

    checkout scm

    sh("echo Build image")
    stage 'Build image'
    sh("docker build -t ${imageTag} .")

    sh("echo Run Go tests")
    stage 'Run Go tests'
    sh("docker run ${imageTag} go test")

    sh("echo Push image to registry")
    stage 'Push image to registry'
    sh("gcloud docker push ${imageTag}")

    sh("echo Deploy Application")
    stage "Deploy Application"
    switch (env.BRANCH_NAME) {
        // Roll out to canary environment
        case "canary":
            // Change deployed image in canary to the one we just built
            sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/canary/*.yaml")
            sh("kubectl --namespace=production apply -f k8s/services/")
            sh("kubectl --namespace=production apply -f k8s/canary/")
            sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
            break
        // Roll out to production
        case "master":
            // Change deployed image in production to the one we just built
            sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
            sh("kubectl --namespace=production apply -f k8s/services/")
            sh("kubectl --namespace=production apply -f k8s/production/")
            sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
            break
        // Roll out a dev environment
        default:
            // Create namespace if it doesn't exist
            sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
            // Don't use public load balancing for development branches
            sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
            sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
            sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
            sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
            echo 'To access your environment run `kubectl proxy`'
            echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
    }
}
Could someone please point me in the right direction? I'm lost.
It is clear from the error that the problem is in creating a symlink for the Dockerfile, which does not exist. Trace the given location and check whether the file exists and has the required permissions.
Absolute fail. I forgot the Dockerfile...
