Jenkins on GCE not building - docker

I'm trying to get my head around Jenkins CD and k8s on GCE. I'm following the tutorial on GCE: https://cloud.google.com/solutions/continuous-delivery-jenkins-container-engine
For some reason the app won't build:
This is the Jenkins console output.
This is my Jenkinsfile:
node {
  def project = 'xxxxxx'
  def appName = 'gceme'
  def feSvcName = "${appName}-frontend"
  def imageTag = "eu.gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

  checkout scm

  sh("echo Build image")
  stage 'Build image'
  sh("docker build -t ${imageTag} .")

  sh("echo Run Go tests")
  stage 'Run Go tests'
  sh("docker run ${imageTag} go test")

  sh("echo Push image to registry")
  stage 'Push image to registry'
  sh("gcloud docker push ${imageTag}")

  sh("echo Deploy Application")
  stage "Deploy Application"
  switch (env.BRANCH_NAME) {
    // Roll out to canary environment
    case "canary":
      // Change deployed image in canary to the one we just built
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/canary/*.yaml")
      sh("kubectl --namespace=production apply -f k8s/services/")
      sh("kubectl --namespace=production apply -f k8s/canary/")
      sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
      break

    // Roll out to production
    case "master":
      // Change deployed image in canary to the one we just built
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
      sh("kubectl --namespace=production apply -f k8s/services/")
      sh("kubectl --namespace=production apply -f k8s/production/")
      sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
      break

    // Roll out a dev environment
    default:
      // Create namespace if it doesn't exist
      sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
      // Don't use public load balancing for development branches
      sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
      sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
      sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
      echo 'To access your environment run `kubectl proxy`'
      echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
  }
}
Could someone please point me in the right direction? I'm lost.

It is clear from the error that the problem is in creating a symlink for the Dockerfile, which does not exist. Trace the given location and check that the file exists and has the required permissions.

Absolute fail. I forgot the Dockerfile...
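For anyone hitting the same thing: a small guard in the Jenkinsfile makes a missing Dockerfile obvious before the build stage runs. This is just a sketch using the standard fileExists and error Pipeline steps, mirroring the variables and stage layout of the Jenkinsfile above:

node {
  def project  = 'xxxxxx'
  def appName  = 'gceme'
  def imageTag = "eu.gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

  checkout scm

  stage 'Build image'
  // Hypothetical guard: fail fast with a clear message if the Dockerfile is missing,
  // instead of letting docker build fail later on the symlink error
  if (!fileExists('Dockerfile')) {
    error "No Dockerfile found in ${pwd()} -- commit one next to the Jenkinsfile"
  }
  sh("docker build -t ${imageTag} .")
}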

Related

make a deployment redownload an image with jenkins

I wrote a pipeline for a Hello World web app, nothing big; it's a simple hello world page.
I made it so that if the tests pass, it deploys to a remote Kubernetes cluster.
My problem is that if I change the HTML page and try to redeploy to k8s, the page remains the same (the pods aren't re-rolled and the image is outdated).
I have the image pull policy set to Always. I thought of using specific tags within the deployment YAML, but I have no idea how to integrate that with Jenkins (i.e. how do I make Jenkins set the BUILD_NUMBER as the tag for the image in the deployment?).
Here is my pipeline:
pipeline {
    agent any
    environment {
        user = "NAME"
        repo = "prework"
        imagename = "${user}/${repo}"
        registryCreds = 'dockerhub'
        containername = "${repo}-test"
    }
    stages {
        stage ("Build") {
            steps {
                // Building artifact
                sh '''
                docker build -t ${imagename} .
                docker run -p 80 --name ${containername} -dt ${imagename}
                '''
            }
        }
        stage ("Test") {
            steps {
                sh '''
                IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${containername})
                STATUS=$(curl -sL -w "%{http_code} \n" $IP:80 -o /dev/null)
                if [ $STATUS -ne 200 ]; then
                    echo "Site is not up, test failed"
                    exit 1
                fi
                echo "Site is up, test succeeded"
                '''
            }
        }
        stage ("Store Artifact") {
            steps {
                echo "Storing artifact: ${imagename}:${BUILD_NUMBER}"
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                        def customImage = docker.image(imagename)
                        customImage.push(BUILD_NUMBER)
                        customImage.push("latest")
                    }
                }
            }
        }
        stage ("Deploy to Kubernetes") {
            steps {
                echo "Deploy to k8s"
                script {
                    kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig")
                }
            }
        }
    }
    post {
        always {
            echo "Pipeline has ended, deleting image and containers"
            sh '''
            docker stop ${containername}
            docker rm ${containername} -f
            '''
        }
    }
}
EDIT:
I used sed to replace the latest tag with the build number every time the pipeline runs, and it works. I'm wondering if any of you have other ideas, because it feels messy right now.
Thanks.
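For reference, the sed workaround mentioned in the edit might look roughly like this inside the deploy stage (a sketch only; it assumes deployment.yaml references the image as NAME/prework:latest and reuses the imagename variable from the environment block above):

stage ("Deploy to Kubernetes") {
    steps {
        script {
            // Rewrite the image tag in the manifest before it is applied, so every
            // build rolls out a uniquely tagged image instead of :latest
            sh 'sed -i "s#${imagename}:latest#${imagename}:${BUILD_NUMBER}#" deployment.yaml'
            kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig")
        }
    }
}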
According to the Kubernetes Continuous Deploy Plugin documentation (point 6), you can add enableConfigSubstitution: true to the kubernetesDeploy() step and use ${BUILD_NUMBER} instead of latest in deployment.yaml:
By checking "Enable Variable Substitution in Config", the variables (in the form of $VARIABLE or ${VARIABLE}) in the configuration files will be replaced with the values from the corresponding environment variables before they are fed to the Kubernetes management API. This allows you to dynamically update the configurations for each Jenkins task, for example, using the Jenkins build number as the image tag to be pulled.
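Putting that together, the deploy stage from the question might look like the sketch below (it assumes deployment.yaml references the image as NAME/prework:${BUILD_NUMBER}):

stage ("Deploy to Kubernetes") {
    steps {
        echo "Deploy to k8s"
        script {
            // With enableConfigSubstitution, ${BUILD_NUMBER} (and other environment
            // variables) inside deployment.yaml are replaced before the manifest is
            // applied, so each build rolls out the freshly pushed tag
            kubernetesDeploy(
                configs: "deployment.yaml",
                kubeconfigId: "kubeconfig",
                enableConfigSubstitution: true
            )
        }
    }
}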

CICD Pipeline Deployment stage unable to come out after successful deployment

Background: Spring Boot application deployment using a CI/CD declarative pipeline script.
Issue:
The Spring Boot application jar launches successfully. After some time we can also access the application health info from a browser, but the build job is unable to exit the deployment stage; it keeps spinning there.
Action taken: we added timeout=120000 to the launch command, but the behaviour did not change.
Help: how can we make a clean exit after the deployment stage in a Jenkins CI/CD declarative pipeline?
We are ssh'ing in and executing our launch command. The code looks like:
sshagent([sshAgent]) {
    sh "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -v *.jar sudouser@${server}:/opt/project/tmp/application-demo.jar"
    sh "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sudouser@${server} nohup '/opt/java/hotspot/8/64_bit/jdk1.8.0_141/bin/java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 - /opt/project/tmp/application-demo.jar ' timeout=120000"
}
I need the Jenkins build to exit cleanly after the deployment stage succeeds.
You need to add '&' so the process is started in the background.
Example:
nohup /opt/java/hotspot/8/64_bit/jdk1.8.0_141/bin/java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 /opt/project/tmp/application-demo.jar &
You can also add an 'if' check that tails the log until a "started" message appears and fails the build otherwise.
Example:
status(){
    timeout=$1
    process=$2
    log=$3
    string=$4
    if (timeout ${timeout} tail -f ${log} &) | grep "${string}" ; then
        echo "${process} started with success."
    else
        echo "${process} startup failed." 1>&2
        exit 1
    fi
}

start_app(){
    java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 /opt/project/tmp/application-demo.jar >> /tmp/log.log 2>&1 &
    status "60" "application-demo" "/tmp/log.log" "started"
}
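From the Jenkins side, the same idea inside the sshagent block might look like the sketch below. The paths and the sshAgent, server, and profile variables are taken from the question; the app.log path is illustrative. The key points are redirecting output and backgrounding with '&', so the remote command returns immediately and the stage can finish:

sshagent([sshAgent]) {
    sh "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -v *.jar sudouser@${server}:/opt/project/tmp/application-demo.jar"
    // nohup + output redirection + '&' lets the ssh session close right away,
    // so the Jenkins stage is not held open by the running application
    sh "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sudouser@${server} " +
       "'nohup /opt/java/hotspot/8/64_bit/jdk1.8.0_141/bin/java -jar " +
       "-Dspring.profiles.active=${profile} -Dhttpport=8890 " +
       "/opt/project/tmp/application-demo.jar > /opt/project/tmp/app.log 2>&1 &'"
}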

aws ecs ec2 continuous deployment with jenkins

I am using Jenkins for continuous deployment from GitLab into an AWS ECS EC2 container instance, using a Jenkinsfile for this purpose. For registering the task definition on each push, I have placed the task definition JSON file in an aws folder in GitLab. Is it possible to keep the task definition JSON file in Jenkins so that only the Jenkinsfile needs to live in GitLab?
There is a workspace folder in Jenkins, /var/lib/jenkins/workspace/jobname, which is created after the first build. Can we place the task definition there?
My Jenkinsfile is pasted below
stage 'Checkout'
git 'git@gitlab.xxxx.com/repo.git'

stage ("Docker build") {
    sh "docker build --no-cache -t xxxx:${BUILD_NUMBER} ."
}

stage("Docker push") {
    docker.withRegistry('https://xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com', 'ecr:regopm:ecr-credentials') {
        docker.image("xxxxx:${BUILD_NUMBER}").push(remoteImageTag)
    }
}

stage ("Deploy") {
    sh "sed -e 's;BUILD_TAG;${BUILD_NUMBER};g' aws/task-definition.json > aws/task-definition-${remoteImageTag}.json"

    sh " \
        aws ecs register-task-definition --family ${taskFamily} \
        --cli-input-json ${taskDefile} \
    "

    def taskRevision = sh(
        returnStdout: true,
        script: "aws ecs describe-task-definition --task-definition ${taskFamily} | egrep 'revision' | tr ',' ' ' | awk '{print \$2}'"
    ).trim()

    sh " \
        aws ecs update-service --cluster ${clusterName} \
        --service ${serviceName} \
        --task-definition ${taskFamily}:${taskRevision} \
        --desired-count 1 \
    "
}
Following the very same approach, but putting together some reusable logic, we just open-sourced our "gluing" tool, which we're using from Jenkins as well (please see the Extra section for templates of Jenkins pipelines):
https://github.com/GuccioGucci/yoke
Use ecs-cli as an alternative in case you do not want to use a task definition: install ecs-cli on the Jenkins node and run it, but that still needs a docker-compose file in the Git repo.
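If the goal is really to keep only the Jenkinsfile in GitLab, one option is to let the pipeline write the task definition itself with the writeFile step instead of reading aws/task-definition.json from the repo. A rough sketch follows; the JSON content, container name and memory value are illustrative, while taskFamily and remoteImageTag are the variables from the Jenkinsfile above:

stage ("Deploy") {
    // Generate the task definition from the Jenkinsfile so no JSON file
    // needs to live in the GitLab repository (content below is illustrative)
    writeFile file: "task-definition-${remoteImageTag}.json", text: """
    {
      "family": "${taskFamily}",
      "containerDefinitions": [
        {
          "name": "app",
          "image": "xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/xxxxx:${remoteImageTag}",
          "memory": 512,
          "essential": true
        }
      ]
    }
    """
    sh "aws ecs register-task-definition --cli-input-json file://task-definition-${remoteImageTag}.json"
}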

Jenkinsfile docker

I'm running a Jenkins instance on GCE inside a Docker container and would like to execute a multibranch pipeline from this Jenkinsfile and GitHub. I'm using the GCE Jenkins tutorial for this. Here is my Jenkinsfile:
node {
  def project = 'xxxxxx'
  def appName = 'gceme'
  def feSvcName = "${appName}-frontend"
  def imageTag = "eu.gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

  checkout scm

  sh("echo Build image")
  stage 'Build image'
  sh("docker build -t ${imageTag} .")

  sh("echo Run Go tests")
  stage 'Run Go tests'
  sh("docker run ${imageTag} go test")

  sh("echo Push image to registry")
  stage 'Push image to registry'
  sh("gcloud docker push ${imageTag}")

  sh("echo Deploy Application")
  stage "Deploy Application"
  switch (env.BRANCH_NAME) {
    // Roll out to canary environment
    case "canary":
      // Change deployed image in canary to the one we just built
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/canary/*.yaml")
      sh("kubectl --namespace=production apply -f k8s/services/")
      sh("kubectl --namespace=production apply -f k8s/canary/")
      sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
      break

    // Roll out to production
    case "master":
      // Change deployed image in canary to the one we just built
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
      sh("kubectl --namespace=production apply -f k8s/services/")
      sh("kubectl --namespace=production apply -f k8s/production/")
      sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
      break

    // Roll out a dev environment
    default:
      // Create namespace if it doesn't exist
      sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
      // Don't use public load balancing for development branches
      sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
      sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
      sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
      sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
      echo 'To access your environment run `kubectl proxy`'
      echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
  }
}
I always get a "docker: not found" error:
[apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ] Running shell script
+ docker build -t eu.gcr.io/xxxxx/apiservice:master.1 .
/var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ#tmp/durable-b4503ecc/script.sh: 2: /var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ#tmp/durable-b4503ecc/script.sh: docker: not found
What do I have to change to make docker work inside jenkins?
That looks like DinD (Docker-in-Docker), which this recent issue points out as problematic.
See "Using Docker-in-Docker for your CI or testing environment? Think twice."
That same issue recommends running in privileged mode.
Also make sure the Docker container in which you are executing actually has Docker installed.
You need the Docker client installed in the Jenkins agent image used for that node, e.g. cloudbees/java-with-docker-client, and the Docker socket mounted into the agent.
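One way to check this from the pipeline side is to pin the docker stages to an agent whose image actually ships the Docker client (for example one based on cloudbees/java-with-docker-client) and which has the host's /var/run/docker.sock mounted in. A rough sketch; the 'docker' label is illustrative and not from the question:

node('docker') {
    checkout scm
    def imageTag = "eu.gcr.io/xxxxxx/apiservice:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

    stage 'Build image'
    // Fails fast with a clear message if the client or the mounted socket is missing
    sh 'docker version'
    sh "docker build -t ${imageTag} ."
}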

Building Go app with "vendor" directory on Jenkins with Docker

I'm trying to set up a Jenkins Pipeline to build and deploy my first Go project using a Jenkinsfile and docker.image().inside. I can't figure out how to get Go to pick up the dependencies in the vendor/ directory.
When I run the build, I get a bunch of errors:
+ goapp test ./...
src/dao/demo_dao.go:8:2: cannot find package "github.com/dgrijalva/jwt-go" in any of:
/usr/lib/go_appengine/goroot/src/github.com/dgrijalva/jwt-go (from $GOROOT)
/usr/lib/go_appengine/gopath/src/github.com/dgrijalva/jwt-go (from $GOPATH)
/workspace/src/github.com/dgrijalva/jwt-go
...why isn't it picking up the vendor/ directory?
When I throw in some logging, it seems that after running sh "cd /workspace/src/bitbucket.org/nalbion/go-demo" the next sh command is still in the original ${WORKSPACE} directory. I really like the idea of the Jenkinsfile, but I can't find any decent documentation for it.
(Edit - there is decent documentation here but dir("/workspace/src/bitbucket.org/nalbion/go-demo") {} doesn't seem to work within docker.image().inside)
My Dockerfile resembles:
FROM golang:1.6.2
# Google's App Engine Go SDK
RUN wget https://storage.googleapis.com/appengine-sdks/featured/go_appengine_sdk_linux_amd64-1.9.40.zip -q -O go_appengine_sdk.zip && \
    unzip -q go_appengine_sdk.zip -d /usr/lib/ && \
    rm go_appengine_sdk.zip
ENV PATH /usr/lib/go_appengine:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GOPATH /usr/lib/go_appengine/gopath
# Add Jenkins user
RUN groupadd -g 132 jenkins && useradd -d "/var/jenkins_home" -u 122 -g 132 -m -s /bin/bash jenkins
And my Jenkinsfile:
node('docker') {
    currentBuild.result = "SUCCESS"
    try {
        stage 'Checkout'
        checkout scm

        stage 'Build and Test'
        env.WORKSPACE = pwd()
        docker.image('nalbion/go-web-build:latest').inside(
                "-v ${env.WORKSPACE}:/workspace/src/bitbucket.org/nalbion/go-demo " +
                "-e GOPATH=/usr/lib/go_appengine/gopath:/workspace") {
            // Debugging
            sh 'echo GOPATH: $GOPATH'
            sh "ls -al /workspace/src/bitbucket.org/nalbion/go-demo"
            sh "cd /workspace/src/bitbucket.org/nalbion/go-demo"
            sh "pwd"
            sh "go vet ./src/..."
            sh "goapp test ./..."
        }

        stage 'Deploy to DEV'
        docker.image('nalbion/go-web-build').inside {
            sh "goapp deploy --application go-demo --version v${v} app.yaml"
        }

        timeout(time:5, unit:'DAYS') {
            input message:'Approve deployment?', submitter: 'qa'
        }

        stage 'Deploy to PROD'
        docker.image('nalbion/go-web-build').inside {
            sh "goapp deploy --application go-demo --version v${v} app.yaml"
        }
    } catch (err) {
        currentBuild.result = "FAILURE"
        // send notifications
        throw err
    }
}
I managed to get it working by including the cd in the same sh statement (each sh step starts a fresh shell, so a cd in one step doesn't carry over to the next):
docker.image('nalbion/go-web-build:latest')
        .inside("-v ${env.WORKSPACE}:/workspace/src/bitbucket.org/nalbion/go-demo " +
                "-e GOPATH=/usr/lib/go_appengine/gopath:/workspace") {
    sh """
        cd /workspace/src/bitbucket.org/nalbion/go-demo
        go vet ./src/...
        goapp test ./...
    """
}
