Kaniko: How to cache folders from Gatsby build in Kubernetes using Tekton?

I am building a CI/CD pipeline using Tekton on a bare-metal Kubernetes cluster. I have managed to cache the necessary images (Node & Nginx) and the layers, but how can I cache the .cache / public folders created by the Gatsby build? These folders are not present in the repo. If the build step does not find these folders, it takes longer because it needs to recreate all images using Sharp.
The pipeline has a PVC attached; in the task it is referenced as the workspace named source. To be more clear: how can I copy the Gatsby folders to this PVC after the build has finished, and into the Kaniko container before the next build?
The Tekton task has the following steps:
1. Use the Kaniko warmer to cache the Docker images used in the Docker build
2. Create a timestamp so that "RUN build" is executed every time, even if the files don't change, because it runs a GraphQL query
3. Build and push the image using Kaniko
4. & 5. Export the image digest used by the next step in the pipeline
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-docker-image
spec:
  params:
    - name: pathToDockerFile
      type: string
      description: The path to the dockerfile to build
      default: $(resources.inputs.source-repo.path)/Dockerfile
    - name: pathToContext
      type: string
      description: |
        The build context used by Kaniko
        (https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts)
      default: $(resources.inputs.source-repo.path)
  resources:
    inputs:
      - name: source-repo
        type: git
    outputs:
      - name: builtImage
        type: image
      - name: event-to-sink
        type: cloudEvent
  workspaces:
    # PVC
    - name: source
      description: |
        Folder to write docker image digest
  results:
    - name: IMAGE-DIGEST
      description: Digest of the image just built.
  steps:
    - name: kaniko-warmer
      image: gcr.io/kaniko-project/warmer
      workingDir: $(workspaces.source.path)
      args:
        - --cache-dir=$(workspaces.source.path)/cache
        - --image=node:14-alpine
        - --image=nginx:1.19.5
    - name: print-date-unix-timestamp
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        date | tee $(params.pathToContext)/date
    - name: build-and-push
      workingDir: $(workspaces.source.path)
      image: gcr.io/kaniko-project/executor:v1.3.0
      env:
        - name: 'DOCKER_CONFIG'
          value: '/tekton/home/.docker/'
      command:
        - /kaniko/executor
      args:
        - --build-arg=CACHEBUST=$(params.pathToContext)/date
        - --dockerfile=$(params.pathToDockerFile)
        - --destination=$(resources.outputs.builtImage.url)
        - --context=$(params.pathToContext)
        - --cache=true
        - --cache-ttl=144h
        - --cache-dir=$(workspaces.source.path)/cache
        - --use-new-run
        - --snapshotMode=redo
        - --cache-repo=<repo>/kaniko-cache
        - --log-timestamp
      securityContext:
        runAsUser: 0
    - name: write-digest
      workingDir: $(workspaces.source.path)
      image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.16.2
      command: ['/ko-app/imagedigestexporter']
      args:
        - -images=[{"name":"$(resources.outputs.builtImage.url)","type":"image","url":"$(resources.outputs.builtImage.url)","digest":"","OutputImageDir":"$(workspaces.source.path)/$(params.pathToContext)/image-digest"}]
        - -terminationMessagePath=$(params.pathToContext)/image-digested
      securityContext:
        runAsUser: 0
    - name: digest-to-result
      workingDir: $(workspaces.source.path)
      image: docker.io/stedolan/jq@sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
      script: |
        cat $(params.pathToContext)/image-digested | jq '.[0].value' -rj | tee /$(results.IMAGE-DIGEST.path)
Dockerfile
FROM node:14-alpine as build
ARG CACHEBUST=1
RUN apk update \
    && apk add \
        build-base \
        libtool \
        autoconf \
        automake \
        pkgconfig \
        nasm \
        yarn \
        libpng-dev libjpeg-turbo-dev giflib-dev tiff-dev \
        zlib-dev \
        python \
    && rm -rf /var/cache/apk/*
EXPOSE 8000 9000
RUN yarn global add gatsby-cli
WORKDIR /usr/src/app
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build && echo $CACHEBUST
CMD ["yarn", "serve"]
FROM nginx:1.19.5 as serve
EXPOSE 80
COPY --from=build /usr/src/app/public /usr/share/nginx/html

how can I cache the .cache / public folders created by Gatsby build? These folders are not present in the repo.
If Persistent Volumes are available on your cluster and those volumes are accessible from all nodes, you can use a PVC-backed workspace for the cache.
A more generic solution that also works in a regional cluster (e.g. in the cloud) is to upload the cached folders to something like a bucket (MinIO?) or potentially Redis. You then also need a Task that downloads this folder, potentially in parallel with the git clone when starting a new PipelineRun. GitHub Actions has a similar solution with the cache action.
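As a rough illustration of the bucket variant, here is a hedged sketch of a save-cache step. It uses the AWS CLI against an S3-compatible MinIO endpoint; the endpoint URL, bucket name and credential Secret are assumptions, not part of the original pipeline, and a matching restore step would run aws s3 sync in the other direction before the build:
    - name: save-gatsby-cache
      image: amazon/aws-cli
      workingDir: $(workspaces.source.path)
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: minio-credentials   # assumed Secret holding MinIO keys
              key: access-key
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio-credentials
              key: secret-key
      script: |
        #!/usr/bin/env bash
        set -eu
        # Sync the Gatsby folders produced by the build to an S3-compatible
        # bucket. The MinIO service URL and bucket name are placeholders.
        aws --endpoint-url http://minio.minio.svc.cluster.local:9000 \
          s3 sync .cache s3://gatsby-cache/.cache
        aws --endpoint-url http://minio.minio.svc.cluster.local:9000 \
          s3 sync public s3://gatsby-cache/public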
Example of a Task with two workspaces that copies a file from one workspace to the other:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: copy-between-workspaces
spec:
  workspaces:
    - name: ws-a
    - name: ws-b
  steps:
    - name: copy
      image: ubuntu
      script: cp $(workspaces.ws-a.path)/myfile $(workspaces.ws-b.path)/myfile
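Applied to the Gatsby folders, the same pattern might look like the hedged sketch below: a Task with a source workspace (the build context) and a PVC-backed cache workspace, with a restore mode to run before the Kaniko build and a save mode to run after it. The workspace names, the mode parameter and the assumption that .cache and public end up in the build context (e.g. via COPY . . and not being excluded in .dockerignore) are mine, not part of the original pipeline:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: gatsby-cache
spec:
  params:
    - name: mode
      type: string
      description: Either "restore" or "save"
  workspaces:
    - name: source   # build context (git checkout)
    - name: cache    # PVC-backed workspace that survives between PipelineRuns
  steps:
    - name: sync
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        set -eu
        if [ "$(params.mode)" = "restore" ]; then
          # Copy previously saved Gatsby folders into the build context, if any
          for d in .cache public; do
            if [ -d "$(workspaces.cache.path)/$d" ]; then
              cp -r "$(workspaces.cache.path)/$d" "$(workspaces.source.path)/$d"
            fi
          done
        else
          # Persist the folders produced by the build for the next run
          for d in .cache public; do
            if [ -d "$(workspaces.source.path)/$d" ]; then
              rm -rf "$(workspaces.cache.path)/$d"
              cp -r "$(workspaces.source.path)/$d" "$(workspaces.cache.path)/$d"
            fi
          done
        fi
In a Pipeline, the restore variant would run after the git clone and before build-docker-image, and the save variant afterwards, both bound to the same PVC-backed workspace.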

Related

Package installed in Dockerfile inaccessible in manifest file

I'm quite new to Kubernetes and Docker.
I am trying to create a Kubernetes CronJob which will, every x minutes, clone a repo, build the Dockerfile in that repo, then apply the manifest file to create the job.
Although I install git in the CronJob Dockerfile, when I run any git command from the Kubernetes manifest file it isn't recognised. How should I go about fixing this, please?
FROM python:3.8.10
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y git
RUN useradd -rm -d /home/worker -s /bin/bash -g root -G sudo -u 1001 worker
WORKDIR /home/worker
COPY . /home/worker
RUN chown -R 1001:1001 .
USER worker
ENTRYPOINT ["/bin/bash"]
apiVersion: "batch/v1"
kind: CronJob
metadata:
  name: cron-job-test
  namespace: me
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: Always
              command:
                - /bin/sh
                - -c
              args:
                - git log;
          restartPolicy: OnFailure
You should use an image that has the git binary installed in order to run git commands. In the manifest you are using image: busybox:1.28 to run the pod, which doesn't have git installed, hence the error.
Use the correct image name and try again.
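For example, a hedged sketch of the container section using the image built from the Dockerfile above (the registry and image name are placeholders, not from the question):
          containers:
            - name: hello
              # Placeholder: the image built from the Dockerfile above, pushed
              # to a registry the cluster can pull from
              image: <your-registry>/cron-git-worker:latest
              imagePullPolicy: Always
              command:
                - /bin/sh
                - -c
              args:
                - git log;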

How to write tasks in Ansible that perform the same thing as docker build and docker run

I searched the forum but couldn't find anything. Please point me to it if I missed something.
How do I write tasks in Ansible that perform the same thing as:
docker build . -t alpine:volume
docker run --rm -ti -v colibri:/colibri alpine:volume
which will create a Docker image for me and attach the volume so that the files are synced there.
My Dockerfile looks like this:
FROM alpine:3.12
RUN apk add unzip && \
addgroup -S -g 9999 www && \
adduser -u 9999 -S -G www www && \
mkdir /colibri && chown www:www /colibri
COPY artifact.zip /colibri/artifact.zip
USER www
WORKDIR colibri
RUN unzip artifact.zip && rm artifact.zip
task in ansible:
- name: Build image
  community.docker.docker_image:
    build:
      path: "{{ remote_path }}/docker/volume"
    name: volume
    tag: v1
    push: no
    source: build

- name: Build an volume on artefact
  community.docker.docker_container:
    name: volume:v1
    state: present
    volumes:
      - colibri:/colibri
    cleanup: yes
I will answer my own question. Through trial and error, I arrived at this result:
- name: Build an image
  community.docker.docker_image:
    build:
      path: "{{ remote_path }}/docker/volume"
    name: volume
    tag: v1
    source: build

- name: Build an artefact on volume
  community.docker.docker_container:
    name: volume
    image: volume:v1
    state: started
    timeout: 300
    volumes:
      - colibri_magento:/colibri
    auto_remove: yes
    cleanup: yes
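For reference, a minimal playbook wrapper around these tasks might look like the sketch below; the hosts group and the remote_path value are assumptions, and the two tasks are the ones from the answer above:
- hosts: docker_hosts          # assumed inventory group
  vars:
    remote_path: /opt/deploy   # assumed path to wherever the Dockerfile lives
  tasks:
    - name: Build an image
      community.docker.docker_image:
        build:
          path: "{{ remote_path }}/docker/volume"
        name: volume
        tag: v1
        source: build
    - name: Build an artefact on volume
      community.docker.docker_container:
        name: volume
        image: volume:v1
        state: started
        timeout: 300
        volumes:
          - colibri_magento:/colibri
        auto_remove: yes
        cleanup: yes
It can be run with ansible-playbook -i inventory playbook.yml after installing the collection with ansible-galaxy collection install community.docker.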

Is there a way to update Jenkins running in Kubernetes?

I'm trying to run Jenkins in Kubernetes, but the version of Jenkins is outdated. It says I need at least version 2.138.4 for the Kubernetes plugin.
I'm using the jenkins/jenkins:lts image from Docker Hub. But when I try to run this in Kubernetes it says the version is 2.60.3. I previously used a really old version of Jenkins (2.60.3), but I updated my Dockerfile to use the latest image. After that I built the image again and deployed it to Kubernetes. I even deleted my Kubernetes Deployment and Service before deploying them again.
I'm currently working in a development environment using Minikube.
Dockerfile:
FROM jenkins/jenkins:lts
ENV JENKINS_USER admin
ENV JENKINS_PASS admin
# Skip initial setup
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
USER root
RUN apt-get update \
&& apt-get install -qqy apt-transport-https ca-certificates curl gnupg2 software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
RUN apt-get update -qq \
&& apt-get install docker-ce -y
RUN usermod -aG docker jenkins
RUN apt-get clean
RUN curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
USER jenkins
The Kubernetes deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: mikemanders/my-jenkins-image:1.0
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
And the Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  selector:
    app: jenkins
  ports:
    - port: 8080
      targetPort: 8080
I think my Kubernetes configuration is good, so I'm guessing it has something to do with Docker?
What am I missing/doing wrong here?
TL;DR
To update a deployment, you need a new Docker image based on the new Jenkins release:
docker build -t mikemanders/my-jenkins-image:1.1 .
docker push mikemanders/my-jenkins-image:1.1
kubectl set image deployment/jenkins jenkins=mikemanders/my-jenkins-image:1.1 --record
Kubernetes deploys images, not Dockerfiles
As per the Images documentation:
You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
The image property of a container supports the same syntax as the docker command does, including private registries and tags.
So, you need an image to deploy.
Update your image
To update your image in the registry, use docker build -t and docker push:
docker build -t mikemanders/my-jenkins-image:1.1 .
docker push mikemanders/my-jenkins-image:1.1
This will rebuild the image on top of the updated jenkins/jenkins:lts base. The image is then uploaded to the container registry.
The catch is that you are updating the image version (e.g. 1.0 -> 1.1) before updating the cluster.
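To verify that the Deployment actually picked up the new image after kubectl set image, the standard rollout commands can be used, for example:
kubectl rollout status deployment/jenkins
kubectl get deployment jenkins -o jsonpath='{.spec.template.spec.containers[0].image}'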

CrashloopBackOff on Pod in Kubernetes(on GCP with Jenkins)

My pods are in the "CrashLoopBackOff" state; the setup is Jenkins with Kubernetes on GCP.
I have found a few answers indicating that my Dockerfile is not right and that the container needs to keep running indefinitely.
But I run the command ["sh", "-c", "app -port=8080"] in production.yaml to keep it running.
The exact same Dockerfile was used and it was working when I deployed the project manually to Kubernetes.
The project I'm trying to submit looks like this:
The Dockerfile
FROM php:7.2.4-apache
COPY apache_default /etc/apache2/sites-available/000-default.conf
RUN a2enmod rewrite
COPY src /var/www/html/src
COPY public /var/www/html/public
COPY config /var/www/html/config
ADD composer.json /var/www/html
ADD composer.lock /var/www/html
# Install software
RUN apt-get update && apt-get install -y git
# Install unzip
RUN apt-get install -y unzip
# Install curl
RUN apt-get install -y curl
# Install dependencies
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
RUN cd /var/www/html && composer install --no-dev --no-interaction --optimize-autoloader
# install pdo for mysql
RUN docker-php-ext-install pdo pdo_mysql
COPY "memory-limit-php.ini" "/usr/local/etc/php/conf.d/memory-limit-php.ini"
RUN chmod 777 -R /var/www
# Production envivorment
ENV ENVIVORMENT=prod
EXPOSE 80
CMD apachectl -D FOREGROUND
CMD ["app"]
The Jenkinsfile
def project = '****'
def appName = 'wobbl-mobile-backend'
def imageTag = "gcr.io/${project}/${appName}"
def feSvcName = "wobbl-main-backend-service"
pipeline {
  agent {
    kubernetes {
      label 'sample-app'
      defaultContainer 'jnlp'
      yamlFile 'k8s/pod/pod.yaml'
    }
  }
  stages {
    // Deploy Image and push with image container builder
    stage('Build and push image with Container Builder') {
      steps {
        container('gcloud') {
          sh "PYTHONUNBUFFERED=1 gcloud container builds submit -t ${imageTag} ."
        }
      }
    }
    // Deploy to production
    stage('Deploy Production') {
      // Production branch
      steps {
        container('kubectl') {
          // Change deployed image in canary to the one we just built
          sh("sed -i.bak 's#gcr.io/cloud-solutions-images/wobbl-main:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
          sh("kubectl --namespace=production apply -f k8s/services/")
          sh("kubectl --namespace=production apply -f k8s/production/")
          sh("echo http://`kubectl --namespace=production get service/${feSvcName} -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` > ${feSvcName}")
        }
      }
    }
  }
}
Then the Kubernetes YAML configurations:
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: default
  containers:
    - name: gcloud
      image: gcr.io/cloud-builders/gcloud
      command:
        - cat
      tty: true
    - name: kubectl
      image: gcr.io/cloud-builders/kubectl
      command:
        - cat
      tty: true
The service used, backend.yaml:
kind: Service
apiVersion: v1
metadata:
  name: wobbl-main-backend-service
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    role: backend
    app: wobbl-main
The deployment production.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: wobbl-main-backend-production
spec:
  replicas: 1
  template:
    metadata:
      name: backend
      labels:
        app: wobbl-main
        role: backend
        env: production
    spec:
      containers:
        - name: backend
          image: gcr.io/cloud-solutions-images/wobbl-main:1.0.0
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          command: ["sh", "-c", "app -port=8080"]
          ports:
            - name: backend
              containerPort: 8080
When I run kubectl describe pod **** -n production I get the following response:
Normal   Created  3m (x4 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Created container
Normal   Started  3m (x4 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Started container
Warning  BackOff  2m (x8 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Back-off restarting failed container
Any hints on how to debug this?
First, your Dockerfile says:
CMD ["app"]
And then within your deployment definition you have:
command: ["sh", "-c", "app -port=8080"]
This is repetition; I suggest you use only one of them.
Secondly, I assume one of the install commands gets you the app binary. Make sure it is part of your $PATH.
Also, you have both a pod and a deployment manifest. I hope you are using only one of them and not deploying both.
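As a general hint for debugging CrashLoopBackOff, the logs of the previous (crashed) container and the pod events usually show why the process exits; these are standard kubectl commands, with <pod-name> as a placeholder:
kubectl --namespace=production logs <pod-name> --previous
kubectl --namespace=production describe pod <pod-name>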

Circleci 2.0 Build on Dockerfile

I have a Node.JS application that I'd like to build and test using CircleCI and Amazon ECR. The documentation is not clear on how to build an image from a Dockerfile in a repository. I've looked here: https://circleci.com/docs/2.0/building-docker-images/ and here https://circleci.com/blog/multi-stage-docker-builds/ but it's not clear what I put under the executor. Here's what I've got so far:
version: 2
jobs:
  build:
    docker:
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
CircleCI fails with the following error:
* The job has no executor type specified. The job should have one of the following keys specified: "machine", "docker", "macos"
The base image is taken from the Dockerfile. I'm using CircleCI's built-in AWS integration, so I don't think I need to add aws_auth. What do I need to put under the executor to get this running?
Build this with a Docker-in-Docker config:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 gcc \
              libffi-dev python-dev \
              linux-headers \
              musl-dev \
              libressl-dev \
              make
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76 \
              ansible==2.4.2.0
      - run:
          name: Save Vault Password to File
          command: echo $ANSIBLE_VAULT_PASS > .vault-pass.txt
      - run:
          name: Decrypt .env
          command: |
            ansible-vault decrypt .circleci/envs --vault-password-file .vault-pass.txt
      - run:
          name: Move .env
          command: rm -f .env && mv .circleci/envs .env
      - restore_cache:
          keys:
            - v1-{{ .Branch }}
          paths:
            - /caches/app.tar
      - run:
          name: Load Docker image layer cache
          command: |
            set +o pipefail
            docker load -i /caches/app.tar | true
      - run:
          name: Build application Docker image
          command: |
            docker build --cache-from=app -t app .
      - run:
          name: Save Docker image layer cache
          command: |
            mkdir -p /caches
            docker save -o /caches/app.tar app
      - save_cache:
          key: v1-{{ .Branch }}-{{ epoch }}
          paths:
            - /caches/app.tar
      - deploy:
          name: Push application Docker image
          command: |
            if [ "${CIRCLE_BRANCH}" == "master" ]; then
              login="$(aws ecr get-login --region $ECR_REGION)"
              ${login}
              docker tag app "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
              docker push "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
            fi
You need to specify a Docker image for your build to run in. This should work:
version: 2
jobs:
  build:
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
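If the job should also push the image to ECR, a hedged additional step along the lines of the deploy step in the longer answer could be appended; it assumes the AWS CLI is available in the job image (docker:stable-git does not ship it) and that $ECR_REGION and $ECR_REPO are set as environment variables:
      - run:
          name: Push to ECR
          command: |
            # Log in to ECR (same pattern as the longer answer above) and push
            # the tag built in the previous step
            login="$(aws ecr get-login --region $ECR_REGION)"
            ${login}
            docker push $ECR_REPO:0.1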
