Why does Kubernetes warn about clashes when using an env variable in Docker? - docker

We're using GitLab for CI/CD. I'll include the script we're using.
GitLab CI/CD file:
services:
  - docker:19.03.11-dind

before_script:
  - apk update && apk add bash
  - apk update && apk add gettext

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage" || ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage" || ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: never

stages:
  - build
  - Publish
  - deploy

cache:
  paths:
    - .m2/repository
    - target

build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script:
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar

docker_build_dev:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
    - developer

docker_build_stage:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - stage

deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: development
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_DEV}
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
    - developer

deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: stage
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - stage
We merged the two pipeline scripts into this one so that stage and development deployments don't conflict. Previously we had a separate Dockerfile for each environment (stage and developer). Now I want to merge the Dockerfiles as well. I merged them, but the Dockerfile isn't picking up the environment value, and Kubernetes reports clashes (a warning shows after the pipeline succeeds). I don't know how to clear that warning in Kubernetes. I'll enclose the merged Dockerfile.
FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
ARG environment_name
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/patient-service-*.jar /app/patient-service.jar
ENV PORT 8094
ENV env_var_name=$environment_name
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active= $env_var_name","-jar","/app/patient-service.jar"]
For the last line, we previously used:
ENTRYPOINT ["java","-Dspring.profiles.active=development","-jar","/app/patient-service.jar"] - for the developer Dockerfile
ENTRYPOINT ["java","-Dspring.profiles.active=stage","-jar","/app/patient-service.jar"] - for the stage Dockerfile
At that time it was working fine and I wasn't facing any issue on Kubernetes. I only added an environment variable so the image can pick up whether it runs as development or stage; you can see it in my script after the docker build. Only after adding that variable did we begin facing the clashes. Please help me sort this out. Thanks in advance.
YAML file (patient-service.yml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patient-app
  labels:
    app: patient-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: patient-app
  template:
    metadata:
      labels:
        app: patient-app
    spec:
      containers:
        - name: patient-app
          image: registry.gitlab.com/stella-center/backend-services/patient-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8094
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: patient-service
spec:
  type: NodePort
  selector:
    app: patient-app
  ports:
    - port: 8094
      targetPort: 8094

As I understand it, you want to run the same image built from this Dockerfile in both environments and select the environment through a variable. I would suggest the following:
1- Remove "ENV env_var_name=$environment_name" and use ENV_VAR_NAME directly in the ENTRYPOINT (make sure the variable is upper case). Note that an exec-form (JSON array) ENTRYPOINT does not expand environment variables, so wrap the java command in a shell, as below.
ENV PORT 8094
EXPOSE $PORT
ENTRYPOINT ["sh", "-c", "java -Dspring.profiles.active=$ENV_VAR_NAME -jar /app/patient-service.jar"]
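A quick local check (just a sketch; the image tag and profile value here are examples, not from the pipeline) confirms the container picks up the profile before you deploy:
docker build -t patient-service:test .
docker run --rm -e ENV_VAR_NAME=development patient-service:test
# the Spring Boot startup log should report "development" as the active profile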
2- Add this variable as an environment variable in patient-service.yml:
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patient-app
  labels:
    app: patient-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: patient-app
  template:
    metadata:
      labels:
        app: patient-app
    spec:
      containers:
        - name: patient-app
          image: registry.gitlab.com/stella-center/backend-services/patient-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8094
          env:
            - name: ENV_VAR_NAME
              value: "${ENV_VAR_NAME}"
      imagePullSecrets:
        - name: gitlab-registry-token-auth
3- Specify the variable with its value for each stage in the GitLab CI YAML file, and use envsubst in the deployment command, piping the substituted manifest into kubectl via -f -:
deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: development
  before_script:
    - apk update && apk add gettext
    ..
  script:
    ..
    - cat patient-service.yml | envsubst | kubectl apply -n ${KUBE_NAMESPACE_DEV} -f -
...
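To sanity-check the substitution before applying it to the cluster, a sketch like the following (assuming ENV_VAR_NAME is exported in your shell) shows exactly what kubectl will receive:
export ENV_VAR_NAME=development
envsubst < patient-service.yml | grep -A 2 'name: ENV_VAR_NAME'
# should print the env entry with value: "development"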

Related

"##[error]Error response from daemon: failed to reach build target <stage> in Dockerfile" only during CI pipeline

I'm getting this error in my PR pipeline and I'm not sure what the cause or the solution is.
The Docker task is heavily templated, and the target stage does exist in my Dockerfile:
# docker.yaml
parameters:
  - name: service
    default: ''
  - name: jobName
    default: ''
  - name: jobDisplayName
    default: ''
  - name: taskDisplayName
    default: ''
  - name: dockerCommand
    default: ''
  - name: target
    default: ''
  - name: tag
    default: ''
jobs:
  - job: ${{ parameters.jobName }}
    displayName: ${{ parameters.jobDisplayName }}
    # Handle whether to run service or not
    variables:
      servicesChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.servicesChanged'] ]
    condition: or(contains(variables['servicesChanged'], '${{ parameters.service }}'), eq(variables['Build.Reason'], 'Manual'))
    steps:
      # Set to app repo
      - checkout: app
      # Run the Docker task
      - task: Docker@2
        # Run if there have been changes
        displayName: ${{ parameters.taskDisplayName }}
        inputs:
          command: ${{ parameters.dockerCommand }}
          repository: $(imageRepository)-${{ parameters.service }}
          dockerfile: $(dockerFilePath)/${{ parameters.service }}/docker/Dockerfile
          buildContext: $(dockerFilePath)/${{ parameters.service }}
          containerRegistry: $(dockerRegistryServiceConnection)
          arguments: --target ${{ parameters.target }}
          tags: |
            ${{ parameters.tag }}-$(Build.BuildNumber)
# Dockerfile
# syntax=docker/dockerfile:1
# creating a python base with shared environment variables
FROM python:3.8-slim as python-base
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_NO_CACHE_DIR=off \
PIP_DISABLE_PIP_VERSION_CHECK=on \
PIP_DEFAULT_TIMEOUT=100 \
POETRY_HOME="/opt/poetry" \
POETRY_VIRTUALENVS_IN_PROJECT=true \
POETRY_NO_INTERACTION=1 \
PYSETUP_PATH="/opt/pysetup" \
VENV_PATH="/opt/pysetup/.venv"
ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"
# builder-base is used to build dependencies
FROM python-base as builder-base
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
curl \
build-essential
# Install Poetry - respects $POETRY_VERSION & $POETRY_HOME
ENV POETRY_VERSION=1.1.8
RUN curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python
# We copy our Python requirements here to cache them
# and install only runtime deps using poetry
WORKDIR $PYSETUP_PATH
COPY ./poetry.lock ./pyproject.toml ./
RUN poetry install --no-dev
# 'development' stage installs all dev deps and can be used to develop code.
# For example using docker-compose to mount local volume under /app
FROM python-base as development
# Copying poetry and venv into image
COPY --from=builder-base $POETRY_HOME $POETRY_HOME
COPY --from=builder-base $PYSETUP_PATH $PYSETUP_PATH
# Copying in our entrypoint
# COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x . /opt/pysetup/.venv/bin/activate
# venv already has runtime deps installed we get a quicker install
WORKDIR $PYSETUP_PATH
RUN poetry install
WORKDIR /app
COPY . .
EXPOSE 5000
CMD [ "python", "src/manage.py", "runserver", "0.0.0.0:5000"]
# 'unit-tests' stage runs our unit tests with unittest and coverage.
FROM development AS unit-tests
RUN coverage run --omit='src/manage.py,src/config/*,*/.venv/*,*/*__init__.py,*/tests.py,*/admin.py' src/manage.py test src --tag=ut && \
coverage report
The Dockerfile is being found correctly and the pipeline looks like it starts to build the image, but then throws this error. I can run docker build ... --target unit-tests locally without issue, so the problem is isolated to Azure Pipelines.
Any suggestions for what could be causing this?
EDIT:
This is the project structure:
app/
  admin/
    docker/
      Dockerfile
      entrypoint.sh
    src/
    ...
  api/
    docker/
      Dockerfile
      entrypoint.sh
    src/
    ...
  portal/
    docker/
      Dockerfile
      entrypoint.sh
    src/
    ...
This is a portion of the devspace.yaml:
admin-ut:
  image: ${APP-NAME}/${ADMIN-UT}
  dockerfile: ${ADMIN}/docker/Dockerfile
  context: ${ADMIN}/
  build:
    buildKit:
      args: []
      options:
        target: unit-tests
EDIT2:
Maybe the issue is related to not having BuildKit per this question:
Azure Pipelines: Build a specific stage in a multistage Dockerfile without building non-dependent stages
But there is a GitHub issue that is related:
https://github.com/MicrosoftDocs/azure-devops-docs/issues/9196#issuecomment-761624398
So I've modified my docker.yaml for Azure Pipelines to:
- task: Docker@2
  # Run if there have been changes
  displayName: ${{ parameters.taskDisplayName }}
  inputs:
    command: ${{ parameters.dockerCommand }}
    repository: $(imageRepository)-${{ parameters.service }}
    dockerfile: $(dockerFilePath)/${{ parameters.service }}/docker/Dockerfile
    buildContext: $(dockerFilePath)/${{ parameters.service }}
    containerRegistry: $(dockerRegistryServiceConnection)
    arguments: --target ${{ parameters.target }}
    tags: |
      ${{ parameters.tag }}-$(Build.BuildNumber)
  env:
    DOCKER_BUILDKIT: 1
Now I get a more verbose error output:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: target stage unit-tests could not be found
##[error]#1 [internal] load build definition from Dockerfile
##[error]#1 sha256:acc1b908d881e469d44e7f005ceae0820d5ee08ada351a0aa2a7b8e749c8f6fe
##[error]#1 transferring dockerfile: 974B done
##[error]#1 DONE 0.0s
##[error]#2 [internal] load .dockerignore
##[error]#2 sha256:189c0a02bba84ed5c5f9ea82593d0e664746767c105d65afdf3cd0771eeb378e
##[error]#2 transferring context: 346B done
##[error]#2 DONE 0.0s
##[error]failed to solve with frontend dockerfile.v0: failed to create LLB definition: target stage unit-tests could not be found
##[error]The process '/usr/bin/docker' failed with exit code 1
I had this same issue, but fixed it in a simple way: just use AS instead of as.
Like so:
#FROM image as test
FROM image AS test
Just a small nuance.
Ok, I think I have it sorted out now and the pipeline stages are running successfully. It was a combination of adding DOCKER_BUILDKIT: 1 like:
- task: Docker@2
  # Run if there have been changes
  displayName: ${{ parameters.taskDisplayName }}
  inputs:
    command: ${{ parameters.dockerCommand }}
    repository: $(imageRepository)-${{ parameters.service }}
    dockerfile: $(dockerFilePath)/${{ parameters.service }}/docker/Dockerfile
    buildContext: $(dockerFilePath)/${{ parameters.service }}
    containerRegistry: $(dockerRegistryServiceConnection)
    arguments: --target ${{ parameters.target }}
    tags: |
      ${{ parameters.tag }}-$(Build.BuildNumber)
  env:
    DOCKER_BUILDKIT: 1
And then removing # syntax=docker/dockerfile:1 from each Dockerfile. Local environment still works after making the modification to the Dockerfile as well.
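For reference, the rough local equivalent of what the pipeline now runs is something like this (a sketch; the paths follow the project structure above, and the image tag myapp-admin:ut-local is made up):
DOCKER_BUILDKIT=1 docker build \
  --target unit-tests \
  -f admin/docker/Dockerfile \
  -t myapp-admin:ut-local \
  admin/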
I had this same error a few minutes ago.
My client service was like this before:
services:
  client:
    container_name: client
    image: client
    build: ./client
    ports:
      - '3000:3000'
    volumes:
      - ./client:/app
      - ./app/node_modules
I moved build: ./client to the beginning of the service definition and it worked for me.
Afterwards, it looked like this:
services:
  client:
    build: ./client
    container_name: client
    image: client
    ports:
      - '3000:3000'
    volumes:
      - ./client:/app
      - ./app/node_modules
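If reordering alone doesn't help, the long-form build key makes the context (and optionally the target stage) explicit; a sketch, assuming a reasonably recent Compose version:
services:
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
      # target: test   # only needed when building a specific multi-stage target
    container_name: client
    image: client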

Kaniko: How to cache folders from Gatsby build in Kubernetes using Tekton?

I am building a CI/CD pipeline using Tekton on a bare-metal Kubernetes cluster. I have managed to cache the necessary images (Node and Nginx) and the layers, but how can I cache the .cache and public folders created by the Gatsby build? These folders are not present in the repo. If the build step does not find them, it takes longer because it needs to recreate all the images using Sharp.
The pipeline has a PVC attached; in the task it is exposed as the source workspace. To be more clear: how can I copy the Gatsby folders to this PVC after the build has finished, and into the Kaniko container before the next build?
The Tekton task has the following steps:
1. Use the Kaniko warmer to cache the Docker images used in the Docker build
2. Create a timestamp so that "RUN build" is executed every time, even if the files don't change, because it runs a GraphQL query
3. Build and push the image using Kaniko
4. & 5. Export the image digest used by the next step in the pipeline
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-docker-image
spec:
  params:
    - name: pathToDockerFile
      type: string
      description: The path to the dockerfile to build
      default: $(resources.inputs.source-repo.path)/Dockerfile
    - name: pathToContext
      type: string
      description: |
        The build context used by Kaniko
        (https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts)
      default: $(resources.inputs.source-repo.path)
  resources:
    inputs:
      - name: source-repo
        type: git
    outputs:
      - name: builtImage
        type: image
      - name: event-to-sink
        type: cloudEvent
  workspaces:
    # PVC
    - name: source
      description: |
        Folder to write docker image digest
  results:
    - name: IMAGE-DIGEST
      description: Digest of the image just built.
  steps:
    - name: kaniko-warmer
      image: gcr.io/kaniko-project/warmer
      workingDir: $(workspaces.source.path)
      args:
        - --cache-dir=$(workspaces.source.path)/cache
        - --image=node:14-alpine
        - --image=nginx:1.19.5
    - name: print-date-unix-timestamp
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        date | tee $(params.pathToContext)/date
    - name: build-and-push
      workingDir: $(workspaces.source.path)
      image: gcr.io/kaniko-project/executor:v1.3.0
      env:
        - name: 'DOCKER_CONFIG'
          value: '/tekton/home/.docker/'
      command:
        - /kaniko/executor
      args:
        - --build-arg=CACHEBUST=$(params.pathToContext)/date
        - --dockerfile=$(params.pathToDockerFile)
        - --destination=$(resources.outputs.builtImage.url)
        - --context=$(params.pathToContext)
        - --cache=true
        - --cache-ttl=144h
        - --cache-dir=$(workspaces.source.path)/cache
        - --use-new-run
        - --snapshotMode=redo
        - --cache-repo=<repo>/kaniko-cache
        - --log-timestamp
      securityContext:
        runAsUser: 0
    - name: write-digest
      workingDir: $(workspaces.source.path)
      image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.16.2
      command: ['/ko-app/imagedigestexporter']
      args:
        - -images=[{"name":"$(resources.outputs.builtImage.url)","type":"image","url":"$(resources.outputs.builtImage.url)","digest":"","OutputImageDir":"$(workspaces.source.path)/$(params.pathToContext)/image-digest"}]
        - -terminationMessagePath=$(params.pathToContext)/image-digested
      securityContext:
        runAsUser: 0
    - name: digest-to-result
      workingDir: $(workspaces.source.path)
      image: docker.io/stedolan/jq@sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
      script: |
        cat $(params.pathToContext)/image-digested | jq '.[0].value' -rj | tee /$(results.IMAGE-DIGEST.path)
Dockerfile
FROM node:14-alpine as build
ARG CACHEBUST=1
RUN apk update \
&& apk add \
build-base \
libtool \
autoconf \
automake \
pkgconfig \
nasm \
yarn \
libpng-dev libjpeg-turbo-dev giflib-dev tiff-dev \
zlib-dev \
python \
&& rm -rf /var/cache/apk/*
EXPOSE 8000 9000
RUN yarn global add gatsby-cli
WORKDIR /usr/src/app
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build && echo $CACHEBUST
CMD ["yarn", "serve"]
FROM nginx:1.19.5 as serve
EXPOSE 80
COPY --from=build /usr/src/app/public /usr/share/nginx/html
how can I cache the .cache / public folders created by Gatsby build? These folders are not present in the repo.
If Persistent Volumes are available on your cluster and those volumes are reachable from all nodes, you can use a PVC-backed workspace for the cache.
A more generic solution that also works in a regional cluster (e.g. in the cloud) is to upload the cached folder somewhere, e.g. a bucket (Minio?) or potentially Redis. You then also need a Task that downloads this folder, potentially in parallel with the git clone when starting a new PipelineRun. GitHub Actions has a similar solution with the cache action.
Example of a Task with two workspaces that copy a file from one workspace to the other:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: copy-between-workspaces
spec:
  workspaces:
    - name: ws-a
    - name: ws-b
  steps:
    - name: copy
      image: ubuntu
      script: cp $(workspaces.ws-a.path)/myfile $(workspaces.ws-b.path)/myfile
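To make the cache survive between runs, you would bind a PVC-backed workspace in the PipelineRun; a minimal sketch, assuming a hypothetical pipeline named build-pipeline and a pre-created claim named gatsby-cache-pvc:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-run-
spec:
  pipelineRef:
    name: build-pipeline            # hypothetical pipeline name
  workspaces:
    - name: source                  # matches the Task's "source" workspace
      persistentVolumeClaim:
        claimName: gatsby-cache-pvc # assumed, pre-created PVC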

Is there a way to update Jenkins running in Kubernetes?

I'm trying to run Jenkins in Kubernetes, but the version of Jenkins is outdated. It says I need at least version 2.138.4 for the Kubernetes plugin.
I'm using the Jenkins image from Docker Hub ("jenkins/jenkins:lts"). But when I run it in Kubernetes it reports version 2.60.3. I previously used a really old version of Jenkins (2.60.3), but I updated my Dockerfile to use the latest image. After that I built the image again and deployed it to Kubernetes. I even deleted my Kubernetes Deployment and Service before deploying them again.
I'm currently working in a development environment using Minikube.
Dockerfile:
FROM jenkins/jenkins:lts
ENV JENKINS_USER admin
ENV JENKINS_PASS admin
# Skip initial setup
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
USER root
RUN apt-get update \
&& apt-get install -qqy apt-transport-https ca-certificates curl gnupg2 software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
RUN apt-get update -qq \
&& apt-get install docker-ce -y
RUN usermod -aG docker jenkins
RUN apt-get clean
RUN curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
USER jenkins
The Kubernetes deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: mikemanders/my-jenkins-image:1.0
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
And the Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  selector:
    app: jenkins
  ports:
    - port: 8080
      targetPort: 8080
I think my Kubernetes configuration is good, so I'm guessing it has something to do with Docker?
What am I missing or doing wrong here?
TL;DR
To update the deployment, you need a new Docker image based on the new Jenkins release:
docker build -t mikemanders/my-jenkins-image:1.1 .
docker push mikemanders/my-jenkins-image:1.1
kubectl set image deployment/jenkins jenkins=mikemanders/my-jenkins-image:1.1 --record
Kubernetes deploys images, not Dockerfiles
As per the Images documentation:
You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
The image property of a container supports the same syntax as the docker command does, including private registries and tags.
So, you need an image to deploy.
Update your image
To update your image in the registry, use docker build -t and docker push:
docker build -t mikemanders/my-jenkins-image:1.1 .
docker push mikemanders/my-jenkins-image:1.1
This rebuilds the image from the updated jenkins/jenkins:lts base and uploads it to the container registry.
The catch is that you bump the image version (e.g. 1.0 -> 1.1) before updating the cluster.
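After the rollout, a quick check (sketch) confirms the Deployment is now running the new image:
kubectl rollout status deployment/jenkins
kubectl get deployment jenkins -o jsonpath='{.spec.template.spec.containers[0].image}'
# should print mikemanders/my-jenkins-image:1.1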

CrashloopBackOff on Pod in Kubernetes(on GCP with Jenkins)

My pods are in the "CrashLoopBackOff" state; the setup is Jenkins with Kubernetes on GCP.
I have found a few answers indicating that my Dockerfile is not good and that its process needs to keep running indefinitely.
But I run the command ["sh", "-c", "app -port=8080"] in production.yaml to keep it in that state.
The exact same Dockerfile was used and it was working when I deployed the project manually to Kubernetes.
The project I'm trying to submit looks like this:
The Dockerfile
FROM php:7.2.4-apache
COPY apache_default /etc/apache2/sites-available/000-default.conf
RUN a2enmod rewrite
COPY src /var/www/html/src
COPY public /var/www/html/public
COPY config /var/www/html/config
ADD composer.json /var/www/html
ADD composer.lock /var/www/html
# Install software
RUN apt-get update && apt-get install -y git
# Install unzip
RUN apt-get install -y unzip
# Install curl
RUN apt-get install -y curl
# Install dependencies
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
RUN cd /var/www/html && composer install --no-dev --no-interaction --optimize-autoloader
# install pdo for mysql
RUN docker-php-ext-install pdo pdo_mysql
COPY "memory-limit-php.ini" "/usr/local/etc/php/conf.d/memory-limit-php.ini"
RUN chmod 777 -R /var/www
# Production envivorment
ENV ENVIVORMENT=prod
EXPOSE 80
CMD apachectl -D FOREGROUND
CMD ["app"]
The Jenkinsfile
def project = '****'
def appName = 'wobbl-mobile-backend'
def imageTag = "gcr.io/${project}/${appName}"
def feSvcName = "wobbl-main-backend-service"
pipeline {
agent {
kubernetes {
label 'sample-app'
defaultContainer 'jnlp'
yamlFile 'k8s/pod/pod.yaml'
}
}
stages {
// Deploy Image and push with image container builder
stage('Build and push image with Container Builder') {
steps {
container('gcloud') {
sh "PYTHONUNBUFFERED=1 gcloud container builds submit -t ${imageTag} ."
}
}
}
// Deploy to production
stage('Deploy Production') {
// Production branch
steps{
container('kubectl') {
// Change deployed image in canary to the one we just built
sh("sed -i.bak 's#gcr.io/cloud-solutions-images/wobbl-main:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
sh("kubectl --namespace=production apply -f k8s/services/")
sh("kubectl --namespace=production apply -f k8s/production/")
sh("echo http://`kubectl --namespace=production get service/${feSvcName} -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` > ${feSvcName}")
}
}
}
}
}
Then the Kubernetes YAML configurations:
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: default
  containers:
    - name: gcloud
      image: gcr.io/cloud-builders/gcloud
      command:
        - cat
      tty: true
    - name: kubectl
      image: gcr.io/cloud-builders/kubectl
      command:
        - cat
      tty: true
The service used, backend.yaml:
kind: Service
apiVersion: v1
metadata:
  name: wobbl-main-backend-service
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    role: backend
    app: wobbl-main
The deployment production.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: wobbl-main-backend-production
spec:
  replicas: 1
  template:
    metadata:
      name: backend
      labels:
        app: wobbl-main
        role: backend
        env: production
    spec:
      containers:
        - name: backend
          image: gcr.io/cloud-solutions-images/wobbl-main:1.0.0
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          command: ["sh", "-c", "app -port=8080"]
          ports:
            - name: backend
              containerPort: 8080
When I run kubectl describe pod **** -n production I get the following events:
Normal   Created  3m (x4 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Created container
Normal   Started  3m (x4 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Started container
Warning  BackOff  2m (x8 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Back-off restarting failed container
Any hints on how to debug this?
First, your Dockerfile says:
CMD ["app"]
and then within your deployment definition you have:
command: ["sh", "-c", "app -port=8080"]
This is repetition; I suggest you use only one of them.
Secondly, I assume one of the install commands gets you the app binary. Make sure it's part of your $PATH.
Also, you have both a pod and a deployment manifest. I hope you're using only one of them and not deploying both.
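A few commands usually pin down the exact reason for a CrashLoopBackOff (a sketch; <pod-name> is a placeholder):
# logs from the previous, crashed container
kubectl -n production logs <pod-name> --previous
# exit code and last state of the container
kubectl -n production describe pod <pod-name>
# confirm the app binary exists and is on $PATH inside the image
docker run --rm gcr.io/cloud-solutions-images/wobbl-main:1.0.0 sh -c 'command -v app'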

Circleci 2.0 Build on Dockerfile

I have a Node.js application that I'd like to build and test using CircleCI and Amazon ECR. The documentation is not clear on how to build an image from a Dockerfile in a repository. I've looked here: https://circleci.com/docs/2.0/building-docker-images/ and here https://circleci.com/blog/multi-stage-docker-builds/ but it's not clear what I should put under the executor. Here's what I've got so far:
version: 2
jobs:
  build:
    docker:
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
CircleCI fails with the following error:
* The job has no executor type specified. The job should have one of the following keys specified: "machine", "docker", "macos"
The base image is taken from the Dockerfile. I'm using CircleCI's built-in AWS integration, so I don't think I need to add aws_auth. What do I need to put under the executor to get this running?
Build this with a Docker-in-Docker config:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 gcc \
              libffi-dev python-dev \
              linux-headers \
              musl-dev \
              libressl-dev \
              make
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76 \
              ansible==2.4.2.0
      - run:
          name: Save Vault Password to File
          command: echo $ANSIBLE_VAULT_PASS > .vault-pass.txt
      - run:
          name: Decrypt .env
          command: |
            ansible-vault decrypt .circleci/envs --vault-password-file .vault-pass.txt
      - run:
          name: Move .env
          command: rm -f .env && mv .circleci/envs .env
      - restore_cache:
          keys:
            - v1-{{ .Branch }}
          paths:
            - /caches/app.tar
      - run:
          name: Load Docker image layer cache
          command: |
            set +o pipefail
            docker load -i /caches/app.tar | true
      - run:
          name: Build application Docker image
          command: |
            docker build --cache-from=app -t app .
      - run:
          name: Save Docker image layer cache
          command: |
            mkdir -p /caches
            docker save -o /caches/app.tar app
      - save_cache:
          key: v1-{{ .Branch }}-{{ epoch }}
          paths:
            - /caches/app.tar
      - deploy:
          name: Push application Docker image
          command: |
            if [ "${CIRCLE_BRANCH}" == "master" ]; then
              login="$(aws ecr get-login --region $ECR_REGION)"
              ${login}
              docker tag app "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
              docker push "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
            fi
First of all, you need to specify a Docker image for your build to run in. This should work:
version: 2
jobs:
  build:
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
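Since the goal is Amazon ECR, a possible follow-up step (a sketch, assuming $ECR_REGION and the full $ECR_REPO registry URI are set as project environment variables, and mirroring the login approach from the answer above) would be:
      # log in to ECR and push the freshly built image
      - run:
          name: Push to ECR
          command: |
            apk add --no-cache py-pip && pip install awscli
            login="$(aws ecr get-login --region $ECR_REGION)"
            ${login}
            docker push $ECR_REPO:0.1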
