I need to understand a CI pipeline in which one step builds and pushes an image using a Dockerfile, and another step creates a Helm chart that contains a definition of the image built from that Dockerfile. After that, there is a CD pipeline that installs only what was created by the Helm chart.
What is the difference between the image created directly by the Dockerfile and the artifact created by the Helm chart? Why isn't the Docker image enough?
Amount of effort
To deploy a service on Kubernetes using a Docker image, you need to manually create various configuration files such as deployment.yaml, and the number of such files keeps growing as more and more services are added to your environment.
With a Helm chart, we can provide a list of all the services we wish to deploy in the requirements.yaml file, and Helm will ensure that all of those services get deployed to the target environment using the deployment.yaml, service.yaml, and values.yaml files.
Configurations to maintain
Adding configuration such as routing, ConfigMaps, secrets, etc. also becomes manual and requires configuration over and above your service deployment.
For example, if you want to add an Nginx proxy to your environment, you need to deploy it separately using the Nginx image, along with all the proxy configuration for your functional services.
But with Helm charts, this can be achieved by configuring just one file within your Helm chart: ingress.yaml
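As a rough illustration, an ingress.yaml template in the chart might look something like the sketch below; the host name is an assumption for this example, while the service name and port are taken from the accounts service shown further down.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
    - host: accounts.example.com   # assumed host; in practice this would come from values.yaml
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: accounts     # the service fronting the accounts deployment shown below
                port:
                  number: 9586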
Flexibility
Using plain Docker images, we need to provide separate configuration for each environment where we want to deploy our services.
But using a Helm chart, we can simply override the properties of the existing chart with an environment-specific values.yaml file. This becomes even easier using tools like ArgoCD.
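For example (a sketch only; the release name, chart path, and values file name are assumptions), deploying the same chart to a given environment is just a matter of passing that environment's values file:
helm upgrade --install crazy-project ./crazy-project -f values-prod.yaml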
Code-Snippet:
Below is an example of the deployment.yaml file that we would need to create in order to deploy one service using a Docker image.
Inline, I have also described how you could instead populate a generic deployment.yaml template in a Helm repository using files such as requirements.yaml and values.yaml.
deployment.yaml for one service
crazy-project/charts/accounts/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: accounts
      app.kubernetes.io/instance: crazy-project
  template:
    metadata:
      labels:
        app.kubernetes.io/name: accounts
        app.kubernetes.io/instance: crazy-project
    spec:
      serviceAccountName: default
      automountServiceAccountToken: true
      imagePullSecrets:
        - name: regcred
      containers:
        - image: "image.registry.host/.../accounts:1.2144.0" # <-- This version can be fetched from 'requirements.yaml'
          name: accounts
          env: # <-- All the environment variables can be fetched from 'values.yaml'
            - name: CLUSTERNAME
              value: "com.company.cloud"
            - name: DB_URI
              value: "mongodb://connection-string&replicaSet=rs1"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: secretFromfiles
              mountPath: "/etc/secretFromfiles"
              readOnly: true
            - name: secretFromValue
              mountPath: "/etc/secretFromValue"
              readOnly: true
          ports:
            - name: http
              containerPort: 9586
              protocol: TCP
          resources:
            requests:
              memory: 450Mi
              cpu: 250m
            limits:
              memory: 800Mi
              cpu: 1
      volumes:
        - name: secretFromfiles
          secret:
            secretName: secret-from-files
        - name: secretFromValue
          secret:
            secretName: secret-data-vault
            optional: true
            items: ...
Your deployment.yaml in the Helm chart could be a generic template (code snippet below) where the details are populated from the values.yaml file.
env:
{{- range $key, $value := .Values.global.envVariable.common }}
  - name: {{ $key }}
    value: {{ $value | quote }}
{{- end }}
Your values.yaml would look like this:
accounts:
  imagePullSecrets:
    - name: regcred
  envVariable:
    service:
      vars:
        spring_data_mongodb_database: accounts_db
        spring_product_name: crazy-project
        ...
Your requirements.yaml would look like the snippet below; the 'dependencies' are the services that you wish to deploy.
dependencies:
  - name: accounts
    repository: "<your repo>"
    version: "= 1.2144.0"
  - name: rollover
    repository: "<your repo>"
    version: "= 1.2140.0"
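Once requirements.yaml is in place, Helm can pull the listed charts into the chart's charts/ directory before installing (a sketch using Helm 3 syntax; the release name matches the chart name here only for illustration):
helm dependency update ./crazy-project    # fetches the charts listed in requirements.yaml
helm install crazy-project ./crazy-project -f values.yaml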
Background
I have set up some self-hosted Azure DevOps build agents inside my AKS cluster, following this documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
The agents have been created successfully; I can see them in my Agent Pools and target them from my pipeline.
One of the first things my pipeline does is build and push some Docker images. This is a problem inside a self-hosted agent. The documentation includes the warning and link below:
In order to use Docker from within a Docker container, you bind-mount the Docker socket.
If you're sure you want to do this, see the bind mount documentation on Docker.com.
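In plain Docker terms, that bind mount looks like the sketch below (the placeholder values are assumptions; they would come from the azdevops secret referenced in the manifest that follows):
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -e AZP_URL=<org url> -e AZP_TOKEN=<pat> -e AZP_POOL=<pool> \
  AKRTestcase.azurecr.io/kubepodcreation:5306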
Files
Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
        - name: kubepodcreation
          image: AKRTestcase.azurecr.io/kubepodcreation:5306
          env:
            - name: AZP_URL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_URL
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_TOKEN
            - name: AZP_POOL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_POOL
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-volume
      volumes:
        - name: docker-volume
          hostPath:
            path: /var/run/docker.sock
Error
Attempting to run the pipeline gives me the following error:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Questions
Is it possible (and safe) to build and push Docker images from an Azure DevOps build agent running in a Docker container?
How can I modify the Kubernetes deployment file to bind-mount the Docker socket?
Any help will be greatly appreciated.
I have a config file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: aaa-aaa/jenkins.war.LTS.2.89.4
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
In the same directory as this config file, I have a Jenkins image: jenkins.war.LTS.2.89.4
How can I deploy using this image?
You cannot run a Jenkins war file directly on Kubernetes. You need to build a Docker image from that war file to be able to run it on Kubernetes.
Follow this guide to create a Docker image from the war file.
Once you have a Docker image, you can push it to a remote or local (private or public) Docker registry and reference its URL in the image section of the Kubernetes deployment YAML.
I would also suggest using the Jenkins Helm chart to deploy Jenkins on Kubernetes.
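A minimal sketch of such a Dockerfile, assuming the war file sits next to it; the base image and paths are assumptions rather than anything prescribed by the guide:
FROM openjdk:8-jre
# Copy the existing war file into the image
COPY jenkins.war.LTS.2.89.4 /usr/share/jenkins/jenkins.war
EXPOSE 8080
# Run Jenkins from the war file on port 8080, matching the containerPort in the Deployment
ENTRYPOINT ["java", "-jar", "/usr/share/jenkins/jenkins.war", "--httpPort=8080"]
After building and pushing this image, the image field in the Deployment would point at the pushed tag instead of the war file name.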
I want to pass some values from a Kubernetes YAML file to the containers. These values will be read in my Java app using System.getenv("x_slave_host").
I have this Dockerfile:
FROM jetty:9.4
...
ARG slave_host
ENV x_slave_host $slave_host
...
$JETTY_HOME/start.jar -Djetty.port=9090
The Kubernetes YAML file contains this part, where I added an env section:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: master
spec:
  template:
    metadata:
      labels:
        app: master
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
      containers:
        - name: master
          image: xregistry.azurecr.io/Y:latest
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: shared-data
              mountPath: ~/.X/experiment
        - env:
            - name: slave_host
              value: slavevalue
        - name: jupyter
          image: xregistry.azurecr.io/X:latest
          ports:
            - containerPort: 8000
            - containerPort: 8888
          volumeMounts:
            - name: shared-data
              mountPath: /var/folder/experiment
      imagePullSecrets:
        - name: acr-auth
Locally, when I did the same thing using Docker Compose, it worked using args. This is a snippet:
master:
  image: master
  build:
    context: ./master
    args:
      - slave_host=slavevalue
  ports:
    - "9090:9090"
So now I am trying to do the same thing but in Kubernetes. However, I am getting the following error (deploying it on Azure):
error: error validating "D:\\a\\r1\\a\\_X\\deployment\\kub-deploy.yaml": error validating data: field spec.template.spec.containers[1].name for v1.Container is required; if you choose to ignore these errors, turn validation off with --validate=false
In other words, how do I rewrite my Docker Compose file for Kubernetes and pass this argument?
Thanks!
The env section should be added under containers, like this:
containers:
  - name: master
    env:
      - name: slave_host
        value: slavevalue
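Applied to the manifest from the question, this means removing the stray dash in front of env so the variable stays inside the master container, and keeping every entry in the containers list a named container (a sketch of just the containers section):
containers:
  - name: master
    image: xregistry.azurecr.io/Y:latest
    env:
      - name: slave_host
        value: slavevalue
    ports:
      - containerPort: 9090
    volumeMounts:
      - name: shared-data
        mountPath: ~/.X/experiment
  - name: jupyter
    image: xregistry.azurecr.io/X:latest
    ports:
      - containerPort: 8000
      - containerPort: 8888
    volumeMounts:
      - name: shared-data
        mountPath: /var/folder/experiment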
To elaborate on Kun Li's answer: besides adding environment variables directly in the Deployment manifest, you can create a ConfigMap (or a Secret, depending on the data being stored) and reference it in your manifests. This is a good way of sharing the same environment variables across applications, compared to manually adding environment variables to several different applications.
Note that a ConfigMap can consist of one or more key: value pairs, and it's not limited to storing environment variables; that's just one of the use cases. And as I mentioned before, consider using a Secret if the data is classified as sensitive.
Example of a ConfigMap manifest, in this case used for storing an environment variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-var
data:
  slave_host: slavevalue
To create a ConfigMap holding one key=value pair using kubectl create:
kubectl create configmap my-env-var --from-literal=slave_host=slavevalue
To load all of the environment variables configured in a ConfigMap, use the following in your manifest:
containers:
  - name: master
    envFrom:
      - configMapRef:
          name: my-env-var
Or if you want to pick one specific environment variable from your ConfigMap containing several variables:
containers:
  - name: master
    env:
      - name: slave_host
        valueFrom:
          configMapKeyRef:
            name: my-env-var
            key: slave_host
See this page for more examples of using ConfigMaps in different situations.
Many applications require configuration via some combination of config files, command line arguments, and environment variables. These configuration artifacts should be decoupled from image content in order to keep containerized applications portable. The ConfigMap API resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs.
I am unable to find where ConfigMaps are saved. I know they are created; however, I can only read them via the minikube dashboard.
ConfigMaps in Kubernetes can be consumed in many different ways and mounting it as a volume is one of those ways.
You can choose where you would like to mount the ConfigMap on your Pod. Example from K8s documentation:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
  restartPolicy: Never
Note the volumes definition and the corresponding volumeMounts.
Other ways include:
Consumption via environment variables
Consumption via command-line arguments
Refer to the documentation for full examples.
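For instance, the same special-config ConfigMap could be consumed as an environment variable and then reused in a command-line argument (a sketch following the documentation pattern; SPECIAL_HOW_LEVEL is an arbitrary variable name chosen for this example):
containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    env:
      - name: SPECIAL_HOW_LEVEL
        valueFrom:
          configMapKeyRef:
            name: special-config
            key: special.how
    # environment variables defined above can be expanded in the command with $(VAR_NAME)
    command: [ "/bin/sh", "-c", "echo $(SPECIAL_HOW_LEVEL)" ]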
I've followed a few guides, and I've got CI set up with Google Container Engine and Google Container Registry. The problem is my updates aren't being applied to the deployment.
This is my deployment.yml, which contains a Kubernetes Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: my_app
  labels:
    app: my_app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: my_app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my_app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my_app
    spec:
      containers:
        - name: node
          image: gcr.io/me/my_app:latest
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: 100
        - name: phantom
          image: docker.io/wernight/phantomjs:2.1.1
          command: ["phantomjs", "--webdriver=8910", "--web-security=no", "--load-images=false", "--local-to-remote-url-access=yes"]
          ports:
            - containerPort: 8910
          resources:
            requests:
              memory: 1000
As part of my CI process I run a script which updates the image in Google Container Registry, then runs kubectl apply -f /deploy/deployment.yml. Both tasks succeed, and I'm notified that the Deployment and Service have been updated:
2016-09-28T14:37:26.375Z googleclouddeployment service "my_app" configured
2016-09-28T14:37:27.370Z googleclouddeployment deployment "my_app" configured
Since I've included the :latest tag on my image, I thought the image would be downloaded each time the deployment is updated. According to the docs, RollingUpdate should also be the default strategy.
However, when I run my CI script which updates the deployment, the updated image isn't downloaded and the changes aren't applied. What am I missing? I'm assuming that since nothing changes in deployment.yml, no update is applied. How do I get Kubernetes to download my updated image and use a RollingUpdate to deploy it?
You can force an update of a deployment by changing any field, such as a label. So in my case, I just added this at the end of my CI script:
kubectl patch deployment fb-video-extraction -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
We have recently published a technical overview of how the approach we call GitOps can be implemented in GKE.
All you need to do is configure the GCR builder to pick up code changes from GitHub and run builds. You then install the Weave Cloud agent in your cluster and connect it to a repo where the YAML files are stored, and the agent will take care of updating the repo with new images and applying the changes to the cluster.
For a more high-level overview, see also:
The GitOps Pipeline
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.