Docker build failing inside Azure DevOps self-hosted agent

Background
I have set up some self-hosted Azure DevOps build agents inside my AKS cluster, following this documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
The agents have been created successfully; I can see them in my Agent Pools and target them from my pipeline.
One of the first things my pipeline does is build and push some Docker images. This is a problem inside a self-hosted agent. The documentation includes the warning and link below:
In order to use Docker from within a Docker container, you bind-mount the Docker socket.
If you're sure you want to do this, see the bind mount documentation on Docker.com.
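As I understand it, the Kubernetes equivalent of that bind mount is a hostPath volume pointing at the Docker socket. A minimal sketch of just the relevant part (the explicit type: Socket is optional and simply makes Kubernetes verify the path is an existing UNIX socket):

      containers:
      - name: kubepodcreation
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
          type: Socket # optional: fail fast if the socket does not exist on the node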
Files
Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: kubepodcreation
        image: AKRTestcase.azurecr.io/kubepodcreation:5306
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
Error
Attempting to run the pipeline gives me the following error:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Questions
Is it possible (and safe) to build and push Docker images from an Azure DevOps build agent running in a Docker container?
How can I modify the Kubernetes deployment file to bind-mount the Docker socket?
Any help will be greatly appreciated.

Related

Difference between pushing a docker image and installing helm image

I'm trying to understand a CI pipeline that has one step for building and pushing an image using a Dockerfile, and another step for creating a Helm chart that references the image created by the Dockerfile. After that, there's a CD pipeline that installs only what was created by the Helm chart.
What is the difference between the image created directly by a Dockerfile and the one created via the Helm chart? Why isn't the Docker image enough?
Amount of effort
To deploy a service on Kubernetes using a Docker image, you need to manually create various configuration files, such as deployment.yaml. The number of such files keeps growing as more and more services are added to your environment.
With a Helm chart, we can list all the services we wish to deploy in the requirements.yaml file, and Helm will ensure that all of them get deployed to the target environment using the deployment.yaml, service.yaml, and values.yaml files.
Configurations to maintain
Adding configuration such as routing, ConfigMaps, secrets, etc. is also manual and requires configuration over and above your service deployment.
For example, if you want to add an Nginx proxy to your environment, you need to separately deploy it using the Nginx image and all the proxy configurations for your functional services.
But with Helm charts, this can be achieved by configuring just one file within your Helm chart: ingress.yaml
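For illustration, a minimal ingress.yaml template might look like this (the host, path, and service names here are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
  - host: crazy-project.example.com # hypothetical host
    http:
      paths:
      - path: /accounts
        pathType: Prefix
        backend:
          service:
            name: accounts # routes to the accounts service
            port:
              number: 9586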
Flexibility
Using Docker images, we need to provide configuration for each environment where we want to deploy our services.
But using a Helm chart, we can just override the properties of the existing chart with an environment-specific values.yaml file. This becomes even easier with tools like ArgoCD.
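For example, a production-specific values file might override only the handful of properties that differ per environment (the keys below are hypothetical):

# values-prod.yaml, applied on top of the base values.yaml
accounts:
  replicas: 3
  envVariable:
    service:
      vars:
        spring_data_mongodb_database: accounts_db_prod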
Code-Snippet:
Below is an example of the deployment.yaml file we would need to create to deploy one service using a Docker image.
Inline, I have also noted how you could instead populate a generic deployment.yaml template in a Helm repository using files like requirements.yaml and values.yaml.
deployment.yaml for one service
crazy-project/charts/accounts/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: accounts
      app.kubernetes.io/instance: crazy-project
  template:
    metadata:
      labels:
        app.kubernetes.io/name: accounts
        app.kubernetes.io/instance: crazy-project
    spec:
      serviceAccountName: default
      automountServiceAccountToken: true
      imagePullSecrets:
      - name: regcred
      containers:
      - image: "image.registry.host/.../accounts:1.2144.0" # <-- this version can be fetched from requirements.yaml
        name: accounts
        env: # <-- all the environment variables can be fetched from values.yaml
        - name: CLUSTERNAME
          value: "com.company.cloud"
        - name: DB_URI
          value: "mongodb://connection-string&replicaSet=rs1"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: secretFromfiles
          mountPath: "/etc/secretFromfiles"
          readOnly: true
        - name: secretFromValue
          mountPath: "/etc/secretFromValue"
          readOnly: true
        ports:
        - name: http
          containerPort: 9586
          protocol: TCP
        resources:
          requests:
            memory: 450Mi
            cpu: 250m
          limits:
            memory: 800Mi
            cpu: 1
      volumes:
      - name: secretFromfiles
        secret:
          secretName: secret-from-files
      - name: secretFromValue
        secret:
          secretName: secret-data-vault
          optional: true
          items: ...
Your deployment.yaml in the Helm chart could be a generic template (code snippet below) where the details are populated from the values.yaml file.
env:
{{- range $key, $value := .Values.global.envVariable.common }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
Your values.yaml would look like this:
accounts:
  imagePullSecrets:
  - name: regcred
  envVariable:
    service:
      vars:
        spring_data_mongodb_database: accounts_db
        spring_product_name: crazy-project
        ...
Your requirements.yaml would look like the following; the dependencies are the services that you wish to deploy.
dependencies:
- name: accounts
  repository: "<your repo>"
  version: "= 1.2144.0"
- name: rollover
  repository: "<your repo>"
  version: "= 1.2140.0"

Kubernetes - How to run local image of jenkins

I have a config file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: aaa-aaa/jenkins.war.LTS.2.89.4
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
In the same directory as this config file, I have a Jenkins image: jenkins.war.LTS.2.89.4
How can I deploy using this image?
You cannot run a Jenkins WAR file directly on Kubernetes. You need to build a Docker image from that WAR file to be able to run it on Kubernetes.
Follow this guide to create a Docker image from the WAR file.
Once you have a Docker image, you can push it to a remote or local, private or public Docker registry and reference that URL in the image section of the Kubernetes deployment YAML.
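For example, if the image built from the WAR file were pushed as myregistry.example.com/jenkins:2.89.4 (a hypothetical URL and tag), the container section of the deployment would reference it like this:

      containers:
      - name: jenkins
        image: myregistry.example.com/jenkins:2.89.4 # hypothetical registry URL and tag
        ports:
        - containerPort: 8080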
I would also suggest using the Jenkins Helm chart to deploy Jenkins on Kubernetes.

Copy files into kubernetes pod with deployment.yaml

I have a containerized microservice built with Java. This application uses the default /config-volume directory when it searches for property files.
Previously I deployed manually via a Dockerfile, and now I'm looking to automate this process with Kubernetes.
The container image starts the microservice immediately, so the properties need to be in the config-volume folder from the start. I accomplished this in Docker with this simple Dockerfile:
FROM ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
RUN mkdir /config-volume
COPY path/to/my.properties /config-volume
I'm trying to replicate this type of behavior in a Kubernetes deployment.yaml, but I have found no way to do it.
I've tried running a kubectl cp command immediately after applying the deployment, and it sometimes works, but it can result in a race condition that causes the microservice to fail at startup.
(I've redacted unnecessary parts)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  template:
    spec:
      containers:
      - env:
        image: ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
        name: my-service
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /config-volume
          name: config-volume
      volumes:
      - name: config-volume
        emptyDir: {}
status: {}
Is there a way to copy files into a volume inside the deployment.yaml?
You are trying to emulate a ConfigMap using volumes. Instead, put your configuration into a ConfigMap and mount that into your deployments. The documentation is here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
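For example, the properties file could be expressed as a ConfigMap like this (the key/value contents below are placeholders; kubectl create configmap nameOfConfigMap --from-file=path/to/my.properties produces the equivalent object):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nameOfConfigMap
data:
  my.properties: |
    # placeholder contents of path/to/my.properties
    some.property=some-value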
Once you have your configuration as a ConfigMap, mount it using something like this:
...
containers:
- name: mycontainer
  volumeMounts:
  - name: config-volume
    mountPath: /config-volume
volumes:
- name: config-volume
  configMap:
    name: nameOfConfigMap

Kubernetes Storage type Hostpath- files mapping issue

Hi, I am using the latest Kubernetes (1.13.1) and docker-ce (Docker version 18.06.1-ce, build e68fc7a).
I set up a deployment file that mounts a file from the host (hostPath) inside a container (mountPath).
The bug: when I try to mount a file from the host into the container, I get an error message that it's not a file (Kubernetes thinks the file is a directory for some reason).
When I try to run the containers using the command:
kubectl create -f
the pod stays at the ContainerCreating stage forever.
After a deeper look at it using kubectl describe pod, it shows an error message that the file is not recognized as a file.
Here is the deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: notixxxion
  name: notification
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: notification
    spec:
      containers:
      - image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
        name: notixxxion
        ports:
        - containerPort: xxx0
        # host file configuration
        volumeMounts:
        - mountPath: /opt/notification/dist/hellow.txt
          name: test-volume
          readOnly: false
      volumes:
      - name: test-volume
        hostPath:
          # file location on host
          path: /exec-ui/app-config/hellow.txt
          # this field is optional
          type: FileOrCreate
          #type: File
status: {}
I have reinstalled the Kubernetes cluster and things got a little better. Kubernetes can now read files without any problem, and the container is created and running. But there is another issue with the hostPath storage type:
hostPath mounts do not update as the files change on the host, even after I delete the pod and create it again.
Check the permissions of the files you are trying to mount!
As a last resort, try using privileged mode.
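A sketch of what privileged mode looks like on the container spec (use with care; it gives the container full access to the host):

      containers:
      - name: notixxxion
        image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
        securityContext:
          privileged: true # last resort for host-mount permission issues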
Hope it helps!

Volume mounting in Jenkins on Kubernetes

I'm trying to set up Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.
Here's my deployment.yml file. The image is based on jenkins/jenkins.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
However, if I then push a new image to my image repository and update the pods using the commands below, Jenkins comes back online but asks me to start from scratch (enter the admin password; none of my Jenkins jobs are there, no plugins, etc.):
kubectl apply -f kubernetes (where my manifests are stored)
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
Am I misunderstanding how this volume mount is meant to work?
As an aside, I also have backup and restore scripts which back up the Jenkins home directory to S3 and download it again, but that's somewhat outside the scope of this issue.
You should use PersistentVolumes, along with a StatefulSet instead of a Deployment resource, if you wish your data to survive re-deployments/restarts of your pod (see the StatefulSet sketch at the end of this answer).
You have specified the volume type emptyDir. This essentially mounts an empty directory on the kube node that runs your pod. Every time you restart your deployment, the pod can move between kube hosts where that empty dir isn't present, so your data doesn't persist across restarts.
I see you're pulling your image from an ECR repository, so I'm assuming you're running k8s in AWS.
You'll need to configure a StorageClass for AWS. If you've provisioned k8s using something like kops, this will already be configured. You can confirm this by doing kubectl get storageclass - the provisioner should be configured as EBS:
NAME            PROVISIONER
gp2 (default)   kubernetes.io/aws-ebs
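If no suitable StorageClass exists, you can create one yourself; a minimal sketch for EBS-backed gp2 volumes:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2 # EBS general-purpose SSD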
Then, you need to specify a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2 # must match your storageclass from above
  resources:
    requests:
      storage: 30Gi
You can now use the PVC in your deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-data # must match the claim name from above
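Alternatively, following the StatefulSet suggestion above, here is a minimal sketch in which Kubernetes creates the claim for you via volumeClaimTemplates (it assumes a headless Service named jenkins and the gp2 StorageClass from earlier):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins # must match an existing headless Service
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gp2
      resources:
        requests:
          storage: 30Gi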
