Run Docker commands within a pod using Helm

I am trying to run a Docker container within a Job that I am deploying with Helm on AKS. The purpose is to run some tests using Selenium and make some Postgres calls to automate web UI tests.
When trying to run within the job, the following error is received:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
I ran into this problem locally, but I can work around it using:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock web-ui-auto:latest /bin/bash
The problem is that I am using Helm to deploy the Job separately from the running tasks, since it can take about an hour to complete.
I tried adding a deployment.yaml to my Helm chart like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "web-ui-auto.fullname" . }}
  labels:
    app: {{ template "web-ui-auto.name" . }}
    chart: {{ template "web-ui-auto.chart" . }}
    draft: {{ .Values.draft | default "draft-app" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  revisionHistoryLimit: 5
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "web-ui-auto.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "web-ui-auto.name" . }}
        draft: {{ .Values.draft | default "draft-app" }}
        release: {{ .Release.Name }}
      annotations:
        buildID: {{ .Values.buildID | default "" | quote }}
        container.apparmor.security.beta.kubernetes.io/{{ .Chart.Name }}: runtime/default
    spec:
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
            runAsUser: {{ .Values.securityContext.runAsUser }}
            runAsGroup: {{ .Values.securityContext.runAsGroup }}
            allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
            seccompProfile:
              type: RuntimeDefault
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.deployment.containerPort }}
              protocol: TCP
          volumeMounts:
            - name: dockersock
              mountPath: "/var/run/docker.sock"
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
but I was still met with failures. My question is: what is the best method to run Docker within the Job successfully when deploying with Helm? Any help is appreciated.
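For reference, and only as a sketch (not a verified fix): since this workload is a one-off test run rather than a long-running service, the same host-socket mount can be expressed in a Job template instead of a Deployment. Two assumptions are baked in here: that the AKS node pool actually runs a Docker daemon (newer containerd-based node pools expose no /var/run/docker.sock at all, in which case a Docker-in-Docker sidecar or an external builder is needed instead), and that the container runs as root, because a runAsNonRoot securityContext like the one above generally cannot read the host's Docker socket.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "web-ui-auto.fullname" . }}-tests
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          securityContext:
            # assumption: talking to the host Docker socket normally requires root (or docker group membership)
            runAsNonRoot: false
          volumeMounts:
            - name: dockersock
              mountPath: /var/run/docker.sock
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
            type: Socket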

Related

Dockerfile execution is not working from Kubernetes Job

We are in the process of migrating an init-container to a Kubernetes Job, so I added the init-container image location in the containers section of job.yaml. However, the shell script invoked from the init-container's Dockerfile (start.sh) is not being executed. Could someone help with what could be wrong here?
job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-init-job"
  namespace: {{ .Release.Namespace }}
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled
        "helm.sh/hook-delete-policy": before-hook-creation
        "helm.sh/hook": pre-install,pre-upgrade,pre-delete
        "helm.sh/hook-weight": "-5"
    spec:
      serviceAccountName: {{ .Release.Name }}-init-service-account
      containers:
        - name: app-installer
          image: artifactorylocation/test-init-container:1.0.1
          command:
            - /bin/bash
            - -c
            - echo Hello executing k8s init-container
          securityContext:
            readOnlyRootFilesystem: true
      restartPolicy: OnFailure
Dockerfile of test-init-container
FROM repository/java17-ol8-x64:adddd4c
WORKDIR /
ADD target/test-init-container-ms.jar ./
ADD target/lib ./lib
ADD start.sh /
RUN chmod +x /start.sh
CMD ["sh", "/start.sh"]
EXPOSE 8080
start.sh is not being executed by the Job.
The command: in your Job spec overrides the image's CMD (["sh", "/start.sh"]), which is why start.sh never runs. Just remove the command override from your job.yaml so the image's default CMD is used:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-init-job"
  namespace: {{ .Release.Namespace }}
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled
        "helm.sh/hook-delete-policy": before-hook-creation
        "helm.sh/hook": pre-install,pre-upgrade,pre-delete
        "helm.sh/hook-weight": "-5"
    spec:
      serviceAccountName: {{ .Release.Name }}-init-service-account
      containers:
        - name: app-installer
          image: artifactorylocation/test-init-container:1.0.1
          securityContext:
            readOnlyRootFilesystem: true
      restartPolicy: OnFailure
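If the Job really does need to run something extra in addition to the image's startup script, one hedged alternative is to chain the commands explicitly so start.sh is still invoked (this assumes /start.sh stays executable, as the Dockerfile above makes it):
containers:
  - name: app-installer
    image: artifactorylocation/test-init-container:1.0.1
    # run the extra step, then hand control to the image's original startup script
    command: ["/bin/bash", "-c", "echo Hello executing k8s init-container && exec /start.sh"]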

Unable to pull from Gitlab Container Registry unless set to Everyone With Access

I am working on building a simple pipeline with GitLab. I'm using Minikube on my laptop, and I've installed gitlab-runner using Helm in the same namespace as the application I'm trying to deploy. I've not installed GitLab on Minikube; I'm using GitLab.com.
Anyway, after a lot of attempts, the deployment was successful, but the application failed because it can't pull the image from registry.gitlab.com. The error is: repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I've also logged in successfully with docker login registry.gitlab.com -u username -p pwd but I can't pull the image, same error as above.
I've created secrets according to the documentation. Here's my deployment file
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
  namespace: {{ .Values.applicationName }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ..hidden..
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.applicationName }}
  template:
    metadata:
      labels:
        app: {{ .Values.applicationName }}
    spec:
      containers:
        - name: {{ .Values.applicationName }}
          image: registry.gitlab.com/gfalco77/maverick:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8001
      imagePullSecrets:
        - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.applicationName }}
spec:
  ports:
    - name: {{ .Values.applicationName }}
      port: 8001
      targetPort: 8001
      protocol: TCP
  selector:
    app: {{ .Values.applicationName }}
I've also created the deploy token with read_registry.
Project visibility is already Public, but the container registry was set to 'Only Project Members'.
The only way I can make it work is to change the container registry visibility to 'Everyone With Access'.
Is this expected, or can it also be done with the registry restricted to 'Only Project Members'?
Thanks
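For context, a dockerconfigjson pull secret for a private GitLab registry is normally generated from the deploy token rather than from a personal docker login. A sketch, with the token values and namespace as placeholders:
kubectl create secret docker-registry registry-credentials \
  --docker-server=registry.gitlab.com \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token-secret> \
  --namespace=<application-namespace>
The secret has to live in the same namespace as the Deployment that references it in imagePullSecrets, which the manifest above handles by templating the namespace.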

Kubernetes Rolling Update with Helm2

I am trying to perform a Kubernetes Rolling Update using Helm v2; however, I'm unable to.
When I perform a helm upgrade on a slow Tomcat image, the original pod is destroyed.
I would like to figure out how to achieve zero downtime by incrementally updating Pods instances with new ones, and draining old ones.
To demonstrate, I created a sample slow Tomcat Docker image, and a Helm chart.
To install:
helm install https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz --name slowtom \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/initial.yaml
You can follow the logs by running kubectl logs -f slowtom-sf-0, and once ready you can access the application on http://localhost:30901
To upgrade:
(and that's where I need help)
helm upgrade slowtom https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/upgrade.yaml
The upgrade.yaml is identical to the initial.yaml deployment file with the exception of the tag version number.
Here the original pod is destroyed, and the new one starts. Meanwhile, users are unable to access the application on http://localhost:30901
To Delete:
helm del slowtom --purge
Reference
Local Helm Chart
Download helm chart:
curl -LO https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz
tar vxfz ./slowtom.tgz
Install from local helm-chart:
helm install --debug ./slowtom --name slowtom -f ./slowtom/environments/initial.yaml
Upgrade from local helm-chart:
helm upgrade --debug slowtom ./slowtom -f ./slowtom/environments/upgrade.yaml
Docker Image
Dockerfile
FROM tomcat:8.5-jdk8-corretto
RUN mkdir /usr/local/tomcat/webapps/ROOT && \
echo '<html><head><title>Slow Tomcat</title></head><body><h1>Slow Tomcat Now Ready</h1></body></html>' >> /usr/local/tomcat/webapps/ROOT/index.html
RUN echo '#!/usr/bin/env bash' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'x=2' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'secs=$(($x * 60))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'while [ $secs -gt 0 ]; do' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' >&2 echo -e "Blast off in $secs\033[0K\r"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' sleep 1' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' : $((secs--))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'done' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo '>&2 echo "slow cataline done. will now start real catalina"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'exec catalina.sh run' >> /usr/local/tomcat/bin/slowcatalina.sh && \
chmod +x /usr/local/tomcat/bin/slowcatalina.sh
ENTRYPOINT ["/usr/local/tomcat/bin/slowcatalina.sh"]
Helm Chart Content
slowtom/Chart.yaml
apiVersion: v1
description: slow-tomcat Helm chart for Kubernetes
name: slowtom
version: 1.1.2 # whatever
slowtom/values.yaml
# Do not use this file; use the ones from the environments folder
slowtom/environments/initial.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"
image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 1
env:
  - name: y_env
    value: whatever
slowtom/environments/upgrade.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"
image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 2
env:
  - name: y_env
    value: whatever
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
---
slowtom/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: {{.Values.slowtom_sf.name}}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    app: {{.Values.slowtom_sf.name}}
    visualize: "true"
    hasHealthcheck: "{{ .Values.slowtom_sf.hasHealthcheck }}"
    isResilient: "{{ .Values.slowtom_sf.isResilient }}"
spec:
  type: NodePort
  selector:
    app: {{.Values.slowtom_sf.name}}
  sessionAffinity: ClientIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
      nodePort: 30901
---
Unlike a Deployment, a StatefulSet does not start a new pod before destroying the old one during a rolling update. Instead, the expectation is that you have multiple pods, and they will be replaced one by one. Since you only have 1 replica configured, the existing pod must be destroyed first. Either increase your replica count to 2 or more, or switch to a Deployment template.
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
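For comparison, if the chart were switched to a Deployment as suggested, the fields that differ from the StatefulSet above would look roughly like this (a sketch; a readiness probe is still needed so Kubernetes knows when the slow Tomcat is actually serving):
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep the old pod serving until its replacement is ready
      maxSurge: 1         # bring up one extra pod during the update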
I solved this problem by adding Readiness or Startup Probes to my deployment.yaml
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
            failureThreshold: 3
---
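As a side note, on newer Kubernetes versions a startupProbe is an alternative worth considering for a container that takes about two minutes to boot: it defers the other probes until the app is actually up instead of hard-coding a 60-second initial delay. A sketch, placed alongside the readinessProbe above:
startupProbe:
  httpGet:
    path: /
    port: 8080
  periodSeconds: 10
  failureThreshold: 30   # allow up to ~5 minutes of startup time before the pod is considered failed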

Converting docker-compose to a helm chart?

I have a docker-compose file containing 2 images for a security tool I am using. My challenge is to convert it into a Helm chart consisting of a deployment.yaml and a service.yaml. The docker-compose file looks like this:
version: '3'
services:
  nginx:
    ports:
      - "80:80"
      - "443:443"
    environment:
      - NG_SERVER_NAME=192.168.1.228
    links:
      - tomcat8
    image: continuumsecurity/iriusrisk-prod:nginx-prod-ssl
    container_name: iriusrisk-nginx
    volumes:
      - "./cert.pem:/etc/nginx/ssl/star_iriusrisk_com.crt"
      - "./key.pem:/etc/nginx/ssl/star_iriusrisk_com.key"
  tomcat8:
    environment:
      - IRIUS_DB_URL=jdbc\:postgresql\://192.168.1.228\:5432/iriusprod?user\=iriusprod&password\=alongandcomplexpassword2523
      - IRIUS_EDITION=saas
      - IRIUS_EXT_URL=http\://192.168.1.228
      - grails_env=production
    image: continuumsecurity/iriusrisk-prod:tomcat8-2
    container_name: iriusrisk-tomcat8
There is also a Postgres server running, which I was able to convert into a Helm chart and expose on my IP (192.168.1.228) on port 5432. But for the iriusrisk (nginx) and tomcat images, which are linked to each other, I am not able to figure it out. This is my attempt at the deployment files for both.
deployment-tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: {{ .Values.tomcat.app.name }}
spec:
  replicas: {{ .Values.tomcat.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.tomcat.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.tomcat.app.name }}
    spec:
      {{- if .Values.tomcat.imagePullSecretsName }}
      imagePullSecrets:
        - name: {{ .Values.tomcat.imagePullSecretsName }}
      {{- end}}
      restartPolicy: Always
      serviceAccountName: {{ .Values.tomcat.serviceAccountName }}
      containers:
        - name: {{ .Values.tomcat.app.name }}
          image: "{{ .Values.tomcat.ImageName }}:{{ .Values.tomcat.ImageTag }}"
          container_name: iriusrisk-tomcat8
          imagePullPolicy: {{ .Values.tomcat.ImagePullPolicy }}
          ports:
            - containerPort: {{ .Values.tomcat.port }}
          env:
            - name: IRIUS_DB_URL
              value: jdbc\:postgresql\://192.168.1.228\:5432/iriusprod?user\=iriusprod&password\=alongandcomplexpassword2523
            - name: IRIUS_EDITION
              value: saas
            - name: IRIUS_EXT_URL
              value: http\://192.168.1.228
            - name: grails_env
              value: production
deployment-iriusrisk.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iriusrisk
  labels:
    app: {{ .Values.iriusrisk.app.name }}
spec:
  replicas: {{ .Values.iriusrisk.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.iriusrisk.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.iriusrisk.app.name }}
    spec:
      {{- if .Values.iriusrisk.imagePullSecretsName }}
      imagePullSecrets:
        - name: {{ .Values.iriusrisk.imagePullSecretsName }}
      {{- end}}
      restartPolicy: Always
      serviceAccountName: {{ .Values.iriusrisk.serviceAccountName }}
      containers:
        - name: {{ .Values.iriusrisk.app.name }}
          image: "{{ .Values.iriusrisk.ImageName }}:{{ .Values.iriusrisk.ImageTag }}"
          container_name: iriusrisk-nginx
          imagePullPolicy: {{ .Values.iriusrisk.ImagePullPolicy }}
          ports:
            - containerPort: {{ .Values.iriusrisk.port }}
          env:
            - name: NG_SERVER_NAME
              value: "192.168.1.228"
          volumes:
            - "./cert.pem:/etc/nginx/ssl/star_iriusrisk_com.crt"
            - "./key.pem:/etc/nginx/ssl/star_iriusrisk_com.key"
How should I go about solving this issue? I have looked at "linking" pods with each other, but none of the solutions I tried worked. I am a bit new to this, so I am still confused about how to expose pods and connect them to each other.
The kompose tool now includes the ability to convert to Helm charts from docker-compose.yml files:
kompose convert -c
Check out the kompose documentation on alternative conversions.
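In practice the conversion is roughly (assuming the compose file is named docker-compose.yml in the current directory):
kompose convert -c -f docker-compose.yml
which should write out a chart directory (Chart.yaml plus templates/) that can then be installed with helm install.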
To my knowledge, no tool has been developed or published that converts a Helm chart into a docker-compose file. But the conversion from docker-compose to Kubernetes resource manifests can be done with a tool like kompose (https://kompose.io).
I think it is not necessary to convert from a Helm chart to docker-compose. You can use Minikube to run whatever is needed locally. Otherwise, the alternative is to run your containers locally and reverse engineer them, i.e. produce the docker-compose file. Here is a GitHub project that does this for you: https://github.com/Red5d/docker-autocompose.
Good Luck
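Separately, on how the two pods reach each other once they are separate Deployments: Kubernetes has no equivalent of compose links; the usual pattern is to give the Tomcat Deployment its own Service and let nginx reach it by that DNS name. A sketch, assuming Tomcat listens on its default port 8080 and that the nginx configuration addresses it as tomcat8 (the name used in the compose file); adjust both if your setup differs:
apiVersion: v1
kind: Service
metadata:
  name: tomcat8
spec:
  selector:
    app: {{ .Values.tomcat.app.name }}
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP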

How do I ensure the same environment for all my workers (containers) in Airflow?

I have a config for deploying 4 pods (hence, 4 workers) for Airflow on Kubernetes using Docker. However, all of a sudden, worker-0 is unable to make a certain curl request, whereas the other workers can. This is resulting in pipeline failures.
I have tried reading about mismatched configs and StatefulSets, but in my case there is one config for all the workers, and it is the single source of truth.
The statefulsets-workers.yaml file is as follows:
# Workers are not in deployment, but in StatefulSet, to allow each worker expose a mini-server
# that only serve logs, that will be used by the web server.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ template "airflow.fullname" . }}-worker
  labels:
    app: {{ template "airflow.name" . }}-worker
    chart: {{ template "airflow.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  serviceName: "{{ template "airflow.fullname" . }}-worker"
  updateStrategy:
    type: RollingUpdate
  # Use experimental burst mode for faster StatefulSet scaling
  # https://github.com/kubernetes/kubernetes/commit/****
  podManagementPolicy: Parallel
  replicas: {{ .Values.celery.num_workers }}
  template:
    metadata:
      {{- if .Values.airflow.pallet.config_path }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      {{- end }}
      labels:
        app: {{ template "airflow.name" . }}-worker
        release: {{ .Release.Name }}
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 1002
        fsGroup: 1002
      containers:
        - name: {{ .Chart.Name }}-worker
          imagePullPolicy: {{ .Values.airflow.image_pull_policy }}
          image: "{{ .Values.airflow.image }}:{{ .Values.airflow.imageTag }}"
          volumeMounts:
            {{- if .Values.airflow.storage.enabled }}
            - name: google-cloud-key
              mountPath: /var/secrets/google
              readOnly: true
            {{- end }}
            - name: worker-logs
              mountPath: /usr/local/airflow/logs
            - name: data
              mountPath: /usr/local/airflow/rootfs
          env:
            {{- if .Values.airflow.storage.enabled }}
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
            {{- end }}
            {{- range $setting, $option := .Values.airflow.config }}
            - name: {{ $setting }}
              value: {{ $option }}
            {{- end }}
          securityContext:
            allowPrivilegeEscalation: false
          envFrom:
            - configMapRef:
                name: pallet-env-file
          args: ["worker"]
          ports:
            - name: wlog
              containerPort: 8793
              protocol: TCP
      {{- if .Values.airflow.image_pull_secret }}
      imagePullSecrets:
        - name: {{ .Values.airflow.image_pull_secret }}
      {{- end }}
      {{- if .Values.airflow.storage.enabled }}
      volumes:
        - name: google-cloud-key
          secret:
            secretName: {{ .Values.airflow.storage.secretName }}
      {{- end }}
  volumeClaimTemplates:
    - metadata:
        name: worker-logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
I expect all the workers to be able to connect to the service to which I am making the curl request.
It turns out that the environment was indeed the same; however, the receiving machine didn't have the node's new IP whitelisted.
When all the pods crashed, they took the node down with them, and restarting the node gave it a new IP. Hence, the connection timed out for the worker on that node.
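For anyone debugging something similar, the node's current external IP (the address that needs to be whitelisted on the receiving side) can be checked with:
kubectl get nodes -o wide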
