I have a docker-compose file containing 2 images for a security tool I am using. My challenge is to convert it into a Helm chart consisting of deployment.yaml and service.yaml files. The docker-compose file looks like this:
version: '3'
services:
  nginx:
    ports:
      - "80:80"
      - "443:443"
    environment:
      - NG_SERVER_NAME=192.168.1.228
    links:
      - tomcat8
    image: continuumsecurity/iriusrisk-prod:nginx-prod-ssl
    container_name: iriusrisk-nginx
    volumes:
      - "./cert.pem:/etc/nginx/ssl/star_iriusrisk_com.crt"
      - "./key.pem:/etc/nginx/ssl/star_iriusrisk_com.key"
  tomcat8:
    environment:
      - IRIUS_DB_URL=jdbc\:postgresql\://192.168.1.228\:5432/iriusprod?user\=iriusprod&password\=alongandcomplexpassword2523
      - IRIUS_EDITION=saas
      - IRIUS_EXT_URL=http\://192.168.1.228
      - grails_env=production
    image: continuumsecurity/iriusrisk-prod:tomcat8-2
    container_name: iriusrisk-tomcat8
There is a Postgres server running too, which I was able to convert into a Helm chart and expose on my IP (192.168.1.228) on port 5432. But for the iriusrisk and tomcat images, which are linked to each other, I am not able to figure it out. This has been my attempt at the deployment files for both.
deployment-tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: {{ .Values.tomcat.app.name }}
spec:
  replicas: {{ .Values.tomcat.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.tomcat.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.tomcat.app.name }}
    spec:
      {{- if .Values.tomcat.imagePullSecretsName }}
      imagePullSecrets:
        - name: {{ .Values.tomcat.imagePullSecretsName }}
      {{- end }}
      restartPolicy: Always
      serviceAccountName: {{ .Values.tomcat.serviceAccountName }}
      containers:
        - name: {{ .Values.tomcat.app.name }}
          image: "{{ .Values.tomcat.ImageName }}:{{ .Values.tomcat.ImageTag }}"
          container_name: iriusrisk-tomcat8
          imagePullPolicy: {{ .Values.tomcat.ImagePullPolicy }}
          ports:
            - containerPort: {{ .Values.tomcat.port }}
          env:
            - name: IRIUS_DB_URL
              value: jdbc\:postgresql\://192.168.1.228\:5432/iriusprod?user\=iriusprod&password\=alongandcomplexpassword2523
            - name: IRIUS_EDITION
              value: saas
            - name: IRIUS_EXT_URL
              value: http\://192.168.1.228
            - name: grails_env
              value: production
deployment-iriusrisk.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iriusrisk
  labels:
    app: {{ .Values.iriusrisk.app.name }}
spec:
  replicas: {{ .Values.iriusrisk.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.iriusrisk.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.iriusrisk.app.name }}
    spec:
      {{- if .Values.iriusrisk.imagePullSecretsName }}
      imagePullSecrets:
        - name: {{ .Values.iriusrisk.imagePullSecretsName }}
      {{- end }}
      restartPolicy: Always
      serviceAccountName: {{ .Values.iriusrisk.serviceAccountName }}
      containers:
        - name: {{ .Values.iriusrisk.app.name }}
          image: "{{ .Values.iriusrisk.ImageName }}:{{ .Values.iriusrisk.ImageTag }}"
          container_name: iriusrisk-nginx
          imagePullPolicy: {{ .Values.iriusrisk.ImagePullPolicy }}
          ports:
            - containerPort: {{ .Values.iriusrisk.port }}
          env:
            - name: NG_SERVER_NAME
              value: "192.168.1.228"
          volumes:
            - "./cert.pem:/etc/nginx/ssl/star_iriusrisk_com.crt"
            - "./key.pem:/etc/nginx/ssl/star_iriusrisk_com.key"
How should I go about solving this issue? I have looked at "linking" pods with each other, but none of the solutions I tried worked. I am new to this, so I am still a bit confused about how to expose pods and connect them to each other.
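For context, the docker-compose links entry has no direct Kubernetes equivalent; pods usually reach each other through a Service and its DNS name. A minimal sketch of a service-tomcat.yaml for the tomcat deployment above (the Service name, label, and port values are assumptions and must match the deployment and the nginx configuration baked into the image):

apiVersion: v1
kind: Service
metadata:
  name: tomcat8   # if the nginx config expects the compose hostname "tomcat8", reusing it preserves that name
spec:
  selector:
    app: {{ .Values.tomcat.app.name }}   # must match the pod labels in deployment-tomcat.yaml
  ports:
    - port: 8080        # assumed Tomcat port; align with .Values.tomcat.port
      targetPort: 8080

With such a Service in place, the nginx pod can address Tomcat by the Service's DNS name (e.g. tomcat8:8080) instead of relying on a compose link.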
The kompose tool now includes the ability to convert to Helm charts from docker-compose.yml files:
kompose convert -c
Check out the kompose Alternative Conversions documentation.
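As a usage sketch for the compose file in the question (assuming it is saved as docker-compose.yml), the invocation would look something like:

kompose convert -c -f docker-compose.yml

The generated chart directory can then be installed with helm install.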
To my current knowledge, no tool has been developed or published that converts a Helm chart into a docker-compose file. But the conversion from docker-compose to Kubernetes resource manifests can be done with a tool like kompose (https://kompose.io).
I think it is not necessary to convert from a Helm chart to docker-compose. You can use Minikube to run whatever is needed locally. Otherwise, the alternative is to run your containers locally and reverse engineer them, i.e. produce the docker-compose file. Here is a GitHub project that does this for you: https://github.com/Red5d/docker-autocompose.
Good Luck
Related
I am trying to run a Docker container within a job that I am deploying with Helm on AKS. The purpose of this is to run some tests using Selenium and make some Postgres calls to automate web UI tests.
When trying to run within the job, the following error is received:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
I ran into this problem locally, but can work around it using
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock web-ui-auto:latest /bin/bash
The problem is that I am using Helm to deploy the job separately from the running tasks, since it can take about an hour to complete.
I tried adding a deployment.yaml to my helm like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "web-ui-auto.fullname" . }}
  labels:
    app: {{ template "web-ui-auto.name" . }}
    chart: {{ template "web-ui-auto.chart" . }}
    draft: {{ .Values.draft | default "draft-app" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  revisionHistoryLimit: 5
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "web-ui-auto.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "web-ui-auto.name" . }}
        draft: {{ .Values.draft | default "draft-app" }}
        release: {{ .Release.Name }}
      annotations:
        buildID: {{ .Values.buildID | default "" | quote }}
        container.apparmor.security.beta.kubernetes.io/{{ .Chart.Name }}: runtime/default
    spec:
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
            runAsUser: {{ .Values.securityContext.runAsUser }}
            runAsGroup: {{ .Values.securityContext.runAsGroup }}
            allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
            seccompProfile:
              type: RuntimeDefault
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.deployment.containerPort }}
              protocol: TCP
          volumeMounts:
            - name: dockersock
              mountPath: "/var/run/docker.sock"
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
but was still met with failures. My question is: what is the best method to run Docker within the job successfully when deploying with Helm? Any help is appreciated.
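For reference, since the test run is meant to be a Job rather than a long-running Deployment, a minimal Job template with the same hostPath socket mount might look like the sketch below (the names mirror the chart above; the extra fields are assumptions). Note that this only works if the cluster nodes actually run a Docker daemon and expose /var/run/docker.sock, which is not the case on containerd-based node pools.

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "web-ui-auto.fullname" . }}-tests
spec:
  backoffLimit: 0            # assumption: do not retry a failed test run
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          volumeMounts:
            - name: dockersock
              mountPath: /var/run/docker.sock
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
            type: Socket     # fails fast if the socket does not exist on the node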
I have a deployment.yml file which looks like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
          imagePullPolicy: Always
But I am not able to use $(RegistryName) and $(RepositoryName), as I am not sure how to initialize them and assign values here.
If I specify something like below
image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
It worked with the direct, static, exact names. But I don't want to hard-code the registry and repository names.
Is there any way to replace these dynamically, just like the way I am passing them in the task?
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
You can do something like
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
        - image: TEST_IMAGE_NAME
          name: test-image
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 443
              name: https
Then, in a CI step, run a sed command (here, in an ubuntu step) like:
steps:
  - id: 'set test core image in yamls'
    name: 'ubuntu'
    args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
The above will resolve your issue. The command simply finds and replaces TEST_IMAGE_NAME with the variables that make up the Docker image URI.
Option 2: kustomize
If you want to do it with kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yaml
  - deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
  - name: myapp
    newName: registry.gitlab.com/jkpl/kustomize-demo
    newTag: IMAGE_TAG
sh file
#!/usr/bin/env bash
set -euo pipefail
# Set the image tag if not set
if [ -z "${IMAGE_TAG:-}" ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi
sed "s/IMAGE_TAG/${IMAGE_TAG}/g" k8s-base/kustomization.template.sed.yaml > location/kustomization.yaml
Demo repository: https://gitlab.com/jkpl/kustomize-demo
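As a usage sketch, assuming the generated kustomization.yaml and the manifests it references end up together in location/, the result can then be applied with:

kubectl apply -k location/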
I created a customized Docker image and stored it on my local system. Now I want to use that Docker image via kubectl.
Docker image build:
docker build -t backend:v1 .
Then the Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
          image: backend
          imagePullPolicy: Never
        name: backend
        resources: {}
      restartPolicy: Always
status: {}
Command to run it: kubectl create -f backend-deployment.yaml
Getting this error:
error: error validating "backend-deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "image" in io.k8s.api.core.v1.EnvVar, ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
Local Registry
Set up the local registry first using this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Image Tag
Given a Dockerfile, the image can be built and tagged this way:
docker build -t localhost:5000/my-image .
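If the image is meant to actually be served from that local registry (rather than only sitting in the node's local image cache), it would also need to be pushed:

docker push localhost:5000/my-image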
Image Pull Policy
The field imagePullPolicy should then be set to Never, so that Kubernetes uses the image already present on the node instead of trying to pull it.
Given this sample pod template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: localhost:5000/my-image
      imagePullPolicy: Never
Deploy Pod
The pod can be deployed using:
kubectl create -f pod.yml
Hope this comes in handy :)
As the error specifies unknown field "image" and unknown field "imagePullPolicy", there is an indentation error in your Kubernetes deployment file: image and imagePullPolicy have ended up inside an env entry.
Make these changes in your YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend
          imagePullPolicy: Never
          env:
            - name: mail_auth_pass
            - name: mail_auth_user
            - name: mail_from
            - name: mail_greeting
            - name: mail_service
            - name: mail_sign
            - name: mongodb_url
              value: mongodb://mongodb.mongodb.svc.cluster.local/console
            - name: server_host
              value: "0.0.0.0"
            - name: server_port
              value: "3000"
            - name: server_sessionSecret
              value: "1234"
          resources: {}
      restartPolicy: Always
status: {}
Validate your kubernetes yaml file online using https://kubeyaml.com/
Or with kubectl apply --validate=true --dry-run=true -f deployment.yaml
Hope this helps.
I have a config for deploying 4 pods (hence, 4 workers) for Airflow on Kubernetes using Docker. However, all of a sudden, worker-0 is unable to make a certain curl request, whereas the other workers are able to. This is resulting in the failure of pipelines.
I have tried reading about mismatched configs and StatefulSets, but in my case there is one config for all the workers, and it is the single source of truth.
The statefulsets-workers.yaml file is as follows:
# Workers are not in deployment, but in StatefulSet, to allow each worker expose a mini-server
# that only serve logs, that will be used by the web server.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ template "airflow.fullname" . }}-worker
  labels:
    app: {{ template "airflow.name" . }}-worker
    chart: {{ template "airflow.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  serviceName: "{{ template "airflow.fullname" . }}-worker"
  updateStrategy:
    type: RollingUpdate
  # Use experimental burst mode for faster StatefulSet scaling
  # https://github.com/kubernetes/kubernetes/commit/****
  podManagementPolicy: Parallel
  replicas: {{ .Values.celery.num_workers }}
  template:
    metadata:
      {{- if .Values.airflow.pallet.config_path }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      {{- end }}
      labels:
        app: {{ template "airflow.name" . }}-worker
        release: {{ .Release.Name }}
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 1002
        fsGroup: 1002
      containers:
        - name: {{ .Chart.Name }}-worker
          imagePullPolicy: {{ .Values.airflow.image_pull_policy }}
          image: "{{ .Values.airflow.image }}:{{ .Values.airflow.imageTag }}"
          volumeMounts:
            {{- if .Values.airflow.storage.enabled }}
            - name: google-cloud-key
              mountPath: /var/secrets/google
              readOnly: true
            {{- end }}
            - name: worker-logs
              mountPath: /usr/local/airflow/logs
            - name: data
              mountPath: /usr/local/airflow/rootfs
          env:
            {{- if .Values.airflow.storage.enabled }}
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
            {{- end }}
            {{- range $setting, $option := .Values.airflow.config }}
            - name: {{ $setting }}
              value: {{ $option }}
            {{- end }}
          securityContext:
            allowPrivilegeEscalation: false
          envFrom:
            - configMapRef:
                name: pallet-env-file
          args: ["worker"]
          ports:
            - name: wlog
              containerPort: 8793
              protocol: TCP
      {{- if .Values.airflow.image_pull_secret }}
      imagePullSecrets:
        - name: {{ .Values.airflow.image_pull_secret }}
      {{- end }}
      {{- if .Values.airflow.storage.enabled }}
      volumes:
        - name: google-cloud-key
          secret:
            secretName: {{ .Values.airflow.storage.secretName }}
      {{- end }}
  volumeClaimTemplates:
    - metadata:
        name: worker-logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
I expect all the workers to be able to connect to the service to which I am making the curl request.
It turns out that the environment was indeed the same; however, the receiving machine didn't have the new IP of the node whitelisted.
When all the pods crashed, they took the node down with them, and restarting the node gave it a new IP. Hence, the connection timed out for the worker on that node.
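For anyone debugging a similar case, which node each worker runs on, and that node's IPs, can be checked with standard kubectl commands, for example:

kubectl get pods -o wide    # shows the node each worker pod is scheduled on
kubectl get nodes -o wide   # shows the internal/external IPs of those nodes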
To use a container image from a private Docker registry, Kubernetes recommends creating a secret of type 'docker-registry' and referencing it in your deployment.
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then, in your Helm chart or Kubernetes deployment file, use imagePullSecrets:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: foo
          image: foo.example.com
This works, but requires that all containers be sourced from the same registry.
How would you pull 2 containers from 2 registries (e.g. when using a sidecar that is stored separately from the primary container)?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: foo
          image: foo.example.com
          imagePullSecrets:
            - name: foo-secret
        - name: bar
          image: bar.example.com
          imagePullSecrets:
            - name: bar-secret
I've tried creating 2 secrets foo-secret and bar-secret and referencing each appropriately, but I find it fails to pull both containers.
You have to include imagePullSecrets: directly at the pod level, but you can have multiple secrets there.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      imagePullSecrets:
        - name: foo-secret
        - name: bar-secret
      containers:
        - name: foo
          image: foo.example.com/foo-image
        - name: bar
          image: bar.example.com/bar-image
The Kubernetes documentation on this notes:
If you need access to multiple registries, you can create one secret for each registry. Kubelet will merge any imagePullSecrets into a single virtual .docker/config.json when pulling images for your Pods.
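For completeness, a sketch of creating the two secrets referenced above, following the same kubectl command shown earlier (credential values are placeholders):

kubectl create secret docker-registry foo-secret --docker-server=foo.example.com --docker-username=<user> --docker-password=<password> --docker-email=<email>
kubectl create secret docker-registry bar-secret --docker-server=bar.example.com --docker-username=<user> --docker-password=<password> --docker-email=<email>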