I am trying to run my private Docker image alongside a docker:dind container so that I can run docker commands from the private image in Kubernetes.
My only issue is that the docker run command does not read the Docker secrets, so it fails and asks me to run docker login first. How can I pass the credentials to the docker run command?
Here is the relevant piece of my Kubernetes deployment:
containers:
- name: docker-private
  image: docker:20.10
  command: ['docker', 'run', '-p', '80:8000', 'private/image:latest']
  resources:
    requests:
      cpu: 10m
      memory: 256Mi
  env:
  - name: DOCKER_HOST
    value: tcp://localhost:2375
  envFrom:
  - secretRef:
      name: docker-secret-keys
- name: dind-daemon
  image: docker:20.10-dind
  command: ["dockerd", "--host", "tcp://127.0.0.1:2375"]
  resources:
    requests:
      cpu: 20m
      memory: 512Mi
  securityContext:
    privileged: true
  volumeMounts:
  - name: docker-graph-storage
    mountPath: /var/lib/docker
EDIT
I do have my certificate as a Kubernetes secret that I try to mount into the running Docker container, but so far without any success :(
apiVersion: v1
data:
  .dockerconfigjson: eyJhXXXXXXdoihfc9w8fwpeojfOFwhfoiuwehfo8wfhoi2ehfioewNlcm5hbWUiOiJlbGRhcmVudGas4hti45ytg45hgiVsZGFXXXXXXyQGVudG9yLmlvIiwiYXV0aCI6IlpXeGtZWEpsYm5SdmNqb3dObVl4WmpjM1lTMDVPRFZrTFRRNU5HRXRZVEUzTXkwMk5UYzBObVF4T0RjeFpUWT0ifX19XXXXXXXXXXX
kind: Secret
metadata:
  name: staging-docker-keys
  namespace: staging
  resourceVersion: "6383"
  uid: a7yduyd-xxxx-xxxx-xxxx-ae2ede3e4ed
type: kubernetes.io/dockerconfigjson
The final goal is for the "inner" Docker (the one that runs private/image:latest) to be able to run any docker command without needing to log in before each command.
docker:dind will create the CA, server, and client certificates in /certs.
Just create an emptyDir volume to share the certificates.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    name: myapp
spec:
  volumes:
  - name: docker-tls-certdir
    emptyDir: {}
  containers:
  - name: docker-private
    image: docker:20.10
    command: ['docker', 'run', '-p', '80:8000', 'nginx']
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
    volumeMounts:
    - name: docker-tls-certdir
      mountPath: /certs
  - name: dind-daemon
    image: docker:20.10-dind
    command: ["dockerd", "--host", "tcp://127.0.0.1:2375"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-tls-certdir
      mountPath: /certs
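If you let the dind image keep its default entrypoint instead of overriding the command (so it generates the certificates itself and listens with TLS on port 2376), the client container typically also needs environment variables like the ones below so the CLI picks up the shared certificates. This is a sketch based on the official docker image documentation, not something from the original setup:

env:
- name: DOCKER_HOST
  value: tcp://localhost:2376
- name: DOCKER_TLS_VERIFY
  value: "1"
- name: DOCKER_CERT_PATH
  value: /certs/client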
Assuming you are not using Docker cert authentication but a username and password, you may follow the path below:
modify the Docker client image (docker:20.10) entrypoint using the command field
The command may look like this:
command: ["/bin/sh"]
args: ["-c", "docker login...;docker run..."]
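For example, assuming the docker-secret-keys secret from the question exposes the registry username and password as environment variables named DOCKER_USERNAME and DOCKER_PASSWORD (hypothetical names), the combined command could be sketched as:

command: ["/bin/sh"]
args:
- -c
- >-
  docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD" &&
  docker run -p 80:8000 private/image:latest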
Sample working pod using the idea:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    name: myapp
spec:
  containers:
  - name: myapp
    image: docker:20.10
    command: ["/bin/sh"]
    args: ["-c", "docker version;docker info"]
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
Based on the docs.
EDIT:
If you do use Docker cert authentication, you have several options:
bake the certificates in by extending the Docker client image and using that image instead
mount the certificates if you have them as Kubernetes secrets in the cluster (see the sketch below)
...
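For the registry-credentials case from the question (a kubernetes.io/dockerconfigjson secret), the mounting idea could be sketched roughly like this, assuming the client container runs as root so the Docker CLI reads /root/.docker/config.json; the secret name is the one from the question's EDIT:

containers:
- name: docker-private
  image: docker:20.10
  env:
  - name: DOCKER_HOST
    value: tcp://localhost:2375
  volumeMounts:
  - name: docker-config
    mountPath: /root/.docker
    readOnly: true
volumes:
- name: docker-config
  secret:
    secretName: staging-docker-keys
    items:
    - key: .dockerconfigjson
      path: config.json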
OK, I finally created an access token on my Docker repository and used it to perform the docker login command. It works just fine :)
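For anyone doing the same, a non-interactive form of that login (with the token read from stdin rather than passed on the command line) could look like this; the variable names and registry host are placeholders:

echo "$DOCKER_TOKEN" | docker login -u "$DOCKER_USER" --password-stdin my.registry.example.com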
I have deployed a service on Knative. I iterated on the service code/Docker image and I am trying to redeploy it at the same address. I proceeded as follows:
Pushed the new Docker image to our private Docker repo
Updated the service YAML file to point to the new Docker image (see YAML below)
Deleted the service with the command: kubectl -n myspacename delete -f myservicename.yaml
Recreated the service with the command: kubectl -n myspacename apply -f myservicename.yaml
During the deployment, the service shows READY = Unknown and REASON = RevisionMissing, and after a while, READY = False and REASON = ProgressDeadlineExceeded. When looking at the logs of the pod with the following command: kubectl -n myspacename logs revision.serving.knative.dev/myservicename-00001, I get the message:
no kind "Revision" is registered for version "serving.knative.dev/v1" in scheme "pkg/scheme/scheme.go:28"
Here is the YAML file of the service:
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myservicename
  namespace: myspacename
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency
        autoscaling.knative.dev/target: '1'
        autoscaling.knative.dev/minScale: '0'
        autoscaling.knative.dev/maxScale: '5'
        autoscaling.knative.dev/scaleDownDelay: 60s
        autoscaling.knative.dev/window: 600s
    spec:
      tolerations:
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
      volumes:
      - name: nfs-volume
        persistentVolumeClaim:
          claimName: myspacename-models-pvc
      imagePullSecrets:
      - name: myrobotaccount-pull-secret
      containers:
      - name: myservicename
        image: quay.company.com/project/myservicename:0.4.0
        ports:
        - containerPort: 5000
          name: user-port
          protocol: TCP
        resources:
          limits:
            cpu: "4"
            memory: 36Gi
            nvidia.com/gpu: 1
          requests:
            cpu: "2"
            memory: 32Gi
        volumeMounts:
        - name: nfs-volume
          mountPath: /tmp/static/
        securityContext:
          privileged: true
        env:
        - name: CLOUD_STORAGE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myservicename-cloud-storage-password
              key: key
        envFrom:
        - configMapRef:
            name: myservicename-config
It turned out the procedure I followed above is correct; the problem was caused by a bug in the code of the Docker image that Knative is serving. I was able to troubleshoot the issue by looking at the logs of the pods as follows:
First, run the following command to get the pod name: kubectl -n myspacename get pods. Example of a pod name: myservicename-00001-deployment-56595b764f-dl7x6
Then get the logs of the pod with the following command: kubectl -n myspacename logs myservicename-00001-deployment-56595b764f-dl7x6
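A couple of other read-only commands that can help narrow down a RevisionMissing / ProgressDeadlineExceeded state, assuming the standard Knative Serving CRDs and short names are installed:

# Overall service status and the latest failure message
kubectl -n myspacename describe ksvc myservicename

# Status of the individual revisions behind the service
kubectl -n myspacename get revisions.serving.knative.dev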
There is an init container which copies keystore.jks from a Nexus repo into a volume via curl during the Docker image build. Then, once the init container is running, Python code takes that keystore.jks and makes the necessary updates, and then the init container dies. What we are trying to do is store this keystore.jks as a secret in OpenShift, but how do we copy the secret into the volume once the init container is running, so that the Python code can use it as before? Thanks in advance for any comments/help!
As @larsks suggests, you can mount the secret as a volume and use it in the main container.
Here is a YAML configuration that might help you understand.
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key
  namespace: acme
data:
  id_rsa: {{ secret_value_base64_encoded }}
Now add the secret to a mount path:
spec:
  template:
    spec:
      containers:
      - image: "my-image:latest"
        name: my-app
        ...
        volumeMounts:
        - mountPath: "/var/my-app"
          name: ssh-key
          readOnly: true
      initContainers:
      - command:
        - sh
        - -c
        - chown -R 1000:1000 /var/my-app # if any changes required
        image: busybox:1.29.2
        name: set-dir-owner
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/my-app
          name: ssh-key
      volumes:
      - name: ssh-key
        secret:
          secretName: ssh-key
As suggested, the better option is to mount the secret directly into the main container, without an init container:
spec:
  template:
    spec:
      containers:
      - image: "my-image:latest"
        name: my-app
        ...
        volumeMounts:
        - mountPath: "/var/my-app"
          name: ssh-key
          readOnly: true
      volumes:
      - name: ssh-key
        secret:
          secretName: ssh-key
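Adapting the same pattern to the keystore from the question might look roughly like this; the secret name, key, and mount path are assumptions, and each key of the secret becomes a file, so keystore.jks would appear as /etc/keystore/keystore.jks inside the container:

apiVersion: v1
kind: Secret
metadata:
  name: keystore-secret
type: Opaque
data:
  keystore.jks: {{ keystore_jks_base64_encoded }}

and then mount it in the container that needs it:

containers:
- name: my-app
  image: "my-image:latest"
  volumeMounts:
  - name: keystore
    mountPath: /etc/keystore
    readOnly: true
volumes:
- name: keystore
  secret:
    secretName: keystore-secret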
I tried running a simple DaemonSet on a kube cluster. The idea was that other kube pods would connect to that container's docker daemon (dockerd) and execute commands on it. (The other pods are Jenkins slaves and would just have the env var DOCKER_HOST point to 'tcp://localhost:2375'.) In short, the config looks like this:
dind.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dind
spec:
  selector:
    matchLabels:
      name: dind
  template:
    metadata:
      labels:
        name: dind
    spec:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      containers:
      - name: dind
        image: docker:18.05-dind
        resources:
          limits:
            memory: 2000Mi
          requests:
            cpu: 100m
            memory: 500Mi
        volumeMounts:
        - name: dind-storage
          mountPath: /var/lib/docker
      volumes:
      - name: dind-storage
        emptyDir: {}
Error message when running:
mount: mounting none on /sys/kernel/security failed: Permission denied
Could not mount /sys/kernel/security.
AppArmor detection and --privileged mode might break.
mount: mounting none on /tmp failed: Permission denied
I took the idea from a Medium post that didn't describe it fully: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25, which covers Docker outside of Docker, Docker in Docker, and Kaniko.
Found the solution: the key was to run the dind container as privileged (which fixes the /sys mount errors) and point DOCKER_HOST at it:
apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
  - name: jenkins-slave
    image: gcr.io/<my-project>/myimg # it has docker installed on it
    command: ['docker', 'run', '-p', '80:80', 'httpd:latest']
    resources:
      requests:
        cpu: 10m
        memory: 256Mi
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
  - name: dind-daemon
    image: docker:18.05-dind
    resources:
      requests:
        cpu: 20m
        memory: 512Mi
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-graph-storage
    emptyDir: {}
I deploy a Redis container via Kubernetes and get the following warning:
WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled
Is it possible to disable THP via Kubernetes? Perhaps via init-containers?
Yes, with init-containers it's quite straightforward:
apiVersion: v1
kind: Pod
metadata:
  name: thp-test
spec:
  restartPolicy: Never
  terminationGracePeriodSeconds: 1
  volumes:
  - name: host-sys
    hostPath:
      path: /sys
  initContainers:
  - name: disable-thp
    image: busybox
    volumeMounts:
    - name: host-sys
      mountPath: /host-sys
    command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
Demo (notice that this is a system-wide setting):
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
$ kubectl create -f thp-test.yaml
pod "thp-test" created
$ kubectl logs thp-test
always madvise [never]
$ kubectl delete pod thp-test
pod "thp-test" deleted
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
Ay, I don't know if what I did is a good idea, but we needed to deactivate THP on all our K8s VMs for all our apps, so I used a DaemonSet instead of adding an init-container to all our stacks:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: thp-disable
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: thp-disable
  template:
    metadata:
      labels:
        name: thp-disable
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 1
      volumes:
      - name: host-sys
        hostPath:
          path: /sys
      initContainers:
      - name: disable-thp
        image: busybox
        volumeMounts:
        - name: host-sys
          mountPath: /host-sys
        command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
      - name: busybox
        image: busybox
        command: ["watch", "-n", "600", "cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
I think it's a little dirty but it works.
Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I were running it directly in Docker, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        ports:
        - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding the following under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get:
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
      - name: secrets-volume
        secret:
          secretName: mailhog-login
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        resources:
          limits:
            cpu: 70m
            memory: 30Mi
          requests:
            cpu: 50m
            memory: 20Mi
        volumeMounts:
        - name: secrets-volume
          mountPath: /data/mailhog
          readOnly: true
        ports:
        - containerPort: 8025
        - containerPort: 1025
        args:
        - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of ENTRYPOINT (and args of CMD). In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
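In other words, the Dockerfile and pod-spec fields map like this, which is why leaving command unset keeps the image's entrypoint and only overrides its arguments:

# Dockerfile     ->  Kubernetes container spec
# ENTRYPOINT     ->  command   (overrides the image ENTRYPOINT)
# CMD            ->  args      (overrides the image CMD)
args:
- "-auth-file=/data/mailhog/auth.file"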
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed to do a similar task (my aim was passing the application profile to the app), and what I did is the following:
Setting an environment variable in the Deployment section of the Kubernetes YAML file:
env:
- name: PROFILE
  value: "dev"
Using this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
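One thing worth noting: the ${PROFILE} substitution (and the *.jar glob) works here because this is the shell form of CMD. If you prefer the exec form, you would need to invoke a shell explicitly, for example (a sketch using the same placeholder jar path as above):

CMD ["sh", "-c", "java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar"]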