I am trying to load the elasticsearch.yml file using a ConfigMap while installing Elasticsearch on Kubernetes.
kubectl create configmap elastic-config --from-file=./elasticsearch.yml
The elasticsearch.yml file is loaded in the container with root as its owner and read-only permissions (https://github.com/kubernetes/kubernetes/issues/62099). Since Elasticsearch will not start with root ownership, the pod crashes.
As a workaround, I tried mounting the ConfigMap to a different path and then copying it into the config directory using an initContainer. However, the file in the config directory does not seem to be updated.
Is there anything that I am missing or is there any other way to accomplish this?
Elasticsearch Kubernetes StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  labels:
    app: elasticservice
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: docker-elastic
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "elastic-service"
        - name: discovery.zen.minimum_master_nodes
          value: "1"
        - name: node.master
          value: "true"
        - name: node.data
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xmx256m -Xms256m"
      volumes:
      - name: elastic-config-vol
        configMap:
          name: elastic-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
      - name: elastic-config-dir
        emptyDir: {}
      - name: elastic-storage
        emptyDir: {}
      initContainers:
      # elasticsearch will not run as root, fix data dir ownership for uid 1000
      - name: fix-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
      - name: fix-config-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - cp /tmp/elasticsearch/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-config-dir
          mountPath: /usr/share/elasticsearch/config
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
      # increase default vm.max_map_count to 262144
      - name: increase-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
I use:
...
volumeMounts:
- name: config
  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
  subPath: elasticsearch.yml
volumes:
- name: config
  configMap:
    name: es-configmap
without any permission problems, but you can set the file permissions with defaultMode.
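For example, a minimal sketch with defaultMode (0644 here is only an illustration; the decimal form 420 is equivalent):
volumes:
- name: config
  configMap:
    name: es-configmap
    defaultMode: 0644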
My company bought software that we're trying to deploy on IBM Cloud, using Kubernetes and a given private Docker repository. Once deployed, there is always a Kubernetes error: "Back-off restarting failed container". So I read the logs to understand why the container is restarting, and here is the error:
Caused by: java.io.FileNotFoundException: /var/yseop-log/yseop-manager.log (Permission denied)
So I deduced that I just had to change the permissions from the Kubernetes manifest. Since I'm using a Deployment, I tried the following initContainer:
initContainers:
- name: permission-fix
  image: busybox
  command: ['sh', '-c']
  args: ['chmod -R 777 /var']
  volumeMounts:
  - mountPath: /var/yseop-engine
    name: yseop-data
  - mountPath: /var/yseop-data/yseop-manager
    name: yseop-data
  - mountPath: /var/yseop-log
    name: yseop-data
This didn't work, because I'm not allowed to execute chmod on read-only folders as a non-root user.
So I tried remounting those volumes, but that also failed, because I'm not a root user.
I then found out about running as a specific user and group. To find out which user and group I had to set in my securityContext, I read the Dockerfile, and here are the user and group:
USER 1001:0
So I thought I could just write this in my deployment file:
securityContext:
  runAsUser: 1001
  rusAsGroup: 0
Obviously, that didn't work either, because I'm not allowed to run as group 0.
So I still don't know what to do in order to properly deploy this image. The image works when doing a docker pull and exec on my computer, but it's not working on Kubernetes.
Here is my complete volume file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    ibm.io/auto-create-bucket: "true"
    ibm.io/auto-delete-bucket: "false"
    ibm.io/bucket: ""
    ibm.io/secret-name: "cos-write-access"
    ibm.io/endpoint: https://s3.eu-de.cloud-object-storage.appdomain.cloud
  name: yseop-pvc
  namespace: ns
  labels:
    app: yseop-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc
  volumeMode: Filesystem
And here is my full deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yseop-manager
  namespace: ns
spec:
  selector:
    matchLabels:
      app: yseop-manager
  template:
    metadata:
      labels:
        app: yseop-manager
    spec:
      securityContext:
        runAsUser: 1001
        rusAsGroup: 0
      initContainers:
      - name: permission-fix
        image: busybox
        command: ['sh', '-c']
        args: ['chmod -R 777 /var']
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      containers:
      - name: yseop-manager
        image: IMAGE
        imagePullPolicy: IfNotPresent
        env:
        - name: SECURITY_USERS_DEFAULT_ENABLED
          value: "true"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: yseop-data
        persistentVolumeClaim:
          claimName: yseop-pvc
Thanks for helping
Can you please try including a supplementary group ID in the security context, like:
securityContext:
  runAsUser: 1001
  fsGroup: 2000
By default, runAsGroup is 0, which is root. The link below might give more insight about this:
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Working YAML content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yseop-manager
  namespace: ns
spec:
  selector:
    matchLabels:
      app: yseop-manager
  template:
    metadata:
      labels:
        app: yseop-manager
    spec:
      securityContext:
        fsGroup: 2000
      initContainers:
      - name: permission-fix
        image: busybox
        command: ['sh', '-c']
        args: ['chown -R root:2000 /var']
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      containers:
      - name: yseop-manager
        image: IMAGE
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 1001
          runAsGroup: 2000
        env:
        - name: SECURITY_USERS_DEFAULT_ENABLED
          value: "true"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: yseop-data
        persistentVolumeClaim:
          claimName: yseop-pvc
I was not told by my company that we have restrictive Pod Security Policies. Because of those, the volumes are read-only and there is no way I could have written anything into them.
The solution is as follows:
volumes:
- name: yseop-data
  emptyDir: {}
Then I have to specify a path in volumeMounts (which was already done) and create a PVC, so my data would be persistent; a sketch follows.
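Putting it together, a minimal sketch of the resulting spec (the mount paths come from the deployment above; splitting the writable log path onto the emptyDir while keeping data on the PVC is one reading of the fix, not the author's exact manifest):
containers:
- name: yseop-manager
  volumeMounts:
  - mountPath: /var/yseop-data/yseop-manager
    name: yseop-data # persistent data stays on the PVC
  - mountPath: /var/yseop-log
    name: yseop-logs # writable scratch space, no chmod needed
volumes:
- name: yseop-data
  persistentVolumeClaim:
    claimName: yseop-pvc
- name: yseop-logs
  emptyDir: {}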
I have an issue with one of my projects. Here is what I want to do:
Have a private Docker registry on my Kubernetes cluster
Have a Docker daemon running so that I can pull/push and build images directly inside the cluster
For this project I'm using certificates to secure all those interactions.
1. How to reproduce:
Note: I'm working on a Linux-based system
Here are the files that I'm using:
Deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
      - name: docker
        image: docker:dind
        resources:
          limits:
            cpu: "0.5"
            memory: "256Mi"
          requests:
            memory: "128Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: dind-client-cert
          mountPath: /certs/client/
        - name: docker-graph-storage
          mountPath: /var/lib/docker
        - name: dind-registry-cert
          mountPath: >-
            /etc/docker/certs.d/registry:5000/ca.crt
        ports:
        - containerPort: 2376
      volumes:
      - name: docker-graph-storage
        emptyDir: {}
      - name: dind-client-cert
        persistentVolumeClaim:
          claimName: certs-client
      - name: dind-registry-cert
        secret:
          secretName: ca.crt
      - name: init-reg-vol
        secret:
          secretName: init-reg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        env:
        - name: DOCKER_TLS_CERTDIR
          value: /certs
        - name: REGISTRY_HTTP_TLS_KEY
          value: /certs/registry.pem
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: /certs/registry.crt
        volumeMounts:
        - name: dind-client-cert
          mountPath: /certs/client/
        - name: dind-registry-cert
          mountPath: /certs/
        - name: registry-data
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
      volumes:
      - name: dind-client-cert
        persistentVolumeClaim:
          claimName: certs-client
      - name: dind-registry-cert
        secret:
          secretName: registry
      - name: registry-data
        persistentVolumeClaim:
          claimName: registry-data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: docker
        command: ['sleep','200']
        resources:
          limits:
            cpu: "0.5"
            memory: "256Mi"
          requests:
            memory: "128Mi"
        env:
        - name: DOCKER_HOST
          value: tcp://docker:2376
        - name: DOCKER_TLS_VERIFY
          value: '1'
        - name: DOCKER_TLS_CERTDIR
          value: /certs
        - name: DOCKER_CERT_PATH
          value: /certs/client
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: /certs/registry.crt
        volumeMounts:
        - name: dind-client-cert
          mountPath: /certs/client/
          readOnly: true
        - name: dind-registry-cert
          mountPath: /usr/local/share/ca-certificate/ca.crt
          readOnly: true
      volumes:
      - name: dind-client-cert
        persistentVolumeClaim:
          claimName: certs-client
      - name: dind-registry-cert
        secret:
          secretName: ca.crt
Services.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: docker
spec:
  selector:
    app: docker
  ports:
  - name: docker
    protocol: TCP
    port: 2376
    targetPort: 2376
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
  - name: registry
    protocol: TCP
    port: 5000
    targetPort: 5000
Pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certs-client
spec:
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
spec:
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  resources:
    limits:
      storage: 50Gi
    requests:
      storage: 2Gi
status: {}
For the cert files, I have the following folders: certs/, certs/client/ and certs.d/registry:5000/, and I use these commands to generate the certs:
openssl req -newkey rsa:4096 -nodes -keyout ./certs/registry.pem -x509 -days 365 -out ./certs/registry.crt -subj "/C=''/ST=''/L=''/O=''/OU=''/CN=registry"
cp ./certs/registry.crt ./certs.d/registry\:5000/ca.crt
Then I use secrets to pass those certs into the pods:
kubectl create secret generic registry --from-file=certs/registry.crt --from-file=certs/registry.pem
kubectl create secret generic ca.crt --from-file=certs/registry.crt
Then, to launch the project, the following command is used:
kubectl apply -f pvc.yaml,deployment.yaml,service.yaml
2. My issues
I have a problem with my docker pods, which show this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/727d0f2a-bef6-4217-a292-427c5d76e071/volumes/kubernetes.io~secret/dind-registry-cert:/etc/docker/certs.d/registry:5000/ca.crt:ro
So the problem seems to come from the colon in the path name. I then tried to escape the colon, and got this sublime error:
error: error parsing deployment.yaml: error converting YAML to JSON: yaml: line 34: found unknown escape character
The real problem here is that if the folder is not named 'registry:5000', the certificate is not recognised as correct and I get an x509 error when trying to push an image from the client.
For the overall project, I know that it can work like that, since I already succeeded in deploying it locally with docker-compose (here is the link to the GitHub project if any of you are curious).
So I looked into it a bit and found out that it's a recurring problem with Docker (I mean with Docker Desktop, for mounting volumes in containers), but I can't find anything about the same issue on Kubernetes.
Do any of you have any lead/suggestion/workaround on this matter?
As always, thanks for your time :)
------------------------------- EDIT following #HelloWorld answer -------------------------------
Thanks to the symlink workaround, the ca.crt is correctly mounted inside. However, since I was mounting it in the deployment used to run the Docker daemon, the entrypoint of the docker:dind container was overwritten by the command. For future readers, here is the solution that I found: fetching dockerd-entrypoint.sh and running it manually.
Here is the deployment as I write these lines:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
      - name: docker
        image: docker:dind
        resources:
          limits:
            cpu: "0.5"
            memory: "256Mi"
          requests:
            memory: "128Mi"
        securityContext:
          privileged: true
        command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /random/registry.crt /etc/docker/certs.d/registry:5000/ca.crt && wget https://raw.githubusercontent.com/docker-library/docker/a73d96e731e2dd5d6822c99a9af4dcbfbbedb2be/19.03/dind/dockerd-entrypoint.sh && chmod +x dockerd-entrypoint.sh && ./dockerd-entrypoint.sh']
        volumeMounts:
        - name: dind-client-cert
          mountPath: /certs/client/
          readOnly: false
        - name: dind-registry-cert
          mountPath: /random/
          readOnly: false
        ports:
        - containerPort: 2376
      volumes:
      - name: dind-client-cert
        persistentVolumeClaim:
          claimName: certs-client
      - name: dind-registry-cert
        secret:
          secretName: ca.crt
I hope it will be useful to someone in the future :)
The only thing I came up with is using symlinks. I tested it and it works. I also tried searching for a better solution but didn't find anything satisfying.
Have a look at this example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: centos:7
    command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /some/random/path/ca.crt /etc/docker/certs.d/registry:5000/ca.crt && exec sleep 10000']
    volumeMounts:
    - mountPath: '/some/random/path'
      name: registry-cert
  volumes:
  - name: registry-cert
    secret:
      secretName: my-secret
And here is the template secret I used:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  ca.crt: <<< some_random_Data >>>
I have mounted this secret at the /some/random/path location (without a colon, so it wouldn't throw errors) and created a symlink between /some/random/path/ca.crt and /etc/docker/certs.d/registry:5000/ca.crt.
Of course you also need to create the directory structure before running ln -s ..., which is why I run mkdir -p ... first.
Let me know if you have any further questions. I'd be happy to answer them.
I have Jenkins running in K8s and now I am trying to run docker build as one of the steps in a Jenkins build. Since Jenkins is running inside Docker, I came to the solution of using Docker in Docker from this post: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25
However, after I modified the deployment YAML file, it still does not work.
There are 2 containers running: Jenkins (the Jenkins image) and dind (the Docker-in-Docker image). I can run docker commands inside the dind container, but I cannot run docker commands in the Jenkins container or pod.
Here is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "9"
    field.cattle.io/publicEndpoints: '[{"addresses":["10.0.0.111"],"port":80,"protocol":"HTTP","serviceName":"jenkins-with-did:jenkins-with-did","ingressName":"jenkins-with-did:jenkins-with-did","hostname":"jenkins.dtl.miproad.ad","allNodes":true}]'
  creationTimestamp: "2020-04-30T06:38:40Z"
  generation: 11
  labels:
    app.kubernetes.io/component: jenkins-master
    app.kubernetes.io/instance: jenkins-with-did
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: jenkins
    helm.sh/chart: jenkins-1.18.0
    io.cattle.field/appId: jenkins-with-did
  name: jenkins-with-did
  namespace: jenkins-with-did
  resourceVersion: "29233038"
  selfLink: /apis/apps/v1/namespaces/jenkins-with-did/deployments/jenkins-with-did
  uid: 6439c48d-c4ce-418c-8553-d06fee13c7d1
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: jenkins-master
      app.kubernetes.io/instance: jenkins-with-did
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2020-04-30T18:15:50Z"
        checksum/config: fda7089fede91f066c406bbba5e2a1d59f71183eebe9bca3fe7de19d13504058
        field.cattle.io/ports: '[[{"containerPort":8080,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"http","protocol":"TCP","sourcePort":0},{"containerPort":50000,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"slavelistener","protocol":"TCP","sourcePort":0}]]'
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: jenkins-master
        app.kubernetes.io/instance: jenkins-with-did
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: jenkins
        helm.sh/chart: jenkins-1.18.0
    spec:
      containers:
      - args:
        - --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
        - --argumentsRealm.roles.$(ADMIN_USER)=admin
        - --httpPort=8080
        env:
        - name: JAVA_OPTS
        - name: JENKINS_OPTS
        - name: JENKINS_SLAVE_AGENT_PORT
          value: "50000"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-password
              name: jenkins-with-did
              optional: false
        - name: ADMIN_USER
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-user
              name: jenkins-with-did
              optional: false
        image: jenkins/jenkins:lts
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 90
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        env:
        - name: DOCKER_HOST
          value: tcp://localhost:2375
        name: jenkins
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 50000
          name: slavelistener
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 50m
            memory: 256Mi
        securityContext:
          capabilities: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        - mountPath: /var/jenkins_home
          name: jenkins-home
        - mountPath: /var/jenkins_config
          name: jenkins-config
          readOnly: true
        - mountPath: /usr/share/jenkins/ref/secrets/
          name: secrets-dir
        - mountPath: /usr/share/jenkins/ref/plugins/
          name: plugin-dir
      - image: docker:18.05-dind
        imagePullPolicy: IfNotPresent
        name: dind
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - /var/jenkins_config/apply_config.sh
        env:
        - name: ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-password
              name: jenkins-with-did
              optional: false
        - name: ADMIN_USER
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-user
              name: jenkins-with-did
              optional: false
        image: jenkins/jenkins:lts
        imagePullPolicy: Always
        name: copy-default-config
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 50m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
        - mountPath: /tmp
          name: tmp
        - mountPath: /var/jenkins_home
          name: jenkins-home
        - mountPath: /var/jenkins_config
          name: jenkins-config
        - mountPath: /usr/share/jenkins/ref/secrets/
          name: secrets-dir
        - mountPath: /var/jenkins_plugins
          name: plugin-dir
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 0
      serviceAccount: jenkins-with-did
      serviceAccountName: jenkins-with-did
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: dind-storage
      - emptyDir: {}
        name: plugins
      - emptyDir: {}
        name: tmp
      - configMap:
          defaultMode: 420
          name: jenkins-with-did
        name: jenkins-config
      - emptyDir: {}
        name: secrets-dir
      - emptyDir: {}
        name: plugin-dir
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-with-did
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-04-30T18:20:47Z"
    lastUpdateTime: "2020-04-30T18:20:47Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-04-30T06:38:40Z"
    lastUpdateTime: "2020-04-30T18:20:47Z"
    message: ReplicaSet "jenkins-with-did-5db85986b6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 11
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Thank you so much in advance!
Your idea is a valid approach.
The regular Jenkins image does not provide the docker CLI, therefore using docker does not work out of the box. You can either build your own Jenkins image that provides the docker command, or you can use a prebuilt Jenkins image including the docker CLI, for example: https://hub.docker.com/r/trion/jenkins-docker-client
You can also use hostPath volumes and mount /usr/bin/docker, /lib64 and /usr/lib64 from the node into your pod. This needs securityContext with privileged: true.
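A minimal sketch of that hostPath approach (the paths are the ones named above; whether /lib64 and /usr/lib64 are sufficient depends on the node's distribution):
containers:
- name: jenkins
  securityContext:
    privileged: true
  volumeMounts:
  - name: docker-bin
    mountPath: /usr/bin/docker # docker CLI borrowed from the node
  - name: lib64
    mountPath: /lib64
  - name: usr-lib64
    mountPath: /usr/lib64
volumes:
- name: docker-bin
  hostPath:
    path: /usr/bin/docker
- name: lib64
  hostPath:
    path: /lib64
- name: usr-lib64
  hostPath:
    path: /usr/lib64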
In Kubernetes, Tomcat's catalina.log is collected from stdout, but localhost_access_log.txt is written to a file in the pod. How do I collect the access log via the Kubernetes log driver? I am currently using Filebeat.
Deploy Filebeat as a sidecar with Tomcat and create a volume mount shared by both the Tomcat and Filebeat containers. The Filebeat container can then read the log files created by the Tomcat container from the shared volume mount.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: filebeat-sidecar
        image: docker.elastic.co/beats/filebeat:7.5.0
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        volumeMounts:
        - name: logs-volume
          mountPath: /usr/local/tomcat/logs
        - name: filebeat-config
          mountPath: /usr/share/filebeat/filebeat.yml
          subPath: filebeat.yml
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: logs-volume
          mountPath: /usr/local/tomcat/logs
      securityContext:
        fsGroup: 1000
      volumes:
      - name: logs-volume
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-sidecar-config
          items:
          - key: filebeat.yml
            path: filebeat.yml
https://capstonec.com/2019/12/16/getting-tomcat-logs-from-kubernetes-pods/
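The filebeat-sidecar-config ConfigMap referenced above is not shown; a minimal sketch of what it could contain (the input path matches the shared mount, the Elasticsearch endpoint is only illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-sidecar-config
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      paths:
      - /usr/local/tomcat/logs/localhost_access_log.*.txt
    output.elasticsearch:
      hosts: ["elasticsearch:9200"] # illustrative endpoint, adjust to your cluster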
I am trying to clone a private Git repository (GitLab) into a Kubernetes pod, using SSH keys for authentication. I have stored my keys in a secret. Here is the YAML file for the job that does the desired task.
Here's the same question, but it doesn't give the exact solution:
Clone a secure git repo in Kubernetes pod
Logs of the init container after execution:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
v3.7.1-66-gfc22ab4fd3 [http://dl-cdn.alpinelinux.org/alpine/v3.7/main]
v3.7.1-55-g7d5f104fa7 [http://dl-cdn.alpinelinux.org/alpine/v3.7/community]
OK: 9064 distinct packages available
OK: 23 MiB in 23 packages
Cloning into '/tmp'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
The YAML file, which works perfectly for a public repo:
apiVersion: batch/v1
kind: Job
metadata:
  name: nest-build-kaniko
  labels:
    app: nest-kaniko-example
spec:
  template:
    spec:
      containers:
      - image: 'gcr.io/kaniko-project/executor:latest'
        name: kaniko
        args: ["--dockerfile=/workspace/Dockerfile",
               "--context=/workspace/",
               "--destination=aws.dest.cred"]
        volumeMounts:
        - mountPath: /workspace
          name: source
        - name: aws-secret
          mountPath: /root/.aws/
        - name: docker-config
          mountPath: /kaniko/.docker/
      initContainers:
      - name: download
        image: alpine:3.7
        command: ["/bin/sh","-c"]
        args: ['apk add --no-cache git && git clone https://github.com/username/repo.git /tmp/']
        volumeMounts:
        - mountPath: /tmp
          name: source
      restartPolicy: Never
      volumes:
      - emptyDir: {}
        name: source
      - name: aws-secret
        secret:
          secretName: aws-secret
      - name: docker-config
        configMap:
          name: docker-config
The YAML file after using git-sync for cloning the private repository:
apiVersion: batch/v1
kind: Job
metadata:
  name: nest-build-kaniko
  labels:
    app: nest-kaniko-example
spec:
  template:
    spec:
      containers:
      - image: 'gcr.io/kaniko-project/executor:latest'
        name: kaniko
        args: ["--dockerfile=/workspace/Dockerfile",
               "--context=/workspace/",
               "--destination=aws.dest.cred"]
        volumeMounts:
        - mountPath: /workspace
          name: source
        - name: aws-secret
          mountPath: /root/.aws/
        - name: docker-config
          mountPath: /kaniko/.docker/
      initContainers:
      - name: git-sync
        image: gcr.io/google_containers/git-sync-amd64:v2.0.4
        volumeMounts:
        - mountPath: /git/tmp
          name: source
        - name: git-secret
          mountPath: "/etc/git-secret"
        env:
        - name: GIT_SYNC_REPO
          value: "git@gitlab.com:username/repo.git"
        - name: GIT_SYNC_SSH
          value: "true"
        - name: GIT_SYNC_DEST
          value: "/tmp"
        - name: GIT_SYNC_ONE_TIME
          value: "true"
        securityContext:
          runAsUser: 0
      restartPolicy: Never
      volumes:
      - emptyDir: {}
        name: source
      - name: aws-secret
        secret:
          secretName: aws-secret
      - name: git-secret
        secret:
          secretName: git-creds
          defaultMode: 256
      - name: docker-config
        configMap:
          name: docker-config
You can use git-sync
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: git-sync-test
spec:
  selector:
    matchLabels:
      app: git-sync-test
  serviceName: "git-sync-test"
  replicas: 1
  template:
    metadata:
      labels:
        app: git-sync-test
    spec:
      containers:
      - name: git-sync-test
        image: <your-main-image>
        volumeMounts:
        - name: service
          mountPath: /var/magic
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync-amd64:v2.0.6
        imagePullPolicy: Always
        volumeMounts:
        - name: service
          mountPath: /magic
        - name: git-secret
          mountPath: /etc/git-secret
        env:
        - name: GIT_SYNC_REPO
          value: <repo-path-you-want-to-clone>
        - name: GIT_SYNC_BRANCH
          value: <repo-branch>
        - name: GIT_SYNC_ROOT
          value: /magic
        - name: GIT_SYNC_DEST
          value: <path-where-you-want-to-clone>
        - name: GIT_SYNC_PERMISSIONS
          value: "0777"
        - name: GIT_SYNC_ONE_TIME
          value: "true"
        - name: GIT_SYNC_SSH
          value: "true"
        securityContext:
          runAsUser: 0
      volumes:
      - name: service
        emptyDir: {}
      - name: git-secret
        secret:
          defaultMode: 256
          secretName: git-creds # your-ssh-key
For more details check this link.
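For reference, a minimal sketch of the git-creds secret mounted above (this assumes git-sync's convention of reading the SSH key from /etc/git-secret/ssh, so the data key is named ssh; the value is a placeholder):
apiVersion: v1
kind: Secret
metadata:
  name: git-creds
type: Opaque
data:
  ssh: <<< base64-encoded private SSH key >>>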
initContainers:
- name: git-sync
  image: gcr.io/google_containers/git-sync-amd64:v2.0.4
  volumeMounts:
  - mountPath: /workspace
    name: source
  - name: git-secret
    mountPath: "/etc/git-secret"
  env:
  - name: GIT_SYNC_REPO
    value: "git@gitlab.com:username/repo.git"
  - name: GIT_SYNC_SSH
    value: "true"
  - name: GIT_SYNC_ROOT
    value: /workspace
  - name: GIT_SYNC_DEST
    value: "tmp"
  - name: GIT_SYNC_ONE_TIME
    value: "true"
NOTE: set the GIT_SYNC_ROOT env to /workspace.
It'll clone into the /workspace/tmp directory in your emptyDir source volume.