Kubernetes - Trying to create pod with init container - docker

I am trying to play with init containers. I want an init container to create a file, and the main container to check that the file exists and then sleep for a while.
My YAML:
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -e /workdir/test.txt ]; then sleep 99999; fi']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
To debug this, I start an alpine shell and run the commands by hand:
kubectl run alpine --rm -ti --image=alpine /bin/sh
If you don't see a command prompt, try pressing enter.
/ # if [ -e /workdir/test.txt ]; then sleep 3; fi
/ # mkdir /workdir; echo>/workdir/test.txt
/ # if [ -e /workdir/test.txt ]; then sleep 3; fi
(here the shell sleeps for 3 seconds)
/ #
The commands seem to work as expected.
But on my real k8s cluster the main container just ends up in CrashLoopBackOff.
kubectl describe pod init-test-pod
shows me only this:
Containers:
  myapp-container:
    Container ID:   docker://xxx
    Image:          alpine
    Image ID:       docker-pullable://alpine@sha256:xxx
    Port:           <none>
    Host Port:      <none>
    Command:
      sh
      -c
      if [ -e /workdir/test.txt ]; then sleep 99999; fi
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
    Ready:          False
    Restart Count:  3
    Environment:    <none>

The problem here is that your main container never sees the file you create: the init container has its own filesystem, and whatever it writes there is gone once it finishes. The main container's if test therefore fails, the shell exits immediately with code 0, and with the Pod's default restartPolicy: Always it is restarted again and again, which is exactly what CrashLoopBackOff means. To share the folder between the two containers you need a volume, for example one backed by a PersistentVolumeClaim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: mypvc
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -f /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - name: mypvc
      mountPath: /workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
    volumeMounts:
    - name: mypvc
      mountPath: /workdir
You can also use an emptyDir volume instead, which avoids the need for a PVC:
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  volumes:
  - name: mydir
    emptyDir: {}
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -f /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - name: mydir
      mountPath: /workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
    volumeMounts:
    - name: mydir
      mountPath: /workdir
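To verify the fix, something along these lines should work (init-test-pod.yaml is just a placeholder name for whichever of the manifests above you save and apply):
kubectl apply -f init-test-pod.yaml
kubectl get pod init-test-pod                          # should reach Running and stay there
kubectl exec init-test-pod -- ls -l /workdir/test.txt  # file created by the init container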

That's because your two containers have separate filesystems. You need to share the file using an emptyDir volume:
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -e /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - mountPath: /workdir
      name: workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo>/workdir/test.txt']
    volumeMounts:
    - mountPath: /workdir
      name: workdir
  volumes:
  - name: workdir
    emptyDir: {}
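Since the volume is mounted at /workdir in both containers anyway, the mkdir in the init command is redundant; as a small simplification (using the same emptyDir volume as above), the init container could just be:
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'touch /workdir/test.txt']
    volumeMounts:
    - mountPath: /workdir
      name: workdir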

Related

cp: can't create '/node_modules/mongo-express/config.js': File exists

I have a problem with Kubernetes volume mounts.
The mongo-express container has a file /node_modules/mongo-express/config.js, and I need to overwrite it with my own /tmp/config.js.
I am trying to copy my custom config.js from /tmp (mounted there from a ConfigMap) into /node_modules/mongo-express inside the container, but I get the following error:
cp: can't create '/node_modules/mongo-express/config.js': File exists
Below is the deployment.yaml I am using:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
spec:
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express:latest
        command:
        - sh
        - -c
        - cp /tmp/config.js /node_modules/mongo-express
        ports:
        - name: mongo-express
          containerPort: 8081
        volumeMounts:
        - name: custom-config-js
          mountPath: /tmp
      volumes:
      - name: custom-config-js
        configMap:
          name: mongodb-express-config-js
I tried:
cp -f /tmp/config.js /node_modules/mongo-express
cp -r /tmp/config.js /node_modules/mongo-express
\cp -r /tmp/config.js /node_modules/mongo-express
and much more, but with no success. Any help is much appreciated.
Most container images are immutable.
What you probably want here is a subPath mount instead:
volumeMounts:
- mountPath: /node_modules/mongo-express/config.js
  name: custom-config-js
  subPath: config.js
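For context, here is a sketch of how that fits into the Deployment from the question (assuming the mongodb-express-config-js ConfigMap has a config.js key). With the ConfigMap entry mounted directly over the target path there is no need for the cp command, so the container can keep its default entrypoint:
      containers:
      - name: mongo-express
        image: mongo-express:latest
        ports:
        - name: mongo-express
          containerPort: 8081
        volumeMounts:
        - name: custom-config-js
          mountPath: /node_modules/mongo-express/config.js
          subPath: config.js
      volumes:
      - name: custom-config-js
        configMap:
          name: mongodb-express-config-js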

What is kubeflow gpu resource node allocation criteria?

I'm curious about Kubeflow GPU resources. I'm running the Job below.
The only place where I request a GPU is the first container (trainer), with a limit of 1 GPU. However, the event message tells me 0/4 nodes are available: 4 Insufficient nvidia.com/gpu.
Why does the message mention 4 nodes when I requested only 1 GPU? Am I misreading it? Thanks much in advance.
FYI, I have 3 worker nodes, each with 1 GPU.
apiVersion: batch/v1
kind: Job
metadata:
  name: saint-train-3
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    spec:
      initContainers:
      - name: dataloader
        image: <AWS CLI Image>
        command: ["/bin/sh", "-c", "aws s3 cp s3://<Kubeflow Bucket>/kubeflowdata.tar.gz /s3-data; cd /s3-data; tar -xvzf kubeflowdata.tar.gz; cd kubeflow_data; ls"]
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef: {key: AWS_ACCESS_KEY_ID, name: aws-secret}
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef: {key: AWS_SECRET_ACCESS_KEY, name: aws-secret}
      containers:
      - name: trainer
        image: <Our Model Image>
        command: ["/bin/sh", "-c", "wandb login <ID>; python /opt/ml/src/main.py --base_path='/s3-data/kubeflow_data' --debug_mode='0' --project='kubeflow-test' --name='test2' --gpu=0 --num_epochs=1 --num_workers=4"]
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
        resources:
          limits:
            nvidia.com/gpu: "1"
      - name: gpu-watcher
        image: pytorch/pytorch:latest
        command: ["/bin/sh", "-c", "--"]
        args: ["while true; do sleep 30; done;"]
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
      volumes:
      - name: s3-data
        persistentVolumeClaim:
          claimName: test-claim
      restartPolicy: OnFailure
  backoffLimit: 6
0/4 nodes are available: 4 Insufficient nvidia.com/gpu
This means the scheduler checked all 4 nodes in the cluster and none of them has a free allocatable nvidia.com/gpu resource; the 4 refers to the number of nodes it rejected, not to a number of GPUs requested. In other words, your nodes do not currently advertise (or have no remaining) nvidia.com/gpu capacity, which usually means the NVIDIA device plugin is not running on them.
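A quick way to check what each node actually advertises (a generic kubectl check, nothing Kubeflow-specific):
kubectl describe nodes | grep -E 'Name:|nvidia.com/gpu'
If nvidia.com/gpu does not show up under Capacity and Allocatable for any node, the device plugin is not exposing the GPUs (or they are already fully requested by other pods).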

Missing write permissions to the following paths: /var/www/html/pub/media

kubectl -n magento logs magento-install-jssk6
In the install Job I am getting the following (right after "Database found"):
In ConfigModel.php line 166: Missing write permissions to the following paths: /var/www/html/pub/media
The Job manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: magento-install
  namespace: magento
spec:
  template:
    metadata:
      name: install
      labels:
        app: magento-install
        k8s-app: magento
    spec:
      containers:
      - name: magento-setup
        image: kiweeteam/magento2:vanilla-2.3.4-php7.3-fpm
        command: ["/bin/sh"]
        args:
        - -c
        - |
          /bin/bash <<'EOF'
          bin/install.sh
          php bin/magento setup:perf:generate-fixtures setup/performance-toolkit/profiles/ce/small.xml
          magerun index:list | awk '{print $2}' | tail -n+4 | xargs -I{} magerun index:set-mode schedule {}
          magerun cache:flush
          EOF
        envFrom:
        - configMapRef:
            name: config
        volumeMounts:
        - mountPath: /var/www/html/pub/media
          name: media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      restartPolicy: OnFailure
and when I try to change the ownership I get:
chown: changing ownership of '/var/www/html/pub/media': Operation not permitted
This happens because you run chown as the www-data user while the current owner of the directory is root.
You can resolve the issue with an init container that runs as root (user id 0). Below is a modified version of your magento-install Job with the init container added:
apiVersion: batch/v1
kind: Job
metadata:
  name: magento-install
  namespace: magento
spec:
  template:
    metadata:
      name: install
      labels:
        app: magento-install
        k8s-app: magento
    spec:
      initContainers:
      - name: magento-chown
        securityContext:
          runAsUser: 0
        image: kiweeteam/magento2:vanilla-2.3.4-php7.3-fpm
        command: ['sh', '-c', 'chown -R www-data:www-data /var/www/html/pub/media']
        volumeMounts:
        - name: media
          mountPath: "/var/www/html/pub/media"
      containers:
      - name: magento-setup
        image: kiweeteam/magento2:vanilla-2.3.4-php7.3-fpm
        command: ["/bin/sh"]
        args:
        - -c
        - |
          /bin/bash <<'EOF'
          bin/install.sh
          php bin/magento setup:perf:generate-fixtures setup/performance-toolkit/profiles/ce/small.xml
          magerun index:list | awk '{print $2}' | tail -n+4 | xargs -I{} magerun index:set-mode schedule {}
          magerun cache:flush
          EOF
        envFrom:
        - configMapRef:
            name: config
        volumeMounts:
        - mountPath: /var/www/html/pub/media
          name: media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      restartPolicy: OnFailure
Once you attach to your newly created Pod by using:
kubectl exec -ti -n magento magento-install-z66qg -- /bin/bash
You'll see that the owner of the /var/www/html/pub/media directory is no longer root but the www-data user:
www-data@magento-install-z66qg:~/html$ ls -ld /var/www/html/pub/media
drwxr-xr-x 3 www-data www-data 4096 Jul 27 18:45 /var/www/html/pub/media
We can simplify it even more. The init container doesn't even need to use the kiweeteam/magento2:vanilla-2.3.4-php7.3-fpm image. It might as well be a simple container based on busybox, which runs as root by default, so you can omit the securityContext from the previous example, and your initContainers section will look as follows:
initContainers:
- name: magento-chown
  image: busybox
  command: ['sh', '-c', 'chown -R www-data:www-data /var/www/html/pub/media']
  volumeMounts:
  - name: media
    mountPath: "/var/www/html/pub/media"
The final effect will be exactly the same.

elasticsearch.yml is read-only when loaded using Kubernetes ConfigMap

I am trying to load elasticsearch.yml file using ConfigMap while installing ElasticSearch using Kubernetes.
kubectl create configmap elastic-config --from-file=./elasticsearch.yml
The elasticsearch.yml file is loaded into the container with root as its owner and read-only permissions (https://github.com/kubernetes/kubernetes/issues/62099). Since ElasticSearch will not start with root ownership, the pod crashes.
As a work-around, I tried to mount the ConfigMap to a different file and then copy it to the config directory using an initContainer. However, the file in the config directory does not seem to be updated.
Is there anything that I am missing or is there any other way to accomplish this?
ElasticSearch Kubernetes StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  labels:
    app: elasticservice
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: docker-elastic
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "elastic-service"
        - name: discovery.zen.minimum_master_nodes
          value: "1"
        - name: node.master
          value: "true"
        - name: node.data
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xmx256m -Xms256m"
      volumes:
      - name: elastic-config-vol
        configMap:
          name: elastic-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
      - name: elastic-config-dir
        emptyDir: {}
      - name: elastic-storage
        emptyDir: {}
      initContainers:
      # elasticsearch will not run as non-root user, fix permissions
      - name: fix-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
      - name: fix-config-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - cp /tmp/elasticsearch/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-config-dir
          mountPath: /usr/share/elasticsearch/config
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
      # increase default vm.max_map_count to 262144
      - name: increase-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
I use:
...
volumeMounts:
- name: config
  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
  subPath: elasticsearch.yml
volumes:
- name: config
  configMap:
    name: es-configmap
without any permission problems, but you can additionally set file permissions with defaultMode.
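For example, a minimal sketch of setting defaultMode on the ConfigMap volume (0644 is just an illustrative mode; es-configmap is the ConfigMap from the snippet above):
volumes:
- name: config
  configMap:
    name: es-configmap
    defaultMode: 0644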

Disable Transparent Huge Pages from Kubernetes

I deploy Redis container via Kubernetes and get the following warning:
WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled
Is it possible to disable THP via Kubernetes? Perhaps via init-containers?
Yes, with init-containers it's quite straightforward:
apiVersion: v1
kind: Pod
metadata:
  name: thp-test
spec:
  restartPolicy: Never
  terminationGracePeriodSeconds: 1
  volumes:
  - name: host-sys
    hostPath:
      path: /sys
  initContainers:
  - name: disable-thp
    image: busybox
    volumeMounts:
    - name: host-sys
      mountPath: /host-sys
    command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
Demo (notice that this is a system wide setting):
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
$ kubectl create -f thp-test.yaml
pod "thp-test" created
$ kubectl logs thp-test
always madvise [never]
$ kubectl delete pod thp-test
pod "thp-test" deleted
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
I don't know if what I did is a good idea, but we needed to deactivate THP on all our K8s VMs for all our apps, so I used a DaemonSet instead of adding an init container to every stack:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: thp-disable
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: thp-disable
  template:
    metadata:
      labels:
        name: thp-disable
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 1
      volumes:
      - name: host-sys
        hostPath:
          path: /sys
      initContainers:
      - name: disable-thp
        image: busybox
        volumeMounts:
        - name: host-sys
          mountPath: /host-sys
        command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
      - name: busybox
        image: busybox
        command: ["watch", "-n", "600", "cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
I think it's a little dirty but it works.
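To confirm the DaemonSet is running on every node and the setting took effect, a check along these lines should do (THATNODE stands for any node, as in the demo above):
kubectl -n kube-system get pods -l name=thp-disable -o wide
ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled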
