Error in Windows Docker Desktop Kubernetes

In Kubernetes on Windows Docker Desktop, when I try to mount an emptyDir volume I get the following error:
error: error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "", Namespace: "default"
Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "namespace":"default"] "spec":map["containers":[map["image":"nginx:alpine" "name":"nginx" "volumeMounts":[map["mountPath":"/usr/share/nginx/html" "name":"html" "readOnly":%!q(bool=true)]]] map["args":["while true; do date >> /html/index.html; sleep 10; done"] "command":["/bin/sh" "-c"] "image":"alpine" "name":"html-updater" "volumeMounts":[map["mountPath":"/html" "name":"html"]]]] "volumes":[map["emptyDir":map[] "name":"html"]]]]}
from server for: "nginx-alpine-emptyDir.pod.yml": resource name may not be empty
The error message seems a bit unclear and I cannot figure out what is going on.
My YAML configuration is the following:
apiVersion: v1
kind: Pod
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: html-updater
    image: alpine
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date >> /html/index.html; sleep 10; done
    volumeMounts:
    - name: html
      mountPath: /html

I forgot to add the metadata name:
metadata:
  name: empty-dir-test
The code after the change is:
apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-test
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: html-updater
    image: alpine
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date >> /html/index.html; sleep 10; done
    volumeMounts:
    - name: html
      mountPath: /html
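To sanity-check the fix, something like the following should work (the filename is the one from the error message above; the last step assumes the updater container has had ten seconds or so to append its first line):
kubectl apply -f nginx-alpine-emptyDir.pod.yml
kubectl get pod empty-dir-test
# read the shared emptyDir back through the nginx container's read-only mount
kubectl exec empty-dir-test -c nginx -- cat /usr/share/nginx/html/index.html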

Related

cp: can't create '/node_modules/mongo-express/config.js': File exists

Problem with Kubernetes volume mounts.
The mongo-express container has a file /node_modules/mongo-express/config.js.
I need to overwrite /node_modules/mongo-express/config.js with my /tmp/config.js.
I am trying to copy my custom config.js from /tmp (volume-mounted from a ConfigMap) to the container path /node_modules/mongo-express.
But I am not able to do that and get the error below:
cp: can't create '/node_modules/mongo-express/config.js': File exists
Below is the deployment.yaml I am using to achieve this.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
spec:
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express:latest
        command:
        - sh
        - -c
        - cp /tmp/config.js /node_modules/mongo-express
        ports:
        - name: mongo-express
          containerPort: 8081
        volumeMounts:
        - name: custom-config-js
          mountPath: /tmp
      volumes:
      - name: custom-config-js
        configMap:
          name: mongodb-express-config-js
I tried:
cp -f /tmp/config.js /node_modules/mongo-express
cp -r /tmp/config.js /node_modules/mongo-express
\cp -r /tmp/config.js /node_modules/mongo-express
and more, but with no success. Any help is much appreciated.
Most container images are immutable.
What you probably want here is a subPath mount instead:
volumeMounts:
- mountPath: /node_modules/mongo-express/config.js
  name: custom-config-js
  subPath: config.js
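A minimal sketch of how that could slot into the Deployment from the question (same names as above; the cp command is dropped so the image's default entrypoint runs, which is an assumption, not part of the original answer):
containers:
- name: mongo-express
  image: mongo-express:latest
  ports:
  - name: mongo-express
    containerPort: 8081
  volumeMounts:
  # mount only this one file over the existing config.js instead of copying it
  - mountPath: /node_modules/mongo-express/config.js
    name: custom-config-js
    subPath: config.js
volumes:
- name: custom-config-js
  configMap:
    name: mongodb-express-config-js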

Kubernetes Permission denied in container

My company bought software that we're trying to deploy on IBM Cloud, using Kubernetes and a given private Docker repository. Once deployed, there is always a Kubernetes error: "Back-off restarting failed container". So I read the logs to understand why the container is restarting, and here is the error:
Caused by: java.io.FileNotFoundException: /var/yseop-log/yseop-manager.log (Permission denied)
So I deduced that I just had to change permissions in the Kubernetes manifest. Since I'm using a Deployment, I tried the following initContainer:
initContainers:
- name: permission-fix
  image: busybox
  command: ['sh', '-c']
  args: ['chmod -R 777 /var']
  volumeMounts:
  - mountPath: /var/yseop-engine
    name: yseop-data
  - mountPath: /var/yseop-data/yseop-manager
    name: yseop-data
  - mountPath: /var/yseop-log
    name: yseop-data
This didn't work because I'm not allowed to execute chmod on read-only folders as a non-root user.
So I tried remounting those volumes, but that also failed, because I'm not a root user.
I then found out about running as a specific user and group. To find out which user and group I had to set in my security context, I read the Dockerfile; here is the user and group:
USER 1001:0
So I thought I could just write this in my deployment file:
securityContext:
  runAsUser: 1001
  rusAsGroup: 0
Obviously, that didn't work either, because I'm not allowed to run as group 0.
So I still don't know what to do in order to properly deploy this image. The image works when doing a docker pull and exec on my computer, but it's not working on Kubernetes.
Here is my complete volume file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    ibm.io/auto-create-bucket: "true"
    ibm.io/auto-delete-bucket: "false"
    ibm.io/bucket: ""
    ibm.io/secret-name: "cos-write-access"
    ibm.io/endpoint: https://s3.eu-de.cloud-object-storage.appdomain.cloud
  name: yseop-pvc
  namespace: ns
  labels:
    app: yseop-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc
  volumeMode: Filesystem
And here is my full deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yseop-manager
  namespace: ns
spec:
  selector:
    matchLabels:
      app: yseop-manager
  template:
    metadata:
      labels:
        app: yseop-manager
    spec:
      securityContext:
        runAsUser: 1001
        rusAsGroup: 0
      initContainers:
      - name: permission-fix
        image: busybox
        command: ['sh', '-c']
        args: ['chmod -R 777 /var']
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      containers:
      - name: yseop-manager
        image: IMAGE
        imagePullPolicy: IfNotPresent
        env:
        - name: SECURITY_USERS_DEFAULT_ENABLED
          value: "true"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: yseop-data
        persistentVolumeClaim:
          claimName: yseop-pvc
Thanks for helping
Can you please try including a supplementary group ID in the security context, like:
securityContext:
  runAsUser: 1001
  fsGroup: 2000
By default, runAsGroup is 0, which is root. The link below might give more insight about this:
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Working YAML content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yseop-manager
  namespace: ns
spec:
  selector:
    matchLabels:
      app: yseop-manager
  template:
    metadata:
      labels:
        app: yseop-manager
    spec:
      securityContext:
        fsGroup: 2000
      initContainers:
      - name: permission-fix
        image: busybox
        command: ['sh', '-c']
        args: ['chown -R root:2000 /var']
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      containers:
      - name: yseop-manager
        image: IMAGE
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 1001
          runAsGroup: 2000
        env:
        - name: SECURITY_USERS_DEFAULT_ENABLED
          value: "true"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: yseop-data
        persistentVolumeClaim:
          claimName: yseop-pvc
I was not told by my company that we have restrictive Pod Security Policies. Because of those, the volumes are read-only and there is no way I could have written anything to them.
The solution is as follows:
volumes:
- name: yseop-data
  emptyDir: {}
Then I have to specify a path in volumeMounts (which was already done) and create a PVC, so my data would be persistent.
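A rough sketch of how that can be combined with the existing mounts, assuming only the log directory needs the writable emptyDir while the data paths stay on the PVC (the yseop-logs volume name is made up here; the paths are the ones already used in the Deployment):
volumes:
- name: yseop-data
  persistentVolumeClaim:
    claimName: yseop-pvc
- name: yseop-logs        # hypothetical name for the writable scratch volume
  emptyDir: {}
...
volumeMounts:
- mountPath: /var/yseop-engine
  name: yseop-data
- mountPath: /var/yseop-data/yseop-manager
  name: yseop-data
- mountPath: /var/yseop-log
  name: yseop-logs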

elasticsearch.yml is read-only when loaded using Kubernetes ConfigMap

I am trying to load the elasticsearch.yml file using a ConfigMap while installing Elasticsearch on Kubernetes.
kubectl create configmap elastic-config --from-file=./elasticsearch.yml
The elasticsearch.yml file is loaded into the container with root as its owner and read-only permissions (https://github.com/kubernetes/kubernetes/issues/62099). Since Elasticsearch will not start with root ownership, the pod crashes.
As a workaround, I tried to mount the ConfigMap to a different file and then copy it to the config directory using an initContainer. However, the file in the config directory does not seem to be updated.
Is there anything that I am missing or is there any other way to accomplish this?
Elasticsearch Kubernetes StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  labels:
    app: elasticservice
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: docker-elastic
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "elastic-service"
        - name: discovery.zen.minimum_master_nodes
          value: "1"
        - name: node.master
          value: "true"
        - name: node.data
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xmx256m -Xms256m"
      volumes:
      - name: elastic-config-vol
        configMap:
          name: elastic-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
      - name: elastic-config-dir
        emptyDir: {}
      - name: elastic-storage
        emptyDir: {}
      initContainers:
      # elasticsearch will not run as non-root user, fix permissions
      - name: fix-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-storage
          mountPath: /usr/share/elasticsearch/data
      - name: fix-config-vol-permission
        image: busybox
        command:
        - sh
        - -c
        - cp /tmp/elasticsearch/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
        securityContext:
          privileged: true
        volumeMounts:
        - name: elastic-config-dir
          mountPath: /usr/share/elasticsearch/config
        - name: elastic-config-vol
          mountPath: /tmp/elasticsearch
      # increase default vm.max_map_count to 262144
      - name: increase-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
I use:
...
volumeMounts:
- name: config
  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
  subPath: elasticsearch.yml
volumes:
- name: config
  configMap:
    name: es-configmap
without any permissions problems, but you can also set file permissions with defaultMode.
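For the defaultMode part, a minimal sketch (0644 is just an example mode; es-configmap is the name from the snippet above):
volumes:
- name: config
  configMap:
    name: es-configmap
    # make the projected elasticsearch.yml world-readable, owner-writable
    defaultMode: 0644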

How do I copy a Kubernetes ConfigMap to a write-enabled area of a pod?

I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but I want to use ConfigMaps to allow us to change the IP address of the master in the sentinel.conf file. I started this, but Redis can't write to the config file because the mount point for ConfigMaps is read-only.
I was hoping to run an init container and copy the Redis conf to a different directory within the pod, but the init container couldn't find the conf file.
What are my options? An init container? Something other than a ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
      - name: redis-sentinel
        image: IP/redis-sentinel
        ports:
        - containerPort: 63790
        - containerPort: 26379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /usr/local/etc/redis/conf
          name: config
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: sentinel-redis-config
          items:
          - key: redis-config-sentinel
            path: sentinel.conf
Following P Ekambaram's proposal, you can try this one:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
      - name: redis-sentinel
        image: redis:5.0.4
        ports:
        - containerPort: 63790
        - containerPort: 26379
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /usr/local/etc/redis/conf
          name: config
      initContainers:
      - name: copy
        image: redis:5.0.4
        command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
        volumeMounts:
        - mountPath: /redis-master
          name: config
        - mountPath: /redis-master-data
          name: data
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: example-redis-config
          items:
          - key: redis-config
            path: redis.conf
In this example, the initContainer copies the file from the ConfigMap into a writable directory.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Alternatively, create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then run the container process; see the sketch below.
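A rough sketch of that idea using the names from the manifests above, overriding the container command so it copies the read-only sentinel.conf into the writable emptyDir before starting Sentinel (the mount paths and the redis-sentinel entrypoint are assumptions drawn from this question, not from the original answer):
containers:
- name: redis-sentinel
  image: redis:5.0.4
  command: ["sh", "-c"]
  args:
  - |
    # copy the ConfigMap-provided file into the writable volume, then start Sentinel
    cp /usr/local/etc/redis/conf/sentinel.conf /redis-master-data/sentinel.conf
    exec redis-sentinel /redis-master-data/sentinel.conf
  volumeMounts:
  - mountPath: /redis-master-data
    name: data
  - mountPath: /usr/local/etc/redis/conf
    name: config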

How to mount a volume for a Docker container via a YAML manifest?

I am trying to launch a container-vm machine with the following YAML:
version: v1
kind: Pod
spec:
  containers:
  - name: simple-echo
    image: gcr.io/google_containers/busybox
    command: ['nc', '-p', '8080', '-l', '-l', '-e', 'echo', 'hello world!']
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
      hostPort: 8080
      protocol: TCP
    volumeMounts:
    - name: string
      mountPath: /home
      readOnly: false
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
  - name: string
    source:
      # Either emptyDir for an empty directory
      # emptyDir: {}
      # Or hostDir for a pre-existing directory on the host
      hostDir:
        path: /home
I expect the host home directory to be accessible from the container.
However, the container fails to start:
E0619 05:02:09.477574 2212 http.go:54] Failed to read URL: invalid pod:
[spec.volumes[0].source: invalid value '<*>(0xc2080b79e0){HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil>
GitRepo:<nil> Secret:<nil> NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaimVolumeSource:<nil> RBD:<nil>}':
exactly 1 volume type is required spec.containers[0].volumeMounts[0].name: not found 'string']
What is the correct way to specify a volume for the container?
Try replacing hostDir with hostPath as mentioned in v1beta3-conversion-tips-from-v1beta12.
Try replacing
volumes:
- name: string
  source:
    # Either emptyDir for an empty directory
    # emptyDir: {}
    # Or hostDir for a pre-existing directory on the host
    hostDir:
      path: /home
with
volumes:
- name: string
  hostPath:
    path: /home
at the bottom of your configuration.
There is a simple way to do it:
version: v1
kind: Pod
spec:
  containers:
  - name: simple-echo
    image: gcr.io/google_containers/busybox
    command: ['nc', '-p', '8080', '-l', '-l', '-e', 'echo', 'hello world!']
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
      hostPort: 8080
      protocol: TCP
    volumeMounts:
    - name: string
      mountPath: /home
      readOnly: false
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
  - /path/dir/from/host:/name/of/dir/in/container
