How to mount a volume for a Docker container via a YAML manifest?

I am trying to launch a container-vm machine with the following YAML:
version: v1
kind: Pod
spec:
  containers:
    - name: simple-echo
      image: gcr.io/google_containers/busybox
      command: ['nc', '-p', '8080', '-l', '-l', '-e', 'echo', 'hello world!']
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          hostPort: 8080
          protocol: TCP
      volumeMounts:
        - name: string
          mountPath: /home
          readOnly: false
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
    - name: string
      source:
        # Either emptyDir for an empty directory
        # emptyDir: {}
        # Or hostDir for a pre-existing directory on the host
        hostDir:
          path: /home
I expect the host's home directory to be accessible from the container.
However, the container fails to start:
E0619 05:02:09.477574 2212 http.go:54] Failed to read URL: invalid pod:
[spec.volumes[0].source: invalid value '<*>(0xc2080b79e0){HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil>
GitRepo:<nil> Secret:<nil> NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaimVolumeSource:<nil> RBD:<nil>}':
exactly 1 volume type is required spec.containers[0].volumeMounts[0].name: not found 'string']
What is the correct way to specify a volume for the container?

Try replacing hostDir with hostPath as mentioned in v1beta3-conversion-tips-from-v1beta12.
Try replacing
volumes:
  - name: string
    source:
      # Either emptyDir for an empty directory
      # emptyDir: {}
      # Or hostDir for a pre-existing directory on the host
      hostDir:
        path: /home
with
volumes:
  - name: string
    hostPath:
      path: /home
at the bottom of your configuration.
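Putting it together, a complete manifest along these lines should start (a sketch assuming the current apiVersion: v1 pod schema, a metadata.name, and a descriptive volume name instead of string; adjust if your kubelet still expects the older container-vm manifest version):
apiVersion: v1
kind: Pod
metadata:
  name: simple-echo
spec:
  containers:
    - name: simple-echo
      image: gcr.io/google_containers/busybox
      command: ['nc', '-p', '8080', '-l', '-l', '-e', 'echo', 'hello world!']
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          hostPort: 8080
          protocol: TCP
      volumeMounts:
        # Must match the volume name under spec.volumes.
        - name: host-home
          mountPath: /home
          readOnly: false
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
    # hostPath exposes a pre-existing directory on the host to the pod.
    - name: host-home
      hostPath:
        path: /home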

There is a simple way to do it:
version: v1
kind: Pod
spec:
  containers:
    - name: simple-echo
      image: gcr.io/google_containers/busybox
      command: ['nc', '-p', '8080', '-l', '-l', '-e', 'echo', 'hello world!']
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          hostPort: 8080
          protocol: TCP
      volumeMounts:
        - name: string
          mountPath: /home
          readOnly: false
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
    - /path/dir/from/host:/name/of/dir/in/container

Related

How to copy kubernetes/openshift secrets into a volume for init container job?

There is an init container that copies keystore.jks from a Nexus repo into a volume via curl during the Docker image build. Then, once the init container is alive, Python code takes that keystore.jks and makes the necessary updates; then the init container dies. What we are trying to do is store this keystore.jks as a secret in OpenShift, BUT how do we copy the secret into the volume once the init container is alive, so that the Python code can use it as it did before? Thanks in advance for any comments/help!
As @larsks suggests, you can mount the secret as a volume and use it in the main container.
Here is a YAML configuration that might help you understand:
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key
  namespace: acme
data:
  id_rsa: {{ secret_value_base64_encoded }}
Now add the secret to the mount path:
spec:
  template:
    spec:
      containers:
        - image: "my-image:latest"
          name: my-app
          ...
          volumeMounts:
            - mountPath: "/var/my-app"
              name: ssh-key
              readOnly: true
      initContainers:
        - command:
            - sh
            - -c
            - chown -R 1000:1000 /var/my-app # if any changes are required
          image: busybox:1.29.2
          name: set-dir-owner
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /var/my-app
              name: ssh-key
      volumes:
        - name: ssh-key
          secret:
            secretName: ssh-key
As suggested, the better option is to mount the secret directly into the main container, without an init container:
spec:
  template:
    spec:
      containers:
        - image: "my-image:latest"
          name: my-app
          ...
          volumeMounts:
            - mountPath: "/var/my-app"
              name: ssh-key
              readOnly: true
      volumes:
        - name: ssh-key
          secret:
            secretName: ssh-key
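If you would rather not base64-encode the file by hand, you can also create the secret directly from the keystore file with kubectl. This is a sketch; the secret name, key, file path, and namespace are placeholders to adapt:
kubectl create secret generic ssh-key --from-file=keystore.jks=./keystore.jks --namespace acme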

Error in Windows Docker Desktop Kubernetes

In Kubernetes on Windows Docker Desktop, when I try to mount an empty directory I get the following error:
error: error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "", Namespace: "default"
Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "namespace":"default"] "spec":map["containers":[map["image":"nginx:alpine" "name":"nginx" "volumeMounts":[map["mountPath":"/usr/share/nginx/html" "name":"html" "readOnly":%!q(bool=true)]]] map["args":["while true; do date >> /html/index.html; sleep 10; done"] "command":["/bin/sh" "-c"] "image":"alpine" "name":"html-updater" "volumeMounts":[map["mountPath":"/html" "name":"html"]]]] "volumes":[map["emptyDir":map[] "name":"html"]]]]}
from server for: "nginx-alpine-emptyDir.pod.yml": resource name may not be empty
The error message seems a bit unclear and I cannot figure out what's going on.
My yaml configuration is the following:
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: html
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx:alpine
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
    - name: html-updater
      image: alpine
      command: ["/bin/sh", "-c"]
      args:
        - while true; do date >> /html/index.html; sleep 10; done
      volumeMounts:
        - name: html
          mountPath: /html
I forgot to add the metadata name:
metadata:
  name: empty-dir-test
The code after the change is:
apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-test
spec:
  volumes:
    - name: html
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx:alpine
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
    - name: html-updater
      image: alpine
      command: ["/bin/sh", "-c"]
      args:
        - while true; do date >> /html/index.html; sleep 10; done
      volumeMounts:
        - name: html
          mountPath: /html
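One way to sanity-check the fix, assuming the manifest is saved as nginx-alpine-emptyDir.pod.yml: apply it, give the updater container a few seconds to write, then read the file back through the nginx container:
kubectl apply -f nginx-alpine-emptyDir.pod.yml
# wait ~10 seconds so the updater writes at least once
kubectl exec empty-dir-test -c nginx -- cat /usr/share/nginx/html/index.html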

elasticsearch.yml is read-only when loaded using Kubernetes ConfigMap

I am trying to load the elasticsearch.yml file using a ConfigMap while installing Elasticsearch on Kubernetes.
kubectl create configmap elastic-config --from-file=./elasticsearch.yml
The elasticsearch.yml file is loaded into the container with root as its owner and read-only permissions (https://github.com/kubernetes/kubernetes/issues/62099). Since Elasticsearch will not start with root ownership, the pod crashes.
As a work-around, I tried to mount the ConfigMap to a different file and then copy it to the config directory using an initContainer. However, the file in the config directory does not seem to be updated.
Is there anything that I am missing or is there any other way to accomplish this?
Elasticsearch Kubernetes StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  labels:
    app: elasticservice
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elastic-config-vol
              mountPath: /tmp/elasticsearch
            - name: elastic-storage
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: docker-elastic
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "elastic-service"
            - name: discovery.zen.minimum_master_nodes
              value: "1"
            - name: node.master
              value: "true"
            - name: node.data
              value: "true"
            - name: ES_JAVA_OPTS
              value: "-Xmx256m -Xms256m"
      volumes:
        - name: elastic-config-vol
          configMap:
            name: elastic-config
            items:
              - key: elasticsearch.yml
                path: elasticsearch.yml
        - name: elastic-config-dir
          emptyDir: {}
        - name: elastic-storage
          emptyDir: {}
      initContainers:
        # elasticsearch will not run as root, fix permissions for the non-root user
        - name: fix-vol-permission
          image: busybox
          command:
            - sh
            - -c
            - chown -R 1000:1000 /usr/share/elasticsearch/data
          securityContext:
            privileged: true
          volumeMounts:
            - name: elastic-storage
              mountPath: /usr/share/elasticsearch/data
        - name: fix-config-vol-permission
          image: busybox
          command:
            - sh
            - -c
            - cp /tmp/elasticsearch/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
          securityContext:
            privileged: true
          volumeMounts:
            - name: elastic-config-dir
              mountPath: /usr/share/elasticsearch/config
            - name: elastic-config-vol
              mountPath: /tmp/elasticsearch
        # increase default vm.max_map_count to 262144
        - name: increase-vm-max-map-count
          image: busybox
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
        - name: increase-the-ulimit
          image: busybox
          command:
            - sh
            - -c
            - ulimit -n 65536
          securityContext:
            privileged: true
I use:
...
volumeMounts:
  - name: config
    mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
    subPath: elasticsearch.yml
volumes:
  - name: config
    configMap:
      name: es-configmap
without any permissions problem, but you can set permissions with defaultMode if you need to.
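For example, a minimal sketch of setting a file mode on the projected ConfigMap (0644 is just an illustrative value; pick whatever your Elasticsearch user needs):
volumes:
  - name: config
    configMap:
      name: es-configmap
      defaultMode: 0644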

How do I copy a Kubernetes configmap to a write enabled area of a pod?

I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but I want to use ConfigMaps to allow us to change the IP address of the master in the sentinel.conf file. I started this, but Redis can't write to the config file because the mount point for ConfigMaps is read-only.
I was hoping to run an init container and copy the Redis conf to a different directory within the pod, but the init container couldn't find the conf file.
What are my options? Init Container? Something other than ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: redis-sentinel
spec:
replicas: 3
template:
metadata:
labels:
app: redis-sentinel
spec:
hostNetwork: true
containers:
- name: redis-sentinel
image: IP/redis-sentinel
ports:
- containerPort: 63790
- containerPort: 26379
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /usr/local/etc/redis/conf
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: sentinel-redis-config
items:
- key: redis-config-sentinel
path: sentinel.conf
Following @P Ekambaram's proposal, you can try this one:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: redis:5.0.4
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      initContainers:
        - name: copy
          image: redis:5.0.4
          command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
          volumeMounts:
            - mountPath: /redis-master
              name: config
            - mountPath: /redis-master-data
              name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: example-redis-config
            items:
              - key: redis-config
                path: redis.conf
In this example, the initContainer copies the file from the ConfigMap into a writable directory.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Alternatively, create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then run the container process.
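A rough sketch of that startup-script approach for this deployment, overriding the container command so it copies the ConfigMap-mounted file into the writable emptyDir before starting Sentinel (the paths and the redis-sentinel invocation are assumptions to adapt to your image):
containers:
  - name: redis-sentinel
    image: redis:5.0.4
    command: ["sh", "-c"]
    args:
      # copy the read-only ConfigMap file to a writable dir, then exec the real process
      - "cp /usr/local/etc/redis/conf/sentinel.conf /redis-master-data/sentinel.conf && exec redis-sentinel /redis-master-data/sentinel.conf"
    volumeMounts:
      - mountPath: /redis-master-data
        name: data
      - mountPath: /usr/local/etc/redis/conf
        name: config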

How do pods in Google Container Engine talk/link to each other

I'm fiddling around with this example: https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/
My modifications:
- I'm using my own WordPress image. [x] Works
- The service starts (it needed more CPU, 0.8 instead of 0.5, but now it works).
- I want to use MariaDB instead of MySQL. [ ] Fails!
I can't figure out how the two pods link together! (~5 hours in and still failing.)
Here are my .yaml files:
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
    - image: <my image on gcr.io>
      name: wpsite
      env:
        - name: WORDPRESS_DB_PASSWORD
          # Change this - must match mysql.yaml password.
          value: example
      ports:
        - containerPort: 80
          name: wpsite
      volumeMounts:
        # Name must match the volume name below.
        - name: wpsite-disk
          # Mount path within the container.
          mountPath: /var/www/html
  volumes:
    - name: wpsite-disk
      gcePersistentDisk:
        # This GCE persistent disk must already exist.
        pdName: wpsite-disk
        fsType: ext4
service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: wpsite
  name: wpsite
spec:
  type: LoadBalancer
  ports:
    # The port that this service should serve on.
    - port: 80
      targetPort: 80
      protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
mariadb:
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
    - resources:
        limits:
          # 0.5 did not work
          # (error message shows up in: kubectl describe pod mariadb)
          cpu: 0.8
      image: mariadb:10.1
      name: mariadb
      env:
        - name: MYSQL_ROOT_PASSWORD
          # Change this password!
          value: example
      ports:
        - containerPort: 3306
          name: mariadb
      volumeMounts:
        # This name must match the volumes.name below.
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mariadb-persistent-storage
      gcePersistentDisk:
        # This disk must already exist.
        pdName: mariadb-disk
        fsType: ext4
maria-db-service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
    # The port that this service should serve on.
    - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mysql
kubectl logs wpsite shows error messages like this: Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 10
OK - found it out!
It's the name in mariadb-service.yaml: metadata.name must be mysql, not mariadb, and the selector in mariadb-service must point to mariadb (the pod).
Here are the working files:
mariadb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
    - resources:
        limits:
          # 0.5 did not work
          # (error message shows up in: kubectl describe pod mariadb)
          cpu: 0.8
      image: mariadb:10.1
      name: mariadb
      env:
        - name: MYSQL_ROOT_PASSWORD
          # Change this password!
          value: example
      ports:
        - containerPort: 3306
          name: mariadb
      volumeMounts:
        # This name must match the volumes.name below.
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mariadb-persistent-storage
      gcePersistentDisk:
        # This disk must already exist.
        pdName: mariadb-disk
        fsType: ext4
mariadb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
    # The port that this service should serve on.
    - port: 3306
  # Label keys and values that must match in
  # order to receive traffic for this service.
  selector:
    name: mariadb
wpsite.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  containers:
    - image: <change this to your image name on gcr.io>
      name: wpsite
      env:
        - name: WORDPRESS_DB_PASSWORD
          # Change this - must match mysql.yaml password.
          value: example
      ports:
        - containerPort: 80
          name: wpsite
      volumeMounts:
        # Name must match the volume name below.
        - name: wpsite-disk
          # Mount path within the container.
          mountPath: /var/www/html
  volumes:
    - name: wpsite-disk
      gcePersistentDisk:
        # This GCE persistent disk must already exist.
        pdName: wpsite-disk
        fsType: ext4
wpsite-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wpsite
  labels:
    name: wpsite
spec:
  type: LoadBalancer
  ports:
    # The port that this service should serve on.
    - port: 80
      targetPort: 80
      protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: wpsite
With these settings I run the following (my yaml files are under gke/):
$ kubectl create -f gke/mariadb.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/mariadb-service.yaml
# Check (note: the service for mariadb is named mysql)
$ kubectl get service mysql
$ kubectl create -f gke/wpsite.yaml
# Check
$ kubectl get pod
$ kubectl create -f gke/wpsite-service.yaml
# Check
$ kubectl describe service wpsite
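As an alternative to renaming the service to mysql, the official wordpress image (and images built on top of it) can usually be pointed at an arbitrary database host via WORDPRESS_DB_HOST, so the service could keep the name mariadb. A sketch of the relevant env section, assuming your custom image honours that variable:
env:
  - name: WORDPRESS_DB_HOST
    # service name (and port) of the MariaDB service, resolved via cluster DNS
    value: mariadb:3306
  - name: WORDPRESS_DB_PASSWORD
    value: example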
Hope this helps someone...
