cp: can't create '/node_modules/mongo-express/config.js': File exists - docker

I have a problem with Kubernetes volume mounts.
The mongo-express container has a file /node_modules/mongo-express/config.js that I need to overwrite with my /tmp/config.js.
I am trying to copy my custom config.js from /tmp (volume-mounted from a ConfigMap) into the container path /node_modules/mongo-express, but I am not able to do that and get the error below:
cp: can't create '/node_modules/mongo-express/config.js': File exists
Below is the deployment.yaml I am using to achieve this.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
spec:
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express:latest
          command:
            - sh
            - -c
            - cp /tmp/config.js /node_modules/mongo-express
          ports:
            - name: mongo-express
              containerPort: 8081
          volumeMounts:
            - name: custom-config-js
              mountPath: /tmp
      volumes:
        - name: custom-config-js
          configMap:
            name: mongodb-express-config-js
I tried:
cp -f /tmp/config.js /node_modules/mongo-express
cp -r /tmp/config.js /node_modules/mongo-express
\cp -r /tmp/config.js /node_modules/mongo-express
and much more, but with no success. Any help is much appreciated.

Most container images are immutable.
What you probably want here is a subPath mount instead:
volumeMounts:
  - mountPath: /node_modules/mongo-express/config.js
    name: custom-config-js
    subPath: config.js
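Applied to the Deployment above, that could look roughly like the sketch below. It assumes the key inside the mongodb-express-config-js ConfigMap is named config.js; adjust the subPath if your key differs.
containers:
  - name: mongo-express
    image: mongo-express:latest
    ports:
      - name: mongo-express
        containerPort: 8081
    volumeMounts:
      - name: custom-config-js
        mountPath: /node_modules/mongo-express/config.js
        subPath: config.js # assumed ConfigMap key
volumes:
  - name: custom-config-js
    configMap:
      name: mongodb-express-config-js
With this in place the cp command override is no longer needed, so the container can start with the image's default entrypoint.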

Related

Why can't I read files from a shared PersistentVolumeClaim between containers in Kubernetes?

I have a Docker image felipeogutierrez/tpch-dbgen that I build using docker-compose and push to the Docker Hub registry using Travis CI.
version: "3.7"
services:
other-images: ....
tpch-dbgen:
build: ../docker/tpch-dbgen
image: felipeogutierrez/tpch-dbgen
volumes:
- tpch-dbgen-data:/opt/tpch-dbgen/data/
- datarate:/tmp/
stdin_open: true
and this is the Dockerfile to build this image:
FROM gcc AS builder
RUN mkdir -p /opt
COPY ./generate-tpch-dbgen.sh /opt/generate-tpch-dbgen.sh
WORKDIR /opt
RUN chmod +x generate-tpch-dbgen.sh && ./generate-tpch-dbgen.sh
In the end, this script creates a directory /opt/tpch-dbgen/data/ with some files that I would like to read from another Docker image that I run on Kubernetes. Then I have a Flink image that I created to run on Kubernetes. This image starts 3 Flink TaskManagers and one stream application that reads files from the tpch-dbgen-data volume. I think the right approach is to create a PersistentVolumeClaim so I can share the directory /opt/tpch-dbgen/data/ from the image felipeogutierrez/tpch-dbgen with my Flink image in Kubernetes. So, first I have this file to create the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tpch-dbgen-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
Then I create an initContainers entry to launch the image felipeogutierrez/tpch-dbgen, and after that launch my image felipeogutierrez/explore-flink:1.11.1-scala_2.12:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      initContainers:
        - name: tpch-dbgen
          image: felipeogutierrez/tpch-dbgen
          #imagePullPolicy: Always
          env:
          command: ["ls"]
          # command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" && sleep 5s; done;', 'ls']
          volumeMounts:
            - name: tpch-dbgen-data
              mountPath: /opt/tpch-dbgen/data
      containers:
        - name: taskmanager
          image: felipeogutierrez/explore-flink:1.11.1-scala_2.12
          #imagePullPolicy: Always
          env:
          args: ["taskmanager"]
          ports:
            - containerPort: 6122
              name: rpc
            - containerPort: 6125
              name: query-state
          livenessProbe:
            tcpSocket:
              port: 6122
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf/
            - name: tpch-dbgen-data
              mountPath: /opt/tpch-dbgen/data
      securityContext:
        runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
        - name: tpch-dbgen-data
          persistentVolumeClaim:
            claimName: tpch-dbgen-data-pvc
The Flink stream application starts, but it cannot read the files in the directory /opt/tpch-dbgen/data of the image felipeogutierrez/tpch-dbgen. I am getting the error: java.io.FileNotFoundException: /opt/tpch-dbgen/data/orders.tbl (No such file or directory). It is strange because when I go into the felipeogutierrez/tpch-dbgen container I can list the files, so I suppose there is something wrong in my Kubernetes configuration. Can anyone point out what I am missing in the Kubernetes configuration files?
$ docker run -i -t felipeogutierrez/tpch-dbgen /bin/bash
root@10c0944a95f8:/opt# pwd
/opt
root@10c0944a95f8:/opt# ls tpch-dbgen/data/
customer.tbl dbgen dists.dss lineitem.tbl nation.tbl orders.tbl part.tbl partsupp.tbl region.tbl supplier.tbl
Also, when I list the logs of the tpch-dbgen container I can see the tpch-dbgen directory that I want to read, although I cannot execute command: ["ls tpch-dbgen"] in my Kubernetes config file.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
flink-jobmanager-n9nws 1/1 Running 2 17m
flink-taskmanager-777cb5bf77-ncdl4 1/1 Running 0 4m54s
flink-taskmanager-777cb5bf77-npmrx 1/1 Running 0 4m54s
flink-taskmanager-777cb5bf77-zc2nw 1/1 Running 0 4m54s
$ kubectl logs flink-taskmanager-777cb5bf77-ncdl4 tpch-dbgen
generate-tpch-dbgen.sh
tpch-dbgen
Docker has an unusual feature where, under some specific circumstances, it will populate a newly created volume from the image. You should not rely on this functionality, since it completely ignores updates in the underlying images and it doesn't work on Kubernetes.
In your Kubernetes setup, you create a new empty PersistentVolumeClaim, and then mount this over your actual data in both the init and main containers. As with all Unix mounts, this hides the data that was previously in that directory. Nothing causes data to get copied into that volume. This works the same way as every other kind of mount, except the Docker named-volume mount: you'll see the same behavior if you change your Compose setup to do a host bind mount, or if you play around with your local development system using a USB drive as a "volume".
You need to make your init container (or something else) explicitly copy data into the directory. For example:
initContainers:
  - name: tpch-dbgen
    image: felipeogutierrez/tpch-dbgen
    command:
      - /bin/cp
      - -a
      - /opt/tpch-dbgen/data
      - /data
    volumeMounts:
      - name: tpch-dbgen-data
        mountPath: /data # NOT the same path as in the image
If the main process modifies these files in place, you can make the command be more intelligent, or write a script into your image that only copies the individual files in if they don't exist yet.
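For instance, a small copy-if-missing wrapper baked into the image could look like the sketch below; the script name copy-data.sh is made up, and the /data mount point is simply the one used in the init container above.
#!/bin/sh
# copy-data.sh: copy the generated files into the mounted volume,
# but only the ones that are not already there, so files modified
# in place by the main process are preserved across restarts
set -e
SRC=/opt/tpch-dbgen/data   # data baked into the image
DST=/data                  # the volume mounted by the init container
for f in "$SRC"/*; do
  name=$(basename "$f")
  [ -e "$DST/$name" ] || cp -a "$f" "$DST/$name"
done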
It could potentially make more sense to have your image generate the data files at startup time, rather than at image-build time. That could look like:
FROM gcc
COPY ./generate-tpch-dbgen.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/generate-tpch-dbgen.sh
CMD ["generate-tpch-dbgen.sh"]
Then in your init container, you can run the default command (the generate script) with the working directory set to the volume directory:
initContainers:
  - name: tpch-dbgen
    image: felipeogutierrez/tpch-dbgen
    volumeMounts:
      - name: tpch-dbgen-data
        mountPath: /opt/tpch-dbgen/data # or anywhere really
    workingDir: /opt/tpch-dbgen/data # matching mountPath
I got the PersistentVolumeClaim running and shared it between pods. Basically I had to use the subPath property, which I learned from this answer https://stackoverflow.com/a/43404857/2096986, and I am using a simple Job, which I learned from this answer https://stackoverflow.com/a/64023672/2096986. The final result is below:
The Dockerfile:
FROM gcc AS builder
RUN mkdir -p /opt
COPY ./generate-tpch-dbgen.sh /opt/generate-tpch-dbgen.sh
WORKDIR /opt
RUN chmod +x /opt/generate-tpch-dbgen.sh
ENTRYPOINT ["/bin/sh","/opt/generate-tpch-dbgen.sh"]
and the script generate-tpch-dbgen.sh has to have the line sleep infinity & wait at the end so that it does not terminate. The PersistentVolumeClaim is the same as in the question. Then I create a Job with the subPath property:
apiVersion: batch/v1
kind: Job
metadata:
  name: tpch-dbgen-job
spec:
  template:
    metadata:
      labels:
        app: flink
        component: tpch-dbgen
    spec:
      restartPolicy: OnFailure
      volumes:
        - name: tpch-dbgen-data
          persistentVolumeClaim:
            claimName: tpch-dbgen-data-pvc
      containers:
        - name: tpch-dbgen
          image: felipeogutierrez/tpch-dbgen
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /opt/tpch-dbgen/data
              name: tpch-dbgen-data
              subPath: data
and I use it in the other deployment, also with the subPath property:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
        - name: tpch-dbgen-data
          persistentVolumeClaim:
            claimName: tpch-dbgen-data-pvc
      containers:
        - name: taskmanager
          image: felipeogutierrez/explore-flink:1.11.1-scala_2.12
          imagePullPolicy: Always
          env:
          args: ["taskmanager"]
          ports:
            - containerPort: 6122
              name: rpc
            - containerPort: 6125
              name: query-state
          livenessProbe:
            tcpSocket:
              port: 6122
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf/
            - name: tpch-dbgen-data
              mountPath: /opt/tpch-dbgen/data
              subPath: data
      securityContext:
        runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
Maybe the issue is the accessMode you set on your PVC. ReadWriteOnce means the volume can only be mounted read-write by a single node.
See the Kubernetes documentation on persistent volume access modes for details.
You could try to use ReadWriteMany.
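For example, the claim from the question could request ReadWriteMany instead; this is only a sketch, and it only works if the underlying storage class actually supports RWX, which many default provisioners do not:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tpch-dbgen-data-pvc
spec:
  accessModes:
    - ReadWriteMany # requires a storage backend that supports RWX
  resources:
    requests:
      storage: 200Mi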
Your generate-tpch-dbgen.sh script is executed while building the Docker image, which is what produces those files in the /opt/tpch-dbgen/data directory. So, when you run the image directly, you can see those files.
But the problem with the Kubernetes PVC is that when you mount the (initially empty) volume into your containers, it hides the /opt/tpch-dbgen/data directory along with the files in it.
Solution:
Don't execute generate-tpch-dbgen.sh while building the Docker image; execute it at runtime instead. Then the files will be created in the shared PV by the init container.
Something like below:
FROM gcc AS builder
RUN mkdir -p /opt
COPY ./generate-tpch-dbgen.sh /opt/generate-tpch-dbgen.sh
RUN chmod +x /opt/generate-tpch-dbgen.sh
ENTRYPOINT ["/bin/sh","/opt/generate-tpch-dbgen.sh"]

How to correctly use subdirs with projected volumes and configMaps

I have an nginx web server deployed with docker swarm, and I want to be able to deploy it also with kubernetes.
Right now I'm having trouble inserting the nginx configuration files into the container.
I'll first post here what I already do in docker swarm, and then what I tried in kubernetes.
Dockerfile:
FROM "nginx:1.19.1-alpine" AS nginx
[...]
RUN \
rm -fv /etc/nginx/nginx.conf && \
ln -svT /usr/local/etc/nginx/nginx.conf \
/etc/nginx/nginx.conf && \
rm -frv /etc/nginx/conf.d && \
ln -svT /usr/local/etc/nginx/conf.d \
/etc/nginx/conf.d
[...]
Basically I set up the image so that I can place my custom nginx config files into /usr/local/etc/ instead of /etc/
Docker swarm:
docker-compose.yaml:
configs:
  nginx_conf:
    file: /run/configs/www/etc/nginx/nginx.conf
  nginx_conf_security-parameters:
    file: /run/configs/www/etc/nginx/conf.d/security-parameters.conf
  nginx_conf_server:
    file: /run/configs/www/etc/nginx/conf.d/server.conf
networks:
  alejandro-colomar:
services:
  www:
    configs:
      -
        mode: 0440
        source: nginx_conf
        target: /usr/local/etc/nginx/nginx.conf
      -
        mode: 0440
        source: nginx_conf_security-parameters
        target: /usr/local/etc/nginx/conf.d/security-parameters.conf
      -
        mode: 0440
        source: nginx_conf_server
        target: /usr/local/etc/nginx/conf.d/server.conf
    deploy:
      placement:
        constraints:
          - node.role == worker
      replicas: 1
      restart_policy:
        condition: any
    image: "alejandrocolomar/www:0.16-a6-kube"
    networks:
      -
        "alejandro-colomar"
    ports:
      - "32001:8080"
version: "3.8"
Here I take the nginx custom config files, which I first put in /run/configs/ with a script so that they are in ram, and introduce them in the container as configs in the right place (/usr/local/etc/nginx/ and its subdir conf.d/).
I'd like to do the same in kubernetes, and I read that I should use a projected volume for that (if it's doable with normal volumes or in some other way, without having to use any dirty workarounds, I'm also open to that), so I tried the following (after seeing some examples which weren't very clear to me):
config.sh:
kubectl create configmap "nginx-conf-cm" \
    --from-file "/run/configs/www/etc/nginx/nginx.conf"
kubectl create configmap "nginx-conf-security-parameters-cm" \
    --from-file "/run/configs/www/etc/nginx/conf.d/security-parameters.conf"
kubectl create configmap "nginx-conf-server-cm" \
    --from-file "/run/configs/www/etc/nginx/conf.d/server.conf"
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      service: www-svc
  template:
    metadata:
      labels:
        service: www-svc
    spec:
      containers:
        -
          image: "alejandrocolomar/www:0.16-a6-kube"
          name: www-container
          volumeMounts:
            -
              mountPath: /usr/local/etc/nginx/
              name: nginx-volume
              readOnly: true
      volumes:
        -
          name: nginx-volume
          projected:
            sources:
              -
                configMap:
                  name: nginx-conf-cm
                  path: "nginx.conf"
              -
                configMap:
                  name: nginx-conf-security-parameters-cm
                  path: "conf.d/security-parameters.conf"
              -
                configMap:
                  name: nginx-conf-server-cm
                  path: "conf.d/server.conf"
service.yaml: (I put this one here only for completeness)
apiVersion: v1
kind: Service
metadata:
  name: www
spec:
  ports:
    -
      nodePort: 32001
      port: 8080
  selector:
    service: www-svc
  type: NodePort
The deployment failed, of course, but I don't think it was very far off. When I entered the container to debug it, the problem was that the three files were placed into /usr/local/etc/nginx/, and the subdir conf.d/ was not created:
/usr/local/etc/nginx/nginx.conf
/usr/local/etc/nginx/security-parameters.conf
/usr/local/etc/nginx/server.conf
How should I fix that (presumably in deployment.yaml) so that I have the following files in the container?
/usr/local/etc/nginx/nginx.conf
/usr/local/etc/nginx/conf.d/security-parameters.conf
/usr/local/etc/nginx/conf.d/server.conf
There are multiple ways to handle this. One solution would be to create 3 separate volumes from these individual config maps and mount each one into its appropriate destination file/folder.
volumes:
  - name: nginx-cm
    configMap:
      name: nginx-conf-cm
  - name: nginx-sec
    configMap:
      name: nginx-conf-security-parameters-cm
  - name: nginx-serv
    configMap:
      name: nginx-conf-server-cm
...
containers:
  -
    image: "alejandrocolomar/www:0.16-a6-kube"
    name: www-container
    volumeMounts:
      - mountPath: /usr/local/etc/nginx/nginx.conf
        name: nginx-cm
        subPath: nginx.conf
        readOnly: true
      - mountPath: /usr/local/etc/nginx/conf.d/security-parameters.conf
        name: nginx-sec
        subPath: security-parameters.conf
        readOnly: true
      - mountPath: /usr/local/etc/nginx/conf.d/server.conf
        name: nginx-serv
        subPath: server.conf
        readOnly: true
Using mountPath and subPath in volumeMounts allows you to pick a specific file from a given ConfigMap (it doesn't matter much here, since you have one file per ConfigMap) and mount it as a file (not overriding the other content in the existing folder).
To explain the code above a bit:
- mountPath: /usr/local/etc/nginx/nginx.conf
  name: nginx-cm
  subPath: nginx.conf
tells Kubernetes to use the volume with the name nginx-cm (defined in the volumes section), pick the file nginx.conf (via subPath) from the associated ConfigMap, and expose it in the container (mountPath) at the location /usr/local/etc/nginx/nginx.conf.
I have not run the code, so there might be typos.
PS: note that it might be better to create a custom configuration file inside ../etc/nginx/conf.d/ instead of overriding ../etc/nginx/nginx.conf. That way you wouldn't need to worry about destroying the original files in ../etc/nginx/, and you could mount the whole volume instead of using subPath (to avoid issues with config updates). An example of that approach is sketched below.
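As a rough sketch of that alternative, assuming the two conf.d files were combined into a single ConfigMap (here hypothetically named nginx-conf-d-cm), the whole directory could be mounted without subPath:
volumes:
  - name: nginx-conf-d
    configMap:
      name: nginx-conf-d-cm # hypothetical ConfigMap holding both conf.d files
containers:
  -
    image: "alejandrocolomar/www:0.16-a6-kube"
    name: www-container
    volumeMounts:
      - mountPath: /usr/local/etc/nginx/conf.d
        name: nginx-conf-d
        readOnly: true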

How do I copy a Kubernetes ConfigMap to a write-enabled area of a pod?

I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but want to use ConfigMaps to allow us to change the IP address of the master in the sentinel.conf file. I started this, but Redis can't write to the config file because the mount point for ConfigMaps is read-only.
I was hoping to run an init container and copy the Redis conf to a different dir just in the pod, but the init container couldn't find the conf file.
What are my options? Init container? Something other than ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: IP/redis-sentinel
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: sentinel-redis-config
            items:
              - key: redis-config-sentinel
                path: sentinel.conf
Following P Ekambaram's proposal, you can try this one:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: redis:5.0.4
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      initContainers:
        - name: copy
          image: redis:5.0.4
          command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
          volumeMounts:
            - mountPath: /redis-master
              name: config
            - mountPath: /redis-master-data
              name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: example-redis-config
            items:
              - key: redis-config
                path: redis.conf
In this example the initContainer copies the file from the ConfigMap into a writable dir.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then run the container process. A sketch of this approach follows.
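A minimal sketch of such a script, using the paths from the Sentinel deployment in the question; the exact redis-sentinel invocation is only illustrative, so adjust it to how your image normally starts:
#!/bin/sh
# entrypoint.sh: copy the read-only ConfigMap file into the writable
# emptyDir volume, then start the real process against the copy
set -e
cp /usr/local/etc/redis/conf/sentinel.conf /redis-master-data/sentinel.conf
exec redis-sentinel /redis-master-data/sentinel.conf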

Store/Share data with a container in Kubernetes

I've dockerized a python project that requires the use of several CSVs (~2gb). In order to keep image size down I didn't include the CSVs in the build, instead opting to give the running container the data from a directory outside the container through a volume. Locally, when running through docker, I can just do
docker run -v ~/local/path/:/container/path my-image:latest
This works, but I'm not sure how to go about doing this in Kubernetes. I've been reading the documentation and am confused by the number of volume types, where the actual CSVs should be stored, etc.
Based on the information about the project that I've provided, is there an obvious solution?
If you'd like to replicate that exact behavior from Docker the most common way to do it is to use hostPath. Something like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: my-image:latest
      name: my-container
      volumeMounts:
        - mountPath: /container/path
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        path: /usr/local/path
        type: Directory
Here is a typical example of sharing between containers. You can keep your data in a separate container and code in a different container.
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: debian-container
      image: debian
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Hope it helps.

Share folder content between two containers in same pod

I have a Pod with two containers, Nginx and Rails. I want to share the public folder from the Rails container with the Nginx container, but the public folder already contains files and I don't want it to be empty.
Is there a way to do this with a shared volume?
I tried:
- name: rails-assets
  hostPath:
    path: /app/public
But I'm getting this error:
Error: failed to start container "nginx": Error response from daemon: {"message":"error while creating mount source path '/app/public': mkdir /app: read-only file system"}
Error syncing pod
Back-off restarting failed container
Thanks,
I fixed that problem by creating a shared volume shared-assets/ in the Rails app. In the Rails Dockerfile I created an ENTRYPOINT with a bash script that copies the public/ files into the shared-assets/ folder. With this I can now see the files in the Nginx container.
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: staging-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: staging
    spec:
      containers:
        - name: staging
          image: some/container:v5
          volumeMounts:
            - mountPath: /var/run/
              name: rails-socket
            - mountPath: /app/shared-assets
              name: rails-assets
        - name: nginx
          image: some/nginx:latest
          volumeMounts:
            - mountPath: /var/run/
              name: rails-socket
            - mountPath: /app
              name: rails-assets
      imagePullSecrets:
        - name: app-backend-secret
      volumes:
        - name: rails-socket
          emptyDir: {}
        - name: rails-assets
          emptyDir: {}
And the script ENTRYPOINT:
cp -r /app/public/ /app/shared-assets/
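A slightly fuller sketch of what that ENTRYPOINT script could look like; the final rails server command is only an assumption, so substitute whatever your image normally runs:
#!/bin/bash
# entrypoint.sh: publish the baked-in assets to the shared emptyDir,
# then start the actual application process
set -e
cp -r /app/public/. /app/shared-assets/
exec bundle exec rails server -b 0.0.0.0 # assumed start command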
