I am using WSL2 Debian and Docker Desktop, and I want to persist my volume data in a local folder (ideally a path inside OneDrive).
This works fine, with one exception: everything is owned by root:root. How can I specify the user/group permissions within the volume?
And is there any documentation for this anywhere?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-service
  labels:
    app: dummy-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dummy-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: dummy-service
    spec:
      containers:
        - name: dotnet
          image: alpine
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "100m"
              memory: "40Mi"
            limits:
              memory: "64Mi"
          ports:
            - containerPort: 5000
          volumeMounts:
            - mountPath: "/app/wwwroot"
              name: dummy-volume
          readinessProbe:
            httpGet:
              path: /heartbeat
              port: 5000
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1
            periodSeconds: 15
          livenessProbe:
            httpGet:
              path: /heartbeat
              port: 5000
              scheme: HTTP
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 1
      volumes:
        - name: dummy-volume
          persistentVolumeClaim:
            claimName: dummy-pvc
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dummy-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dummy-pv
spec:
  capacity:
    storage: 512Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: dummy-sc
  local:
    path: /run/desktop/mnt/host/c/Users/Markus/OneDrive/Workspace/Volume/Web
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dummy-pvc
spec:
  storageClassName: dummy-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 512Mi
I can think of three possible solutions for this issue:
You could use an init container. This way, a container in a pod that runs as a non-root user can be given permissions for the mounted volume. See the example below:
initContainers:
  - name: set-permissions
    image: <image_name>
    # Give user id 555 permissions for the mounted volume
    command:
      - chown
      - -R
      - 555:555
      - /var/lib/data
    volumeMounts:
      - name: data
        mountPath: /var/lib/data
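For context, here is a minimal, self-contained Pod sketch of how that init container fits together with the app container and the volume; the images, the claim name data-pvc, and user ID 555 are placeholders rather than anything from your manifests:

apiVersion: v1
kind: Pod
metadata:
  name: chown-example
spec:
  # Runs once, before the app container, and fixes ownership on the volume.
  initContainers:
    - name: set-permissions
      image: busybox                 # any image with chown works
      command: ["chown", "-R", "555:555", "/var/lib/data"]
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  containers:
    - name: app
      image: <your_app_image>        # placeholder
      securityContext:
        runAsUser: 555               # same non-root user the volume was chown'd to
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc          # hypothetical claim name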
Another way to give a non-root user access to the folder where it needs to read and write data is to follow the steps below:
Create a user group and assign it a group ID in the Dockerfile.
Create a user with a user ID and add it to the group in the Dockerfile.
Change ownership recursively for the folders the user process wants to read/write.
Then add the following lines to your Deployment's Pod spec:
spec:
  securityContext:
    runAsUser: 1099
    runAsGroup: 1099
    fsGroup: 1099
As described in the docs:
runAsUser: specifies that all processes in any container of the Pod run with user ID 1099.
runAsGroup: specifies the primary group ID of 1099 for all processes within any container of the Pod. If this field is omitted, the primary group ID of the containers will be root (0). Any files created will also be owned by user 1099 and group 1099 when runAsGroup is specified.
fsGroup: specifies that any volumes attached to the Pod will be owned by group ID 1099.
Configure a volume permission and ownership change policy for Pods (I know it does not suit your use case, but I will leave this option here for other community members); see the sketch below.
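A minimal sketch of that third option, assuming a cluster recent enough to support the fsGroupChangePolicy field (beta since Kubernetes v1.20): it tells the kubelet to skip the recursive ownership change when the volume root already matches, which speeds up mounting large volumes:

spec:
  securityContext:
    runAsUser: 1099
    runAsGroup: 1099
    fsGroup: 1099
    # Only change ownership/permissions if the volume's root directory
    # does not already match fsGroup.
    fsGroupChangePolicy: "OnRootMismatch"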
Related
I am using the Cassandra image with the following StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra
          image: gcr.io/google-samples/cassandra:v13
          imagePullPolicy: Always
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "K8Demo"
            - name: CASSANDRA_DC
              value: "DC1-K8Demo"
            - name: CASSANDRA_RACK
              value: "Rack1-K8Demo"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - /ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          # These volume mounts are persistent. They are like inline claims,
          # but not exactly because the names need to match exactly one of
          # the stateful pod volumes.
          volumeMounts:
            - name: cassandra-data
              mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # Do not use these in production until ssd GCEPersistentDisk or other ssd pd.
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: fast
        resources:
          requests:
            storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
Now I need to add the line below to cassandra-env.sh, either in a postStart hook or in the Cassandra YAML file:

JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/cassandra-exporter-agent-<version>.jar"
I was able to achieve this, but after this step Cassandra requires a restart, and since it is already running as a pod I don't know how to restart the process. Is there any way to do this step before the pod comes up, rather than after it is running?
I was given the following suggestion:
This won't work. Commands that run in postStart don't impact the running container. You need to change the startup command passed to Cassandra. The only way I know to do this is to create a new container image in the registry, based on the existing image, and pull from there.
But I don't know how to achieve this.
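One way to do this without building a new image, sketched here under assumptions: it assumes the image's normal entrypoint is /run.sh and that cassandra-env.sh lives under $CASSANDRA_CONF (check your image), that the exporter jar is already present in the image or on a mounted volume, and <version> still has to be filled in. Overriding the container's startup command appends the agent line before Cassandra ever starts, so no restart is needed:

containers:
  - name: cassandra
    image: gcr.io/google-samples/cassandra:v13
    # Append the javaagent line to cassandra-env.sh, then exec the
    # image's original entrypoint (assumed to be /run.sh here).
    command: ["/bin/sh", "-c"]
    args:
      - |
        echo 'JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/cassandra-exporter-agent-<version>.jar"' >> "$CASSANDRA_CONF/cassandra-env.sh"
        exec /run.sh

The single quotes around the echo argument keep $JVM_OPTS and $CASSANDRA_HOME literal, so they are expanded by cassandra-env.sh at startup rather than by this shell.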
I have deployed Jenkins as part of a Kubernetes YAML file and enabled a persistent volume claim, but when my Jenkins pod restarts I lose all my jobs and configuration. That means I have to re-install all the suggested Jenkins plugins, configure the Kubernetes cloud, configure the Git repo, and create a new pipeline job.
Could you please help me avoid the above scenario?
vi jenkins-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: jenkins-master
  namespace: jenkins
  labels:
    app: jenkins-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-master
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 2
            failureThreshold: 5
          volumeMounts:
            - mountPath: "/var"
              name: jenkins-home
              subPath: jenkins_home
          resources:
            limits:
              cpu: 800m
              memory: 3Gi
            requests:
              cpu: 100m
              memory: 3Gi
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: pvc-jenkins-home
vi jenkins-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-jenkins-home
  namespace: jenkins
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
kubectl get pvc -n jenkins
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-jenkins-home Bound pvc-4ccf3f55-6894-4fee-88d7-58dd7584b837 10Mi RWO efs 59m
Please let me know if any further details are required from my side.
Please remove the subPath from volumeMounts, as subPath will overwrite everything under the /var directory. It should look like this:
volumeMounts:
  - mountPath: /var
    name: jenkins-home
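Note that the standard jenkins/jenkins:lts image keeps its state in /var/jenkins_home (an assumption about your image, not something from your manifest), so a common alternative is to mount the claim there instead of over all of /var:

volumeMounts:
  - mountPath: /var/jenkins_home   # JENKINS_HOME in the official image
    name: jenkins-home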
Whatever mount path I add for the PVC, it creates a lost+found folder and deletes all other content.
I am trying to set up a Deployment with a PVC:
FROM python:3.5 AS python-build
ADD . /test
WORKDIR /test
CMD [ "python3", "./run.py" ]
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-core
  labels:
    app: test-core
spec:
  selector:
    matchLabels:
      app: test-core
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-core
        tier: frontend
    spec:
      containers:
        - image: <My image>
          securityContext:
            privileged: true
            runAsUser: 1000
          resources:
            requests:
              memory: "128Mi"
              cpu: .05
            limits:
              memory: "256Mi"
              cpu: .10
          name: test-core
          ports:
            - containerPort: 9595
              name: http
            - containerPort: 443
              name: https
          readinessProbe:
            httpGet:
              path: /
              port: 9595
            initialDelaySeconds: 5
            periodSeconds: 3
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 4
          envFrom:
            - secretRef:
                name: test-secret
            - configMapRef:
                name: test-new-configmap
          volumeMounts:
            - name: core-data
              mountPath: /test
          imagePullPolicy: Always
      volumes:
        - name: core-data
          persistentVolumeClaim:
            claimName: core-claim
When I apply this file to Kubernetes, the log shows an error that it cannot find the file run.py, which means the PVC is empty.
Whatever mount path I add, it creates a lost+found folder and deletes all other content.
Thanks
According to your Dockerfile, when you run docker build -t <imagename> ., it copies all files in your current directory into the container image, and when you start the container it looks for run.py.
If one of those files is run.py, which it should be, then your deployment YAML file is not correct: you mount a PV onto that same directory, which hides the files you copied in before, so the container can't find run.py.
Hope it helps.
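A minimal sketch of one fix, assuming the app only needs to persist data rather than its own code: mount the claim at a separate data path (the /test/data path here is a hypothetical choice) so the files baked into the image at /test stay visible:

volumeMounts:
  - name: core-data
    # Mount the PVC beside the code, not over it, so run.py
    # from the image remains visible at /test/run.py.
    mountPath: /test/data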
So, forgive me. I just started learning Docker and Kubernetes a month ago.
I've got this to the point where I have my .yml file that takes my Minecraft server and runs it. I now want FTP access. Currently, there is a persistent drive for the world folder and another for the server's config folder (since I can't put the entire directory on a mounted drive (right?), and those two folders need to survive every time the image is rebuilt).
So, I want to be able to access /config, preferably while the Minecraft node is still reading and writing. A few questions here:
How do I make the most minimal FTP image possible when writing its Dockerfile? I am unable to figure out a scenario. The best I have is a base image on python:alpine and using something like this.
Is it even possible to have that node access the drive while it's in use by another? Or do I have to make some custom script in the interface I'm making that shuts down the Minecraft server and then starts up the FTP node?
Current yml:
apiVersion: v1
kind: Service
metadata:
  name: lapitos
  labels:
    type: lapitos
spec:
  type: LoadBalancer
  ports:
    - name: minecraft
      port: 25565
      protocol: TCP
      targetPort: 25565
    - name: minecraft-rcon
      port: 25575
      protocol: TCP
      targetPort: 25575
  selector:
    app: lapitos
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: lapitos
spec:
  serviceName: lapitos
  replicas: 1
  selector:
    matchLabels:
      app: lapitos
  template:
    metadata:
      labels:
        app: lapitos
    spec:
      containers:
        - name: lapitos
          image: gcr.io/mchostingnet-202204/lapitosbeta2
          resources:
            limits:
              cpu: "2"
            requests:
              cpu: "2"
          ports:
            - containerPort: 25565
              name: minecraft
          volumeMounts:
            - name: world
              mountPath: /world
            - name: config
              mountPath: /config
            - name: logs
              mountPath: /logs
  volumeClaimTemplates:
    - metadata:
        name: world
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 25Gi
    - metadata:
        name: config
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
    - metadata:
        name: logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
1. Grab an FTP image that suits you from any registry and use it, instead of making your own. Whether building your own is still a requirement, I don't know.
Note: Compute Engine has port 21 blocked.
2. Yes, you can. Volume access modes:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
A sidecar sketch follows.
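One way to expose /config while the server keeps running is an FTP sidecar in the same pod, sketched below under assumptions: the ftp-server container name and image are placeholders, and because both containers live in the same pod, even a ReadWriteOnce volume works, since it is only mounted on a single node:

spec:
  containers:
    - name: lapitos
      image: gcr.io/mchostingnet-202204/lapitosbeta2
      volumeMounts:
        - name: config
          mountPath: /config
    # Hypothetical FTP sidecar sharing the same volume read-write.
    - name: ftp-server
      image: <your_ftp_image>   # placeholder
      ports:
        - containerPort: 21
          name: ftp
      volumeMounts:
        - name: config
          mountPath: /srv/ftp/config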
Trying to set up PetSet using Kube-Solo
In my local dev environment, I have set up Kube-Solo with CoreOS. I'm trying to deploy a Kubernetes PetSet that includes a Persistent Volume Claim Template as part of the PetSet configuration. This configuration fails and none of the pods are ever started. Here is my PetSet definition:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: 'marklogic'
          image: {ip address of repo}:5000/dcgs-sof/ml8-docker-final:v1
          imagePullPolicy: Always
          command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
          ports:
            - containerPort: 7997
              name: health-check
            - containerPort: 8000
              name: app-services
            - containerPort: 8001
              name: admin
            - containerPort: 8002
              name: manage
            - containerPort: 8040
              name: sof-sdl
            - containerPort: 8041
              name: sof-sdl-xcc
            - containerPort: 8042
              name: ml8042
            - containerPort: 8050
              name: sof-sdl-admin
            - containerPort: 8051
              name: sof-sdl-cache
            - containerPort: 8060
              name: sof-sdl-camel
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          lifecycle:
            preStop:
              exec:
                command: ["/etc/init.d/MarkLogic", "stop"]
          volumeMounts:
            - name: ml-data
              mountPath: /var/opt/MarkLogic
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 1Gi
In the Kubernetes dashboard, I see the following error message:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "ml-data-marklogic-0", which is unexpected.
It seems that being unable to create the Persistent Volume Claim is also preventing the image from ever being pulled from my local repository. Additionally, the Kubernetes Dashboard shows the request for the Persistent Volume Claims, but the state is continuously "pending".
I have verified the issue is with the Persistent Volume Claim. If I remove that from the PetSet configuration the deployment succeeds.
I should note that I was using MiniKube prior to this and would see the same message, but once the image was pulled and the pod(s) started, the claim would take hold and the message would go away.
I am using
Kubernetes version: 1.4.0
Docker version: 1.12.1 (on my mac) & 1.10.3 (inside the CoreOS vm)
Corectl version: 0.2.8
Kube-Solo version: 0.9.6
I am not familiar with Kube-Solo.
However, the issue here might be that you are attempting to use dynamic volume provisioning, a beta feature that does not have specific support for volumes in your environment.
The best way around this would be to manually create the persistent volumes it expects to find, so that the PersistentVolumeClaim can bind to them; see the sketch below.
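A minimal sketch of such a manually created volume, assuming a hostPath directory is acceptable in a single-node dev VM; the PV name, path, and size are placeholders, and the capacity and access mode must satisfy the claim template's ReadWriteMany / 1Gi request (one PV per replica):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-data-pv-0        # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/ml-data-0   # placeholder path inside the VM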
The same error happened to me, and I found clues about the following config (covering volumeClaimTemplates and a StorageClass) in the Slack group and this pull request:
volumeClaimTemplates:
  - metadata:
      name: cassandra-data
      annotations:
        volume.beta.kubernetes.io/storage-class: standard
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path