persistentVolumeReclaimPolicy on directly mounted NFS volumes - kubernetes - docker

I have a directly mounted NFS volume for MySQL data and need to implement a storage policy that retains the data across pod deletion and avoids any corruption. Please recommend a useful approach.
I did not find a way to enable persistentVolumeReclaimPolicy: Retain on directly mounted volumes. I know it can be done at PV/PVC creation, but is it possible from StatefulSet volumes? Some guidance would also help on understanding the YAML options for a particular object and where to find all the options (parameters) available for an object; currently I am googling each option and trying it out, which is hard.
I also could not mount a ConfigMap file (my.cnf) as a file in the pod: it removes the underlying files in the mountPath. I am curious how this is generally handled; do we need a separate mount path for each config file?
apiVersion: v1
kind: Service
metadata:
  name: mymariadb
  labels:
    app: mymariadb
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: mysql
    nodePort: 30003
  type: NodePort
  selector:
    app: mymariadb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mymariadb
  labels:
    app: mymariadb
spec:
  serviceName: "mymariadb"
  selector:
    matchLabels:
      app: mymariadb
  template:
    metadata:
      labels:
        app: mymariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.3.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: xxxx
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /data
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql  # /conf.d removing files
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
      volumes:
      - name: data
        nfs:
          server: 10.12.32.41
          path: /data/mymariadb
        spec:
          persistentVolumeReclaimPolicy: Retain  # not taking
      - name: conf
        configMap:
          name: mycustconf
          items:
          - key: my.cnf
            path: my.cnf

Firstly, I would not suggest an NFS mount on a Kubernetes platform, for two reasons. From a security perspective, another container can access the NFS mount on the worker nodes. Secondly, from a performance perspective, the connection between the worker nodes and the storage will be slower compared to other solutions. As you know, performance is critical for database connections, so I think you should evaluate that.
I suggest you use one of the cloud-native storage solutions; you can view them at the link below. Ceph and Gluster are popular products.
https://landscape.cncf.io/category=cloud-native-storage&format=card-mode&grouping=category
If you really want to continue with the NFS solution, you can check two points:
1) Did you check the access list on the storage appliance? The worker nodes should be allowed on the NFS export.
2) Once you can mount the NFS storage on the worker nodes manually, try applying the deployment on your Kubernetes cluster.
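If you do stay on NFS, note that persistentVolumeReclaimPolicy is a field of a PersistentVolume object, not of an inline pod volume, which is why the spec: block under the nfs: volume above is ignored. A minimal sketch of the PV/PVC route the question already mentions (names and sizes are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mymariadb-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # the volume is kept after the claim is deleted
  nfs:
    server: 10.12.32.41
    path: /data/mymariadb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mymariadb-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""        # bind to the pre-created PV, skip dynamic provisioning
  resources:
    requests:
      storage: 10Gi
The StatefulSet would then reference the claim (persistentVolumeClaim: claimName: mymariadb-data) instead of the inline nfs: volume. For the my.cnf problem, mounting the ConfigMap item with subPath: my.cnf at /etc/mysql/my.cnf places only that one file without hiding the rest of the directory. And for discovering the fields available on an object, kubectl explain (for example kubectl explain statefulset.spec.template.spec.volumes) prints the documented options.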

Related

Accessing CIFS files from pods

We have a docker image that is processing some files on a samba share.
For this we created a cifs share which is mounted to /mnt/dfs and files can be accessed in the container with:
docker run -v /mnt/dfs/project1:/workspace image
Now what I was asked to do is get the container into k8s, and to access a CIFS share from a pod, a CIFS volume driver using FlexVolume can be used. That's where some questions pop up.
I installed this repo as a daemonset
https://k8scifsvol.juliohm.com.br/
and it's up and running.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cifs-volumedriver-installer
spec:
  selector:
    matchLabels:
      app: cifs-volumedriver-installer
  template:
    metadata:
      name: cifs-volumedriver-installer
      labels:
        app: cifs-volumedriver-installer
    spec:
      containers:
        - image: juliohm/kubernetes-cifs-volumedriver-installer:2.4
          name: flex-deploy
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /flexmnt
              name: flexvolume-mount
      volumes:
        - name: flexvolume-mount
          hostPath:
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
The next thing to do is add a PersistentVolume, but that needs a capacity, 1Gi in the example. Does this mean that we lose all data on the SMB server? Why should there be a capacity for an already existing share?
Also, how can we access a subdirectory of the mount /mnt/dfs from within the pod? So how do we access the data from /mnt/dfs/project1 in the pod?
Do we even need a PV? Could the pod just read from the host's mounted share?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycifspv
spec:
  capacity:
    storage: 1Gi
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory
    secretRef:
      name: my-secret
  accessModes:
    - ReadWriteMany
No, that field has no effect on the FlexVol plugin you linked. It doesn't even bother parsing out the size you pass in :)
Managed to get it working with the fstab/cifs plugin.
Copy its cifs script to /usr/libexec/kubernetes/kubelet-plugins/volume/exec and give it execute permissions. Also restart kubelet on all nodes.
https://github.com/fstab/cifs
Then I added:
containers:
  - name: pablo
    image: "10.203.32.80:5000/pablo"
    volumeMounts:
      - name: dfs
        mountPath: /data
volumes:
  - name: dfs
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//dfs/dir"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
Now there is the /data mount inside the container pointing to //dfs/dir
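Regarding the subdirectory question above: one option (a sketch only, assuming the share contains a project1 directory; not verified against this particular driver) is to keep the flexVolume definition as-is and add subPath on the mount, which works for any volume type:
volumeMounts:
  - name: dfs
    mountPath: /data
    subPath: project1   # exposes <networkPath>/project1 at /data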

How do you get UID:GID in the [1002120000, 1002129999] range to make it run in OpenShift?

This is with OpenShift Container Platform 4.3.
Consider this Dockerfile.
FROM eclipse-mosquitto
# Create folders
USER root
RUN mkdir -p /mosquitto/data /mosquitto/log
# mosquitto configuration
USER mosquitto
# This is crucial to me
COPY --chown=mosquitto:mosquitto ri45.conf /mosquitto/config/mosquitto.conf
EXPOSE 1883
And, this is my Deployment YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto-broker
spec:
  selector:
    matchLabels:
      app: mosquitto-broker
  template:
    metadata:
      labels:
        app: mosquitto-broker
    spec:
      containers:
        - name: mosquitto-broker
          image: org/repo/eclipse-mosquitto:1.0.1
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          volumeMounts:
            - name: mosquitto-data
              mountPath: /mosquitto/data
            - name: mosquitto-log
              mountPath: /mosquitto/log
          ports:
            - name: mqtt
              containerPort: 1883
      volumes:
        - name: mosquitto-log
          persistentVolumeClaim:
            claimName: mosquitto-log
        - name: mosquitto-data
          persistentVolumeClaim:
            claimName: mosquitto-data
When I do an oc create -f with the above YAML, I get this error: 2020-06-02T07:59:59: Error: Unable to open log file /mosquitto/log/mosquitto.log for writing. Maybe this is a permissions error; I can't tell. Anyway, going by the eclipse-mosquitto Dockerfile, I see that mosquitto is a user with a UID and GID of 1883. So, I added the securityContext as described here.
securityContext:
  fsGroup: 1883
When I do an oc create -f with this modification, I get this error: securityContext.securityContext.runAsUser: Invalid value: 1883: must be in the ranges: [1002120000, 1002129999].
The approach of adding an initContainer to set permissions on the volume does not work for me because I would have to be root to do that.
So, how do I enable the Eclipse mosquitto container to write to /mosquitto/log successfully?
There are multiple things to address here.
First off, you should make sure that you really want to bake a configuration file into your container image. Typically, configuration files are added via ConfigMaps or Secrets, as configuration in cloud-native applications should come from the environment (OpenShift in your case).
Secondly, it seems that you are logging to a PersistentVolume, which is also bad practice; the best practice would be to log to stdout. Of course, having application data (such as transaction logs) on a persistent volume makes sense.
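A minimal sketch of the ConfigMap approach, assuming a ConfigMap named mosquitto-config and using standard mosquitto options in place of your ri45.conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-config
data:
  mosquitto.conf: |
    listener 1883
    persistence true
    persistence_location /mosquitto/data/
    log_dest stdout
In the Deployment, mount it with a configMap volume and subPath: mosquitto.conf at /mosquitto/config/mosquitto.conf so that only this one file is replaced. With log_dest stdout, the log PersistentVolumeClaim is no longer needed.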
As for your original question (which should no longer be relevant given the two points above), the issue can be approached using SecurityContextConstraints (SCCs): see Managing Security Context Constraints.
So to resolve your issue, you should use or create an SCC with runAsUser set correctly.
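A minimal sketch of such an SCC, assuming the mosquitto UID of 1883 and the default service account in a namespace called mosquitto (adapt the names to your project; the full field list is in the documentation linked above):
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: mosquitto-scc
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAs
  uid: 1883            # allow the mosquitto UID instead of the project's range
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
users:
  - system:serviceaccount:mosquitto:default
The SCC can also be granted after the fact with oc adm policy add-scc-to-user mosquitto-scc -z default -n mosquitto.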

Kubernetes Persistent Volume and hostpath

I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation, and the behaviour is not the one I am expecting, so I'd like to ask here.
I configured the following Persistent Volume and Persistent Volume Claim.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
and the following Deployment and Service configuration.
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
        - name: store-volume
          persistentVolumeClaim:
            claimName: store-persistent-volume-claim
      containers:
        - name: store
          image: localhost:5000/store
          ports:
            - containerPort: 8383
              protocol: TCP
          volumeMounts:
            - name: store-volume
              mountPath: /data
---
#------------ Service ----------------#
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
    - port: 8383
      targetPort: 8383
  selector:
    k8s-app: store
As you can see, I defined '/Volumes/Data/data' as the Persistent Volume and expect it to be mounted at '/data' in the container.
So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.
My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.
I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).
Am I understanding the persistent volume concept correctly at all?
PS. I am trying this on a Mac with Docker (18.05.0-ce-mac67 (25042), edge channel); maybe it is not supposed to work on a Mac?
Thanks for any answers.
Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node the pod is running on.
You can check which worker your pod is scheduled on with the command kubectl get pods -o wide -n test.
Please note that, as per the Kubernetes docs on PersistentVolume, HostPath is for single-node testing only; local storage is not supported in any way and WILL NOT WORK in a multi-node cluster.
It does work in my case.
As you are using a host path, you should check '/data' on the worker node on which the pod is running.
Like the answer above said, you need to run kubectl get po -n test -o wide and you will see the node the pod is hosted on. Then, if you SSH into that worker, you can see the volume.

Kubernetes: access Persistent Volume mount externally

I have set up a Kubernetes cluster in Google Cloud and mounted a volume as a gcePersistentDisk. It is claimed and mounted successfully in the pods.
But I want to access this volume externally so that I can write to it through git/ssh or manually. As the disk is already used and mounted, I cannot access it.
How can I write files to it externally?
gcePersistentDisk is a network-based disk, and provisioned volumes can only be used by GCE instances in the same project and zone.
The fact is that this kind of resource supports ReadWriteOnce and ReadOnlyMany.
You can use a GCE persistent disk to share data as read-only between multiple pods in the same zone (a read-only PV sketch follows the example below).
Back to your question: you can write to this volume only from one pod. No other pods can use it as write storage, neither external ones nor others from the same project.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php
  labels:
    app: php
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
        - image: php:7.1-apache
          imagePullPolicy: Always
          name: php
          resources:
            requests:
              cpu: 200m
          ports:
            - containerPort: 80
              name: php
          volumeMounts:
            - name: php-persistent-storage
              mountPath: /var/www
      volumes:
        - name: php-persistent-storage
          gcePersistentDisk:
            pdName: php-phantomjs-disk
            fsType: ext4
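The read-only sharing mentioned above can be expressed as a PersistentVolume that attaches the same disk with readOnly: true; this is only a sketch, and the capacity and names are illustrative:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: php-disk-readonly
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: php-phantomjs-disk
    fsType: ext4
    readOnly: true
Pods in the same zone can then mount it through a claim with accessModes: [ReadOnlyMany]; writing still has to happen from the single pod that has the disk mounted read-write.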

Multidrive access for Kubernetes Nodes

So, forgive me. I just started learning Docker and Kubernetes a month ago.
I've got this to the point where I have my .yml file that takes my Minecraft server and runs it. I now want FTP access. Currently, there is a drive for the world folder and one for the config folder of the server (since I can't put the entire directory on a mounted drive (right?), and those two folders need to survive every time the image is rebuilt).
So, I want to be able to access /config, preferably while the Minecraft node is still reading and writing. A few questions here.
How do I make the most minimal FTP image possible when writing the Dockerfile for it? I am unable to figure out a scenario. The best I have is a base image on python:alpine and using something like this.
Is it even possible to have a node access the drive while it's in use by another? Or do I have to make some custom script in the interface I'm making that turns off the Minecraft server and then starts up the FTP node?
Current yml:
apiVersion: v1
kind: Service
metadata:
  name: lapitos
  labels:
    type: lapitos
spec:
  type: LoadBalancer
  ports:
    - name: minecraft
      port: 25565
      protocol: TCP
      targetPort: 25565
    - name: minecraft-rcon
      port: 25575
      protocol: TCP
      targetPort: 25575
  selector:
    app: lapitos
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: lapitos
spec:
  serviceName: lapitos
  replicas: 1
  selector:
    matchLabels:
      app: lapitos
  template:
    metadata:
      labels:
        app: lapitos
    spec:
      containers:
        - name: lapitos
          image: gcr.io/mchostingnet-202204/lapitosbeta2
          resources:
            limits:
              cpu: "2"
            requests:
              cpu: "2"
          ports:
            - containerPort: 25565
              name: minecraft
          volumeMounts:
            - name: world
              mountPath: /world
            - name: config
              mountPath: /config
            - name: logs
              mountPath: /logs
  volumeClaimTemplates:
    - metadata:
        name: world
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 25Gi
    - metadata:
        name: config
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
    - metadata:
        name: logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
1. Grab an FTP image that suits you from any registry and use it, instead of making your own; whether building your own is still a requirement after that, I don't know.
Note: Compute Engine has port 21 blocked.
2. Yes, you can (see the sketch after this list). Volume access modes:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
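For point 2, a separate FTP pod could mount the existing config claim; volumeClaimTemplates create PVCs named <template>-<pod>, so the claim here is config-lapitos-0. A minimal sketch (the FTP image is a placeholder), keeping in mind that with ReadWriteOnce both pods must land on the same node:
apiVersion: v1
kind: Pod
metadata:
  name: lapitos-ftp
spec:
  containers:
    - name: ftp
      image: your-ftp-image   # placeholder: any FTP server image from a registry
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: config-lapitos-0   # PVC created by the config volumeClaimTemplate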
