I have a multi-node Kubernetes setup. I am trying to allocate a PersistentVolume dynamically using storage classes with the NFS volume plugin.
I found storage class examples for glusterfs, aws-ebs, etc., but I didn't find any example for NFS.
NFS works very well if I create only a PV and PVC (without a storage class).
I tried to write a storage class file for NFS by referring to the other plugins. Please see it below:
nfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: my-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/nfs
parameters:
  path: /nfsfileshare
  server: <nfs-server-ip>
nfs-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: my-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
It didn't work. So my question is: can we write a storage class for NFS? Does it support dynamic provisioning?
As of August 2020, here's how things look for NFS persistence on Kubernetes:
You can
Put an NFS volume on a Pod directly:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      nfs:
        path: /foo/bar
        server: wherever.dns
Manually create a Persistent Volume backed by NFS, and mount it with a Persistent Volume Claim (PV spec shown below):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
Use the (now deprecated) NFS PV provisioner from external-storage. This was last updated two years ago, and has been officially EOL'd, so good luck. With this route, you can make a Storage Class such as the one below to fulfill PVCs from your NFS server.
Update: There is a new incarnation of this provisioner as kubernetes-sigs/nfs-subdir-external-provisioner! It seems to work in a similar way to the old nfs-client provisioner, but is much more up-to-date. Huzzah!
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs
mountOptions:
  - vers=4.1
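If you go with the newer kubernetes-sigs/nfs-subdir-external-provisioner mentioned above, its StorageClass looks similar; here's a sketch, with the provisioner name and archiveOnDelete parameter taken from that project's default deploy manifests (verify them against your installation):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # default provisioner name in that project
parameters:
  archiveOnDelete: "false"  # "true" archives a claim's data directory instead of deleting it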
Evidently, CSI is the future, and there is an NFS CSI driver. However, it doesn't support dynamic provisioning yet, so it's not really terribly useful.
Update (December 2020): Dynamic provisioning is apparently in the works (on master, but not released) for the CSI driver.
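Once that ships, a StorageClass for the CSI driver should look roughly like the sketch below; the driver name and parameter keys are taken from the csi-driver-nfs docs and should be treated as assumptions until you check the released version:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io        # CSI driver name
parameters:
  server: nfs-server.example.com   # placeholder NFS server
  share: /exported/path            # placeholder export path
reclaimPolicy: Delete
mountOptions:
  - nfsvers=4.1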
You might be able to replace external-storage's NFS provisioner with something from the community, or something you write. In researching this problem, I stumbled on a provisioner written by someone on GitHub, for example. Whether such provisioners perform well, are secure, or work at all is beyond me, but they do exist.
I'm looking into doing the same thing. I found https://github.com/kubernetes-incubator/external-storage/tree/master/nfs, which I think you based your provisioner on?
I think an NFS provisioner would need to create a unique directory under the defined path. I'm not really sure how this could be done.
Maybe this is better off as a GitHub issue on the Kubernetes repo.
Dynamic storage provisioning using NFS doesn't work; better to use GlusterFS. There's a good tutorial with fixes for common problems encountered while setting it up:
http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/
I also tried to enable the NFS provisioner on my kubernetes cluster and at first it didn't work, because the quickstart guide does not mention that you need to apply the rbac.yaml as well (I opened a PR to fix this).
The nfs provisioner works fine for me if I follow these steps on my cluster:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs#quickstart
$ kubectl create -f deploy/kubernetes/deployment.yaml
$ kubectl create -f deploy/kubernetes/rbac.yaml
$ kubectl create -f deploy/kubernetes/class.yaml
Then you should be able to create PVCs like this:
$ kubectl create -f deploy/kubernetes/claim.yaml
You might want to change the folders used for the volume mounts in deployment.yaml to match your cluster.
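For reference, the claim created in that last step is just a normal PVC against the class from class.yaml; a minimal sketch, assuming the class is named example-nfs as in the repository's example (the repo's actual claim.yaml may differ slightly):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  storageClassName: example-nfs   # class created by class.yaml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi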
The purpose of a StorageClass is to create storage, e.g. from cloud providers (via a "provisioner", as the Kubernetes docs call it). In the case of NFS you only want access to existing storage, and there is no creation involved. Thus you don't need a StorageClass. Please refer to this blog.
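A minimal sketch of that static approach, reusing the server and path placeholders from the question and pinning storageClassName to the empty string so the default class is not used:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-static-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>   # placeholder
    path: /nfsfileshare
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-static-claim
spec:
  storageClassName: ""        # bind to a pre-created PV, skip dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi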
If you are using AWS, I believe you can use this image to create an NFS server:
https://hub.docker.com/r/alphayax/docker-volume-nfs
I have a Dockerfile which I've written for a React application. This app takes a .json config file that it uses at runtime. The file doesn't contain any secrets.
So I've built the image without the config file, and now I'm unsure how to transfer the JSON file when I run it up.
I'm looking at deploying this in production using a CI/CD process which would entail:
gitlab (actions) building the image
pushing this to a docker repository
Kubernetes picking this up and running/starting the container
I think it's at the last point that I want to add the JSON configuration.
My question is: how do I add the config file to the application when k8s starts it up?
If I understand correctly, k8s doesn't have any local storage to create a volume from and copy the file in? Can I give docker run a separate git repo where I can keep the config files?
You should take a look at ConfigMaps.
From the k8s documentation on ConfigMaps:
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
In your case, you want a volume containing a file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app
data:
  config.json: | # your file name
    <file-content>
A ConfigMap can be created manually or generated from a file using:
Directly in the cluster: kubectl create configmap <name> --from-file <path-to-file>.
In a YAML file: kubectl create configmap <name> --from-file <path-to-file> --dry-run=client -o yaml > <file-name>.yaml.
Once you have your ConfigMap, you must modify your deployment/pod to add a volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-name>
spec:
  ...
  template:
    metadata:
      ...
    spec:
      ...
      containers:
        - name: <container-name>
          ...
          volumeMounts:
            - mountPath: '<path>/config.json'
              name: config-volume
              readOnly: true
              subPath: config.json
      volumes:
        - name: config-volume
          configMap:
            name: <name-of-configmap>
To deploy to your cluster, you can use plain YAML, or I suggest you take a look at Kustomize or Helm charts.
They are both popular systems for deploying applications. With Kustomize, the configMapGenerator feature fits your case.
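For instance, a minimal kustomization.yaml using the configMapGenerator might look like this; the file and resource names are placeholders:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml      # your existing deployment manifest
configMapGenerator:
  - name: your-app       # ConfigMap name referenced by the deployment's volume
    files:
      - config.json
Kustomize appends a content hash to the generated ConfigMap name and rewrites references to it, so pods roll automatically when config.json changes.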
Good luck :)
I am new to Kubernetes but familiar with docker.
Docker Use Case
Usually, when I want to persist data I just create a volume with a name then attach it to the container, and even when I stop it then start another one with the same image I can see the data persisting.
So this is what I used to do in Docker:
docker volume create nginx-storage
docker run -it --rm -v nginx-storage:/usr/share/nginx/html -p 80:80 nginx:1.14.2
then I:
Create a new html file in /usr/share/nginx/html
Stop container
Run the same docker run command again (will create another container with same volume)
html file exists (which means data persisted in that volume)
Kubernetes Use Case
Usually, when I work with Kubernetes volumes, I specify a PVC (PersistentVolumeClaim) and a PV (PersistentVolume) using hostPath, which bind-mounts a directory or a file from the host machine into the container.
What I want to do is reproduce the behavior from the previous example (Docker use case), so how can I do that? Is the volume-creation process in Kubernetes different from Docker's? If possible, providing a YAML file would help me understand.
To a first approximation, you can't (portably) do this. Build your content into the image instead.
There are two big practical problems, especially if you're running a production-oriented system on a cloud-hosted Kubernetes:
If you look at the list of PersistentVolume types, very few of them can be used in ReadWriteMany mode. It's very easy to get, say, an AWSElasticBlockStore volume that can only be used on one node at a time, and something like this will probably be the default cluster setup. That means you'll have trouble running multiple pod replicas serving the same (static) data.
Once you do get a volume, it's very hard to edit its contents. Consider the aforementioned EBS volume: you can't edit it without being logged into the node on which it's mounted, which means finding the node, convincing your security team that you can have root access over your entire cluster, enabling remote logins, and then editing the file. That's not something that's actually possible in most non-developer Kubernetes setups.
The thing you should do instead is build your static content into a custom image. An image registry of some sort is all but required to run Kubernetes and you can push this static content server into the same registry as your application code.
FROM nginx:1.14.2
COPY . /usr/share/nginx/html
# Base image has a working CMD, no need to repeat it
Then in your deployment spec, set image: registry.example.com/nginx-frontend:20220209 or whatever you've chosen to name this build of this image, and do not use volumes at all. You'd deploy this the same way you deploy other parts of your application; you could use Helm or Kustomize to simplify the update process.
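A minimal sketch of such a deployment, with placeholder names and no volumes at all:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: static-frontend
  template:
    metadata:
      labels:
        app: static-frontend
    spec:
      containers:
        - name: nginx
          image: registry.example.com/nginx-frontend:20220209  # image with the content baked in
          ports:
            - containerPort: 80
          # no volumeMounts: the static files ship inside the image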
Correspondingly, in the plain-Docker case, I'd avoid volumes here. You don't discuss how files get into the nginx-storage named volume; if you're using imperative commands like docker cp or debugging tools like docker exec, those approaches are hard to script and are intrinsically local to the system they're running on. It's not easy to copy a Docker volume from one place to another. Images, though, can be pushed and pulled through a registry.
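Publishing the image is then the standard registry workflow, e.g. (using the hypothetical registry name from above):
$ docker build -t registry.example.com/nginx-frontend:20220209 .
$ docker push registry.example.com/nginx-frontend:20220209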
I managed to do that by creating a PVC only. This is how I did it (with an Nginx image):
nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
nginx-deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: nginx-data
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-data
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      nodePort: 30080
  type: NodePort
Once I run kubectl apply on the PVC and then on the deployment, going to localhost:30080 shows a 404 Not Found page, which means all data in /usr/share/nginx/html was deleted when the container started. That's because it is bind-mounting a directory from the k8s cluster node into that container as a volume:
/usr/share/nginx/html <-- dir in volume
/var/lib/k8s-pvs/nginx2-data/pvc-9ba811b0-e6b6-4564-b6c9-4a32d04b974f <-- dir from node (was automatically created)
I tried adding a new index.html file into that container's html dir, then deleted the container; a new container was created by the pod, and checking localhost:30080 showed the newly created home page.
I tried deleting the deployment and reapplying it (without deleting the PVC); I checked localhost:30080 and everything still persists.
An alternative solution is pointed out in the comments by larsks: kubernetes.io/docs/tasks/configure-pod-container/…
How do we change our k8s cluster from docker.io to our private registry, so we don't have to mention the registry host on every image?
You could set up a MutatingAdmissionWebhook to modify the .spec.containers[*].image values.
I wouldn't recommend doing this, though.
This is a Community Wiki answer partially based on other user's comments, so feel free to edit it and add any additional details you consider important.
As you can read in this section of the official Kubernetes documentation, it can be configured at the container runtime level (in this case Docker) by creating a Secret based on existing Docker credentials; later you can refer to this alternative configuration in your Pod specification as follows:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets: # 👈
    - name: regcred
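For completeness, the regcred Secret referenced above is created from your existing registry credentials, for example:
$ kubectl create secret docker-registry regcred \
    --docker-server=<your-registry-server> \
    --docker-username=<your-name> \
    --docker-password=<your-password> \
    --docker-email=<your-email>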
We have one bare-metal machine with one SSD and one HDD. The pods in Minikube (k8s) will claim some PVCs and get some volumes. We want to enforce that these volumes are actually on our SSD, not the HDD. How can we do that? Thanks very much!
P.S. What I have tried: when requesting a PVC, we find that Minikube will assign one under /tmp/hostpath-provisioner/.... In addition, IMHO this is the path inside the Docker container that Minikube itself runs in, not the host machine path. Thus, I have tried minikube mount /data/minikube-my-tmp-hostpath-provisioner:/tmp/hostpath-provisioner, where /data on the host bare-metal machine is on the SSD (not the HDD). However, this makes the pods unhappy, and after a restart they all fail... In addition, I find that only new files are written to the newly mounted path, while the existing files remain inside the container...
This sounds exactly like the reason for which storage classes exist:
A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called “profiles” in other storage systems.
So, in other words, you can create multiple storage classes with different performance or other characteristics, and then decide which one is most appropriate for each claim you create.
For example, this is a storage class you can use on minikube:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
And you'll probably also need to create a PV; you can do that using:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-name-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/path
Then, finally, the PVC would look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  storageClassName: fast
  resources:
    requests:
      storage: 100Mi
  accessModes:
    - ReadWriteOnce
I have a Jenkins pipeline using the kubernetes plugin to run a docker in docker container and build images:
pipeline {
  agent {
    kubernetes {
      label 'kind'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dind
...
I also have a pool of persistent volumes in the jenkins namespace each labelled app=dind. I want one of these volumes to be picked for each pipeline run and used as /var/lib/docker in my dind container in order to cache any image pulls on each run. I want to have a pool and caches, not just a single one, as I want multiple pipeline runs to be able to happen at the same time. How can I configure this?
This can be achieved natively in kubernetes by creating a persistent volume claim as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: dind
and mounting it into the Pod, but I'm not sure how to configure the pipeline to create and clean up such a persistent volume claim.
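For context, mounting such a claim at /var/lib/docker in the dind container would look roughly like this in the pod template (image and names are illustrative):
spec:
  containers:
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true          # dind needs a privileged container
      volumeMounts:
        - mountPath: /var/lib/docker
          name: docker-cache
  volumes:
    - name: docker-cache
      persistentVolumeClaim:
        claimName: dind           # the PVC defined above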
First of all, I don't think the way you propose to achieve this natively in Kubernetes would work. You either have to reuse the same PVC, which makes build pods access the same PV concurrently, or, if you want a PV per build, your PVs will be stuck in the Released status and not automatically available for new PVCs.
There are more details and discussion available here: https://issues.jenkins.io/browse/JENKINS-42422.
It so happens that I wrote two simple controllers: an automatic PV releaser (which finds Released PVs and makes them Available again for new PVCs) and a dynamic PVC provisioner (for the Jenkins Kubernetes plugin specifically, so you can define a PVC as an annotation on a Pod). Check them out here: https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers. There is a full Jenkinsfile example here: https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers/tree/main/examples/jenkins-kubernetes-plugin-with-build-cache.