Kubernetes (GKE) persistent volume resizing not working

I am trying to resize a persistent volume in Google Kubernetes Engine, but I end up with this error:
The PersistentVolumeClaim "pvc1" is invalid: spec: Forbidden: field is immutable after creation
I have been following the guide at https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/.
Steps
1. Created a standard.yaml file with the following content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
reclaimPolicy: Delete
2. Created gke-pvc.yml with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi
3. Ran kubectl apply -f standard.yaml and kubectl apply -f gke-pvc.yml
Then I ran kubectl edit pvc pvc1, changed storage from 20Gi to 30Gi, and saved the file, but I got this error:
error: persistentvolumeclaims "pvc1" is invalid
error: persistentvolumeclaims "pvc1" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-0hztl.yaml"
Please help me to solve this issue.

This is expected behavior on GKE. I believe the feature is available in Kubernetes 1.11 but not yet released on GKE. If you want early access to the feature, you may sign up here.

It is working currently. After you edit the PVC, you get this condition on it:
conditions:
- lastProbeTime: null
  lastTransitionTime: "2019-02-17T23:31:42Z"
  status: "True"
  type: Resizing
and soon after, this:
message: Waiting for user to (re-)start a pod to finish file system resize of
  volume on node.
status: "True"
type: FileSystemResizePending
Then just delete the pod and your volume will be resized.
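For reference, the same resize can be done non-interactively. This is a minimal sketch, assuming the claim is named pvc1 and the StorageClass has allowVolumeExpansion: true; the pod name is a placeholder for whichever pod uses the claim:
kubectl patch pvc pvc1 -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'
kubectl get pvc pvc1 -o jsonpath='{.status.conditions}'   # watch for FileSystemResizePending
kubectl delete pod <pod-using-pvc1>                       # restarting the pod finishes the file system resize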

Related

flexVolume fails to mount SMB drive

I am trying to mount an SMB drive with a PV/PVC in a Kubernetes cluster using flexVolumes. I am getting the following error when I submit jobs:
MountVolume.SetUp failed for volume "smb-job" : mount command failed, status: Failure, reason: Caught exception A specified logon session does not exist. It may already have been terminated. with stack
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-volume
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  flexVolume:
    driver: microsoft.com/smb.cmd
    secretRef:
      name: "smb-secret"
    options:
      source: "\\\\ip_address\\test"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-pv-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
The cluster is a local cluster composed of a Windows node and a Linux node, created with Rancher. Mounting Samba drives normally through the command line works.
I am unsure of how to troubleshoot this.
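The driver gets the share credentials from the referenced Secret, so its contents are worth double-checking. A minimal sketch of such a Secret, assuming the driver reads username and password keys (the key names and values here are taken from common SMB flexVolume examples, not from your setup):
apiVersion: v1
kind: Secret
metadata:
  name: smb-secret
stringData:
  username: "DOMAIN\\user"   # placeholder credentials for the share
  password: "<password>"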

Docker container does/doesn't work inside Kubernetes

I am a bit confused here. The image works as a normal Docker container, but when it goes inside a pod it doesn't. Here is how I do it.
Dockerfile on my local machine, used to create the image and publish it to the Docker registry:
FROM alpine:3.7
COPY . /var/www/html
CMD tail -f /dev/null
Now if I just pull the image (after deleting the local copy) and run it as a container, it works and I can see my files inside /var/www/html.
Now I want to use that inside my Kubernetes cluster.
Setup: Minikube with --vm-driver=none
I am running Kubernetes inside Minikube with the none driver option, so it is a single-node cluster.
EDIT
I can see my data inside /var/www/html if I remove the volume mount and claim from the deployment file.
Deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: app
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
      containers:
        - image: kingshukdeb/mycode
          name: pd-mycode
          resources: {}
          volumeMounts:
            - mountPath: /var/www/html
              name: claim-app-storage
      restartPolicy: Always
      volumes:
        - name: claim-app-storage
          persistentVolumeClaim:
            claimName: claim-app-nginx
status: {}
PVC file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: app-nginx1
  name: claim-app-nginx
spec:
  storageClassName: testmanual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
PV file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-nginx1
  labels:
    type: local
spec:
  storageClassName: testmanual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/volumes/app"
Now when I apply these files it creates the pod, PV, and PVC, and the PVC is bound to the PV. But if I go inside my container I don't see my files. The hostPath is /data/volumes/app. Any ideas will be appreciated.
When a PVC is bound to a pod, the volume is mounted at the location described in the pod/deployment YAML file, in your case mountPath: /var/www/html. That's why the files "baked into" the container image are not accessible (a simple explanation of why is here).
You can confirm this by exec'ing into the container with kubectl exec YOUR_POD -i -t -- /bin/sh and running mount | grep "/var/www/html".
Solution
You may solve this in many ways. It is best practice to keep your static data separate (i.e. in the PV) and keep the container image as small and fast as possible.
If you copy the files you want in the PV to the host path /data/volumes/app, they will be accessible in your pod, and you can then build a new image omitting the COPY operation. This way, even if the pod crashes, changes to the files made by your app will be saved.
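A minimal sketch of that copy step, assuming the site files live in a local ./html directory and the pod comes from the deployment above (the pod name is a placeholder):
# with --vm-driver=none the Minikube "node" is the local machine
sudo mkdir -p /data/volumes/app
sudo cp -r ./html/. /data/volumes/app/
# verify from inside the pod
kubectl exec <app-pod> -- ls /var/www/html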
If the PV will be claimed by more than one pod, you need to change the accessModes as described here:
The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
In-depth explanation of Volumes in Kubernetes docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Kubernetes storage type hostPath - file mapping issue

Hi, I am using the latest Kubernetes (1.13.1) and docker-ce (Docker version 18.06.1-ce, build e68fc7a).
I set up a deployment file that takes a file from the host (hostPath) and mounts it inside a container (mountPath).
The bug is that when I try to mount a file from the host into the container, I get an error message that it's not a file (Kubernetes thinks the file is a directory for some reason).
When I try to run the containers using the command
kubectl create -f
the pod stays at the ContainerCreating stage forever.
After a deeper look using kubectl describe pod, it shows an error message that the file is not recognized as a file.
Here is the deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: notixxxion
  name: notification
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: notification
    spec:
      containers:
        - image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
          name: notixxxion
          ports:
            - containerPort: xxx0
          #### host file configuration
          volumeMounts:
            - mountPath: /opt/notification/dist/hellow.txt
              name: test-volume
              readOnly: false
      volumes:
        - name: test-volume
          hostPath:
            # directory location on host
            path: /exec-ui/app-config/hellow.txt
            # this field is optional
            type: FileOrCreate
            #type: File
status: {}
I have reinstalled the Kubernetes cluster and it got a little bit better.
Kubernetes can now read the file without any problem and the container is created and running. But there is another issue with the hostPath storage type:
hostPath mounts do not update as the files change on the host, even after I delete the pod and create it again.
Check the permissions of the file you are trying to mount!
As a last resort, try using privileged mode.
Hope it helps!
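For reference, privileged mode is enabled per container through its securityContext; a minimal sketch of the relevant fragment of the deployment above:
containers:
  - image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
    name: notixxxion
    securityContext:
      privileged: true   # runs the container with extended host privileges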

persistentvolumeclaim "jenkins-volume-claim" not found

In my Minikube I'm getting the error persistentvolumeclaim "jenkins-volume-claim" not found.
I'm installing Jenkins using Helm with the command below:
helm install --name jenkins -f kubernetes/jenkins-values.yaml stable/jenkins --namespace jenkins-system
The snippet about Persistence in jenkins-values.yaml is below:
Persistence:
  Enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires Persistence.Enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  ExistingClaim: jenkins-volume-claim
I've created a persistent volume using the command below:
kubectl create -f persistence.yaml
persistence.yaml looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/jenkins-volume/
Question
I have the persistent volume jenkins-volume created but am still getting the error persistentvolumeclaim "jenkins-volume-claim" not found. How can I resolve this?
The error message points to a missing PersistentVolumeClaim named jenkins-volume-claim. To create one, execute:
kubectl -n <namespace> create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
EOF
Running kubectl get pv after that should show the jenkins-volume PV in Bound status (assuming the PV has already been created with a capacity of at least 5Gi).
Use selector(s) as described here to make sure the claim binds to the desired pre-created PV (persistent volume) in case there is more than one PV available with sufficient capacity.
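For illustration, a minimal sketch of such a selector, assuming you add a label like app: jenkins to the pre-created PV (the label key and value are an arbitrary choice, not something required by the chart):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-volume
  labels:
    app: jenkins            # arbitrary label used only for matching
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/jenkins-volume/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: jenkins          # binds this claim only to PVs carrying the label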
Look at these lines:
## If defined, PVC must be created manually before volume will be bound
ExistingClaim: jenkins-volume-claim
So you have to create a PersistentVolumeClaim, not a PersistentVolume, with the name jenkins-volume-claim.
See what a PersistentVolumeClaim is here: PersistentVolumeClaims

Kubernetes Persistent Volume and hostpath

I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation and the behaviour is not what I am expecting, so I'd like to ask here.
I configured the following Persistent Volume and Persistent Volume Claim.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
and the following Deployment and Service configuration.
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
        - name: store-volume
          persistentVolumeClaim:
            claimName: store-persistent-volume-claim
      containers:
        - name: store
          image: localhost:5000/store
          ports:
            - containerPort: 8383
              protocol: TCP
          volumeMounts:
            - name: store-volume
              mountPath: /data
---
#------------ Service ----------------#
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
    - port: 8383
      targetPort: 8383
  selector:
    k8s-app: store
As you can see, I defined '/Volumes/Data/data' as the Persistent Volume path and expect it to be mounted at '/data' in the container.
So I am assuming whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.
My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.
I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).
Am I understanding the persistent volume concept correctly at all?
PS: I am trying this on a Mac with Docker (18.05.0-ce-mac67 (25042), edge channel); maybe it should not work on a Mac?
Thanks for any answers.
Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node the pod is running on.
You can check which worker your pod is scheduled on by using the command kubectl get pods -o wide -n test.
Please note, as per the Kubernetes docs on PersistentVolumes: HostPath (single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster).
It does work in my case.
As you are using hostPath, you should check this '/data' mount on the worker node on which the pod is running.
Like the answer above said, run kubectl get po -n test -o wide and you will see the node the pod is hosted on. Then if you SSH into that worker you can see the volume.
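To make the check concrete, a minimal sketch of verifying the mapping end to end (the pod name is a placeholder for whatever kubectl get po returns):
# find the node the store pod landed on
kubectl get po -n test -o wide
# write a probe file through the container mount
kubectl exec -n test <store-pod> -- sh -c 'echo probe > /data/probe'
# then, on that node (e.g. over SSH), the file should show up under the hostPath
ls /Volumes/Data/data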
