I am trying to mount an SMB share with a PV/PVC in a Kubernetes cluster using FlexVolumes. I am getting the following error when I submit jobs:
MountVolume.SetUp failed for volume "smb-job" : mount command failed, status: Failure, reason: Caught exception A specified logon session does not exist. It may already have been terminated. with stack
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: smb-volume
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
flexVolume:
driver: microsoft.com/smb.cmd
secretRef:
name: "smb-secret"
options:
source: "\\\\ip_address\\test"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: smb-pv-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
The cluster is a local cluster consisting of a Windows node and a Linux node, created with Rancher. Mounting the Samba share manually from the command line works fine.
I am unsure of how to troubleshoot this.
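One thing worth double-checking first is the smb-secret that the PV's secretRef points to, since the "logon session does not exist" error usually comes from the credentials the Windows node uses for the mount. Below is a minimal sketch of such a Secret; the username/password key names and the secret type are assumptions based on published FlexVolume SMB examples, so verify them against the driver version installed on your node.

apiVersion: v1
kind: Secret
metadata:
  name: smb-secret
type: microsoft.com/smb       # assumption; some examples use the default Opaque type instead
stringData:                   # stringData avoids encoding the values in base64 by hand
  username: 'DOMAIN\user'     # hypothetical account that can reach \\ip_address\test
  password: 'your-password'   # placeholder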
Related
I am running macOS Catalina using Docker Desktop with the Kubernetes option turned on. I create a PersistentVolume with the following YAML and command.
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-data
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 10Gi
persistentVolumeReclaimPolicy: Retain
nfs:
server: 192.168.1.250
path: "/volume1/docker"
kubectl apply -f pv.yml
This creates a PersistentVolume named pv-nfs-data. Next I create a PersistentVolumeClaim with the following YAML and command:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-nfs-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
kubectl apply -f pvc.yml
This creates a PersistentVolumeClaim named pvc-nfs-data, however it doesn't bind to the available PersistentVolume (pv-nfs-data). Instead it creates a new one and binds to that. How do I make the PersistentVolumeClaim bind to the available PersistentVolume?
The Docker for Mac default storage class is the dynamic provisioning type, like you would get on AKS/GKE, where it allocates the physical storage as well.
→ kubectl get StorageClass
NAME PROVISIONER AGE
hostpath (default) docker.io/hostpath 191d
For a PVC to use an existing PV, you can disable the storage class and specify in the PV which PVC can use it with a claimRef.
Claim Ref
The PV includes a claimRef for the PVC you will create
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-data
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 10Gi
claimRef:
namespace: insert-your-namespace-here
name: pv-nfs-data-claim
persistentVolumeReclaimPolicy: Retain
nfs:
server: 192.168.1.250
path: "/volume1/docker"
The PVC sets storageClassName to ''
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pv-nfs-data-claim
namespace: insert-your-namespace-here
spec:
storageClassName: ''
accessModes:
  - ReadWriteMany
resources:
requests:
storage: 10Gi
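To check that the pairing worked, apply both manifests and confirm the claim reports Bound against pv-nfs-data (the file names below are just whatever you saved the manifests as):

kubectl apply -f pv.yml
kubectl apply -f pvc.yml
kubectl get pv pv-nfs-data
kubectl get pvc pv-nfs-data-claim -n insert-your-namespace-here
# STATUS should read Bound for both, and the PVC's VOLUME column should show pv-nfs-data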
Dynamic
You can go the dynamic route with NFS by adding an NFS dynamic provisioner, creating a storage class for it, and letting Kubernetes work the rest out. More recent versions of Kubernetes (1.13+) can use the NFS CSI driver.
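For reference, a StorageClass for the NFS CSI driver looks roughly like the sketch below. It assumes the csi-driver-nfs add-on is already installed in the cluster; the server and share values are placeholders reusing the details from the question.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io     # provisioner name registered by csi-driver-nfs
parameters:
  server: 192.168.1.250         # NFS server from the question
  share: /volume1/docker        # exported path the provisioner carves volumes out of
reclaimPolicy: Retain
volumeBindingMode: Immediate

With that in place, a PVC that sets storageClassName: nfs-csi gets its volume provisioned automatically.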
I have Kubernetes running in OpenStack, and I want to use volumes provided by OpenStack rather than using NFS to manage volumes. I'm not sure where to start or if it's even possible. I've tried a bunch of stuff, no luck :(
Here are some of the methods I've tried so far.
I modified the /etc/kubernetes/manifests/kube-controller-manager.yaml file. I mounted the cloud.conf file and added these lines:
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud.conf
Then I ran this to create my storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: openstack-test
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
namespace: mongo-dump
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
parameters:
type: fast
availability: nova
Then I created my PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: cinder-claim
annotations:
volume.beta.kubernetes.io/storage-class: "standard"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
After this step it's just stuck at Pending. :(
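One generic first step when a claim stays Pending (standard kubectl, nothing specific to this setup) is to inspect its events and the registered storage classes; with the in-tree Cinder provisioner the failure reason usually shows up there:

kubectl describe pvc cinder-claim
# the Events section at the bottom explains why provisioning is failing
kubectl get storageclass
# confirms which class is the default and whether its name matches the one the PVC asks for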
I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation and the behaviour is not the one I am expecting, so I'd like to ask here.
I configured the following Persistent Volume and Persistent Volume Claim.
kind: PersistentVolume
apiVersion: v1
metadata:
name: store-persistent-volume
namespace: test
spec:
storageClassName: hostpath
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/Volumes/Data/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: store-persistent-volume-claim
namespace: test
spec:
storageClassName: hostpath
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
and the following Deployment and Service configuration.
kind: Deployment
apiVersion: apps/v1beta2
metadata:
name: store-deployment
namespace: test
spec:
replicas: 1
selector:
matchLabels:
k8s-app: store
template:
metadata:
labels:
k8s-app: store
spec:
volumes:
- name: store-volume
persistentVolumeClaim:
claimName: store-persistent-volume-claim
containers:
- name: store
image: localhost:5000/store
ports:
- containerPort: 8383
protocol: TCP
volumeMounts:
- name: store-volume
mountPath: /data
---
#------------ Service ----------------#
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: store
name: store
namespace: test
spec:
type: LoadBalancer
ports:
- port: 8383
targetPort: 8383
selector:
k8s-app: store
As you can see, I defined '/Volumes/Data/data' as the Persistent Volume's host path and expect it to be mounted at '/data' in the container.
So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.
My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.
I can see from Kubernetes console that everything started correctly, (Persistent Volume, Claim, Deployment, Pod, Service...)
Am I understanding the persistent volume concept correctly at all?
PS: I am trying this on a Mac with Docker (18.05.0-ce-mac67 (25042), edge channel); maybe it does not work on a Mac?
Thanks for any answers.
Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node that the pod is running on.
You can check on which worker your pod is scheduled by using the command kubectl get pods -o wide -n test
Please note, as per the Kubernetes docs, a HostPath PersistentVolume is for single-node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster.
It does work in my case.
As you are using a hostPath volume, you should check that path ('/Volumes/Data/data') on the worker node on which the pod is running.
Like the previous answer said, you need to run kubectl get po -n test -o wide and you will see the node the pod is hosted on. Then, if you SSH into that worker, you can see the volume.
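Put together, the check looks something like this (the node name is hypothetical and depends on your cluster):

kubectl get po -n test -o wide    # note the NODE column for the store pod
ssh user@worker-node-1            # hypothetical node taken from the NODE column
ls -la /Volumes/Data/data         # the hostPath defined in the PV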
I have set up a small cluster (one physical machine as the master and two VMs as the nodes). Now I have created an NFS directory to share as a persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs # reference name
spec:
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
nfs:
server: 192.168.57.1
path: "/mnt/shardisk"
and a claim that references it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50Mi
and finally a simple pod to use it:
kind: Pod
apiVersion: v1
metadata:
name: nginx-nfs
spec:
volumes:
- name: storage
persistentVolumeClaim:
claimName: test-pvc
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: storage
Now I have created the cluster from the physical machine and joined it from the VMs. I used Calico for the network services (because Flannel fails to start; if someone knows why, it would be wonderful to solve that too).
Now, if I run kubectl describe pod everything looks fine, and the same goes for kubectl logs nginx-nfs, but if I try kubectl exec -it nginx-nfs /bin/bash,
everything freezes for a very long time and after that I get this:
Error from server: error dialing backend: dial tcp 10.0.2.15:10250: getsockopt: connection timed out
I have "solve" it, i use kubernetes in 2 different lan and so the admin.conf have an ip that no match the current ip and it will not work, i have solve it creating same vm internal to the host and nat a static ip on it
In Kubernetes, is it possible to add hostPath storage in a StatefulSet? If so, can someone help me with an example?
Yes, but it is definitely only for testing purposes.
First you need to create as many PersistentVolumes as you need:
kind: PersistentVolume
apiVersion: v1
metadata:
name: hp-pv-001
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data01"
kind: PersistentVolume
apiVersion: v1
metadata:
name: hp-pv-002
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data02"
...
Afterwards, add this volumeClaimTemplates section to your StatefulSet:
volumeClaimTemplates:
- metadata:
name: my-hostpath-volume
spec:
storageClassName: manual
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 5Gi
selector:
matchLabels:
type: local
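For context, here is a minimal sketch of where that section sits in a full StatefulSet; the name, image, headless service and mount path are hypothetical placeholders, not something from the question.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app                      # hypothetical
spec:
  serviceName: my-app               # headless Service assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx                # placeholder image
        volumeMounts:
        - name: my-hostpath-volume  # must match the name in volumeClaimTemplates
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: my-hostpath-volume
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          type: local

Each replica gets its own claim (my-hostpath-volume-my-app-0, my-hostpath-volume-my-app-1, ...), which is why one hostPath PV per replica is needed.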
Another solution is using the hostpath dynamic provisioner. You do not have to create the PVs in advance, but this remains a "proof-of-concept" solution as well, and you will have to build and deploy the provisioner in your cluster.
A hostPath volume for a StatefulSet should only be used in a single-node cluster, e.g. for development. Rescheduling of the pod will not work.
Instead, consider using a Local Persistent Volume for this kind of use cases.
The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With HostPath volumes, a pod referencing a HostPath volume may be moved by the scheduler to a different node resulting in data loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.
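For reference, a Local Persistent Volume declares which node it lives on via nodeAffinity, which is what lets the scheduler keep the pod pinned there. A minimal sketch, with the node name and path as placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage    # a class with volumeBindingMode: WaitForFirstConsumer
  local:
    path: /mnt/disks/vol1            # placeholder path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node-1            # placeholder node name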
Consider using the local static provisioner for this; its Getting Started guide has instructions for how to use it in different environments.