So, forgive me; I just started learning Docker and Kubernetes a month ago.
I've got this to the point where I have a .yml file that runs my Minecraft server. I now want FTP access. Currently there is a volume for the world folder and another for the server's config folder (since I can't put the entire directory on a mounted volume, right? And those two folders need to persist every time the image is rebuilt).
So, I want to be able to access /config, preferably while the Minecraft node is still reading and writing. A few questions here:
How do I make the most minimal FTP image possible when writing the Dockerfile for it? I can't figure out an approach; the best I have is basing the image on python:alpine and using something like this.
Is it even possible to have one node access the volume while it's in use by another? Or do I have to write some custom script in the interface I'm making that shuts down the Minecraft server and then starts up the FTP node?
Current yml:
apiVersion: v1
kind: Service
metadata:
  name: lapitos
  labels:
    type: lapitos
spec:
  type: LoadBalancer
  ports:
  - name: minecraft
    port: 25565
    protocol: TCP
    targetPort: 25565
  - name: minecraft-rcon
    port: 25575
    protocol: TCP
    targetPort: 25575
  selector:
    app: lapitos
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: lapitos
spec:
  serviceName: lapitos
  replicas: 1
  selector:
    matchLabels:
      app: lapitos
  template:
    metadata:
      labels:
        app: lapitos
    spec:
      containers:
      - name: lapitos
        image: gcr.io/mchostingnet-202204/lapitosbeta2
        resources:
          limits:
            cpu: "2"
          requests:
            cpu: "2"
        ports:
        - containerPort: 25565
          name: minecraft
        volumeMounts:
        - name: world
          mountPath: /world
        - name: config
          mountPath: /config
        - name: logs
          mountPath: /logs
  volumeClaimTemplates:
  - metadata:
      name: world
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 25Gi
  - metadata:
      name: config
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: logs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
1.- Grab an FTP image that suits you from any registry and use it, instead of making your own. If building your own is still a requirement, I don't know of a recipe offhand.
Note: Compute Engine has got port 21 blocked.
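If you do want to build your own anyway, here is a minimal sketch on top of python:alpine using pyftpdlib; the port, flags, and served directory are assumptions, not a tested recipe:

FROM python:alpine
RUN pip install --no-cache-dir pyftpdlib
EXPOSE 2121
# Serve /config over FTP with anonymous write access; add real auth before exposing this.
CMD ["python", "-m", "pyftpdlib", "-p", "2121", "-w", "-d", "/config"]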
2.- Yes, you can. Volume access modes:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
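Note that access modes only constrain how many nodes can mount the volume; two containers inside the same pod always share the pod's volumes. So, as a sketch (the FTP image name and port are placeholders, not a recommendation), you could run the FTP server as a second container in your StatefulSet's pod template, mounting the same config claim:

      containers:
      - name: lapitos
        # ...the existing Minecraft container, unchanged...
      - name: ftp
        image: example/minimal-ftp:latest  # placeholder image
        ports:
        - containerPort: 2121
          name: ftp
        volumeMounts:
        - name: config
          mountPath: /config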
We have a Docker image that processes some files on a Samba share.
For this we created a CIFS share, which is mounted to /mnt/dfs, and files can be accessed in the container with:
docker run -v /mnt/dfs/project1:/workspace image
Now what I was asked to do is get the container into k8s, and to access a CIFS share from a pod, a CIFS volume driver using FlexVolume can be used. That's where some questions pop up.
I installed this driver as a DaemonSet from https://k8scifsvol.juliohm.com.br/ and it's up and running.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cifs-volumedriver-installer
spec:
  selector:
    matchLabels:
      app: cifs-volumedriver-installer
  template:
    metadata:
      name: cifs-volumedriver-installer
      labels:
        app: cifs-volumedriver-installer
    spec:
      containers:
      - image: juliohm/kubernetes-cifs-volumedriver-installer:2.4
        name: flex-deploy
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /flexmnt
          name: flexvolume-mount
      volumes:
      - name: flexvolume-mount
        hostPath:
          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
The next thing to do is add a PersistentVolume, but that needs a capacity, 1Gi in the example. Does this mean that we lose all data on the SMB server? Why should there be a capacity for an already existing server?
Also, how can we access a subdirectory of the mount /mnt/dfs from within the pod? So how to access data from /mnt/dfs/project1 in the pod?
Do we even need a PV? Could the pod just read from the host's mounted share?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycifspv
spec:
  capacity:
    storage: 1Gi
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory
    secretRef:
      name: my-secret
  accessModes:
  - ReadWriteMany
No, that field has no effect on the FlexVol plugin you linked. It doesn't even bother parsing out the size you pass in :)
Managed to get it working with the fstab/cifs plugin.
Copy its cifs script to /usr/libexec/kubernetes/kubelet-plugins/volume/exec and give it execute permissions; also restart the kubelet on all nodes.
https://github.com/fstab/cifs
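A sketch of that install, assuming the default kubelet plugin directory and the vendor~driver subdirectory layout that FlexVolume expects:

# On every node:
mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/fstab~cifs
cp cifs /usr/libexec/kubernetes/kubelet-plugins/volume/exec/fstab~cifs/cifs
chmod +x /usr/libexec/kubernetes/kubelet-plugins/volume/exec/fstab~cifs/cifs
systemctl restart kubelet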
Then I added:
containers:
- name: pablo
  image: "10.203.32.80:5000/pablo"
  volumeMounts:
  - name: dfs
    mountPath: /data
volumes:
- name: dfs
  flexVolume:
    driver: "fstab/cifs"
    fsType: "cifs"
    secretRef:
      name: "cifs-secret"
    options:
      networkPath: "//dfs/dir"
      mountOptions: "dir_mode=0755,file_mode=0644,noperm"
Now there is a /data mount inside the container pointing to //dfs/dir.
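As for reaching a subdirectory such as project1 from the pod, two untested options come to mind: point networkPath directly at the subdirectory, or keep the share as-is and add subPath to the volume mount:

  volumeMounts:
  - name: dfs
    mountPath: /data
    subPath: project1   # or set networkPath: "//dfs/dir/project1" in the volume options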
I am trying to set up a persistent volume for K8s running in Docker Desktop for Windows, the end goal being that I want to run Jenkins and not lose any work if Docker/K8s spins down.
I have tried a couple of things, but I'm either misunderstanding the ability to do this or I am setting something up wrong. Currently I have the environment set up like so:
I have set up a volume in Docker for Jenkins. All I did was create the volume; not sure if I need more configuration here.
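For reference, a default local volume like this is created with a single command, with no extra configuration at creation time:

docker volume create jenkins-pv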
docker volume inspect jenkins-pv
[
    {
        "CreatedAt": "2020-05-20T16:02:42Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/jenkins-pv/_data",
        "Name": "jenkins-pv",
        "Options": {},
        "Scope": "local"
    }
]
I have also created a persistent volume in K8s pointing to the mount point in the Docker volume and deployed it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: hostPath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/var/lib/docker/volumes/jenkins-pv/_data"
I have also created a PV claim and deployed that.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Lastly I have created a deployment for Jenkins. I have confirmed it works and I am able to access the app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-app
  template:
    metadata:
      labels:
        app: jenkins-app
    spec:
      containers:
      - name: jenkins-pod
        image: jenkins/jenkins:2.237-alpine
        ports:
        - containerPort: 50000
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-pv-volume
          mountPath: /var/lib/docker/volumes/jenkins-pv/_data
      volumes:
      - name: jenkins-pv-volume
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
However, the data does not persist after quitting Docker, and I have to reconfigure Jenkins every time I start. Did I miss something, or is what I am trying to do not possible? Is there a better or easier way to do this?
Thanks!
I figured out my issue; it was twofold.
1. I was trying to save data from the wrong location within the pod that was running Jenkins.
2. I was never writing the data back to the Docker shared folder.
To get this working I created a shared folder in Docker (C:\DockerShare).
Then I updated the host path in my Persistent Volume.
The format is /host_mnt/path_to_docker_shared_folder_location
Since I used C:\DockerShare my path is: /host_mnt/c/DockerShare
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: hostPath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /host_mnt/c/DockerShare/jenkins
I also had to update the Jenkins deployment because I was not actually saving any of the config.
I should have been saving data from /var/jenkins_home.
Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-app
  template:
    metadata:
      labels:
        app: jenkins-app
    spec:
      containers:
      - name: jenkins-pod
        image: jenkins/jenkins:2.237-alpine
        ports:
        - containerPort: 50000
        - containerPort: 8080
        volumeMounts:
        - name: jenkins
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins
        persistentVolumeClaim:
          claimName: jenkins
Anyway, it's working now, and I hope this helps someone else when it comes to setting up a PV.
I have a directly mounted NFS volume for MySQL data, and I need to implement a storage policy that retains the data across pod deletions and avoids corruption. Please recommend a useful approach.
I did not find a way to enable persistentVolumeReclaimPolicy: Retain on directly mounted volumes. I know it can be done at PV/PVC creation, but is it possible for StatefulSet volumes? Some guidance is also needed on understanding the YAML options for a particular object and where to find all the options (parameters) available for an object; currently I am googling each option and trying it, which is hard going.
I could not mount a ConfigMap file (my.cnf) onto a file in the pod; it removes the underlying files in the mount path. I am curious how this is generally handled. Do we need a separate mount path for each config file? (See the sketch after the YAML below.)
apiVersion: v1
kind: Service
metadata:
  name: mymariadb
  labels:
    app: mymariadb
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: mysql
    nodePort: 30003
  type: NodePort
  selector:
    app: mymariadb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mymariadb
  labels:
    app: mymariadb
spec:
  serviceName: "mymariadb"
  selector:
    matchLabels:
      app: mymariadb
  template:
    metadata:
      labels:
        app: mymariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.3.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: xxxx
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /data
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql  # /conf.d removing files
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
      volumes:
      - name: data
        nfs:
          server: 10.12.32.41
          path: /data/mymariadb
        spec:
          persistentVolumeReclaimPolicy: Retain  # not taking
      - name: conf
        configMap:
          name: mycustconf
          items:
          - key: my.cnf
            path: my.cnf
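On the ConfigMap point: a common pattern (a sketch, not part of the answer below) is to mount the file with subPath, so that only my.cnf is projected into the directory instead of the mount shadowing everything already in /etc/mysql:

        volumeMounts:
        - name: conf
          mountPath: /etc/mysql/my.cnf
          subPath: my.cnf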
Firstly, I would not suggest an NFS mount on a Kubernetes platform, for two reasons. From a security perspective, another container can access the NFS mount on the worker nodes. From a performance perspective, the connection between the worker nodes and the storage will be slower compared to other solutions, and as you know, performance is critical for DB connections. I think you should evaluate that.
I suggest you use one of the cloud-native storage solutions. You can view them at the link below; Ceph and Gluster are popular products.
https://landscape.cncf.io/category=cloud-native-storage&format=card-mode&grouping=category
If you really want to continue with the NFS solution, you can check two points:
1) Did you check the access list on the storage appliance? The worker nodes should be listed for the NFS mount.
2) After you mount the NFS storage on the worker nodes, you can try to import the deployment into your Kubernetes cluster.
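If the goal is specifically persistentVolumeReclaimPolicy: Retain, note that the field only exists on a PersistentVolume object, not on inline pod volumes, so the share would need to be declared as a PV. A sketch, reusing the question's NFS server and path (the name and size here are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mymariadb-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.12.32.41
    path: /data/mymariadb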
I was experimenting with Kubernetes Persistent Volumes. I can't find a clear explanation in the Kubernetes documentation, and the behaviour is not the one I am expecting, so I would like to ask here.
I configured the following Persistent Volume and Persistent Volume Claim.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  namespace: test
spec:
  storageClassName: hostpath
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  namespace: test
spec:
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
and the following Deployment and Service configuration.
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: store-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: store
  template:
    metadata:
      labels:
        k8s-app: store
    spec:
      volumes:
      - name: store-volume
        persistentVolumeClaim:
          claimName: store-persistent-volume-claim
      containers:
      - name: store
        image: localhost:5000/store
        ports:
        - containerPort: 8383
          protocol: TCP
        volumeMounts:
        - name: store-volume
          mountPath: /data
---
#------------ Service ----------------#
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: store
  name: store
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - port: 8383
    targetPort: 8383
  selector:
    k8s-app: store
As you can see, I defined '/Volumes/Data/data' as the Persistent Volume and expect it to be mounted to '/data' in the container.
So I am assuming that whatever is in '/Volumes/Data/data' on the host should be visible in the '/data' directory in the container. Is this assumption correct? Because this is definitely not happening at the moment.
My second assumption is that whatever I save at '/data' should be visible on the host, which is also not happening.
I can see from Kubernetes console that everything started correctly, (Persistent Volume, Claim, Deployment, Pod, Service...)
Am I understanding the persistent volume concept correctly at all?
P.S. I am trying this on a Mac with Docker (18.05.0-ce-mac67 (25042), edge channel); maybe it is not supposed to work on a Mac?
Thanks for any answers.
Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data on the specific worker node where the pod is running.
You can check which worker your pod is scheduled on by using the command kubectl get pods -o wide -n test.
Please note, as per the Kubernetes docs on PersistentVolume: hostPath is for single-node testing only; local storage is not supported in any way and WILL NOT WORK in a multi-node cluster.
It does work in my case.
As you are using a host path, you should check this '/data' on the worker node where the pod is running.
As said above, run kubectl get po -n test -o wide and you will see the node the pod is hosted on. Then, if you SSH into that worker, you can see the volume.
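A sketch of that check (the node name is illustrative):

kubectl get po -n test -o wide          # the NODE column shows, e.g., worker-1
ssh worker-1 'ls /Volumes/Data/data'    # inspect the hostPath contents on that node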
Trying to set up PetSet using Kube-Solo
In my local dev environment, I have set up Kube-Solo with CoreOS. I'm trying to deploy a Kubernetes PetSet that includes a Persistent Volume Claim Template as part of the PetSet configuration. This configuration fails and none of the pods are ever started. Here is my PetSet definition:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: 'marklogic'
        image: {ip address of repo}:5000/dcgs-sof/ml8-docker-final:v1
        imagePullPolicy: Always
        command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
        ports:
        - containerPort: 7997
          name: health-check
        - containerPort: 8000
          name: app-services
        - containerPort: 8001
          name: admin
        - containerPort: 8002
          name: manage
        - containerPort: 8040
          name: sof-sdl
        - containerPort: 8041
          name: sof-sdl-xcc
        - containerPort: 8042
          name: ml8042
        - containerPort: 8050
          name: sof-sdl-admin
        - containerPort: 8051
          name: sof-sdl-cache
        - containerPort: 8060
          name: sof-sdl-camel
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        lifecycle:
          preStop:
            exec:
              command: ["/etc/init.d/MarkLogic stop"]
        volumeMounts:
        - name: ml-data
          mountPath: /var/opt/MarkLogic
  volumeClaimTemplates:
  - metadata:
      name: ml-data
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 1Gi
In the Kubernetes dashboard, I see the following error message:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "ml-data-marklogic-0", which is unexpected.
It seems that being unable to create the Persistent Volume Claim is also preventing the image from ever being pulled from my local repository. Additionally, the Kubernetes Dashboard shows the request for the Persistent Volume Claims, but the state is continuously "pending".
I have verified the issue is with the Persistent Volume Claim. If I remove that from the PetSet configuration the deployment succeeds.
I should note that I was using MiniKube prior to this and would see the same message, but once the image was pulled and the pod(s) started the claim would take hold and the message would go away.
I am using
Kubernetes version: 1.4.0
Docker version: 1.12.1 (on my mac) & 1.10.3 (inside the CoreOS vm)
Corectl version: 0.2.8
Kube-Solo version: 0.9.6
I am not familiar with kube-solo.
However, the issue here might be that you are attempting to use dynamic volume provisioning, a beta feature that does not have specific support for volumes in your environment.
The best way around this would be to manually create the persistent volumes that it expects to find, so that the PersistentVolumeClaim can bind to them.
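For example, a manually created PV that the ml-data claim template could bind to might look like the following sketch; the hostPath, PV name, and storage-class annotation are assumptions matched to the claim template in the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-data-marklogic-0
  annotations:
    volume.alpha.kubernetes.io/storage-class: anything
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /data/ml-data-0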
The same error happened to me, and I found clues about the following config (considering volumeClaimTemplates and StorageClass) in the Slack group and this pull request:
volumeClaimTemplates:
- metadata:
    name: cassandra-data
    annotations:
      volume.beta.kubernetes.io/storage-class: standard
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path