How to mount an NFS share with credentials in an AKS pod container

I have an on-prem network file share //server_name/share which I can map as a network drive by providing credentials.
Currently I link this path to an Azure VM's path /mnt/share, and on this VM I run Docker container applications where it is further mounted into the container:
- /mnt/share:/app/mnt
And finally my container application reads whatever is available at //server_name/share.
Now I have moved to Azure Kubernetes Service, and here I am NOT able to map a path to my on-prem share //server_name/share.
What are the workarounds for this issue? I have seen Azure File Sync, but I don't want that solution.
How can I mount the share into the pod's container?

You can set up your own NFS server on the Kubernetes cluster if you are fine with that approach.
Or else you can mount the NFS file system directly into the Kubernetes pod.
For example:
volumeMounts:
  - name: nfs-volume
    mountPath: /var/your-destination
Defining the volume:
volumes:
  - name: nfs-volume
    nfs:
      server: nfs-server.yourdomain.com
      path: /path/to/shared-folder
If you are looking for reference YAML, please check the official examples: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
If you are looking to set up an NFS server on the K8s cluster, you can use: https://www.gluster.org/
Or https://min.io/
If you are looking to create a PV using NFS:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.yourdomain.com   # address of your NFS server
    path: /home/shared
    readOnly: false
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/persistent_storage_nfs.html
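For completeness, a minimal sketch of a matching PersistentVolumeClaim (the name nfs-pvc is arbitrary; size and storage class follow the PV above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
Pods then consume it through a volume with persistentVolumeClaim: claimName: nfs-pvc.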

Related

How does AKS handle the .env file in a container?

Assume there is a backend application with a private key stored in a .env file.
For the project file structure:
|-App files
|-Dockerfile
|-.env
If I run the Docker image locally, the application can be reached normally by using a valid public key during the API request. However, if I deploy the container into an AKS cluster using the same Docker image, the application fails.
I am wondering how a container in an AKS cluster handles the .env file. What should I do to solve this problem?
Moving this out of comments for better visibility.
First and most important: Docker is not the same as Kubernetes. What works on Docker won't necessarily work directly on Kubernetes. Docker is a container runtime, while Kubernetes is a container orchestration tool which sits on top of a container runtime (not always Docker nowadays; containerd is used as well).
There are many resources on the internet which describe the key differences; for example, this one from the Microsoft docs.
First, ConfigMaps and Secrets should be created:
Creating and managing configmaps and creating and managing secrets
There are different types of secrets which can be created.
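For the .env file from the question, one convenient option (the secret name my-app-env below is just a placeholder) is to create the Secret directly from that file:
kubectl create secret generic my-app-env --from-env-file=.env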
Use configmaps/secrets as environment variables.
Referencing ConfigMaps and Secrets as environment variables looks like this (ConfigMaps and Secrets have the same structure):
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
    - ...
      env:
        - name: ADMIN_PASS
          valueFrom:
            secretKeyRef:        # secretKeyRef is used for sensitive data
              key: admin
              name: admin-password
        - name: MYSQL_DB_STRING
          valueFrom:
            configMapKeyRef:     # this is not sensitive data, so a ConfigMap can be used
              key: db_config
              name: connection-string
...
Use configmaps/secrets as volumes (the data will be presented as files).
Below is an example of using Secrets as files mounted in a specific directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  template:
    spec:
      containers:
        - ...
          volumeMounts:
            - name: secrets-files
              mountPath: "/mnt/secret.file1"  # a "secret.file1" file will be created in the "/mnt" directory
              subPath: secret.file1
      volumes:
        - name: secrets-files
          secret:
            secretName: my-secret  # name of the Secret
There's a good article which explains and shows use cases of Secrets as well as their limitations, e.g. the size is limited to 1 MiB.
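If the application expects every entry of the .env file as an environment variable, a short sketch (assuming a Secret named my-app-env, as created above; the image name is a placeholder) is to inject all keys at once with envFrom:
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
    - name: app
      image: your-backend-image    # placeholder image
      envFrom:
        - secretRef:
            name: my-app-env       # every key/value pair becomes an environment variable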

Deploying bluespice-free in Kubernetes

According to this source, I can store data to /my/data/folder as follows:
docker run -d -p 80:80 -v {/my/data/folder}:/data bluespice/bluespice-free
I have created the following deployment, but I am not sure how to use a persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluespice
  namespace: default
  labels:
    app: bluespice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluespice
  template:
    metadata:
      labels:
        app: bluespice
    spec:
      containers:
        - name: bluespice
          image: bluespice/bluespice-free
          ports:
            - containerPort: 80
          env:
            - name: bs_url
              value: "https://bluespice.mycompany.local"
My persistent volume claim name is bluespice-pvc.
Also, I have already deployed the pod without a persistent volume. Can I attach a persistent volume on the fly to keep the data?
If you want to mount a local directory, you don't have to deal with a PVC, since you can't force a specific host path in a PersistentVolumeClaim. For testing locally, you can use hostPath as explained in the documentation:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some uses for a hostPath are:
running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
running cAdvisor in a container; use a hostPath of /sys
allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
In addition to the required path property, you can optionally specify a type for a hostPath volume.
hostPath configuration example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluespice
  namespace: default
  labels:
    app: bluespice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluespice
  template:
    metadata:
      labels:
        app: bluespice
    spec:
      containers:
        - image: bluespice/bluespice-free
          name: bluespice
          volumeMounts:
            - mountPath: /data
              name: bluespice-volume
      volumes:
        - name: bluespice-volume
          hostPath:
            # directory location on host
            path: /my/data/folder
            # this field is optional
            type: Directory
However, if you want to move to a production cluster, you should consider a more reliable option, since allowing hostPath volumes is a security risk and is not portable:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective.
For more information about PersistentVolumes, you can check the official Kubernetes documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
Therefore, I would recommend using a cloud storage solution such as those on GCP or AWS, or at least an NFS share consumed directly from Kubernetes. Also check this topic on Stack Overflow.
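Since the question already mentions a claim named bluespice-pvc, here is a minimal sketch (not verified against your cluster) of referencing it from the deployment's pod spec instead of the hostPath volume:
    spec:
      containers:
        - name: bluespice
          image: bluespice/bluespice-free
          volumeMounts:
            - mountPath: /data
              name: bluespice-volume
      volumes:
        - name: bluespice-volume
          persistentVolumeClaim:
            claimName: bluespice-pvc   # the existing PersistentVolumeClaim from the question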
About your last question: it's not possible to attach a Persistent Volume to a running Pod on the fly; you would have to update the Deployment, which recreates the Pods with the volume mounted.

Kubernetes - Mounting Persistent Volume as root directory

I'm trying to create a new Kubernetes deployment that will allow me to persist a pod's state when it is restarted or shutdown. Just for some background, the Kubernetes instance is a managed Amazon EKS Cluster, and I am trying to incorporate an Amazon EFS-backed Persistent Volume that is mounted to the pod.
Unfortunately as I have it now, the PV mounts to /etc/ as desired, but the contents are nearly empty, except for some files that were modified during boot.
The deployment yaml looks as below:
kind: Deployment
apiVersion: apps/v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testpod
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: testpod
    spec:
      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efs
      containers:
        - name: testpod
          image: 'xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/testpod:latest'
          args:
            - /bin/init
          ports:
            - containerPort: 443
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: efs
              mountPath: /etc
              subPath: etc
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - ALL
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
Any ideas of what could be going wrong? I would expect /etc/ to be populated with the contents of the image.
Edit:
This seems to be working fine in Docker by using the same image, creating a volume with docker volume create <name> and then mounting it as -v <name>:/etc.
Kubernetes does not have the Docker feature that populates volumes based on the contents of the image. If you create a new volume (whether an emptyDir volume or something based on cloud storage like AWS EBS or EFS) it will start off empty, and hide whatever was in the container.
As such, you can’t mount a volume over large parts of the container; it won’t work to mount a volume over your application’s source tree, or over /etc as you show. For files in /etc in particular, a better approach would be to use a Kubernetes ConfigMap to hold specific files you want to add to that directory. (Store your config files in source control and add them as part of the deployment sequence; don’t try to persist untracked modifications to deployed files.)
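As a rough sketch of that approach (the ConfigMap name app-etc-config and the file name myapp.conf are made up for illustration), individual files can be projected into /etc without hiding the rest of the directory:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-etc-config        # hypothetical name
data:
  myapp.conf: |
    # the content you want to appear as /etc/myapp.conf
    setting=value
Then, in the Deployment's pod template (excerpt):
      containers:
        - ...
          volumeMounts:
            - name: etc-overrides
              mountPath: /etc/myapp.conf   # only this single file is overlaid
              subPath: myapp.conf
      volumes:
        - name: etc-overrides
          configMap:
            name: app-etc-config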
My guess would be that mounts in containers work exactly the same way as mounts in an operating system: if you mount something at /etc, you simply cover whatever was there before, and if you mount an empty EFS volume, you get an empty folder.
I tried what you tried in Docker and (to my surprise) it works the way you describe. That's likely because Docker volumes are technologically something different from Kubernetes volume claims (especially ones backed by EFS); this explains it: Docker mount to folder overriding content
tl;dr: if the Docker volume is empty, the image's files are mirrored into it.
I personally don't think you can achieve what you're trying to with k8s and EFS.
You might be interested in "nsfdsuds": it establishes an overlayfs for a Kubernetes container in which the writable top layer of the overlayfs can be on a PersistentVolume of your choice.
https://github.com/Sha0/nsfdsuds

Placing Files In A Kubernetes Persistent Volume Store On GKE

I am trying to run a Factorio game server on Kubernetes (hosted on GKE).
I have set up a StatefulSet with a PersistentVolumeClaim and mounted it in the game server's save directory.
I would like to upload a save file from my local computer to this Persistent Volume Claim so I can access the save on the game server.
What would be the best way to upload a file to this Persistent Volume Claim?
I have thought of two ways, but I'm not sure which is best or if either is a good idea:
Restore a disk snapshot with the files I want to the GCP disk which backs this Persistent Volume Claim
Mount the Persistent Volume Claim on an FTP container, FTP the files up, and then mount it on the game container
It turns out there is a much simpler way: the kubectl cp command.
This command lets you copy data from your computer to a container running on your cluster.
In my case I ran:
kubectl cp ~/.factorio/saves/k8s-test.zip factorio/factorio-0:/factorio/saves/
This copied the k8s-test.zip file on my computer to /factorio/saves/k8s-test.zip in a container running on my cluster.
See kubectl cp -h for more detailed usage information and examples.
You can create a data folder on your Google Cloud VM:
gcloud compute ssh <instance-name> --zone <zone>
mkdir data
Then create PersistentVolume:
kubectl create -f hostpth-pv.yml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-local
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/<user-name>/data"
Create PersistentVolumeClaim:
kubectl create -f hostpath-pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath-pvc
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: local
Then copy the file to the GCloud VM:
gcloud compute scp <your file> <instance-name>:<destination-path> --zone <zone>
And finally, mount this PersistentVolumeClaim in your pod:
...
volumeMounts:
  - name: hostpath-pvc
    mountPath: <your-path>
    subPath: hostpath-pvc
volumes:
  - name: hostpath-pvc
    persistentVolumeClaim:
      claimName: hostpath-pvc
And copy the file into the data folder on the GCloud VM:
gcloud compute scp <your file> <instance-name>:/home/<user-name>/data/hostpath-pvc --zone <zone>
You can just use Google Cloud Storage (https://cloud.google.com/storage/) since you're looking at serving a few files.
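For example (the bucket name is a placeholder), the save file can be uploaded with gsutil:
gsutil cp ~/.factorio/saves/k8s-test.zip gs://your-factorio-saves/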
The other option is to use PersistentVolumeClaims. This will work better if you're not updating the files frequently, because you will need to detach the disk from the Pods (so you need to delete the Pods) while doing this.
You can create a GCE persistent disk, attach it to a GCE VM, put files on it, then delete the VM and bring the PD to Kubernetes as a PersistentVolumeClaim. There's documentation on how to do that: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#using_preexsiting_persistent_disks_as_persistentvolumes

How can I map a docker volume to a google compute engine persistent disk

I've gone through the steps to create a persistent disk in Google Compute Engine and attach it to a running VM instance. I've also created a Docker image with a VOLUME directive. It runs fine locally; in the docker run command, I can pass a -v option to mount a host directory as the volume. I thought there would be a similar command in kubectl, but I don't see one. How can I mount my persistent disk as the Docker volume?
In your pod spec, you may specify a Kubernetes gcePersistentDisk volume (the spec.volumes field) and where to mount that volume into containers (the spec.containers.volumeMounts field). Here's an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: gcr.io/google_containers/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      # This GCE PD must already exist.
      gcePersistentDisk:
        pdName: my-data-disk
        fsType: ext4
Read more about Kubernetes volumes: http://kubernetes.io/docs/user-guide/volumes
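Note that the GCE persistent disk referenced by pdName must already exist; a minimal sketch of creating it (disk name, size, and zone are placeholders):
gcloud compute disks create my-data-disk --size=10GB --zone=us-central1-a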
