Azure Files with Terraform (using CSI driver) - azure-aks

I want to share persistent volumes between many AKS pods using the Azure Files CSI driver.
Is this possible with Terraform?
Does anyone have documentation on how to implement this?
(I see that the CSI driver is in preview on Azure.)
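A minimal sketch of what this could look like, assuming a recent AKS version where the Azure Files CSI driver ships with the built-in `azurefile-csi` storage class, and assuming the Terraform `kubernetes` provider is already configured against the cluster. The claim name and size below are placeholders:

```hcl
# Sketch: a ReadWriteMany claim backed by the Azure Files CSI driver,
# declared via the Terraform kubernetes provider. "azurefile-csi" is
# the built-in storage class on recent AKS versions; name and size
# are placeholders.
resource "kubernetes_persistent_volume_claim" "shared" {
  metadata {
    name = "shared-azurefile"
  }

  spec {
    access_modes       = ["ReadWriteMany"]
    storage_class_name = "azurefile-csi"

    resources {
      requests = {
        storage = "100Gi"
      }
    }
  }
}
```

With `ReadWriteMany`, any number of pods can reference this claim and share the same Azure Files share.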

Related

Giving all containers in a Kubernetes service access to the same shared file system

New to Docker/K8s. I need all containers (across all pods) in my K8s cluster to mount a shared file system, so that they can all read from and write to files on it. The file system needs to reside inside of -- or at the very least be accessible to -- all containers in the K8s cluster.
As far as I can tell, I have two options:
1. I'm guessing K8s offers some type of persistent, durable block/volume storage facility? Maybe a PV or PVC?
2. Maybe launch a Dockerized Samba container and give my other containers access to it somehow?
Does K8s offer this type of shared file system capability, or do I need to do something like a Dockerized Samba?
NFS is a common solution for providing file-sharing facilities. Here's a good explanation with an example to begin with. Samba can be used if your file server is Windows-based.
You are right: you can use a file system on the backend with the access mode ReadWriteMany.
ReadWriteMany allows many pods to mount a single PVC and write to it.
You can also use NFS, as suggested by gohm'c; for NFS you can set up GlusterFS or MinIO containers.
Read more about the ReadWriteMany access mode: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
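To illustrate the ReadWriteMany approach, here is a minimal sketch assuming a storage class capable of RWX (such as `azurefile` on AKS or an NFS-backed class) exists in the cluster; the claim and pod names are placeholders:

```yaml
# Hypothetical PVC with ReadWriteMany; several pods can mount it at once.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile   # any RWX-capable class works here
  resources:
    requests:
      storage: 10Gi
---
# Any pod referencing the claim sees the same shared file system.
apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello >> /data/log.txt; sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-data
```

Every pod that mounts `shared-data` reads and writes the same files under its mount path.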

update container values at the runtime in Azure Kubernetes cluster

I have a Docker image from my ACR running successfully in my AKS cluster.
The Docker image has a configuration file with all credentials saved in it.
I want to change the values of the .config file at the time the Kubernetes deployment is created.
I am using a Helm chart for deployment.
Do I need to mention these values in the values.yaml file?
How do I specify which file inside the application needs to be updated with values from Azure Key Vault?
How can I achieve this?
You could use Secrets Store CSI Driver for Kubernetes to use secrets from Azure Key Vault.
When you have an AKS cluster and an ACR, why do you need to specify credentials? You can assign the role "AcrPull" to the AKS identity, and then AKS is allowed to pull images from your ACR.
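A sketch of the Secrets Store CSI Driver approach mentioned above, assuming the Azure Key Vault provider is installed in the cluster; the vault name, tenant ID, and secret name below are placeholders:

```yaml
# Sketch: a SecretProviderClass for the Azure Key Vault provider.
# keyvaultName, tenantId, and objectName are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    keyvaultName: "my-keyvault"
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
```

A pod then mounts the secrets as files through a CSI volume, so the application can read them at startup instead of baking credentials into the image:

```yaml
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```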

how to Access to a network shared folder from docker

I am using Jenkins with Docker, and I am looking to access a Windows shared folder from my Docker node. Is there a way to do this, please?
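One common approach (a sketch, not specific to Jenkins) is to create a Docker volume backed by the CIFS/SMB share using the built-in `local` driver; the server, share, and credentials below are placeholders:

```shell
# Sketch: mount a Windows (SMB/CIFS) share as a Docker volume.
# Server name, share name, and credentials are placeholders.
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//fileserver.example.com/shared \
  --opt o=username=jenkins,password=secret,vers=3.0 \
  jenkins-share

# Containers can then mount it like any other volume:
docker run --rm -v jenkins-share:/mnt/shared busybox ls /mnt/shared
```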

How to Convert NFS into a Storage Class in kubernetes

I work in a media organisation where we deploy all our applications on monolithic VMs, but now we want to move to Kubernetes. We have a major problem: we have almost 40+ NFS servers from which we consume data in terabytes.
The main problem is how to read all this data from containers.
The solutions we tried:
1. Creating a Persistent Volume and Persistent Volume Claim for the NFS, which according to us is not feasible: as the data grows, we would have to create a new PV and PVC and a new deployment.
2. Mounting volumes on Kubernetes nodes; if we do this, there would be no difference between Kubernetes and VMs.
3. Adding Docker volumes to containers; we were able to add the volume, but we could not see the data in the container.
How can we turn the existing NFS servers into a storage class and use it, or how can we mount all 40+ NFS servers on pods?
It sounds like you need some form of object-storage or block-storage platform that manages the disks and automatically provisions them for you.
You could use something like Rook to deploy Ceph into your cluster.
This enables disk management in a much friendlier way and helps automatically provision the NFS disks into your cluster.
Take a look at this: https://docs.ceph.com/docs/mimic/radosgw/nfs/
There is also the option of creating your own implementation using CRDs to trigger PV/PVC creation on certain actions/disks being mounted on your servers.
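Another route worth noting is a dynamic-provisioning sketch, assuming the Kubernetes NFS CSI driver (csi-driver-nfs) is installed in the cluster: one StorageClass can be defined per NFS server, and PVCs against that class get subdirectories provisioned automatically. The server and export path below are placeholders:

```yaml
# Sketch: a StorageClass backed by the NFS CSI driver
# (https://github.com/kubernetes-csi/csi-driver-nfs must be installed).
# One class like this can be created per NFS server.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-server-01
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs01.example.com   # placeholder NFS server
  share: /exports/data        # placeholder export path
reclaimPolicy: Retain
volumeBindingMode: Immediate
```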

Mounting a Windows Share into Kubernetes pods

We are migrating some Windows components (.NET 4.5) to Linux containers (.NET Core). The existing system is a file processor which watches a shared location and processes files from there. We can't force the existing system to start dropping files at some other location, so the new containerised application has to access the same shared location. Can a Windows share be seen from Docker containers? If yes, how can I use a Kubernetes deployment file to achieve it?
Please advise.
This does not work by default, but yes, you can do it. Here is the project which allows you to do this in a proper way. After setting up the volume driver, you can use a Windows share as a PersistentVolume in your Kubernetes cluster.
Update:
You can also use a Windows share as an NFS volume in Kubernetes. Here you can find some examples of using NFS in Kubernetes.
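A sketch of one way this can look today, assuming the SMB CSI driver (csi-driver-smb) is installed in the cluster; the share path and the credentials secret below are placeholders:

```yaml
# Sketch: a PersistentVolume for a Windows share via the SMB CSI driver
# (https://github.com/kubernetes-csi/csi-driver-smb). The share path and
# the credentials secret are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: windows-share
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: windows-share   # unique ID for this volume
    volumeAttributes:
      source: //fileserver.example.com/drop
    nodeStageSecretRef:
      name: smb-creds             # secret with username/password keys
      namespace: default
```

A PVC bound to this PV can then be mounted in the deployment like any other volume, so the Linux containers see the same shared location the Windows components use.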
