Creating Static Persistent Volume in AKS using credentials in Azure Key Vault

I am currently following this guide to create a PV using an existing Azure file share: https://learn.microsoft.com/en-us/azure/aks/azure-files-volume
The method is to store the storage account name and access key in a Kubernetes secret (azure-secret) and then reference it in the csi section of the YAML file, as below.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    readOnly: false
    volumeHandle: unique-volumeid  # make sure this volumeid is unique in the cluster
    volumeAttributes:
      resourceGroup: EXISTING_RESOURCE_GROUP_NAME  # optional, only set this when the storage account is not in the same resource group as the agent nodes
      shareName: aksshare
    nodeStageSecretRef:
      name: azure-secret
      namespace: default
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=0
    - gid=0
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl
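For context, the azure-secret referenced in nodeStageSecretRef is created in that guide from the storage account name and access key, along these lines (the shell variables here are placeholders):
kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname=$STORAGE_ACCOUNT_NAME \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY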
However, for technical-risk and security reasons, I do not want to put the storage account access key in the Kubernetes namespace. Instead, I want to fetch the access key from Azure Key Vault and use it to mount the persistent volume to the Azure file share.
I have done some research and testing, but to no avail. I would appreciate help on this, thanks!

We can fetch the access key from Key Vault to mount the persistent volume to the file share by using a SecretProviderClass instead of storing the storage account access key in a Kubernetes secret.
I have created an RG and an AKS cluster. While creating the AKS cluster, we have to enable the CSI drivers (the Azure Key Vault provider for the Secrets Store CSI Driver).
I have created a Key Vault to secure our secrets:
Resources > Key Vault > Create > RG, kvName > Review & Create
I have created the secrets using the Key Vault:
In the KV, go to Secrets > click Generate/Import > give the name and the secret value to create the secret and its value (password).
Verify that your virtual machine scale set has its own system-assigned identity; if not, we have to enable it.
I have given access-policy permissions to read the Key Vault and its contents:
Go to Key Vault > Access policies > Create >
Permissions > select the secret permissions
Principal > select the ID
Application > select the application ID > Create
I have created the SecretProviderClass; through this class, the secrets will be fetched from the Key Vault.
secretproviderclass (check the file here)
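Since the file itself isn't shown here, below is a minimal sketch of what such a SecretProviderClass can look like, assuming the scale set's system-assigned identity is used; the Key Vault name, tenant ID, and secret name are placeholders you must replace:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-provider          # referenced later from the pod's volume
  namespace: default
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"   # use the scale set's system-assigned identity
    keyvaultName: "<your-keyvault-name>"
    tenantId: "<your-tenant-id>"
    objects: |
      array:
        - |
          objectName: <your-secret-name>
          objectType: secret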
Apply the below command to deploy the SecretProviderClass:
kubectl apply -f filename.yaml
Deploying the provider class alone does not create the secrets; for that, we have to create the pod, which will mount the volume by utilizing the CSI driver.
pod.yaml (check this link)
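Again, as a sketch (the image and names are illustrative), the pod mounts a Secrets Store CSI volume and points it at the SecretProviderClass above:
kind: Pod
apiVersion: v1
metadata:
  name: busybox-secrets-store
spec:
  containers:
    - name: busybox
      image: registry.k8s.io/e2e-test-images/busybox:1.29-4
      command: ["/bin/sleep", "10000"]
      volumeMounts:
        - name: secrets-store-inline
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-kv-provider  # name from the SecretProviderClass above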
Deploy the pod using the below command:
kubectl apply -f file.yaml
After the pod starts, the mounted content will be available at the volume path that we specified in the YAML file:
kubectl exec <pod_name> -- ls /mnt/secrets-store/
kubectl exec <pod_name> -- cat /mnt/secrets-store/<secret_name>
By using the above commands, we can list the secrets and read their values.

Related

Is there a way to configure docker hub pro user in kubernetes?

We've just bought a Docker Hub Pro user so that we don't have to worry about pull rate limits.
Now I'm having a problem trying to set the Docker Hub Pro user. Is there a way to set the credentials for hub.docker.com globally?
In the Kubernetes docs I found the following article: Kubernetes | Configure nodes for private registry
On every node I executed a docker login with the credentials, copied the config.json to /var/lib/kubelet and restarted kubelet. But I'm still getting an ErrImagePull because of those rate limits.
I've copied the config.json to the following places:
/var/lib/kubelet/config.json
/var/lib/kubelet/.dockercfg
/root/.docker/config.json
/.docker/config.json
There is an option to use a secret for authentication. The problem is that we would need to edit hundreds of statefulsets, deployments and daemonsets. So it would be great to set the Docker user globally.
Here's the config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "[redacted]"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.13 (linux)"
  }
}
To check whether it actually logs in with the user, I've created an access token in my account. There I can see the last login with said token; the last login was when I executed the docker login command. So the images that I try to pull aren't using those credentials.
Any ideas?
Thank you!
Kubernetes implements this using image pull secrets. This doc does a better job of walking through the process.
Using the Docker config.json:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Or you can pass the settings directly:
kubectl create secret docker-registry <name> \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
Then use those secrets in your pod definitions:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
Or to use the secret at a user level (Add image pull secret to service account)
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Open the sa.yaml file, delete the line with the key resourceVersion, add the imagePullSecrets: lines, and save.
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-11-22T21:41:53Z"
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: afad07eb-f58e-4012-9ccf-0ac9762981d5
secrets:
  - name: default-token-gkmp7
imagePullSecrets:
  - name: regcred
Finally, replace the service account with the updated sa.yaml file:
kubectl replace serviceaccount default -f ./sa.yaml
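If you prefer not to round-trip through a file, the same change can be made with a one-line patch (this kubectl patch form is documented in the same Kubernetes doc):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'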
We use docker-registry as a pull-through proxy cache in our Kubernetes clusters; Docker Hub credentials may be set in its configuration. The Docker daemons on the Kubernetes nodes are configured to use the proxy by setting registry-mirrors in /etc/docker/daemon.json.
This way, you do not need to modify any Kubernetes manifest to include pull secrets. Our complete setup is described in a blog post.
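For illustration, the relevant /etc/docker/daemon.json entry could look like this (the mirror URL is a placeholder for wherever your proxy cache runs):
{
  "registry-mirrors": ["https://mirror.example.com"]
}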
I ran into the same problem as the OP. It turns out that placing Docker credential files for the kubelet works on Kubernetes version 1.18 or higher. I have tested this and can confirm that kubelet 1.18 picks up the config.json placed in /var/lib/kubelet correctly and authenticates against the Docker registry.

connection strings for azure cache for redis to deploy in aks cluster

Can anyone provide me a sample YAML to integrate connection strings for Azure Cache for Redis in one of the pod containers to deploy in an AKS cluster?
The easiest way to provide external configuration to a pod running in Kubernetes is to use ConfigMaps:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-literal-values
This will allow you to create a configuration file that can be injected into your pods at runtime.
From this ConfigMap you will then inject the value using either an env var or a file mount.
Here's an example of how to inject the ConfigMap value using an env var:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
If you are interested in having your ConfigMap content loaded as a volume, so that your pod can read its configuration from a file, have a look at how to mount the ConfigMap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume
As far as the Azure Cache for Redis connection string goes, you will find it on the Access keys tab.
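As a rough sketch of what this looks like end to end (all names, the image, and the connection-string format are placeholders, not values from Azure):
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  REDIS_CONNECTION_STRING: "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False"
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:v1           # placeholder image
      env:
        - name: REDIS_CONNECTION_STRING
          valueFrom:
            configMapKeyRef:
              name: redis-config
              key: REDIS_CONNECTION_STRING
Note that since the connection string embeds the access key, a Secret referenced via secretKeyRef would be the more appropriate resource in practice.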

Kubernetes & Gitlab: How to store password for private registry?

I want to run my application that is hosted in a private container registry on a Kubernetes cluster. I followed the instructions here and created a secret like this:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> \
--docker-username=<your-name> \
--docker-password=<your-pword> \
--docker-email=<your-email>
which is used in my deployment like this:
containers:
  - image: registry.gitlab.com/xxxxx/xxxx
    name: dockerdemo
    resources: {}
imagePullSecrets:
  - name: regcred
K8s is now able to pull the image from my private registry. Anyhow, I don't feel comfortable that my user and password are stored in plain text in the cluster. Is there a better/more secure way to give the K8s cluster access to the registry, maybe by a token?
Since I am using GitLab, the solution for me now is not to store my user credentials in Kubernetes. Instead, I am using a deploy token that can be revoked at any time and that only has access to the container registry.
The following steps are necessary here:
Open Gitlab and go to your project
Settings > Repository > Deploy Tokens
Create a token with scope read_registry
Create secret in K8S: kubectl create secret docker-registry regcred --docker-server=registry.gitlab.com --docker-username=<token_username> --docker-password=<token>
Thank you @Jonas for your links, but this solution is what I was looking for.
Anyhow I don't feel comfortable that my user and password are stored in plain text in the cluster. Is there a better/more secure way to give the K8s cluster access to the registry maybe by a token?
See Encrypting Secret Data at Rest for how to ensure that your Secrets are encrypted in etcd.
Alternatively, you can consider using Vault to store secrets. See e.g. How Monzo bank's security team handles secrets.

How to create "PersistentVolumeClaim" in Kubernetes on "Docker for windows"

In the "Juju" installation of kubernetes in Vsphere, we create pvc as follows,
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi
with the storageClassName as "fast". What is the storage class we need to create a "PersistentVolumeClaim" in a "Docker for windows" installation?
A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems.
You can create several StorageClasses that fit your needs by referring to the vSphere examples in the official documentation:
vSphere
Create a StorageClass with a user-specified disk format.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
diskformat: can be thin, zeroedthick or eagerzeroedthick. Default: "thin".
Create a StorageClass with a disk format on a user-specified datastore.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: VSANDatastore
datastore: the user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the storage class, which in this case is VSANDatastore. This field is optional. If the datastore is not specified, then the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.
Storage Policy Management inside Kubernetes
Using an existing vCenter SPBM policy
One of the most important features of vSphere for storage management is policy-based management. Storage Policy Based Management (SPBM) is a storage policy framework that provides a single unified control plane across a broad range of data services and storage solutions. SPBM enables vSphere administrators to overcome upfront storage provisioning challenges, such as capacity planning, differentiated service levels and managing capacity headroom.
The SPBM policies can be specified in the StorageClass using the storagePolicyName parameter.
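For illustration, a sketch of such a StorageClass, assuming a hypothetical SPBM policy named gold already exists in vCenter:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-policy
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: gold   # name of an existing SPBM policy (assumed)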
Virtual SAN policy support inside Kubernetes
vSphere Infrastructure (VI) admins will have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a persistent volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
You can see Storage Policy Based Management for dynamic provisioning of volumes for more details on how to use storage policies for persistent volume management.
There are a few vSphere examples which you can try out for persistent volume management inside Kubernetes for vSphere.
I think I found the answer. kubectl get storageclass gives output as follows:
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   22h
Then we can use 'hostpath' as the value for 'storageClassName'.
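So the claim from the question, adapted for Docker for Windows, would just swap the class name (everything else unchanged):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath   # the default class shipped with Docker for Windows
  resources:
    requests:
      storage: 1Gi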

Does Kubernetes kubelet support Docker Credential Stores for private registries?

Docker has a mechanism for retrieving Docker registry passwords from a remote store, instead of just storing them in a config file; this mechanism is called a Credentials Store. It has a similar mechanism, used to retrieve a password for a specific registry, called a Credential Helper.
Basically, it involves defining a value in ~/.docker/config.json that is interpreted as the name of an executable.
{
  "credsStore": "osxkeychain"
}
The value of the credsStore key has the prefix docker-credential- prepended to it, and if that executable (e.g. docker-credential-osxkeychain) exists on the path, then it will be executed and is expected to echo the username and password to stdout, which Docker will use to log in to a private registry. The idea is that the executable reaches out to a store and retrieves your password for you, so you don't have to have lots of files lying around in your cluster with your username/password encoded in them.
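To make the contract concrete, here is a minimal sketch of the protocol such a helper implements; docker-credential-example is a hypothetical name, and a real helper would query a secure store rather than echoing placeholders:
#!/bin/sh
# docker-credential-example: Docker invokes it as
#   echo "https://index.docker.io/v1/" | docker-credential-example get
# and expects the credentials back as JSON on stdout.
read server_url
echo "{\"ServerURL\": \"${server_url}\", \"Username\": \"<user>\", \"Secret\": \"<password>\"}"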
I can't get a Kubernetes kubelet to make use of this credentials store. It seems to just ignore it, and when Kubernetes attempts to download from a private registry, I get a "no basic auth credentials" error. If I just have a config.json with the username/password in it, then kubelet works OK.
Does Kubernetes support Docker credential stores/credential helpers and if so, how do I get them to work?
For reference, kubelet is running through systemd, the credential store executable is on the path and the config.json file is being read.
As of the moment of writing, Kubernetes v1.14 does not support credential helpers, as per the official docs Configuring Nodes to Authenticate to a Private Registry:
Note: Kubernetes as of now only supports the auths and HttpHeaders sections of the Docker config. This means credential helpers (credHelpers or credsStore) are not supported.
Yes, Kubernetes has the same mechanism, called secrets, but with extended functionality, and it includes a specific secret type called docker-registry. You can create your specific secret with credentials for a Docker registry:
$ kubectl create secret docker-registry myregistrykey \
--docker-server=DOCKER_REGISTRY_SERVER \
--docker-username=DOCKER_USER \
--docker-password=DOCKER_PASSWORD \
--docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
and use it:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
