I am fairly new to AKS deployments that use volume mounts. I want to create a pod in AKS from an image; that image needs a config.yaml file mounted into it (I already have the file, and it needs to be passed to the container for it to run successfully).
Below is the docker command that works on my local machine:
docker run -v <Absolute_path_of_config.yaml>:/config.yaml image:tag
I want to achieve the same thing in AKS. When I tried to deploy it using an Azure Files mount (with a PersistentVolumeClaim), the volume gets attached, but the question now is how to pass the config.yaml file to that pod. I tried uploading config.yaml to the Azure file share volume that is attached in the pod deployment, without any success.
Below is the pod manifest that I used:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: image:tag
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 1Gi
      volumeMounts:
        - mountPath: "/config.yaml"
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: my-azurefile-storage
I need help with how to use that local config.yaml file in the AKS deployment so the image can run properly.
Thanks in advance.
Create a Kubernetes Secret from the config.yaml file:
kubectl create secret generic config-yaml --from-file=config.yaml
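Optionally, you can verify that the file landed in the Secret before mounting it (an extra check, not required for the mount to work):
kubectl describe secret config-yaml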
Mount it as a volume in the pod.
apiVersion: v1
kind: Pod
metadata:
  name: config
spec:
  containers:
    - name: config
      image: alpine
      command:
        - cat
      resources: {}
      tty: true
      volumeMounts:
        - name: config
          mountPath: /config.yaml
          subPath: config.yaml
  volumes:
    - name: config
      secret:
        secretName: config-yaml
Exec into the pod and view the file:
kubectl exec -it config sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ls
bin dev home media opt root sbin sys usr
config.yaml etc lib mnt proc run srv tmp var
/ # cat config.yaml
---
apiUrl: "https://my.api.com/api/v1"
username: admin
password: password
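If config.yaml does not contain anything sensitive, the same mount pattern works with a ConfigMap instead of a Secret. A minimal sketch, reusing the same object name purely for illustration:
kubectl create configmap config-yaml --from-file=config.yaml
and in the pod spec:
  volumes:
    - name: config
      configMap:
        name: config-yaml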
Related
I am running a Dask cluster and a Jupyter notebook server on cloud resources using Kubernetes and Helm.
I am using a yaml file for the Dask cluster and Jupyter, initially taken from https://docs.dask.org/en/latest/setup/kubernetes-helm.html:
apiVersion: v1
kind: Pod
worker:
  replicas: 2  # number of workers
  resources:
    limits:
      cpu: 2
      memory: 2G
    requests:
      cpu: 2
      memory: 2G
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade

# We want to keep the same packages on the workers and jupyter environments
jupyter:
  enabled: true
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
  resources:
    limits:
      cpu: 1
      memory: 2G
    requests:
      cpu: 1
      memory: 2G
and I am using another yaml file to create the storage locally:
# CREATE A PERSISTENT VOLUME CLAIM // attached to our pod config
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dask-cluster-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce  # can be used by a single node; ReadOnlyMany: for multiple nodes; ReadWriteMany: read/written to/by many nodes
  resources:
    requests:
      storage: 2Gi  # storage capacity
I would like to add a persistent volume claim to the first yaml file, but I couldn't figure out where to add the volumes and volumeMounts.
If you have an idea, please share it, thank you.
I started by creating a PVC with the following YAML file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pdask-cluster-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce  # can be used by a single node; ReadOnlyMany: for multiple nodes; ReadWriteMany: read/written to/by many nodes
  resources:  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
    requests:
      storage: 2Gi
which I applied in bash:
kubectl apply -f Dask-Persistent-Volume-Claim.yaml
#persistentvolumeclaim/pdask-cluster-persistent-volume-claim created
I checked the creation of the persistent volume:
kubectl get pv
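The claim itself can also be inspected directly (an optional check, not part of the original steps); depending on the storage class it may show as Pending until a pod actually uses it:
kubectl get pvc pdask-cluster-persistent-volume-claim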
I made major changes to the Dask cluster YAML: I added the volumes and volumeMounts so that I read/write from a directory (/save_data) on the persistent volume created previously, and I set the serviceType to LoadBalancer with a port:
apiVersion: v1
kind: Pod
scheduler:
  name: scheduler
  enabled: true
  image:
    repository: "daskdev/dask"
    tag: 2021.8.1
    pullPolicy: IfNotPresent
  replicas: 1  # (should always be 1)
  serviceType: "LoadBalancer"  # Scheduler service type. Set to `LoadBalancer` to expose outside of your cluster.
  # serviceType: "NodePort"
  # serviceType: "ClusterIP"
  # loadBalancerIP: null  # Some cloud providers allow you to specify the loadBalancerIP when using the `LoadBalancer` service type. If your cloud does not support it this option will be ignored.
  servicePort: 8786  # Scheduler service internal port.

# DASK WORKERS
worker:
  name: worker  # Dask worker name.
  image:
    repository: "daskdev/dask"  # Container image repository.
    tag: 2021.8.1  # Container image tag.
    pullPolicy: IfNotPresent  # Container image pull policy.
  dask_worker: "dask-worker"  # Dask worker command. E.g. `dask-cuda-worker` for GPU worker.
  replicas: 2
  resources:
    limits:
      cpu: 2
      memory: 2G
    requests:
      cpu: 2
      memory: 2G
  mounts:  # Worker Pod volumes and volume mounts. mounts.volumes follows the Kubernetes API v1 Volume spec; mounts.volumeMounts follows the Kubernetes API v1 VolumeMount spec.
    volumes:
      - name: dask-storage
        persistentVolumeClaim:
          claimName: pdask-cluster-persistent-volume-claim
    volumeMounts:
      - name: dask-storage
        mountPath: /save_data  # folder for storage
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade

# We want to keep the same packages on the worker and jupyter environments
jupyter:
  name: jupyter  # Jupyter name.
  enabled: true  # Enable/disable the bundled Jupyter notebook.
  # rbac: true  # Create RBAC service account and role to allow Jupyter pod to scale worker pods and access logs.
  image:
    repository: "daskdev/dask-notebook"  # Container image repository.
    tag: 2021.8.1  # Container image tag.
    pullPolicy: IfNotPresent  # Container image pull policy.
  replicas: 1  # Number of notebook servers.
  serviceType: "LoadBalancer"  # Scheduler service type. Set to `LoadBalancer` to expose outside of your cluster.
  # serviceType: "NodePort"
  # serviceType: "ClusterIP"
  servicePort: 80  # Jupyter service internal port.
  # This hash corresponds to the password 'dask'
  # password: 'sha1:aae8550c0a44:9507d45e087d5ee481a5ce9f4f16f37a0867318c'  # Password hash.
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
  resources:
    limits:
      cpu: 1
      memory: 2G
    requests:
      cpu: 1
      memory: 2G
  mounts:  # Jupyter Pod volumes and volume mounts. mounts.volumes follows the Kubernetes API v1 Volume spec; mounts.volumeMounts follows the Kubernetes API v1 VolumeMount spec.
    volumes:
      - name: dask-storage
        persistentVolumeClaim:
          claimName: pdask-cluster-persistent-volume-claim
    volumeMounts:
      - name: dask-storage
        mountPath: /save_data  # folder for storage
Then, I installed my Dask configuration using helm:
helm install my-config dask/dask -f values.yaml
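Note: if the dask chart repository is not already registered, it would typically be added first (assuming the standard chart location):
helm repo add dask https://helm.dask.org && helm repo update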
Finally, I accessed my Jupyter pod interactively:
kubectl exec -ti [pod-name] -- /bin/bash
to check that the mounted /save_data folder exists.
I am using Docker for Windows 2.3.0.4 (stable) backed by WSL2 on Windows 10 version 2004 and with Kubernetes support enabled.
I am trying to create the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: api0
spec:
  volumes:
    - name: "mongo-data"
      hostPath:
        path: "/c/wr/volumes/mongo/data"
  containers:
    - name: db
      image: mongo:3.6.19-xenial
      volumeMounts:
        - mountPath: "/data/db"
          name: "mongo-data"
      resources:
        limits:
          memory: "512Mi"
          cpu: "1"
      ports:
        - containerPort: 27017
I have an issue with the mongo-data volume: when the pod is created via kubectl apply -f api0.yml, the pod runs properly and the MongoDB collections are persisted after deleting and re-applying the pod.
But the C:\wr\volumes\mongo\data path that is mounted into the MongoDB container does not contain the data files and is always empty.
As I mentioned before, the state is persisted somewhere, just not in the specified path.
What am I missing?
I tried specifying the path with the following formats:
/c/wr/volumes/mongo/data
//c/wr/volumes/mongo/data
//////c/wr/volumes/mongo/data
/mnt/c/wr/volumes/mongo/data
And I even tried referencing the /opt/data path in the WSL filesystem, but the data files are never there.
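One thing that may be worth checking (an assumption on my part, not something stated above): with Docker Desktop's WSL2 backend, the Kubernetes node is the docker-desktop VM, so hostPath is resolved inside that VM rather than on the Windows filesystem directly, and Windows drives are typically reachable there under /run/desktop/mnt/host/<drive>. A sketch of that variant:
  volumes:
    - name: "mongo-data"
      hostPath:
        # assumed Docker Desktop (WSL2) mapping of C:\wr\volumes\mongo\data
        path: "/run/desktop/mnt/host/c/wr/volumes/mongo/data"
        type: DirectoryOrCreate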
I have a command to run docker:
docker run --name pre-core -itdp 8086:80 -v /opt/docker/datalook-pre-core:/usr/application app
In the above command, /opt/docker/datalook-pre-core is the host directory and /usr/application is the container directory. The purpose is that the container directory maps to the host directory, so when the container crashes, the host directory functions as storage and the data in it is preserved.
When I use Kubernetes to create a pod for this container, how do I write the pod.yaml file?
I guess it is something like the following:
apiVersion: v1
kind: Pod
metadata:
  name: app-ykt
  labels:
    app: app-ykt
    purpose: ykt_production
spec:
  containers:
    - name: app-ykt
      image: app
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      volumeMounts:
        - name: volumn-app-ykt
          mountPath: /usr/application
  volumes:
    - name: volumn-app-ykt
      ????
I do not know what exact properties I should write in the yaml in my case.
This would be a hostPath volume: https://kubernetes.io/docs/concepts/storage/volumes/
volumes:
  - name: volumn-app-ykt
    hostPath:
      # directory location on host
      path: /opt/docker/datalook-pre-core
      # this field is optional
      type: Directory
However, remember that while a container crash won't move things, other events can cause a pod to move to a different host, so you need to be prepared both to deal with cold caches and to clean up orphaned caches.
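Once the pod is running, the mount can be sanity-checked from outside the container (the pod name comes from the spec above):
kubectl exec app-ykt -- ls /usr/application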
I have a docker image with an option for a property file, like:
CMD java -jar /opt/test/test-service.war
--spring.config.location=file:/conf/application.properties
I use the -v volume mount in my docker run command as follows.
-v /usr/xyz/props/application.properties:/conf/application.properties
I am not sure how to achieve the same thing in Kubernetes.
I use minikube to run Kubernetes on my local Mac.
That should be a hostPath volume, as illustrated with this example pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: Directory
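One caveat, since the question mentions minikube (this part is my assumption, not from the answer above): the hostPath refers to the minikube VM's filesystem, so the properties file must first be made visible inside that VM, for example by mounting the local directory into it:
minikube mount /usr/xyz/props:/conf
With that in place, a hostPath volume with path: /conf mounted at /conf in the container makes /conf/application.properties available to the application.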
I've gone through the steps to create a persistent disk in Google Compute Engine and attach it to a running VM instance. I've also created a docker image with a VOLUME directive. It runs fine locally; in the docker run command, I can pass a -v option to mount a host directory as the volume. I thought there would be a similar command in kubectl, but I don't see one. How can I mount my persistent disk as the docker volume?
In your pod spec, you may specify a Kubernetes gcePersistentDisk volume (the spec.volumes field) and where to mount that volume into containers (the spec.containers.volumeMounts field). Here's an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: gcr.io/google_containers/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      # This GCE PD must already exist.
      gcePersistentDisk:
        pdName: my-data-disk
        fsType: ext4
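As the comment says, the GCE PD must already exist; as a sketch, such a disk could be created with gcloud (the size and zone below are assumptions, not taken from the question):
gcloud compute disks create my-data-disk --size=200GB --zone=us-central1-a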
Read more about Kubernetes volumes: http://kubernetes.io/docs/user-guide/volumes