Deploying bluespice-free in Kubernetes - docker

According to this source, I can store data in /my/data/folder by running:
docker run -d -p 80:80 -v {/my/data/folder}:/data bluespice/bluespice-free
I have created the following deployment, but I am not sure how to use a persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluespice
  namespace: default
  labels:
    app: bluespice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluespice
  template:
    metadata:
      labels:
        app: bluespice
    spec:
      containers:
        - name: bluespice
          image: bluespice/bluespice-free
          ports:
            - containerPort: 80
          env:
            - name: bs_url
              value: "https://bluespice.mycompany.local"
My PersistentVolumeClaim is named bluespice-pvc.
I have also deployed the pod without a persistent volume. Can I attach a persistent volume on the fly to keep the data?

If you want to mount a local directory, you don't have to deal with a PVC, since you can't force a specific host path in a PersistentVolumeClaim. For testing locally, you can use hostPath, as explained in the documentation:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some uses for a hostPath are:
running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
running cAdvisor in a container; use a hostPath of /sys
allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
In addition to the required path property, you can optionally specify a type for a hostPath volume.
hostPath configuration example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluespice
  namespace: default
  labels:
    app: bluespice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluespice
  template:
    metadata:
      labels:
        app: bluespice
    spec:
      containers:
        - image: bluespice/bluespice-free
          name: bluespice
          volumeMounts:
            - mountPath: /data
              name: bluespice-volume
      volumes:
        - name: bluespice-volume
          hostPath:
            # directory location on host
            path: /my/data/folder
            # this field is optional
            type: Directory
However, if you want to move to a production cluster, you should consider a more reliable option, since allowing HostPaths is insecure and not portable:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective.
For more information about PersistentVolumes, you can check the official Kubernetes documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
Therefore, I would recommend using a cloud solution like GCP or AWS, or at least an NFS share consumed directly from Kubernetes. Also check this topic on Stack Overflow.
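For completeness, if you do go the PVC route, a minimal sketch of your Deployment mounting the existing claim bluespice-pvc could look like this (assuming the claim is already bound and that the image keeps its state under /data, as the docker run example suggests):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluespice
  namespace: default
  labels:
    app: bluespice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluespice
  template:
    metadata:
      labels:
        app: bluespice
    spec:
      containers:
        - name: bluespice
          image: bluespice/bluespice-free
          ports:
            - containerPort: 80
          env:
            - name: bs_url
              value: "https://bluespice.mycompany.local"
          volumeMounts:
            - mountPath: /data
              name: bluespice-data
      volumes:
        - name: bluespice-data
          persistentVolumeClaim:
            # name of the PVC you already created
            claimName: bluespice-pvc
Applying this manifest rolls out a new Pod that mounts the claim.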
About your last question: it is not possible to attach a PersistentVolume to a running Pod on the fly; you have to update the Deployment spec (as above), which recreates the Pod.

Related

Expose volumes in Helm just like in docker

I'm creating an application that is using helm (v3.3.0) + k3s. A program in a container uses different configuration files. As of now there are just a few config files (that I added manually before building the image), but I'd like to be able to add them dynamically while the container is running and not lose them once the container/pod is dead. In docker I'd do that by exposing a folder like this:
docker run [image] -v /host/path:/container/path
Is there an equivalent for helm?
If not, how would you suggest solving this issue without giving up helm/k3s?
In Kubernetes (Helm is just a tool on top of it) you need to do two things to mount a host path inside a container:
spec:
  volumes:
    # 1. Declare a 'hostPath' volume under pod's 'volumes' key:
    - name: name-me
      hostPath:
        path: /path/on/host
  containers:
    - name: foo
      image: bar
      # 2. Mount the declared volume inside container using volume name
      volumeMounts:
        - name: name-me
          mountPath: /path/in/container
There are lots of other volume types and examples in the Kubernetes documentation.
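Since the question is about Helm specifically, you can also parameterize the host path through values. A hedged sketch of the same pod spec fragment, where hostPath is a hypothetical key you would add to values.yaml:
# templates/deployment.yaml (fragment of the pod spec)
    spec:
      volumes:
        - name: name-me
          hostPath:
            # path comes from values.yaml instead of being hard-coded
            path: {{ .Values.hostPath | quote }}
      containers:
        - name: foo
          image: bar
          volumeMounts:
            - name: name-me
              mountPath: /path/in/container
You could then override it per environment, e.g. helm upgrade --install myrelease ./chart --set hostPath=/host/path (release and chart names are placeholders).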
Kubernetes has a dedicated construct for holding configuration files, ConfigMaps. Helm in turn has support for Accessing Files Inside Templates which can help you copy them into ConfigMap objects. A minimal setup here would look like:
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.ini: |
{{ .Files.Get "config.ini" | indent 4 }}
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata: { ... }
spec:
  template:
    spec:
      volumes:
        - name: config-data
          configMap:
            name: my-config # matches ConfigMap metadata: { name: }
      containers:
        - volumeMounts:
            - name: config-data # matches volume name: in this file
              mountPath: /container/path
You can use Helm's templating constructs in various ways here: to dynamically construct the contents of the ConfigMap, to set an environment variable saying which file to use, and so on.
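For example, a hedged sketch that pulls every file under a config/ directory of the chart into the ConfigMap (the config/ folder and the my-config name are assumptions):
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
{{- range $path, $_ := .Files.Glob "config/*" }}
  {{ base $path }}: |
{{ $.Files.Get $path | indent 4 }}
{{- end }}
Any file you drop into config/ is then picked up on the next helm upgrade.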
Do not use hostPath volumes here. Since Kubernetes is designed as a clustered environment, you do not have much control over which node a given pod will run on; you would have to copy these config files to every node in the cluster and try to update them all when a file changed. That's a huge maintenance problem, especially if you don't have direct filesystem access to the nodes.

docker data volume vs kubernetes persistent storage

Docker Engine supports data volumes:
A Docker data volume persists after a container is deleted.
docker run and docker-compose both support it:
docker run --volume data_vol:/mount/point
docker-compose with named volumes using the top-level volumes key
Kubernetes also supports persistent volumes, but does it support the same concept of having a data volume, that is, a volume which resides within a container?
If Kubernetes supports a data volume (within a container):
I would appreciate any reference to the documentation (or an example).
Does it also support the migration of the data volume in the same manner it supports the migration of regular containers?
I found some related questions, but couldn't get the answer I am looking for.
What you are trying to say is:
If you do not specify a host path for a docker volume mount, docker dynamically provisions a path and persists it between restarts.
"that is, a volume which resides within a container"
The volume is generated outside of the container and mounted into it later.
For example:
# data_vol location is decided by docker installation
docker run --volume data_vol:/mount/point
# host path is explicitly given
docker run --volume /my/host/path:/mount/point
In Kubernetes terms, this is similar to dynamic provisioning. If you want dynamic provisioning, you need a StorageClass that matches your storage backend.
Please read https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/.
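As an illustration, a minimal sketch of a claim that relies on dynamic provisioning; the StorageClass name standard is an assumption, use whatever class your cluster actually provides:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dynamic-pvc
spec:
  # The named StorageClass must exist in your cluster; its provisioner
  # creates a matching PersistentVolume for you when the claim is made.
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi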
If you want to specify a host path, the following is an example. You can also achieve similar results by using NFS, block storage, etc.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
  labels:
    app: my-ss   # label needed so the claim's selector below can match this PV
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /home/user/my-vol
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-ss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-ss
  serviceName: my-svc
  template:
    metadata:
      labels:
        app: my-ss
    spec:
      containers:
        - image: ubuntu
          name: my-container
          volumeMounts:
            - mountPath: /my-vol
              name: my-vol
  volumeClaimTemplates:
    - metadata:
        name: my-vol
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        selector:
          matchLabels:
            app: my-ss

How to mount folder with files in kubernetes

I am running a docker image that has certain configuration files within it. I need to persist/mount the same folder to disk, as new files will get added later on. When I use a standard volume mount in Kubernetes, it mounts an empty directory without the initial configuration files. How do I make sure my initial files are copied to the volume while mounting?
        - mountPath: /tmp
          name: my-vol
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: my-vol
          persistentVolumeClaim:
            claimName: wso2-disk2
A possible solution could be to use the node's storage mounted into the containers (the easiest way), or a DFS solution like NFS, GlusterFS, and so on.
Another, recommended, way to achieve what you need is to use a persistent volume to share the same files between your containers.
Assuming you have a Kubernetes cluster that has only one node, and you want to share the path /mnt/data of your node with your pods (Source):
Create a PersistentVolume:
A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Create a PersistentVolumeClaim:
Pods use PersistentVolumeClaims to request physical storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Look at the PersistentVolumeClaim:
kubectl get pvc task-pv-claim
The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.
NAME            STATUS   VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO           manual         30s
Create a deployment with 2 replicas for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/mnt/data"
              name: task-pv-storage
Now you can check that, inside both containers, the path /mnt/data has the same files.
If you have a cluster with more than one node, I recommend that you consider the other types of persistent volumes or a DFS.
References:
Configure persistent volumes
Persistent volumes
Volume Types
The suggested way to provide configuration to your pod is to create a ConfigMap for your configuration files and mount it in your pod using volumes. This guide (https://kubernetes.io/docs/concepts/storage/volumes/#configmap) describes how to do that.
Another way is to create a persistent volume and persistent volume claim in your cluster, copy your configuration files to that path, and mount the persistent volume in your pod.
You can also copy your configuration to one of the nodes in your cluster and mount that path using hostPath, but this requires your pod to run on that same node, since the path is looked up on the node the pod is scheduled to. (Not a recommended approach.)
Create a ConfigMap of the folder you would like to mount; the following creates a ConfigMap consisting of all the files in your-folder:
kubectl create configmap your-config --from-file=your-folder/
Then mount it as a volume and you will have the initial files in your folder. Note that you will need to mount it using subPath, since you don't want it to overwrite everything else in the directory.
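A minimal sketch of that subPath mount, assuming the folder contained a file named app.conf (the file name and the Pod below are purely illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: nginx   # placeholder image
      volumeMounts:
        - name: config
          # only this one file is replaced; the rest of the directory is untouched
          mountPath: /etc/myapp/app.conf
          subPath: app.conf   # key inside the ConfigMap (= original file name)
  volumes:
    - name: config
      configMap:
        name: your-config
Note that files mounted via subPath are not updated automatically when the ConfigMap changes.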

Docker container does/doesn't work inside Kubernetes

I am a bit confused here. The image works as a normal docker container, but when it goes inside a pod it doesn't. So here is how I do it.
Dockerfile in my local to create the image and publish to docker registry
FROM alpine:3.7
COPY . /var/www/html
CMD tail -f /dev/null
Now if I just pull the image (after deleting the local one) and run it as a container, it works and I can see my files inside /var/www/html.
Now I want to use that inside my Kubernetes cluster.
Def: Minikube --vm-driver=none
I am running Kubernetes inside minikube with the none driver option, so it is a single-node cluster.
EDIT
I can see my data inside /var/www/html if I remove the volume mounts and claim from the deployment file.
Deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: app
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
      containers:
        - image: kingshukdeb/mycode
          name: pd-mycode
          resources: {}
          volumeMounts:
            - mountPath: /var/www/html
              name: claim-app-storage
      restartPolicy: Always
      volumes:
        - name: claim-app-storage
          persistentVolumeClaim:
            claimName: claim-app-nginx
status: {}
PVC file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: app-nginx1
  name: claim-app-nginx
spec:
  storageClassName: testmanual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
PV file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-nginx1
  labels:
    type: local
spec:
  storageClassName: testmanual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/volumes/app"
Now when I apply these files, it creates the pod, PV and PVC, and the PVC is bound to the PV. But if I go inside my container I don't see my files. The hostPath is /data/volumes/app. Any ideas will be appreciated.
When the PVC is bound to a pod, the volume is mounted at the location described in the pod/deployment YAML file, in your case mountPath: /var/www/html. That's why the files "baked into" the container image are not accessible (a simple explanation of why is here).
You can confirm this by exec-ing into the container with kubectl exec YOUR_POD -i -t -- /bin/sh and running mount | grep "/var/www/html".
Solution
You may solve this in many ways. It's best practice to keep your static data separate (i.e. in the PV) and to keep the container image as small and fast as possible.
If you transfer the files you want in the PV to your host's path /data/volumes/app, they will be accessible in your pod, and you can then build a new image that omits the COPY operation. This way, even if the pod crashes, the changes your app makes to the files will be saved.
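As an alternative to copying the files onto the node by hand (this is not part of the original answer, just a sketch), an init container can seed the still-empty volume from the image the first time the Pod starts; the image, paths and volume names are taken from the question:
    spec:
      initContainers:
        - name: seed-html
          image: kingshukdeb/mycode
          command:
            - sh
            - -c
            # only copy if the volume is still empty
            - 'if [ -z "$(ls -A /data)" ]; then cp -r /var/www/html/. /data/; fi'
          volumeMounts:
            - mountPath: /data   # the PV, mounted at a different path here
              name: claim-app-storage
      containers:
        - image: kingshukdeb/mycode
          name: pd-mycode
          volumeMounts:
            - mountPath: /var/www/html
              name: claim-app-storage
      volumes:
        - name: claim-app-storage
          persistentVolumeClaim:
            claimName: claim-app-nginx
The emptiness check keeps the init container from overwriting changes your application has already written to the volume; with runAsUser: 1000 the host directory also has to be writable by that UID.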
If the PV will be claimed by more than one pod, you need to change the accessModes as described here:
The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
In-depth explanation of Volumes in Kubernetes docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

On a Kubernetes Pod, what is the ConfigMap directory location?

Many applications require configuration via some combination of config files, command line arguments, and environment variables. These configuration artifacts should be decoupled from image content in order to keep containerized applications portable. The ConfigMap API resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs.
I am unable to find where ConfigMaps are saved. I know they are created; however, I can only read them via the minikube dashboard.
ConfigMaps in Kubernetes can be consumed in many different ways and mounting it as a volume is one of those ways.
You can choose where you would like to mount the ConfigMap on your Pod. Example from K8s documentation:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
Pod
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
  restartPolicy: Never
Note the volumes definition and the corresponding volumeMounts.
Other ways include:
Consumption via environment variables
Consumption via command-line arguments
Refer to the documentation for full examples.
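For instance, a minimal sketch of the environment-variable approach, reusing the special-config ConfigMap above (the Pod name and the variable name are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-env
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "echo $SPECIAL_HOW" ]
      env:
        - name: SPECIAL_HOW
          valueFrom:
            configMapKeyRef:
              name: special-config   # the ConfigMap defined above
              key: special.how       # the key whose value becomes the variable
  restartPolicy: Never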
