Kubernetes multiple identical app and database deployments with different config - docker

The dilemma: Deploy multiple app and database container pairs with identical docker image and code, but different config (different clients using subdomains).
What are some logical ways to approach this? It doesn't seem that Kubernetes has built-in support for this kind of setup.
Possible Approaches
Use a single app Service for all app deployments and a single database Service for all database deployments. Run a single Nginx static file service and deployment that serves static files from a volume shared between the app deployments (all clients use the same set of static files). Whenever a new deployment is needed, have a bash script copy the app and database .yaml deployment files, sed-replace the client name, point them at the correct ConfigMap (which is written manually, of course), and kubectl apply them. A main Nginx ingress handles incoming traffic and routes to the correct pod through the app deployment's Service.
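A minimal sketch of such a script, assuming a deployment template that uses a CLIENT_NAME placeholder (the file name, placeholder, and client name below are all hypothetical):

```shell
#!/bin/bash
set -e

client="acme"   # hypothetical client name; in the real script this would be "$1"

# Stand-in for the real app-template.yaml described above; CLIENT_NAME is the
# placeholder the script replaces.
cat > app-template.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: CLIENT_NAME-app
spec:
  template:
    spec:
      containers:
        - name: app
          envFrom:
            - configMapRef:
                name: CLIENT_NAME-config
EOF

# Render a per-client manifest from the template.
sed "s/CLIENT_NAME/${client}/g" app-template.yaml > "${client}-app.yaml"

# kubectl apply -f "${client}-app.yaml"   # requires a live cluster; left commented
```

The same substitution would be repeated for the database manifest; the manually written `<client>-config` ConfigMap must already exist before the apply.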
Similar to the above, except use a StatefulSet instead of separate deployments, and an init container to copy different configs to mounted volumes. The drawbacks are that you cannot delete an item in the middle of a StatefulSet (which would be the case if you no longer need a specific client's container), and that this seems like a very hacky approach.
Ideally, if a StatefulSet could use the downward API to dynamically choose a ConfigMap name based on the pod's ordinal index, that would resolve the issue (you would make your config files manually with the index in the name, and the right one would be selected automatically). Something like:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
envFrom:
  - configMapRef:
      name: $(POD_NAME)-config
However, that functionality isn't available in Kubernetes.

A templating engine like Helm can help with this. (I believe Kustomize, which ships with current Kubernetes, can do this too, but I'm much more familiar with Helm.) The basic idea is that you have a chart that contains the Kubernetes YAML files but can use a templating language (the Go text/template library) to dynamically fill in content.
In this setup generally you'd have Helm create both the ConfigMap and the matching Deployment; in the setup you describe you'd install it separately (a Helm release) for each tenant. Say the Nginx configurations were different enough that you wanted to store them in external files; the core parts of your chart would include
values.yaml (overridable configuration, helm install --set nginxConfig=bar.conf):
# nginxConfig specifies the name of the Nginx configuration
# file to embed.
nginxConfig: foo.conf
templates/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-config
data:
  nginx.conf: |-
{{ .Files.Get .Values.nginxConfig | indent 4 }}
templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-nginx
spec:
  ...
      volumes:
        - name: nginx-config
          configMap:
            name: {{ .Release.Name }}-{{ .Chart.Name }}-config
The {{ .Release.Name }}-{{ .Chart.Name }} is a typical convention that allows installing multiple copies of the chart in the same namespace; the first part is a name you give the helm install command and the second part is the name of the chart itself. You can also directly specify the ConfigMap content, referring to other .Values... settings from the values.yaml file, use the ConfigMap as environment variables instead of files, and so on.

While dynamic structural replacement isn't possible (plus or minus; see below for the whole story), I believe you were in the right ballpark with your initContainer: thought; you can use the serviceAccount to fetch the ConfigMap from the API in an initContainer: and then source that environment on startup in the main container:
initContainers:
  - command:
      - /bin/bash
      - -ec
      - |
        curl -o /whatever/env.sh \
          -H "Authorization: Bearer $(cat /var/run/secret/etc/etc)" \
          https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${POD_NS}/configmaps/${POD_NAME}-config
    volumeMounts:
      - name: cfg # etc etc
containers:
  - command:
      - /bin/bash
      - -ec
      - "source /whatever/env.sh; exec /usr/bin/my-program"
    volumeMounts:
      - name: cfg # etc etc
volumes:
  - name: cfg
    emptyDir: {}
Here the ConfigMap fetching is inline in the PodSpec, but if you had a docker container specialized for fetching ConfigMaps and serializing them into a format your main containers could consume, I wouldn't expect the actual solution to be nearly this verbose.
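For instance, the serialization step might look like this (a sketch assuming jq is available in the init container image; the sample response below is hand-written, not a real API reply):

```shell
# Stand-in for the JSON the curl above would fetch from the API (hand-written
# sample; field names follow the ConfigMap API shape).
cat > /tmp/cm.json <<'EOF'
{"kind": "ConfigMap", "data": {"DB_HOST": "db.client-a.svc", "DB_NAME": "client_a"}}
EOF

# Serialize .data into an env file the main container can `source`.
jq -r '.data | to_entries[] | "export \(.key)=\(.value)"' /tmp/cm.json > /tmp/env.sh
cat /tmp/env.sh
```

Values containing spaces or quotes would need extra escaping before being sourced; a specialized fetcher image would handle that.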
A separate and a lot more complicated (but perhaps elegant) approach is a Mutating Admission Webhook, and it looks like they have even recently formalized your very use case with Pod Presets, but it wasn't super clear from the documentation in which version that functionality first appeared, nor whether there are any apiserver flags one must twiddle to take advantage of it.

PodPreset was removed in v1.20. A more elegant solution to this problem, based on a Mutating Admission Webhook, is now available: https://github.com/spoditor/spoditor
Essentially, it uses a custom annotation on the PodSpec template, like:
annotations:
  spoditor.io/mount-volume: |
    {
      "volumes": [
        {
          "name": "my-volume",
          "secret": {
            "secretName": "my-secret"
          }
        }
      ],
      "containers": [
        {
          "name": "nginx",
          "volumeMounts": [
            {
              "name": "my-volume",
              "mountPath": "/etc/secrets/my-volume"
            }
          ]
        }
      ]
    }
Now, the nginx container in each Pod of the StatefulSet will try to mount its own dedicated secret, following the pattern my-secret-{pod ordinal}.
You will just need to make sure my-secret-0, my-secret-1, and so on exist in the same namespace as the StatefulSet.
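For example, the secret consumed by the first pod might look like this (name, namespace, and contents are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-0          # matched to pod ordinal 0
  namespace: my-statefulset-namespace   # same namespace as the StatefulSet
type: Opaque
stringData:
  config.yaml: |
    client: client-zero
```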
There are more advanced usages of the annotation in the project's documentation.

Related

Is there a way to start an application, in a docker container, during deployment to Kubernetes, using Helm?

I am trying to avoid having to create three different images for separate deployment environments.
Some context on our current ci/cd pipeline:
For the CI portion, we build our app into a docker container and then submit that container to a security scan. Once the security scan is successful, the container gets put into a private container repository.
For the CD portion, using helm charts, we pull the container from the repository and then deploy to a company managed Kubernetes cluster.
There was an ask and the solution was to use a piece of software in the container. And for some reason (I'm the devops person and not the software engineer) the software needs environment variables (specific to the deployment environment) passed to it when it starts. How would we be able to start and pass environment variables to this software at deployment?
I could just create three different images with the environment variables but I feel like that is an anti-pattern. It takes away from the flexibility of having one image that can be deployed to different environments.
Can any one point me to resources that can accomplish starting an application with specific environment variables using Helm? I've looked but did not find a solution or anything that pointed me to the right direction. As a plan b, I'll just create three different images but I want to make sure that there is not a better way.
Depending on the container orchestration, you can pass the environment variables in different ways:
Plain docker:
docker run -e MY_VAR=MY_VAL <image>
Docker compose:
version: '3'
services:
  app:
    image: '<image>'
    environment:
      - MY_VAR=my-value
Check the docker-compose docs.
Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
    - name: app
      image: <image>
      env:
        - name: MY_VAR
          value: "my value"
Check the Kubernetes docs.
Helm:
Add the values in your values.yaml:
myKey: myValue
Then reference it in your helm template:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
    - name: app
      image: <image>
      env:
        - name: MY_VAR
          value: {{ .Values.myKey | quote }}
Check out the helm docs.
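To cover your three deployment environments with one image, a common pattern is one values file per environment; for example, a hypothetical values-prod.yaml overriding the default:

```yaml
# values-prod.yaml (illustrative; one such file per environment)
myKey: my-production-value
```

At deploy time you select it with something like helm upgrade --install my-app ./chart -f values-prod.yaml, so the image stays identical across dev, stage, and prod and only the injected environment changes.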

How to import a config file into a container from k8s

I have a docker file which I've written for a React application. This app takes a .json config file that it uses at run time. The file doesn't contain any secrets.
So I've built the image without the config file, and now I'm unsure how to transfer the JSON file when I run it up.
I'm looking at deploying this in production using a CI/CD process which would entail:
gitlab (actions) building the image
pushing this to a docker repository
Kubernetes picking this up and running/starting the container
I think it's at the last point that I want to add the JSON configuration.
My question is: how do I add the config file to the application when k8s starts it up?
If I understand correctly, k8s doesn't have any local storage to create a volume from to copy it in? Can I give docker run a separate git repo where I can hold the config files?
You should take a look at configmap.
From k8s documentation configmap:
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
In your case, you want a volume containing a file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app
data:
  config.json: | # your file name
    <file-content>
A ConfigMap can be created manually or generated from a file:
Directly in the cluster: kubectl create configmap <name> --from-file <path-to-file>
As a YAML file: kubectl create configmap <name> --from-file <path-to-file> --dry-run=client -o yaml > <file-name>.yaml
Once you have your ConfigMap, you must modify your deployment/pod to add a volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-name>
spec:
  ...
  template:
    metadata:
      ...
    spec:
      ...
      containers:
        - name: <container-name>
          ...
          volumeMounts:
            - mountPath: '<path>/config.json'
              name: config-volume
              readOnly: true
              subPath: config.json
      volumes:
        - name: config-volume
          configMap:
            name: <name-of-configmap>
To deploy to your cluster, you can use plain YAML, or I suggest you take a look at Kustomize or Helm charts.
They are both popular systems for deploying applications. With Kustomize, the configMapGenerator feature fits your case.
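If you go the Kustomize route, the configMapGenerator mentioned above might be wired up like this (file names are illustrative):

```yaml
# kustomization.yaml
resources:
  - deployment.yaml
configMapGenerator:
  - name: your-app
    files:
      - config.json
```

Applying with kubectl apply -k . generates the ConfigMap with a content-hash suffix and rewrites references to it, so pods roll automatically whenever config.json changes.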
Good luck :)

How to attach a volume to a kubernetes pod container like in docker?

I am new to Kubernetes but familiar with docker.
Docker Use Case
Usually, when I want to persist data I just create a volume with a name, then attach it to the container; even when I stop it and start another one with the same image, I can see the data persisting.
So this is what I used to do in docker:
docker volume create nginx-storage
docker run -it --rm -v nginx-storage:/usr/share/nginx/html -p 80:80 nginx:1.14.2
then I:
Create a new html file in /usr/share/nginx/html
Stop container
Run the same docker run command again (will create another container with same volume)
html file exists (which means data persisted in that volume)
Kubernetes Use Case
Usually, when I work with Kubernetes volumes I specify a PVC (PersistentVolumeClaim) and PV (PersistentVolume) using hostPath which will bind mount directory or a file from the host machine to the container.
What I want is to reproduce the behavior described in the Docker use case above. How can I do that? Is the Kubernetes volume-creation process different from Docker's? If possible, a YAML file would help me understand.
To a first approximation, you can't (portably) do this. Build your content into the image instead.
There are two big practical problems, especially if you're running a production-oriented system on a cloud-hosted Kubernetes:
If you look at the list of PersistentVolume types, very few of them can be used in ReadWriteMany mode. It's very easy to get, say, an AWSElasticBlockStore volume that can only be used on one node at a time, and something like this will probably be the default cluster setup. That means you'll have trouble running multiple pod replicas serving the same (static) data.
Once you do get a volume, it's very hard to edit its contents. Consider the aforementioned EBS volume: you can't edit it without being logged into the node on which it's mounted, which means finding the node, convincing your security team that you can have root access over your entire cluster, enabling remote logins, and then editing the file. That's not something that's actually possible in most non-developer Kubernetes setups.
The thing you should do instead is build your static content into a custom image. An image registry of some sort is all but required to run Kubernetes and you can push this static content server into the same registry as your application code.
FROM nginx:1.14.2
COPY . /usr/share/nginx/html
# Base image has a working CMD, no need to repeat it
Then in your deployment spec, set image: registry.example.com/nginx-frontend:20220209 or whatever you've chosen to name this build of this image, and do not use volumes at all. You'd deploy this the same way you deploy other parts of your application; you could use Helm or Kustomize to simplify the update process.
Correspondingly, in the plain-Docker case, I'd avoid volumes here. You don't discuss how files get into the nginx-storage named volume; if you're using imperative commands like docker cp or debugging tools like docker exec, those approaches are hard to script and are intrinsically local to the system they're running on. It's not easy to copy a Docker volume from one place to another. Images, though, can be pushed and pulled through a registry.
I managed to do that by creating a PVC only this is how I did it (with an Nginx image):
nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
nginx-deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: nginx-data
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-data
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      nodePort: 30080
  type: NodePort
Once I run kubectl apply on the PVC and then on the deployment, going to localhost:30080 shows a 404 Not Found page. That means all data in /usr/share/nginx/html was deleted when the container started, because a directory from the k8s cluster node is bind-mounted into the container as a volume:
/usr/share/nginx/html <-- dir in volume
/var/lib/k8s-pvs/nginx2-data/pvc-9ba811b0-e6b6-4564-b6c9-4a32d04b974f <-- dir from node (was automatically created)
I tried adding a new file into the html dir in that container as a new index.html file, then deleted the container; a new container was created by the pod, and checking localhost:30080 showed the newly created home page.
I tried deleting the deployment and reapplying it (without deleting the PVC); checking localhost:30080, everything still persists.
An alternative solution is specified in the comments (kubernetes.io/docs/tasks/configure-pod-container/…) by larsks.

how to inspect the content of a persistent volume created by kubernetes on azure cloud service

I have packed the software into a container. I need to deploy the container to a cluster via Azure Container Service. The software writes its output to the directory /src/data/, and I want to access the content of that whole directory.
After searching, I have two possible solutions:
use Blob Storage on Azure, but after searching I can't find an executable method;
use a Persistent Volume, but all the official Azure documentation and pages I found are about the Persistent Volume itself, not about how to inspect it.
I need to access and manage my output directory on the Azure cluster. In other words, I need a savior.
As I've explained here and here, in general, if you can interact with the cluster using kubectl, you can create a pod/container, mount the PVC inside, and use the container's tools to, e.g., ls the contents. If you need more advanced editing tools, replace the container image busybox with a custom one.
Create the inspector pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
    - image: busybox
      name: pvc-inspector
      command: ["tail"]
      args: ["-f", "/dev/null"]
      volumeMounts:
        - mountPath: /pvc
          name: pvc-mount
  volumes:
    - name: pvc-mount
      persistentVolumeClaim:
        claimName: YOUR_CLAIM_NAME_HERE
EOF
Inspect the contents
kubectl exec -it pvc-inspector -- sh
$ ls /pvc
Clean Up
kubectl delete pod pvc-inspector

How to update Kubernetes secrets for all namespaces

I am using docker and kubernetes on Google Cloud Platform, with the Kubernetes Engine.
I have secrets configured in an app.yaml file like so:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: $CI_COMMIT_REF_SLUG
  labels:
    app: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: gcr.io/engagement-org/app:$CI_COMMIT_SHA
          imagePullPolicy: Always
          ports:
            - containerPort: 9000
          env:
            - name: MAILJET_APIKEY_PUBLIC
              valueFrom:
                secretKeyRef:
                  name: mailjet
                  key: apikey_public
            - name: MAILJET_APIKEY_PRIVATE
              valueFrom:
                secretKeyRef:
                  name: mailjet
                  key: apikey_private
Each time I push on a new branch, a new namespace is created through a deploy in my gitlab-ci file. Secrets are created like so :
- kubectl create secret generic mailjet --namespace=$CI_COMMIT_REF_SLUG --from-literal=apikey_public=$MAILJET_APIKEY_PUBLIC --from-literal=apikey_private=$MAILJET_APIKEY_PRIVATE || echo 'Secret already exist';
Now, I have updated my mailjet api keys and want to make the change to all namespaces.
I can edit the secret on each namespace by getting a shell on the pods and running kubectl edit secret mailjet --namespace=<namespace_name>
What I want is to send the new secret values to the new pods that will be created in the future. When I deploy a new one, it still uses the old values.
From what I understand, the gitlab-ci file uses the app.yaml file to replace the environment variables with values. But I don't understand where app.yaml finds the original values.
Thank you for your help.
In general, Kubernetes namespaces are designed to provide isolation for components running inside them. For this reason, the Kubernetes API is not really designed to perform update operations across namespaces, or make secrets usable across namespaces.
That being said, there are a few things to solve this issue.
1. Use a single namespace & Helm releases instead of separate namespaces
From the looks of it, you are using Gitlab CI to deploy individual branches to review environments (presumably using Gitlab's Review App feature?). The same outcome can be achieved by deploying all Review Apps into the same namespace, and using Helm to manage multiple deployments ("releases" in Helm-speak) of the same application within a single namespace.
Within the gitlab-ci.yml, creating a Helm release for a new branch might look similar to this:
script:
  - helm upgrade --namespace default --install review-$CI_COMMIT_REF_SLUG ./path/to/chart
Of course, this requires that you have defined a Helm chart for your application (which, in essence, is just a set of YAML templates with default variables that can be overridden for individual releases). Refer to the documentation (linked above) for more information on creating Helm charts.
2. Keep secrets in sync across namespaces
We have had a similar issue a while ago and resorted to writing a custom Kubernetes controller that keeps secrets in sync across namespaces. It's open source and you can find it on GitHub (use with caution, though). It is based on annotations and provides unidirectional propagation of changes from a single, authoritative parent secret:
apiVersion: v1
kind: Secret
metadata:
  name: mailjet
  namespace: some-kubernetes-namespace
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/mailjet
With the secret replicator deployed in your cluster, using this annotation will propagate all changes made to the mailjet secret in the default namespace to every secret annotated as shown above, in any namespace.
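With that layout, rotating the keys means updating only the authoritative parent secret in the default namespace; the replicator then propagates the change everywhere. The parent might look like this (key values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mailjet
  namespace: default
type: Opaque
stringData:
  apikey_public: NEW_PUBLIC_KEY
  apikey_private: NEW_PRIVATE_KEY
```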
Now there is also a way to share or sync a secret across namespaces, using the ClusterSecret operator:
https://github.com/zakkg3/ClusterSecret
