I am using Docker and Kubernetes on Google Cloud Platform, with Kubernetes Engine.
I have secrets configured in an app.yaml file like so:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: $CI_COMMIT_REF_SLUG
  labels:
    app: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: gcr.io/engagement-org/app:$CI_COMMIT_SHA
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        env:
        - name: MAILJET_APIKEY_PUBLIC
          valueFrom:
            secretKeyRef:
              name: mailjet
              key: apikey_public
        - name: MAILJET_APIKEY_PRIVATE
          valueFrom:
            secretKeyRef:
              name: mailjet
              key: apikey_private
Each time I push to a new branch, a new namespace is created through a deploy job in my gitlab-ci file. Secrets are created like so:
- kubectl create secret generic mailjet --namespace=$CI_COMMIT_REF_SLUG --from-literal=apikey_public=$MAILJET_APIKEY_PUBLIC --from-literal=apikey_private=$MAILJET_APIKEY_PRIVATE || echo 'Secret already exist';
Now I have updated my Mailjet API keys and want to apply the change to all namespaces.
I can edit the secret on each namespace by getting a shell on the pods and running kubectl edit secret mailjet --namespace=<namespace_name>
What I want is for new pods created in the future to pick up the new secret values. When I deploy a new one, it still uses the old values.
From what I understand, the gitlab-ci file uses the app.yaml file to replace the environment variables with values. But I don't understand where app.yaml finds the original values.
Thank you for your help.
In general, Kubernetes namespaces are designed to provide isolation for the components running inside them. For this reason, the Kubernetes API is not really designed to perform update operations across namespaces, or to make secrets usable across namespaces.
That being said, there are a few ways to solve this issue.
1. Use a single namespace & Helm releases instead of separate namespaces
From the looks of it, you are using Gitlab CI to deploy individual branches to review environments (presumably using Gitlab's Review App feature?). The same outcome can be achieved by deploying all Review Apps into the same namespace, and using Helm to manage multiple deployments ("releases" in Helm-speak) of the same application within a single namespace.
Within the gitlab-ci.yml, creating a Helm release for a new branch might look similar to this:
script:
  - helm upgrade --namespace default --install review-$CI_COMMIT_REF_SLUG ./path/to/chart
Of course, this requires that you have defined a Helm chart for your application (which, in essence, is just a set of YAML templates with a set of default values that can then be overridden for individual releases). Refer to the documentation (linked above) for more information on creating Helm charts.
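As a rough sketch (not a complete chart), the Deployment from your app.yaml could become a template along these lines; imageTag is an assumed value name that you would pass per release, e.g. by appending --set imageTag=$CI_COMMIT_SHA to the helm upgrade command above:

# templates/deployment.yaml -- minimal sketch; the release name keeps each
# review app's resources unique inside the shared namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  labels:
    app: {{ .Release.Name }}-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
      - name: app
        # imageTag is an assumed value, e.g. set from $CI_COMMIT_SHA at deploy time
        image: gcr.io/engagement-org/app:{{ .Values.imageTag }}
        ports:
        - containerPort: 9000
        env:
        - name: MAILJET_APIKEY_PUBLIC
          valueFrom:
            secretKeyRef:
              name: mailjet
              key: apikey_public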
2. Keep secrets in sync across namespaces
We had a similar issue a while ago and resorted to writing a custom Kubernetes controller that keeps secrets in sync across namespaces. It is open source and you can find it on GitHub (use with caution, though). It is based on annotations and provides unidirectional propagation of changes from a single, authoritative parent secret:
apiVersion: v1
kind: Secret
metadata:
  name: mailjet
  namespace: some-kubernetes-namespace
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/mailjet
With the secret replicator deployed in your cluster, this annotation will propagate all changes made to the mailjet secret in the default namespace to every secret annotated like the one shown above, in any namespace.
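With that in place, rotating the keys only requires updating the single parent secret in the default namespace, for example with the same variables your CI job already uses (the create | apply pipe below is just one way to make the command idempotent):

# update the authoritative parent secret; the replicator propagates it to the annotated copies
kubectl create secret generic mailjet --namespace=default \
  --from-literal=apikey_public=$MAILJET_APIKEY_PUBLIC \
  --from-literal=apikey_private=$MAILJET_APIKEY_PRIVATE \
  --dry-run=client -o yaml | kubectl apply -f -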
There is now also a way to share or sync a secret across namespaces, by using the ClusterSecret operator:
https://github.com/zakkg3/ClusterSecret
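A rough sketch of what a ClusterSecret object might look like; the field names are taken from the project's README, so treat them as assumptions and check the repository for the version you install:

# sketch only -- verify the apiVersion and fields against the ClusterSecret README
apiVersion: clustersecret.io/v1
kind: ClusterSecret
metadata:
  name: mailjet
  namespace: clustersecret
matchNamespace:
  - "*"                      # namespace name patterns to replicate the secret into
data:
  apikey_public: <base64-encoded value>
  apikey_private: <base64-encoded value>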
I am trying to avoid having to create three different images for separate deployment environments.
Some context on our current ci/cd pipeline:
For the CI portion, we build our app into a docker container and then submit that container to a security scan. Once the security scan is successful, the container gets put into a private container repository.
For the CD portion, using helm charts, we pull the container from the repository and then deploy to a company managed Kubernetes cluster.
There was an ask, and the solution was to use a piece of software in the container. For some reason (I'm the DevOps person, not the software engineer) the software needs environment variables (specific to the deployment environment) passed to it when it starts. How can we start this software and pass environment variables to it at deployment time?
I could just create three different images with the environment variables but I feel like that is an anti-pattern. It takes away from the flexibility of having one image that can be deployed to different environments.
Can anyone point me to resources on starting an application with specific environment variables using Helm? I've looked but did not find a solution or anything that pointed me in the right direction. As a plan B, I'll just create three different images, but I want to make sure there isn't a better way.
Depending on the container orchestration, you can pass the env vars in different ways:
Plain docker:
docker run -e MY_VAR=MY_VAL <image>
Docker compose:
version: '3'
services:
  app:
    image: '<image>'
    environment:
      - MY_VAR=my-value
Check the docker-compose docs.
Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
  - name: app
    image: <image>
    env:
    - name: MY_VAR
      value: "my value"
Check the Kubernetes docs.
Helm:
Add the values in your values.yaml:
myKey: myValue
Then reference it in your helm template:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
  - name: app
    image: <image>
    env:
    - name: MY_VAR
      value: {{ .Values.myKey }}
Check out the helm docs.
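To keep a single image across environments, you would then override the values at deploy time, either with per-environment values files or with --set (the file and key names below are just examples):

# one chart, one image, different values per environment
helm upgrade --install my-app ./chart -f values-dev.yaml
helm upgrade --install my-app ./chart -f values-prod.yaml
# or override a single key on the command line
helm upgrade --install my-app ./chart --set myKey=prod-value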
I am trying to run e2e tests on a Kubernetes cluster, but while they run, the pod images are pulled from Docker using the default username present in the GitHub setup, and the pull limit is being exceeded.
I need to pass my Docker user credentials while running the e2e tests.
Is there anything I can export or pass as my user credentials while running the e2e tests?
I am using the Ginkgo framework to trigger the e2e tests.
Welcome to the community!
From the Kubernetes perspective, it's possible to pass environment variables to containers running in pods. You'll need to specify them in the YAML file for your pods.
Here is an example from the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Please see the k8s documentation on how to set it up: Define Environment Variables for a Container.
Once you have managed this part, you should consider doing it securely. For this, it's advised to use Kubernetes secrets.
In this Kubernetes documentation (Distribute Credentials Securely Using Secrets) you will find all the steps and examples on how to do it.
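As a minimal sketch of that pattern (the secret name docker-creds and the variable names are assumptions, not something defined by your setup), you could store the credentials in a Secret:

kubectl create secret generic docker-creds \
  --from-literal=username=$DOCKER_USERNAME \
  --from-literal=password=$DOCKER_PASSWORD

and then reference it from the test pod's spec as environment variables:

env:
- name: DOCKER_USERNAME
  valueFrom:
    secretKeyRef:
      name: docker-creds
      key: username
- name: DOCKER_PASSWORD
  valueFrom:
    secretKeyRef:
      name: docker-creds
      key: password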
Keeping this in mind, there might also be solutions built into the Ginkgo e2e tooling itself.
Hey, I have a wider problem: when I update secrets in Kubernetes, they are not picked up by pods unless the pods are upgraded/rescheduled or simply re-deployed. I saw the other Stack Overflow post about it, but none of the solutions fit me: Update kubernetes secrets doesn't update running container env vars
I also saw the in-app solution of a Python script on the pod that updates its secret automatically (https://medium.com/analytics-vidhya/updating-secrets-from-a-kubernetes-pod-f3c7df51770d), but it seems like a long shot. So I came up with the idea of adding an annotation to the deployment manifest, hoping it would re-schedule the pods every time the Helm chart puts a new timestamp in it. It does put the timestamp in, but it doesn't reschedule. Any thoughts on how to force that behaviour?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    lastUpdate: {{ now }}
I also don't feel like adding this patch command to the CI/CD deployment, as it's arbitrary and doesn't feel like the right solution:
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
Didn't anyone else find a better solution to re-deploy pods when secrets change?
Kubernetes by itself does not do a rolling update of a deployment automatically when a secret is changed, so there needs to be a controller which does that for you. Take a look at Reloader, a controller that watches for changes in ConfigMaps and/or Secrets and then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet and StatefulSet.
Add the reloader.stakater.com/auto annotation to the deployment named xxx and have a ConfigMap called xxx-configmap or a Secret called xxx-secret.
This will automatically discover the deployments/daemonsets/statefulsets where xxx-configmap or xxx-secret is used, either via an environment variable or a volume mount, and it will perform a rolling upgrade on the related pods when xxx-configmap or xxx-secret is updated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    reloader.stakater.com/auto: "true"
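If you would rather react only to one specific secret instead of everything the deployment consumes, Reloader also documents a named form of the annotation; the sketch below assumes the secret is called xxx-secret, and you should check the project README for the exact syntax in your version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  annotations:
    # trigger a rolling upgrade only when this particular secret changes
    secret.reloader.stakater.com/reload: "xxx-secret"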
The dilemma: Deploy multiple app and database container pairs with identical docker image and code, but different config (different clients using subdomains).
What are some logical ways to approach this, as it doesn't seem kubernetes has an integration that would support this kind of setup?
Possible Approaches
Use a single app service for all app deployments and a single database service for all database deployments. Have a single Nginx static file service and deployment running that serves static files from a volume shared between the app deployments (all use the same set of static files). Whenever a new deployment is needed, have a bash script copy the app and database .yaml deployment files, sed-replace the client name, point them at the correct configmap (which is written manually, of course) and kubectl apply them. A main Nginx ingress handles incoming traffic and points to the correct pod through the app deployment service.
Similar to the above, except using a StatefulSet instead of separate deployments, and an init container to copy different configs to mounted volumes (the drawbacks are that you cannot delete an item in the middle of a StatefulSet, which would be the case if you no longer need a specific container for a client, and that this seems like a very hacky approach).
Ideally, if a StatefulSet could use the downward API to dynamically choose a configmap name based on the ordinal index of the pod, that would resolve the issue (you would make your config files manually with the index in the name, and the right one would be selected). Something like:
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
envFrom:
- configMapRef:
    name: $(POD_NAME)-config
However that functionality isn't available in kubernetes.
A templating engine like Helm can help with this. (I believe Kustomize, which ships with current Kubernetes, can do this too, but I'm much more familiar with Helm.) The basic idea is that you have a chart that contains the Kubernetes YAML files but can use a templating language (the Go text/template library) to dynamically fill in content.
In this setup you'd generally have Helm create both the ConfigMap and the matching Deployment; in the setup you describe you'd install it separately (as a Helm release) for each tenant. Say the Nginx configurations were different enough that you wanted to store them in external files; the core parts of your chart would include:
values.yaml (overridable configuration, helm install --set nginxConfig=bar.conf):
# nginxConfig specifies the name of the Nginx configuration
# file to embed.
nginxConfig: foo.conf
templates/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-config
data:
  nginx.conf: |-
{{ .Files.Get .Values.nginxConfig | indent 4 }}
templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-nginx
spec:
  ...
      volumes:
        - name: nginx-config
          configMap:
            name: {{ .Release.Name }}-{{ .Chart.Name }}-config
The {{ .Release.Name }}-{{ .Chart.Name }} is a typical convention that allows installing multiple copies of the chart in the same namespace; the first part is a name you give the helm install command and the second part is the name of the chart itself. You can also directly specify the ConfigMap content, referring to other .Values... settings from the values.yaml file, use the ConfigMap as environment variables instead of files, and so on.
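Installing one copy of the chart per tenant could then look roughly like this (Helm 3 syntax; release and file names are placeholders):

# each tenant gets its own release, pointing at its own Nginx config file inside the chart
helm install tenant-a ./mychart --set nginxConfig=tenant-a.conf
helm install tenant-b ./mychart --set nginxConfig=tenant-b.conf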
While dynamic structural replacement isn't possible (plus or minus, see below for the whole story), I believe you were in the right ballpark with your initContainer: thought; you can use the serviceAccount to fetch the configMap from the API in an initContainer: and then source that environment on startup by the main container:
initContainers:
- command:
  - /bin/bash
  - -ec
  - |
    curl -o /whatever/env.sh \
      -H "Authorization: Bearer $(cat /var/run/secret/etc/etc)" \
      https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${POD_NS}/configmaps/${POD_NAME}-config
  volumeMounts:
  - name: cfg # etc etc
containers:
- command:
  - /bin/bash
  - -ec
  - "source /whatever/env.sh; exec /usr/bin/my-program"
  volumeMounts:
  - name: cfg # etc etc
volumes:
- name: cfg
  emptyDir: {}
Here the ConfigMap fetching is inline in the PodSpec, but if you had a container image specialized for fetching ConfigMaps and serializing them into a format your main containers could consume, I wouldn't expect the actual solution to be nearly this verbose.
A separate, and a lot more complicated (but perhaps more elegant) approach is a Mutating Admission Webhook; it looks like they even recently formalized your very use case with Pod Presets, but it wasn't super clear from the documentation in which version that functionality first appeared, nor whether there are any apiserver flags one must twiddle to take advantage of it.
PodPresets have been removed since v1.20. A more elegant solution to this problem, based on a Mutating Admission Webhook, is now available: https://github.com/spoditor/spoditor
Essentially, it uses a custom annotation on the PodSpec template, like:
annotations:
  spoditor.io/mount-volume: |
    {
      "volumes": [
        {
          "name": "my-volume",
          "secret": {
            "secretName": "my-secret"
          }
        }
      ],
      "containers": [
        {
          "name": "nginx",
          "volumeMounts": [
            {
              "name": "my-volume",
              "mountPath": "/etc/secrets/my-volume"
            }
          ]
        }
      ]
    }
Now the nginx container in each Pod of the StatefulSet will try to mount its own dedicated secret, following the pattern my-secret-{pod ordinal}.
You will just need to make sure my-secret-0, my-secret-1, and so on exist in the same namespace as the StatefulSet.
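For example, a small loop like the one below could pre-create those per-ordinal secrets (the namespace, key name, value and replica count are placeholders):

# one secret per StatefulSet ordinal, assuming 3 replicas
for i in 0 1 2; do
  kubectl create secret generic my-secret-$i \
    --namespace my-namespace \
    --from-literal=password=changeme-$i
done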
There is more advanced usage of the annotation in the project's documentation.
Is there any way to deploy Lagom projects on Kubernetes in different environments (i.e. dev, stage, prod) such that I can use one image with multiple configuration overrides?
For example, let's say I have an environment variable, foo=bar-{{env}}. I want to build and publish one image and override configurations so that in dev foo=bar-dev and in prod foo=bar-prod.
Currently, my understanding is that the application.conf is tied to the image and cannot be overridden. If this is correct, is there a way to work around this so that I do not need to create multiple images, one for each environment?
You can do this in a few ways:
Static:
You can create 3 deployments in 3 namespaces and add the env variables to each deployment, managing these variables manually per deployment:
apiVersion: v1
kind: Pod
metadata:
  namespace: dev
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: java-demo-container
    image: my-super-java-app
    env:
    - name: foo
      value: "bar-dev"
    - name: JAVA_HOME
      value: "/opt/java/jdk1.7.0_05/bin/java"
Helm:
You can make a Helm chart and use 3 values files to deploy your application to the different environments.
To develop charts, you can read the official documentation or find examples in the official Kubernetes repo.
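At deploy time that could look roughly like this, where env.foo is an assumed key in the chart's values.yaml (names are placeholders):

# same chart, same image; only the values differ per environment
helm upgrade --install my-lagom-app ./chart --namespace dev --set env.foo=bar-dev
helm upgrade --install my-lagom-app ./chart --namespace prod --set env.foo=bar-prod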
Alternatively, you can set a Typesafe Config system property to override the application.conf file.
application.conf (the default conf file, used in development):
foo: bar-dev
application.prod.conf:
include "application.conf"
foo: bar-prod
Set the system property via your Dockerfile:
ENTRYPOINT java -Dconfig.resource="$CONFIG_FILE"
In the Kubernetes YAML:
env:
- name: CONFIG_FILE
  value: "application.prod.conf"