Hey, I have a wider problem: when I update Secrets in Kubernetes, the changes are not picked up by pods unless they are upgraded/rescheduled or simply re-deployed. I saw the other Stack Overflow post about it, but none of the solutions fit my case: Update kubernetes secrets doesn't update running container env vars
I also saw the in-app solution of running a Python script on the pod to update its secret automatically (https://medium.com/analytics-vidhya/updating-secrets-from-a-kubernetes-pod-f3c7df51770d), but it seems like a long shot. So I came up with the idea of adding an annotation to the deployment manifest, hoping it would reschedule pods every time a Helm chart puts a new timestamp in it. It does put the timestamp in, but it doesn't reschedule. Any thoughts on how to force that behaviour?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    lastUpdate: {{ now }}
I also don't feel like adding this patch command to the CI/CD deployment, as it's arbitrary and, well, doesn't feel like the right solution:
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
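(One likely reason the lastUpdate annotation above changes nothing: it sits under the Deployment's own metadata, and only changes under spec.template trigger a rollout. Helm's documented checksum pattern puts the annotation on the pod template instead; a minimal sketch, assuming the secret is templated in a secret.yaml file in the chart:)
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}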
Didn't anyone else find a better solution to re-deploy pods on changed secrets?
Kubernetes by itself does not do a rolling update of a deployment automatically when a Secret changes, so there needs to be a controller that does it for you. Take a look at Reloader, a controller that watches for changes in ConfigMaps and/or Secrets and then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet and StatefulSet.
Add the reloader.stakater.com/auto annotation to the deployment named xxx and have a ConfigMap called xxx-configmap or a Secret called xxx-secret.
This will automatically discover the deployments/daemonsets/statefulsets where xxx-configmap or xxx-secret is used, either via an environment variable or a volume mount, and it will perform a rolling upgrade on the related pods when xxx-configmap or xxx-secret is updated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    reloader.stakater.com/auto: "true"
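Note that Reloader itself has to be running in the cluster first; a minimal install sketch, assuming the Stakater Helm chart repository:
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install reloader stakater/reloader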
Related
On Minikube, using kubectl, I run an image created by Docker with the following command:
kubectl run my-service --image=my-service-image:latest --port=8080 --image-pull-policy Never
But on Minikube, a different configuration has to be applied to the application. I prepared some environment variables in a deployment file and want to apply them to the images on Minikube. Is there a way to tell kubectl to run those images using a given deployment file, or some other way to provide the images with those values?
I tried the apply verb of kubectl, for example, but it tries to create the pod instead of applying the configuration to it.
In Minikube/Kubernetes you need to set the environment variables in the YAML file of your pod/deployment.
Here is an example of how you can configure environment variables in a Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Here you can find more information about environment variables.
In this case, if you want to change any value, you need to delete the pod and apply it again. But if you use a Deployment, all modifications can be applied with the kubectl apply command.
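For illustration, here is a minimal sketch of the same container wrapped in a Deployment (the Deployment name and selector label are assumptions), so that a changed env value rolls out with a plain kubectl apply -f instead of a pod deletion:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envar-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      purpose: demonstrate-envars
  template:
    metadata:
      labels:
        purpose: demonstrate-envars
    spec:
      containers:
      - name: envar-demo-container
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
Editing DEMO_GREETING and re-running kubectl apply -f on this file triggers a rolling update.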
I am using YAML to create a pod and have specified the resource requests and limits. Now I am not aware of how to modify the resource limits of a running pod. For example:
memory-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
Then I run the command oc create -f memory-demo.yml and a pod named memory-demo gets created.
My question is: what should I do to modify the memory limit from 200Mi to 600Mi? Do I need to delete the existing pod and recreate it using the modified YAML file?
I am a total newbie. Need help.
First and foremost, it is very unlikely that you really want to be (re)creating Pods directly. Dig into what a Deployment is and how it works. Then you can simply apply the change to the spec template in the Deployment, and Kubernetes will upgrade all the pods in that Deployment to match the new spec in a hands-free rolling update.
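For example, a minimal sketch of the same container wrapped in a Deployment (the Deployment name and labels are assumptions); raising the limit in the template and re-applying the file triggers a rolling update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-demo
  template:
    metadata:
      labels:
        app: memory-demo
    spec:
      containers:
      - name: memory-demo-ctr
        image: polinux/stress
        resources:
          limits:
            memory: "600Mi"   # raised from 200Mi
          requests:
            memory: "100Mi"
        command: ["stress"]
        args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
Then oc apply -f (or kubectl apply -f) on this file rolls the pods to the new limit.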
A live change of the memory limit for a running container is certainly possible, but not by means of Kubernetes (and it will not be reflected in kube state if you do so). Look at docker update.
You can use the replace command to modify an existing object based on the contents of the specified configuration file (documentation):
oc replace -f memory-demo.yml
EDIT: However, some spec fields cannot be updated on a running Pod. The only way is to delete and re-create the pod.
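If you do want that delete-and-recreate in one step, replace has a flag for it (this deletes the object and creates it again, so the pod restarts):
oc replace --force -f memory-demo.yml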
I am using Docker and Kubernetes on Google Cloud Platform, with Kubernetes Engine.
I have secrets configured in an app.yaml file like so:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: $CI_COMMIT_REF_SLUG
  labels:
    app: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: gcr.io/engagement-org/app:$CI_COMMIT_SHA
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        env:
        - name: MAILJET_APIKEY_PUBLIC
          valueFrom:
            secretKeyRef:
              name: mailjet
              key: apikey_public
        - name: MAILJET_APIKEY_PRIVATE
          valueFrom:
            secretKeyRef:
              name: mailjet
              key: apikey_private
Each time I push to a new branch, a new namespace is created through a deploy step in my gitlab-ci file. Secrets are created like so:
- kubectl create secret generic mailjet --namespace=$CI_COMMIT_REF_SLUG --from-literal=apikey_public=$MAILJET_APIKEY_PUBLIC --from-literal=apikey_private=$MAILJET_APIKEY_PRIVATE || echo 'Secret already exist';
Now I have updated my Mailjet API keys and want to push the change to all namespaces.
I can edit the secret in each namespace by getting a shell on the pods and running kubectl edit secret mailjet --namespace=<namespace_name>.
What I want is for the new secret values to reach the new pods that will be created in the future. When I deploy a new one, it still uses the old values.
From what I understand, the gitlab-ci file uses the app.yaml file to replace the environment variables with values. But I don't understand where app.yaml finds the original values.
Thank you for your help.
In general, Kubernetes namespaces are designed to provide isolation for components running inside them. For this reason, the Kubernetes API is not really designed to perform update operations across namespaces, or make secrets usable across namespaces.
That being said, there are a few things to solve this issue.
1. Use a single namespace & Helm releases instead of separate namespaces
From the looks of it, you are using Gitlab CI to deploy individual branches to review environments (presumably using Gitlab's Review App feature?). The same outcome can be achieved by deploying all Review Apps into the same namespace, and using Helm to manage multiple deployments ("releases" in Helm-speak) of the same application within a single namespace.
Within the gitlab-ci.yml, creating a Helm release for a new branch might look similar to this:
script:
  - helm upgrade --namespace default --install review-$CI_COMMIT_REF_SLUG ./path/to/chart
Of course, this requires that you have defined a Helm chart for your application (which, in essence is just a set of YAML templates with a set of default variables that can then be overridden for individual releases). Refer to the documentation (linked above) for more information on creating Helm charts.
2. Keep secrets in sync across namespaces
We had a similar issue a while ago and resorted to writing a custom Kubernetes controller that keeps secrets in sync across namespaces. It's open source and you can find it on GitHub (use with caution, though). It is based on annotations and provides unidirectional propagation of changes from a single, authoritative parent secret:
apiVersion: v1
kind: Secret
metadata:
  name: mailjet
  namespace: some-kubernetes-namespace
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/mailjet
With the secret replicator deployed in your cluster, this annotation will propagate all changes made to the mailjet secret in the default namespace to every secret annotated as shown above, in any namespace.
Now there is a way to share or sync a secret across namespaces, and it's by using the ClusterSecret operator:
https://github.com/zakkg3/ClusterSecret
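A rough sketch of the custom resource it introduces (field names follow the project README at the time of writing; verify against the repo, since the CRD schema may differ in your version):
apiVersion: clustersecret.io/v1
kind: ClusterSecret
metadata:
  name: mailjet
matchNamespace:
  - review-*                           # namespaces to propagate into (pattern is illustrative)
data:
  apikey_public: <base64-encoded value>
  apikey_private: <base64-encoded value>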
I am running the kubeadm alpha version to set up my Kubernetes cluster.
From Kubernetes, I am trying to pull Docker images hosted in a Nexus repository.
Whenever I try to create a pod, it gives "ImagePullBackOff" every time. Can anybody help me with this?
Details are in https://github.com/kubernetes/kubernetes/issues/41536
Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    name: test
spec:
  containers:
  - image: 123.456.789.0:9595/test
    name: test
    ports:
    - containerPort: 8443
  imagePullSecrets:
  - name: my-secret
You need to refer to the secret you have just created from the Pod definition.
When you create the secret with kubectl create secret docker-registry my-secret --docker-server=123.456.789.0 ..., the server must exactly match what's in your Pod definition, including the port number (and if it's a secure registry, it must also match up with the docker command line in systemd).
Also, the secret must be in the same namespace where you are creating your Pod, but that seems to be in order.
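For the pod above, that would mean creating the secret with the port included in the server value (username and password are placeholders for your Nexus credentials):
kubectl create secret docker-registry my-secret \
  --docker-server=123.456.789.0:9595 \
  --docker-username=<nexus-user> \
  --docker-password=<nexus-password>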
I received a similar error while launching containers from the Amazon ECR registry. The issue was that I didn't specify the exact "Image URI" location in the deployment file.
I have a deployment with a single pod, with my custom Docker image, like:
containers:
- name: mycontainer
  image: myimage:latest
During development I want to push a new latest version and have the Deployment updated.
I can't find how to do that without explicitly defining a tag/version, incrementing it for each build, and running
kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1
You can configure your pod with a grace period (for example 30 seconds or more, depending on container startup time and image size) and set imagePullPolicy: "Always". Then use kubectl delete pod pod_name.
A new container will be created and the latest image automatically downloaded; then the old container will be terminated.
Example:
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: my-container        # container names must be DNS-1123 labels (no underscores)
    image: my_image:latest
    imagePullPolicy: "Always"
I'm currently using Jenkins for automated builds and image tagging and it looks something like this:
kubectl --user="kube-user" --server="https://kubemaster.example.com" --token=$ACCESS_TOKEN set image deployment/my-deployment mycontainer=myimage:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
Another trick is to initially run:
kubectl set image deployment/my-deployment mycontainer=myimage:latest
and then:
kubectl set image deployment/my-deployment mycontainer=myimage
It will actually trigger the rolling update, but be sure you also have imagePullPolicy: "Always" set.
Update:
Another trick I found, where you don't have to change the image name, is to change the value of a field that will trigger a rolling update, like terminationGracePeriodSeconds. You can do this using kubectl edit deployment your_deployment, or kubectl apply -f your_deployment.yaml, or using a patch like this:
kubectl patch deployment your_deployment -p \
'{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'
Just make sure you always change the number value.
UPDATE 2019-06-24
Based on @Jodiug's comment, if you have version 1.15 or later you can use the command:
kubectl rollout restart deployment/demo
Read more on the issue:
https://github.com/kubernetes/kubernetes/issues/13488
Well there is an interesting discussion about this subject on the kubernetes GitHub project. See the issue: https://github.com/kubernetes/kubernetes/issues/33664
From the solutions described there, I would suggest one of two.
First
1. Prepare the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/apps/demo:master
        imagePullPolicy: Always
        env:
        - name: FOR_GODS_SAKE_PLEASE_REDEPLOY
          value: 'THIS_STRING_IS_REPLACED_DURING_BUILD'
2. Deploy:
sed -ie "s/THIS_STRING_IS_REPLACED_DURING_BUILD/$(date)/g" deployment.yml
kubectl apply -f deployment.yml
Second (one liner):
kubectl patch deployment web -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
Of course, imagePullPolicy: Always is required in both cases.
kubectl rollout restart deployment myapp
This is the current way to trigger a rolling update while leaving the old replica sets in place for other operations provided by kubectl rollout, like rollbacks.
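You can then follow the restart until it completes with:
kubectl rollout status deployment myapp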
I use Gitlab-CI to build the image and then deploy it directly to GKE. I use a neat little trick to achieve a rolling update without changing any real settings of the container: changing a label to the current commit short SHA.
My command looks like this:
kubectl patch deployment my-deployment -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"build\":\"$CI_COMMIT_SHORT_SHA\"}}}}}"
Where you can use any name and any value for the label as long as it changes with each build.
Have fun!
It seems that k8s expects us to provide a different image tag for every deployment. My default strategy would be to make the CI system generate and push the docker images, tagging them with the build number: xpmatteo/foobar:456.
For local development it can be convenient to use a script or a makefile, like this:
# create a unique tag
VERSION := $(shell date +%Y%m%d%H%M%S)
TAG = xpmatteo/foobar:$(VERSION)

deploy:
	npm run-script build
	docker build -t $(TAG) .
	docker push $(TAG)
	sed s%IMAGE_TAG_PLACEHOLDER%$(TAG)% foobar-deployment.yaml | kubectl apply -f - --record
The sed command replaces a placeholder in the deployment document with the actual generated image tag.
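For reference, a minimal sketch of the foobar-deployment.yaml the makefile pipes through sed (everything apart from the IMAGE_TAG_PLACEHOLDER token is an assumption):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foobar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
    spec:
      containers:
      - name: foobar
        image: IMAGE_TAG_PLACEHOLDER   # replaced by sed with the freshly built tag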
We could update it using the following command:
kubectl set image deployment/<<deployment-name>> -n=<<namespace>> <<container_name>>=<<your_dockerhub_username>>/<<image_name you want to set now>>:<<tag_of_the_image_you_want>>
For example,
kubectl set image deployment/my-deployment -n=sample-namespace my-container=alex/my-sample-image-from-dockerhub:1.1
where:
kubectl set image deployment/my-deployment - we want to set the image of the deployment named my-deployment
-n=sample-namespace - this deployment belongs to the namespace named sample-namespace. If your deployment belongs to the default namespace, there is no need to mention this part in your command.
my-container is the container name previously mentioned in the YAML file of your deployment configuration.
alex/my-sample-image-from-dockerhub:1.1 is the new image which you want to set for the deployment and run the container from. Here, alex is the Docker Hub username (if applicable), and my-sample-image-from-dockerhub:1.1 is the image and tag you want to use.
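To confirm the rollout finished and that the deployment now points at the new image, something like this works:
kubectl rollout status deployment/my-deployment -n sample-namespace
kubectl get deployment my-deployment -n sample-namespace -o=jsonpath='{.spec.template.spec.containers[0].image}'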
Another option, more suitable for debugging but worth mentioning, is to check the revision history of your rollout:
$ kubectl rollout history deployment my-dep
deployment.apps/my-dep
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
To see the details of each revision, run:
kubectl rollout history deployment my-dep --revision=2
Then you can return to a previous revision by running:
$ kubectl rollout undo deployment my-dep --to-revision=2
And then return back to the new one.
Like running ctrl+z -> ctrl+y (:
(*) The CHANGE-CAUSE is <none> because you should run the updates with the --record flag, as mentioned here:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
(**) There is a discussion regarding deprecating this flag.
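The documented alternative to --record is to set the change cause yourself through the kubernetes.io/change-cause annotation, which kubectl rollout history picks up:
kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"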
I am using Azure DevOps to deploy containerized applications, and I easily managed to overcome this problem by using the build ID.
Every time it builds, Azure DevOps generates a new build ID, and I use that build ID as the tag for the Docker image. Here is an example:
imagename:buildID
Once your image is built (CI) successfully, in the CD pipeline's deployment YAML file I give the image name as
imagename:env:buildID
where env:buildID is the Azure DevOps variable holding the value of the build ID.
So now, every time, I have new changes to build (CI) and deploy (CD).
Please comment if you need the build definition for CI/CD.
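For reference, a minimal Azure Pipelines sketch of this idea (the registry and image names are illustrative; $(Build.BuildId) is the predefined build ID variable):
steps:
- script: |
    docker build -t myregistry.azurecr.io/imagename:$(Build.BuildId) .
    docker push myregistry.azurecr.io/imagename:$(Build.BuildId)
  displayName: Build and push image tagged with the build ID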