In Kubernetes, how to update pods to use an updated ConfigMap

I'm running more than one replica of a pod with a Kubernetes Deployment,
and I'd like to update the replicas to use an updated ConfigMap in a rolling way, the same way a rolling update works.
That way Kubernetes would terminate a pod and start sending traffic to the newly updated pod, one at a time, until all pods are updated.
Can I use a rolling update with a Deployment?

Applying a change to the Deployment object will trigger a rolling-update. From the docs:
A Deployment’s rollout is triggered if and only if the Deployment’s pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
So if you want to trigger a rolling update when your ConfigMap changes, I would suggest updating a label in the pod template's metadata, perhaps a CONFIG_VER key.
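For example, a rough sketch (my-app and the CONFIG_VER value are placeholders, not from the question) of bumping a pod-template label with kubectl patch, which counts as a .spec.template change and therefore triggers a rollout:
kubectl patch deployment my-app -p "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"CONFIG_VER\": \"v2\"}}}}}"
Bump the value every time the ConfigMap changes; the replacement pods read the updated ConfigMap when they start.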

To automatically perform a rolling update of a Deployment when a ConfigMap changes, you can also use a tool that my team has built and open-sourced: Reloader, which we are also using in our customers' production clusters.
Reloader watches for changes in ConfigMaps and Secrets and updates the associated Deployments, DaemonSets and StatefulSets, based on the configured update strategy.
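Roughly, the wiring looks like this (a sketch based on Reloader's documented reloader.stakater.com/auto annotation; all names below are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"   # Reloader rolls this Deployment when a referenced ConfigMap/Secret changes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0                # placeholder image
        envFrom:
        - configMapRef:
            name: my-config              # the ConfigMap whose changes should trigger the rollout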

Related

Kubernetes: Is it safe to blindly use RollingUpdate?

Dears,
I am very new to Kubernetes and I'm currently working on the update process for my services (traefik, prometheus, ...). I want to avoid compulsive real-time updates that may lead to bugs or crashes. I am used to keeping control over what needs to be updated and what does not.
So far, I understood that Kubernetes provides the field spec.updateStrategy.type with 2 possible values:
RollingUpdate: permanent auto-update
OnDelete: auto-update after the manual deletion of a pod
I am surprised not to find the same steps as with the Debian apt tool: when I run apt update; apt upgrade, I get a list of what will be updated and I choose what I want to upgrade.
When I came to Kubernetes, I imagined updates would let me keep this two-step spirit, something like:
Execute a command to compare the Docker images currently deployed on the cluster with the repositories. This command would print the newest available version of each image.
Execute another command to choose what will be updated.
There are no stable, unstable, or testing channels for Docker images like there are for Linux repositories, so I have no way to tell testing updates apart from trustworthy updates. I am afraid that RollingUpdate would deploy each new image without distinction.
Which leads to my main question: is it completely safe to blindly trust RollingUpdate?
It is safe to trust the RollingUpdate StatefulSet update strategy. However, it is important to note that this is not an "auto-update".
A StatefulSet (and also a Deployment) has a template Pod spec that has an image:. That tends to name a fairly specific version of the image to run. In general Kubernetes will check to see if an image with that name and tag is already on the node, and if so, tries to run it without updating it (see Updating images, which discusses the imagePullPolicy: field).
So if your StatefulSet says image: prom/prometheus:2.26.0, for example, your Pods will use exactly that version of Prometheus, and no other. If you just say image: prom/prometheus with no tag (or with ...:latest) then each node will pull whatever version happens to be current at the time the Pod starts up, and you can easily wind up with out-of-sync versions.
The place the update strategy gets used, then, is when you change image: prom/prometheus:2.27.0. The image in a Pod spec can't be modified, so the existing pods need to be deleted and recreated. If you have updateStrategy: { type: RollingUpdate } (the default) then the StatefulSet controller will do this for you, specifically deleting and recreating the pods in order. If you have it set to OnDelete then you need to manually delete the pods.
Since you as the administrator control the image: then you can pick exactly what version is getting used; there is no such thing as a "stable update channel" that will automatically update. Correspondingly, you can deploy the updated StatefulSet YAML configuration in a pre-production environment to test out before you promote it to production.
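As a rough sketch of the relevant pieces (the names, labels and replica count here are placeholders):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
spec:
  serviceName: prometheus        # assumes a matching headless Service
  replicas: 2
  selector:
    matchLabels:
      app: prometheus
  updateStrategy:
    type: RollingUpdate          # the default; OnDelete waits for you to delete pods yourself
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:2.26.0   # pinned tag: pods run exactly this version
Changing image: to prom/prometheus:2.27.0 and re-applying the manifest is the step that actually triggers the update; with RollingUpdate the controller replaces the pods one at a time, and with OnDelete nothing happens until you delete them yourself.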

Stop all Pods in a StatefulSet before scaling it up or down

My team is currently working on migrating a Discord chat bot to Kubernetes. We plan on using a StatefulSet for the main bot service, as each shard (pod) should only have a single connection to the Gateway. Whenever a shard connects to the Gateway, it reports its ID (in our case the pod's ordinal index) and how many shards we are running in total (the number of replicas in the StatefulSet).
Having to tell the gateway the total number of shards means that in order to scale our StatefulSet up or down we'd have to stop all pods in that StatefulSet before starting new ones with the updated value.
How can I achieve that? Preferably through configuration, so I don't have to run a special command each time.
Try the kubectl rollout restart sts <sts name> command. It restarts the pods one by one, in a RollingUpdate way.
Alternatively, scale the StatefulSet down to zero and back up, which stops all pods before new ones start:
Scale down the sts
kubectl scale --replicas=0 sts <sts name>
Scale up the sts
kubectl scale --replicas=<number of replicas> sts <sts name>
One way of doing this is:
First, get the YAML configuration of the StatefulSet by running the command below and save it to a file:
kubectl get statefulset NAME -o yaml > sts.yaml
Then delete the StatefulSet by running:
kubectl delete -f sts.yaml
Finally, create the StatefulSet again using the same configuration file you saved in the first step:
kubectl apply -f sts.yaml
I hope this answers your query about deleting the StatefulSet and creating a new one.
Before any kubectl scale, since you need more control over your nodes, you might consider a kubectl drain first.
When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.
By draining the node where your pods are running, you stop all of them, which gives you the opportunity to scale the StatefulSet with the new value.
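A rough sketch of that sequence (node and StatefulSet names are placeholders; verify the drain flags against your cluster version):
kubectl cordon <nodename>                      # mark the node unschedulable
kubectl drain <nodename> --ignore-daemonsets   # evict the pods running there (drain also cordons)
kubectl scale --replicas=<new count> sts <sts name>
kubectl uncordon <nodename>                    # make the node schedulable again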
See also "How to Delete Pods from a Kubernetes Node" by Keilan Jackson
Start at least with kubectl cordon <nodename> to mark the node as unschedulable.
If your pods are controlled by a StatefulSet, first make sure that the pod that will be deleted can be safely deleted.
How you do this depends on the pod and your application’s tolerance for one of the stateful pods to become temporarily unavailable.
For example you might want to demote a MySQL or Redis writer to just a read-only slave, update and release application code to no longer reference the pod in question temporarily, or scale up the ReplicaSet first to handle the extra traffic that may be caused by one pod being unavailable.
Once this is done, delete the pod and wait for its replacement to appear on another node.

Restarting a kubernetes pod from another pod

I'm trying to use Argo Events to trigger a workflow where I push changes to a database, and then I have to restart certain pods so that the changes are taken into account. I know how to use Argo to create Kubernetes objects, but I don't know how I can use this to restart a pod from within a Kubernetes object. Alternatively, I could launch a pod from within Argo whose container would restart a Docker container. Is this possible? If so, how?
You can do a zero-downtime rolling update via Argo Rollouts.
A RollingUpdate slowly replaces the old version with the new version. As the new version comes up, the old version is scaled down in order to maintain the overall count of the application. This is the default strategy of the deployment object
Argo Rollouts also supports Canary and BlueGreen.
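A minimal sketch of a Rollout using the canary strategy (names, image and steps are illustrative placeholders, not taken from the question):
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.1        # bumping this image triggers the rollout
  strategy:
    canary:
      steps:
      - setWeight: 25            # shift roughly 25% of the pods to the new version
      - pause: {}                # wait here until the rollout is promoted manually
With pause: {} the rollout waits until you promote it, for example with kubectl argo rollouts promote my-app.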

How to ensure readiness of container replicas in Kubernetes?

I'm new to Kubernetes and I was wondering if it is possible to have container replicas launch one at a time. In other words, if I deploy a compose file that yields a container or pod configuration with N replicas, is it possible (and if so, how) to ensure that each replica waits for the previous one to be ready before launching?
I read about readiness probes, but if I understood them correctly, they ensure pod ordering rather than replica ordering, or did I misunderstand?
Thanks
A StatefulSet has this property: given three replicas, the second one will not start until the first one is running and ready.
(Usually "replica" and "pod" mean the same thing. If you create a Deployment or StatefulSet with 3 replicas, and run kubectl get pods once it's done, you should see 3 pods.)
If you're using Kompose to do the deployment, there's at least a hint that it doesn't support StatefulSets; you need to write native Kubernetes YAML for this.
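A minimal sketch of that pattern (image, port and probe path are placeholders): with the default OrderedReady pod management policy, web-1 is not created until web-0 passes its readiness probe.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web               # assumes a matching headless Service
  replicas: 3                    # pods web-0, web-1, web-2 start strictly in this order
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21        # placeholder image
        readinessProbe:          # the next replica waits until this probe succeeds
          httpGet:
            path: /
            port: 80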
Kubernetes has the StatefulSet object to manage a set of replicas of a Pod. A StatefulSet differs from the default Deployment in that it provides guarantees about the ordering and uniqueness of these Pods. From the documentation:
For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
As an example, see this blog on how to setup a StatefulSet for ElasticSearch.

How do I redeploy everything in kubernetes after updating a dockerfile?

I'm very new to Kubernetes and all I want to do at this point is restart my cluster and have it run an image built from an updated Dockerfile. I'm running Kubernetes on Google Cloud Platform, by the way.
kubectl from version 1.15 onwards includes kubectl rollout restart
(according to this comment: https://github.com/kubernetes/kubernetes/issues/33664#issuecomment-497242094)
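For example (the deployment name is a placeholder):
kubectl rollout restart deployment <deployment name>
This recreates the pods one at a time using the Deployment's rolling update strategy; pods whose containers use imagePullPolicy: Always will pull the image again when they restart.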
You can use the rolling update mechanism to update a service without an outage: it updates one pod at a time until the desired state is reached, while your service stays up and running. Of course, we have to update the containers inside the pods to protect our data and to get the latest features out. Kubernetes makes it easy to roll out updates to your applications by modifying the Deployments and letting the controller manage them.
Suppose you have front-end, auth, and back-end Deployments, and there is a change to auth (a newer version). To update the auth Deployment, build the new Docker image, change the auth container's image version in your .yaml file, and apply it as below:
$ kubectl apply -f deployments/auth.yaml
Check that it succeeded with the deployment describe command; you can see the rolling update strategy and confirm that the right number of pods is always available. The Deployment uses a new ReplicaSet to ensure that we are running the latest version of the auth container.
$ kubectl describe deployments auth
Once the rolling update is complete, we can view the running pods for the auth service.
$ kubectl get pods
Check how long each pod has been running: the new version of the auth pod has replaced the previous one. Once again, check the ID of the new auth pod to verify. Updating the Deployment this way gives us a clean, declarative approach to rolling out changes to our application, whether you have a single pod or thousands of pods running.
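The knobs behind "the right number of pods are always available" live in the Deployment's strategy block; a sketch of what that might look like for the auth Deployment (the numbers are just examples):
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # at most one auth pod may be down during the rollout
    maxSurge: 1         # at most one extra auth pod may be created above the desired count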
You can use kubectl patch to trigger a redeploy, for example by adding a new label:
$ kubectl patch deployment your_deployment -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$(date +%s)\"}}}}}"
And now you should see a new ReplicaSet trying to deploy new pods for you!
https://www.kevinsimper.dk/posts/trigger-a-redeploy-in-kubernetes
(I also made it into a shortcut that can apply it to all deploys that match some sort of filter)
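One possible shape of such a shortcut (not the exact script from the post; the app=myteam label filter is a placeholder) is to loop over the matching Deployments and patch each one:
for d in $(kubectl get deployments -l app=myteam -o name); do
  kubectl patch "$d" -p "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"redeploy\": \"$(date +%s)\"}}}}}"
done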
