How to ensure readiness of container replicas in Kubernetes?

I'm new to Kubernetes and I was wondering if it is possible to have container replicas launching one at a time? In other words, if I deploy a compose file yielding a container or pod configuration with N replicas, is it possible (and if so how) to ensure that each replica waits for the previous one to be ready before launching?
I read about readiness probes, but if I understood them correctly, they ensure pod ordering rather than replica ordering, or did I misunderstand?
Thanks

A StatefulSet has this property: given three replicas, the second one will not start until the first one is running and ready.
(Usually "replica" and "pod" mean the same thing. If you create a Deployment or StatefulSet with 3 replicas, and run kubectl get pods once it's done, you should see 3 pods.)
If you're using Kompose to do the deployment, there's at least a hint that it doesn't support StatefulSets; you need to write native Kubernetes YAML for this.
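As a rough illustration of that native YAML, here is a minimal StatefulSet sketch; the name, image, port, and probe path are assumptions, not taken from the question, and the headless Service named under serviceName is assumed to exist:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp                  # headless Service the StatefulSet pods register under (assumed to exist)
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: pod N starts only after pod N-1 is Running and Ready
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest          # assumed image
          ports:
            - containerPort: 8080
          readinessProbe:              # "Ready" is defined by this probe
            httpGet:
              path: /healthz           # assumed health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
With the default OrderedReady policy, myapp-1 is only created once myapp-0 is Running and passes its readiness probe.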

Kubernetes has the StatefulSet object to manage a set of replicas of a Pod. The StatefulSet differs from the default Deployment in that it provides guarantees about the ordering and uniqueness of these Pods. From the documentation:
For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
As an example, see this blog on how to setup a StatefulSet for ElasticSearch.

Related

Stop all Pods in a StatefulSet before scaling it up or down

My team is currently working on migrating a Discord chat bot to Kubernetes. We plan on using a StatefulSet for the main bot service, as each shard (pod) should only have a single connection to the Gateway. Whenever a shard connects to the Gateway, it tells the Gateway its ID (in our case the pod's ordinal index) and how many shards we are running in total (the number of replicas in the StatefulSet).
Having to tell the gateway the total number of shards means that in order to scale our StatefulSet up or down we'd have to stop all pods in that StatefulSet before starting new ones with the updated value.
How can I achieve that? Preferably through configuration, so I don't have to run a special command each time.
Try the kubectl rollout restart sts <sts name> command. It restarts the pods one by one in a RollingUpdate fashion.
Scale down the sts:
kubectl scale --replicas=0 sts <sts name>
Scale up the sts:
kubectl scale --replicas=<number of replicas> sts <sts name>
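For context, a hedged sketch of how a shard pod might learn its ordinal and the shard total via the Downward API; the image and variable names are assumptions, and TOTAL_SHARDS has to be kept in sync with spec.replicas by hand, which is why the restart above is needed when scaling:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: discord-bot
spec:
  serviceName: discord-bot
  replicas: 3
  selector:
    matchLabels:
      app: discord-bot
  template:
    metadata:
      labels:
        app: discord-bot
    spec:
      containers:
        - name: bot
          image: my-bot:latest               # assumed image
          env:
            - name: POD_NAME                 # e.g. discord-bot-2; the bot parses the ordinal (shard ID) from the suffix
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: TOTAL_SHARDS             # must match spec.replicas
              value: "3"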
One way of doing this:
First, get the YAML configuration of the StatefulSet by running the command below and save it to a file:
kubectl get statefulset NAME -o yaml > sts.yaml
Then delete the StatefulSet by running:
kubectl delete -f sts.yaml
Finally, recreate the StatefulSet using the same configuration file from the first step:
kubectl apply -f sts.yaml
This way you delete the existing StatefulSet and create a new one from the same configuration.
Before any kubectl scale, since you need more control over your nodes, you might consider a kubectl drain first.
When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.
By draining the node where your pods are managed, you stop all the pods, with the opportunity to scale the StatefulSet with the new value.
See also "How to Delete Pods from a Kubernetes Node" by Keilan Jackson
Start at least with kubectl cordon <nodename> to mark the node as unschedulable.
If your pods are controlled by a StatefulSet, first make sure that the pod that will be deleted can be safely deleted.
How you do this depends on the pod and your application’s tolerance for one of the stateful pods to become temporarily unavailable.
For example you might want to demote a MySQL or Redis writer to just a read-only slave, update and release application code to no longer reference the pod in question temporarily, or scale up the ReplicaSet first to handle the extra traffic that may be caused by one pod being unavailable.
Once this is done, delete the pod and wait for its replacement to appear on another node.

Kubernetes POD Failover

I am toying around with Kubernetes and have managed to deploy a stateful application (a Jenkins instance) to a single node.
It uses a PVC to make sure that I can persist my jenkins data (jobs, plugins etc).
Now I would like to experiment with failover.
My cluster has 2 digital ocean droplets.
Currently my jenkins pod is running on just one node.
When that goes down, Jenkins becomes unavailable.
I am now looking at how to accomplish failover, in the sense that when the Jenkins pod goes down on my node, it will spin up on the other node (so a short downtime during this process is ok).
Of course it has to use the same PVC, so that my data remains intact.
From what I've read, I believe a StatefulSet can be used for this?
Any pointers are much appreciated!
Best regards
Digital Ocean's Kubernetes service only supports ReadWriteOnce access modes for PVCs (see here). This means the volume can only be attached to one node at a time.
I came across this blogpost which, while focused on Jenkins on Azure, has the same situation of only supporting ReadWriteOnce. The author states:
the drawback for me though lies in the fact that the access mode for Azure Disk persistent volumes is ReadWriteOnce. This means that an Azure disk can be attached to only one cluster node at a time. In the event of a node failure or update, it could take anywhere between 1-5 minutes for the Azure disk to get detached and attached to the next available node.
Note, Pod failure and node failures are different things. Since DO only supports ReadWriteOnce, there's no benefit to trying anything more sophisticated than what you have right now in terms of tolerance to node failure. Since it's ReadWriteOnce the volume will need to be unmounted from the failing node and re-mounted to the new node, and then a new Pod will get scheduled on the new node. Kubernetes will do this for you, and there's not much you can do to optimize it.
For Pod failure, you could use a Deployment. Since you want to read and write the same data, you don't want different PVs attached to the different replicas. There may be very limited benefit to this: you would have multiple replicas of the Pod all running on the same node, so it depends on how the Jenkins process scales and whether it can support that kind of horizontal scale-out model while all replicas write to the same volume (as opposed to simply vertically scaling memory or CPU requests).
If you really want to achieve higher availability in the face of node and/or Pod failures, and the Jenkins workload you're deploying has a hard requirement on local volumes for persistent state, you will need to consider an alternative volume plugin like NFS, or moving to a different cloud provider like GKE.
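For reference, this is roughly what the PVC in question looks like on DigitalOcean; a minimal sketch, where the claim name and size are illustrative and the storage class name is the usual DO one but should be treated as an assumption:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home
spec:
  accessModes:
    - ReadWriteOnce                     # can only be attached to one node at a time
  storageClassName: do-block-storage    # DigitalOcean block storage (assumed name)
  resources:
    requests:
      storage: 10Gi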
Yes, you would use a Deployment or StatefulSet depending on the use case. For Jenkins, a StatefulSet would be appropriate. If the running pod becomes unavailable, the StatefulSet controller will see that and spawn a new one.
What you are describing is the default behaviour of Kubernetes for Pods that are managed by a controller, such as a Deployment.
You should deploy any application as a Deployment (or another controller) even if it consists just of a single Pod. You never really deploy Pods directly to Kubernetes. So, in this case, there's nothing special you need to do to get this behaviour.
When one of your nodes dies, the Pod dies too. This is detected by the Deployment controller, which creates a new Pod. This is in turn detected by the scheduler, which assigns the new Pod to a node. Since one of the nodes is down, it will assign the Pod to the other node that is still running. Once the Pod is assigned to this node, the kubelet of this node will run the container(s) of this Pod on this node.
Ok, let me try to answer my own question here.
I think Amit Kumar Gupta came the closest to what I believe is going on here.
Since I am using a Deployment and my PVC is ReadWriteOnce, I am basically stuck with one pod, running Jenkins, on one node.
weibeld's answer made me realise that I was asking questions about a concept that Kubernetes performs by default.
If my pod goes down (in my case I am shutting down a node on purpose by doing a hard power down to simulate a failure), the cluster (controller?) will detect this and spawn a new pod on another node.
All is fine so far, but then I noticed that my new pod was stuck in the ContainerCreating state.
Running a describe on my new pod (the one in ContainerCreating state) showed this:
Warning FailedAttachVolume 16m attachdetach-controller Multi-Attach error for volume "pvc-cb772fdb-492b-4ef5-a63e-4e483b8798fd" Volume is already used by pod(s) jenkins-deployment-6ddd796846-dgpnm
Warning FailedMount 70s (x7 over 14m) kubelet, cc-pool-bg6u Unable to mount volumes for pod "jenkins-deployment-6ddd796846-wjbkl_default(93747d74-b208-421c-afa4-8d467e717649)": timeout expired waiting for volumes to attach or mount for pod "default"/"jenkins-deployment-6ddd796846-wjbkl". list of unmounted volumes=[jenkins-home]. list of unattached volumes=[jenkins-home default-token-wd6p7]
Then it started to hit me, this makes sense.
It's a pity, but it makes sense.
Since I did a hard power down on the node, the PV went down with it.
So now the controller tries to start a new pod on a new node, but it can't transfer the PV, since the one attached to the previous node became unreachable.
As I read more on this, I learned that DigitalOcean only supports ReadWriteOnce, which now leaves me wondering: how the hell can I achieve a simple failover for a stateful application on a Kubernetes cluster on DigitalOcean that consists of just a couple of simple droplets?
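One mitigation that is commonly suggested for this Multi-Attach situation, though not mentioned in the answers above, so treat it as a sketch: give the Deployment a Recreate strategy so the old pod is deleted, and the volume released, before the replacement pod tries to mount it (names and image below are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate          # delete the old pod (releasing the RWO volume) before creating the new one
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts        # assumed image
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home         # assumed PVC name
Note that this mainly helps with updates and pod-level failures; for a hard node failure the volume still has to be detached from the dead node before it can attach elsewhere, which, as quoted above, can take a few minutes.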

Single node kubernetes cluster scale up

I have a single-node Kubernetes cluster running on GKE. All the load is running on that single node, separated by namespaces.
Now I would like to implement auto-scaling. Is it possible to scale microservices out to a new node while keeping the single-replica pods running on my main node only?
What I am thinking:
Main node: running everything with 1 pod availability (Redis, Elasticsearch)
Scaled-up node: scaled-up replicas of the stateless microservices only
So is there any way I can implement this using the node autoscaler or affinity?
The issue is that right now I am running Graylog, Elasticsearch, Redis and RabbitMQ on a single node as StatefulSets backed by volumes, and to add affinity to all of them I would have to edit the YAML files and redeploy everything.
I'm not sure that I understand your question correctly, but if I do then you may try to use taints and tolerations together with node affinity. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. All the details are available in the documentation here.
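A minimal sketch of what that could look like, assuming you taint the autoscaled node pool with something like workload=stateless:NoSchedule (the taint key/value, names, and image are made up) and give only the stateless microservices a matching toleration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-api            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateless-api
  template:
    metadata:
      labels:
        app: stateless-api
    spec:
      tolerations:               # only pods carrying this toleration can land on the tainted autoscaled nodes
        - key: workload
          operator: Equal
          value: stateless
          effect: NoSchedule
      containers:
        - name: api
          image: my-api:latest   # assumed image
The StatefulSets without the toleration stay off the tainted nodes and keep running on the main node.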
Assuming the issue you have is that the persistent volumes bound to your StatefulSets are only accessible from one node, then you can use the nodeAffinity field to constraint where the StatefulSet Pods can be scheduled. As mentioned in the documentation:
A PV can specify node affinity to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity.
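A hedged sketch of a PV carrying such node affinity, using a local volume as the simplest case; the path, node name, capacity, and storage class are assumptions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: graylog-data
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/graylog               # assumed path on the node
  nodeAffinity:                            # pods using this PV can only be scheduled on the matching node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - main-node                # assumed node name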

3 tier architecture on kubernetes

I have 1 Kubernetes master server and 9 nodes. Of those, I want to run the backend on 2 nodes, the frontend on 2 nodes, and the DB on 3 nodes.
For the backend, frontend, and DB I have Docker images ready.
How do I run an image using Kubernetes on only the desired nodes (2 or 3)?
Please share some ideas to achieve the same.
The Kubernetes scheduler most of the time will do a good job distributing the pods across the cluster. You may want to delegate that responsibility to the scheduler unless you have very specific requirements.
If you want to control this, you can use:
Node selectors
Node Affinity or Anti-Affinity
Directly specify the node name in the deployment spec
From these three, the recommended approach is to use node affinity or anti-affinity due to its flexibility.
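As a hedged sketch of the node affinity option, assuming the backend nodes have been labelled with something like tier=backend (the label and image are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement: only nodes labelled tier=backend
            nodeSelectorTerms:
              - matchExpressions:
                  - key: tier
                    operator: In
                    values:
                      - backend
      containers:
        - name: backend
          image: my-backend:latest        # assumed image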
Run the frontend as a Deployment with the desired replica count and let Kubernetes manage it for you.
Run the backend as a Deployment with the desired number of replicas and Kubernetes will figure out how to run it. Use node selectors if you prefer specific nodes.
Run the DB as a Deployment or StatefulSet; Kubernetes will figure out how to run it.
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Use network policies to restrict traffic.
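For the last point, a hedged NetworkPolicy sketch that only allows backend pods to reach the DB pods; the labels and port are assumptions, and enforcement requires a CNI plugin that supports NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
spec:
  podSelector:
    matchLabels:
      app: db                 # applies to the DB pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend    # only backend pods may connect
      ports:
        - protocol: TCP
          port: 3306          # assumed MySQL port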
You may use labels and nodeSelector. Here it is:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
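A minimal nodeSelector sketch, assuming the frontend nodes carry an illustrative role=frontend label (e.g. added with kubectl label nodes <node> role=frontend):
apiVersion: v1
kind: Pod
metadata:
  name: frontend-test          # illustrative name
  labels:
    app: frontend
spec:
  nodeSelector:
    role: frontend             # schedule only onto nodes carrying this label
  containers:
    - name: web
      image: nginx:alpine      # assumed image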

Trying to understand what a Kubernetes Worker Node and Pod is compared to a Docker "Service"

I'm trying to learn Kubernetes to push up my microservices solution to some Kubernetes in the Cloud (e.g. Azure Kubernetes Service, etc)
As part of this, I'm trying to understand the main concepts, specifically around Pods + Workers and (in the yml file) Pods + Services. To do this, I'm trying to compare what I have inside my docker-compose file against the new concepts.
Context
I currently have a docker-compose.yml file which contains about 10 images. I've split the solution up into two 'networks': frontend and backend. The backend network contains 3 microservices and cannot be accessed at all via a browser. The frontend network contains a reverse-proxy (aka. Traefik, which is just like nginx) which is used to route all requests to the appropriate backend microservice and a simple SPA web app. All works 100% awesome.
Each backend Microservice has at least one of these:
Web API host
Background tasks host
So this means I could scale out the Web API hosts if required, but I should never scale out the background task hosts.
Here's a simple diagram of the solution:
So if the SPA app tries to request some data with the following route:
https://api.myapp.com/account/1, this will hit the reverse proxy and match a rule, then forward on to <microservice b>/account/1.
So it's from here that I'm trying to learn how to write a Kubernetes deployment file based on these docker-compose concepts.
Questions
Each 'Pod' has its own IP, so I should create a Pod per container. (Yes, a Pod can have multiple containers, and to me that's like saying 'install these software products on the same machine'.)
A 'Worker Node' is what we replicate/scale out, so we should put our Pods into a Node based on the scaling scenario. For example, the background task hosts should go into one Node because they shouldn't be scaled; also, the hardware requirements for that node are really small. The Web APIs should go into another Node so they can be replicated/scaled out.
If I'm on the right path with the understanding above, then I'll have a lot of nodes and pods ... which feels ... weird?
The pod is the unit of Workload, and has one or more containers. Exactly one container is normal. You scale that workload by changing the number of Pod Replicas in a ReplicaSet (or Deployment).
A Pod is mostly an accounting construct with no direct parallel to base docker. It's similar to docker-compose's Service. A pod is mostly immutable after creation. Like every resource in kubernetes, a pod is a declaration of desired state - containers to be running somewhere. All containers defined in a pod are scheduled together and share resources (IP, memory limits, disk volumes, etc).
All Pods within a ReplicaSet are both fungible and mortal - a request can be served by any pod in the ReplicaSet, and any pod can be replaced at any time. Each pod does get its own IP, but a replacement pod will probably get a different IP. And if you have multiple replicas of a pod they'll all have different IPs. You don't want to manage or track pod IPs. Kubernetes Services provide discovery (how do I find those pods' IPs) and routing (connect to any Ready pod without caring about its identity) and load balancing (round robin over that group of Pods).
A Node is the compute machine (VM or Physical) running a kernel and a kubelet and a dockerd. (This is a bit of a simplification. Other container runtimes than just dockerd exist, and the virtual-kubelet project aims to turn this assumption on its head.)
All pods are Scheduled on Nodes. When a pod (with containers) is scheduled on a node, the kubelet responsible for & running on that node does things. The kubelet talks to dockerd to start containers.
Once scheduled on a node, a pod is not moved to another node. Nodes are fungible & mortal too, though. If a node goes down or is being decommissioned, the pod will be evicted/terminated/deleted. If that pod was created by a ReplicaSet (or Deployment) then the ReplicaSet Controller will create a new replica of that pod to be scheduled somewhere else.
You normally start many (1-100) pods+containers on the same node+kubelet+dockerd. If you have more pods than that (or they need a lot of cpu/ram/io), you need more nodes. So the nodes are also a unit of scale, though very much indirectly wrt the web-app.
You do not normally care which Node a pod is scheduled on. You let kubernetes decide.
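To tie this back to the docker-compose mental model, here is a hedged sketch of one microservice expressed as a Deployment (the replicated workload) plus a Service (the stable name the reverse proxy routes to); the names, image, and ports are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-b
spec:
  replicas: 3                    # like scaling a compose service
  selector:
    matchLabels:
      app: microservice-b
  template:
    metadata:
      labels:
        app: microservice-b
    spec:
      containers:
        - name: webapi
          image: myapp/microservice-b:latest   # assumed image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: microservice-b           # stable DNS name; the pods behind it come and go
spec:
  selector:
    app: microservice-b
  ports:
    - port: 80
      targetPort: 8080
The reverse proxy then forwards to http://microservice-b/account/1 and the Service load-balances over whichever Ready pods currently exist.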

Resources