Does Kubernetes develop the code for creating the Pod object? - docker

Does Kubernetes develop the code for creating the Pod object, or is it just part of a container engine like Docker or CRI-O?

A Pod is a Kubernetes-specific abstraction for which Kubernetes has implemented the code. A Pod is a logical grouping of containers; a Pod can have one or more containers. When a user asks Kubernetes to create a Pod with two containers in it, the Kubernetes API server accepts the request, and the Kubernetes control plane instructs the kubelet to actually start the containers from the container image.
The kubelet, which is part of Kubernetes, uses Docker, CRI-O, or containerd via the Container Runtime Interface (CRI) to invoke lifecycle operations (start, stop, etc.) on the containers.
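The flow described above can be sketched with a minimal Pod manifest; the API server accepts it, and the kubelet on the chosen node asks the container runtime to start both containers. The names and images here are illustrative, not from the original question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25          # example image; any image works
  - name: helper
    image: busybox:1.36
    command: ["sh", "-c", "sleep infinity"]   # keep the second container running
```

Submitting this with kubectl apply sends the request to the API server; the kubelet then drives the actual container starts through the CRI.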

Related

Kubernetes and Docker Relationship

What is the nature of the relationship between Docker and Kubernetes? Is it safe to assume that ALL Docker operations done within a Pod will treat the Pod as if it is a normal host machine?
For example, if I were to use the Python Docker SDK, attach to the /var/run/docker.sock, and create a volume, will this volume only exist within the Pod?
My main concern is that I know a Pod is virtualized, thus may not play nicely if I dig a little too deep via other virtualization tools like Docker.
It's important to understand what the responsibility of each of these concepts is.
A Docker container is in essence an isolation boundary provided by the host OS kernel, which allows a process to run with its own view of the filesystem, network, and process tree (docs).
Kubernetes is an orchestration platform for running such containers (docs).
Finally, a Pod is a Kubernetes object that describes how one or more containers are to be run (docs).
With that knowledge we can answer some of your questions;
What is the nature of the relationship between Docker and Kubernetes?
Kubernetes can run Docker containers just like your computer can, but it's optimised for this specific goal. Kubernetes is also an abstraction (or orchestration) layer, handling resources like network capability, disk space, and CPU cycles for you.
Is it safe to assume that ALL Docker operations done within a Pod will treat the Pod as if it is a normal host machine?
A Pod is not a host in any way. It's merely a description of how one or more containers should run. Any resulting containers run on the Kubernetes Nodes, which are the actual (physical or virtual) machines in the cluster.
For example, if I were to use the Python Docker SDK, attach to the /var/run/docker.sock, and create a volume, will this volume only exist within the Pod?
This is something you can do on your local machine, and while technically you could do this on your Node as well, it's not a common use case.
Note that a Docker container is isolated from external factors like mounts and network sockets; these are only attached at runtime and don't change the state of the container image itself. You can, however, configure a container (using a Pod object) to recreate the same conditions on your cluster.
If Kubernetes is running Docker (it's not guaranteed to) then that /var/run/docker.sock will be the host's Docker socket; there is not an additional layer of virtualization.
You shouldn't try to use Docker primitives in an application running in Kubernetes. The approach you describe can even lead to data loss, for example if you create a Docker-native volume on a node and then a cluster autoscaler or some other task destroys that node. If you need to create storage or additional containers, you can use the Kubernetes API to create PersistentVolumeClaims, Jobs, and other Kubernetes-managed objects.
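The Kubernetes-managed alternative mentioned above, a PersistentVolumeClaim, can be sketched as follows; the name, size, and access mode are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi             # example size
```

Unlike a Docker-native volume created through the socket, a claim like this survives node replacement: Kubernetes rebinds it when the pod is rescheduled.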

Why do we need Pods if containers already exist?

I know the advantages of a Pod over a container are described in the Kubernetes documentation, but I still don't understand: the same tasks and actions can be performed with a container too, so why do we need Pods in Kubernetes?
The K8s documentation describes containers and pods pretty well. But in essence:
A pod in the K8s context
A group of containers
Containers share networking. For example, the same IP address
Typically multi-container pods are used when you need a sidecar container. For example:
A proxy process to your main container.
A debug container with utilities.
A process that always needs to run together with your app.
A container that does some sort of networking changes that your app needs.
Allows you to set up a securityContext for all the containers in the pod.
Allows you to set up a Disruption Budget policy to prevent downtime for example.
Allows you to use higher-level Kubernetes abstractions like Deployments, StatefulSets and Jobs.
Allows you to set Pod presets so that a pattern can be reused.
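As an illustrative sketch of the points above (the image names and the choice of proxy are assumptions, not from the original answer), a multi-container Pod with a pod-wide securityContext and a proxy sidecar might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy             # hypothetical name
spec:
  securityContext:                 # applies to every container in the pod
    runAsNonRoot: true
  containers:
  - name: app
    image: example.com/my-app:1.0  # placeholder application image
  - name: proxy
    image: envoyproxy/envoy:v1.28.0  # example proxy sidecar
    # both containers share the pod's network namespace and IP address
```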
A container in the K8s context
A lower-level abstraction than a pod
Allows you to specify the image
Allows you to specify resources (mem/cpu)
Allows you to set up Liveness, Startup, and Readiness probes.
Allows you to set up a securityContext just for the container individually
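The container-level fields listed above all live under the containers array of a Pod spec. A minimal sketch, with illustrative values and an assumed /healthz endpoint:

```yaml
# Fragment of a Pod spec: container-level settings (values are examples)
containers:
- name: app
  image: nginx:1.25              # the image, specified per container
  resources:                     # per-container CPU/memory requests and limits
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi
  livenessProbe:                 # assumes the app serves /healthz on port 80
    httpGet:
      path: /healthz
      port: 80
  securityContext:               # applies to this container only
    allowPrivilegeEscalation: false
```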

Does Kubernetes implement its own container or use Docker containers or Both?

Does Kubernetes implement its own container or use Docker containers or Both?
Can Kubernetes implement a container that is not a Docker container?
Kubernetes is a cluster technology and a container orchestration tool. It helps with deploying containers, managing their life cycle, rolling updates, rollbacks, scaling up, scaling down, networking, routing, and much more: everything you need to run your application services inside containers.
Docker is a virtualization technology that bundles the app, the runtime environment, and the dependencies together in an image that can be deployed as a container.
Kubernetes under the hood uses Docker to deploy containers. In addition to Docker, other container technologies like rkt and CRI-O are also supported.
Kubernetes implements a wrapper over the existing Docker container(s); the wrapper is called a Pod. The reason for using a Pod rather than a container directly is that Kubernetes needs more information to orchestrate containers, such as a restart policy, a liveness probe, and a readiness probe. A liveness probe checks whether the container inside the Pod is still alive, a restart policy defines what to do with a container when it fails, and a readiness probe checks whether the container is ready to start serving.
So, instead of adding those properties to the existing container, Kubernetes decided to write a wrapper over containers carrying all the necessary additional information.
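The extra orchestration information the wrapper carries shows up directly in the Pod spec. A minimal sketch, with an assumed image and probe endpoints:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-app            # hypothetical name
spec:
  restartPolicy: Always        # what to do when a container fails
  containers:
  - name: app
    image: nginx:1.25          # example image
    livenessProbe:             # is the container still alive?
      httpGet:
        path: /
        port: 80
    readinessProbe:            # is it ready to start serving traffic?
      httpGet:
        path: /
        port: 80
```

None of these fields exist on a bare Docker container; they belong to the Pod wrapper.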
Can Kubernetes implement a container that is not a Docker container?
Kubernetes can orchestrate a container which is not a Docker one, and that is because of CRI-O.
As explained in Kubic:
Contrary to what you might have heard, there are more ways to run containers than just the docker tool.
In fact there are an increasing number of options, such as:
runc: a CLI tool for spawning and running containers according to the OCI specification.
(OCI: Open Container Initiative: An open governance structure for the express purpose of creating open industry standards around container formats and runtime)
rkt from CoreOS, now (June 2019) almost dead, and with multiple pending security issues.
frakti: an hypervisor-based container runtime for Kubernetes, which lets Kubernetes run pods and containers directly inside hypervisors via runV.
It is light weighted and portable, but can provide much stronger isolation with independent kernel than linux-namespace-based container runtimes.
cri-containerd: the containerd plugin for the Kubernetes Container Runtime Interface. It started as a standalone cri-containerd binary, which is now (since March 2018) end-of-life; cri-containerd has transitioned from a standalone binary that talks to containerd to a plugin within containerd.
and more.
Most of these follow the OCI standard defining how the runtimes start and run your containers, but they lack a standard way of interfacing with an orchestrator.
This makes things complicated for tools like kubernetes, which run on top of a container runtime to provide you with orchestration, high availability, and management.
Kubernetes therefore introduced a standard API to be able to talk to and manage a container runtime. This API is called the Container Runtime Interface (CRI), Dec. 2016.
Existing container runtimes like Docker use a “shim” (dockershim) to interface between Kubernetes and the runtime, but there is another way, using an interface that was designed to work with CRI natively. And that is where CRI-O comes into the picture.
Introduction to CRI-O
Started a little over a year ago, CRI-O began as a Kubernetes incubator project, implementing the CRI interface for OCI-compliant runtimes.
Using the lightweight runc runtime to actually run the containers, the simplest way of describing CRI-O would be as a lightweight alternative to the Docker engine, especially designed for running with Kubernetes.
As of 6th Sept 2018 CRI-O is no longer an incubator project, but now an official part of the Kubernetes family of tools.
So it is important to understand CRI-O in order to get the relationship between Kubernetes and the containers it orchestrates.
See "Cloud Native Computing Foundation adopts CRI-O container runtime+tutorial"
It includes an architecture schema describing the sequence of launching a new pod:
Kubernetes control plane contacts the kubelet to launch a pod.
The kubelet forwards the request to the CRI-O daemon via the Kubernetes CRI (Container Runtime Interface) to launch the new pod.
CRI-O then uses the containers/image library to pull the image from a container registry.
The downloaded image is unpacked into the container's root filesystem, using the containers/storage library.
After the rootfs has been created for the container, CRI-O generates an OCI runtime specification json file describing how to run the container.
CRI-O then launches an OCI Compatible Runtime using the specification to run the container process.
Default OCI Runtime is runc for now.
Each container is monitored by a separate conmon process.
Networking for the pod is set up through the use of CNI (Container Network Interface), so any CNI plugin can be used with CRI-O.

Deployment of docker images

I have a Docker image. Whenever I run the container, I need to provide input files to the container from outside, along with some commands. So, if I deploy that image to Kubernetes, how am I supposed to provide the data while it is continuously running? Any leads will be appreciated.
In Kubernetes a pod is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
You can pass any kind of meta-data to the container using the Kubernetes deployment scripts. Check out this link for a sample.
Whenever the pods (containers) are restarted, the parameters are automatically passed again from the pod specification YAML file.
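One common way to pass input files and a command through the pod specification is a ConfigMap mounted as a volume. The names, image, and command below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-task                    # hypothetical name
spec:
  containers:
  - name: worker
    image: example.com/worker:1.0     # placeholder image
    command: ["process", "--input", "/data/input.txt"]  # hypothetical command
    volumeMounts:
    - name: input
      mountPath: /data                # input files appear here in the container
  volumes:
  - name: input
    configMap:
      name: task-input                # ConfigMap holding the input file(s)
```

Because this lives in the spec, the same inputs are re-attached automatically every time the pod is restarted.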

Which Kubernetes component creates a new pod?

I have trouble understanding the Kubernetes workflow.
As I understand the flow:
You have a master which contains etcd, the API server, the controller manager and the scheduler.
You have nodes which contain pods (which contain containers), a kubelet and a proxy.
The proxy is working as a basic proxy to make it possible for a service to communicate with other nodes.
When a pod dies, the controller manager will see this (it 'reads' the replication controller which describes how many pods there normally are).
unclear:
The controller manager will inform the API server (I'm not sure about this).
The API-server will tell the scheduler to search a new place for the pod.
After the scheduler has found a good place, the API will inform kubelet to create a new pod.
I'm not sure about the last scenario. Can you explain the correct process in a clear way?
Which component is creating the pod and container? Is it kubelet?
So it's the kubelet that actually creates the pods and talks to the Docker daemon. If you run docker ps -a on your nodes (as in, not the master) in your cluster, you'll see the containers of your pods running. So the workflow is: you run a kubectl command, which goes to the API server; the API server passes it to the controller manager; if that command was to spawn a pod, the controller manager relays that back to the API server, which then goes to the scheduler to pick a node for the pod. Then the kubelet on that node is told to spawn said pod.
I suggest reading the Borg paper that Kubernetes is based on to better understand things in further detail. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf
