docker kubernetes duplicate pods

Why does Docker with Kubernetes duplicate pods? On the dashboard I see some containers prefixed with k8s_ and some with k8s_POD, even though my deployment.yaml has replicas=1.
Does anyone have any ideas on this?

All containers in a Kubernetes Pod share the same cluster Pod IP address, and for each one of them 127.0.0.1 is the same as for the others. The way that magic happens is via that k8s_POD_ container, which is the one running the pause image and is the only container that is assigned a Kubernetes Pod IP via CNI. All other containers in that Pod then use its network namespace (see network_namespaces(7)) to send and receive traffic within the cluster. That's also why one can restart a container without it losing its IP address, unlike deleting a Pod, which gets a fresh one.
To the best of my knowledge, those sandbox containers can exist even without any of the other containers in cases where the main container workloads cannot start due to pending volumes (or other resources, such as GPUs), since the CNI allocation process happens very early in the Pod lifecycle.
I could have sworn this was covered in an existing question, but I wasn't able to readily find it.
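On a Docker-based node you can see both halves of each Pod side by side (a sketch; assumes you can run docker ps on the node, which applies to dockershim-era clusters):

```shell
# List the sandbox ("pause") containers the kubelet created;
# each k8s_POD_* container holds its Pod's network namespace.
docker ps --format '{{.Names}}' | grep '^k8s_POD_'

# The application containers for the same Pod appear alongside them;
# dockershim names follow k8s_<container>_<pod>_<namespace>_<uid>_<attempt>.
docker ps --format '{{.Names}}' | grep '^k8s_'
```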

Related

Is there any way to save the current state of kubernetes pod?

I have a pod running Linux on which I have installed many software tools. If I restart the pod, k8s will start a new pod and I'll lose everything I installed. Is there any way to save the pod as a Docker image, or any other way it can be persistent even after restarting the pod?
Is there a way to download the container image from a pod in a Kubernetes environment? I tried the linked solution, but it wasn't helpful.
The answer in the link is not wrong, but you will probably have to jump through some hoops. One method I can think of is to:
Run a container that has the docker CLI installed, mounts the Docker socket from the host, and has a node affinity rule so that it is scheduled on the same node as the container you want to capture.
From within this container, you should be able to access the Docker daemon running on the node and issue docker commands to capture, tag, and push the updated image.
I wouldn't advise doing this, though... I have not tested it myself, but I have done something "similar" before.
It would be better to create your own Dockerfile, install the software there, and use that image for your containers.
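As a sketch of the recommended route (the base image, registry name, and package list here are purely illustrative):

```shell
# Hypothetical Dockerfile that bakes the tools into the image,
# so pod restarts no longer lose them.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl vim jq && \
    rm -rf /var/lib/apt/lists/*
EOF

# Build and push, then reference the image in your Deployment spec.
docker build -t registry.example.com/my-tools:1.0 .
docker push registry.example.com/my-tools:1.0
```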

how to differentiate docker container and kubernetes pods running in the same host

I was handed a Kubernetes cluster to manage, but on the same node I can see running Docker containers (via docker ps) that I was not able to find or relate to any pods/deployments (via kubectl get pods/deployments).
I have tried kubectl describe and docker inspect but could not pick out any differentiating parameters.
How to differentiate which is which?
There will be many. At a minimum you'll see all the Pod sandbox pause containers, which are normally not visible in kubectl. Plus possibly anything you run directly on the node, such as the control plane components if they are not running as static pods.
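One practical differentiator (assuming a Docker-based node): containers created by the kubelet carry io.kubernetes.* labels, while containers started directly do not:

```shell
# Containers managed by Kubernetes carry their pod metadata as labels.
docker ps --filter 'label=io.kubernetes.pod.name' \
  --format '{{.ID}} {{.Label "io.kubernetes.pod.namespace"}}/{{.Label "io.kubernetes.pod.name"}}'

# Anything missing that label (only two fields printed) was started outside Kubernetes.
docker ps --format '{{.ID}} {{.Names}} {{.Label "io.kubernetes.pod.name"}}' | awk 'NF < 3'
```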

Need of pods if container was already there

I know the advantages of pods over containers are covered in the Kubernetes documentation, but I still don't understand: if the same tasks and actions can be performed with a container, why do we need pods in Kubernetes?
The K8s documentation describes containers and pods pretty well. But in essence:
A pod in the K8s context:
- Is a group of one or more containers.
- Containers in a pod share networking, for example the same IP address.
- Typically multi-container pods are used when you need a sidecar container. For example:
  - A proxy process for your main container.
  - A debug container with utilities.
  - A process that always needs to run together with your app.
  - A container that makes some sort of networking changes that your app needs.
- Allows you to set up a securityContext for all the containers in the pod.
- Allows you to set up a Disruption Budget policy to prevent downtime, for example.
- Allows you to use higher-level Kubernetes abstractions like Deployments, StatefulSets, and Jobs.
- Allows you to set Pod presets so that a pattern can be reused.
A container in the K8s context:
- Is a lower-level abstraction than a pod.
- Allows you to specify the image.
- Allows you to specify resources (mem/cpu).
- Allows you to set up liveness, startup, and readiness probes.
- Allows you to set up a securityContext for the container individually.
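A minimal sketch of a two-container pod (all names invented); the sidecar reaches the main container over localhost because both share the pod's network namespace:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app                  # main container, serves on port 80
    image: nginx:1.25
  - name: sidecar              # shares the pod IP; polls the app via 127.0.0.1
    image: busybox:1.36
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1/ >/dev/null; sleep 30; done"]
EOF
```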

How to access local machine from a pod

I have a pod created on the local machine. I also have a script file on the local machine. I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
That script will update /etc/hosts of another pod. Is there a way I can update the /etc/hosts of one pod from another pod? The pods are created from two different deployments.
I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).
You can't do that. In a plain Docker context, one of Docker's key benefits is filesystem isolation, so the container can't see the host's filesystem at all unless parts of it are explicitly published into the container. In Kubernetes not only is there this restriction, but you also have limited control over which node you're running on, and there's potential trouble if one node has a given script and another doesn't.
Is there a way where i can update the /etc/hosts of one pod from another pod?
As a general rule, you should avoid using /etc/hosts for anything. Setting up a DNS service keeps things consistent and avoids having to manually edit files in a bunch of places.
Kubernetes provides a DNS service for you. In particular, if you define a Service, then the name of that Service will be visible as a DNS name (within the cluster); one pod can reach the other via first-service-name.default.svc.cluster.local. That's probably the answer you're actually looking for.
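For example (all names here are hypothetical): if a Service called first-service-name exists in the default namespace, another pod can reach it through cluster DNS:

```shell
# Resolve the Service's cluster DNS name from inside another pod.
kubectl exec deploy/other-app -- nslookup first-service-name.default.svc.cluster.local

# Within the same namespace the short name is enough.
kubectl exec deploy/other-app -- wget -qO- http://first-service-name/
```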
(If you really only have a single-node environment then Kubernetes adds a lot of complexity and not much benefit; consider plain Docker and Docker Compose instead.)
As an addition to David's answer - you can copy script from your host to a pod using cp:
kubectl cp [file-path] [pod-name]:/[path]
About your question in the comment. You can do it by exposing a deployment:
kubectl expose deployment/name
Which will result in creating a Service; you can find more practical examples and approaches in this section.
Thus, after your specific Pod terminates, you can still reach the new Pods via the same port and Service. You can find more details here.
In the example from the documentation you can see that the nginx Pod has been created with a container port 80, and the expose command has the following effect:
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View the Service API object to see the list of supported fields in the service definition.
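Following the documentation's nginx example (a sketch; assumes a Deployment named my-nginx whose containers listen on port 80):

```shell
# Create a Service selecting the deployment's pods.
kubectl expose deployment/my-nginx --port=80 --target-port=80

# The resulting ClusterIP and port stay stable across pod restarts.
kubectl get service my-nginx
```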
Other than that, it seems like David provided a really good explanation here; it would also be worth reading up on FQDNs and DNS, which likewise tie in with Services.

Kubernetes pods versus Docker container in Google's codelabs tutorial

This question pertains to the Kubernetes tutorial on Google's CodeLabs found here: https://codelabs.developers.google.com/codelabs/cloud-compute-kubernetes/index.html?index=..%2F..%2Fgcp-next#15
I'm new to both Docker and Kubernetes and am confused over their use of the term "pods" which seems to contradict itself.
From that tutorial:
A Kubernetes pod is a group of containers, tied together for the purposes of administration and networking. It can contain one or more containers. All containers within a single pod will share the same networking interface, IP address, disk, etc. All containers within the same pod instance will live and die together. It's especially useful when you have, for example, a container that runs the application, and another container that periodically polls logs/metrics from the application container.
That is in-line with my understanding of how Kubernetes pods relate to containers, however they then go on to say:
Optional interlude: Look at your pod running in a Docker container on the VM
If you ssh to that machine (find the node the pod is running on by using kubectl describe pod | grep Node), you can ssh into it with gcloud compute ssh. Finally, run sudo docker ps to see the actual pod.
My problems with the above quote:
1. "Look at your pod running in a Docker container" appears to be backwards. Shouldn't it say "Look at your Docker container running on the VM"?
2. "...run sudo docker ps to see the actual pod" doesn't make sense, since docker ps lists Docker containers, not pods.
So am I way off base here or is the tutorial incorrect?
As mentioned above, a pod can run more than one container, but in fact, to keep it simple, running more than one container in a pod is the exception and definitely not the common use. You may look at a pod as a container++; that's the easy way to look at it.
If you are starting with Kubernetes, I have written the blog post below explaining the main three entities you need to be familiar with to get started: pods, deployments, and services.
Here it is:
http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/
Feedback welcome!
One nuance that most people don't know about Docker running Kubernetes is that it runs an outdated version. I found that if I went to Google's cloud-based solution for Kubernetes, everything was quite easy to set up. Here is my sample code showing how I set up Kubernetes with Docker.
I had to use Docker's command-line utility to get everything to work properly, though. I think this should point you in the right direction.
(I've started learning Kubernetes and have some experience with Docker.)
I think the important aspect of pods is that they may contain containers which are not from Docker but from some other implementation.
In that respect, the phrase in problem 1 is fully valid: the output confirms that the pod is running in Docker, not anywhere else.
Regarding problem 2, the phrase means that further details about the pod should be inspected with a docker command; theoretically, a different command may be needed in other cases.
