This question pertains to the Kubernetes tutorial on Google's CodeLabs found here: https://codelabs.developers.google.com/codelabs/cloud-compute-kubernetes/index.html?index=..%2F..%2Fgcp-next#15
I'm new to both Docker and Kubernetes and am confused by the tutorial's use of the term "pods", which seems to contradict itself.
From that tutorial:
A Kubernetes pod is a group of containers, tied together for the purposes of administration and networking. It can contain one or more containers. All containers within a single pod will share the same networking interface, IP address, disk, etc. All containers within the same pod instance will live and die together. It's especially useful when you have, for example, a container that runs the application, and another container that periodically polls logs/metrics from the application container.
That is in line with my understanding of how Kubernetes pods relate to containers; however, they then go on to say:
Optional interlude: Look at your pod running in a Docker container on the VM
If you ssh to that machine (find the node the pod is running on by using kubectl describe pod | grep Node), you can then ssh into the machine with gcloud compute ssh. Finally, run sudo docker ps to see the actual pod.
My problems with the above quote:
. "Look at your pod running in a Docker container" appears to be
backwards. Shouldn't it say "Look at your Docker container running
on the VM"?
"...run sudo docker ps to see the actual pod" doesn't make sense, since "docker ps" lists docker containers, not pods.
So am I way off base here or is the tutorial incorrect?
As mentioned above, a pod can run more than one container, but in practice running more than one container in a pod is the exception rather than the common case. To keep it simple, you can look at a pod as a "container++"; that's the easy way to think about it.
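For example (a minimal sketch using the official Kubernetes Python client; the pod name, images, and "default" namespace here are only illustrative assumptions), a pod that pairs an application container with a small polling sidecar could be created like this:

# Minimal sketch: one pod holding an app container plus a sidecar that polls it.
# Assumes the `kubernetes` Python package is installed and a kubeconfig is available;
# names and images are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-with-sidecar"),
    spec=client.V1PodSpec(
        containers=[
            # Main application container.
            client.V1Container(name="app", image="nginx:1.25"),
            # Sidecar polling the app over the shared loopback; both containers
            # share the pod's network namespace and IP.
            client.V1Container(
                name="poller",
                image="busybox:1.36",
                command=["sh", "-c",
                         "while true; do wget -qO- http://127.0.0.1:80/ >/dev/null; sleep 30; done"],
            ),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

Both containers live and die together, which is the shared-lifecycle behaviour described in the quote above.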
If you are starting with Kubernetes, I have written the blog post below that explains the three main entities you need to be familiar with to get started: pods, deployments, and services.
Here it is:
http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/
Feedback welcome!
One nuance that most people don't know about running Kubernetes with Docker is that it runs an outdated version. I found that if I went to Google's cloud-based solution for Kubernetes, everything was quite easy to set up. Here is my sample code of how I set up Kubernetes with Docker.
I had to use the Docker command-line utility, though, to get everything to work properly. I think this should point you in the right direction.
(I've started learning Kubernetes and have some experience with Docker).
I think the important aspect of pods is that they may contain container(s) that come not from Docker but from some other container runtime.
In that light, the phrase in problem 1 is fully valid: the output confirms that the pod is running in Docker, not anywhere else.
And regarding problem 2: the phrase means that you should inspect further details about the pod with a docker command. Theoretically, a different command may be needed in other cases.
Related
I have a pod running Linux, and I have installed a lot of software and tools in it. If I restart the pod, k8s will start a new pod and I'll lose everything I installed. Is there any way to save the pod as a Docker image, or any other way to make it persistent even after restarting the pod?
I tried the solution from "Is there a way to download the container image from a pod in a Kubernetes environment?", but it wasn't helpful.
The answer in the link is not wrong; you will probably have to jump through some hoops. One method I can think of is to:
Run a container that has the docker CLI installed, mounts the Docker socket from the host, and has a node affinity rule so that it is scheduled on the same node as the container you want to capture.
From within this container, you should be able to access the Docker daemon running on the node and issue docker commands to commit, tag, and push the updated image, as sketched below.
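A rough, untested sketch of that last step with the Docker Python SDK (the socket path is the conventional /var/run/docker.sock mount; the container name, repository, and tag are placeholders you would substitute):

# Rough sketch, not tested: run inside a helper container that has the node's
# /var/run/docker.sock mounted at the same path. Names below are placeholders.
import docker

client = docker.DockerClient(base_url="unix:///var/run/docker.sock")

# Look up the running container you want to capture (use the name or ID shown by docker ps).
target = client.containers.get("<container-name-or-id>")

# Commit its current filesystem state to a new image, then push it to your registry.
target.commit(repository="registry.example.com/team/captured-app", tag="snapshot-1")
client.images.push("registry.example.com/team/captured-app", tag="snapshot-1")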
I wouldn't advise doing this, though... I have not tested it myself, but I have done something "similar" before.
It would be better to create your own Dockerfile, install the software there, and use that image for your containers.
I have to say that my question is a little confusing, but I'll try to be as clear as possible:
In Docker there is a command to run a container and make it use another container's network. The command is: docker run --net=container:<name>
So basically, I want k8s to execute that command when it creates a pod. Is that possible? Or is there an alternative way to do that in k8s?
In other words, what command does the k8s api-server execute to create containers on worker nodes?
There are a lot of questions in there, lol; I hope you understand what I'm trying to say...
I am not sure if I understand what you want, but if you want to capture a pod's network traffic you can use a service mesh like Istio or Linkerd.
I have worked with Istio, and you can get metrics for all traffic within a cluster.
I'm working on a system that spins up pods in k8s for users to work in for a while. They'll be running code, modifying files, etc. One thing I'd like to do is be able to effectively "export" their pod in its modified state. In Docker I'd just run docker commit && docker save to bundle it all into a tar, but I can't see anything similar in the Kubernetes API, kubectl, or the client libraries.
Short answer: No, Kubernetes doesn't have an equivalent to docker commit/save.
As Markus Dresch mentioned in the comment:
kubernetes orchestrates containers, it does not create or modify them.
Kubernetes and Docker are 2 different tools for different purposes.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers.
You can find more information about Pull, Edit, and Push a Docker Image here.
I was handed a Kubernetes cluster to manage. But on one of the nodes, I can see running Docker containers (via docker ps) that I am not able to find or relate to any pods/deployments (via kubectl get pods/deployments).
I have tried kubectl describe and docker inspect, but could not pick out any differentiating parameters.
How do I differentiate which is which?
There will be many. At a minimum you'll see all of the pod sandbox "pause" containers, which are normally not visible through kubectl. Plus possibly anything you run directly with Docker, such as the control plane components if they are not running as static pods.
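If the node's runtime is Docker (the old dockershim setup), the containers kubelet manages carry io.kubernetes.* labels and k8s_-prefixed names, so you can separate them from everything else. Here is a small sketch with the Docker Python SDK, run on the node itself; treat the label names as an assumption to verify against your cluster:

# Sketch: separate kubelet-managed containers from everything else on a node
# by their io.kubernetes.* labels (dockershim-based Docker runtime assumed).
import docker

client = docker.from_env()

for c in client.containers.list():
    pod = c.labels.get("io.kubernetes.pod.name")
    namespace = c.labels.get("io.kubernetes.pod.namespace")
    if pod:
        print(f"kubelet-managed: {c.name} -> pod {namespace}/{pod}")
    else:
        print(f"not managed by Kubernetes: {c.name}")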
I previously created a Flask server that spawns Docker containers using the Docker Python SDK. When a client hits a specific endpoint, the server would generate a container. It would maintain queues, and it would be able to kill containers that didn't respond to requests.
I want to migrate to Kubernetes, but I am starting to think my current server won't be able to "spawn" jobs as pods automatically the way it does with Docker:
docker.from_env().containers.run('alpine', 'echo hello world')
Is Docker Swarm a better solution for this, or is there a hidden practice that is done in Kubernetes? Would the Kubernetes Python API be a logical solution for automatically generating pods and jobs, where the Flask server is a pod that manages other pods within the cluster?
kubectl run is much like docker run in that it will create a Pod with a container based on a Docker image (e.g. "How do I run a curl command from within a Kubernetes pod"). See https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/ for more comparison. But what you run with k8s are Pods/Jobs that contain containers, rather than running containers directly, so this will add an extra layer of complexity for you.
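For instance, your one-liner above maps roughly onto creating a Job with the official Kubernetes Python client (a sketch under the assumption that the kubernetes package is installed and a cluster is reachable; the Job name and namespace are arbitrary):

# Sketch: roughly the Jobs equivalent of
#   docker.from_env().containers.run('alpine', 'echo hello world')
# Job name and namespace are arbitrary choices.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() if the Flask server itself runs in the cluster

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="hello-world"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(name="hello", image="alpine",
                                       command=["echo", "hello world"]),
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)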
Kubernetes is more about orchestrating services than running short-lived jobs. It has some features for this and can be used to run Jobs, but that isn't its central focus. If you're going in that direction you may want to look at Knative (and Knative Build) or Kubeless, as what you describe sounds rather like the serverless concept. Or, if you are thinking more about Jobs, then perhaps Brigade (https://brigade.sh). (For more, see https://www.quora.com/Is-Kubernetes-suited-for-long-running-batch-jobs.) If you are looking to run web app workloads that serve requests, note that you don't need to kill containers that fail to respond on k8s, as k8s will monitor and restart them for you.
I don't know Swarm well enough to compare. I suspect it would be a bit easier for you, as it is aimed more centrally at Docker (the k8s API is intended to support other runtimes), but perhaps somebody else can comment on that. Whether using Swarm instead helps you will, I guess, depend on your motivations.