Deployment of docker images - docker

I have a Docker image. Whenever I run the container, I need to provide input files to it externally, and then some commands. So if I deploy that image to Kubernetes, how am I supposed to provide the data while it is continuously running? Any leads will be appreciated.

In Kubernetes a pod is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
You can pass any kind of metadata to the container using the Kubernetes deployment scripts. Check out this link for a sample.
Whenever the pods (containers) are restarted, the parameters are passed again automatically from the pod specification YAML file.
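As an illustration, a Deployment along these lines (the image name, argument values, and ConfigMap name are all hypothetical) passes command-line arguments, environment variables, and input files to the container every time a pod is (re)started:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-image:1.0       # hypothetical image
        args: ["--input", "/data/input.txt"]   # the "commands" passed on every restart
        env:
        - name: MODE                      # metadata passed as an environment variable
          value: "batch"
        volumeMounts:
        - name: input-data
          mountPath: /data
      volumes:
      - name: input-data
        configMap:
          name: my-input-files            # hypothetical ConfigMap holding the input files
```

For larger or changing input data, a PersistentVolumeClaim or an init container that fetches the files would replace the ConfigMap, but the principle is the same: the pod spec, not a manual step, delivers the data on every restart.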

Related

Is there any way to save the current state of kubernetes pod?

I have a pod running Linux on which I have installed many software tools. If I restart the pod, k8s will start a new pod and I'll lose everything I installed. Is there any way to save the pod as a Docker image, or any other way to make it persistent even after restarting the pod?
Is there a way to download the container image from a pod in a Kubernetes environment? I tried that solution, but it wasn't helpful.
The answer in the link is not wrong, but you will probably have to jump through some hoops. One method I can think of is to:
Run a container that has the docker CLI installed, mounts the Docker socket from the host, and has a node affinity rule so that it is scheduled on the same node as the container you want to capture.
From within this container, you should be able to access the Docker daemon running on the node and issue docker commands to capture, tag, and push the updated image.
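A sketch of such a pod (the node name is a placeholder you would fill in, and `docker:cli` is the official Docker CLI image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-cli
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - target-node          # hypothetical: the node running the container to capture
  containers:
  - name: docker-cli
    image: docker:cli
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
```

Note this only works on clusters whose nodes actually run Docker as the container runtime; on containerd or CRI-O nodes there is no Docker socket to mount.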
I wouldn't advise doing this, though... I have not tested it myself, but I have done something "similar" before.
It would be better to create your own Dockerfile, install the software there, and use that image for your containers.
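For instance, a minimal Dockerfile (the base image and package names are placeholders for whatever you were installing by hand) that bakes the tools into the image instead:

```dockerfile
FROM ubuntu:22.04
# Install the tools you would otherwise install manually inside the pod,
# so a restarted pod comes back with everything already in place
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl vim git && \
    rm -rf /var/lib/apt/lists/*
CMD ["bash"]
```

Build and push this once, reference it in the pod spec, and restarts no longer lose anything.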

Does kubernetes have an equivalent to docker commit/save

I'm working on a system that spins up pods in k8s for users to work in for a while. They'll be running code, modifying files, etc. One thing I'd like to do is be able to effectively "export" their pod in its modified state. In Docker I'd just docker commit && docker save to bundle it all into a tar, but I can't see anything at all similar in the Kubernetes API, kubectl, or the client libs.
Short answer: No, Kubernetes doesn't have an equivalent to docker commit/save.
As Markus Dresch mentioned in the comment:
kubernetes orchestrates containers, it does not create or modify them.
Kubernetes and Docker are 2 different tools for different purposes.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers.
You can find more information about Pull, Edit, and Push a Docker Image here.
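A hedged sketch of that manual pull/edit/push workflow (the registry, image names, and tags are placeholders; this must be run on a host with a Docker daemon, outside of Kubernetes):

```shell
docker pull registry.example.com/myimage:1.0                     # pull the existing image
docker run -it --name work registry.example.com/myimage:1.0 sh   # make changes interactively
docker commit work registry.example.com/myimage:1.1              # snapshot the container as a new image
docker push registry.example.com/myimage:1.1                     # push the new tag to the registry
```

The cluster then picks up the change when a pod spec references the new tag; Kubernetes itself never performs the commit.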

how to differentiate docker container and kubernetes pods running in the same host

I was handed a Kubernetes cluster to manage. But on the same node, I can see running Docker containers (via docker ps) that I could not find or relate to the pods/deployments (via kubectl get pods/deployments).
I have tried kubectl describe and docker inspect but could not pick out any differentiating parameters.
How to differentiate which is which?
There will be many such containers. At a minimum you'll see all the pod sandbox pause containers, which are normally not visible through kubectl. Plus possibly anything you run directly on the node, such as the control plane components if you're not using static pods.
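One way to tell them apart (assuming the nodes use Docker as the runtime): containers started by the kubelet carry `io.kubernetes.*` labels and names prefixed with `k8s_`, so you can filter on those and compare against the full list.

```shell
# Containers managed by the kubelet carry io.kubernetes.* labels
docker ps --filter "label=io.kubernetes.pod.name" --format '{{.Names}}'
# Compare against the full list; any name not in the first set
# was started outside Kubernetes
docker ps --format '{{.Names}}'
```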

Kubernetes and Docker Relationship

What is the nature of the relationship between Docker and Kubernetes? Is it safe to assume that ALL Docker operations done within a Pod will treat the Pod as if it is a normal host machine?
For example, if I were to use the Python Docker SDK, attach to the /var/run/docker.sock, and create a volume, will this volume only exist within the Pod?
My main concern is that I know a Pod is virtualized, thus may not play nicely if I dig a little too deep via other virtualization tools like Docker.
It's important to understand what the responsibility of each of these concepts is.
A Docker container is in essence an isolation boundary around a process: it shares the host OS kernel but runs in its own namespaces, rather than on a separate guest OS like a VM (docs).
Kubernetes is an orchestration platform for running such containers (docs).
Finally a Pod is a kubernetes object that describes how a docker container is to be run (docs).
With that knowledge we can answer some of your questions;
What is the nature of the relationship between Docker and Kubernetes?
Kubernetes can run docker containers like your computer can, but it's optimised for this specific goal. Kubernetes is also an abstraction (or orchestration) layer, handling resources like network capability, disk space, and cpu cycles for you.
Is it safe to assume that ALL Docker operations done within a Pod will treat the Pod as if it is a normal host machine?
A Pod is not a host in any way. It's merely a description of how a docker container (or multiple) should run. Any resulting containers run directly on one of the cluster's Nodes, alongside containers belonging to other Pods.
For example, if I were to use the Python Docker SDK, attach to the /var/run/docker.sock, and create a volume, will this volume only exist within the Pod?
This is something you can do on your local machine, and while technically you could do this on your Node as well, it's not a common use case.
Note that a docker container is isolated from any external factors like a mount or a network socket (which only happen at runtime, and don't change the state of the container itself). You can however configure a container (using a Pod object) to recreate the same conditions on your cluster.
If Kubernetes is running Docker (it's not guaranteed to) then that /var/run/docker.sock will be the host's Docker socket; there is not an additional layer of virtualization.
You shouldn't try to use Docker primitives in an application running in Kubernetes. The approach you describe can lead to data loss, even, if you try to create a Docker-native volume on a node but then a cluster autoscaler or some other task destroys the node. If you need to create storage or additional containers, you can use the Kubernetes API to create PersistentVolumeClaims, Jobs, and other Kubernetes-managed objects.
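For example, instead of a Docker-native volume, a PersistentVolumeClaim (the name and size here are illustrative) gives you storage that survives pod and node churn:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data            # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A pod then mounts it through a `persistentVolumeClaim` volume in its spec, and Kubernetes, not the node's Docker daemon, manages its lifecycle.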

How to handle "docker-in-docker" problem when using Jenkins inside K8S

New to Kubernetes; a slightly complex question that needs some help.
Background
Using Jenkins in GKE (Google Kubernetes Engine)
Want to use jenkins-docker plugin to provide the specific test environment for each type of tests
Don't want to mixin docker binary in the Jenkins image (because it is large)
Don't want docker-in-docker
More specifically, I don't want the Jenkins Pod be a new Docker Server
What I want
Each test environment can create a new pod in GKE Cluster, rather than creating containers inside the Jenkins Pod
P.S.
I have just read some articles, but half of them are about "how to use K8S to scale up Jenkins (using jenkins-slave + the jenkins-kubernetes plugin)", and the other half are about "how to use the docker plugin in a dockerized Jenkins container on a bare metal machine (you can use /var/run/docker.sock to communicate between the host and the docker container)". I cannot find how to use the docker plugin (to provide a specific environment) in a dockerized Jenkins container inside K8S.
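What typically fills that gap is the jenkins-kubernetes plugin itself: instead of the docker plugin talking to a Docker daemon, each build runs in a dedicated agent pod that the plugin creates in the GKE cluster and tears down afterwards, which satisfies all four constraints above. A hedged sketch of such a pod template (the container name, image, and label are placeholders for your own test environments):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent
spec:
  containers:
  - name: python-tests        # hypothetical test environment container
    image: python:3.11        # one image per type of test environment
    command: ["sleep"]
    args: ["infinity"]        # keep the container alive so build steps can exec into it
```

In a declarative Jenkinsfile this YAML goes inside `agent { kubernetes { yaml '''...''' } }`, so the test environment is a fresh pod in the cluster rather than a container spawned inside the Jenkins pod, and no Docker binary or daemon is needed in the Jenkins image.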
