I have a running Docker container A and I want to create a pod with container A.
Is it possible?
If it isn't, can I hold a container in the "created" state in Kubernetes?
I also tried setting containerID to the running container's ID in the pod.yaml file, and tried changing the containerID with kubectl edit on the already running pod, but neither succeeded.
First of all, running a container directly and running it as a pod are two different things.
If you want to run container A in a pod, follow the steps below:
1. Create a Docker image from container A and push it to a Docker registry.
2. Create a deployment.yaml file for Kubernetes and reference that image's registry URL and tag in the image field.
3. Deploy the pod using kubectl apply -f deployment.yaml (a sketch of these steps follows).
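A minimal sketch of those steps, assuming the running container is named A and a registry reachable at registry.example.com (both names are illustrative):

# snapshot the running container into an image and push it
docker commit A my-app:v1
docker tag my-app:v1 registry.example.com/my-app:v1
docker push registry.example.com/my-app:v1

and a deployment.yaml along these lines:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:v1   # the image pushed above

followed by kubectl apply -f deployment.yaml. Note that docker commit only captures the container's filesystem, not any manual runtime state.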
There's no way to "import" a pre-existing Docker container into a Kubernetes pod. Kubernetes always manages the entire lifecycle of a container, including deciding which host to run it on.
If your workflow involves doing some manual setup in between docker create and docker start, you should try to automate this; Kubernetes has nothing equivalent and in fact sometimes it will work against you. If a node gets destroyed (either because an administrator drained it or because its hard disk crashed or something else) Kubernetes will try to relocate every pod that was there, which means containers will get destroyed and recreated somewhere else with no notice to you. If you use a deployment to manage your pods (and you should) you will routinely have multiple copies of a pod, and there you'd have to do your manual setup on all of them.
In short: plan on containers being destroyed and recreated regularly and without your intervention. Move as much setup as you can into your container's entrypoint, or if really necessary, an init container that runs in the pod. Don't expect to be able to manually set up a pod before it runs. Follow this approach in pure-Docker space, too: a single container on its own shouldn't be especially valuable and you should be able to docker rm && docker run a new copy of it without any particular problems.
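For example, here is a rough sketch of a pod with an init container that does one-time setup before the main container starts; all names and images are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
    - name: setup
      image: busybox:1.36
      # the kind of work you might otherwise do between docker create and docker start
      command: ["sh", "-c", "echo 'generated config' > /work/config.txt"]
      volumeMounts:
        - name: workdir
          mountPath: /work
  containers:
    - name: app
      image: registry.example.com/my-app:v1
      volumeMounts:
        - name: workdir
          mountPath: /etc/my-app
  volumes:
    - name: workdir
      emptyDir: {}

The emptyDir volume is recreated from scratch every time the pod is scheduled, which fits the "expect containers to be recreated" mindset described above.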
Quick question: do I need to run docker compose up on Airflow when I amend a secret with kubectl?
I've changed a password using the command line and kubectl in VS Code, and I just want to know whether it is necessary to run docker compose up now that it has been changed.
If you've installed your Airflow system using Helm charts directly on k8s, then you don't have to do anything. Secrets mounted as volumes are automatically refreshed inside pods by the kubelet (secrets exposed as environment variables are only picked up when the pod restarts). And you don't have to manipulate Docker directly when you already have k8s installed and are interacting with it using kubectl; that's the whole point of having k8s.
If you're using both, you really shouldn't. Just interact with k8s and forget about Docker. You will almost never have to think about Docker unless you are debugging some serious problem with the k8s system itself.
Nah. Docker Compose has nothing to do with it. You probably just need to restart your Pods somehow. I always just do a "Redeploy" through our Rancher interface; there is a way to do that with kubectl as well. You just need to get the secret into the Pods; the image itself is unchanged.
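For reference, the kubectl equivalent of that Rancher "Redeploy" is roughly:

kubectl rollout restart deployment <your-airflow-deployment>

which recreates the pods so they pick up the updated secret; substitute whatever your deployment is actually named.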
We are using a Helm chart to deploy our application in a Kubernetes cluster.
We have a StatefulSet and a headless Service. To initialize mTLS, we have created a Job and pass shell & Python scripts as arguments in its command. We have also created a CronJob to renew the certificate.
We have written a docker-entrypoint.sh inside the Docker image for some initialization work and to generate TLS certificates.
Questions to ask:
Who (Helm chart / Kubernetes) takes care of scaling/monitoring/restarting containers?
Does it deploy a new Docker image if the pod fails/restarts?
Will the Docker ENTRYPOINT execute after a container fails/restarts?
Do the Job & CronJob execute if a container restarts?
What other steps are taken by Kubernetes? Would you also share container insights?
Kubernetes, not Helm, will restart a failed container by default unless you set restartPolicy: Never in the pod spec.
Restarting a container is exactly the same as starting it for the first time, so on a restart you can expect things to happen the same way they did when the container was first started.
Internally, the kubelet agent running on each Kubernetes node delegates the task of starting a container to an OCI-compliant container runtime such as Docker, containerd, etc., which then spins up the Docker image as a container on the node.
I would expect the entrypoint script to be executed on both the start and the restart of a container.
Does it deploy a new Docker image if the pod fails/restarts?
It creates a new container with the same image specified in the pod spec. Whether the image is actually re-pulled from the registry depends on imagePullPolicy (IfNotPresent by default, Always when the tag is :latest).
Do the Job & CronJob execute if a container restarts?
If a container which is part of a CronJob fails, Kubernetes will keep restarting it (unless restartPolicy: Never is set in the pod spec) until the job is considered failed. Check this for how to make a CronJob not restart a container on failure. You can specify backoffLimit to control the number of retries before the job is considered failed.
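As a rough sketch, here is where restartPolicy and backoffLimit sit in a CronJob spec (schedule, names, and image are illustrative; older clusters may need apiVersion batch/v1beta1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cert-renewal
spec:
  schedule: "0 0 * * 0"            # e.g. weekly
  jobTemplate:
    spec:
      backoffLimit: 3              # retries before the job is marked failed
      template:
        spec:
          restartPolicy: Never     # fail the pod instead of restarting the container in place
          containers:
            - name: renew-certs
              image: registry.example.com/cert-tools:v1
              command: ["python", "/scripts/renew_certs.py"]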
Scaling up is the equivalent of scheduling and starting yet another instance of the same container on the same or an altogether different Kubernetes node.
As a side note, you should use a higher-level abstraction such as a Deployment instead of a bare pod: when a bare pod fails, Kubernetes can only restart its containers on the same node, whereas a Deployment will create a replacement pod that can be scheduled on other nodes if the pod cannot be started on its currently scheduled node.
Background
I have a large Python service that runs on a desktop PC, and I need to have it run as part of a K8S deployment. I expect that I will have to make several small changes to make the service run in a deployment/pod before it will work.
Problem
So far, if I encounter an issue in the Python code, it takes a while to update the code, and get it deployed for another round of testing. For example, I have to:
Modify my Python code.
Rebuild the Docker image (which includes my Python service).
scp the Docker image over to the Docker Registry server.
docker load the image, update tags, and push it to the Registry back-end DB.
Manually kill off currently-running pods so the deployment restarts all pods with the new Docker image.
This involves a lot of lead time each time I need to debug a minor issue. Ideally, I'd prefer being able to just modify the copy of my Python code already running on a pod, but I can't kill it (since the Python service is the default app that is launched, with PID=1), and K8S doesn't support restarting a pod (to my knowledge). Alternately, if I kill/start another pod, it won't have my local changes from the pod I was previously working on (which is by design, of course; but doesn't help with my debug efforts).
Question
Is there a better/faster way to rapidly deploy (experimental/debug) changes to the container I'm testing, without having to spend several minutes recreating container images, re-deploying/tagging/pushing them, etc? If I could find and mount (read-write) the Docker image, that might help, as I could edit the data within it directly (i.e. new Python changes), and just kill pods so the deployment re-creates them.
There are two main options: one is to use a tool that reduces or automates that flow, the other is to develop locally with something like Minikube.
For the first, there are a million and a half tools but Skaffold is probably the most common one.
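A minimal skaffold.yaml looks roughly like this (image name and manifest path are illustrative, and the schema version depends on your Skaffold release):

apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: myimagename           # rebuilt automatically when source files change
deploy:
  kubectl:
    manifests:
      - k8s/deployment.yaml        # the deployment that references myimagename

Running skaffold dev then watches your source tree, rebuilds the image, and re-deploys on every change.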
For the second, you do something like ( eval $(minikube docker-env) && docker build -t myimagename . ) which will build the image directly in the Minikube docker environment so you skip steps 3 and 4 in your list entirely. You can combine this with a tool which detects the image change and either restarts your pods or updates the deployment (which restarts the pods).
Also, FWIW, using scp and docker load is very much not standard; generally that would be replaced by docker push.
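That is, assuming a registry reachable at registry.example.com, something like:

docker build -t registry.example.com/my-app:dev .
docker push registry.example.com/my-app:dev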
I think your pain point is that the container image depends on the Python code. You can find a way to exclude the source code from the Docker image build phase.
In my experience, I create a Docker image that only includes the Python package dependencies, and use a volume to map the source code directory to a path in the container, so you don't need to rebuild the image unless dependencies are added or removed.
Example
I don't have much experience with k8s, but I believe it must be more or less the same as with docker run.
Dockerfile
FROM python:3.7-stretch
COPY ./python/requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
ENTRYPOINT ["bash"]
Docker container
scp your code to the server, and map the host source path to the container source path like this:
docker run -it -d -v /path/to/your/python/source:/path/to/your/server/source --name python-service your-image-name
With volume mapping, your container no longer depends on source code baked into the image, so you can easily change your source code without rebuilding the image.
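The closest Kubernetes equivalent is a hostPath volume in the pod spec, which only works when the source tree actually exists on the node (e.g. Minikube or a single-node dev cluster); paths, names, and the startup command below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: python-service
spec:
  containers:
    - name: python-service
      image: your-image-name
      # however your service is normally started; main.py is a placeholder
      command: ["python", "/path/to/your/server/source/main.py"]
      volumeMounts:
        - name: source
          mountPath: /path/to/your/server/source
  volumes:
    - name: source
      hostPath:
        path: /path/to/your/python/source
        type: Directory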
We have Harbor scanning container images before they are deployed. Once they are scanned, we then deploy them to the platform (k8s).
Is there any way to scan a container, say, a few weeks down the line after it has been deployed? Without disturbing the deployment, of course.
Thanks
I think we have to distinguish between a container (the running process) and the image from which a container is created/started.
If this is about finding out which image was used to create a container that is (still) running, so that image can be scanned for (new) vulnerabilities, here is a way to get the images of all running containers in a pod:
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].image}'
I have an image to which I need to add a dependency. Therefore I have tried to change the image while it is running as a container and to create a new image from it.
I have followed this article, with the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the container's shell, I installed the dependency using "pip install NAME_OF_DEPENDENCY".
Then I exited the container's shell, and as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command for Google Kubernetes Engine with kubectl.
How should I do the commit in Google Kubernetes Engine?
As of K8s version 1.8, there is no way to make hot-fix changes directly to images, for example committing a new image from a running container. If you change or add something using exec, it will only last as long as the container keeps running. It's not best practice in the K8s ecosystem.
The recommended way is to use a Dockerfile and customise the image according to your needs and requirements. After that, you can push the image to a registry (public/private) and deploy it with a K8s manifest file.
Solution to your issue
Create a Dockerfile for your image.
Build the image using the Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the k8s cluster.
Now, if you want to change or modify something, you just need to change the Dockerfile and follow the remaining steps.
As you know, containers are short-lived and do not persist changed behaviour (modified configuration, file-system changes). Therefore, it's better to express any new behaviour or modification in the Dockerfile.
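As a sketch of what that looks like for this case (the image tags and dependency name are placeholders, and the deployment and container are assumed to be called my-app):

# Dockerfile: layer the new dependency on top of the existing image
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_DEPENDENCY

then build, push, and roll the change out:

docker build -t gcr.io/my-project-id/my-app-image:v2 .
docker push gcr.io/my-project-id/my-app-image:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2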
Kubernetes Mantra
Kubernetes is a cloud-native product, which means it does not matter whether you are using Google Cloud, AWS, or Azure; it needs to behave consistently on each cloud provider.