According to the documentation:
Always: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container.
Despite that, kubectl describe pod shows the image being pulled from the registry even though the image is present on the worker node (visible with crictl images), and the pod runs normally with IfNotPresent or Never without re-downloading the image. How is this possible?
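For reference, a minimal sketch of a pod spec using that policy (the pod name and image reference below are placeholders, not taken from the question):

apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-demo            # hypothetical name for illustration
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # placeholder image reference
      # With Always, the kubelet resolves this tag to a digest on every container start
      # and only downloads layers if that digest is not already cached on the node.
      imagePullPolicy: Always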
Related
I have a Docker build environment where I build containers locally and test them. When I'm done, I push them to our Dev GitLab container registry to be deployed to Kubernetes.
I've run into a situation where either Docker isn't pushing up the newest layers or GitLab is seeing layers from a previous version and just mounting that layer, so when the container is deployed in Kubernetes it runs the old image despite the new tag.
I've tried completely wiping my Docker image repository, rebuilding, and repushing, and that didn't fix it. I also tried using the red trash icon in GitLab to delete the old version of the tag I'm trying to use.
I added some echoes to the container's console output, so I know the new bits aren't being run, but I can't figure out whether the problem is Docker or GitLab, or how to fix it. Anyone have any ideas?
TIA!
Disregard -- my workers had a Docker image on them, and because imagePullPolicy was not set in my YAML, it was defaulting to the cached image in Docker. I just had to clear the image on my worker node (docker rmi -f <image name>) and then update my deployment YAML to use imagePullPolicy: Always.
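For anyone hitting the same thing, a rough sketch of the relevant part of such a deployment YAML (all names and the registry path below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.gitlab.example.com/group/my-app:latest   # placeholder GitLab registry path
          imagePullPolicy: Always   # always check the registry instead of trusting a cached copy of the tag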
I use minikube with the Docker driver on Linux. For a manual workflow I can enable the registry addon in minikube, push my images there, and refer to them in the deployment config file simply as localhost:5000/anything. They are then pulled into minikube's environment by its Docker daemon and the deployments start successfully there. As a result, all the base images are stored only on my local device (since I build my images with my local Docker daemon), and minikube's environment is cluttered only by the images its Docker daemon pulls.
Can I implement the same workflow when using Skaffold? By default Skaffold uses minikube's environment both for building images and for running containers from them, and it also duplicates (sometimes even triplicates) my images inside minikube (I don't know why).
Skaffold builds directly into minikube's Docker daemon as an optimization, avoiding the additional retrieve-and-unpack required when pushing to a registry.
I believe your duplicates are like the following:
$ (eval $(minikube docker-env); docker images node-example)
REPOSITORY     TAG                                                                IMAGE ID       CREATED      SIZE
node-example   bb9830940d8803b9ad60dfe92d4abcbaf3eb8701c5672c785ee0189178d815bf   bb9830940d88   3 days ago   92.9MB
node-example   v1.17.1-38-g1c6517887                                              bb9830940d88   3 days ago   92.9MB
Although these images have different tags, those tags are just pointers to the same Image ID so there is a single image being retained.
Skaffold normally cleans up left-over images from previous runs. So you shouldn't see the minikube daemon's space continuously growing.
An aside: even if those Image IDs were different, an image is made up of multiple layers, and those layers are shared across images, so Docker's reported image sizes may not match the actual disk space consumed.
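If you do want the registry-based workflow from the question, a rough skaffold.yaml sketch might look like the following (the schema version, image name, and manifest path are assumptions, and how Skaffold interacts with minikube's docker-env can vary by version):

apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: true                      # push built images to a registry instead of only loading them into a daemon
  artifacts:
    - image: localhost:5000/anything   # the minikube registry addon address from the question
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                  # placeholder path to the deployment config files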
I loaded some Docker images by running
docker load --input <file>
I can then see these images when executing
docker image ls
After a while the images start disappearing; every few minutes fewer and fewer images are listed. I have not run any of the images yet. What could be the cause of this issue?
EDIT: This issue arises with Docker inside the minikube VM.
Since you've mentioned that the Docker daemon runs inside the minikube VM, I assume you might be hitting the Kubernetes garbage collection mechanism, which keeps system utilization at an appropriate level and reduces the number of unused images and containers according to specific thresholds.
These eviction thresholds are managed by the kubelet, the Kubernetes node agent, which cleans up unused images and containers according to the parameters (flags) set in the kubelet configuration file.
Therefore, you can investigate the eviction behavior by looking at the thresholds in the kubelet config file, which the minikube bootstrapper generates at /var/lib/kubelet/config.yaml.
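For example, the image garbage collection thresholds live in that same KubeletConfiguration file; the values below are the documented defaults, shown only as an illustration:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The kubelet starts deleting unused images once disk usage passes the high
# threshold and stops once usage drops below the low threshold.
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80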
As mentioned in mk_sta's answer, to fix the issue you need to:
Create or edit /var/lib/kubelet/config.yaml with:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  imagefs.available: "5%"
(The default value is 15%.)
minikube stop
minikube start --extra-config=kubelet.config=/var/lib/kubelet/config.yaml
Or free up more space on the Docker partition.
https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#create-the-config-file
https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#hard-eviction-thresholds
Does Kubernetes download the Docker image automatically when I create a pod, or should I run docker pull manually to download the image locally?
You do not need to run docker pull manually. The pod definition contains the image name to pull, and Kubernetes will pull the image for you. You have several options for defining how Kubernetes decides to pull the image, using the imagePullPolicy: field in your pod spec. Much of this is documented here, but basically you can pull if the image is not present, always pull, or never pull (relying on the image already being local). Hopefully that doc can get you started.
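As a quick sketch (the image reference is a placeholder), the field sits next to the image in each container entry of the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example.com/app:1.0        # Kubernetes pulls this for you; no manual docker pull needed
      imagePullPolicy: IfNotPresent     # other accepted values: Always, Never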
Using Kubernetes for deployment:
Considering I have a Dockerfile: I build the image, then push it to a registry.
If I run a container on a host, the image gets pulled and the container is run.
Now, if I update the Dockerfile, build, and push again without changing the tag, the image changes in the registry, but the host already has the image pulled and doesn't seem to check for updates.
How do I force a pull to get the latest image when running a container?
I can manually pull the image, but I'd like to know if there is a 'formal way' of doing this (in the pod or rc templates?)
Thanks for insight.
Set an imagePullPolicy of Always on the container
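For example, in the containers section of your pod or rc template (the image name is a placeholder):

containers:
  - name: my-app
    image: myregistry.example.com/my-app:latest   # placeholder
    imagePullPolicy: Always   # re-check the registry on every pod start, even if the tag is already present locally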