How to get resolved sha digest for all images within Kubernetes yaml? - docker

Docker image tags are mutable, in that image:latest and image:1.0 can both point to image@sha256:....., but when version 1.1 is released, image:latest stored within a registry can be pointed to an image with a different sha digest. Pulling an image with a particular tag now does not mean that an identical image will be pulled next time.
If a Kubernetes YAML resource definition refers to an image by tag (not by digest), is there a means of determining what sha digest each image will actually resolve to, before the resource definition is deployed? Is this functionality supported using kustomize or kubectl?
Use case is wanting to determine what has actually been deployed in one environment before deploying to another (I'd like to take a hash of the resolved resource definition and could then use this to understand whether image:1.0 to be deployed to PROD refers to the same image:1.0 that was deployed to UAT).
Are there any tools that can be used to support this functionality?
For example, given the following YAML, is there a way of replacing all images with their resolved digests?
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: image1
    image: image1:1.1
    command:
    - /bin/sh -c some command
  - name: image2
    image: image2:2.2
    command:
    - /bin/sh -c some other command
To get something like this:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: image1
    image: image1@sha256:....
    command:
    - /bin/sh -c some command
  - name: image2
    image: image2@sha256:....
    command:
    - /bin/sh -c some other command
I'd like to be able to do something like pipe yaml (that might come from cat, kustomize or kubectl ... --dry-run) through a tool and then pass to kubectl apply -f:
cat mydeployment.yaml | some-tool | kubectl apply -f -
EDIT:
The background to this is the need to be able to prove to auditors/regulators that what is about to be deployed to one env (PROD) is exactly what has been successfully deployed to another env (UAT). I'd like to use normal tags in the deployment template and at the time of deploying to UAT, take a snapshot of the template with the tags replaced with the digests of the resolved images. That snapshot will be what is deployed (via kubectl or similar). When deploying to PROD, that same snapshot will be used.

This tool supports exactly what you need...
kbld: https://get-kbld.io/
Resolves a name-tag pair reference (nginx:1.17) into a digest reference
(index.docker.io/library/nginx@sha256:2539d4344...)
It looks like it integrates quite well with templating tools like Kustomize or even Helm.
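A minimal sketch of how it can sit in the pipeline from the question (the manifest path is an assumption; kbld reads YAML on stdin when given -f -):
kustomize build . | kbld -f - | kubectl apply -f -
# or with a plain manifest file
cat mydeployment.yaml | kbld -f - | kubectl apply -f -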

You can get the info on all the containers in use with this command. It will list all namespaces, with pod names, container image names, and the sha256 digests of the images.
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{","}{.metadata.name}{","}{range .status.containerStatuses[*]}{.image}{", "}{.imageID}{", "}{end}{end}' | sort
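If you only care about a single pod (for instance the example pod from the question), a narrower variant of the same idea would be (sketch):
kubectl get pod example -o jsonpath='{range .status.containerStatuses[*]}{.image}{" => "}{.imageID}{"\n"}{end}'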

is there a means of determining what sha digest each image will actually resolve to, before the resource definition is deployed?
No, and in the case you describe, it can vary by node. The Deployment will create some number of Pods, each Pod will get scheduled on some Node, and the Kubelet there will only pull the image if it doesn’t have something with that tag already. If you have two replicas, and you’ve changed the image a tag points to, then on node A it could use the older image that was already there, but on node B where there isn’t an image, it will pull and get the newer version.
The best practice here is to avoid changing the image a tag points to. Give each build coming out of your CI system a unique tag (a datestamp or source control commit ID for example) and use that in your Kubernetes object specifications. That avoids this problem entirely.
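For illustration, a container spec following that advice might look like this (the registry and tag value are hypothetical):
containers:
- name: image1
  image: registry.example.com/image1:3f9d2c1   # tag is the short git commit ID, never reused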
A workaround is to set
imagePullPolicy: Always
in your pod specs, which will force the node to pull a newer version, but this is unnecessary overhead in most cases.

Here's another one: k8s-digester from the Google folks. It's a bit different in the sense that the main focus is on cluster-side changes (via an admission controller), even though client-side KRM functions also seem to be possible.
Overall, kbld seems to be more about the development experience and adoption with your CLI/CI/CD/orchestration, while k8s-digester is more about administration on the cluster side.

Related

Auto update container image when new build is released into kubernetes with gitlab ci/cd and helm [duplicate]

I have a private repository. I want to update my container image in Kubernetes automatically when the image is updated in my private repository. How can I achieve this?
Kubernetes natively does not have the feature of automatically redeploying pods when there is a new image. Ideally what you want is a tool that enables GitOps-style deployment, wherein a state change in git will be synced to the Kubernetes cluster. Flux and ArgoCD are open-source tools which support GitOps.
Recently there was an announcement about combining these two projects as Argo Flux.
You should assign some sort of unique identifier to each build. This could be based off a source-control tag (if you explicitly tag releases), a commit ID, a build number, a time stamp, or something else; but the important detail is that each build creates a unique image with a unique name.
Once you do that, then your CI system needs to update the Deployment spec with a new image:. If you're using a tool like Kustomize or Helm, there are standard patterns to provide this; if you are using kubectl apply directly, it will need to modify the deployment spec in some way before it applies it.
This combination of things means that the Deployment's embedded pod spec will have changed in some substantial way (its image: has changed), which will cause the Kubernetes deployment controller to automatically do a rolling update for you. If this goes wrong, the ordinary Kubernetes rollback mechanisms will work fine (because the image with yesterday's tag is still in your repository). You do not need to manually set imagePullPolicy: or manually cause the deployment to restart; changing the image tag in the deployment is enough to cause a normal rollout to happen.
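As a sketch of what the CI step could run (the deployment, container, and registry names are assumptions, not from the question):
kubectl set image deployment/myapp myapp=registry.example.com/myapp:${TAG}
# or, when using Kustomize, bump the tag in the overlay (run in the directory with kustomization.yaml):
kustomize edit set image myapp=registry.example.com/myapp:${TAG}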
Have a look at the various image pull policies.
imagePullPolicy: Always might come closest to what you need. I don't know if there is a way in "vanilla" K8s to achieve an automatic image pull, but I know that RedHat's OpenShift (or OKD, the free version) works with image streams, which do exactly what you ask for.
The imagePullPolicy and the tag of the image affect when the kubelet attempts to pull the specified image.
imagePullPolicy: IfNotPresent: the image is pulled only if it is not already present locally.
imagePullPolicy: Always: the image is pulled every time the pod is started.
imagePullPolicy is omitted and either the image tag is :latest or it is omitted: Always is applied.
imagePullPolicy is omitted and the image tag is present but not :latest: IfNotPresent is applied.
imagePullPolicy: Never: the image is assumed to exist locally. No attempt is made to pull the image.
So to achieve this you have to set imagePullPolicy: Always and restart your pod, and it should pull a fresh copy of the latest image. I don't think there is any other way in K8s.
Container Images
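For reference, a minimal sketch of where the field goes in a pod spec (the image name is hypothetical):
spec:
  containers:
  - name: app
    image: registry.example.com/myimage:latest
    imagePullPolicy: Always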
I just wrote a bash script to achieve this. My imagePullPolicy option is Always and I am running this script with crontab (you could also check continuously with an infinite loop). It checks the repository and, if any change has occurred, it deletes the pod, which then pulls the updated image automatically because imagePullPolicy is set to Always.
#!/bin/bash
registry="repository_name"
username="user_name"
password="password"

## With this you can append all images from the repository to the array
#images=($(echo $(curl -s https://"$username":"$password"@"$registry"/v2/_catalog | jq .repositories | jq .[])))
## Or you can set your image array manually
images=( image_name_0 image_name_1 )

for i in "${images[@]}"
do
  old_image=$(cat /imagerecords/"$i".txt)
  new_image=$(echo $(curl -s https://"$username":"$password"@"$registry"/v2/"$i"/manifests/latest | jq ."fsLayers" | jq .[]))
  if [ "$old_image" == "$new_image" ]; then
    echo "image: $i is already up-to-date"
  else
    echo "image: $i is updating"
    kubectl delete pod pod_name   # assumes the pod is managed by a controller that recreates it, pulling the new image
    echo $new_image > /imagerecords/"$i".txt
  fi
done
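For completeness, a hypothetical crontab entry to run such a script periodically (the path and interval are assumptions):
*/5 * * * * /usr/local/bin/check-images.sh >> /var/log/check-images.log 2>&1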
This functionality is provided by open-source project argocd-image-updater:
https://github.com/argoproj-labs/argocd-image-updater

Building multi-architecture docker images on ansible?

As it is right now, it's possible to docker build an image using the community.docker collection:
(Example from documentation)
- name: Build an image and push it to a private repo
  community.docker.docker_image:
    build:
      path: ./sinatra
    name: registry.ansible.com/chouseknecht/sinatra
    tag: v1
    push: yes
    source: build
My question is simple. According to their documentation, the platform field only seems to allow for one architecture:
Platform in the format os[/arch[/variant]].
(Notice "platform" and not "platforms" and that type is "string" and not a list of strings)
Is it possible to multiarch build (for example, an amd64 and arm) using the community.docker collection? Of course, I can use shell/command instead using something like:
- name: Multiarch build
  shell: |
    docker buildx build --platform amd64,arm --push -t myimage .
But is it possible using what's available now within the collection?
I'm new to using Ansible to build Docker images, but I wanted to do this too and agree that specifying multiple platforms doesn't appear to be supported, so I put in a feature request on their GitHub here: https://github.com/ansible-collections/community.docker/issues/467.
Just to see what happened, I tried setting build.platform to "linux/arm64,linux/amd64", and the answer is that nothing happens. Ansible said that it was okay and nothing changed, even when I made an edit to the Dockerfile that should've triggered a build. I found that the same happens if I specify an invalid platform name.
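Until the collection supports it, a shell-based fallback seems to be the way to go; a rough sketch (the builder name is arbitrary, and QEMU/binfmt emulation is assumed to be set up on the host):
- name: Create a multi-platform builder
  shell: docker buildx create --name multiarch --use
- name: Multiarch build and push
  shell: docker buildx build --platform linux/amd64,linux/arm64 --push -t myimage .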

GCP manage kubernetes autodeploy image path

In my project on GCP I set up an automated deploy for a specific deployment on my Kubernetes cluster; at the end of the procedure an image path like:
gcr.io/direct-variety-325450/cc-mirror:$COMMIT_SHA
was created.
If I look in my GCP "Container Registry" I see images with tags like c15c5019183ded74814d570a9a33d2f95ecdfb32
Now my question is:
How can I specify the latest image name in my deployment.yaml file if there is no latest or other tag?
...
spec:
  containers:
  - name: django
    image: ????
...
If I put:
gcr.io/direct-variety-325450/cc-mirror:$COMMIT_SHA
or:
gcr.io/direct-variety-325450/cc-mirror
I get an error:
Cannot download Image, Image does not exist
What do I have to put into the image: entry of my deployment.yaml?
So many thanks in advance
Manuel
TL;DR: You need to specify the latest tag in your deployment.
In fact, Kubernetes automates a lot of things for you. You declare what you want, and Kubernetes compares its state with your wishes and performs actions.
If you don't specify the image tag, Kubernetes will compare your wish (no tag) with the current state of the cluster (no tag), and because they are equal it will do nothing.
Now, how to automate the deployment of the new tag? No magic here: you need a placeholder in your deployment.yaml file and a sed in your pipeline to replace the placeholder with the real value.
Then apply the change in this updated file.
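A minimal sketch of that approach (the placeholder name IMAGE_TAG and the manifest path are assumptions):
sed -i "s|gcr.io/direct-variety-325450/cc-mirror:IMAGE_TAG|gcr.io/direct-variety-325450/cc-mirror:${COMMIT_SHA}|" deployment.yaml
kubectl apply -f deployment.yaml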

Forcing Kubernetes to redeploy a deployment YAML if docker image updates? [duplicate]

This question already has answers here:
Kubernetes how to make Deployment to update image
(8 answers)
Closed 1 year ago.
I have this workflow where I write some code and a docker image is deployed under latest. Currently, it deploys to my container registry and then I run this kubectl apply file.yaml after the container deploys, but K8s doesn't seem to recognize that it needs to re-pull and rollout a new deployment with the newly pulled image.
How can I basically feed in the YAML spec of my deployments and just rollout restart the deployments?
Alternatively, is there a better approach? I unconditionally am rolling out deployment restarts on all my deployments this way.
@daniel-mann is correct to discourage the use of :latest.
Don't read the word 'latest' when you see the tag latest. It's a default tag and it breaks the ability to determine whether the image's content has changed.
A better mechanism is to tag your images by some invariant value... your code's hash, for example. This is what Docker does with its image hashes and that's the definitive best (but not easiest) way to identify images: image@sha256:.....
You can use some SemVer value. Another common mechanism is to use the code's git commit for its tag: git rev-parse HEAD or similar.
So, assuming you're now uniquely identify images by tags, how to update the Deployment? The docs provide various approaches:
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
But these aren't good for robust deployments (lowercase-D). What you should also do is create unique Deployment manifests each time you change the image. Then, if you make a mistake and inadvertently deploy something untoward, you have a copy of what you did and you can correct it (making another manifest) and apply that. This is a principle behind immutable infrastructure.
So...
TAG=$(git rev-parse HEAD)
docker build \
  --tag=${REPO}/${IMAGE}:${TAG} \
  ...
docker push ${REPO}/${IMAGE}:${TAG}
Then change the manifest (and commit the change to source control):
sed --in-place "s|image: IMAGE|image: ${REPO}/${IMAGE}:${TAG}|g" /path/to/manifest.yaml
git add /path/to/manifest.yaml
git commit --message=...
Then apply the revised (but unique!) manifest to the cluster:
kubectl apply \
  --filename=/path/to/manifest.yaml \
  --namespace=${NAMESPACE}

Deploying Docker images using Ansible

After reviewing this amazing forum, I thought it's time to join in...
I'm having an issue with a playbook that deploys multiple Dockers.
My Ansible version is: 2.5.1
My Python version is 3.6.9
My Linux images are 18.04 from the site: OSboxes.
The Docker service is installed and running on both of the machines.
According to this website, all you need to do is follow the instructions and everything will work perfectly. :)
https://www.techrepublic.com/article/how-to-deploy-a-container-with-ansible/
(The playbook I use is in the link above)
But after following the steps and using the playbook, I got this error.
TASK [Pull default Docker image] ******************************************************************************************************
fatal: [192.168.1.38]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_image) module: source Supported parameters include: api_version, archive_path, buildargs, cacert_path, cert_path, container_limits, debug, docker_host, dockerfile, filter_logger, force, http_timeout, key_path, load_path, name, nocache, path, pull, push, repository, rm, ssl_version, state, tag, timeout, tls, tls_hostname, tls_verify, use_tls"}
I'll be happy for your support on this issue.
The source: pull option was added in Ansible 2.8. Since you are using Ansible 2.5.1, that option is not available.
You can either use a later version, 2.8 or above, or just remove that line from your playbook and it should work:
- name: Pull default Docker image
  docker_image:
    name: "{{ default_container_image }}"
You won't have the guarantee that the image has been newly pulled from a registry. If that's important in your case, you can remove any locally cached version of the image first:
- name: Remove Docker image
  docker_image:
    name: "{{ default_container_image }}"
    state: absent

- name: Pull default Docker image
  docker_image:
    name: "{{ default_container_image }}"
So according to the docs of the docker_image module in Ansible 2.5, there is indeed no parameter source.
Nevertheless, the docs for version 2.9 tell us it was "added in 2.8"! So you have to update your Ansible version to be able to run the linked playbook as-is. That's your best option.
Otherwise, another option would be to keep your version 2.5 and simply remove line 38:
(-) source: pull
But I don't know what the default behaviour was before 2.8, so I cannot guarantee that it will do what you expect!
Finally, I got this playbook to sing! :)
I did the following.
Upgraded the Ansible version, so now it's running version 2.9.15.
My python3 version is 3.6.9.
After upgrading Ansible to the version mentioned above, I got an error message: Failed to import the required python library (Docker SDK for Python (python >= 2.7) or docker-py (python 2.6)) on osboxes (this is my machine) python...
So, after Googling this error, I found this URL:
https://neutrollized.blogspot.com/2018/12/cannot-have-both-docker-py-and-docker.html
So, I decided to remove Docker from my machines, including the Python docker packages that were installed using pip (I used pip list to see if docker was installed, and removed it using pip uninstall).
After removing Docker from my machines, I added one more play to the playbook: install docker-compose (that's what solved my problem, and it took care of the Python versions).
Just follow the URL I attached in my answer.
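For reference, a minimal sketch of installing the Docker SDK for Python from the playbook itself via Ansible's pip module (the PyPI package name is docker; a pip3 executable on the target host is assumed):
- name: Install Docker SDK for Python
  pip:
    name: docker
    executable: pip3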
According to the error message, the Ansible module docker_image is being given a parameter which is not part of the parameters implemented for that module (yet). The error message also already lists the parameters which are available, the same as in the documentation for the module.
Another possible reason might be that the line indent for some of the parameters isn't correct.
