Building multi-architecture Docker images with Ansible?

As it is right now, it's possible to docker build an image using the community.docker collection:
(Example from documentation)
- name: Build an image and push it to a private repo
  community.docker.docker_image:
    build:
      path: ./sinatra
    name: registry.ansible.com/chouseknecht/sinatra
    tag: v1
    push: yes
    source: build
My question is simple. According to their documentation, the platform field only seems to allow for one architecture:
Platform in the format os[/arch[/variant]].
(Notice "platform" and not "platforms", and that the type is "string" and not a list of strings.)
Is it possible to do a multi-arch build (for example, amd64 and arm) using the community.docker collection? Of course, I can use shell/command instead with something like:
- name: Multiarch build
  shell: |
    docker buildx build --platform amd64,arm --push -t myimage .
But is it possible using what's available now within the collection?

I'm new to using Ansible to build Docker images, but I wanted to do this too and agree that specifying multiple platforms doesn't appear to be supported, so I put in a feature request on their GitHub here: https://github.com/ansible-collections/community.docker/issues/467.
Just to see what would happen, I tried setting build.platform to "linux/arm64,linux/amd64", and the answer is that nothing happens. Ansible said that it was okay and nothing changed, even when I made an edit to the Dockerfile that should've triggered a build. I found that the same happens if I specify an invalid platform name.
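Until the collection gains multi-platform support, the buildx workaround can be made a bit more robust as a command task. This is only a sketch: the image name, registry, and chdir path are hypothetical, and it assumes docker buildx with a multi-arch-capable builder is already set up on the host:

- name: Multiarch build and push (sketch; names are hypothetical)
  command: >
    docker buildx build
    --platform linux/amd64,linux/arm64
    --push
    -t registry.example.com/myimage:latest
    .
  args:
    chdir: ./myimage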

Related

Auto update container image when new build is released into kubernetes with gitlab ci/cd and helm [duplicate]

I have a private repository. I want to update my container image in Kubernetes automatically when the image is updated in my private repository. How can I achieve this?
Kubernetes natively does not have a feature to automatically redeploy pods when there is a new image. Ideally what you want is a tool that enables GitOps-style deployment, where a state change in git is synced to the Kubernetes cluster. Flux and ArgoCD are open-source tools that support GitOps.
Recently there was an announcement of combining these two projects as ArgoFlux.
You should assign some sort of unique identifier to each build. This could be based off a source-control tag (if you explicitly tag releases), a commit ID, a build number, a time stamp, or something else; but the important detail is that each build creates a unique image with a unique name.
Once you do that, then your CI system needs to update the Deployment spec with a new image:. If you're using a tool like Kustomize or Helm, there are standard patterns to provide this; if you are using kubectl apply directly, it will need to modify the deployment spec in some way before it applies it.
This combination of things means that the Deployment's embedded pod spec will have changed in some substantial way (its image: has changed), which will cause the Kubernetes deployment controller to automatically do a rolling update for you. If this goes wrong, the ordinary Kubernetes rollback mechanisms will work fine (because the image with yesterday's tag is still in your repository). You do not need to manually set imagePullPolicy: or manually cause the deployment to restart; changing the image tag in the deployment is enough to cause a normal rollout to happen.
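As a concrete illustration of that flow (not from the answer above), a CI job could stamp each build with its commit ID and point the Deployment at it; the deployment, container, and registry names here are hypothetical:

# update the Deployment's pod template to the uniquely tagged build
kubectl set image deployment/myapp myapp=registry.example.com/myapp:3a4f9c2

Because the pod template changed, the deployment controller then performs a normal rolling update.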
Have a look at the various image pull policies.
imagePullPolicy: Always might come closest to what you need. I don't know if there is a way in "vanilla" K8s to achieve an automatic image pull, but I know that Red Hat's OpenShift (or OKD, the free version) works with image streams, which do exactly what you ask for.
The imagePullPolicy and the tag of the image affect when the kubelet attempts to pull the specified image.
imagePullPolicy: IfNotPresent: the image is pulled only if it is not already present locally.
imagePullPolicy: Always: the image is pulled every time the pod is started.
imagePullPolicy is omitted and either the image tag is :latest or it is omitted: Always is applied.
imagePullPolicy is omitted and the image tag is present but not :latest: IfNotPresent is applied.
imagePullPolicy: Never: the image is assumed to exist locally. No attempt is made to pull the image.
So to achieve this you have to set imagePullPolicy: Always and restart your pod, and it should pull a fresh copy of the latest image. I don't think there is any other way in K8s.
See the Kubernetes documentation: Container Images.
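For reference, a minimal pod spec with that policy set might look like this (names and registry are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest
    imagePullPolicy: Always   # pull the image on every pod start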
I just wrote a bash script to achieve this. My imagePullPolicy option is Always and I am running this script with crontab. (You could check continuously with an infinite loop instead.) It checks the repository, and if any change has occurred, it deletes the pod (the updated image is then pulled automatically because imagePullPolicy is set to Always).
#!/bin/bash
registry="repository_name"
username="user_name"
password="password"

## With this you can append every image in the repository to the array:
#images=($( echo $(curl -s https://"$username":"$password"@"$registry"/v2/_catalog | jq .repositories | jq .[])))
## Or you can set your image array manually:
images=( image_name_0 image_name_1 )

for i in "${images[@]}"
do
  # compare the stored manifest layers with the registry's current ones
  old_image=$(cat /imagerecords/"$i".txt)
  new_image=$(echo $(curl -s https://"$username":"$password"@"$registry"/v2/"$i"/manifests/latest | jq .fsLayers | jq .[]))
  if [ "$old_image" == "$new_image" ]; then
    echo "image: $i is already up-to-date"
  else
    echo "image: $i is updating"
    kubectl delete pod pod_name
    echo "$new_image" > /imagerecords/"$i".txt
  fi
done
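To run it from crontab as described, an entry along these lines would work (the script path, log path, and five-minute interval are hypothetical):

# check the registry every 5 minutes and delete the pod if the manifest changed
*/5 * * * * /path/to/check_images.sh >> /var/log/image-check.log 2>&1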
This functionality is provided by the open-source project argocd-image-updater:
https://github.com/argoproj-labs/argocd-image-updater
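If you already deploy through Argo CD, the updater is driven by annotations on the Application resource. A minimal sketch, with a hypothetical application and image name (check the project's docs for the exact annotation semantics):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  annotations:
    # tell argocd-image-updater which image(s) to track for this app
    argocd-image-updater.argoproj.io/image-list: myimage=registry.example.com/myimage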

Docker: "Unable to pull 'tomcat:8.5' : no matching manifest for linux/arm/v7 in the manifest list entries"

On a Raspberry Pi, I got an error when I tried to set up ODK-X Sync Endpoint.
I referred to the instructions at https://docs.odk-x.org/sync-endpoint-manual-setup/
This is the command I ran:
$ mvn clean install
The error message is as below:
Failed to execute goal io.fabric8:docker-maven-plugin:0.34.1:build (start) on project sync-endpoint-docker-swarm: Unable to pull 'tomcat:8.5' : no matching manifest for linux/arm/v7 in the manifest list entries
How can I solve this problem?
The error is fairly clear: tomcat:8.5, which sync-endpoint depends on, does not provide a manifest for the armv7 architecture. You can see this on the page for tomcat's 8.5 tag, showing that it only provides images for linux/amd64 and linux/arm64/v8.
If you need to use that Dockerfile as is and cannot change the tomcat version used, you will need to use a device with a matching architecture. If you are able to modify the Dockerfile and are willing to use a different version of tomcat, it appears that there are other tags which do support linux/arm32/v7.
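If you'd rather check a tag's supported platforms from the command line than on Docker Hub, docker buildx imagetools inspect prints the entries in a tag's manifest list (assuming buildx is available):

docker buildx imagetools inspect tomcat:8.5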

Deploying Docker images using Ansible

After reviewing this amazing forum, I thought it's time to join in...
I'm having an issue with a playbook that deploys multiple Docker containers.
My Ansible version is: 2.5.1
My Python version is: 3.6.9
My Linux images are Ubuntu 18.04, from OSboxes.
The Docker service is installed and running on both of the machines.
According to this website, all you need to do is follow the instructions and everything will work perfectly. :)
https://www.techrepublic.com/article/how-to-deploy-a-container-with-ansible/
(The playbook i use is in the link above)
But after following the steps and using the playbook, I got this error:
TASK [Pull default Docker image] ******************************************************************************************************
fatal: [192.168.1.38]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_image) module: source Supported parameters include: api_version, archive_path, buildargs, cacert_path, cert_path, container_limits, debug, docker_host, dockerfile, filter_logger, force, http_timeout, key_path, load_path, name, nocache, path, pull, push, repository, rm, ssl_version, state, tag, timeout, tls, tls_hostname, tls_verify, use_tls"}
I'll be happy for your support on this issue.
The source: pull option was added in Ansible 2.8. Since you are using Ansible 2.5.1, that option is not available.
You can either use a later version, 2.8 or above, or just remove that line from your playbook and it should work:
- name: Pull default Docker image
  docker_image:
    name: "{{ default_container_image }}"
You won't have the guarantee that the image has been newly pulled from a registry. If that's important in your case, you can remove any locally cached version of the image first:
- name: Remove Docker image
  docker_image:
    name: "{{ default_container_image }}"
    state: absent

- name: Pull default Docker image
  docker_image:
    name: "{{ default_container_image }}"
So according to the docs for the docker_image module in Ansible 2.5, there is indeed no source parameter.
Nevertheless, the docs for version 2.9 tell us it was "added in 2.8"! So you have to update your Ansible version to be able to run the linked playbook as-is. That's your best option.
Otherwise, another option would be to keep your version 2.5 and simply remove line 38:
(-) source: pull
But I don't know what the default behaviour was before 2.8, so I cannot guarantee that it will do what you expect!
Finally, I got this playbook to sing! :)
I did the following.
Upgraded the Ansible version, so now it's running on version 2.9.15.
My python3 version is: 3.6.9
After upgrading Ansible to the version I mentioned above, I got an error message: Failed to import the required python library (Docker SDK for Python (python >= 2.7) or docker-py (python 2.6)) on osboxes (this is my machine) python...
So, after Googling this error, I found this URL:
https://neutrollized.blogspot.com/2018/12/cannot-have-both-docker-py-and-docker.html
So, I decided to remove docker from my machines, including the Python packages that were installed using pip (I used pip list to see if a docker package was installed, and removed it with pip uninstall).
After removing docker from my machines, I added one more play to the playbook: install docker-compose (that's what solved my problem, and it took care of the Python versions).
Just follow the URL I attached in my answer.
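For reference, the extra play can be as small as this sketch; it assumes pip is available on the target and uses Ansible's pip module:

- name: Install docker-compose (also pulls in a compatible Docker SDK for Python)
  pip:
    name: docker-compose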
According to the error message, the Ansible module docker_image was given a parameter that is not among the parameters implemented for that module (yet). The error message also lists the parameters that are available, the same as in the documentation for the module.
Another possible reason might be that the indentation of some of the parameters isn't correct.

How to get resolved sha digest for all images within Kubernetes yaml?

Docker image tags are mutable, in that image:latest and image:1.0 can both point to image@sha256:....., but when version 1.1 is released, image:latest stored within a registry can be pointed to an image with a different sha digest. Pulling an image with a particular tag now does not mean that an identical image will be pulled next time.
If a Kubernetes YAML resource definition refers to an image by tag (not by digest), is there a means of determining what sha digest each image will actually resolve to, before the resource definition is deployed? Is this functionality supported using kustomize or kubectl?
Use case is wanting to determine what has actually been deployed in one environment before deploying to another (I'd like to take a hash of the resolved resource definition and could then use this to understand whether image:1.0 to be deployed to PROD refers to the same image:1.0 that was deployed to UAT).
Are there any tools that can be used to support this functionality?
For example, given the following YAML, is there a way of replacing all images with their resolved digests?
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: image1
    image: image1:1.1
    command:
    - /bin/sh -c some command
  - name: image2
    image: image2:2.2
    command:
    - /bin/sh -c some other command
To get something like this:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: image1
    image: image1@sha256:....
    command:
    - /bin/sh -c some command
  - name: image2
    image: image2@sha256:....
    command:
    - /bin/sh -c some other command
I'd like to be able to do something like pipe yaml (that might come from cat, kustomize or kubectl ... --dry-run) through a tool and then pass to kubectl apply -f:
cat mydeployment.yaml | some-tool | kubectl apply -f -
EDIT:
The background to this is the need to be able to prove to auditors/regulators that what is about to be deployed to one env (PROD) is exactly what has been successfully deployed to another env (UAT). I'd like to use normal tags in the deployment template and at the time of deploying to UAT, take a snapshot of the template with the tags replaced with the digests of the resolved images. That snapshot will be what is deployed (via kubectl or similar). When deploying to PROD, that same snapshot will be used.
This tool supports exactly what you need...
kbld: https://get-kbld.io/
It resolves a name-tag pair reference (nginx:1.17) into a digest reference
(index.docker.io/library/nginx@sha256:2539d4344...).
It looks like it integrates quite well with templating tools like Kustomize or even Helm.
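That makes it a drop-in for the pipeline shape described in the question; something like the following, assuming kbld is installed and accepts stdin via -f -:

cat mydeployment.yaml | kbld -f - | kubectl apply -f -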
You can get information about all the containers in use with this command. It will list all namespaces, with pod names, container image names, and the sha256 imageID of each image.
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{","}{.metadata.name}{","}{range .status.containerStatuses[*]}{.image}{", "}{.imageID}{", "}{end}{end}' | sort
is there a means of determining what sha digest each image will actually resolve to, before the resource definition is deployed?
No, and in the case you describe, it can vary by node. The Deployment will create some number of Pods, each Pod will get scheduled on some Node, and the Kubelet there will only pull the image if it doesn’t have something with that tag already. If you have two replicas, and you’ve changed the image a tag points to, then on node A it could use the older image that was already there, but on node B where there isn’t an image, it will pull and get the newer version.
The best practice here is to avoid changing the image a tag points to. Give each build coming out of your CI system a unique tag (a datestamp or source control commit ID for example) and use that in your Kubernetes object specifications. That avoids this problem entirely.
A workaround is to set
imagePullPolicy: Always
in your pod specs, which will force the node to pull a newer version, but this is unnecessary overhead in most cases.
Here's another one: k8s-digester, from folks at Google. It's a bit different, in the sense that the main focus is on cluster-side changes (via an admission controller), even though client-side KRM functions seem to also be possible.
Overall, kbld seems to be more about the development experience and adoption within your CLI/CI-CD/orchestration, while k8s-digester is more about administration on the cluster side.

Deploying cgal docker

I'm trying to deploy the official CGAL docker image. From reading the README, I understand that after downloading a specific image (e.g. I want to open a docker container with Ubuntu 16 + CGAL and all of its dependencies) using the following command:
docker pull cgal/testsuite-docker:ubuntu # get a specific image by replacing TAG with some tag
I need to install the CGAL library itself using:
./test_cgal.py --user **** --passwd **** --images cgal-testsuite/ubuntu
The thing is that eventually I want to start the container with an interactive shell, i.e.:
docker run --rm -it -v $(pwd):/source somedocker
And I couldn't understand where the generated image ends up after the CGAL installation script runs.
Those images are not for running CGAL. They are only images we use to define an environment for our testsuite and run tests in it, including compiling CGAL.
test_cgal.py will download the integration branch, which is rarely working, as it is the branch in which we merge our PRs to test them nightly. Don't use this to get a working CGAL. To my knowledge, there is no such image as the one you are looking for. No official one, anyway.
Furthermore, installing CGAL at runtime in this image will not modify the image; once you close the container, your installation will be lost. You need to specify how to install CGAL in the Dockerfile of your image and then build it if you want a "ready to use" image.
You can use the Dockerfile of the image you found to write your own, as all the dependencies should be specified in it, but you need to edit it to download CGAL, and maybe build it if you don't want the header-only version. This is not done in test_cgal.py or anywhere in this docker repository.
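As a starting point, here is a minimal sketch of such a Dockerfile; the base image and package names are assumptions (the distribution's CGAL development packages), not something from the CGAL project:

FROM ubuntu:16.04
# install the distribution's CGAL development packages and a toolchain;
# build CGAL from source instead if you need a newer version
RUN apt-get update && \
    apt-get install -y libcgal-dev cmake g++ && \
    rm -rf /var/lib/apt/lists/*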
