Managing Docker image versions

I'm creating a CI process for a project. The project consists of two components:
Petalinux image - the CI process builds a Linux image with petalinux-build and generates an SDK using petalinux-build --sdk. You don't need to know PetaLinux to answer this question; just know that this part of the project creates an SDK which is used by the second component. Each time I run this petalinux build, I also create a Docker image with the SDK in it, and I want to use this image when building the second component, the application.
Application - this is built using the Docker image created by the petalinux component.
The thing is, each time I build the Docker image I want to tag it with the version of the petalinux project and keep it in a Docker registry. So my Docker registry looks like:
-- sdk
|
+ - 1.0
|
+ - 1.3
|
+ - 1.6 .. etc.
Now, when I build the application, I want to use the latest sdk image. So basically, each time I build the petalinux project I want to push two tags to the Docker registry: the current version, say 1.9, and latest.
Does someone recognize such a pattern? What's the best way to do this in a Jenkins scripted pipeline?

To push several tags, just use a shell script that specifies several tags in the docker build command, such as
docker build -t image_namespace/image_name:tag1 -t image_namespace/image_name:latest .
Then also push both of them, i.e.:
docker push image_namespace/image_name:tag1
docker push image_namespace/image_name:latest
You may also find multiline shell scripts useful - see jenkins pipeline: multiline shell commands with pipe.
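If it helps, here is a minimal shell sketch of that tag-and-push step; the registry host, image name, and the VERSION variable (which would come from your petalinux project) are placeholders for the example:
REGISTRY=my-registry.example.com    # placeholder registry
VERSION=1.9                         # e.g. read from the petalinux project version
# Build once, attaching both the version tag and latest.
docker build -t "$REGISTRY/sdk:$VERSION" -t "$REGISTRY/sdk:latest" .
# Push both tags so latest always points at the most recent build.
docker push "$REGISTRY/sdk:$VERSION"
docker push "$REGISTRY/sdk:latest"
In a Jenkins scripted pipeline this would typically be wrapped in an sh step.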

Related

Github action to publish binaries compiled from docker buildx for multiple platforms

I am working on a project where I am creating multi-stage Docker builds that compile C++ code as part of the build stages, using buildx and build-push, for amd64, arm64, armv7, etc. on Debian.
I would also like to publish the compiled binaries to GitHub releases for the various platforms, along with publishing the final docker image that also contains the compiled code.
I am aware of methods to e.g. cat / pipe data out of a container.
What I'd like to know is whether there is a standard, GitHub Actions-integrated way to publish content compiled in Docker containers to GitHub, or if I need to manually copy the content out of the containers after building them.
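One sketch of the "copy the content out" approach mentioned above: buildx can write the result of a build stage to the runner's filesystem with --output type=local, and a later workflow step can attach those files to a GitHub release. The stage name binaries and the output directory are assumptions for the example:
# Export the contents of a build stage (assumed here to be named "binaries")
# to ./out instead of producing an image; with multiple platforms, buildx
# creates one subdirectory per platform.
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --target binaries \
  --output type=local,dest=./out \
  .
# The files under ./out can then be uploaded by a release step in the workflow.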

Docker multi stage builds, Kubernetes, and Distroless compatibility

I am facing "theoretical" compatibility issues when using distroless-based containers with Kubernetes 1.10.
Actually, distroless requires Docker 17.05 (https://github.com/GoogleContainerTools/distroless), whereas Kubernetes only supports Docker 17.03 (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies).
Is it possible to run distroless containers within Kubernetes 1.10 clusters without any issue?
Is it possible to build distroless-based images on a build server running Docker 17.05 and then deploy them on a Kubernetes 1.10 cluster (Docker 17.03)?
The requirement for 17.05 applies only to building a "distroless" image with docker build using a multi-stage Dockerfile. Once you have the image built, there is nothing stopping it from running on older Docker / containerd versions.
Docker has supported images with no distribution for ages now by using FROM scratch and leaving it to the image author to populate whatever the software needs, which in some cases, like fully static binaries, might be only the binary of the software and nothing more :)
It seems that you might need Docker 17.05+ only for building images from multi-stage Dockerfiles.
After you build an image with a multi-stage Dockerfile, it will be the same kind of image in the registry as if you had built it the old-fashioned way.
Taken from Use multi-stage builds:
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
The end result is the same tiny production image as before, with a significant reduction in complexity.
Kubernetes does not use Dockerfiles for creating pods. It uses ready-to-run images from the Docker registry instead.
That's why I believe that you can use such images in Kubernetes Pods without any issues.
But anyway, to create and push your images, you have to use a build machine with Docker 17.05+ that understands the new multi-stage syntax in the Dockerfile.
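A rough sketch of that workflow, with placeholder registry and image names:
# On the build machine (Docker 17.05+); the multi-stage Dockerfile only
# matters at build time.
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# On the Kubernetes nodes (Docker 17.03) the resulting image pulls and runs
# like any other image.
docker pull registry.example.com/myapp:1.0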

Tag docker files with build numbers

I would like to publish Docker images tagged with both semantic versions (like version 1.0) and build numbers (like build 3). Operationally this might come about in the following way:
docker build -t my-project:1.0-1 .
# make minor changes to the Dockerfile
docker build -t my-project:1.0-2 .
# make minor changes to the Dockerfile
docker build -t my-project:1.0-3 .
# release new version of the project
docker build -t my-project:1.1-1 .
I would expect some users to pin to particular build numbers
docker pull my-project:1.0-2
While other users would just ask for "the latest of version 1.0"
docker pull my-project:1.0
Does this work? Is there a better way to accomplish this goal?
Yes, this works. A tag is just a friendly name attached to an image ID. Any given image can have as many tags as you would realistically want.
docker tag myproject my-project:1.0-2
docker tag myproject my-project:1.0
Then, if you run docker images and find these tags, you'll see that the IMAGE ID for both tags is the same. Keep in mind you'd want to push both tagged images to your repository.
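A minimal sketch of the full flow, reusing the image name from the question (in practice the names would also carry your registry/namespace prefix):
# Build once with the build-number tag, then add the coarser version tag.
docker build -t my-project:1.0-2 .
docker tag my-project:1.0-2 my-project:1.0
# Push both so the registry serves the pinned and the floating tag.
docker push my-project:1.0-2
docker push my-project:1.0
# Users can then pin to an exact build or track the 1.0 line.
docker pull my-project:1.0-2
docker pull my-project:1.0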
Looking at a couple of popular Docker Hub repos for inspiration:
ruby, python, postgres

How to automate Multi-Arch-Docker Image builds

I have dockerized a Node.js app on GitHub. My Dockerfile is based on the official Node.js images. The official node repo supports multiple architectures (x86, amd64, arm) seamlessly. This means I can build the exact same Dockerfile on different machines, resulting in different images for the respective architectures.
So I am trying to offer the same architectures seamlessly for my app, too. But how?
My goal is automate it as much as possible.
I know I need in theory to create a docker-manifest, which acts as a docker-repo and redirects the end-users-docker-clients to their suitable images.
Docker Hub itself can monitor a GitHub repo and kick off an automated build. That would take care of the amd64 image. But what about the remaining architectures?
There is also the service called 'TravisCI', which I guess could take care of the arm build with the help of QEMU.
Then both repos could be referenced statically by the manifest repo, but this still leaves a couple of architectures unfulfilled.
But using multiple services/ways of building the same app feels wrong. Does anyone know a better and more complete solution to this problem?
It's basically running the same Dockerfile through a couple of machines and recording the results in a manifest.
Starting with the Docker 18.02 CLI, you can create multi-arch manifests and push them to Docker registries if you enable client-side experimental features. I was able to use VSTS and create a custom build task for multi-arch tags after the build. I followed this pattern.
docker manifest create --amend {multi-arch-tag} {os-specific-tag-1} {os-specific-tag-2}
docker manifest annotate {multi-arch-tag} {os-specific-tag-1} --os {os-1} --arch {arch-1}
docker manifest annotate {multi-arch-tag} {os-specific-tag-2} --os {os-2} --arch {arch-2}
docker manifest push --purge {multi-arch-tag}
On a side note, I packaged the 18.02 Docker CLI for Windows and Linux in my custom VSTS task so no install of Docker was required. The manifest command does not appear to need the Docker daemon to function correctly.
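For illustration, here is the same pattern with the placeholders filled in; the image names and platforms below are made up for the example:
# Enable client-side experimental features: on a CLI of that vintage this means
# setting "experimental": "enabled" in ~/.docker/config.json (newer CLIs also
# accept DOCKER_CLI_EXPERIMENTAL=enabled in the environment).
# Combine two architecture-specific tags under one multi-arch tag.
docker manifest create --amend myorg/myapp:1.0 myorg/myapp:1.0-amd64 myorg/myapp:1.0-arm64
docker manifest annotate myorg/myapp:1.0 myorg/myapp:1.0-amd64 --os linux --arch amd64
docker manifest annotate myorg/myapp:1.0 myorg/myapp:1.0-arm64 --os linux --arch arm64
docker manifest push --purge myorg/myapp:1.0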

What are the pros and cons of docker pull and docker build from Dockerfile?

I have been playing around with docker for about a month and now I have a few images.
Recently, I wanted to share one of them with someone else,
so I pushed that image X to Docker Hub so that he could pull it from my repository.
However, this seems like kind of a waste of time.
The total time spent here is the time I spend on docker push plus the time he spends on docker pull.
If I just sent him the Dockerfile needed to build that image X, then the cost would be
the time I spend writing the Dockerfile, the time to pass a text file, and the time he spends on docker build,
which is less than the previous approach, since I maintain my Dockerfiles well.
So, that is my question: what are the pros/cons of these two approaches?
Why did Docker Inc. choose to launch a DockerHub service rather than a DockerfileHub service?
Any suggestions or answers would be appreciated.
Thanks a lot!
Let's assume you build an image from a Dockerfile and push that image to Docker Hub. During the build you download some sources and build a program, but when the build is done the sources become unavailable. Now the Dockerfile can't be used anymore, but the image on Docker Hub still works. That's a pro for Docker Hub.
But it can be a con too. For example, if the source code contains a terrible bug like Heartbleed or Shellshock, the sources get patched but the image on Docker Hub does not get updated.
In fact, the time it takes to push an image and the time it takes to build one both depend on your environment.
For example, you may prebuild an image for an embedded system, but you won't want to build it on the embedded system itself.
Docker Hub also provides an Automated Builds feature which fetches a Dockerfile from GitHub and builds the image. So you can get an image's Dockerfile from GitHub; it's not necessary to have a separate service for sharing Dockerfiles.
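As a quick illustration of the two workflows being compared (the user, image, and repository names are placeholders):
# Option 1: share the built image via a registry.
docker push myuser/image-x:1.0      # on my machine
docker pull myuser/image-x:1.0      # on the other machine
# Option 2: share only the Dockerfile and build locally.
git clone https://github.com/myuser/image-x.git && cd image-x
docker build -t image-x:1.0 .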
