Is a Docker Hub Repository only for a single Docker File or multiple Docker Files - docker

Is a Docker Hub repository only for a single Dockerfile or for multiple Dockerfiles?
I am unclear: in my case I have two repositories, one for an Intel build (using Automated Build) and another for an Arm build of the same application, which I had to build locally and push to Docker Hub.
Is that how you are meant to do it?

With multi-architecture support and manifests it's possible to have many images for many architectures sharing the same tag.
I already answered one of your other posts with this link: https://blog.slucas.fr/blog/docker-multiarch-manifest-hub-2/
Check this Docker image: you can do a docker pull seblucas/alpine-homeassistant:latest for armhf, arm64 and amd64 without any problem (and each architecture will get its own image). The same is true for many other images provided by Docker (alpine, for example).

Yes, you can have multiple Dockerfiles in a repository by using tags. Each tag corresponds to a Dockerfile, so you could have two tags, one called :intel and another called :arm, in the same repository.
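As a rough sketch (the myuser/myapp repository name is hypothetical), each architecture is built and pushed to its own tag in the same repository:

# on the Intel machine
docker build -t myuser/myapp:intel .
docker push myuser/myapp:intel

# on the Arm machine
docker build -t myuser/myapp:arm .
docker push myuser/myapp:arm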

Related

How to push multiple digest with different OS/ARCH under one tag in docker?

I am relatively new to Docker and saw in other repositories that multiple digests can be pushed under the same tag for different OS/ARCH combinations.
How can I achieve the same? Right now, whenever I do docker push [REPO_LINK] from different architectures, each push replaces the previously pushed image with one for its own architecture. Thanks!
You might be looking for a fat manifest, a.k.a. a manifest list.
It enables publishing images for multiple architectures under the same tag. When building on multiple machines, you need to use the docker manifest command.
Once you have pushed the images from the different machines, you finally have to combine their manifests into a single one (called a manifest list). See the official docs for more.
This blog post was already mentioned in one comment, but you can still use its docker manifest example to combine the manifests into a single list, even if you are working on more than one machine.
Related question: Is it possible to push docker images for different architectures separately?
There are two options I know of.
First, you can have buildx run builds on multiple nodes, one for each platform, rather than using qemu. For that, you would use docker buildx create --append to add the additional nodes to the builder instance. The downside is that the nodes need to be accessible from the node running docker buildx, which typically doesn't apply to ephemeral cloud build environments.
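A rough sketch of that setup (the builder name, node names, and SSH endpoints are hypothetical):

# create a builder with an amd64 node, then append an arm64 node
docker buildx create --name multiarch --node amd64-node ssh://user@amd64-host
docker buildx create --append --name multiarch --node arm64-node --platform linux/arm64 ssh://user@arm64-host

# build and push both platforms under a single tag
docker buildx build --builder multiarch --platform linux/amd64,linux/arm64 -t myuser/myapp:latest --push .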
The second option is to use the experimental docker manifest command. Each builder would push a separate tag. And at the end of all those, you would use docker manifest create to build a manifest list and docker manifest push to push that to a registry. Since this is an experimental feature, you'll want to export DOCKER_CLI_EXPERIMENTAL=enabled to see it in the command line. (You can also modify ~/.docker/config.json to have an "experimental": "enabled" entry.)
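For reference, enabling the experimental CLI can be done per shell or persistently (a sketch; the config path shown is the default one):

# per shell session
export DOCKER_CLI_EXPERIMENTAL=enabled

# or add this entry to ~/.docker/config.json
#   { "experimental": "enabled" }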

Is it possible to push docker images for different architectures separately?

From what I know, docker buildx build --push will overwrite existing image architectures with the one you specified in the --platform parameter. As I understand it, you have to build and push for all architectures at the same time when using buildx. However, I know that official Docker images use an arm64 build farm to build linux/arm64 images. How is that possible? Do they just use docker push without buildx? If so, does that mean docker push doesn't overwrite existing architectures, unlike buildx? What's the best way to do this if I want to build and push multiple architectures on separate machines?
You can build and push with separate commands on different hosts in a cluster, each sending to a different tag. And then after all tags for each platform have been pushed, you can use docker manifest to create a multiplatform manifest that points to all images with a single tag. This tool currently requires experimental support to be enabled.
Further details on docker manifest can be found in the docs: https://docs.docker.com/engine/reference/commandline/manifest/
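A minimal sketch of that workflow, assuming a hypothetical myuser/myapp repository and one build host per architecture:

# on the amd64 host
docker build -t myuser/myapp:amd64 .
docker push myuser/myapp:amd64

# on the arm64 host
docker build -t myuser/myapp:arm64 .
docker push myuser/myapp:arm64

# on any host with the experimental CLI enabled: combine and push the manifest list
docker manifest create myuser/myapp:latest myuser/myapp:amd64 myuser/myapp:arm64
docker manifest push myuser/myapp:latest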

How to automate Multi-Arch-Docker Image builds

I have dockerized a Node.js app on GitHub. My Dockerfile is based on the official Node.js images. The official node repo supports multiple architectures (x86, amd64, arm) seamlessly. This means I can build the exact same Dockerfile on different machines, resulting in different images for the respective architectures.
So I am trying to offer the same architectures seamlessly for my app, too. But how?
My goal is to automate it as much as possible.
I know that, in theory, I need to create a Docker manifest, which acts as a Docker repo and redirects end users' Docker clients to the image suitable for their architecture.
Docker Hub itself can monitor a GitHub repo and kick off an Automated Build. That would take care of the amd64 image. But what about the remaining architectures?
There is also a service called TravisCI, which I guess could take care of the Arm build with the help of qemu.
Both repos could then be referenced statically by the manifest repo, but this still leaves a couple of architectures uncovered.
But using multiple services/ways of building the same app feels wrong. Does anyone know a better and more complete solution to this problem?
It's basically a matter of running the same Dockerfile on a couple of machines and recording the results in a manifest.
Starting with the Docker 18.02 CLI you can create multi-arch manifests and push them to Docker registries if you enable client-side experimental features. I was able to use VSTS and create a custom build task for multi-arch tags after the build. I followed this pattern:
docker manifest create --amend {multi-arch-tag} {os-specific-tag-1} {os-specific-tag-2}
docker manifest annotate {multi-arch-tag} {os-specific-tag-1} --os {os-1} --arch {arch-1}
docker manifest annotate {multi-arch-tag} {os-specific-tag-2} --os {os-2} --arch {arch-2}
docker manifest push --purge {multi-arch-tag}
On a side note, I packaged the 18.02 Docker CLI for Windows and Linux in my custom VSTS task so no install of Docker was required. The manifest command does not appear to need the Docker daemon to function correctly.
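For illustration (the tag names are hypothetical), the placeholders above could be filled in like this, and docker manifest inspect can be used to verify the resulting manifest list:

docker manifest create --amend myuser/myapp:latest myuser/myapp:linux-amd64 myuser/myapp:linux-arm64
docker manifest annotate myuser/myapp:latest myuser/myapp:linux-amd64 --os linux --arch amd64
docker manifest annotate myuser/myapp:latest myuser/myapp:linux-arm64 --os linux --arch arm64
docker manifest push --purge myuser/myapp:latest

# check which platforms the tag now covers
docker manifest inspect myuser/myapp:latest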

Build chain in the cloud?

(I understand this question is somewhat out of scope for Stack Overflow, because it contains several problems and is somewhat vague. Suggestions on how to ask it in the proper way are welcome.)
I have some open source projects depending on each other.
The code resides on GitHub, the builds happen on Shippable, using Docker images which in turn are built on Docker Hub.
I have set up an artifact repo and a Debian repository where Shippable builds put the packages, and Docker builds use them.
The build chain looks like this in terms of deliverables:
pre-zenta docker image
zenta docker image (two steps of docker build because it would time out otherwise)
zenta debian package
zenta-tools docker image
zenta-tools debian package
xslt docker image
adadocs artifacts
Currently I am triggering the builds by pushing to GitHub and sometimes rerunning failed builds on Shippable after the Docker build has run.
I am looking for solutions for the following problems:
Where should I put the Dockerfiles? Right now they are in the repo of the package that needs the resulting Docker image to build. This way all the information needed to build the package is in one place, but sometimes I have to trigger an extra build to get the package actually built.
How can I trigger builds automatically?
..., in a way that supports git-flow? For example, if I change the code on the zenta develop branch, I want to make sure that zenta-tools will build and test against the development version of it before merging into master.
Is there a tool with which I can get an overview of the health of the whole build chain?
Since your question is related to Shippable, I've created a support issue for you here: https://github.com/Shippable/support/issues/2662. If you are interested in discussing the best way to handle your scenario, you can also send me an email at support#shippable.com. You can set up your entire flow, including building the Docker images, using Shippable.

What are the pros and cons of docker pull and docker build from Dockerfile?

I have been playing around with docker for about a month and now I have a few images.
Recently, I wanted to share one of them with someone else, so I pushed that image X to my Docker Hub repository so that he could pull it.
However, this seems like kind of a waste of time.
The total time spent here is the time I spend on docker push plus the time he spends on docker pull.
If I just sent him the Dockerfile needed to build that image X, the cost would be the time I spend writing the Dockerfile, the time to pass along a text file, and the time he spends on docker build, which is less than the previous way since I maintain my Dockerfiles well.
So, that is my question: what are the pros/cons of these two approaches?
Why did Docker Inc. choose to launch a Docker Hub service rather than a 'DockerfileHub' service?
Any suggestions or answers would be appreciated.
Thanks a lot!
Let's assume you build an image from a Dockerfile and push that image to Docker Hub. During the build you download some sources and build a program. But when the build is done the sources become unavailable. Now the Dockerfile can't be used anymore but the image on Docker Hub is still working. That's a pro for Docker Hub.
But it can be a con too: for example, if the source code contains a terrible bug like Heartbleed or Shellshock, the sources get patched but the image on Docker Hub does not get updated.
In fact, the time it takes to push an image versus the time it takes to build one depends on your environment.
For example, you may prebuild an image for an embedded system, but you won't want to build it on the embedded system itself.
Docker Hub provides an Automated Builds feature which will fetch the Dockerfile from GitHub and build the image. So you can get an image's Dockerfile from GitHub; it's not necessary to have a separate service for sharing Dockerfiles.
