Is it possible to run Docker multi-stage build images on older versions of Docker?

Creating multi-stage builds requires Docker 17.05. Once created, is it possible to use these images on older versions of the Docker daemon, or are the images themselves in a slightly different format?

Of course it's possible. The image format has not changed. It's just the build process that changed.
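For example, a minimal multi-stage Dockerfile like this (the Go app, image tags, and registry name are illustrative, not from the question) can be built on a 17.05+ host, and the resulting image can be pulled and run by an older daemon like any other image:

# Dockerfile -- only the machine running docker build needs 17.05+
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# final stage -- the published image contains only these layers
FROM alpine:3.18
COPY --from=build /app /usr/local/bin/app
ENTRYPOINT ["app"]

# on the build host (17.05+):
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# on a host running an older daemon:
docker run registry.example.com/myapp:1.0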

Related

Rebuild docker image by reusing the same tag?

I've gone through multiple questions posted on the forum but didn't get clarity regarding my requirement.
I'm building a Docker image after every successful CI build; there will hardly be 1 to 2 lines of changes in the Dockerfile for each successful build.
Docker Build Command:
$(docker_registry)/$(Build.Repository.Name):azul
Docker Push Command:
$(docker_registry)/$(Build.Repository.Name):azul
I want to overwrite the current Docker image with the latest one (from the latest CI build changes) but retain the same tag, azul. Does Docker support this?
Yes, Docker supports it. Every instruction you execute results in a new layer in the image that contains the changes compared to the previous layer. After modifying the Dockerfile, new layers will be created for the changed instructions and the unchanged preceding layers will be reused.
If you want to clean-build the whole image with no cached layers, you can use the --no-cache parameter.
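As a sketch (the registry and repository names are placeholders), a clean rebuild that overwrites the azul tag looks like this:

# rebuild every layer, ignoring the cache
docker build --no-cache -t registry.example.com/myrepo:azul .
# pushing the same tag again makes the registry serve the new image for :azul
docker push registry.example.com/myrepo:azul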
Mechanically this works. The new image will replace the old one with that name. The old image will still be physically present on the build system but if you look at the docker images output it will say <none> for its name; commands like docker system prune can clean these up.
The problems with this approach are on the consumer end. If I docker run registry.example.com/image:azul, Docker will automatically pull the image only if it's not already present. This can result in you running an older version of the image that happens to be on a consumer's system. This is especially a problem in cluster environments like Kubernetes, where you need a change in the text of the image name in a Kubernetes deployment specification to trigger an update.
In a CI system especially, I'd recommend assigning some sort of unique tag to every build. This could be based on the source control commit ID, the branch name and build number, the current date, or something else. You can also create a fixed tag like this as a convenience to developers (an image is allowed to have multiple tags), but I'd plan not to use it for actual deployments; see the sketch below.
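For example (the commit variable and registry name here are illustrative, not from the question):

# give every CI build its own immutable tag, e.g. the source commit
docker build -t registry.example.com/myrepo:${GIT_COMMIT} .
# the same image can additionally carry the convenience tag
docker tag registry.example.com/myrepo:${GIT_COMMIT} registry.example.com/myrepo:azul
docker push registry.example.com/myrepo:${GIT_COMMIT}
docker push registry.example.com/myrepo:azul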

Docker multi-stage builds, Kubernetes, and Distroless compatibility

I am facing "theoretical" compatibility issues when using distroless-based containers with Kubernetes 1.10.
Distroless requires Docker 17.05 (https://github.com/GoogleContainerTools/distroless), whereas Kubernetes only supports Docker 17.03 (https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies).
Is it possible to run distroless containers within Kubernetes 1.10 clusters without any issue?
Is it possible to build distroless-based images on a build server running Docker 17.05 and then deploy them on a Kubernetes 1.10 cluster (Docker 17.03)?
The requirement for 17.05 applies only to building a "distroless" image with docker build using a multi-stage Dockerfile. Once the image is built, there is nothing stopping it from running on older Docker / containerd versions.
Docker has supported images with no distribution for ages now by using FROM scratch and leaving it to the image author to populate whatever the software needs, which in some cases, like fully static binaries, might be only the binary of the software and nothing more :)
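A sketch of what such a build might look like (the Go program and the gcr.io/distroless/static base are assumptions used for illustration): only the build host needs 17.05+; the nodes running 17.03 just pull and run the final image.

FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# distroless base: no shell or package manager, just what the binary needs to run
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]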
It seems that you need Docker 17.05+ only for building images from multi-stage Dockerfiles.
After you build an image with a multi-stage Dockerfile, it will be the same kind of image in the registry as if you had built it the old-fashioned way.
Taken from Use multi-stage builds:
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
The end result is the same tiny production image as before, with a significant reduction in complexity.
Kubernetes does not use Dockerfiles for creating pods. It uses ready-to-run images from a Docker registry instead.
That's why I believe you can use such images in Kubernetes Pods without any issues.
But to create and push your images, you have to use a build machine with Docker 17.05+ that understands the multi-stage syntax in the Dockerfile.

Should I Compile My Application Inside of a Docker Image

Most of the time I am developing Java apps and simply using Maven, so my builds should be reproducible (at least that's what Maven says).
But say you are compiling a C++ program or something a little more involved: should you build inside of Docker?
Or should you ideally use Vagrant or another technology to produce reproducible builds?
How do you manage reproducible builds with Docker?
You can, but not in your final image, as that would mean a much larger image than necessary: it would include all the compilation tools, instead of being limited to only what you need to execute the resulting binary.
You can see an alternative in "How do I build a Docker image for a Ruby project without build tools?"
- I use an image to build.
- I commit the resulting stopped container as a new image (with a volume including the resulting binary).
- I use an execution image (one which only contains what you need to run) and copy the binary from the other image. I commit the resulting container again.
The final image includes the compiled binary and the execution environment.
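The modern equivalent of that commit-based workflow is a multi-stage build; here is a sketch for a C++ program (image tags, file names, and compiler flags are assumptions, not a prescribed setup):

# build stage: full toolchain, discarded after the build
FROM gcc:12 AS build
WORKDIR /src
COPY . .
RUN g++ -O2 -static -o myapp main.cpp

# execution stage: only what is needed to run the binary
FROM debian:bookworm-slim
COPY --from=build /src/myapp /usr/local/bin/myapp
CMD ["myapp"]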
I wanted to post an answer to this as well, to build on VonC's answer. I just had Red Hat OpenShift training, and they use a tool called Source-to-Image (s2i), which uses Docker to create Docker images. This strategy is great for managing a private (or public) cloud, where your build may be compiled on different machines but you need to keep the build environment consistent.

What are the pros and cons of docker pull and docker build from Dockerfile?

I have been playing around with docker for about a month and now I have a few images.
Recently, I wanted to share one of them with someone else, so I pushed that image X to my Docker Hub repository so that he could pull it from there.
However, this seems like kind of a waste of time.
The total time spent here is the time I spend on docker push plus the time he spends on docker pull.
If I just sent him the Dockerfile needed to build that image X, the cost would be the time I spend writing the Dockerfile, the time to pass along a text file, and the time he spends on docker build, which is less than the previous way, since I maintain my Dockerfiles well.
So, that is my question: what are the pros/cons of these two approaches?
Why did Docker Inc. choose to launch a DockerHub service rather than a DockerfileHub service?
Any suggestions or answers would be appreciated.
Thanks a lot!
Let's assume you build an image from a Dockerfile and push that image to Docker Hub. During the build you download some sources and build a program. But when the build is done the sources become unavailable. Now the Dockerfile can't be used anymore but the image on Docker Hub is still working. That's a pro for Docker Hub.
But it can be a con too: for example, the source code might contain a terrible bug like Heartbleed or Shellshock. The sources then get patched, but the image on Docker Hub does not get updated.
In fact, the time to push an image and the time to build an image depend on your environment.
For example, you may prebuild an image for an embedded system, but you won't want to build it on the embedded system itself.
Docker Hub also provides an Automated Builds feature which fetches a Dockerfile from GitHub and builds the image, so you can get the Dockerfile of an image from GitHub; it's not necessary to have a separate service for sharing Dockerfiles.
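For comparison, the two workflows side by side (myuser/imagex is a placeholder name, not from the question):

# sharing the built image: reproducible bytes, but costs a push and a pull
docker push myuser/imagex:1.0        # you
docker pull myuser/imagex:1.0        # the other person

# sharing only the Dockerfile: smaller transfer, but the result depends on
# whatever the Dockerfile downloads at build time
docker build -t imagex:1.0 .         # the other person rebuilds locally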

Images are being cached even if there are changes

I have an automated build on Docker Hub for an image based on Ubuntu with some custom configurations, which I then reuse as a base image in other Dockerfiles for particular projects. This works okay.
I made a change to it and committed to GitHub, which then triggered the automatic build on Docker Hub.
From one of these other projects, I'm calling FROM myuser/myimage at the beginning of the Dockerfile, but it's not getting the latest image with the changes; it keeps using the cached old one.
Shouldn't this happen automatically?
You need to docker pull the latest version. Docker looks for the image referenced in FROM locally; it doesn't notice if that tag has been updated in the registry it came from. I have a script that runs docker pull before building images.
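A minimal sketch (image names are placeholders): either pull the base image explicitly before building, or pass --pull to docker build so it always checks the registry for a newer version of the base image.

# refresh the base image first
docker pull myuser/myimage
docker build -t myuser/childproject .

# or let the build do it
docker build --pull -t myuser/childproject .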
