Docker Hub: Repository Links for Automated Builds

In Docker Hub one can configure Automated Builds by clicking on the corresponding button in the upper-right corner of the Builds tab. Apart from configuring a rebuild on pushing to the source-code repository containing the Dockerfile, one can also set "Repository Links" to "Enable for Base Image". This is intended to "Trigger a build in this repository whenever the base image is updated on Docker Hub".
I got this to work in some simple toy-example cases. But it fails to trigger on a more complex example. My Dockerfile looks something like this:
FROM mediawiki AS orig
FROM alpine AS build
COPY --from=orig <file> /
RUN <patch-command of file>
FROM mediawiki
COPY --from=build <file> /
Why does the rebuild not trigger if either of the base images gets updated? Is this because I have more than one FROM line in the Dockerfile? Or does the warning "Only works for non-official images" apply to the base image instead of the destination image?
If the answer to my last question above is "yes", is there some way to still get the desired effect of rebuilding on base image updates?

"Only works for non-official images"
I'm fairly sure it doesn't work for any official images like alpine, golang, etc. The reason is that so many images depend on those base images that a single update would be a huge burden on their infrastructure to rebuild everyone's images.
My guess is that the logic to determine whether an image uses an official image or not is very basic and if it detects FROM <some-official-image> anywhere in your Dockerfile then it probably won't get automatically rebuilt.
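If the limitation really is the official base image, one possible workaround (not something the Docker Hub docs promise; the repository names below are placeholders) is to mirror the official image into your own namespace and base the Dockerfile on that mirror, so the repository link points at a non-official image:
# Periodically (e.g. from a cron job or CI schedule) refresh a mirror of the official image
docker pull mediawiki:latest
docker tag mediawiki:latest mynamespace/mediawiki-mirror:latest
docker push mynamespace/mediawiki-mirror:latest
Your Dockerfile would then start with FROM mynamespace/mediawiki-mirror, and "Enable for Base Image" would track that repository instead. The trade-off is that you now have to keep the mirror itself up to date.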

Related

Super newbie multi-stage docker image build question

I need to build a custom image which contains both Terraform and the gcloud CLI. I'm very new to Docker, so I'm struggling with this even though it seems very straightforward. I need to make a multi-stage image from the following two images:
google/cloud-sdk:slim
hashicorp/terraform:light
How can I copy the terraform binary from the hashicorp/terraform:light image to the google/cloud-sdk:slim image? Any fumbling I've done so far has given me countless errors. I'm just hoping somebody could give me an example of what this should look like, because this is clearly not it:
FROM hashicorp/terraform:light AS builder
FROM google/cloud-sdk:slim
COPY --from=builder /usr/bin/env/terraform ./
Thanks!
That's not really the purpose of multi-stage builds. For your case, you would want to pick either image and install the other tool in it, instead of copying from one image to another.
Multi-stage builds are meant for when you want to build an app without adding the build dependencies to the final image, in order to reduce the image size and the attack surface.
So, for example, you could have a Go app and you would have two stages:
The first stage would build the binary, downloading all the required dependencies.
The second stage would copy the binary from the first stage, and that's it.
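A minimal sketch of that pattern (the Go version, paths, and binary name are assumptions, and it assumes a single main package with a go.mod at the repository root):
# Stage 1: build the binary with the full Go toolchain and build dependencies
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the compiled binary, leaving the toolchain behind
FROM alpine:3.19
COPY --from=builder /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
For the terraform/gcloud question above, the analogous move would be to start FROM google/cloud-sdk:slim and install Terraform in a RUN step, rather than stitching the two unrelated images together.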

How to prevent overriding data when using multiple docker images in a single Dockerfile?

I found out that if I use multiple Docker images in a single Dockerfile, the second one always overrides or deletes the data installed by the former one. For example:
FROM nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04
FROM python:3.8.10
CMD ["/bin/bash"]
The CUDA 10.2 installed by the former one is gone... But what I want is to keep the data installed by both Docker images I use. Is there any way to achieve this? Thanks
This concept is called "multi-stage builds" and it works in a different way from what you expect in your Dockerfile. It allows you to build multiple things in a single Dockerfile, and then hand-pick the parts you need in a single final image:
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
To achieve what you want, you might try using multi-stage builds and COPY --from statements, but it probably won't work (for example if the base images use different OS distributions, or if you accidentally miss some files while copying).
What would work is writing a new Dockerfile using the instructions from both other Dockerfiles (python and cuda) and building an image from it. Note that you might need to adapt the commands executed in every one of the base files if they don't work as expected out of the box.
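A rough sketch of that approach, keeping the CUDA image as the base and installing Python on top of it. Note this is an assumption-laden example: it installs Python 3.8 from the Ubuntu 18.04 repositories rather than reproducing the official python:3.8.10 image, so details such as pip will differ.
FROM nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04

# Install Python 3.8 on top of the CUDA base instead of using a second FROM
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3.8 python3.8-dev python3.8-distutils \
    && rm -rf /var/lib/apt/lists/*

CMD ["/bin/bash"]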
You can use multi-stage Docker builds. You need to copy the data between stages, but it is possible. Read more about it here:
https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds

How to use docker images when building artefacts in Actions?

TL;DR: On a self-hosted Actions runner (itself a Docker container on my Docker engine), I would like to use specific Docker images to build artefacts that I would move between the build phases, ending with a standalone executable (not a Docker container to be deployed). I do not know how to use Docker containers as "building engines" in Actions.
Details: I have a home project consisting of a backend in Go (cross compiled to a standalone binary) and a frontend in Javascript (actually a framework: Quasar).
I develop on my laptop in Windows and use GitHub as the SCM.
The manual steps I do are:
build a static version of the frontend which lands in a directory spa
copy that directory to the backend directory
compile the executable that embeds the spa directory
copy (scp) this executable to the final destination
For development purposes this works fine.
I now would like to use Actions to automate the whole thing. I use docker based self-hosted runners (tcardonne/github-runner).
My problem: the containers do a great job isolating the build environment from the server they run on. They are, however, reused across build jobs, and this may create conflicts. More importantly, the default versions of the software provided by these containers are not the right (usually the latest) ones.
The solution would be to run the build phases in disposable Docker containers (based on the right image, which would also shorten the build time as a nice side effect). Unfortunately, I do not know how to set this up.
Note: I do not ultimately want to create Docker containers, I just want to use them as "building engines", extract the artefacts from them, and share those between the jobs (in my specific case, one job would build the frontend with Quasar and generate a directory, the other would be a compilation ending with a standalone executable copied elsewhere).
Interesting premise, you can certainly do this!
I think you may be slightly mistaken with regards to:
They are however reused across build jobs and this may create conflicts
If you run a new container from an image, then you will start with a fresh instance of that container. Files, software, etc, all adhering to the original image definition. Which is good, as this certainly aids your efforts. Let me know if I have the wrong end of the stick in regards to the above though.
Base Image
You can define your own image for building, in order to mitigate shortfalls of public images that may not be up to date or may not suit your requirements. In fact, this is a common pattern for CI, and Google does something similar with their Cloud Build configuration. For either approach below, you will likely want to do something like the following to ensure you have all the build tools you may need.
As a rough example:
FROM golang:1.16.7-buster
RUN apt update && apt install -y \
    git \
    make \
    ...
    && useradd <myuser> \
    && mkdir /dist
USER <myuser>
You could build and publish this with the following tag:
docker build . -t <containerregistry>:buildr/golang
It would also be recommended that you maintain a separate builder image for other types of projects, such as node, python, etc.
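For instance, a node variant might look roughly like this (the base tag and package list are assumptions, not a prescribed setup):
FROM node:16-buster
# Same idea as the Go builder: common build tools, an unprivileged user, and an output directory
RUN apt update && apt install -y \
    git \
    make \
    && useradd <myuser> \
    && mkdir /dist
USER <myuser>
Built and tagged the same way, e.g. docker build . -t <containerregistry>:buildr/node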
Approaches
Building with layers
If you're looking to leverage build caching for your applications, this will be the better option for you. Caching is only effective if nothing has changed, and since the projects will be built in isolation, it makes it relatively safe.
Building your app may look something like the following:
FROM <containerregistry>:buildr/golang as builder
COPY src/ .
RUN make dependencies
RUN make
RUN mv /path/to/compiled/app /dist
FROM scratch
COPY --from=builder /dist /dist
The gist of this is that you would start building your app within the builder image, so that it includes all the build dependencies you require, and then use a multi-stage Dockerfile to publish a final static container that includes your compiled source code with no dependencies (using the scratch image as the smallest possible base).
Getting the final files out of your image would be a bit harder using this approach, as you would have to run an instance of the container once published in order to mount the files and persist it to disk, or use docker cp to retrieve the files from a running container (not image) to your disk.
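For example, a hedged sketch of pulling the artefact out of the scratch-based image (the image and container names are placeholders):
# Build the final image, then copy the artefact out without running it
docker build -t myapp:dist .
# docker cp needs a container, so create one without starting it;
# "true" is just a dummy command because the scratch image has no default CMD
docker create --name extract myapp:dist true
docker cp extract:/dist ./dist
docker rm extract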
In GitHub Actions, this would look like running a step that builds a Docker container, where the step can occur anywhere with Docker accessibility.
For example:
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      ...
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
Building as a process
This one cannot leverage Docker build caching as well as the previous approach, but you may be able to do clever things like mounting a host npm cache into your container to speed up steps like npm install.
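For example, a rough sketch of sharing the host's npm cache with a throwaway build container (the image tag and paths are assumptions):
# Mount the host npm cache and the project source, then install dependencies inside the container
docker run --rm \
    -v $HOME/.npm:/root/.npm \
    -v /path/to/src:/src \
    -w /src \
    node:16-buster npm install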
This approach differs from the former in that the way you build your app will be defined via CI / a purposeful script, as opposed to the Dockerfile.
In this scenario, it would make more sense to define the CMD in the parent image and mount your source code in, thus not maintaining an image per project you are building.
This would shift the responsibility of building your application from the build time of the image to its runtime. Retrieving your code from the container would be doable through volume mounting, for example:
docker run -v /path/to/src:/src -v /path/to/dist:/dist <containerregistry>:buildr/golang
If the CMD was defined in the builder, that single script would execute and build the mounted in source code, and subsequently publish to /dist in the container, which would then be persisted to your host via that volume mapping.
Of course, this applies if you're building locally. It actually becomes a bit nicer in a GitHub Actions context if you wish to keep your build instructions there. You can choose to run steps within your builder container using something like the following:
jobs:
  ...
  container:
    runs-on: ubuntu-latest
    container: <containerregistry>:buildr/golang
    steps:
      - run: |
          echo This job does specify a container.
          echo It runs in the container instead of the VM.
        name: Run in container
Within that run: spec, you could choose to call a build script, or enter the commands that might be present in the script yourself.
What you do with the compiled source once acquired is largely up to you 👍
Chaining (Frontend / Backend)
You mentioned that you build static assets for your site and then embed them into your golang binary to be served.
Something like that introduces complications of course, but nothing untoward. If you do not need to retrieve your web files until you build your golang container, then you may consider taking the first approach, and copying the content from the published image as part of a Docker directive. This makes more sense if you have two separate projects, one for frontend and backend.
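A hedged sketch of that directive, assuming the frontend project publishes its static build output to /spa in its own image (the image names and paths are placeholders):
FROM <containerregistry>:frontend-app AS frontend

FROM <containerregistry>:buildr/golang AS builder
COPY src/ .
# Pull the pre-built static assets straight out of the published frontend image
COPY --from=frontend /spa ./spa
RUN make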
If everything is in one folder, then it sounds like you may just want to extend your build image to handle both Go and JS, and then take the latter approach and define those build instructions in a script, a makefile, or your run: config in your Actions file.
Conclusion
This is a lot of info; I hope it's digestible for you, and more importantly, I hope it gives you some ideas as to how you can tackle your current issue. Let me know in the comments if you would like any clarification.

Get multistage dockerfile from image

I have tried docker history and dfimage for getting the dockerfile from a docker image.
From what I can see, any information about the multistage dockerfiles is not there. As I think about it, it makes sense. The final docker image just knows that files were copied in. It probably does not keep a reference to the layer that was used to construct it.
But I thought I would ask just to be sure. (It would be really helpful)
For example: I have a multi-stage Dockerfile that, in the first stage, builds a .NET Core application, then in the second stage copies the files from that build into an Nginx container.
Is there any way, given the final image, to get the dockerfile used to do the build?
Unfortunately this won't be possible, since your final Docker image won't contain anything from the "builder" stage. Essentially, the builder stage is a completely different image that was built; the files were copied from it during the build of the final image, and then it was discarded.
The layers from the builder stage will live on in your build cache, and you could even tag them to run some kind of Docker image analyzer against them. However, this does not help you if you only have access to the final image...
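For example, if you do still have the Dockerfile and its build cache, something like this lets you inspect an intermediate stage (the stage name and tag are placeholders):
# Rebuild only up to the builder stage and give it a tag you can inspect
docker build --target builder -t myapp:builder .
# Then examine the layers of that intermediate stage
docker history myapp:builder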
No, it is not possible. A Docker image only has its own history, not that of the multiple stages that may have been used to build it.

Wrap an original public Dockerfile to manage build args, etc.

I'm very new to Docker and related tooling, so I wonder: can I change official, public source images from Docker Hub (which I use in the FROM directive) on the fly while using them in my own container builds, kind of like Chef's chef-rewind does?
For example, I need to pass build args to openresty/latest-centos to build it without the modules I won't use. I put this
FROM openresty/latest-centos
in my Dockerfile, but what else should I do for openresty to be built with only the modules I need?
When you use the FROM directive in a Dockerfile, you are simply instructing Docker to use the named image as the base for the image that will be built with your Dockerfile. This does not cause the base image to be rebuilt, so there is no way to "pass parameters" to the build process.
If the openresty image does not meet your needs, you could:
Clone the openresty git repository,
Modify the Dockerfile,
Run docker build ... to build your own image (a rough sketch of this flow follows below)
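A rough sketch of that flow; the repository URL and tag are assumptions, and how exactly the modules are configured depends on the upstream Dockerfile:
git clone https://github.com/openresty/docker-openresty.git
cd docker-openresty
# ...edit the relevant Dockerfile here to drop or reconfigure the modules you don't need...
docker build -f <path-to-the-centos-Dockerfile> -t mynamespace/openresty-custom:latest .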
Alternatively, you can save yourself that work and just use the existing image and live with a few unused modules hanging around. If the modules are separate components, you could also issue the necessary commands in your Dockerfile to remove them.
