GitHub Actions runner environment doesn't build for ARM images - docker

I get this error on my github build:
The requested image's platform (linux/arm/v7) does not match the detected host platform (linux/amd64) and no specific platform was requested
The runner environment I'm using is ubuntu-latest, and my container uses the arm32v7/node base image.
Here's my Dockerfile:
FROM arm32v7/node AS appbuild
WORKDIR /app
COPY package.json ./
COPY src/ ./src
RUN npm install
FROM appbuild AS release
ENTRYPOINT ["npm", "start"]
And here's my GitHub deployment yaml:
jobs:
  docker:
    runs-on: ubuntu-latest
The jobs are run inside an Ubuntu runner environment. I suspect the ARM version of the node image I'm using is unable to run in that environment - is that correct?
Edit:
- name: Build controller image
  if: steps.changed-files-plantcontroller.outputs.any_changed == 'true'
  run: >
    docker build -t ${{ env.image_name_controller }} ${{ env.context }}
  env:
    context: ./controller

You are correct. Docker on your amd64 build environment cannot build the arm32v7 image without Docker Buildx. You can solve this by using the Buildx and QEMU GitHub Actions, which prepare your environment to build multi-arch images.
Buildx is Docker's technology for building images that target an architecture different from the host's (in this case, the host arch is amd64, since that's what the ubuntu-latest runner uses). You can read about how this works here.
QEMU is an emulator used by Docker Buildx. It's a dependency I learned the hard way was needed when writing a multi-arch workflow.
For more reference, check out this doc: https://github.com/docker/build-push-action/blob/master/docs/advanced/multi-platform.md
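As a side note, the QEMU action can optionally be limited to just the platforms you need instead of installing emulators for all of them; a sketch, assuming the arm platform group covers linux/arm/v7:

```yaml
- name: Set up QEMU dependency
  uses: docker/setup-qemu-action@v1
  with:
    platforms: arm # defaults to "all"
```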
You should be able to build your image with a workflow with something like this:
name: ci
on:
  push:
    branches:
      - 'master'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up QEMU dependency
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build controller image
        if: steps.changed-files-plantcontroller.outputs.any_changed == 'true'
        uses: docker/build-push-action@v2
        with:
          context: ${{ env.context }}
          tags: ${{ env.image_name_controller }}
          platforms: linux/arm/v7
          push: true
        env:
          context: ./controller
NOTE: I updated the last build step to match how you have your Build controller image step written. It just translates your docker build command to how the Docker build-push-action builds.
An alternative command you could use in place of build-push-action@v2:
docker buildx build -t ${{ env.image_name_controller }} --platform linux/arm/v7 --push ${{ env.context }}
The command assumes you want to push the image with that tag to a registry.
Unrelated, but there's an opportunity to simplify your Dockerfile.
When you're using buildx, Docker can pull the right arm32v7 variant of an image automatically when that image is published for multiple platforms. Check out the node image tags and you'll see linux/arm/v7 listed as one of the OS/Arch entries. This means you could simplify your FROM arm32v7/node AS appbuild line to FROM node AS appbuild, in case you want to build for a different architecture sometime.
For example:
Github workflow that builds for amd64 and arm64
Dockerfile that pulls relevant amd64 or arm64 image depending on TARGETPLATFORM
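Concretely, a simplified multi-arch Dockerfile along those lines might look like this sketch (the echo line is purely illustrative; TARGETPLATFORM is set automatically by buildx):

```dockerfile
# buildx pulls the node variant matching --platform automatically
FROM node AS appbuild
ARG TARGETPLATFORM
RUN echo "building for ${TARGETPLATFORM}"
WORKDIR /app
COPY package.json ./
COPY src/ ./src
RUN npm install
```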

Related

Correct Way To Build Multiple Docker Versions In GitHub Actions?

I have a GitHub Action that is almost like the one below. Its purpose is to build a Dockerfile and push the image to DockerHub.
name: DockerHub Run
on:
  push:
    branches:
      - "master"
  schedule:
    - cron: "0 0 * * 0"
env:
  DOCKERHUB_USERNAME: MyUser
  OFFICIAL_TAG: MyUser/MyImage:latest
  MAIN_REPO_NAME: MyUser/MyImage
  DOCKERFILE_PATH: /
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Build and push image to DockerHub
        uses: docker/build-push-action@v3
        with:
          platforms: linux/amd64,linux/arm64
          file: ${{ env.GITHUB_WORKSPACE }}/Dockerfile
          push: true
          tags: ${{ env.OFFICIAL_TAG }}
      - name: Update repo description
        uses: peter-evans/dockerhub-description@v2
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
          repository: ${{ env.MAIN_REPO_NAME }}
          readme-filepath: ./readme.md
And according to DockerHub, the architectures are listed.
However, I have a question about this line:
uses: docker/build-push-action@v3
with:
  platforms: linux/amd64,linux/arm64
I'm not sure if listing the platforms here actually compiles for those platforms. Keep in mind that GitHub is using ubuntu-latest, which is x86-64, and I do not have an ARM64 device to test with.
Am I setting up correctly to build to ARM devices?
tldr
"I'm not sure if listing the platforms here actually compiles to those platforms."
It does (assuming the commands defined in your Dockerfile are platform-conscious).
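For instance, a platform-conscious RUN step might look like this sketch (the download URL is a made-up placeholder; TARGETARCH is a build arg provided automatically by buildx):

```dockerfile
FROM alpine
# TARGETARCH resolves to "amd64" or "arm64" depending on the platform being built
ARG TARGETARCH
# hypothetical release URL; substitute your tool's actual naming scheme
RUN wget "https://example.com/mytool-linux-${TARGETARCH}.tar.gz"
```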
Am I setting up correctly to build to ARM devices?
Your config looks correct and should build for amd64 and arm64. You can test by adding a step after the push and checking the output:
# assuming your image is debian-based
$ docker run \
    --platform linux/amd64 \
    --rm \
    --entrypoint='' \
    MyUser/MyImage:latest \
    /bin/bash -c 'dpkg --print-architecture'
# output should be
amd64
$ docker run \
    --platform linux/arm64 \
    --rm \
    --entrypoint='' \
    MyUser/MyImage:latest \
    /bin/bash -c 'dpkg --print-architecture'
# output should be
arm64
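If you want CI to do that check for you, here's a sketch of a post-push workflow step that runs the pushed arm64 image under QEMU (it assumes the same debian-based image and entrypoint override as the commands above):

```yaml
- name: Smoke-test arm64 image
  run: |
    docker run --platform linux/arm64 --rm --entrypoint='' \
      MyUser/MyImage:latest /bin/bash -c 'dpkg --print-architecture'
```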
long answer
It "works" because of the qemu emulators for a bunch of different platforms (installed by docker/setup-qemu-action@v2), combined with docker buildx building the multi-platform images.
The problem is that even though everything seems to build fine in CI this way, the artifacts never really get tested on their respective native platforms, so to answer your question 'Am I setting up correctly to build to ARM devices?' ... 🤷‍♂️
I find it similar to Python and its universal2 wheels, where the cross-compiled artifacts are built but never really tested (the discussions below are very Python- and macOS-specific, but they point out the challenges of running integration/e2e tests for these multi-platform artifacts):
https://github.com/actions/setup-python/issues/197
https://github.com/actions/runner-images/issues/4133
https://github.com/actions/python-versions/pull/114
https://github.com/actions/setup-python/issues/547
This GitHub community discussion also provides a little more depth on multi-platform builds:
https://github.com/community/community/discussions/38728#discussioncomment-4106829

How to implement semantic versioning of Docker images in GitHub Actions workflow?

I would like to achieve the following CI pipeline with GitHub Actions workflow.
A pull request is merged -> a GitHub Action is triggered -> a docker image is built with the semantic version incremented by one, or with a version based on a GitHub tag - if it is possible to somehow tag the pull request merge.
How can I achieve that, or is there a better approach?
I have tried with secrets, but to no avail. How do I implement semantic versioning in a GitHub Actions workflow?
name: Docker Image CI
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag my-image-name:${{ github.ref_name }}
${{ github.ref_name }} pulls the tag for you. Alternatively, run a git command like git describe --abbrev=0 in a previous step to get the latest tag, then append it to the image name and use it like:
- name: Get Tag
  id: vars
  run: echo ::set-output name=tag::$(git describe --abbrev=0)
- name: Build the Docker image
  run: docker build . --file Dockerfile --tag my-image-name:${{ steps.vars.outputs.tag }}
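If you'd rather compute the next version inside the workflow, here's a small POSIX-shell sketch that bumps the patch component of a tag like v1.2.3 (the tag value is hard-coded for illustration; in CI it would come from git describe):

```shell
# bump the patch component of a semver tag (illustrative value)
tag="v1.2.3"
version=${tag#v}       # strip the leading "v" -> 1.2.3
major=${version%%.*}   # 1
rest=${version#*.}     # 2.3
minor=${rest%%.*}      # 2
patch=${rest#*.}       # 3
next="v${major}.${minor}.$((patch + 1))"
echo "$next"           # -> v1.2.4
```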
You can use one of the many semver actions on the marketplace.
For example, I have tried this one - Semver action
It will bump your repo version, and you can use a git command to read that bumped version in the next job.
So combining with docker build, you can do something like:
jobs:
  update-semver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: haya14busa/action-update-semver@v1
        id: version
        with:
          major_version_tag_only: true # (optional, default is "false")
  Build:
    name: Build Image
    needs: [update-semver]
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0 # fetch the full history so git describe can see tags
      - name: Build image
        run: |
          tag_v=$(git describe --tags $(git rev-list --tags --max-count=1))
          tag=$(echo $tag_v | sed 's/v//')
          docker build -t my_image_${tag} .

Why can I run cargo in GitHub Actions without setting it up?

I have a GitHub workflow with job like this:
docker:
  name: Docker image
  needs: test-n-build
  runs-on: ubuntu-20.04
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Set up QEMU
      uses: docker/setup-qemu-action@v1
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v1
    - name: Set docker image tag
      if: startsWith(github.ref, 'refs/tags/')
      run: echo "DOCKER_TAG=:$(cargo read-manifest | sed 's/.*"version":"\{0,1\}\([^,"]*\)"\{0,1\}.*/\1/')" >> $GITHUB_ENV
    - name: Login to DockerHub
      uses: docker/login-action@v1
    - name: Build and push
      uses: docker/build-push-action@v2
      with:
        push: ${{ startsWith(github.ref, 'refs/tags/') }}
This workflow builds a docker image and pushes it to the registry if a tag is pushed to the branch. As you can see, there is a Set docker image tag step that uses the cargo command, but when I copy-pasted things I forgot to add a setup-rust action. Yet that step executes successfully, with no error like command cargo not found.
Is it because this job needs the test-n-build job, where I actually set up Rust? Or does QEMU install Rust? How does it find the cargo command?
As you can see in the Ubuntu 20.04 virtual environment specification, it comes with some Rust tools preinstalled:
### Rust Tools
Cargo 1.58.0
Rust 1.58.1
Rustdoc 1.58.1
Rustup 1.24.3
Therefore, you wouldn't need any extra setup to install them in that case.
You can check the available virtual environment used as runners and their configurations here.
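As an aside, the sed expression in the Set docker image tag step just extracts the version field from the JSON that cargo read-manifest prints, so you can sanity-check it without cargo at all (the manifest string below is an abbreviated, made-up sample):

```shell
# abbreviated, hypothetical cargo read-manifest output
manifest='{"name":"myapp","version":"1.4.2","dependencies":[]}'
tag=$(echo "$manifest" | sed 's/.*"version":"\{0,1\}\([^,"]*\)"\{0,1\}.*/\1/')
echo ":$tag"   # -> :1.4.2
```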

Run GitHub workflow on Docker image with a Dockerfile?

I would like to run my CI on a Docker image. How should I write my .github/workflow/main.yml?
name: CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    name: build
    runs:
      using: 'docker'
      image: '.devcontainer/Dockerfile'
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make
I get the error:
The workflow is not valid. .github/workflows/main.yml
(Line: 11, Col: 5): Unexpected value 'runs'
I managed to make it work, but with an ugly workaround:
build:
  name: Build Project
  runs-on: ubuntu-latest
  steps:
    - name: Checkout code
      uses: actions/checkout@v1
    - name: Build docker images
      run: >
        docker build . -t foobar
        -f .devcontainer/Dockerfile
    - name: Build exam
      run: >
        docker run -v
        $GITHUB_WORKSPACE:/srv
        -w /srv foobar make
Side question: where can I find documentation about this? All I found was how to write actions.
If you want to use a container to run your actions, you can use something like this:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://{host}/{image}:{tag}
    steps:
      ...
Here is an example.
If you want more details about the jobs.<job_id>.container and its sub-fields, you can check the official documentation.
Note that you can also use docker images at the step level: Example.
I am reposting my answer from another question here, so that it can be found more easily.
The best solution is to build, publish and re-use a Docker image based on your Dockerfile.
I would advise to create a custom build-and-publish-docker.yml action following the Github documentation: Publishing Docker images.
Assuming your repository is public, you should be able to automatically upload your image to ghcr.io without any required configuration. As an alternative, it's also possible to publish the image to Docker Hub.
Once your image is built and published (based on the on event of the action previously created, which can be triggered manually also), you just need to update your main.yml action so it uses the custom Docker image. Again, here is a pretty good documentation page about the container option: Running jobs in a container.
As an example, I'm sharing what I used in a personal repository:
Dockerfile: the Docker image to be built on CI
docker.yml: the action to build the Docker image
lint.yml: the action using the built Docker image
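To make this concrete, here's a minimal sketch of such a publish workflow for ghcr.io (the trigger, tag, and file names are assumptions; adapt them to your repository):

```yaml
# .github/workflows/build-and-publish-docker.yml (hypothetical name)
name: docker
on:
  workflow_dispatch: # trigger manually; add push/schedule as needed
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write # required to push to ghcr.io
    steps:
      - uses: actions/checkout@v2
      - name: Log in to ghcr.io
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}/ci-image:latest
```

Once published, the main workflow can reference the image through the container option, e.g. container: ghcr.io/owner/repo/ci-image:latest.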

Build docker image locally in GitHub Actions using docker/build-push-action

I have several Dockerfiles in my project. One builds a basic image, which contains some business-level abstractions. The others build services based on the basic image.
So in my services' Dockerfiles I have something like
FROM my-project/base
# Adding some custom logic around basic stuff
I am using GitHub Actions as my CI/CD tool.
At first I had a step to install docker into my workers, and then ran something like:
- name: Build base image
  working-directory: business
  run: docker build -t my-project/base .
- name: Build and push service
  working-directory: service
  run: |
    docker build -t my-ecr-repo/service .
    docker push my-ecr-repo/service
But then I've found docker/build-push-action and decided to use it in my pipeline:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    push: true
    tags: my-ecr-repo/service
    context: service
    file: service/Dockerfile
As it stands, the second step tries to pull docker.io/my-project/base, and obviously cannot, because I never push the base image:
ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
The question is:
What is the correct way to build an image, so it is accessible by the following building steps locally?
PS:
I don't want to push my naked basic image anywhere.
I believe you'll need to set load: true for both your base image and the final image. This changes the behavior to use the local docker engine for the images. You'll then need to run a separate push step, e.g.:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-ecr-repo/service
    context: service
    file: service/Dockerfile
- name: push service
  run: |
    docker push my-ecr-repo/service
The other option is to use a local registry. This has the advantage of supporting multi-platform builds. But you'll want to switch from load to push for your base image, and I'd pass the base image as a build arg to make it easier for use cases outside of GitHub Actions, e.g.:
jobs:
  local-registry:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # qemu should only be needed for multi-platform images
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          driver-opts: network=host
      - name: Build business-layer container
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: localhost:5000/my-project/base
          context: business
          file: business/Dockerfile
      - name: Build service
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: my-ecr-repo/service
          context: service
          file: service/Dockerfile
          build-args: |
            BASE_IMAGE=localhost:5000/my-project/base
And then your Dockerfile would allow the base image to be specified as a build arg:
ARG BASE_IMAGE=my-project/base
FROM ${BASE_IMAGE}
# ...
The GitHub action docker/setup-buildx-action@v1 defaults to the docker-container driver, as documented.
This means builds will run, by default, in a container, and thus the images won't be available outside of the action.
The solution is to set the driver to docker:
...
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
  with:
    driver: docker # defaults to "docker-container"
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    # using "load: true" forces the docker driver;
    # not necessary here, because we set it above
    #load: true
    tags: my-project/base:latest
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    # using "push: true" would lead to this error:
    #   Error: buildx call failed with: auto-push is currently not implemented for docker driver
    # so you have to follow the solution outlined here:
    # https://github.com/docker/build-push-action/issues/100#issuecomment-715352826
    # and push the image manually after the build
    #push: true
    tags: my-ecr-repo/service:latest
    context: service
    file: service/Dockerfile
# list all images; you will see my-ecr-repo/service and my-project/base
- name: Look up images
  run: docker image ls
# push the image manually, see above comment
- name: Push image
  run: docker push my-ecr-repo/service:latest
...
For whatever reason, using load: true did not work for me with build-push-action (it should only be needed on the first build, but I tried adding it to both too). And I needed cache-from and cache-to for my first build, so I could not switch away from the default "docker-container" driver.
What ended up working for me was to use the action only for the first build and do the second one with a run command.
Something like:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  run: |
    docker build \
      --file service/Dockerfile \
      --tag my-ecr-repo/service \
      service
- name: push service
  run: |
    docker push my-ecr-repo/service
