Build docker image locally in GitHub Actions using docker/build-push-action

I have several Dockerfiles in my project. One builds the base image, which contains some business-level abstractions. The others build services based on that base image.
So in my services' Dockerfiles I have something like:
FROM my-project/base
# Adding some custom logic around basic stuff
I am using GitHub Actions as my CI/CD tool.
At first I had a step to install docker on my workers, and then ran something like:
- name: Build base image
  working-directory: business
  run: docker build -t my-project/base .
- name: Build and push service
  working-directory: service
  run: |
    docker build -t my-ecr-repo/service .
    docker push my-ecr-repo/service
But then I found docker/build-push-action and decided to use it in my pipeline:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    push: true
    tags: my-ecr-repo/service
    context: service
    file: service/Dockerfile
As of now, the second step tries to download docker.io/my-project/base, which obviously fails, because I never push the base image:
ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
The question is:
What is the correct way to build an image so that it is accessible to the following build steps locally?
PS:
I don't want to push my bare base image anywhere.

I believe you'll need to set load: true on both the base image and the final image. This changes the behavior to use the local docker engine for images, which means you'll need to run a separate push, e.g.:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-ecr-repo/service
    context: service
    file: service/Dockerfile
- name: push service
  run: |
    docker push my-ecr-repo/service
The other option is to use a local registry. This has the advantage of supporting multi-platform builds. But you'll want to switch from load to push with your base image, and I'd pass the base image as a build arg to make it easier for use cases outside of GitHub Actions, e.g.:
jobs:
  local-registry:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # qemu should only be needed for multi-platform images
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          driver-opts: network=host
      - name: Build business-layer container
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: localhost:5000/my-project/base
          context: business
          file: business/Dockerfile
      - name: Build service
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: my-ecr-repo/service
          context: service
          file: service/Dockerfile
          build-args: |
            BASE_IMAGE=localhost:5000/my-project/base
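Since this local-registry setup supports multi-platform images, the service build could additionally pass a platforms list; a hedged sketch (it assumes the base image was also built and pushed for the same platforms):
- name: Build service (multi-platform)
  uses: docker/build-push-action@v2
  with:
    push: true
    # both platforms must also exist for the base image in the local registry
    platforms: linux/amd64,linux/arm64
    tags: my-ecr-repo/service
    context: service
    file: service/Dockerfile
    build-args: |
      BASE_IMAGE=localhost:5000/my-project/base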
And then your Dockerfile would allow the base image to be specified as a build arg:
ARG BASE_IMAGE=my-project/base
FROM ${BASE_IMAGE}
# ...
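With that ARG in place, the same service image can also be built outside of GitHub Actions by overriding the base image on the command line; a minimal sketch using the names from above:
# build the base image locally, then point the service build at it
docker build -t my-project/base business
docker build \
  --build-arg BASE_IMAGE=my-project/base \
  --file service/Dockerfile \
  --tag my-ecr-repo/service \
  service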

The GitHub action docker/setup-buildx-action@v1 defaults to the docker-container driver, as documented.
This means builds will run, by default, in a container, and thus images won't be available outside of the action.
The solution is to set the driver to docker:
...
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
  with:
    driver: docker # defaults to "docker-container"
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    # using "load: true" forces the docker driver
    # not necessary here, because we set it before
    #load: true
    tags: my-project/base:latest
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    # using "push: true" will lead to error:
    # Error: buildx call failed with: auto-push is currently not implemented for docker driver
    # so you will have to follow the solution outlined here:
    # https://github.com/docker/build-push-action/issues/100#issuecomment-715352826
    # and push the image manually following the build
    #push: true
    tags: my-ecr-repo/service:latest
    context: service
    file: service/Dockerfile
# list all images, you will see my-ecr-repo/service and my-project/base
- name: Look up images
  run: docker image ls
# push the image manually, see above comment
- name: Push image
  run: docker push my-ecr-repo/service:latest
...

For whatever reason, using load: true did not work for me with build-push-action (it should only be needed on the first one, but I tried adding it to both, too). And I needed to use cache-from and cache-to for my first build, so I could not switch away from the default "docker-container" driver.
What ended up working for me was to use the action only for the first build and do the second one with a run command.
Something like:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  run: |
    docker build \
      --file service/Dockerfile \
      --tag my-ecr-repo/service \
      service
- name: push service
  run: |
    docker push my-ecr-repo/service
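For completeness, if the first build also needs the registry caching mentioned above, the step might look like this; a sketch, where the :buildcache ref is a hypothetical cache tag you would need push access to:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
    # hypothetical registry cache ref; requires push access to that repo
    cache-from: type=registry,ref=my-ecr-repo/base:buildcache
    cache-to: type=registry,ref=my-ecr-repo/base:buildcache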

Related

How to shorten docker build-push-action image name

I'm using docker/build-push-action and am trying to find a way to shorten the image name when it gets pushed to the GitHub registry.
Currently the docker image name shows up as
ghcr.io/organization-name/project-image:latest
Can this be shortened to just the image name and tag, without the ghcr.io/organization-name/ prefix?
The name of a docker image tag needs to include a registry hostname when you use docker push (unless you are pushing to Docker's default public registry located at registry-1.docker.io).
That is how docker push knows where to push to.
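In other words, the registry host is read from the image reference itself; a quick sketch (my-org is a hypothetical account name):
# retag the local image with the registry host, then push
docker tag project-image:latest ghcr.io/my-org/project-image:latest
docker push ghcr.io/my-org/project-image:latest  # goes to ghcr.io
docker push project-image:latest                 # would target Docker Hub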
That is why a typical "build and push" step looks like:
name: Build and Push Docker container
on:
push:
branches:
- main
jobs:
build-and-push:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout#v1
- name: Build the Docker image
run: docker build -t ghcr.io/<<ACCOUNT NAME>>/<<IMAGE NAME>>:<<VERSION>> .
- name: Setup GitHub Container Registry
run: echo "$" | docker login https://ghcr.io -u $ --password-stdin
- name: push to GitHub Container Registry
run: docker push ghcr.io/<<ACCOUNT NAME>>/<<IMAGE NAME>>:<<VERSION>>
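As an aside, when the workflow pushes to its own repository's packages, the built-in GITHUB_TOKEN can usually stand in for a manually created token; a sketch (note the permissions block the job needs for package writes):
permissions:
  contents: read
  packages: write
# ...
- name: Setup GitHub Container Registry
  run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin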
In your case (using docker/build-push-action), even if you were to use a local registry, you would still need to tag accordingly:
- name: Build and push to local registry
  uses: docker/build-push-action@v3
  with:
    context: .
    push: true
    tags: localhost:5000/name/app:latest  # still a long tag name
Just specify the short name in the tags argument, like:
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    push: true
    tags: project-image:latest

GitHub action: docker/build-push-action@v2 set network=host

We build our projects using GitHub Actions and Docker. As you can imagine, on each push from our dev teams, a well-defined pipeline takes the changes, builds the new image and pushes it into the registry. A couple of days ago the pipeline started to throw "bizarre" errors about connection issues; re-running the whole pipeline fixed it temporarily. Today, the pipeline reached the point of no return: every build gets stuck on the same docker build step:
RUN apt/apk/yum update
Now, I managed to find the solution to this problem in this github issue thread. As suggested to several users, I tried to run docker build -t <image_name> --network=host . on a simple Dockerfile (which contains an alpine image running the apk update command).
Everything works like a charm. Now I have to apply this fix to the GitHub Actions pipeline.
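The simple Dockerfile used for that test boils down to something like this:
# minimal reproduction: alpine plus an index update that needs network access
FROM alpine
RUN apk update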
First of all, let's take a look at the docker build phase defined in the pipeline (for security reasons, I masked some parts of the Dockerfile):
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    file: Dockerfile
    tags: |
      <image>
    build-args: |
      <args>
    cache-from: type=registry,ref=<image_cache>
    cache-to: type=registry,ref=<image_cache>
Looking at the official documentation of docker/build-push-action@v2, we are allowed to define the network configuration for the build simply by adding
network: host
to the with: customizations.
Following Docker's official documentation regarding the network param, quote:
The use of --network=host is protected by the network.host
entitlement, which needs to be enabled when starting the buildkitd
daemon with --allow-insecure-entitlement network.host flag or in
buildkitd config, and for a build request with --allow network.host
flag.
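Outside of GitHub Actions, the CLI equivalent of that entitlement setup would be roughly this (a sketch):
# create a builder whose buildkitd allows the network.host entitlement...
docker buildx create --use \
  --buildkitd-flags '--allow-insecure-entitlement network.host'
# ...then request the entitlement at build time
docker buildx build --allow network.host --network host -t <image_name> .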
So, combining both pieces of documentation, I thought the right way to define the network param was something like this:
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    allow: network.host,security.insecure # NEW
    network: host # NEW
    file: Dockerfile
    tags: |
      <image>
    build-args: |
      <args>
    cache-from: type=registry,ref=<image_cache>
    cache-to: type=registry,ref=<image_cache>
but it doesn't work. Same situation, stuck on apk/apt upgrade for ages.
So I'm here to ask you how to correctly configure the docker/build-push-action@v2 stage in order to set network=host and overcome the connection issues.
- name: Set up Docker Buildx
  id: buildx
  uses: docker/setup-buildx-action@v2
  with:
    driver-opts: |
      network=host
Based on user19972112's solution, I figured out how to overcome this issue.
In the docker/setup-buildx-action@v1 step, I added two properties:
buildkitd-flags: '--allow-insecure-entitlement network.host'
driver-opts: network=host
Then, in the docker/build-push-action@v2 step, you have to allow and set network equal to host:
allow: network.host
network: host
So, the result will be:
[...]
- name: Set up Docker Buildx
  id: buildx
  uses: docker/setup-buildx-action@v1
  with:
    version: latest
    endpoint: builders
    buildkitd-flags: '--allow-insecure-entitlement network.host'
    driver-opts: network=host
[...]
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    allow: network.host
    network: host
    file: ./docker/.dockerfile
[...]

Refer to Docker image variable name in GitHub Action

I'm very new to GitHub Actions, CapRover and Docker images, so I might be asking very stupid questions; sorry if that's the case. I searched for a good amount of time and could not figure it out by myself...
So, I'm trying to deploy to caprover a docker image built just before in my github action. Please see below my .yml file:
name: Docker Image CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag my-image-name:latest
      - name: Print image names
        run: docker images -q my-image-name
      - name: Deploy image
        uses: floms/action-caprover@v1
        with:
          host: '${{ secrets.CAPROVER_SERVER }}'
          password: '${{ secrets.CAPROVER_PASSWORD }}'
          app: '${{ secrets.APP_NAME }}'
          image: my-image-name:latest
The Build the Docker image step was successful, but the Deploy image one was not. The error message I got was:
Build started for ***
An explicit image name was provided (my-image-name:latest). Therefore, no build process is needed.
Pulling this image: my-image-name:latest This process might take a few minutes.
Build has failed!
----------------------
Deploy failed!
Error: (HTTP code 404) unexpected - pull access denied for my-image-name, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Based on the error message, I believe I am not giving the correct image name to the floms/action-caprover@v1 action. That is why I created step 2, Print image names, to try to understand better what the real name of the created image is. I tried several values for the image field, but all resulted in an error...
Thanks for all the help you can provide!
To make sure your CapRover instance is able to pull the image from your GitHub docker registry, the registry needs to be registered on your CapRover instance.
TLDR: I don't see a publish step (for your docker image) in your GitHub Actions configuration. If you want to use the image name to push it to CapRover, you will need to publish it to a registry, whether that is GitHub's Container Registry, a Nexus registry, or any other registry.
To do that in your CapRover instance, you need to go to Cluster > Docker Registry Configuration > Add Remote Registry. Then you proceed to enter the configuration for your GitHub Container Registry. Typically you will need a Personal Access Token to allow CapRover to communicate with GitHub instead of using your password.
(Screenshots: Docker Registry Configuration, Remote Registry Configuration, and the remote registry once configured.)
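Before wiring the token into CapRover, it can help to sanity-check it locally with a plain docker login; a sketch with hypothetical placeholders:
# verify the PAT can authenticate against GHCR and pull the image
echo "<YOUR_PAT>" | docker login ghcr.io -u <github-username> --password-stdin
docker pull ghcr.io/<owner>/<image>:latest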
Thanks Yoel Nunez for pointing out that I should push to the registry before trying to deploy to CapRover.
I followed the doc on GitHub and finally managed to publish to CapRover using a GitHub Action. Below is the .yml file that worked perfectly.
name: Publish to caprover
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.PAT }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: Deploy image
        uses: floms/action-caprover@v1
        with:
          host: '${{ secrets.CAPROVER_SERVER }}'
          password: '${{ secrets.CAPROVER_PASSWORD }}'
          app: '${{ secrets.APP_NAME }}'
          image: ${{ steps.meta.outputs.tags }}

Why can I run cargo in GitHub Actions without setting it up?

I have a GitHub workflow with a job like this:
docker:
  name: Docker image
  needs: test-n-build
  runs-on: ubuntu-20.04
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Set up QEMU
      uses: docker/setup-qemu-action@v1
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v1
    - name: Set docker image tag
      if: startsWith(github.ref, 'refs/tags/')
      run: echo "DOCKER_TAG=:$(cargo read-manifest | sed 's/.*"version":"\{0,1\}\([^,"]*\)"\{0,1\}.*/\1/')" >> $GITHUB_ENV
    - name: Login to DockerHub
      uses: docker/login-action@v1
    - name: Build and push
      uses: docker/build-push-action@v2
      with:
        push: ${{ startsWith(github.ref, 'refs/tags/') }}
This workflow builds a docker image and pushes it to the registry if a tag is pushed to a branch. As you can see, there is a Set docker image tag step that uses a cargo command, but when I copy-pasted things I forgot to add a step to set up Rust. Yet that step executed successfully, and no error like command cargo not found appeared.
Is it because this job needs the test-n-build job, where I actually set up Rust, or does QEMU install Rust? How does it find the cargo command?
As you can see in the Ubuntu 20.04 virtual environment documentation, it comes with some Rust tools preinstalled:
### Rust Tools
Cargo 1.58.0
Rust 1.58.1
Rustdoc 1.58.1
Rustup 1.24.3
Therefore, you don't need any extra setup to install them in that case.
You can check the available virtual environments used as runners and their configurations here.
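If you want to confirm what a given runner ships with, a trivial diagnostic step like this (a sketch) prints the preinstalled toolchain versions:
- name: Show preinstalled Rust toolchain
  run: |
    cargo --version
    rustc --version
    rustup --version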

GitHub Actions and Docker Compose

Guys!
I need your help to run docker-compose build on GitHub Actions. I have a docker-compose file, and I can't understand how to build and deploy it the correct way, other than just copying the docker-compose file over ssh and running scripts there.
There's docker/build-push-action@v2, but it doesn't work with docker-compose.yml.
This strongly depends on where you want to push your images. For instance, if you use Azure ACR, you can use this action:
on: [push]
name: AzureCLISample
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Azure CLI script
        uses: azure/CLI@v1
        with:
          azcliversion: 2.0.72
          inlineScript: |
            az acr login --name <acrName>
            docker-compose up
            docker-compose push
And then just build and push your images. But this is one example; if you use ECR it would be similar, I guess.
For DigitalOcean it would be like this:
steps:
  - uses: actions/checkout@v2
  - name: Build image
    run: docker-compose up
  - name: Install doctl # install doctl on the runner
    uses: digitalocean/action-doctl@v2
    with:
      token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
  - name: push image to digitalocean
    run: |
      doctl registry login
      docker-compose push
You can find more details about this here
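Note that docker-compose push pushes whatever each service's image: field names, so those names must already include the registry host; a sketch of a matching docker-compose.yml (the service and registry names are hypothetical):
version: "3.8"
services:
  web:
    build: .
    # the registry prefix is what makes `docker-compose push` target DO's registry
    image: registry.digitalocean.com/my-registry/web:latest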
