We used to build our projects with GitHub Actions and Docker. As you can imagine, on each push from our dev teams, a well-defined pipeline takes the changes, builds the new image and pushes it to the registry. A couple of days ago the pipeline started to throw "bizarre" errors about connection issues; simply re-running the whole pipeline fixed it temporarily. Today, the pipeline reached the point of no return. Every build gets stuck on the same docker build step:
RUN apt/apk/yum update
...and the output looks something like this:
Now, I managed to find the solution to this problem in this GitHub issue thread. As suggested by several users, I tried to run docker build -t <image_name> --network=host . on a simple Dockerfile (which contains an alpine image running the apk update command).
Everything works like a charm. Now I have to apply this fix to the GitHub Actions pipeline.
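The simple Dockerfile used for that test can be sketched like this (nothing beyond the alpine image and apk update mentioned above):

```dockerfile
# Minimal repro: an alpine image whose build only hits the package mirrors
FROM alpine:latest
RUN apk update
```

Building it with docker build -t <image_name> --network=host . completes, while the default bridge network hangs on the apk update step.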
First of all, let's take a look at the docker build phase defined in the pipeline (for security reasons, I masked some parts of the Dockerfile):
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    file: Dockerfile
    tags: |
      <image>
    build-args: |
      <args>
    cache-from: type=registry,ref=<image_cache>
    cache-to: type=registry,ref=<image_cache>
Looking at the official documentation of docker/build-push-action@v2, we are allowed to define the network configuration used during the build, simply by adding
network: host
under the with: section.
Following the official Docker documentation regarding the network parameter, quote:
The use of --network=host is protected by the network.host entitlement, which needs to be enabled when starting the buildkitd daemon with --allow-insecure-entitlement network.host flag or in buildkitd config, and for a build request with --allow network.host flag.
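Outside of the action, the same entitlement setup can be sketched with the plain buildx CLI (the builder name host-net-builder is an arbitrary placeholder):

```shell
# Start a builder whose buildkitd daemon allows the network.host entitlement
docker buildx create --name host-net-builder --use \
  --buildkitd-flags '--allow-insecure-entitlement network.host'

# The build request must then claim the entitlement explicitly
docker buildx build --allow network.host --network=host -t <image_name> .
```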
So, combining both pieces of documentation, I thought the right way to define the network parameter was something like this:
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    allow: network.host,security.insecure # NEW
    network: host # NEW
    file: Dockerfile
    tags: |
      <image>
    build-args: |
      <args>
    cache-from: type=registry,ref=<image_cache>
    cache-to: type=registry,ref=<image_cache>
but it doesn't work. Same situation: stuck on the apk/apt update step for ages.
So I'm here to ask you how to correctly configure the docker/build-push-action@v2 step in order to set network=host and overcome the connection issues.
- name: Set up Docker Buildx
  id: buildx
  uses: docker/setup-buildx-action@v2
  with:
    driver-opts: |
      network=host
Based on #user19972112's solution, I figured out how to overcome this issue.
In the docker/setup-buildx-action@v1 step, I added two properties:
buildkitd-flags: '--allow-insecure-entitlement network.host'
driver-opts: network=host
Then, in the docker/build-push-action@v2 step, you have to allow network.host and set network to host:
allow: network.host
network: host
So, the result will be:
[...]
- name: Set up Docker Buildx
  id: buildx
  uses: docker/setup-buildx-action@v1
  with:
    version: latest
    endpoint: builders
    buildkitd-flags: '--allow-insecure-entitlement network.host'
    driver-opts: network=host
[...]
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    allow: network.host
    network: host
    file: ./docker/.dockerfile
[...]
Related
I'm using docker/build-push-action and am trying to find a way to shorten the image name when it gets pushed to the GitHub registry.
Currently the docker image name is showing as
ghcr.io/organization-name/project-image:latest
Can this be shortened to just the image name and tag, without the ghcr.io/organization-name/ prefix?
The name of a docker image needs to include the registry hostname when you use docker push (unless you are pushing to Docker's default public registry, located at registry-1.docker.io).
That is how docker push knows where to push to.
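As a quick illustration, the hostname is simply the leading path component of the reference, so the parts can be pulled apart with plain shell parameter expansion (using the image name from the question):

```shell
# The first path component of a full image reference is the registry host;
# docker push reads it to decide which registry to talk to.
ref="ghcr.io/organization-name/project-image:latest"
echo "${ref%%/*}"   # registry host: ghcr.io
echo "${ref#*/}"    # repository and tag: organization-name/project-image:latest
```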
That is why a typical "build and push" step looks like:
name: Build and Push Docker container
on:
  push:
    branches:
      - main
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build the Docker image
        run: docker build -t ghcr.io/<<ACCOUNT NAME>>/<<IMAGE NAME>>:<<VERSION>> .
      - name: Setup GitHub Container Registry
        run: echo "$" | docker login https://ghcr.io -u $ --password-stdin
      - name: push to GitHub Container Registry
        run: docker push ghcr.io/<<ACCOUNT NAME>>/<<IMAGE NAME>>:<<VERSION>>
In your case (using docker/build-push-action), even if you were to use a local registry, you would still need to tag accordingly:
- name: Build and push to local registry
  uses: docker/build-push-action@v3
  with:
    context: .
    push: true
    tags: localhost:5000/name/app:latest
          ^^^^^^^^^^^^^^
          # still a long tag name
Just specify it in the tags argument, like:
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    push: true
    tags: project-image:latest
I have a GitHub workflow with job like this:
docker:
  name: Docker image
  needs: test-n-build
  runs-on: ubuntu-20.04
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Set up QEMU
      uses: docker/setup-qemu-action@v1
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v1
    - name: Set docker image tag
      if: startsWith(github.ref, 'refs/tags/')
      run: echo "DOCKER_TAG=:$(cargo read-manifest | sed 's/.*"version":"\{0,1\}\([^,"]*\)"\{0,1\}.*/\1/')" >> $GITHUB_ENV
    - name: Login to DockerHub
      uses: docker/login-action@v1
    - name: Build and push
      uses: docker/build-push-action@v2
      with:
        push: ${{ startsWith(github.ref, 'refs/tags/') }}
This workflow builds a docker image and pushes it to the registry when a tag is pushed. As you can see, there is a Set docker image tag step that uses a cargo command, but when I copy-pasted things I forgot to add a setup-rust action. Yet that step executes successfully, and no error like command cargo not found appears.
Is it because that job needs the test-n-build job, where I actually set up Rust, or does QEMU install Rust? How does it find the cargo command?
As you can see, the ubuntu-20.04 virtual environment is provided with some Rust tools preinstalled:
### Rust Tools
Cargo 1.58.0
Rust 1.58.1
Rustdoc 1.58.1
Rustup 1.24.3
Therefore, you don't need any extra setup to install them in that case.
You can check the available virtual environments used as runners and their configurations here.
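As a side note, the sed expression in the Set docker image tag step can be exercised without cargo at all; here it runs against a hand-written stand-in for cargo read-manifest output (the manifest values are made up):

```shell
# Stand-in for `cargo read-manifest` output; the "version" field is what
# the workflow's sed expression extracts.
manifest='{"name":"demo","version":"0.1.0","dependencies":[]}'
echo "$manifest" | sed 's/.*"version":"\{0,1\}\([^,"]*\)"\{0,1\}.*/\1/'
# prints 0.1.0
```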
I would like to run my CI on a Docker image. How should I write my .github/workflow/main.yml?
name: CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    name: build
    runs:
      using: 'docker'
      image: '.devcontainer/Dockerfile'
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make
I get the error:
The workflow is not valid. .github/workflows/main.yml (Line: 11, Col: 5): Unexpected value 'runs'
I managed to make it work, but with an ugly workaround:
build:
  name: Build Project
  runs-on: ubuntu-latest
  steps:
    - name: Checkout code
      uses: actions/checkout@v1
    - name: Build docker images
      run: >
        docker build . -t foobar
        -f .devcontainer/Dockerfile
    - name: Build exam
      run: >
        docker run -v
        $GITHUB_WORKSPACE:/srv
        -w /srv foobar make
Side question: where can I find the documentation for this? All I found is how to write actions.
If you want to use a container to run your actions, you can use something like this:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://{host}/{image}:{tag}
    steps:
      ...
Here is an example.
If you want more details about the jobs.<job_id>.container and its sub-fields, you can check the official documentation.
Note that you can also use docker images at the step level: Example.
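Putting both options together, a sketch of a job-level container combined with a step-level docker:// image could look like this (the image names are placeholders):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    # every run step in this job executes inside this container
    container:
      image: ghcr.io/my-org/my-builder:latest
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make
  lint:
    runs-on: ubuntu-latest
    steps:
      # a single step can also run inside its own container image
      - name: Step-level container
        uses: docker://alpine:3.15
        with:
          args: echo hello
```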
I am reposting my answer to another question here, in order to be sure to find it while googling it.
The best solution is to build, publish and re-use a Docker image based on your Dockerfile.
I would advise creating a custom build-and-publish-docker.yml action, following the GitHub documentation: Publishing Docker images.
Assuming your repository is public, you should be able to upload your image to ghcr.io automatically, without any configuration required. As an alternative, it's also possible to publish the image to Docker Hub.
Once your image is built and published (based on the on event of the action previously created, which can also be triggered manually), you just need to update your main.yml action so it uses the custom Docker image. Again, there is a pretty good documentation page about the container option: Running jobs in a container.
As an example, I'm sharing what I used in a personal repository:
Dockerfile: the Docker image to be built on CI
docker.yml: the action to build the Docker image
lint.yml: the action using the built Docker image
Context
I'm attempting to build a CD pipeline in a GitHub action that creates a docker image, pushes it to Google Cloud Registry (GCR), and then restarts a VM instance with the latest image.
Problem
The VM Instance is somehow not running the latest docker image, even though the VM Instance page itself shows the right image tag, and I'm seeing the VM Instance being restarted.
The reason I suspect this is because when I SSH into the VM Instance, and run docker logs <container-id> I see logs that indicate that an old value of a Docker build argument is being used - specifically DATABASE_URL. (I've also confirmed that, after requesting endpoints, the appropriate logs show up here, so it seems like I'm viewing the right logs)
When I pull down this same image and run it locally, the DATABASE_URL is correct.
When I SSH into the VM instance and run docker images, I don't see the newer images listed. I must have broken something, because this was working earlier, but I'm having trouble finding the issue.
Question
Why isn't the VM Instance receiving the new docker images? (When running docker images, I don't see new docker images being listed).
Update
After checking the VM Instance's logs via:
sudo journalctl -u konlet-startup
I'm seeing the following error:
Error: Failed to start container: Error response from daemon: {"message":"pull access denied for gcr.io/..., repository does not exist or may require 'docker login': denied: Permission denied for \"...\" from request \"...". "}
So that explains why I'm no longer receiving new docker images.
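If it helps anyone: one common fix for this error is granting the VM's service account read access to the registry's backing storage, since GCR stores image layers in a GCS bucket named artifacts.<project-id>.appspot.com. The service-account address and project below are placeholders:

```shell
# Grant the VM's service account read access to the GCR storage bucket
gsutil iam ch \
  serviceAccount:my-vm-sa@my-project.iam.gserviceaccount.com:objectViewer \
  gs://artifacts.my-project.appspot.com
```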
Code Snippets
name: Continuous Delivery
on:
  push:
    branches: [master]
env:
  PROJECT_ID: ${{ secrets.GCE_PROJECT }}
  GCE_INSTANCE: ${{ secrets.GCE_INSTANCE }}
  GCE_INSTANCE_ZONE: ${{ secrets.GCE_INSTANCE_ZONE }}
jobs:
  build:
    name: Update GCP VM instance
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Setup gcloud CLI
      - uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
        with:
          version: "290.0.1"
          service_account_key: ${{ secrets.GCE_SA_KEY }}
          project_id: ${{ secrets.GCE_PROJECT }}
      # Configure Docker to use the gcloud command-line tool as a credential
      # helper for authentication
      - run: |-
          gcloud --quiet auth configure-docker
      # Build the Docker image
      - name: Build
        run: |-
          docker build --no-cache --build-arg DATABASE_URL=${{ secrets.DATABASE_URL }} --tag "gcr.io/$PROJECT_ID/platform:$GITHUB_SHA" .
      # Push the Docker image to Google Container Registry
      - name: Publish
        run: |-
          docker push "gcr.io/$PROJECT_ID/platform:$GITHUB_SHA"
      - name: Deploy
        run: |-
          gcloud compute instances update-container "$GCE_INSTANCE" \
            --zone "$GCE_INSTANCE_ZONE" \
            --container-image "gcr.io/$PROJECT_ID/platform:$GITHUB_SHA"
To be extra sure that secrets.DATABASE_URL is actually correct, I spun up a server and sent a request from within the Dockerfile:
RUN curl "http://my-url.ngrok.io/${DATABASE_URL}"
and I can confirm that it is indeed correct.
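For context on how DATABASE_URL ends up baked into the image: a --build-arg only takes effect if the Dockerfile declares it, roughly like this (a stand-in fragment, not the masked original):

```dockerfile
# Declare the build arg so --build-arg DATABASE_URL=... is visible
ARG DATABASE_URL
# Persist it into the runtime environment; note this bakes the value
# (a secret, here) into the image layers
ENV DATABASE_URL=${DATABASE_URL}
```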
I have several Dockerfiles in my project. One builds the base image, which contains some business-level abstractions. The others build services based on that base image.
So in my services' Dockerfiles I have something like
FROM my-project/base
# Adding some custom logic around basic stuff
I am using GitHub Actions as my CI/CD tool.
At first I had a step to install docker on my workers, and then ran something like:
- name: Build base image
  working-directory: business
  run: docker build -t my-project/base .
- name: Build and push service
  working-directory: service
  run: |
    docker build -t my-ecr-repo/service .
    docker push my-ecr-repo/service
But then I found docker/build-push-action and decided to use it in my pipeline:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    push: true
    tags: my-ecr-repo/service
    context: service
    file: service/Dockerfile
As of now, the second step tries to pull docker.io/my-project/base, and obviously cannot do it, because I never push the base image:
ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
The question is:
What is the correct way to build an image so that it is accessible locally by the following build steps?
PS:
I don't want to push my bare base image anywhere.
I believe you'll need to set load: true on both your base image and the final image. This changes the behavior to use the local docker engine for the images. You'll then need to run a separate push, e.g.:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-ecr-repo/service
    context: service
    file: service/Dockerfile
- name: push service
  run: |
    docker push my-ecr-repo/service
The other option is to use a local registry. This has the advantage of supporting multi-platform builds, but you'll want to switch from load to push for your base image. I'd also pass the base image as a build arg, to make it easier to use outside of GitHub Actions, e.g.:
jobs:
  local-registry:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # qemu should only be needed for multi-platform images
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          driver-opts: network=host
      - name: Build business-layer container
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: localhost:5000/my-project/base
          context: business
          file: business/Dockerfile
      - name: Build service
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: my-ecr-repo/service
          context: service
          file: service/Dockerfile
          build-args: |
            BASE_IMAGE=localhost:5000/my-project/base
And then your Dockerfile would allow the base image to be specified as a build arg:
ARG BASE_IMAGE=my-project/base
FROM ${BASE_IMAGE}
# ...
The GitHub action docker/setup-buildx-action@v1 defaults to the docker-container driver, as documented.
This means builds will run, by default, in a container, and thus images won't be available outside of the action.
The solution is to set the driver to docker:
...
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
  with:
    driver: docker # defaults to "docker-container"
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    # using "load: true" forces the docker driver
    # not necessary here, because we set it before
    #load: true
    tags: my-project/base:latest
    context: business
    file: business/Dockerfile
- name: Build service
  uses: docker/build-push-action@v2
  with:
    # using "push: true" will lead to the error:
    # Error: buildx call failed with: auto-push is currently not implemented for docker driver
    # so you will have to follow the solution outlined here:
    # https://github.com/docker/build-push-action/issues/100#issuecomment-715352826
    # and push the image manually after the build
    #push: true
    tags: my-ecr-repo/service:latest
    context: service
    file: service/Dockerfile
# list all images; you will see my-ecr-repo/service and my-project/base
- name: Look up images
  run: docker image ls
# push the image manually, see above comment
- name: Push image
  run: docker push my-ecr-repo/service:latest
...
For whatever reason, using load: true did not work for me with build-push-action (it should only be needed on the first build, but I tried adding it to both, too). And since I needed cache-from and cache-to for my first build, I could not change from the default "docker-container" driver.
What ended up working for me was to use the action only for the first build and do the second one with a run command.
Something like:
- name: Build business-layer container
  uses: docker/build-push-action@v2
  with:
    load: true
    tags: my-project/base
    context: business
    file: business/Dockerfile
- name: Build service
  run: |
    docker build \
      --file service/Dockerfile \
      --tag my-ecr-repo/service \
      service
- name: push service
  run: |
    docker push my-ecr-repo/service