Here I have two jobs in a workflow. The only goal is to reuse the container image across them, via a cache or some other means, the same way we already do for node_modules.
jobs:
  build:
    name: build
    runs-on: [self-hosted, x64, linux, research]
    container:
      image: <sample docker image>
      env:
        NPM_AUTH_TOKEN: <sample token>
    steps:
      - uses: actions/checkout@v2
      - name: Install
        run: |
          npm install
      - name: Build
        run: |
          npm run build
  Test:
    name: Test Lint
    runs-on: [self-hosted, x64, linux, research]
    container:
      image: <sample docker image>
      env:
        NPM_AUTH_TOKEN: <sample token>
    steps:
      - uses: actions/checkout@v2
      - name: Install Dependencies
        run: npm ci
      - name: Lint Check
        run: npm run lint
I would suggest using Docker's build-push-action for this purpose. With build-push-action you can cache your container images using the inline cache, the registry cache, or the experimental cache backend API:
Inline cache
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=registry,ref=user/app:latest
    cache-to: type=inline
Refer to the Buildkit docs.
Registry Cache
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=registry,ref=user/app:buildcache
    cache-to: type=registry,ref=user/app:buildcache,mode=max
Refer to Buildkit docs.
Cache backend API
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
Refer to Buildkit docs.
I personally prefer using the cache backend API, as it's easy to set up and provides a great boost in reducing the overall CI pipeline run duration.
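For completeness, here is a minimal sketch of a full job wired up this way (the image tag and secret names are placeholders); note that the gha cache needs a Buildx builder, which docker/setup-buildx-action provides:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # The gha cache backend requires a Buildx (docker-container) builder
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max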
By looking at the comments, it seems you want to share the Docker cache between workflows. In that case, you can share the built Docker image between jobs in a workflow using this example:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      -
        name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          tags: myimage:latest
          outputs: type=docker,dest=/tmp/myimage.tar
      -
        name: Upload artifact
        uses: actions/upload-artifact@v2
        with:
          name: myimage
          path: /tmp/myimage.tar
  use:
    runs-on: ubuntu-latest
    needs: build
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      -
        name: Download artifact
        uses: actions/download-artifact@v2
        with:
          name: myimage
          path: /tmp
      -
        name: Load Docker image
        run: |
          docker load --input /tmp/myimage.tar
          docker image ls -a
In general, data is not shared between jobs in GitHub Actions (GHA). Jobs actually run in parallel on distinct ephemeral VMs unless you explicitly create a dependency with needs.
GHA does provide a cache mechanism. For package-manager-style caching they have simplified it; see here.
For Docker images, you can either use docker buildx cache and cache to a remote registry (including ghcr), or use the GHA cache action, which is probably easier. The syntax for actions/cache is pretty straightforward and clear on its page. For buildx, documentation has always been a bit of an issue (largely, I think, because the people building it are so smart that they do not realize how much we do not understand what is in their hearts), so you would need to configure the cache action, and then configure buildx to use it.
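A minimal sketch of that wiring, assuming a Dockerfile at the repository root and a placeholder image name (the cache path and key are just conventions):

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
- name: Cache Docker layers
  uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-buildx-
- name: Build
  uses: docker/build-push-action@v2
  with:
    context: .
    file: ./Dockerfile
    load: true    # load the result into the runner's Docker daemon
    tags: myimage:latest
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache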
Alternatively, you could do docker save imagename > imagename.tar and use that in the cache. There is a decent example of that here. No idea who wrote it, but it does the job.
Related
I would like to create a CI/CD pipeline with GitHub Actions to create an nginx reverse proxy server for my Vue.js frontend and my NestJS server.
What I have done so far:
When I push or open a pull request on the main branch, I run the tests.
If the tests pass, build the frontend and backend.
What I want to do now:
Create a docker image of the nginx reverse proxy server for my frontend and backend and push it to the docker registry.
name: CI/CD Pipeline - Runs All tests and if all pass, builds the frontend and backend and deploys and configures the nginx proxy server to serve the frontend and backend.
# 1) on push or pull request to main branch
# 2) run frontend and backend tests
# 3) if tests pass, build the frontend and backend and save the build artifacts (./WEB/dist and ./BACKEND/dist)
# 4) if build succeeds, create a docker image of the nginx proxy server and push it to the docker registry
# 5) if tests fail or build fails, do nothing
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  backend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.x
        uses: actions/setup-node@v3
        with:
          node-version: 16.x
          cache: 'npm'
          cache-dependency-path: 'BACKEND/package-lock.json'
      - name: Execute Backend Unit tests
        run: |
          npm ci
          npm run test
        working-directory: ./BACKEND
  frontend-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.x
        uses: actions/setup-node@v3
        with:
          node-version: 16.x
          cache: 'npm'
          cache-dependency-path: 'WEB/package-lock.json'
      - name: Execute Frontend Unit tests
        run: |
          npm ci
          npm run test
        working-directory: ./WEB
  build-frontend:
    runs-on: ubuntu-latest
    needs: [backend-tests, frontend-tests]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.x
        uses: actions/setup-node@v3
        with:
          node-version: 16.x
          cache: 'npm'
          cache-dependency-path: 'WEB/package-lock.json'
      - name: Build Frontend
        run: |
          npm ci
          npm run build
        working-directory: ./WEB
      - name: Save Frontend Build Artifacts
        uses: actions/upload-artifact@v2
        with:
          name: frontend-build-artifacts
          path: ./WEB/dist
  build-backend:
    runs-on: ubuntu-latest
    needs: [backend-tests, frontend-tests]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.x
        uses: actions/setup-node@v3
        with:
          node-version: 16.x
          cache: 'npm'
          cache-dependency-path: 'BACKEND/package-lock.json'
      - name: Build Backend
        run: |
          npm ci
          npm run build
        working-directory: ./BACKEND
      - name: Save Backend Build Artifacts
        uses: actions/upload-artifact@v2
        with:
          name: backend-build-artifacts
          path: ./BACKEND/dist
I tried to do it with this action:
But I don't know how to create a Docker image of the nginx reverse proxy server and push it to the Docker registry with this action.
Thanks in advance for your help.
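Roughly, I imagine adding a job like the following after the build jobs, but I am not sure it is the right approach (the Dockerfile path, secret names, and image tag below are placeholders):

  build-and-push-proxy:
    runs-on: ubuntu-latest
    needs: [build-frontend, build-backend]
    steps:
      - uses: actions/checkout@v3
      # Bring the build artifacts back so the nginx Dockerfile can COPY them
      - uses: actions/download-artifact@v2
        with:
          name: frontend-build-artifacts
          path: ./WEB/dist
      - uses: actions/download-artifact@v2
        with:
          name: backend-build-artifacts
          path: ./BACKEND/dist
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push nginx proxy image
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./nginx/Dockerfile
          push: true
          tags: user/nginx-proxy:latest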
I have a repository with a Dockerfile and a custom action described by action.yml, which references that Dockerfile. At first I referenced the Dockerfile locally as a path in action.yml using image: 'Dockerfile'. However, because GitHub doesn't cache the Docker image and rebuilds it on every run, it takes a long time to prepare and clutters the logs, and it also doesn't have the required entrypoint file. So I now upload the image to the GitHub Container Registry on every push to master. The problem is that action.yml always points to the master tag, which means the tests may run against an outdated version (the image may not have been published yet), I can't pin the action to the image matching a given commit, and I can't go back to an older state. How can I keep the image on the GitHub Container Registry in sync with the corresponding state of the repository?
action.yml
[...]
runs:
  using: 'docker'
  image: docker://ghcr.io/organization/repo:master
[...]
.github/workflows/test
on:
  workflow_dispatch:
  push:
    branches: ['master']
  pull_request_target:
    branches: ['master']

jobs:
  test-correct:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: test
        uses: ./
        with:
          [...]
.github/workflows/publish
on:
  workflow_dispatch:
  push:
    branches: ['master']
  pull_request_target:
    branches: ['master']

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      [...]
      - name: Log in to the Container registry
        [...]
      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
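One direction I have been looking at is having the elided metadata step emit commit-pinned tags, e.g. via docker/metadata-action (a sketch; the step id and tag rules below are assumptions):

- name: Extract image metadata
  id: meta
  uses: docker/metadata-action@v3
  with:
    images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
    tags: |
      type=ref,event=branch
      type=sha

With type=sha every push also produces an immutable tag like sha-<short sha>, giving a per-commit image to reference, though I'm not sure how best to have action.yml track it.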
I'm trying to create a workflow that builds and pushes a docker image to the docker hub using https://github.com/docker/build-push-action.
I have a monorepo with a folder structure like this:
project
  api
    Dockerfile
  client
and this is the workflow:
name: deploy

on:
  push:
    branches:
      - main
    paths:
      - "api/**"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: repo/kheilyshad-api:latest
          context: .
          file: api/Dockerfile
It always fails with this error:
/usr/bin/docker buildx build --tag ***/kheilyshad-api:latest --iidfile /tmp/docker-build-push-yGQ2mf/iidfile --metadata-file /tmp/docker-build-push-yGQ2mf/metadata-file --file api/Dockerfile --push .
error: could not find api: stat api: no such file or directory
Error: buildx failed with: error: could not find api: stat api: no such file or directory
I have to build the Docker image from the repository root, so I use file to specify the path to the Dockerfile. It's basically like this:
docker build -f ./api/Dockerfile .
But it couldn't find the api folder. I tried setting file to ./api/Dockerfile, api, etc., but none of them work.
Seems like you're not checking out the repo. Try adding actions/checkout@v2 to your steps:
steps:
  - uses: actions/checkout@v2
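In context, the docker job would then start like this; only the checkout step is new, everything else is unchanged from the workflow above:

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2   # makes the api/ folder part of the build context
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: .
          file: api/Dockerfile
          push: true
          tags: repo/kheilyshad-api:latest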
I am using GitHub Actions to trigger the build of my Dockerfile and upload the container image to the GitHub Container Registry. In the last step I connect via SSH to my remote DigitalOcean droplet and execute a script to pull and install the new image from GHCR. This workflow was fine while I was only building a single container in the project. Now I am using Docker Compose, as I need NGINX alongside my API. I would like to keep the containers on a single droplet, as the project is not demanding in resources at the moment.
What is the right way to automate deployment with GitHub Actions and Docker Compose to DigitalOcean on a single VM?
My currently known options are:
Skip building containers on GHCR, fetch the repo via SSH, and build on the remote from source by running a production compose file.
Build each container on GHCR, then copy the production compose file to the remote to pull and install from GHCR.
If you know more options that may be cleaner or more efficient, please let me know!
Unfortunately, I have only found a docker-compose with GitHub Actions for CI question for reference.
GitHub Action for single Container
name: Github Container Registry to DigitalOcean Droplet

on:
  # Trigger the workflow via push on main branch
  push:
    branches:
      - main
    # only trigger the action if the backend folder changed
    paths:
      - "backend/**"
      - ".github/workflows/**"

jobs:
  # Builds a Docker image and pushes it to the GitHub Container Registry
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest
    # use the backend folder as the default working directory for the job
    defaults:
      run:
        working-directory: ./backend
    steps:
      # Checkout the repository
      - name: Checking out the repository
        uses: actions/checkout@v2
      # Setting up Docker Builder
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1
      # Create a GitHub access token with the "write:packages & read:packages" scope for the GitHub Container Registry.
      # Then go to the repository settings and add the copied token as a secret called "CR_PAT"
      # https://github.com/settings/tokens/new?scopes=repo,write:packages&description=Github+Container+Registry
      # ! While GHCR is in beta, make sure to enable the feature
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}
      # Push to GitHub Container Registry
      - name: Pushing Image to Github Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          version: latest
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

  # Connect to the existing Droplet via SSH and (re)install and run the image
  # ! Ensure you have created the preconfigured Droplet with Docker
  # ! Ensure you have added an SSH key to the Droplet
  # ! - it is easier to add the SSH keys before creating the droplet
  deploy_to_digital_ocean_dropplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry
    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            # Stop all running Docker containers
            docker kill $(docker ps -q)
            # Free up space
            docker system prune -a
            # Login to GitHub Container Registry
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
            # Pull the Docker image
            docker pull ghcr.io/${{ github.repository }}:latest
            # Run a new container from the new image
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
Current Docker-Compose
version: "3"
services:
  api:
    build:
      context: ./backend/api
    networks:
      api-network:
        aliases:
          - api-net
  nginx:
    build:
      context: ./backend/nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      api-network:
        aliases:
          - nginx-net
    depends_on:
      - api

networks:
  api-network:
Thought I'd post this as an answer instead of a comment since it was cleaner.
Here's a gist: https://gist.github.com/Aldo111/702f1146fb88f2c14f7b5955bec3d101
name: Server Build & Push

on:
  push:
    branches: [main]
    paths:
      - 'server/**'
      - 'shared/**'
      - docker-compose.prod.yml
      - Dockerfile

jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2
      - name: Create env file
        run: |
          touch .env
          echo "${{ secrets.SERVER_ENV_PROD }}" > .env
          cat .env
      - name: Build image
        run: docker compose -f docker-compose.prod.yml build
      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      - name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600
      - name: Push image to DO Container Registry
        run: docker compose -f docker-compose.prod.yml push
      - name: Deploy Stack
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.GL_SSH_HOST }}
          username: ${{ secrets.GL_SSH_USERNAME }}
          key: ${{ secrets.GL_SSH_SECRET }}
          port: ${{ secrets.GL_SSH_PORT }}
          script: |
            cd /srv/www/game
            ./init.sh
In the final step, the directory in my case just contains a .env file and my prod compose file, but these things could also be rsynced/copied/automated as another step in this workflow before actually running things.
My init.sh simply contains:
docker stack deploy -c <(docker-compose -f docker-compose.yml config) game --with-registry-auth
The --with-registry-auth part is important since my docker-compose file has image: entries that point to images in DigitalOcean's container registry. On my server, I had already logged in once when I first set up the directory.
With that, this docker command consumes my docker-compose.yml along with the environment variables (i.e. docker-compose -f docker-compose.yml config pre-processes the compose file with the .env file in the same directory, since stack deploy doesn't read .env) and, with the registry already authenticated, pulls the relevant images and restarts things as needed!
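To illustrate the pre-processing, a small hypothetical fragment (the image name and variable are made up): given a compose file that references an environment variable, docker-compose -f docker-compose.yml config expands it from .env before stack deploy ever sees the file.

# docker-compose.yml (fragment)
services:
  api:
    image: registry.digitalocean.com/myregistry/api:${API_TAG}

# .env
#   API_TAG=1.4.2
#
# docker-compose -f docker-compose.yml config then renders
#   image: registry.digitalocean.com/myregistry/api:1.4.2
# which is exactly what docker stack deploy receives via the process substitution above.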
This can definitely be cleaned up and made a lot simpler but it's been working pretty well for me in my use case.
I don't know how to run a cached Docker image in GitHub Actions.
I've followed a tutorial about Publishing Docker images to implement a task that caches, builds, and pushes a Docker image to Docker Hub.
I need to build, cache, and run the image; publishing the image is optional.
My goal is to speed up the CI workflow.
Here is the GitHub Actions workflow:
name: CI

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Check Out Repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          builder: ${{ steps.buildx.outputs.name }}
          push: true
          tags: ivan123123/c_matrix_library:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
      #- name: Run Docker container
      #  run: ???
      # Upload gcovr code coverage report
      - name: Upload GCC Code Coverage Report
        uses: actions/upload-artifact@v2
        with:
          name: coveragereport
          path: ./builddir/meson-logs/coveragereport/
      - name: Upload code coverage reports to codecov.io page
        run: bash <(curl -s https://codecov.io/bash)
Edit:
I've found no solution for running the cached Docker image, but I have managed to build a cached image on every CI run using the docker/setup-buildx-action@v1 action. Because the image layers are cached, we don't need to download every Docker image dependency, cutting the time from 3 minutes originally to only 40 seconds.
Below is the GitHub Actions workflow:
name: CI

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Cache register
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ hashFiles('**/Dockerfile') }}
      - name: Build Docker image
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          builder: ${{ steps.buildx.outputs.name }}
          load: true
          tags: c_matrix_library:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
      - name: Run Docker container
        run: docker run -v "$(pwd):/app" c_matrix_library:latest
If you want to cache a published Docker image that lives in a Docker registry (Docker Hub in this example), you can do:
- name: Restore MySQL Image Cache if it exists
  id: cache-docker-mysql
  uses: actions/cache@v3
  with:
    path: ci/cache/docker/mysql
    key: cache-docker-mysql-5.7

- name: Update MySQL Image Cache if cache miss
  if: steps.cache-docker-mysql.outputs.cache-hit != 'true'
  run: docker pull mysql:5.7 && mkdir -p ci/cache/docker/mysql && docker image save mysql:5.7 --output ./ci/cache/docker/mysql/mysql-5.7.tar

- name: Use MySQL Image Cache if cache hit
  if: steps.cache-docker-mysql.outputs.cache-hit == 'true'
  run: docker image load --input ./ci/cache/docker/mysql/mysql-5.7.tar

- name: Start containers
  run: docker compose up -d
When docker compose up runs, if a service uses the mysql:5.7 image, Docker skips downloading it because it is already loaded.
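For instance, a hypothetical service like this one would then start from the locally loaded image instead of pulling it:

# docker-compose.yml (fragment; service name and password are placeholders)
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example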
This might not fully answer your question, since I think there is no actual way of running your cached image.
But you can speed up your build using GitHub's cache; I have posted a complete tutorial about this that you can read here.
Summarizing, you can set up Docker Buildx and then use the GHA cache with build-push-action:
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1

- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    file: ./Dockerfile
    push: true
    tags: ivan123123/c_matrix_library:latest
    cache-from: type=gha
    cache-to: type=gha
Edit
Just found a reference in build-push action that might be useful to you:
https://github.com/docker/build-push-action/blob/master/docs/advanced/share-image-jobs.md
This question is a bit old now, but I've found the documented way of running a built image from the docker/build-push-action in a subsequent step. In short, you have to set up a local registry.
The yaml below has been directly copy + pasted from here.
name: ci

on:
  push:
    branches:
      - 'main'

jobs:
  docker:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver-opts: network=host
      -
        name: Build and push to local registry
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: localhost:5000/name/app:latest
      -
        name: Inspect
        run: |
          docker buildx imagetools inspect localhost:5000/name/app:latest
Edit:
As mentioned by Romain in the comments, the initial solution will pull the image at the beginning of the workflow and as such will not use the image that is built during the workflow. The only solution seems to be running docker run yourself in the step:
- name: Run my docker image
  run: >
    docker run -t ivan123123/c_matrix_library:latest
    ...
On a side note, using this solution might get a bit complicated if you use services in your job, in which case the networking between your container and the service containers will be troublesome.
Original answer:
To run the image you can use the following:
- name: Run my docker image
  uses: docker://ivan123123/c_matrix_library:latest
  with:
    entrypoint: ...
    args: ...
The entrypoint and args are optional. You can find more info here. One limitation, though, is that you cannot use any variable or context in the uses field; you can only hardcode the name and tag of the image.