I am using GitHub Actions to trigger the build of my Dockerfile and upload the container image to the GitHub Container Registry. In the last step I connect via SSH to my remote DigitalOcean Droplet and execute a script that pulls and installs the new image from GHCR. This workflow worked well for me while I was only building a single container in the project. Now I am using Docker Compose, as I need NGINX besides my API. I would like to keep the containers on a single Droplet, as the project is not demanding in resources at the moment.
What is the right way to automate deployment with GitHub Actions and Docker Compose to DigitalOcean on a single VM?
My currently known options are:
Skip building containers on GHCR, fetch the repo via SSH onto the remote machine, and build from source there by executing a production compose file
Build each container on GHCR, copy the production compose file to the remote machine, and pull & install the images from GHCR
If you know more options, that may be cleaner or more efficient please let me know!
Unfortunately, I have only found a question about docker-compose with GitHub Actions for CI to use as a reference.
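To make the second option concrete, the deploy step could look roughly like this. This is only a sketch under my own assumptions: the directory /opt/app, the file name docker-compose.prod.yml, and the secret names are placeholders, not anything prescribed by GitHub or DigitalOcean.

```yaml
# Hypothetical deploy job replacing the single-container SSH step.
# Assumes docker-compose.prod.yml was already copied to /opt/app on the Droplet
# and references prebuilt GHCR images via `image:` instead of `build:`.
- name: Deploy compose stack via SSH
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    key: ${{ secrets.PRIVATE_KEY }}
    script: |
      cd /opt/app
      docker login ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
      docker compose -f docker-compose.prod.yml pull
      docker compose -f docker-compose.prod.yml up -d --remove-orphans
```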
GitHub Action for single Container
name: GitHub Container Registry to DigitalOcean Droplet

on:
  # Trigger the workflow on push to the main branch
  push:
    branches:
      - main
    # only trigger the action if the backend folder changed
    paths:
      - "backend/**"
      - ".github/workflows/**"

jobs:
  # Builds a Docker image and pushes it to the GitHub Container Registry
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest
    # use the backend folder as the default working directory for the job
    defaults:
      run:
        working-directory: ./backend
    steps:
      # Check out the repository
      - name: Checking out the repository
        uses: actions/checkout@v2

      # Set up the Docker builder
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1

      # Create a GitHub access token with the "write:packages & read:packages" scopes for the GitHub Container Registry.
      # Then go to the repository settings and add the copied token as a secret called "CR_PAT"
      # https://github.com/settings/tokens/new?scopes=repo,write:packages&description=Github+Container+Registry
      # ! While GHCR is in beta, make sure to enable the feature
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}

      # Build and push to the GitHub Container Registry
      - name: Pushing Image to GitHub Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

  # Connect to the existing Droplet via SSH and (re)install and run the image
  # ! Ensure you created the Droplet from the preconfigured Docker image
  # ! Ensure you added your SSH key to the Droplet
  # !   (it is easier to add the SSH key before creating the Droplet)
  deploy_to_digital_ocean_droplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry
    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            # Stop all running Docker containers (ignore the error if none are running)
            docker kill $(docker ps -q) || true
            # Free up space (-f skips the interactive confirmation prompt)
            docker system prune -af
            # Log in to the GitHub Container Registry
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
            # Pull the Docker image
            docker pull ghcr.io/${{ github.repository }}:latest
            # Run a new container from the new image
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
Current Docker-Compose
version: "3"
services:
  api:
    build:
      context: ./backend/api
    networks:
      api-network:
        aliases:
          - api-net
  nginx:
    build:
      context: ./backend/nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      api-network:
        aliases:
          - nginx-net
    depends_on:
      - api
networks:
  api-network:
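For the pull-from-GHCR option, the production compose file would keep this topology but replace the build: sections with image: references. A minimal sketch, where the image names (ghcr.io/OWNER/REPO-api etc.) are placeholders of my own choosing:

```yaml
# Hypothetical docker-compose.prod.yml: same services, but pulls prebuilt images
version: "3"
services:
  api:
    image: ghcr.io/OWNER/REPO-api:latest
    networks:
      api-network:
        aliases:
          - api-net
  nginx:
    image: ghcr.io/OWNER/REPO-nginx:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      api-network:
        aliases:
          - nginx-net
    depends_on:
      - api
networks:
  api-network:
```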
Thought I'd post this as an answer instead of a comment since it was cleaner.
Here's a gist: https://gist.github.com/Aldo111/702f1146fb88f2c14f7b5955bec3d101
name: Server Build & Push

on:
  push:
    branches: [main]
    paths:
      - 'server/**'
      - 'shared/**'
      - docker-compose.prod.yml
      - Dockerfile

jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2

      - name: Create env file
        run: |
          touch .env
          echo "${{ secrets.SERVER_ENV_PROD }}" > .env
          cat .env

      - name: Build image
        run: docker compose -f docker-compose.prod.yml build

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600

      - name: Push image to DO Container Registry
        run: docker compose -f docker-compose.prod.yml push

      - name: Deploy Stack
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.GL_SSH_HOST }}
          username: ${{ secrets.GL_SSH_USERNAME }}
          key: ${{ secrets.GL_SSH_SECRET }}
          port: ${{ secrets.GL_SSH_PORT }}
          script: |
            cd /srv/www/game
            ./init.sh
In the final step, the directory in my case just contains a .env file and my prod compose file, but these things could also be rsync'd/copied/automated as another step in this workflow before actually running things.
My init.sh simply contains:
docker stack deploy -c <(docker-compose -f docker-compose.yml config) game --with-registry-auth
The with-registry-auth part is important since my docker-compose has image:...s that use containers in DigitalOcean's container registry. So on my server, I'd already logged in once when I first set up the directory.
With that, this docker command consumes my docker-compose.yml along with the environment variables (i.e. docker-compose -f docker-compose.yml config pre-processes the compose file with the .env file in the same directory, since stack deploy doesn't use .env) and, with the registry already authenticated, pulls the relevant images and restarts things as needed!
This can definitely be cleaned up and made a lot simpler but it's been working pretty well for me in my use case.
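To illustrate the .env point above: docker-compose config resolves variable placeholders before docker stack deploy (which ignores .env) ever sees the file. A minimal sketch with made-up names (API_TAG and the registry path are hypothetical):

```yaml
# .env (hypothetical):
#   API_TAG=1.2.3
#
# docker-compose.yml fragment:
services:
  api:
    image: registry.digitalocean.com/my-registry/api:${API_TAG}

# `docker-compose -f docker-compose.yml config` then emits the file with the
# variable substituted (image: registry.digitalocean.com/my-registry/api:1.2.3),
# and that resolved output is what gets fed into `docker stack deploy`.
```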
Related
I want to automatically create an image using GitHub Actions and upload it after testing.
However, when this build is executed inside a container, I am unable to connect to the newly launched container (the build itself probably succeeded).
Of course it works when I try to do the same thing without using containers (directly on the machine).
Why does the connection fail when I try it inside a Docker container?
jobs:
  build_and_test:
    runs-on: my-runner
    container:
      image: my-image:tag
      credentials:
        username: ${{ secrets.ID }}
        password: ${{ secrets.PW }}
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      options: --user root
    steps:
      - uses: actions/checkout@v2
      - name: build_image
        run: |
          ...
          # build image
          ...
      - name: bvt_test
        run: |
          container=$(docker run --rm -itd -p ${port}:1234 ${new_image})
          ...
          # test, but the connection failed
Thanks.
Here I have two jobs in a workflow. The only target we want to achieve is to reuse the container images by using a cache or some other means, similar to what we do for node_modules.
jobs:
  build:
    name: build
    runs-on: [self-hosted, x64, linux, research]
    container:
      image: <sample docker image>
      env:
        NPM_AUTH_TOKEN: <sample token>
    steps:
      - uses: actions/checkout@v2
      - name: Install
        run: |
          npm install
      - name: Build
        run: |
          npm build
  Test:
    name: Test Lint
    runs-on: [self-hosted, x64, linux, research]
    container:
      image: <sample docker image>
      env:
        NPM_AUTH_TOKEN: <sample token>
    steps:
      - uses: actions/checkout@v2
      - name: Install Dependencies
        run: npm ci
      - name: Lint Check
        run: npm run lint
I would suggest using Docker's Build Push action for this purpose. Through the build-push-action, you can cache your container images using the inline cache, the registry cache, or the experimental cache backend API:
Inline cache
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=registry,ref=user/app:latest
    cache-to: type=inline
Refer to the Buildkit docs.
Registry Cache
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=registry,ref=user/app:buildcache
    cache-to: type=registry,ref=user/app:buildcache,mode=max
Refer to Buildkit docs.
Cache backend API
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: user/app:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
Refer to Buildkit docs.
I personally prefer using the cache backend API, as it's easy to set up and provides a great boost in reducing the overall CI pipeline run duration.
By looking at the comments, it seems you want to share the Docker cache between workflows. In this case you can share Docker images between jobs in a workflow using this example:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          tags: myimage:latest
          outputs: type=docker,dest=/tmp/myimage.tar
      - name: Upload artifact
        uses: actions/upload-artifact@v2
        with:
          name: myimage
          path: /tmp/myimage.tar
  use:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Download artifact
        uses: actions/download-artifact@v2
        with:
          name: myimage
          path: /tmp
      - name: Load Docker image
        run: |
          docker load --input /tmp/myimage.tar
          docker image ls -a
In general, data is not shared between jobs in GitHub Actions (GHA). Jobs actually run in parallel on distinct ephemeral VMs unless you explicitly create a dependency with needs.
GHA does provide a cache mechanism. For package-manager-type caching, they have simplified it; see here.
For Docker images, you can either use the docker buildx cache and cache to a remote registry (including ghcr.io), or use the GHA cache action, which is probably easier. The syntax for actions/cache is pretty straightforward and clear on its page. For buildx, documentation has always been a bit of an issue (largely, I think, because the people building it are so smart that they do not realize how much we do not understand what is in their heads), so you would need to configure the cache action, and then configure buildx to use it.
Alternatively, you could do docker save imagename > imagename.tar and use that file in the cache. There is a decent example of that here. No idea who wrote it, but it does the job.
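The save-based approach could be sketched like this. The paths, cache key, and image name below are my own placeholders, not taken from the linked example:

```yaml
# Hypothetical steps: cache a saved image tarball between workflow runs
- name: Cache Docker image tarball
  uses: actions/cache@v3
  with:
    path: /tmp/docker-cache
    key: docker-image-${{ hashFiles('Dockerfile') }}

- name: Load image from cache if present
  run: |
    if [ -f /tmp/docker-cache/imagename.tar ]; then
      docker load --input /tmp/docker-cache/imagename.tar
    fi

- name: Build and save image
  run: |
    docker build -t imagename .
    mkdir -p /tmp/docker-cache
    docker save imagename > /tmp/docker-cache/imagename.tar
```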
I have a repository that has two folders, both of which contain a Dockerfile. Only one of them has GitHub Actions configured to build the Dockerfile. It used to work just fine, but now it does not trigger at all.
What could be the reason for it? This is the GitHub Actions workflow I have built. There are no exceptions.
name: Docker Image CI

on:
  workflow_dispatch:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to GitHub Package Registry
        run: echo ${{ secrets.GITHUB_TOKEN }} | docker login docker.pkg.github.com -u ${{ github.repository }} --password-stdin
      - name: Build the Docker image
        run: docker build -t dashboard:latest .
      - name: Tag the Docker image
        run: docker tag dashboard:latest docker.pkg.github.com/test/dashboard/dashboard:latest
      - name: Push the Docker image to the registry
        run: docker push docker.pkg.github.com/test/dashboard/dashboard:latest
I don't know how to run a cached Docker image in GitHub Actions.
I've followed a tutorial about publishing Docker images to implement a task that caches, builds, and pushes a Docker image to Docker Hub.
I need to build, cache, and run the image; publishing the image is optional.
My goal is to speed up the CI workflow.
Here is the GitHub Actions workflow:
name: CI

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Check Out Repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1

      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}

      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          builder: ${{ steps.buildx.outputs.name }}
          push: true
          tags: ivan123123/c_matrix_library:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache

      #- name: Run Docker container
      #  run: ???

      # Upload gcovr code coverage report
      - name: Upload GCC Code Coverage Report
        uses: actions/upload-artifact@v2
        with:
          name: coveragereport
          path: ./builddir/meson-logs/coveragereport/

      - name: Upload code coverage reports to codecov.io page
        run: bash <(curl -s https://codecov.io/bash)
Edit:
I've found no solution for running a cached Docker image, but I have managed to build the image from cache on every CI run with the docker/setup-buildx-action@v1 action. Because the image layers are cached, we don't need to download every Docker image dependency, cutting the time from 3 minutes originally to only 40 seconds.
Below is the GitHub Actions workflow:
name: CI

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1

      - name: Cache register
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ hashFiles('**/Dockerfile') }}

      - name: Build Docker image
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          builder: ${{ steps.buildx.outputs.name }}
          load: true
          tags: c_matrix_library:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache

      - name: Run Docker container
        run: docker run -v "$(pwd):/app" c_matrix_library:latest
If you want to cache a published Docker image that lives in a Docker registry, you can do:
- name: Restore MySQL Image Cache if it exists
  id: cache-docker-mysql
  uses: actions/cache@v3
  with:
    path: ci/cache/docker/mysql
    key: cache-docker-mysql-5.7

- name: Update MySQL Image Cache if cache miss
  if: steps.cache-docker-mysql.outputs.cache-hit != 'true'
  run: docker pull mysql:5.7 && mkdir -p ci/cache/docker/mysql && docker image save mysql:5.7 --output ./ci/cache/docker/mysql/mysql-5.7.tar

- name: Use MySQL Image Cache if cache hit
  if: steps.cache-docker-mysql.outputs.cache-hit == 'true'
  run: docker image load --input ./ci/cache/docker/mysql/mysql-5.7.tar

- name: Start containers
  run: docker compose up -d
When docker compose up runs, if a service uses the mysql:5.7 image, it will skip downloading it.
This might not fully answer your question, since I think there is no actual way of running your cached image.
But you can speed up your build using GitHub's cache; I have posted a complete tutorial about this that you can read here.
Summarizing, you can set up Docker Buildx and then use the GHA cache
with build-push-action:
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1

- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    file: ./Dockerfile
    push: true
    tags: ivan123123/c_matrix_library:latest
    cache-from: type=gha
    cache-to: type=gha
Edit
Just found a reference in build-push action that might be useful to you:
https://github.com/docker/build-push-action/blob/master/docs/advanced/share-image-jobs.md
This question is a bit old now, but I've found the documented way of running a built image from docker/build-push-action in a subsequent step. In short, you have to set up a local registry.
The YAML below has been directly copy + pasted from here.
name: ci

on:
  push:
    branches:
      - 'main'

jobs:
  docker:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver-opts: network=host

      - name: Build and push to local registry
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: localhost:5000/name/app:latest

      - name: Inspect
        run: |
          docker buildx imagetools inspect localhost:5000/name/app:latest
Edit:
As mentioned by Romain in the comments, the initial solution will pull the image at the beginning of the workflow and as such will not use the image that is built during the workflow. The only solution seems to be running docker run yourself in the step:
- name: Run my docker image
  run: >
    docker run -t ivan123123/c_matrix_library:latest
    ...
On a side note, using this solution might get a bit complicated if you use services in your job, in which case the networking between your container and the service containers will be troublesome.
Original answer:
To run the image you can use the following:
- name: Run my docker image
  uses: docker://ivan123123/c_matrix_library:latest
  with:
    entrypoint: ...
    args: ...
The entrypoint and args are optional. You can find more info here. One limitation, though, is that you cannot use any variable or context in the uses field; you can only hardcode the name and tag of the image.
I am trying to use the container option in a GitHub Actions workflow to run the entire job in a Docker container. How do I specify the login credentials to retrieve this Docker image from a private repository on Docker Hub?
jobs:
  build:
    runs-on: ubuntu-18.04
    container: private_org/test-runner:1.0
I have successfully used the following docker-login action to authenticate with Docker Hub as a step, but this does not get performed until after the job-level container gets initialized.
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: azure/docker-login@v1
        with:
          username: me
          password: ${{ secrets.MY_DOCKERHUB_PASSWORD }}
      - name: test docker creds
        run: docker pull private_org/test-runner:1.0
This was implemented recently. Use the following workflow definition:
jobs:
  build:
    container:
      image: private_org/test-runner:1.0
      credentials:
        username: me
        password: ${{ secrets.MY_DOCKERHUB_PASSWORD }}
Source:
https://github.blog/changelog/2020-09-24-github-actions-private-registry-support-for-job-and-service-containers/