I have a GitHub workflow with a job like this:
docker:
  name: Docker image
  needs: test-n-build
  runs-on: ubuntu-20.04
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Set up QEMU
      uses: docker/setup-qemu-action@v1
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v1
    - name: Set docker image tag
      if: startsWith(github.ref, 'refs/tags/')
      run: echo "DOCKER_TAG=:$(cargo read-manifest | sed 's/.*"version":"\{0,1\}\([^,"]*\)"\{0,1\}.*/\1/')" >> $GITHUB_ENV
    - name: Login to DockerHub
      uses: docker/login-action@v1
    - name: Build and push
      uses: docker/build-push-action@v2
      with:
        push: ${{ startsWith(github.ref, 'refs/tags/') }}
This workflow builds a docker image and pushes it to the registry when a tag is pushed. As you can see, there is a Set docker image tag step that uses a cargo command, but when I copy-pasted things I forgot to add a setup Rust action. Yet that step executes successfully, and no "command cargo not found" error appears.
Is it because that job needs the test-n-build job, where I actually set up Rust, or does QEMU install Rust? How does it find the cargo command?
As you can see in the Ubuntu-20.04 virtual environment documentation, the runner image ships with some Rust tools preinstalled:
### Rust Tools
Cargo 1.58.0
Rust 1.58.1
Rustdoc 1.58.1
Rustup 1.24.3
Therefore, you don't need any extra setup to install them in that case.
You can check the available virtual environments used as runners and their configurations here.
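That said, if you would rather not depend on whatever the runner image happens to preinstall, you can pin the toolchain explicitly. A minimal sketch, assuming the dtolnay/rust-toolchain action (any Rust setup action would do; this one is just a common choice):
- name: Set up Rust
  uses: dtolnay/rust-toolchain@stable  # pins a toolchain instead of relying on the runner image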
Related
I have a GitHub Action that is almost like the one below. The action's purpose is to build a Dockerfile and push it to DockerHub.
name: DockerHub Run
on:
  push:
    branches:
      - "master"
  schedule:
    - cron: "0 0 * * 0"
env:
  DOCKERHUB_USERNAME: MyUser
  OFFICIAL_TAG: MyUser/MyImage:latest
  MAIN_REPO_NAME: MyUser/MyImage
  DOCKERFILE_PATH: /
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Build and push image to DockerHub
        uses: docker/build-push-action@v3
        with:
          platforms: linux/amd64,linux/arm64
          file: ${{ env.GITHUB_WORKSPACE }}/Dockerfile
          push: true
          tags: ${{ env.OFFICIAL_TAG }}
      - name: Update repo description
        uses: peter-evans/dockerhub-description@v2
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
          repository: ${{ env.MAIN_REPO_NAME }}
          readme-filepath: ./readme.md
And according to DockerHub, both architectures are listed.
However, I have a question about these lines:
uses: docker/build-push-action@v3
with:
  platforms: linux/amd64,linux/arm64
I'm not sure if listing the platforms here actually compiles to those platforms. Keep in mind that GitHub is using ubuntu-latest, which is x86-64, and I do not have an ARM64 device to test with.
Am I setting up correctly to build to ARM devices?
tldr
I'm not sure if listing the platforms here actually compiles to those platforms.
It does (assuming the commands defined in your Dockerfile are platform-conscious).
Am I setting up correctly to build to ARM devices?
Your config looks correct and should build for amd64 and arm64. You can test it by adding a step after the push that checks the output:
# assuming your image is debian based
$ docker run \
    --platform linux/amd64 \
    --rm \
    --entrypoint='' \
    MyUser/MyImage:latest \
    /bin/bash -c 'dpkg --print-architecture'
# output should be
amd64

$ docker run \
    --platform linux/arm64 \
    --rm \
    --entrypoint='' \
    MyUser/MyImage:latest \
    /bin/bash -c 'dpkg --print-architecture'
# output should be
arm64
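Another quick check that does not require running the image at all is to inspect the pushed manifest list. A minimal sketch, using the image tag from your workflow:
$ docker buildx imagetools inspect MyUser/MyImage:latest
# the Manifests section should list both linux/amd64 and linux/arm64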
long answer
It "works" because of the emulators for the qemu emulators for a bunch of different platforms (aka docker/setup-qemu-action#v2) and then using docker buildx for the multi-platform images
The problem is that even though everything seems to build fine in CI this way, the artifacts never really get tested on their respective native platforms, so to answer your question 'Am I setting up correctly to build to ARM devices?' ... 🤷‍♂️
I find it similar to Python and its universal2 wheels, where the cross-compiled artifacts are built but never really tested (all very Python- and macOS-specific, but the conversations point out the challenges of running integration/e2e tests for these multi-platform artifacts):
https://github.com/actions/setup-python/issues/197
https://github.com/actions/runner-images/issues/4133
https://github.com/actions/python-versions/pull/114
https://github.com/actions/setup-python/issues/547
This github/community discussion also provides a little more depth on multi-platform builds:
https://github.com/community/community/discussions/38728#discussioncomment-4106829
I would like to achieve the following CI pipeline with a GitHub Actions workflow:
a pull request is merged -> the GitHub action is triggered -> a docker image is built with the semantic version incremented by one, or with the version based on a GitHub tag - if it is possible to somehow tag the pull request merge.
How do I achieve that, or is there a better approach?
I have tried with secrets, but to no avail. How do I implement semantic versioning in a GitHub Actions workflow?
name: Docker Image CI
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag my-image-name:${{ github.ref_name }}
${{ github.ref_name }} pulls the tag for you. Alternatively, run a git command like git describe --abbrev=0 in a previous step to get the latest tag, then append it to the image name and use it like:
- name: Get Tag
  id: vars
  run: echo "::set-output name=tag::$(git describe --abbrev=0)"
- name: Build the Docker image
  run: docker build . --file Dockerfile --tag my-image-name:${{ steps.vars.outputs.tag }}
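One caveat: actions/checkout makes a shallow clone without tags by default, so git describe may fail with "no names found". If that happens, fetch the full history:
- uses: actions/checkout@v3
  with:
    fetch-depth: 0  # fetch all history and tags so git describe can see them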
You can use one of the many semver actions on the Marketplace.
For example, I have tried this one - Semver action.
It will bump your repo version, and you can use a git command to get that bumped version in the next job.
Combining it with docker build, you can do something like:
jobs:
  update-semver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: haya14busa/action-update-semver@v1
        id: version
        with:
          major_version_tag_only: true # (optional, default is "false")
  Build:
    name: Build Image
    needs: [update-semver]
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v2
      - name: Build image
        run: |
          tag_v=$(git describe --tags $(git rev-list --tags --max-count=1))
          tag=$(echo $tag_v | sed 's/v//')
          docker build -t my_image_${tag} .
I get this error on my GitHub build:
The requested image's platform (linux/arm/v7) does not match the detected host platform (linux/amd64) and no specific platform was requested
The runner environment I'm using is ubuntu-latest, and my container uses the arm32v7/node base image.
Here's my Dockerfile:
FROM arm32v7/node AS appbuild
WORKDIR /app
COPY package.json ./
COPY src/ ./src
RUN npm install
FROM appbuild AS release
ENTRYPOINT ["npm", "start"]
And here's my deployment yaml for GitHub:
jobs:
  docker:
    runs-on: ubuntu-latest
The jobs are run inside an Ubuntu runner environment. I suspect the ARM version of the node image I'm using is unable to run in that environment - is that correct?
Edit:
- name: Build controller image
  if: steps.changed-files-plantcontroller.outputs.any_changed == 'true'
  run: >
    docker build -t ${{ env.image_name_controller }} ${{ env.context }}
  env:
    context: ./controller
You are correct. Docker on your amd64 build environment cannot build the arm32v7 image without Docker Buildx. You can solve this by using the buildx GitHub Action and the QEMU action, which prepare your environment to build multi-arch images.
Buildx is Docker's technology for building images for a target architecture that does not match the host's (in this case the host arch is amd64). You can read about how this works here.
QEMU is an emulator used by Docker buildx. It's a dependency I learned the hard way was needed when writing a multi-arch workflow.
For more reference, check out this doc: https://github.com/docker/build-push-action/blob/master/docs/advanced/multi-platform.md
You should be able to build your image with a workflow something like this:
name: ci
on:
  push:
    branches:
      - 'master'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up QEMU dependency
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build controller image
        if: steps.changed-files-plantcontroller.outputs.any_changed == 'true'
        uses: docker/build-push-action@v2
        with:
          context: ${{ env.context }}
          tags: ${{ env.image_name_controller }}
          platforms: linux/arm/v7
          push: true
        env:
          context: ./controller
NOTE: I updated the last build step to match how you have your Build controller image step written. It just translates your docker build command to how the docker/build-push-action builds.
An alternative command you could use in place of build-push-action@v2:
docker buildx build -t ${{ env.image_name_controller }} --platform linux/arm/v7 --push ${{ env.context }}
The command assumes you want to push the image with that tag to a registry.
Unrelated, but there's an opportunity to simplify your Dockerfile.
When you're using buildx, Docker can pull the right arm32v7 variant automatically when the image you're pulling has multiple platform versions. Check out the node image tags and you'll see linux/arm/v7 listed as one of the OS/Arch entries. This means you could simplify your FROM arm32v7/node AS appbuild line to FROM node AS appbuild, in case you want to build for a different architecture sometime; see the sketch after the example links below.
For example:
Github workflow that builds for amd64 and arm64
Dockerfile that pulls relevant amd64 or arm64 image depending on TARGETPLATFORM
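To sketch what that simplification enables: buildx sets TARGETPLATFORM for each entry in --platform, so one Dockerfile can serve all of them (the echo line is only illustrative):
FROM node AS appbuild
ARG TARGETPLATFORM  # populated by buildx; must be declared to be usable in RUN
RUN echo "building for ${TARGETPLATFORM}"
WORKDIR /app
COPY package.json ./
RUN npm install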
I want to run Django test cases inside a container.
I am able to pull the private image from Docker Hub, but when I run the test command, it fails.
Has anyone tried running test cases inside a container?
jobs:
  test:
    container:
      image: abcd
      credentials:
        username: "<username>"
        password: "<password>"
    steps:
      - uses: actions/checkout@v2
      - name: Display Python version
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements/dev.txt
      - name: run test
        run: |
          python3 manage.py test
In my experience, I have found that using GitHub's container instruction causes more confusion than simply running whatever you want on the runner itself, as if you were running it on your own machine.
The big majority of the tests I run on GitHub Actions run in containers, and some require private DockerHub images.
I always do this:
Create a docker-compose.yml for development use, so I can test things locally.
Usually in CI you want slightly different things in your docker-compose (for example, no volume mappings) - if this is the case, I create another docker-compose.yml in a .ci subfolder.
My docker-compose.yml contains a test service that runs whatever test (or test suite) I want; a sketch follows below.
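For reference, a minimal .ci/docker-compose.yml with such a test service could look like this (the image name and command are placeholders, not taken from the question):
services:
  test:
    image: myuser/my-private-image:latest  # private DockerHub image (placeholder)
    command: python3 manage.py test        # whatever test or suite you want to run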
Here is a sample GitHub actions file I am using:
name: Test
on:
  pull_request:
  push: { branches: master }
jobs:
  test:
    name: Run test suite
    runs-on: ubuntu-latest
    env:
      COMPOSE_FILE: .ci/docker-compose.yml
      DOCKER_USER: ${{ secrets.DOCKER_USER }}
      DOCKER_PASS: ${{ secrets.DOCKER_PASS }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Login to DockerHub
        run: docker login -u $DOCKER_USER -p $DOCKER_PASS
      - name: Build docker images
        run: docker-compose build
      - name: Run tests
        run: docker-compose run test
Of course, this entails setting up the two mentioned secrets, but other than that, I have found this method to be:
Reliable
Portable (I switched from Travis CI to this approach easily)
Compatible with the dev environment
Easy to understand and reproduce both locally and in CI
I have a GitHub repository, a Docker repository, and an Amazon EC2 instance, and I am trying to create a CI/CD pipeline with these tools. The idea is to deploy a docker container to the EC2 instance whenever a push happens to the GitHub repository's master branch. I have used GitHub Actions to build the code, build the docker image, and push the image to Docker Hub. Now I want to pull the latest image from Docker Hub onto the remote EC2 instance and run it. For this I am trying to execute an ansible command from GitHub Actions, but I need to specify a .pem file as an argument to the ansible command. I tried to keep the .pem file in GitHub secrets, but it didn't work. I am really confused about how to proceed.
Here is my GitHub workflow file:
name: helloworld_cicd
on:
  push:
    branches:
      - master
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v1
      - name: Go Build
        run: go build
      - name: Docker build
        run: docker build -t helloworld .
      - name: Docker login
        run: docker login --username=${{ secrets.docker_username }} --password=${{ secrets.docker_password }}
      - name: Docker tag
        run: docker tag helloworld vijinvv/helloworld:latest
      - name: Docker push
        run: docker push vijinvv/helloworld:latest
I tried to run something like
ansible all -i '3.15.152.219,' --private-key ${{ secrets.ssh_key }} -m rest of the command
but that didn't work. What would be the best way to solve this issue?
I'm guessing that by "it didn't work" you mean that ansible expects the private key to be a file, whereas you are supplying a string.
This page on GitHub Actions shows how to use secret files in GitHub Actions. The equivalent for your case would be the following steps:
gpg --symmetric --cipher-algo AES256 my_private_key.pem
Choose a strong passphrase and save this passphrase as a secret in GitHub secrets. Call it LARGE_SECRET_PASSPHRASE.
Commit your encrypted my_private_key.pem.gpg to git.
Create a step in your actions that decrypts this file. It could look something like:
- name: Decrypt Pem
  run: |
    mkdir -p $HOME/secrets  # gpg will not create the output directory itself
    gpg --quiet --batch --yes --decrypt --passphrase="$LARGE_SECRET_PASSPHRASE" --output $HOME/secrets/my_private_key.pem my_private_key.pem.gpg
  env:
    LARGE_SECRET_PASSPHRASE: ${{ secrets.LARGE_SECRET_PASSPHRASE }}
Finally you can run your ansible command with ansible all -i '3.15.152.219,' --private-key $HOME/secrets/my_private_key.pem
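Wired into the workflow, the final step could look something like this (-m ping is only a placeholder, since the actual module is elided in the question):
- name: Run ansible
  run: ansible all -i '3.15.152.219,' --private-key $HOME/secrets/my_private_key.pem -m ping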
You can easily use webfactory/ssh-agent to add your SSH private key. See its documentation, and add the following step before running the ansible command.
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - uses: actions/checkout@v2
      # Make sure the @v0.5.2 matches the current version of the action
      - uses: webfactory/ssh-agent@v0.5.2
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps
SSH_PRIVATE_KEY must be the key that is registered in the repository secrets. After that, run your ansible command without passing a private key file; ssh picks the key up from the agent.
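For example (again with a placeholder module in place of the one elided in the question):
ansible all -i '3.15.152.219,' -m ping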