I'm running jobs on macos-11. I have integrated SwiftLint locally as well, and that works fine, but when someone raises a PR I need to run SwiftLint on GitHub Actions. How can I do that? Below is the current YAML file for the action.
name: Build & Test

on:
  # Run tests when PRs are created or updated
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]

env:
  # Defines the Xcode version
  DEVELOPER_DIR: /Applications/Xcode_13.0.app/Contents/Developer
  FETCH_DEPTH: 0
  RUBY_VERSION: 2.7.1

defaults:
  run:
    shell: bash

jobs:
  test:
    name: Build & Test
    if: ${{ github.event.pull_request.draft == false }}
    runs-on: macos-11
    steps:
      - name: Checkout Project
        uses: actions/checkout@v2.3.4
        with:
          fetch-depth: ${{ env.FETCH_DEPTH }}
      - name: Restore Gem Cache
        uses: actions/cache@v2.1.3
        with:
          path: vendor/bundle
          key: ${{ runner.os }}-gem-${{ hashFiles('**/Gemfile.lock') }}
          restore-keys: ${{ runner.os }}-gem-
      - name: Restore Pod Cache
        uses: actions/cache@v2.1.3
        with:
          path: Pods
          key: ${{ runner.os }}-pods-${{ hashFiles('**/Podfile.lock') }}
          restore-keys: ${{ runner.os }}-pods-
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1.51.1
        with:
          bundler-cache: true
          ruby-version: ${{ env.RUBY_VERSION }}
SwiftLint is working fine locally, but when I raise the pull request no SwiftLint warnings show up.
I am using this step:
- name: Lint
  run: |
    set -o pipefail
    swiftlint lint --strict --quiet | sed -E 's/^(.*):([0-9]+):([0-9]+): (warning|error|[^:]+): (.*)/::\4 title=Lint error,file=\1,line=\2,col=\3::\5\n\1:\2:\3/'
It parses SwiftLint warnings and errors into GitHub annotations, which are visible in the summary straight away.
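In the workflow from the question, this step can go at the end of the test job, after the Setup Ruby step. If the runner image does not already provide swiftlint (GitHub's macOS images have generally shipped with it preinstalled), install it first; a minimal sketch:

  - name: Install SwiftLint
    run: brew list swiftlint &> /dev/null || brew install swiftlint
  # ...followed by the Lint step shown above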
Related
I have a problem. I am starting to work with GitHub Actions, and I have a working pipeline where I am trying to:

1. Build the docker application
2. Run the tests
3. Publish to the docker repository

But running the tests and publishing are two separate jobs, so I am building the image twice. Here is the workflow code:
name: Test & Publish Docker Image

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  run_application_tests:
    name: Run test suite
    runs-on: ubuntu-latest
    env:
      COMPOSE_FILE: docker-compose.yml
      DOCKER_USER: ${{ secrets.DOCKER_USER }}
      DOCKER_PASS: ${{ secrets.DOCKER_PASS }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Login To Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and run docker image
        run: docker-compose up -d --build
      - name: Run migrations
        run: make migrate
      - name: Run tests
        run: make test

  build_and_publish_docker_image:
    needs: run_application_tests
    name: Build & Publish Docker Images
    runs-on: ubuntu-latest
    steps:
      - name: Checkout The Repo
        uses: actions/checkout@v3
      - name: Login To Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build & Push Docker Image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: |
            me/image:latest
            me/image:${{ github.sha }}
The docker image is a Ruby on Rails application, so the bundle install takes very long. Each build takes about 2-3 minutes.
I also tried adding:

  cache-from: type=gha
  cache-to: type=gha,mode=max

to the docker/build-push-action@v3 step, but that resulted in:
Error: buildx failed with: ERROR: cache export feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")
What can I change to improve this pipeline's running time?
In your job run_application_tests you have this code:
- name: Login To Docker Hub
  uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and run docker image
  run: docker-compose up -d --build
Why does this code run docker-compose?
This is how your job should look if you only want to run the tests:
jobs:
  run_application_tests:
    name: Run test suite
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Run migrations
        run: make migrate
      - name: Run tests
        run: make test
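As for the cache error you hit: the type=gha cache backend needs the docker-container buildx driver, which the default docker driver does not provide. docker/setup-buildx-action creates and selects such a builder, so adding it before the build step should make the cache options work. A sketch, assuming the rest of the publish job stays as it is:

  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v2   # creates and selects a docker-container builder
  - name: Build & Push Docker Image
    uses: docker/build-push-action@v3
    with:
      push: true
      tags: |
        me/image:latest
        me/image:${{ github.sha }}
      cache-from: type=gha
      cache-to: type=gha,mode=max

With a warm cache, the expensive bundle install layer is reused, which should cut the 2-3 minute rebuild considerably.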
When running the workflow in GitHub Actions, RuboCop errors out, but the code it complains about is not present in my repo. How can I fix this?
Error:
Run bin/rubocop --parallel
vendor/bundle/ruby/2.7.0/gems/activerecord-import-1.1.0/.rubocop.yml: Lint/EndAlignment has the wrong namespace - should be Layout
vendor/bundle/ruby/2.7.0/gems/activerecord-import-1.1.0/.rubocop.yml: Metrics/LineLength has the wrong namespace - should be Layout
vendor/bundle/ruby/2.7.0/gems/activerecord-import-1.1.0/.rubocop.yml: Style/ElseAlignment has the wrong namespace - should be Layout
vendor/bundle/ruby/2.7.0/gems/activerecord-import-1.1.0/.rubocop.yml: Style/SpaceInsideParens has the wrong namespace - should be Layout
Error: The `Lint/HandleExceptions` cop has been renamed to `Lint/SuppressedException`.
(obsolete configuration found in vendor/bundle/ruby/2.7.0/gems/activerecord-import-1.1.0/.rubocop_todo.yml, please update it)
Error: Process completed with exit code 2.
GitHub Actions workflow YAML file:
name: Verify

on: [push]

jobs:
  linters:
    name: Linters
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Setup Ruby and install gems
        uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Setup Node
        uses: actions/setup-node@v1
        with:
          node-version: 10.13.0
      - name: Find yarn cache location
        id: yarn-cache
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - name: JS package cache
        uses: actions/cache@v1
        with:
          path: ${{ steps.yarn-cache.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install packages
        run: |
          yarn install --pure-lockfile
          sudo apt-get -yqq install libpq-dev
          gem install bundler
          bundle install --jobs 4 --retry 3
      - name: Run linters
        run: |
          bin/rubocop --parallel

  tests:
    name: Tests
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:11
        env:
          POSTGRES_USER: myapp
          POSTGRES_DB: myapp_test
          POSTGRES_PASSWORD: ""
        ports: ["5432:5432"]
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Setup Ruby and install gems
        uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Setup Node
        uses: actions/setup-node@v1
        with:
          node-version: 10.13.0
      - name: Find yarn cache location
        id: yarn-cache
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - name: JS package cache
        uses: actions/cache@v1
        with:
          path: ${{ steps.yarn-cache.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install packages
        run: |
          gem install bundler
          bundle install --jobs 4 --retry 3
      - name: Setup test database
        env:
          RAILS_ENV: test
          PGHOST: localhost
          PGUSER: myapp
        run: |
          bundle exec rails db:create
          bundle exec rails db:migrate
      - name: Run tests
        run: |
          bundle exec rails test
As you already noticed, there is no real benefit in running RuboCop against third-party code and external gems: they are not under your control and you certainly do not want to "fix" them. They show up in CI because the bundle is installed into vendor/bundle inside your checkout, as the error paths show.
Therefore I suggest excluding folders with external code, for example the gems in the vendor/bundle folder. This can be done by adding the following lines to your project's .rubocop.yml configuration file:
AllCops:
  Exclude:
    - 'vendor/bundle/**/*'
See the RuboCop docs about excluding files and folders.
I am trying to stop my GitHub CI from failing completely when the build of a multi-arch docker image succeeds for at least one architecture, so that the successful builds for those architectures are still pushed to Docker Hub. What I have so far:
name: 'build images'

on:
  push:
    branches:
      - master
    tags:
      - '*'
  schedule:
    - cron: '0 4 1 * *'

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Prepare
        id: prep
        run: |
          DOCKER_IMAGE=${{ secrets.DOCKER_USERNAME }}/${GITHUB_REPOSITORY#*/}
          VERSION=latest
          # If this is a git tag, use the tag name as a docker tag
          if [[ $GITHUB_REF == refs/tags/* ]]; then
            VERSION="${GITHUB_REF#refs/tags/v}"
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          # If the VERSION looks like a version number, assume that
          # this is the most recent version of the image and also
          # tag it 'latest'.
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            TAGS="$TAGS,${DOCKER_IMAGE}:latest"
          fi
          # Set output parameters.
          echo ::set-output name=tags::${TAGS}
          echo ::set-output name=docker_image::${DOCKER_IMAGE}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
        with:
          platforms: all
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Inspect builder
        run: |
          echo "Name: ${{ steps.buildx.outputs.name }}"
          echo "Endpoint: ${{ steps.buildx.outputs.endpoint }}"
          echo "Status: ${{ steps.buildx.outputs.status }}"
          echo "Flags: ${{ steps.buildx.outputs.flags }}"
          echo "Platforms: ${{ steps.buildx.outputs.platforms }}"
      - name: Login to DockerHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build
        uses: docker/build-push-action@v2
        with:
          builder: ${{ steps.buildx.outputs.name }}
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          push: true
          tags: ${{ steps.prep.outputs.tags }}
      - name: Sync
        uses: ms-jpq/sync-dockerhub-readme@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: xx/yy
          readme: "./README.md"
What I also did: I created this CI for each architecture individually, with its own architecture tag, but that way I do not get a "multi-arch" tag.
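A sketch of that per-architecture variant, under the assumption that separate matrix jobs are acceptable: with fail-fast: false, one failing architecture no longer cancels the others, and the surviving per-arch images can later be stitched into a single multi-arch tag with docker buildx imagetools create. The PLATFORM_TAG suffix below is a hypothetical naming choice:

  jobs:
    docker:
      runs-on: ubuntu-latest
      strategy:
        fail-fast: false              # one failing architecture does not cancel the others
        matrix:
          platform: [linux/amd64, linux/arm64, linux/arm/v7]
      steps:
        # ...checkout, prep, QEMU, buildx and login steps as above...
        - name: Make a tag-safe platform suffix   # slashes are not allowed in tags
          run: echo "PLATFORM_TAG=${PLATFORM//\//-}" >> $GITHUB_ENV
          env:
            PLATFORM: ${{ matrix.platform }}
        - name: Build
          uses: docker/build-push-action@v2
          with:
            context: .
            file: ./Dockerfile
            platforms: ${{ matrix.platform }}
            push: true
            tags: ${{ steps.prep.outputs.docker_image }}:latest-${{ env.PLATFORM_TAG }}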
I would like to apply semantic versioning to my docker images, which are built and pushed to the GitHub Container Registry by a GitHub Action.
I found a satisfying solution here: https://stackoverflow.com/a/69059228/12877180
Following that solution, I reproduced the YAML below.
name: Docker CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io

jobs:
  build-push:
    # needs: build-test
    name: Build and push Docker image to GitHub Container registry
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v2
      - name: Login to GitHub Container registry
        uses: docker/login-action@v1
        env:
          USERNAME: ${{ github.actor }}
          PASSWORD: ${{ secrets.GITHUB_TOKEN }}
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ env.USERNAME }}
          password: ${{ env.PASSWORD }}
      - name: Get lowercase repository name
        run: |
          echo "IMAGE=${REPOSITORY,,}">>${GITHUB_ENV}
        env:
          REPOSITORY: ${{ env.REGISTRY }}/${{ github.repository }}
      - name: Build and export the image to Docker
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./docker/Dockerfile
          target: final
          push: true
          tags: |
            ${{ env.IMAGE }}:${{ secrets.MAJOR }}.${{ secrets.MINOR }}
          build-args: |
            ENVIRONMENT=production
      - name: Update Patch version
        uses: hmanzur/actions-set-secret@v2.0.0
        with:
          name: 'MINOR'
          value: $((${{ secrets.MINOR }} + 1))
          repository: ${{ github.repository }}
          token: ${{ secrets.GH_PAT }}
Unfortunately, this does not work.
The initial value of the MINOR secret is 0. When the build-push job runs for the very first time, the docker image is pushed to the GHCR perfectly, with the ghcr.io/my-org/my-repo:0.0 syntax.
The build-push job is then supposed to increment the MINOR secret by 1.
When the build-push job runs again after a new event, I get an error while trying to build the docker image with the incremented tag.
/usr/bin/docker buildx build --build-arg ENVIRONMENT=production --tag ghcr.io/my-org/my-repo:***.*** --target final --iidfile /tmp/docker-build-push-HgjJR7/iidfile --metadata-file /tmp/docker-build-push-HgjJR7/metadata-file --file ./docker/Dockerfile --push .
error: invalid tag "ghcr.io/my-org/my-repo:***.***": invalid reference format
Error: buildx failed with: error: invalid tag "ghcr.io/my-org/my-repo:***.***": invalid reference format
You need to increment the version in a bash command. The $(( ... )) arithmetic in the with: block is never evaluated by a shell; the action receives it as a literal string, so the next run builds an invalid tag. Do it like this:
- name: Autoincrement a new patch version
  run: |
    echo "NEW_PATCH_VERSION=$((${{ env.PATCH_VERSION }}+1))" >> $GITHUB_ENV
- name: Update patch version
  uses: hmanzur/actions-set-secret@v2.0.0
  with:
    name: 'PATCH_VERSION'
    value: ${{ env.NEW_PATCH_VERSION }}
    repository: ${{ github.repository }}
    token: ${{ secrets.REPO_ACCESS_TOKEN }}
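For this to work, the current value has to be mapped from the secret into the job environment somewhere earlier, along these lines (assuming the secret is named PATCH_VERSION, matching the answer rather than the MINOR secret from the question):

  env:
    PATCH_VERSION: ${{ secrets.PATCH_VERSION }}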
I am trying to optimise my CI/CD deployment workflow for Android and iOS with caching, but I have a problem with the cache paths. Without caching the workflow works well, but with caching the fastlane action doesn't find flutter or the pods, and I get errors like:

error: /Users/runner/work/xxx/xxx/ios/Flutter/Release.xcconfig:1: could not find included file 'Pods/Target Support Files/Pods-Runner/Pods-Runner.release.xcconfig' in search paths (in target 'Runner' from project 'Runner')
name: Deploy staging

on:
  workflow_dispatch:
    inputs:
      lane:
        description: "Staging lane to use: alpha or beta"
        required: true
        default: "alpha"

jobs:
  deploy-to-ios:
    runs-on: macos-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Flutter Cache
        id: cache-flutter
        uses: actions/cache@v2
        with:
          path: /Users/runner/hostedtoolcache/flutter
          key: ${{ runner.os }}-flutter
          restore-keys: |
            ${{ runner.os }}-flutter-
      - name: Setup Flutter
        uses: subosito/flutter-action@v1
        if: steps.cache-flutter.outputs.cache-hit != 'true'
        with:
          channel: "stable"
      - name: Setup Pods Cache
        id: cache-pods
        uses: actions/cache@v2
        with:
          path: Pods
          key: ${{ runner.os }}-pods-${{ hashFiles('ios/Podfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pods-
      - name: Setup Pods
        if: steps.cache-pods.outputs.cache-hit != 'true'
        run: |
          cd ios/
          flutter pub get
          pod install
      # Setup Ruby, Bundler, and Gemfile dependencies
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: "2.7.4"
          bundler-cache: true
          working-directory: ios
      - name: Setup Fastlane Cache
        id: cache-fastlane
        uses: actions/cache@v2
        with:
          path: ./vendor/bundle
          key: ${{ runner.os }}-fastlane-${{ hashFiles('ios/Gemfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-fastlane-
      - name: Setup Fastlane
        if: steps.cache-fastlane.outputs.cache-hit != 'true'
        run: gem install fastlane
      - name: Build and deploy with Fastlane 🚀
        run: bundle exec fastlane ${{ github.event.inputs.lane || 'beta' }}
        env:
          MATCH_GIT_BASIC_AUTHORIZATION: ${{ secrets.MATCH_GIT_BASIC_AUTHORIZATION }}
          MATCH_PASSWORD: ${{ secrets.MATCH_PASSWORD }}
          APP_STORE_CONNECT_API_KEY_KEY_ID: ${{ secrets.APP_STORE_CONNECT_API_KEY_KEY_ID }}
          APP_STORE_CONNECT_API_KEY_ISSUER_ID: ${{ secrets.APP_STORE_CONNECT_API_KEY_ISSUER_ID }}
          APP_STORE_CONNECT_API_KEY_KEY: ${{ secrets.APP_STORE_CONNECT_API_KEY_KEY }}
        working-directory: ios
Any idea how to find the paths fastlane uses for flutter and the pods, and how to cache the files there so they are found?
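One likely culprit, assuming a standard Flutter project layout: pod install is run inside ios/, so CocoaPods writes to ios/Pods, while the cache step above saves and restores a Pods folder at the repository root. Pointing the cache at the actual location would look like this:

  - name: Setup Pods Cache
    id: cache-pods
    uses: actions/cache@v2
    with:
      path: ios/Pods                 # pod install runs in ios/, so the Pods dir lives there
      key: ${{ runner.os }}-pods-${{ hashFiles('ios/Podfile.lock') }}
      restore-keys: |
        ${{ runner.os }}-pods-

It may also be safer to run flutter pub get unconditionally rather than only on a cache miss, since it regenerates files (such as the Flutter xcconfig includes) that the Xcode build expects.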