Fastlane GitHub Action archive fails at the Run Script phase in Flutter iOS

I'm using fastlane with GitHub Actions to deploy my Flutter iOS app to TestFlight. I have all the keys and information matched up. However, the build keeps failing at the Run Script phase:
`PhaseScriptExecution Run\ Script /Users/runner/Library/Developer/Xcode/DerivedData/Runner-avobjsyvghznfxgtyjuolmxrrfls/Build/Intermediates.noindex/ArchiveIntermediates/prod/IntermediateBuildFilesPath/Runner.build/Release-prod-iphoneos/Runner.build/Script-9740EEB61CF901F6004384FC.sh (in target 'Runner' from project 'Runner')`
The Fastfile runs fine on my local machine (with the same keys set as environment variables) but not on GitHub Actions.
The script causing the issue is:
`/bin/sh "${FLUTTER_ROOT}/packages/flutter_tools/bin/xcode_backend.sh" build` in the **Run Script** build phase of Xcode.
I tried adding `{}` around `FLUTTER_ROOT`, but nothing changed.
I also added the configuration to my project's Runner Info, but the result is the same.
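Since the failing script is Flutter's own `xcode_backend.sh`, running the Flutter build as a separate step before the fastlane step usually surfaces the underlying error directly in the job log. A minimal sketch, assuming the `prod` scheme used by gym corresponds to a `prod` Flutter flavor:

```yaml
# Hypothetical diagnostic step, placed before the fastlane step below;
# the --flavor value is an assumption, not from the original post.
- name: Build iOS with Flutter
  run: flutter build ios --release --no-codesign --flavor prod
```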
My GitHub Actions workflow YAML is as follows:
```yaml
name: Flutter CI
on:
  push:
    branches: xxx
jobs:
  deploy:
    name: Deploying to Testflight
    runs-on: macOS-latest
    steps:
      - name: Setup XCode on Machine
        uses: maxim-lobanov/setup-xcode@v1
        with:
          xcode-version: '13.4'
      - name: Setup Flutter
        uses: subosito/flutter-action@v2.8.0
        with:
          channel: 'stable'
          flutter-version: '3.3.9'
      - name: Give permission to private repo
        uses: shaunco/ssh-agent@git-repo-mapping
        with:
          ssh-private-key: |
            ${{ secrets.SECRET_REPO_DEPLOY_KEY }}
          repo-mappings: |
            github.com/anfin21/package_library
      - name: Checkout repository
        uses: actions/checkout@v1
      - name: Install packages
        run: flutter pub get
      - name: Deploy iOS Beta to TestFlight via Fastlane
        uses: maierj/fastlane-action@v1.4.0
        with:
          lane: closed_beta
          subdirectory: ios
        env:
          APP_STORE_CONNECT_TEAM_ID: '${{ secrets.ITC_TEAM_ID }}'
          DEVELOPER_APP_ID: '${{ secrets.APPLICATON_ID }}'
          DEVELOPER_APP_IDENTIFIER: '${{ secrets.BUNDLE_IDENTIFIER }}'
          DEVELOPER_PORTAL_TEAM_ID: '${{ secrets.DEVELOPER_PORTAL_TEAM_ID }}'
          FASTLANE_APPLE_ID: '${{ secrets.FASTLANE_APPLE_EMAIL_ID }}'
          FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD: '${{ secrets.FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD }}'
          MATCH_PASSWORD: '${{ secrets.MATCH_PASSWORD }}'
          GIT_AUTHORIZATION: '${{ secrets.GIT_AUTHORIZATION }}'
          PROVISIONING_PROFILE_SPECIFIER: '${{ secrets.PROVISIONING_PROFILE_SPECIFIER }}'
          TEMP_KEYCHAIN_PASSWORD: '${{ secrets.TEMP_KEYCHAIN_PASSWORD }}'
          TEMP_KEYCHAIN_USER: '${{ secrets.TEMP_KEYCHAIN_USER }}'
          APPLE_KEY_ID: '${{ secrets.APPLE_KEY_ID }}'
          APPLE_ISSUER_ID: '${{ secrets.APPLE_ISSUER_ID }}'
          APPLE_KEY_CONTENT: '${{ secrets.APPLE_KEY_CONTENT }}'
```
The Fastfile for my iOS app is as follows:
```ruby
default_platform(:ios)

DEVELOPER_APP_ID = ENV["DEVELOPER_APP_ID"]
DEVELOPER_APP_IDENTIFIER = ENV["DEVELOPER_APP_IDENTIFIER"]
PROVISIONING_PROFILE_SPECIFIER = ENV["PROVISIONING_PROFILE_SPECIFIER"]
TEMP_KEYCHAIN_USER = ENV["TEMP_KEYCHAIN_USER"]
TEMP_KEYCHAIN_PASSWORD = ENV["TEMP_KEYCHAIN_PASSWORD"]
APPLE_ISSUER_ID = ENV["APPLE_ISSUER_ID"]
APPLE_KEY_ID = ENV["APPLE_KEY_ID"]
APPLE_KEY_CONTENT = ENV["APPLE_KEY_CONTENT"]
GIT_AUTHORIZATION = ENV["GIT_AUTHORIZATION"]

def delete_temp_keychain(name)
  delete_keychain(
    name: name
  ) if File.exist? File.expand_path("~/Library/Keychains/#{name}-db")
end

def create_temp_keychain(name, password)
  create_keychain(
    name: name,
    password: password,
    unlock: false,
    timeout: 0
  )
end

def ensure_temp_keychain(name, password)
  delete_temp_keychain(name)
  create_temp_keychain(name, password)
end

platform :ios do
  lane :closed_beta do
    keychain_name = TEMP_KEYCHAIN_USER
    keychain_password = TEMP_KEYCHAIN_PASSWORD
    ensure_temp_keychain(keychain_name, keychain_password)

    api_key = app_store_connect_api_key(
      key_id: APPLE_KEY_ID,
      issuer_id: APPLE_ISSUER_ID,
      key_content: APPLE_KEY_CONTENT,
      duration: 1200,
      in_house: false
    )

    cocoapods(
      clean_install: true,
    )

    match(
      type: 'appstore',
      app_identifier: "#{DEVELOPER_APP_IDENTIFIER}",
      git_basic_authorization: Base64.strict_encode64(GIT_AUTHORIZATION),
      readonly: false,
      keychain_name: keychain_name,
      keychain_password: keychain_password,
      api_key: api_key
    )

    gym(
      configuration: "Release",
      sdk: "iphoneos16.1",
      workspace: "Runner.xcworkspace",
      scheme: "prod",
      export_method: "app-store",
      export_options: {
        provisioningProfiles: {
          DEVELOPER_APP_ID => PROVISIONING_PROFILE_SPECIFIER
        }
      }
    )

    pilot(
      apple_id: "#{DEVELOPER_APP_ID}",
      app_identifier: "#{DEVELOPER_APP_IDENTIFIER}",
      skip_waiting_for_build_processing: true,
      skip_submission: true,
      distribute_external: false,
      notify_external_testers: false,
      ipa: "./Runner.ipa"
    )

    delete_temp_keychain(keychain_name)
  end
end
```
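One detail worth checking in the files above: the workflow pins Xcode 13.4, which ships the iOS 15.5 SDK, while `gym` requests `sdk: "iphoneos16.1"`, an SDK that only arrives with Xcode 14.1. A sketch of an aligned setup step (the version bump is a suggestion, not from the original post):

```yaml
- name: Setup XCode on Machine
  uses: maxim-lobanov/setup-xcode@v1
  with:
    xcode-version: '14.1'  # Xcode 14.1 bundles the iOS 16.1 SDK that gym requests
```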

Related

Deployment to AWS ECS using GitHub Actions is failing

I have written a GitHub Actions workflow YAML file by following this guide. The workflow file is below:
```yaml
name: Deploy to Staging Amazon ECS
on:
  push:
    branches:
      - staging
env:
  ECR_REPOSITORY: api-staging-jruby/api
  ECS_CLUSTER: api_staging
  J_RUBY_ECS_SERVICE: web-staging
  J_RUBY_ECS_TASK_DEFINITION: infrastructure/staging/web-jruby-task-definition.json
  J_RUBY_CONTAINER_NAME: api-staging-jruby
  ANALYTICS_ECS_SERVICE: analytics-staging
  ANALYTICS_ECS_TASK_DEFINITION: infrastructure/staging/analytics-task-definition.json
  ANALYTICS_CONTAINER_NAME: analytics-staging
  WORKER_ECS_SERVICE: worker-staging
  WORKER_ECS_TASK_DEFINITION: infrastructure/staging/worker-task-definition.json
  WORKER_CONTAINER_NAME: sidekiq-staging
  CONSOLE_ECS_SERVICE: console-staging
  CONSOLE_ECS_TASK_DEFINITION: infrastructure/staging/console-task-definition.json
  CONSOLE_CONTAINER_NAME: api-console-staging
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@13d241b293754004c80624b5567555c4a39ffbe3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@aaf69d68aa3fb14c1d5a6be9ac61fe15b48453a2
      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and
          # push it to ECR so that it can
          # be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT
      - name: Fill in the new image ID in the Amazon ECS task definition (JRuby)
        id: task-def-jruby
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.J_RUBY_ECS_TASK_DEFINITION }}
          container-name: ${{ env.J_RUBY_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}
      - name: Deploy Amazon ECS task definition (JRuby)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-jruby.outputs.task-definition }}
          service: ${{ env.J_RUBY_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
      - name: Fill in the new image ID in the Amazon ECS task definition (Analytics)
        id: task-def-analytics
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.ANALYTICS_ECS_TASK_DEFINITION }}
          container-name: ${{ env.ANALYTICS_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}
      - name: Deploy Amazon ECS task definition (Analytics)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-analytics.outputs.task-definition }}
          service: ${{ env.ANALYTICS_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
      - name: Fill in the new image ID in the Amazon ECS task definition (Worker)
        id: task-def-worker
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.WORKER_ECS_TASK_DEFINITION }}
          container-name: ${{ env.WORKER_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}
      - name: Deploy Amazon ECS task definition (Worker)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-worker.outputs.task-definition }}
          service: ${{ env.WORKER_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
      - name: Fill in the new image ID in the Amazon ECS task definition (Console)
        id: task-def-console
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.CONSOLE_ECS_TASK_DEFINITION }}
          container-name: ${{ env.CONSOLE_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}
      - name: Deploy Amazon ECS task definition (Console)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-console.outputs.task-definition }}
          service: ${{ env.CONSOLE_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
```
The workflow is failing at the step Deploy Amazon ECS task definition (JRuby), and I am unable to debug the cause of the issue.
I have also confirmed the image is uploaded to ECR. I turned on the debug logs to check them; here is the stack trace:
```
##[debug]Evaluating condition for step: 'Deploy Amazon ECS task definition (JRuby)'
##[debug]Evaluating: success()
##[debug]Evaluating success:
##[debug]=> true
##[debug]Result: true
##[debug]Starting: Deploy Amazon ECS task definition (JRuby)
##[debug]Loading inputs
##[debug]Evaluating: steps.task-def-jruby.outputs.task-definition
##[debug]Evaluating Index:
##[debug]..Evaluating Index:
##[debug]....Evaluating Index:
##[debug]......Evaluating steps:
##[debug]......=> Object
##[debug]......Evaluating String:
##[debug]......=> 'task-def-jruby'
##[debug]....=> Object
##[debug]....Evaluating String:
##[debug]....=> 'outputs'
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'task-definition'
##[debug]=> '/home/runner/work/_temp/task-definition--16224-xjXb92vYNt3B-.json'
##[debug]Result: '/home/runner/work/_temp/task-definition--16224-xjXb92vYNt3B-.json'
##[debug]Evaluating: env.J_RUBY_ECS_SERVICE
##[debug]Evaluating Index:
##[debug]..Evaluating env:
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'J_RUBY_ECS_SERVICE'
##[debug]=> 'web-staging'
##[debug]Result: 'web-staging'
##[debug]Evaluating: env.ECS_CLUSTER
##[debug]Evaluating Index:
##[debug]..Evaluating env:
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'ECS_CLUSTER'
##[debug]=> 'api_staging'
##[debug]Result: 'api_staging'
##[debug]Loading env
Run aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
##[debug]Registering the task definition
::set-output name=task-definition-arn::arn:aws:ecs:***:***:task-definition/web-staging-jruby:91
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
##[debug]='arn:aws:ecs:***:***:task-definition/web-staging-jruby:91'
##[debug]Updating the service
Error: The container api-staging does not exist in the task definition.
##[debug]InvalidParameterException: The container api-staging does not exist in the task definition.
##[debug]    at Request.extractError (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:19497:27)
##[debug]    at Request.callListeners (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:22778:20)
##[debug]    at Request.emit (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:22750:10)
##[debug]    at Request.emit (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:21384:14)
##[debug]    at Request.transition (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:20720:10)
##[debug]    at AcceptorStateMachine.runTo (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:27746:12)
##[debug]    at /home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:27758:10
##[debug]    at Request.<anonymous> (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:20736:9)
##[debug]    at Request.<anonymous> (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:21386:12)
##[debug]    at Request.callListeners (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:22788:18)
##[debug]Node Action run completed with exit code 1
##[debug]Finishing: Deploy Amazon ECS task definition (JRuby)
```
As you can see from the stack trace above, api-staging is mentioned nowhere in the workflow YAML file, which is why I am unable to debug this. We already deploy manually using a shell script, so we are reusing the same cluster and services. There is a chance api-staging is coming from AWS, but I am not 100% sure.
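One way to see where the stray name comes from (a hypothetical debug step, reusing the env names already defined in the workflow) is to print the rendered task definition and the task definition the live service currently points at:

```yaml
- name: Debug rendered task definition and live service
  run: |
    # What the render step actually produced:
    cat "${{ steps.task-def-jruby.outputs.task-definition }}"
    # Which task definition the running service references:
    aws ecs describe-services --cluster "$ECS_CLUSTER" \
      --services "$J_RUBY_ECS_SERVICE" \
      --query 'services[0].taskDefinition'
```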
Edit:
Content of web-jruby-task-definition.json:
```json
{
  "cpu": "2048",
  "memory": "5120",
  "networkMode": "awsvpc",
  "executionRoleArn": "arn:aws:iam::<account_id>:role/ecsTaskExecutionRole",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "containerDefinitions": [
    {
      "name": "api-staging-jruby",
      "image": "<account_id>.dkr.ecr.<region>.amazonaws.com/api-staging/api:latest",
      "dockerLabels": {
        "com.datadoghq.ad.instances": "[{\"host\": \"%%host%%\", \"port\": 8000}]",
        "com.datadoghq.ad.check_names": "[\"api-staging\"]",
        "com.datadoghq.ad.init_configs": "[{}]"
      },
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "api-staging",
          "awslogs-region": "<region>",
          "awslogs-stream-prefix": "jruby-staging-api"
        }
      },
      "secrets": [
        {
          "name": "ELASTIC_SEARCH_URL",
          "valueFrom": "arn:aws:secretsmanager:<region>:<account_id>:secret:staging/ELASTIC_SEARCH_URL-<id>"
        },
        ...
      ]
    },
    {
      "name": "datadog-agent",
      "image": "datadog/agent:latest",
      "essential": true,
      "secrets": [
        {
          "name": "DD_API_KEY",
          "valueFrom": "arn:aws:secretsmanager:<region>:<account_id>:secret:staging/environment-name-<id>"
        },
        ...
      ]
    }
  ],
  "family": "web-staging-jruby"
}
```

Run SwiftLint on pull requests with GitHub Actions

I'm running jobs on macos-11. I have integrated SwiftLint locally as well, and it is working fine. But when someone raises a PR, I need to run SwiftLint on GitHub Actions. How can I do that? Below is the current YAML file for the actions:
```yaml
name: Build & Test
on:
  # Run tests when PRs are created or updated
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
env:
  # Defines the Xcode version
  DEVELOPER_DIR: /Applications/Xcode_13.0.app/Contents/Developer
  FETCH_DEPTH: 0
  RUBY_VERSION: 2.7.1
defaults:
  run:
    shell: bash
jobs:
  test:
    name: Build & Test
    if: ${{ github.event.pull_request.draft == false }}
    runs-on: macos-11
    steps:
      - name: Checkout Project
        uses: actions/checkout@v2.3.4
        with:
          fetch-depth: ${{ env.FETCH_DEPTH }}
      - name: Restore Gem Cache
        uses: actions/cache@v2.1.3
        with:
          path: vendor/bundle
          key: ${{ runner.os }}-gem-${{ hashFiles('**/Gemfile.lock') }}
          restore-keys: ${{ runner.os }}-gem-
      - name: Restore Pod Cache
        uses: actions/cache@v2.1.3
        with:
          path: Pods
          key: ${{ runner.os }}-pods-${{ hashFiles('**/Podfile.lock') }}
          restore-keys: ${{ runner.os }}-pods-
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1.51.1
        with:
          bundler-cache: true
          ruby-version: ${{ env.RUBY_VERSION }}
```
SwiftLint works fine locally, but when I raise a pull request, no SwiftLint warnings appear.
I am using this step:
```yaml
- name: Lint
  run: |
    set -o pipefail
    swiftlint lint --strict --quiet | sed -E 's/^(.*):([0-9]+):([0-9]+): (warning|error|[^:]+): (.*)/::\4 title=Lint error,file=\1,line=\2,col=\3::\5\n\1:\2:\3/'
```
It parses SwiftLint warnings and errors into GitHub annotations, which are visible in the summary straight away.
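Note that the step above assumes the `swiftlint` binary is present on the runner. GitHub's macOS images ship it preinstalled, but if it is missing, a minimal install step (an assumption, via Homebrew) would be:

```yaml
# Hypothetical install step, only needed if swiftlint is not on the image.
- name: Install SwiftLint
  run: brew install swiftlint
```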

How to build multi-arch Docker images with GitHub CI without a general failure if a single architecture fails?

I am trying to stop my GitHub CI from failing completely when a multi-arch Docker image builds successfully for at least one architecture, so that the successful architectures are still pushed to Docker Hub. What I do so far:
```yaml
name: 'build images'
on:
  push:
    branches:
      - master
    tags:
      - '*'
  schedule:
    - cron: '0 4 1 * *'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Prepare
        id: prep
        run: |
          DOCKER_IMAGE=${{ secrets.DOCKER_USERNAME }}/${GITHUB_REPOSITORY#*/}
          VERSION=latest
          # If this is git tag, use the tag name as a docker tag
          if [[ $GITHUB_REF == refs/tags/* ]]; then
            VERSION="${GITHUB_REF#refs/tags/v}"
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          # If the VERSION looks like a version number, assume that
          # this is the most recent version of the image and also
          # tag it 'latest'.
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            TAGS="$TAGS,${DOCKER_IMAGE}:latest"
          fi
          # Set output parameters.
          echo ::set-output name=tags::${TAGS}
          echo ::set-output name=docker_image::${DOCKER_IMAGE}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
        with:
          platforms: all
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Inspect builder
        run: |
          echo "Name: ${{ steps.buildx.outputs.name }}"
          echo "Endpoint: ${{ steps.buildx.outputs.endpoint }}"
          echo "Status: ${{ steps.buildx.outputs.status }}"
          echo "Flags: ${{ steps.buildx.outputs.flags }}"
          echo "Platforms: ${{ steps.buildx.outputs.platforms }}"
      - name: Login to DockerHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build
        uses: docker/build-push-action@v2
        with:
          builder: ${{ steps.buildx.outputs.name }}
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          push: true
          tags: ${{ steps.prep.outputs.tags }}
      - name: Sync
        uses: ms-jpq/sync-dockerhub-readme@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: xx/yy
          readme: "./README.md"
```
What I also did: create this CI for each architecture individually with its own architecture tag, but that way I do not have a "multi-arch" tag.
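A sketch of one way to get the multi-arch tag back: push each architecture as its own tag from steps marked `continue-on-error: true`, then stitch whichever tags survived into one manifest. The `:amd64` / `:arm64` tags here are hypothetical names for those per-arch pushes:

```yaml
- name: Assemble multi-arch manifest from surviving per-arch tags
  if: always()
  run: |
    # imagetools create builds a manifest list from already-pushed images;
    # "|| true" keeps the job green if one source tag is absent.
    docker buildx imagetools create \
      --tag ${{ steps.prep.outputs.docker_image }}:latest \
      ${{ steps.prep.outputs.docker_image }}:amd64 \
      ${{ steps.prep.outputs.docker_image }}:arm64 || true
```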

How to implement semantic versioning in GitHub Actions workflow?

I would like to apply semantic versioning to my Docker images, which are built and pushed to the GitHub Container Registry by a GitHub Action.
I found a satisfying solution here: https://stackoverflow.com/a/69059228/12877180
Following that solution, I reproduced the YAML below.
```yaml
name: Docker CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
env:
  REGISTRY: ghcr.io
jobs:
  build-push:
    # needs: build-test
    name: Build and push Docker image to GitHub Container registry
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v2
      - name: Login to GitHub Container registry
        uses: docker/login-action@v1
        env:
          USERNAME: ${{ github.actor }}
          PASSWORD: ${{ secrets.GITHUB_TOKEN }}
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ env.USERNAME }}
          password: ${{ env.PASSWORD }}
      - name: Get lowercase repository name
        run: |
          echo "IMAGE=${REPOSITORY,,}">>${GITHUB_ENV}
        env:
          REPOSITORY: ${{ env.REGISTRY }}/${{ github.repository }}
      - name: Build and export the image to Docker
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./docker/Dockerfile
          target: final
          push: true
          tags: |
            ${{ env.IMAGE }}:${{ secrets.MAJOR }}.${{ secrets.MINOR }}
          build-args: |
            ENVIRONMENT=production
      - name: Update Patch version
        uses: hmanzur/actions-set-secret@v2.0.0
        with:
          name: 'MINOR'
          value: $((${{ secrets.MINOR }} + 1))
          repository: ${{ github.repository }}
          token: ${{ secrets.GH_PAT }}
```
Unfortunately, this does not work.
The initial value of the MINOR secret is 0. When the build-push job is executed for the very first time, the Docker image is pushed to GHCR with the ghcr.io/my-org/my-repo:0.0 tag, as expected.
The build-push job is then supposed to increment the MINOR secret by 1.
When the build-push job runs again on a new event, I get an error while building the Docker image with the incremented tag:
```
/usr/bin/docker buildx build --build-arg ENVIRONMENT=production --tag ghcr.io/my-org/my-repo:***.*** --target final --iidfile /tmp/docker-build-push-HgjJR7/iidfile --metadata-file /tmp/docker-build-push-HgjJR7/metadata-file --file ./docker/Dockerfile --push .
error: invalid tag "ghcr.io/my-org/my-repo:***.***": invalid reference format
Error: buildx failed with: error: invalid tag "ghcr.io/my-org/my-repo:***.***": invalid reference format
```
You need to increment the version in a bash command like this:
```yaml
- name: Autoincrement a new patch version
  run: |
    echo "NEW_PATCH_VERSION=$((${{ env.PATCH_VERSION }}+1))" >> $GITHUB_ENV
- name: Update patch version
  uses: hmanzur/actions-set-secret@v2.0.0
  with:
    name: 'PATCH_VERSION'
    value: ${{ env.NEW_PATCH_VERSION }}
    repository: ${{ github.repository }}
    token: ${{ secrets.REPO_ACCESS_TOKEN }}
```
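Note that `env.PATCH_VERSION` has to be seeded from the secret before it can be incremented; a minimal sketch, assuming the secret is also named `PATCH_VERSION`:

```yaml
# Assumed seeding step: copy the secret into the job env so the
# autoincrement step above has a numeric value to work with.
- name: Read current patch version
  run: echo "PATCH_VERSION=${{ secrets.PATCH_VERSION }}" >> $GITHUB_ENV
```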

GitHub workflow cache path with Fastlane and Flutter

I am trying to optimise my CI/CD deployment workflow for Android and iOS with caching, but I have a problem with the cache path. When not caching, the workflow works well; with caching, the fastlane action doesn't find Flutter or the pods, and I get errors like:
`error: /Users/runner/work/xxx/xxx/ios/Flutter/Release.xcconfig:1: could not find included file 'Pods/Target Support Files/Pods-Runner/Pods-Runner.release.xcconfig' in search paths (in target 'Runner' from project 'Runner')`
```yaml
name: Deploy staging
on:
  workflow_dispatch:
    inputs:
      lane:
        description: "Staging lane to use : alpha or beta"
        required: true
        default: "alpha"
jobs:
  deploy-to-ios:
    runs-on: macos-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Flutter Cache
        id: cache-flutter
        uses: actions/cache@v2
        with:
          path: /Users/runner/hostedtoolcache/flutter
          key: ${{ runner.os }}-flutter
          restore-keys: |
            ${{ runner.os }}-flutter-
      - name: Setup Flutter
        uses: subosito/flutter-action@v1
        if: steps.cache-flutter.outputs.cache-hit != 'true'
        with:
          channel: "stable"
      - name: Setup Pods Cache
        id: cache-pods
        uses: actions/cache@v2
        with:
          path: Pods
          key: ${{ runner.os }}-pods-${{ hashFiles('ios/Podfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pods-
      - name: Setup Pods
        if: steps.cache-pods.outputs.cache-hit != 'true'
        run: |
          cd ios/
          flutter pub get
          pod install
      # Setup Ruby, Bundler, and Gemfile dependencies
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: "2.7.4"
          bundler-cache: true
          working-directory: ios
      - name: Setup Fastlane Cache
        id: cache-fastlane
        uses: actions/cache@v2
        with:
          path: ./vendor/bundle
          key: ${{ runner.os }}-fastlane-${{ hashFiles('ios/Gemfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-fastlane-
      - name: Setup Fastlane
        if: steps.cache-fastlane.outputs.cache-hit != 'true'
        run: gem install fastlane
      - name: Build and deploy with Fastlane 🚀
        run: bundle exec fastlane ${{ github.event.inputs.lane || 'beta' }}
        env:
          MATCH_GIT_BASIC_AUTHORIZATION: ${{ secrets.MATCH_GIT_BASIC_AUTHORIZATION }}
          MATCH_PASSWORD: ${{ secrets.MATCH_PASSWORD }}
          APP_STORE_CONNECT_API_KEY_KEY_ID: ${{ secrets.APP_STORE_CONNECT_API_KEY_KEY_ID }}
          APP_STORE_CONNECT_API_KEY_ISSUER_ID: ${{ secrets.APP_STORE_CONNECT_API_KEY_ISSUER_ID }}
          APP_STORE_CONNECT_API_KEY_KEY: ${{ secrets.APP_STORE_CONNECT_API_KEY_KEY }}
        working-directory: ios
```
Any idea how to find the paths fastlane uses for Flutter and the pods, and cache the files there so they are found?
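One likely mismatch (an assumption, not confirmed in the post): `pod install` runs inside `ios/`, so the pods land in `ios/Pods`, while the cache step restores a top-level `Pods` directory that Xcode never sees. A sketch of the adjusted cache step:

```yaml
- name: Setup Pods Cache
  id: cache-pods
  uses: actions/cache@v2
  with:
    path: ios/Pods  # matches where `pod install` actually writes
    key: ${{ runner.os }}-pods-${{ hashFiles('ios/Podfile.lock') }}
    restore-keys: |
      ${{ runner.os }}-pods-
```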
