I have written a GitHub Actions workflow YAML file by following this guide. The workflow file is shown below:
name: Deploy to Staging Amazon ECS

on:
  push:
    branches:
      - staging

env:
  ECR_REPOSITORY: api-staging-jruby/api
  ECS_CLUSTER: api_staging
  J_RUBY_ECS_SERVICE: web-staging
  J_RUBY_ECS_TASK_DEFINITION: infrastructure/staging/web-jruby-task-definition.json
  J_RUBY_CONTAINER_NAME: api-staging-jruby
  ANALYTICS_ECS_SERVICE: analytics-staging
  ANALYTICS_ECS_TASK_DEFINITION: infrastructure/staging/analytics-task-definition.json
  ANALYTICS_CONTAINER_NAME: analytics-staging
  WORKER_ECS_SERVICE: worker-staging
  WORKER_ECS_TASK_DEFINITION: infrastructure/staging/worker-task-definition.json
  WORKER_CONTAINER_NAME: sidekiq-staging
  CONSOLE_ECS_SERVICE: console-staging
  CONSOLE_ECS_TASK_DEFINITION: infrastructure/staging/console-task-definition.json
  CONSOLE_CONTAINER_NAME: api-console-staging

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@13d241b293754004c80624b5567555c4a39ffbe3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@aaf69d68aa3fb14c1d5a6be9ac61fe15b48453a2

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and push it to ECR
          # so that it can be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Fill in the new image ID in the Amazon ECS task definition (JRuby)
        id: task-def-jruby
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.J_RUBY_ECS_TASK_DEFINITION }}
          container-name: ${{ env.J_RUBY_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition (JRuby)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-jruby.outputs.task-definition }}
          service: ${{ env.J_RUBY_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true

      - name: Fill in the new image ID in the Amazon ECS task definition (Analytics)
        id: task-def-analytics
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.ANALYTICS_ECS_TASK_DEFINITION }}
          container-name: ${{ env.ANALYTICS_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition (Analytics)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-analytics.outputs.task-definition }}
          service: ${{ env.ANALYTICS_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true

      - name: Fill in the new image ID in the Amazon ECS task definition (Worker)
        id: task-def-worker
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.WORKER_ECS_TASK_DEFINITION }}
          container-name: ${{ env.WORKER_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition (Worker)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-worker.outputs.task-definition }}
          service: ${{ env.WORKER_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true

      - name: Fill in the new image ID in the Amazon ECS task definition (Console)
        id: task-def-console
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.2
        with:
          task-definition: ${{ env.CONSOLE_ECS_TASK_DEFINITION }}
          container-name: ${{ env.CONSOLE_CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition (Console)
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
        with:
          task-definition: ${{ steps.task-def-console.outputs.task-definition }}
          service: ${{ env.CONSOLE_ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
The workflow is failing at the step Deploy Amazon ECS task definition (JRuby) and I am unable to debug the cause of the issue. I have also confirmed that the image is uploaded to ECR. I turned on debug logging to check the logs. Here is the stack trace:
##[debug]Evaluating condition for step: 'Deploy Amazon ECS task definition (JRuby)'
##[debug]Evaluating: success()
##[debug]Evaluating success:
##[debug]=> true
##[debug]Result: true
##[debug]Starting: Deploy Amazon ECS task definition (JRuby)
##[debug]Loading inputs
##[debug]Evaluating: steps.task-def-jruby.outputs.task-definition
##[debug]Evaluating Index:
##[debug]..Evaluating Index:
##[debug]....Evaluating Index:
##[debug]......Evaluating steps:
##[debug]......=> Object
##[debug]......Evaluating String:
##[debug]......=> 'task-def-jruby'
##[debug]....=> Object
##[debug]....Evaluating String:
##[debug]....=> 'outputs'
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'task-definition'
##[debug]=> '/home/runner/work/_temp/task-definition--16224-xjXb92vYNt3B-.json'
##[debug]Result: '/home/runner/work/_temp/task-definition--16224-xjXb92vYNt3B-.json'
##[debug]Evaluating: env.J_RUBY_ECS_SERVICE
##[debug]Evaluating Index:
##[debug]..Evaluating env:
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'J_RUBY_ECS_SERVICE'
##[debug]=> 'web-staging'
##[debug]Result: 'web-staging'
##[debug]Evaluating: env.ECS_CLUSTER
##[debug]Evaluating Index:
##[debug]..Evaluating env:
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'ECS_CLUSTER'
##[debug]=> 'api_staging'
##[debug]Result: 'api_staging'
##[debug]Loading env
Run aws-actions/amazon-ecs-deploy-task-definition@v1.4.10
##[debug]Registering the task definition
::set-output name=task-definition-arn::arn:aws:ecs:***:***:task-definition/web-staging-jruby:91
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
##[debug]='arn:aws:ecs:***:***:task-definition/web-staging-jruby:91'
##[debug]Updating the service
Error: The container api-staging does not exist in the task definition.
##[debug]InvalidParameterException: The container api-staging does not exist in the task definition.
##[debug] at Request.extractError (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:19497:27)
##[debug] at Request.callListeners (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:22778:20)
##[debug] at Request.emit (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:22750:10)
##[debug] at Request.emit (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:21384:14)
##[debug] at Request.transition (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:20720:10)
##[debug] at AcceptorStateMachine.runTo (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:27746:12)
##[debug] at /home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:27758:10
##[debug] at Request.<anonymous> (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:20736:9)
##[debug] at Request.<anonymous> (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:21386:12)
##[debug] at Request.callListeners (/home/runner/work/_actions/aws-actions/amazon-ecs-deploy-task-definition/v1.4.10/dist/index.js:22788:18)
##[debug]Node Action run completed with exit code 1
##[debug]Finishing: Deploy Amazon ECS task definition (JRuby)
As the stack trace shows, api-staging is mentioned nowhere in the workflow YAML file, which is why I am unable to debug this. We already deploy manually using a shell script, so we are reusing the same cluster and services. There is a chance api-staging is coming from AWS, but I am not 100% sure.
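Since the service web-staging predates this workflow, a plausible source for the stray name is the service's own load-balancer configuration in AWS: ECS raises InvalidParameterException when the container name registered with the service's load balancer is missing from the new task definition. One way to check (a hypothetical debugging step, assuming the AWS CLI is available with the same credentials; it is not part of the original workflow) is to describe the service and inspect which container name it maps to the load balancer:

```yaml
- name: Inspect service load balancer mapping (debugging)
  run: |
    aws ecs describe-services \
      --cluster "$ECS_CLUSTER" \
      --services "$J_RUBY_ECS_SERVICE" \
      --query 'services[0].loadBalancers'
```

If this prints a containerName of api-staging, the service expects the load-balanced container to keep that name, and either the task definition's container must be renamed to match or the service must be recreated with the new name.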
Edit:
Content of web-jruby-task-definition.json:
{
  "cpu": "2048",
  "memory": "5120",
  "networkMode": "awsvpc",
  "executionRoleArn": "arn:aws:iam::<account_id>:role/ecsTaskExecutionRole",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "containerDefinitions": [
    {
      "name": "api-staging-jruby",
      "image": "<account_id>.dkr.ecr.<region>.amazonaws.com/api-staging/api:latest",
      "dockerLabels": {
        "com.datadoghq.ad.instances": "[{\"host\": \"%%host%%\", \"port\": 8000}]",
        "com.datadoghq.ad.check_names": "[\"api-staging\"]",
        "com.datadoghq.ad.init_configs": "[{}]"
      },
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "api-staging",
          "awslogs-region": "<region>",
          "awslogs-stream-prefix": "jruby-staging-api"
        }
      },
      "secrets": [
        {
          "name": "ELASTIC_SEARCH_URL",
          "valueFrom": "arn:aws:secretsmanager:<region>:<account_id>:secret:staging/ELASTIC_SEARCH_URL-<id>"
        },
        ...
      ]
    },
    {
      "name": "datadog-agent",
      "image": "datadog/agent:latest",
      "essential": true,
      "secrets": [
        {
          "name": "DD_API_KEY",
          "valueFrom": "arn:aws:secretsmanager:<region>:<account_id>:secret:staging/environment-name-<id>"
        },
        ...
      ]
    }
  ],
  "family": "web-staging-jruby"
}
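Mismatches like this can be caught before the deploy step ever runs by asserting that the container name the workflow passes to amazon-ecs-render-task-definition actually appears in the task-definition JSON. A minimal sketch (the file path and the inline JSON below are stand-ins for the real task definition):

```shell
# Write a stand-in task definition; in the workflow you would point at
# infrastructure/staging/web-jruby-task-definition.json instead.
cat > /tmp/task-def-check.json <<'EOF'
{"containerDefinitions": [{"name": "api-staging-jruby"}, {"name": "datadog-agent"}]}
EOF

CONTAINER_NAME=api-staging-jruby
# Fail fast if the expected container name is absent from the JSON.
if grep -q "\"name\": \"${CONTAINER_NAME}\"" /tmp/task-def-check.json; then
  echo "container ${CONTAINER_NAME} found"
else
  echo "container ${CONTAINER_NAME} missing" >&2
  exit 1
fi
```

Run as an early workflow step, this turns a late InvalidParameterException into an immediate, readable failure.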
Related
I'm trying to set up scheduled container rebuilds of my latest release (git tag).
I'm already building containers on the main branch and on version tags, but I'd like to add a scheduled rebuild of the latest version tag to pick up base-image security updates. I can't figure out how to run scheduled actions against only the latest tag.
Suggestions welcome. My example repository is github.com/ruckc/container-openldap. I reuse this same workflow frequently and am just trying to improve it to handle base-image updates.
on:
  push:
    branches: ['main']
    tags:
      - 'v*'

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.actor }}/openldap

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Set up QEMU
        id: qemu
        uses: docker/setup-qemu-action@v2
        with:
          platforms: arm64,amd64
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: linux/arm64,linux/amd64
          cache-from: type=gha
          cache-to: type=gha,mode=max
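One pattern that may get close to this, sketched as an assumption rather than a tested workflow: add a schedule trigger, fetch the full history so tags are present, and reset the working tree to the most recent version tag before building. The cron value is illustrative:

```yaml
on:
  schedule:
    - cron: '0 4 * * 1'  # weekly rebuild; adjust as needed

# ...inside the build-and-push job:
  - name: Checkout repository with tags
    uses: actions/checkout@v2
    with:
      fetch-depth: 0  # full history so git describe can see tags
  - name: Switch to the latest version tag on scheduled runs
    if: github.event_name == 'schedule'
    run: git checkout "$(git describe --tags --abbrev=0)"
```

Note that a schedule event carries no tag ref, so docker/metadata-action would not emit semver tags for these runs; the tag names would need to be derived from `git describe` output (for example via an extra step that writes them to `$GITHUB_OUTPUT`).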
I'm using the workflow below (found in the GitHub documentation) to build and publish a Docker image to the GitHub Container Registry.
name: Create and publish a Docker image

on:
  push:
    branches: ['release']
  pull_request:
    branches: ['release']

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
This works, and I now see a public Docker image under "Packages" on the GitHub repo. When I click on the image, I am directed to a GitHub page with more information about the image (official docs here):
"Install from the command line:"
docker pull ghcr.io/OWNER/IMAGE_NAME:pr-75
And its Digest sha: sha256:04ea7757e34c4fae527bbe6fb56eb984f54543f2313775572f0817d696ecf48a
I want to add a new job to the same workflow that pulls the image to a virtual machine over SSH:

deploy:
  needs: build-and-push-image
  runs-on: ubuntu-latest
  steps:
    - name: Deploy to Digital Ocean droplet via SSH action
      uses: appleboy/ssh-action@v0.1.4
      with:
        host: ${{ secrets.DO_HOST }}
        username: root
        key: ${{ secrets.DO_PRIVATE_SSHKEY }}
        port: 22
        script: |
          docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
This fails with:
err: invalid reference format: repository name must be lowercase (lowercasing it is not enough, read on)
Of course I cannot hard-code docker pull ghcr.io/OWNER/IMAGE_NAME:pr-75 or the digest SHA, because each new branch gets a new PR number, so the pr-75 tag will change.
How can I deploy the image that was just published? It seems I can use either the tag value or the SHA, but how can I retrieve those values at run time?
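On the lowercase half of the error: ghcr.io rejects any uppercase characters in the repository path, while values like github.repository preserve the owner's casing. Bash (4+) parameter expansion can normalize it; the value below is a placeholder:

```shell
# ',,' lowercases the entire value (bash 4+ parameter expansion).
IMAGE_NAME="OWNER/IMAGE_NAME"
echo "image=${IMAGE_NAME,,}"
# → image=owner/image_name
```

In a workflow step this is typically written to `$GITHUB_ENV` so later steps can reference the lowercased name.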
There are two jobs in the above workflow:
"build-and-push-image"
"deploy"
The first one uses the docker/metadata-action to retrieve the tag name ghcr.io/OWNER/IMAGE_NAME:pr-75 which is used in the next step to name the image when docker/build-push-action is used.
I have simply used the docker/metadata-action again in the second job:
deploy:
  needs: build-and-push-image
  runs-on: ubuntu-latest
  steps:
    - name: Extract metadata (tags, labels) for Docker
      id: meta
      uses: docker/metadata-action@69f6fc9d46f2f8bf0d5491e4aabe0bb8c6a4678a
      with:
        images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
    - name: Deploy to Digital Ocean droplet via SSH action
      uses: appleboy/ssh-action@v0.1.4
      with:
        host: ${{ secrets.DO_HOST }}
        username: root
        key: ${{ secrets.DO_PRIVATE_SSHKEY }}
        port: 22
        script: |
          docker pull ${{ steps.meta.outputs.tags }}
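An alternative that avoids re-running metadata-action, sketched under the assumption that a single tag is enough: expose the tag as a job output from build-and-push-image and read it in deploy through the needs context:

```yaml
build-and-push-image:
  # ...existing steps unchanged...
  outputs:
    image-tag: ${{ steps.meta.outputs.tags }}

deploy:
  needs: build-and-push-image
  runs-on: ubuntu-latest
  steps:
    - name: Deploy to Digital Ocean droplet via SSH action
      uses: appleboy/ssh-action@v0.1.4
      with:
        host: ${{ secrets.DO_HOST }}
        username: root
        key: ${{ secrets.DO_PRIVATE_SSHKEY }}
        port: 22
        script: |
          docker pull ${{ needs.build-and-push-image.outputs.image-tag }}
```

One caveat: metadata-action's tags output can be multi-line when several tag rules match, so for this sketch the output should be reduced to one tag (or the script should loop over the lines).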
I am trying to stop my GitHub CI from failing completely when the build of a multi-arch Docker image succeeds for at least one architecture, so that the successful builds for those architectures are still pushed to Docker Hub. What I do so far:
name: 'build images'

on:
  push:
    branches:
      - master
    tags:
      - '*'
  schedule:
    - cron: '0 4 1 * *'

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Prepare
        id: prep
        run: |
          DOCKER_IMAGE=${{ secrets.DOCKER_USERNAME }}/${GITHUB_REPOSITORY#*/}
          VERSION=latest
          # If this is a git tag, use the tag name as a docker tag
          if [[ $GITHUB_REF == refs/tags/* ]]; then
            VERSION="${GITHUB_REF#refs/tags/v}"
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          # If the VERSION looks like a version number, assume that
          # this is the most recent version of the image and also
          # tag it 'latest'.
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            TAGS="$TAGS,${DOCKER_IMAGE}:latest"
          fi
          # Set output parameters.
          echo ::set-output name=tags::${TAGS}
          echo ::set-output name=docker_image::${DOCKER_IMAGE}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
        with:
          platforms: all
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Inspect builder
        run: |
          echo "Name: ${{ steps.buildx.outputs.name }}"
          echo "Endpoint: ${{ steps.buildx.outputs.endpoint }}"
          echo "Status: ${{ steps.buildx.outputs.status }}"
          echo "Flags: ${{ steps.buildx.outputs.flags }}"
          echo "Platforms: ${{ steps.buildx.outputs.platforms }}"
      - name: Login to DockerHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build
        uses: docker/build-push-action@v2
        with:
          builder: ${{ steps.buildx.outputs.name }}
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          push: true
          tags: ${{ steps.prep.outputs.tags }}
      - name: Sync
        uses: ms-jpq/sync-dockerhub-readme@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: xx/yy
          readme: "./README.md"
What I also did: create this CI for each architecture individually, each with its own architecture tag, but that way I do not have a "multi-arch" tag.
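One approach worth trying, sketched as an assumption rather than a verified workflow (`user/image` is a placeholder): build each platform in its own matrix job with fail-fast disabled and push per-arch tags, then assemble whatever succeeded into a single multi-arch tag with `docker buildx imagetools create`:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false        # one failing arch no longer cancels the others
      matrix:
        include:
          - { platform: linux/amd64,  suffix: amd64 }
          - { platform: linux/arm64,  suffix: arm64 }
          - { platform: linux/arm/v7, suffix: armv7 }
    steps:
      # ...checkout, QEMU, buildx, and login steps as before...
      - uses: docker/build-push-action@v2
        with:
          context: .
          platforms: ${{ matrix.platform }}
          push: true
          tags: user/image:latest-${{ matrix.suffix }}

  merge:
    needs: build
    if: always()              # run even when some matrix jobs failed
    runs-on: ubuntu-latest
    steps:
      # ...buildx setup and login as before...
      - name: Assemble multi-arch tag from the per-arch images that exist
        run: |
          tags=""
          for s in amd64 arm64 armv7; do
            if docker buildx imagetools inspect "user/image:latest-$s" >/dev/null 2>&1; then
              tags="$tags user/image:latest-$s"
            fi
          done
          docker buildx imagetools create -t user/image:latest $tags
```

The trade-off is that the multi-arch tag silently covers fewer platforms on partial failures, so the merge step is a good place to emit a warning listing which architectures were skipped.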
I would like to apply semantic versioning to my Docker images, which are built and pushed to the GitHub Container Registry by a GitHub Action.
I found a satisfying solution here: https://stackoverflow.com/a/69059228/12877180
Following that solution, I reproduced the YAML below.
name: Docker CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io

jobs:
  build-push:
    # needs: build-test
    name: Build and push Docker image to GitHub Container registry
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v2
      - name: Login to GitHub Container registry
        uses: docker/login-action@v1
        env:
          USERNAME: ${{ github.actor }}
          PASSWORD: ${{ secrets.GITHUB_TOKEN }}
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ env.USERNAME }}
          password: ${{ env.PASSWORD }}
      - name: Get lowercase repository name
        run: |
          echo "IMAGE=${REPOSITORY,,}" >> ${GITHUB_ENV}
        env:
          REPOSITORY: ${{ env.REGISTRY }}/${{ github.repository }}
      - name: Build and export the image to Docker
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./docker/Dockerfile
          target: final
          push: true
          tags: |
            ${{ env.IMAGE }}:${{ secrets.MAJOR }}.${{ secrets.MINOR }}
          build-args: |
            ENVIRONMENT=production
      - name: Update Patch version
        uses: hmanzur/actions-set-secret@v2.0.0
        with:
          name: 'MINOR'
          value: $((${{ secrets.MINOR }} + 1))
          repository: ${{ github.repository }}
          token: ${{ secrets.GH_PAT }}
Unfortunately, this does not work.
The initial value of the MINOR secret is 0. The very first time the build-push job runs, the Docker image is pushed to GHCR with the expected ghcr.io/my-org/my-repo:0.0 tag.
The build-push job is then supposed to increment the MINOR secret by 1.
When the build-push job runs again on a new event, I get an error while building the Docker image with the incremented tag.
/usr/bin/docker buildx build --build-arg ENVIRONMENT=production --tag ghcr.io/my-org/my-repo:***.*** --target final --iidfile /tmp/docker-build-push-HgjJR7/iidfile --metadata-file /tmp/docker-build-push-HgjJR7/metadata-file --file ./docker/Dockerfile --push .
error: invalid tag "ghcr.io/my-org/my-repo:***.***": invalid reference format
Error: buildx failed with: error: invalid tag "ghcr.io/my-org/my-repo:***.***": invalid reference format
You need to increment the version in a bash command, like this:

- name: Autoincrement a new patch version
  run: |
    echo "NEW_PATCH_VERSION=$((${{ env.PATCH_VERSION }}+1))" >> $GITHUB_ENV
- name: Update patch version
  uses: hmanzur/actions-set-secret@v2.0.0
  with:
    name: 'PATCH_VERSION'
    value: ${{ env.NEW_PATCH_VERSION }}
    repository: ${{ github.repository }}
    token: ${{ secrets.REPO_ACCESS_TOKEN }}
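The key difference from the failing workflow is that the `$(( ))` arithmetic must be evaluated by the shell in a `run:` step before the new value is written back as a secret; inside an action's `value:` input it is passed as a literal string. The increment itself is plain shell arithmetic, runnable locally (the starting value is illustrative):

```shell
# Stand-in for the PATCH_VERSION value; $(( )) performs the increment.
PATCH_VERSION=4
echo "NEW_PATCH_VERSION=$((PATCH_VERSION + 1))"
# → NEW_PATCH_VERSION=5
```

In the workflow, redirecting that echo to `$GITHUB_ENV` makes the incremented value available to the next step as `env.NEW_PATCH_VERSION`.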
Summary:
GitHub Actions allows using Docker containers to run jobs, but it doesn't seem to allow providing a dynamic value for the container image (for example, via environment variables).
This works (not the desired solution):

jobs:
  pytest-test:
    container:
      image: ghcr.io/ashrafgt/test:latest
    ...

This does not work (the desired solution):

jobs:
  pytest-test:
    container:
      # env variables defined at the start of the workflow
      image: ${{ env.REGISTRY_NAME }}/test:${{ env.IMAGE_TAG }}
    ...
Giving this error:
Invalid workflow file : .github/workflows/workflow.yaml
The workflow is not valid. Unrecognized named-value: 'env'. Located at position 1 within expression: env.REGISTRY_NAME
Are there any ways to do this besides doing a run: docker run ...?
Full Example:
In this example, I try to build and push a Docker image (tagged with the current commit SHA), then use the same image to run unit tests:
name: Main CI Pipeline

on: [push]

env:
  REGISTRY_NAME: ghcr.io/${{ github.repository_owner }}
  REGISTRY_USERNAME: ${{ github.actor }}
  REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
  IMAGE_TAG: ${{ github.sha }}

jobs:
  docker-build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-buildx-action@v1
      - uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY_NAME }}
          username: ${{ env.REGISTRY_USERNAME }}
          password: ${{ env.REGISTRY_PASSWORD }}
      - uses: docker/build-push-action@v2
        with:
          tags: ${{ env.REGISTRY_NAME }}/test:${{ env.IMAGE_TAG }}
          push: true

  pytest-test:
    needs: docker-build
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: read
    container:
      image: ${{ env.REGISTRY_NAME }}/test:${{ env.IMAGE_TAG }}
    steps:
      - uses: actions/checkout@v2
      - run: pytest
Please find the full repository here.
The full error message is:
Invalid workflow file : .github/workflows/workflow.yaml#L38
The workflow is not valid. .github/workflows/workflow.yaml (Line: 38, Col: 14): Unrecognized named-value: 'env'. Located at position 1 within expression: env.REGISTRY_NAME
Please find the Github Actions run here.
Fix Attempts:
Using only container instead of container.image:

jobs:
  pytest-test:
    container: ${{ env.REGISTRY_NAME }}/test:${{ env.IMAGE_TAG }}
    ...

Using the docker:// syntax for a single step:

jobs:
  pytest-test:
    steps:
      - uses: docker://${{ env.REGISTRY_NAME }}/test:${{ env.IMAGE_TAG }}
        entrypoint: pytest
    ...
Both fix attempts failed with the same error as the original syntax.
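One workaround that may help, based on the documented expression contexts: env is not available in jobs.<job_id>.container.image, but needs is. So the image reference can be computed inside docker-build (where env works), exported as a job output, and consumed by pytest-test. A sketch against the example workflow above:

```yaml
jobs:
  docker-build:
    runs-on: ubuntu-latest
    outputs:
      image: ${{ steps.ref.outputs.image }}
    steps:
      # ...checkout, login, and build-push steps as before...
      - id: ref
        run: echo "image=${REGISTRY_NAME}/test:${IMAGE_TAG}" >> "$GITHUB_OUTPUT"

  pytest-test:
    needs: docker-build
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: read
    container:
      image: ${{ needs.docker-build.outputs.image }}
    steps:
      - uses: actions/checkout@v2
      - run: pytest
```

This keeps the registry name and tag defined once at the top of the workflow while still giving the container a value that is resolved at run time rather than at workflow-parse time.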