How to pull a Docker image from ECR in GitHub Actions

I'm trying to pull a Docker image from ECR and deploy it on an EC2 instance. However, it throws an error like this:
docker pull $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
======END======
err: invalid reference format
2022/11/03 15:31:54 Process exited with status 1
My .yml file is:
name: Docker Image CI

on:
  push:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.TF_USER_AWS_KEY }}
          aws-secret-access-key: ${{ secrets.TF_USER_AWS_SECRET }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: githubactions
          IMAGE_TAG: githubactions_image
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      - name: Docker pull & run from github
        uses: appleboy/ssh-action@master
        with:
          host: ec2-3-86-102-151.compute-1.amazonaws.com
          username: ec2-user
          key: ${{ secrets.ACTIONS_PRIVATE_KEY }}
          envs: GITHUB_SHA
          script: |
            docker pull $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
I've spent a lot of time on this and can't figure out what's wrong. Any ideas are really appreciated.

Your issue is with the environment variables.
You are confusing the GitHub runner with the EC2 server.
The env vars you define in the GitHub workflow YAML do not exist on the remote machine (EC2).
The command docker pull $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG fails because the remote machine (EC2) does not know about your GitHub env vars.
From the docs here you can see that there is an easy built-in way to pass them to the EC2 instance:
- name: pass environment
  uses: appleboy/ssh-action@master
+ env:
+   FOO: "BAR"
+   BAR: "FOO"
+   SHA: ${{ github.sha }}
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    key: ${{ secrets.KEY }}
    port: ${{ secrets.PORT }}
+   envs: FOO,BAR,SHA
    script: |
      echo "I am $FOO"
      echo "I am $BAR"
      echo "sha: $SHA"
Hope it helps, and good luck!

I would first add an ls and echo:
run: |
  pwd
  ls -arth
  echo "docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG ."
  docker ...
That way, I would check if I am in the right folder (with a Dockerfile in it), and if all variables are actually set.
If you see <aregistry>/<arepo>: (meaning no tag), the final ':' might be enough to trigger the error message; the sketch after this list reproduces that failure.
That, or:
- the name uses the wrong hyphen '-', as in this issue.
- the image name has an invalid character like a \
- the image name uses the wrong syntax
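To see how unset variables produce exactly this message, a small shell sketch (variable names mirror the question; illustrative only):

# Unset variables expand to empty strings, so the reference
# degenerates to "/:", which Docker rejects.
unset ECR_REGISTRY ECR_REPOSITORY IMAGE_TAG
docker pull "$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
# => invalid reference format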

Related

redhat-actions/buildah-build@v2 failure while performing build from Containerfile

I'm trying to set up a GitHub workflow for building an image and pushing it to the registry using the redhat-actions actions:
workflow.yaml
name: build-maven-runner
on:
  workflow_dispatch:
jobs:
  build-test-push:
    outputs:
      image-url: ${{ steps.push-to-artifactory.outputs.registry-path }}
      image-digest: ${{ steps.push-to-artifactory.outputs.digest }}
    name: build-job
    env:
      runner_memorylimit: 2Gi
      runner_cpulimit: 2
    runs-on: [ linux ]
    steps:
      - name: Clone
        uses: actions/checkout@v2
      - name: Pre-Login
        # podman-login: requires docker config repo auths
        # Error: TypeError: Cannot set property 'some.repo.com' of undefined
        run: |
          mkdir /home/runner/.docker/
          cat <<EOT >> /home/runner/.docker/config.json
          {
            "auths": {
              "some.repo.com": {}
            }
          }
          EOT
      - name: Login
        uses: redhat-actions/podman-login@v1
        with:
          registry: some.repo.com
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          auth_file_path: /tmp/podman-run-1000/containers/auth.json
      - name: Build
        id: build-image
        uses: redhat-actions/buildah-build@v2
        with:
          image: some-image
          tags: latest
          containerfiles: ./config/Dockerfile
          tls-verify: false
      - name: Push
        id: push-to-artifactory
        uses: redhat-actions/push-to-registry@v2
        with:
          image: ${{ steps.build-image.outputs.image }}
          tags: latest
          registry: some.other.repo.com/project
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          tls-verify: false
./config/Dockerfile
FROM .../openshift/origin-cli:4.10
USER root
RUN sudo yum update -y
RUN sudo yum install -y maven
RUN maven -version
RUN oc version
But the Build step fails, resulting in:
/usr/bin/buildah version
Version: 1.22.3
Go Version: go1.15.2
Image Spec: 1.0.1-dev
Runtime Spec: 1.0.2-dev
CNI Spec: 0.4.0
libcni Version:
image Version: 5.15.2
Git Commit:
Built: Thu Jan 1 00:00:00 1970
OS/Arch: linux/amd64
Overriding storage mount_program with "fuse-overlayfs" in environment
Performing build from Containerfile
/usr/bin/buildah bud -f /runner/_work/some-project/some-project/config/Dockerfile --format docker --tls-verify=false -t some-image:latest /runner/_work/some-project/some-project
chown /home/runner/.local/share/containers/storage/overlay/l: operation not permitted
time="2022-12-12T16:13:52Z" level=warning msg="failed to shutdown storage: \"chown /home/runner/.local/share/containers/storage/overlay/l: operation not permitted\""
time="2022-12-12T16:13:52Z" level=error msg="exit status 125"
Error: Error: buildah exited with code 125
I'm fairly out of ideas at this point. I was thinking it may have to do with storage.conf, as mentioned here, but even after overriding storage.conf I still get the same error. Originally, this is how storage.conf looks:
[storage]
driver = "overlay"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
]
[storage.options.overlay]
mountopt = "nodev,metacopy=on"
[storage.options.thinpool]
Does the problem lie deeper, like in the Dockerfile base image openshift/origin-cli?
Any help would be appreciated.
I ran into this issue today because I was doing some tests locally. Typically, your CI/CD should give the correct permissions to your containers (or to the workers running your jobs). I fixed this issue by adding the --privileged flag while running my container. I do not recommend using that mode in production unless you are really sure of what you are doing. Perhaps not exactly your issue, but dropping it here in case it helps someone else.
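For local testing, a minimal sketch of that workaround (the image name is a placeholder, not from the question):

# --privileged lifts the capability/ownership restrictions that cause
# "chown ...: operation not permitted" in rootless/unprivileged builds.
# Avoid this in production; prefer properly configured user namespaces.
docker run --privileged --rm -it my-runner-image:latest /bin/bash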

GitHub workflow: requested access to the resource is denied

I am trying to use a GitHub workflow to build an ASP.NET 6 project using a Dockerfile, then push the image to a private Azure registry using Docker.
Here is my .yml file:
name: Docker Image CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login To Azure Container Registry
        uses: Azure/docker-login@v1
        with:
          login-server: ${{ secrets.ACR_HOST }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWWORD }}
      - name: Build And Push Docker Images
        uses: docker/build-push-action@v3.1.1
        with:
          push: true
          file: ./Dockerfile
          tags: companyname/projectname:${{ github.run_number }}
In the above, the Dockerfile is located in the root of my project's code.
However, when the build runs I get the following error:
Error: buildx failed with: error: denied: requested access to the resource is denied
In the Secrets > Action section in my repository settings, I added ACR_HOST, ACR_USERNAME and ACR_PASSWORD secrets.
When viewing the logs, this issue seems to happen after this line in the logs
pushing companyname/projectname:2 with docker:
How can I solve this issue?
UPDATED
I changed the .yml script to the following:
name: Docker Image CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login To Azure Container Registry
        uses: Azure/docker-login@v1
        with:
          login-server: mycontainer.azurecr.io
          username: "The admin username"
          password: "The admin password"
      - run: cat ${{ env.DOCKER_CONFIG }}/config.json
      - name: Build And Push Docker Images
        uses: docker/build-push-action@v3.1.1
        with:
          push: true
          file: ./Dockerfile
          tags: companyname/projectname:${{ github.run_number }}
The added step (i.e., cat ${{ env.DOCKER_CONFIG }}/config.json) displayed a JSON string that looked like this:
{"auths":{"mycontainer.azurecr.io":{"auth":"BASE64 string with the admin username:password as expected"}}}
The base64 string was formatted as username:password.
I am assuming that the step Azure/docker-login@v1 has no issue and stages the token for docker/build-push-action@v3.1.1 correctly.
If I set the push flag to false in the docker/build-push-action@v3.1.1 step, the workflow runs with no issue. So from what I can tell, the issue is when the step docker/build-push-action@v3.1.1 tries to push the created image to the Azure registry.
I used my local machine to log in with the same credentials (docker login mycontainer.azurecr.io) and everything worked with no issue.
Additionally, the login request from my local machine is logged in the Azure portal. However, I do not see the request when I run the workflow.
I think the main issue is that the step docker/build-push-action@v3.1.1 does not attempt to log in before it pushes the image.
I followed the instructions here and it worked.
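The linked instructions are not reproduced here, but a common cause of this exact error is that the tags value lacks the registry host, so the push silently targets Docker Hub (where the job has no credentials). A hedged sketch of that fix:

- name: Build And Push Docker Images
  uses: docker/build-push-action@v3.1.1
  with:
    push: true
    file: ./Dockerfile
    # Prefixing the tag with the ACR login server makes the push
    # target the Azure registry instead of Docker Hub.
    tags: mycontainer.azurecr.io/companyname/projectname:${{ github.run_number }}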

Cypress code coverage, Pipeline Docker save logs error

I'm running my Cypress code coverage report using "npx nyc report --reporter=lcov --reporter=text-summary". I also have a script, "yarn e2e:coverage", but I want to see the result in the GitHub Actions log:
- name: Save logs
  continue-on-error: true
  if: ${{ always() }}
  env:
    COMMIT_SHA: ${{ steps.vars.outputs.sha_short }}
  run: |
    docker ps
    docker cp cypress_test:/cypress-coverage cypress-coverage
- name: Compress action step
  uses: a7ul/tar-action@v1.1.0
  id: compress
  with:
    command: c
    cwd: .
    files: |
      cypress-coverage
    outPath: test_cypress_coverage.tar
- name: Archive coverage
  continue-on-error: true
  if: ${{ always() }}
  env:
    COMMIT_SHA: ${{ steps.vars.outputs.sha_short }}
  uses: actions/upload-artifact@v2
  with:
    name: "${{ steps.date.outputs.yyyymmdd }}_E2E_test_coverage_${{ env.COMMIT_SHA }}"
    path: |
      test_cypress_coverage.tar
    retention-days: 7
I'm using this part for it, but there is an error: "Error: No such container:path: cypress_test:/cypress-coverage". Do you have any idea how to find the correct path? How can I see the log of the coverage result? By the way, I can get the artifacts and they work as expected, but I get the Save logs error.
If the cypress_test container has a bash, I would try and check:
- if there is a log file produced at that stage
- where it is.
That would be:
docker exec -it cypress_test find / -name "cypress-coverage*"
That way, you can see for yourself where the file is.
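Since "No such container:path" can also mean the container name itself doesn't match, a quick check worth running first (names taken from the question):

# List all containers, including exited ones, to confirm the exact name
docker ps -a --format '{{.Names}} {{.Status}}'
# Then copy using the name that actually appears in the list
docker cp cypress_test:/cypress-coverage cypress-coverage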

Do dockerized GitHub Actions support network options for docker run parameters?

I am using self-hosted GitHub runners for VPN access to some software, and I am trying to use a dockerized GitHub action on the self-hosted runners. However, I am having issues because I need to specify the --network host flag when GitHub Actions runs docker run. Is there a way to have the GitHub action use the network of the host?
As far as I know, it is not possible. It's not available on steps either. Options are available on jobs, though. The only other way is for you to create a composite action and run docker run ... directly in it. Here is one that I wrote for my own workflow. It's slightly more complicated, but it allows you to automatically pass environment variables from the runner to the Docker container based on the variable name prefix:
name: Docker start container
description: Start a detached container
inputs:
  image:
    description: The image to use
    required: true
  name:
    description: The container name
    required: true
  options:
    description: Additional options to pass to docker run
    required: false
    default: ''
  command:
    description: The command to run
    required: false
    default: ''
  env_pattern:
    description: The environment variable pattern to pass to the container
    required: false
    default: ''
outputs:
  cid:
    description: Container ID
    value: ${{ steps.info.outputs.cid }}
runs:
  using: composite
  steps:
    - name: Run
      shell: bash
      run: >
        variables='';
        for i in $(env | grep '${{ inputs.env_pattern }}' | awk -F '=' '{print $1}'); do
        variables="--env ${i} ${variables}";
        done;
        docker run -d
        --name ${{ inputs.name }}
        --network host
        --cidfile ${{ inputs.name }}.cid
        ${variables}
        ${{ inputs.options }}
        ${{ inputs.image }}
        ${{ inputs.command }}
    - name: Info
      id: info
      shell: bash
      run: echo "::set-output name=cid::$(cat ${{ inputs.name }}.cid)"
and to use it:
- name: Start app container
  uses: ./.github/actions/docker-start-container
  with:
    image: myapp/myapp:latest
    name: myapp
    env_pattern: 'MYAPP_'
    options: --entrypoint entrypoint.sh
    command: >
      --check
      -v
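As an aside, newer runners deprecate the ::set-output command; on current runner versions the Info step could write to the $GITHUB_OUTPUT file instead (a sketch of the equivalent step):

- name: Info
  id: info
  shell: bash
  run: echo "cid=$(cat ${{ inputs.name }}.cid)" >> "$GITHUB_OUTPUT"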

How to use a variable docker image in github-actions?

I am trying to write a custom GitHub action that runs some commands in a Docker container, but allows the user to select which Docker container they are run in (i.e., so I can run the same build instructions across different versions of the runtime environment).
My gut instinct was to have my .github/actions/main/action.yml file as:
name: 'Docker container command execution'
inputs:
  dockerfile:
    default: Dockerfile_r_latest
runs:
  using: 'docker'
  image: '${{ inputs.dockerfile }}'
  args:
    - /scripts/commands.sh
However this errors with:
##[error](Line: 7, Col: 10): Unrecognized named-value: 'inputs'. Located at position 1 within expression: inputs.dockerfile
Any help would be appreciated!
File References
My .github/workflow/build_and_test.yml file is:
name: Test Package
on:
  [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
        name: Checkout project
      - uses: ./.github/actions/main
        name: Build and test
        with:
          dockerfile: Dockerfile_r_latest
And my Dockerfile .github/actions/main/Dockerfile_r_latest is:
FROM rocker/verse:latest
ADD scripts /scripts
ENTRYPOINT [ "bash", "-c" ]
Interesting approach! I'm not sure if it's possible to use expressions in the image field of the action metadata. I would guess that the only fields that can take expressions instead of hardcoded strings are the args for the image so that the inputs can be passed.
For reference, this is the args section of the action.yml metadata:
https://help.github.com/en/articles/metadata-syntax-for-github-actions#args
I think there are other ways to achieve what you want to do. Have you tried using the jobs.<job_id>.container syntax? That allows you to specify an image that the steps of a job will run in. It will require that you publish the image to a public repository, though. So take care not to include any secrets.
For example, if you published your image to Docker Hub at gowerc/r-latest your workflow might look something like this:
name: Test Package
on:
  [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    container: gowerc/r-latest
    steps:
      - uses: actions/checkout@master
        name: Checkout project
      - name: Build and test
        run: ./scripts/commands.sh
ref: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer
Alternatively, you can also specify your image at the step level with uses. You could then pass a command via args to execute your script.
name: my workflow
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Check container
        uses: docker://alpine:3.8
        with:
          args: /bin/sh -c "cat /etc/alpine-release"
ref: https://help.github.com/en/github/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#example-using-a-docker-hub-action
In addition to @peterevans' answer, I would add that there's a third option where you can use a simple docker run command and pass any env vars that you have defined.
That helped to solve three things:
- Reuse a custom Docker image built within the steps for testing actions. It does not seem possible to do so with uses, as it first tries to pull that image (which doesn't exist yet) in a Setup job step that occurs before any steps of the job.
- This specific image can also be stored in a private Docker registry.
- Be able to use a variable for the Docker image.
My workflow looks like this:
name: Build-Test-Push
on:
  push:
    branches:
      - master
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY }}
  ECR_REPOSITORY: myproject/myimage
  IMAGE_TAG: ${{ github.sha }}
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checking out
        uses: actions/checkout@v2
        with:
          ref: master
      - name: Login to AWS ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build
        run: |
          docker pull $ECR_REGISTRY/$ECR_REPOSITORY || true
          docker build . -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -t $ECR_REGISTRY/$ECR_REPOSITORY:latest
      - name: Test
        run: |
          docker run $ECR_REGISTRY/$ECR_REPOSITORY:latest /bin/bash -c "make test"
      - name: Push
        run: |
          docker push $ECR_REGISTRY/$ECR_REPOSITORY
Here is another approach. The Docker image to use is passed to a cibuild shell script that takes care of pulling the right image.
GitHub workflow file:
name: 'GH Actions CI'
on:
  push:
    branches: ['*master', '*0.[0-9]?.x']
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ['*master', '*0.[0-9]?.x']
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    strategy:
      fail-fast: true
      matrix:
        include:
          - FROM: 'ubuntu:focal'
          - FROM: 'ubuntu:bionic'
          - FROM: 'ubuntu:xenial'
          - FROM: 'debian:buster'
          - FROM: 'debian:stretch'
          - FROM: 'opensuse/leap'
          - FROM: 'fedora:33'
          - FROM: 'fedora:32'
          - FROM: 'centos:8'
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          # We must fetch at least the immediate parents so that if this is
          # a pull request then we can checkout the head.
          fetch-depth: 2
      # If this run was triggered by a pull request event, then checkout
      # the head of the pull request instead of the merge commit.
      - run: git checkout HEAD^2
        if: ${{ github.event_name == 'pull_request' }}
      - name: Run CI
        env:
          FROM: ${{ matrix.FROM }}
        run: script/cibuild
Bash script script/cibuild:
#!/bin/bash
set -e
docker run --name my-docker-container $FROM script/custom-script.sh
docker cp my-docker-container:/usr/src/my-workdir/my-outputs .
docker rm my-docker-container
echo "cibuild Done!"
Put your custom commands in script/custom-script.sh.
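The contents of script/custom-script.sh are not shown; a minimal hedged sketch, assuming the output directory matches the docker cp path in cibuild:

#!/bin/bash
# Runs inside the container selected by $FROM; anything written to
# /usr/src/my-workdir/my-outputs is copied back to the host by cibuild.
set -e
mkdir -p /usr/src/my-workdir/my-outputs
# Placeholder build/test commands go here, e.g.:
echo "built on $(. /etc/os-release && echo "$PRETTY_NAME")" > /usr/src/my-workdir/my-outputs/result.txt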
