Below is the GitHub Action step:
name: Build image
uses: docker/build-push-action@v3.2.0
with:
  build-args:
  secrets: |
    "github_token=${{ inputs.token }}"
    "uname=${{ github.actor }}"
    "Mysecret=SecretValue"
In the Dockerfile:
RUN --mount=type=secret,id=github_token \
    cat /run/secrets/github_token
RUN --mount=type=secret,id=github_actor \
    cat /run/secrets/github_actor
RUN --mount=type=secret,id=github_actor \
    varu=$(cat /run/secrets/github_actor)
RUN --mount=type=secret,id=github_token \
    var=$(cat /run/secrets/github_token)
RUN echo $var
I'm not able to consume the secrets: they do print, but I can't assign a secret to a variable and use it in the next statement.
If I want to get the value of Mysecret into a variable in the Dockerfile, how do I get it?
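(For reference, a minimal sketch of the usual fix, not taken from the question: each RUN starts a fresh shell, so a variable assigned in one RUN instruction is gone in the next. Read and use the secret inside a single RUN, with an id matching a name declared under secrets:.)

RUN --mount=type=secret,id=Mysecret \
    MYSECRET=$(cat /run/secrets/Mysecret) && \
    echo "Mysecret is ${#MYSECRET} characters long"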
Here is my attempt at creating a Docker Host Context via GitHub Actions:
name: CICD
on:
  push:
    branches:
      - main
      - staging
  workflow_dispatch:
jobs:
  build_and_deploy_monitoring:
    concurrency: monitoring
    runs-on: [self-hosted, linux, X64]
    steps:
      - uses: actions/checkout@v2
      - name: Save secrets to mon.env files
        run: |
          echo "DATA_SOURCE_NAME=${{ secrets.DB_DATASOURCE }}" >> mon.env
          echo "GF_SECURITY_ADMIN_USER=${{ secrets.GF_ADMIN_USER }}" >> mon.env
          echo "GF_SECURITY_ADMIN_PASSWORD=${{ secrets.GF_ADMIN_PASS }}" >> mon.env
          echo "DISCORD_TOKEN=${{ secrets.DISCORD_TOKEN }}" >> mon.env
          echo "PROMCORD_PREFIX=promcord_" >> mon.env
          echo "DB_CONNECTION_STRING=${{ secrets.DBC_STRING }}" >> mon.env
      # - name: Setup SSH stuff
      #   run: |
      #     sudo mkdir -p ~/.ssh/
      #     sudo echo "${{ secrets.SSH_KEY }}" >> ~/.ssh/tempest
      #     sudo chmod 0400 ~/.ssh/tempest
      #     sudo echo "${{ secrets.KNOWN_HOSTS }}" >> ~/.ssh/known_hosts
      #     sudo echo -e "Host ${{ secrets.SSH_HOST }}\n\tHostName ${{ secrets.SSH_HOST }}\n\tUser ${{ secrets.SSH_USER }}\n\tIdentityFile ~/.ssh/tempest" >> ~/.ssh/config
      - name: Install docker-compose
        run: sudo pip install docker-compose
      - name: Create context for docker host
        run: docker context create remote --docker
      - name: Set default context for docker
        run: docker context use remote
      - name: Always build the monitoring stack
        run: COMPOSE_PARAMIKO_SSH=1 COMPOSE_IGNORE_ORPHANS=1 docker-compose --context remote -f docker-compose-monitoring.yml up --build -d
The output is:
Run docker context create remote --docker
docker context create remote --docker
shell: /usr/bin/bash -e {0}
/actions-runner/actions-runner/_work/_temp/05fc146a-237e-4a92-b27d-796451184c0c.sh: line 1: docker: command not found
Error: Process completed with exit code 127.
I am trying to create a workflow that brings up a docker-compose stack of monitoring tools. I have set up self-hosted GitHub runners for this, and everything has been successful until the docker host context section; the error is given above. Can I get some help, as I am completely stumped?
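(Aside: exit code 127 means the shell could not find the docker binary on the self-hosted runner's PATH, so Docker likely isn't installed for the runner user. A purely illustrative diagnostic step, not from the question:)

- name: Check docker availability
  run: |
    # print where docker lives, or fail loudly if it is not on PATH for the runner user
    which docker || { echo "docker CLI not found on this runner"; exit 1; }
    docker --version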
My goal is to export data from a unit test inside a multi-stage Docker container. I have a docker create, docker cp, and docker rm that work in my terminal, but when I added them to my docker-image.yml the workflow fails and displays this error: "Error: Process completed with exit code .". I have also included the unit-test publishing step for a GitHub Action that can't be reached, since the build fails first.
- name: Build the Docker image
  run: |
    echo "${{ env.app_version }}"
    echo "${{ github.run_number }}"
    BUILD_NUMBER=${{ github.run_number }}
    VERSION_NUMBER=${{ env.app_version }}
    FULL_VERSION=${VERSION_NUMBER}.${BUILD_NUMBER}
    docker build . --file Dockerfile --tag placeholder/${SERVICE_NAME}:${FULL_VERSION} --build-arg BUILD_NUMBER=${BUILD_NUMBER}
    docker tag placeholder/${SERVICE_NAME}:${FULL_VERSION} placeholder/${SERVICE_NAME}:latest
    echo "full_version=$FULL_VERSION" >> $GITHUB_ENV
    docker create --name unit_test test-export
    docker cp unit_test:/app/surefire-reports extracted
    docker rm unit_test
# Runs a set of commands using the runners shell
- name: Run a multi-line script
  run: |
    echo Add other actions to build,
    echo test and deploy your project.
    ls -lath target/surefire-reports/
- name: Publish Unit Test Results
  # You may pin to the exact commit or the version.
  # uses: EnricoMi/publish-unit-test-result-action@4a00ba50806e7658e5005bb91acdb3274714595a
  uses: EnricoMi/publish-unit-test-result-action@v1.31
  with:
    files: target/surefire-reports/*.xml
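Aside: one way to surface the extraction failure separately (assuming an earlier step produces the test-export image, as the question suggests) is to give the create/cp/rm commands their own step, so their exit code is reported on its own:

- name: Extract test reports from the image
  run: |
    docker create --name unit_test test-export
    docker cp unit_test:/app/surefire-reports extracted
    docker rm unit_test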
I'm trying to pass some dynamically created arguments to a docker container within a composite GitHub Action.
The documentation, however, lacks examples of how to pass arguments to the docker container in this case:
https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runsstepsuses
Here is what I'm trying to achieve:
runs:
  using: 'composite'
  steps:
    - name: compose arguments
      id: compose-args
      shell: bash
      run: |
        encoded_github="$(echo '${{ inputs.github_context }}' | base64)"
        encoded_runner="$(echo '${{ inputs.runner_context }}' | base64)"
        args=('${{ inputs.command }}')
        args+=('${{ inputs.subcommand }}')
        args+=('--github-context')
        args+=("${encoded_github}")
        args+=('--runner-context')
        args+=("${encoded_runner}")
        args+=('${{ inputs.arguments }}')
        echo "::set-output name=provenance_args::$(echo "[$(printf "\"%s\"," ${args[*]})]" | sed 's/,]$/]/')"
    - name: Debug arguments
      shell: bash
      run: |
        echo Running slsa-provenance with following arguments
        echo ${{ steps.compose-args.outputs.provenance_args }}
    - uses: 'docker://ghcr.io/philips-labs/slsa-provenance:v0.5.0-draft'
      with:
        args: ${{ fromJSON(steps.compose-args.outputs.provenance_args) }}
fromJSON gives me a JSON array from the composed bash arguments. I assumed the uses: 'docker://…' part should receive its arguments the same way a docker-based action would, e.g.:
runs:
  using: 'docker'
  image: 'docker://ghcr.io/philips-labs/slsa-provenance:v0.4.0'
  args:
    - "generate"
    - '${{ inputs.subcommand }}'
    - "-artifact_path"
    - '${{ inputs.artifact_path }}'
    - "-output_path"
    - '${{ inputs.output_path }}'
    - "-github_context"
    - '${{ inputs.github_context }}'
    - "-runner_context"
    - '${{ inputs.runner_context }}'
    - "-tag_name"
    - '${{ inputs.tag_name }}'
Unfortunately, I'm getting the following error in the GitHub Actions workflow:
The template is not valid. philips-labs/slsa-provenance-action/v0.5.0-draft/action.yaml (Line: 47, Col: 15): A sequence was not expected
See the workflow here: https://github.com/philips-labs/slsa-provenance-action/runs/4618706311?check_suite_focus=true
How can I resolve this error?
Is it resolvable with current approach?
Is this a missing feature?
What would be an alternative?
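One observation, offered as an untested sketch: for a docker:// step, args appears to be expected as a single string rather than a YAML sequence, which is consistent with the "A sequence was not expected" error. Emitting the composed arguments as one space-separated string, and dropping fromJSON, might therefore work:

- name: compose arguments
  id: compose-args
  shell: bash
  run: |
    # ...build the args array as above, then join it into one string
    echo "::set-output name=provenance_args::${args[*]}"
- uses: 'docker://ghcr.io/philips-labs/slsa-provenance:v0.5.0-draft'
  with:
    args: ${{ steps.compose-args.outputs.provenance_args }}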
Take a look at this GitHub Action: https://github.com/mr-smithers-excellent/docker-build-push
It has buildArgs as an input, so it can be a solution for your case. For instance:
steps:
  - uses: actions/checkout@v2
    name: Check out the code
  - uses: mr-smithers-excellent/docker-build-push@v5
    name: Build & push Docker image
    with:
      image: repo/image
      tags: v1, latest
      registry: registry-url.io
      dockerfile: ./your/path/Dockerfile
      buildArgs: Test=true
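For context, the buildArgs values are forwarded to docker build as --build-arg flags, so the Dockerfile consumes them as ordinary ARGs; an illustrative two-liner:

ARG Test
RUN echo "Test=${Test}"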
I think you can just run it via bash commands:
- name: pull image
  shell: bash
  run: |
    docker pull ghcr.io/philips-labs/slsa-provenance:v0.4.0
- name: run container
  shell: bash
  run: |
    # the arguments start right after the image name, mirroring what args: would pass
    docker run --rm -i \
      --workdir /github/workspace \
      -v "/var/run/docker.sock":"/var/run/docker.sock" \
      -v ${{ runner.temp }}/_github_home:"/github/home" \
      -v ${{ github.workflow }}:"/github/workflow" \
      -v ${{ runner.temp }}/_runner_file_commands:"/github/file_commands" \
      -v ${{ github.workspace }}:"/github/workspace" \
      ghcr.io/philips-labs/slsa-provenance:v0.4.0 \
      generate \
      ${{ inputs.subcommand }} \
      -artifact_path ${{ inputs.artifact_path }} \
      -output_path ${{ inputs.output_path }} \
      -github_context ${{ inputs.github_context }} \
      -runner_context ${{ inputs.runner_context }} \
      -tag_name ${{ inputs.tag_name }}
I am modifying my docker-publish file to build a Docker image that works on arm64. The previous version worked fine on x86, but now I need it to work on arm64, so I changed the way the image is built.
The build process works fine, but somehow the docker push stopped working, and I am getting the error:
Error response from daemon: No such image: myimage-arm64:latest
This is my docker-publish.yml:
name: Docker
on:
  push:
    # Publish `master` as Docker `latest` image.
    branches:
      - master
    # Publish `v1.2.3` tags as releases.
    tags:
      - v*
  # Run tests for any PRs.
  pull_request:
env:
  IMAGE_NAME: myimage-arm64
jobs:
  # Push image to GitHub Packages.
  # See also https://docs.docker.com/docker-hub/builds/
  push:
    runs-on: ubuntu-latest
    if: github.event_name == 'push'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Prepare multiarch docker
        run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
      - name: Builder create
        run: docker buildx create --use
      - name: Log into registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
      - name: Build image
        run: |
          docker buildx build \
            --tag $IMAGE_NAME \
            --file Dockerfile \
            --platform linux/arm64 .
      - name: Push image
        run: |
          IMAGE_ID=docker.pkg.github.com/${{ github.repository }}/$IMAGE_NAME
          # Change all uppercase to lowercase
          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
          # Strip git ref prefix from version
          # VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
          # I changed this so it takes the version from a file on my project
          VERSION=$(cat version)
          # Strip "v" prefix from tag name
          [[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
          # Use Docker `latest` tag convention
          [ "$VERSION" == "master" ] && VERSION=latest
          echo IMAGE_ID=$IMAGE_ID
          echo VERSION=$VERSION
          ### The two previous echo print the correct stuff
          ### I get the error in these last two lines
          docker tag $IMAGE_NAME $IMAGE_ID:$VERSION
          docker push $IMAGE_ID:$VERSION
Any help? The push phase was working fine previously, and I haven't touched it while making the build work with arm64.
EDIT 1:
I modified the procedure following the answers, but it still does not work (error: tag is needed when pushing to registry):
- name: Log into registry
  run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
- name: Builder create
  run: docker buildx create --use
- name: Build image
  run: |
    IMAGE_ID=docker.pkg.github.com/${{ github.repository }}/$IMAGE_NAME
    VERSION=$(cat version)
    echo TAG=$IMAGE_ID:$VERSION
    docker buildx build --push \
      --tag $IMAGE_ID:$VERSION \
      --file Dockerfile \
      --platform linux/arm64 .
Specifically, the logs are:
Run IMAGE_ID=docker.pkg.github.com/GiamBoscaro/portfolio-website/$IMAGE_NAME
TAG=docker.pkg.github.com/UserName/RepoName/ImageName:1.2.0
#1 [internal] booting buildkit
#1 sha256:bfa0dddd89a9c970aa189079c1d31d17f7a75edd434bb19ad90432b27b266e3a
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 0.4s done
#1 creating container buildx_buildkit_intelligent_volhard0
#1 creating container buildx_buildkit_intelligent_volhard0 0.9s done
#1 DONE 1.3s
error: tag is needed when pushing to registry
Error: Process completed with exit code 1.
EDIT 2: Finally fixed the issue. Even if it's not the best way, here's the code that works. I switched over to the new container registry (ghcr.io) and moved the docker login into the same job as docker buildx:
jobs:
  push:
    runs-on: ubuntu-latest
    if: github.event_name == 'push'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Prepare multiarch docker
        run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
      - name: Builder create
        run: docker buildx create --use
      - name: Build image
        run: |
          IMAGE_ID=ghcr.io/${{ github.actor }}/$IMAGE_NAME
          # Change all uppercase to lowercase
          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
          VERSION=$(cat version)
          echo TAG=$IMAGE_ID:$VERSION
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker buildx build --push \
            --tag $IMAGE_ID:$VERSION \
            --file Dockerfile.arm \
            --platform linux/arm64 .
Buildx runs builds within a separate container, not directly in your docker engine, and its output is not stored in the local docker engine. That wouldn't work for multi-platform images anyway, so you typically push directly to the registry. It's much more efficient to avoid moving around layers that didn't change in the registry, and it allows you to manage multi-platform images (everything loaded into the docker engine is dereferenced to a single platform).
If you really want to save the output to the local docker engine, you can use --load in the buildx command. However, the preferred option is to use the build-push-action, which builds your tag and pushes it in one step. This means reordering your steps to determine the version and other variables first, and then running the build against those. You can see an example of this in my own project, which was assembled from various other docker examples out there.
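For completeness, the --load variant (single-platform builds only) would look like:

docker buildx build --load \
  --tag $IMAGE_NAME \
  --file Dockerfile \
  --platform linux/arm64 .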
Here's a quick untested attempt to make that change:
- name: Prepare
  id: prep
  run: |
    IMAGE_ID=docker.pkg.github.com/${{ github.repository }}/$IMAGE_NAME
    # Change all uppercase to lowercase
    IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
    # Strip git ref prefix from version
    # VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
    # I changed this so it takes the version from a file on my project
    VERSION=$(cat version)
    # Strip "v" prefix from tag name
    [[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
    # Use Docker `latest` tag convention
    [ "$VERSION" == "master" ] && VERSION=latest
    echo IMAGE_ID=$IMAGE_ID
    echo VERSION=$VERSION
    echo ::set-output name=version::${VERSION}
    echo ::set-output name=docker_tag::${IMAGE_ID}:${VERSION}
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    file: Dockerfile
    platforms: linux/arm64
    push: true
    tags: ${{ steps.prep.outputs.docker_tag }}
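Note that newer runners deprecate the ::set-output workflow command; if you hit warnings there, the equivalent writes to the $GITHUB_OUTPUT file:

echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
echo "docker_tag=${IMAGE_ID}:${VERSION}" >> "$GITHUB_OUTPUT"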
From the updated question, this is the entire command being run:
docker buildx build --push
The next command to run would be:
--tag $IMAGE_ID:$VERSION ...
I'm sure you're saying, "Wait, what? There's a trailing slash; that's a multi-line command!" But there's also whitespace after that slash, so instead of escaping a linefeed you've escaped a space character. Docker treats that escaped space as one argument and will attempt to run a build whose context is a directory named " " (a single space). To fix it, remove the trailing whitespace after the backslash.
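In other words, the fixed step, with each backslash as the very last character on its line:

docker buildx build --push \
  --tag $IMAGE_ID:$VERSION \
  --file Dockerfile \
  --platform linux/arm64 .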
I am trying to use gsutil to copy a file from GCS into a Cloud Run container during the build step.
The steps I have tried:
RUN pip install gsutil
RUN gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts
The error:
ServiceException: 401 Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.
CommandException: 1 file/object could not be transferred.
The command '/bin/sh -c gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
The service account (default compute & cloudbuild) does have access to GCS, and I have also tried gsutil config -a and various other flags, with no success!
I am not sure exactly how I should authenticate to successfully access the bucket.
Here is my GitHub Action job:
jobs:
  build:
    name: Build image
    runs-on: ubuntu-latest
    env:
      BRANCH: ${GITHUB_REF##*/}
      SERVICE_NAME: ${{ secrets.SERVICE_NAME }}
      PROJECT_ID: ${{ secrets.PROJECT_ID }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # Setup gcloud CLI
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
          project_id: ${{ secrets.PROJECT_ID }}
          export_default_credentials: true
      # Download the file locally
      - name: Get_file
        run: |-
          gsutil cp gs://BUCKET_NAME/path/to/file .
      # Build docker image
      - name: Image_build
        run: |-
          docker build -t gcr.io/$PROJECT_ID/$SERVICE_NAME .
      # Configure docker to use the gcloud command-line tool as a credential helper
      - run: |
          gcloud auth configure-docker -q
      # Push image to Google Container Registry
      - name: Image_push
        run: |-
          docker push gcr.io/$PROJECT_ID/$SERVICE_NAME
You have to set 3 secrets:
SERVICE_ACCOUNT_KEY: your service account key file
SERVICE_NAME: the name of your container
PROJECT_ID: the project where the image is deployed
Because you download the file locally, it is present in the Docker build context. Then simply COPY it in the Dockerfile and do what you want with it.
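A minimal sketch of that COPY (the local file name is whatever gsutil downloaded; the destination path is illustrative):

# the file was placed in the build context by the Get_file step
COPY file /app/artefacts/file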
UPDATE
If you want to do this inside the Docker build, you can achieve it like this:
Dockerfile
FROM google/cloud-sdk:alpine as gcloud
WORKDIR /app
ARG KEY_FILE_CONTENT
RUN echo $KEY_FILE_CONTENT | gcloud auth activate-service-account --key-file=- \
&& gsutil cp gs://BUCKET_NAME/path/to/file .
....
FROM <FINAL LAYER>
COPY --from=gcloud /app/<myFile> .
....
The docker build command:
docker build --build-arg KEY_FILE_CONTENT="YOUR_KEY_FILE_CONTENT" \
    -t gcr.io/$PROJECT_ID/$SERVICE_NAME .
YOUR_KEY_FILE_CONTENT depends on your environment. Here are some ways to inject it:
In a GitHub Action: ${{ secrets.SERVICE_ACCOUNT_KEY }}
In your local environment: $(cat my_key.json)
I see you tagged Cloud Build, so you can use a step like this:
steps:
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://mybucket/results.zip', 'previous_results.zip']
  # operations that use previous_results.zip and produce new_results.zip
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'new_results.zip', 'gs://mybucket/results.zip']
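Putting that together with the Docker build from the question, a fuller cloudbuild.yaml could look like this (bucket, path, and image name are placeholders):

steps:
  # fetch the artefact with Cloud Build's own service account, so no gsutil auth is needed in the Dockerfile
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://BUCKET_NAME/path/to/file', '.']
  # the file is now in the build context, so the Dockerfile can simply COPY it
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
images: ['gcr.io/$PROJECT_ID/my-image']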