I want to deploy my dbt/BigQuery project in a Docker container using CI/CD through GitHub Actions. I am struggling to get the GCP credentials into the container. I put the credentials in a GitHub secret, since I obviously cannot commit the credential file to GitHub. How can I pass the GitHub secret as an argument to keyfile.json so that it is copied into the container?
My Dockerfile:
FROM fishtownanalytics/dbt:0.21.0
ARG RUN_TARGET=foo
RUN groupadd --gid 50000 docker && \
    useradd --home-dir /home/docker --create-home --uid 50000 --gid 50000 --skel /dev/null docker
USER docker
RUN mkdir /home/docker/.dbt
# Ordering is least to most frequently touched folder/file
COPY profiles.yml /home/docker/.dbt/profiles.yml
COPY keyfile.json /home/docker/keyfile.json
COPY macros /home/docker/macros
COPY dbt_project.yml /home/docker/dbt_project.yml
COPY models /home/docker/models
WORKDIR /home/docker/
# Run dbt on container startup.
CMD ["run"]
My .github/workflows/main.yml file looks as follows:
name: Build and Deploy to dbt project
on: push
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: dotenv-load
        id: dotenv
        uses: falti/dotenv-action@v0.2.7
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Configure Docker
        run: gcloud auth configure-docker -q
      - name: Build and push Docker
        uses: mr-smithers-excellent/docker-build-push@v5
        with:
          image: repo/image
          tags: v1, latest
          registry: eu.gcr.io
          username: _json_key
          password: ${{ secrets.GCP_SA_KEY }}
This gives the following error when building:
COPY failed: file not found in build context or excluded by .dockerignore: stat keyfile.json: file does not exist
I have tried passing the GitHub secret as a build-arg, but without success.
Or is it really bad practice to put the credentials in the container, and should I approach it in a different way?
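One common workaround (a sketch, not taken from the answers in this thread, and the GCP_SA_KEY variable name is an assumption) is to write the secret to keyfile.json in a workflow step just before the build, so the COPY instruction finds the file in the build context:

```shell
# Hypothetical workflow step body. The workflow would expose the secret
# as an environment variable, e.g.:
#   env:
#     GCP_SA_KEY: ${{ secrets.GCP_SA_KEY }}
GCP_SA_KEY="${GCP_SA_KEY:-dummy-key-for-local-testing}"
# Write it into the build context so `COPY keyfile.json ...` can find it
printf '%s' "$GCP_SA_KEY" > keyfile.json
```

Run this before the docker build step and delete keyfile.json afterwards; note that baking credentials into an image layer is generally discouraged, which is why keyless approaches are usually preferred.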
Subsequent gcloud commands work for me after the below step. Try adding it immediately after your checkout step.
- name: Set up gcloud
  uses: google-github-actions/setup-gcloud@master
  with:
    service_account_key: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}
    project_id: ${{ secrets.GCP_PROJECT_ID }}
I ended up using the oauth method for authentication:
jaffle_shop:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: project_name
      dataset: dataset_name
      threads: 1
      timeout_seconds: 300
      location: europe-west4 # Optional, one of US or EU
      priority: interactive
      retries: 1
name: Build and Deploy to dbt project
on: push
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: dotenv-load
        id: dotenv
        uses: falti/dotenv-action@v0.2.7
      - name: get sha
        id: vars
        run: |
          echo ::set-output name=sha_short::$(git rev-parse --short=8 ${{ github.sha }})
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Login
        uses: google-github-actions/setup-gcloud@master
        with:
          project_id: ${{ steps.dotenv.outputs.GCP_PROJECT }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Configure Docker
        run: gcloud auth configure-docker -q
      - name: Build and push Docker
        uses: mr-smithers-excellent/docker-build-push@v5
        with:
          image: repo/image
          tags: v1, latest
          registry: eu.gcr.io
          username: _json_key
          password: ${{ secrets.GCP_SA_KEY }}
Related
I have the following environment variable set at the top level of my GitHub workflow file:
env:
  GITHUB_RUN_NUMBER: ${{ github.run_number }}
I would like to access it in my docker container (frontend-react) and log in console:
console.log(`Build number= ${process.env.GITHUB_RUN_NUMBER}`)
Currently, GITHUB_RUN_NUMBER does not reach the container's environment when the container is built in the workflow.
How should I pass environment variables from workflow to container?
Thanks!
name: CI_DEMO
on:
  workflow_dispatch
env:
  GITHUB_RUN_NUMBER: ${{ github.run_number }}
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  Test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '14.20.1'
          cache: 'npm'
      - run: npm ci
      - run: CI=true npm test -- --verbose --coverage --watchAll=false
  Build_Demo:
    runs-on: ubuntu-latest
    needs: [ Test ]
    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build and Push Docker Image
        env:
          CI_COMMIT_TAG: N-${{ github.run_number }}
        run: |
          sudo docker build -t $CI_REGISTRY/$CI_REGISTRY_REPOSITORY:$CI_COMMIT_TAG . \
            -f ./deploy/Dockerfile \
            --build-arg WEB_PRIVATE_KEY="$WEB_PRIVATE_KEY" \
            --build-arg REACT_APP_GA_TRACKING_ID=$REACT_APP_GA_TRACKING_ID \
            --build-arg
          docker push $CI_REGISTRY/$CI_REGISTRY_REPOSITORY:$CI_COMMIT_TAG
  Deploy_Demo:
    runs-on: ubuntu-latest
    needs: [ Build_NIX_Demo ]
    steps:
      - uses: actions/checkout@v3
      - name: Preparing for deploy
        env:
          DEMO_PRIVATE_KEY: ${{ secrets.DEMO_PRIVATE_KEY }}
        run: |
      - name: Deploy
        env:
        run: |
          export TMP_AFRONTEND_VERSION=$CI_COMMIT_TAG
          ssh -o SendEnv=TMP_AFRONTEND_VERSION $DEMO_IP "AFRONTEND_VERSION=$TMP_AFRONTEND_VERSION; unset AFRONTEND_VERSION; export AFRONTEND_VERSION; docker-compose -f $DEMO_PROJECT_PATH/docker-compose.yml --env-file $DEMO_PROJECT_PATH/.env up -d --no-deps a_frontend"
The missing step was defining the variable in the Dockerfile, as mentioned by aknosis.
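A minimal sketch of that fix, assuming the value should end up as a runtime environment variable inside the image: declare it as a build ARG in the Dockerfile and promote it to ENV, then pass the value at build time with --build-arg.

```dockerfile
# Accept the value at build time...
ARG GITHUB_RUN_NUMBER
# ...and persist it into the container's runtime environment
ENV GITHUB_RUN_NUMBER=$GITHUB_RUN_NUMBER
```

The build command then becomes something like docker build --build-arg GITHUB_RUN_NUMBER="$GITHUB_RUN_NUMBER" . (image name omitted). Note that if the value must reach client-side code bundled by Create React App, only variables prefixed with REACT_APP_ are inlined into the bundle at build time.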
I had a script that previously worked with Docker Hub, and I now want it to pull from the GitHub Container Registry instead. I'm sure I've got the syntax wrong somehow: I keep bouncing between errors like "can not have using and with" and, now, a syntax error reported on line 41 with no message (41 is the third line below).
I basically want to build my Docker image, then push it when my action file changes.
- name: Run step if any of the listed files above change # UPDATE
  if: steps.changed-files-specific.outputs.any_changed == 'true'
- uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
- run: |
    RELEASEVERSION=11.09
    # RELEASEVERSION=$(cat version.txt)
    # https://github.community/t/wanting-to-add-a-build-date-and-time-to-my-github-action/220185/6'
    #
    RELEASEDATE1=$(date +"%m/%d/%YT%H:%M:%S%p")
    RELEASEDATE=$(TZ=":US/Pacific" date +%c)
    # https://unix.stackexchange.com/questions/164826/date-command-iso-8601-option
    RELEASEDATEISO=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
    #
    # removes any previous lines that might have contained VERSION or DATE (not tested)
    perl -ni -e 'next if /^RELEASE(?:VERSION|DATE)=/;print' .env.production
    # record in `.env.production`
    (
      echo "RELEASEVERSION=$RELEASEVERSION"
      echo "RELEASEDATE=$RELEASEDATE"
      echo "RELEASEDATEISO=$RELEASEDATEISO"
    ) >> .env.production
    echo "Docker webdevsvcc changed so building then pushing..."
    docker build . --file Dockerfile --tag ghcr.io/pkellner/svccwebsitedev --tag ghcr.io/pkellner/svccwebsitedev:$RELEASEVERSION
    docker push ghcr.io/pkellner/svccwebsitedev --all-tags
I watched a good video on YAML and that helped a lot. Here is the file I wanted; it works now.
jobs:
  build:
    runs-on: ubuntu-latest # windows-latest | macos-latest
    defaults:
      run:
        working-directory: ApolloServerSvcc # UPDATE
    name: docker build and push
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      # setup Docker build action
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to Github Packages
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GH_TOKEN }}
      - name: Image digest
        run: |
          echo "One or more files in /ApolloServerSvcc changed in branch webdevsvccmobi-release"
          RELEASEVERSION=11.02
          # RELEASEVERSION=$(cat version.txt)
          # https://github.community/t/wanting-to-add-a-build-date-and-time-to-my-github-action/220185/6'
          #
          RELEASEDATE1=$(date +"%m/%d/%YT%H:%M:%S%p")
          RELEASEDATE=$(TZ=":US/Pacific" date +%c)
          # https://unix.stackexchange.com/questions/164826/date-command-iso-8601-option
          RELEASEDATEISO=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
          #
          # removes any previous lines that might have contained VERSION or DATE (not tested)
          perl -ni -e 'next if /^RELEASE(?:VERSION|DATE)=/;print' .env.production
          # record in `.env`
          (
            echo "RELEASEVERSION=$RELEASEVERSION"
            echo "RELEASEDATE=$RELEASEDATE"
            echo "RELEASEDATEISO=$RELEASEDATEISO"
          ) >> .env
          echo "building then pushing..."
          docker build . --file Dockerfile --tag ghcr.io/pkellner/apolloserversvccdev:latest --tag ghcr.io/pkellner/apolloserversvccdev:$RELEASEVERSION
          docker push ghcr.io/pkellner/apolloserversvccdev --all-tags
I have created the following repository: https://github.com/d3vpasha/docker
I have a Dockerfile that is meant to create a node app:
FROM node:12 as node
RUN printf "deb http://archive.debian.org/debian/ jessie main\ndeb-src http://archive.debian.org/debian/ jessie main\ndeb http://security.debian.org jessie/updates main\ndeb-src http://security.debian.org jessie/updates main" > /etc/apt/sources.list
RUN apt-get update && \
    apt-get install -y zip
RUN npm i -g npm@latest
I have also created the following GitHub workflow to build the image from the Dockerfile & push it to the GitHub registry:
name: webapp
on:
  push:
    branches: ['main']
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build:
    name: "build webapp"
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2
      - name: Login to Github container registry
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v1
        with:
          push: true
Once I push to the main branch, the GitHub workflow launches and finishes successfully, but nothing shows up in the Packages menu of the repo, nor in the Packages menu of my organization. These are the final lines of the "Build and push Docker image" step:
Successfully built 98edd82e75d5
Pushing image []
I followed the official tutorial & did everything as asked, but it still doesn't work.
Does someone have a clue about it?
You're missing the tags; the correct config is:
- name: Build and push Docker image
  uses: docker/build-push-action@v1
  with:
    push: true
    tags: $your_dockerhub_account/$your_dockerhub_repository
By the way, docker/build-push-action is now at v2; you could consider using it. If you use v2 and do not specify tags, it will clearly report the error:
error: tag is needed when pushing to registry
Error: buildx failed with: error: tag is needed when pushing to registry
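For illustration, a v2 step with an explicit tag might look like the following; the ghcr.io image path here is a placeholder, not taken from the question:

```yaml
- name: Build and push Docker image
  uses: docker/build-push-action@v2
  with:
    push: true
    tags: ghcr.io/${{ github.repository }}:latest
```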
So, I have been trying to move to GitHub Actions for CI/CD. While writing a workflow, I ran into issues building the image. The error is "Process completed with exit code 1", and I am unable to figure out what's wrong with my file; I got most of it from the docs.
The step causing the error is "Build": docker build -t gcr.io/${{ secrets.GCP_PROJECT_ID }}/github-action-test:latest.
name: cloud-run-deploy
on:
  push:
    branches:
      - develop
jobs:
  build-and-deploy:
    name: Cloud Run Deployment
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Setup GCP Service Account
        uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
        with:
          version: "latest"
          service_account_email: ${{ secrets.GCP_SA_EMAIL }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Configure Docker
        run: |
          gcloud auth configure-docker
      - name: Build
        run: |
          docker build -t gcr.io/${{ secrets.GCP_PROJECT_ID }}/github-action-test:latest .
      - name: Push
        run: |
          docker push gcr.io/${{ secrets.GCP_PROJECT_ID }}/github-action-test:latest
      - name: Deploy
        run: |
          gcloud run deploy github-action-test \
            --region europe-west1 \
            --image gcr.io/${{ secrets.GCP_PROJECT_ID }}/github-action-test \
            --platform managed \
            --allow-unauthenticated \
            --project ${{ secrets.GCP_PROJECT_ID }}
I am trying to use gsutil to copy a file from GCS into a Cloud Run container during the build step.
The steps I have tried:
RUN pip install gsutil
RUN gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts
The error:
ServiceException: 401 Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.
CommandException: 1 file/object could not be transferred.
The command '/bin/sh -c gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
The service account (default compute & Cloud Build) does have access to GCS, and I have also tried gsutil config -a and various other flags, with no success!
I am not sure exactly how I should authenticate to successfully access the bucket.
Here is my GitHub Actions job:
jobs:
  build:
    name: Build image
    runs-on: ubuntu-latest
    env:
      BRANCH: ${GITHUB_REF##*/}
      SERVICE_NAME: ${{ secrets.SERVICE_NAME }}
      PROJECT_ID: ${{ secrets.PROJECT_ID }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # Setup gcloud CLI
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
          project_id: ${{ secrets.PROJECT_ID }}
          export_default_credentials: true
      # Download the file locally
      - name: Get_file
        run: |-
          gsutil cp gs://BUCKET_NAME/path/to/file .
      # Build docker image
      - name: Image_build
        run: |-
          docker build -t gcr.io/$PROJECT_ID/$SERVICE_NAME .
      # Configure docker to use the gcloud command-line tool as a credential helper
      - run: |
          gcloud auth configure-docker -q
      # Push image to Google Container Registry
      - name: Image_push
        run: |-
          docker push gcr.io/$PROJECT_ID/$SERVICE_NAME
You have to set 3 secrets:
SERVICE_ACCOUNT_KEY: your service account key file
SERVICE_NAME: the name of your container
PROJECT_ID: the project where your image is deployed
Because you download the file locally, the file is present in the Docker build context. Then simply COPY it in the Dockerfile and do what you want with it.
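As a sketch (the file name and destination path are placeholders): after the Get_file step, the file sits in the checkout directory, which is also the Docker build context, so the Dockerfile can pick it up directly.

```dockerfile
# The file downloaded by the Get_file workflow step is in the build context
COPY file /app/file
```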
UPDATE
If you want to do this in docker, you can achieve this like that
Dockerfile
FROM google/cloud-sdk:alpine as gcloud
WORKDIR /app
ARG KEY_FILE_CONTENT
RUN echo $KEY_FILE_CONTENT | gcloud auth activate-service-account --key-file=- \
&& gsutil cp gs://BUCKET_NAME/path/to/file .
....
FROM <FINAL LAYER>
COPY --from=gcloud /app/<myFile> .
....
The Docker build command
docker build --build-arg KEY_FILE_CONTENT="YOUR_KEY_FILE_CONTENT" \
-t gcr.io/$PROJECT_ID/$SERVICE_NAME .
YOUR_KEY_FILE_CONTENT depends on your environment. Here are some ways to inject it:
In a GitHub Action: ${{ secrets.SERVICE_ACCOUNT_KEY }}
In your local environment: $(cat my_key.json)
I see you tagged Cloud Build, so you can also use a step like this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/results.zip', 'previous_results.zip']
# operations that use previous_results.zip and produce new_results.zip
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'new_results.zip', 'gs://mybucket/results.zip']