Google Cloud Build Docker build-arg not respected - docker

I'm having a problem with Google Cloud Build where the docker build command doesn't seem to accept the --build-arg option, even though the same command works as expected locally:
Dockerfile:
ARG ASSETS_ENV=development
RUN echo "ASSETS_ENV is ${ASSETS_ENV}"
Build Command:
docker build --build-arg="ASSETS_ENV=production" .
Result on local:
ASSETS_ENV is production
Result on Cloud Build:
ASSETS_ENV is development

Ok the fix was in the cloud build yaml config:
Before:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg="ASSETS_ENV=production"', '.']
After:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker build --build-arg="ASSETS_ENV=production" .']
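The reason, as far as I can tell: in the YAML list form, Cloud Build hands each argument to docker directly with no shell in between, so the literal quotes survive and docker sees a build-arg key containing a quote character, which never matches ASSETS_ENV; under bash -c the shell strips the quotes before docker runs. A quick local illustration of the difference:

```shell
# What docker receives in the YAML list form: no shell runs,
# so the quotes arrive verbatim inside the argument.
printf '%s\n' '--build-arg="ASSETS_ENV=production"'
# What docker receives under bash -c: the shell strips the quotes first.
bash -c 'printf "%s\n" --build-arg="ASSETS_ENV=production"'
```

Which also suggests the other fix: keep the list form but drop the inner quotes entirely, as the unquoted --build-arg form further down does.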

For anyone who defines their steps like this:
- name: 'gcr.io/cloud-builders/docker'
  args:
    - build
    - foo
    - bar
The comment from #nader-ghanbari worked for me:
- name: 'gcr.io/cloud-builders/docker'
  args:
    - build
    - --build-arg
    - TF2_BASE_IMAGE=${_TF2_BASE_IMAGE}

Related

How to ignore "image not found" error in cloudbuild.yaml when deleting images from the artifact registry?

Currently the cloudbuild.yaml looks like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['artifacts', 'docker', 'images', 'delete', 'location-docker.pkg.dev/$PROJECT_ID/repository/image']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'location-docker.pkg.dev/$PROJECT_ID/repository/image:latest', './']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'location-docker.pkg.dev/$PROJECT_ID/repository/image:latest']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'image', '--image', 'us-east1-docker.pkg.dev/$PROJECT_ID/cb-area52-ar/area52-image:latest', '--region', 'region']
images:
- 'location-docker.pkg.dev/$PROJECT_ID/registry/image:latest'
That does basically the following:
- deletes the existing image in the Artifact Registry
- builds the new image
- pushes it back to the Artifact Registry
- deploys it to Google Cloud Run
My problem is that the first step fails whenever there is no image in the registry.
How can I prevent it from cancelling the whole build process when this occurs?
You can create an inline script to check whether the image exists or not. This assumes you always want to delete the image with the latest tag.
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-eEuo'
    - 'pipefail'
    - '-c'
    - |-
      if [[ -z `gcloud artifacts docker images describe location-docker.pkg.dev/$PROJECT_ID/repository/image:latest --verbosity=none --format=text` ]]
      then
        echo "Image does not exist. Continue with the build"
      else
        echo "Deleting Image"
        gcloud artifacts docker images delete location-docker.pkg.dev/$PROJECT_ID/repository/image
      fi
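Alternatively, if you don't need the log messages or the branch logic, a shorter sketch (untested, same image path assumed) is to let the delete step fail softly so the build continues:

```yaml
# Minimal variant (sketch): swallow the delete failure so the
# build continues even when no image exists yet.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - 'gcloud artifacts docker images delete location-docker.pkg.dev/$PROJECT_ID/repository/image --quiet || exit 0'
```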

Secret in Cloud Build to Cloud run step

I'm trying to setup CICD for my app using Cloud Build and Cloud Run.
My cloudbuild.yaml looks like this:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/project/dsapp-staging:$COMMIT_SHA', '.']
  timeout: '1200s'
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/project/dsapp-staging:$COMMIT_SHA']
  timeout: '1200s'
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    - 'run'
    - 'deploy'
    - 'dsapp-staging'
    - '--image'
    - 'gcr.io/project/dsapp-staging:$COMMIT_SHA'
    - '--region'
    - 'europe-west1'
    - '--set-env-vars=FIREBASE_AUTH=$$FIREBASE_AUTH'
  timeout: '1200s'
  secretEnv: ['FIREBASE_AUTH']
timeout: '1200s'
availableSecrets:
  secretManager:
    - versionName: projects/projectid/secrets/FIREBASE_AUTH/versions/1
      env: 'FIREBASE_AUTH'
images:
- 'gcr.io/project/dsapp-staging:$COMMIT_SHA'
My problem is with the 'FIREBASE_AUTH' secret variable: I get an error saying the substitution is not available.
How can I pass an env var taken from Secret Manager to my gcloud command?
You can't use a secret like that in Cloud Build: a secret listed in secretEnv is injected as an environment variable into the step's process, not as a build substitution, so only a shell can expand it. Use shell mode in your step, like this:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: bash
  args:
    - '-c'
    - "gcloud run deploy dsapp-staging --image gcr.io/project/dsapp-staging:$COMMIT_SHA --region europe-west1 --set-env-vars=FIREBASE_AUTH=$$FIREBASE_AUTH"
  timeout: '1200s'
  secretEnv: ['FIREBASE_AUTH']
secretEnv: ['FIREBASE_AUTH']
And now it works!
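Why this works, as I understand it: Cloud Build rewrites $$FIREBASE_AUTH to a literal $FIREBASE_AUTH before the step starts, and secretEnv exposes the decrypted secret as an environment variable inside the step's process, which bash then expands. A rough local illustration (the exported value is fake, standing in for what secretEnv does):

```shell
# secretEnv roughly amounts to this export inside the build step
# (the value here is fake, for illustration only):
export FIREBASE_AUTH='fake-value-for-illustration'
# After Cloud Build turns $$FIREBASE_AUTH into $FIREBASE_AUTH,
# bash -c expands it from the environment:
bash -c 'echo "--set-env-vars=FIREBASE_AUTH=$FIREBASE_AUTH"'
```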

Set Dockerfile path as the same directory

My Dockerfile and source code are in the same directory, so my cloudbuild.yaml file passes '.' as the build argument.
However, I am getting an error: I already specified the path '.', but it's looking for a different path, /workspace/Dockerfile.
My cloudbuild.yaml file:
steps:
- name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-t'
    - gcr.io/$PROJECT_ID/$_API_NAME
    - .
  id: docker_build
- name: gcr.io/cloud-builders/docker
  args:
    - push
    - gcr.io/$PROJECT_ID/$_API_NAME
  id: docker_push
Can you please try adding the build and dot arguments in a single quote format as below?
- name: 'gcr.io/cloud-builders/docker'
  args:
    - 'build'
    - '-t'
    - 'gcr.io/$PROJECT_ID/$_API_NAME'
    - '.'
Also, you could add an ls command prior to docker build for troubleshooting purposes, just to make sure the Dockerfile and source are in the current directory.
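For instance, a throwaway debug step like this (a sketch; the id is arbitrary) placed before docker_build would print the workspace contents in the build log:

```yaml
# Temporary debug step (sketch): confirm the Dockerfile is in /workspace.
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'ls -la .']
  id: list_workspace
```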

Is there a way to automate the build of kubeflow pipeline in gcp

Here is my cloudbuild.yaml file:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/trainer_image
# Build the base image for lightweight components
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/base_image
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
    - '-c'
    - |
      dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
  env:
    - 'BASE_IMAGE=gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME'
    - 'TRAINER_IMAGE=gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME'
    - 'RUNTIME_VERSION=$_RUNTIME_VERSION'
    - 'PYTHON_VERSION=$_PYTHON_VERSION'
    - 'COMPONENT_URL_SEARCH_PREFIX=$_COMPONENT_URL_SEARCH_PREFIX'
    - 'USE_KFP_SA=$_USE_KFP_SA'
  dir: $_PIPELINE_FOLDER/pipeline
# Upload the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
    - '-c'
    - |
      kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  dir: $_PIPELINE_FOLDER/pipeline
# Push the images to Container Registry
images: ['gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME']
Basically, what I am trying to do is write a bash script that contains all the variables, so that any time I push changes, the Cloud Build is triggered automatically.
You may add another step in the pipeline for bash, something like below:
steps:
- name: 'bash'
  args: ['echo', 'I am bash']
You seem to split the compilation and the upload between two separate steps. I'm not 100% sure this works in Cloud Build (I'm not sure the files are shared between the steps).
You might want to combine those steps.
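If you do combine them, a sketch of the merged step might look like the following (assuming it reuses the env block and dir of the compile step):

```yaml
# Sketch: compile and upload the pipeline in one step, so the compiled
# package is guaranteed to be present when the upload runs.
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
    - '-c'
    - |
      dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
      kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  env:
    - 'BASE_IMAGE=gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME'
    - 'TRAINER_IMAGE=gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME'
    # ...plus the remaining env entries from the compile step
  dir: $_PIPELINE_FOLDER/pipeline
```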

How to use docker image from build step in deploy step in CircleCI 2.0?

Struggling for the last few days to migrate from CircleCI 1.0 to 2.0: the build process is done, but deployment is still a big issue. The CircleCI documentation is not really much help.
Here is a similar config.yml to what I have:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
The issue is in the deploy job. I have to specify the docker: image key, but I want to reuse the environment from the build job, where all the required stuff is already installed. Surely I could just install it again in the deploy job, but having multiple deploy jobs would lead to code duplication, which is something I want to avoid.
You probably want to persist to the workspace in the build job and attach it in your deploy job; you won't need '- checkout' after that.
https://circleci.com/docs/2.0/configuration-reference/#persist_to_workspace
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
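One caveat worth noting: persist_to_workspace carries files, not images living in the remote Docker daemon. If the deploy job needs the built image itself, a common sketch (image name `project` assumed from the config above) is to docker save it into the workspace and docker load it afterwards; the deploy job then also needs setup_remote_docker:

```yaml
# In the build job, after the docker build step:
      - run:
          name: Save image for later jobs
          command: docker save -o project-image.tar project
      - persist_to_workspace:
          root: ./
          paths:
            - project-image.tar
# In the deploy job, after attach_workspace and setup_remote_docker:
      - run:
          name: Load image
          command: docker load -i project-image.tar
```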
If you label the image built by the build stage, you can then reference it in the deploy stage: https://docs.docker.com/compose/compose-file/#labels
