Is there a way to automate the build of a Kubeflow pipeline in GCP (Docker)?

Here is my cloudbuild.yaml file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/trainer_image
# Build the base image for lightweight components
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/base_image
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
  env:
  - 'BASE_IMAGE=gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME'
  - 'TRAINER_IMAGE=gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME'
  - 'RUNTIME_VERSION=$_RUNTIME_VERSION'
  - 'PYTHON_VERSION=$_PYTHON_VERSION'
  - 'COMPONENT_URL_SEARCH_PREFIX=$_COMPONENT_URL_SEARCH_PREFIX'
  - 'USE_KFP_SA=$_USE_KFP_SA'
  dir: $_PIPELINE_FOLDER/pipeline
# Upload the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  dir: $_PIPELINE_FOLDER/pipeline
# Push the images to Container Registry
images: ['gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME']
So basically what I am trying to do is write a bash script that contains all the variables, so that any time I push changes, the Cloud Build is triggered automatically.

You can add another step to the pipeline for the bash script, something like this:
steps:
- name: 'bash'
  args: ['echo', 'I am bash']
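As for having the build start automatically on every push, that is usually handled by a Cloud Build trigger rather than by the YAML itself. Below is a minimal sketch with gcloud, assuming a GitHub-hosted repo already connected to Cloud Build; the owner, repo, and substitution values are placeholders for your own. Since your cloudbuild.yaml uses $TAG_NAME, a tag trigger keeps that substitution populated:
# One-off: create a trigger that runs cloudbuild.yaml whenever a tag is pushed
gcloud beta builds triggers create github \
  --name="kfp-pipeline-on-push" \
  --repo-owner="YOUR_GITHUB_OWNER" \
  --repo-name="YOUR_REPO" \
  --tag-pattern=".*" \
  --build-config="cloudbuild.yaml" \
  --substitutions=_PIPELINE_FOLDER=pipelines,_TRAINER_IMAGE_NAME=trainer_image,_BASE_IMAGE_NAME=base_image,_PIPELINE_NAME=my_pipeline,_PIPELINE_DSL=pipeline.py,_PIPELINE_PACKAGE=pipeline.yaml,_ENDPOINT=YOUR_KFP_ENDPOINT
Once the trigger exists, every matching push starts the build above with those substitution values, so no separate variables script is needed.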

You also seem to split the compilation and the upload into two separate steps. I'm not 100% sure this works in Cloud Build (I'm not sure the files are shared between steps), so you might want to combine those steps, as in the sketch below.
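For example, a single step along these lines (a sketch reusing the same kfp-cli image, substitutions, and env values from your file) compiles and uploads within one container, so nothing has to survive between steps:
# Compile and upload the pipeline in a single step
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
    kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  env:
  - 'BASE_IMAGE=gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME'
  - 'TRAINER_IMAGE=gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME'
  - 'RUNTIME_VERSION=$_RUNTIME_VERSION'
  - 'PYTHON_VERSION=$_PYTHON_VERSION'
  - 'COMPONENT_URL_SEARCH_PREFIX=$_COMPONENT_URL_SEARCH_PREFIX'
  - 'USE_KFP_SA=$_USE_KFP_SA'
  dir: $_PIPELINE_FOLDER/pipeline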

Related

How to ignore "image not found" error in cloudbuild.yaml when deleting images from the artifact registry?

Currently the cloudbuild.yaml looks like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: [ 'artifacts', 'docker', 'images', 'delete', 'location-docker.pkg.dev/$PROJECT_ID/repository/image' ]
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'location-docker.pkg.dev/$PROJECT_ID/repository/image:latest', './' ]
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'location-docker.pkg.dev/$PROJECT_ID/repository/image:latest']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'image', '--image', 'location-docker.pkg.dev/$PROJECT_ID/repository/image:latest', '--region', 'region']
images:
- 'location-docker.pkg.dev/$PROJECT_ID/repository/image:latest'
That basically does the following:
deletes the existing image in Artifact Registry
builds the new image
pushes it back to Artifact Registry
deploys it to Google Cloud Run
My problem is that the first step fails whenever there is no image in the registry.
How can I prevent it from cancelling the whole build process when this occurs?
You can create an inline script to check whether the image exists or not. This assumes you always want to delete the image with the latest tag.
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-eEuo'
  - 'pipefail'
  - '-c'
  - |-
    if [[ -z `gcloud artifacts docker images describe location-docker.pkg.dev/$PROJECT_ID/repository/image:latest --verbosity=none --format=text` ]]
    then
      echo "Image does not exist. Continue with the build"
    else
      echo "Deleting Image"
      gcloud artifacts docker images delete location-docker.pkg.dev/$PROJECT_ID/repository/image
    fi
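Alternatively, newer Cloud Build releases let a step fail without failing the whole build via the allowFailure field on a step. If that field is available to you, the delete step can simply be marked as allowed to fail; a sketch, using the same image path as above:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: [ 'artifacts', 'docker', 'images', 'delete', 'location-docker.pkg.dev/$PROJECT_ID/repository/image' ]
  allowFailure: true  # an "image not found" error no longer cancels the build
# ... remaining build/push/deploy steps unchanged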

Secret in Cloud Build to Cloud Run step

I'm trying to set up CI/CD for my app using Cloud Build and Cloud Run.
My cloudbuild.yaml looks like this:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/project/dsapp-staging:$COMMIT_SHA', '.']
  timeout: '1200s'
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/project/dsapp-staging:$COMMIT_SHA']
  timeout: '1200s'
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'dsapp-staging'
  - '--image'
  - 'gcr.io/project/dsapp-staging:$COMMIT_SHA'
  - '--region'
  - 'europe-west1'
  - "--set-env-vars=FIREBASE_AUTH=$$FIREBASE_AUTH"
  timeout: '1200s'
  secretEnv: ['FIREBASE_AUTH']
timeout: '1200s'
availableSecrets:
  secretManager:
  - versionName: projects/projectid/secrets/FIREBASE_AUTH/versions/1
    env: 'FIREBASE_AUTH'
images:
- 'gcr.io/project/dsapp-staging:$COMMIT_SHA'
My problem is with the 'FIREBASE_AUTH' secret variable: I get an error saying the substitution is not available.
How can I pass an env var taken from Secret Manager to my gcloud command?
You can't use a secret like that in Cloud Build. The secret is only injected as an environment variable inside the step's container, not as a build substitution, so without a shell there is nothing to expand $$FIREBASE_AUTH at runtime. The workaround is to use shell mode in your step. Let's write it like this:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: bash
  args:
  - '-c'
  - "gcloud run deploy dsapp-staging --image gcr.io/project/dsapp-staging:$COMMIT_SHA --region europe-west1 --set-env-vars=FIREBASE_AUTH=$$FIREBASE_AUTH"
  timeout: '1200s'
  secretEnv: ['FIREBASE_AUTH']
And now it works!
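As a side note, and only if the secret is needed by the running Cloud Run service rather than by the build itself: gcloud run deploy can also mount it straight from Secret Manager with --set-secrets, so the value never passes through Cloud Build at all. A sketch, assuming the Cloud Run runtime service account has been granted access to the FIREBASE_AUTH secret:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'dsapp-staging'
  - '--image'
  - 'gcr.io/project/dsapp-staging:$COMMIT_SHA'
  - '--region'
  - 'europe-west1'
  - '--set-secrets=FIREBASE_AUTH=FIREBASE_AUTH:latest'
  timeout: '1200s'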

GitLab pipeline fails, even though the deployment happened on GCP

I just created my first CI/CD pipeline on GitLab, which builds a Docker container for a Next.js app and deploys it to Google Cloud Run.
My cloudbuild.yaml:
# File: cloudbuild.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/inook-web', '.' ]
# push the container image
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'push', 'gcr.io/$PROJECT_ID/inook-web']
# deploy to Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
  args: ['run', 'deploy', 'inook-web', '--image', 'gcr.io/$PROJECT_ID/inook-web', '--region', 'europe-west1', '--platform', 'managed', '--allow-unauthenticated']
My .gitlab-ci.yml:
# File: .gitlab-ci.yml
image: docker:latest
stages: # List of stages for jobs, and their order of execution
  - deploy-test
  - deploy-prod
deploy-test:
  stage: deploy-test
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
I get the following error message in the CI/CD pipeline:
https://ibb.co/ZXLWrj1
However, the deployment actually succeeds on GCP: https://ibb.co/ZJjtXzG
Any idea what I can do to fix the pipeline error?
What worked for me was to add a custom bucket for gcloud builds submit to push its logs to. Thanks @slauth for pointing me in the right direction.
Updated command:
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://inook_test_logs
If you add a bucket at the end of the command, then it works.
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://my_bucket_name_on_gcp
Remember to create a bucket on GCP :D
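Creating that bucket is a one-off step; a sketch with gsutil (the bucket name and service-account email are placeholders). The service account used by the GitLab job also needs write access so the logs can be uploaded:
# Create the logs bucket and let the CI service account write to it
gsutil mb -p $GCP_PROJECT_ID -l europe-west1 gs://my_bucket_name_on_gcp
gsutil iam ch serviceAccount:YOUR_SA_EMAIL:roles/storage.objectAdmin gs://my_bucket_name_on_gcp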

Set the Dockerfile path to the same directory

My Dockerfile and source code are in the same directory, so my cloudbuild.yaml file has '.' as the build argument.
However, I am getting this error:
I already specified the path '.', but it's looking for a different path, /workspace/Dockerfile.
My cloudbuild.yaml file:
steps:
- name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-t'
    - gcr.io/$PROJECT_ID/$_API_NAME
    - .
  id: docker_build
- name: gcr.io/cloud-builders/docker
  args:
    - push
    - gcr.io/$PROJECT_ID/$_API_NAME
  id: docker_push
Can you please try adding the build and dot arguments in single quotes, as below?
- name: 'gcr.io/cloud-builders/docker'
  args:
    - 'build'
    - '-t'
    - 'gcr.io/$PROJECT_ID/$_API_NAME'
    - '.'
Also, you could add an ls command prior to the docker build step for troubleshooting purposes, just to make sure the Dockerfile and source are in the current directory, as in the sketch below.
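For example, a throwaway step like this (a sketch) placed before docker_build lists the build context, so you can confirm the Dockerfile really ended up in /workspace:
- name: 'ubuntu'
  args: ['ls', '-la', '/workspace']
  id: list_workspace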

Google Cloud Build Docker build-arg not respected

I'm having a problem with Google Cloud Build where the docker build command doesn't seem to be accepting the build-arg option, even though the same command works as expected locally:
Dockerfile:
ARG ASSETS_ENV=development
RUN echo "ASSETS_ENV is ${ASSETS_ENV}"
Build Command:
docker build --build-arg="ASSETS_ENV=production" .
Result on local:
ASSETS_ENV is production
Result on Cloud Build:
ASSETS_ENV is development
OK, the fix was in the Cloud Build YAML config. In the args-list form there is no shell, so the embedded quotes are passed to docker literally and the build arg name no longer matches ASSETS_ENV:
Before:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg="ASSETS_ENV=production"', '.']
After:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker build --build-arg="ASSETS_ENV=production" .']
For anyone who defines their steps like this:
- name: 'gcr.io/cloud-builders/docker'
  args:
    - build
    - foo
    - bar
The comment from @nader-ghanbari worked for me:
- name: 'gcr.io/cloud-builders/docker'
  args:
    - build
    - --build-arg
    - TF2_BASE_IMAGE=${_TF2_BASE_IMAGE}
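Putting that together, a complete step could look like the sketch below; _ASSETS_ENV and the image name are placeholders for your own substitution and tag. The key point is that --build-arg and its value are separate, unquoted list items, so no shell is needed:
- name: 'gcr.io/cloud-builders/docker'
  args:
    - build
    - --build-arg
    - ASSETS_ENV=${_ASSETS_ENV}
    - '-t'
    - gcr.io/$PROJECT_ID/my-image:$COMMIT_SHA
    - '.'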
