How can I create a CI/CD pipeline with cloudbuild.yaml in GCP?

I am trying to create a simple CI/CD pipeline. After the client does a git push, it starts a trigger with the cloudbuild.yaml below:
# steps:
# - name: 'docker/compose:1.28.2'
#   args: ['up', '-d']
# - name: "docker/compose:1.28.2"
#   args: ["build"]
# images: ['gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose']
# - name: 'gcr.io/cloud-builders/docker'
#   id: 'backend'
#   args: ['build','-t', 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest','.']
# - name: 'gcr.io/cloud-builders/docker'
#   args: ['push', 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest']
# steps:
# - name: 'docker/compose:1.28.2'
#   args: ['up', '-d']
# - name: "docker/compose:1.28.2"
#   args: ["build"]
# images:
# - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose'
# In this directory, run the following command to build this builder.
# $ gcloud builds submit . --config=cloudbuild.yaml
substitutions:
  _DOCKER_COMPOSE_VERSION: 1.28.2
steps:
- name: 'docker/compose:1.28.2'
  args:
  - 'build'
  - '--build-arg'
  - 'DOCKER_COMPOSE_VERSION=${_DOCKER_COMPOSE_VERSION}'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
  - '.'
- name: 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose'
  args: ['version']
images:
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
tags: ['cloud-builders-community']
It returns the error below: it cannot create images in the repository. How can I solve this issue?
ERROR: failed to pull because we ran out of retries.
ERROR
ERROR: build step 1 "gcr.io/internal-invoicing-solution/cloudbuild-demo-dockercompose" failed: error pulling build step 1 "gcr.io/internal-invoicing-solution/cloudbuild-demo-dockercompose": generic::unknown: retry budget exhausted (10 attempts): step exited with non-zero status: 1

Before pulling your image in the 2nd step, you need to push it. When you declare images at the end of your YAML definition, they are pushed automatically at the end of the pipeline. Here you need the push in the middle.
EDIT 1
I just added a docker push step, copied and pasted from your comments. Does it work?
substitutions:
  _DOCKER_COMPOSE_VERSION: 1.28.2
steps:
- name: 'docker/compose:1.28.2'
  args:
  - 'build'
  - '--build-arg'
  - 'DOCKER_COMPOSE_VERSION=${_DOCKER_COMPOSE_VERSION}'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
  - '.'
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest']
- name: 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose'
  args: ['version']
images:
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
tags: ['cloud-builders-community']

Related

CircleCI: Can we use multiple workflows for multiple types?

I'm new to CircleCI. I want to provision my infrastructure via Terraform, and after that I also want to trigger my build, deploy, and push commands for the AWS side. But the config does not allow me to use plan_approve_apply and build-and-deploy together in one workflow. I also tried to create multiple workflows keys (like the example below), one for each, but that didn't work either. How can I call both in a single CircleCI config file? (See the sketch after the config below.)
My CircleCI config.yml file:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@8.1.0
  aws-ecs: circleci/aws-ecs@2.2.1
jobs:
  init-plan:
    working_directory: /tmp/project
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - checkout
      - run:
          name: terraform init & plan
          command: |
            terraform init
            terraform plan
      - persist_to_workspace:
          root: .
          paths:
            - .
  apply:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: terraform
          command: |
            terraform apply
      - persist_to_workspace:
          root: .
          paths:
            - .
  destroy:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: destroy
          command: |
            terraform destroy
      - persist_to_workspace:
          root: .
          paths:
            - .
workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
workflows: # didn't work
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"
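For what it's worth, a YAML mapping can't repeat the top-level workflows: key, which is why the second block is ignored or rejected. A minimal sketch of one way to combine them, assuming both workflows should run on every push, is a single workflows: key holding both definitions:
workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"
The two workflows then run independently whenever the pipeline is triggered.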

Secret in Cloud Build to Cloud Run step

I'm trying to set up CI/CD for my app using Cloud Build and Cloud Run.
My cloudbuild.yaml looks like this:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/project/dsapp-staging:$COMMIT_SHA', '.']
    timeout: '1200s'
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/project/dsapp-staging:$COMMIT_SHA']
    timeout: '1200s'
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'dsapp-staging'
      - '--image'
      - 'gcr.io/project/dsapp-staging:$COMMIT_SHA'
      - '--region'
      - 'europe-west1'
      - "--set-env-vars=FIREBASE_AUTH=$$FIREBASE_AUTH"
    timeout: '1200s'
    secretEnv: ['FIREBASE_AUTH']
timeout: '1200s'
availableSecrets:
  secretManager:
    - versionName: projects/projectid/secrets/FIREBASE_AUTH/versions/1
      env: 'FIREBASE_AUTH'
images:
  - 'gcr.io/project/dsapp-staging:$COMMIT_SHA'
My problem is with the 'FIREBASE_AUTH' secret variable: I get an error saying the substitution is not available.
How can I pass my env var taken from secrets to my gcloud command?
You can't use a secret like that in Cloud Build. The secret is only injected as an environment variable in the step's container, and environment variables are only expanded when a shell evaluates the command line; with entrypoint: gcloud there is no shell, so $$FIREBASE_AUTH is never resolved. The workaround is to use shell mode in your step. Let's write it like this:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: bash
  args:
    - '-c'
    - "gcloud run deploy dsapp-staging --image gcr.io/project/dsapp-staging:$COMMIT_SHA --region europe-west1 --set-env-vars=FIREBASE_AUTH=$$FIREBASE_AUTH"
  timeout: '1200s'
  secretEnv: ['FIREBASE_AUTH']
And now it works! (Cloud Build turns the escaped $$FIREBASE_AUTH into a literal $FIREBASE_AUTH, and bash then expands the environment variable that secretEnv injected into the step.)

Set Dockerfile path as the same directory

My Dockerfile and source code are in the same directory, so my cloudbuild.yaml file has '.' as the argument.
However, I am getting an error: I already specified the path '.', but it's looking for a different path, /workspace/Dockerfile.
My cloudbuild.yaml file:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '-t'
      - gcr.io/$PROJECT_ID/$_API_NAME
      - .
    id: docker_build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - gcr.io/$PROJECT_ID/$_API_NAME
    id: docker_push
Can you please try adding the build and dot arguments in single-quote format, as below?
- name: 'gcr.io/cloud-builders/docker'
  args:
    - 'build'
    - '-t'
    - 'gcr.io/$PROJECT_ID/$_API_NAME'
    - '.'
Also, you could add an ls command prior to the docker build for troubleshooting purposes, just to make sure you have the Dockerfile and source in the current directory.
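A minimal sketch of such a step, assuming the default /workspace working directory (the ubuntu image is just one convenient choice; any image with ls works):
- name: 'ubuntu'
  args: ['ls', '-la', '/workspace']
  id: debug_ls
Placed before the docker_build step, its log output shows exactly which files Cloud Build uploaded into the build context.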

Error on cloudbuild.yaml: (gcloud.builds.submit) interpreting cloudbuild.yaml as build config: 'list' object has no attribute 'items'

This is my cloudbuild.yaml file:
steps:
  # BUILD IMAGE
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "build"
      - "--build-arg"
      - "PROJECT_ID=$PROJECT_ID"
      - "--build-arg"
      - "SERVER_ENV=$_SERVER_ENV"
      - "--tag"
      - "gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
      - "."
    env:
      - "PROJECT_ID=$PROJECT_ID"
    timeout: 180s
  # PUSH IMAGE TO REGISTRY
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "push"
      - "gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
    timeout: 180s
  # DEPLOY CONTAINER WITH GCLOUD
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "my-service"
      - "--image=gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
      - "--platform=managed"
      - "--region=us-central1"
      - "--min-instances=1"
      - "--max-instances=3"
      - "--port=8080"
    timeout: 180s
images:
  - "gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
substitutions:
  - "_SERVER_ENV=TEST"
Is there anything wrong with this file?
Here is the error I get when I run the following command:
gcloud builds submit ./cloudRun \
  --config=./cloudRun/cloudbuild.yaml \
  --substitutions=_SERVER_ENV=TEST,TAG_NAME=MY_TAG \
  --project=MY_PROJECT_ID
ERROR: (gcloud.builds.submit) interpreting ./cloudRun/cloudbuild.yaml as build config: 'list' object has no attribute 'items'
Just found out what was wrong:
substitutions is not an ARRAY, but an OBJECT.
So this is NOT correct:
substitutions:
  - "_SERVER_ENV=TEST"
But this is correct:
substitutions:
  _SERVER_ENV: "TEST"

Is there a way to automate the build of a Kubeflow pipeline in GCP?

Here is my cloudbuild.yaml file:
steps:
# Build the trainer image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/trainer_image
# Build the base image for lightweight components
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/base_image
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
  env:
  - 'BASE_IMAGE=gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME'
  - 'TRAINER_IMAGE=gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME'
  - 'RUNTIME_VERSION=$_RUNTIME_VERSION'
  - 'PYTHON_VERSION=$_PYTHON_VERSION'
  - 'COMPONENT_URL_SEARCH_PREFIX=$_COMPONENT_URL_SEARCH_PREFIX'
  - 'USE_KFP_SA=$_USE_KFP_SA'
  dir: $_PIPELINE_FOLDER/pipeline
# Upload the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  dir: $_PIPELINE_FOLDER/pipeline
# Push the images to Container Registry
images: ['gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME']
So basically, what I am trying to do is write a bash script which contains all the variables, so that any time I push changes, the Cloud Build is triggered automatically.
You may add another step in the pipeline for the bash. Something like below:
steps:
- name: 'bash'
  args: ['echo', 'I am bash']
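If the goal is one bash script that holds all the variables, a minimal sketch could look like this (the values and the substitution list are hypothetical; add the remaining _ variables the same way):
#!/bin/bash
# Hypothetical wrapper: keeps every substitution value in one place.
TAG_NAME=v1
ENDPOINT=https://your-kfp-host.example.com   # placeholder, not a real endpoint

gcloud builds submit . --config=cloudbuild.yaml \
  --substitutions=TAG_NAME=$TAG_NAME,_ENDPOINT=$ENDPOINT,_TRAINER_IMAGE_NAME=trainer_image,_BASE_IMAGE_NAME=base_image
For push-triggered builds, a Cloud Build trigger on the repository can run the same cloudbuild.yaml, with the same substitution values defined in the trigger instead of in the script.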
You seem to split the compilation and the upload between two separate steps. I'm not 100% sure this works in Cloud Build (I'm not sure the files are shared between the steps).
You might want to combine those steps; a sketch follows.
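A minimal sketch of the combined step, reusing only the commands, env entries, and variables already present above:
# Compile and upload the pipeline in one step
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
    kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  env:
  - 'BASE_IMAGE=gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME'
  - 'TRAINER_IMAGE=gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME'
  - 'RUNTIME_VERSION=$_RUNTIME_VERSION'
  - 'PYTHON_VERSION=$_PYTHON_VERSION'
  - 'COMPONENT_URL_SEARCH_PREFIX=$_COMPONENT_URL_SEARCH_PREFIX'
  - 'USE_KFP_SA=$_USE_KFP_SA'
  dir: $_PIPELINE_FOLDER/pipeline
Since both commands run in the same step, the compiled package is guaranteed to be on disk when the upload runs.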
