Set Dockerfile path as the same directory - docker

My Dockerfile and source code are in the same directory, so my cloudbuild.yaml file has '.' as the argument.
However, I am getting this error:
I already specified path '.', but it's looking for a different path, /workspace/Dockerfile.
My cloudbuild.yaml file:
steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '-t'
  - gcr.io/$PROJECT_ID/$_API_NAME
  - .
  id: docker_build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - gcr.io/$PROJECT_ID/$_API_NAME
  id: docker_push

Can you please try wrapping the build and dot arguments in single quotes, as below?
- name: 'gcr.io/cloud-builders/docker'
  args:
  - 'build'
  - '-t'
  - 'gcr.io/$PROJECT_ID/$_API_NAME'
  - '.'
Also, you could add an ls step before the docker build for troubleshooting purposes, just to make sure the Dockerfile and source are in the current directory.
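Such a troubleshooting step could look like this sketch, placed before the docker_build step (it assumes the public ubuntu image as the step container, which is an assumption; any step image that ships ls would do):

```yaml
# Hypothetical debug step: list the workspace contents so the build log
# shows whether the Dockerfile really sits at '.' where docker build runs.
- name: 'ubuntu'
  args: ['ls', '-la', '.']
  id: debug_ls
```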

Related

How can I create CI/CD pipeline with cloudbuild.yaml in GCP?

I am trying to create a simple CI/CD pipeline. After the client does a git push, it will start a trigger with the cloudbuild.yaml below:
# steps:
# - name: 'docker/compose:1.28.2'
#   args: ['up', '-d']
# - name: "docker/compose:1.28.2"
#   args: ["build"]
# images: ['gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose']
# - name: 'gcr.io/cloud-builders/docker'
#   id: 'backend'
#   args: ['build','-t', 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest','.']
# - name: 'gcr.io/cloud-builders/docker'
#   args: ['push', 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest']
# steps:
# - name: 'docker/compose:1.28.2'
#   args: ['up', '-d']
# - name: "docker/compose:1.28.2"
#   args: ["build"]
# images:
# - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose'
# In this directory, run the following command to build this builder.
# $ gcloud builds submit . --config=cloudbuild.yaml
substitutions:
  _DOCKER_COMPOSE_VERSION: 1.28.2
steps:
- name: 'docker/compose:1.28.2'
  args:
  - 'build'
  - '--build-arg'
  - 'DOCKER_COMPOSE_VERSION=${_DOCKER_COMPOSE_VERSION}'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
  - '.'
- name: 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose'
  args: ['version']
images:
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
tags: ['cloud-builders-community']
It returns the error below: it cannot create images in the repository. How can I solve this issue?
ERROR: failed to pull because we ran out of retries.
ERROR
ERROR: build step 1 "gcr.io/internal-invoicing-solution/cloudbuild-demo-dockercompose" failed: error pulling build step 1 "gcr.io/internal-invoicing-solution/cloudbuild-demo-dockercompose": generic::unknown: retry budget exhausted (10 attempts): step exited with non-zero status: 1
Before pulling your image in the second step, you need to push it. When you declare the images at the end of your YAML definition, the images are pushed automatically at the end of the pipeline; here you need the push in the middle.
EDIT 1
I just added a docker push step, copied and pasted from your comments. Does it work?
substitutions:
  _DOCKER_COMPOSE_VERSION: 1.28.2
steps:
- name: 'docker/compose:1.28.2'
  args:
  - 'build'
  - '--build-arg'
  - 'DOCKER_COMPOSE_VERSION=${_DOCKER_COMPOSE_VERSION}'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
  - '-t'
  - 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
  - '.'
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest']
- name: 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose'
  args: ['version']
images:
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:latest'
- 'gcr.io/$PROJECT_ID/cloudbuild-demo-dockercompose:${_DOCKER_COMPOSE_VERSION}'
tags: ['cloud-builders-community']

Error on cloudbuild.yaml : (gcloud.builds.submit) interpreting cloudbuild.yaml as build config: 'list' object has no attribute 'items'

This is my cloudbuild.yaml file:
steps:
# BUILD IMAGE
- name: "gcr.io/cloud-builders/docker"
  args:
  - "build"
  - "--build-arg"
  - "PROJECT_ID=$PROJECT_ID"
  - "--build-arg"
  - "SERVER_ENV=$_SERVER_ENV"
  - "--tag"
  - "gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
  - "."
  env:
  - "PROJECT_ID=$PROJECT_ID"
  timeout: 180s
# PUSH IMAGE TO REGISTRY
- name: "gcr.io/cloud-builders/docker"
  args:
  - "push"
  - "gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
  timeout: 180s
# DEPLOY CONTAINER WITH GCLOUD
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
  - "run"
  - "deploy"
  - "my-service"
  - "--image=gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
  - "--platform=managed"
  - "--region=us-central1"
  - "--min-instances=1"
  - "--max-instances=3"
  - "--port=8080"
  timeout: 180s
images:
- "gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
substitutions:
- "_SERVER_ENV=TEST"
Is there anything wrong with this file?
Here is the error I get when I run the following command:
gcloud builds submit ./cloudRun \
  --config=./cloudRun/cloudbuild.yaml \
  --substitutions=_SERVER_ENV=TEST,TAG_NAME=MY_TAG \
  --project=MY_PROJECT_ID
ERROR: (gcloud.builds.submit) interpreting ./cloudRun/cloudbuild.yaml as build config: 'list' object has no attribute 'items'
I just found out what was wrong:
substitutions is not an ARRAY, but an OBJECT.
So this is NOT correct:
substitutions:
- "_SERVER_ENV=TEST"
But this is correct:
substitutions:
  _SERVER_ENV: "TEST"
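With substitutions written as a mapping, the tail of the cloudbuild.yaml above would read like this sketch (combining the fix with the images section from the original file):

```yaml
images:
- "gcr.io/$PROJECT_ID/my-image:$TAG_NAME"
# substitutions maps substitution names to default values (object, not list)
substitutions:
  _SERVER_ENV: "TEST"
```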

Is there a way to automate the build of kubeflow pipeline in gcp

Here is my cloudbuild.yaml file:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/trainer_image
# Build the base image for lightweight components
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME', '.']
  dir: $_PIPELINE_FOLDER/base_image
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
  env:
  - 'BASE_IMAGE=gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME'
  - 'TRAINER_IMAGE=gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME'
  - 'RUNTIME_VERSION=$_RUNTIME_VERSION'
  - 'PYTHON_VERSION=$_PYTHON_VERSION'
  - 'COMPONENT_URL_SEARCH_PREFIX=$_COMPONENT_URL_SEARCH_PREFIX'
  - 'USE_KFP_SA=$_USE_KFP_SA'
  dir: $_PIPELINE_FOLDER/pipeline
# Upload the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  dir: $_PIPELINE_FOLDER/pipeline
# Push the images to Container Registry
images: ['gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', 'gcr.io/$PROJECT_ID/$_BASE_IMAGE_NAME:$TAG_NAME']
So basically, what I am trying to do is write a bash script containing all the variables, so that any time I push changes, the Cloud Build is triggered automatically.
You may add another step to the pipeline for the bash script, something like this:
steps:
- name: 'bash'
  args: ['echo', 'I am bash']
You seem to split the compilation and upload between two separate steps. I'm not 100% sure this works in CloudBuild (I'm not sure the files are shared).
You might want to combine those steps.
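Combining them could look like the sketch below, assuming the custom kfp-cli image forwards its args to a shell (which the '-c' flag in the original steps suggests):

```yaml
# Hypothetical merged step: compile and upload in one shell invocation,
# so both commands run in the same container and see the same files.
- name: 'gcr.io/$PROJECT_ID/kfp-cli'
  args:
  - '-c'
  - |
    dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE
    kfp --endpoint $_ENDPOINT pipeline upload -p ${_PIPELINE_NAME}_$TAG_NAME $_PIPELINE_PACKAGE
  dir: $_PIPELINE_FOLDER/pipeline
```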

Google Cloud Build Docker build-arg not respected

I'm having a problem with Google Cloud Build where the docker build command doesn't seem to be accepting the build-arg option, even though the same command works as expected locally:
Dockerfile:
ARG ASSETS_ENV=development
RUN echo "ASSETS_ENV is ${ASSETS_ENV}"
Build Command:
docker build --build-arg="ASSETS_ENV=production" .
Result on local:
ASSETS_ENV is production
Result on Cloud Build:
ASSETS_ENV is development
OK, the fix was in the Cloud Build YAML config:
Before:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg="ASSETS_ENV=production"', '.']
After:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker build --build-arg="ASSETS_ENV=production" .']
For anyone who defines their steps like this:
- name: 'gcr.io/cloud-builders/docker'
  args:
  - build
  - foo
  - bar
The comment from @nader-ghanbari worked for me:
- name: 'gcr.io/cloud-builders/docker'
  args:
  - build
  - --build-arg
  - TF2_BASE_IMAGE=${_TF2_BASE_IMAGE}

How to build a docker image using cloud build with sdk, in local machine without dying trying it

I'm using Cloud Build to build a Docker image.
Guiding myself by the examples provided on GitHub:
bin/
pkg/
src/
  cloud.google.com
  contrib.go.opencensus.io
  github.com
  go.opencensus.io
  golang.org
  google.golang.org
  me/
    backend/
cloudbuild.yaml
Dockerfile
Where all my code is in src -> me -> backend
Cloud build steps .yaml file content is:
steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', 'me/backend']
  env: ['GOPATH=.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/superpack-213022/me/backend', '.']
images: ['gcr.io/superpack-213022/me/backend']
Dockerfile:
FROM scratch
COPY bin/backend /me/backend
ENTRYPOINT ["/me/backend"]
Gives me this error:
cannot find package golang/x/sys/unix in any of ...
Guiding myself by the examples provided in the documentation:
bin/
pkg/
src/
  cloud.google.com
  contrib.go.opencensus.io
  github.com
  go.opencensus.io
  golang.org
  google.golang.org
  me/
    backend/
cloudbuild.yaml
Dockerfile
Where all my code is in src -> me -> backend
Cloud build steps .yaml file content is:
steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', '.']
  env: ['GOPATH=backend']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/superpack-213022/backend', '.']
images: ['gcr.io/superpack-213022/backend']
Dockerfile:
FROM scratch
COPY bin/backend /backend
ENTRYPOINT ["backend"]
gives me this error:
"cannot find package me/backend in any of . and"
and a bunch of similar errors; it is not able to find my package.
Does anybody know what is wrong with the configuration? :(
For users with the same trouble: the big problem is Go dependencies.
args: ['install', 'me/backend']
"install" was the bottleneck that stopped me from completing the build; for some reason, "install" does not fetch all dependencies. You need to fetch all dependencies first with:
args: ['get','-d','me/backend/...']
Obviously, change "me/backend" to the repository you want to build.
This is how my local repository is set up:
bin/
pkg/
src/
  cloud.google.com          # dependency
  contrib.go.opencensus.io  # dependency
  github.com                # dependency
  go.opencensus.io          # dependency
  golang.org                # dependency
  google.golang.org         # dependency
  me/                       # my code
    backend/
      ...
    deploy/
      cloudbuild.yaml
      Dockerfile
Also, I moved all my code under "src/me" to Google Cloud Source Repositories.
cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/gcloud-slim'
  args: ['source','repos','clone', '[repository name]','src/me','--project=[project name]'] # change [repository name] and [project name] to your repository name and project name respectively
- name: 'gcr.io/cloud-builders/go'
  args: ['get','-d','me/backend/...']
- name: 'gcr.io/cloud-builders/go'
  args: ['install', 'me/backend']
  env: ['GOPATH=.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/[project name]/me/backend', '.'] # change [project name] to your project name
images: ['gcr.io/[project name]/me/backend'] # change [project name] to your project name
artifacts:
  objects:
    location: 'gs://[your bucket name]/backend/' # change [your bucket name] to your bucket name
    paths: ['./bin/backend']
Dockerfile:
FROM alpine
COPY bin/backend /backend
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
RUN chmod 755 /backend
CMD ["/backend"]
On the command line, you should run (taking my local repository layout as the example):
cd src/me/deploy
gcloud builds submit .
