I have two or more Docker images where the later ones are based on the first image. I want to build them all with Google Cloud Build and have the following multi-step cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/lh-build', './src']
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/lhweb', './src/LHWeb']
images:
- gcr.io/$PROJECT_ID/lh-build
- gcr.io/$PROJECT_ID/lhweb
When I run this build config, I can see the following error:
Step 1/6 : FROM eu.gcr.io/logistikhelden/lh-build
manifest for eu.gcr.io/logistikhelden/lh-build not found
I then tried to push the image after the first step:
...
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/lh-build']
...
The same problem remains, though. Any idea what's wrong here?
You are pushing the image to gcr.io, but it looks like your Dockerfile specifies a base image in the eu.gcr.io registry. Try changing your Dockerfile base image to FROM gcr.io/logistikhelden/lh-build.
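Putting that together, the build config could push the base image explicitly and the second Dockerfile could pull from the same gcr.io registry. This is only a sketch based on the question's file; the Dockerfile change is shown as a comment:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/lh-build', './src']
# Push the base image so it exists in the registry before anything pulls it.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/lh-build']
# ./src/LHWeb/Dockerfile should then start with:
#   FROM gcr.io/logistikhelden/lh-build
# (gcr.io, matching the registry pushed to above, not eu.gcr.io)
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/lhweb', './src/LHWeb']
images:
- gcr.io/$PROJECT_ID/lh-build
- gcr.io/$PROJECT_ID/lhweb
```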
Related
Does Google Cloud Build keep docker images between build steps by default?
Their docs say built images are discarded after every step, but I've seen examples in which build steps use images produced in previous steps. So, are built images discarded on completion of every step, or are they saved somewhere for later steps?
Here's my cloudbuild.yaml.
steps:
- name: gcr.io/cloud-builders/docker
args:
- build
- '-t'
- '${_ARTIFACT_REPO}'
- .
- name: gcr.io/cloud-builders/docker
args:
- push
- '${_ARTIFACT_REPO}'
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
args:
- run
- deploy
- my-service
- '--image'
- '${_ARTIFACT_REPO}'
- '--region'
- us-central1
- '--allow-unauthenticated'
entrypoint: gcloud
Yes, Cloud Build keeps images between steps.
You can think of Cloud Build as a simple VM or your local computer: when you build an image, it is stored locally (as when you run docker build -t TAG .).
All the steps run on the same instance, so you can reuse images built in earlier steps in later steps. Your sample steps do not show this, but the following do:
steps:
- name: 'gcr.io/cloud-builders/docker'
args:
- build
- -t
- MY_TAG
- .
- name: 'gcr.io/cloud-builders/docker'
args:
- run
- MY_TAG
- COMMAND
- ARG
You can also use a previously built image as a build step itself:
steps:
- name: 'gcr.io/cloud-builders/docker'
args:
- build
- -t
- MY_TAG
- .
- name: 'MY_TAG'
args:
- COMMAND
- ARG
All the images built are available in your workspace until the build is done (success or failure).
P.S. The reason I asked where you read that the images are discarded after every step is that I haven't seen that in the docs, unless I missed something — if you have the link, please share it with us.
The documentation says we can build a container and deploy it from Container Registry to Cloud Run using a cloudbuild.yaml file:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/PROJECT_ID/IMAGE', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/PROJECT_ID/IMAGE']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args: ['run', 'deploy', 'SERVICE-NAME', '--image', 'gcr.io/PROJECT_ID/IMAGE', '--region', 'REGION', '--platform', 'managed']
images:
- gcr.io/PROJECT_ID/IMAGE
We can also pull an image from Docker Hub in the cloudbuild.yaml file, like this:
steps:
- name: "maven"
args: ["mvn", "--version"]
I want to pull an image from Docker Hub, then build and deploy it to Cloud Run using a cloudbuild.yaml file, but I don't know how to do that as I am new to Docker and Cloud Run.
I suspect this question is slightly too broad for Stack Overflow.
You would probably benefit from reading the documentation and learning more about these technologies.
The answer also depends on security constraints.
IIRC, Cloud Run requires that you deploy images from Google Container Registry (GCR) so a key step is in transferring the image from DockerHub to GCR (docker pull from DockerHub; docker tag for GCR; docker push to GCR).
If DockerHub requires authentication, you'll need to login to DockerHub before you can docker pull from it.
If GCR requires authentication (probably), you'll need to log in to GCR before you can docker push to it. Commonly, this is done by granting Cloud Build's service account write permission on the storage bucket that underpins GCR.
All of this is possible using Cloud Build steps (see: cloud-builders)
Once the image is in GCR, you can use the gcloud step to deploy it
These steps can be effected using Cloud Build (cloudbuild.yaml), something of the form:
steps:
- name: "docker"
args:
- "login"
- "--username=[[username]]"
- "--password=[[password-or-token]]"
- name: "docker"
args:
- "pull"
- "[[image]]:[[tag]]"
- name: "docker"
args:
- "tag"
- "[[image]]:[[tag]]"
- "gcr.io/${PROJECT_ID}/[[image]]:[[tag]]"
- name: "docker"
args:
- "push"
- "gcr.io/${PROJECT_ID}/[[image]]:[[tag]]"
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args:
- "run"
- "deploy"
- "[[service]]"
- "--image=gcr.io/${PROJECT_ID}/[[image]]:[[tag]]"
- "--region=[[REGION]]"
- "--platform=managed"
You should spend some hands-on time with docker. You can pull an image and push it to a different place like this:
docker pull ubuntu
docker tag ubuntu gcr.io/.../ubuntu
docker push gcr.io/.../ubuntu
I'm not sure how and why Maven is involved here.
My project is in GitLab, and I push it to Google Cloud Platform to build, push, and deploy.
The first step, building, works fine and finishes with:
Built and pushed image as gcr.io/my-project/backend
But always the second step is failing with this:
The push refers to repository [gcr.io/my-project/backend]
An image does not exist locally with the tag: gcr.io/my-project/backend
My cloudbuild.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/mvn:3.5.0-jdk-8'
args: ['clean', 'install', 'jib:build', '-Dimage=gcr.io/$PROJECT_ID/backend']
# push the container image
- name: 'gcr.io/cloud-builders/docker'
args: [ 'push', 'gcr.io/$PROJECT_ID/backend:latest']
You'd have to tag the image with the args:
- name: gcr.io/cloud-builders/docker
id: 'backend'
args: [
'build',
'-t', 'gcr.io/$PROJECT_ID/backend:${SHORT_SHA}',
'-t', 'gcr.io/$PROJECT_ID/backend:latest',
...
]
An images block also seems to be missing:
images:
- 'gcr.io/$PROJECT_ID/backend:${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/backend:latest'
With image-tagging set up, that error message should disappear.
And in order to configure Docker for the Google Container Registry:
gcloud auth configure-docker
See Storing images in Container Registry for reference.
I am trying to create a CI pipeline to automate building and testing on Google Cloud Build. I currently have two separate builds. The first build is triggered manually; it calls the gcr.io/cloud-builders/docker builder with a dockerfile that creates an Ubuntu development environment containing the packages required to build our program. I currently call this build manually because it shouldn't change much. This step creates a docker image that is then stored in our Google Cloud Container Registry. The cloudbuild.yml file for this build step is as follows:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/image_folder', '.']
timeout: 500s
images:
- gcr.io/$PROJECT_ID/image_folder
Now that the docker image is stored in the Container Registry, I set up a build trigger to build our program. The framework for our program will be changing, so it is essential that our pipeline periodically rebuilds our program before testing can take place. For this step I refer to the previous image stored in our Container Registry and run it as a custom builder on Google Cloud. At the moment, the argument to our custom builder calls a python script that uses python os.system to issue the commands that build our program. The cloudbuild.yml file for this build step is stored in our Google Cloud Source Repository so that it can be triggered by pushes to our repo. The cloudbuild.yml file is the following:
steps:
- name: 'gcr.io/$PROJECT_ID/image_folder:latest'
entrypoint: 'bash'
args:
- '-c'
- 'python3 path/to/instructions/build_instructions.py'
timeout: 2800s
The next step is to create another build trigger that will use the build from the previous step to run tests on simulations. The previous step takes upwards of 45 minutes and only needs to run occasionally, so I want a trigger that simply pulls an image that already has our program built, and runs tests without rebuilding every time.
The problem I am having is that I am not sure how to save and export the image from within a custom builder. Because this is not running the gcr.io/cloud-builders/docker builder, I do not know whether it is possible to make changes within the custom builder and export a new image (including those changes) without access to the standard docker builder. A possible solution may be to use the standard docker builder, use the run argument to run the container, use CMD commands in the dockerfile to execute our build, and then add another build step that calls docker commit. But I am guessing there should be another way around this.
Thanks for your help!
TL;DR: I want to run a docker container as a custom builder in Google Cloud Build, make changes to the container, then save the changes and export it as an image to Container Registry, so that it can be used to test programs without having to spend 45 minutes building the program every time before testing. How can I do this?
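For reference, the docker run / docker commit route described above might be sketched in a cloudbuild.yaml like this. The container name and the `image_folder_built` target image name are made up for illustration, and this is untested:

```yaml
steps:
# Run the build inside the prebuilt environment image, naming the container
# so it can be referenced by the commit step below.
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '--name', 'program_build',
         'gcr.io/$PROJECT_ID/image_folder:latest',
         'bash', '-c', 'python3 path/to/instructions/build_instructions.py']
# Snapshot the stopped container, built program included, as a new image.
- name: 'gcr.io/cloud-builders/docker'
  args: ['commit', 'program_build', 'gcr.io/$PROJECT_ID/image_folder_built']
timeout: 2800s
# The images block pushes the committed image to Container Registry,
# where the test trigger can pull it without rebuilding.
images:
- gcr.io/$PROJECT_ID/image_folder_built
```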
I had a similar use case, this is what I did:
steps:
# This step builds the docker container, which runs flake8, yapf and unit tests
- name: 'gcr.io/cloud-builders/docker'
id: 'BUILD'
args: ['build',
'-t',
'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
'.']
# Create custom image tag and write to file /workspace/_TAG
- name: 'alpine'
id: 'SETUP_TAG'
args: ['sh',
'-c',
"echo `echo $BRANCH_NAME |
sed 's,/,-,g' |
awk '{print tolower($0)}'`_$(date -u +%Y%m%dT%H%M)_$SHORT_SHA > _TAG; echo $(cat _TAG)"]
# Tag image with custom tag
- name: 'gcr.io/cloud-builders/docker'
id: 'TAG_IMAGE'
entrypoint: '/bin/bash'
args: ['-c',
"docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:$(cat _TAG)"]
- name: 'gcr.io/cloud-builders/gsutil'
id: 'PREPARE_SERVICE_ACCOUNT'
args: ['cp',
'gs://my_sa_bucket/mysql2dc-credentials.json',
'.']
- name: 'docker.io/library/python:3.7'
id: 'PREPARE_ENV'
entrypoint: 'bash'
env:
- 'GOOGLE_APPLICATION_CREDENTIALS=/workspace/mysql2dc-credentials.json'
- 'MYSQL2DC_DATACATALOG_PROJECT_ID=${_MYSQL2DC_DATACATALOG_PROJECT_ID}'
args:
- -c
- 'pip install google-cloud-datacatalog &&
system_tests/cleanup.sh'
- name: 'gcr.io/cloud-builders/docker'
id: 'SYSTEM_TESTS'
args: ['run',
'--rm',
'--tty',
'-v',
'/workspace:/data',
'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
'--datacatalog-project-id=${_MYSQL2DC_DATACATALOG_PROJECT_ID}',
'--datacatalog-location-id=${_MYSQL2DC_DATACATALOG_LOCATION_ID}',
'--mysql-host=${_MYSQL2DC_MYSQL_SERVER}',
'--raw-metadata-csv=${_MYSQL2DC_RAW_METADATA_CSV}']
- name: 'gcr.io/cloud-builders/docker'
id: 'TAG_STABLE'
entrypoint: '/bin/bash'
args: ['-c',
"docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:stable"]
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog']
timeout: 900s
1. Build docker image
2. Create a tag
3. Tag image
4. Pull service account
5. Run tests on the custom image
6. Tag the custom image if successful
You could skip 2,3,4. Does this work for you?
I'm following the instructions at https://cloud.google.com/container-builder/docs/speeding-up-builds#using_a_cached_docker_image and I'm trying to setup docker builds that use the image cached from the previous build.
Here's what my cloudbuild.yml looks like:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['pull', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '.']
timeout: 120m
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']
options:
machineType: 'N1_HIGHCPU_8'
Here's what my Dockerfile starts with:
FROM ubuntu:14.04
SHELL ["/bin/bash", "-c"]
# lots of RUN commands after this
No matter what I try, the docker image pulled from the cache (as a result of the first step), is not used to speed up the actual docker build (second step). It always runs the entire 38 steps in my Dockerfile!
What am I doing wrong?
Is the Dockerfile multi-stage?
I ran into this problem where only the final image is available for caching; depending on the steps you run, this can make it look as if no step is using the cache.
If this is the case you need to push the intermediate image(s) to the container registry as well and pull them when building.
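For a multi-stage Dockerfile, that could look something like the following. This assumes a stage named `builder` in the Dockerfile; the `app` image name and tags are illustrative, and the `|| exit 0` lets the pulls fail on the very first build when nothing is cached yet:

```yaml
steps:
# Seed the cache with previously pushed images (tolerate a missing image).
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/app:builder || exit 0']
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/app:latest || exit 0']
# Build and tag the intermediate stage so its layers can be reused next time.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--target', 'builder',
         '--cache-from', 'gcr.io/$PROJECT_ID/app:builder',
         '-t', 'gcr.io/$PROJECT_ID/app:builder', '.']
# Build the final stage, using both images as cache sources.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build',
         '--cache-from', 'gcr.io/$PROJECT_ID/app:builder',
         '--cache-from', 'gcr.io/$PROJECT_ID/app:latest',
         '-t', 'gcr.io/$PROJECT_ID/app:latest', '.']
# Push both tags so the next build can pull them for caching.
images:
- 'gcr.io/$PROJECT_ID/app:builder'
- 'gcr.io/$PROJECT_ID/app:latest'
```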