Unable to get --cache-from to work - docker

I'm following the instructions at https://cloud.google.com/container-builder/docs/speeding-up-builds#using_a_cached_docker_image and I'm trying to set up Docker builds that use the image cached from the previous build.
Here's what my cloudbuild.yml looks like:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['pull', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '.']
timeout: 120m
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']
options:
  machineType: 'N1_HIGHCPU_8'
Here's what my Dockerfile starts with:
FROM ubuntu:14.04
SHELL ["/bin/bash", "-c"]
# lots of RUN commands after this
No matter what I try, the Docker image pulled in the first step is not used to speed up the actual docker build (second step). It always runs all 38 steps in my Dockerfile!
What am I doing wrong?

Is the Dockerfile multi-stage?
I ran into this problem: with a multi-stage build, only the final image is available for caching. Depending on the steps you run, this can make it look as if nothing is using the cache.
If this is the case, you need to push the intermediate image(s) to the container registry as well, and pull them when building.
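For example, here is a minimal sketch of that approach for Cloud Build, assuming a hypothetical build stage named builder in the Dockerfile and a separate :builder tag for the intermediate image (adapt stage and tag names to your setup):

steps:
# Pull the intermediate and final images; "|| exit 0" keeps the very first build from failing.
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/$REPO_NAME:builder || exit 0']
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker || exit 0']
# Build and tag the intermediate stage so its layers are available as a cache source.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--target', 'builder',
         '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder',
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder', '.']
# Build the final image, using both the intermediate and the final image as cache sources.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build',
         '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder',
         '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker',
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '.']
# Push both tags so the next build can pull them again.
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:builder', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']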

Related

A locally built Docker image within a Bitbucket Pipeline

What I need is a way to build a Dockerfile within the repository as an image and use this as the image for the next step(s).
I've tried the Bitbucket Pipeline configuration below but in the "Build" step it doesn't seem to have the image (which was built in the previous step) in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer on the Stack Overflow question below, but the context in that question is pushing the image in the following step, not using the built image as the image for a step itself.
Bitbucket pipeline use locally built image from previous step
There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipelines. While it will also persist across steps, the primary use case is for the data to be reused on separate builds. The cache takes 7 days to expire, and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipeline, you should use the artifacts keyword (a minimal sketch follows below).
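As an illustration only (not the asker's pipeline), a minimal sketch of the artifacts keyword carrying a hypothetical build/ directory from one step to the next:

pipelines:
  default:
    - step:
        name: Produce
        script:
          - mkdir -p build
          - echo "some output" > build/result.txt
        artifacts:
          - build/**   # files matching this glob are carried into later steps
    - step:
        name: Consume
        script:
          - cat build/result.txt   # available here because it was declared as an artifact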
Docker service
You only need the docker service when you want a Docker daemon available to your build, most commonly when you run docker commands in your script. Your second step doesn't run any docker commands, so it doesn't need the docker service.
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Solution 2 - Using two steps with Docker Hub
This example keeps the two-step approach. In this scenario, you push your foo/bar image to a public repository on Docker Hub. Pipelines will then pull it to use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
      - step:
          name: Build
          image:
            name: foo/bar
            username: $DOCKERHUB_USERNAME
            password: $DOCKERHUB_PASSWORD
            email: $DOCKERHUB_EMAIL
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
To expand on phod's answer: if you really want two steps, you can transfer the image from one step to another.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies for the image. It can take quite a while to execute and may therefore not be the best solution.
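If the artifact size is the bottleneck, one variation (not from the original answer, just a common option) is to compress the saved image before it is uploaded as an artifact:

# docker image save writes an uncompressed tar; piping it through gzip shrinks the uploaded artifact.
- docker image save foo/bar | gzip > foobar.tar.gz
# docker load accepts gzip-compressed archives, so the consuming step stays the same:
- docker image load -i foobar.tar.gz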

How can I build a Docker Image in Google Cloud Build and use in later Build Steps?

In my Rails project, I have a Docker image in a repo which is used for DB migration and unit tests. Prior to running migrations/testing, I may need to update gems on the image. However, it seems that even after updating gems, the updated image (which is not pushed to the repo, but which is built in a build step just prior to migration/testing) is not available to later build steps.
My cloudbuild.yaml looks like this:
steps:
- id: update_gems
  name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', "us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy",
          '--build-arg', 'PROJECT=${PROJECT_ID}', '-f', 'docker/bundled.Dockerfile', '.' ]
- id: db_migration
  name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy",
         "-e", "RAILS_ENV=${_RAILS_ENV}",
         "-e", "INSTANCE_CONNECTION_NAME=${_INSTANCE_CONNECTION_NAME}",
         "-s", "${_INSTANCE_CONNECTION_NAME}",
         "--", "./bin/rake", "db:migrate"]
- id: unit_test
  name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy",
         "-e", "RAILS_ENV=test",
         "-e", "INSTANCE_CONNECTION_NAME=${_INSTANCE_CONNECTION_NAME}",
         "-s", "${_INSTANCE_CONNECTION_NAME}",
         "--", "./bin/rspec"]
- id: deploy_to_GAE
  name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', '--project', '${PROJECT_ID}', 'app.yaml']
The Dockerfile referred to in the 1st step looks like this:
ARG PROJECT
FROM us-central1-docker.pkg.dev/${PROJECT}/myregistry/myimage:deploy
WORKDIR /workspace
ADD Gemfile* ./
RUN bundle update
RUN bundle install
During a triggered Cloud Build, I can see it update the gems and create a new image hash.
Then, during the db_migration step, I can see it pulling the old image from before the gems were updated.
This can be verified in the update_gems step logs, where the hash matches the freshly pulled image that has not yet had its gems updated.
I realize a workaround is to push the updated image after building it, which does in fact work. For example, I could add this step after the update_gems step:
- id: update_image
  name: 'gcr.io/cloud-builders/docker'
  args: [ 'push', 'us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy' ]
However, it raises the question of why the new update_image build step has access to the image built by the update_gems step, while other later steps don't.
The image is stored locally, in a local Docker registry that Docker can access. That's why you can push it with Docker.
But when you use another step, such as gcr.io/google-appengine/exec-wrapper, Docker is no longer loaded in the runtime context, and thus the local Docker registry is unknown/not active.
So, the solution is:
Either push the image to an external registry and then use it. That way an external registry is used instead of the local one, and it works in any step (a minimal sketch follows below).
Or install Docker on your current runtime step image (or use Docker as the step image and install what you need on top of it). That will be difficult, and I don't recommend this way.
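For reference, a sketch of the first option applied to the pipeline from the question: push the freshly built image right after building it, so the exec-wrapper steps pull the updated image instead of the stale one (step ids and image names are taken from the question; remaining flags and steps are unchanged):

steps:
- id: update_gems
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy',
         '--build-arg', 'PROJECT=${PROJECT_ID}', '-f', 'docker/bundled.Dockerfile', '.']
# Push the updated image so later, non-Docker steps can pull it from the registry.
- id: update_image
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy']
# exec-wrapper now pulls the image that includes the updated gems.
- id: db_migration
  name: 'gcr.io/google-appengine/exec-wrapper'
  args: ['-i', 'us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy',
         '-e', 'RAILS_ENV=${_RAILS_ENV}',
         '-s', '${_INSTANCE_CONNECTION_NAME}',
         '--', './bin/rake', 'db:migrate']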

How can I save changes to a docker container and export it as a docker image when running custom image build step in Google Cloud Build?

I am trying to create a CI pipeline to automate building and testing on Google Cloud Build. I currently have two separate builds. The first build is triggered manually; it calls the gcr.io/cloud-builders/docker builder with a Dockerfile that creates an Ubuntu development environment containing the packages required to build our program. I currently trigger this build manually because it shouldn't change much. It creates a Docker image that is then stored in our Google Cloud Container Registry. The cloudbuild.yml file for this build is as follows:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/image_folder', '.']
timeout: 500s
images:
- gcr.io/$PROJECT_ID/image_folder
Now that the Docker image is stored in the Container Registry, I set up a build trigger to build our program. The framework for our program will be changing, so it is essential that our pipeline periodically rebuilds the program before testing can take place. To do this, I refer to the image stored in our Container Registry and run it as a custom builder on Google Cloud Build. At the moment, the argument for our custom builder calls a Python script that uses os.system to invoke the steps required to build our program. The cloudbuild.yml file for this build is stored in our Google Cloud Source Repository so that it can be triggered by pushes to our repo. It looks like the following:
steps:
- name: 'gcr.io/$PROJECT_ID/image_folder:latest'
  entrypoint: 'bash'
  args:
  - '-c'
  - 'python3 path/to/instructions/build_instructions.py'
timeout: 2800s
The next step is to create another build trigger that uses the image built in the previous step to run tests on simulations. That build takes upwards of 45 minutes and only needs to run occasionally, so I want a trigger that simply pulls an image that already has our program built and runs the tests without rebuilding every time.
The problem I'm having is that I'm not sure how to save and export the image from within a custom builder. Because this isn't running the gcr.io/cloud-builders/docker builder, I don't know whether it's possible to make changes within the custom builder and export a new image (including those changes) without access to the standard docker builder. A possible workaround might be to use the standard docker builder: run the container with the run argument, use CMD commands in the Dockerfile to execute our build, and then add another build step that calls docker commit. But I'm guessing there should be a better way around this.
Thanks for your help!
TL;DR: I want to run a Docker container as a custom builder in Google Cloud Build, make changes to the container, then save the changes and export them as an image to Container Registry, so that it can be used to test programs without spending 45 minutes building the program before every test run. How can I do this?
I had a similar use case, this is what I did:
steps:
# This step builds the docker container which runs flake8, yapf and unit tests
- name: 'gcr.io/cloud-builders/docker'
  id: 'BUILD'
  args: ['build',
         '-t',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '.']
# Create custom image tag and write to file /workspace/_TAG
- name: 'alpine'
  id: 'SETUP_TAG'
  args: ['sh',
         '-c',
         "echo `echo $BRANCH_NAME |
           sed 's,/,-,g' |
           awk '{print tolower($0)}'`_$(date -u +%Y%m%dT%H%M)_$SHORT_SHA > _TAG; echo $(cat _TAG)"]
# Tag image with custom tag
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_IMAGE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:$(cat _TAG)"]
- name: 'gcr.io/cloud-builders/gsutil'
  id: 'PREPARE_SERVICE_ACCOUNT'
  args: ['cp',
         'gs://my_sa_bucket/mysql2dc-credentials.json',
         '.']
- name: 'docker.io/library/python:3.7'
  id: 'PREPARE_ENV'
  entrypoint: 'bash'
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=/workspace/mysql2dc-credentials.json'
  - 'MYSQL2DC_DATACATALOG_PROJECT_ID=${_MYSQL2DC_DATACATALOG_PROJECT_ID}'
  args:
  - -c
  - 'pip install google-cloud-datacatalog &&
     system_tests/cleanup.sh'
- name: 'gcr.io/cloud-builders/docker'
  id: 'SYSTEM_TESTS'
  args: ['run',
         '--rm',
         '--tty',
         '-v',
         '/workspace:/data',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '--datacatalog-project-id=${_MYSQL2DC_DATACATALOG_PROJECT_ID}',
         '--datacatalog-location-id=${_MYSQL2DC_DATACATALOG_LOCATION_ID}',
         '--mysql-host=${_MYSQL2DC_MYSQL_SERVER}',
         '--raw-metadata-csv=${_MYSQL2DC_RAW_METADATA_CSV}']
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_STABLE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:stable"]
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog']
timeout: 15m
1. Build Docker image
2. Create a tag
3. Tag the image
4. Pull the service account
5. Run tests on the custom image
6. Tag the custom image if successful
You could skip steps 2, 3, and 4. Does this work for you?
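For context, a stripped-down sketch of that pattern with steps 2-4 skipped (names and substitutions come from the answer above and would need to be adapted): build the image, run the tests inside it, and let the images field push it to Container Registry only if every step succeeds.

steps:
# Build the image once per commit.
- name: 'gcr.io/cloud-builders/docker'
  id: 'BUILD'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA', '.']
# Run the tests inside the freshly built image.
- name: 'gcr.io/cloud-builders/docker'
  id: 'SYSTEM_TESTS'
  args: ['run', '--rm', '-v', '/workspace:/data',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '--datacatalog-project-id=${_MYSQL2DC_DATACATALOG_PROJECT_ID}']
# Push the tested image so later triggers can reuse it without rebuilding.
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA']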

Google Cloud Build with Docker images that are based on each other

I have two or more Docker images where the latter ones are based on the first image. I want to build them all with Google Cloud Build and have the following multi-step cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/lh-build', './src']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/lhweb', './src/LHWeb']
images:
- gcr.io/$PROJECT_ID/lh-build
- gcr.io/$PROJECT_ID/lhweb
When I run this build config, I can see the following error:
Step 1/6 : FROM eu.gcr.io/logistikhelden/lh-build
manifest for eu.gcr.io/logistikhelden/lh-build not found
I then tried to push the image after the first step:
...
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/lh-build']
...
The same problem remains, though. Any idea what's wrong here?
You are pushing the image to gcr.io, but it looks like your Dockerfile specifies a base image in the eu.gcr.io registry. Try changing your Dockerfile base image to FROM gcr.io/logistikhelden/lh-build.
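Putting that answer together with the push step the asker already tried, a sketch of how the registry hosts have to line up (assuming the second image's Dockerfile lives in ./src/LHWeb):

# cloudbuild.yaml: build the base image, push it, then build the image that depends on it.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/lh-build', './src']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/lh-build']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/lhweb', './src/LHWeb']
images:
- gcr.io/$PROJECT_ID/lh-build
- gcr.io/$PROJECT_ID/lhweb

The Dockerfile in ./src/LHWeb must then reference the same host, i.e. FROM gcr.io/logistikhelden/lh-build rather than eu.gcr.io/..., so the second build step finds the image that was just built and pushed.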

Google AppEngine ENV variables from Google Cloud Build Dockerfile

So I have a Cloud Build trigger that runs my cloudbuild.yaml file, and this is all fine and dandy. I also use the gcr.io/cloud-builders/docker builder to run docker commands that pass ENV variables to my Dockerfile. For example:
steps:
- name: 'gcr.io/$PROJECT_ID/swift:4.2'
  args: ['test']
  id: 'Running unit tests'
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg', 'PROJECT=$PROJECT_ID', '-t', 'us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA', '.']
  id: 'Building docker image'
- name: 'gcr.io/cloud-builders/docker'
  args: ["push", "us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA"]
  id: 'Pushing built image to registry'
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
  id: 'Deploying to AppEngine'
timeout: 1800s # 30 minute timeout
As you can see, I'm using the ENV variables that all GCP resources have by default ($PROJECT_ID, for example). In the docker command I'm passing it as a build argument so I can use the ARG instruction in the Dockerfile:
ARG PROJECT
FROM gcr.io/${PROJECT}/swift:4.2 as builder
WORKDIR /App
#Other commands....
Now all of this works fine and I'm able to build my image. The final step is deploying to App Engine.
The only problem is that the same Dockerfile uses the swift:4.2 base image, which is only located in my Google Container Registry, so I need my project's $PROJECT_ID to pull it.
My question is: is there any way to have the App Engine build environment pass arguments to the docker build that builds my image when deploying? I have an app.yaml file and I know there's an env_variables: property, and I know I could use the Docker ARG or ENV instruction (can't remember which one) to get my $PROJECT_ID inside my Dockerfile. The only problem is that App Engine doesn't have that property defined as far as I know. The only other thing I can think of is to echo the $PROJECT_ID from a Cloud Build step to the end of the app.yaml file, but if there's a cleaner approach I'd love to hear about it. Thanks!
I think I've found a solution for my needs.
gcloud app deploy has an --image-url flag that can point at an already-built image instead of rebuilding from the Dockerfile. So I went with this as the final step in my cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--image-url', 'gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA']
Basically point to the image I just built and pushed to my container registry.
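Putting the question's pipeline together with that flag, the full cloudbuild.yaml might look roughly like this (note the earlier steps push to us.gcr.io, so the --image-url host has to match whatever registry the image was actually pushed to):

steps:
- name: 'gcr.io/$PROJECT_ID/swift:4.2'
  args: ['test']
  id: 'Running unit tests'
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg', 'PROJECT=$PROJECT_ID', '-t', 'us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA', '.']
  id: 'Building docker image'
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA']
  id: 'Pushing built image to registry'
# Deploy the already-built image instead of letting App Engine rebuild the Dockerfile.
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--image-url', 'us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA']
  id: 'Deploying to AppEngine'
timeout: 1800s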
