So I have a Cloud Build trigger that runs my cloudbuild.yaml file and this is all fine and dandy. I also use the docker builder to run docker commands and pass variables to my Dockerfile as build arguments. For example:
steps:
- name: 'gcr.io/$PROJECT_ID/swift:4.2'
  args: ['test']
  id: 'Running unit tests'
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg', 'PROJECT=$PROJECT_ID', '-t', 'us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA', '.']
  id: 'Building docker image'
- name: 'gcr.io/cloud-builders/docker'
  args: ["push", "us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA"]
  id: 'Pushing built image to registry'
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
  id: 'Deploying to AppEngine'
timeout: 1800s # 30 minute timeout
As you can see, I'm using the substitution variables that Cloud Build provides by default ($PROJECT_ID, for example), and in the docker build command I pass one as a build argument so I can use the ARG instruction in the Dockerfile:
ARG PROJECT
FROM gcr.io/${PROJECT}/swift:4.2 as builder
WORKDIR /App
#Other commands....
Now all of this works fine and I'm able to build my image, etc. Now I want to deploy to App Engine in the final step.
The only problem is that the same Dockerfile uses the swift:4.2 base image, which only exists in my Google Container Registry, so I need my project's $PROJECT_ID to pull it.
My question is: is there any way to have the App Engine build environment pass arguments to the docker build that builds my image when deploying? I have an app.yaml file and I know there's an env_variables: property, and I know I'd be able to use the Docker ARG or ENV instruction (can't remember which one) to get my $PROJECT_ID inside my Dockerfile. The only problem is that App Engine doesn't have such a property for build arguments, as far as I know. The only other thing I can think of is to echo the $PROJECT_ID from a Cloud Build step onto the end of the app.yaml file. But if there's a cleaner approach I'd love to hear about it. Thanks!
I think I've found a solution for my needs.
gcloud app deploy has an --image-url flag that can point to an already-built image rather than rebuilding from the Dockerfile, so I went with this as my final cloudbuild.yaml step:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--image-url', 'gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA']
Basically, it points to the image I just built and pushed to my container registry.
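One thing to note: --image-url only works with the App Engine flexible environment, so the app.yaml has to describe a custom-runtime flex service. A minimal sketch (not my actual file) would be:
runtime: custom
env: flex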
Related
I am trying to set an environment variable in a docker image via a cloudbuild.yaml for Google Cloud Build
Here is the sample cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["run", "--rm", "--volume=/foo:/bar", "--privileged", "-e FOO=bar", "my/build:latest", "/root/init_build.sh"]
timeout: "600s"
When I run the equivalent docker command locally on the command line and pass the environment variable into the container, it works as expected. However, when I trigger a build in Cloud Build, the environment variable doesn't get set in the container.
Thank you in advance for any guidance.
I was able to get the result I was looking for by doing the following:
steps:
- name: "gcr.io/cloud-builders/docker"
  entrypoint: "bash"
  args: ["-c", "docker run --rm --volume=/workspace:/srv/jekyll --privileged -e FOO=bar my/build:latest /root/init_build.sh"]
timeout: "600s"
Take a look at the Docker command-line reference and this StackOverflow post for more information.
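The underlying issue seems to be that "-e FOO=bar" was passed as a single list element, so Docker never received a clean FOO=bar value. Splitting the flag and its value into separate args should also work, without switching to a bash entrypoint (a sketch reusing the my/build:latest image from the question):
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["run", "--rm", "--volume=/workspace:/srv/jekyll", "--privileged",
         "-e", "FOO=bar", "my/build:latest", "/root/init_build.sh"]
timeout: "600s"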
I am using Cloud Run and I want to set up continuous deployment with GitHub, but obviously I can't upload my env variables to the repo, so what can I use?
I can't set them only through "Implement and edit a new version", because that is not continuous: I have to open the console, click through, and fill in the env variables by hand.
I can't use ENV in my Dockerfile, because I'd have to upload the Dockerfile to my GitHub repo.
I can't use variable substitution in Cloud Build, because I am using a Dockerfile and that option is only for cloudbuild.yaml (and I don't know how to create one; I only know Docker :)
Maybe I can edit the YAML on Cloud Run, but I am not sure if that is a good option.
Maybe I can pass them if I use gcloud from the command line, but then I'd still have to click "Implement and edit a new version", and that is not continuous deployment.
Here is my Dockerfile, in case you want to help me transform it into a cloudbuild.yaml:
FROM node:15
WORKDIR /app
COPY package*.json ./
ENV ENV production
ENV PORT 3000
ENV API_URL https://api.mysite.com
RUN npm install --only=production
COPY . .
RUN npm run build
CMD ["npm", "start"]
In the Google documentation, I found how to create the cloudbuild.yaml for continuous deployment:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'api'
  - '--image'
  - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
  - '--region'
  - 'us-east1'
  - '--platform'
  - 'managed'
images:
- 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
You have to replace api with the name of your service.
Afterwards, I used "Implement and edit a new version" and set the environment variables there.
All the continuous deployments after that keep the same environment variables that I set when I deployed that new revision.
You're not passing any environment variables into the service.
Run gcloud beta run deploy --help and check for --set-env-vars.
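For example, on the command line it would look something like this (a sketch using the service name and env variable from the question, with the project ID and tag filled in by hand):
gcloud run deploy api --image gcr.io/my-project/api:latest --region us-east1 --platform managed --set-env-vars API_URL=https://api.mysite.com
In the cloudbuild.yaml deploy step, the equivalent is: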
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'api'
  - '--image'
  - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
  - '--region'
  - 'us-east1'
  - '--platform'
  - 'managed'
  - '--set-env-vars'
  - 'API_URL=${_API_URL}'
You can use substitutions in the build trigger: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values
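For example, _API_URL can be declared on the trigger itself, or given a default at the top level of the cloudbuild.yaml (a sketch; the value is just a placeholder):
substitutions:
  _API_URL: 'https://api.mysite.com'  # default, can be overridden per trigger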
In my Rails project, I have a Docker image in a repo which is used for DB migration and unit tests. Prior to running migrations/tests, I may need to update gems on the image. However, it seems that even after updating gems, the updated image (which is not pushed to the repo, but which is built in a step just prior to migration/testing) is not available to later build steps.
My cloudbuild.yaml looks like this:
steps:
- id: update_gems
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', "us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy",
         '--build-arg', 'PROJECT=${PROJECT_ID}', '-f', 'docker/bundled.Dockerfile', '.']
- id: db_migration
  name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy",
         "-e", "RAILS_ENV=${_RAILS_ENV}",
         "-e", "INSTANCE_CONNECTION_NAME=${_INSTANCE_CONNECTION_NAME}",
         "-s", "${_INSTANCE_CONNECTION_NAME}",
         "--", "./bin/rake", "db:migrate"]
- id: unit_test
  name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy",
         "-e", "RAILS_ENV=test",
         "-e", "INSTANCE_CONNECTION_NAME=${_INSTANCE_CONNECTION_NAME}",
         "-s", "${_INSTANCE_CONNECTION_NAME}",
         "--", "./bin/rspec"]
- id: deploy_to_GAE
  name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', '--project', '${PROJECT_ID}', 'app.yaml']
The Dockerfile referred to in the first step looks like this:
ARG PROJECT
FROM us-central1-docker.pkg.dev/${PROJECT}/myregistry/myimage:deploy
WORKDIR /workspace
ADD Gemfile* ./
RUN bundle update
RUN bundle install
During a triggered Cloud Build, I can see the update_gems step update the gems and create a new image hash. Then, during the db_migration step, I see it pull the old image from before the gems were updated. This can be verified in the update_gems step logs, where the freshly pulled (but not yet gem-updated) image hash matches the hash that db_migration later uses.
I realize a workaround is to push the updated image after building it, which does in fact work. For example, I could add this step after the update_gems step:
- id: update_image
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy']
However, it raises the question of why the new update_image build step has access to the image built by the update_gems step while other later steps don't.
The image is stored locally, in a local Docker registry that Docker can access. That's why you can push it with Docker.
But when you use another step, such as gcr.io/google-appengine/exec-wrapper, Docker is no longer part of the runtime context, and so the local Docker registry is unknown/not active.
So, the solution is one of the following:
Either push the image to an external registry and then use it from there. That way an external registry is used instead of the local one, and it works in any step.
Or install Docker on your current runtime step image (or use Docker as the step image and install what you need on top of it). This will be difficult, and I don't recommend it.
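In practice, the first option just means pushing right after the build, so the exec-wrapper steps pull the freshly updated image. A rough sketch reusing the same image name:
- id: update_gems
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy',
         '--build-arg', 'PROJECT=${PROJECT_ID}', '-f', 'docker/bundled.Dockerfile', '.']
- id: push_updated_image
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/myregistry/myimage:deploy']
# db_migration, unit_test and deploy_to_GAE follow unchanged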
I am trying to create a CI pipeline to automate building and testing on Google Cloud Build. I currently have two separate builds. The first build is triggered manually; it calls the gcr.io/cloud-builders/docker builder with a Dockerfile that creates an Ubuntu development environment containing the packages required for building our program. I currently trigger this build manually because it shouldn't change much. This step creates a docker image that is then stored in our Google Cloud Container Registry. The cloudbuild.yaml file for this build step is as follows:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/image_folder', '.']
timeout: 500s
images:
- gcr.io/$PROJECT_ID/image_folder
Now that the docker image is stored in the Container Registry, I have set up a build trigger to build our program. The framework for our program will be changing, so it is essential that our pipeline periodically rebuilds the program before testing can take place. For this step I refer to the image stored in our Container Registry and run it as a custom builder on Google Cloud Build. At the moment, the argument for our custom builder calls a Python script that uses Python's os.system to issue the commands that build our program. The cloudbuild.yaml file for this build step is stored in our Google Cloud Source Repository so that it can be triggered by pushes to our repo. The cloudbuild.yaml file is the following:
steps:
- name: 'gcr.io/$PROJECT_ID/image_folder:latest'
  entrypoint: 'bash'
  args:
  - '-c'
  - 'python3 path/to/instructions/build_instructions.py'
timeout: 2800s
The next step is to create another build trigger that uses the build produced in the previous step to run tests on simulations. The previous step takes upwards of 45 minutes and only needs to be rebuilt occasionally, so I want this new trigger to simply pull an image that already has our program built, so it can run tests without building it every time.
The problem I am having is that I am not sure how to save and export the image from within a custom builder. Because this is not running the gcr.io/cloud-builders/docker builder, I do not know if it is possible to make changes inside the custom builder and export a new image (including those changes) without access to the standard docker builder. A possible solution may be to use the standard docker builder with the run argument to run the container, use CMD commands in the Dockerfile to execute our build, and then list another build step that calls docker commit. But I am guessing there should be another way around this.
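For illustration, the docker run + docker commit idea mentioned above would look roughly like this in cloudbuild.yaml (the container name and the :built tag are made up):
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '--name', 'program_build', 'gcr.io/$PROJECT_ID/image_folder:latest',
         'bash', '-c', 'python3 path/to/instructions/build_instructions.py']
- name: 'gcr.io/cloud-builders/docker'
  args: ['commit', 'program_build', 'gcr.io/$PROJECT_ID/image_folder:built']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/image_folder:built']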
Thanks for your help!
TL;DR: I want to run a docker container as a custom builder in Google Cloud Build, make changes inside the container, then save those changes and export the result as an image to Container Registry, so that it can be used to test programs without spending 45 minutes building the program before every test run. How can I do this?
I had a similar use case, this is what I did:
steps:
# This step builds the docker container which runs flake8, yapf and unit tests
- name: 'gcr.io/cloud-builders/docker'
  id: 'BUILD'
  args: ['build',
         '-t',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '.']
# Create custom image tag and write to file /workspace/_TAG
- name: 'alpine'
  id: 'SETUP_TAG'
  args: ['sh',
         '-c',
         "echo `echo $BRANCH_NAME |
           sed 's,/,-,g' |
           awk '{print tolower($0)}'`_$(date -u +%Y%m%dT%H%M)_$SHORT_SHA > _TAG; echo $(cat _TAG)"]
# Tag image with custom tag
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_IMAGE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:$(cat _TAG)"]
- name: 'gcr.io/cloud-builders/gsutil'
  id: 'PREPARE_SERVICE_ACCOUNT'
  args: ['cp',
         'gs://my_sa_bucket/mysql2dc-credentials.json',
         '.']
- name: 'docker.io/library/python:3.7'
  id: 'PREPARE_ENV'
  entrypoint: 'bash'
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=/workspace/mysql2dc-credentials.json'
  - 'MYSQL2DC_DATACATALOG_PROJECT_ID=${_MYSQL2DC_DATACATALOG_PROJECT_ID}'
  args:
  - -c
  - 'pip install google-cloud-datacatalog &&
     system_tests/cleanup.sh'
- name: 'gcr.io/cloud-builders/docker'
  id: 'SYSTEM_TESTS'
  args: ['run',
         '--rm',
         '--tty',
         '-v',
         '/workspace:/data',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '--datacatalog-project-id=${_MYSQL2DC_DATACATALOG_PROJECT_ID}',
         '--datacatalog-location-id=${_MYSQL2DC_DATACATALOG_LOCATION_ID}',
         '--mysql-host=${_MYSQL2DC_MYSQL_SERVER}',
         '--raw-metadata-csv=${_MYSQL2DC_RAW_METADATA_CSV}']
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_STABLE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:stable"]
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog']
timeout: 15m
1. Build the Docker image
2. Create a tag
3. Tag the image
4. Pull the service account
5. Run tests on the custom image
6. Tag the custom image if the tests succeed
You could skip 2,3,4. Does this work for you?
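If you drop 2, 3 and 4, a trimmed-down pipeline could look roughly like this (a sketch keeping the same image name; the SYSTEM_TESTS arguments would be whatever your own test entrypoint needs):
steps:
- name: 'gcr.io/cloud-builders/docker'
  id: 'BUILD'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  id: 'SYSTEM_TESTS'
  args: ['run', '--rm', '-v', '/workspace:/data',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA']
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_STABLE'
  args: ['tag', 'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:stable']
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog']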
I'm following the instructions at https://cloud.google.com/container-builder/docs/speeding-up-builds#using_a_cached_docker_image and I'm trying to setup docker builds that use the image cached from the previous build.
Here's what my cloudbuild.yml looks like:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['pull', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '.']
timeout: 120m
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']
options:
  machineType: 'N1_HIGHCPU_8'
Here's what my Dockerfile starts with:
FROM ubuntu:14.04
SHELL ["/bin/bash", "-c"]
# lots of RUN commands after this
No matter what I try, the docker image pulled from the cache (as a result of the first step) is not used to speed up the actual docker build (second step). It always runs all 38 steps in my Dockerfile!
What am I doing wrong?
Is the Dockerfile multi-stage?
I ran into this problem where only the final image is available for caching. Depending on the steps you run, this can appear as if no step is using the cache.
If this is the case, you need to push the intermediate image(s) to the container registry as well and pull them when building.
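Concretely, that means building and pushing the intermediate stage under its own tag and passing both tags to --cache-from. A rough sketch (it assumes a stage named builder, which is hypothetical here; it only applies if your Dockerfile is actually multi-stage):
steps:
# Pull previously pushed images to seed the cache (ignore failure on the first run)
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/$REPO_NAME:builder || true']
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker || true']
# Build and tag the intermediate stage explicitly so its layers can be reused next time
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--target', 'builder',
         '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder',
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder', '.']
# Build the final image using both caches
- name: 'gcr.io/cloud-builders/docker'
  args: ['build',
         '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder',
         '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker',
         '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker', '.']
# Push both tags so the next build can reuse them
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:builder', 'gcr.io/$PROJECT_ID/$REPO_NAME:infra_docker']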