Google Cloud Build - Different scopes of Dockerfile and cloudbuild.yaml - docker

I recently asked a question about why I get the error Specified public directory 'dist/browser' does not exist, can't deploy hosting to site PROJECT-ID when I'm trying to deploy to Firebase Hosting from my cloudbuild.yaml. However, since that question was getting too bloated with information, I've tried to break it down here.
I created a simple image to visualize what happens when I call gcloud builds submit --config=cloudbuild.yaml. So why can't I access the directory dist/browser from cloudbuild.yaml even though it is processed after the Dockerfile in which the directory dist/browser is created?

Cloud Build is best conceptualized as a series of functions (steps) applied to data in the form of a local file system (often just /workspace, since this is a default volume mount added to each step, but you can add other volume mounts) and the Internet.
The output of each function (step) is self-contained unless you explicitly publish data back to one of these two places (one of the step's volume mounts, or the Internet).
In this case, docker build consumes local files (not shown in your example) and generates dist/browser inside the resulting image, but that folder is only accessible within the image; nothing is added to e.g. /workspace that you could use in subsequent steps.
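To illustrate (the image tag below is hypothetical): even when the docker build step succeeds, a later step that lists /workspace will not see anything the Dockerfile created inside the image:
steps:
# Builds the image; dist/browser exists only inside the resulting image
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
# Lists the shared /workspace; dist/browser will not be here
- name: busybox
  args: ['ash', '-c', 'ls -1a /workspace']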
In order to use that directory in subsequent steps, you can either:
Hack a way to mount the (file system of the) image generated by the step and extract the directory from it (not advised; possibly not permitted). You'd need to run that image as a container and then docker cp files from it back into the Cloud Build VM's file system (perhaps somewhere under /workspace), as sketched just below.
Or not put the directory in an image in the first place (see the proposal further down).
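A rough sketch of that first approach (the image tag and the in-image path /app/dist are assumptions, and it assumes the image's default command exits):
steps:
# Build the image; dist/browser exists only inside it
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
# Run it as a named container so its file system can be copied from afterwards
- name: gcr.io/cloud-builders/docker
  args: ['run', '--name', 'extract', 'gcr.io/$PROJECT_ID/my-app']
# Copy the directory out of the stopped container into /workspace for later steps
- name: gcr.io/cloud-builders/docker
  args: ['cp', 'extract:/app/dist', '/workspace/dist']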
Proposal
Instead of docker build'ing an image containing the directory, deconstruct the Dockerfile into a series of Cloud Build steps. This way the artifacts you want (if written somewhere under one of the step's volume mounts) will be available in subsequent steps:
steps:
- name: gcr.io/cloud-builders/npm
  args:
  - install
- name: gcr.io/cloud-builders/npm
  args:
  - run
  - build:ssr # Presumably this is where dist/browser is generated?
- name: firebase
  args:
  - deploy # dist/browser
NOTE Every Cloud Build step has an implicit:
- name: some-step
  volumes:
  - name: workspace
    path: /workspace
Proof
Here's a minimal Cloud Build config that uses a volume called testdir that maps to the Cloud Build VM's /testdir directory.
NOTE The example uses testdir to prove the point. Each Cloud Build step automatically mounts /workspace and this could be used instead.
The config:
Lists the empty /testdir
Creates a file freddie.txt in /testdir
Lists /testdir now containing freddie.txt
options:
  # volumes:
  # - name: testdir
  #   path: /testdir
steps:
- name: busybox
  volumes:
  - name: testdir
    path: /testdir
  args:
  - ash
  - -c
  - "ls -1a /testdir"
- name: busybox
  volumes:
  - name: testdir
    path: /testdir
  args:
  - ash
  - -c
  - 'echo "Hello Freddie" > /testdir/freddie.txt'
- name: busybox
  volumes:
  - name: testdir
    path: /testdir
  args:
  - ash
  - -c
  - "ls -1a /testdir"
NOTE Uncommenting volumes under options would remove the need to reproduce the volumes in each step.
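For reference, the same config with the volume declared once under options (and the per-step volumes removed) would look roughly like this:
options:
  volumes:
  - name: testdir
    path: /testdir
steps:
- name: busybox
  args: ['ash', '-c', 'ls -1a /testdir']
- name: busybox
  args: ['ash', '-c', 'echo "Hello Freddie" > /testdir/freddie.txt']
- name: busybox
  args: ['ash', '-c', 'ls -1a /testdir']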
The edited output is:
gcloud builds submit \
--config=./cloudbuild.yaml \
--project=${PROJECT}
# Lists (empty) /testdir
Starting Step #0
Step #0: Pulling image: busybox
Step #0: .
Step #0: ..
# Creates /testdir/freddie.txt
Starting Step #1
Step #1: Already have image: busybox
Finished Step #1
# Lists /testdir, now containing freddie.txt
Starting Step #2
Step #2: .
Step #2: ..
Step #2: freddie.txt
Finished Step #2

Related

GCP Cloud Build via trigger: Dockerfile not found

I'm attempting to trigger a Docker image build on GCP Cloud Build via a webhook called from GitLab. The webhook works, but the build process stops when I run docker build, with this error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
The YAML for this step is:
- name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-t'
    - '${_ARTIFACT_REPO}'
    - .
where I later supply the variable _ARTIFACT_REPO via substitutions.
My GitLab repo includes the Dockerfile at the root level. So the repo structure is:
app/
.gitignore
Dockerfile
README.md
requirements.txt
The error message indicates that the Dockerfile cannot be found, but I do not understand why this is the case. Help is much appreciated!
Just solved the issue:
I followed the GCP docs (link) where they include these two steps in the cloudbuild:
- name: gcr.io/cloud-builders/git
  args:
    - clone
    - '-n'
    - 'git@gitlab.com/GITLAB_REPO'
    - .
  volumes:
    - name: ssh
      path: /root/.ssh
- name: gcr.io/cloud-builders/git
  args:
    - checkout
    - $_TO_SHA
As I did not require a specific checkout, I deleted the second of those steps, but I overlooked the -n flag in the first step, which prevents the cloned repo from being checked out.
So I just deleted the - '-n' line and the issue was solved.
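For clarity, the working first step (with the -n flag removed) looks roughly like this:
- name: gcr.io/cloud-builders/git
  args:
    - clone
    - 'git@gitlab.com/GITLAB_REPO'
    - .
  volumes:
    - name: ssh
      path: /root/.ssh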

Docker secret not mounting to default location

I'm using docker-compose to produce a docker image which requires access to a secure Azure Artifacts directory via Paket. As I'm sure at least some people are aware, Paket does not have default compatibility with the Azure Artifacts Credential Provider. To gain the access I need, I'm trying to mount the access token produced by the credential provider as a secret, then consume it using cat within a paket config command. cat then returns an error message stating that the file is not found at the default secret location.
I'm running this code within an Azure Pipeline on the Microsoft-provided ubuntu-latest agent.
Here's the relevant code snippets (It's possible I'm going into too much detail...):
docker-compose.ci.build.yml:
version: '3.6'
services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential
secrets:
  azure_credential:
    file: ./credential.txt
dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:6.0.102-bullseye-slim-amd64 AS build
LABEL maintainer="<Engineering lead>"
WORKDIR /src
<Various COPY instructions>
RUN dotnet tool restore
RUN dotnet paket restore
RUN --mount=type=secret,id=azure_credential dotnet paket config add-token "<ARTIFACT_FEED_URL>" "$(cat /run/secrets/azure_credential)"
Azure pipeline definition YAML:
jobs:
  - job: BuildPublish
    displayName: Build & Publish
    steps:
      - task: PowerShell@2
        displayName: pwsh build.ps1
        inputs:
          filePath: ${{ parameters.workingDirectory }}/.azure-pipelines/build.ps1
          pwsh: true
          workingDirectory: ${{ parameters.workingDirectory }}
        env:
          SYSTEM_ACCESSTOKEN: $(System.AccessToken)
The relevant lines of the PowerShell script that invokes docker-compose:
$projectRoot = Split-Path -Path $PSScriptRoot -Parent
Push-Location -Path $projectRoot
try {
    ...
    Out-File -FilePath ./credential.txt -InputObject $Env:SYSTEM_ACCESSTOKEN
    ...
    & docker-compose -f ./docker-compose.ci.build.yml build
    ...
}
finally {
    ...
    Pop-Location
}
The error message:
0.276 cat: /run/secrets/azure_credential: No such file or directory
If there's other relevant code, let me know.
I verified that the environment variable housing the secret on the agent actually existed and that its value was being saved to the ./credential.txt file for mounting into the image; the text file was being created properly. I've tried fiddling with the syntax of all the relevant commands (fun fact: the Docker docs show two different versions of the mounting syntax, but the other version just crashed). I also tried using Windows-style default paths in case my source image was a Windows one, but it doesn't appear to be.
Essentially, here's where I've left it: I know that the file ./credential.txt exists and contains some value. I know my mounting syntax is correct, or Docker would crash. The issue appears to be something to do with the default mounting path and/or how docker-compose embeds its secrets.
I figured it out. For reasons I do not understand, the path to the mounted secret has to be defined as an environment variable in the docker-compose YAML. So, like this:
version: '3.6'
services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential
    environment:
      AZURE_CREDENTIAL_FILE: /run/secrets/azure_credential
secrets:
  azure_credential:
    file: credential.txt
This solved the issue. If anyone knows why this solved the issue, I'd love to hear.

How can I save changes to a docker container and export it as a docker image when running custom image build step in Google Cloud Build?

I am trying to create a CI pipeline to automate building and testing on Google Cloud Build. I currently have two separate builds. The first build is triggered manually: it calls the gcr.io/cloud-builders/docker builder with a Dockerfile that creates an Ubuntu development environment containing the packages required to build our program. I only call this build manually because it shouldn't change much. This step creates a Docker image that is then stored in our Google Cloud Container Registry. The cloudbuild.yml file for this build step is as follows:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/image_folder', '.']
timeout: 500s
images:
- gcr.io/$PROJECT_ID/image_folder
Now that the Docker image is stored in the Container Registry, I set up a build trigger to build our program. The framework for our program will be changing, so it is essential that our pipeline periodically rebuilds it before testing can take place. For this step I refer to the image stored in our Container Registry and run it as a custom builder on Google Cloud. At the moment, the argument for our custom builder calls a Python script that uses os.system to issue the commands required to build our program. The cloudbuild.yml file for this build step is stored in our Google Cloud Source Repository so that builds can be triggered by pushes to our repo. The cloudbuild.yml file is the following:
steps:
- name: 'gcr.io/$PROJECT_ID/image_folder:latest'
  entrypoint: 'bash'
  args:
  - '-c'
  - 'python3 path/to/instructions/build_instructions.py'
timeout: 2800s
The next step is to create another build trigger that uses the output of the previous step to run tests on simulations. The previous step takes upwards of 45 minutes and only needs to run occasionally, so I want the new trigger to simply pull an image that already has our program built, so tests can run without rebuilding every time.
The problem I am having is that I am not sure how to save and export an image from within a custom builder. Because this is not running the gcr.io/cloud-builders/docker builder, I do not know whether it is possible to make changes within the custom builder and export a new image (including those changes) without access to the standard docker builder. A possible solution may be to use the standard docker builder, use the run argument to run the container with CMD commands in the Dockerfile executing our build, and then add another build step that calls docker commit. But I am guessing there should be another way around this.
Thanks for your help!
TL;DR: I want to run a Docker container as a custom builder in Google Cloud Build, make changes to the container, then save the changes and export it as an image to Container Registry, so that it can be used to test programs without having to spend 45 minutes building the program every time before testing. How can I do this?
I had a similar use case, this is what I did:
steps:
# This step builds the docker container which runs flake8, yapf and unit tests
- name: 'gcr.io/cloud-builders/docker'
  id: 'BUILD'
  args: ['build',
         '-t',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '.']
# Create custom image tag and write to file /workspace/_TAG
- name: 'alpine'
  id: 'SETUP_TAG'
  args: ['sh',
         '-c',
         "echo `echo $BRANCH_NAME |
           sed 's,/,-,g' |
           awk '{print tolower($0)}'`_$(date -u +%Y%m%dT%H%M)_$SHORT_SHA > _TAG; echo $(cat _TAG)"]
# Tag image with custom tag
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_IMAGE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:$(cat _TAG)"]
- name: 'gcr.io/cloud-builders/gsutil'
  id: 'PREPARE_SERVICE_ACCOUNT'
  args: ['cp',
         'gs://my_sa_bucket/mysql2dc-credentials.json',
         '.']
- name: 'docker.io/library/python:3.7'
  id: 'PREPARE_ENV'
  entrypoint: 'bash'
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=/workspace/mysql2dc-credentials.json'
  - 'MYSQL2DC_DATACATALOG_PROJECT_ID=${_MYSQL2DC_DATACATALOG_PROJECT_ID}'
  args:
  - -c
  - 'pip install google-cloud-datacatalog &&
     system_tests/cleanup.sh'
- name: 'gcr.io/cloud-builders/docker'
  id: 'SYSTEM_TESTS'
  args: ['run',
         '--rm',
         '--tty',
         '-v',
         '/workspace:/data',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '--datacatalog-project-id=${_MYSQL2DC_DATACATALOG_PROJECT_ID}',
         '--datacatalog-location-id=${_MYSQL2DC_DATACATALOG_LOCATION_ID}',
         '--mysql-host=${_MYSQL2DC_MYSQL_SERVER}',
         '--raw-metadata-csv=${_MYSQL2DC_RAW_METADATA_CSV}']
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_STABLE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:stable"]
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog']
timeout: 15m
1. Build docker Image
2. Create a Tag
3. Tag Image
4. Pull Service Account
5. Run Tests on the Custom Image
6. Tag the Custom image if success
You could skip 2, 3, 4. Does this work for you?
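If you specifically want to capture changes made inside a running custom-builder container (the docker commit idea from the question), a rough, untested sketch could look like the following; the container name and the image_folder_built tag are placeholders, the script path is taken from the question, and it assumes the dev-environment image lets you pass a command:
steps:
# Run the prebuilt dev-environment image; the long program build happens inside this container
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '--name', 'program-build',
         'gcr.io/$PROJECT_ID/image_folder:latest',
         'bash', '-c', 'python3 path/to/instructions/build_instructions.py']
# Snapshot the stopped container, built program included, as a new image
- name: 'gcr.io/cloud-builders/docker'
  args: ['commit', 'program-build', 'gcr.io/$PROJECT_ID/image_folder_built:latest']
# Push it so the test trigger can pull it without rebuilding
images:
- 'gcr.io/$PROJECT_ID/image_folder_built:latest'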

Best practices for adding .env-File to Docker-Image during build in Gitlab CI

I have a Node.js project which I run as a Docker container in different environments (local, stage, production) and therefore configure via .env files. As is generally advised, I don't store the .env files in my remote repository, which is GitLab. My production and stage systems run as Kubernetes clusters.
What I want to achieve is an automated build via GitLab CI for different environments (e.g. stage) depending on the commit branch (also named stage), meaning when I push to origin/stage I want a Docker image to be built for my stage environment with the corresponding .env file in it.
On my local machine it's pretty simple: since I have all the different .env files in the root folder of my app, I just use this in my Dockerfile
COPY .env-stage ./.env
and everything is fine.
Since I don't store the .env files in my remote repo, this approach doesn't work, so I used GitLab CI variables and created a variable named DOTENV_STAGE of type file with the contents of my local .env-stage file.
Now my problem is: how do I get that content into a .env file inside the Docker image that is going to be built by GitLab, given that it isn't a file in my repo but a variable instead?
I tried using cp (see below, also in the before_script section) to copy the file to a .env file during the build process, but that obviously doesn't work.
My current build stage looks like this:
image: docker:git
services:
  - docker:dind

build stage:
  only:
    - stage
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - cp $DOTENV_STAGE .env
    - docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
    - docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
    - docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
This results in
Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
I also tried cp $DOTENV_STAGE .env as well as cp $DOTENV_STAGE $CI_BUILDS_DIR/.env and cp $DOTENV_STAGE $CI_PROJECT_DIR/.env but none of them worked.
So the part I actually don't know is: Where do I have to put the file in order to make it available to docker during build?
Thanks
You should avoid copying the .env file into the container altogether. Rather, feed it from outside at runtime. There's a dedicated property for that: env_file.
web:
  env_file:
    - .env
You can store the contents of the .env file itself in a masked variable in the GitLab CI backend, then dump it to a .env file on the runner and feed it to the Docker Compose pipeline.
After some more research I stumbled upon a support-forum entry on gitlab.com which describes exactly my situation (unfortunately it has since been deleted), and it was solved by the same approach I was trying to use, namely this:
...
script:
  - cp $DOTENV_STAGE $CI_PROJECT_DIR/.env
...
in my .gitlab-ci.yml
The part I was actually missing was adjusting my .dockerignore file accordingly (removing .env from it) and then removing the line
COPY .env ./.env
from my Dockerfile.
An alternative approach I thought about after joyarjo's answer could be to use a Kubernetes ConfigMap, but I haven't tried it yet.
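For what it's worth, a minimal sketch of that ConfigMap idea (all names, paths and values below are made up): the .env content lives in the cluster and gets mounted into the container at runtime, so it never has to be baked into the image:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env-stage
data:
  .env: |
    API_URL=https://stage.example.com
    LOG_LEVEL=debug
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:stage
          volumeMounts:
            # Mount only the .env key so it shows up as a single file in the app directory
            - name: env-file
              mountPath: /usr/src/app/.env
              subPath: .env
      volumes:
        - name: env-file
          configMap:
            name: app-env-stage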

unable to upload non-image artifacts with cloud build

I have a very simple container (effectively the Cloud Build quickstart sample code) that generates a file. I'm trying to extend this container to upload said file to a bucket via the documentation on storing non-image artifacts with Cloud Build.
My Dockerfile builds a trivial container and executes a single script:
FROM alpine
WORKDIR /app
# the only file present is quickstart.sh
COPY . /app
CMD ["./quickstart.sh"]
The script (quickstart.sh) generates a simple timestamp file:
#!/bin/sh
echo "Creating file 'time.txt'"
echo "The time is $(date)" > time.txt
## for debugging:
# pwd
# ls
# cat time.txt
My cloudbuild.yaml file is basically copy-pasted from the aforementioned docs, and is configured to upload the file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
artifacts:
  objects:
    location: 'gs://my-bucket/'
    paths: ['*.txt']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
However, the file fails to upload and the build fails as a result. When I run the build command
gcloud builds submit --config cloudbuild.yaml .
All logs are successful until the end:
Artifacts will be uploaded to gs://my-bucket using gsutil cp
*.txt: Uploading path....
CommandException: No URLs matched: *.txt
CommandException: 1 file/object could not be transferred.
ERROR
ERROR: could not upload *.txt to gs://my-bucket/; err = exit status 1
gsutil claims that no matching file can be found. However, if I build manually and generate the file, I can use gsutil cp *.txt gs://my-bucket/ to upload it with no problem. So it's almost as if the file is wiped before Cloud Build reaches the "upload artifacts" step, but that doesn't seem like it would make sense. I imagine this is a pretty common use case, but I'm not making any progress with the documentation alone. Any ideas? Thanks.
The issue here is that with the current steps you are just building the container image, not running it, so the time.txt file never gets created. And even if you run the container, the file gets created inside the container, so you need to copy it out of the container so that gsutil can "see" the file.
I added 2 steps in the cloudbuild.yaml file to do this:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'run', '--name', 'containername', 'gcr.io/$PROJECT_ID/quickstart-image' ]
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'cp', 'containername:/app/time.txt', './time.txt' ]
artifacts:
  objects:
    location: 'gs://mybucket/'
    paths: ['*.txt']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
I hope this works for you.
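As an untested alternative sketch, you could skip the docker cp step by mounting the build VM's /workspace into the container and copying the file onto it from inside the container (the overridden command and the /output mount path below are assumptions):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.' ]
# Run the container with /workspace mounted, run the script, then copy time.txt onto the shared volume
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'run', '--rm', '-v', '/workspace:/output',
          'gcr.io/$PROJECT_ID/quickstart-image',
          'sh', '-c', './quickstart.sh && cp time.txt /output/' ]
artifacts:
  objects:
    location: 'gs://my-bucket/'
    paths: ['*.txt']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'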
