unable to upload non-image artifacts with cloud build - docker

I have a very simple container (effectively the Cloud Build quickstart sample code) that generates a file. I'm trying to extend this container to upload said file to a bucket via the documentation on storing non-image artifacts with Cloud Build.
My Dockerfile builds a trivial container and executes a single script:
FROM alpine
WORKDIR /app
# the only file present is quickstart.sh
COPY . /app
CMD ["./quickstart.sh"]
The script (quickstart.sh) generates a simple timestamp file:
#!/bin/sh
echo "Creating file 'time.txt'"
echo "The time is $(date)" > time.txt
## for debugging:
# pwd
# ls
# cat time.txt
My cloudbuild.yaml file is basically copy-pasted from the aforementioned docs, and is configured to upload the file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.']
artifacts:
  objects:
    location: 'gs://my-bucket/'
    paths: ['*.txt']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
However, the file fails to upload and the build fails as a result. When I run the build command
gcloud builds submit --config cloudbuild.yaml .
all of the logs show success until the end:
Artifacts will be uploaded to gs://my-bucket using gsutil cp
*.txt: Uploading path....
CommandException: No URLs matched: *.txt
CommandException: 1 file/object could not be transferred.
ERROR
ERROR: could not upload *.txt to gs://my-bucket/; err = exit status 1
gsutil claims that no matching file can be found. However, if I build manually and generate the file, I can run gsutil cp *.txt gs://my-bucket/ to upload it with no problem. So it's almost as if the file is wiped before Cloud Build reaches the "upload artifacts" step, but that doesn't seem to make sense. I imagine this is a pretty common use case, but I'm not making any progress with the documentation alone. Any ideas? Thanks.

The issue here is that with your current steps you are only building the container, not running it, so the time.txt file never gets created. And even when you do run the container, the file is created inside the container, so you need to copy it out of the container so that gsutil can "see" the file.
I added two steps in the cloudbuild.yaml file to do this:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '--name', 'containername', 'gcr.io/$PROJECT_ID/quickstart-image']
- name: 'gcr.io/cloud-builders/docker'
  args: ['cp', 'containername:/app/time.txt', './time.txt']
artifacts:
  objects:
    location: 'gs://my-bucket/'
    paths: ['*.txt']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
I hope this works for you.
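As an aside, you can skip the docker cp step by mounting the build workspace into the container and copying the file out while it runs. A minimal sketch, assuming the default /workspace volume that Cloud Build mounts into every step (the /host mount point inside the container is just a hypothetical name):

- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '-v', '/workspace:/host', 'gcr.io/$PROJECT_ID/quickstart-image',
         'sh', '-c', './quickstart.sh && cp time.txt /host/']  # override CMD to copy the output to the mounted workspace

Because the artifacts paths are globbed relative to the workspace, time.txt then lands exactly where gsutil expects it.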

Related

Running docker from github actions can't find file added during previous step

This will be a decent read so I thank you a lot for trying to help :bow:
I am trying to write a github action configuration that does the following two tasks:
Creates an autodeploy.xar file inside the build folder
Uses that folder, along with all the other files inside it, to create a docker image.
The build process cannot find the folder/files that the previous step has created. So I tried three things:
Using the file created in the previous step (within the same job in GitHub Actions), but I couldn't get it to work.
The build process threw an error complaining that the file doesn't exist: Error: buildx failed with: error: failed to solve: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
Building two jobs, one to create the file and another that depends on the first to build the Docker image. However, this gave the same error as attempt 1.
Building the Docker image from task 1.
This step just runs a bash script from the GitHub Actions workflow.
I tried to run docker build . from inside the shell script, but GitHub Actions complained with "docker build" requires exactly 1 argument.
I was providing the right argument: echoing the command clearly showed the output docker build . --file Dockerfile --tag ***/***:latest --build-arg ADMIN_PASSWORD=***
This must be something very trivial, but I have no idea what's going wrong. And I think a solution to either one of these approaches should work.
Thanks once again for going through all this. Please find the GH actions, workflow.sh and the docker file below:
The GitHub actions yml file:
name: ci
on:
  push:
    branches:
      - 'build'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Run script to replace template file
        run: |
          build/workflow.sh
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/${{ secrets.REPO_NAME }}:latest
          build-args: |
            ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}
The workflow file:
# run the ant
ant <--------- This command just creates autodeploy.xar file and puts it inside the build directory
#### I TESTED WITH AN ECHO COMMAND AND THE FILES ARE ALL THERE:
# echo $(ls build)
The docker file:
# Specify the eXist-db release as a base image
FROM existdb/existdb:6.0.1
COPY build/autodeploy.xar /exist/autodeploy/ <------ THIS LINE FAILS
COPY conf/controller-config.xml /exist/etc/webapp/WEB-INF/
COPY conf/exist-webapp-context.xml /exist/etc/jetty/webapps/
COPY conf/conf.xml /exist/etc
# Ports
EXPOSE 8080 8444
ARG ADMIN_PASSWORD
ENV ADMIN_PASSWORD=$ADMIN_PASSWORD
# Start eXist-db
CMD [ "java", "-jar", "start.jar", "jetty" ]
RUN [ "java", "org.exist.start.Main", "client", "--no-gui", "-l", "-u", "admin", "-P", "", "-x", "sm:passwd('admin','$ADMIN_PASSWORD')" ]
The error saying file was not found:
#5 [2/6] COPY build/autodeploy.xar /exist/autodeploy/
#5 ERROR: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
#4 [1/6] FROM docker.io/existdb/existdb:6.0.1@sha256:fa537fa9fd8e00ae839f17980810abfff6230b0b9873718a766b767a32f54ed6
This is dumb, but the only thing I needed to change was adding context: . to the build step in the GitHub Actions workflow:
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .
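The reason: without an explicit context, docker/build-push-action defaults to the Git context, i.e. a fresh clone of the repository done by BuildKit itself, which does not contain files generated by earlier workflow steps (like build/autodeploy.xar from the ant run). context: . switches it to the checked-out working directory. For reference, a sketch of the full fixed step, reusing the same secrets as the workflow above:

- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .          # build from the working directory, not a fresh clone
    push: true
    tags: ${{ secrets.DOCKERHUB_USERNAME }}/${{ secrets.REPO_NAME }}:latest
    build-args: |
      ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}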

Google Cloud Build - Different scopes of Dockerfile and cloudbuild.yaml

I recently asked a question about why I get the error Specified public directory 'dist/browser' does not exist, can't deploy hosting to site PROJECT-ID when I'm trying to deploy to Firebase Hosting in my cloudbuild.yaml. However, since I found that question too bloated with information, I tried to break it down.
I created a simple image to visualize what happens when I call gcloud builds submit --config=cloudbuild.yaml. So why can't I access the directory dist/browser from cloudbuild.yaml even though it is processed after the Dockerfile, where the directory dist/browser is created?
Cloud Build is best conceptualized as a series of functions (steps) applied to data in the form of a local file system (often just /workspace as this is a default volume mount added to each step, but you can add other volume mounts) and the Internet.
Output of each function (step) is self-contained unless you explicitly publish data back to one of these two sources (one of the step's volume mounts or the Internet).
In this case, docker build consumes local files (not shown in your example) and generates dist/browser inside the resulting image, but that folder is only accessible within the image; nothing is added to e.g. /workspace that you could use in subsequent steps.
In order to use that directory subsequently, you can:
Hack a way to mount the (file system of the) image generated by the step and extract the directory from it (not advised; possibly not permitted).
Run that image as a container and then docker cp files from it back into the Cloud Build (VM's) file system, perhaps somewhere on /workspace; see the sketch after this list.
Not put the directory in an image in the first place (see below).
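A minimal sketch of the second option; the image name extract-img, container name extract, and the in-image path /app/dist are hypothetical, so substitute your own:

steps:
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'extract-img', '.']
- name: gcr.io/cloud-builders/docker
  args: ['create', '--name', 'extract', 'extract-img']  # materializes the file system without running the container
- name: gcr.io/cloud-builders/docker
  args: ['cp', 'extract:/app/dist', '/workspace/dist']  # /workspace/dist/browser is now visible to later steps

docker create is enough here: docker cp can read from a container that has never been started.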
Proposal
Instead of docker build'ing an image containing the directory, deconstruct the Dockerfile into a series of Cloud Build steps. This way, the artifacts you want (if written somewhere under one of the step's volume mounts) will be available in subsequent steps:
steps:
- name: gcr.io/cloud-builders/npm
  args:
  - install
- name: gcr.io/cloud-builders/npm
  args:
  - run
  - build:ssr # Presumably this is where dist/browser is generated?
- name: firebase
  args:
  - deploy # dist/browser
NOTE Every Cloud Build step has an implicit:
- name: some-step
  volumes:
  - name: workspace
    path: /workspace
Proof
Here's a minimal Cloud Build config that uses a volume called testdir that maps to the Cloud Build VM's /testdir directory.
NOTE The example uses testdir to prove the point. Each Cloud Build step automatically mounts /workspace and this could be used instead.
The config:
Lists the empty /testdir
Creates a file freddie.txt in /testdir
Lists /testdir now containing freddie.txt
options:
#   volumes:
#   - name: testdir
#     path: /testdir
steps:
- name: busybox
  volumes:
  - name: testdir
    path: /testdir
  args:
  - ash
  - -c
  - "ls -1a /testdir"
- name: busybox
  volumes:
  - name: testdir
    path: /testdir
  args:
  - ash
  - -c
  - 'echo "Hello Freddie" > /testdir/freddie.txt'
- name: busybox
  volumes:
  - name: testdir
    path: /testdir
  args:
  - ash
  - -c
  - "ls -1a /testdir"
NOTE Uncommenting volumes under options would remove the need to reproduce the volumes in each step.
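For completeness, a sketch of that global form, using the same names as the example above (every step then sees /testdir without declaring it):

options:
  volumes:
  - name: testdir
    path: /testdir
steps:
- name: busybox
  args: ['ash', '-c', 'ls -1a /testdir']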
The edited output is:
gcloud builds submit \
--config=./cloudbuild.yaml \
--project=${PROJECT}
# Lists (empty) /testdir
Starting Step #0
Step #0: Pulling image: busybox
Step #0: .
Step #0: ..
# Creates /testdir/freddie.txt
Starting Step #1
Step #1: Already have image: busybox
Finished Step #1
# Lists /testdir containing freddie.txt
Starting Step #2
Step #2: .
Step #2: ..
Step #2: freddie.txt
Finished Step #2

google cloud build syntax

I'm working on my first cloudbuild.yaml file and running into this error:
Your build failed to run: failed unmarshalling build config cloudbuild.yaml: yaml: line 8: did not find expected key
Here are the contents of my file (comments omitted), I have a few questions afterwards:
steps:
- name: 'node:12-alpine'
  entrypoint: 'bash'
  args:
  - 'build.sh'
- name: 'docker'
  args:
  - 'build'
  - '-t'
  - 'gcr.io/$PROJECT_ID/my-project:$(git describe --tags `git rev-list --tags --max-count=1`)'
images: ['gcr.io/$PROJECT_ID/my-project']
Questions:
The line with - name: 'node:12-alpine' seems to be where it's blowing up. However, the documentation states, "Cloud Build enables you to use any publicly available image to execute your tasks." The node:12-alpine image is publicly available, so what am I doing wrong?
Secondly, I'm trying to execute a file with a bunch of BASH commands in the first step. That should work, provided the commands are all supported by the Alpine image I'm using, right?
Lastly, I'm trying to create a docker image with a version number based on the latest git tag. Is syntax like this supported, or how is versioning normally handled with Google Cloud Build? (I saw nothing on this topic looking around.)
This error is most probably caused by bad indentation in your cloudbuild.yaml file.
You can take a look at the official documentation, which shows the structure of this file:
steps:
- name: string
  args: [string, string, ...]
  entrypoint: string
- name: string
  ...
- name: string
  ...
images:
- [string, string, ...]
When Cloud Build runs a container, the entrypoint defined in the container is called automatically, and the args are passed as parameters to that entrypoint.
What you have to know:
You can override the entrypoint, as you did with the node:12 image.
If the container doesn't define an entrypoint, the build fails (your error: you use a generic docker image). You can either define the correct entrypoint (here entrypoint: 'docker'), or use a Cloud Builder; for docker, this one: - name: 'gcr.io/cloud-builders/docker'.
The steps' args are forwarded as-is, without any interpretation (except variable substitution like $MyVariable). Your command substitution $(my command) isn't evaluated, unless you do this:
- name: 'gcr.io/cloud-builders/docker' # you can also use the raw docker image here
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    first bash command line
    second bash command line
    docker build -t gcr.io/$PROJECT_ID/my-project:$(git describe --tags `git rev-list --tags --max-count=1`)
But you can get the tag in a smarter way. If you look at the default environment variables of Cloud Build, you can use $TAG_NAME:
docker build -t gcr.io/$PROJECT_ID/my-project:$TAG_NAME
Be careful: this works only if the build is triggered from your repository; if you run a manual build, $TAG_NAME is empty. So, there is a workaround. Look at this:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    TAG=${TAG_NAME}
    if [ -z $${TAG} ]; then TAG=$(git describe --tags `git rev-list --tags --max-count=1`); fi
    docker build -t gcr.io/$PROJECT_ID/my-project:$${TAG}
However, I don't recommend overwriting your images. You will lose the history, and if you overwrite a good image with a broken version, you lose it!
If you didn't catch some part, like why the double $$ and so on, don't hesitate to comment.
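Putting these pieces together, a corrected sketch of the original cloudbuild.yaml might look like this (the latest tag is a placeholder; swap in the $TAG_NAME or git describe logic above if you want versioned tags):

steps:
- name: 'node:12-alpine'
  entrypoint: 'bash'          # override the node entrypoint to run the script
  args: ['build.sh']
- name: 'gcr.io/cloud-builders/docker'   # builder with the docker entrypoint already set
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-project:latest', '.']
images: ['gcr.io/$PROJECT_ID/my-project:latest']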

How can I save changes to a docker container and export it as a docker image when running custom image build step in Google Cloud Build?

I am trying to create a CI pipeline to automate building and testing on Google Cloud Build. I currently have two separate builds. The first build is triggered manually; it calls the gcr.io/cloud-builders/docker builder to use a Dockerfile that creates a Ubuntu development environment with the packages required for building our program. I currently just call this build step manually because it shouldn't change much. This step creates a docker image that is then stored in our Google Cloud Container Registry. The cloudbuild.yml file for this build step is as follows:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/image_folder', '.']
  timeout: 500s
images:
- gcr.io/$PROJECT_ID/image_folder
Now that the docker image is stored in the Container Registry, I set up a build trigger to build our program. The framework for our program will be changing, so it is essential that our pipeline periodically rebuilds our program before testing can take place. To do this step, I refer to the image stored in our Container Registry and run it as a custom builder on Google Cloud. At the moment, the argument for our custom builder calls a python script that uses python os.system to issue the commands required to build our program. The cloudbuild.yml file for this build step is stored in our Google Cloud Source Repository so that it can be triggered by pushes to our repo. The cloudbuild.yml file is the following:
steps:
- name: 'gcr.io/$PROJECT_ID/image_folder:latest'
  entrypoint: 'bash'
  args:
  - '-c'
  - 'python3 path/to/instructions/build_instructions.py'
timeout: 2800s
The next step is to create another build trigger that will use the program built in the previous step to run tests on simulations. The previous step takes upwards of 45 minutes and only needs to run occasionally, so I want the new trigger to simply pull an image that already has our program built, letting it run tests without rebuilding every time.
The problem I am having is that I am not sure how to save and export the image from within a custom builder. Because this is not running the gcr.io/cloud-builders/docker builder, I do not know whether it is possible to make changes within the custom builder and export a new image (including those changes) without access to the standard docker builder. A possible solution may be to use the standard docker builder with the run argument to run the container, use CMD commands in the Dockerfile to execute our build, and then add another build step that calls docker commit. But I am guessing there should be another way around this.
Thanks for your help!
TL;DR: I want to run a docker container as a custom builder in Google Cloud Build, make changes to the container, then save the changes and export it as an image to Container Registry so that it can be used to test programs without having to spend 45 minutes building the program every time before testing. How can I do this?
I had a similar use case, this is what I did:
steps:
# This step builds the docker container which runs flake8, yapf and unit tests
- name: 'gcr.io/cloud-builders/docker'
  id: 'BUILD'
  args: ['build',
         '-t',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '.']
# Create custom image tag and write to file /workspace/_TAG
- name: 'alpine'
  id: 'SETUP_TAG'
  args: ['sh',
         '-c',
         "echo `echo $BRANCH_NAME |
          sed 's,/,-,g' |
          awk '{print tolower($0)}'`_$(date -u +%Y%m%dT%H%M)_$SHORT_SHA > _TAG; echo $(cat _TAG)"]
# Tag image with custom tag
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_IMAGE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:$(cat _TAG)"]
- name: 'gcr.io/cloud-builders/gsutil'
  id: 'PREPARE_SERVICE_ACCOUNT'
  args: ['cp',
         'gs://my_sa_bucket/mysql2dc-credentials.json',
         '.']
- name: 'docker.io/library/python:3.7'
  id: 'PREPARE_ENV'
  entrypoint: 'bash'
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=/workspace/mysql2dc-credentials.json'
  - 'MYSQL2DC_DATACATALOG_PROJECT_ID=${_MYSQL2DC_DATACATALOG_PROJECT_ID}'
  args:
  - -c
  - 'pip install google-cloud-datacatalog &&
     system_tests/cleanup.sh'
- name: 'gcr.io/cloud-builders/docker'
  id: 'SYSTEM_TESTS'
  args: ['run',
         '--rm',
         '--tty',
         '-v',
         '/workspace:/data',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '--datacatalog-project-id=${_MYSQL2DC_DATACATALOG_PROJECT_ID}',
         '--datacatalog-location-id=${_MYSQL2DC_DATACATALOG_LOCATION_ID}',
         '--mysql-host=${_MYSQL2DC_MYSQL_SERVER}',
         '--raw-metadata-csv=${_MYSQL2DC_RAW_METADATA_CSV}']
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_STABLE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:stable"]
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog']
timeout: 15m
1. Build docker image
2. Create a tag
3. Tag image
4. Pull service account
5. Run tests on the custom image
6. Tag the custom image if successful
You could skip 2, 3, 4. Does this work for you?
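Regarding the TL;DR, the docker commit route you guessed at also works from the standard docker builder: run the custom image, let it do the 45-minute build, commit the stopped container as a new image, and push that. A rough sketch, where the container name buildenv and the built tag are hypothetical:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '--name', 'buildenv', 'gcr.io/$PROJECT_ID/image_folder:latest',
         'bash', '-c', 'python3 path/to/instructions/build_instructions.py']
- name: 'gcr.io/cloud-builders/docker'
  args: ['commit', 'buildenv', 'gcr.io/$PROJECT_ID/image_folder:built']  # snapshot the stopped container
images: ['gcr.io/$PROJECT_ID/image_folder:built']

Your test trigger can then pull image_folder:built and skip the long build entirely.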

Where is Dockerfile running in OpenShift Jenkins Pipeline

I am trying to send a file to the Docker image while building a Jenkins job in OpenShift.
I tried paths, changing directories, and some other methods, but couldn't achieve anything. I am trying to send files inside my Jenkins pod to the Docker image, but the Dockerfile only copies from its own paths. Where is that? Where does the Jenkins pipeline store that Dockerfile and the stashed war file?
source:
  dockerfile: |-
    FROM wildfly
    COPY ROOT.war /wildfly/standalone/deployments/ROOT.war
    RUN ls -la .
    CMD $STI_SCRIPTS_PATH/run
  binary:
    asFile: ROOT.war
  type: Docker
strategy:
  dockerStrategy:
    from:
      kind: ImageStreamTag
      name: wildfly:latest
    paths:
    - destinationDir: test
      sourcePath: /tmp/myfile/.
  type: Docker
triggers: []
I am expecting to send my /tmp/myfile to the Docker image, but paths is not working for me. I tried COPY . . but only the Dockerfile and ROOT.war exist in that path. What is the exact path that contains the Dockerfile and ROOT.war? I am thinking about copying the file manually to that path. Is that possible, or do you know any other way?
