I'm working on my first cloudbuild.yaml file and running into this error:
Your build failed to run: failed unmarshalling build config cloudbuild.yaml: yaml: line 8: did not find expected key
Here are the contents of my file (comments omitted); I have a few questions afterwards:
steps:
- name: 'node:12-alpine'
  entrypoint: 'bash'
  args:
  - 'build.sh'
- name: 'docker'
  args:
  - 'build'
  - '-t'
  - 'gcr.io/$PROJECT_ID/my-project:$(git describe --tags `git rev-list --tags --max-count=1`)'
images: ['gcr.io/$PROJECT_ID/my-project']
Questions:
The line with - name: 'node:12-alpine' seems to be where it's blowing up. However, the documentation states, "Cloud Build enables you to use any publicly available image to execute your tasks." The node:12-alpine image is publicly available, so what am I doing wrong?
Secondly, I'm trying to execute a file with a bunch of bash commands in the first step. That should work, provided the commands are all supported by the Alpine image I'm using, right?
Lastly, I'm trying to create a docker image with a version number based on the latest git tag. Is syntax like this supported, or how is versioning normally handled with Google Cloud Build? (I saw nothing on this topic while looking around.)
This error is most probably caused by bad indentation in your cloudbuild.yaml file.
You can take a look at the official documentation, which shows the structure of this file:
steps:
- name: string
  args: [string, string, ...]
  entrypoint: string
- name: string
  ...
- name: string
  ...
images:
- [string, string, ...]
When you run a container in Cloud Build, the entrypoint defined in the container is automatically called, and the args are passed as parameters to this entrypoint.
What you have to know:
You can override the entrypoint, as you did in the node:12 image.
If the container doesn't define an entrypoint, the build fails (your error: you use the generic docker image). You can
Either define the correct entrypoint (here entrypoint: 'docker')
Or use a Cloud Builder; for docker, this one: - name: 'gcr.io/cloud-builders/docker'
The steps' args are forwarded as-is, without any interpretation (except variable substitution like $MyVariable). Your command substitution $(my command) isn't evaluated, except if you do this:
- name: 'gcr.io/cloud-builders/docker' # you can also use the raw docker image here
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    first bash command line
    second bash command line
    docker build -t gcr.io/$PROJECT_ID/my-project:$(git describe --tags `git rev-list --tags --max-count=1`) .
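Putting the entrypoint advice together, a corrected version of the file from the question might look like the sketch below. This is only a sketch: the static :latest tag stands in for the dynamic git-describe tag, and the '.' build context is an assumption.

```yaml
steps:
# node image with an overridden entrypoint: this part was already fine
- name: 'node:12-alpine'
  entrypoint: 'bash'
  args:
  - 'build.sh'
# use the Docker Cloud Builder instead of the raw docker image
- name: 'gcr.io/cloud-builders/docker'
  args:
  - 'build'
  - '-t'
  - 'gcr.io/$PROJECT_ID/my-project:latest'  # placeholder tag; see the $TAG_NAME discussion
  - '.'
images: ['gcr.io/$PROJECT_ID/my-project']
```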
But you can get the tag in a smarter way. If you look at the default environment variables of Cloud Build, you can use $TAG_NAME:
docker build -t gcr.io/$PROJECT_ID/my-project:$TAG_NAME .
Be careful: this is true only if you trigger the build from your repository. If you run a manual build, it doesn't work. So, there is a workaround. Look at this:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    TAG=${TAG_NAME}
    if [ -z "$${TAG}" ]; then TAG=$(git describe --tags `git rev-list --tags --max-count=1`); fi
    docker build -t gcr.io/$PROJECT_ID/my-project:$${TAG} .
However, I don't recommend overwriting your images. You will lose the history, and if you overwrite a good image with a bad version, you lose it!
If you didn't catch some part, like why the double $$ and so on, don't hesitate to comment.
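To make the double $$ concrete: Cloud Build runs a substitution pass over the build config before bash ever sees the command, replacing known variables (like $PROJECT_ID) and collapsing $$ into a literal $. The snippet below only simulates that pass with sed; it is an illustration, not Cloud Build code:

```shell
# The raw string as it would be written in cloudbuild.yaml:
template='docker build -t gcr.io/$PROJECT_ID/app:$${TAG} .'

# Simulated substitution pass: expand $PROJECT_ID, collapse $$ -> $
pass1=$(printf '%s' "$template" | sed -e 's/\$PROJECT_ID/my-project/' -e 's/\$\$/$/g')

# bash now receives ${TAG} intact and expands it itself at run time
echo "$pass1"   # docker build -t gcr.io/my-project/app:${TAG} .
```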
This will be a decent read so I thank you a lot for trying to help :bow:
I am trying to write a github action configuration that does the following two tasks:
Creates an autodeploy.xar file inside the build folder
Uses that folder, along with all other files inside, to create a docker image.
The build process cannot find the folder/files that the previous step created. So I tried three things:
1. Use the file created in the previous step (within the same job in GitHub Actions); I couldn't get it to run. The build process threw an error complaining that the file doesn't exist: Error: buildx failed with: error: failed to solve: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
2. Build two jobs: one to create the file, and another that needs the first to build the docker image. However, this gave the same error as attempt 1.
3. Build the docker image from task 1. This step just runs a bash script from the GitHub Actions workflow. I tried to run docker build . from inside the shell script, but GitHub Actions complained with: "docker build" requires exactly 1 argument. I was providing the right argument, because on echoing the command I clearly saw the output: docker build . --file Dockerfile --tag ***/***:latest --build-arg ADMIN_PASSWORD=***
This must be something very trivial, but I have no idea what's going wrong. And I think a solution to either one of these approaches should work.
Thanks once again for going through all this. Please find the GH actions, workflow.sh and the docker file below:
The GitHub actions yml file:
name: ci
on:
  push:
    branches:
      - 'build'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Run script to replace template file
        run: |
          build/workflow.sh
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/${{ secrets.REPO_NAME }}:latest
          build-args: |
            ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}
The workflow file:
# run the ant
ant <--------- This command just creates autodeploy.xar file and puts it inside the build directory
#### I TESTED WITH AN ECHO COMMAND AND THE FILES ARE ALL THERE:
# echo $(ls build)
The docker file:
# Specify the eXist-db release as a base image
FROM existdb/existdb:6.0.1
COPY build/autodeploy.xar /exist/autodeploy/ <------ THIS LINE FAILS
COPY conf/controller-config.xml /exist/etc/webapp/WEB-INF/
COPY conf/exist-webapp-context.xml /exist/etc/jetty/webapps/
COPY conf/conf.xml /exist/etc
# Ports
EXPOSE 8080 8444
ARG ADMIN_PASSWORD
ENV ADMIN_PASSWORD=$ADMIN_PASSWORD
# Start eXist-db
CMD [ "java", "-jar", "start.jar", "jetty" ]
RUN [ "java", "org.exist.start.Main", "client", "--no-gui", "-l", "-u", "admin", "-P", "", "-x", "sm:passwd('admin','$ADMIN_PASSWORD')" ]
The error saying the file was not found:
#5 [2/6] COPY build/autodeploy.xar /exist/autodeploy/
#5 ERROR: lstat /var/lib/docker/tmp/buildkit-mount3658977881/build/autodeploy.xar: no such file or directory
#4 [1/6] FROM docker.io/existdb/existdb:6.0.1#sha256:fa537fa9fd8e00ae839f17980810abfff6230b0b9873718a766b767a32f54ed6
This is dumb, but the only thing I needed to change was adding context: . in the GitHub Actions step:
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .
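For context on why this works: without an explicit context input, docker/build-push-action builds from the Git context (a fresh clone) rather than the checked-out workspace, so files generated by earlier steps, like build/autodeploy.xar, are not there. The full step would look something like this (action version and secret names carried over from the question):

```yaml
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .   # build from the workspace so build/autodeploy.xar is included
    push: true
    tags: ${{ secrets.DOCKERHUB_USERNAME }}/${{ secrets.REPO_NAME }}:latest
    build-args: |
      ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}
```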
I am using the below config.yml file (.circleci/config.yml) to run the CircleCI job for GitHub, and to build and push a docker image to a repo:
orbs:
  docker: circleci/docker@1.5.0
version: 2.1
executors:
  docker-publisher:
    environment:
      IMAGE_NAME: johndocker/docker-node-app
    docker: # Each job requires specifying an executor
      # (either docker, macos, or machine), see
      — image: circleci/golang:1.15.1
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
jobs:
  publishLatestToHub:
    executor: docker-publisher
    steps:
      — checkout
      — setup_remote_docker
      — run
          name: Publish Docker Image to Docker Hub
          command: |
            echo “$DOCKERHUB_PASSWORD” | docker login -u “$DOCKERHUB_USERNAME” — password-stdin
            docker build -t $IMAGE_NAME .
            docker push $IMAGE_NAME:latest
workflows:
  version: 2
  build-master:
    jobs:
      — publishLatestToHub
The config.yml is the magic that tells CircleCI what to do with our app; for this demo, we want it to build a docker image.
In CircleCI, *workflows* are simply orchestrators: they order how things should be done; *executors* define or group up tasks; *jobs* define the basic steps and commands to run.
But it shows the below error in the CircleCI dashboard:
Unable to parse YAML, while scanning a simple key in 'string', line 21,
I checked using a YAML formatter also, but couldn't resolve the issue. Please help.
What I need is a way to build a Dockerfile within the repository as an image and use this as the image for the next step(s).
I've tried the Bitbucket Pipeline configuration below but in the "Build" step it doesn't seem to have the image (which was built in the previous step) in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer on the StackOverflow question below but the context in that question is pushing the image in the following step. It's not about using the image which was built for the step itself.
Bitbucket pipeline use locally built image from previous step
There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipeline runs. Whilst it will also persist across steps, the primary use-case is for the data to be reused on separate builds. The cache takes 7 days to expire, and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipeline run, you should use the artifacts keyword.
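A minimal sketch of the artifacts keyword carrying a file between steps (the file and step names are made up for illustration):

```yaml
- step:
    name: Produce
    script:
      - echo "data" > out.txt
    artifacts:
      - out.txt
- step:
    name: Consume
    script:
      - cat out.txt  # restored from the previous step's artifacts
```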
Docker service
You should only need the docker service whenever you want a docker daemon available to your build, most commonly whenever you use a docker command in your script. Your second step doesn't run any docker commands, so it doesn't need the docker service.
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Solution 2 - Using two steps with DockerHub
This example keeps the two step approach. In this scenario, you will push your foo/bar image to a public repository in Dockerhub. Pipelines will then pull it to use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
- step:
    name: Build
    image:
      name: foo/bar
      username: $DOCKERHUB_USERNAME
      password: $DOCKERHUB_PASSWORD
      email: $DOCKERHUB_EMAIL
    script:
      - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
      - composer --version
To expand on phod's answer: if you really want two steps, you can transfer the image from one step to another.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies for the image. It can take quite a while to execute and may therefore not be the best solution.
I am trying to create a CI pipeline to automate building and testing on Google Cloud Build. I currently have two separate builds. The first build is triggered manually; it calls the gcr.io/cloud-builders/docker builder to use a dockerfile that creates a Ubuntu development environment with the required packages for building our program. I am currently just calling this build step manually because it shouldn't change much. This step creates a docker image that is then stored in our Google Cloud Container Registry. The cloudbuild.yml file for this build step is as follows:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/image_folder', '.']
  timeout: 500s
images:
- gcr.io/$PROJECT_ID/image_folder
Now that the docker image is stored in the Container Registry, I set up a build trigger to build our program. The framework for our program will be changing, so it is essential that our pipeline periodically rebuilds our program before testing can take place. To do this step, I am referring to the previous image stored in our Container Registry to run it as a custom builder on Google Cloud. At the moment, the argument for our custom builder calls a python script that uses python os.system to give the system the commands that invoke the steps required to build our program. The cloudbuild.yml file for this build step is stored in our Google Cloud Source Repository so that it can be triggered by pushes to our repo. The cloudbuild.yml file is the following:
steps:
- name: 'gcr.io/$PROJECT_ID/image_folder:latest'
  entrypoint: 'bash'
  args:
  - '-c'
  - 'python3 path/to/instructions/build_instructions.py'
  timeout: 2800s
The next step is to create another build trigger that will use the build that was built in the previous step to run tests on simulations. The previous step takes upwards of 45 minutes to build and it only needs to be built occasionally so I want to create another build trigger that will simply pull an image that already has our program built so it can run tests without having to build it every time.
The problem I am having is that I am not sure how to save and export the image from within a custom builder. Because this is not running the gcr.io/cloud-builders/docker builder, I do not know if it is possible to make changes within the custom builder and export a new image (including the changes made) without access to the standard docker builder. A possible solution may be just to use the standard docker builder: use the run argument to run the container, with CMD commands in the dockerfile that execute our build, and then list another build step that calls docker commit. But I am guessing that there should be another way around this.
Thanks for your help!
TL;DR: I want to run a docker container as a custom builder in Google Cloud Build, make changes to the container, then save the changes and export it as an image to Container Registry so that it can be used to test programs without having to spend 45 minutes building the program every time before testing. How can I do this?
I had a similar use case, this is what I did:
steps:
# This step builds the docker container which runs flake8, yapf and unit tests
- name: 'gcr.io/cloud-builders/docker'
  id: 'BUILD'
  args: ['build',
         '-t',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '.']
# Create custom image tag and write to file /workspace/_TAG
- name: 'alpine'
  id: 'SETUP_TAG'
  args: ['sh',
         '-c',
         "echo `echo $BRANCH_NAME |
           sed 's,/,-,g' |
           awk '{print tolower($0)}'`_$(date -u +%Y%m%dT%H%M)_$SHORT_SHA > _TAG; echo $(cat _TAG)"]
# Tag image with custom tag
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_IMAGE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:$(cat _TAG)"]
- name: 'gcr.io/cloud-builders/gsutil'
  id: 'PREPARE_SERVICE_ACCOUNT'
  args: ['cp',
         'gs://my_sa_bucket/mysql2dc-credentials.json',
         '.']
- name: 'docker.io/library/python:3.7'
  id: 'PREPARE_ENV'
  entrypoint: 'bash'
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=/workspace/mysql2dc-credentials.json'
  - 'MYSQL2DC_DATACATALOG_PROJECT_ID=${_MYSQL2DC_DATACATALOG_PROJECT_ID}'
  args:
  - -c
  - 'pip install google-cloud-datacatalog &&
    system_tests/cleanup.sh'
- name: 'gcr.io/cloud-builders/docker'
  id: 'SYSTEM_TESTS'
  args: ['run',
         '--rm',
         '--tty',
         '-v',
         '/workspace:/data',
         'gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA',
         '--datacatalog-project-id=${_MYSQL2DC_DATACATALOG_PROJECT_ID}',
         '--datacatalog-location-id=${_MYSQL2DC_DATACATALOG_LOCATION_ID}',
         '--mysql-host=${_MYSQL2DC_MYSQL_SERVER}',
         '--raw-metadata-csv=${_MYSQL2DC_RAW_METADATA_CSV}']
- name: 'gcr.io/cloud-builders/docker'
  id: 'TAG_STABLE'
  entrypoint: '/bin/bash'
  args: ['-c',
         "docker tag gcr.io/$PROJECT_ID/mysql2datacatalog:$COMMIT_SHA gcr.io/$PROJECT_ID/mysql2datacatalog:stable"]
images: ['gcr.io/$PROJECT_ID/mysql2datacatalog']
timeout: 15m
1. Build docker image
2. Create a tag
3. Tag image
4. Pull service account
5. Run tests on the custom image
6. Tag the custom image if successful
You could skip 2,3,4. Does this work for you?
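On the docker commit idea from the original question: because the standard docker builder exposes the full Docker CLI, you can also run the long build inside the prebuilt environment image and snapshot the stopped container as a new image. This is a sketch only; the :built tag is made up, and the image name and build command are carried over from the question:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # run the long build inside the prebuilt environment image
    docker run --name buildbox gcr.io/$PROJECT_ID/image_folder:latest \
      bash -c 'python3 path/to/instructions/build_instructions.py'
    # snapshot the stopped container, including everything the build produced
    docker commit buildbox gcr.io/$PROJECT_ID/image_folder:built
# Cloud Build pushes this tag to Container Registry at the end of the build
images: ['gcr.io/$PROJECT_ID/image_folder:built']
```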
If I use an environment variable, the circle.yml below fails; but if I statically type the image name, it works.
How can I properly reference environment variables in CircleCI?
version: 2
executorType: machine
stages:
  build:
    workDir: ~/app
    enviroment:
      - IMAGE_NAME: "nginx-ks8-circleci-hello-world"
      # - AWS_REGISTER: "096957576271.dkr.ecr.us-east-1.amazonaws.com"
    steps:
      - type: checkout
      - type: shell
        name: Build the Docker image
        shell: /bin/bash
        command: |
          docker build --rm=false -t $IMAGE_NAME .
I checked your syntax against this example from the CircleCI docs, https://circleci.com/docs/2.0/language-python/#config-walkthrough, so you have to remove the hyphen:
environment:
  IMAGE_NAME: "nginx-ks8-circleci-hello-world"
That's for the environment variable inside the docker image for CircleCI 2.0.
CircleCI runs each command in a subshell, so there isn't a way to set environment variables for the CircleCI build from within the build itself.
Instead, use the actual CircleCI project environment variables:
https://circleci.com/gh/{yourOrganization}/{yourRepo}/edit#env-vars
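For reference, the question's config uses a pre-release 2.0 schema (executorType, stages). A sketch of the same build in the stable CircleCI 2.0 syntax, with the values carried over from the question, would be:

```yaml
version: 2
jobs:
  build:
    machine: true
    working_directory: ~/app
    environment:
      IMAGE_NAME: nginx-ks8-circleci-hello-world
    steps:
      - checkout
      - run:
          name: Build the Docker image
          command: docker build --rm=false -t $IMAGE_NAME .
```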