Is there a way to specify a `--build-arg` with Cloud Run? - docker

In Docker we can pass a build argument via `--build-arg`:
docker build --build-arg CACHEBUST="$(date)" . -t container-name:latest
Is there an equivalent method for gcloud? The following does not work:
gcloud beta builds submit --tag="gcr.io/${PROJECT_NAME}/${name}" --no-cache --build-arg CACHEBUST="$(date)"

The gcloud builds submit command doesn't have an option to specify --build-arg. As a workaround, you can define the build step in a cloudbuild.yaml file and pass that file to gcloud builds submit with --config.
See below sample code:
# cloudbuild.yaml - needed to set --build-arg; the bash entrypoint is
# required so that $(date) is actually evaluated by a shell
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker build --no-cache --build-arg CACHEBUST="$(date)" -t gcr.io/$PROJECT_ID/sample-docker-repo/sample-image:latest .']
Then, start the build by passing the config file (using --tag instead would bypass the YAML):
gcloud builds submit --config cloudbuild.yaml .
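Note that when building via --config rather than --tag, the resulting image is not pushed automatically. A hedged sketch (image path taken from the example above): adding an images entry to the same cloudbuild.yaml makes Cloud Build push it after the steps succeed.

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-docker-repo/sample-image:latest', '.']
# Push the built image to the registry once all steps finish
images:
- 'gcr.io/$PROJECT_ID/sample-docker-repo/sample-image:latest'
```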


Run docker build - can't find Dockerfile

I'm trying to use GitHub Actions to build my project with Docker. I've looked at this question: How do I specify the dockerfile location in my github action?
What I'd like is for the workflow to use the Dockerfile that lives in my project.
My repository is structured like this:
API
Docker
Service
Data
Test
However, my Dockerfile is in the API folder.
I have tried this:
name: Docker Image CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build the Docker image
      run: docker build ./API/ --file Dockerfile --tag my-image-name:$(date +%s)
and I have tried:
docker build . --file Dockerfile --tag my-image-name:$(date +%s)
Try this with "./" in front of the Dockerfile name:
docker build . --file ./Dockerfile --tag my-image-name:$(date +%s)
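If the Dockerfile actually sits inside the API folder (as described above), the --file path has to point into that folder, because --file is resolved relative to the working directory rather than the build context. A sketch of the workflow step, under that assumption:

```yaml
- name: Build the Docker image
  # --file is resolved from the working directory, not the build
  # context, so point it into API/ explicitly
  run: docker build ./API/ --file ./API/Dockerfile --tag my-image-name:$(date +%s)
```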
UPDATE:
After looking at the documentation on docker.com, I found a solution using Docker Compose. Just use:
docker-compose up

Azure Pipelines: Build a specific stage in a multistage Dockerfile without building non-dependent stages

I have a multistage Dockerfile containing the following stages:
FROM python:3.8-slim-bullseye as python-base
# ...
FROM python-base as builder-base
# ...
FROM python-base as runtime-deps
# ...
FROM runtime-deps as dev-deps
# ...
FROM dev-deps as development
# ...
FROM runtime-deps as cloud
# ...
You can see how these stages depend on each other: development builds on dev-deps, which builds on runtime-deps, while cloud builds directly on runtime-deps.
The cloud stage doesn't depend on dev-deps or development, yet those stages still get built when I use the Docker task with the arg --target cloud:
- task: Docker@2
  displayName: Build Docker image
  inputs:
    command: build
    repository: $(IMAGE_NAME)
    tags: $(TAG_NAME)
    arguments: '--target cloud'
Those unneeded stages also still get built when I run the docker command directly in a Bash task:
- bash: docker build --target cloud --tag $REGISTRY_NAME/$IMAGE_NAME:$TAG_NAME .
How can I configure a Docker build in a pipeline so only the specified target stage is built?
As skipping stages only works with BuildKit, and BuildKit doesn't seem to be enabled by default in the Docker task or in the docker command on Azure Pipelines, you'll have to enable it, for example by using buildx, which uses BuildKit:
- bash: docker buildx build --target cloud --tag $REGISTRY_NAME/$IMAGE_NAME:$TAG_NAME .
You could also use the DOCKER_BUILDKIT=1 environment variable, as described in the Docker docs:
- bash: DOCKER_BUILDKIT=1 docker build --target cloud --tag $REGISTRY_NAME/$IMAGE_NAME:$TAG_NAME .
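If you'd rather keep the Docker@2 task, a sketch under the assumption that the task's docker invocation honors the environment of the step (variable names taken from the example above): DOCKER_BUILDKIT can be set via the step-level env block.

```yaml
- task: Docker@2
  displayName: Build Docker image (BuildKit enabled)
  inputs:
    command: build
    repository: $(IMAGE_NAME)
    tags: $(TAG_NAME)
    arguments: '--target cloud'
  env:
    # The docker CLI reads this and switches to BuildKit,
    # which skips stages the target doesn't depend on
    DOCKER_BUILDKIT: 1
```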

github actions: run multiple jobs in the same docker

I'm learning to use GitHub Actions to run multiple jobs with Docker, and this is what I have so far:
The workflow file is shown below. There are two jobs: job0 builds an image from Dockerfile0 and job1 builds an image from Dockerfile1.
# .github/workflows/main.yml
name: docker CI
on: push
jobs:
  job0:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build and Run
      run: docker build . --file Dockerfile0 --tag job0
  job1:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build and Run
      run: docker build . --file Dockerfile1 --tag job1
Dockerfile0 and Dockerfile1 share basically the same content, except for the argument in the last line:
FROM ubuntu:20.04
ADD . /docker_ci
RUN apt-get update -y
RUN apt-get install -y ... ...
WORKDIR /docker_ci
RUN python3 script.py <arg>
I wonder: can I build an image in the first job and then have multiple other jobs run commands inside the image built by that first job? That way I wouldn't have to maintain multiple Dockerfiles, and I'd save some image build time.
I'd prefer to build the image locally from my Dockerfile, so I'd like to avoid pulling a container from Docker Hub.
runs-for-docker-actions looks relevant, but I'm having trouble finding an example that runs the action locally (without publishing).
It definitely sounds like you should not build two different images - not for CI, and not for local development purposes (if it matters).
From the details you have provided, I would consider the following approach:
Define a Dockerfile with an ENTRYPOINT which is the lowest common denominator for your needs (it can be bash or python script.py).
In GitHub Actions, have a single job with multiple steps - one for building the image, and the others for running it with arguments.
For example:
FROM ubuntu
RUN apt-get update && apt-get install -y python3
WORKDIR /app
COPY script.py .
ENTRYPOINT ["python3", "script.py"]
An image built from this Dockerfile can be run with any arguments, which are passed on to the script.py entrypoint:
$ docker run --rm -it imagename some arguments
A sample GitHub Actions config might look like this:
jobs:
  jobname:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build the image
      run: docker build --tag job .
    # no -it flags here: GitHub Actions runners have no TTY attached
    - name: Test 1
      run: docker run --rm job arg1
    - name: Test 2
      run: docker run --rm job arg2
If you insist on separating these into different jobs, your easiest option, as far as I understand it, is still to rebuild the image in each job (but from a single Dockerfile), since sharing a Docker image built in one job with another job is a more complicated task that I would recommend avoiding.
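For completeness, a hedged sketch of that more complicated route: the image can be serialized with docker save, passed between jobs as a build artifact, and restored with docker load (job, image, and file names here are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - run: docker build --tag job .
    # Serialize the image so a later job can reuse it
    - run: docker save job -o job-image.tar
    - uses: actions/upload-artifact@v2
      with:
        name: job-image
        path: job-image.tar
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
    - uses: actions/download-artifact@v2
      with:
        name: job-image
    # Restore the image into this job's Docker daemon
    - run: docker load -i job-image.tar
    - run: docker run --rm job arg1
```

Whether the artifact upload/download is faster than simply rebuilding depends on the image size; for small images a rebuild is usually simpler.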

GCP Cloud build pass secret to docker arg

I intend to pass my npm token to GCP Cloud Build, so that I can use it in a multi-stage build to install private npm packages.
I have the following abridged Dockerfile:
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
RUN echo "NPM_TOKEN:: ${NPM_TOKEN}"
and the following abridged cloudbuild.yaml:
---
steps:
- name: gcr.io/cloud-builders/gcloud
entrypoint: 'bash'
args: [ '-c', 'gcloud secrets versions access latest --secret=my-npm-token > npm-token.txt' ]
- name: gcr.io/cloud-builders/docker
args:
- build
- "-t"
- gcr.io/my-project/my-program
- "."
- "--build-arg NPM_TOKEN= < npm-token.txt"
- "--no-cache"
I based my cloudbuild.yaml on the documentation, but it seems like I am not able to put two and two together, as the expression: "--build-arg NPM_TOKEN= < npm-token.txt" does not work.
I have tested the Dockerfile by passing in the npm token directly, and it works. I simply have trouble passing a token from gcloud secrets as a build argument to docker.
Help is greatly appreciated!
Your goal is to get the secret file's contents into the build argument. Therefore you have to read the file contents using either NPM_TOKEN="$(cat npm-token.txt)" or NPM_TOKEN="$(< npm-token.txt)".
- name: gcr.io/cloud-builders/docker
  entrypoint: 'bash'
  args: [ '-c', 'docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN="$(cat npm-token.txt)" --no-cache' ]
Note: the gcr.io/cloud-builders/docker builder uses the exec entrypoint form, which performs no shell processing; that is why you have to set entrypoint to bash.
Also note that the secret is saved into the build workspace (/workspace/..), which also allows you to copy it as a file into your container:
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
COPY npm-token.txt .
RUN echo "NPM_TOKEN:: $(cat npm-token.txt)"
I wouldn't write your second step the way you did, but like this:
- name: gcr.io/cloud-builders/docker
  entrypoint: "bash"
  args:
    - "-c"
    - |
      docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN=$(cat npm-token.txt) --no-cache
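For reference, Cloud Build can also inject the secret directly via availableSecrets/secretEnv instead of writing it to a workspace file first. A hedged sketch, assuming the same secret resource as above ($$ escapes the variable from Cloud Build's own substitution pass so bash expands it):

```yaml
steps:
- name: gcr.io/cloud-builders/docker
  entrypoint: 'bash'
  # Expose the secret to this step as an environment variable
  secretEnv: ['NPM_TOKEN']
  args: ['-c', 'docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN="$$NPM_TOKEN" --no-cache']
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/my-npm-token/versions/latest
    env: 'NPM_TOKEN'
```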

How to disable docker cache while building an image in VSTS build?

I'm running a VSTS build which contains a Docker build task. I pass the --no-cache argument in the build args field. Unfortunately, during the build I receive a message that this argument was ignored. Has anyone had the same problem?
The reason is that the Docker build task appends the --no-cache argument after --build-arg, so it can't be consumed.
The workaround is to add an additional argument, such as test=test --no-cache (the warning will then be that [test] was not consumed).
On the other hand, you also can call docker build command through Command Line task.
Adding --no-cache did not work for me, so instead I added a marker in the Dockerfile before the COPY statement that I did not want cached:
FROM microsoft/azure-functions-dotnet-core2.0:2.0-nanoserver-1803
ARG CACHEBUSTER=0
COPY ./FunctionApp/bin/Release/netstandard2.0/Publish /approot
and then placed a RegEx Replace task before the Docker build task to replace ARG CACHEBUSTER=0 with something unique, e.g. ARG CACHEBUSTER=$(Build.BuildNumber)
Using "azure-pipelines.yml" in the Azure Build Pipelines fixes this problem:
- script: docker build -t $(dockerId)/$(imageName) . # add options to this command to meet your needs
Build, test, and push Docker container apps in Azure Pipelines - Build an image
Example:
pool:
  name: MarkusMeyer
  demands:
  - node.js
  - Agent.OSVersion -equals 10.0.17134

variables:
  imageName: 'your-container-image-name:$(build.buildId)'

steps:
- script: docker build --no-cache -f Dockerfile -t $(imageName) .
  displayName: 'docker build'
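The CACHEBUSTER trick from the earlier answer can also be applied without a RegEx Replace task when using a script step (which avoids the Docker task's argument-ordering problem): pass the unique value at build time instead of rewriting the Dockerfile. A sketch, assuming the Dockerfile still contains the ARG CACHEBUSTER=0 line shown above:

```yaml
# A fresh build number each run invalidates the cache from the ARG line onward
- script: docker build --build-arg CACHEBUSTER=$(Build.BuildNumber) -t $(imageName) .
  displayName: 'docker build (cache-busted)'
```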
