Google Cloud Build + Google Secret Manager Substitution Problems - docker

We have a repository that needs to go get a private repo/module. To do this, we are using an SSH key to access the private repo.
We are storing this SSH key in Google Secret Manager and passing it to Docker with the --build-arg flag. When we do this locally, the Dockerfile builds and runs as intended. This is the command we use for a local build:
export SSH_PRIVATE_KEY="$(gcloud secrets versions access latest --secret=secret-data)" && \
docker build --build-arg SSH_PRIVATE_KEY -t my-image .
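For context, the key is consumed in the Dockerfile along these lines (a simplified sketch; the paths and the exact git/ssh setup are illustrative, not our actual file):
FROM golang:1.15.1
ARG SSH_PRIVATE_KEY
RUN mkdir -p /root/.ssh && \
    echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts && \
    git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
# go get / go build of the private module would follow here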
However, when we try to move this setup to Google Cloud Build, we run into 403 Forbidden errors from Bitbucket, which leads me to believe that the SSH key is either not being read or not formatted correctly.
The full 403 error is:
https://api.bitbucket.org/2.0/repositories/my-repo?fields=scm: 403 Forbidden
Step #0 - "Build": server response: Access denied. You must have write or admin access.
What is even stranger is that when I run the Cloud Build local emulator, it works fine using this command: cloud-build-local --config=builder/cloudbuild-prod.yaml --dryrun=false .
I've tried many different formats and methods, so out of desperation I am asking the community for help. What could be the problem?
Here is our cloudbuild.yaml:
steps:
# Get secret
- id: 'Get Secret'
  name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gcloud secrets versions access latest --secret=secret-data > /workspace/SSH_PRIVATE_KEY.txt
# Build
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt) &&
    docker build --build-arg SSH_PRIVATE_KEY -t my-image .

With Cloud Build, when you want to reference a local Linux variable, and not a substitution variable, you have to escape the $ with another $. Look at this:
# Build
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt)
    docker build --build-arg SSH_PRIVATE_KEY=$$SSH_PRIVATE_KEY -t my-image .
The reference to SSH_PRIVATE_KEY is prefixed with $$ to say: don't look at the substitution variable, but at the Linux variable.
I also removed the && at the end of the export line. With the block scalar |, each line is run as its own command, so the line breaks already delimit the commands.
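As a quick illustration of the escaping rule (a hedged sketch, not part of the original answer): in a cloudbuild.yaml step, a single $ is consumed by Cloud Build's substitution pass, while $$ reaches bash untouched.
- id: 'Show escaping'
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "$PROJECT_ID"   # replaced by Cloud Build (built-in substitution) before bash runs
    echo "$$HOME"        # arrives in the step as $HOME and is expanded by bash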

Thanks for all the help! This one was pretty weird. It turns out it's not an issue with Cloud Build or Secret Manager but with the Dockerfile I was using.
Instead of setting GOPRIVATE with the go env -w command shown in the Dockerfile below, I was using a statement like RUN export GOPRIVATE="bitbucket.org/odds", which only affects that single RUN layer.
In case anyone runs into something like this again, here's the full Dockerfile that works.
FROM golang:1.15.1
WORKDIR $GOPATH/src/bitbucket.org/gml/my-srv
ENTRYPOINT ["./my-srv"]
ARG CREDENTIALS
RUN git config \
    --system \
    url."https://${CREDENTIALS}@bitbucket.org/".insteadOf \
    "https://bitbucket.org/"
RUN go env -w GOPRIVATE="bitbucket.org/my-team"
COPY . .
RUN make build
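Note the switch from SSH to HTTPS here: the CREDENTIALS build-arg is expected to hold HTTPS credentials for Bitbucket (for example user:app-password; the exact format of the secret is an assumption on my part, it isn't shown above). The image is then built the same way as before, e.g.:
export CREDENTIALS="$(gcloud secrets versions access latest --secret=secret-data)" && \
docker build --build-arg CREDENTIALS -t my-srv .
Inside the cloudbuild.yaml Build step, the bash-level reference would be written as $$CREDENTIALS, following the escaping rule from the accepted answer.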

Related

Use/pass system.accesstoken into Dockerfile as ENV var for npm auth token using Powershell

I'm trying to automate a docker build that needs to access a personal npm registry on Azure.
Currently I manually grab a personal Azure DevOps PAT, base64 encode it, and then save it to a file.
I then mount the file as a secret during the docker build.
I have this step in the Azure pipeline YAML that calls a PowerShell script:
steps:
- template: docker/steps/docker-login.yml@templates
  parameters:
    containerRegistry: ${{ parameters.containerRegistry }}
- pwsh: |
    ./scripts/buildAndPushContainerImage.ps1 `
      -containerRepository ${{ parameters.containerRepository }} `
      -branchName ${{ parameters.branchName }} `
      -version ${{ parameters.version }} `
      -action Build
  displayName: Docker build
And this is in the PowerShell script that builds the image:
function Build-UI-Image {
    $npmTokenFilePath = Join-Path $buildContext "docker/secrets/npm_token.txt"
    if (-not (Test-Path $npmTokenFilePath)) {
        Write-Error "Missing file: $npmTokenFilePath"
    }
    docker build `
        -f $dockerfilePath `
        --secret id=npm_token,src=$npmTokenFilePath `
        --build-arg "BUILDKIT_INLINE_CACHE=1" `
        .....(rest of code)
}
And finally a Dockerfile with the value mounted as a secret
RUN --mount=type=secret,id=npm_token \
--mount=type=cache,sharing=locked,target=/tmp/yarn-cache <<EOF
export NPM_TOKEN=$(cat /run/secrets/npm_token)
yarn install --frozen-lockfile --silent --non-interactive --cache-folder /tmp/yarn-cache
unset NPM_TOKEN
EOF
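For context, this pattern presumably relies on a project .npmrc that interpolates NPM_TOKEN from the environment, roughly like the sketch below (the feed URL is a placeholder and the fields follow the usual Azure Artifacts npm auth shape, not the actual project file; _password holds the base64-encoded PAT):
registry=https://pkgs.dev.azure.com/<org>/_packaging/<feed>/npm/registry/
always-auth=true
//pkgs.dev.azure.com/<org>/_packaging/<feed>/npm/registry/:username=any
//pkgs.dev.azure.com/<org>/_packaging/<feed>/npm/registry/:_password=${NPM_TOKEN}
//pkgs.dev.azure.com/<org>/_packaging/<feed>/npm/registry/:email=not-used@example.com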
I've read multiple articles about using the Azure built-in 'system.accesstoken' to authorise with private npm registries, but I'm not sure how to go about this in my scenario (as I am not using Azure predefined tasks, and I'm using PowerShell, not bash).
I think I can add this to the pipeline yaml as the first step
env:
  SYSTEM_ACCESSTOKEN: $(System.AccessToken)
But I'm not sure how I then pass that to the PowerShell build script and ultimately get it into the Docker container as an ENV var that I can reference instead of the file.
Do I maybe need to add it as another --build-arg in the Powershell script like this?
--build-arg NPM_TOKEN=$(System.AccessToken)
And then if it was exposed as an ENV value inside the container, how would I reference it?
Would it just be there as NPM_TOKEN and I don't need to do anything further?
Or would I need to take it and try to base64 encode it and export it again?
Bit out of my depth as I've never used a private npm registry before.
Appreciate any info or suggestions.
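In case it helps: $(System.AccessToken) is only exposed to a script step when it is mapped explicitly on that step via env, so a hedged sketch of the build step would look like this (parameter names and paths are reused from the question and nothing here is verified against the real pipeline):
- pwsh: |
    ./scripts/buildAndPushContainerImage.ps1 `
      -containerRepository ${{ parameters.containerRepository }} `
      -branchName ${{ parameters.branchName }} `
      -version ${{ parameters.version }} `
      -action Build
  displayName: Docker build
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
The script could then read $env:SYSTEM_ACCESSTOKEN, base64-encode it, and write it to docker/secrets/npm_token.txt in place of the manually created file, leaving the existing --secret mount unchanged.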

How to pass substitution variables on Google Cloud build to run a dockerfile

I have set up a GitHub trigger on Google Cloud Build. I already have a Dockerfile that does the build steps. Since there is no provision to pass substitution variables when building directly from the Dockerfile, I am building with a Cloud Build configuration file (YAML) and passing the path to the Dockerfile in the configuration file.
This is the Cloud Build configuration file, where I need to pass 5 variables for the Dockerfile to consume:
steps:
- name: 'gcr.io/cloud-builders/docker'
  env:
  - 'ACCESS_TOKEN=$ACCESS_TOKEN'
  - 'BRANCH=$BRANCH'
  - 'UTIL_BRANCH=$UTIL_BRANCH'
  - 'ARG_ENVIRONMENT=$ARG_ENVIRONMENT'
  - 'ARG_PYTHON_SCRIPT=$ARG_PYTHON_SCRIPT'
  args:
  - build
  - "--tag=gcr.io/$PROJECT_ID/quickstart-image"
  - "--file=./twitter/dax/processing_scripts/Dockerfile"
  - .
When the trigger runs the build, I get an error in one of the build steps in the Dockerfile saying that the variable is not available. It's clear that the environment variable passed in the YAML file is not being passed on to the Dockerfile for consumption.
And this is how I have filled in the substitution variables on the trigger page.
Pasting the build output from step 5, where the error occurs (the steps before it are just apt-get update commands):
Step 5/28 : RUN git config --global url."https://${ACCESS_TOKEN}:@github.com/".insteadOf "https://github.com/" && echo $(ACCESS_TOKEN)
---> Running in f7b94bc2a0d9
/bin/sh: 1: ACCESS_TOKEN: not found
Removing intermediate container f7b94bc2a0d9
---> 30965207dcec
Step 6/28 : ARG BRANCH
---> Running in 93e36589ac48
Removing intermediate container 93e36589ac48
---> 1d1508b1c1d9
Step 7/28 : RUN git clone https://github.com/my_repo45/twitter.git -b "${BRANCH}"
---> Running in fbeb93dbb113
Cloning into 'twitter'...
remote: Repository not found.
fatal: Authentication failed for 'https://github.com/my_repo45/twitter.git/'
The command '/bin/sh -c git clone https://github.com/my_repo45/twitter.git -b "${BRANCH}"' returned a non-zero code: 128
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 128
Could anyone please point out what the issue is and validate whether the environment declaration is proper?
You would still need to add the variables to the docker build command. In your current setup, the substitution variables are only available within Cloud Build itself, not as regular environment variables inside the Docker build. A simplified example of how to pass those variables through to docker build is outlined below. Note that you can only use this with ARG in your Dockerfile, not with ENV.
steps:
- name: 'gcr.io/cloud-builders/docker'
  env:
  - 'ACCESS_TOKEN=$_ACCESS_TOKEN'
  args:
  - build
  - "--build-arg=ACCESS_TOKEN=${_ACCESS_TOKEN}"
  - "--tag=gcr.io/$PROJECT_ID/quickstart-image"
  - "--file=./twitter/dax/processing_scripts/Dockerfile"
  - .
Another option is to export the environment variable within the same build step your docker build command is in:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  env:
  - 'ACCESS_TOKEN=$_ACCESS_TOKEN'
  args:
  - '-c'
  - |
    export ACCESS_TOKEN=$$ACCESS_TOKEN
    docker build \
      --build-arg ACCESS_TOKEN=$$ACCESS_TOKEN \
      --tag=gcr.io/$PROJECT_ID/quickstart-image \
      --file=./twitter/dax/processing_scripts/Dockerfile \
      .
Of course, you could argue whether the env field is still needed in this setup; I'll leave that up to you.
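On the Dockerfile side, the variable then has to be declared with ARG before it is used; a minimal sketch based on the git config line from your build log (the _ACCESS_TOKEN substitution itself is the one you define on the trigger):
ARG ACCESS_TOKEN
RUN git config --global \
    url."https://${ACCESS_TOKEN}@github.com/".insteadOf "https://github.com/"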

Failing gitlab CI due to "no such file or directory"

I'm attempting to have my .gitlab-ci.yml file use an image from the GitLab container registry. I have successfully built and pushed the image to the registry, and I can pull it and run a container on my local machine just fine. However, when using the image for my .gitlab-ci.yml file, I get this error:
Authenticating with credentials from job payload (GitLab Registry)
standard_init_linux.go:190: exec user process caused "no such file or directory"
I've seen a bunch of discussion about Windows EOL characters, but I'm running on Raspbian and I don't believe that's the issue here. However, I'm pretty new at this and can't figure out what the issue is. I appreciate any help.
.gitlab-ci.yml file:
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

stages:
  - test-version

test:
  stage: test-version
  image: registry.gitlab.com/my/project/test:latest
  script:
    - python --version
test.Dockerfile (which is in the registry as registry.gitlab.com/my/project/test:latest):
ARG base_img="python:3.6"
FROM ${base_img}
# Install Python packages
RUN pip install --upgrade pip
Edit:
Another thing to note is that if I change the image in the .gitlab-ci.yml file to just python:3.6, then it runs just fine. It's only when I attempt to use my image from the registry.
As you confirmed in the comments, gitlab.com/my/project is a private repository, so one cannot directly use docker pull, or the image: property, with registry.gitlab.com/my/project/test:latest.
However, you should be able to adapt your .gitlab-ci.yml by using the image docker:latest and manually running docker commands (including docker login).
This relies on the so-called Docker-in-Docker (dind) approach, and it is supported by GitLab CI.
Here is a generic template of .gitlab-ci.yml relying on this idea:
stages:
  - test-version

test:
  stage: test-version
  image: docker:latest
  services:
    - docker:dind
  variables:
    # GIT_STRATEGY: none  # uncomment if "git clone" is unneeded
    IMAGE: "registry.gitlab.com/my/project/test:latest"
  before_script:
    # - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # or better
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker pull "$IMAGE"
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
        export PS4='+ \e[33;1m(\$0 # line \$LINENO) \$\e[0m '  # optional
        set -ex  # mandatory
        ## TODO insert your multi-line shell script here ##
        echo \"One comment\"  # quotes must be escaped here
        : A better comment
        python --version
        echo $PWD   # interpolated outside the container
        echo \$PWD  # interpolated inside the container
        ## (cont'd) ##
      " "$CI_JOB_NAME"
    - echo done
This leads to a bit more boilerplate, but it is generic, so you can just replace the IMAGE definition and replace the TODO area with your own Bash script, ensuring that these two items are fulfilled:
If your shell code contains double quotes, you need to escape them, because the whole code is surrounded by docker run … " and " (the last argument, "$CI_JOB_NAME", is a detail: it is optional and just allows one to override the $0 variable referenced within the Bash variable PS4).
If your shell code contains local variables, they need to be escaped (cf. the \$PWD above); otherwise these variables will be resolved before the docker run … "$IMAGE" /bin/bash -c "…" command itself is run.

How to setup google cloud Cloudbuild.yaml to replicate a jenkins job?

I have the following script that runs in my Jenkins job:
set +x
SERVICE_ACCOUNT=`cat "$GCLOUD_AUTH_FILE"`
docker login -u _json_key -p "${SERVICE_ACCOUNT}" https://gcr.io
set -x
docker pull gcr.io/$MYPROJECT/automation:master
docker run --rm --attach STDOUT -v "$(pwd)":/workspace -v "$GCLOUD_AUTH_FILE":/gcloud-auth/service_account_key.json -v /var/run/docker.sock:/var/run/docker.sock -e "BRANCH=master" -e "PROJECT=myproject" gcr.io/myproject/automation:master "/building/buildImages.sh" "myapp"
if [ $? -ne 0 ]; then
exit 1
fi
I am now trying to do this in cloudbuild.yaml so that I can run my script using my own automation image (which has a bunch of dependencies such as docker, the JDK, and pip installed) and mount my git folders in my workspace directory.
I tried putting my cloudbuild.yaml at the top level of my git repo and set it up like this:
steps:
- name: 'gcr.io/myproject/automation:master'
  volumes:
  - name: 'current-working-dir'
    path: /mydirectory
  args: ['bash', '-c', '/building/buildImages.sh', 'myapp']
timeout: 4000s
But this gives me an error saying:
invalid build: Volume "current-working-dir" is only used by one step
Just FYI, my script buildImages.sh copies folders and Dockerfiles, runs pip install, npm, and gradle commands, and then docker build commands (kind of an all-in-one solution).
What's the way to translate my script to cloudbuild.yaml?
try this in your cloudbuild.yaml:
steps:
- name: 'gcr.io/<your-project>/<image>'
  args: ['sh', '<your-script>.sh']
Using this I was able to pull the image from Google Container Registry that has my script, then run the script using sh. It didn't matter where the script was. I'm using Alpine as the base image in my Dockerfile.
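To mirror the Jenkins job a bit more closely, here is a hedged sketch (image name, env values, and script path are copied from the question, not verified). When the build has a source, each Cloud Build step already runs with it checked out under /workspace, which is also the default working directory, and the volumes: field is only meant for sharing extra data between two or more steps, which is why declaring it on a single step is rejected with the error above. Environment variables can be passed with env:.
steps:
- name: 'gcr.io/myproject/automation:master'
  env:
  - 'BRANCH=master'
  - 'PROJECT=myproject'
  args: ['bash', '-c', '/building/buildImages.sh myapp']
timeout: 4000s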

Docker: permission denied while trying to connect to Docker Daemon with local CircleCI build

I have a very simple config.yml:
version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')"
      - run: yarn
      - setup_remote_docker
      - run: docker build .
All it does: boot a node image, test if node is running, do a yarn install and a docker build.
My dockerfile is nothing special; it has a COPY and ENTRYPOINT.
When I run circleci build on my MacBook Air using Docker Native, I get the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix://[...]
If I change the docker build . command to: sudo docker build ., everything works as planned, locally, with circleci build.
However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally.
Is this something the CircleCI staff has to fix, or is there something I can do?
For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself.
In the very first step of the config.yml, I run this command:
if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
  echo "This is a local build. Enabling sudo for docker"
  echo sudo > ~/sudo
else
  echo "This is not a local build. Disabling sudo for docker"
  touch ~/sudo
fi
Afterwards, you can do this:
eval `cat ~/sudo` docker build .
Explanation:
The first snippet checks if the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine.
If true, it creates a file called sudo with contents sudo in the home directory.
If false, it creates a file called sudo with NO contents in the home directory.
The second snippet substitutes the contents of the ~/sudo file into the command line and evaluates the result together with the arguments you give afterwards. If the ~/sudo file contains "sudo", the command in this example becomes sudo docker build .; if it contains nothing, it becomes docker build ., with a leading space, which is ignored.
This way, both the local (circleci build) builds and remote builds will work.
To iterate on the answer of Jeff Huijsmans,
an alternative version is to use a Bash variable for docker:
- run:
    name: Set up docker
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo "export docker='sudo docker'" >> $BASH_ENV
      else
        echo "export docker='docker'" >> $BASH_ENV
      fi
Then you can use it in your config
- run:
    name: Verify docker
    command: $docker --version
You can see this in action in my test for my Dotfiles repository
Documentation about environment variables in CircleCI
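The same variable can then wrap the build step itself, since exports written to $BASH_ENV are sourced by the following run steps; for example:
- run:
    name: Build image
    command: $docker build .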
You might also solve your issue by running the docker image as root. Specify user: root under the image parameter:
...
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
        user: root
    steps:
      - checkout
      ...
...
