Use/pass system.accesstoken into Dockerfile as ENV var for npm auth token using PowerShell

I'm trying to automate a Docker build that needs to access a private npm registry on Azure.
Currently I manually grab a personal Azure DevOps PAT, base64 encode it and then save it to a file.
I then mount that file as a secret during the docker build.
I have this step in the Azure pipeline YAML that calls a PowerShell script:
steps:
- template: docker/steps/docker-login.yml@templates
  parameters:
    containerRegistry: ${{ parameters.containerRegistry }}
- pwsh: |
    ./scripts/buildAndPushContainerImage.ps1 `
      -containerRepository ${{ parameters.containerRepository }} `
      -branchName ${{ parameters.branchName }} `
      -version ${{ parameters.version }} `
      -action Build
  displayName: Docker build
And this in the PowerShell script that builds the image:
function Build-UI-Image {
    $npmTokenFilePath = Join-Path $buildContext "docker/secrets/npm_token.txt"
    if (-not (Test-Path $npmTokenFilePath)) {
        Write-Error "Missing file: $npmTokenFilePath"
    }
    docker build `
        -f $dockerfilePath `
        --secret id=npm_token,src=$npmTokenFilePath `
        --build-arg "BUILDKIT_INLINE_CACHE=1" `
        ...(rest of code)
}
And finally a Dockerfile with the value mounted as a secret
RUN --mount=type=secret,id=npm_token \
--mount=type=cache,sharing=locked,target=/tmp/yarn-cache <<EOF
export NPM_TOKEN=$(cat /run/secrets/npm_token)
yarn install --frozen-lockfile --silent --non-interactive --cache-folder /tmp/yarn-cache
unset NPM_TOKEN
EOF
I've read multiple articles about using the Azure built-in 'system.accesstoken' to authorise with private npm registries, but I'm not sure how to go about it in my scenario (as I am not using the Azure predefined tasks, and I'm using PowerShell, not bash).
I think I can add this to the pipeline YAML as the first step:
env:
  SYSTEM_ACCESSTOKEN: $(System.AccessToken)
But I'm not sure how I then pass that to the PowerShell build script, and ultimately get it into the Docker container as an ENV value that I can reference instead of the file.
Do I maybe need to add it as another --build-arg in the PowerShell script, like this?
--build-arg NPM_TOKEN=$(System.AccessToken)
And then if it was exposed as an ENV value inside the container, how would I reference it?
Would it just be there as NPM_TOKEN so that I don't need to do anything further?
Or would I need to base64 encode it and export it again?
I'm a bit out of my depth as I've never used a private npm registry before.
Appreciate any info or suggestions.
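For what it's worth, here is a minimal sketch of the wiring described above rather than a verified answer: System.AccessToken is a secret variable, so it has to be mapped onto the step explicitly via env, after which the script can read it as $env:SYSTEM_ACCESSTOKEN. The base64 line is an assumption that mirrors the manual flow (drop it if your .npmrc takes the raw token), and keeping the existing --secret mount avoids baking the token into an image layer the way a --build-arg or ENV would.
- pwsh: |
    # Hypothetical glue: write the pipeline token to the file the existing --secret mount expects
    $token = $env:SYSTEM_ACCESSTOKEN
    $encoded = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($token))  # mirrors the manual base64 step
    Set-Content -Path ./docker/secrets/npm_token.txt -Value $encoded -NoNewline
    ./scripts/buildAndPushContainerImage.ps1 `
      -containerRepository ${{ parameters.containerRepository }} `
      -branchName ${{ parameters.branchName }} `
      -version ${{ parameters.version }} `
      -action Build
  displayName: Docker build
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)  # secret variables are not exposed to scripts unless mapped here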

Related

How to correctly pass ssh key file from Jenkins credentials variable into to docker build command?

This question is a follow-up to this question:
How to pass jenkins credentials into docker build command?
I am getting the ssh key file from the Jenkins credentials store in my Groovy pipeline and passing it into the docker build command via --build-arg, so that I can check out and build artifacts from private git repos from within my Docker container.
The credentials store id is cicd-user, which works as expected for checking out my private repos from my Groovy Jenkinsfile:
checkout([$class: 'GitSCM',
    userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]
])
I access it and try to pass the same to docker build command:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
    sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=\$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
in Dockerfile I do
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But I am seeing the following issue:
Load key "/home/cicd-user/.ssh/id_rsa": invalid format
git@bitbucket.mycomp.co: Permission denied (publickey)
fatal: could not read from remote repository
In the past I have passed the ssh private key as a --build-arg from outside by cat'ing it, like below:
--build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)"
Should I do something similar?
--build-arg PRIV_KEY_FILE="$(cat $FILE)"
Any idea what might be going wrong, or where I should be looking to debug this correctly?
I ran into the same issue yesterday and I think I've come up with a workable solution.
Here are the basic steps I took, using the sshagent plugin to manage the ssh agent within the Jenkins job. You could probably use withCredentials as well, though that's not what I ended up finding success with.
The ssh agent (or alternatively the key) can be made available to specific build steps using the docker build command's --ssh flag. (Feature reference) It's important to note that for this to work (at the current time) you need to set DOCKER_BUILDKIT=1. If you forget to do this, it seems to ignore the configuration and the ssh connection will fail. Once that's set, the ssh agent can be forwarded into the build steps that need it.
A cut-down look at the pipeline:
pipeline {
    agent {
        // ...
    }
    environment {
        // Necessary to enable Docker buildkit features such as --ssh
        DOCKER_BUILDKIT = "1"
    }
    stages {
        // other stages
        stage('Docker Build') {
            steps {
                // Start ssh agent and add the private key(s) that will be needed in docker build
                sshagent(['credentials-id-of-private-key']) {
                    // Make the default ssh agent (the one configured above) accessible in the build
                    sh 'docker build --ssh default .'
                }
            }
        }
        // other stages
    }
}
In the Dockerfile it's necessary to explicitly give the lines that need it access to the ssh agent. This can be done by including --mount=type=ssh in the relevant RUN command.
For me, this looked roughly like this:
FROM node:14
# Retrieve bitbucket host key
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
...
# Mount ssh agent for install
RUN --mount=type=ssh npm i
...
With this configuration, the npm install was able to install a private git repo stored on Bitbucket by utilizing the SSH private key within docker build via sshagent.
After spending a week on this, I found a reasonably clean way to do it.
Just add
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
to your Dockerfile and it will be able to install private packages as well, if it needs to.
Then pass your GIT_ACCESS_TOKEN (you can create one in your GitHub account settings with the proper permissions) when you build your image, like:
docker build --build-arg GIT_ACCESS_TOKEN=yourtoken -t imageNameAndTag .
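One small caveat worth noting: a --build-arg is only visible inside the Dockerfile if a matching ARG is declared before the line that uses it, so the snippet above assumes something like this near the top of the Dockerfile:
# declare the build arg before the git config line that references it
ARG GIT_ACCESS_TOKEN
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"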

Copy files from GCS into a Cloud Run docker container during build

I am trying to use gsutil to copy a file from GCS into a Cloud Run container during the build step.
The steps I have tried:
RUN pip install gsutil
RUN gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts
The error:
ServiceException: 401 Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.
CommandException: 1 file/object could not be transferred.
The command '/bin/sh -c gsutil -m cp -r gs://BUCKET_NAME $APP_HOME/artefacts' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
The service account (the default compute and cloudbuild one) does have access to GCS, and I have also tried gsutil config -a and various other flags with no success!
I am not sure exactly how I should authenticate to successfully access the bucket.
Here is my GitHub Actions job:
jobs:
  build:
    name: Build image
    runs-on: ubuntu-latest
    env:
      BRANCH: ${GITHUB_REF##*/}
      SERVICE_NAME: ${{ secrets.SERVICE_NAME }}
      PROJECT_ID: ${{ secrets.PROJECT_ID }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # Setup gcloud CLI
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
          project_id: ${{ secrets.PROJECT_ID }}
          export_default_credentials: true
      # Download the file locally
      - name: Get_file
        run: |-
          gsutil cp gs://BUCKET_NAME/path/to/file .
      # Build docker image
      - name: Image_build
        run: |-
          docker build -t gcr.io/$PROJECT_ID/$SERVICE_NAME .
      # Configure docker to use the gcloud command-line tool as a credential helper
      - run: |
          gcloud auth configure-docker -q
      # Push image to Google Container Registry
      - name: Image_push
        run: |-
          docker push gcr.io/$PROJECT_ID/$SERVICE_NAME
You have to set 3 secrets:
SERVICE_ACCOUNT_KEY: which is your service account key file
SERVICE_NAME: the name of your container
PROJECT_ID: the project where to deploy your image
Because you download the file locally, the file is present in the Docker build context. Then, simply COPY it in the Dockerfile and do what you want with it.
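As a hedged illustration of that COPY step (the destination path is an assumption, and the source name is whatever gs://BUCKET_NAME/path/to/file downloads as):
# the file fetched by the Get_file step sits at the root of the build context
COPY file /app/artefacts/file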
UPDATE
If you want to do this inside the Docker build itself, you can achieve it like this:
Dockerfile
FROM google/cloud-sdk:alpine as gcloud
WORKDIR /app
ARG KEY_FILE_CONTENT
RUN echo $KEY_FILE_CONTENT | gcloud auth activate-service-account --key-file=- \
&& gsutil cp gs://BUCKET_NAME/path/to/file .
....
FROM <FINAL LAYER>
COPY --from=gcloud /app/<myFile> .
....
The Docker build command
docker build --build-arg KEY_FILE_CONTENT="YOUR_KEY_FILE_CONTENT" \
-t gcr.io/$PROJECT_ID/$SERVICE_NAME .
YOUR_KEY_FILE_CONTENT depends on your environment. Here are some ways to inject it:
On GitHub Actions: ${{ secrets.SERVICE_ACCOUNT_KEY }}
In your local environment: $(cat my_key.json)
I see you tagged Cloud Build, so you can use a step like this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/results.zip', 'previous_results.zip']
# operations that use previous_results.zip and produce new_results.zip
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'new_results.zip', 'gs://mybucket/results.zip']
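To connect this back to the original question, a hedged sketch (bucket and image names assumed): a file that a gsutil step drops into /workspace is still there for a later docker build step, because /workspace persists across Cloud Build steps, so the Dockerfile can simply COPY it.
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://BUCKET_NAME/path/to/file', '.']
  # /workspace (the default working directory) is shared with the next step
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']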

Google Cloud Build + Google Secret Manager Substitution Problems

we have a repository that needs to go get a private repo. To do this, we are using an SSH key to access the private repo/module.
We are storing this SSH key using Google Secret Manager and passing it to Docker using the build-arg flag. Now, when we do this locally, the Dockerfile builds and runs as intended. This is the command we use for a local build:
export SSH_PRIVATE_KEY="$(gcloud secrets versions access latest --secret=secret-data)" && \
docker build --build-arg SSH_PRIVATE_KEY -t my-image .
However, when we try to move this setup to Google Cloud Build, we run into 403 forbidden errors from Bitbucket, which leads me to believe that the SSH key is either not being read or formatted correctly.
The full 403 error is:
https://api.bitbucket.org/2.0/repositories/my-repo?fields=scm: 403 Forbidden
Step #0 - "Build": server response: Access denied. You must have write or admin access.
What is even stranger is that when I run the Cloud Build local emulator, it works fine using this command: cloud-build-local --config=builder/cloudbuild-prod.yaml --dryrun=false .
I've tried many different formats and methods, so out of desperation I am asking the community for help. What could be the problem?
Here is our cloudbuild.yaml:
steps:
# Get secret
- id: 'Get Secret'
  name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gcloud secrets versions access latest --secret=secret-data > /workspace/SSH_PRIVATE_KEY.txt
# Build
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt) &&
    docker build --build-arg SSH_PRIVATE_KEY -t my-image .
With Cloud Build, when you want to get a local Linux variable, and not a substitution variable, you have to escape the $ with another $. Look at this:
# Build
- id: 'Build'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    export SSH_PRIVATE_KEY=$(cat /workspace/SSH_PRIVATE_KEY.txt)
    docker build --build-arg $$SSH_PRIVATE_KEY -t my-image .
The SSH_PRIVATE_KEY is prefixed by $$ to say: don't look at the substitution variable, look at the Linux variable.
I also removed the && at the end of the export line. The literal block scalar | means: run each command in succession, with a line break delimiting each command.
Thanks for all the help! This one was pretty weird. Turns out it's not an issue with Cloud Build or Secret Manager but the Dockerfile I was using.
Instead of setting GOPRIVATE with the command in the Dockerfile below, I was using a statement like RUN export GOPRIVATE="bitbucket.org/odds".
In case anyone runs into something like this again, here's the full Dockerfile that works.
FROM golang:1.15.1
WORKDIR $GOPATH/src/bitbucket.org/gml/my-srv
ENTRYPOINT ["./my-srv"]
ARG CREDENTIALS
RUN git config \
--system \
url."https://${CREDENTIALS}#bitbucket.org/".insteadOf \
"https://bitbucket.org/"
RUN go env -w GOPRIVATE="bitbucket.org/my-team"
COPY . .
RUN make build
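For completeness, a hedged example of how the matching local build invocation might look (the secret name is an assumption; in practice it would be whichever Secret Manager entry holds the Bitbucket HTTPS credentials): the token is fetched from Secret Manager and handed to the CREDENTIALS build arg declared above.
docker build \
  --build-arg CREDENTIALS="$(gcloud secrets versions access latest --secret=secret-data)" \
  -t my-srv .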

How to setup google cloud Cloudbuild.yaml to replicate a jenkins job?

I have the following script that's run in my Jenkins job:
set +x
SERVICE_ACCOUNT=`cat "$GCLOUD_AUTH_FILE"`
docker login -u _json_key -p "${SERVICE_ACCOUNT}" https://gcr.io
set -x
docker pull gcr.io/$MYPROJECT/automation:master
docker run --rm --attach STDOUT -v "$(pwd)":/workspace -v "$GCLOUD_AUTH_FILE":/gcloud-auth/service_account_key.json -v /var/run/docker.sock:/var/run/docker.sock -e "BRANCH=master" -e "PROJECT=myproject" gcr.io/myproject/automation:master "/building/buildImages.sh" "myapp"
if [ $? -ne 0 ]; then
exit 1
fi
I am now trying to do this in cloudbuild.yaml so that I can run my script using my own automation image (which has a bunch of dependencies like docker, jdk, pip, etc. installed), and mount my git folders in my workspace directory.
I tried putting my cloudbuild.yaml at the top level of my git repo and set it up like this:
steps:
- name: 'gcr.io/myproject/automation:master'
  volumes:
  - name: 'current-working-dir'
    path: /mydirectory
  args: ['bash', '-c', '/building/buildImages.sh', 'myapp']
timeout: 4000s
But this gives me an error saying:
invalid build: Volume "current-working-dir" is only used by one step
Just FYI, my script buildImages.sh copies folders and Dockerfiles, runs pip install, npm and gradle commands, and then docker build commands (kind of an all-in-one solution).
What's the way to translate my script to cloudbuild.yaml?
try this in your cloudbuild.yaml:
steps:
- name: 'gcr.io/<your-project>/<image>'
  args: ['sh', '<your-script>.sh']
Using this I was able to pull the image from Google Container Registry that has my script, then run the script using sh. It didn't matter where the script is. I'm using alpine as the base image in my Dockerfile.

New Docker Build secret information for use with aws cli

I would like to use the new --secret flag in order to retrieve something from AWS with its CLI during the build process.
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN --mount=type=secret,id=mysecret,dst=/root/.aws cat /root/.aws
I can see the credentials when running the following command:
docker build --no-cache --progress=plain --secret id=mysecret,src=%USERPROFILE%/.aws/credentials .
However, if I adjust the command that is run, the aws CLI cannot find the credentials file and asks me to run aws configure:
RUN --mount=type=secret,id=mysecret,dst=/root/.aws aws ssm get-parameter
Any ideas?
The following works:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN --mount=type=secret,id=aws,dst=/aws export AWS_SHARED_CREDENTIALS_FILE=/aws \
    && aws ssm get-parameter ...
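For completeness, a hedged usage sketch (paths assumed, matching the earlier command): the local credentials file is mounted under the same secret id, and BuildKit must be enabled for --secret to work.
REM DOCKER_BUILDKIT=1 (or a buildx builder) is required for --secret
docker build --no-cache --progress=plain --secret id=aws,src=%USERPROFILE%/.aws/credentials .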
