Cloud Build, Container Registry, Cloud Run: Run tests without exposing env var - docker

Cloud Build does the following:
Build the image from the Dockerfile (see Dockerfile below)
Push the image to Container Registry
Update the service in Cloud Run
My issue is the following:
Since I run my tests at build time, I need my MONGODB_URI secret at build time, but I've read that using --build-arg is not a safe way to pass secrets.
I could run npm install and npm run test in a separate Cloud Build step instead, but that would lengthen the build, since npm install would have to run twice.
Is there a way to run npm install only once without exposing secrets?
Dockerfile
FROM node:16
COPY . ./
WORKDIR /
RUN npm install
ARG env
ARG mongodb_uri
ENV ENVIRONMENT=$env
ENV DB_REMOTE_PROD=$mongodb_uri
RUN npm run test
RUN npm run build
CMD ["node", "./build/index.js"]
And my cloudbuild.yaml config:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - >-
        docker build --no-cache -t
        $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
        --build-arg env=staging --build-arg mongodb_uri=$$MONGODB_URI -f
        Dockerfile .
    id: Build
    entrypoint: bash
    secretEnv:
      - MONGODB_URI
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - '-c'
      - >-
        gcloud run services update $_SERVICE_NAME --platform=managed
        --image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID
        --region=$_DEPLOY_REGION --update-env-vars=ENVIRONMENT=staging
        --update-env-vars=DB_REMOTE_PROD=$$MONGODB_URI --quiet
    id: Deploy
    entrypoint: bash
    secretEnv:
      - MONGODB_URI
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
  logging: CLOUD_LOGGING_ONLY
substitutions:
  _TRIGGER_ID: 44b16efe-0219-41af-b32b-9b98438728c3
  _GCR_HOSTNAME: eu.gcr.io
  _PLATFORM: managed
  _SERVICE_NAME: app-staging
  _DEPLOY_REGION: europe-southwest1
availableSecrets:
  secretManager:
    - versionName: projects/PROJECT_ID/secrets/mongodb_app_staging/versions/1
      env: MONGODB_URI
With my method, the secret is not only exposed inside the container image (since I pass it as a build arg), but it is also visible in plain text to every user with access to the Cloud Run service...

Your container build step is not that bad. You provide your secret to the build, but it is not visible in the build config; it is only a reference to Secret Manager.
You can hardly do better. One option would be to access the secret directly inside the Dockerfile, but I'm not sure the security gain is worth the added Dockerfile complexity. (If you want more details, let me know and I can share a sample.)
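For reference, here is a minimal sketch of that idea using a BuildKit secret mount (this assumes the Docker version in the builder image supports BuildKit; the file name and secret id are illustrative). The secret is exposed as a file to a single RUN step and is never stored in an image layer:
# syntax=docker/dockerfile:1
FROM node:16
WORKDIR /app
COPY . .
RUN npm install
# The secret is only visible to this RUN step, at /run/secrets/<id>;
# it does not persist in any layer or in the image history.
RUN --mount=type=secret,id=mongodb_uri \
    DB_REMOTE_PROD="$(cat /run/secrets/mongodb_uri)" npm run test
RUN npm run build
CMD ["node", "./build/index.js"]
The build step would then write the secret to a file and pass it with --secret instead of --build-arg:
echo -n "$$MONGODB_URI" > /workspace/mongodb_uri.txt
DOCKER_BUILDKIT=1 docker build --secret id=mongodb_uri,src=/workspace/mongodb_uri.txt -t $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA .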
About Cloud Run, you are totally right: exposing a secret in plain text in an environment variable is not acceptable.
For that, you can use the Cloud Run - Secret Manager integration. Instead of this:
gcloud run services update $_SERVICE_NAME --platform=managed
--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
--labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID
--region=$_DEPLOY_REGION --update-env-vars=ENVIRONMENT=staging
--update-env-vars=DB_REMOTE_PROD=$$MONGODB_URI --quiet
You can do this instead (and you no longer need to set the secret as an env var on your Cloud Build step):
gcloud run services update $_SERVICE_NAME --platform=managed
--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
--labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID
--region=$_DEPLOY_REGION --update-env-vars=ENVIRONMENT=staging
--update-secrets=DB_REMOTE_PROD=mongodb_app_staging:1 --quiet
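Note that the Cloud Run runtime service account needs read access to the secret for this integration to work. Assuming the service runs as the default compute service account (PROJECT_NUMBER below is a placeholder for your project number), the one-time grant looks like this:
gcloud secrets add-iam-policy-binding mongodb_app_staging \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"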

Related

Gitlab CI npm cannot resolve module

Totally new to GitLab and CI in general, so apologies for the lack of understanding. I have a repo, which is NuxtJS based, with a Dockerfile. The end goal of the pipeline is to build and push this repo to my Docker account. The Dockerfile is relatively straightforward, containing an npm install and npm run build. I'm using a custom Docker image as my runner, based on docker:20.10.17-dind-alpine3.16 with ansible, terraform and kubectl installed.
When building the project's Docker image on my local machine, I receive no issues; however, in GitLab, when running the npm run build command, I get the following error:
Module not found: Error: Can't resolve '../node_modules/vue-confirm-dialog' in '/usr/src/nuxt-app/plugins'
Here is my yml file:
stages:
  - docker

docker:
  stage: docker
  image: <my-runner-image>
  services:
    - "docker:dind"
  before_script:
    - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD
  script:
    - docker build -t <my-repo> .
    - docker push <my-repo>
Any suggestions are greatly appreciated
--EDIT--
As requested, here is the project's Dockerfile:
FROM node:lts-alpine3.15
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependencies
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
CMD [ "npm", "start" ]

cloud build pass secret env to dockerfile

I am using Google Cloud Build to build a Docker image and deploy it to Cloud Run. The module has private dependencies on GitHub. In the cloudbuild.yaml file I can access secret keys, for example the GitHub token, but I don't know the correct and secure way to pass this token to the Dockerfile.
I was following this official guide, but it only works in the cloudbuild.yaml scope and not in the Dockerfile: Accessing GitHub from a build via SSH keys
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "."]
  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA"]
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: [
      "run", "deploy", "$REPO_NAME",
      "--image", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA",
      "--platform", "managed",
      "--region", "us-east1",
      "--allow-unauthenticated",
      "--use-http2",
    ]
images:
  - gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/GITHUB_USER/versions/1
      env: "GITHUB_USER"
    - versionName: projects/$PROJECT_ID/secrets/GITHUB_TOKEN/versions/1
      env: "GITHUB_TOKEN"
Dockerfile
# [START cloudrun_grpc_dockerfile]
# [START run_grpc_dockerfile]
FROM golang:buster as builder
# Create and change to the app directory.
WORKDIR /app
# Create /root/.netrc cred github
RUN echo machine github.com >> /root/.netrc
RUN echo login "GITHUB_USER" >> /root/.netrc
RUN echo password "GITHUB_PASSWORD" >> /root/.netrc
# Config Github, this create file /root/.gitconfig
RUN git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"
# GOPRIVATE
RUN go env -w GOPRIVATE=github.com/org/repo
# Do I need to remove the /root/.netrc file? I do not want this information to be propagated and seen by third parties.
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
# RUN go build -mod=readonly -v -o server ./cmd/server
RUN go build -mod=readonly -v -o server
# Use the official Debian slim image for a lean production container.
# https://hub.docker.com/_/debian
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM debian:buster-slim
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
# [END run_grpc_dockerfile]
# [END cloudrun_grpc_dockerfile]
After trying for 2 days I have not found a solution; the simplest workaround was to generate the vendor folder, commit it to the repository, and avoid go mod download.
You have several ways to do this.
With Docker, a build runs in an isolated environment (that's the principle of isolation), so you have no access to your environment variables from inside the build process.
To solve that, you can use build args and put your secret values in those parameters.
But there is a trap: you have to use bash code, not the built-in step syntax, in Cloud Build. Let me show you:
# Doesn't work: exec-form args are not processed by a shell,
# so the secret env vars are never expanded
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "--build-arg", "GITHUB_USER=$GITHUB_USER", "--build-arg", "GITHUB_TOKEN=$GITHUB_TOKEN", "."]

# Working version
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  entrypoint: bash
  args:
    - -c
    - |
      docker build -t gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA --build-arg GITHUB_USER=$$GITHUB_USER --build-arg GITHUB_TOKEN=$$GITHUB_TOKEN .
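On the Dockerfile side, a minimal sketch of how those build args could feed the .netrc file (adapted from the question's Dockerfile; note that build args still remain visible in the history of the builder stage, which is why the multi-stage build matters here):
FROM golang:buster as builder
WORKDIR /app
ARG GITHUB_USER
ARG GITHUB_TOKEN
# Write real credentials from the build args instead of literal strings
RUN printf "machine github.com\nlogin %s\npassword %s\n" "${GITHUB_USER}" "${GITHUB_TOKEN}" > /root/.netrc
RUN go env -w GOPRIVATE=github.com/org/repo
COPY go.* ./
RUN go mod download
# Remove the credentials file once dependencies are fetched
RUN rm /root/.netrc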
You can also perform the actions outside of the Dockerfile. It's roughly the same principle: load a container, perform an operation, load another container, and continue.
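A sketch of that alternative (a hypothetical extra step that vendors the private dependencies into /workspace, which persists between steps, so the docker build itself needs no credentials at all):
- name: golang:buster
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  entrypoint: bash
  args:
    - -c
    - |
      # Credentials exist only inside this step's container
      printf "machine github.com\nlogin %s\npassword %s\n" "$$GITHUB_USER" "$$GITHUB_TOKEN" > /root/.netrc
      go env -w GOPRIVATE=github.com/org/repo
      # vendor/ is written under /workspace and survives into the next step
      go mod vendor
- name: gcr.io/cloud-builders/docker
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "."]
The Dockerfile would then build with go build -mod=vendor and could drop the .netrc logic entirely.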

GCP Cloud build pass secret to docker arg

I intend to pass my npm token to GCP Cloud Build,
so that I can use it in a multi-stage build to install private npm packages.
I have the following abridged Dockerfile:
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
RUN echo "NPM_TOKEN:: ${NPM_TOKEN}"
and the following abridged cloudbuild.yaml:
---
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: [ '-c', 'gcloud secrets versions access latest --secret=my-npm-token > npm-token.txt' ]
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - "-t"
      - gcr.io/my-project/my-program
      - "."
      - "--build-arg NPM_TOKEN= < npm-token.txt"
      - "--no-cache"
I based my cloudbuild.yaml on the documentation, but it seems I am unable to put two and two together, as the expression "--build-arg NPM_TOKEN= < npm-token.txt" does not work.
I have tested the Dockerfile when I pass in the npm token directly, and it works. I simply have trouble passing a token from gcloud secrets as a build argument to Docker.
Help is greatly appreciated!
Your goal is to get the secret file's contents into the build argument. Therefore you have to read the file content using either NPM_TOKEN="$(cat npm-token.txt)" or NPM_TOKEN="$(< npm-token.txt)".
- name: gcr.io/cloud-builders/docker
  entrypoint: 'bash'
  args: [ '-c', 'docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN="$(cat npm-token.txt)" --no-cache' ]
Note: gcr.io/cloud-builders/docker uses the exec entrypoint form, which is why you set entrypoint to bash, so that the command substitution is evaluated.
Also note that you saved the secret into the build workspace (/workspace/..). This also allows you to copy the secret as a file into your container:
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
COPY npm-token.txt .
RUN echo "NPM_TOKEN:: $(cat npm-token.txt)"
I wouldn't write your second step like you did, but like this:
- name: gcr.io/cloud-builders/docker
  entrypoint: "bash"
  args:
    - "-c"
    - |
      docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN=$(cat npm-token.txt) --no-cache

Env vars lost when building docker image from Gitlab CI

I'm trying to build my React / NodeJS project using Docker and GitLab CI.
When I build my images manually, I use a .env file containing env vars, and everything is fine.
docker build --no-cache -f client/docker/local/Dockerfile . -t espace_client_client:local
docker build --no-cache -f server/docker/local/Dockerfile . -t espace_client_api:local
But when deploying with GitLab, the image builds successfully, but when I run it, the env vars are empty in the client.
Here is my gitlab CI:
image: node:10.15
variables:
  REGISTRY_PACKAGE_CLIENT_NAME: registry.gitlab.com/company/espace_client/client
  REGISTRY_PACKAGE_API_NAME: registry.gitlab.com/company/espace_client/api
  REGISTRY_URL: https://registry.gitlab.com
  DOCKER_DRIVER: overlay
  # Client Side
  REACT_APP_API_URL: https://api.espace-client.company.fr
  REACT_APP_DB_NAME: company
  REACT_APP_INFLUX: https://influx-prod.company.fr
  REACT_APP_INFLUX_LOGIN: admin
  REACT_APP_HOUR_GMT: 2
stages:
  - publish
docker-push-client:
  stage: publish
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY_URL
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build --no-cache -f client/docker/prod/Dockerfile . -t $REGISTRY_PACKAGE_CLIENT_NAME:latest
    - docker push $REGISTRY_PACKAGE_CLIENT_NAME:latest
Here is the Dockerfile for the client
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
Why is there such a difference between the two processes?
According to your answer in the comments, GitLab CI/CD environment variables don't solve your issue. The GitLab CI environment exists only in the context of the GitLab Runner that builds and/or deploys your app.
So, if you want to propagate env vars to the app, there are several ways to deliver variables from .gitlab-ci.yml to your app:
ENV instruction in the Dockerfile
E.g.
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV=production
ENV REACT_APP_API_URL=https://api.espace-client.company.fr
ENV REACT_APP_DB_NAME=company
ENV REACT_APP_INFLUX=https://influx-prod.company.fr
ENV REACT_APP_INFLUX_LOGIN=admin
ENV REACT_APP_HOUR_GMT=2
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
docker-compose environment directive
web:
  environment:
    - NODE_ENV=production
    - REACT_APP_API_URL=https://api.espace-client.company.fr
    - REACT_APP_DB_NAME=company
    - REACT_APP_INFLUX=https://influx-prod.company.fr
    - REACT_APP_INFLUX_LOGIN=admin
    - REACT_APP_HOUR_GMT=2
docker run -e
(Not your case, just for information)
docker run -e REACT_APP_DB_NAME="company" <image>
P.S. Try GitLab CI variables
There is a convenient way to store variables outside of your code: custom environment variables.
You can set them up easily from the UI. That can be very powerful, as they can be used for scripting without the need to specify the value itself.
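Keep in mind that if the client is built with create-react-app, the REACT_APP_* values are baked in at npm run build time, inside the docker build. So a GitLab variable (from the UI or from .gitlab-ci.yml) still has to be forwarded explicitly into the build; a minimal sketch of that wiring, using one variable from the question:
# .gitlab-ci.yml
script:
  - docker build --build-arg REACT_APP_API_URL="$REACT_APP_API_URL" --no-cache -f client/docker/prod/Dockerfile . -t $REGISTRY_PACKAGE_CLIENT_NAME:latest
# Dockerfile
ARG REACT_APP_API_URL
ENV REACT_APP_API_URL=$REACT_APP_API_URL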

gitlab ci e2e test against nginx docker image

I'm trying to run e2e tests against a Docker image, which is based on the official nginx image and built in a previous step.
My idea was to make it available via a service, like this:
e2e:
  stage: e2e
  image: weboaks/node-karma-protractor-chrome:alpine
  services:
    - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      alias: app
  before_script:
    - yarn
    - yarn run webdriver:update --standalone
  script:
    - yarn run e2e:ci
The Dockerfile of the image linked as a service looks like this:
FROM nginx:1.15-alpine
RUN rm -rf /usr/share/nginx/html/* && apk add --no-cache -vvv bash
ADD deploy/nginx/conf.d /etc/nginx/conf.d
ADD dist /usr/share/nginx/html
But it seems that the app isn't available at http://app.
Am I missing something, or is there another approach for testing against an already built image?
When I run the image locally with docker run -p 80:80 local-test, or deploy it to a server, everything works as expected.
