cloud build pass secret env to dockerfile - docker

I am using Google Cloud Build to build a Docker image and deploy it to Cloud Run. The module has private dependencies on GitHub. In the cloudbuild.yaml file I can access secret keys, for example the GitHub token, but I don't know the correct and secure way to pass this token to the Dockerfile.
I was following the official guide Accessing GitHub from a build via SSH keys, but it only works within the cloudbuild.yaml scope and not in the Dockerfile.
cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "."]
- name: gcr.io/cloud-builders/docker
  args: ["push", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA"]
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: gcloud
  args: [
    "run", "deploy", "$REPO_NAME",
    "--image", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA",
    "--platform", "managed",
    "--region", "us-east1",
    "--allow-unauthenticated",
    "--use-http2",
  ]
images:
- gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/GITHUB_USER/versions/1
    env: "GITHUB_USER"
  - versionName: projects/$PROJECT_ID/secrets/GITHUB_TOKEN/versions/1
    env: "GITHUB_TOKEN"
Dockerfile
# [START cloudrun_grpc_dockerfile]
# [START run_grpc_dockerfile]
FROM golang:buster as builder
# Create and change to the app directory.
WORKDIR /app
# Create /root/.netrc cred github
RUN echo machine github.com >> /root/.netrc
RUN echo login "GITHUB_USER" >> /root/.netrc
RUN echo password "GITHUB_PASSWORD" >> /root/.netrc
# Configure Git; this creates the file /root/.gitconfig
RUN git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"
# GOPRIVATE
RUN go env -w GOPRIVATE=github.com/org/repo
# Do I need to remove the /root/.netrc file? I do not want this information to be propagated and seen by third parties.
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
# RUN go build -mod=readonly -v -o server ./cmd/server
RUN go build -mod=readonly -v -o server
# Use the official Debian slim image for a lean production container.
# https://hub.docker.com/_/debian
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM debian:buster-slim
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
# [END run_grpc_dockerfile]
# [END cloudrun_grpc_dockerfile]
After trying for two days I have not found a solution; the simplest thing I could do was to generate the vendor folder, commit it to the repository, and avoid go mod download.
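For reference, a minimal sketch of that vendoring workaround (run locally, where the GitHub credentials already work; the repository path is only my example):

go env -w GOPRIVATE=github.com/org/repo
go mod vendor        # copies all dependencies into ./vendor
git add vendor go.mod go.sum
git commit -m "Vendor private dependencies"

The Dockerfile can then skip go mod download entirely and build with go build -mod=vendor.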

You have several ways to do this.
With Docker, when you run a build, it runs in an isolated environment (that's the principle of isolation), so you don't have access to your environment variables from inside the build process.
To solve that, you can use build args and put your secret values in those parameters.
But there is a trap: you have to use bash code and not the built-in step syntax in Cloud Build. Let me show you:
# Doesn't work
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "--build-arg", "GITHUB_USER=$GITHUB_USER", "--build-arg", "GITHUB_TOKEN=$GITHUB_TOKEN", "."]

# Working version
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  entrypoint: bash
  args:
    - -c
    - |
      docker build -t gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA --build-arg GITHUB_USER=$$GITHUB_USER --build-arg GITHUB_TOKEN=$$GITHUB_TOKEN .
(Note that the flag is --build-arg, passed once per value; the secret variables are only expanded when a shell runs the command, hence the bash entrypoint and the $$ escaping.)
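The Dockerfile also has to declare and consume those build args. A minimal sketch of what the builder stage could look like, assuming the arg names GITHUB_USER and GITHUB_TOKEN from above (keep in mind that values passed with --build-arg can still be recovered from the image history, so BuildKit secret mounts are the more robust option):

FROM golang:buster as builder
WORKDIR /app

# Declare the args passed with --build-arg
ARG GITHUB_USER
ARG GITHUB_TOKEN

# Let go fetch this module path directly instead of through the public proxy
RUN go env -w GOPRIVATE=github.com/org/repo

COPY go.* ./

# Write the credentials, download the modules, and delete the file in the same layer
RUN printf 'machine github.com\nlogin %s\npassword %s\n' "$GITHUB_USER" "$GITHUB_TOKEN" > /root/.netrc && \
    go mod download && \
    rm /root/.netrc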
You can also perform the actions outside of the Dockerfile. It's roughly the same idea: load a container, perform an operation, load another container, and continue.
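For example (a rough, untested sketch): fetch the private dependencies in a Go step that has the secrets, then let the Dockerfile build from the already-vendored workspace:

- name: golang:buster
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  entrypoint: bash
  args:
    - -c
    - |
      printf 'machine github.com\nlogin %s\npassword %s\n' "$$GITHUB_USER" "$$GITHUB_TOKEN" > ~/.netrc
      go env -w GOPRIVATE=github.com/org/repo
      go mod vendor
      rm ~/.netrc
- name: gcr.io/cloud-builders/docker
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "."]

Cloud Build steps share the /workspace directory, so the vendor folder produced by the first step is visible to the docker build step, and the Dockerfile can build with go build -mod=vendor without ever seeing the token.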

Related

Question regarding configuration of Next JS on Cloud Run with Docker Images

I have a question regarding an approach and how secure it is. I have a Next.js app which I push to Cloud Run via GitHub Actions. In GitHub Actions I have defined secrets, which I pass via the GitHub Actions YAML file to Docker as build secrets. In Docker I turn them into environment variables at build time, and I use next.config.js to make them available to the app at build time. Here are my files.
GitHub Actions YAML file
name: nextjs-cloud-run
on:
  push:
    branches:
      - main
env:
  GCP_PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
  GCP_REGION: europe-west1
  # project-name but it can be anything you want
  REPO_NAME: some-repo-name
jobs:
  build-and-deploy:
    name: Setup, Build, and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # This step is where our service account will be authenticated
      - uses: google-github-actions/setup-gcloud@v0.2.0
        with:
          project_id: ${{ secrets.GCP_PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEYBASE64 }}
          service_account_email: ${{ secrets.GCP_SA_EMAIL }}
      - name: Enable the necessary APIs and enable docker auth
        run: |-
          gcloud services enable containerregistry.googleapis.com
          gcloud services enable run.googleapis.com
          gcloud --quiet auth configure-docker
      - name: Build, tag image & push
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: "gcr.io/${{ secrets.GCP_PROJECT_ID }}/collect-opinion-frontend:${{ github.sha }}"
          secrets: |
            "NEXT_PUBLIC_STRAPI_URL=${{ secrets.NEXT_PUBLIC_STRAPI_URL }}"
            "NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }}"
            "NEXTAUTH_URL=${{ secrets.NEXTAUTH_URL }}"
      - name: Deploy Image
        run: |-
          gcloud components install beta --quiet
          gcloud beta run deploy $REPO_NAME --image gcr.io/$GCP_PROJECT_ID/$REPO_NAME:$GITHUB_SHA \
            --project $GCP_PROJECT_ID \
            --platform managed \
            --region $GCP_REGION \
            --allow-unauthenticated \
            --quiet
This is my Dockerfile for Next.js
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
# get the environment vars from secrets
RUN --mount=type=secret,id=NEXT_PUBLIC_STRAPI_URL \
    --mount=type=secret,id=NEXTAUTH_SECRET \
    --mount=type=secret,id=NEXTAUTH_URL \
    export NEXT_PUBLIC_STRAPI_URL=$(cat /run/secrets/NEXT_PUBLIC_STRAPI_URL) && \
    export NEXTAUTH_SECRET=$(cat /run/secrets/NEXTAUTH_SECRET) && \
    export NEXTAUTH_URL=$(cat /run/secrets/NEXTAUTH_URL) && \
    yarn build
# RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
My next.config.js file
// next.config.js
module.exports = {
  // ... rest of the configuration.
  output: "standalone",
  env: {
    // Will only be available on the server side
    NEXTAUTH_SECRET: process.env.NEXTAUTH_SECRET,
    NEXTAUTH_URL: process.env.NEXTAUTH_URL, // Pass through env variables
  },
};
My question is: does this create a security issue, in that the environment variables could somehow be accessed from the client?
According to the Next.js documentation that should not be the case, or at least that's how I understand it. Snippet from the site:
Note: environment variables specified in this way will always be included in the JavaScript bundle, prefixing the environment variable name with NEXT_PUBLIC_ only has an effect when specifying them through the environment or .env files.
I would appreciate it if you could advise me on this.
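A rough way to sanity-check this locally (just a sketch, not part of the workflow above; the literal string is a placeholder for the real secret value):

# after running `yarn build` with the secrets set
grep -r "placeholder-secret-value" .next/static \
  && echo "value is present in the client-side assets" \
  || echo "value not found in the client-side assets"

This at least answers empirically whether a given value ended up in the assets that are shipped to the browser.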

Successfully deployed dockerized NextJS through GitHub Workflow to Github page, open website Get 200 but response weird

I dockerized a Next.js repository and deployed it to GitHub Pages, following the tutorial step by step (link here).
Although the deployment succeeds and the website GET returns 200, the GET has the wrong response, as shown below; it should return the website UI.
Can anyone help? :)
I fixed it after changing the Dockerfile to:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Set label maintainer, version & description
LABEL maintainer="s982535@gmail.com"
LABEL version="0.1.0"
LABEL description="Unofficial Next.js + Typescript + Tailwind CSS starter with a latest package"
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN yarn && yarn cache clean
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/out ./
and the .yml file to:
name: Docker Image CI
on:
  push:
    branches: [ "master" ]
jobs:
  build-and-deploy:
    concurrency: ci-${{ github.ref }} # Recommended if you intend to make multiple deployments in quick succession.
    runs-on: ubuntu-latest
    steps:
      - name: Checkout 🛎️
        uses: actions/checkout@v3
      - name: Build and export
        uses: docker/build-push-action@v3
        with:
          context: .
          tags: myimage:latest
          outputs: type=local,dest=build
          secrets: |
            GIT_AUTH_TOKEN=${{ secrets.ACCESS_TOKEN }}
      - name: Deploy 🚀
        uses: JamesIves/github-pages-deploy-action@v4
        with:
          folder: build/app # The folder the action should deploy.
          token: ${{ secrets.ACCESS_TOKEN }}
          BRANCH: gh-pages
          # clean: true

How to access secret (GCP service account json file) during Docker CMD step

I have a Dockerfile like below:
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Create directory to store our application
WORKDIR /app
## The following three commands adapted from Dockerfile snippet at
## https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds
# Install ssh client and git
RUN apt-get upgrade && apt-get update && apt-get install openssh-client git -y
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
# clone my-repo, authenticating using client's ssh-agent.
RUN --mount=type=ssh git clone git@gitlab.com:mycompany/data-products/my-repo.git /app/
# set up python (conda) environment to run application
RUN conda create --name recenv --file conda-linux-64.lock
# run my-package with the conda environment we just created.
CMD ["conda", "run", "-n", "recenv", "python", "-m", "my_package.train" "path/to/gcp/service/account.json"]
This Dockerfile builds successfully with docker build . --no-cache --tag my-package --ssh default, but fails (as expected) on docker run my-package:latest with:
FileNotFoundError: [Errno 2] No such file or directory: path/to/gcp/service/account.json
So I've gotten the SSH secrets management working, and the RUN ... git clone step uses my SSH/RSA creds successfully. But I'm having trouble with my other secret: my GCP service account JSON file. The difference is that I only need the SSH secret in a RUN step, whereas I need the GCP service account secret in the CMD step.
Everything I've read, such as the Docker docs page on the --secret flag, tutorials, and SO answers, covers how to pass a secret to a RUN step, not the CMD step. But I need to pass my GCP service account JSON file to my CMD step.
I could just COPY the file into the container, but from my reading that's not a great solution from a security standpoint.
What is the recommended, secure way of passing a secret JSON file to the CMD step of a Docker container?
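A rough sketch of the direction that keeps coming up for runtime secrets (mounting the file at docker run time instead of baking it into the image; the host path and environment variable wiring are placeholders, not something prescribed by the Docker docs for CMD):

# mount the key file into the container at run time and point the app at it
docker run \
  -v /secure/host/path/service-account.json:/secrets/service-account.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/service-account.json \
  my-package:latest

The path argument in the CMD (or the application itself) would then reference the mounted location rather than a file copied into the image.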

Docker multi-stage builds and Codeship running container

I'm doing a multi-stage Docker build:
# Dockerfile
########## Build stage ##########
FROM golang:1.10 as build
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
########## Final stage ##########
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
and the Makefile contains in part:
build: bin
	go build -v -o bin/my-daemon cmd/my-daemon/main.go

bin:
	mkdir $@
This all works just fine with a docker build.
Now I want to use Codeship, so I have:
# codeship-services.yml
cachemanager:
  build:
    image: my-daemon
    dockerfile: Dockerfile
and:
# codeship-steps.yml
- name: my-daemon build
  tag: master
  service: my-service
  command: true
The issue is that if I do jet steps --master, it builds everything OK, but then runs the container as if I had done a docker run. Why? I don't want it to do that.
It's as if I needed two separate Dockerfiles: one only for the build stage and one only for the run stage, and to use the former with jet. But that defeats the point of Docker multi-stage builds.
I was able to solve this problem by splitting the multi-stage build into two different files, following this guide: https://documentation.codeship.com/pro/common-issues/caching-multi-stage-dockerfile/
Basically, you take your existing Dockerfile and split it into two files like so, with the second referencing the first:
# Dockerfile.build-stage
FROM golang:1.10 as build-stage
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
# Dockerfile
FROM build-stage as build-stage
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build-stage $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
Then, in your codeship-services.yml file:
# codeship-services.yml
cachemanager-build:
  build:
    dockerfile: Dockerfile.build-stage
cachemanager-app:
  build:
    image: my-daemon
    dockerfile: Dockerfile
And in your codeship-steps.yml file:
# codeship-steps.yml
- name: cachemanager build
  tag: master
  service: cachemanager-build
  command: <here you can run tests or linting>
- name: publish to registry
  tag: master
  service: cachemanager-app
  ...
I don't think you want to actually run the Dockerfile because it will start your app. We use the second stage to push a smaller build to an image registry.

How to use Dockerfile in Gitlab CI

Using gitlab-ci for my Node/React app, I'm trying to use phusion/passenger-nodejs as the base Docker image.
I can specify this easily in .gitlab-ci.yml:
image: phusion/passenger-nodejs:latest
variables:
  HOME: /root
cache:
  paths:
    - node_modules/
stages:
  - build
  - test
  - deploy
set_environment:
  stage: build
  script:
    - npm install
  tags:
    - docker
test_node:
  stage: test
  script:
    - npm install
    - npm test
  tags:
    - docker
However, Phusion Passenger expects you to make configuration changes (e.g. Python support, using their special init process, etc.) in the Dockerfile.
#FROM phusion/passenger-ruby24:<VERSION>
#FROM phusion/passenger-jruby91:<VERSION>
FROM phusion/passenger-nodejs:<VERSION>
#FROM phusion/passenger-customizable:<VERSION>
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# If you're using the 'customizable' variant, you need to explicitly opt-in
# for features.
#
# N.B. these images are based on https://github.com/phusion/baseimage-docker,
# so anything it provides is also automatically on board in the images below
# (e.g. older versions of Ruby, Node, Python).
#
# Uncomment the features you want:
#
# Ruby support
#RUN /pd_build/ruby-2.0.*.sh
#RUN /pd_build/ruby-2.1.*.sh
#RUN /pd_build/ruby-2.2.*.sh
#RUN /pd_build/ruby-2.3.*.sh
#RUN /pd_build/ruby-2.4.*.sh
#RUN /pd_build/jruby-9.1.*.sh
# Python support.
RUN /pd_build/python.sh
# Node.js and Meteor standalone support.
# (not needed if you already have the above Ruby support)
RUN /pd_build/nodejs.sh
# ...put your own build instructions here...
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Is there a way to use a Dockerfile with gitlab-ci? Is there a good workaround other than apt-get install and adding shell scripts?
Yes: create a second GitLab repository where you place your Dockerfile. There you add a .gitlab-ci.yml file with a script command that builds your modified image and pushes it to your private registry or the GitLab embedded Docker registry, e.g.:
script:
  - docker build . -t myregistry:5000/mymodified-image
  - docker push myregistry:5000/mymodified-image
Inside your other GitLab repository, change the image: line accordingly:
image: myregistry:5000/mymodified-image
Information on the GitLab embedded Docker registry can be found in the GitLab documentation.
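A rough sketch of what that image-building repository's .gitlab-ci.yml could look like (Docker-in-Docker; the variables are GitLab CI's predefined registry variables, and the exact setup depends on your runner configuration):

build-modified-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"

The app repository's .gitlab-ci.yml would then point its image: line at that pushed tag.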
