How do I pass build-time config to heroku container build scripts? - docker

I have a Dockerfile, thusly:
# Use a multi-stage build to handle the compiling, installing, etc.
# STAGE 1: Install node_modules on a stretch container
FROM node:10-stretch as base
ARG APP_DIR=/node
EXPOSE 3000
WORKDIR $APP_DIR
RUN chown node:node $APP_DIR
COPY --chown=node:node package*.json ./
USER node
RUN ["npm", "install", "--no-optional"]
COPY --chown=node:node . .
# STAGE 2: Extend the base image as a builder image
FROM base as builder
ARG APP_NAME
ARG BASE_URL
ARG NODE_ENV
RUN npm run build -- --mode=production && rm -rf node_modules && npm install --no-optional --production
# STAGE 3: Copy the 'build' directory from previous stage and run in alpine
# Since this does not extend the base image, we need to set workdir, user, etc. again.
FROM node:10-alpine
ARG APP_DIR=/node
EXPOSE 3000
WORKDIR ${APP_DIR}
COPY --from=builder --chown=node:node $APP_DIR/dist .
COPY --from=builder --chown=node:node $APP_DIR/node_modules ./node_modules
USER node
CMD ["node", "./server"]
and what I'd like to do is have those environment variables in stage 2 set by Heroku at build time. If I set the values in the Dockerfile itself, they do end up being available during the build - but otherwise they don't seem to pick up the values from heroku.yml.
According to this documentation I should be able to set these values in heroku.yml, but so far I've been unsuccessful. My heroku.yml file looks like this:
build:
  docker:
    web: Dockerfile
  config:
    APP_NAME: My App
    BASE_URL: /
    NODE_ENV: production
What am I doing wrong?

I think it's necessary to repeat the ARG statements like this in order to have them accessible to each build stage:
# All `ARG`s must be declared *before* the first `FROM`
ARG APP_NAME
ARG BASE_URL
ARG NODE_ENV
# STAGE 1: Install node_modules on a stretch container
FROM node:10-stretch as base
# Any subset of the `ARG`s can then be imported into the build stage
ARG APP_DIR
# STAGE 2: Extend the base image as a builder image
FROM base as builder
# Any subset of the `ARG`s can then be imported into the build stage
ARG APP_NAME
ARG BASE_URL
ARG NODE_ENV
# STAGE 3: Copy the 'build' directory from previous stage and run in alpine
FROM node:10-alpine
# Any subset of the `ARG`s can then be imported into the build stage
ARG APP_DIR
NOTE: If you overwrite the values in any stage, e.g. ARG APP_NAME=/node, that value will supersede the value you're obtaining from the environment (from heroku.yml).
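As a sanity check, the config entries reach the Dockerfile as build arguments (which is why the ARG declarations matter), so you can reproduce the build locally with docker build and --build-arg; the image tag here is just a placeholder:
docker build \
  --build-arg APP_NAME="My App" \
  --build-arg BASE_URL=/ \
  --build-arg NODE_ENV=production \
  -t my-app .
If the variables come through with that command but not on Heroku, the problem is likely on the heroku.yml side rather than in the Dockerfile.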

Related

Docker secret nextjs env variables are not available at runtime

I am trying to set environment variables in my Dockerfile that are available at runtime after running the Next.js app via npm run start (next start).
I have read that I need to use ENV variables in my Dockerfile to have these env variables available at runtime; ARG variables in a Dockerfile are only available at build time.
So I am running the docker build command with --build-arg and it is working for my NEXT_PUBLIC... variables, but it won't work for my secret non-public env variables.
Here is the content of the .env file in Next.js:
NEXT_PUBLIC_RECAPTCHA_SITE_KEY=my-public-key...
RECAPTCHA_SECRET_KEY=my-secret-key...
this is my docker build command from my GitLab CI:
docker build --build-arg NEXT_PUBLIC_RECAPTCHA_SITE_KEY="$NEXT_PUBLIC_RECAPTCHA_SITE_KEY" --build-arg RECAPTCHA_SECRET_KEY="$RECAPTCHA_SECRET_KEY" -t ${CI_REGISTRY}/${CI_PROJECT_PATH}/nextjs:${CI_COMMIT_SHA} ./nextjs
the Dockerfile:
ARG BASE_IMAGE=node:14.16.0-alpine3.13
# Build
FROM $BASE_IMAGE as BUILD
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
RUN apk add --no-cache bash git
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN CI=true npm ci
COPY . ./
ARG RECAPTCHA_SECRET_KEY=recaptchasecrect_placeholder
ENV RECAPTCHA_SECRET_KEY=${RECAPTCHA_SECRET_KEY}
ARG NEXT_PUBLIC_RECAPTCHA_SITE_KEY=recaptchasitekey_placeholder
ENV NEXT_PUBLIC_RECAPTCHA_SITE_KEY=${NEXT_PUBLIC_RECAPTCHA_SITE_KEY}
RUN npm run build
# Run
FROM $BASE_IMAGE
WORKDIR /app
COPY --from=BUILD /app ./
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
EXPOSE 3000
CMD ["npm", "start"]
If I put ENV RECAPTCHA_SECRET_KEY=my-secret-key... hardcoded into the Dockerfile above EXPOSE 3000, it works and the .env variable is available at runtime.
Why is my NEXT_PUBLIC_RECAPTCHA_SITE_KEY variable available at runtime and my RECAPTCHA_SECRET_KEY variable that is set the same way not?
When you run the Next.js app, the variables in .env will only be loaded into the app if they start with NEXT_PUBLIC_. Remember you are not running the Next.js app from the command line; your starting point is 'npm start' in Docker, which only loads env variables with names starting with "NEXT_PUBLIC_", for example:
NEXT_PUBLIC_ANALYTICS_ID=abcdefghijk
More info here - https://nextjs.org/docs/basic-features/environment-variables
Just prefix all your variables with "NEXT_PUBLIC_":
NEXT_PUBLIC_RECAPTCHA_SITE_KEY=my-public-key...
NEXT_PUBLIC_RECAPTCHA_SECRET_KEY=my-secret-key...
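That said, the question itself notes that hardcoding the ENV in the final stage makes the secret visible at runtime; the same effect can be had without hardcoding by repeating the ARG/ENV pair in the run stage of the same Dockerfile. A minimal sketch of that alternative (untested, and keep in mind it bakes the secret into the image layers; passing it with docker run -e RECAPTCHA_SECRET_KEY=... instead avoids that):
# Run
FROM $BASE_IMAGE
WORKDIR /app
COPY --from=BUILD /app ./
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
# Re-declare the build arg here and promote it to an ENV so the
# server-side secret exists in the runtime image, not only in BUILD.
ARG RECAPTCHA_SECRET_KEY
ENV RECAPTCHA_SECRET_KEY=${RECAPTCHA_SECRET_KEY}
EXPOSE 3000
CMD ["npm", "start"]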

Running npm run test in the Dockerfile?

Using a builder to generate a smaller Docker image, what would be a good way to run npm run test? It seems like running it in the Dockerfile after the build would make sense, but maybe I'm missing something.
Dockerfile
# Global args to persist through build stages
ARG docker_build_user
ARG docker_build_time
ARG docker_build_head
ARG docker_build_head_short
ARG docker_build_submodules_head
FROM node:8.9.4-alpine as builder
WORKDIR /app
COPY . .
RUN apk add --no-cache bash
RUN apk add --no-cache git
RUN apk add --no-cache make gcc g++ python
RUN npm install
ENV NODE_ENV=production
RUN npm run build
RUN rm -rf node_modules
RUN npm install
FROM node:8.9.4-alpine
# setup build metadata
ARG docker_build_user
ARG docker_build_time
ARG docker_build_head
ARG docker_build_head_short
ARG docker_build_submodules_head
WORKDIR /app
COPY --from=builder /app .
ENV DOCKER_BUILD_USER $docker_build_user
ENV DOCKER_BUILD_TIME $docker_build_time
ENV DOCKER_BUILD_HEAD $docker_build_head
ENV DOCKER_BUILD_HEAD_SHORT $docker_build_head_short
ENV DOCKER_BUILD_SUBMODULES_HEAD $docker_build_submodules_head
ENV DOCKER_BUILD_DESCRIPTION This build was created by $docker_build_user at $docker_build_time from $docker_build_head_short
ENV NODE_ENV=production
ENV ENABLE_LOGGING=true
RUN echo "DESCRIPTION:${DOCKER_BUILD_DESCRIPTION}"
RUN chown -R 999:999 .
USER 999
# expose our service port
EXPOSE 8080
# Default is to run the server (should be able to run worker)
# Set env var in k8s or run : NPM_RUN_TASK (default is serve)
CMD ["/app/startup.sh"]
From what you've shown, you are already using a multi-stage build in your Dockerfile: one stage to build, and one stage to package.
You do this because the final package stage does not need the build dependencies, so you separate the build into the first stage. The same idea works for test; your Dockerfile structure would look something like this:
Dockerfile:
# Build stage
FROM node:8.9.4-alpine as builder
# ......
RUN npm install
# Test stage
FROM builder as test
# ......
RUN npm run test
# Package stage
FROM node:8.9.4-alpine
COPY --from=builder /app .
# ......
The test stage can still use what was built in the build stage to run the tests, but the package stage will not contain anything generated in the test stage.
Some related guides are this and this; the above is what other folks commonly do for Docker integration in their Node.js projects.
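If you'd rather run the tests only on demand (for example in CI) instead of on every image build, you can also build just the test stage with --target; the image names below are placeholders:
docker build --target test -t myapp:test .
docker build -t myapp:prod .
With BuildKit enabled, stages the final image doesn't depend on (such as test) are skipped unless you target them explicitly, so the production build stays lean.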

Expand ARG value in CMD [Dockerfile]

I'm passing a build argument into: docker build --build-arg RUNTIME=test
In my Dockerfile I want to use the argument's value in the CMD:
CMD ["npm", "run", "start:${RUNTIME}"]
Doing so results in this error: npm ERR! missing script: start:${RUNTIME} - it's not expanding the variable
I read through this post: Use environment variables in CMD
So I tried doing: CMD ["sh", "-c", "npm run start:${RUNTIME}"] - I end up with this error: /bin/sh: [sh,: not found
Both errors occur when I run the built container.
I'm using the node alpine image as a base. Anyone have ideas how to get the argument value to expand within CMD? Thanks in advance!
full Dockerfile:
FROM node:10.15.0-alpine as builder
ARG RUNTIME_ENV=test
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci
RUN npm run build
FROM node:10.15.0-alpine
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD ["npm", "run", "start:${RUNTIME_ENV}"]
Update:
Just for clarity, there were two problems I was running into:
1. The problem as described by Samuel P.
2. ENV values are not carried across build stages (multi-stage builds)
Here's the working Dockerfile where I'm able to expand environment variables in CMD:
# Here we set the build-arg as an environment variable.
# Setting this in the base image allows each build stage to access it
FROM node:10.15.0-alpine as base
ARG ENV
ENV RUNTIME_ENV=${ENV}
FROM base as builder
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci && npm run build
FROM base
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD npm run start:${RUNTIME_ENV}
The problem here is that ARG params are available only during image build.
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
https://docs.docker.com/engine/reference/builder/#arg
CMD is executed at container startup where ARG variables aren't available anymore.
ENV variables are available during build and also in the container:
The environment variables set using ENV will persist when a container is run from the resulting image.
https://docs.docker.com/engine/reference/builder/#env
To solve your problem you should transfer the ARG variable to an ENV variable. Add the following line before your CMD:
ENV RUNTIME_ENV ${RUNTIME_ENV}
If you want to provide a default value you can use the following:
ENV RUNTIME_ENV ${RUNTIME_ENV:-default_value}
Here are some more details about the usage of ARG and ENV from the docker docs.
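Put together, a stripped-down sketch of the handoff (base image borrowed from the question, image tag is a placeholder):
FROM node:10.15.0-alpine
# ARG is only visible while the image is being built...
ARG RUNTIME_ENV=test
# ...so promote it to an ENV, which persists into the running container.
ENV RUNTIME_ENV=${RUNTIME_ENV}
# Shell form so the variable is expanded when the container starts.
CMD npm run start:${RUNTIME_ENV}
Building with docker build --build-arg RUNTIME_ENV=production -t myapp . then makes the container run npm run start:production.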

Issues with COPY when using multistage Dockerfile builds -- no such file or directory

I'm trying to convert my project to use multi-stage builds. However, the final step always fails with an error:
Step 11/13 : COPY --from=build /bin/grafana-server /bin/grafana-server
COPY failed: stat /var/lib/docker/overlay2/xxxx/merged/bin/grafana-server: no such file or directory
My Dockerfile looks like this:
FROM golang:latest AS build
ENV SRC_DIR=/go/src/github.com/grafana/grafana/
ENV GIT_SSL_NO_VERIFY=1
COPY . $SRC_DIR
WORKDIR $SRC_DIR
# Building of Grafana
RUN \
npm run build && \
go run build.go setup && \
go run build.go build
# Create final stage containing only required artifacts
FROM scratch
COPY --from=build /bin/grafana-server /bin/grafana-server
EXPOSE 3001
CMD ["./bin/grafana-server"]
The build.go build step will output artifacts to ./bin/ -- The error is pretty unhelpful other than telling me the files don't exist where I think they should exist.
My folder structure on my machine is:
--| ~/Documents/dev/grafana/src/grafana/grafana
--------| bin
------------| <grafana-server builds to here>
--------| deploy
------------| docker
----------------| Dockerfile
From ~/Documents/dev/grafana/src/grafana/grafana is where I issue: docker build -t grafana -f deploy/docker/Dockerfile .
To follow up on my comment: the path you set with WORKDIR is absolute and should be specified the same way in the COPY --from=build command.
So this could lead to the following Dockerfile:
FROM golang:latest AS build
ENV SRC_DIR=/go/src/github.com/grafana/grafana/
ENV GIT_SSL_NO_VERIFY=1
COPY . $SRC_DIR
WORKDIR $SRC_DIR
# Building of Grafana
RUN \
npm run build && \
go run build.go setup && \
go run build.go build
# Create final stage containing only required artifacts
FROM scratch
ENV SRC_DIR=/go/src/github.com/grafana/grafana/
WORKDIR $SRC_DIR
COPY --from=build ${SRC_DIR}/bin/grafana-server ${SRC_DIR}/bin/grafana-server
EXPOSE 3001
CMD ["./bin/grafana-server"]
(only partially tested)
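An alternative that keeps the question's final stage closer to the original is to write out the absolute source path from the build stage, since ENV values set there are not visible to the new stage; a sketch, equally untested:
FROM scratch
# COPY --from source paths are resolved inside the build stage's filesystem,
# so point at the WORKDIR where `go run build.go build` wrote ./bin/.
COPY --from=build /go/src/github.com/grafana/grafana/bin/grafana-server /bin/grafana-server
EXPOSE 3001
CMD ["/bin/grafana-server"]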

Keep Docker intermediate layers in multistage build

I'm attempting to have a dev container and a "production" container built from a single Dockerfile. It already "works", but I do not have access to the dev container after the build (multi-stage intermediaries are cached, but not tagged in a useful way).
The Dockerfile is as-so:
# See https://github.com/facebook/flow/issues/3649 why here
# is a separate one for a flow using image ... :(
FROM node:8.9.4-slim AS graphql-dev
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
RUN apt update && apt install -y libelf1
ADD ./.babelrc /graphql-api/
ADD ./.eslintignore /graphql-api/
ADD ./.eslintrc /graphql-api/
ADD ./.flowconfig /graphql-api/
ADD ./.npmrc /graphql-api/
ADD ./*.json5 /graphql-api/
ADD ./lib/ /graphql-api/lib
ADD ./package.json /graphql-api/
ADD ./schema/ /graphql-api/schema
ADD ./yarn.lock /graphql-api/
RUN yarn install --production --silent && npm install --silent
CMD ["npm", "run", "lint-flow-test"]
# Cleans node_modules etc, see github.com/tj/node-prune
# this container contains no node, etc (golang:latest)
FROM golang:latest AS graphql-cleaner
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-dev graphql-api .
RUN go get github.com/tj/node-prune/cmd/node-prune
RUN node-prune
# Minimal end-container (Alpine 💖)
FROM node:8.9.4-alpine
WORKDIR /graphql-api
ENV PATH /graphql-api/node_modules/.bin:$PATH
COPY --from=graphql-cleaner graphql-api .
EXPOSE 3000
CMD ["npm", "start"]
Ideally I'd be able to start graphql-dev and the final container both with a docker-compose.yml, like so:
version: '3'
services:
  graphql-dev:
    image: graphql-dev
    build: ./Dockerfile
    volumes:
      - ./lib:/graphql-api/lib
      - ./schema:/graphql-api/schema
  graphql-prod:
    image: graphql
    build: ./Dockerfile
The two final steps do the "shrinking" for the final build (it saves over 250 MB for us) and are not really required except in the production build.
If I extract the Dockerfile into two, somehow, Dockerfile.prod and Dockerfile.dev, then I have to manage dependencies between them, as I can't force prod to always build dev (can I?).
If I were somehow able to specify target on the build in the docker-compose.yml file I could do it; there were some issues, but specifying a target under build in my yml file yields an error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.graphql-dev.build contains unsupported option: 'target'
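For reference, target under build is only accepted by newer Compose file formats (3.4 and later, per the Compose file reference), which is why version: '3' rejects it. A sketch of how the file could look once the version is bumped (stage and image names taken from the Dockerfile above):
version: '3.4'
services:
  graphql-dev:
    image: graphql-dev
    build:
      context: .
      dockerfile: Dockerfile
      target: graphql-dev   # stop at the dev stage
    volumes:
      - ./lib:/graphql-api/lib
      - ./schema:/graphql-api/schema
  graphql-prod:
    image: graphql
    build:
      context: .
      dockerfile: Dockerfile   # builds through the final stage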
