Expand ARG value in CMD [Dockerfile] - docker

I'm passing a build argument into my build: docker build --build-arg RUNTIME=test
In my Dockerfile I want to use the argument's value in the CMD:
CMD ["npm", "run", "start:${RUNTIME}"]
Doing so results in this error: npm ERR! missing script: start:${RUNTIME} - it's not expanding the variable
I read through this post: Use environment variables in CMD
So I tried doing: CMD ["sh", "-c", "npm run start:${RUNTIME}"] - I end up with this error: /bin/sh: [sh,: not found
Both errors occur when I run the built container.
I'm using the node alpine image as a base. Anyone have ideas how to get the argument value to expand within CMD? Thanks in advance!
full Dockerfile:
FROM node:10.15.0-alpine as builder
ARG RUNTIME_ENV=test
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci
RUN npm run build
FROM node:10.15.0-alpine
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD ["npm", "run", "start:${RUNTIME_ENV}"]
Update:
Just for clarity there were two problems I was running into.
1. The problem as described by Samuel P.
2. ENV values set in one build stage are not carried into later stages (multi-stage builds)
Here's the working Dockerfile where I'm able to expand environment variables in CMD:
# Here we set the build-arg as an environment variable.
# Setting this in the base image allows each build stage to access it
FROM node:10.15.0-alpine as base
ARG ENV
ENV RUNTIME_ENV=${ENV}
FROM base as builder
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci && npm run build
FROM base
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD npm run start:${RUNTIME_ENV}
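For reference, a matching build and run invocation for this Dockerfile might look like this (the image tag my-app is just an example):
# The build arg ENV is copied into RUNTIME_ENV in the base stage
docker build --build-arg ENV=test -t my-app .
# The shell-form CMD expands ${RUNTIME_ENV} when the container starts
docker run -p 3000:3000 my-app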

The problem here is that ARG params are available only during image build.
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
https://docs.docker.com/engine/reference/builder/#arg
CMD is executed at container startup where ARG variables aren't available anymore.
ENV variables are available during build and also in the container:
The environment variables set using ENV will persist when a container is run from the resulting image.
https://docs.docker.com/engine/reference/builder/#env
To solve your problem you should copy the ARG value into an ENV variable.
Add the following line before your CMD:
ENV RUNTIME_ENV=${RUNTIME_ENV}
If you want to provide a default value you can use the following:
ENV RUNTIME_ENV=${RUNTIME_ENV:-default_value}
Here are some more details about the usage of ARG and ENV from the docker docs.
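Put together with the update above, the final stage of the question's Dockerfile would look roughly like this (a sketch: the ARG also has to be declared in this stage, since ARGs don't cross build stages, and the CMD uses shell form so the variable is expanded at startup):
FROM node:10.15.0-alpine
# Re-declare the ARG in this stage, then copy it into an ENV so it
# survives into the running container
ARG RUNTIME_ENV=test
ENV RUNTIME_ENV=${RUNTIME_ENV}
# ... COPY / RUN npm ci --only=production as in the question ...
EXPOSE 3000
# Shell form, so /bin/sh expands the variable at container startup
CMD npm run start:${RUNTIME_ENV}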

Related

Use Jenkins variable in Dockerfile command

I am new to Docker and Jenkins. I have to build and deploy a Nest.js app in Jenkins. When I run the Jenkins job I have to select the 'DEPLOY_PROFILE', which is either 'dev' or 'qa'.
This is my Dockerfile,
FROM node:16-alpine
WORKDIR /app
ADD package.json /app/package.json
RUN npm config set registry http://registry.npmjs.org
RUN npm install
ADD . /app
EXPOSE 3000
CMD ["npm", "run", "start"]
I need to pass the 'DEPLOY_PROFILE' variable, which equals 'dev' or 'qa', to the Dockerfile. The final command should then look like npm run start:dev or npm run start:qa.
I have tried using
CMD ["npm", "run", "start", `:${DEPLOY_PROFILE}`]
and
CMD ["npm", "run", "start", `:${env.DEPLOY_PROFILE}`]
But neither gave me any luck. Any help is highly appreciated!
You can use an environment variable for that. In your Dockerfile, declare an argument (passed into docker build) and an environment variable like this:
ARG DEPLOY_PROFILE
ENV PROFILE=${DEPLOY_PROFILE}
Then use the environment variable like this:
CMD npm run start:$PROFILE
Then call buildah (or whatever you are using) like this:
buildah bud --format=docker --build-arg DEPLOY_PROFILE="$DEPLOY_PROFILE"
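If you are building with plain docker (for example from a Jenkins sh step) rather than buildah, the equivalent invocation would be roughly the following (the image name nest-app is just a placeholder):
docker build --build-arg DEPLOY_PROFILE="${DEPLOY_PROFILE}" -t nest-app .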

Docker secret nextjs env variables are not available at runtime

I am trying to set environment variables in my Dockerfile that are available at runtime after starting the Next.js app via npm run start (next start).
I have read that I need to use ENV variables in my Dockerfile to have these env variables available at runtime; ARG variables in a Dockerfile are only available at build time.
So I am running the docker build command with --build-arg, and it works for my NEXT_PUBLIC... variables, but it won't work for my secret, non-public env variables.
here is my content of .env file in nextjs:
NEXT_PUBLIC_RECAPTCHA_SITE_KEY=my-public-key...
RECAPTCHA_SECRET_KEY=my-secret-key...
This is my docker build command from my GitLab CI:
docker build --build-arg NEXT_PUBLIC_RECAPTCHA_SITE_KEY="$NEXT_PUBLIC_RECAPTCHA_SITE_KEY" --build-arg RECAPTCHA_SECRET_KEY="$RECAPTCHA_SECRET_KEY" -t ${CI_REGISTRY}/${CI_PROJECT_PATH}/nextjs:${CI_COMMIT_SHA} ./nextjs
the docker file:
ARG BASE_IMAGE=node:14.16.0-alpine3.13
# Build
FROM $BASE_IMAGE as BUILD
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
RUN apk add --no-cache bash git
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN CI=true npm ci
COPY . ./
ARG RECAPTCHA_SECRET_KEY=recaptchasecrect_placeholder
ENV RECAPTCHA_SECRET_KEY=${RECAPTCHA_SECRET_KEY}
ARG NEXT_PUBLIC_RECAPTCHA_SITE_KEY=recaptchasitekey_placeholder
ENV NEXT_PUBLIC_RECAPTCHA_SITE_KEY=${NEXT_PUBLIC_RECAPTCHA_SITE_KEY}
RUN npm run build
# Run
FROM $BASE_IMAGE
WORKDIR /app
COPY --from=BUILD /app ./
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
EXPOSE 3000
CMD ["npm", "start"]
If I put ENV RECAPTCHA_SECRET_KEY=my-secret-key... hardcoded into the Dockerfile above EXPOSE 3000, it works and the env variable is available at runtime.
Why is my NEXT_PUBLIC_RECAPTCHA_SITE_KEY variable available at runtime and my RECAPTCHA_SECRET_KEY variable that is set the same way not?
When you run the Next.js app, the variables in .env are only loaded into the app if they start with NEXT_PUBLIC_. Remember you are not running the Next.js app from the command line; your starting point is 'npm start' in Docker, which only loads env variables whose names start with "NEXT_PUBLIC".
NEXT_PUBLIC_ANALYTICS_ID=abcdefghijk
More info here - https://nextjs.org/docs/basic-features/environment-variables
Just prefix all your variables with "NEXT_PUBLIC"
NEXT_PUBLIC_RECAPTCHA_SITE_KEY=my-public-key...
NEXT_PUBLIC_RECAPTCHA_SECRET_KEY=my-secret-key...
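If the key must stay non-public, note that (consistent with the question's observation that a hardcoded ENV above EXPOSE 3000 works) the variable also has to be declared in the final runtime stage, since ENV set in the BUILD stage does not carry over to a stage that starts again from $BASE_IMAGE. A rough sketch, reusing the question's names and placeholder value:
# In the runtime stage (the second FROM $BASE_IMAGE), before EXPOSE 3000
ARG RECAPTCHA_SECRET_KEY=recaptchasecrect_placeholder
ENV RECAPTCHA_SECRET_KEY=${RECAPTCHA_SECRET_KEY}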

How do I pass build-time config to heroku container build scripts?

I have a Dockerfile, thusly:
# Use a multi-stage build to handle the compiling, installing, etc.
# STAGE 1: Install node_modules on a stretch container
FROM node:10-stretch as base
ARG APP_DIR=/node
EXPOSE 3000
WORKDIR $APP_DIR
RUN chown node:node $APP_DIR
COPY --chown=node:node package*.json ./
USER node
RUN ["npm", "install", "--no-optional"]
COPY --chown=node:node . .
# STAGE 2: Extend the base image as a builder image
FROM base as builder
ARG APP_NAME
ARG BASE_URL
ARG NODE_ENV
RUN npm run build -- --mode=production && rm -rf node_modules && npm install --no-optional --production
# STAGE 3: Copy the 'build' directory from previous stage and run in alpine
# Since this does not extend the base image, we need to set workdir, user, etc. again.
FROM node:10-alpine
ARG APP_DIR=/node
EXPOSE 3000
WORKDIR ${APP_DIR}
COPY --from=builder --chown=node:node $APP_DIR/dist .
COPY --from=builder --chown=node:node $APP_DIR/node_modules ./node_modules
USER node
CMD ["node", "./server"]
and what I'd like to do is have those environment variables in stage 2 set by Heroku on build. If I set those values in the Dockerfile, they do end up being available in the build - but otherwise they don't seem to be passed the values from heroku.yml.
According to this documentation I should be able to set these values in heroku.yml, but thus far - I've been unsuccessful. My heroku.yml file looks like this:
build:
  docker:
    web: Dockerfile
  config:
    APP_NAME: My App
    BASE_URL: /
    NODE_ENV: production
What am I doing wrong?
I think it's necessary to repeat the ARG statements like this in order to have them accessible to each build stage:
# All `ARG`s that should be available to every build stage must appear *before* the first `FROM`
ARG APP_NAME
ARG BASE_URL
ARG NODE_ENV
# STAGE 1: Install node_modules on a stretch container
FROM node:10-stretch as base
# Any subset of the `ARG`s can then be imported into the build stage
ARG APP_DIR
# STAGE 2: Extend the base image as a builder image
FROM base as builder
# Any subset of the `ARG`s can then be imported into the build stage
ARG APP_NAME
ARG BASE_URL
ARG NODE_ENV
# STAGE 3: Copy the 'build' directory from previous stage and run in alpine
FROM node:10-alpine
# Any subset of the `ARG`s can then be imported into the build stage
ARG APP_DIR
NOTE: If you overwrite the values in any stage, e.g. ARG APP_NAME=/node, that value will supersede the values you're obtaining from the environment (from the heroku.yml).
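To verify the same setup locally before pushing to Heroku, the config entries from heroku.yml can be supplied as ordinary build args (values copied from the heroku.yml above; the -t web tag is just an example):
docker build \
  --build-arg APP_NAME="My App" \
  --build-arg BASE_URL=/ \
  --build-arg NODE_ENV=production \
  -t web .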

How to get TeamCity env params in a Dockerfile?

I have several build steps in TeamCity which build and push a Docker image. How can I get env params from TeamCity in the Dockerfile? The Dockerfile currently looks like this:
FROM python:3.8.9-slim-buster
ENV test24=2626
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD [ "sh", "./app.sh" ]
You can pass --build-arg ARG_NAME=ARG_VALUE options to the docker build command, and within the Dockerfile you then have an ARG defined to pick up the value, e.g.:
ARG ARG_VALUE=DEFAULT_ARG_VALUE_IF_NOT_SPECIFIED
LABEL com.stackoverflow.arg="${ARG_VALUE}"
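In a TeamCity command-line build step you can then inject the parameter using TeamCity's %...% parameter reference syntax, for example (the parameter and image names here are just illustrations):
docker build --build-arg ARG_VALUE=%env.TEST24% -t my-image .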

Running npm run test in the Dockerfile?

Using a builder to generate a smaller Docker image, what would be a good way to run npm run test? It seems like running it in the Dockerfile after the build would make sense, but maybe I'm missing something.
Dockerfile
# Global args to persist through build stages
ARG docker_build_user
ARG docker_build_time
ARG docker_build_head
ARG docker_build_head_short
ARG docker_build_submodules_head
FROM node:8.9.4-alpine as builder
WORKDIR /app
COPY . .
RUN apk add --no-cache bash
RUN apk add --no-cache git
RUN apk add --no-cache make gcc g++ python
RUN npm install
ENV NODE_ENV=production
RUN npm run build
RUN rm -rf node_modules
RUN npm install
FROM node:8.9.4-alpine
# setup build metadata
ARG docker_build_user
ARG docker_build_time
ARG docker_build_head
ARG docker_build_head_short
ARG docker_build_submodules_head
WORKDIR /app
COPY --from=builder /app .
ENV DOCKER_BUILD_USER $docker_build_user
ENV DOCKER_BUILD_TIME $docker_build_time
ENV DOCKER_BUILD_HEAD $docker_build_head
ENV DOCKER_BUILD_HEAD_SHORT $docker_build_head_short
ENV DOCKER_BUILD_SUBMODULES_HEAD $docker_build_submodules_head
ENV DOCKER_BUILD_DESCRIPTION This build was created by $docker_build_user at $docker_build_time from $docker_build_head_short
ENV NODE_ENV=production
ENV ENABLE_LOGGING=true
RUN echo "DESCRIPTION:${DOCKER_BUILD_DESCRIPTION}"
RUN chown -R 999:999 .
USER 999
# expose our service port
EXPOSE 8080
# Default is to run the server (should be able to run worker)
# Set env var in k8s or run : NPM_RUN_TASK (default is serve)
CMD ["/app/startup.sh"]
From what you've shared, you are already using a multi-stage build in your Dockerfile: one stage for build and one stage for packaging.
You do this because the final package stage doesn't need the build-time dependencies, so you separate the build into the first stage. The same idea applies to tests; your Dockerfile structure would then look something like this:
Dockerfile:
# Build stage
FROM node:8.9.4-alpine as builder
# ......
RUN npm install
# Test stage
FROM builder as test
# ......
RUN npm run test
# Package stage
FROM node:8.9.4-alpine
COPY --from=builder /app .
# ......
Then the test stage can still use the artifacts built in the build stage to run the tests, but the package stage will not include anything generated in the test stage.
Some related guides refer to this and also this; the above is what other folks commonly do for Docker integration in their Node.js projects.
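A rough sketch of how this is typically driven from the command line (the image name my-app is assumed; with BuildKit, stages the final image does not depend on are skipped unless targeted):
# Build up to and including the test stage; the tests run as part of this build
docker build --target test -t my-app:test .
# Build the final production image (the test stage is not part of its dependency chain)
docker build -t my-app .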
