Environment variable in the Dockerfile for a specific RUN command - docker

In my Dockerfile, I have:
ENV ENVIRONMENT=$ENVIRONMENT
CMD NODE_ENV=$ENVIRONMENT npm run serve
However, I need to run another command BEFORE serve, and make sure NODE_ENV is set for that one too.
I tried this:
ENV ENVIRONMENT=$ENVIRONMENT
RUN NODE_ENV=$ENVIRONMENT npm run upgrade
CMD NODE_ENV=$ENVIRONMENT npm run serve
However, NODE_ENV doesn't seem to be set for RUN.
What am I missing?

Environment variables set in a RUN statement do not persist.
It's like you open a shell, set the environment variable and close the shell session again. The next shell will not have the environment variable you set in the previous shell session.
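As an illustration, here is a minimal sketch (the base image and variable name are just examples) showing a variable disappearing between RUN steps:
FROM node:lts-alpine
RUN export MY_VAR=hello && echo "same RUN step: $MY_VAR"
# prints "same RUN step: hello"
RUN echo "next RUN step: $MY_VAR"
# prints "next RUN step: " because the export did not survive the first step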
How do I fix this? Add the NODE_ENV variable as an ENV in your Dockerfile:
ENV ENVIRONMENT=$ENVIRONMENT \
NODE_ENV=$ENVIRONMENT
RUN npm run upgrade
CMD npm run serve
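Putting it together, here is a sketch of how the whole Dockerfile could look, assuming ENVIRONMENT is supplied as a build argument (the base image and the ARG default are assumptions; adapt them to your setup):
FROM node:lts-alpine
WORKDIR /usr/src/app
COPY . .
# ARG makes the value available during the build; ENV persists it for RUN steps and for the running container
ARG ENVIRONMENT=development
ENV ENVIRONMENT=$ENVIRONMENT \
    NODE_ENV=$ENVIRONMENT
RUN npm run upgrade
CMD npm run serve
Built with e.g. docker build --build-arg ENVIRONMENT=production ., NODE_ENV is then set both for the npm run upgrade step and for the container that runs npm run serve.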

Related

Docker: how to set a 3rd-party environment variable in a Node.js container?

I'm trying to set an Oracle environment variable inside the container.
I believe it is not being applied, because the files are not found by the OS.
Could anyone help?
Thank you so much.
FROM node:lts-alpine
RUN mkdir -p /usr/src/app
COPY ./ /usr/src/app
WORKDIR /usr/src/app
RUN export LD_LIBRARY_PATH=/usr/src/app/instantclient_21_5:$LD_LIBRARY_PATH
CMD [ \"npm\", \"run\", \"start\" ]
When I run bash in the container and try to use the environment variable, it is not set.
Trying to set an environment variable in a RUN statement doesn't make any sense: the commands in a RUN statement are executed in a child shell that exits when they are complete, so the effect of export LD_LIBRARY_PATH... isn't visible once the RUN statement finishes executing.
Docker provides an ENV directive for setting environment variables, e.g.:
FROM node:lts-alpine
RUN mkdir -p /usr/src/app
COPY ./ /usr/src/app
WORKDIR /usr/src/app
ENV LD_LIBRARY_PATH=/usr/src/app/instantclient_21_5
CMD [ "npm", "run", "start" ]
Note that the right-hand side is resolved at build time: a reference like $LD_LIBRARY_PATH in an ENV directive only expands to whatever an earlier ENV or ARG (or the base image) has already set, never to anything from the container's eventual runtime environment, so ENV LD_LIBRARY_PATH=/usr/src/app/instantclient_21_5:$LD_LIBRARY_PATH would just append an empty value here. That is fine in this situation, because LD_LIBRARY_PATH should initially be unset.
(Also, you need to stop escaping the quotes in your CMD directive.)
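To double-check, you can inspect the variable in a container built from that image (the image tag my-node-app below is just a placeholder):
docker build -t my-node-app .
docker run --rm my-node-app sh -c 'echo $LD_LIBRARY_PATH'
# expected output: /usr/src/app/instantclient_21_5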

Passing arguments via env variables through Dockerfile and Kubernetes deployment

Hello, I have a problem with manually running a deployment.
I use GitLab CI, a Dockerfile, and Kubernetes.
FROM python:3.8
RUN mkdir /app
COPY . /app/
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "main.py", "${flag1}", "${ARG1}", "${flag2}, "${ARG2}"]
I need to run my app with a command like "python3 main.py -g argument1 -c argument2", and each run needs different arguments. I'm using this:
My pipeline then runs a bash script that checks whether "${ARG1}" is empty; if it is empty, it unsets "${FLAG1}". The next step deploys to Kubernetes using a standard deployment via GitLab CI.
My idea doesn't work because those environment variables aren't passed through to the Dockerfile. Does anybody have an idea? I can't use Docker build args because they aren't supported in the CMD step.
You are using the array (exec) syntax for the command (CMD), so there is no shell that could expand the variables; the data is passed directly to the exec system call.
If you want the variables to be expanded, use
CMD python main.py ${flag1} ${ARG1} ${flag2} ${ARG2}
or replace the command completely in the Kubernetes pod/ReplicaSet/Deployment definition, optionally with variables substituted there.
Additional note: The CMD is executed at runtime of the container, not at build time.
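As a sketch, with the shell-form CMD above the values can then be supplied as ordinary environment variables when the container starts (the image name and values below are placeholders); in Kubernetes the equivalent is setting them in the env section of the pod spec:
docker run -e flag1=-g -e ARG1=argument1 -e flag2=-c -e ARG2=argument2 my-image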

How can I export env variables to a Dockerfile?

Objective
I have an env variable script file that looks like:
#!/bin/sh
export FOO="public"
export BAR="private"
I would like to source the env variables so that they are available while the Docker image is being built. I am aware that I can use ARG and ENV with build args, but I have too many env variables and I am afraid that would make for a lengthy list.
It's worth mentioning that I only need the env variables for one specific step in my Dockerfile (highlighted below), and I do not necessarily want them to be available in the built image after that.
What I have tried so far
I have tried having a script (envs.sh) that export env vars like:
#!/bin/sh
export DOG="woof"
export CAT="meow"
My Docker file looks like:
FROM fishtownanalytics/dbt:0.18.1
# Define working directory
# Load ENV Vars
COPY envs.sh envs.sh
CMD ["sh", "envs.sh"]
# Install packages required
CMD ["sh", "-c", "envs.sh"]
RUN dbt deps # I need to env variables to be available for this step
# Exposing DBT Port
EXPOSE 8081
But that did not seem to work. How can I export env variables from a script to the Dockerfile?
In the general case, you can't set environment variables in a RUN command: each RUN command runs a new shell in a new container, and any environment variables you set there will get lost at the end of that RUN step.
However, you say you only need the variables at one specific step in your Dockerfile. In that special case, you can run the setup script and the actual command in the same RUN step:
FROM fishtownanalytics/dbt:0.18.1
COPY envs.sh envs.sh
RUN . ./envs.sh \
&& dbt deps
# Anything that envs.sh `export`ed is lost _after_ the RUN step
(CMD is irrelevant here: it only provides the default command that gets run when you launch a container from the built image, and doesn't have any effect on RUN steps. It also looks like the image declares an ENTRYPOINT so that you can only run dbt subcommands as CMD, not normal shell commands. I also use the standard . to read in a script file instead of source, since not every container has a shell that provides that non-standard extension.)
Your CMD call runs a new shell (sh) that defines those variables and then exits, leaving the rest of the build and the resulting image unchanged. Sourcing only helps if it happens in the same shell as the command that needs the variables, and an exec-form CMD ["source", "envs.sh"] would not work either, since source is a shell builtin rather than an executable. For this use case, running . ./envs.sh and dbt deps in the same RUN step, as shown above, is the way to go.

Calling different commands in Dockerfiles depending on environment

What is the best way to call different npm scripts from a Dockerfile depending on type of environment (i.e. development or production)?
My current Dockerfile is below:
FROM node:12.15.0-alpine
ARG env
WORKDIR /usr/app
COPY ./ /usr/app
CMD npm run start
EXPOSE 5000
Ideally I would like to be able to run either an npm run start:development script or a start:production script.
I have tried a mix of ARG and ENV variables to get the desired effect. However, judging from a closed GitHub issue on the topic, they are not available at the point in the lifecycle where I would need them.
i.e.
CMD npm run start:${env}
Primarily I am wondering if there is a preferred methodology that is used to keep everything in one Dockerfile.
Edit:
I have had some sort of success with the below code, but sometimes it causes my terminal to become unresponsive.
RUN if [ "$env" = "production" ]; then \
npm run start:prod; \
else \
npm run start:dev; \
fi
The Dockerfile runs in a 'build' context, so any variables available there relate to the build environment (when you run docker build), not the execution environment. The build process runs only once, when you build the image.
If you want to use environment variables defined at execution time, you can point CMD at a script inside the container. Inside that script, all environment variables are available from the moment the container starts.
Dockerfile
...
COPY ./scripts /script/path
CMD /script/path/test.sh
./scripts/test.sh
#!/bin/sh
cd /your/app/path
echo ENV = $ENV
npm run start:$ENV
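The variable is then supplied when the container starts, for example (the image name is a placeholder):
docker run -e ENV=development my-app    # runs npm run start:development
docker run -e ENV=production my-app     # runs npm run start:production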
You could also review the best practices for Dockerfiles, which include good examples and use cases:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

How do you use environment variables in the start up scripts in a passenger-docker container

When I try to use an environment variable ($HOME) that I set in the Dockerfile in the script that runs at startup, $HOME is not set. If I run printenv in the container, $HOME is set. So I am confused and not sure what is going on.
I am using the phusion/passenger-customizable image so that I can run a custom node server via pm2. I need a different version of Node than what is bundled in the node-specific passenger image.
Dockerfile
# Simplified
FROM phusion/passenger-customizable:0.9.27
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
# Set environment variables needed for the docker image.
ARG HOME=/opt/var/app
ENV HOME $HOME
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
RUN mkdir /etc/service/app
ADD start.sh /etc/service/app/run
RUN chmod a+x /etc/service/app/run
start.sh
#!/bin/sh
echo $HOME
# run some scripts that reference the $HOME directory
What do I need to do to be able to reference an environment variable, set in the Dockerfile, in my startup scripts? Or do I just need to hardcode the paths in that startup script and call it a day?
$HOME is reserved, in some fashion; it appears to get reset for the user at runtime. When running printenv, per @Sebastian, all my other variables were there but not my custom $HOME value. I prepended the variable name with the initials of my company and it is working as intended.
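For example, a sketch using a renamed variable (APP_HOME is just an illustrative name) that avoids colliding with the special $HOME:
Dockerfile
ARG APP_HOME=/opt/var/app
ENV APP_HOME $APP_HOME
start.sh
echo $APP_HOME
# run some scripts that reference the $APP_HOME directory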
