How do I make all environment variables available in the Dockerfile?

I have this Dockerfile
FROM node:14.17.1
ARG GITHUB_TOKEN
ARG REACT_APP_BASE_URL
ARG DATABASE_URL
ARG BASE_URL
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ENV GITHUB_TOKEN=${GITHUB_TOKEN}
ENV REACT_APP_BASE_URL=${REACT_APP_BASE_URL}
ENV DATABASE_URL=${DATABASE_URL}
ENV BASE_URL=${BASE_URL}
ENV PORT 80
COPY . /usr/src/app
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
But I don't like having to set each environment variable individually. Is it possible to make all of them available without setting them one by one?

We need to pay attention to two items before continuing:
As mentioned by @Lukman in the comments, a TOKEN is not a good thing to store in an image unless it's strictly for internal use; that's your call.
Even if we don't specify the environment variables one by one in the Dockerfile, we still need to define them somewhere else, as the program itself can't know what environment it really needs.
If you have no problem with the above, let's go on. Basically, the idea is to define the environment variables (here, ENV1 and ENV2 as examples) in a script, source that script in the container, and give the app a way to access these variables.
env.sh:
export ENV1=1
export ENV2=2
app.js:
#!/usr/bin/env node
var env1 = process.env.ENV1;
var env2 = process.env.ENV2;
console.log(env1);
console.log(env2);
entrypoint.sh:
#!/bin/bash
source /usr/src/app/env.sh
exec node /usr/src/app/app.js
Dockerfile:
FROM node:14.17.1
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod -R 755 /usr/src/app
CMD ["/usr/src/app/entrypoint.sh"]
Execution:
$ docker build -t abc:1 .
$ docker run --rm abc:1
1
2
Explanation:
We change CMD (or ENTRYPOINT) in the Dockerfile to use the customized entrypoint.sh. In this entrypoint.sh, we first source env.sh, which makes ENV1 and ENV2 visible to subprocesses of entrypoint.sh.
Then we use exec to replace the current process with node app.js, so PID 1 becomes node app.js, while app.js can still read the environment defined in env.sh.
With the above, we no longer need to define the variables one by one in the Dockerfile, but our app can still get the environment.
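As an aside, if the variables only need to exist at run time rather than at build time, docker run's --env-file flag achieves the same goal without any script. A minimal sketch, assuming an env.list file next to where you run the command:
env.list:
# plain KEY=value lines, no "export"
ENV1=1
ENV2=2
Then:
$ docker run --rm --env-file ./env.list abc:1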

Here's a different (easy) way.
Start by making your file. Here I'm choosing to start from everything in my current environment; this is messy and not recommended. It's a useful bit of code though, so I thought I'd add it.
env | sed 's/^/export /' > env.sh
edit it so you only have what you need
vi env.sh
Use the command below to mount files into the container. Change pwd to whichever folder you want to share. Using this carelessly may result in you sharing too many files.
sudo docker run -it -v `pwd`:`pwd` ubuntu
Assign appropriate file permissions. I'm using 777, which means anyone can read, write, and execute, for demonstration purposes; since the file is sourced rather than executed, read permission is all you actually need.
Run this command, and make sure you include the leading full stop (it sources the file):
. /LOCATION/env.sh
If you're not sure where your file is, just type pwd in the host console.
You can add those commands where appropriate to your Dockerfile to automate the process; a consolidated sketch follows. If I recall correctly, there is also a VOLUME instruction for Dockerfiles.
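Putting it together, a rough sketch (the folder path and the ubuntu image are placeholders):
$ env | sed 's/^/export /' > env.sh   # dump the current environment
$ vi env.sh                           # trim it down to what you need
$ sudo docker run -it -v `pwd`:`pwd` ubuntu
# inside the container, source the file using the host folder's full path:
$ . /full/path/to/folder/env.sh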

Related

Passing arguments via env variables through Dockerfile and Kubernetes deployment

Hello, I have a problem with manually running a deployment.
I use GitlabCI, a Dockerfile, and Kubernetes.
FROM python:3.8
RUN mkdir /app
COPY . /app/
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "main.py", "${flag1}", "${ARG1}", "${flag2}, "${ARG2}"]
I need to run my app with a command like "python3 main.py -g argument1 -c argument2", and every run needs different arguments. I'm using this approach:
My pipeline runs a bash script that checks whether the variable "${ARG1}" is empty; if it is, it unsets "${FLAG1}". The next step deploys to Kubernetes using a standard deployment via GitlabCI.
My idea is bad because those environment variables aren't passed through to the Dockerfile. Does anybody have an idea? I can't use Docker build args because they don't support the CMD step.
You are using the array (exec) syntax for the command (CMD), therefore there is no shell that could expand the variables; the data is used directly for the exec system call.
If you want the variables to be expanded, use the shell form:
CMD python main.py ${flag1} ${ARG1} ${flag2} ${ARG2}
or replace the command completely in the Kubernetes pod/replica/deployment definition, optionally with variables substituted.
Additional note: the CMD is executed at the container's run time, not at build time.
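For example (a sketch; the image name and values are illustrative), with the shell-form CMD above the arguments can be supplied at run time:
$ docker run -e flag1=-g -e ARG1=argument1 -e flag2=-c -e ARG2=argument2 my-image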

How can I export env variables to a Dockerfile?

Objective
I have an env variable script file that looks like:
#!/bin/sh
export FOO="public"
export BAR="private"
I would like to source the env variables so they are available while the docker image is being built. I am aware that I can use ARG and ENV with build args, but I have too many env variables and I am afraid that will become a lengthy list.
It's worth mentioning that I only need the env variables for one specific step in my Dockerfile (highlighted below), and I don't necessarily want them to be available in the built image after that.
What I have tried so far
I have tried having a script (envs.sh) that export env vars like:
#!/bin/sh
export DOG="woof"
export CAT="meow"
My Docker file looks like:
FROM fishtownanalytics/dbt:0.18.1
# Define working directory
# Load ENV Vars
COPY envs.sh envs.sh
CMD ["sh", "envs.sh"]
# Install packages required
CMD ["sh", "-c", "envs.sh"]
RUN dbt deps # I need to env variables to be available for this step
# Exposing DBT Port
EXPOSE 8081
But that did not seem to work. How can I export env variables from a script into the docker build?
In the general case, you can't set environment variables in a RUN command: each RUN command runs a new shell in a new container, and any environment variables you set there will get lost at the end of that RUN step.
However, you say you only need the variables at one specific step in your Dockerfile. In that special case, you can run the setup script and the actual command in the same RUN step:
FROM fishtownanalytics/dbt:0.18.1
COPY envs.sh envs.sh
RUN . ./envs.sh \
&& dbt deps
# Anything that envs.sh `export`ed is lost _after_ the RUN step
(CMD is irrelevant here: it only provides the default command that gets run when you launch a container from the built image, and doesn't have any effect on RUN steps. It also looks like the image declares an ENTRYPOINT so that you can only run dbt subcommands as CMD, not normal shell commands. I also use the standard . to read in a script file instead of source, since not every container has a shell that provides that non-standard extension.)
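As an aside, if you'd rather not COPY the variables into the image at all, BuildKit-enabled Docker versions support secret mounts, which expose a file only during a single RUN step. A sketch, assuming the same envs.sh:
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=envs,target=/envs.sh \
    . /envs.sh && dbt deps
built with:
$ docker build --secret id=envs,src=envs.sh .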
Your CMD call runs a new shell (sh) that defines those variables and then exits, leaving nothing behind. Note that CMD ["source", "envs.sh"] can't work either: source is a shell builtin, not a binary, and the exec form of CMD doesn't involve a shell at all. If you want the variables to apply to the container's main process, source the script and start that process in the same shell command (your-main-command is a placeholder):
CMD ["sh", "-c", ". ./envs.sh && exec your-main-command"]

Setup different user permissions on files copied in Dockerfile

I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But for security reasons I want the executable permissions enforced only for the node user.
I need ./healthz.sh to be externally executable by Kubernetes' liveness & readiness probes.
How can I make it so? Folder restructuring or stuff like that are fine with me.
In most cases, you probably want your code to be owned by root, but to be world-readable, and for scripts to be world-executable. The Dockerfile COPY directive will copy in a file with its existing permissions from the host system (hidden in the list of bullet points at the end of its documentation is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking at the built image:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line starting with a permission string -rwxr-xr-x; the last three characters, r-x, are the important part. If you can't get the permissions right another way, you can also fix them up in your image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
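If your Docker version supports BuildKit's extended Dockerfile syntax (an aside, assuming Docker 20.10 or newer), COPY can also set the mode in a single step:
# syntax=docker/dockerfile:1
COPY --chmod=0755 ./scripts/healthz.sh .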
You need to modify your Dockerfile's CMD like this: ["sh", "./startup-production.sh"]
This will interpret the script with sh, but it can be dangerous if your script uses bash-specific features like [[ ]] with #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container comes up, as sketched below.
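For instance, a minimal sketch of that suggestion:
ENTRYPOINT ["sh", "./startup-production.sh"]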

Calling different commands in Dockerfiles depending on environment

What is the best way to call different npm scripts from a Dockerfile depending on type of environment (i.e. development or production)?
My current Dockerfile is below:
FROM node:12.15.0-alpine
ARG env
WORKDIR /usr/app
COPY ./ /usr/app
CMD npm run start
EXPOSE 5000
Ideally I would like to be able to run either an npm run start:development script or an npm run start:production script.
I have tried a mix of ARG and ENV variables to get the desired effect. However, judging from the closed GitHub issue below, they are not available in the part of the lifecycle where I would need them.
i.e.
CMD npm run start:${env}
Primarily I am wondering if there is a preferred methodology that is used to keep everything in one Dockerfile.
Edit:
I have had some success with the code below, but sometimes it causes my terminal to become unresponsive.
RUN if [ "$env" = "production" ]; then \
npm run start:prod; \
else \
npm run start:dev; \
fi
The Dockerfile is processed in a 'build' context, so any variables available there relate to the build environment (when you run docker build), not the execution environment. The build process runs only once, when you build the image.
If you want to use environment variables defined at execution time, you can point CMD at a script inside the container. Inside that script, all environment variables are available from the start of execution (container start).
Dockerfile
...
COPY ./scripts /script/path
CMD /script/path/test.sh
./scripts/test.sh:
#!/bin/sh
cd /your/app/path
echo ENV = $ENV
npm run start:$ENV
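The environment can then be chosen when the container starts, e.g. (assuming the image is tagged my-app):
$ docker run -e ENV=production my-app
$ docker run -e ENV=development my-app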
You can also review the best practices for Dockerfiles, which have good examples and use cases:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

App relies on sourcing secrets.sh for env variables. How to accomplish this in my Dockerfile?

I'm working on creating a container to hold my running Django app. During development and manual deployment I've been setting environment variables by sourcing a secrets.sh file in my repo. This has worked fine until now that I'm trying to automate my server's configuration environment in a Dockerfile.
So far it looks like this:
FROM python:3.7-alpine
RUN pip install --upgrade pip
RUN pip install pipenv
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
WORKDIR /home/appuser/site
COPY . /home/appuser/site
RUN /bin/sh -c "source secrets.sh"
RUN env
I'd expect this to set the environment variables properly, but it doesn't. I've also tried adding the variables to my appuser's .bashrc, but that doesn't work either.
Am I missing something here? Is there another best practice for setting env variables so they are accessible by Django, without having to check them into the Dockerfile in my repo?
Each RUN step launches a totally new container with a totally new shell; only its filesystem is persisted afterwards. RUN commands that try to start processes or set environment variables are no-ops. (RUN export or RUN service start do absolutely nothing.)
In your setup you need the environment variables to be set at container startup time based on information that isn't available at build time. (You don't want to persist secrets in an image: they can be easily read out by anyone who gets the image later on.) The usual way to do this is with an entrypoint script; this could look like
#!/bin/sh
# If the secrets file exists, read it in.
if [ -f /secrets.sh ]; then
  # (Prefer POSIX "." to bash-specific "source".)
  . /secrets.sh
fi
# Now run the main container CMD, replacing this script.
exec "$@"
A typical Dockerfile built around this would look like:
FROM python:3.7-alpine
RUN pip install --upgrade pip
WORKDIR /app
# Install Python dependencies, as an early step to support
# Docker layer caching.
COPY requirements.txt ./
RUN pip install -r requirements.txt
# Install the main application.
COPY . ./
# Create a non-root user. It doesn't own the source files,
# and so can't modify the application.
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
# Startup-time metadata.
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["/app/app.py"]
And then when you go to run the container, you'd inject the secrets file
docker run -p 8080:8080 -v $PWD/secrets-prod.sh:/secrets.sh myimage
(As a matter of style, I reserve ENTRYPOINT for this pattern and for single-binary FROM scratch containers, and always use CMD for whatever the container's main process is.)
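As a quick check (a sketch reusing the run example above): because the entrypoint ends with exec "$@", any command you pass replaces the default CMD, so running env prints the variables after secrets.sh has been sourced:
$ docker run --rm -v $PWD/secrets-prod.sh:/secrets.sh myimage env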
