How to hide env variables from a Dockerfile - docker

I have a Dockerfile to deploy a Node.js app with some secret API keys that I want to hide from the Dockerfile.
Currently I am using the ENV keyword to define env variables, like below:
FROM node:17
WORKDIR /usr/app
COPY package.json /usr/app/
RUN npm install
COPY . /usr/app
ENV TWILIO_ACCOUNT_SID=""
ENV TWILIO_AUTH_TOKEN=""
ENV OTP_TEXT="This is your Otp"
ENV TWILLIO_SENDER=99999
ENV PORT=8080
ENV DB_URL=""
ENV JWT_SECRET="Some Secrete"
ENV JWT_EXPIRES_IN=30min
ENV OTP_EXPIRE_TIME_SECONDS=150000
ENV AWS_S3_REGION=us-east-2
ENV AWS_S3_BUCKET=gos32
ENV AWS_ACCESS_KEY_ID=""
ENV AWS_SECRET_ACCESS_KEY=""
CMD ["npm", "start"]
Any better way to do that?
Edit:
Just adding what worked for me from the answer given by #blami:
docker build -t app .
then I ran
docker run --env-file env.txt -d -p 8080:8080 app
i.e. docker run with the --env-file option, after putting all the env variables in an env.txt file.

You should not put sensitive data in a Dockerfile at all. If your application is configured via the environment, you should provide these variables to the container only when it is started, e.g. manually (using the docker run -e, --env, and --env-file flags directly on the command line) or via your container runtime (which you do not specify in your question):
Kubernetes can manage secrets and expose them via files or environment variables - see documentation with examples
Docker Swarm supports managing secrets out of the box via docker secret command and can expose such secrets as environment variables too - see documentation with examples
Managed cloud providers usually have an option to manage secrets and inject them into containers too (or they directly expose the features of the runtime they use).
In any of the cases above, secrets usually live in secure storage from which they are retrieved only when the container starts and then injected into it. That way you don't need to have them in the Dockerfile. Note that if someone gains access to your running container with application or higher privileges, they will be able to retrieve the secrets from the environment (as that is how environment variables work).
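As a minimal sketch of the run-time injection the answer describes (file name and values here are placeholders, not the asker's real credentials):

```shell
# Hypothetical env.txt with example values. Keep this file out of the image
# and out of version control. One KEY=VALUE per line, no quotes, no 'export'.
cat > env.txt <<'EOF'
TWILIO_ACCOUNT_SID=example_sid
TWILIO_AUTH_TOKEN=example_token
PORT=8080
EOF

# At run time, inject the whole file instead of baking values into the image:
#   docker run --env-file env.txt -d -p 8080:8080 app
# or pass single values straight from the host shell:
#   docker run -e TWILIO_AUTH_TOKEN="$TWILIO_AUTH_TOKEN" -p 8080:8080 app
```

Either way, the Dockerfile itself contains no secrets, and `docker history` on the image reveals nothing.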

Related

Sourcing a shell script inside a Dockerfile to inject environment variables into the container

I have browsed through a lot of related posts but still haven't resolved this issue. I am quite new to Docker, so sorry if this is repeated.
For my project, I have a shell script named vault-util.sh, which gets secrets from Vault and exports them,
like 'export DB_Password_Auto=(some Vault operation)'.
What I want to achieve is to copy this file into the Docker container and source it in the Dockerfile, so that those secrets can be accessed as environment variables inside the container.
The code I have right now inside Dockerfile is:
COPY vault-util.sh /build
RUN chmod +x /build/vault-util.sh
RUN /bin/sh -c ". /build/vault-util.sh"
After I log in to the container with "docker exec -it -u build container-name /bin/bash",
the environment variable is still empty.
It shows up only after I type the source command again in the CLI.
So I am wondering: is this mechanism of accessing a Vault secret as an env var actually plausible? If so, what do I need to modify in the Dockerfile to make this work? Thank you!
If you have a script that gets secrets from Vault, you probably need to re-run it every time the container starts. You don't want to compromise the secrets by putting them in a Docker image where they can be easily extracted, and you don't want an old version of a credential "baked into" an image if it changes in Vault.
You can use an entrypoint wrapper script to run this when the container starts up. This is a script you set as the container ENTRYPOINT; it does first-time setup like setting dynamic environment variables, and then runs whatever the container CMD is.
#!/bin/sh
# entrypoint.sh
# Get a set of credentials from Vault.
. /build/vault-util.sh
# Run the main container command.
exec "$@"
In your Dockerfile, you need to make sure you COPY this in and set it as the ENTRYPOINT, but you don't need to immediately RUN it.
COPY vault-util.sh entrypoint.sh /build/
# ENTRYPOINT must use JSON-array syntax
ENTRYPOINT ["/build/entrypoint.sh"]
CMD same command as originally
You won't be able to see the secrets with tools like docker inspect (this is good!). But if you want to you can run a test container to dump out the results of this setup. For example,
docker run --rm ... your-image env
replaces the Dockerfile's CMD with env, which prints out the environment and exits. The command gets passed as arguments to the entrypoint, so the script first runs to fetch the environment variables, then runs env, and then exits.
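The wrapper pattern can be sanity-checked outside Docker. Here is a small sketch, with a stand-in script in place of the real Vault lookup:

```shell
# Stand-in for the real vault-util.sh -- the actual script would call Vault.
cat > vault-util.sh <<'EOF'
export DB_Password_Auto=example-secret
EOF

# Minimal entrypoint wrapper: source the script, then exec the container CMD.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
. ./vault-util.sh
exec "$@"
EOF
chmod +x entrypoint.sh

# Any command run through the wrapper sees the exported variable:
./entrypoint.sh sh -c 'echo "DB_Password_Auto=$DB_Password_Auto"'
```

Because the wrapper runs at container start, the secret is fetched fresh each time and never stored in an image layer.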

How do I make all environment variables available in the Dockerfile?

I have this Dockerfile
FROM node:14.17.1
ARG GITHUB_TOKEN
ARG REACT_APP_BASE_URL
ARG DATABASE_URL
ARG BASE_URL
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ENV GITHUB_TOKEN=${GITHUB_TOKEN}
ENV REACT_APP_BASE_URL=${REACT_APP_BASE_URL}
ENV DATABASE_URL=${DATABASE_URL}
ENV BASE_URL=${BASE_URL}
ENV PORT 80
COPY . /usr/src/app
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
But I don't like having to set each environment variable. Is it possible to make all of them available without setting them one by one?
We need to pay attention to two items before we continue:
As mentioned by #Lukman in the comments, a TOKEN is not a good item to store in an image unless it is strictly for internal use - you decide.
Even if we don't specify the environment variables one by one in the Dockerfile, we still need to define them somewhere else, as the program itself can't know which environment variables you really need.
If you have no problem with the above, let's go on. Basically, the idea is to define the environment variables (here, ENV1 and ENV2 as examples) in a script, source that script in the container, and let the app access the variables.
env.sh:
export ENV1=1
export ENV2=2
app.js:
#!/usr/bin/env node
var env1 = process.env.ENV1;
var env2 = process.env.ENV2;
console.log(env1);
console.log(env2);
entrypoint.sh:
#!/bin/bash
source /usr/src/app/env.sh
exec node /usr/src/app/app.js
Dockerfile:
FROM node:14.17.1
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod -R 755 /usr/src/app
CMD ["/usr/src/app/entrypoint.sh"]
Execution:
$ docker build -t abc:1 .
$ docker run --rm abc:1
1
2
Explanation:
We change CMD (or ENTRYPOINT) in the Dockerfile to use the customized entrypoint.sh. In this entrypoint.sh, we first source env.sh, which makes ENV1 and ENV2 visible to subprocesses of entrypoint.sh.
Then we use exec to replace the current process with node app.js, so PID 1 is now node app.js, while app.js can still read the environment defined in env.sh.
With the above, we don't need to define the variables in the Dockerfile one by one, but our app can still get the environment.
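The effect of exec is easy to see in a plain shell sketch: nothing after the exec line ever runs, because the shell is replaced by the command it exec'd.

```shell
# Demo script (not from the answer above): export, then exec.
cat > entry-demo.sh <<'EOF'
#!/bin/sh
export ENV1=1
exec sh -c 'echo "ENV1=$ENV1"'
echo "never reached"   # exec replaced the shell, so this line is dead code
EOF
sh entry-demo.sh   # prints: ENV1=1
```

The exec'd command inherits the exported environment, which is exactly why the entrypoint pattern works.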
Here's a different (easy) way.
Start by making your file. Here I'm choosing to dump everything from my current environment; this is messy and not recommended. It's a useful bit of code though, so I thought I'd add it.
env | sed 's/^/export /' > env.sh
Edit it so you only have what you need:
vi env.sh
Use the below to mount files into the container. Change pwd to whichever folder you want to share. Using this carelessly may result in you sharing too many files.
sudo docker run -it -v `pwd`:`pwd` ubuntu
Assign appropriate file permissions. I'm using 777, which means anyone can read, write, and execute - for demonstration purposes. But you only need execute privileges.
Run this command and make sure you include the leading dot.
. /LOCATION/env.sh
If you're confused about where your file is, just type pwd in the host console.
You can add those commands where appropriate to your Dockerfile to automate the process. If I recall correctly, there is a VOLUME instruction for Dockerfiles.
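The dump-and-prefix trick boils down to one sed call; here is a deterministic sketch using fixed sample values instead of the real environment:

```shell
# Turn KEY=VALUE lines into 'export KEY=VALUE' lines, as `env | sed ...` does:
printf 'FOO=1\nBAR=2\n' | sed 's/^/export /' > env.sh
cat env.sh
# -> export FOO=1
# -> export BAR=2

# Sourcing the result in the current shell makes the variables visible:
. ./env.sh
echo "$FOO $BAR"   # prints: 1 2
```

Note the `export` prefix is what makes the variables visible to child processes after sourcing; plain KEY=VALUE lines would only set shell-local variables.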

How can I export env variables to a Dockerfile?

Objective
I have an env variable script file that looks like:
#!/bin/sh
export FOO="public"
export BAR="private"
I would like to source the env variables so they are available while the Docker image is being built. I am aware that I can use ARG and ENV with build args, but I have too many env variables and I am afraid that will be a lengthy list.
It's worth mentioning that I only need the env variables for one specific step in my Dockerfile (highlighted below), and I do not necessarily want them to be available in the built image after that.
What I have tried so far
I have tried having a script (envs.sh) that export env vars like:
#!/bin/sh
export DOG="woof"
export CAT="meow"
My Docker file looks like:
FROM fishtownanalytics/dbt:0.18.1
# Define working directory
# Load ENV Vars
COPY envs.sh envs.sh
CMD ["sh", "envs.sh"]
# Install packages required
CMD ["sh", "-c", "envs.sh"]
RUN dbt deps # I need the env variables to be available for this step
# Exposing DBT Port
EXPOSE 8081
But that did not seem to work. How can I export env variables from a script into the Dockerfile?
In the general case, you can't set environment variables in a RUN command: each RUN command runs a new shell in a new container, and any environment variables you set there will get lost at the end of that RUN step.
However, you say you only need the variables at one specific step in your Dockerfile. In that special case, you can run the setup script and the actual command in the same RUN step:
FROM fishtownanalytics/dbt:0.18.1
COPY envs.sh envs.sh
RUN . ./envs.sh \
&& dbt deps
# Anything that envs.sh `export`ed is lost _after_ the RUN step
(CMD is irrelevant here: it only provides the default command that gets run when you launch a container from the built image, and doesn't have any effect on RUN steps. It also looks like the image declares an ENTRYPOINT so that you can only run dbt subcommands as CMD, not normal shell commands. I also use the standard . to read in a script file instead of source, since not every container has a shell that provides that non-standard extension.)
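The "each RUN step is a new shell" behavior can be simulated with two separate shells, using the question's envs.sh as the example:

```shell
cat > envs.sh <<'EOF'
export DOG=woof
export CAT=meow
EOF

# Same shell as the source (like `RUN . ./envs.sh && dbt deps`): visible.
sh -c '. ./envs.sh && echo "same step: DOG=$DOG"'
# A later, separate shell (like the next RUN step): the variable is gone.
sh -c 'echo "next step: DOG=${DOG:-unset}"'
```

This is why chaining the source and the command into one RUN step is the fix: the variables only survive within that single shell invocation.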
Your CMD call runs a new shell (sh) that defines those variables and then exits, leaving no trace for later steps. Note also that source is a non-standard shell extension, and the exec-form CMD ["source", "envs.sh"] would not work at all, since there is no source executable to run. If you wanted a container whose main command sees these variables, the shell-form equivalent would be something like:
CMD ["sh", "-c", ". envs.sh && exec your-command"]

How to set environment variables in Dockerfile and start parent image

I have a Dockerfile with an entrypoint.sh file which exports some Postgres variables.
Then I want to start the parent Docker container, which is referenced in the "FROM pactfoundation/pact-broker" image. Looking at its Dockerfile on GitHub (pact broker), it has CMD ["config.ru"] at the end. So I did something similar in my Dockerfile:
FROM pactfoundation/pact-broker
COPY entrypoint.sh .
CMD ["config.ru"]
When I execute my docker run command:
docker run --rm -e POSTGRES_PORT=5433 -e POSTGRES_DBNAME=pactsd -e POSTGRES_URL=localhost -e POSTGRES_PASSWORD=1234 -e POSTGRES_USERNAME=postgres --name pact sonamsamdupkhangsar/pact:test -d
I see my entrypoint.sh echo statement and then the container is dead:
setting pact broker database variables
How do I start the parent container after setting my environment variables in my entrypoint.sh file?
I also tried with the following:
FROM pactfoundation/pact-broker
ENV PACT_BROKER_DATABASE_NAME=${POSTGRES_DBNAME}
ENV PACT_BROKER_DATABASE_USERNAME=${POSTGRES_USERNAME}
ENV PACT_BROKER_DATABASE_PASSWORD=${POSTGRES_PASSWORD}
ENV PACT_BROKER_DATABASE_HOST=${POSTGRES_URL}
ENV PACT_BROKER_DATABASE_NAME=${POSTGRES_DBNAME}
ENV PACT_BROKER_DATABASE_PORT=$POSTGRES_PORT
RUN echo "PACT_BROKER_DATABASE_PORT: $PACT_BROKER_DATABASE_PORT"
Yet, when I run my built Docker image, I still don't see the variables being set. I tried both the "${}" and "$" syntaxes for setting the env vars.
You have to set your environment variables using ENV in your Dockerfile.
Each build step executes in a different container, and together those steps build the image, so anything set via shell scripts during the build won't persist. Consider using the ENV instruction instead.
Ref: DOCKERFILE ENV
What is happening is that the environment variables you pass at run time with the '-e' parameter are not yet defined at build time, as the ENV instructions are executed at build time only.
E.g. at build-time this line you have:
ENV PACT_BROKER_DATABASE_NAME=${POSTGRES_DBNAME}
becomes this line:
ENV PACT_BROKER_DATABASE_NAME=
as '${POSTGRES_DBNAME}' evaluates to empty at build time. Then at run time you define all your POSTGRES_* environment variables as parameters, so they will indeed exist in the container, BUT no further instructions will be executed to set the PACT_BROKER_* environment variables to any other values.
Proposed solution: I would recommend the simplest approach, if you can make it work: use the environment variables 'directly', however you define them as parameters. I.e. either change the names of your '-e' parameters to PACT_BROKER_* names, or use the POSTGRES_* environment variables in your container. Either way, you would remove the ENV lines from the Dockerfile.
If you really, really need to rename the environment variables at run time, then you should be able to do this by writing to the appropriate 'startup' file in the Dockerfile (making sure to literally write the '$'s to the file so they are dereferenced at run time).
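One way to do the run-time renaming is a tiny entrypoint that maps the names before handing off to the real command. A sketch (the wrapper name and mapping are illustrative, not from the pact-broker image):

```shell
# Hypothetical wrapper: translate POSTGRES_* names to PACT_BROKER_* at start-up.
cat > map-env.sh <<'EOF'
#!/bin/sh
export PACT_BROKER_DATABASE_NAME="$POSTGRES_DBNAME"
export PACT_BROKER_DATABASE_PORT="$POSTGRES_PORT"
exec "$@"
EOF
chmod +x map-env.sh

# Simulating `docker run -e POSTGRES_DBNAME=pactsd ...`:
POSTGRES_DBNAME=pactsd ./map-env.sh sh -c 'echo "$PACT_BROKER_DATABASE_NAME"'
```

Because the mapping happens at process start rather than at build time, it picks up whatever '-e' values the container was actually launched with.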

Passing env variables at runtime without quotes

When passing environment variables at Docker run time, my environment variables are getting wrapped in quotes. How can I set an environment variable without having it quoted?
I set the environment like so: docker run server -e NODE_ENV=dev
Output from the command above:
node dist/server.js "NODE_ENV=dev"
Here's a snippet from my Dockerfile:
FROM base AS release
# copy production node_modules
COPY --from=dependencies /root/app/prod_node_modules ./node_modules
# copy app sources
COPY . .
# expose port and define CMD
EXPOSE 3000
ENTRYPOINT ["npm", "run", "start:prod"]
First of all, I think the order of arguments in your docker run command is the problem.
The -e option should come before your Docker image name, like this:
docker run -e NODE_ENV=dev server
Everything after the image name is treated as the container command's arguments, which is why "NODE_ENV=dev" was being passed as an argument to node dist/server.js instead of being set in the environment.
If it's still not helping, then try the --env-file option of docker run.
docker run --env-file /path/to/server.env server
In server.env
NODE_ENV=dev
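One caveat worth knowing: env-file values are taken literally, quotes included - Docker does no shell-style quote removal on them. A plain-shell sketch of the same literal behavior:

```shell
cat > server.env <<'EOF'
NODE_ENV=dev
QUOTED="dev"
EOF

# Word-split the file into KEY=VALUE args; no quote removal happens on the
# values, so QUOTED keeps its quote characters as part of the value:
env $(cat server.env) sh -c 'echo "NODE_ENV=$NODE_ENV QUOTED=$QUOTED"'
```

So in the env file, write NODE_ENV=dev with no quotes at all; writing NODE_ENV="dev" would put literal quote characters into the variable's value.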