Docker. How to set 3rd party environment variable in a NodeJs container?

I'm trying to set an Oracle environment variable inside the container.
I believe it is not taking effect, because the files are not available on the OS.
Could anyone help?
Thank you so much
FROM node:lts-alpine
RUN mkdir -p /usr/src/app
COPY ./ /usr/src/app
WORKDIR /usr/src/app
RUN export LD_LIBRARY_PATH=/usr/src/app/instantclient_21_5:$LD_LIBRARY_PATH
CMD [ \"npm\", \"run\", \"start\" ]
When I run bash in the container and try to run commands that use the environment variable, it is not set.

Trying to set an environment variable in a RUN statement doesn't make any sense: the commands in a RUN statement are executed in a child shell that exits when they complete, so the effect of export LD_LIBRARY_PATH=... isn't visible once the RUN statement finishes executing.
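To see this in action (a minimal sketch; FOO is a placeholder name of my choosing), note that each RUN statement gets a fresh shell:
RUN export FOO=bar           # sets FOO only in this RUN's shell
RUN echo "FOO is '$FOO'"     # prints: FOO is '' -- the export did not survive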
Docker provides an ENV directive for setting environment variables, e.g.:
FROM node:lts-alpine
RUN mkdir -p /usr/src/app
COPY ./ /usr/src/app
WORKDIR /usr/src/app
ENV LD_LIBRARY_PATH=/usr/src/app/instantclient_21_5
CMD [ "npm", "run", "start" ]
Note that variable expansion in an ENV directive only sees variables already defined in the Dockerfile (via ARG or an earlier ENV) or inherited from the base image, never the container's runtime environment. Writing ENV LD_LIBRARY_PATH=/usr/src/app/instantclient_21_5:$LD_LIBRARY_PATH would therefore expand $LD_LIBRARY_PATH at build time (to an empty string here, since the base image doesn't set it). That should be fine because in this situation LD_LIBRARY_PATH should initially be unset, so the static value is all you need.
(Also, you need to stop escaping the quotes in your CMD directive.)
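For completeness, a hedged sketch of the expansion rule (this example is mine, not from the original answer): an ENV value can reference a variable defined earlier in the Dockerfile or by the base image, which is how the common PATH idiom works:
# $PATH is defined by the base image, so it expands at build time
ENV PATH=/usr/src/app/node_modules/.bin:$PATH
# $LD_LIBRARY_PATH is typically unset in node images, so a static value suffices
ENV LD_LIBRARY_PATH=/usr/src/app/instantclient_21_5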

Related

How do I make all environment variables available in the Dockerfile?

I have this Dockerfile
FROM node:14.17.1
ARG GITHUB_TOKEN
ARG REACT_APP_BASE_URL
ARG DATABASE_URL
ARG BASE_URL
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ENV GITHUB_TOKEN=${GITHUB_TOKEN}
ENV REACT_APP_BASE_URL=${REACT_APP_BASE_URL}
ENV DATABASE_URL=${DATABASE_URL}
ENV BASE_URL=${BASE_URL}
ENV PORT 80
COPY . /usr/src/app
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
But I don't like having to set each environment variable individually. Is it possible to make them all available without setting them one by one?
We need to pay attention to two items before we continue:
As mentioned by @Lukman in the comments, a TOKEN is not a good thing to store in an image unless it is strictly for internal use; that's your call.
Even if we don't specify the environment variables one by one in the Dockerfile, we still need to define them somewhere else, since the program itself can't know which variables it really needs.
If you have no problem with the above, let's go on. Basically, the idea is to define the environment variables (here, ENV1 and ENV2 as examples) in a script, source that script in the container, and give the app a way to access these variables.
env.sh:
export ENV1=1
export ENV2=2
app.js:
#!/usr/bin/env node
var env1 = process.env.ENV1;
var env2 = process.env.ENV2;
console.log(env1);
console.log(env2);
entrypoint.sh:
#!/bin/bash
source /usr/src/app/env.sh
exec node /usr/src/app/app.js
Dockerfile:
FROM node:14.17.1
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod -R 755 /usr/src/app
CMD ["/usr/src/app/entrypoint.sh"]
Execution:
$ docker build -t abc:1 .
$ docker run --rm abc:1
1
2
Explanation:
We change CMD (or ENTRYPOINT) in the Dockerfile to use the customized entrypoint.sh. This entrypoint.sh first sources env.sh, which makes ENV1 and ENV2 visible to subprocesses of entrypoint.sh.
Then we use exec to replace the current process with node app.js, so node app.js becomes PID 1, while app.js can still read the environment defined in env.sh.
With the above, we don't need to define the variables in the Dockerfile one by one, and our app can still get the environment.
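One caveat worth adding (my note, not part of the original answer): because env.sh is sourced by the script configured as CMD, overriding the command at docker run bypasses it entirely:
$ docker run --rm abc:1 node -e 'console.log(process.env.ENV1)'
undefined
If that matters for your use case, make the script the ENTRYPOINT and end it with exec "$@" instead of a hard-coded command; env.sh then gets sourced no matter which command the container runs.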
Here's a different (easy) way.
Start by making your file. Here I'm choosing to dump everything in my current environment; this is messy and not recommended. It's a useful bit of code though, so I thought I'd add it.
env | sed 's/^/export /' > env.sh
Edit it so you only have what you need:
vi env.sh
Use the command below to mount files into the container. Change pwd to whichever folder you want to share. Using this carelessly may result in you sharing too many files.
sudo docker run -it -v `pwd`:`pwd` ubuntu
Assign appropriate file permissions. I'm using 777, which means anyone can read, write, and execute, for demonstration purposes. (Strictly speaking, sourcing the file only requires read permission.)
Run this command, and make sure you include the leading full stop; it sources the file into the current shell.
. /LOCATION/env.sh
If you're confused about where your file is, just type pwd in the host console.
You can add those commands where appropriate to your Dockerfile to automate the process. If I recall correctly, there is also a VOLUME instruction for Dockerfiles.
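A rough sketch of what automating it in the Dockerfile could look like (my assumption, with illustrative paths; it presumes the pruned env.sh sits next to the Dockerfile):
# copy the pruned env file into the image
COPY env.sh /env.sh
# source it in the same shell that starts the app
CMD ["sh", "-c", ". /env.sh && npm start"]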

Passing arguments via env variables through Dockerfile and Kubernetes deployment

Hello, I have a problem with manually running a deployment.
I use GitLab CI, a Dockerfile, and Kubernetes.
FROM python:3.8
RUN mkdir /app
COPY . /app/
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "main.py", "${flag1}", "${ARG1}", "${flag2}, "${ARG2}"]
I need to run my app with a command like "python3 main.py -g argument1 -c argument2", and every run needs different arguments. I'm using the CMD above.
Then my pipeline runs a bash script that checks whether "${ARG1}" is empty and, if it is, unsets "${FLAG1}". The next step deploys to Kubernetes using a standard deployment via GitLab CI.
My idea is bad because those environment variables aren't passed through to the Dockerfile. Does anybody have an idea? I can't use Docker build args because they don't support the "CMD" step.
You are using the array (exec) syntax for the command (CMD), so there is no shell that could expand the variables; the data is passed directly to the exec system call.
If you want the variables to be expanded, use
CMD python main.py ${flag1} ${ARG1} ${flag2} ${ARG2}
or replace the command completely in the Kubernetes pod/replica/deployment definition, optionally with variables substituted there.
Additional note: The CMD is executed at runtime of the container, not at build time.
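To make that concrete, here is a hedged sketch (the empty defaults are my own placeholders): declare ENV defaults so the shell-form CMD can expand whatever is set when the container starts, then supply real values at run time with docker run -e or the env: section of the Kubernetes container spec.
ENV flag1="" ARG1="" flag2="" ARG2=""
CMD python main.py ${flag1} ${ARG1} ${flag2} ${ARG2}
$ docker run -e flag1=-g -e ARG1=argument1 -e flag2=-c -e ARG2=argument2 myimage
Unset variables expand to empty strings and simply drop out of the command line, which matches the "unset the flag if the argument is empty" behavior the pipeline script implements.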

Calling different commands in Dockerfiles depending on environment

What is the best way to call different npm scripts from a Dockerfile depending on type of environment (i.e. development or production)?
My current Dockerfile is below:
FROM node:12.15.0-alpine
ARG env
WORKDIR /usr/app
COPY ./ /usr/app
CMD npm run start
EXPOSE 5000
Ideally I would like to be able to run either an npm run start:development script or a start:production script.
I have tried a mix of ARG and ENV variables to get the desired effect. However, judging from the closed GitHub issue below, they are not available in the part of the cycle that I would require.
i.e.
CMD npm run start:${env}
Primarily I am wondering if there is a preferred methodology that is used to keep everything in one Dockerfile.
Edit:
I have had some sort of success with the below code, but sometimes it causes my terminal to become unresponsive.
RUN if [ "$env" = "production" ]; then \
npm run start:prod; \
else \
npm run start:dev; \
fi
The Dockerfile is evaluated in a 'build' context, so any variables available in it relate to the build environment (when you run docker build), not the execution environment. The build process runs only once, when you build the image.
If you want to use environment variables defined at execution time, point CMD at a script inside the container. Inside this script, all environment variables of the initial execution (container start) are available.
Dockerfile
...
COPY ./scripts /script/path
CMD /script/path/test.sh
./scripts/test.sh
#!/bin/sh
cd /your/app/path
echo ENV = $ENV
npm run start:$ENV
Also, you can review the best practices for Dockerfiles, which have good examples and use cases:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
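A hypothetical usage of this pattern (the image name and values are my own examples): build a single image, then pick the script at run time:
$ docker build -t myapp .
$ docker run -e ENV=development myapp   # the script runs: npm run start:development
$ docker run -e ENV=production myapp    # the script runs: npm run start:production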

Expand ARG value in CMD [Dockerfile]

I'm passing a build argument into: docker build --build-arg RUNTIME=test
In my Dockerfile I want to use the argument's value in the CMD:
CMD ["npm", "run", "start:${RUNTIME}"]
Doing so results in this error: npm ERR! missing script: start:${RUNTIME} - it's not expanding the variable
I read through this post: Use environment variables in CMD
So I tried doing: CMD ["sh", "-c", "npm run start:${RUNTIME}"] - I end up with this error: /bin/sh: [sh,: not found
Both errors occur when I run the built container.
I'm using the node alpine image as a base. Anyone have ideas how to get the argument value to expand within CMD? Thanks in advance!
full Dockerfile:
FROM node:10.15.0-alpine as builder
ARG RUNTIME_ENV=test
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci
RUN npm run build
FROM node:10.15.0-alpine
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD ["npm", "run", "start:${RUNTIME_ENV}"]
Update:
Just for clarity, there were two problems I was running into:
1. The problem as described by Samuel P.
2. ENV values are not carried across stages of a multi-stage build (each FROM starts from its base image's environment).
Here's the working Dockerfile where I'm able to expand environment variables in CMD:
# Here we set the build-arg as an environment variable.
# Setting this in the base image allows each build stage to access it
FROM node:10.15.0-alpine as base
ARG ENV
ENV RUNTIME_ENV=${ENV}
FROM base as builder
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci && npm run build
FROM base
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD npm run start:${RUNTIME_ENV}
The problem here is that ARG params are available only during image build.
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
https://docs.docker.com/engine/reference/builder/#arg
CMD is executed at container startup, where ARG variables are no longer available.
ENV variables are available during build and also in the container:
The environment variables set using ENV will persist when a container is run from the resulting image.
https://docs.docker.com/engine/reference/builder/#env
To solve your problem, you should transfer the ARG value into an ENV variable.
Add the following line before your CMD:
ENV RUNTIME_ENV ${RUNTIME_ENV}
If you want to provide a default value, you can use the following:
ENV RUNTIME_ENV ${RUNTIME_ENV:-default_value}
Here are some more details about the usage of ARG and ENV from the docker docs.
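Putting the pieces together, a minimal sketch of the whole pattern (the default value test comes from the question's Dockerfile):
ARG RUNTIME_ENV=test
ENV RUNTIME_ENV=${RUNTIME_ENV}
CMD npm run start:${RUNTIME_ENV}
ARG captures the --build-arg at build time, ENV persists the value into the image, and the shell form of CMD expands it when the container starts.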

What is the point of WORKDIR on Dockerfile?

I'm learning Docker. For many times I've seen that Dockerfile has WORKDIR command:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
Can't I just omit WORKDIR and COPY and just have my Dockerfile in the root of my project? What are the downsides of that approach?
According to the documentation:
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
Also, in the Docker best practices it recommends you to use it:
... you should use WORKDIR instead of proliferating instructions like RUN cd … && do-something, which are hard to read, troubleshoot, and maintain.
I would suggest to keep it.
I think you can refactor your Dockerfile to something like:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]
You don't have to
RUN mkdir -p /usr/src/app
This will be created automatically when you specify your WORKDIR:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]
You can think of WORKDIR like a cd inside the container (it affects commands that come later in the Dockerfile, like the RUN command). If you removed WORKDIR in your example above, RUN npm install wouldn't work because you would not be in the /usr/src/app directory inside your container.
I don't see how this would be related to where you put your Dockerfile (since your Dockerfile location on the host machine has nothing to do with the pwd inside the container). You can put the Dockerfile wherever you'd like in your project. However, the first argument to COPY is a relative path, so if you move your Dockerfile you may need to update those COPY commands.
Before applying WORKDIR: here the WORKDIR is in the wrong place and is not used wisely.
FROM microsoft/aspnetcore:2
COPY --from=build-env /publish /publish
WORKDIR /publish
ENTRYPOINT ["dotnet", "/publish/api.dll"]
We corrected the above code to put WORKDIR in the right location and simplified the following statements by removing /publish:
FROM microsoft/aspnetcore:2
WORKDIR /publish
COPY --from=build-env /publish .
ENTRYPOINT ["dotnet", "api.dll"]
So it acts like a cd and sets the tone for the upcoming statements.
The answer by @juanlumn is great, but I wanted to add one more (important) thing.
In a regular command line, if you cd somewhere, you stay there until you change directory again. In a Dockerfile, however, a cd in one RUN command doesn't carry over to the next: each RUN starts back in the current WORKDIR (or / if none is set). That's a gotcha for Docker newbies, and something to be aware of.
So not only does WORKDIR give a more obvious visual cue to someone reading your code, it also keeps the working directory set for more than just a single RUN command.
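A minimal sketch of that gotcha (the paths are just examples):
RUN cd /usr/src/app    # effective only within this single RUN instruction
RUN npm install        # runs in the current WORKDIR (default /), not /usr/src/app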
Beware of using variables as the target directory name for WORKDIR: doing that appears to result in a "cannot normalize nothing" fatal error. IMO it's also worth pointing out that WORKDIR behaves in the same way as mkdir -p <path>, i.e. all elements of the path are created if they don't already exist.
UPDATE:
I encountered the variable-related problem (mentioned above) while running a multi-stage build. It now appears that using a variable is fine, as long as the variable is "in scope". For example, in the following, the second WORKDIR reference fails ...
FROM <some image>
ENV varname varval
WORKDIR $varname
FROM <some other image>
WORKDIR $varname
whereas it succeeds in this ...
FROM <some image>
ENV varname varval
WORKDIR $varname
FROM <some other image>
ENV varname varval
WORKDIR $varname
.oO(Maybe it's in the docs & I've missed it)
Be careful where you set WORKDIR, because it can affect the continuous integration flow. For example, setting it to /home/circleci/project can cause errors with .ssh or whatever else the remote CircleCI runner sets up at that path at setup time.
