Using environment variables in a Dockerfile (basics) - docker

Using the following Dockerfile I am getting an error on RUN md $SiteFolderPath, and I am not sure why; every example I look at, such as:
https://docs.docker.com/engine/reference/builder/#environment-replacement
indicates I am doing it correctly.
FROM microsoft/iis
#FROM nanoserver/iis
SHELL ["powershell"]
ENV SiteFolderPath c:\\app
ENV SiteFolderPathLogs c:\\app\\logs
ENV WorkFolder /app
ENV SiteAppPool LocalAppPool
ENV SiteName LocalWebSite
ENV SiteHostName LocalSite
RUN md $SiteFolderPath
RUN md $SiteFolderPathLogs
WORKDIR $WorkFolder
COPY ./Public .
RUN New-Item $SiteFolderPath -type Directory
RUN Set-Content $SiteFolderPath\Default.htm "<h1>Hello IIS</h1>"
RUN New-Item IIS:\AppPools\$SiteAppPool
RUN New-Item IIS:\Sites\$SiteName -physicalPath $SiteFolderPath -bindings @{protocol="http";bindingInformation=":80:"+$SiteHostName}
RUN Set-ItemProperty IIS:\Sites\$SiteName -name applicationPool -value $SiteAppPool
EXPOSE 80
EXPOSE 443
VOLUME ${SiteFolderPathLogs}
I get an error message when building the Dockerfile:
mkdir : Cannot bind argument to parameter 'Path' because it is null.

The page you linked to states:
Environment variables are supported by the following list of instructions in the Dockerfile:
ADD
COPY
ENV
EXPOSE
FROM
LABEL
STOPSIGNAL
USER
VOLUME
WORKDIR
as well as:
ONBUILD (when combined with one of the supported instructions above)
Since you are using the variables inside RUN instructions, Docker does not substitute them; the shell that executes each command does. With the default Windows cmd shell you would use the Windows syntax %SiteFolderPath%, but because this Dockerfile sets SHELL ["powershell"], use the PowerShell syntax instead: $env:SiteFolderPath.
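A minimal sketch of the corrected lines, keeping the SHELL ["powershell"] directive from the question:
FROM microsoft/iis
SHELL ["powershell"]
ENV SiteFolderPath c:\\app
ENV SiteFolderPathLogs c:\\app\\logs
# PowerShell reads process environment variables through $env:, so these expand at build time
RUN md $env:SiteFolderPath
RUN md $env:SiteFolderPathLogs
The same change applies to the later RUN lines: every $SiteFolderPath, $SiteAppPool, $SiteName, and $SiteHostName inside a RUN becomes $env:SiteFolderPath and so on, while the ENV, WORKDIR, and VOLUME lines can keep Dockerfile-level substitution.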

Related

Openshift fails to execute dotnet after apparently successful docker image build?

I'm trying to build the project in our GitLab pipeline and then copy it into the Docker container which gets deployed to my company's OpenShift environment.
My dockerfile is below, which does a RUN ls to show that all the files were copied in correctly:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk AS builder
USER root
WORKDIR /App
EXPOSE 5000
RUN dotnet --info
COPY MyApp/bin/publish/ .
RUN pwd
RUN ls
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "MyApp.dll"]
It builds and deploys correctly, but when OpenShift tries to run it the logs show this error:
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET program, but dotnet-MyApp.dll does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
I've double checked the spelling and case multiple times, so what else could be causing this issue?
Your Dockerfile is a multi-stage build, so it actually produces two images.
The first one is:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk AS builder
USER root
WORKDIR /App
EXPOSE 5000
RUN dotnet --info
COPY MyApp/bin/publish/ .
RUN pwd
RUN ls
And the second one is:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "MyApp.dll"]
The second image is the one you are trying to run, but it doesn't include a COPY --from=builder .... That means it doesn't actually contain your application at all, so it's expected that dotnet complains: your dll is not present. You can confirm that by doing an ls in your second image:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
RUN pwd
RUN ls
ENTRYPOINT ["dotnet", "MyApp.dll"]
Aside: If you have already published your application as framework-dependent, you can probably run it using the smaller registry.access.redhat.com/ubi8/dotnet-60-runtime image.
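That swap might look like this (a sketch; the exact image tag is an assumption):
# a runtime-only base image is enough for a framework-dependent publish
FROM registry.access.redhat.com/ubi8/dotnet-60-runtime
WORKDIR /App
COPY --from=builder /App .
ENTRYPOINT ["dotnet", "MyApp.dll"]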

How do I make all environment variables available in the Dockerfile?

I have this Dockerfile
FROM node:14.17.1
ARG GITHUB_TOKEN
ARG REACT_APP_BASE_URL
ARG DATABASE_URL
ARG BASE_URL
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ENV GITHUB_TOKEN=${GITHUB_TOKEN}
ENV REACT_APP_BASE_URL=${REACT_APP_BASE_URL}
ENV DATABASE_URL=${DATABASE_URL}
ENV BASE_URL=${BASE_URL}
ENV PORT 80
COPY . /usr/src/app
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
But I don't like having to set each environment variable individually. Is it possible to make all of them available without setting them one by one?
We need to pay attention to two items before we continue:
As mentioned by @Lukman in the comments, a token is not a good thing to store in an image unless it is strictly for internal use; that is your call.
Even if we do not specify the environment variables one by one in the Dockerfile, we still need to define them somewhere else, as the program itself cannot know which variables you really need.
If you have no problem with the above, let's go on. The basic idea is to define the environment variables (here, ENV1 and ENV2 as examples) in a script, source that script in the container, and let the app read the variables from its environment.
env.sh:
export ENV1=1
export ENV2=2
app.js:
#!/usr/bin/env node
var env1 = process.env.ENV1;
var env2 = process.env.ENV2;
console.log(env1);
console.log(env2);
entrypoint.sh:
#!/bin/bash
source /usr/src/app/env.sh
exec node /usr/src/app/app.js
Dockerfile:
FROM node:14.17.1
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod -R 755 /usr/src/app
CMD ["/usr/src/app/entrypoint.sh"]
Execution:
$ docker build -t abc:1 .
$ docker run --rm abc:1
1
2
Explanation:
We change CMD (or ENTRYPOINT) in the Dockerfile to use the customized entrypoint.sh. In entrypoint.sh, we first source env.sh, which makes ENV1 and ENV2 visible to subprocesses of entrypoint.sh.
Then we use exec to replace the current process with node app.js, so node app.js becomes PID 1, while app.js can still read the environment defined in env.sh.
With the above, we no longer need to define the variables in the Dockerfile one by one, yet our app can still read them.
Here's a different (easy) way.
Start by making your file. Here I'm dumping everything from my current environment, which is messy and not recommended, but it's a useful bit of code so I thought I'd add it.
env | sed 's/^/export /' > env.sh
edit it so you only have what you need
vi env.sh
Use the command below to mount files into the container. Change pwd to whichever folder you want to share. Used carelessly, this may result in you sharing too many files.
sudo docker run -it -v `pwd`:`pwd` ubuntu
Assign appropriate file permissions. I'm using 777, which lets anyone read, write, and execute, for demonstration purposes; since the file is sourced rather than executed, read permission is actually all you need.
Run this command, and make sure you include the leading dot; it sources the file into the current shell.
. /LOCATION/env.sh
If you're not sure where your file is, just type pwd in the host console.
You can add those commands where appropriate to your Dockerfile to automate the process. If I recall correctly, there is also a VOLUME instruction for the Dockerfile.
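Putting those steps together, a rough sketch (the mount makes the host's working directory appear at the same path inside the container, so the env.sh path below is whatever pwd printed on the host):
env | sed 's/^/export /' > env.sh   # dump the current environment as export statements
vi env.sh                           # trim it down to only what you need
sudo docker run -it -v `pwd`:`pwd` ubuntu
# then, inside the container:
. /host/working/dir/env.sh          # illustrative path; use the host's pwd output
Note that for the simple case, docker run's built-in --env-file option gets you there without any mounting, e.g. docker run --env-file env.list ubuntu, where env.list holds plain KEY=value lines without the export prefix.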

Adding files to docker container based on a docker-compose Environment variables

I have a large set of test files (3.2 gb) that I only want to add to the container if an environment variable (DEBUG) is set. For testing locally I set these in a docker-compose file.
So far, I've added the test data folder to a .dockerignore file and tried the solution mentioned here in my Dockerfile without any success.
I've also tried running the cp command from within a run_app.sh which i call in my docker file:
cp local/folder app/testdata
but get cp: cannot stat 'local/folder': No such file or directory. I guess that's because it's looking inside the container for a folder that only exists on my local machine?
This is my docker file:
RUN mkdir /app
WORKDIR /app
ADD . /app/
ARG DEBUG
RUN if [ "x$DEBUG" = "True" ] ; echo "Argument not provided" ; echo "Argument is $arg" ; fi
RUN pip install -r requirements.txt
USER nobody
ENV PORT 5000
EXPOSE ${PORT}
CMD /uus/run_app.sh
If it's really just for testing, and it's in a clearly named isolated directory like testdata, you can inject it using a bind mount.
Remove the ARG DEBUG and the build-time option to copy the content into the image. When you run the container, run it with
docker run \
-v $PWD/local/folder:/app/testdata:ro \
...
This makes that host folder appear in that container directory, read-only so you don't accidentally overwrite the test data for later runs.
Note that this hides whatever was in the image on that path before; hence the "if it's in a separate directory, then..." disclaimer.
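Since the question mentions setting things in a docker-compose file for local testing, the equivalent bind mount there (a sketch; the service and image names are placeholders) would be:
services:
  app:
    image: myapp
    volumes:
      - ./local/folder:/app/testdata:ro   # read-only, mirroring the docker run -v flag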

Setting environment variable during Docker container start (on AWS)

I am trying to pass a variable which fetches an IP address with the following code:
$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
I have tried setting an entrypoint shell script such as:
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
java -jar user-service-0.0.1-SNAPSHOT.jar
And my dockerfile looking like:
#############
### build ###
#############
# base image
FROM maven:3.6.2-ibmjava-8-alpine as build
# set working directory
WORKDIR /app
# install and cache app dependencies
COPY . /app
RUN mvn clean package
############
### prod ###
############
# base image
FROM openjdk:8-alpine
# set working directory
WORKDIR /app
# copy entrypoint from the 'build environment'
COPY --from=build /app/entrypoint.sh /app/entrypoint.sh
# copy artifact build from the 'build environment'
COPY --from=build /app/target/user-service-0.0.1-SNAPSHOT.jar /app/user-service-0.0.1-SNAPSHOT.jar
# expose port 8081
EXPOSE 8081
# run entrypoint and set variables
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
I need this variable to show up when echoing ECS_INSTANCE_IP_ADDRESS so that it can be picked up by a .properties file that references $ECS_INSTANCE_IP_ADDRESS. Whenever I get into /bin/sh inside the container and echo the variable, it shows up blank. If I do an export in that shell, I do get the echo response.
Any ideas? I've tried a bunch of things and can't get the variable to be available in the container.
The standard way to set environment variables in Docker is the Dockerfile, or the task definition in the case of AWS ECS.
The current docker-entrypoint sets the variable for that session only; that is why you get an empty value when you open a new shell in the container. You can verify that it is set within the entrypoint's own session, but this is not the recommended approach in Docker.
You can verify it with:
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
echo "instance IP is ${ECS_INSTANCE_IP_ADDRESS}"
java -jar user-service-0.0.1-SNAPSHOT.jar
You will be able to see this value in the entrypoint's output, but it will not be available in another session.
Another example,
Dockerfile
FROM node:alpine
WORKDIR /app
COPY run.sh run.sh
RUN chmod +x run.sh
ENTRYPOINT ["./run.sh"]
entrypoint
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
node -e "console.log(process.env.ECS_INSTANCE_IP_ADDRESS)"
You will see that the node process started by the entrypoint can read the variable, but if you run a command in a new session inside the container it will print undefined:
docker exec -it <container_id> ash -c 'node -e "console.log(process.env.ECS_INSTANCE_IP_ADDRESS)"'
Two options to deal with the ENV:
Pass the IP as an environment variable at container start if you already know it, since the instance always launches before the container (see the sketch after this list).
The second option is passing the value to the Java process on the command line as a system property (see "How to pass system properties to a jar file"); if that works for you, then your entrypoint will be:
java -DECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4) -jar myjar.jar
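For the first option, a sketch of passing a known IP at container start (the IP and image name are placeholders):
docker run -e ECS_INSTANCE_IP_ADDRESS=10.0.0.12 user-service
# in an ECS task definition, the same value goes under containerDefinitions[].environment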
This is how I fixed it; I believe there was also an error in the metadata URL, and that could have been the culprit.
Dockerfile:
#############
### build ###
#############
# base image
FROM maven:3.6.2-ibmjava-8-alpine as build
# set working directory
WORKDIR /app
# install and cache app dependencies
COPY . /app
RUN mvn clean package
############
### prod ###
############
# base image
FROM openjdk:8-alpine
RUN apk add jq
# set working directory
WORKDIR /app
# copy entrypoint from the 'build environment'
COPY --from=build /app/entrypoint.sh /app/entrypoint.sh
# copy artifact build from the 'build environment'
COPY --from=build /app/target/user-service-0.0.1-SNAPSHOT.jar /app/user-service-0.0.1-SNAPSHOT.jar
# expose port 8081
EXPOSE 8081
# run entrypoint
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
export FARGATE_IP=$(wget -q -O - http://169.254.170.2/v2/metadata | jq -r .Containers[0].Networks[0].IPv4Addresses[0])
echo $FARGATE_IP
java -jar user-service-0.0.1-SNAPSHOT.jar
application-aws.properties (not full file):
eureka.instance.ip-address=${FARGATE_IP}
Eureka is printing a template error on the Eureka status page but microservices do connect and respond to actuator health and swagger-ui from Spring Boot.

ENV/ARG command not populating variables in Dockerfile

I'm trying to create a nanoserver w/ nodejs base image, but I can't seem to get the ARG (or ENV) command to work properly.
My docker file:
FROM microsoft/nanoserver
ENV NODE_VERSION=8.11.4
ADD https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-win-x64.zip C:\\build\\node-v${NODE_VERSION}-win-x64.zip
RUN powershell -Command Expand-Archive C:\build\node-v${NODE_VERSION}-win-x64.zip C:\; Rename-Item C:\node-v${NODE_VERSION}-win-x64 node
RUN SETX PATH C:\node
ENTRYPOINT C:\node\node.exe
Build command:
docker build . -t base-image:latest
It downloads the zip file, but when it tries to expand the downloaded file it throws an error:
Expand-Archive : The path 'C:\build\node-v-win-x64.zip' either does not exist
or is not a valid file system path.
According to the ENV documentation:
Environment variables are supported by the following list of
instructions in the Dockerfile:
ADD
COPY
ENV
EXPOSE
FROM
LABEL
STOPSIGNAL
USER
VOLUME
WORKDIR
as well as:
ONBUILD (when combined with one of the supported instructions above)
Therefore it appears variables defined with ENV are not substituted by Docker inside the RUN directive.
However, you can instead replace the ENV directive with the ARG directive, and NODE_VERSION will be available in subsequent RUN directives.
Example:
FROM microsoft/nanoserver
ARG NODE_VERSION=8.11.4
ADD https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-win-x64.zip C:\\build\\node-v${NODE_VERSION}-win-x64.zip
RUN powershell -Command Expand-Archive C:\build\node-v${NODE_VERSION}-win-x64.zip C:\; Rename-Item C:\node-v${NODE_VERSION}-win-x64 node
RUN SETX PATH C:\node
ENTRYPOINT C:\node\node.exe
Additionally you can override the value of NODE_VERSION in your docker build command.
$ docker build -t base-image:latest --build-arg NODE_VERSION=10.0.0 .
Using the ARG directive will not make NODE_VERSION available in the environment of a running container. Depending on your use case you may also need to use an additional ENV definition.
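If you need both, a common pattern is to combine them so the value exists at build time and in the running container:
ARG NODE_VERSION=8.11.4
# promote the build argument to a persistent environment variable in the image
ENV NODE_VERSION=${NODE_VERSION}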
Found an answer here:
https://github.com/docker/for-win/issues/542
Essentially, within the RUN powershell commands the %VARIABLE_NAME% format has to be used, because the default Windows shell is cmd, which expands %VAR% before PowerShell is even invoked:
FROM microsoft/nanoserver
ENV NODE_VERSION=8.11.4
ADD https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-win-x64.zip C:\\build\\node-v${NODE_VERSION}-win-x64.zip
RUN powershell -Command Expand-Archive C:\build\node-v%NODE_VERSION%-win-x64.zip C:\; Rename-Item C:\node-v%NODE_VERSION%-win-x64 node
RUN SETX PATH C:\node
ENTRYPOINT C:\node\node.exe
