I am having difficulties with ARG & ENV in Docker after upgrading to Docker version 20.10.7, build f0df350, on Windows 10.
I have made a Dockerfile to show the issue:
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
ARG node_build=production
ENV node_build_env=${node_build}
FROM node:12.18.3 AS node-build
WORKDIR /root
RUN echo $node_build_env > test.txt
FROM base AS final
WORKDIR /app
COPY --from=node-build /root/test.txt ./
My goal here is that an ARG can be set at build time, that it then becomes an environment variable inside the container, and that it falls back to a default value if none is set.
In this Dockerfile I am attempting to write the environment variable node_build_env to a text file and then copy it to the final stage. The problem, though, is that the file is empty.
To re-create the issue, these are the commands I am using:
docker build -t testargs:test .
docker run -it --rm testargs:test /bin/bash
cat test.txt
The file is empty. However if I run:
docker build -t testargs:test . --target node-build
and then manually run the command:
echo $node_build_env > test.txt
It works and the value production is written into the file.
Why does it work when I do it manually but not as part of the RUN command?
You are using multi-stage builds.
Your ARG and ENV belong to the base stage, and you are not using the base stage in your node-build stage.
That means node_build_env has no value in node-build, so the following line creates an empty test.txt file:
RUN echo $node_build_env > test.txt
However, your final stage is built FROM base, which means it does have access to the node_build_env variable. So build your image with docker build -t testargs:test ., open an interactive session with the container, and execute the following command:
echo $node_build_env
You will see production printed in the terminal.
I believe this will help you solve the problem. Cheers 🍻 !!!
Edit: this is the working version:
ARG node_build=production
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
FROM node:12.18.3 AS node-build
ARG node_build
ENV node_build_env=$node_build
WORKDIR /root
RUN echo $node_build_env > test.txt
FROM base AS final
WORKDIR /app
COPY --from=node-build /root/test.txt ./
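To confirm both behaviours, you can build once with the default and once with an override (staging is just an example value):
docker build -t testargs:test .
docker build --build-arg node_build=staging -t testargs:test .
In the first case test.txt should contain production; in the second, staging.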
I'm trying to build the project in our GitLab pipeline and then copy it into the Docker container which gets deployed to my company's OpenShift environment.
My Dockerfile is below; it does a RUN ls to show that all the files were copied in correctly:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk AS builder
USER root
WORKDIR /App
EXPOSE 5000
RUN dotnet --info
COPY MyApp/bin/publish/ .
RUN pwd
RUN ls
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "MyApp.dll"]
It builds and deploys correctly, but when OpenShift tries to run it the logs show this error:
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET program, but dotnet-MyApp.dll does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
I've double checked the spelling and case multiple times, so what else could be causing this issue?
Your Dockerfile builds two images:
The first one is:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk AS builder
USER root
WORKDIR /App
EXPOSE 5000
RUN dotnet --info
COPY MyApp/bin/publish/ .
RUN pwd
RUN ls
And the second one is:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "MyApp.dll"]
The second image is the one you are actually running, and it doesn't include a COPY --from=builder ... instruction. That means it doesn't actually contain your application at all, so it's expected that dotnet complains: your dll is not present. You can confirm that by doing an ls in the second image:
FROM bar.prod.myc.com/myc-docker/myc-ubi8-dotnet60-sdk:latest
ENV ASPNETCORE_ENVIRONMENT="Debug"
ENV WEBSERVER="Kestrel"
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
RUN pwd
RUN ls
ENTRYPOINT ["dotnet", "MyApp.dll"]
Aside: If you have already published your application as framework-dependent, you can probably run it using the smaller registry.access.redhat.com/ubi8/dotnet-60-runtime image.
I do
git clone https://github.com/openzipkin/zipkin.git
cd zipkin
Then create a Dockerfile as below:
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY ./ .
ENTRYPOINT ["sleep", "1000000"]
then
docker build -t abc .
docker run abc
I then run docker exec -it CONTAINER_ID bash.
pwd returns /app, which is expected,
but ls shows that the files are not copied:
only the directories and the xml file are copied into the /app directory.
What is the reason, and how do I fix it?
I also tried:
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY . /app
ENTRYPOINT ["sleep", "1000000"]
That repository contains a .dockerignore file which excludes everything except a set of things it selects.
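For illustration only (this is not the repository's actual file), a whitelist-style .dockerignore excludes everything by default and then re-includes selected paths:
# hypothetical whitelist-style .dockerignore
*
!pom.xml
!docker/
Anything not explicitly re-included with a ! pattern never reaches the build context, so COPY cannot pick it up.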
That repository's docker directory also contains several build scripts for official images and you may find it easier to start your custom image FROM openzipkin/zipkin rather than trying to reinvent it.
I have a multi-stage build where a python script runs in the first stage and uses several env vars.
How do I set these variables in the docker build command?
Here's the Dockerfile:
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
My export.py script reads several env vars, and I have a .env file. If I run a container built from the first stage and pass --env-file, it works, but I can't seem to get it to work in the build stage.
How can I get the env vars to be available when building the first stage?
I don't care if they are saved in the image or not...
It seems you are looking for the ARG instruction. It's only available at build time and won't be available at image runtime. Don't use it for secrets, which are not meant to stick around!
# default value if not using --build-arg instruction
ARG GLOBAL_AVAILABLE=iamglobal
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
ARG GLOBAL_AVAILABLE
ENV GLOBAL_AVAILABLE=$GLOBAL_AVAILABLE
# only visible at exporter build stage:
ARG LOCAL_AVAILABLE=aimlocal
# multistage visible:
RUN echo ${GLOBAL_AVAILABLE}
# local stage visible (exporter build stage):
RUN echo ${LOCAL_AVAILABLE}
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
You can pass custom ARG values by using the --build-arg flag:
docker build -t <image-name>:<tag> --build-arg GLOBAL_AVAILABLE=abc .
The general format to pass multiple args is:
docker build -t <image-name>:<tag> --build-arg <key1>=<value1> --build-arg <key2>=<value2> .
some refs:
https://docs.docker.com/engine/reference/builder/
https://blog.bitsrc.io/how-to-pass-environment-info-during-docker-builds-1f7c5566dd0e
https://vsupalov.com/docker-arg-env-variable-guide/
I have the below Dockerfile:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, if I try to run the image with the below command, I still cannot see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container, however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
-->Dockerfile
-->package.json
-->sum.js
-->sum.test.js
-->Test
-->--->updated_sum.test.js
-->Scripts
-->--->updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
ls
docker run --rm sum \
node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js node missing.js
# not found
Instead you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
--build-arg JS_FILE=./product.js \
--build-arg JS_TEST_FILE=./product.test.js \
-t product \
.
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
Another option is to COPY the entire source tree into the image (JavaScript files are pretty small compared to a complete Node installation) and use the docker run command to pick which script to run.
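A sketch of that second option, reusing the directory layout from the question and the sum image tag from above:
FROM node:16.7.0
WORKDIR /app
COPY package.json ./
RUN npm install
# copy the whole source tree, including Scripts/ and Test/
COPY . .
CMD ["node", "./sum.js"]
Then pick the script at run time:
docker run --rm sum node ./Scripts/updated_sum.js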
In my Dockerfile I have the following:
ARG a-version
RUN wget -q -O /tmp/alle.tar.gz http://someserver/server/$a-version/a-server-$a-version.tar.gz && \
mkdir /opt/apps/$a-version
However when building this with:
--build-arg http_proxy=http://myproxy --build-arg a-version=a --build-arg b-version=b
Step 10/15 : RUN wget... is shown with $a-version in the path instead of the substituted value and the build fails.
I have followed the instructions shown here but must be missing something else.
My question is: what could be causing this issue, and how can I solve it?
Another thing to be careful about: an ARG goes out of scope at every FROM statement, so it is no longer available in the stages that follow unless you re-declare it. Be careful with multi-stage builds.
You can re-declare the ARG without a default value after the FROM to get around this problem:
ARG VERSION=latest
FROM busybox:$VERSION
ARG VERSION
RUN echo $VERSION > image_version
Example taken from docs:
https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
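You can then override the default at build time in the usual way (assuming 1.36 is an available busybox tag):
docker build --build-arg VERSION=1.36 .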
Don't use - in variable names.
docker build will always show you the line as it is written in the Dockerfile, regardless of the variable's value.
So use this variable name instead, a_version:
ARG a_version
See this example:
Dockerfile:
FROM alpine
ARG a_version
RUN echo $a_version
Build:
$ docker build . --build-arg a_version=1234
Sending build context to Docker daemon 2.048 kB
Step 1/3 : FROM alpine
---> a41a7446062d
Step 2/3 : ARG a_version
---> Running in c55e98cab494
---> 53dedab7de75
Removing intermediate container c55e98cab494
Step 3/3 : RUN echo $a_version <<< note this <<
---> Running in 56b8aeddf77b
1234 <<<< and this <<
---> 89badadc1ccf
Removing intermediate container 56b8aeddf77b
Successfully built 89badadc1ccf
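Applied to the Dockerfile from the question, only the variable name changes:
ARG a_version
RUN wget -q -O /tmp/alle.tar.gz http://someserver/server/$a_version/a-server-$a_version.tar.gz && \
    mkdir /opt/apps/$a_version
and the corresponding build flag becomes --build-arg a_version=a.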
I had the same problem using Windows containers on Windows.
Instead of doing this (which works in Linux containers):
FROM alpine
ARG TARGETPLATFORM
RUN echo "I'm building for $TARGETPLATFORM"
You need to do this:
FROM mcr.microsoft.com/windows/servercore
ARG TARGETPLATFORM
RUN echo "I'm building for %TARGETPLATFORM%"
Just change the variable resolution according to the OS.
I spent a lot of time getting the argument substitution to work, but the solution was really simple: the substitution within RUN needs the argument reference to be enclosed in double quotes.
ARG CONFIGURATION=Debug
RUN dotnet publish "Project.csproj" -c "$CONFIGURATION" -o /app/publish
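For example, to publish a Release build instead of the Debug default (myapp is just a placeholder tag):
docker build --build-arg CONFIGURATION=Release -t myapp .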
The only way I was able to substitute an ARG in a Windows Container was to prefix with $env:, as mentioned here.
An example of my Dockerfile is below. Notice that the ARG PAT is defined after the FROM so that it's in scope for its use in the RUN nuget sources add command (as Hongtao suggested). The only successful way I found to supply the personal access token was using $env:PAT
FROM mcr.microsoft.com/dotnet/framework/sdk:4.7.2 AS build
WORKDIR /app
ARG PAT
# copy csproj and restore as distinct layers
COPY *.sln .
COPY WebApi/*.csproj ./WebApi/
COPY WebApi/*.config ./WebApi/
RUN nuget sources add -name AppDev -source https://mysource.pkgs.visualstudio.com/_packaging/AppDev/nuget/v2 -username usern -password $env:PAT
RUN nuget restore
# copy everything else and build app
COPY WebApi/. ./WebApi/
WORKDIR /app/WebApi
RUN msbuild /p:Configuration=Release
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2 AS runtime
WORKDIR /inetpub/wwwroot
COPY --from=build /app/WebApi/. ./
The actual Docker command looks like this:
docker build --build-arg PAT=mypatgoeshere -t webapi .
I had the same problem accessing build args in my RUN command. It turns out that the line containing the ARG definition should not be the first line. The working Dockerfile snippet looks like this:
FROM centos:7
MAINTAINER xxxxx
ARG SERVER_IPS
Earlier, I had placed the ARG definition as the first line of the Dockerfile. My Docker version is v19.
There are many answers here that make sense, but the main point is missing:
how you use build arguments depends on the base image.
For a Linux image, $ARG will work.
For Windows, depending on the image, it can be either $env:ARG (e.g. for mcr.microsoft.com/dotnet/framework/sdk:4.8) or %ARG% (e.g. for mcr.microsoft.com/windows/nanoserver:1809).
For me it was the argument order:
docker build . -f somepath/to/Dockerfile --build-arg FOO=BAR
did not work, but:
docker build --build-arg FOO=BAR . -f somepath/to/Dockerfile
did.