docker-compose env vars not available when connecting via ssh - docker

I have a local dev environment which requires several hosts reachable via SSH, so I set up a docker-compose.yml with some services:
services:
  ssh1:
    build:
      context: ./.project/docker/ssh1
      dockerfile: Dockerfile
    environment:
      MYSQL_USER: app1
      MYSQL_PASSWORD: app1
The Dockerfile has the following contents:
FROM ubuntu:20.04
RUN export DEBIAN_FRONTEND=noninteractive \
&& ln -fs /usr/share/zoneinfo/Europe/Berlin /etc/localtime \
&& apt update \
&& apt upgrade -y \
&& apt install -y openssh-server rsync php \
&& mkdir /run/sshd/ \
&& ssh-keygen -A \
&& for key in $(ls /etc/ssh/ssh_host_* | grep -v pub); do echo "HostKey $key" >> /etc/ssh/sshd_config; done \
&& addgroup --gid 1000 app \
&& adduser --gecos "" --disabled-password --shell /bin/bash --uid 1000 --gid 1000 app \
&& mkdir -m 700 /home/app/.ssh/ \
&& chown app:app /home/app/.ssh/ \
&& rm -rf /var/lib/apt/lists/*
COPY --chown=app:app ssh1_rsa.pub /home/app/.ssh/authorized_keys
CMD ["/usr/sbin/sshd", "-D"]
EXPOSE 22
I can verify that the environment variables are set in the container:
$ docker-compose exec ssh1 printenv | grep MYSQL
MYSQL_USER=app1
MYSQL_PASSWORD=app1
docker inspect project_ssh1_1 also shows the ENV variables.
But when I connect from another container to ssh1 via SSH, the environment variables are not set.
Why are my environment variables not set when I SSH into the container?
I would also appreciate any in-depth input on how env vars are set in the container by Docker and how env vars are inherited from processes or the "OS".
Solution edit:
The actual question was answered. However, I did not ask the right question, so here's my actual solution. I am able to set the env vars in the SSH session with the help of a pretty hacky workaround, which should not be used in production environments, since it could lead to information disclosure.
Add all ENV VARS as build args.
docker-compose.yml:
ssh1:
  build:
    context: ./.project/docker/ssh1
    dockerfile: Dockerfile
    args:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: app1
      MYSQL_USER: app1
      MYSQL_PASSWORD: app1
      MYSQL_HOST: mysql1
And write them to $HOME/.ssh/environment while also enabling PermitUserEnvironment. Don't do this in production!
Dockerfile
FROM ubuntu:20.04
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_HOST
RUN echo "MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD" >> /home/app/.ssh/environment \
&& echo "MYSQL_DATABASE=$MYSQL_DATABASE" >> /home/app/.ssh/environment \
&& echo "MYSQL_USER=$MYSQL_USER" >> /home/app/.ssh/environment \
&& echo "MYSQL_PASSWORD=$MYSQL_PASSWORD" >> /home/app/.ssh/environment \
&& echo "MYSQL_HOST=$MYSQL_HOST" >> /home/app/.ssh/environment \
&& sed -i 's/#PermitUserEnvironment no/PermitUserEnvironment yes/g' /etc/ssh/sshd_config
Now, when you log in, SSH will read the env vars from the user's .ssh/environment (user app in this case) and set them in the user's SSH session.
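For reference, assuming the matching private key ssh1_rsa is available on the client side and both services share the Compose network, a quick check from another container could look like:
ssh -i ssh1_rsa app@ssh1 'printenv | grep MYSQL_'
which should now print the MYSQL_* variables that sshd read from ~/.ssh/environment.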

Environment variables are present in RUN commands and in the shell you get when issuing a docker exec command, but when you SSH into an SSH server running inside the container, you actually get a brand-new shell which doesn't have those env variables set.

Your issue actually has not much to do with Docker but is due to the way sshd works: for every connection, sshd will set up a new environment, wiping out all variables in its own environment; see the "Login Process" section in man sshd.
(It's easy to see why this makes sense: sshd is started by the root user and so may contain sensitive data in its environment variables that should not be leaked to the users. Other variables wouldn't be reasonable to pass to the user session, e.g. HOME, PATH, SHELL)
Depending on your use case, there are various ways to pass environment variables to a new SSH session, depending on whether it is an interactive or non-interactive session and whether it runs a (login) shell or not:
~/.ssh/environment: variables injected by ssh, see PermitUserEnvironment
/etc/environment: used by pam_env on login
/etc/profile, ~/.bashrc and the like: configs used by the (login) shell; see bash for example.
Also depending on your use case, you now have various options for adding these files to the container:
if the variables are static: just ADD the respective file to the image or create it in the Dockerfile
if the variables are set on build-time: use ARGs (or ENV) to pass the variables to the build and create the respective file from that in the build (as you did in your solution)
if the variables should be set on container run-time:
use a custom ENTRYPOINT script to generate the respective file on startup from the passed environment variables or command line arguments (see the sketch after this list)
volume-mount the respective file into the container (you may also use docker secret for sensitive data here)
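For the run-time case, here is a minimal ENTRYPOINT sketch (the script path and the variable prefix are only illustrative, and the same information-disclosure caveat as in the question's solution applies). It regenerates the user's ~/.ssh/environment from the container's environment on every start before launching sshd:
#!/bin/sh
# entrypoint.sh (illustrative): expose selected container env vars to SSH sessions
# via sshd's PermitUserEnvironment; avoid in production, credentials end up on disk.
env | grep '^MYSQL_' > /home/app/.ssh/environment
chown app:app /home/app/.ssh/environment
chmod 600 /home/app/.ssh/environment
exec /usr/sbin/sshd -D
With ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile, the values passed via environment: in docker-compose.yml are written out fresh on each container start instead of being baked into the image at build time.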

Related

Unable to set environment variable inside docker container when calling sh file from Dockerfile CMD

I am following this link to create a Spark cluster. I am able to run the Spark cluster. However, I have to give an absolute path to start spark-shell. I am trying to set environment variables, i.e. PATH and a few others, in start-spark.sh. However, they are not being set inside the container. I tried printing them using printenv inside the container, but these variables are never reflected.
Am I trying to set the environment variables incorrectly? The Spark cluster is running successfully, though.
I am using docker-compose.yml to build and recreate an image and container.
docker-compose up --build
Dockerfile
# builder step used to download and configure spark environment
FROM openjdk:11.0.11-jre-slim-buster as builder
# Add Dependencies for PySpark
RUN apt-get update && apt-get install -y curl vim wget software-properties-common ssh net-tools ca-certificates python3 python3-pip python3-numpy python3-matplotlib python3-scipy python3-pandas python3-simpy
# JDBC driver download and install
ADD https://go.microsoft.com/fwlink/?linkid=2168494 /usr/share/java
RUN update-alternatives --install "/usr/bin/python" "python" "$(which python3)" 1
# Fix the value of PYTHONHASHSEED
# Note: this is needed when you use Python 3.3 or greater
ENV SPARK_VERSION=3.1.2 \
HADOOP_VERSION=3.2 \
SPARK_HOME=/opt/spark \
PYTHONHASHSEED=1
# Download and uncompress spark from the apache archive
RUN wget --no-verbose -O apache-spark.tgz "https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" \
&& mkdir -p ${SPARK_HOME} \
&& tar -xf apache-spark.tgz -C ${SPARK_HOME} --strip-components=1 \
&& rm apache-spark.tgz
My Dockerfile-spark
When using SPARK_BIN="${SPARK_HOME}/bin/" under ENV in the Dockerfile, the environment variable gets set. It is visible inside the Docker container by using printenv.
FROM apache-spark
WORKDIR ${SPARK_HOME}
ENV SPARK_MASTER_PORT=7077 \
SPARK_MASTER_WEBUI_PORT=8080 \
SPARK_LOG_DIR=${SPARK_HOME}/logs \
SPARK_MASTER_LOG=${SPARK_HOME}/logs/spark-master.out \
SPARK_WORKER_LOG=${SPARK_HOME}/logs/spark-worker.out \
SPARK_WORKER_WEBUI_PORT=8080 \
SPARK_MASTER="spark://spark-master:7077" \
SPARK_WORKLOAD="master"
COPY start-spark.sh /
CMD ["/bin/bash", "/start-spark.sh"]
start-spark.sh
#!/bin/bash
. "$SPARK_HOME/bin/load-spark-env.sh"
export SPARK_BIN="${SPARK_HOME}/bin/" # This doesn't work here
export PATH="${SPARK_HOME}/bin/:${PATH}" # This doesn't work here
# When the spark work_load is master run class org.apache.spark.deploy.master.Master
if [ "$SPARK_WORKLOAD" == "master" ];
then
export SPARK_MASTER_HOST=`hostname` # This works here
cd $SPARK_BIN && ./spark-class org.apache.spark.deploy.master.Master --ip $SPARK_MASTER_HOST --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT >> $SPARK_MASTER_LOG
My File structure is
dockerfile
dockerfile-spark # this uses pre-built image created by dockerfile
start-spark.sh # invoked by dockerfile-spark
docker-compose.yml # uses build parameter to build an image from dockerfile-spark
From inside the master container
root@3abbd4508121:/opt/spark# export
declare -x HADOOP_VERSION="3.2"
declare -x HOME="/root"
declare -x HOSTNAME="3abbd4508121"
declare -x JAVA_HOME="/usr/local/openjdk-11"
declare -x JAVA_VERSION="11.0.11+9"
declare -x LANG="C.UTF-8"
declare -x OLDPWD
declare -x PATH="/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/opt/spark"
declare -x PYTHONHASHSEED="1"
declare -x SHLVL="1"
declare -x SPARK_HOME="/opt/spark"
declare -x SPARK_LOCAL_IP="spark-master"
declare -x SPARK_LOG_DIR="/opt/spark/logs"
declare -x SPARK_MASTER="spark://spark-master:7077"
declare -x SPARK_MASTER_LOG="/opt/spark/logs/spark-master.out"
declare -x SPARK_MASTER_PORT="7077"
declare -x SPARK_MASTER_WEBUI_PORT="8080"
declare -x SPARK_VERSION="3.1.2"
declare -x SPARK_WORKER_LOG="/opt/spark/logs/spark-worker.out"
declare -x SPARK_WORKER_WEBUI_PORT="8080"
declare -x SPARK_WORKLOAD="master"
declare -x TERM="xterm"
root@3abbd4508121:/opt/spark#
There are a couple of different ways to set environment variables in Docker, and a couple of different ways to run processes. A container normally runs one process, which is controlled by the image's ENTRYPOINT and CMD settings. If you docker exec a second process in the container, that does not run as a child process of the main process, and will not see environment variables that are set by that main process.
In the setup you show here, the start-spark.sh script is the main container process (it is the image's CMD). If you docker exec your-container printenv, it will see things set in the Dockerfile but not things set in this script.
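A quick way to see this behaviour (the image and variable names are just for illustration): variables exported by the main process are invisible to docker exec, while anything baked into the image environment shows up everywhere.
# the main process exports FOO, but only for itself and its children
docker run -d --name envdemo ubuntu:20.04 sh -c 'export FOO=bar; sleep 600'
docker exec envdemo printenv FOO    # prints nothing: exec starts a sibling process, not a child
docker exec envdemo printenv HOME   # values from the image environment are still visible
docker rm -f envdemo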
Things like filesystem paths will generally be fixed every time you run the container, no matter what command you're running there, so you can specify these in the Dockerfile (note the two separate ENV instructions: a value set in an ENV line isn't visible to other variables on that same line):
ENV SPARK_BIN=${SPARK_HOME}/bin
ENV PATH=${SPARK_BIN}:${PATH}
You can specify both an ENTRYPOINT and a CMD in your Dockerfile; if you do, the CMD is passed as arguments to the ENTRYPOINT. This leads to a useful pattern where the CMD is a standard shell command, and the ENTRYPOINT is a wrapper that does first-time setup and then runs it. You can split your script into two:
#!/bin/sh
# spark-env.sh
. "${SPARK_BIN}/load-spark-env.snh"
exec "$#"
#!/bin/sh
# start-spark.sh
spark-class org.apache.spark.deploy.master.Master \
--ip "$SPARK_MASTER_HOST" \
--port "$SPARK_MASTER_PORT" \
--webui-port "$SPARK_MASTER_WEBUI_PORT"
Then in your Dockerfile specify both parts
COPY spark-env.sh start-spark.sh /
# ENTRYPOINT must use JSON-array (exec) syntax; CMD can be any valid CMD
ENTRYPOINT ["/spark-env.sh"]
CMD ["/start-spark.sh"]
This is useful for your debugging since it's straightforward to override the CMD in a docker run or docker-compose run instruction, leaving the ENTRYPOINT in place.
docker-compose run spark \
printenv
This launches a new container based on all of the same Dockerfile setup. When it runs, it runs printenv instead of the CMD in the image. This will do the first-time setup in the ENTRYPOINT script, and then the final exec "$#" line will run printenv instead of starting the Spark application. This will show you the environment the application will have when it starts.

Why is Docker CMD running as chronos in GKE?

I have a pod and NodePort service running on GKE.
In the Dockerfile for the container in my pod, I'm using gosu to run a command as a specific user:
startup.sh
exec /usr/local/bin/gosu mytestuser "$@"
Dockerfile
FROM ${DOCKER_HUB_PUBLIC}/opensuse/leap:latest
# Download and verify gosu
RUN gpg --batch --keyserver-options http-proxy=${env.HTTP_PROXY} --keyserver hkps://keys.openpgp.org \
--recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64" && \
curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64.asc" && \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && \
chmod +x /usr/local/bin/gosu
# Add tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
# Add mytestuser
RUN useradd mytestuser
# Run startup.sh which will use gosu to execute following `CMD` as `mytestuser`
RUN /startup/startup.sh
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/helloworld.jar"]
I've just noticed that when I log into the container on GKE and look at the processes running, the java process that I would expect to be running as mytestuser is actually running as chronos:
me@gke-cluster-1-default-ool-1234 ~ $ ps aux | grep java
root 9551 0.0 0.0 4296 780 ? Ss 09:43 0:00 /tini -- /startup/startup.sh java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
chronos 9566 0.6 3.5 3308988 144636 ? Sl 09:43 0:12 java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
Can anyone explain what's happening, i.e. who is the chronos user, and why my process is not running as mytestuser?
When you RUN useradd (or adduser), it assigns a user ID in the image's /etc/passwd file. Your script launches the process using that numeric user ID. When you subsequently run ps from the host, though, it looks up that user ID in the host's /etc/passwd file, and gets something different.
This difference doesn't usually matter. Only the numeric user ID matters for things like filesystem permissions, if you're bind-mounting a directory from the host. For security purposes it's important that the numeric user ID not be 0, but that's pretty universally named root.
When you run a useradd inside the container (or as part of the image build), it adds an entry to the /etc/passwd inside the container. The uid/gid will be in a shared namespace with the host, unless you enable user namespaces. However, the mapping of those ids to names will be specific to the filesystem namespace where the process is running. Therefore in this scenario, the uid of mytestuser inside the container happens to be the same uid as chronos on the host.
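You can see the two lookups side by side (the container name and uid here are placeholders): the number is what's shared, the name depends on which /etc/passwd does the resolving.
# inside the container: the name created by useradd resolves via the image's /etc/passwd
docker exec <container> id mytestuser   # e.g. uid=999(mytestuser) gid=999(mytestuser) ...
# on the host node: the same numeric uid resolves via the host's /etc/passwd
getent passwd 999                       # may come back as chronos, as in the ps output above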

Inject SSH key into a Docker container

I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions, including Docker BuildKit and so on... but I don't want to build an image and inject the SSH key. I want to inject the SSH key by using an existing image with Docker Compose.
I use the following docker compose file:
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
mkdir /root/.ssh
ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run Docker Compose with the command parameter
bash -c "/root/init.sh && python3 /root/my_python.py", then the SSH authentication to the appropriate remote host ($MANAGED_HOST) is not working.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log into the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one from the command line, then the SSH authentication to the appropriate remote host ($MANAGED_HOST) works?!?
Any idea how I can get this working using Docker Compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$#"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. That will get passed the CMD as arguments, and the commented exec "$@" line at the end of the file runs that as a command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but when these run in a subprocess they don't get propagated back out to the parent process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to cause those environment variables to be set in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. The Python script will also wind up being the parent process of ssh-agent, which could potentially surprise your process if the agent happens to exit.
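The subprocess-versus-sourcing difference is easy to reproduce in any shell (the script and variable names are arbitrary):
cat > setvar.sh <<'EOF'
export DEMO_VAR=hello
EOF
sh setvar.sh;  echo "${DEMO_VAR:-unset}"   # "unset": the export died with the subprocess
. ./setvar.sh; echo "${DEMO_VAR:-unset}"   # "hello": sourcing runs in the current shell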

How to set environment variables within Dockerfile's RUN?

I need to set the output of a RUN command to be an environment variable that is available inside my container after it's built:
...
RUN ...
GO111MODULE=on go get github.com/emersion/hydroxide/cmd/hydroxide && \
echo "the_password" | hydroxide auth the_username#protonmail.com > bridge_password.txt && \
BRIDGE_PASSWORD=`sed -n 's/Password: Bridge password: //p' bridge_password.txt` && \
hydroxide smtp
...
Above, I need $BRIDGE_PASSWORD to be available in my NodeJS project (process.env.BRIDGE_PASSWORD). How can I achieve this?
You need to:
Declare your env var with the ENV command in your Dockerfile.
Substitute your current RUN with a command in your image entrypoint so that the variable gets set correctly every time you start a new container and is available to any launched command.
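A minimal sketch of such an entrypoint, reusing the sed expression from the question (the location of bridge_password.txt and the Node start command are assumptions):
#!/bin/sh
# entrypoint.sh (sketch): derive the variable at container start, then hand off to the app
BRIDGE_PASSWORD="$(sed -n 's/Password: Bridge password: //p' /bridge_password.txt)"  # path is an assumption
export BRIDGE_PASSWORD
hydroxide smtp &          # keep the bridge running in the background, as the RUN above attempted
exec node /app/index.js   # assumed entry point; process.env.BRIDGE_PASSWORD is visible to Node here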

ARG or ENV, which one to use in this case?

This may be a trivial question, but reading the docs for ARG and ENV doesn't make things clear to me.
I am building a PHP-FPM container, and I want to provide the ability to enable/disable some extensions based on user needs.
It would be great if this could be done in the Dockerfile by adding conditionals and passing flags on the build command, but AFAIK that is not supported.
In my case, my personal approach is to run a small script when the container starts, something like the following:
#!/bin/sh
set -e
RESTART="false"
# This script will be placed in /config/init/ and run when container starts.
if [ "$INSTALL_XDEBUG" == "true" ]; then
printf "\nInstalling Xdebug ...\n"
yum install -y php71-php-pecl-xdebug
RESTART="true"
fi
...
if [ "$RESTART" == "true" ]; then
printf "\nRestarting php-fpm ...\n"
supervisorctl restart php-fpm
fi
exec "$#"
This is what my Dockerfile looks like:
FROM reynierpm/centos7-supervisor
ENV TERM=xterm \
PATH="/root/.composer/vendor/bin:${PATH}" \
INSTALL_COMPOSER="false" \
COMPOSER_ALLOW_SUPERUSER=1 \
COMPOSER_ALLOW_XDEBUG=1 \
COMPOSER_DISABLE_XDEBUG_WARN=1 \
COMPOSER_HOME="/root/.composer" \
COMPOSER_CACHE_DIR="/root/.composer/cache" \
SYMFONY_INSTALLER="false" \
SYMFONY_PROJECT="false" \
INSTALL_XDEBUG="false" \
INSTALL_MONGO="false" \
INSTALL_REDIS="false" \
INSTALL_HTTP_REQUEST="false" \
INSTALL_UPLOAD_PROGRESS="false" \
INSTALL_XATTR="false"
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
https://rpms.remirepo.net/enterprise/remi-release-7.rpm
RUN yum install -y \
yum-utils \
git \
zip \
unzip \
nano \
wget \
php71-php-fpm \
php71-php-cli \
php71-php-common \
php71-php-gd \
php71-php-intl \
php71-php-json \
php71-php-mbstring \
php71-php-mcrypt \
php71-php-mysqlnd \
php71-php-pdo \
php71-php-pear \
php71-php-xml \
php71-pecl-apcu \
php71-php-pecl-apfd \
php71-php-pecl-memcache \
php71-php-pecl-memcached \
php71-php-pecl-zip && \
yum clean all && rm -rf /tmp/yum*
RUN ln -sfF /opt/remi/php71/enable /etc/profile.d/php71-paths.sh && \
ln -sfF /opt/remi/php71/root/usr/bin/{pear,pecl,phar,php,php-cgi,phpize} /usr/local/bin/. && \
mv -f /etc/opt/remi/php71/php.ini /etc/php.ini && \
ln -s /etc/php.ini /etc/opt/remi/php71/php.ini && \
rm -rf /etc/php.d && \
mv /etc/opt/remi/php71/php.d /etc/. && \
ln -s /etc/php.d /etc/opt/remi/php71/php.d
COPY container-files /
RUN chmod +x /config/bootstrap.sh
WORKDIR /data/www
EXPOSE 9001
Currently this is working, but... if I want to add, let's say, 20 (a random number) extensions or any other features that can be enabled/disabled, then I will end up with 20 unnecessary ENV definitions (because the Dockerfile doesn't support .env files) whose only purpose is to set a flag letting the script know what to do.
Is this the right way to do it?
Should I use ENV for this purpose?
I am open to ideas; if you have a different approach to achieve this, please let me know.
From Dockerfile reference:
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
The ENV instruction sets the environment variable <key> to the value <value>.
The environment variables set using ENV will persist when a container is run from the resulting image.
So if you need build-time customization, ARG is your best choice.
If you need run-time customization (to run the same image with different settings), ENV is well-suited.
If I want to add let's say 20 (a random number) of extensions or any other feature that can be enable|disable
Given the number of combinations involved, using ENV to set those features at runtime is best here.
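For example, the same image could then be started with only the extensions you need switched on (the image name is a placeholder):
docker run -e INSTALL_XDEBUG=true -e INSTALL_REDIS=true <your-php-fpm-image>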
But you can combine both by:
building an image with a specific ARG
using that ARG as an ENV
That is, with a Dockerfile including:
ARG var
ENV var=${var}
You can then either build an image with a specific var value at build-time (docker build --build-arg var=xxx), or run a container with a specific runtime value (docker run -e var=yyy)
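A full round trip of that pattern might look like this (image name and values are illustrative):
# the ARG value is baked into the ENV at build time...
docker build --build-arg var=build-value -t myimage .
docker run --rm myimage printenv var                    # build-value
# ...and can still be overridden per container at run time
docker run --rm -e var=run-value myimage printenv var   # run-value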
So if we want to set the value of an environment variable to something different for every build, we can pass these values at build time and we don't need to change our Dockerfile every time.
ENV values, on the other hand, cannot be overridden from the command line at build time. So, if we want our environment variable to have different values for different builds, we can use ARG and set default values in our Dockerfile. When we want to override these values, we can do so using --build-arg at every build without changing our Dockerfile.
For more details, you can refer to this.
Why use ARG or ENV?
Let's say we have a JAR file and want to make a Docker image of it, so we can ship it to any Docker engine.
We can write a Dockerfile.
Dockerfile
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Now, if we want to build the Docker image using Maven, we can pass the JAR_FILE using --build-arg as target/*.jar:
docker build --build-arg JAR_FILE=target/*.jar -t myorg/myapp .
However, if we are using Gradle, the above command doesn't work and we have to pass a different path: build/libs/
docker build --build-arg JAR_FILE=build/libs/*.jar -t myorg/myapp .
Once we have chosen a build system, we don't need the ARG; we can hard-code the JAR location.
For Maven, that would be as follows:
Dockerfile
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Here, we can build an image with the following command:
docker build -t image:tag .
When to use `ENV`?
If we want to set some values for running containers and have them reflected in the image, like the port number that the application runs on/listens on, we can set them using ENV.
Both ARG and ENV seem very similar. Both can be accessed from within our Dockerfile commands in the same manner.
Example:
ARG VAR_A=5
ENV VAR_B=6
RUN echo $VAR_A
RUN echo $VAR_B
Personal Opinion!
There is a tradeoff between choosing ARG and ENV. If you choose ARG, you can't change it later during the run. However, if you choose ENV, you can modify the value in the running container.
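To make the tradeoff concrete, assuming an image built from a Dockerfile with the ARG VAR_A=5 and ENV VAR_B=6 lines from the example above:
docker build -t argenv-demo .
docker run --rm argenv-demo printenv VAR_A               # nothing: ARG does not survive into the container
docker run --rm argenv-demo printenv VAR_B               # 6
docker run --rm -e VAR_B=7 argenv-demo printenv VAR_B    # 7: ENV can be overridden at run time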
I personally prefer ARG over ENV wherever I can. In the above example:
I have used ARG because the build system (Maven or Gradle) matters at build time rather than at runtime. It thus encapsulates those details and provides a minimal set of arguments for the runtime.
For more details, you can refer to this.
