Run Prometheus with Docker fails when --enable-feature=promql-at-modifier is used

On Windows, I successfully ran Prometheus from a Docker image like this:
docker run -p 9090:9090 \
-v D:/WORK/MyProject/grafana:/etc/prometheus \
prom/prometheus
The D:/WORK/MyProject/grafana directory contains the prometheus.yml file with all the config I need.
Now I need to enable @ operator usage, so I added promql-at-modifier and tried to run:
docker run -p 9090:9090 \
-v D:/WORK/MyProject/grafana:/etc/prometheus \
prom/prometheus --enable-feature=promql-at-modifier
I got the following:
level=info ts=2021-07-30T14:56:29.139Z caller=main.go:143 msg="Experimental promql-at-modifier enabled"
level=error ts=2021-07-30T14:56:29.139Z caller=main.go:356 msg="Error loading config (--config.file=prometheus.yml)" err="open prometheus.yml: no such file or directory"
I tried to Google it. There are suggestions to mount the file directly:
docker run -p 9090:9090 \
-v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
(from https://www.promlts.com/resources/wheres-my-prometheus-yml)
But no luck.
I also tried specifying the config file option, but again no luck.
Could you help?

I am a fan of docker, but it does have a few points of friction, and you found one of them.
https://github.com/prometheus/prometheus/blob/main/Dockerfile#L25 is where the upstream prometheus defines ENTRYPOINT and CMD:
ENTRYPOINT [ "/bin/prometheus" ]
CMD [ "--config.file=/etc/prometheus/prometheus.yml", \
"--storage.tsdb.path=/prometheus", \
"--web.console.libraries=/usr/share/prometheus/console_libraries", \
"--web.console.templates=/usr/share/prometheus/consoles" ]
The problem is, any arguments provided to the docker run command will replace the default CMD. So in order to append arguments to the default CMD, you sadly need to copy the upstream CMD and then add your argument to the list.
Sadly, docker does not (currently!) support any way to "append" something to an upstream image's CMD. The question "How to append an argument to a container command?" gives one idea: use an environment variable to do it.
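For illustration, here is a minimal sketch of that environment-variable idea, as a wrapper entrypoint in a derived image (EXTRA_FLAGS is a made-up name, not something the upstream image supports):
#!/bin/sh
# hypothetical entrypoint.sh: splice in flags taken from an env var.
# EXTRA_FLAGS is deliberately left unquoted so it word-splits into flags.
exec /bin/prometheus \
    --config.file=/etc/prometheus/prometheus.yml \
    $EXTRA_FLAGS \
    "$@"
You could then run docker run -e EXTRA_FLAGS=--enable-feature=promql-at-modifier ... without repeating the rest of the upstream CMD.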
In the general case where I want to provide default arguments and allow the invocation to provide additional arguments, I usually follow this pattern:
Make the entrypoint launch a shell script
exec the real entrypoint at the end of the shell script. exec replaces the shell with the real entrypoint; this matters so that signals are delivered to the entrypoint rather than to the wrapper shell script.
At the end of the arguments to exec within the script, add "$@", which expands to the arguments of the shell script, quoted appropriately (yes, shell is quite esoteric! you'd think it would quote all the arguments together, but instead it quotes each argument individually, because that token is magical).
In this way, the "default" commands are within the shell script and thus don't need to be included with CMD. The downside to this method is that the shell script provided arguments are more difficult to remove if you wanted to.
Here's an example:
https://github.com/farrellit/stackoverflow/tree/main/68593213
The dockerfile includes a default CMD:
FROM alpine
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["7"]
entrypoint.sh supplies a set of "automatic" arguments, to which the CMD (either the default or an override) is appended:
#!/bin/sh
exec echo 1 2 3 "$@"
The Makefile demonstrates the two invocations:
$ docker run --rm stackoverflow-68593213
1 2 3 7
$ docker run --rm stackoverflow-68593213 4 5 6
1 2 3 4 5 6
Here, 1 2 3 are the default "base" parameters I always want to pass to the ENTRYPOINT, 7 is the default "additional" parameter, and 4 5 6 are provided to override that default.
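Applied to the original Prometheus question, a wrapper along these lines would bake in the upstream defaults (a sketch assembled from the upstream CMD shown earlier, not the image's actual entrypoint):
#!/bin/sh
# wrapper entrypoint: the upstream default flags live here instead of in CMD,
# so arguments given after the image name are appended via "$@" instead of
# replacing the defaults.
exec /bin/prometheus \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/prometheus \
    --web.console.libraries=/usr/share/prometheus/console_libraries \
    --web.console.templates=/usr/share/prometheus/consoles \
    "$@"
With that as the ENTRYPOINT of a derived image, docker run ... my-prometheus --enable-feature=promql-at-modifier would add the flag without losing the defaults.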

Can you try adding:
--config.file=/etc/prometheus/prometheus.yml
i.e.
docker run --publish=9090:9090 \
--volume=D:/WORK/MyProject/grafana:/etc/prometheus \
prom/prometheus \
--config.file=/etc/prometheus/prometheus.yml \
--enable-feature=promql-at-modifier
Explanation: Once you add flags (e.g. --enable-feature), the other flags take their default values. The default value for --config.file is prometheus.yml, which is not what you want (you want /etc/prometheus/prometheus.yml), so you must reference it explicitly.

Just a few brief details that lie behind DazWilkin's answer:
If you docker inspect the prom/prometheus image, you'll find the
following:
"Entrypoint": [
"/bin/prometheus"
],
"Cmd": [
"--config.file=/etc/prometheus/prometheus.yml",
"--storage.tsdb.path=/prometheus",
"--web.console.libraries=/usr/share/prometheus/console_libraries",
"--web.console.templates=/usr/share/prometheus/consoles"
],
When you run:
docker run ... prom/prometheus --enable-feature=promql-at-modifier
You are replacing the existing Cmd setting, so the command actually
executed is /bin/prometheus --enable-feature=promql-at-modifier. To
provide the same behavior as you get by default, you would actually
want to run:
docker run ... prom/prometheus \
--enable-feature=promql-at-modifier \
--config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/prometheus \
--web.console.libraries=/usr/share/prometheus/console_libraries \
--web.console.templates=/usr/share/prometheus/consoles
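As an aside, you don't need to scan the full docker inspect output; a Go-template format string pulls out just those two fields:
docker image inspect \
  --format 'Entrypoint: {{.Config.Entrypoint}}  Cmd: {{.Config.Cmd}}' \
  prom/prometheus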


Build a docker container for a "custom" program

I am new to the docker world. I have to invoke a shell script that takes command line arguments through a docker container.
Ex: My shell script looks like:
#!/bin/bash
echo $1
Dockerfile looks like this:
FROM ubuntu:14.04
COPY ./file.sh /
CMD /bin/bash file.sh
I am not sure how to pass the arguments while running the container.
With this script in file.sh:
#!/bin/bash
echo Your container args are: "$@"
and this Dockerfile
FROM ubuntu:14.04
COPY ./file.sh /
ENTRYPOINT ["/file.sh"]
you should be able to:
% docker build -t test .
% docker run test hello world
Your container args are: hello world
Use the same file.sh
#!/bin/bash
echo $1
Build the image using the existing Dockerfile:
docker build -t test .
Run the image with arguments abc or xyz or something else.
docker run -ti --rm test /file.sh abc
docker run -ti --rm test /file.sh xyz
There are a few things interacting here:
docker run your_image arg1 arg2 will replace the value of CMD with arg1 arg2. That's a full replacement of the CMD, not appending more values to it. This is why you often see docker run some_image /bin/bash to run a bash shell in the container.
When you have both an ENTRYPOINT and a CMD value defined, docker starts the container by concatenating the two and running that concatenated command. So if you define your entrypoint to be file.sh, you can now run the container with additional args that will be passed as args to file.sh.
Entrypoints and Commands in docker have two syntaxes, a string syntax that will launch a shell, and a json syntax that will perform an exec. The shell is useful to handle things like IO redirection, chaining multiple commands together (with things like &&), variable substitution, etc. However, that shell gets in the way with signal handling (if you've ever seen a 10 second delay to stop a container, this is often the cause) and with concatenating an entrypoint and command together. If you define your entrypoint as a string, it would run /bin/sh -c "file.sh", which alone is fine. But if you have a command defined as a string too, you'll see something like /bin/sh -c "file.sh" /bin/sh -c "arg1 arg2" as the command being launched inside your container, not so good. See the table here for more on how these two options interact
The shell -c option only takes a single argument. Everything after that is passed to that command string as $0, $1, $2, etc., but it does not reach an embedded shell script unless you explicitly pass the args along, as the quick demo below shows. I.e. /bin/sh -c 'file.sh $1 $2' sh "arg1" "arg2" would work, but /bin/sh -c "file.sh" "arg1" "arg2" would not, since file.sh would be called with no args.
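You can see this behavior outside of docker; note how the first trailing argument lands in $0, not $1:
$ sh -c 'echo "0=$0 1=$1 2=$2"' a b c
0=a 1=b 2=c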
Putting that all together, the common design is:
FROM ubuntu:14.04
COPY ./file.sh /
RUN chmod 755 /file.sh
# Note the json syntax on this next line is strict, double quotes, and any syntax
# error will result in a shell being used to run the line.
ENTRYPOINT ["/file.sh"]
And you then run that with:
docker run your_image arg1 arg2
There's a fair bit more detail on this at:
https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options
https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
With Docker, the proper way to pass this sort of information is through environment variables.
So with the same Dockerfile, change the script to
#!/bin/bash
echo $FOO
After building, use the following docker command:
docker run -e FOO="hello world!" test
What I have is a script file that actually runs things. This script file might be relatively complicated. Let's call it "run_container". This script takes arguments from the command line:
run_container p1 p2 p3
A simple run_container might be:
#!/bin/bash
echo "argc = ${#*}"
echo "argv = ${*}"
What I want to do is, after "dockering" this I would like to be able to startup this container with the parameters on the docker command line like this:
docker run image_name p1 p2 p3
and have the run_container script be run with p1 p2 p3 as the parameters.
This is my solution:
Dockerfile:
FROM docker.io/ubuntu
ADD run_container /
ENTRYPOINT ["/bin/bash", "-c", "/run_container \"$@\"", "--"]
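Given the sample run_container above, the desired invocation then behaves as expected:
$ docker run image_name p1 p2 p3
argc = 3
argv = p1 p2 p3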
If you want to run it at build time:
CMD /bin/bash /file.sh arg1
If you want to run it at run time:
ENTRYPOINT ["/bin/bash"]
CMD ["/file.sh", "arg1"]
Then in the host shell
docker build -t test .
docker run -i -t test
I wanted to use the string version of ENTRYPOINT so I could use the interactive shell.
FROM docker.io/ubuntu
...
ENTRYPOINT python -m server "$@"
And then the command to run (note the --):
docker run -it server -- --my_server_flag
The way this works is that the string version of ENTRYPOINT runs a shell with the command specified as the value of the -c flag. Arguments passed to the shell after -- are provided as arguments to the command where "$@" is located. See the table here: https://tldp.org/LDP/abs/html/options.html
(Credit to @jkh's and @BMitch's answers for helping me understand what's happening.)
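The mechanics are easy to check locally: the token in the $0 slot (the -- here) is consumed, and everything after it lands in "$@":
$ bash -c 'echo "args: $@"' -- --my_server_flag
args: --my_server_flag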
Another option...
To make this work:
docker run -d --rm $IMG_NAME "bash:command1&&command2&&command3"
in dockerfile
ENTRYPOINT ["/entrypoint.sh"]
in entrypoint.sh
#!/bin/sh
entrypoint_params=$1
printf "==>[entrypoint.sh] %s\n" "entry_point_param is $entrypoint_params"
PARAM1=$(echo $entrypoint_params | cut -d':' -f1) # first field; must be 'bash', checked below
PARAM2=$(echo $entrypoint_params | cut -d':' -f2) # the real commands, separated by &&
printf "==>[entrypoint.sh] %s\n" "PARAM1=$PARAM1"
printf "==>[entrypoint.sh] %s\n" "PARAM2=$PARAM2"
if [ "$PARAM1" = "bash" ];
then
printf "==>[entrypoint.sh] %s\n" "about to running $PARAM2 command"
echo $PARAM2 | tr '&&' '\n' | while read cmd; do
$cmd
done
fi

Issue Running an Initialization Script with Nginx Docker Container

I am creating a Nginx docker image that I'll be using as a reverse proxy component in ECS/Fargate in AWS. I'm using the official Nginx image as the base image (1.17.5).
When the container starts, I'm trying to run a bash script from an ENTRYPOINT that goes out to the AWS Parameter Store and retrieves certificate info. This works fine; however, when I try to add a parameter to pass to the bash script (e.g. ENTRYPOINT ["installcerts.sh", "AppName"]), it executes the script but the container terminates without error.
I want the container to continue on to start up Nginx after the parameterized bash script.
Here is my Docker File:
FROM nginx:1.17.5
# Install AWS CLI/BOTO3, JQ
RUN apt-get update && apt-get install -y && apt-get install awscli -y && apt-get install jq -y
# Copy Nginx config to etc/nginx
COPY proxy_ssl.conf /etc/nginx/conf.d/
VOLUME ["/etc/nginx/conf.d"]
# Copy entrypoint bash script to install certs from the AWS Parameter Store
COPY installcerts.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/installcerts.sh
#Pull certs from Parameter Store
ENTRYPOINT ["/usr/local/bin/installcerts.sh", "AppName"]
CMD ["nginx", "-g", "daemon off;"]
And here is my "installcerts.sh" script showing utilizing the parameter passed in from ENTRYPOINT.
#!/usr/bin/env bash
set -e
echo Installing certs...
aws ssm get-parameters --name /Certificate/$1/CRT | jq '.Parameters[0].Value' -r > /etc/nginx/conf.d/app.crt
aws ssm get-parameters --name /Certificate/$1/KEY | jq '.Parameters[0].Value' -r > /etc/nginx/conf.d/app.key
echo Exiting script.
exec "$@"
The exec "$@" in the bash script is needed, but honestly, I don't completely understand how or why it works, even after hours of trying to track it down.
The short story is:
If I use this, the container does what I want it to do, but I can't send a parameter to the bash script:
ENTRYPOINT ["/usr/local/bin/installcerts.sh"]
But if I use this, the script runs WITH the parameter successfully, but the container exits and Nginx doesn't start up:
ENTRYPOINT ["/usr/local/bin/installcerts.sh", "AppName"]
What am I doing wrong?
When you set an ENTRYPOINT on your image, then docker passes that script the value of CMD (or whatever you pass on the command line after the image name). For example, if you have:
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/myprogram"]
Then docker runs, effectively:
/entrypoint.sh /usr/bin/myprogram
That is, Docker itself never runs /usr/bin/myprogram: it is entirely up to the ENTRYPOINT script to do that. That is what the exec "$@" is for. $@ is a shell special parameter that evaluates to:
Expands to the positional parameters, starting from one.
(bash(1) man page, in the "Special Parameters" section)
In our example, this would evaluate to:
exec /usr/bin/myprogram
...which replaces the current script with /usr/bin/myprogram. But if we were to set ENTRYPOINT as you have in your question:
ENTRYPOINT ["/entrypoint.sh", "appName"]
Then exec "$@" will in fact evaluate to:
exec appName /usr/bin/myprogram
And since appName isn't a valid command, the container will simply fail.
There are a few ways of dealing with this:
Do you really need to pass parameters to your ENTRYPOINT script? What about using environment variables instead?
If you always pass parameters to your script, you can use the shift shell command to drop those from the positional parameters before using $@. For example, for a script that expects two parameters:
param1=$1
param2=$2
shift 2
...do stuff here...
exec "$@"
...but this only works if you always pass two parameters.
You can implement command line option processing in your script using e.g. the getopts command:
while getopts a:b: ch; do
case $ch in
(a) param1=$OPTARG
;;
(b) param2=$OPTARG
;;
esac
done
shift $(( $OPTIND - 1 ))
...do stuff here...
exec "$@"
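Tying that back to the question, the ENTRYPOINT could then carry the option while CMD stays intact (a hypothetical sketch; -a matches the getopts string above):
ENTRYPOINT ["/usr/local/bin/installcerts.sh", "-a", "AppName"]
CMD ["nginx", "-g", "daemon off;"]
getopts consumes -a AppName, the shift drops them, and exec "$@" then starts nginx.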
Having said that: I would opt for option 1 (use environment variables) as being the simplest solution:
docker run -e PARAM1="some value" ...
And then in your ENTRYPOINT script you can just use the $PARAM1 variable where you need it.
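For example, the asker's installcerts.sh could drop the positional parameter entirely; APP_NAME here is an assumed variable name, not anything the image defines:
#!/usr/bin/env bash
set -e
echo Installing certs...
# AppName now arrives via the environment instead of $1
aws ssm get-parameters --name /Certificate/$APP_NAME/CRT | jq '.Parameters[0].Value' -r > /etc/nginx/conf.d/app.crt
aws ssm get-parameters --name /Certificate/$APP_NAME/KEY | jq '.Parameters[0].Value' -r > /etc/nginx/conf.d/app.key
exec "$@"
Run it with docker run -e APP_NAME=AppName ... and the plain ENTRYPOINT ["/usr/local/bin/installcerts.sh"] from the question.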

Understanding the difference in sequence of ENTRYPOINT/CMD between Dockerfile and docker run

Docker noob here...
I am trying to build and run an IBM DataPower container from a Dockerfile, but it doesn't seem to work the same as when just running docker run and passing the same parameters in the terminal.
This works (docker run)
docker run -it \
-v $PWD/config:/drouter/config \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-e DATAPOWER_WORKER_THREADS=4 \
-p 9090:9090 \
--name mydatapower \
ibmcom/datapower
... the key part being that it mounts the ./config folder and the custom configuration is picked up by datapower running in the container.
This doesn't (Dockerfile)
Dockerfile:
FROM ibmcom/datapower
ENV DATAPOWER_ACCEPT_LICENSE=true
ENV DATAPOWER_INTERACTIVE=true
ENV DATAPOWER_WORKER_THREADS=4
EXPOSE 9090
COPY config/auto-startup.cfg /drouter/config/auto-startup.cfg
Build:
docker build -t local/datapower .
Run:
docker run -it \
-p 9090:9090 \
--name mydatapower local/datapower
The problem is that DataPower doesn't pick up the auto-startup.cfg file, so the additional config options don't get used. I know the source file path is correct because if I misspell the file name docker throws an error.
I have a theory that it might be running the inherited ENTRYPOINT or CMD before the config file is available. I don't know how to test or prove this. I don't know what the ENTRYPOINT or CMD is because the inherited image is not open source and I can't figure out how to find it.
Does that seem likely?
UPDATE:
The content of the auto-startup.cfg is:
top; co
ssh
web-mgmt
admin enabled
port 9090
exit
It simply enables the DataPower WebGUI.
The output when running it on the command line with:
docker run -it -v $PWD/config:/drouter/config -v $PWD/local:/drouter/local -e DATAPOWER_ACCEPT_LICENSE=true -e DATAPOWER_INTERACTIVE=true -e DATAPOWER_WORKER_THREADS=4 -p 9091:9090 --name myconfigureddatapower ibmcom/datapower
...contains this:
20170908T121729.015Z [0x8100006e][system][notice] : Executing startup configuration.
20170908T121729.970Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
...but with Dockerfile it doesn't. That's why I think the config files may be copied into place too late.
I've tried adding CMD ["/bin/drouter"] to the end of my Dockerfile to no avail.
I have tested your Dockerfile and it seems to be working. My auto-startup.cfg file is copied to the proper location, and when I launch the container it reads the file.
I get this output:
[root@ip-172-30-2-164 tmp]# docker run -ti -p 9090:9090 test
20170908T123728.818Z [0x8040006b][system][notice] logging target(default-log): Logging started.
20170908T123729.067Z [0x804000fe][system][notice] : Container instance UUID: 36bcca0e-6139-4694-91b0-2b7b66c3a498, Cores: 4, vCPUs: 4, CPU model: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz, Memory: 16049.1MB, Platform: docker, OS: dpos, Edition: developers-limited, Up time: 0 minutes
20170908T123729.071Z [0x8040001c][system][notice] : DataPower IDG is on-line.
20170908T123729.071Z [0x8100006f][system][notice] : Executing default startup configuration.
20170908T123729.416Z [0x8100006d][system][notice] : Executing system configuration.
20170908T123729.417Z [0x8100006b][mgmt][notice] domain(default): tid(8143): Domain operational state is up.
708f98be1390
Unauthorized access prohibited.
20170908T123731.239Z [0x806000dd][system][notice] cert-monitor(Certificate Monitor): tid(399): Enabling Certificate Monitor to scan once every 1 days for soon to expire certificates
20170908T123731.552Z [0x8100006e][system][notice] : Executing startup configuration.
20170908T123732.436Z [0x8100003b][mgmt][notice] domain(default): Domain configured successfully.
20170908T123732.449Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
login:
To check that your file has been copied to the container you can run docker run -ti local/datapower sh to enter the container and then check the content of /drouter/config/.
Your base image's command is CMD ["/bin/drouter"]; you can check it by running docker history ibmcom/datapower.
UPDATE:
The drouter user in the container must be able to read the auto-startup.cfg file. You have 2 options:
set your local auto-startup.cfg with the proper permissions (chmod 644 config/auto-startup.cfg),
or add these lines to the Dockerfile so drouter can read the file:
USER root
RUN chown drouter /drouter/config/auto-startup.cfg
USER drouter

How to set PS1 in Docker Container

I want to set the $PS1 environment variable in the container. It helps me identify a multilevel or complex docker environment setup. Currently the docker container prompt is:
root@container-id#
If I can change it to the following, I can identify the container by looking at the $PS1 prompt itself.
[Level-1]root@container-id#
I experimented with exporting $PS1 by building my own image (Dockerfile), via the .profile file, etc., but it's not reflected in the prompt.
I had the same problem but in docker-compose context.
Here is how I managed to make it work:
# docker-compose.yml
version: '3'
services:
my_service:
image: my/image
environment:
- "PS1=$$(whoami):$$(pwd) $$ "
Just pass PS1 value as an environment variable in docker-compose.yml configuration file.
Notice how dollar signs need to be escaped to prevent docker-compose from interpolating the values (documentation).
This Dockerfile sets PS1 by doing:
RUN echo 'export PS1="[\u@docker] \W # "' >> /root/.bash_profile
We use a similar technique for tracking inputs and outputs in complex container builds.
https://github.com/ianmiell/shutit/blob/master/shutit_global.py#L1338
This line represents the product of hard-won experience dealing with docker/(p)expect combinations:
"SHUTIT_BACKUP_PS1_%s=$PS1 && PS1='%s' && unset PROMPT_COMMAND"
Backing up the prompt is handy if you want to revert; setting PS1 with PS1=... sets the prompt; and unsetting PROMPT_COMMAND removes any nasty surprises with the terminal being reset, etc., for the expect session.
If the question is about how to ensure it's set when you run the container up (as opposed to building), then you may need to add something to your .bashrc / .profile files depending on how you run up your container. As far as I know there's no way to ensure it with a dockerfile directive and make it persist.
I normally create /home/USER/.bashrc or /root/.bashrc, depending on who the USER of the Dockerfile is. That works well. I've tried
ENV PS1 '# '
but that never worked for me.
Here's a way to set the PS1 when you run the container:
docker run -it \
python:latest \
bash -c "echo \"export PS1='[python:latest] \w$ '\" >> ~/.bashrc && bash"
I made a little wrapper script, to be able to run any image with my custom prompt:
#!/usr/bin/env bash
# ~/bin/docker-run
set -eu
image=$1
docker run -it \
-v $(pwd):/opt/app \
-w /opt/app ${image} \
bash -c "echo \"export PS1='[${image}] \w$ '\" >> ~/.bashrc && bash"
In Debian 9, for running bash, this worked:
RUN echo 'export PS1="[\$ENV_VAR] \W # "' >> /root/.bashrc
It's generally running as root and I generally know I am in docker, so I wanted to have a prompt that indicated what the container was, so I used an environment variable. And I guess the bash I use loads .bashrc preferentially.
Try setting environment variables using docker options
Example:
docker run \
-ti \
--rm \
--name ansibleserver-debug \
-w /githome/axel-ansible/ \
-v /home/lordjea/githome/:/githome/ \
-e "PS1=DEBUG$(pwd)# " \
lordjea/priv:311 bash
docker --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
Options:
...
-e, --env list Set environment variables
...
You should set that in .profile, not .bashrc.
Just open .profile from your root or home and replace PS1='\u#\h:\w\$ ' with PS1='\e[33;1m\u#\h: \e[31m\W\e[0m\$ ' or whatever you want.
Note that you need to restart your container.
On my Mac I have an alias named lxsh that will start a bash shell using the ubuntu image in my current directory (details). To make the shell's prompt change, I mounted a host file onto /root/.bash_aliases. It's a dirty hack, but it works. The full alias:
alias lxsh='echo "export PS1=\"lxsh[\[$(tput bold)\]\t\[$(tput sgr0)\]\w]\\$\[$(tput sgr0)\] \"" > $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668 && docker run --rm --name lxsh -v $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668:/root/.bash_aliases -v $PWD:$PWD -w $PWD -it ubuntu'
The below solution assumes that you've used Dockerfile USER to set a non-root Linux user for Bash.
What you might have tried without success:
ENV PS1='[docker]$' ## may not work
Using ENV to set PS1 can fail because the value can be overridden by default settings in a preexisting .bashrc when an interactive shell is started. Some Linux distributions are opinionated about PS1 and set it in an initial .bashrc for each user (Ubuntu does this, for example).
The fix is to modify the Dockerfile to set the desired value at the end of the user's .bashrc -- overriding any earlier settings in the script.
FROM ubuntu:20.04
# ...
USER myuser ## the username
RUN echo "PS1='\n[ \u@docker \w ]\n$ '" >>.bashrc

Docker echo environment variable

I'm trying to write a little Dockerfile that sets a user and just echoes the current user, as a little example to prove to myself it is working. I've tried a number of variants and couldn't find much help in the documentation.
FROM ubuntu
USER daemon
# ENTRYPOINT ["echo", "$USER"]
# just gives "$USER"
# ENTRYPOINT ["echo", "-e", "${USER}"]
# just gives "$USER"
# ENTRYPOINT echo $USER
# gives empty string
# ENTRYPOINT ["/bin/echo", "$USER"]
# just gives "$USER"
I'm running docker build . on the Dockerfile and then running docker run <image-id>; the results are shown in the comments above.
Expected result is daemon, or without the USER daemon line, I expect root. Probably a really simple answer.
This is the expected behavior, as weird as it seems!
When ENTRYPOINT is a list (as in ENTRYPOINT ["echo", "$USER"]), it is used as-is, without further parsing or interpretation. So $USER remains $USER, because there is no shell involved in the process to replace it with the value of the USER environment variable.
Now, when ENTRYPOINT is a string (as in ENTRYPOINT echo $USER), what is actually executed is sh -c "echo $USER", and $USER is replaced with the value of the environment variable (as you would expect).
However, the environment variable USER is not set by default. It is set by the login process; and when you just run sh -c ... the login process is not involved.
Compare the environment when running docker run -t -i ubuntu bash and docker run -t -i ubuntu login -f root. In the former case, you will get a very basic environment; in the latter case, you will get the complete environment that you are used to (including the USER variable).
Couldn't you set, in the Dockerfile, an ENV instruction to a default value, and then, when running a container, use the -e/--env option to override what would be interpreted by the:
ENTRYPOINT echo $SOMEENVVAR
form of ENTRYPOINT?
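A minimal sketch of that suggestion (SOMEENVVAR borrowed from the snippet above):
FROM ubuntu
ENV SOMEENVVAR=default-value
ENTRYPOINT echo $SOMEENVVAR
docker build -t envtest .
docker run envtest
# prints: default-value
docker run -e SOMEENVVAR=overridden envtest
# prints: overridden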
I think there's a series of issues here.
when I
docker run -i -t ubuntu /bin/bash
echo $USER
set
I don't see $USER set at all; whoami does report daemon though.
Additionally, I have a suspicion (but have not looked at the code yet) that ENV vars in the Dockerfile are escaped to avoid their use (many people assume they can export host variables to the built container, but this is something the docker folks would like to avoid).
