Docker entrypoint script not sourcing file - docker

I have a Docker entrypoint script which is getting executed, but the source command inside it doesn't seem to load a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried two versions of the entrypoint script. Neither of them is working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 above prints to the log that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying a few things for 3 days now with no progress. Can someone please help me? Please note I am new to Docker, which is making things quite difficult.

I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to it. This usually isn't a problem in practice, but it does mean docker exec ... env can't verify what the entrypoint set up.
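If you do want to inspect the environment of the running main process, one option (a sketch, assuming a Linux container whose main process is PID 1) is to read /proc/1/environ, which docker exec can reach:
docker exec your-container sh -c "tr '\0' '\n' < /proc/1/environ"
The entries in that file are NUL-separated, so tr prints one variable per line; anything the entrypoint exported before its exec will show up there.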
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
. "$ENV_FILE"
fi
# Run the main container command
exec "$#"
And then in the Dockerfile, put the node invocation as the main command
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ["node", "."] # could be shell-command syntax
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, passing env as the command instead of node .. It will go through the steps in the entrypoint script, including setting up the environment, then print out the environment and exit immediately, letting you observe the changes.
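You can check a single variable the same way (a sketch; SOME_VAR stands in for whatever name your env file actually defines):
docker run --rm your-image sh -c 'echo "SOME_VAR=$SOME_VAR"'
The single quotes matter: they stop your host shell from expanding $SOME_VAR before it ever reaches the container.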

Related

How to set environment variable in docker container system wide at container start for all users?

I need to set some environment variables for all users and processes inside a docker container. They should be set at container start, not in the Dockerfile, because they depend on the running environment.
So the simple Dockerfile
FROM ubuntu
RUN echo 'export TEST=test' >> '/root/.bashrc'
works well for interactive sessions
docker run -ti test bash
then
env
and there is TEST=test
but when docker run -ti test env there is no TEST
I was trying
RUN echo 'export TEST=test' >> '/etc/environment'
RUN echo 'TEST="test"' >> '/etc/environment'
RUN echo 'export TEST=test' >> /etc/profile.d/1.sh
ENTRYPOINT export TEST=test
Nothing helps.
Why I need this: I have an http_proxy variable inside the container, automatically set by Docker. I need to set other variables based on it (e.g. JAVA_OPTS), system-wide, for all users and processes, in the running environment, not at build time.
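As an illustration of that use case, here is a minimal entrypoint sketch; the parsing assumes an http_proxy of the form http://host:port, and the variable names are only examples:
#!/bin/sh
# derive host and port from the http_proxy URL that Docker injected (assumed format: http://host:port)
PROXY_HOST=$(echo "$http_proxy" | sed -e 's|^.*://||' -e 's|:[0-9]*$||')
PROXY_PORT=$(echo "$http_proxy" | sed -e 's|^.*:||')
# export JAVA_OPTS so it is visible to every child of this entrypoint
export JAVA_OPTS="-Dhttp.proxyHost=$PROXY_HOST -Dhttp.proxyPort=$PROXY_PORT"
exec "$@"
Note this only covers processes started under the entrypoint; truly system-wide setup for interactive login shells still needs the profile-file tricks attempted above.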
I would create a script which would be an entrypoint:
#!/bin/bash
# if env variable is not set, set it
if [ -z "$VAR" ];
then
# env variable is not set
export VAR=$(a command that gives the var value);
fi
# pass the arguments received by the entrypoint.sh
# to /bin/bash with command (-c) option
/bin/bash -c "$@"
And in the Dockerfile I would set the entrypoint (using the exec form, so the arguments actually reach the script):
ENTRYPOINT ["/entrypoint.sh"]
Now every time I run docker run -it <image> <any command>, it uses my script as the entrypoint, so the script always runs before the command and then passes the arguments through to /bin/bash.
Improvements
The above script is enough if you always use the entrypoint with arguments; otherwise "$@" will be empty and you will get the error /bin/bash: -c: option requires an argument. An easy fix is an if statement:
if [ $# -gt 0 ];
then
/bin/bash -c "$@";
fi
Setting a default parameter in the ENTRYPOINT/CMD pairing would also solve this issue; see In docker file pass parameter in ENTRYPOINT.
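A sketch of that approach (paths are illustrative): end the script with exec "$@" instead of /bin/bash -c, use the exec form of ENTRYPOINT, and let CMD supply default arguments so "$@" is never empty:
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/bin/bash"]
Then docker run -it <image> drops you into bash by default, while docker run <image> env (for example) overrides only the CMD, and the entrypoint still runs first.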

I would like to populate an entry in a config file at run time via docker-compose

I have Tomcat installed in a container, and inside it there is an application configuration file. I would like to populate a value inside it at run time. (I don't know the value beforehand, so I can't populate it when building the image.)
I am invoking the service with docker-compose up, and I would like the value in the configuration file to be replaced by a value I provide to docker-compose as a parameter,
something like docker-compose up -e "value at run time via docker compose"
URL for server
SERVERADD=https://{{value at run time via docker compose}}/{{index}}
Can I accomplish this with an environment variable, or in some other way? Kindly suggest!
This is normally done in an ENTRYPOINT or CMD script that is built into the image.
The script checks for the environment variable, does the replacements or other work required, then continues on to run the command as before.
#!/bin/sh
if [ -n "$SOME_ENV" ]; then
sed -i -e 's/^param=.*/param='"$SOME_ENV"'/' /etc/file.conf
fi
exec "$#"
The script needs to be added to the image; the Dockerfile could be:
FROM whatever
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "run_server", "-o", "option" ]

How can I use a variable inside a Dockerfile CMD?

Inside my Dockerfile:
ENV PROJECTNAME mytestwebsite
CMD ["django-admin", "startproject", "$PROJECTNAME"]
Error:
CommandError: '$PROJECTNAME' is not a valid project name
What is the quickest workaround here? Does Docker have any plan to "fix" or introduce this functionality in later versions of Docker?
NOTE: If I remove the CMD line from the Dockerfile and then run the Docker container, I am able to manually run django-admin startproject $PROJECTNAME from inside the container and it will create the project...
When you use an execution list, as in...
CMD ["django-admin", "startproject", "$PROJECTNAME"]
...then Docker will execute the given command directly, without involving a shell. Since there is no shell involved, that means:
No variable expansion
No wildcard expansion
No I/O redirection with >, <, |, etc.
No multiple commands via command1; command2
And so forth.
If you want your CMD to expand variables, you need to arrange for a shell. You can do that like this:
CMD ["sh", "-c", "django-admin startproject $PROJECTNAME"]
Or you can use a simple string instead of an execution list, which gets you a result largely identical to the previous example:
CMD django-admin startproject $PROJECTNAME
If you want to use the value at runtime, set the ENV value in the Dockerfile. If you want to use it at build-time, then you should use ARG.
Example :
ARG value
ENV envValue=$value
CMD ["sh", "-c", "java -jar ${envValue}.jar"]
Pass the value in the build command:
docker build -t tagName --build-arg value="jarName"
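For completeness, running the resulting image (assuming a jarName.jar really exists inside it) expands the variable at container start:
docker run --rm tagName
# sh -c expands ${envValue} at run time, so this executes: java -jar jarName.jar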
You can also use exec.
This is the only known way to handle signals and use env vars simultaneously.
It can be helpful when trying to implement something like graceful shutdown, according to discussion on Docker's GitHub.
Example:
ENV PROJECTNAME mytestwebsite
CMD exec django-admin startproject $PROJECTNAME
Let's say you want to start a Java process inside a container:
Example Dockerfile excerpt:
ENV JAVA_OPTS -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm
...
ENTRYPOINT ["/sbin/tini", "--", "entrypoint.sh"]
CMD ["java", "${JAVA_OPTS}", "-myargument=true"]
Example entrypoint.sh excerpt:
#!/bin/sh
...
echo "*** Startup $0 suceeded now starting service using eval to expand CMD variables ***"
exec su-exec mytechuser $(eval echo "$#")
For the Java developers, the solution below will work.
If you try to run your container with a Dockerfile like the one below
ENTRYPOINT ["/docker-entrypoint.sh"]
# it does not matter whether the parameter is written as $JAVA_OPTS or ${JAVA_OPTS}
CMD ["java", "$JAVA_OPTS", "-javaagent:/opt/newrelic/newrelic.jar", "-server", "-jar", "app.jar"]
with an ENTRYPOINT shell script below:
#!/bin/bash
set -e
source /work-dir/env.sh
exec "$#"
the image will build correctly, but the container will print the error below at run time:
Error: Could not find or load main class $JAVA_OPTS
Caused by: java.lang.ClassNotFoundException: $JAVA_OPTS
Instead, Java can read its command-line parameters either from the actual command line or from the _JAVA_OPTIONS environment variable. That means we can pass the desired parameters through _JAVA_OPTIONS without changing anything in the Dockerfile, while still letting the JVM start as the container's main process, which keeps Docker's signal handling working via exec "$@".
The below one is my final version of the Dockerfile and docker-entrypoint.sh files:
...
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["java", "-server", "-jar", "app.jar"]
#!/bin/bash
set -e
source /work-dir/env.sh
export _JAVA_OPTIONS="-XX:+PrintFlagsFinal"
exec "$#"
After you build your Docker image and run it, you will see the logs below, which means it worked well:
Picked up _JAVA_OPTIONS: -XX:+PrintFlagsFinal
[Global flags]
int ActiveProcessorCount = -1 {product} {default}
Inspired by the above, I did this:
#snapshot by default. 1 is release.
ENV isTagAndRelease=0
CMD echo is_tag: ${isTagAndRelease} && \
if [ ${isTagAndRelease} -eq 1 ]; then echo "release build"; mvn -B release:clean release:prepare release:perform; fi && \
if [ ${isTagAndRelease} -ne 1 ]; then echo "snapshot build"; mvn clean install; fi && \
.....

How to get /etc/profile to run automatically in Alpine / Docker

How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I login and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell sh -l.
To force Ash to source /etc/profile or any other script you want upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching Ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and source a /root/.somercfile instead.
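A sketch of that variant, reusing the answer's hypothetical /root/.somercfile name:
FROM alpine:3.5
ENV ENV="/root/.somercfile"
# keep per-shell setup in its own rc file rather than re-sourcing /etc/profile
RUN echo "alias ll='ls -lh'" > /root/.somercfile
Every non-login ash will now source /root/.somercfile, and /etc/profile stays reserved for login shells.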
Source: https://stackoverflow.com/a/40538356
You can still try, in your Dockerfile, a:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker@default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases to .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
.ash_aliases file
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash will make it behave as a login shell, thus reading ~/.profile

Docker echo environment variable

I'm trying to write a little Dockerfile that sets a USER and just echoes the current user, as a little example to prove to myself it is working. I've tried a number of variants and couldn't find much help in the documentation.
FROM ubuntu
USER daemon
# ENTRYPOINT ["echo", "$USER"]
# just gives "$USER"
# ENTRYPOINT ["echo", "-e", "${USER}"]
# just gives "$USER"
# ENTRYPOINT echo $USER
# gives empty string
# ENTRYPOINT ["/bin/echo", "$USER"]
# just gives "$USER"
I'm running docker build . on the Dockerfile, then running docker run <image-id> and getting the results noted in the comments above.
Expected result is daemon, or without the USER daemon line, I expect root. Probably a really simple answer.
This is the expected behavior, as weird as it seems!
When ENTRYPOINT is a list (as in ENTRYPOINT ["echo", "$USER"]), it is used as-is, without further parsing or interpretation. So $USER remains $USER, because there is no shell involved in the process to replace it with the value of the USER environment variable.
Now, when ENTRYPOINT is a string (as in ENTRYPOINT echo $USER), what is actually executed is sh -c "echo $USER", and $USER is replaced with the value of the environment variable (as you would expect).
However, the environment variable USER is not set by default. It is set by the login process; and when you just run sh -c ... the login process is not involved.
Compare the environment when running docker run -t -i ubuntu bash and docker run -t -i ubuntu login -f root. In the former case, you will get a very basic environment; in the latter case, you will get the complete environment that you are used to (including the USER variable).
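To see the difference side by side (a sketch; the exact output varies by image):
docker run --rm -t -i ubuntu sh -c 'echo "USER=[$USER]"'
# prints USER=[] because no login process ran
docker run --rm -t -i ubuntu login -f root
# full login environment, with USER=root set by login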
Couldn't you set, in the Dockerfile, an ENV variable to a default value, and then, when running a container, use the -e/--env option to override what would be interpreted by the:
ENTRYPOINT echo $SOMEENVVAR
form of ENTRYPOINT?
I think there's a series of issues here.
when I
docker run -i -t ubuntu /bin/bash
echo $USER
set
I don't see $USER set at all; whoami does report daemon, though.
Additionally, I suspect (but have not looked at the code yet) that ENV vars in the Dockerfile are escaped to prevent their expansion; many people assume they can export host variables into the built container, which is something the Docker folks would like to avoid.
