If I have a Dockerfile like this:
FROM ubuntu
CMD [ "ps", "-ef" ]
And if I build and run the image, I get
$ docker run -it 156a9f959f43
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 07:12 pts/0 00:00:00 ps -ef
which is consistent with the documentation.
Question: How does the ps binary get located in the first place when the container runs?
The exec syntax uses the PATH environment variable defined in the parent image (ubuntu:latest).
$ docker image inspect ubuntu:latest
[
{
...
"Config": {
...
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/bash"
],
....
If you go looking at the Dockerfile for this base image... you'll actually see that the PATH variable is not defined there. We could go looking at scratch, but that's a virtual image.
So, let's build an image from scratch with nothing in it, to see what variables get defined:
$ cat df.scratch
FROM scratch
$ docker build -t test-scratch -f df.scratch .
...
$ docker image inspect test-scratch:latest
[
{
...
"Config": {
...
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
...
So the PATH is being set even in the scratch-based image. This old issue and the associated PR show that Docker includes a default PATH out of the box.
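If you only want to see the Env entry rather than the whole inspect output, a Go-template filter on docker image inspect works too (a quick sketch using the tag built above; it prints just the PATH entry shown earlier):
$ docker image inspect -f '{{.Config.Env}}' test-scratch:latest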
How can you adjust that path? You need to use an ENV line. If you set a variable in a RUN line, it will not be preserved after that RUN line completes. And if you append to the .bashrc in the container, that does not apply to non-bash shells like /bin/sh, anything using the exec syntax without a shell, and any non-interactive bash shells (since the .bashrc stops processing part way through for non-interactive shells). Here's an example of that with a different image/build:
$ cat df.path
FROM ubuntu
# before state from the base image
RUN [ "env" ]
# attempting to modify the .bashrc
RUN echo "export PATH="$PATH:/my/custom/bin/dir"" >> ~/.bashrc
RUN [ "env" ]
# modifying the image environment variable directly
ENV PATH=${PATH}:/opt/custom/bin
RUN [ "env" ]
$ docker build -t test-path -f df.path .
Sending build context to Docker daemon 31.23kB
Step 1/6 : FROM ubuntu
---> 4e5021d210f6
Step 2/6 : RUN [ "env" ]
---> Running in 5bb72abb386d
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=5bb72abb386d
HOME=/root
Removing intermediate container 5bb72abb386d
---> c438fb269c70
Step 3/6 : RUN echo "export PATH="$PATH:/my/custom/bin/dir"" >> ~/.bashrc
---> Running in 127b10aff046
Removing intermediate container 127b10aff046
---> 4af50595c271
Step 4/6 : RUN [ "env" ]
---> Running in c5ff46ba3b82
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c5ff46ba3b82
HOME=/root
Removing intermediate container c5ff46ba3b82
---> 455325a5e484
Step 5/6 : ENV PATH=${PATH}:/opt/custom/bin
---> Running in e7960d9ce18a
Removing intermediate container e7960d9ce18a
---> ed532bff78b4
Step 6/6 : RUN [ "env" ]
---> Running in 9c1558a61ab7
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/custom/bin
HOSTNAME=9c1558a61ab7
HOME=/root
Removing intermediate container 9c1558a61ab7
---> f08993f21b97
Successfully built f08993f21b97
Successfully tagged test-path:latest
Note the original value of the PATH at step 2; it is unchanged at step 4, and it has the newly defined value at step 6.
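To double-check that the ENV change is persisted into the image metadata (and therefore into containers started from it), you can inspect the built image or override the command with env; a quick sketch using the tag from the build above:
$ docker image inspect -f '{{.Config.Env}}' test-path:latest
$ docker run --rm test-path env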
In Docker containers (as in most operating systems) there is a $PATH environment variable, which holds the directories where executables are located (separated by :).
For example, $PATH might hold a value like /usr/local/bin:/usr/bin:/home/ubuntu/bin, which means that when you run a command like ps, those directories are searched for the executable, in order.
You can learn more about the $PATH variable here: https://en.wikipedia.org/wiki/PATH_(variable)
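As a quick way to see where a command resolves from inside a container (a sketch; the exact location of ps varies between images), you can ask the shell directly:
$ docker run --rm ubuntu sh -c 'command -v ps; echo "$PATH"'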
Note: The $PATH variable will differ from container to container (as they are isolated units) and will most probably hold the default value of the base distro used by the Docker image.
To make changes to your $PATH variable on Linux-based systems, you can run export PATH="$PATH:/custom/bin/dir" and it will append /custom/bin/dir to the variable.
To make this change permanent, you should add this command to your .bashrc, .profile, .zshrc or similar file (depending on what shell you are using).
So to update the variable in your Docker containers, you should add something like this to your Dockerfile:
FROM ubuntu
RUN echo "export PATH="$PATH:/my/custom/bin/dir"" >> ~/.bashrc
Related
I have a Docker entrypoint script that does get executed. However, it just doesn't run the source command to load a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried two versions of the entrypoint script. Neither of them is working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 above prints to the log that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying a few things for 3 days now with no progress. Can someone please help me? Please note that I am new to Docker, which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you append to /etc/bash.bashrc but then exec node, and nothing there ever re-reads that dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec. This usually isn't a problem.
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file, exporting everything it defines
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
  set -a
  . "$ENV_FILE"
  set +a
fi
# Run the main container command
exec "$@"
And then in the Dockerfile, put the node invocation as the main command
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ["node", "."] # could be shell-command syntax
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, passing env as the command instead of node .. It will go through the steps in the entrypoint script, including setting up the environment, but then print out the environment and exit immediately. That lets you observe the changes.
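For instance, if the env file contained a line such as DB_HOST=db.internal (a purely hypothetical variable, just for illustration), you would expect it to show up in that output:
$ docker run --rm your-image env | grep DB_HOST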
I'm building a Docker image for a Sybase database. The docker build command fails because the server name, which is taken from the build step container's hostname, cannot start with a number.
I have searched A LOT for a way to change the build step container's hostname, and my solution so far is to retry the build until I get a hostname that starts with a letter...
Step 1/7 : FROM my_image as docker_sybase_db
---> d266899b4eef
Step 2/7 : COPY *.zip /mnt/backup/
---> Using cache
---> 9e8e405848ce
Step 3/7 : COPY entrypoint.sh ~
---> Using cache
---> 5c0c923985db
Step 4/7 : ENV HOSTNAME docker_sybase_db
---> Using cache
---> f2b39a7280a0
Step 5/7 : RUN init_db.sh
---> Running in 0ae1a95b3203
Server name '0ae1a95b3203' begins with an illegal character. The first
character of a server name must be an alphabetic ascii character.
Error running command 'srvbuild -r /tmp/my_super_build.rs':
If I can't modify this old Sybase init script, am I out of luck here?
EDIT: Here is what I am trying to do
Create a database instance
Load a backup
Package that pre-loaded instance into a container.
Loading the backup takes a lot of time and this old database system requires the server name to start with a letter, not a number.
You could try and see if LolHens's idea of changing the hostname in the container namespace (during the docker build) works for you.
docker build . | tee >((grep --line-buffered -Po '(?<=^change-hostname ).*' || true) | \
while IFS= read -r id; do \
nsenter --target "$(docker inspect -f '{{ .State.Pid }}' "$id")"\
--uts hostname 'new-hostname'; \
done)
The docker build output is parsed to:
detect a "change-hostname" directive
run nsenter, which enters the UTS (UNIX Time-Sharing) namespace of that intermediate container and sets a different hostname (different from the SHA-generated random one)
That means your RUN step should be:
RUN echo "change-hostname $(hostname)"; \
sleep 1; \
printf '%s\n' "$(hostname)" > /etc/hostname; \
printf '%s\t%s\t%s\n' "$(perl -C -0pe 's/([\s\S]*)\t.*$/$1/m' /etc/hosts)" "$(hostname)" > /etc/hosts; \
init_db.sh
That way, init_db.sh should run in an intermediate container with a different hostname (one you do have control over, and which would not start with a number).
I have Tomcat installed in a container, and inside it there is an application configuration file. I would like to populate a value inside it at run time (before that point I don't know what the value is, so I can't populate it at the time of building the image).
I am invoking the service with docker-compose up, and I would like the value in the configuration file to be replaced by the value I provide to docker-compose as a parameter,
something like docker-compose up -e "value at run time via docker compose"
URL for server
SERVERADD=https://{{value at run time via docker compose}}/{{index}}
Can I accomplish this with an environment variable or any other way? Kindly suggest!
This is normally done in an ENTRYPOINT or CMD script that is built into the image.
The script checks for the environment variable, does the replacements or other work required, then continues on to run the command as before.
#!/bin/sh
if [ -n "$SOME_ENV" ]; then
sed -i -e 's/^param=.*/param='"$SOME_ENV"'/' /etc/file.conf
fi
exec "$#"
The script needs to be added to an image, the Dockerfile could be:
FROM whatever
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "run_server", "-o", "option" ]
How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I login and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell sh -l.
To force Ash to source /etc/profile or any other script you want upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching Ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and source a /root/.somercfile instead.
Source: https://stackoverflow.com/a/40538356
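Applied to the aliases.sh case from the question, a minimal sketch (file name taken from the question; on Alpine, /etc/profile normally sources /etc/profile.d/*.sh) could look like:
FROM alpine:3.5
COPY aliases.sh /etc/profile.d/
# make non-login ash shells source /etc/profile on startup
ENV ENV="/etc/profile"
With the gotcha above in mind, pointing ENV at a small rc file that in turn sources only what you need is the safer variant.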
You can still try, in your Dockerfile, a:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh scripts should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker@default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
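Applied to the image from the question (image name assumed), that would be:
$ docker run -it my_container sh -l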
As mentioned by Jinesh before, the default shell in Alpine Linux is ash:
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases in .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
.ash_aliases file
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash will make it behave as a login shell, thus reading ~/.profile
I have a problem with Docker: it does not persist changes made by commands launched via RUN.
Here is my Dockerfile :
FROM jenkins:latest
RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
RUN cat /var/jenkins_home/.profile
And here is the output :
Sending build context to Docker daemon 373.8 kB
Step 1 : FROM jenkins:latest
---> fc39417bd5fb
Step 2 : RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
---> Using cache
---> c614b13d9d83
Step 3 : RUN ls -alh /var/jenkins_home
---> Using cache
---> 8a16a0c92f67
Step 4 : RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
---> Using cache
---> f6ca5d5bdc64
Step 5 : RUN ls -alh /var/jenkins_home
---> Using cache
---> 3372c3275b1b
Step 6 : RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
---> Running in 79842be2c6e3
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
bar
---> 28559b8fe041
Removing intermediate container 79842be2c6e3
Step 7 : RUN cat /var/jenkins_home/.profile
---> Running in c694e0cb5866
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
---> b7e47d65d65e
Removing intermediate container c694e0cb5866
Successfully built b7e47d65d65e
Do you guys know why the "foo" file is not persisted at step 3? Why the ".bash_logout" file is recreated at step 5? Why "bar" is no longer in my ".profile" file at step 7?
And of course, if I start a container based on this image, none of my modifications are persisted... so my Dockerfile is... useless. Any clue?
The reason those changes are not persisted is that they are inside a volume: the Jenkins Dockerfile marks /var/jenkins_home/ as a VOLUME.
Information inside volumes is not persisted during docker build, or more precisely; each build-step creates a new volume based on the image's content, discarding the volume that was used in the previous build step.
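A minimal sketch of the same effect, independent of Jenkins (image and paths made up for illustration):
FROM ubuntu
VOLUME /data
# written into an anonymous volume that exists only for this build step
RUN echo "foo" > /data/file
# /data is reset to the image content here, so the file is gone again
RUN ls -alh /data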
How to resolve this?
I think the best way to resolve this is to:
Add the files you want to modify inside jenkins_home in a different location inside the image, e.g. /var/jenkins_home_overrides/
Create a custom entrypoint based on, or "wrapping", the default entrypoint script that copies the content of your jenkins_home_overrides to jenkins_home the first time the container is started.
Actually...
And just when I wrote that up: it looks like the official Jenkins image already supports this out of the box;
https://github.com/jenkinsci/docker/blob/683b0d6ed17016ee3211f247304ef2f265102c2b/jenkins.sh#L5-L23
According to the documentation, you need to add your files to the /usr/share/jenkins/ref/ directory, and those will be copied to /var/jenkins_home upon start.
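So, for the .profile example from the question, something along these lines should survive (a sketch; the file is assumed to sit next to the Dockerfile):
FROM jenkins:latest
# anything under /usr/share/jenkins/ref/ is copied into /var/jenkins_home when the container starts
COPY .profile /usr/share/jenkins/ref/.profile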
Also see https://issues.jenkins-ci.org/browse/JENKINS-24986