.bash_profile does not work with docker php image

Here is my Dockerfile:
# https://hub.docker.com/_/php/
FROM php:5.5.23-fpm
USER www-data
ADD .bash_profile /var/www/.bash_profile
SHELL ["/bin/bash", "-c"]
RUN source /var/www/.bash_profile
Then, after the container is built and I run docker exec -it CONTAINER_NAME bash, I do not see the aliases defined in /var/www/.bash_profile. But if I execute source /var/www/.bash_profile manually, everything is fine.
The same problem is described here: https://github.com/docker/kitematic/issues/896 but there is no answer.

I had a similar issue, and the easiest solution was to pass the -l option to bash, making it act as if it had been invoked as a login shell:
docker run --rm -it $IMAGE /bin/bash -l
bash will then read in ~/.bash_profile
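The same -l flag works with docker exec, so for the setup in the question (a minimal sketch using the question's container name):
docker exec -it CONTAINER_NAME bash -l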

That's because those (i.e. RUN and SHELL) are build instructions. When you execute docker run, the ENTRYPOINT and CMD are executed instead.
docker exec, however, just enters an existing container's namespace and executes a command. So in your case it just runs bash; that's why you have to source your profile again.
UPDATE:
This snippet is from man bash:
When an interactive shell that is not a login shell is started, bash reads and executes commands from
/etc/bash.bashrc and ~/.bashrc, if these files exist.
So in your case, renaming the file to ~/.bashrc will probably work.
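For example, a minimal sketch of the question's Dockerfile adjusted this way (assuming www-data's home is /var/www, as in the original):
FROM php:5.5.23-fpm
USER www-data
# interactive non-login shells, such as the one docker exec starts, read ~/.bashrc
ADD .bash_profile /var/www/.bashrc
After rebuilding, docker exec -it CONTAINER_NAME bash should pick up the aliases.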

I faced a similar issue.
Relevant snippet from the Dockerfile:
RUN scripts/script1.sh
RUN scripts/script2.sh
Snippet from scripts/script1.sh
echo "TEST_ENV=Hello" >> ~/.bashrc
source ~/.bashrc
Snippet from scripts/script2.sh
echo $TEST_ENV
The echo statement in script2 printed nothing; that is, script2 could not see the TEST_ENV variable at all.
So, it turns out that every RUN instruction is executed in its own shell. The environment variable is therefore only visible in the shell where source ~/.bashrc is executed, which is the shell in which script1 ran. For it to be visible in script2's shell, an explicit source ~/.bashrc has to be executed from script2.
In case an environment variable is required to be present within the running container, the code that runs as part of Docker's ENTRYPOINT or CMD should source the .bashrc file. Logging into the running container with docker exec -it <container name> /bin/bash will demonstrate that the environment variable is indeed set.
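A minimal sketch of such an entrypoint script (the name entrypoint.sh and the exec "$@" hand-off are my assumptions, not from the question):
#!/bin/bash
# load the variables the build scripts appended to ~/.bashrc,
# then replace this shell with the container's main command
source ~/.bashrc
exec "$@"
Wired up in the Dockerfile with ENTRYPOINT ["/entrypoint.sh"], with CMD supplying the main command.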

Related

How to go inside docker container and launch pipenv shell with Makefile command?

My dockerized project uses pipenv to deal with Python dependencies, the interpreter, etc. Currently I have a Makefile command with which I can go inside my Docker container:
to-container:
	docker exec -ti my_container_name bash
I would like this command to also automatically launch pipenv shell inside the container, like this:
to-container:
	docker exec -ti my_container_name bash && pipenv shell
Is this possible, and what is the trick?
You can't do what you want like this. Forget even docker for now, much less make. What does this do when you invoke it:
bash && pipenv shell
First, it runs the bash program and waits for that program to finish. Second, assuming that the bash program exits successfully, it will run pipenv shell. It certainly will not run the pipenv shell command inside the bash program.
You want this, or something like it:
bash -c 'pipenv shell'
This will start a bash program and ask it to run the pipenv shell command, then (after pipenv shell completes) exit. pipenv shell will itself start an interactive shell (as I understand it; I don't use pipenv myself). This is a little gross since you have two shells, but it's not a big deal.
To translate this into docker you'd use:
docker exec -ti my_container_name bash -c 'pipenv shell'
then to put that into your makefile:
to-container:
	docker exec -ti my_container_name bash -c 'pipenv shell'
I know I'm a little bit late to the party, but I stumbled onto this page just today. So for anyone who comes after: you might want to add pipenv shell to your ~/.bashrc file, as that file is loaded every time you start an interactive bash session.
For example, in my Dockerfile, my WORKDIR is set to /opt/app/ and my Pipfile is under the same directory. Thus, I just need to add pipenv shell at the end of my .bashrc file. And voilà! Running docker exec -ti container-name /bin/bash goes straight into the pipenv environment.
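A sketch of the relevant Dockerfile lines (the directory layout is taken from this answer, not guaranteed for other setups):
WORKDIR /opt/app
# start the pipenv environment for every interactive bash session
RUN echo 'pipenv shell' >> /root/.bashrc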

Update PATH in Centos Docker image (alternative to Dockerfile ENV)

I'm provisioning a Docker CentOS image with Packer and using bash scripts instead of a Dockerfile to configure the image (this seems to be the Packer way). What I can't seem to figure out is how to update the PATH variable so that my custom binaries can be executed like this:
docker run -i -t <container> my_binary
I have tried putting a .sh file in the /etc/profile.d/ folder and also writing directly to /etc/environment, but none of that seems to take effect.
I suspect it has something to do with which shell Docker uses when executing commands in a disposable container. I thought it was the Bourne shell, but as mentioned earlier, neither the /etc/profile.d/ nor the /etc/environment approach worked.
UPDATE:
As I understand now, it is not possible to change environment variables in a running container, for the reasons explained in @tgogos' answer. However, I don't believe this is an issue in my case, since after Packer is done provisioning the image, it commits it and uploads it to Docker Hub. A more accurate example would be as follows:
$ docker run -itd --name test centos:6
$ docker exec -it test /bin/bash
[root@006a9c3195b6 /]# echo 'echo SUCCESS' > /root/test.sh
[root@006a9c3195b6 /]# chmod +x /root/test.sh
[root@006a9c3195b6 /]# echo 'export PATH=/root:$PATH' > /etc/profile.d/my_settings.sh
[root@006a9c3195b6 /]# echo 'PATH=/root:$PATH' > /etc/environment
[root@006a9c3195b6 /]# exit
$ docker commit test test-image:1
$ docker exec -it test-image:1 test.sh
Expecting to see SUCCESS printed, but instead getting:
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"test.sh\": executable file not found in $PATH": unknown
UPDATE 2
I have updated PATH in ~/.bashrc, which lets me execute the following:
$ docker run -it test-image:1 /bin/bash
[root@8f821c7b9b82 /]# test.sh
SUCCESS
However, running docker run -it test-image:1 test.sh still results in
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: ...
I can confirm that my image's CMD is set to "/bin/bash". So can someone explain why running docker run -it test-image:1 test.sh doesn't source ~/.bashrc?
A few good points are mentioned at:
How to set an environment variable in a running docker container (also check the link to the relevant GitHub issue)
and Docker - Updating Environment Variables of a Container
where @BMitch mentions:
Destroy your container and start a new one up with the new environment variable using docker run -e .... It's identical to changing an environment variable on a running process, you stop it and restart with a new value passed in.
and in the comments section, he adds:
Docker doesn't provide a way to modify an environment variable in a running container because the OS doesn't provide a way to modify an environment variable in a running process. You need to destroy and recreate.
Update (see the comments section): you can use
docker commit --change "ENV PATH=your_new_path_here" test test-image:1
/etc/profile is only read by bash when it is invoked as a login shell.
For more information about which files are read by bash on startup see this article.
EDIT: If you change the last line in your example to
docker exec -it test bash -lc test.sh
it works as you expect.
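Putting both hints together, a sketch of commands that should behave as expected (the full PATH value is assumed from the CentOS 6 default):
# bake the new PATH into the image metadata, bypassing profile files entirely
docker commit --change 'ENV PATH=/root:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' test test-image:1
docker run -it test-image:1 test.sh
# or keep the /etc/profile.d/ approach and force a login shell
docker run -it test-image:1 bash -lc test.sh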

concatenate value to an existing env var using docker run

I'm trying to concatenate a value to an existing environment variable in a Docker container I'm starting.
For example: docker run -it -e PATH=$PATH:foo continuumio/anaconda
I am currently stuck at the point of trying to concatenate a value to the existing PATH environment variable that already exists in the container.
I am expecting to see the following value in the PATH environment variable of the container - PATH=/opt/conda/bin:/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/scala/bin:/usr/local/sbt/bin:/usr/local/spark/bin:/usr/local/spark/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Instead I get this - PATH=$PATH:foo
using the docker run command. Is there any way to achieve what I'm aiming at?
--EDIT--
I am executing the command from a Windows 10 command prompt.
Try the following:
docker run -it continuumio/anaconda /bin/bash -c "PATH=$PATH:foo exec bash"
This command launches bash in the container, passes it a command (-c) that appends to the existing $PATH and then replaces itself with a new bash copy (exec bash) that inherits the new $PATH value.
If you also want to execute a command in the updated shell, you can pass another -c option to exec bash, but note that quoting can get tricky, and that a trick is needed to keep a shell open:
docker run -it continuumio/anaconda /bin/bash -c "PATH=$PATH:foo exec bash -c 'date; exec bash'"
The small caveat is that the shell running when the startup command has finished is not the same instance as the one that ran the command. This shouldn't be a problem, unless your startup command made modifications to the shell state (such as defining functions or aliases) that must be preserved.
As for what you tried:
The only way to set an environment variable with -e is if the value is known ahead of time, outside the container; whatever you pass to -e must be a literal value - it cannot reference definitions inside the container.
As an aside: if you ran your command on a Unix platform rather than Windows, the current shell would expand $PATH to the host's value, which is also not the intent.
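For completeness, a variant that behaves the same from a POSIX host shell, single-quoting so that $PATH is expanded inside the container rather than by the host:
docker run -it continuumio/anaconda /bin/bash -c 'PATH=$PATH:foo exec bash'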

set environment variable in running docker container

I need to set an environment variable in a running Docker container. I am already aware of how to set an environment variable while creating a container. As far as I can tell, there is no straightforward way to do this with Docker; Docker is planning to add something in the new version 1.13.
But I found that some people were able to manage it in ways that are not working for me now. I tried the following, but it did not work:
docker exec -it -u=root test /bin/bash -c "export port=8090"
echo "export port=8090" to /etc/bash.bashrc using a script and then source it
docker exec -it test /bin/bash -c "source /etc/bash.bashrc"
Configuring the whole thing in a script and running it from the host also did not work. When running the script from the host, every other command executes successfully except "export port=8090", "source /etc/bash.bashrc", and "source /root/.bashrc".
Can anyone explain why sourcing a file from the host does not work in a Docker container, even when I set the user (-u=root)? Can anyone help me solve this? When I source the file from inside the container it works perfectly, but in my case I have to do it from the host machine.
NOTE: I am using docker 1.12 and tried the above on ubuntu:16.04 and ubuntu:14.04.
If you have a running process in the container and you are attempting to change an environment variable so that the running process will pick it up dynamically, this will not work. The environment variables of a process are set when it starts. You can see here some ways to overcome that, but I don't think that is the right way to go.
Instead, I would have a configuration file that the process reads (or watches) periodically, and when you want to change the configuration, change the file.
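As a sketch of that idea (the config path and the do_work command are hypothetical):
#!/bin/bash
# re-read the config before each unit of work so edits to the file take effect
while true; do
  source /etc/myapp.conf   # hypothetical config file, e.g. containing port=8090
  do_work "$port"          # hypothetical command that uses the current value
  sleep 10
done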
If this isn't your scenario, please describe your scenario so we can better assist you.
I found a way to provide environment variables to a running container. First, upgrade your docker-engine; I am using v1.12.5.
Create a script with the environment variables:
#!/bin/bash
echo "export VAR1=VAL1
export VAR2=VAL2" >> /etc/bash.bashrc
source /etc/bash.bashrc
Now start a container. Here, 'test' is the container name:
docker run -idt --name=test ubuntu
Copy your script to container:
docker cp script.sh test:/
Run the script:
docker exec -it test /bin/bash -c "/script.sh"
Restart your container:
docker restart test
Go to the container shell:
docker exec -it test /bin/bash
Check the variable:
echo $VAR1

Workaround to docker run "--env-file" supplied file not being evaluated as expected

My current setup for running a Docker container is along these lines:
I've got a main.env file:
# Main
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
In my service file (upstart), I source this file: . /path/to/main.env
I then call docker run with multiple -e for each of the environment variables I want inside of the container. In this case I would call something like: docker run -e MONGODB_URL=$MONGODB_URL ubuntu bash
I would then expect MONGODB_URL inside of the container to equal mongodb://localhost:27017/development. Note that in reality, echo localhost is replaced by a curl to Amazon's API to obtain an actual PRIVATE_IP.
This becomes a bit unwieldy when you start having more and more environment variables to give your container. The fine point here is that the environment variables need to be resolved at run time, such as with a call to curl or by referring to other env variables.
The solution I was hoping to use is:
calling docker run with an --env-file parameter pointing at a file like this:
# Main
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
Then my docker run command would be significantly shortened to docker run --env-file=/path/to/main.env ubuntu bash (keep in mind I usually have around 12-15 environment variables).
This is where I hit my problem: inside the container, none of the variables resolve as expected. Instead I end up with:
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
I could circumvent this by doing the following:
Sourcing the main.env file.
Creating a file containing just the names of the variables I want (meaning docker would search for them in the environment).
Then calling docker run with this file as an argument to --env-file. This would work, but it would mean maintaining two files instead of one, which really wouldn't be much of an improvement over the current situation (a sketch of this workaround follows below).
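A sketch of that two-file workaround (names.env is a hypothetical second file; docker run passes name-only lines in --env-file through from the host's environment):
# resolve the values on the host
. /path/to/main.env
# names.env lists only the variable names, one per line:
#   PRIVATE_IP
#   MONGODB_HOST
#   MONGODB_URL
docker run --env-file=/path/to/names.env ubuntu bash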
What I would prefer is to have the variables resolve as expected.
The closest question to mine that I could find is:
12factor config approach with Docker
Create a .env file, for example:
test=123
val=Guru
Execute the command:
docker run -it --env-file=.env bash
Inside bash, verify using:
echo $test (should print 123)
Both --env and --env-file set up variables as-is and do not resolve nested variables.
Solomon Hykes talks about configuring containers at run time and the various approaches. The one that should work for you is volume-mounting the main.env from the host into the container and sourcing it.
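A sketch of that mount-and-source approach (paths are illustrative):
# mount the env file from the host and source it before starting the shell
docker run -it -v /path/to/main.env:/main.env ubuntu bash -c '. /main.env && exec bash'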
So I just faced this issue as well. What solved it for me was specifying --env-file or -e KEY=VAL before the name of the container image. For example:
Broken:
docker run my-image --env-file .env
Fixed:
docker run --env-file .env my-image
An env file that is nothing more than key/value pairs can be processed by normal shell commands and appended to the environment. Look at bash's -a (allexport) option.
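A sketch of that allexport approach on the host (set -a marks every variable assigned after it for export, so the resolved values reach the container through plain -e NAME entries, which copy values from the host environment):
set -a                 # export everything assigned from here on
. /path/to/main.env    # backtick/curl expressions resolve here, on the host
set +a
docker run -e PRIVATE_IP -e MONGODB_HOST -e MONGODB_URL ubuntu bash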
What you can do is create a startup script that runs when the container starts. So if your current Dockerfile looks something like this:
FROM ...
...
CMD command
Change it to:
FROM ...
...
ADD start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
In your start.sh script, do the following:
#!/bin/bash
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
command
I had a very similar problem to this. If I passed the contents of the env file to docker as separate -e directives, everything ran fine; however, if I passed the file using --env-file, the container failed to run properly.
It turns out there were some spurious line endings in the file (I had copied it from Windows and ran docker in Ubuntu). When I removed them, the container ran the same with --env or --env-file.
I had this issue when using docker run in a separate run.sh script, since I wanted the credentials ADMIN_USER and ADMIN_PASSWORD to be accessible in the container but not show up in the command.
Following the other answers and passing a separate environment file with --env or --env-file didn't work for my image (though it worked for the Bash image). What worked was creating a separate env file...
# env.list
ADMIN_USER='username'
ADMIN_PASSWORD='password'
...and sourcing it in the run script when launching the container:
# run.sh
source env.list
docker run -d \
  -e ADMIN_USER=$ADMIN_USER \
  -e ADMIN_PASSWORD=$ADMIN_PASSWORD \
  image_repo/name:tag
