In a given Dockerfile, I want to set a variable based on the content of another ENV variable (which is injected into the container beforehand, or defined within the Dockerfile).
I'm looking at something like this:
FROM centos:7
ENV ENABLE_REMOTE_DEBUG "true"
ENV DEBUG_FLAG=""
RUN if [ "$ENABLE_REMOTE_DEBUG" = "true" ]; then echo "set debug flag"; export DEBUG_FLAG="some_flags"; else echo "remote debug not set"; fi
RUN echo debug flags: ${DEBUG_FLAG}
## Use the debug flag in the entrypoint: java ${DEBUG_FLAG} -jar ...
The problem with this Dockerfile is that $DEBUG_FLAG is never set (or is not visible in the next line?), since the output is empty:
debug flags:
What am I missing here? (I'd prefer not to call an external bash script.)
Let’s take a look at what’s going on when a Dockerfile is used to build a Docker image.
Each line is executed in a fresh container. The resulting container state, after the line has been interpreted, is saved in a temporary image and used to start a container for the next command.
This temp image does not save any state apart from the files on disk, Docker-specific properties like EXPOSE, and a few image settings. Temp images have the advantage that subsequent builds are fast thanks to the cache.
Now, coming to your question: if you want to do this with RUN instead of writing a shell script, here is a workaround:
RUN if [ "$ENABLE_REMOTE_DEBUG" = "true" ]; then echo "set debug flag"; echo 'export DEBUG_FLAG="some_flags"' >> /tmp/myenv; else echo "remote debug not set"; fi
RUN source /tmp/myenv; echo debug flags: ${DEBUG_FLAG}
Since the image is based on CentOS, the default shell is bash, so sourcing your own environment file works fine. In shells where source is not available, reading the file with the POSIX . (dot) command will do the same job.
You cannot set environment variables with export in a RUN instruction and expect them to be available in the next instruction. Only filesystem changes made by a RUN instruction are persisted; everything else, such as environment variables, is discarded.
You should move the logic to a shell script.
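For example, a minimal sketch of that approach (the file names, the flag value, and the jar path are illustrative assumptions, not from the question): compute the flag when the container starts, inside an entrypoint script, and exec the real process:

#!/bin/sh
# entrypoint.sh -- decide the debug flags at container start
if [ "$ENABLE_REMOTE_DEBUG" = "true" ]; then
    DEBUG_FLAG="some_flags"
else
    DEBUG_FLAG=""
fi
# exec replaces this shell with the java process so signals reach it directly
exec java $DEBUG_FLAG -jar /app/app.jar

and in the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

Because the flag is computed at run time rather than build time, changing ENABLE_REMOTE_DEBUG with docker run -e works without rebuilding the image.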
I run some installation scripts via Docker; they change ~/.bashrc, but then I need to source it to use the installed commands in the RUN instructions below.
I tried the obvious RUN . ~/.bashrc and got a /bin/sh: 13: /root/.bashrc: shopt: not found error.
I tried RUN . ~/.profile and got mesg: ttyname failed: Inappropriate ioctl for device
I do not want to use ENV instructions. The point of having external installation scripts is to use them in non-Docker environments, for example when running unit tests locally. ENV instructions would duplicate environment setup that is already done in the installation scripts.
You should not try to set up shell dotfiles in Docker. Many typical paths do not run them at all; for example
# In a Dockerfile
CMD ["some", "command", "here"]
# From the command line
docker run myimage some command here
The Docker environment is, fundamentally, different from a standalone Linux system; in addition to shell dotfiles, "home directory" isn't really a Docker concept, and if you have a multi-part process, on Docker it's standard to run each part in a separate container, but on standalone Linux you could use the init system to keep all of the parts running together. If you're expecting things to work exactly the same with exactly the same installation scripts, a virtual machine would be a better technological match for what you're attempting.
("Inappropriate ioctl for device" also suggests that there are things in the dotfiles that strongly expect to be run from an actual terminal, which you don't necessarily have at docker build time.)
My generic advice here is:
If possible, install things in the "system" directories within the image and avoid needing custom environment variable settings. (Don't use a version manager like nvm or rvm; don't use a Python virtual environment.)
If you do have to set environment variables, ENV is the way to do it.
If you really can't do either of the above, you can set environment variables in an ENTRYPOINT script before launching the main process (a sketch follows below); but if it's important to you that variables show up in docker inspect or docker exec shells, they won't be set there.
(Also remember that each RUN command launches a new container with a totally new shell environment. You can RUN . .profile; foo, but the environment variable settings won't carry through to the next RUN line.)
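A minimal sketch of that ENTRYPOINT pattern (the file names are illustrative assumptions):

#!/bin/sh
# entrypoint.sh -- load the environment the installation scripts created, then run the real command
. /opt/app/env.sh   # assumption: the installers write their variable settings here
exec "$@"           # runs the image's CMD with that environment in place

With ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile, the CMD is passed to the script as "$@" and inherits the variables; a docker exec shell bypasses the entrypoint, though, and will not see them.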
I'm trying to write (what I thought would be) a simple bash script that will:
run virtualenv to create a new environment at $1
activate the virtual environment
do some more stuff (install django, add django-admin.py to the virtualenv's path, etc.)
Step 1 works quite well, but I can't seem to activate the virtualenv. For those not familiar with virtualenv, it creates an activate file that activates the virtual environment. From the CLI, you run it using source:
source $env_name/bin/activate
Where $env_name, obviously, is the name of the dir that the virtual env is installed in.
In my script, after creating the virtual environment, I store the path to the activate script like this:
activate="`pwd`/$ENV_NAME/bin/activate"
But when I call source "$activate", I get this:
/home/clawlor/bin/scripts/djangoenv: 20: source: not found
I know that $activate contains the correct path to the activate script, in fact I even test that a file is there before I call source. But source itself can't seem to find it. I've also tried running all of the steps manually in the CLI, where everything works fine.
In my research I found this script, which is similar to what I want but is also doing a lot of other things that I don't need, like storing all of the virtual environments in a ~/.virtualenv directory (or whatever is in $WORKON_HOME). But it seems to me that he is creating the path to activate, and calling source "$activate" in basically the same way I am.
Here is the script in its entirety:
#!/bin/sh

PYTHON_PATH=~/bin/python-2.6.1/bin/python

if [ $# = 1 ]
then
    ENV_NAME="$1"
    virtualenv -p $PYTHON_PATH --no-site-packages $ENV_NAME
    activate="`pwd`/$ENV_NAME/bin/activate"

    if [ ! -f "$activate" ]
    then
        echo "ERROR: activate not found at $activate"
        return 1
    fi

    source "$activate"
else
    echo 'Usage: djangoenv ENV_NAME'
fi
DISCLAIMER: My bash script-fu is pretty weak. I'm fairly comfortable at the CLI, but there may well be some extremely stupid reason this isn't working.
If you're writing a bash script, call it by name:
#!/bin/bash
/bin/sh is not guaranteed to be bash. This caused a ton of broken scripts in Ubuntu some years ago (IIRC).
The source builtin works just fine in bash; but you might as well just use dot like Norman suggested.
In the POSIX standard, which /bin/sh is supposed to respect, the command is . (a single dot), not source. The source command is a csh-ism that has been pulled into bash.
Try
. $env_name/bin/activate
Or if you must have non-POSIX bash-isms in your code, use #!/bin/bash.
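Putting those two answers together, a corrected version of the asker's script might look like this (a sketch; the changes are the bash shebang, dot instead of source, and exit instead of return, since return is only valid in functions or sourced scripts):

#!/bin/bash
# djangoenv -- create a virtualenv at $1 and activate it
PYTHON_PATH=~/bin/python-2.6.1/bin/python

if [ $# = 1 ]
then
    ENV_NAME="$1"
    virtualenv -p $PYTHON_PATH --no-site-packages "$ENV_NAME"
    activate="$(pwd)/$ENV_NAME/bin/activate"

    if [ ! -f "$activate" ]
    then
        echo "ERROR: activate not found at $activate"
        exit 1
    fi

    . "$activate"   # POSIX dot works under any shell; source would also work under bash
else
    echo 'Usage: djangoenv ENV_NAME'
fi

Keep in mind that the activation still only lasts for the script's own process; to keep the environment in your interactive shell, you would have to source the whole script rather than execute it.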
On Ubuntu, if you execute the script with sh scriptname.sh you get this problem, because /bin/sh there is dash, which has no source builtin. Try executing the script with ./scriptname.sh instead, so its shebang line chooses the interpreter.
It is best to give an explicit path to the file you intend to source, e.g.
source ./.env instead of source .env
or source /var/www/html/site1/.env
I've seen many Dockerfiles include all build steps in a single RUN statement, like:
RUN echo "Hello" && \
    cd /tmp && \
    mv a.txt b.txt && \
    ...
and so on...
My question is: what are the benefits/drawbacks of replacing these instructions with a single bash script that gives me syntax highlighting, loop capabilities, etc.?
Something like:
COPY ./script.sh /tmp
RUN bash /tmp/script.sh
and then
#!/bin/bash
echo "hello" ;
cd /tmp ;
mv a.txt b.txt ;
...
Thanks!
The primary difference is that when you COPY the bash script into the image it will be available for inspection in the running container, whereas the RUN command is a little more opaque. Putting your commands in a file like that is arguably more manageable for other reasons: changes in your VCS history will be a little more clear, and for longer or more complex scripts you will probably find it easier to format things cleanly with the script in a separate file rather than embedded in your Dockerfile in a RUN command.
Otherwise the result is the same (in both cases, you are executing the same set of commands), although the COPY and RUN will result in an extra image layer (vs. just the RUN by itself).
I guess running it as a shell script gives you more control.
For instance, you can use if-else statements to check whether a command has failed and provide a code path to handle it, whereas RUN is more straightforward: when the return code is not 0, it fails the build immediately. A sketch of the difference follows below.
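One thing to keep in mind when moving commands into a script (a sketch; set -e is the point here, and the commands are placeholders from the question):

#!/bin/bash
set -e        # abort on the first failing command, like a chain of && in RUN

cd /tmp
mv a.txt b.txt

# ...or handle a failure explicitly instead of aborting:
if ! mv c.txt d.txt; then
    echo "mv failed, applying fallback" >&2
    cp c.txt d.txt
fi

Without set -e, a plain script keeps going after a failed command, and the RUN step that invokes it can still report success.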
Obviously the case you have there is a relatively simple one, and it would not make a huge difference. The only impact I can see is readability: someone would have to open the shell script to know what is happening, compared to having everything in a single file.
I guess it all comes down to using the right tool for the job. If it is a simple command and you don't need complex logic handling, then use RUN.
I have a Ruby script that has to run a bash command, namely export http_proxy="" and export https_proxy="".
The problem is that I run the following without errors, but it appears not to make any change:
system "export http_proxy=''"
system "export https_proxy=''"
I created a file.sh with these lines, and if I run it in a terminal it only works when run as source file.sh or . file.sh.
Could you please tell me how I can run these commands from the Ruby script? Either directly, or by executing an .sh file from the script.
When you run a separate process using system, any changes made to the environment of that process affect that process only, and disappear when the process exits.
That's exactly why running file.sh won't change your current shell: it runs as a sub-shell, and the changes disappear when the process exits.
As you've already discovered, using source or . does affect the current shell, because it runs the script not as a sub-shell but within the context of the current shell.
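A quick terminal demonstration of the difference (a hypothetical session; file.sh is the file from the question):

$ cat file.sh
export http_proxy=''
$ sh file.sh            # runs in a child process; the parent shell is untouched
$ echo "$http_proxy"    # still whatever it was before
$ . file.sh             # runs in the current shell; the change sticks
$ echo "$http_proxy"    # now empty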
If you want to change the environment variables of the current Ruby script, you should look into ENV, something like:
ENV["http_proxy"] = ""
ENV["https_proxy"] = ""
If you also need to run an external command from Ruby, you can use %x:
%x( echo 'hi' )
and to capture its standard output in a variable:
var = %x( echo 'hi' )
I am not able to set env variables through an executable csh/tcsh script.
I set an env variable inside an executable csh/tcsh script "myscript". The contents of the script:
setenv MYVAR /abc/xyz
The variable is not set in the shell afterwards, and echoing it reports "Undefined variable".
I have made the csh/tcsh script executable with the following shell command:
chmod +x /home/xx/bin/myscript
and the path is updated:
set path = (/home/xx/bin $path)
which myscript
/home/xx/bin/myscript
When I run the script on the command line and echo the env variable:
myscript
echo $MYVAR
MYVAR "Undefined variable"
but if I source it on the command line:
source /home/xx/bin/myscript
echo $MYVAR
/abc/xyz
You need to source your code rather than execute it, so that it is evaluated by the current shell, which is where you want to modify the environment.
You can of course embed
source /home/xx/bin/myscript
within your .cshrc.
The script does not need to be executable or have a #! shebang (though they don't hurt).
This is not how environment variables work.
An environment variable is set for a process (in this case, tcsh) and is passed on to all of that process's children. So when you do:
$ setenv LS_COLORS foo
$ ls
You first set LS_COLORS for the tcsh process; tcsh then starts the child process ls, which inherits tcsh's environment (including LS_COLORS) and can then use it.
However, what you're doing is setting the environment in a child process and then wanting to propagate it back to the parent process (somehow). This is not possible. This has nothing to do with tcsh; it works like this for any process on the system.
It works with source because source reads a file, and executes it line-by-line in the current process. So it doesn't start a new tcsh process.
I will leave it as an exercise to you to imagine the implications if this were possible :-) Do you really want to deal with unwise shell scripts that set random environment variables in your shell? And what about environment variables set by a php process; should those propagate back to the parent httpd process? :-)
You didn't really describe what goal you're trying to achieve, but in general, you want to do something like:
#!/bin/csh -f
# ... Do stuff ...
echo "Please copy this line to your environment:"
echo "setenv MYVAR $myvar"