How can I create and use a variable inside docker?

I'm trying to get the version from a pom file so that I can use it later in another command. So inside the Dockerfile I have:
RUN VERSION="$(mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.version|grep -Ev '(^\[|Download\w+:)')"
RUN echo $VERSION
But the echo prints nothing. What I actually want to run is,
RUN mv /abc/abc-${VERSION}.jar /abc/abc.jar

Use ENV like that; it's the way the Docker devs want you to do it:
ENV VERSION "$(mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.version|grep -Ev '(^\[|Download\w+:)')"
RUN echo $VERSION
There is an example in the docs that's very similar to your case: https://docs.docker.com/engine/reference/builder/#environment-replacement

If you group your 2 lines into one, it will work:
RUN VERSION="$(mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.version|grep -Ev '(^\[|Download\w+:)')" ; echo $VERSION
The recommended way is of course the solution proposed by Ilya Peterov, since I suspect you want to use the value later in your Dockerfile.
I will take another example: if you have a script at abc/myscript.sh,
RUN cd abc ; ./myscript.sh
will work, while
RUN cd abc
RUN ./myscript.sh
will fail as it will not be in the abc directory
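Applying the same idea to the original question, the version extraction and the rename can be combined into a single RUN so the variable is still in scope (a sketch, assuming the mvn invocation prints only the version string):
RUN VERSION="$(mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.version|grep -Ev '(^\[|Download\w+:)')" && mv /abc/abc-${VERSION}.jar /abc/abc.jar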

Related

How do you print to console from a docker file during build?

Suppose you have some Dockerfile. What needs to be added to that file such that a string (e.g. "Hello World") is printed to the console during build?
docker build .
RESEARCH
This question is a top hit in Google for this topic. I have researched by googling and landing here.
WHAT I HAVE TRIED
From the accepted answer:
RUN echo "hello there"
This actually doesn't work.
It's fairly simple, actually.
If you just want to print stuff during the build process, you can add the following line to your Dockerfile:
RUN echo "hello there"
And then add these options to your docker build command:
--progress=plain --no-cache
EDIT:
As noted by @SoftwareEngineer, when using this for logging or tooling purposes, you should append the echo command to the command whose success you want to check. For example, when downloading packages and wanting a print statement when finished:
Example adapted from the official nginx image Dockerfile:
RUN apt-get install -y whatever && echo "installed package"
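Putting it together, a minimal sketch (the alpine base image is an arbitrary assumption):
# Dockerfile
FROM alpine
RUN echo "hello there"
Build it with:
docker build --progress=plain --no-cache .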

Replace string placeholder with value in sh file

I have to say that in a Windows/PowerShell environment I would have done this immediately, but since I have to execute this shell script inside a Linux docker image, I need your help.
I have a node.js env file where I store my environment variables so the nodejs app can use them later. I've set some placeholders and I need to replace them with the values from the arguments I pass to the docker run command.
The content of the .env file is
NodePort={NodePort}
DBServer={DBServer}
DBDatabaseName={DBDatabaseName}
DBUser={DBUser}
DBPassword={DBPassword}
DBEncrypt= {DBEncrypt}
RFIDNodeUrlRoot={RFIDNodeUrlRoot}
RFIDStartMethod={RFIDStartMethod}
RFIDStopMethod={RFIDStopMethod}
RFIDGetTagsMethod={RFIDGetTagsMethod}
I don't know which is the best approach to open the file, replace the values from env variables, and then save it.
Anyone can please help me?
Thanks
You can use envsubst, which is part of the gettext-base package.
see:
https://stackoverflow.com/a/14157575/2087704
https://unix.stackexchange.com/a/294400/193945
.env.temp
NodePort=${NodePort} # notice the `$` before `{}`
DBServer=${DBServer}
..
Assuming you are setting environment variables with
docker run -e "NodePort=8080" -e "DBServer=foo"
Inside that container you will have to use some entrypoint.sh script to run:
envsubst \$NodePort,\$DBServer,.. < .env.temp > .env
then start your app passing .env to your nodejs app.
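A minimal entrypoint.sh sketch (the file names and the node start command are assumptions):
#!/bin/sh
# Render the template into the real .env, then start the app
envsubst '$NodePort $DBServer $DBDatabaseName $DBUser $DBPassword' < .env.temp > .env
exec node index.js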
As an alternative you can also use sed to edit .env, which might be harder to understand.
subst_env() {
  eval val="\$$1"               # expand the environment variable named in $1 into val
  sed -i "s%\$$1%${val}%g" $2   # use % as the sed delimiter to avoid escaping slashes in URLs
}
subst_env 'ENV_DOCKER_DOMAIN' .env

Alpine not loading /etc/profile [duplicate]

I'm trying to write (what I thought would be) a simple bash script that will:
run virtualenv to create a new environment at $1
activate the virtual environment
do some more stuff (install django, add django-admin.py to the virtualenv's path, etc.)
Step 1 works quite well, but I can't seem to activate the virtualenv. For those not familiar with virtualenv, it creates an activate file that activates the virtual environment. From the CLI, you run it using source
source $env_name/bin/activate
Where $env_name, obviously, is the name of the dir that the virtual env is installed in.
In my script, after creating the virtual environment, I store the path to the activate script like this:
activate="`pwd`/$ENV_NAME/bin/activate"
But when I call source "$activate", I get this:
/home/clawlor/bin/scripts/djangoenv: 20: source: not found
I know that $activate contains the correct path to the activate script, in fact I even test that a file is there before I call source. But source itself can't seem to find it. I've also tried running all of the steps manually in the CLI, where everything works fine.
In my research I found this script, which is similar to what I want but is also doing a lot of other things that I don't need, like storing all of the virtual environments in a ~/.virtualenv directory (or whatever is in $WORKON_HOME). But it seems to me that he is creating the path to activate, and calling source "$activate" in basically the same way I am.
Here is the script in its entirety:
#!/bin/sh
PYTHON_PATH=~/bin/python-2.6.1/bin/python
if [ $# = 1 ]
then
    ENV_NAME="$1"
    virtualenv -p $PYTHON_PATH --no-site-packages $ENV_NAME
    activate="`pwd`/$ENV_NAME/bin/activate"
    if [ ! -f "$activate" ]
    then
        echo "ERROR: activate not found at $activate"
        return 1
    fi
    source "$activate"
else
    echo 'Usage: djangoenv ENV_NAME'
fi
DISCLAIMER: My bash script-fu is pretty weak. I'm fairly comfortable at the CLI, but there may well be some extremely stupid reason this isn't working.
If you're writing a bash script, call it by name:
#!/bin/bash
/bin/sh is not guaranteed to be bash. This caused a ton of broken scripts in Ubuntu some years ago (IIRC).
The source builtin works just fine in bash; but you might as well just use dot like Norman suggested.
In the POSIX standard, which /bin/sh is supposed to respect, the command is . (a single dot), not source. The source command is a csh-ism that has been pulled into bash.
Try
. $env_name/bin/activate
Or if you must have non-POSIX bash-isms in your code, use #!/bin/bash.
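Applied to the script above, a sketch of the relevant lines with #!/bin/sh kept and the bash-ism removed:
#!/bin/sh
activate="`pwd`/$ENV_NAME/bin/activate"
. "$activate"   # the POSIX spelling of "source"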
In Ubuntu if you execute the script with sh scriptname.sh you get this problem.
Try executing the script with ./scriptname.sh instead.
Best to add the full path of the file you intend to source, e.g.
source ./.env instead of source .env
or source /var/www/html/site1/.env

What's the difference between RUN and bash script in a dockerfile?

I've seen many Dockerfiles include all build steps in a single RUN statement, like:
RUN echo "Hello" && \
    cd /tmp && \
    mv a.txt b.txt && \
...
and so on...
My question is: what are the benefits/drawbacks of replacing these instructions with a single bash script that gives me syntax highlighting, loop capabilities, etc.?
Something like:
COPY ./script.sh /tmp
RUN bash /tmp/script.sh
and then
#!/bin/bash
echo "hello" ;
cd /tmp ;
mv a.txt b.txt ;
...
Thanks!
The primary difference is that when you COPY the bash script into the image it will be available for inspection in the running container, whereas the RUN command is a little more opaque. Putting your commands in a file like that is arguably more manageable for other reasons: changes in your VCS history will be a little more clear, and for longer or more complex scripts you will probably find it easier to format things cleanly with the script in a separate file rather than embedded in your Dockerfile in a RUN command.
Otherwise the result is the same (in both cases, you are executing the same set of commands), although the COPY and RUN will result in an extra image layer (vs. just the RUN by itself).
I guess running it as a shell script gives you more control.
For instance, you can use if-else statements to check whether a command has failed and provide a code path to handle it, whereas RUN is more straightforward: when the return code is not 0 it fails the build immediately.
Obviously the case you have there is a relatively simple one, and it would not have made a huge difference. The only impact I can see here is on readability: someone would have to open the shell script to know what is happening, compared to having everything in a single file.
I guess it all comes down to using the right tool for the right job. If it is a simple command and you don't need complex logic handling then do RUN.
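As an illustration of the if-else point above, a sketch of the kind of fallback logic that is easier to express in a separate script than in a RUN chain (the file names are placeholders):
#!/bin/bash
if ! mv a.txt b.txt; then
    echo "rename failed, copying instead"
    cp a.txt b.txt
fi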

How to add to slave's PATH using Slave SetupPlugin?

I have 2 RHEL machines setup in a Master/Slave configuration using Jenkins ver. 1.609.2
The slave is being launched via SSH Slaves Plugin 1.10.
I'm trying to use the Slave Setup Plugin v 1.9 to install the tools that will be necessary for my slave machine to run builds. In particular I am installing sqlplus.
Here is the script that I am running in order to try installing sqlplus:
if command -v sqlplus >/dev/null; then
    echo "sqlplus already setup. Nothing to do."
else
    # Create directory for sqlplus and unzip it there.
    mkdir /jenkins/tools/sqlplus
    tar -xvf sqlplussetup/instantclient-basiclite-linux.x64-12.1.0.2.0.tar.gz -C /jenkins/tools/sqlplus || { echo 'unzip failed' ; exit 1; }
    tar -xvf sqlplussetup/instantclient-sqlplus-linux.x64-12.1.0.2.0.tar.gz -C /jenkins/tools/sqlplus || { echo 'unzip failed' ; exit 1; }
    cd /jenkins/tools/sqlplus/instantclient_12_1
    # Create links for the Oracle libs
    ln -s libclntsh.so.12.1 libclntsh.so || { echo 'Could not create link' ; exit 1; }
    ln -s libocci.so.12.1 libocci.so || { echo 'Could not create link' ; exit 1; }
    # Add two lines to .bashrc only if they don't already exist: export LD_LIBRARY_PATH and add sqlplus to PATH.
    grep -q -F 'export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH' /home/jenkins/.bashrc || echo 'export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH' >> /home/jenkins/.bashrc
    grep -q -F 'export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1' /home/jenkins/.bashrc || echo 'export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1' >> /home/jenkins/.bashrc
    # Export variables so they can be used right away
    export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH
    export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1
    echo "sqlplus has been setup."
fi
This script runs successfully and everything appears to work until I try to run a build and execute the sqlplus command. The build fails because sqlplus is not a recognized command.
My main question is this:
What is the proper way to automatically add an environment variable when launching a slave?
Please note I am looking for an automated way of doing this. I don't want to go into the configuration screen for my slave, tick a checkbox and specify an environment variable. That is counter-productive to what I am trying to achieve which is a slave that is immediately usable for builds once connected.
I pretty much understand why my script doesn't work. When Jenkins is launching the slave it first makes an SSH connection and then it runs my setup script using the command
/bin/sh -xe /jenkins/tmp/hudson8035138410767957141.sh
where the contents of hudson8035138410767957141.sh are my script from above. So obviously, the export isn't going to work. I was hoping that adding the exports to the .bashrc file would get around this, but it does not work. I think this is because the script is executed after the SSH connection is established and therefore .bashrc has already been read.
Problem is I can't figure out any way to work around this limitation.
Bash does not read any of its startup files (.bashrc, .profile, etc.) for non-interactive shells that don't have the --login option set explicitly; that's why the exports don't work.
So, solution "A" is to keep the bashrc magic that you suggest above, and to add the --login option by changing the first line in your build step to
#!/bin/bash --login
<your script here>
The explicit shebang on the first line will also prevent the excessive debug output that you get from the default -x option (see your console snippet above).
Alternative solution "B" uses the fact that bash will source any script whose name is given in $BASH_ENV (if that variable is defined and the file exists). Define that variable globally in your slave properties (e.g., set to /jenkins/tools/setup.sh) and add exports as needed during slave setup. Every bash shell build step will read the settings then.
With solution "B" you don't need to use the --login option and you don't have to mess up the .bashrc. However, the "BASH_ENV" feature is only active when bash runs in "bash mode". As Jenkins starts the shell via sh, bash tries to emulate historic sh, which does not have that feature. So, also for B, you need a shebang:
#!/bin/bash
<your script here>
But you'd need that anyway to get rid of the tracing output, which is usually too much for production setups.
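A sketch of solution "B", assuming /jenkins/tools/setup.sh as the file name from the example above:
# /jenkins/tools/setup.sh - sourced by every bash build step via BASH_ENV
export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH
export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1
Then define BASH_ENV=/jenkins/tools/setup.sh globally in the slave properties and start each bash build step with:
#!/bin/bash
command -v sqlplus   # sqlplus should now be on the PATH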
