Echo dynamic sed to file inside Dockerfile

I am working on a Dockerfile, inside of which I want to dynamically create a sed expression based on the input argument variable, and write this expression to a file.
Here's part of the Dockerfile:
FROM ubuntu
ARG VERSION
RUN echo $VERSION > /usr/local/testfile
RUN echo '#!/bin/sh \n\
sed -i "s/\"version\"/\${VERSION}/g" file' > /usr/local/foo.sh
The image builds fine.
When I start a container from that image, and inspect the files:
# cat /usr/local/testfile
0.0.1
# cat /usr/local/foo.sh
#!/bin/sh
sed -i "s/\"version\"/\${VERSION}/g" file
I notice that the $VERSION was not replaced correctly in the sed command. What am I missing here? I've tried a few different things (e.g. "$VERSION") but none of them worked.

I ended up breaking down the command. I created a variable for the sed command by using string concatenation and then I echoed that to the file separately:
FROM ubuntu
ARG VERSION
ENV command="sed -i s/\"version\"/""$VERSION""/g"
RUN echo '#!/bin/sh' > /usr/local/foo.sh
RUN echo $command >> /usr/local/foo.sh
# cat /usr/local/foo.sh
#!/bin/sh
sed -i s/"version"/0.0.1/g
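The underlying issue is that the single quotes around the echo argument keep the build shell from expanding $VERSION, so the string is written out verbatim. A minimal sketch of an alternative, splicing the expanded variable between single-quoted fragments (same target file as above):
FROM ubuntu
ARG VERSION
RUN echo '#!/bin/sh' > /usr/local/foo.sh && \
    echo 'sed -i "s/\"version\"/'"$VERSION"'/g" file' >> /usr/local/foo.sh
With VERSION=0.0.1, this writes sed -i "s/\"version\"/0.0.1/g" file into the script.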

Related

Permanently change PATH in Dockerfile with dynamic value

I am using security scan software in my Dockerfile and I need to add its bin folder to the path. Its path will contain the version part so I do not know the path until I download the software. My current progress is something like this:
1. Download the software:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
2. The desired folder is located at SAClientUtil/SAClientUtil.X.Y.Z/bin/ (X.Y.Z may vary from run to run). Get there using a find and cd combination and try to add it to the PATH:
RUN cd "$(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"; \
export PATH="$PATH:$PWD"; # doesn't work
It looks like the ENV instruction does not evaluate the parameter, so
ENV PATH $PATH:"echo $(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"
doesn't work either.
Any ideas on how to dynamically add a folder to the PATH during docker image build?
If you're pretty sure the zip file will contain only a single directory with that exact layout, you can rename it to something fixed.
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip \
&& unzip SAClientUtil.zip -d tmp \
&& mv tmp/SAClientUtil.* SAClientUtil \
&& rm -rf tmp SAClientUtil.zip
ENV PATH=/SAClientUtil/bin:${PATH}
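As a quick sanity check during the build (assuming appscan.sh ends up in /SAClientUtil/bin after the rename above), you can let the build fail early if the path is wrong:
RUN command -v appscan.sh    # exits non-zero if appscan.sh is not found on the PATH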
A simple solution would be to include a small wrapper script in your image, and then use that to run commands from the SAClientUtil directory. For example, if I have the following in saclientwrapper.sh:
#!/bin/sh
cmd=$1
shift
saclientpath=$(ls -d /SAClientUtil/SAClientUtil.*)
echo "got path: $saclientpath"
cd "$saclientpath"
exec "$saclientpath/bin/$cmd" "$#"
Then I can do this:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
COPY saclientwrapper.sh /saclientwrapper.sh
RUN sh /saclientwrapper.sh appscan.sh
And this will produce, when building the image:
STEP 6: RUN sh /saclientwrapper.sh appscan.sh
got path: /SAClientUtil/SAClientUtil.8.0.1374
COMMAND SYNTAX
appscan <command> [options]
ADDITIONAL COMMAND HELP
appscan help <command>
.
.
.

issues in accessing docker environment variables in systemd service files

1) I am running a docker container with the following command (passing a few env variables with the -e option):
$ docker run --name=xyz -d -e CONTAINER_NAME=xyz -e SSH_PORT=22 -e NWMODE=HOST -e XDG_RUNTIME_DIR=/run/user/0 --net=host -v /mnt:/mnt -v /dev:/dev -v /etc/sysconfig/network-scripts:/etc/sysconfig/network-scripts -v /:/hostroot/ -v /etc/hostname:/etc/host_hostname -v /etc/localtime:/etc/localtime -v /var/run/docker.sock:/var/run/docker.sock --privileged=true cf3681e04bfb
2) After running the container as above, I check the env variable NWMODE inside the container, and it shows correctly as below:
$ docker exec -it xyz bash
$ env | grep NWMODE
NWMODE=HOST
3) Now, I created a sample service 'b', shown below, which executes a script b.sh (where I try to access NWMODE):
root@ubuntu16:/etc/systemd/system# cat b.service
[Unit]
Description=testing service b
[Service]
ExecStart=/bin/bash /etc/systemd/system/b.sh
root@ubuntu16:/etc/systemd/system# cat b.sh
#!/bin/bash
systemctl import-environment
echo "NWMODE:" $NWMODE
4) Now if I start service 'b' and look at its logs, it shows that it is not able to access the NWMODE env variable:
$ systemctl start b
$ journalctl -fu b
...
systemd[1]: Started testing service b.
bash[641]: NWMODE:    // blank for $NWMODE here
5) Now, rather than having 'systemctl import-environment' in b.sh, if I do the following, then the b.service logs show the correct value of the NWMODE env variable:
$ systemctl import-environment
$ systemctl start b
Though step 5 above works, I can't go with it, as all the services in my system will be started automatically by systemd. In that case, can anyone please let me know how I can access the environment variables (passed using the 'docker run ...' command above) in a service file (e.g. in b.sh above)? Can this be achieved with systemctl import-environment, or is there some other way?
systemd unsets all environment variables to provide a clean environment. AFAIK that is intended to be a security feature.
Workaround: Create a file /etc/systemd/system.conf.d/myenvironment.conf:
[Manager]
DefaultEnvironment=CONTAINER_NAME=xyz NWMODE=HOST XDG_RUNTIME_DIR=/run/user/0
systemd will set the environment variables declared in this file.
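To verify with the example service above (in the entrypoint flow below the file exists before systemd starts, but if you add it to a running container, as far as I know systemd must re-execute to pick up manager configuration; daemon-reload alone does not re-read system.conf.d):
systemctl daemon-reexec
systemctl restart b
journalctl -u b -n 5    # NWMODE: HOST should now appear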
You can set up an ENTRYPOINT script that automatically creates this file before running systemd. Example:
RUN echo '#! /bin/bash \n\
echo "[Manager] \n\
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env)) \n\
" >/etc/systemd/system.conf.d/myenvironment.conf \n\
exec /lib/systemd/systemd \n\
' >/usr/local/bin/setmyenv && chmod +x /usr/local/bin/setmyenv
ENTRYPOINT /usr/local/bin/setmyenv
Instead of creating the script within the Dockerfile, you can store it outside and add it with COPY:
#! /bin/bash
echo "[Manager]
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env))
" >/etc/systemd/system.conf.d/myenvironment.conf
exec /lib/systemd/systemd
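The corresponding Dockerfile lines might then look like this (a sketch; the file name setmyenv.sh next to the Dockerfile is an assumption):
COPY setmyenv.sh /usr/local/bin/setmyenv
RUN chmod +x /usr/local/bin/setmyenv
ENTRYPOINT ["/usr/local/bin/setmyenv"]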
TL;DR
Run the command using bash: first store the docker environment variables to a file (or just pipe them to awk), extract & export the variable, and finally run your main script.
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
What @mviereck says is true; still, I have found another solution to this problem.
My use case is to pass an environment variable to my systemd container in the docker run command (docker run -e MY_ENV_VARIABLE="some_val") and use that in the python script that is run through the systemd unit file.
According to this post (https://forums.docker.com/t/where-are-stored-the-environment-variables/65762), the container environment variables can be found in /proc/1/environ inside the container. Performing a cat does show that the environment variable MY_ENV_VARIABLE=some_val exists, though in mangled form.
$ cat /proc/1/environ
HOSTNAME=271fbnd986bdMY_ENV_VARIABLE=some_valcontainer=dockerLC_ALL=CDEBIAN_FRONTEND=noninteractiveHOME=/rootroot@271fb0d986bd
The main task now is to extract the MY_ENV_VARIABLE="some_val" value and pass it to the ExecStart directive in the systemd unit file.
(extraction code referenced from How to grep for value in a key-value store from plain text)
# this outputs a nice key,value pair
$ cat /proc/1/environ | tr '\0' '\n'
HOSTNAME=861f23cd1b33
MY_ENV_VARIABLE=some_val
container=docker
LC_ALL=C
DEBIAN_FRONTEND=noninteractive
HOME=/root
# we can store this in a file for use, too
$ cat /proc/1/environ | tr '\0' '\n' > /home/env_file
# we can then reuse the file to extract the value of interest against a key
$ awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file
some_val
Now, in the ExecStart directive of the systemd unit file, we can do this:
[Service]
Type=simple
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"

Dockerfile RUN shell-script not running during docker build

I am trying to build a custom image for the EMQ MQTT server, but the script update_config.sh is not executed during docker-compose up.
Dockerfile:
FROM emqttd-docker-v2.3.5
# change configuration file
ADD update_config.sh /opt/emqttd/etc/update_config.sh
ADD ./certs/MyEMQ1.key /opt/emqttd/etc/certs/MyEMQ1.key
ADD ./certs/MyEMQ1.pem /opt/emqttd/etc/certs/MyEMQ1.pem
ADD ./certs/MyRootCA.pem /opt/emqttd/etc/certs/MyRootCA.pem
WORKDIR /opt/emqttd/etc/
#update the emqtt config file
RUN /bin/ash -c /opt/emqttd/etc/update_config.sh
update_config.sh:
#!/bin/ash
cd /opt/emqttd/etc
cp ./emq.conf ./emq.conf.bak
sed -i 's|.*listener.ssl.external.keyfile.*|listener.ssl.external.keyfile = etc/certs/MyEMQ1.key|g' ./emq.conf
sed -i 's|.*listener.ssl.external.certfile.*|listener.ssl.external.certfile = etc/certs/MyEMQ1.pem|g' ./emq.conf
sed -i 's|.*listener.ssl.external.cacertfile.*|listener.ssl.external.cacertfile = etc/certs/MyRootCA.pem|g' ./emq.conf
sed -i 's|.*listener.ssl.external.verify.*|listener.ssl.external.verify = verify_peer|g' ./emq.conf
I use docker-compose to build the image.
The update_config.sh script is copied to the image but not executed.
What I tried so far:
Used COPY instead of ADD to copy the file
Tried the RUN /bin/ash -c /opt/emqttd/etc/update_config.sh in the following flavors:
RUN /bin/ash -c /opt/emqttd/etc/update_config.sh
RUN /opt/emqttd/etc/update_config.sh
RUN ./update_config.sh
Tried to add RUN chmod +x /opt/emqttd/etc/update_config.sh before the line RUN /bin/ash -c /opt/emqttd/etc/update_config.sh which results in the error chmod: /opt/emqttd/etc/update_config.sh: Operation not permitted during build
Can anyone help me? Thanks.
Just add ENTRYPOINT ["/bin/bash", "update_config.sh"] as your last line.
Also update the update_config.sh file to start your application and keep your container running in an infinite loop.
Example update_config.sh:
#!/bin/ash
cd /opt/emqttd/etc
cp ./emq.conf ./emq.conf.bak
sed -i 's|.*listener.ssl.external.keyfile.*|listener.ssl.external.keyfile = etc/certs/MyEMQ1.key|g' ./emq.conf
sed -i 's|.*listener.ssl.external.certfile.*|listener.ssl.external.certfile = etc/certs/MyEMQ1.pem|g' ./emq.conf
sed -i 's|.*listener.ssl.external.cacertfile.*|listener.ssl.external.cacertfile = etc/certs/MyRootCA.pem|g' ./emq.conf
sed -i 's|.*listener.ssl.external.verify.*|listener.ssl.external.verify = verify_peer|g' ./emq.conf
sh start_your_app.sh
touch 1.txt; tail -f 1.txt # This keeps your container running indefinitely, so that even after all the steps of this script have executed, the container continues running until you kill the tail -f 1.txt command.
Hope this will help.
Thank you!
ash is one of the smallest shells. This command interpreter has 24 built-in commands and 10 different command-line options.
ash doesn't have all the commands you need. You should use /bin/bash.
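Putting both answers together, the tail of the Dockerfile might look like this (a sketch; it assumes update_config.sh also starts the app, as in the example above, and that the image provides /bin/bash):
FROM emqttd-docker-v2.3.5
ADD update_config.sh /opt/emqttd/etc/update_config.sh
WORKDIR /opt/emqttd/etc/
# defer the config rewrite (and app start) to container startup rather than build time
ENTRYPOINT ["/bin/bash", "update_config.sh"]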

jenkins pipeline: multiline shell commands with pipe

I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in the next command, and so on. I found that wrapping the commands in a pair of three single quotes ''' can accomplish this. However, I am facing issues while using a pipe to feed the output of one command to another. For example:
stage('Test') {
    sh '''
        echo "Executing Tests"
        URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
        echo $URL
        RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
        echo $RESULT
    '''
}
The commands with pipes are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands in the Jenkins snippet generator for pipeline, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
UPDATE: After a while even that started to give problems. There were certain commands executing prior to these, one being grunt serve and the other ./ngrok http 9000. I added some delay after each of these commands and it solved the problem for now.
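If fixed delays turn out to be fragile, a small readiness poll is a common alternative (a sketch; the 30-second budget is an assumption, and it reuses the ngrok endpoint from above):
sh '''
    # wait up to ~30s for the ngrok API to respond before querying it
    for i in $(seq 1 30); do
        curl -sf "http://localhost:4040/api/tunnels/command_line" >/dev/null && break
        sleep 1
    done
'''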
The following scenario shows a real example that may need multiline shell commands: say you are using a plugin like Publish Over SSH and you need to execute a set of commands on the destination host in a single SSH session:
stage ('Prepare destination host') {
sh '''
ssh -t -t user@host 'bash -s << 'ENDSSH'
if [[ -d "/path/to/some/directory/" ]];
then
rm -f /path/to/some/directory/*.jar
else
sudo mkdir -p /path/to/some/directory/
sudo chmod -R 755 /path/to/some/directory/
sudo chown -R user:user /path/to/some/directory/
fi
ENDSSH'
'''
}
Special Notes:
- The last ENDSSH' should not have any characters before it; it should be at the starting position of a new line.
- Use ssh -t -t if you have sudo within the remote shell command.
I split the commands with &&:
node {
    FOO = 'world'
    stage('Preparation') { // for display purposes
        sh "ls -a && pwd && echo ${FOO}"
    }
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world
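One caveat with this example: inside double quotes, ${FOO} is interpolated by Groovy before the shell ever runs, while single quotes leave the $ for the shell (a minimal illustration):
sh "echo ${FOO}"  // Groovy substitutes FOO into the command string
sh 'echo $FOO'    // the shell expands $FOO from the environment, if set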

Dockerfile - Defining an ENV variable with a dynamic value

I want to update the PATH environment variable with a dynamic value. This is what I've tried so far in my Dockerfile:
...
ENV PATH '$(dirname $(find /opt -name "ruby" | grep -i bin)):$PATH'
...
But export shows that the command was not interpreted:
root#97287b22c251:/# export
declare -x PATH="\$(dirname \$(find /opt -name \"ruby\" | grep -i bin)):\$PATH"
I don't want to hardcode the value. Is it possible to achieve it?
Thanks
We can't do that, as that would be a huge security issue; it would mean you could set an environment variable like this:
ENV PATH $(rm -rf /)
However, you can pass the information through a --build-arg (ARG) when building an image:
ARG DYNAMIC_VALUE
ENV PATH=${DYNAMIC_VALUE:-unknown}
RUN echo $PATH
and build an image with:
> docker build --build-arg DYNAMIC_VALUE=$(dirname $(find /opt -name "ruby" | grep -i bin)):$PATH .
Or, if you want to copy information from an existing env var on the host:
> export DYNAMIC_VALUE=foobar
> docker build --build-arg DYNAMIC_VALUE .
Not sure if something like this is what you are looking for... I slightly modified what you already have. My main question would be: what are you attempting to accomplish with this portion?
'$(dirname $(find /opt -name "ruby" | grep -i bin)):$PATH'
Part of the problem could be the usage of single and double quotes, which changes how expansions are handled.
FROM alpine:3.4
RUN PATH_TO_ADD=$(dirname $(find /opt -name "ruby" | grep -i bin)) || echo Error locating files
ENV PATH "$PATH:$PATH_TO_ADD"
