How to set environment variable in pre-start in Upstart script?

We have a custom C++ daemon application that forks once. So we've been doing this in our Upstart script on Ubuntu 12.04 and it works perfectly:
expect fork
exec /path/to/the/app
However now we need to pass in an argument to our app which contains the number of CPUs on the machine on which it runs:
cat /proc/cpuinfo | grep processor | wc -l
Our first attempt was this:
expect fork
exec /path/to/the/app -t `cat /proc/cpuinfo | grep processor | wc -l`
While that starts our app with the correct -t value, Upstart tracks the wrong pid, presumably because the cat, grep and wc commands in the exec stanza each spawn a process before our app does.
I also tried the following, and even that doesn't work; Upstart still tracks the wrong pid, I guess because the script stanza itself spawns a process to run our app:
expect fork
script
NUM_CORES=32
/path/to/the/app -t $NUM_CORES
end script
I've also tried doing this in an env stanza but apparently those don't run commands:
env num_cores=`cat /proc/cpuinfo | grep processor | wc -l`
I also tried setting it in pre-start, but env vars set there don't carry over to the exec stanza:
pre-start script
NUM_CORES=32
end script
Any idea how to get this NUM_CORES set properly, and still get Upstart to track the correct pid for our app that forks once?

It's awkward. The recommended method is to write an env file in the pre-start stanza and then source it in the script stanza. It's ridiculous, I know.
expect fork
pre-start script
exec >"/tmp/$UPSTART_JOB"
echo "NUM_CORES=$(cat /proc/cpuinfo | grep processor | wc -l)"
end script
script
. "/tmp/$UPSTART_JOB"
/path/to/app -t "$NUM_CORES"
end script
post-start script
rm -f "/tmp/$UPSTART_JOB"
end script
I use the exec line in the pre-start because I usually have multiple env variables and I don't want to repeat the redirection code.
This only works because the '.' (dot) command is a shell built-in in dash, so no extra process is spawned.

According to zram-config's upstart config:
script
NUM_CORES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
/path/to/the/app -t $NUM_CORES
end script
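If coreutils is available, nproc is a simpler way to get the same number; this one-liner is my own suggestion, not part of the zram-config script:
NUM_CORES=$(nproc)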

I would add
export NUM_CORES
after assigning it a value in "script". Also keep in mind that /bin/sh may be symlinked to a non-Bash shell, so avoid Bash-only constructs.
Re: using the "env" stanza, it passes values literally and does not process them using shell conventions.
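To illustrate the literal behavior (a hypothetical stanza, shown precisely because it does not work): with
env NUM_CORES=`nproc`
the job would see the literal string `nproc`, backticks included, in $NUM_CORES rather than a number.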

Related

How to get the timestamps of a command execution in a Dockerfile

This is the normal way of doing it in a shell script:
starttime=$(date '+%d/%m/%Y %H:%M:%S')
#echo $starttime
# sleep for 5 seconds
sleep 5
# end time
endtime=$(date '+%d/%m/%Y %H:%M:%S')
#echo $endtime
STARTTIME=$(date -d "${starttime}" +%s)
ENDTIME=$(date -d "${endtime}" +%s)
RUNTIME=$((ENDTIME-STARTTIME))
echo "Runtime: ${RUNTIME} seconds"
I want to do the same thing in a Dockerfile: get the timestamps before and after execution of a command.
Could someone please help with this?
It is exactly the same. A RUN command runs an ordinary Bourne shell command line (wrapping it in sh -c). If you have this much scripting involved you might consider writing it into a shell script, COPYing the script into your image, then RUNning it.
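A minimal sketch of that approach (the script name and destination are placeholders of my choosing):
COPY time-build.sh /usr/local/bin/time-build.sh
RUN sh /usr/local/bin/time-build.sh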
If this is just for temporary diagnostics, and you don't need to calculate the time in seconds, you can just run date as is without the rest of the scripting.
RUN date; make; date # except this won't actually stop on failure
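If you do want the step to stop on failure, chain with && instead (my variant, not from the original):
RUN date && make && date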
If you were especially motivated you could take the script from the question, make it take a command as an argument, and write a wrapper script around it:
#!/bin/sh
starttime=$(date '+%d/%m/%Y %H:%M:%S')
sh -c "$@"
rc=$?
endtime=$(date '+%d/%m/%Y %H:%M:%S')
...
exit "$rc"
Then in your Dockerfile you can use the SHELL directive so that this wrapper runs your RUN commands. Note that RUN commands written as JSON arrays don't go through the SHELL and so will bypass your script (that form is rarely seen, though).
# must be executable and have a correct #!/bin/sh line
COPY timeit.sh /usr/local/bin
SHELL ["/usr/local/bin/timeit.sh"]
RUN make
RUN ["/bin/echo", "this will not be timed"]

Path is different depending on how you connect to container

I have an Alpine Docker container, and depending on how I connect using ssh, the PATH is different. If I connect using a login shell:
ssh root@localhost sh -lc env | grep PATH
this prints:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
However, if I don't use a login shell:
ssh root@localhost sh -c env | grep PATH
this prints:
PATH=/bin:/usr/bin:/sbin:/usr/sbin
Why is this happening? What do I need to do so that the second command produces the same output as the first command?
With sh -l you start a login shell:
When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior.
...
A non-interactive shell invoked with the name sh does not attempt to read any other startup files.
From https://linux.die.net/man/1/sh
That is, you can probably edit the profile files to make the login shell behave like the non-login one, but it would be difficult the other way around.
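For example, on Alpine you could append something like this to /etc/profile (the PATH value here is just an illustration):
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin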
I'll answer my own question. This Stack Overflow post has the main info needed: Where to set system default environment variables in Alpine linux?
Given that, there are two alternatives:
Declare PATH using the ENV option of the Dockerfile
Or add PermitUserEnvironment yes to sshd_config file and define PATH in ~/.ssh/environment
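For the first alternative, the Dockerfile line would look something like this (the value shown is just an example):
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin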

Best practice to include a bash script in a Docker image

I'm creating a Dockerfile that needs to execute a command; let's call it foo.
In order to execute foo, I need to create a .cfg file in the current directory with token information for calling this foo service.
So basically I should do something like
ENV FOO_TOKEN token
ENV FOO_HOST host
ENV FOO_SHARED_DIRECTORY directory
ENV LIBS_TARGET target
and then put the first three variables in a .cfg file and launch a command using the last variable as the target.
Given that if you run more than one CMD in a Dockerfile only the last one is considered, how should I do that?
My ideal execution is docker run -e "FOO_TOKEN=aaaaaaa" -e "FOO_HOST=myhost" -e "FOO_SHARED_DIRECTORY=Shared" -e "LIBS_TARGET=target/scala-2.11/*.jar" -it --rm --name my-ci-deploy foo/foo:latest
If you wanted to keep everything in the Dockerfile (something I think is rather desirable), you can do something nasty like:
ENV SCRIPT=IyEvdXNyL2Jpbi9lbnYgYmFzaApwZG9fc3Fsc3J2PTAKc3Vkbz0KdmVuZG9yPSQoIGxzYl9yZWxlYXNlIC1p
RUN echo -n "$SCRIPT" | base64 -d | /usr/bin/env bash
Where the contents of SCRIPT= are derived by piping your shell script thusly:
cat my_script.sh | base64 --wrap=0
You may have to adjust the /usr/bin/env bash if you have a really minimal (Alpine) setup.
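For comparison, the conventional alternative is to ship the script as a separate file (names are placeholders):
COPY my_script.sh /usr/local/bin/my_script.sh
RUN bash /usr/local/bin/my_script.sh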

Openwrt Script - Autostartup Shadowsocks

I would like to create a script for OpenWrt that changes some variables inside the Shadowsocks service every day. This is the script, but I don't know where to put it or how to arrange for it to be called every day or at every reboot of the router.
#!/bin/sh /etc/rc.common
restart=0
for i in `uci show shadowsocks | grep alias | sed -r 's/.*\[(.*)\].*/\1/'`
do
server=$(uci get shadowsocks.@servers[${i}].alias)
result=$(nslookup $server)
new_ip=$(echo "${result}" | tail -n +3 | awk -F" " '/^Address 1/{ print $3}')
if [ -n "$new_ip" ]; then
logger -t shadowsocks "nslookup $server -> $new_ip"
old_ip=$(uci get shadowsocks.@servers[${i}].server)
if [ "$old_ip" != "$new_ip" ]; then
logger -t shadowsocks "detect $server ip address change ($old_ip -> $new_ip)"
restart=1
uci set shadowsocks.@servers[${i}].server=${new_ip}
fi
else
logger -t shadowsocks "nslookup $server fail"
fi
done
if [ $restart -eq 1 ]; then
logger -t shadowsocks "restart for server ip address change"
uci commit shadowsocks
/etc/init.d/shadowsocks restart
fi
You can use the cron utility. Cron is a time-based job scheduler on Unix-like operating systems; it lets you run jobs/programs/scripts at specified times.
OpenWrt comes with a cron system by default, provided by busybox.
Cron is not enabled by default, so your jobs won't run until you activate it. To activate cron in OpenWrt:
/etc/init.d/cron start
/etc/init.d/cron enable
Ref: https://oldwiki.archive.openwrt.org/doc/howto/cron
Now, considering your question: if you want to run the script every day,
edit the cron table with the crontab -e command and add the line below.
0 0 * * * sh /path/to/your/script.sh
This entry will run your script at 00:00 (midnight) every day. You can easily modify it to schedule the job at any other time. A good reference for generating cron entries: https://crontab.guru/
To see if crontab is working properly:
tail -f /var/log/syslog | grep CRON
Now coming to your second question, "run the script at every reboot of the router":
You can put your script in /etc/rc.local. This file is executed as a shell script on every boot by /etc/rc.d/S95done in OpenWrt. So just add sh /path/to/your/script.sh to /etc/rc.local, and make sure your script is executable and does its task properly.
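The resulting /etc/rc.local would look something like this (the script path is a placeholder; keep the trailing exit 0 that OpenWrt ships in that file):
# Put your custom commands here that should be executed once
# the system init finished.
sh /path/to/your/script.sh
exit 0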

How to pass run time arguments in dockerfile?

I'm trying to dockerize my Django project. In order to run the project with gunicorn from the shell I use:
gunicorn --bind :8000 --workers $(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 )) MyQ.wsgi:application
which works great;
the idea is to utilize as many cores as I can, as described in the gunicorn documentation.
The $(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 )) part simply returns 2*n+1, where n is the number of cores in the system.
However, I'm having some trouble rewriting this command in a Dockerfile. Here is my current attempt:
CMD ["gunicorn", "--bind :8000", "--workers", "$(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 ))", "MyQ.wsgi:application"]
This crashes with the following error when I run docker run:
gunicorn: error: argument -w/--workers: invalid int value: "$(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 ))"
So basically the $((...)) expression is not being evaluated, and I don't know how to fix that.
I think it's better to define an environment variable with the ENV instruction in your Dockerfile and use that in your CMD instruction. This way you can set the variable when creating a container from your Docker image.
Define the environment variable like this in your Dockerfile:
ENV WORKERS 1
Then change your CMD instruction to this:
CMD ["sh", "-c", "gunicorn --bind :8000 --workers $WORKERS MyQ.wsgi:application"]
Finally, when you are creating the container, pass your WORKERS environment variable with the -e argument.
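For example (the image name is a placeholder):
docker run -e WORKERS=5 -p 8000:8000 my-django-image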
There are two forms of the CMD (and ENTRYPOINT and RUN) commands. The form you wrote is preferred:
CMD ["command_name", "--option", "value"]
But, it doesn't run a shell to preprocess the command line. So if you run, for instance,
CMD ["ls", ">", "/host/directory/foo.ls"]
it will pass > as an argument to the program and not do a shell redirect.
So for your construct to work, you need to use the other form, which implicitly wraps the command in a shell invocation (/bin/sh -c '...'):
CMD gunicorn --bind :8000 ...
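Spelled out for this case, reusing the exact pipeline from the question:
CMD gunicorn --bind :8000 --workers $(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 )) MyQ.wsgi:application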
In practice, trying to force runtime constraints like worker count via the Dockerfile isn't what you want; you should allow things like this to be specified in the docker run command or similar. @HassanMusavi's answer is a better one.
A Dockerfile does not support runtime arguments (values you want computed when the container runs). But in your case you can write a script, say test.sh, that contains the gunicorn invocation with its parameters.
Then in CMD define the script path, like CMD ["test.sh"]. When you create a container from this image it will run your script inside the container, evaluating the $ expression and getting the core count at run time (cat /proc/cpuinfo runs in your container, but it lists the cores of the Docker host machine). This way you don't have to depend on calculating cores and passing the value with -e.
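A minimal sketch of such a test.sh (the exec and the file layout are my additions; the worker arithmetic mirrors the command from the question):
#!/bin/sh
# compute 2*cores+1 at container start, then hand off to gunicorn
WORKERS=$(( 2 * $(grep -c 'core id' /proc/cpuinfo) + 1 ))
exec gunicorn --bind :8000 --workers "$WORKERS" MyQ.wsgi:application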
