I'm starting a Python script with supervisord on a Linux Debian platform. The user selected for executing the script should depend on the value of an environment variable. How can I make the "user=" field in a supervisord configuration file conditional?
First, I added an environment variable SPECIALUSER=myuser to the supervisor.service unit (file /lib/systemd/system/supervisor.service):
[Service]
ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown
ExecReload=/usr/bin/supervisorctl -c /etc/supervisor/supervisord.conf $OPTIONS reload
KillMode=process
Restart=on-failure
Environment=SPECIALUSER=myuser
Then I try to use the variable inside my supervisord.conf file:
[program:myprogram]
command=python myscript.py
user="if [ %(ENV_SPECIALUSER)s = myuser]; then root; else standarduser; fi"
But I get the following error when I try to reread the supervisord.conf:
ERROR: CANT_REREAD: Invalid user name "if [ myuser = myuser ]; then root; else standarduser; fi" in section 'program:myprogram' (file: '/etc/supervisor/conf.d/supervisord.conf')
The environment variable is interpreted correctly, but the bash snippet is not.
I thought about putting the name of the user directly into the variable (Environment=SPECIALUSER=root), but the environment variable is not always available.
If the environment variable is set to SPECIALUSER=myuser, I expect supervisord to interpret my program as
[program:myprogram]
command=python myscript.py
user=root
In all other cases as
[program:myprogram]
command=python myscript.py
user=standarduser
According to the documentation the user parameter value is never "interpreted" or sent to a shell. This means that it tries to use the entire value as the username.
http://supervisord.org/configuration.html#program-x-section-settings
No parameters are interpreted or sent to a shell, so you generally can't embed conditionals in parameters in your supervisord.conf.
If your goal is just to use different users on different platforms, or one for development and another on a deployment server, I suggest creating a dedicated user for the service.
If your goal is to only sometimes run as superuser I suggest always using user=root in your supervisord.conf and wrapping your program in a small shell script that interprets this environment variable and drops privileges accordingly.
This other SO question might help you:
https://unix.stackexchange.com/questions/132663/how-do-i-drop-root-privileges-in-shell-scripts
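For example, here is a minimal sketch of that wrapper approach (the wrapper path /path/to/run_myscript.sh, the standarduser account, and the SPECIALUSER=myuser check are assumptions taken from the question, not a definitive setup): supervisord always starts the wrapper as root, and the wrapper drops privileges unless the variable selects the special user.
[program:myprogram]
command=/path/to/run_myscript.sh
user=root
#!/bin/sh
# run_myscript.sh -- hypothetical wrapper run by supervisord as root.
# Keep root only when SPECIALUSER=myuser; otherwise re-run the script
# as the unprivileged standarduser account.
if [ "$SPECIALUSER" = "myuser" ]; then
    exec python myscript.py
else
    exec su -s /bin/sh -c 'python myscript.py' standarduser
fi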
Related
When I use any command with sudo, the environment variables are not there. For example, after setting HTTP_PROXY, the command wget works fine without sudo. However, if I type sudo wget, it says it can't bypass the proxy setting.
First, you need to export HTTP_PROXY. Second, you need to read man sudo and look at the -E flag. This works:
$ export HTTP_PROXY=foof
$ sudo -E bash -c 'echo $HTTP_PROXY'
Here is the quote from the man page:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their
existing environment variables. The security policy may return an error
if the user does not have permission to preserve the environment.
The trick is to add the environment variables to the sudoers file via the sudo visudo command by adding these lines:
Defaults env_keep += "ftp_proxy http_proxy https_proxy no_proxy"
Taken from the Arch Linux wiki.
For Ubuntu 14, you need to specify the variables on separate lines, as it returns errors for multi-variable lines:
Defaults env_keep += "http_proxy"
Defaults env_keep += "https_proxy"
Defaults env_keep += "HTTP_PROXY"
Defaults env_keep += "HTTPS_PROXY"
For individual variables you want to make available on a one-off basis, you can make them part of the command:
sudo http_proxy=$http_proxy wget "http://stackoverflow.com"
You can also combine the two env_keep statements in Ahmed Aswani's answer into a single statement like this:
Defaults env_keep += "http_proxy https_proxy"
You should also consider specifying env_keep for only a single command like this:
Defaults!/bin/[your_command] env_keep += "http_proxy https_proxy"
A simple wrapper function (or in-line for loop)
I came up with a unique solution because:
sudo -E "$#" was leaking variables that was causing problems for my command
sudo VAR1="$VAR1" ... VAR42="$VAR42" "$#" was long and ugly in my case
demo.sh
#!/bin/bash
function sudo_exports(){
    eval sudo $(for x in $_EXPORTS; do printf '%q=%q ' "$x" "${!x}"; done;) "$@"
}
# create a test script to call as sudo
echo 'echo Forty-Two is $VAR42' > sudo_test.sh
chmod +x sudo_test.sh
export VAR42="The Answer to the Ultimate Question of Life, The Universe, and Everything."
export _EXPORTS="_EXPORTS VAR1 VAR2 VAR3 VAR4 VAR5 VAR6 VAR7 VAR8 VAR9 VAR10 VAR11 VAR12 VAR13 VAR14 VAR15 VAR16 VAR17 VAR18 VAR19 VAR20 VAR21 VAR22 VAR23 VAR24 VAR25 VAR26 VAR27 VAR28 VAR29 VAR30 VAR31 VAR32 VAR33 VAR34 VAR35 VAR36 VAR37 VAR38 VAR39 VAR40 VAR41 VAR42"
# clean function style
sudo_exports ./sudo_test.sh
# or just use the content of the function
eval sudo $(for x in $_EXPORTS; do printf '%q=%q ' "$x" "${!x}"; done;) ./sudo_test.sh
Result
$ ./demo.sh
Forty-Two is The Answer to the Ultimate Question of Life, The Universe, and Everything.
Forty-Two is The Answer to the Ultimate Question of Life, The Universe, and Everything.
How?
This is made possible by a feature of the bash builtin printf: %q produces a shell-quoted string. Unlike the ${parameter@Q} expansion introduced in bash 4.4, this also works in bash versions < 4.0.
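For example (the variable and value below are only for illustration; the exact quoting style %q chooses can vary slightly between bash versions):
$ VAR='hello world $(dangerous)'
$ printf '%q\n' "$VAR"
hello\ world\ \$\(dangerous\)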
Add code snippets to /etc/sudoers.d
I don't know if this is available in all distros, but in Debian-based distros there is a line at or near the tail of the /etc/sudoers file that includes the folder /etc/sudoers.d. Here, one may add code "snippets" that modify sudo's configuration. Specifically, they allow control over all environment variables used by sudo.
As with /etc/sudoers, these "code snippets" should be edited using visudo. You can start by reading the README file, which is also a handy place for keeping any notes you care to make:
$ sudo visudo -f /etc/sudoers.d/README
# files for your snippets may be created/edited like so:
$ sudo visudo -f /etc/sudoers.d/20_mysnippets
Read the "Command Environment" section of 'man 5 sudoers'
Perhaps the most informative documentation on environment configuration in sudo is found in the Command environment section of man 5 sudoers. Here, we learn that environment variables that are blocked by default may be "whitelisted" using the env_check or env_keep options; e.g.
Defaults env_keep += "http_proxy HTTP_PROXY"
Defaults env_keep += "https_proxy HTTPS_PROXY"
Defaults env_keep += "ftp_proxy FTP_PROXY"
And so, in the OP's case, we may "pass" the sudoer's environment variables as follows:
$ sudo visudo -f /etc/sudoers.d/10_myenvwlist
# opens the default editor for entry of the following lines:
Defaults env_keep += "http_proxy HTTP_PROXY"
Defaults env_keep += "https_proxy HTTPS_PROXY"
# and any others deemed useful/necessary
# Save the file, close the editor, and you are done!
Get your bearings from '# sudo -V'
The OP presumably discovered the missing environment variable in sudo by trial and error. However, it is possible to be proactive: a listing of all environment variables and their allowed or denied status is available (and unique to each host) from the root prompt as follows:
# sudo -V
...
Environment variables to check for safety:
...
Environment variables to remove:
...
Environment variables to preserve:
...
Note that once an environment variable is "whitelisted" as above, it will appear in subsequent listings of sudo -V under the "preserve" listing.
If you need to keep the environment variables in a script, you can put your command in a here document like this. Especially if you have lots of variables to set, things look tidier this way.
# prepare a script e.g. for running maven
runmaven=/tmp/runmaven$$
# create the script with a here document
cat << EOF > $runmaven
#!/bin/bash
# run the maven clean with environment variables set
export ANT_HOME=/usr/share/ant
export MAKEFLAGS=-j4
mvn clean install
EOF
# make the script executable
chmod +x $runmaven
# run it
sudo $runmaven
# remove it or comment out to keep
rm $runmaven
I am trying to write a Dockerfile in which I add a few Java options to a script called envvars.
To achieve that, I want to append a few text lines to said file, like so:
RUN echo "JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStore=${CERT_DIR}/${HOSTNAME}_truststore.jks" >> ${BIN_DIR}/envvars
RUN echo "JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStorePassword=${PWD_TRUSTSTORE}" >> ${BIN_DIR}/envvars
RUN echo "export JAVA_OPTS" >> ${BIN_DIR}/envvars
The issue here is that I want the various placeholders ${varname} (those with curly braces) to be replaced during execution of the docker build command, while the substring '$JAVA_OPTS' (i.e. the one without braces) should be echoed and thus added to the envvars file verbatim. In the end, the result in the /usr/local/apache2/bin/envvars file should read:
...
JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStore=/usr/local/apache2/cert/myserver_truststore.jks
JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStorePassword=my_secret
export JAVA_OPTS
How can one escape a $-sign from variable substitution in dockerfiles?
I found hints to use \$ or $$ but neither worked for me.
In case that matters (which I hope/expect not to): I am building the image using "Docker Desktop" on Windows 10 but I would expect the dockerfile to be agnostic of that.
First you need to add # escape=` to your Dockerfile, since \ is an escape character in the Dockerfile. Then you can use \$ to escape the dollar sign in the RUN section.
Example:
# escape=`
RUN echo "JAVA_OPTS=\$JAVA_OPTS -Djavax.net.ssl.trustStore=${CERT_DIR}/${HOSTNAME}_truststore.jks" >> ${BIN_DIR}/envvars
That will produce JAVA_OPTS=$JAVA_OPTS in your envvars file.
I have a supervisord file like this:
[program:decrypt]
command=export KEYTOKEN=$(aws kms decrypt --ciphertext-blob fileb://<(echo %(ENV_TOKENENC)s | base64 -d) --output text --query Plaintext --region %(ENV_REGION)s | base64 -d )
I am passing the environment variables ENV_TOKENENC and ENV_REGION to the container, and I can echo those variables to confirm that the Docker container is getting them; the command to decrypt the KMS value also works on its own. But when I put the KMS decrypt command in the supervisord file, it throws an error saying ('ENV_REGION') & ('ENV_CONSULTOKENENC') cannot be expanded.
Am I putting the right value in the supervisord file?
Setting an environment variable is easy, if you're setting it to a constant value:
[program:decrypt]
command=/usr/bin/env foo=bar baz=qux /path/to/something ...
or, with less overhead:
environment=foo="bar",baz="qux"
command=/path/to/something ...
However, dynamically generating that variable's value requires a shell:
[program:decrypt]
command=/bin/sh -c 'foo=$(generate-bar) /path/to/something'
Note that export is not actually needed here: var=value something, as a single command, exports var with value value during the execution of something.
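Applied to the decrypt program from the question, that might look roughly like the following sketch (here /path/to/something is a placeholder for whatever should consume KEYTOKEN, and bash is used instead of sh because the <(...) process substitution in the original command requires it):
[program:decrypt]
command=/bin/bash -c 'KEYTOKEN=$(aws kms decrypt --ciphertext-blob fileb://<(echo %(ENV_TOKENENC)s | base64 -d) --output text --query Plaintext --region %(ENV_REGION)s | base64 -d); export KEYTOKEN; exec /path/to/something'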
I am using the following code to append ";C:\Python27" to the environment variable PATH:
#echo off
Setx Path "%PATH%;C:\Python27" -M
PAUSE
But if I run this batch file more than once, it appends ";C:\Python27" again each time, which should not happen.
So I have to check for ;C:\Python27 before appending it to the PATH variable.
Is there any command for this purpose?
The following PowerShell should do it:
$needPython = $env:path | select-string -NotMatch -SimpleMatch "c:\python27"
if ($needPython) {
[Environment]::SetEnvironmentVariable("tstpath", $env:path + ";c:\python27", "User")
}
You can change User to Machine or Process to set a machine or process level environment variable.
You can run this directly from a powershell prompt.
If you're running this from a DOS command line, use the following (you need the full path to your script, or .\ if it's in the current directory):
powershell "& '.\myscript.ps1'"
I'm trying to set up a shell script that will start a screen session (or rejoin an existing one) only if it is invoked from an interactive shell. The solution I have seen is to check if $- contains the letter "i":
#!/bin/sh -e
echo "Testing interactivity..."
echo 'Current value of $- = '"$-"
if [ `echo \$- | grep -qs i` ]; then
echo interactive;
else
echo noninteractive;
fi
However, this fails, because the script is run by a new noninteractive shell, invoked as a result of the #!/bin/sh at the top. If I source the script instead of running it, it works as desired, but that's an ugly hack. I'd rather have it work when I run it.
So how can I test for interactivity within a script?
Give this a try and see if it does what you're looking for:
#!/bin/sh
if [ $_ != $0 ]
then
echo interactive;
else
echo noninteractive;
fi
The underscore ($_) expands to the absolute pathname used to invoke the script. The zero ($0) expands to the name of the script. If they're different then the script was invoked from an interactive shell. In Bash, subsequent expansion of $_ gives the expanded argument to the previous command (it might be a good idea to save the value of $_ in another variable in order to preserve it).
From man bash:
0 Expands to the name of the shell or shell script. This is set
at shell initialization. If bash is invoked with a file of com‐
mands, $0 is set to the name of that file. If bash is started
with the -c option, then $0 is set to the first argument after
the string to be executed, if one is present. Otherwise, it is
set to the file name used to invoke bash, as given by argument
zero.
_ At shell startup, set to the absolute pathname used to invoke
the shell or shell script being executed as passed in the envi‐
ronment or argument list. Subsequently, expands to the last
argument to the previous command, after expansion. Also set to
the full pathname used to invoke each command executed and
placed in the environment exported to that command. When check‐
ing mail, this parameter holds the name of the mail file cur‐
rently being checked.
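As noted above, since $_ changes after every command, it can be worth capturing it on the very first line of the script; a minimal sketch:
#!/bin/sh
# Capture $_ immediately, before any other command overwrites it.
invoked_as=$_
if [ "$invoked_as" != "$0" ]
then
    echo interactive;
else
    echo noninteractive;
fi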
$_ may not work in every POSIX-compatible sh, although it probably works in most.
$PS1 will only be set if the shell is interactive. So this should work:
if [ -z "$PS1" ]; then
echo noninteractive
else
echo interactive
fi
Try tty:
if tty 2>&1 |grep not ; then echo "Not a tty"; else echo "a tty"; fi
man tty :
The tty utility writes the name of the terminal attached to standard
input to standard output. The name that is written is the string
returned by ttyname(3). If the standard input is not a terminal, the
message ``not a tty'' is written.
You could try using something like...
if [[ -t 0 ]]
then
echo "Interactive...say something!"
read line
echo $line
else
echo "Not Interactive"
fi
The "-t" switch in the test field checks if the file descriptor given matches a terminal (you could also do this to stop the program if the output was going to be printed to a terminal, for example). Here it checks if the standard in of the program matches a terminal.
Simple answer: don't run those commands inside ` ` or [ ].
There is no need for either of those constructs here.
Obviously I can't be sure what you expected
[ `echo \$- | grep -qs i` ]
to be testing, but I don't think it's testing what you think it's testing.
That code will do the following:
Run echo \$- | grep -qs i inside a subshell (due to the ` `).
Capture the subshell's standard output.
Replace the original ` ` expression with a string containing that output.
Pass that string as an argument to the [ command or built-in (depending on your shell).
Produce a successful return code from [ only if that string was nonempty (assuming the string didn't look like an option to [).
Some possible problems:
The -qs options to grep should cause it to produce no output, so I'd expect [ to be testing an empty string regardless of what $- looks like.
It's also possible that the backslash is escaping the dollar sign and causing a literal 'dollar minus' (rather than the contents of a variable) to be sent to grep.
On the other hand, if you removed the [ and backticks and instead said
if echo "$-" | grep -qs i ; then
then:
your current shell would expand "$-" with the value you want to test,
echo ... | would send that to grep on its standard input,
grep would return a successful return code when that input contained the letter i,
grep would print no output, due to the -qs flags, and
the if statement would use grep's return code to decide which branch to take.
Also:
no backticks would replace any commands with the output produced when they were run, and
no [ command would try to replace the return code of grep with some return code that it had tried to reconstruct by itself from the output produced by grep.
For more on how to use the if command, see this section of the excellent BashGuide.
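Putting those pieces together, the original script might be rewritten like this (a sketch of the corrected test only; it still runs in a fresh non-interactive /bin/sh, so the sourcing caveat from the question still applies):
#!/bin/sh -e
echo "Testing interactivity..."
echo 'Current value of $- = '"$-"
if echo "$-" | grep -qs i ; then
    echo interactive
else
    echo noninteractive
fi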
If you want to test the value of $- without forking an external process (e.g. grep) then you can use the following technique:
if [ "${-%i*}" != "$-" ]
then
echo Interactive shell
else
echo Not an interactive shell
fi
This deletes any match for i* from the value of $- then checks to see if this made any difference.
(The ${parameter/from/to} construct (e.g. [ "${-//[!i]/}" = "i" ] is true iff interactive) can be used in Bash scripts but is not present in Dash, which is /bin/sh on Debian and Ubuntu systems.)