I'm running a specific script using Salt, and based on the log output of this script I need to choose whether to proceed with other tasks in the Salt formula. Is there any way to ask for user approval in a Salt formula? Basically I need something like this code snippet in bash:
echo "Proceed with copying users?(yes|no)"
read reply
if [ $reply == 'no' ]
then
exit
fi
You can provide input using stdin:
stdin (str) -- A string of standard input can be specified for the
command to be run using the stdin parameter. This can be useful in
cases where sensitive information must be read from standard input.
You can also just echo the input into a script:
echo "some value" | script
I have a text file on my Linux machine which stores an integer value. I want that integer value to appear in my Robot Framework report file name.
I tried this:
${FILE,path="/home/ubuntu/Build_number.txt"}
and it properly prints the integer in the email content in Jenkins, but when I put it in the robot command (in an Execute Shell step in Jenkins), it does not print any value.
Robot command as:
robot -r report_${FILE,path="/home/ubuntu/Build_number.txt"} Login.robot
Actual: Running this command generates a report file named report_.html
Expected:
report_(integer value in Build_number.txt)
Indeed, this question belongs in a Linux/Bash forum. Bash has a different syntax than what you used: the shell does not expand Jenkins tokens like ${FILE,...}; it needs command substitution instead.
The correct command is:
robot -r report_$(cat /home/ubuntu/Build_number.txt) Login.robot
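$(...) is ordinary shell command substitution: the shell runs the command inside and splices its output into the command line before robot is invoked. Equivalently, as a two-step sketch (the BUILD variable name is just for illustration):

BUILD=$(cat /home/ubuntu/Build_number.txt)
robot -r "report_${BUILD}" Login.robot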
Let us say that I have an environment variable PO with value 1. If I use the Linux echo command I get:
>echo $PO
1
However, if I use Tcl and exec, I do not get interpolation:
>exec echo "\$PO"
$PO
Now, if I do something more elaborate, using regsub to replace every ${varname} with [ lindex [ array get env varname ] 1 ] and then applying subst, it works:
>subst [ regsub -all {\$\{(\S+?)\}} "\${PO}/1" "\[ lindex \[ array get env \\1 \] 1 \]" ]
1/1
I have some corner cases, sure. But why is the exec not giving back what the shell would do?
why is the exec not giving back what the shell would do?
Because exec is not a shell.
When you do echo $PO from a shell, echo is not responsible for resolving the value. It is the shell that converts $PO to the value 1 before calling echo; echo never sees $PO when it is invoked from the shell.
If you are trying to emulate what the shell does, then you need to do the same work as the shell (or, invoke an actual shell to do the work for you).
Tcl is a lot more careful about where it does interpolation than Unix shells normally are. It keeps environment variables out of the way so that you don't trip over them by accident, and does far less processing when it invokes a subprocess. This is totally by design!
As much as possible (with a few exceptions) Tcl passes the arguments to exec through to the subprocesses it creates. It also has standard mechanisms for quoting strings so that you can control exactly what substitutions happen before the arguments are actually passed to exec. This means that when you do:
exec echo "\$PO"
Tcl is going to apply its normal substitution rules and hand these exact words to the command dispatch: exec, echo, and $PO. This then calls into the exec command, which launches the echo program with one argument, $PO, and that is exactly what gets printed. (Shells usually substitute the value first.) If you'd instead done:
exec echo {$PO}
you would have got the same effect. Or even if you'd done:
exec {*}{echo $PO}
You still end up feeding the exact same characters into exec as its arguments. If you want to run the shell on it, you should explicitly ask for it:
exec /bin/sh -c {echo $PO}
The bit in the braces there is a full (small) shell script, and will be evaluated as such. And you could do this even:
exec /bin/sh -c {exec echo '$PO'}
It's a bit of a useless thing to do but it works.
You can also do whatever substitutions you want from your own code. My current favourite from Tcl 8.7 (in development) is this:
exec echo [regsub -all -command {\$(\w+)} "\$PO" {apply {{- name} {
    global env
    return $env($name)
}}}]
OK, total overkill for this but since you can use any old complex RE and script to do the substitutions, it's a major power tool. (You can do similar things with string map, regsub and subst in older Tcl, but that's quite a bit harder to do.) The sky and your imagination are the only limits.
I've been writing some shell tasks that can't run unless there are secure environment variables present in a Travis CI PR build. There is an auth token that must be present to push some information up about the build, so for builds originating from forks, I'd like to simply skip these parts. They're not critical.
How can I tell if a build is originating from a fork?
From the documentation around "Environment Variables":
TRAVIS_SECURE_ENV_VARS: Whether or not secure environment vars are being used. This value is either "true" or "false".
This is a little ambiguous. Does it mean that secure environment variables are being used anywhere (as in, present in .travis.yml)? That they are being exported as environment variables in the current build? I'm not sure this is a good way to guarantee that I'm testing a pull request that has originated from a fork, but I didn't see any other way to do it.
My first attempt had code that looked something like
[ ${TRAVIS_SECURE_ENV_VARS} = "false" ] && exit 0; # more shell code here...
but this appears to continue ahead and push without an auth token, failing the task (and the build). Further complicating matters, should the command fail, the output may contain the auth token, so everything from stderr and stdout is redirected to /dev/null. Since builds take several minutes to start, I'm stuck in a long debug cycle.
My next attempt simply bypassed this built-in environment variable in favor of trying to grab a secure environment variable directly.
[ ${ghToken} -n ] && exit 0;
This fails in the same way as above. I'm beginning to wonder if [ $COND ] && exit 0; really works the way I'm expecting it to in this context. It seems to work just fine when I run equivalent shell scripts locally (Mac OSX and bash).
Does Travis offer a built-in way to determine if the pull request being built originated from the original repository versus a fork?
Here is my current work around.
screenshotsClone: {
  command: ['[ ${ghToken} ] &&',
            'git submodule add -f', screenshotPullTemplate, 'screenshots > /dev/null 2>&1;'].join(' '),
  options: {
    stdout: false,
    failOnError: false
  }
}
I'd rather not silently pass on errors should there be a legitimate problem with the shell task. At this point I might as well remove the environment variable check that precedes it.
Stop using grunt-shell for things in your .travis.yml. Put those shell tasks into their own files so that you can stop using /bin/sh and start using /bin/bash, which is what you've been testing against locally.
Something like this will fix your problems.
Don't forget to mark the shebang as #! /bin/bash, which is crucial for the kinds of checks that should be happening on Travis.
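For instance, a minimal standalone script might look like this (the file name and the submodule URL variable are hypothetical; ghToken is the secure variable from the question):

#! /bin/bash
# clone-screenshots.sh (hypothetical name)
# Skip quietly on fork builds, where secure variables are absent.
[ -z "${ghToken}" ] && exit 0
# SCREENSHOT_TEMPLATE stands in for the question's screenshotPullTemplate value.
git submodule add -f "$SCREENSHOT_TEMPLATE" screenshots > /dev/null 2>&1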
I think what you want is to check whether one of your secure environment variables is null to detect that you are running a build from a fork, and in that case stop the build script prematurely.
Hence I would suggest you use the -z comparison operator in Bash to detect empty strings, because the -n operator detects non-empty strings. A non-empty secure environment variable would mean that you are not running the build from a fork.
My suggestion is to change your line to:
[ -z "${ghToken}" ] && exit 0;
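Expanded into a slightly fuller sketch (the push step is a stand-in for whatever actually uses the token):

if [ -z "${ghToken}" ]; then
    # Fork build: secure variables are not injected, so skip quietly.
    exit 0
fi
# ...push logic that needs ${ghToken} goes here...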
Hope this helps.
I have a shell script that should read parameters from an external file in order to get files via FTP:
parameters.txt:
FTP_SERVER=ftpserver.foo.org
FTP_USER_NAME=user
FTP_USER_PASSWORD=pass
FTP_SOURCE_DIRECTORY="/data/secondary/"
FTP_FILE_NAME="core.lst"
I cannot find how to read these variables into my FTP_GET.sh script. I have tried using read, but it just echoes the vars and doesn't store them as required.
Assuming that 'K Shell' is Korn Shell, and that you are willing to trust the contents of the file, then you can use the dot command '.':
. parameters.txt
This will read and interpret the file in the current shell. The feature has been in Bourne shell since it was first released, and is in the Korn Shell and Bash too. The C Shell equivalent is source, which Bash also treats as a synonym for dot.
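For example, a quick sketch using the variable names from parameters.txt:

. ./parameters.txt
echo "$FTP_SERVER"   # the variables are now set in the current shell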
If you don't trust the file then you can read the values with read, validate the values, and then use eval to set the variables:
while IFS= read -r line
do
    # Check the line here - which is HARD!
    eval "$line"
done < parameters.txt
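As a minimal sketch of such a check (assuming values never contain shell metacharacters; anything stricter requires real parsing), you could whitelist lines that look like NAME=value before the eval:

while IFS= read -r line
do
    case $line in
        [A-Za-z_]*=*) eval "$line" ;;                     # plausible NAME=value
        *) echo "skipping suspicious line: $line" >&2 ;;
    esac
done < parameters.txt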
This would be part of a reverse-engineering project.
To determine and document what a shell script (ksh, bash, sh) does, it is convenient to have information about which other programs/scripts it calls.
How could one automate this task? Do you know any program or framework that can parse a shell script? That way I could, for instance, recognize external command calls -- a step in the right direction.
For bash/sh/ksh, I think you can easily modify their source to log what has been executed. That would be a solution.
How about:
Get a list of distinct words in that script
Search $PATH to find a hit for each
?
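A rough bash sketch of that idea (the tokenizing regex is a crude assumption; real shell parsing is far more involved):

# Extract candidate words, then keep the ones that resolve to something runnable.
for w in $(grep -oE '[A-Za-z0-9_./-]+' script.sh | sort -u); do
    command -v "$w" >/dev/null 2>&1 && echo "$w"
done

Note that command -v also matches shell builtins and functions, so expect some false positives.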
bash -v script.sh ?
Bash's xtrace is your friend.
You can invoke it with:
set -x at the top of your script,
by calling your script with bash -x (or even bash --debugger -x),
or recursively by doing (set -x; export SHELLOPTS; your-script; )
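For example, with a hypothetical two-line script hello.sh:

$ cat hello.sh
name=world
echo "hello $name"
$ bash -x hello.sh
+ name=world
+ echo 'hello world'
hello world

Each traced line is printed to stderr with a + prefix (the expanded PS4 prompt), after all substitutions have been applied, so you see exactly which commands run.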
If you can't actually run the script, try loading it into a text editor that supports syntax highlighting for Bash. It will color-code all of the text and should help indicate what is a reserved word, variable, external command, etc.