Unpredictable result with echo command when used in a for loop - buffer

I am trying to write a small shell script to find the signed and unsigned jars in a particular directory. While the script works fine with 4-5 jars, it starts showing unpredictable results when the directory has more jars (currently about 25). I am not sure, but a possible reason might be related to buffering of STDIN/STDOUT. I tried to find a resolution in related posts but could not get a clear answer.
Here is my script (it takes JAVA_HOME as an argument):
#! /bin/bash
JV_HOME=$1
for i in `ls *.jar`
do
    echo "scanning $i ..."
    FILE=$i
    $JV_HOME/bin/jarsigner -verify -verbose -certs $FILE | grep "jar verified" ;
    if [ $? -eq 0 ]; then
        echo "\n$FILE is code-signed\n"
    else
        echo "\n$FILE is unsigned/unverified..\n"
    fi
done
For some of the jars, the script reports them as unsigned even though they are actually signed when checked individually with the following command:
$JAVA_HOME/bin/jarsigner -verify -verbose -certs
What could possibly be wrong with the above script?
Thanks in advance,
Pabi
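It is hard to say definitively without the jarsigner output, but two things in the script are known trouble spots: iterating over ls output breaks on unusual filenames, and bash's echo does not expand \n unless given -e. A more defensive sketch of the same loop (same jarsigner invocation, just safer quoting and expansion):

#!/bin/bash
# Sketch only: same logic as the script above, with safer expansion.
JV_HOME=$1
for FILE in *.jar; do
    echo "scanning $FILE ..."
    # Testing the pipeline directly in the if avoids relying on $? afterwards.
    if "$JV_HOME"/bin/jarsigner -verify -verbose -certs "$FILE" | grep -q "jar verified"; then
        printf '\n%s is code-signed\n\n' "$FILE"
    else
        printf '\n%s is unsigned/unverified..\n\n' "$FILE"
    fi
done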

Related

ROS how to find all executables of a package?

How can I find all the executable names of a package in ROS (Robot Operating System)? For example, finding spawn_model in the gazebo_ros package. When I inspect the package on my system, I see only some .xml and .cmake files, without any executables. But I can still run it, for example: rosrun gazebo_ros spawn_model.
Thank you!
An easy way to do this is to type "rosrun name_of_package " and then press Tab twice; it should show you all the executables built.
After looking in the bash autocompletion script for rosrun, it looks like the command catkin_find is used to find the location of the executables for a package, and the executables are filtered with a find command.
If you want to create a script to give you a list of the executables follow the instructions below:
Save the following script in a file called rospack-list-executables:
#!/bin/bash
if [[ $# -lt 1 ]]; then
    echo "usage: $(basename $0) <pkg_name>"
    echo ""
    echo "  To get a list of all package names use the command"
    echo "  'rospack list-names'"
    exit
fi
pkgname=${1}
pkgdir="$(catkin_find --first-only --without-underlays --libexec ${pkgname})"
if [[ -n "${pkgdir}" ]]; then
    find -L "${pkgdir}" -executable -type f ! -regex ".*/[.].*" ! -regex ".*${pkgdir}\/build\/.*" -print0 | tr '\000' '\n' | sed -e "s/.*\/\(.*\)/\1/g" | sort
else
    echo "Cannot find executables for package '${pkgname}'." >&2
    exit 1
fi
Then make the rospack-list-executables script executable (chmod +x rospack-list-executables) and place it in a directory that can be found in your $PATH environment variable.
Run the script:
$ rospack-list-executables gazebo_ros
debug
gazebo
gdbrun
gzclient
gzserver
libcommon.sh
perf
spawn_model
You should get the same result that you get when you type the rosrun <pkgname> command and press Tab:
$ rosrun gazebo_ros
debug gazebo gdbrun gzclient gzserver libcommon.sh perf spawn_model
You can check the executables for all packages with the following bash code:
rospack list-names | while read pkgname; do
    echo "Executables for package '${pkgname}':";
    rospack-list-executables $pkgname; echo "";
done
To enable package autocompletion for your newly created command, type the following:
complete -F _roscomplete rospack-list-executables
If you do not want to have to type the complete command every time you login, you can append it to your .bashrc file:
echo "complete -F _roscomplete rospack-list-executables" >> ~/.bashrc
Now when you type the command rospack-list-executables and press the Tab key, you should get a list of all the available packages to choose from.
catkin_find --first-only --without-underlays --libexec <your package name>
should give you the folder where the executables are.
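For example, calling catkin_find directly (output illustrative; the actual path depends on your ROS distribution and workspace):

$ catkin_find --first-only --without-underlays --libexec gazebo_ros
/opt/ros/indigo/lib/gazebo_ros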

Running iOS UIAutomation as a post-action build script returns a posix spawn error

I'm entirely new to using bash and Xcode build scripts and so my code is probably a jungle full of errors.
The idea here is to trigger the script below which will scrape the directory that it is saved in for any .js automation scripts. It will then send these scripts to instruments to be run one at a time. I found some nifty code that created time stamped files and so I used that to create a more meaningful storage system.
#!/bin/bash
# This script should run all (currently only one) tests, independently of
# where it is called from (terminal, or Xcode Run Script).
# REQUIREMENTS: This script has to be located in the same folder as all the
# UIAutomation tests. Additionally, a *.tracetemplate file has to be present
# in the same folder. This can be created with Instruments (Save as template...)
# The following variables have to be configured:
#EXECUTABLE="Plans.app"

# Find the test folder (this script has to be located in the same folder).
ROOT="$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Prepare all the required args for instruments.
TEMPLATE=`find $ROOT -name '*.tracetemplate'`
#EXECUTABLE=`find ~/Library/Application\ Support/iPhone\ Simulator | grep "${EXECUTABLE}$"`
echo "$BUILT_PRODUCTS_DIR"
echo "$PRODUCT_NAME"
EXECUTABLE="${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.app/"
SCRIPTS=`find $ROOT -name '*.js'`

# Prepare traces folder
TRACES="${ROOT}/Traces/`date +%Y-%m-%d_%H-%M-%S`"
mkdir -p "$TRACES"
printf "\n" >> "$ROOT/results.log"
echo `date +%Y-%m-%d_%H-%M-%S` >> "$ROOT/results.log"

# Get the name of the user we should use to run Instruments.
# Currently this is done by getting the owner of the folder containing this script.
USERNAME=`ls -l "${ROOT}/.." | grep \`basename "$ROOT"\` | awk '{print $3}'`

# Bring simulator window to front. Depending on the localization, the name is different.
osascript -e 'try
tell application "iPhone Simulator" to activate
on error
tell application "iOS Simulator" to activate
end try'

# Prepare an AppleScript that prompts for the password.
PASS_SCRIPT="tell application \"System Events\"
activate
display dialog \"Password for user $USER:\" default answer \"\" with hidden answer
text returned of the result
end tell"

# Run all the tests.
for SCRIPT in $SCRIPTS; do
    echo -e "\nRunning test script $SCRIPT"
    TESTC="sudo -u ${USER} xcrun instruments -l -c -t ${TEMPLATE} ${EXECUTABLE} -e UIARESULTSPATH ${TRACES}/${TRACENAME} -e UIASCRIPT ${SCRIPT} >> ${ROOT}/results.log"
    #echo "$COMMAND"
    echo "Executing command $TESTC" >> "$ROOT/results.log"
    echo "here $TESTC" >> "$ROOT/results.log"
    OUTPUT=$(TESTC)
    echo $OUTPUT >> "$ROOT/results.log"
    echo "Finished logging" >> "$ROOT/results.log"
    SCRIPTNAME=`basename "$SCRIPT"`
    TRACENAME=`echo "$SCRIPTNAME" | sed 's_\.js$_.trace_g'`
    for i in $(ls -A1t $PWD | grep -m 1 '.trace')
    do
        TRACEFILE="$PWD/$i"
    done
    if [ -e $TRACEFILE ]; then
        mv "$TRACEFILE" "${TRACES}/${TRACENAME}"
    fi
    if [ `grep " Fail: " results.log | wc -l` -gt 0 ]; then
        echo "Test ${SCRIPTNAME} failed. See trace for details."
        open "${TRACES}/${TRACENAME}"
        exit 1
        break
    fi
done
rm results.log
A good portion of this was taken from another Stack Overflow answer but because of the repository setup that I'm working with I needed to keep the paths abstract and separate from the root folder of the script. Everything seems to work (although probably not incredibly efficiently) except for the actual xcrun command to launch instruments.
TESTC="sudo -u ${USER} xcrun instruments -l -c -t ${TEMPLATE} ${EXECUTABLE} -e UIARESULTSPATH ${TRACES}/${TRACENAME} -e UIASCRIPT ${SCRIPT} >> ${ROOT}/results.log"
echo "Executing command $TESTC" >> "$ROOT/results.log"
OUTPUT=$(TESTC)
This is turned into the following by whatever black magic Bash runs on:
sudo -u Braains xcrun instruments -l -c -t
/Users/Braains/Documents/Automation/AppName/TestCases/UIAutomationTemplate.tracetemplate
/Users/Braains/Library/Developer/Xcode/DerivedData/AppName-
ekqevowxyipndychtscxwgqkaxdk/Build/Products/Debug-iphoneos/AppName.app/ -e UIARESULTSPATH
/Users/Braains/Documents/Automation/AppName/TestCases/Traces/2014-07-17_16-31-49/ -e
UIASCRIPT /Users/Braains/Documents/Automation/AppName/TestCases/Test-Case_1js
(Line breaks inserted above for clarity.)
The resulting error that I am seeing is:
posix spawn failure; aborting launch (binary ==
/Users/Braains/Library/Developer/Xcode/DerivedData/AppName-
ekqevowxyipndychtscxwgqkaxdk/Build/Products/Debug-iphoneos/AppName.app/AppName).
I have looked all over for a solution, but everything I find relates to a similar issue in Appium. Unfortunately, I don't understand the systems well enough to translate the Appium fixes to my own code, though I imagine it's a similar issue.
I do know that the posix spawn failure is related to threading, but I don't know enough about xcrun to say what's causing the threading issue.
Related info:
- I'm building for the simulator but it'd be great to work on real devices too
- I'm using Xcode 5.1.1 and iOS Simulator 7.1
- This script is meant to be run as a build post-action script in Xcode
- I did get it briefly working once before I broke it and couldn't get it back to the working state. So I think that means all of my permissions are set correctly.
UPDATE: So I've gotten to the root of this problem, although I have not found a fix yet. First of all, I had no idea what xcrun is for, so I dropped it. Then, after playing around, I found that my Xcode environment variables return the wrong path, probably because of some project setting somewhere. If you copy the Bash command from above but replace Debug-iphoneos with Debug-iphonesimulator, the script can be run from the command line and works as expected.
So for anyone who happens across this, the only solution I could find was to hardcode the script for the simulator.
I changed EXECUTABLE="${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.app/" to be EXECUTABLE="${SYMROOT}/Debug-iphonesimulator/${EXECUTABLE_PATH}". This is obviously not a great solution but it works for now.
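In context, the change amounts to one line (SYMROOT and EXECUTABLE_PATH are standard Xcode build settings; the Debug-iphonesimulator directory name is hardcoded here):

# Before: BUILT_PRODUCTS_DIR resolved to Debug-iphoneos, which the simulator cannot launch
#EXECUTABLE="${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.app/"
# After: point explicitly at the simulator build products
EXECUTABLE="${SYMROOT}/Debug-iphonesimulator/${EXECUTABLE_PATH}"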

Jenkins plugin for triggering a build whenever any file changes in a given directory

I am looking for functionality where we have a directory with some files in it.
Whenever anyone makes a change to any of the files in that directory, Jenkins should trigger a build.
Is there any plugin or method for this functionality? Please advise.
Thanks in advance.
I have not tried it myself, but The FSTrigger plugin seems to do what you want:
FSTrigger provides polling mechanisms to monitor a file system and
trigger a build if a file or a set of files have changed.
If you can monitor the directory with a script, you can trigger the build with a HTTP GET, for example with wget or curl:
wget -O- $JENKINS_URL/job/JOBNAME/build
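A curl equivalent might look like the following (recent Jenkins versions expect a POST for this endpoint, and a secured instance will additionally require authentication, which is omitted here):

curl -X POST "$JENKINS_URL/job/JOBNAME/build"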
Although only slightly related: this issue is about monitoring static files on the system, but there are many version control systems for exactly this purpose.
I answered this in another post if you're using git to track changes on the files themselves:
#!/bin/bash
set -e

job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"

python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print inner
"

_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`

if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
    echo "[INFO] no changes detected in ${FILTER_PATH}"
    exit 0
else
    echo "[INFO] changed files detected: "
    for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
        echo "  $a_file"
    done;
fi;
You can add the check directly to the top of the job's exec shell, and it will exit 0 if no changes are detected. Hence, you can always poll the top level of the repo for check-ins to trigger a build, and only complete a build if the files in question change.

Best way to search the path in shell

I've got a small script called "onewhich". Its purpose is to behave like which, except that it will only give the FIRST occurrence of any executables specified as options, as found in the order they'd appear in the path.
So for example, if my path is /opt/bin:/usr/bin:/bin, and I have both /opt/bin/runme and /usr/bin/runme, then the command onewhich runme would return /opt/bin/runme.
But if I also have a /usr/bin/doit, then the command onewhich doit runme would return /usr/bin/doit instead.
The idea is to walk through the path, check for each executable specified, and if it exists, show it and exit.
Here's the script so far.
#!/bin/sh
for what in "$@"; do
    for loc in `echo "${PATH}" | awk -vRS=: 1`; do
        if [ -f "${loc}/${what}" ]; then
            echo "${loc}/${what}"
            exit 0
        fi
    done
done
exit 1
The problem is, I want to be better about PATH directories with special characters. Every second shell question here on StackOverflow talks about how bad it is to parse paths with tools like awk and sed. There's even a bash faq entry about it. (Proviso: I'm not using bash for this, but the recommendation is still valid.)
So I tried rewriting the script to separate paths in a pipe, like this:
#!/bin/sh
for what in "$@"; do
    echo "${PATH}" | awk -vRS=: 1 | while read loc ; do
        if [ -f "${loc}/${what}" ]; then
            echo "${loc}/${what}"
            exit 0
        fi
    done
done
exit 1
I'm not sure if this gives me any real advantage (since $loc is still inside quotes), but it also doesn't work because for some reason, the exit 0 seems to be ignored. Or ... it exits something (the sub-shell with the while loop that terminates the pipe, maybe), but the script exits with a value of 1 every time.
What's a better way to step through directories in ${PATH} without the risk that special characters will confuse things?
Alternately, am I reinventing the wheel? Is there maybe a way to do this that's built in to existing shell tools?
This needs to run in both Linux and FreeBSD, which is why I'm writing it in Bourne instead of bash.
Thanks.
This doesn't directly answer your question, but does eliminate the need to parse PATH at all:
onewhich () {
    for what in "$@"; do
        which "$what" 2>/dev/null && break
    done
}
This just calls which on each command on the input list until it finds a match.
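Using the example paths from the question, a call would look something like this (output illustrative):

$ onewhich doit runme
/usr/bin/doit

Because of the break, the search stops at the first command that which can resolve.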
To parse PATH, you can simply set IFS=':'.
if [ "${IFS:-x}" = "${IFS-x}" ]; then
    # Only preserve the value of IFS if it is currently set
    OLDIFS=$IFS
fi
IFS=":"
for f in $PATH; do  # Do not quote $PATH, to allow word splitting
    echo $f
done
if [ "${OLDIFS:-x}" = "${OLDIFS-x}" ]; then
    IFS=$OLDIFS
fi
The above will fail if any of the directories in PATH actually contain colons.
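A quick illustration of that corner case (a sketch; the directory name is contrived):

#!/bin/sh
PATH='/opt/odd:name/bin:/usr/bin'  # the first entry contains a colon
IFS=":"
for f in $PATH; do
    echo "$f"
done
# prints /opt/odd, name/bin and /usr/bin: the colon-containing entry is split in two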
Your first method looks to me as if it should work. In practical terms, if it's really the $PATH you'll be searching, it's unlikely you'll have spaces and newlines embedded in directories there. If you do, it's probably time to refactor.
But still, I don't think you're at risk from the possibility of bad names clobbering your loop, since you're wrapping variables in quotes. At worst, I suspect you might miss the odd valid executable, but I can't see how the script would generate errors. (I don't see how the script would miss valid executables, and I haven't tested - I'm just saying I don't see problems at first glance.)
As for your second question, about the loop, I think you've hit the nail on the head. When you run a pipe like this | that | while condition; do things; done, the while loop runs in its own shell at the end of the pipe. Exiting that shell may terminate the actions of the pipe, but that only brings you back to the parent shell, which has its own thread of execution that terminates with exit 1.
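A minimal demonstration of that subshell behavior (a sketch):

#!/bin/sh
echo hello | while read line; do
    exit 0            # exits only the subshell running the while loop
done
echo "still running"  # still executes in the parent shell
exit 1                # this is the status the script actually reports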
As for a better way to do this, I would consider which.
#!/bin/sh
for what in "$@"; do
    which "$what"
done | head -1
And if you really want the exit values as well:
#!/bin/sh
for what in "$@"; do
    which "$what" && exit 0
done
exit 1
The second might even use fewer resources, since it doesn't have to open a file handle and pipe through head.
You can also split your path using IFS. For example, if you wanted to wrap your loops the other way around, you could do this:
#!/bin/sh
IFS=":"
for loc in $PATH; do
    for what in "$@"; do
        if [ -x "$loc"/"$what" ]; then
            echo "$loc"/"$what"
            exit 0
        fi
    done
done
exit 1
Note that under normal circumstances, you might want to save the old value of $IFS, but you seem to be doing things in a stand-alone script, so the "new" value gets thrown out when the script exits.
All the above code is untested. YMMV.
Another way to get around the need to parse PATH at all is to run the builtin type command in a new shell with a stripped environment (i.e. there simply are no functions or aliases to look up; cf. env -i sh -c 'type cmd 2>/dev/null').
# using `cmd` instead of $(cmd) for portability
onewhich() {
    ec=0  # exit code
    for cmd in "$@"; do
        command -p env -i PATH="$PATH" sh -c '
            export LC_ALL=C LANG=C
            cmd="$1"
            path="`type "$cmd" 2>/dev/null`"
            if [ X"$path" = "X" ]; then
                printf "%s\n" "error: command \"${cmd}\" not found in PATH" 1>&2
                exit 1
            else
                case "$path" in
                    *\ /*)
                        path="/${path#*/}"
                        printf "%s\n" "$path";;
                    *)
                        printf "%s\n" "error: no disk file: $path" 1>&2
                        exit 1;;
                esac
                exit 0
            fi
        ' _ "$cmd"
        [ $? != 0 ] && ec=1
    done
    return $ec  # nonzero if any lookup failed
}
onewhich awk ls sed
onewhich builtin
onewhich if
Since which on success returns two full command paths if two commands are specified as arguments, exit 0 in the first onewhich script above aborts the program prematurely. In addition, if two commands are specified as arguments to which, the exit code of which is set to 1 even if only one command lookup failed (cf. which awk sedxyz ls; echo $?). To mimic this behaviour of the which command it is necessary to toggle on/off two variables (cnt and nomatches below).
onewhich() (
    IFS=":"
    nomatches=0
    for cmd in "$@"; do
        cnt=0
        for loc in $PATH ; do
            if [ $cnt = 0 ] && [ -x "$loc"/"$cmd" ]; then
                echo "$loc"/"$cmd"
                cnt=1
            fi
        done
        [ $cnt = 0 ] && nomatches=1
    done
    [ $nomatches = 1 ] && exit 1 || exit 0  # exit 1: at least one cmd was not in PATH
)
onewhich awk ls sed
onewhich awk lsxyz sed
onewhich builtin
onewhich if
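With the calls above, the output would resemble the following on a typical system (illustrative; paths vary). The second call still prints the awk and sed paths but exits 1 because lsxyz is not found; the last two print nothing and exit 1, since builtin and if are shell keywords/builtins rather than files in PATH:

$ onewhich awk ls sed
/usr/bin/awk
/bin/ls
/bin/sed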

How can I tell from within a shell script if the shell that invoked it is an interactive shell?

I'm trying to set up a shell script that will start a screen session (or rejoin an existing one) only if it is invoked from an interactive shell. The solution I have seen is to check if $- contains the letter "i":
#!/bin/sh -e
echo "Testing interactivity..."
echo 'Current value of $- = '"$-"
if [ `echo \$- | grep -qs i` ]; then
    echo interactive;
else
    echo noninteractive;
fi
However, this fails, because the script is run by a new noninteractive shell, invoked as a result of the #!/bin/sh at the top. If I source the script instead of running it, it works as desired, but that's an ugly hack. I'd rather have it work when I run it.
So how can I test for interactivity within a script?
Give this a try and see if it does what you're looking for:
#!/bin/sh
if [ $_ != $0 ]
then
    echo interactive;
else
    echo noninteractive;
fi
The underscore ($_) expands to the absolute pathname used to invoke the script. The zero ($0) expands to the name of the script. If they're different then the script was invoked from an interactive shell. In Bash, subsequent expansion of $_ gives the expanded argument to the previous command (it might be a good idea to save the value of $_ in another variable in order to preserve it).
From man bash:
0    Expands to the name of the shell or shell script. This is set at
     shell initialization. If bash is invoked with a file of commands,
     $0 is set to the name of that file. If bash is started with the -c
     option, then $0 is set to the first argument after the string to
     be executed, if one is present. Otherwise, it is set to the file
     name used to invoke bash, as given by argument zero.
_    At shell startup, set to the absolute pathname used to invoke the
     shell or shell script being executed as passed in the environment
     or argument list. Subsequently, expands to the last argument to
     the previous command, after expansion. Also set to the full
     pathname used to invoke each command executed and placed in the
     environment exported to that command. When checking mail, this
     parameter holds the name of the mail file currently being checked.
$_ may not work in every POSIX-compatible sh, although it probably works in most.
$PS1 will only be set if the shell is interactive. So this should work:
if [ -z "$PS1" ]; then
    echo noninteractive
else
    echo interactive
fi
Try tty:
if tty 2>&1 |grep not ; then echo "Not a tty"; else echo "a tty"; fi
From man tty:
The tty utility writes the name of the terminal attached to standard
input to standard output. The name that is written is the string
returned by ttyname(3). If the standard input is not a terminal, the
message ``not a tty'' is written.
You could try using something like...
if [[ -t 0 ]]
then
    echo "Interactive...say something!"
    read line
    echo $line
else
    echo "Not Interactive"
fi
The -t switch in the test checks whether the given file descriptor is attached to a terminal (you could also use it, for example, to stop the program if its output is going to a terminal). Here it checks whether the program's standard input is a terminal.
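The same test works for other file descriptors; for example, a script can adapt its output depending on whether standard output is a terminal (a sketch):

#!/bin/sh
if [ -t 1 ]; then
    echo "stdout is a terminal: safe to use colors or progress output"
else
    echo "stdout is redirected: emitting plain output"
fi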
Simple answer: don't run those commands inside ` ` or [ ].
There is no need for either of those constructs here.
Obviously I can't be sure what you expected
[ `echo \$- | grep -qs i` ]
to be testing, but I don't think it's testing what you think it's testing.
That code will do the following:
- Run echo \$- | grep -qs i inside a subshell (due to the ` `).
- Capture the subshell's standard output.
- Replace the original ` ` expression with a string containing that output.
- Pass that string as an argument to the [ command or built-in (depending on your shell).
- Produce a successful return code from [ only if that string was nonempty (assuming the string didn't look like an option to [).
Some possible problems:
- The -qs options to grep should cause it to produce no output, so I'd expect [ to be testing an empty string regardless of what $- looks like.
- It's also possible that the backslash is escaping the dollar sign and causing a literal 'dollar minus' (rather than the contents of a variable) to be sent to grep.
On the other hand, if you removed the [ and backticks and instead said
if echo "$-" | grep -qs i ; then
then:
- your current shell would expand "$-" with the value you want to test,
- echo ... | would send that to grep on its standard input,
- grep would return a successful return code when that input contained the letter i,
- grep would print no output, due to the -qs flags, and
- the if statement would use grep's return code to decide which branch to take.
Also:
- no backticks would replace any commands with the output produced when they were run, and
- no [ command would try to replace the return code of grep with some return code that it had tried to reconstruct by itself from the output produced by grep.
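Putting those pieces together, the corrected test reads as follows (a sketch; note that, as the question itself observes, $- inside a #!/bin/sh script normally will not contain i anyway, because the script runs in a fresh noninteractive shell):

#!/bin/sh
if echo "$-" | grep -qs i; then
    echo interactive
else
    echo noninteractive
fi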
For more on how to use the if command, see this section of the excellent BashGuide.
If you want to test the value of $- without forking an external process (e.g. grep) then you can use the following technique:
if [ "${-%i*}" != "$-" ]
then
    echo Interactive shell
else
    echo Not an interactive shell
fi
This deletes any match for i* from the value of $- then checks to see if this made any difference.
(The ${parameter/from/to} construct (e.g. [ "${-//[!i]/}" = "i" ] is true iff interactive) can be used in Bash scripts but is not present in Dash, which is /bin/sh on Debian and Ubuntu systems.)
