Run a loop in bash with timeout in a single line

I need to have a single line command that does something in a loop with a timeout.
Something like
export TIMEOUT=60
export BLOCK_SIZE=65536
COMMAND="timeout TIMEOUT while true; do dd if=/tmp/nfsometer_trace/mnt/TESTFILE of=/dev/null bs=BLOCK_SIZE; done"
The command will be executed in 'sh' by doing the following:
echo $COMMAND > COMMAND_FILE
sh COMMAND_FILE
But this gives me a syntax error:
syntax error near unexpected token `do'
Is there any way to have a single line command to timeout an infinite loop?

A command like
while true ; do echo $(( i++ )) ; sleep 5 ; done
will produce output like
0
1
2
3
But if I try to use that command directly in timeout, I get similar error messages, i.e.
timeout 20 while true ; do echo $(( i++ )) ; sleep 5 ; done
bash: syntax error near unexpected token do
Some programs (ssh, for example) will process long strings of logic if quoted so that the argument is all one string, but timeout will not:
timeout 20 "while true ; do echo $(( i++ )) ; sleep 5 ; done"
timeout: failed to run command 'while true ; do ...'
The same error message appears whether I use single or double quotes to pass in the string.
Reading info timeout, we see that
COMMAND must not be a special built-in utility (*note Special built in utilities::).
Maybe while loops qualify as a special built-in utility?
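As a quick check, while is in fact not a utility at all but a shell reserved word, which type can confirm (a sketch; the exact wording varies by shell):
$ type while
while is a shell keyword
$ type timeout
timeout is /usr/bin/timeout
Since timeout can only exec a real program found on the PATH, it can never run while directly.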
Finally, note that I have ruled out the "wrapper" setup from your question, i.e.
export TIMEOUT=60
export BLOCK_SIZE=65536
COMMAND="timeout TIMEOUT while ....
as a cause of your problem.
When you have a problem like this, it is better to prove to yourself that your command works in a simpler case, and then get it working inside the more complex usage.
Given this evidence, I don't think timeout is designed to do what you want.
As a last resort, I recommend that you try converting that while loop into a script and calling just
timeout 20 myLooperScript
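For example, a minimal myLooperScript (the hypothetical name above) could be:
#!/bin/bash
# Loop forever; timeout will kill this whole script after the given duration.
while true ; do echo $(( i++ )) ; sleep 5 ; done
Alternatively, to keep everything on a single line as originally asked (a variant not spelled out above), hand the loop to a new shell, since timeout can exec a shell binary even though it cannot exec the while keyword:
timeout 20 bash -c 'while true ; do echo $(( i++ )) ; sleep 5 ; done'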
IHTH

Related

Vertica's vsql.exe returns errorlevel 0 when facing ERROR 3326: Execution time exceeded run time cap

I am using vsql.exe on an external Vertica database for which I don't have any administrative access. I use some views with simple SELECT+FROM+WHERE queries.
These queries work just fine 90% of the time, but sometimes, randomly, I get this error:
ERROR 3326:  Execution time exceeded run time cap of 00:00:45
The strange thing is that this error can happen way after those 45 seconds, even after 3 minutes. I've been told this is related to having different resource pools, but anyway I don't want to dig into that.
The problem is that when this occurs, vsql.exe returns errorlevel 0 and there is (apparently almost) no way to know this failed.
The output of the query is stored in a CSV file. When it succeeds, it ends with (#### rows). But when it fails with this error, it just stops at some arbitrary point in the CSV, and the resulting size is around half of what's expected. This is of course not what you would expect when an error occurs; you would expect no output, or an empty file.
If there is a connection error or if the query has syntax errors, the errorlevel is not 0, so in those cases it behaves as expected.
I've tried many things, like increasing the timeout or adding -v ON_ERROR_STOP=ON to the vsql.exe parameters, but none of that helped.
I've googled a lot and found many people having this error, but the solutions are mostly related to increasing the timeouts, not related to the errorlevel returned.
Any help will be greatly appreciated.
TL;DR: how can I detect an error 3326 in a batch file like this?
@echo off
vsql.exe -h <hostname> -U <user> -w <pwd> -o output.csv -Ac "SELECT ....;"
echo %errorlevel% is always 0
if errorlevel 1 echo Error!! But this is never displayed.
Now that's really unexpected to me. I don't have Windows available just now, but trying on my Mac - at first just triggering a deliberate error:
$ vsql -h zbook -d sbx -U dbadmin -w $VSQL_PASSWORD -v ON_ERROR_STOP=ON -Ac "select * from foobarfoo"
ERROR 4566: Relation "foobarfoo" does not exist
$ echo $?
1
With ON_ERROR_STOP set to ON, this should be the behaviour everywhere.
Could you try what I did above through Windows, just with echo %ERRORLEVEL% instead of echo $?, just from the Windows command prompt and not in a batch file?
Next test: I run on resource pool general in my little test database, so I temporarily modify it to a runtime cap of 30 seconds, run a silly query that will take over 30 seconds with ON_ERROR_STOP set to ON, collect the value returned by vsql, and set the runtime cap of general back to NONE. I also have the VSQL_* environment variables set so I don't have to repeat them all the time:
rem Windows way to set environment variables for vsql:
set VSQL_HOST=zbook
set VSQL_DATABASE=sbx
set VSQL_USER=dbadmin
set VSQL_PASSWORD=***masked***
Now for the test. (Backslashes in Linux/macOS escape a newline, which lets you "word wrap" a shell command; use the caret (^) in Windows for that.)
marco ~/1/Vertica/supp $ # set a runtime cap
marco ~/1/Vertica/supp $ vsql -i -c \
"alter resource pool general runtimecap '00:00:30'"
ALTER RESOURCE POOL
Time: First fetch (0 rows): 116.326 ms. All rows formatted: 116.730 ms
marco ~/1/Vertica/supp $ vsql -v ON_ERROR_STOP=ON -iAc \
"select count(*) from one_million_rows a cross join one_million_rows b"
ERROR 3326: Execution time exceeded run time cap of 00:00:30
marco ~/1/Vertica/supp $ # test the return code
marco ~/1/Vertica/supp $ echo $?
1
marco ~/1/Vertica/supp $ # clear the runtime cap
marco ~/1/Vertica/supp $ vsql -i -c \
"alter resource pool general runtimecap NONE "
ALTER RESOURCE POOL
Time: First fetch (0 rows): 11.148 ms. All rows formatted: 11.383 ms
So it works in my case. Your line:
if errorlevel 1 echo Error!! But this is never displayed.
... never echoes anything because the previous line, with echo, returns 0 to the shell, overwriting the previous errorlevel.
Try it command by command on your Windows command prompt, and see what happens. Just echo %errorlevel%, without evaluating it.
And I notice that you are trying to export to CSV format. Then, try this:
Format the output unaligned (-A).
Set the field separator to comma (-F ',').
Remove the footer '(n rows)' (-P footer).
Limit the output to 5 rows in the query, for the test.
(I show the output before redirecting to file):
marco ~/1/Vertica/supp $ vsql -A -F ',' -P footer -c "select * from one_million_rows limit 5"
id,id_desc,dob,category,busid,revenue
0,0,1950-01-01,1,====== boss ========,0.000
1,-1,1950-01-02,2,kbv-000001kbv-000001,0.010
2,-2,1950-01-03,3,kbv-000002kbv-000002,0.020
3,-3,1950-01-04,4,kbv-000003kbv-000003,0.030
4,-4,1950-01-05,5,kbv-000004kbv-000004,0.040
Not aligning is much faster than aligning.
Then, as most of the time is spent fetching the rows (that's why the timeout hits in the middle of writing the output file), try fetching more rows at a time than the default 1000. You will need to play with the value, depending on the network settings at your site, until you find your best value:
-v ROWS_AT_A_TIME=10000
Once you're happy with the tested output, try this command (change the SELECT for your needs, of course ....):
marco ~/1/Vertica/supp $ vsql -A -F ',' -P footer \
-v ON_ERROR_STOP=ON -v ROWS_AT_A_TIME=10000 -o one_million_rows.csv \
-c "select * from one_million_rows"
marco ~/1/Vertica/supp $ wc -l one_million_rows.csv
1000001 one_million_rows.csv
The table actually contains one million rows. Note the line count in the file: 1,000,001. That's with the header line included but the footer (1000000 rows) removed.
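To tie this back to the TL;DR: once ON_ERROR_STOP is set, the batch file's job is just to test the return code before running anything else. A shell sketch of the full pipeline (my_view and output.csv are placeholders; on Windows, test %errorlevel% the same way, immediately after vsql):
vsql -A -F ',' -P footer -v ON_ERROR_STOP=ON -v ROWS_AT_A_TIME=10000 \
  -o output.csv -c "select * from my_view"
if [ $? -ne 0 ]; then
  echo "vsql failed, e.g. with ERROR 3326" >&2
  exit 1
fi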

error while executing lua script for redis server

I was following this tutorial to try out a simple Lua script:
http://www.redisgreen.net/blog/2013/03/18/intro-to-lua-for-redis-programmers/
I created a simple hello.lua file with these lines
local msg = "Hello, world!"
return msg
And I tried running this simple command:
EVAL "$(cat /Users/rsingh/Downloads/hello.lua)" 0
And I am getting this error:
(error) ERR Error compiling script (new function): user_script:1: unexpected symbol near '$'
I can't find what is wrong here, and I haven't been able to find anyone else who has come across this.
Any help would be deeply appreciated.
Your problem comes from the fact you are executing this command from an interactive Redis session:
$ redis-cli
127.0.0.1:6379> EVAL "$(cat /path/to/hello.lua)" 0
(error) ERR Error compiling script (new function): user_script:1: unexpected symbol near '$'
Within such a session you cannot use common command-line tools like cat et al. (here cat is used as a convenient way to get the content of your script in-place). In other words: you send "$(cat /path/to/hello.lua)" as a plain string to Redis, which is not Lua code (of course), and Redis complains.
To execute this sample you must stay in the shell:
$ redis-cli EVAL "$(cat /path/to/hello.lua)" 0
"Hello, world!"
If you are coming from Windows and trying to run a Lua script, you should use this format:
redis-cli --eval script.lua
Run this from the folder where your script is located; it will load a multi-line file and execute it.
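For instance (a generic sketch, not from the original answer): with --eval, any KEYS and ARGV values are passed on the command line, separated by a single comma:
redis-cli --eval hello.lua
redis-cli --eval myscript.lua key1 key2 , arg1 arg2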
On the off chance that anyone's come to this from Windows instead, I found I had to do a lot of juggling to achieve the same effect. I had to do this:
echo "local msg = 'Hello, world!'; return msg" > hello.lua
for /F "delims=" %i in ('type hello.lua') do @set cmd=%i
redis-cli eval "%cmd%" 0
... if you want it saved as a file, although you'll have to have all the content on one line. If you don't, just roll the content into a set command:
set cmd="local msg = 'Hello, world!'; return msg"
redis-cli eval "%cmd%" 0

Best way to search the path in shell

I've got a small script called "onewhich". Its purpose is to behave like which, except that it will only give the FIRST occurrence of any executables specified as options, as found in the order they'd appear in the path.
So for example, if my path is /opt/bin:/usr/bin:/bin, and I have both /opt/bin/runme and /usr/bin/runme, then the command onewhich runme would return /opt/bin/runme.
But if I also have a /usr/bin/doit, then the command onewhich doit runme would return /usr/bin/doit instead.
The idea is to walk through the path, check for each executable specified, and if it exists, show it and exit.
Here's the script so far.
#!/bin/sh
for what in "$@"; do
  for loc in `echo "${PATH}" | awk -vRS=: 1`; do
    if [ -f "${loc}/${what}" ]; then
      echo "${loc}/${what}"
      exit 0
    fi
  done
done
exit 1
The problem is, I want to be better about PATH directories with special characters. Every second shell question here on StackOverflow talks about how bad it is to parse paths with tools like awk and sed. There's even a bash faq entry about it. (Proviso: I'm not using bash for this, but the recommendation is still valid.)
So I tried rewriting the script to separate the paths in a pipe, like this:
#!/bin/sh
for what in "$@"; do
  echo "${PATH}" | awk -vRS=: 1 | while read loc ; do
    if [ -f "${loc}/${what}" ]; then
      echo "${loc}/${what}"
      exit 0
    fi
  done
done
exit 1
I'm not sure if this gives me any real advantage (since $loc is still inside quotes), but it also doesn't work because for some reason, the exit 0 seems to be ignored. Or ... it exits something (the sub-shell with the while loop that terminates the pipe, maybe), but the script exits with a value of 1 every time.
What's a better way to step through directories in ${PATH} without the risk that special characters will confuse things?
Alternately, am I reinventing the wheel? Is there maybe a way to do this that's built in to existing shell tools?
This needs to run in both Linux and FreeBSD, which is why I'm writing it in Bourne instead of bash.
Thanks.
This doesn't directly answer your question, but does eliminate the need to parse PATH at all:
onewhich () {
  for what in "$@"; do
    which "$what" 2>/dev/null && break
  done
}
This just calls which on each command on the input list until it finds a match.
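For instance, with the PATH layout from the question (a hypothetical session):
$ onewhich doit runme
/usr/bin/doit
Note that this inherits which's exact behaviour instead of reimplementing the PATH walk; see the caveats about which near the end of this thread.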
To parse PATH, you can simply set IFS=':'.
if [ "${IFS:-x}" = "${IFS-x}" ]; then
# Only preserve the value of IFS if it is currently set
OLDIFS=$IFS
fi
IFS=":"
for f in $PATH; do # Do not quote $PATH, to allow word splitting
echo $f
done
if [ "${OLDIFS:-x}" = "${OLDIFS-x}" ]; then
IFS=$OLDIFS
fi
The above will fail if any of the directories in PATH actually contain colons.
Your first method looks to me as if it should work. In practical terms, if it's really the $PATH you'll be searching, it's unlikely you'll have spaces and newlines embedded in directories there. If you do, it's probably time to refactor.
But still, I don't think you're at risk of bad names clobbering your loop, since you're wrapping variables in quotes. At worst, I suspect you might miss the odd valid executable, but I can't see how the script would generate errors. (That said, I don't actually see how it would miss valid executables either, and I haven't tested; I'm just saying I don't see problems at first glance.)
As for your second question, about the loop, I think you've hit the nail on the head. When you run a pipe like this | that | while condition; do things; done, the while loop runs in its own shell at the end of the pipe. Exiting that shell may terminate the actions of the pipe, but that only brings you back to the parent shell, which has its own thread of execution that terminates with exit 1.
As for a better way to do this, I would consider which.
#!/bin/sh
for what in "$@"; do
  which "$what"
done | head -1
And if you really want the exit values as well:
#!/bin/sh
for what in "$@"; do
  which "$what" && exit 0
done
exit 1
The second might even use fewer resources, as it doesn't have to open a file handle and pipe through head.
You can also split your path using IFS. For example, if you wanted to wrap your loops the other way around, you could do this:
#!/bin/sh
IFS=":"
for loc in $PATH; do
  for what in "$@"; do
    if [ -x "$loc"/"$what" ]; then
      echo "$loc"/"$what"
      exit 0
    fi
  done
done
exit 1
Note that under normal circumstances, you might want to save the old value of $IFS, but you seem to be doing things in a stand-alone script, so the "new" value gets thrown out when the script exits.
All the above code is untested. YMMV.
Another way to get around the need to parse PATH at all is to run the built-in type command in a new shell with a stripped environment (i.e. there are simply no functions or aliases to look up; cf. env -i sh -c 'type cmd 2>/dev/null').
# using `cmd` instead of $(cmd) for portability
onewhich() {
  ec=0  # exit code
  for cmd in "$@"; do
    command -p env -i PATH="$PATH" sh -c '
      export LC_ALL=C LANG=C
      cmd="$1"
      path="`type "$cmd" 2>/dev/null`"
      if [ X"$path" = "X" ]; then
        printf "%s\n" "error: command \"${cmd}\" not found in PATH" 1>&2
        exit 1
      else
        case "$path" in
          *\ /*)
            # strip the "cmd is " prefix, keeping the path
            path="/${path#*/}"
            printf "%s\n" "$path";;
          *)
            printf "%s\n" "error: no disk file: $path" 1>&2
            exit 1;;
        esac
        exit 0
      fi
    ' _ "$cmd"
    [ $? != 0 ] && ec=1
  done
  return $ec  # 1 if at least one lookup failed
}
onewhich awk ls sed
onewhich builtin
onewhich if
Since which on success returns two full command paths if two commands are specified as arguments, exit 0 in the first onewhich script above aborts the program prematurely. In addition, if two commands are specified as arguments to which, the exit code of which is set to 1 even if only one command lookup failed (cf. which awk sedxyz ls; echo $?). To mimic this behaviour of the which command it is necessary to toggle on/off two variables (cnt and nomatches below).
onewhich() (
  IFS=":"
  nomatches=0
  for cmd in "$@"; do
    cnt=0
    for loc in $PATH ; do
      if [ $cnt = 0 ] && [ -x "$loc"/"$cmd" ]; then
        echo "$loc"/"$cmd"
        cnt=1
      fi
    done
    [ $cnt = 0 ] && nomatches=1
  done
  [ $nomatches = 1 ] && exit 1 || exit 0  # exit 1: at least one cmd was not in PATH
)
onewhich awk ls sed
onewhich awk lsxyz sed
onewhich builtin
onewhich if

How can I tell from within a shell script if the shell that invoked it is an interactive shell?

I'm trying to set up a shell script that will start a screen session (or rejoin an existing one) only if it is invoked from an interactive shell. The solution I have seen is to check if $- contains the letter "i":
#!/bin/sh -e
echo "Testing interactivity..."
echo 'Current value of $- = '"$-"
if [ `echo \$- | grep -qs i` ]; then
  echo interactive;
else
  echo noninteractive;
fi
However, this fails, because the script is run by a new noninteractive shell, invoked as a result of the #!/bin/sh at the top. If I source the script instead of running it, it works as desired, but that's an ugly hack. I'd rather have it work when I run it.
So how can I test for interactivity within a script?
Give this a try and see if it does what you're looking for:
#!/bin/sh
if [ $_ != $0 ]
then
  echo interactive;
else
  echo noninteractive;
fi
The underscore ($_) expands to the absolute pathname used to invoke the script. The zero ($0) expands to the name of the script. If they're different then the script was invoked from an interactive shell. In Bash, subsequent expansion of $_ gives the expanded argument to the previous command (it might be a good idea to save the value of $_ in another variable in order to preserve it).
From man bash:
0   Expands to the name of the shell or shell script. This is set at shell initialization. If bash is invoked with a file of commands, $0 is set to the name of that file. If bash is started with the -c option, then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the file name used to invoke bash, as given by argument zero.
_   At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command, after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file currently being checked.
$_ may not work in every POSIX-compatible sh, although it probably works in most.
$PS1 will only be set if the shell is interactive. So this should work:
if [ -z "$PS1" ]; then
echo noninteractive
else
echo interactive
fi
Try tty:
if tty 2>&1 | grep not ; then echo "Not a tty"; else echo "a tty"; fi
From man tty:
The tty utility writes the name of the terminal attached to standard
input to standard output. The name that is written is the string
returned by ttyname(3). If the standard input is not a terminal, the
message ``not a tty'' is written.
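tty also has a silent flag that makes the grep unnecessary (a variant not in the original answer; POSIX marks -s as obsolescent in favour of test -t):
if tty -s ; then echo "a tty" ; else echo "Not a tty" ; fi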
You could try using something like...
if [[ -t 0 ]]
then
  echo "Interactive...say something!"
  read line
  echo $line
else
  echo "Not Interactive"
fi
The "-t" switch in the test field checks if the file descriptor given matches a terminal (you could also do this to stop the program if the output was going to be printed to a terminal, for example). Here it checks if the standard in of the program matches a terminal.
Simple answer: don't run those commands inside ` ` or [ ].
There is no need for either of those constructs here.
Obviously I can't be sure what you expected
[ `echo \$- | grep -qs i` ]
to be testing, but I don't think it's testing what you think it's testing.
That code will do the following:
Run echo \$- | grep -qs i inside a subshell (due to the ` `).
Capture the subshell's standard output.
Replace the original ` ` expression with a string containing that output.
Pass that string as an argument to the [ command or built-in (depending on your shell).
Produce a successful return code from [ only if that string was nonempty (assuming the string didn't look like an option to [).
Some possible problems:
The -qs options to grep should cause it to produce no output, so I'd expect [ to be testing an empty string regardless of what $- looks like.
It's also possible that the backslash is escaping the dollar sign and causing a literal 'dollar minus' (rather than the contents of a variable) to be sent to grep.
On the other hand, if you removed the [ and backticks and instead said
if echo "$-" | grep -qs i ; then
then:
your current shell would expand "$-" with the value you want to test,
echo ... | would send that to grep on its standard input,
grep would return a successful return code when that input contained the letter i,
grep would print no output, due to the -qs flags, and
the if statement would use grep's return code to decide which branch to take.
Also:
no backticks would replace any commands with the output produced when they were run, and
no [ command would try to replace the return code of grep with some return code that it had tried to reconstruct by itself from the output produced by grep.
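Put together as a script, the corrected test looks like this (a sketch):
#!/bin/sh
if echo "$-" | grep -qs i ; then
  echo interactive
else
  echo noninteractive
fi
Bear in mind the caveat from the question still applies: $- here describes the new shell started by #!/bin/sh, not the shell that invoked the script.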
For more on how to use the if command, see this section of the excellent BashGuide.
If you want to test the value of $- without forking an external process (e.g. grep) then you can use the following technique:
if [ "${-%i*}" != "$-" ]
then
echo Interactive shell
else
echo Not an interactive shell
fi
This deletes any match for i* from the value of $- and then checks whether that made any difference. For example, if $- is himBH, then ${-%i*} is h, which differs from $-, so the shell is interactive.
(The ${parameter/from/to} construct (e.g. [ "${-//[!i]/}" = "i" ] is true iff interactive) can be used in Bash scripts but is not present in Dash, which is /bin/sh on Debian and Ubuntu systems.)

unix at command pass variable to shell script?

I'm trying to setup a simple timer that gets started from a Rails Application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord.
Here's my test lines in Rails
output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE`
return render :text => output
Then my parking_timer.sh looks like this
#!/bin/sh
~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1
echo "All Done"
Finally, ParkingTimer.rb reads the passed variable with
ARGV.each do|a|
puts "Argument: #{a}"
end
The problem is that the Unix command at doesn't seem to like variables and only wants to deal with filenames. I either get one of two errors, depending on how I position the quotes.
If I put quotes around the right hand side like so
... "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE"
I get,
-bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory
If I leave the quotes out, I get:
at: garbled time
This is all happening on a Mac OS 10.6 box running Rails 2.3 & Ruby 1.8.6
I've already messed around with BackgrounDrb, and decided it's a total PITA. I need to be able to cancel the job at any time before it is due.
After playing around with irb a bit, here's what I found.
The backtick operator invokes the shell after ruby has done any interpretation necessary. For my test case, the strace output looked something like this:
execve("/bin/sh", ["sh", "-c", "echo at 12:57 < /etc/fstab"], [/* 67 vars */]) = 0
Since we know what it's doing, let's take a look at how your command will be executed:
/bin/sh -c "at 12:57 < RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE"
That looks very odd. Do you really want to redirect parking_timer.sh, the script itself, as input into the at command?
What you probably ultimately want is something like this:
/bin/sh -c "RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE | at 12:57"
Thus, the output of the parking_timer.sh command will become the input to the at command.
So, try the following:
output = `#{Rails.root}/lib/parking_timer.sh STRING_VARIABLE | at #{(Time.now + 60).strftime("%H:%M")}`
return render :text => output
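As an aside (my own note, not part of the original answer): since at takes the commands to run from its standard input, you can also schedule the script itself for later execution by piping the command line, rather than the script's output, into at:
echo "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE" | at 12:57
Which form you want depends on whether the script's work should happen now (pipe its output to at) or at the scheduled time (pipe the command line to at).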
You can always use strace or truss to see what's happening. For example:
strace -o strace.out -f -ff -p $IRB_PID
Then grep '^exec' strace.out* to see where the command is being executed.
