I've been using GNU parallel and I want to keep output order (--keep-order), grouped by jobs (--group), but also with stdout and stderr interleaved in the order they were written. Right now, the grouping options print a job's entire stdout first and only afterwards its stderr.
As an example, is there any way to make these two commands give the same output?
seq 4 | parallel -j0 'sleep {}; echo -n start{}>&2; sleep {}; echo {}end'
seq 4 | parallel -j0 'sleep {}; echo -n start{} ; sleep {}; echo {}end'
As per the comment on the other answer, to keep the output ordered, simply have parallel's shell invocation redirect stderr to stdout:
parallel myfunc '2>&1'
E.g.,
parallel -j8 eval \{1} -w1 \{2} '2>&1' ::: "traceroute -a -f9" traceroute6 ::: ordns.he.net one.one.one.one google-public-dns-a.google.com
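Applied to the question's own commands, a minimal sketch (this assumes parallel runs each job through a shell, which it does by default) is to merge the streams before anything is written:
seq 4 | parallel -j0 'exec 2>&1; sleep {}; echo -n start{} >&2; sleep {}; echo {}end'
With both streams merged into the stdout capture, the --keep-order output preserves the start/end interleaving within each job.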
You cannot do that if you still want stderr and stdout to be separated.
The reason for this is that stderr and stdout are buffered into two different temporary files, so their relative ordering is lost.
But maybe you can explain a bit more about what you need this for. In that case there might be a solution.
Assuming that you don't have to use GNU parallel, and the main requirements are parallel execution with ordered output of both stderr and stdout, we can create a solution that allows for the following example usage (and also provides the return code). You will have the results of the executions in a list, where each list element is in turn a list of three strings, indexed as 0=stdout, 1=stderr and 2=return code.
source mapfork.sh
ArgsMap=("-Pn" "-p" "{}" "{}")
Args=("80" "google.com" "25" "tutanota.com" "80" "apa bepa")
declare -a Results="$(mapfork nmap "(${ArgsMap[*]@Q})" "(${Args[*]@Q})")"
So, in order to print, for example, the stderr results of the third destination ("apa bepa"), you can do:
declare -a res3="${Results[2]}"
declare -p res3
# declare -a res3=([0]=$'Starting Nmap 7.70 ( https://nmap.org ) at 2019-06-21 18:55 CEST\nNmap done: 0 IP addresses (0 hosts up) scanned in 0.09 seconds' [1]=$'Failed to resolve "apa bepa".\nWARNING: No targets were specified, so 0 hosts scanned.' [2]="0")
printf '%b\n' "${res3[1]}"
mapfork.sh is shown below. It is a bit complicated, but its parts have been explained in other answers, so I won't repeat the details here:
Capture both stdout and stderr in Bash [duplicate]
How can I make an array of lists (or similar) in bash?
#!/bin/bash
# reference: https://stackoverflow.com/questions/13806626/capture-both-stdout-and-stderr-in-bash
nullWrap(){
  # run one command, then write "<index>:<serialized result array>" NUL-terminated to the fifo
  local -i i; i="$1"
  local myCommand="$2"
  local -a myCommandArgs="$3"
  local myfifo="$4"
  local stderr
  local stdout
  local stdret
  # capture stdout, stderr and the return code in one pass
  . <(\
    { stderr=$({ stdout=$(eval "$myCommand ${myCommandArgs[*]@Q}"); stdret=$?; } 2>&1 ;\
               declare -p stdout >&2 ;\
               declare -p stdret >&2) ;\
      declare -p stderr;\
    } 2>&1)
  local -a Arr=("$stdout" "$stderr" "$stdret")
  printf "${i}:%s\u0000" "(${Arr[*]@Q})" > "$myfifo"
}
mapfork(){
  local command
  command="$1"
  local -a CommandArgs="$2"
  local -a Args="$3"
  local -a PipedArr
  local -i i
  local myfifo=$(mktemp /tmp/temp.XXXXXXXX)
  rm "$myfifo"
  mkfifo "$myfifo"
  local -a placeHolders=()
  for ((i=0;i<${#CommandArgs[@]};i++)); do
    [[ "${CommandArgs[$i]}" =~ ^\{\}$ ]] && placeHolders+=("$i")
  done
  for ((i=0;i<${#Args[@]};i+=0)); do
    # if we have placeholders in CommandArgs we need to take args
    # from Args to replace them.
    if [[ ${#placeHolders[@]} -gt 0 ]]; then
      for ii in "${placeHolders[@]}"; do
        CommandArgs["$ii"]="${Args[$i]}"
        i+=1
      done
    fi
    nullWrap "$i" "$command" "(${CommandArgs[*]@Q})" "$myfifo" &
  done
  # collect the NUL-delimited results; order is restored via the index prefix
  for ((i=0;i<${#Args[@]};i+=${#placeHolders[@]})); do
    local res
    res=$(read -d $'\u0000' -r temp <"$myfifo" && printf '%b' "$temp")
    local -i resI
    resI="${res%%:*}"
    PipedArr[$resI]="${res#*:}"
  done
  # reference: https://stackoverflow.com/questions/41966140/how-can-i-make-an-array-of-lists-or-similar-in-bash
  printf '%s' "(${PipedArr[*]@Q})"
}
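For completeness, a sketch of walking over every destination's results from the example above (reusing the Results array exactly as produced there):
# each element of Results is itself a serialized array: 0=stdout, 1=stderr, 2=return code
for r in "${Results[@]}"; do
    declare -a res="$r"    # unpack one destination's result list
    printf 'stdout: %s\nstderr: %s\nrc: %s\n\n' "${res[0]}" "${res[1]}" "${res[2]}"
done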
We are using the browserless Docker image to read content on certain websites. For a few websites, CPU usage increases in proportion to the number of open sessions.
Specifics
Docker image: browserless/chrome:1.35-chrome-stable
Script to reproduce once the container is up and running:
#!/bin/bash
HOST='localhost:3000'
curl_new_session() {
  echo $(curl -XPOST http://$HOST/webdriver/session -d '{"desiredCapabilities":{"browserName":"chrome","platform":"ANY","chromeOptions":{"binary":"","args":["--window-size=1400,900","--no-sandbox","--headless"]},"goog:chromeOptions":{"args":["--window-size=1400,900","--no-sandbox","--headless"]}}}' | jq '.sessionId') | tr -d '"'
}
# we open the session and keep it running
curl_visit_url() {
  local id=$1
  local url=$2
  echo "http://$HOST/webdriver/session/$id/url"
  echo $(curl http://$HOST/webdriver/session/$id/url -d '{"url":"'$url'"}' | jq '')
}
for i in {1..5}
do
  id=$(curl_new_session)
  echo $id
  curl_visit_url $id 'http://monday.com' &
  sleep 0.5
  echo '.'
done
This specific site (monday.com) uses too much CPU, but this can happen with other sites as well.
Question
Considering we encounter such websites from time to time, what's the best way to handle them?
We want the sessions to be kept alive and not closed after opening them.
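The thread doesn't settle on an answer, but one blunt mitigation (my assumption, not something from the browserless documentation) is to cap the container's CPU when starting it, so one greedy site cannot starve the host:
# limit the browserless container to roughly two CPUs
docker run -d --cpus="2" -p 3000:3000 browserless/chrome:1.35-chrome-stable
Sessions stay alive; pages on CPU-hungry sites just render more slowly instead of saturating the machine.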
I am not familiar with Lua scripting, but I need some help.
Basically, the current Lua script will receive a structure.
That structure has an address parameter holding two indexed addresses (IPv6 and IPv4).
The Lua script needs to implement the following case:
ping the IPv6 address and store the result in a local variable;
if the ping succeeds, connect via uv.tcp_connect using the passed IPv6 address;
otherwise, check the same for the IPv4 address and try uv.tcp_connect with it.
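For reference, the same fallback expressed as a plain shell script (a sketch; addr6 and addr4 are hypothetical placeholders for the two indexed addresses):
#!/bin/sh
# try IPv6 first; fall back to IPv4; print which family answered
addr6="$1"
addr4="$2"
if ping -q -c1 -6 "$addr6" >/dev/null 2>&1; then
    echo "IPv6: true"    # a Lua caller would uv.tcp_connect to addr6 here
elif ping -q -c1 "$addr4" >/dev/null 2>&1; then
    echo "IPv4: true"    # otherwise uv.tcp_connect to addr4
else
    echo "false"
fi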
I am using an online Lua editor; there it returns nil.
local results = load('ping -q -c1 -6 localhost 2>&1 >/dev/null && printf "IPv6: true" || (ping -q -c1 www.google.com 2>&1 >/dev/null && printf "IPv4 true" || printf "false")')
print(results)
Output is: nil
And if I use the following in the online Lua editor:
local handler = io.popen("ping -c 3 -i 0.5 www.google.com")-- wrong here.
local response = handler:read("*a")
print(response)
Output error:
lua: main.lua:3: expected near '"ping -c 3 -i 0.5 www.google.com"'
Kindly advise; am I missing something above?
To store the output of system commands, I suggest io.popen().
An example of a conditional ping that tries IPv6 first and, if that fails, IPv4...
> code.cmd
-- cmd(shell)
return function(shell)
return io.popen(shell, 'r'):read('a+')
end
> results={}
> results.ping=load(code.cmd)()('ping -q -c1 -6 localhost 2>&1 >/dev/null && printf "IPv6: true" || (ping -q -c1 localhost 2>&1 >/dev/null && printf "IPv4 true" || printf "false")')
> print(results.ping)
IPv6: true
...typed in a Lua console.
EDIT
Online Lua environments don't support the above code!
I started a Docker container based on an image which has a file run.sh in it. Within a shell script, I use docker exec as shown below:
docker exec <container-id> sh /test.sh
test.sh completes execution but docker exec does not return until I press Ctrl+C. As a result, my shell script never ends. Any pointers to what might be causing this?
I could get it working by adding the -it parameters:
docker exec -it <container-id> sh /test.sh
Mine works like a charm with this command. Maybe you only forgot the path to the binary (/bin/sh)?
docker exec 7bd877d15c9b /bin/bash /test.sh
File location:
/test.sh
File content:
#!/bin/bash
echo "Hi"
echo
echo "This works fine"
sleep 5
echo "5"
Output:
ArgonQQ@Terminal ~ docker exec 7bd877d15c9b /bin/bash /test.sh
Hi
This works fine
5
ArgonQQ@Terminal ~
My case is a script a.sh with content like:
php test.php &
If I execute it like
docker exec container1 a.sh
it also never returned.
After half a day of googling and trying, I changed a.sh to
php test.php >/tmp/test.log 2>&1 &
It works!
So it seems related to stdin/stdout/stderr.
>/tmp/test.log 2>&1
Please try.
And please note that my test.php is an endless-loop script that monitors a specified process; if the process is down, it restarts it. So test.php will never exit.
As described here, this "hanging" behavior occurs when you have processes that keep stdout or stderr open.
To prevent this from happening, each long-running process should:
be executed in the background, and
close both stdout and stderr or redirect them to files or /dev/null.
I would therefore make sure that any processes already running in the container, as well as the script passed to docker exec, conform to the above.
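For example, a minimal sketch of a well-behaved script (mydaemon stands in for whatever long-running process you launch):
#!/bin/sh
# background the long-running process and detach all three standard streams,
# so the docker exec session has nothing left to wait on
mydaemon </dev/null >/tmp/mydaemon.log 2>&1 &
echo "daemon started"    # docker exec returns as soon as this script exits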
OK, I got it.
docker stop a590382c2943
docker start a590382c2943
then things will be OK:
docker exec -ti a590382c2943 echo "5"
will return immediately, whether you add -it or not.
Actually, in my program the daemon keeps stdin, stdout and stderr open. So I changed my Python daemon as follows, and things work like a charm:
if __name__ == '__main__':
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # exit first parent
            os._exit(0)
    except OSError, e:
        print "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
        os._exit(0)
    # decouple from parent environment
    #os.chdir("/")
    os.setsid()
    os.umask(0)
    # std in/out/err, redirect
    si = file('/dev/null', 'r')
    so = file('/dev/null', 'a+')
    se = file('/dev/null', 'a+', 0)
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())
    # do second fork
    while(True):
        try:
            pid = os.fork()
            if pid == 0:
                serve()
            if pid > 0:
                print "Server PID %d, Daemon PID: %d" % (pid, os.getpid())
                os.wait()
                time.sleep(3)
        except OSError, e:
            #print "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
            os._exit(0)
I am trying to output the following tcpdump grep expression to a file:
tcpdump -vvvs 1024 -l -A tcp port 80 | grep -E 'X-Forwarded-For:' --line-buffered | awk '{print $2}'
I understand it is related to the line-buffered option, that sends the output to stdin. However, if I don't use --line-buffered I don't get any output at all from my tcpdump.
How can I use grep so that it will send my output directly to stdout / file in this case ?
I am trying to output the following tcpdump grep expression to a file
Then redirect the output of the last command in the pipeline to the file:
tcpdump -vvvs 1024 -l -A tcp port 80 | grep -E 'X-Forwarded-For:' --line-buffered | awk '{print $2}' >file
I understand it is related to the line-buffered option, that sends the output to stdin.
No, that's not what --line-buffered does:
$ man grep
...
--line-buffered
Force output to be line buffered. By default, output is line
buffered when standard output is a terminal and block buffered
otherwise.
so it doesn't change where the output goes, it just changes when the data is actually written to the output descriptor if that descriptor is not a terminal. It's not a terminal in this case - it's a pipe - so, by default, the output is block buffered. If grep writes 4 lines of output, and that's less than a full buffer block (buffer blocks, in this context, are typically 4K bytes in most modern UN*Xes and on Windows, so it's likely that those 4 lines won't fill the buffer), those lines will not immediately be written by grep to the pipe, so they won't show up immediately.
--line-buffered changes that behavior, so that each line is written to the pipe as it's generated, and awk sees it sooner.
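To see the effect in isolation, here is a toy pipeline (my own illustration, not from the question). Without --line-buffered, nothing appears until grep exits about five seconds in; with it, match1 prints immediately:
{ echo match1; sleep 5; echo match2; } | grep --line-buffered match | cat
The trailing cat only serves to make grep's stdout a pipe rather than a terminal, which is what triggers block buffering in the first place.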
You're using -l with tcpdump, which has the same effect, at least on UN*X:
$ man tcpdump
...
-l Make stdout line buffered. Useful if you want to see the data
while capturing it. E.g.,
tcpdump -l | tee dat
or
tcpdump -l > dat & tail -f dat
Note that on Windows,``line buffered'' means ``unbuffered'', so
that WinDump will write each character individually if -l is
specified.
-U is similar to -l in its behavior, but it will cause output to
be ``packet-buffered'', so that the output is written to stdout
at the end of each packet rather than at the end of each line;
this is buffered on all platforms, including Windows.
So the pipeline, as you've written it, will cause grep to see each line that tcpdump prints as soon as tcpdump prints it, and cause awk to see each of those lines that contains "X-Forwarded-For:" as soon as grep sees it and matches it.
However, if I don't use --line-buffered I don't get any output at all from my tcpdump.
You'll see it eventually, as long as grep produces a buffer's worth of output; however, that could take a very long time. --line-buffered causes grep to write out each line as it's produced, so it shows up as soon as grep produces it, rather than when the buffer is full.
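As an aside not raised in the thread: GNU coreutils' stdbuf can impose line buffering on filters that have no such flag of their own, e.g. on awk when you redirect its output to a file and want lines to land there promptly (this works for dynamically linked programs that use C stdio):
tcpdump -vvvs 1024 -l -A tcp port 80 | grep -E 'X-Forwarded-For:' --line-buffered | stdbuf -oL awk '{print $2}' > file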
How can I use grep so that it will send my output directly to stdout / file in this case ?
grep is sending its (standard) output to awk, which is presumably what you want; you're extracting the second field from grep's output and printing only that.
So you don't want grep to send its (standard) output directly to the terminal or to a file, you want it to send its output to awk and have awk send its (standard) output there. If you want the output to be printed on your terminal, your command is doing the right thing; if you want it sent to a file, redirect the standard output of awk to that file.
I am attempting to use popen to pipe a string containing multiple quotes to netcat. I have a Python command that works fine but I am turning it into an nmap script. I am not as familiar with Lua.
Python version:
python -c 'print "\x1b%-12345X#PJL FSDIRLIST NAME=\"0:\\..\\..\\..\\\" ENTRY=1 COUNT=999999\x0d\x0a\x1b%-12345X\x0d\x0a"' | nc 192.168.0.1 9100
Lua attempted version:
local handle = assert(io.popen("python -c 'print \"\x1b%-12345X#PJL FSDIRLIST NAME=\"0:\\..\\..\\..\\\" ENTRY=1 COUNT=999999\x0d\x0a\x1b%-12345X\x0d\x0a\"' | nc " .. host .. " " .. port, "r"))
This results in the following error:
File "<string>", line 1
print "2345X#PJL FSDIRLIST NAME="0:\..\..\..\" ENTRY=1 COUNT=999999
Is there a way to organize that string so that Lua will accept it?
Try using a long string, in which Lua does not interpret backslash escape sequences, so the \x1b and \\ sequences are passed through to the shell verbatim:
[[python -c 'print "\x1b%-12345X#PJL FSDIRLIST NAME=\"0:\\..\\..\\..\\\" ENTRY=1 COUNT=999999\x0d\x0a\x1b%-12345X\x0d\x0a"' | nc 192.168.0.1 9100]]