I'm trying to use GNU Parallel to help me process some remote files that I don't want to save locally.
My command looks something like this:
python list_files.py | \
parallel -j5 'aws s3 cp s3://s3-bucket/{} -' | \
parallel -j5 --round --pipe -l 5000 "python process_and_print.py"
process_and_print.py prints output for some input lines, but that output doesn't reach stdout immediately as I expected; instead, I only see the output after the process has finished. If I remove the --round parameter, it all works as expected.
Where does all that data get saved? Do I have a way to print it to stdout, line by line, without buffering?
Where does all that data get saved?
All buffered output from GNU Parallel is stored in temporary files in $TMPDIR / --tmpdir, which defaults to /tmp. You cannot see the files, as they are removed immediately after creation (but kept open), so that you do not have to clean up if GNU Parallel is killed.
Do I have a way to print it to stdout, line by line?
Use --line-buffer.
Without buffering?
-u disables buffering altogether, but then you cannot guarantee line-by-line output.
--line-buffer buffers a full line in memory from version 20170822 and thus does not buffer in /tmp.
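Applied to the pipeline from the question, that could look something like this (a sketch; I have not tested --round together with --line-buffer on your data, and list_files.py / process_and_print.py are your own scripts):
python list_files.py | \
parallel -j5 'aws s3 cp s3://s3-bucket/{} -' | \
parallel -j5 --round --pipe --line-buffer -l 5000 "python process_and_print.py"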
Related
If you run GNU parallel with --joblog path/to/logfile and then delete a line from said logfile while parallel is running, GNU parallel is no longer able to append future completed jobs to it.
Execute this MWE:
#!/usr/bin/bash
parallel -j1 -n0 --joblog log sleep 1 ::: $(seq 10) &
sleep 5 && sed -i '$ d' log
If you tail -f log prior to execution, you can see that parallel keeps writing to this file. However, if you cat log after 10 seconds, you will see that nothing was written to the actual file now on disk after the third entry or so.
What's the reason behind this? Is there a way to delete something from the file and have GNU parallel be able to still write to it?
Some background as to why this happened:
Using GNU parallel, I started a few jobs on remote machines with --sshloginfile. I then needed to pkill a few jobs on one of the machines because a colleague needed to use it (and I subsequently removed the machine from the sshloginfile so that parallel wouldn't reuse it for new runs). If you pkill those processes started on the remote machine, they get an Exitval of 0 (it looks like they finished without issues; you can't tell that they were killed). I wanted to remove them immediately from the joblog so that when I restart parallel --resume later, parallel can have a look at the joblog and determine what's missing.
Turns out, this was a bad idea, as now my joblog is useless.
While @MarkSetchell is absolutely right in his comment, the root problem here is that man sed is lying:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied)
sed -i does not edit files in place.
What it does is make a temporary file in the same dir, copy the input file to the temporary file while doing the editing, and finally rename the temporary file to the input file's name. Similar to this:
sed '$ d' log > sedXxO11P
mv sedXxO11P log
It is clear that the original log and sedXxO11P have different inodes - let us call them ino1 and ino2. GNU Parallel has ino1 open and really does not know about the existence of ino2. GNU Parallel will happily append to ino1 completely unaware that when it closes the file, the file will vanish because it has already been unlinked.
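You can watch the inode change yourself with ls -i (a quick demonstration; the inode numbers below are only illustrative):
$ echo foo > log
$ ls -i log
1835079 log
$ sed -i 's/foo/bar/' log
$ ls -i log
1835213 log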
So you need to change the content of the file without changing the inode:
#!/usr/bin/bash
seq 10 | parallel -j1 -n0 --joblog log sleep 1 &
sleep 5
# Obvious race condition here:
# Anything appended to log before sed is done is lost.
# This can be avoided by suspending parallel while running this
tmp=$RANDOM$$
cp log $tmp
# Rewrite log's content in place: '>log' truncates the existing inode rather
# than creating a new file, so GNU Parallel's open file descriptor still points to it.
(rm $tmp; sed '$ d' >log) < $tmp
wait
cat log
This works right now. But do not expect this to be a supported feature - ever.
Strangely, I had assumed the -f option was for "force", not for "file".
I ran tar -xz because I wanted to see if any files would be overwritten. Now it has extracted all the files but has not returned control back to me. Should I just kill the process? Is it waiting for input?
-f commands tar to read the archive from a file. Without it, it tries to read it from stdin.
You can input Ctrl-C to kill it or Ctrl-D (Ctrl-Z in Windows) to send it EOF (at which point, it'll probably complain about incorrect archive format).
Without an -f option, tar will attempt to read from the tape device given by the TAPE environment variable, or, if TAPE isn't set, from a default built into tar (usually something like /dev/st0, or stdin).
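So to extract a gzipped archive from a file, the usual invocations look like this (archive.tar.gz is just a placeholder name):
tar -xzf archive.tar.gz        # -f: read the archive from the named file
tar -xzf - < archive.tar.gz    # -f -: read the archive explicitly from stdin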
I'm running a job on a cluster (using PBS) that runs out of memory. I'm trying to print the memory status for each node separately while my other job is running. I created a shell script and included a call to it from inside my job submission script. But when I submit my job, it gives me a permission denied error on the line that calls the script. I don't understand why I get that error.
Secondly, I was thinking I could put a 'watch free' or 'watch ps aux' in my script file, but now I'm wondering whether that will cause my submitted job to get stuck in the memory-watching script and never get to the main line that calls my parallel program.
In the end, how can I log memory usage in PBS for the jobs I'm submitting? My code is a C++ program using the MRMPI (MPI MapReduce) library.
To see how much memory is being used throughout the job, run qstat -f:
$ qstat -f | grep used
resources_used.cput = 00:02:51
resources_used.energy_used = 0
resources_used.mem = 6960kb
resources_used.vmem = 56428kb
resources_used.walltime = 00:01:26
To examine past jobs you can look in the accounting file. This is located in the server_priv/accounting directory; the default is /var/spool/torque/server_priv/accounting/.
The entries look like this:
09/14/2015 10:52:11;E;202.napali;user=dbeer group=company jobname=intense.sh queue=batch ctime=1442248534 qtime=1442248534 etime=1442248534 start=1442248536 owner=dbeer#napali exec_host=napali/0-2 Resource_List.neednodes=1:ppn=3 Resource_List.nodect=1 Resource_List.nodes=1:ppn=3 session=20415 total_execution_slots=3 unique_node_count=1 end=0 Exit_status=0 resources_used.cput=1989 resources_used.energy_used=0 resources_used.mem=9660kb resources_used.vmem=58500kb resources_used.walltime=995
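For example, to pull the resources_used figures for one past job out of the accounting logs, something along these lines should work (the job id 202.napali and the date-stamped filename are placeholders; accounting files are named YYYYMMDD):
grep '202.napali' /var/spool/torque/server_priv/accounting/20150914 | tr ' ' '\n' | grep resources_used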
NOTE: if your ssh access to computing nodes of the cluster is closed, this method won't work!
This is how I ended up doing this. It might not be the best way but it works:
In summary, I added some short sleep periods between my map and reduce steps by calling the C++ sleep() function. I also wrote a script that ssh's into the nodes my job is running on and gets the memory status on those nodes, writing it to a file (using the 'free' or 'top' commands).
More detailed: in my PBS job script, somewhere before the call to my binary, I added this line:
#this goes in job script, before the call to the job binary:
cat $PBS_NODEFILE > /some/path/nodelist.log
This writes a list of the nodes that my job runs on, into a file.
I have a second script "watchmem.sh":
#!/bin/bash
for i in $(seq 60)
do
while read line;
do
ssh $line 'bash -s' < /some/path/remote.sh "$line"
done < /some/path/nodelist.log
sleep 10
done
This script reads the file nodelist.log that we generated before, ssh's into each node, and calls a third (and last) script, remote.sh, on each of those nodes.
remote.sh contains the commands that we run on every node of our job. In this case it prints the current time and the result of 'free' into separate files for each node:
#remote.sh
echo "Current time : $(date)" >> $1
free >> $1   # 'free' can be replaced with 'top' (e.g. top -b -n 1 for a single batch-mode snapshot)
Comparing the times from these files with the times I'm printing from my binary lets me find out the memory consumption (alloc/dealloc) in each step.
The sleep periods in my job are there to make sure my scripts capture the memory status in between steps. The 'sleep 10' in my script is to avoid unnecessary writes to the file; this period should be comparable to the sleep duration in the main job.
My problem is that I have a bash script that does something and then calls 800 bsub jobs, like this:
pids=
rm -f ~/.count-*
for i in `ls _some_files_`; do
of=~/.count-${i}
bsub -I "grep _something_ $i > $of" &
pids="${!} ${pids}"
done
wait ${pids}
Then the script processes the output files $of and echoes the results.
The trouble is that I got a lot of lines like:
Job <7536> is submitted to default queue <interactive>.
<<Waiting for dispatch ...>>
<<Starting on hostA>>
It's actually the 3 lines above, repeated 800 times. Is there a way of suppressing these LSF lines?
I've tried in the loop above:
bsub -I "grep _something_ $i > $of" &> /dev/null
It does remove the LSF verbosity, but instead of submitting almost all 800 jobs at once and taking less than 4 minutes to run, it submits just a few jobs at a time and I have to wait more than an hour for the script to finish.
AFAIK, LSF's bsub doesn't seem to have an option to suppress all this verbosity. What can I do here?
You can suppress this output by setting the environment variable BSUB_QUIET to any value (including empty) before your bsub. So, before your loop, you could add:
export BSUB_QUIET=
Then if you want to return it back to normal you can clear the variable with:
unset BSUB_QUIET
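In your loop that would look roughly like this (same placeholder names as in your script):
export BSUB_QUIET=   # silences the "Job <...> is submitted ..." messages
pids=
rm -f ~/.count-*
for i in `ls _some_files_`; do
    of=~/.count-${i}
    bsub -I "grep _something_ $i > $of" &
    pids="${!} ${pids}"
done
wait ${pids}
unset BSUB_QUIET     # restore normal bsub output afterwards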
Hope that helps you out.
Have you considered using job dependencies and post-processing the logfiles?
1) Run each "child" job (removing the "-I") and send its output to a separate output file. Each job should be submitted with a job name (see -J); the job names could form an array.
2) Your final job would be dependent on the children finishing (see -w).
Besides running concurrently across the cluster, another advantage of this approach is that your overall process is not susceptible to IO issues.
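A rough sketch of that idea (untested; the job array and done() dependency syntax are standard LSF, but file_list.txt, the job names, and postprocess_counts.sh are placeholders you would adapt):
# submit the children as one job array; $LSB_JOBINDEX is the element index on the execution host
bsub -J "count[1-800]" -o "count.%I.log" \
    'f=$(sed -n "${LSB_JOBINDEX}p" file_list.txt); grep _something_ "$f" > ~/.count-$LSB_JOBINDEX'
# the post-processing job only starts once every element of the array has finished
bsub -w 'done("count")' ./postprocess_counts.sh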
Is there a way to run shell commands without output buffering?
For example, hexdump file | ./my_script will only pass input from hexdump to my_script in buffered chunks, not line by line.
Actually, I want to know a general solution for making any command unbuffered.
Try stdbuf, included in GNU coreutils and thus virtually any Linux distro. This sets the buffer length for input, output and error to zero:
stdbuf -i0 -o0 -e0 command
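For the hexdump example from the question, line-buffered stdout is usually enough, so something like this should do (a sketch, untested):
stdbuf -oL hexdump file | ./my_script    # -oL: line-buffer hexdump's stdout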
The command unbuffer from the expect package disables the output buffering:
Ubuntu Manpage: unbuffer - unbuffer output
Example usage:
unbuffer hexdump file | ./my_script
AFAIK, you can't do it without ugly hacks. Writing to a pipe (or reading from it) automatically turns on full buffering and there is nothing you can do about it :-(. "Line buffering" (which is what you want) is only used when reading/writing a terminal. The ugly hacks do exactly this: they connect a program to a pseudo-terminal, so that the other tools in the pipe read from/write to that terminal in line-buffering mode. The whole problem is described here:
http://www.pixelbeat.org/programming/stdio_buffering/
The page has also some suggestions (the aforementioned "ugly hacks") what to do, i.e. using unbuffer or pulling some tricks with LD_PRELOAD.
You could also use the script command to make the output of hexdump line-buffered (hexdump will be run in a pseudo-terminal, which tricks hexdump into thinking it is writing its stdout to a terminal and not to a pipe).
# cf. http://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe/
stty -echo -onlcr
script -q /dev/null hexdump file | ./my_script # FreeBSD, Mac OS X
script -q -c "hexdump file" /dev/null | ./my_script # Linux
stty echo onlcr
You can use grep's (or egrep's) --line-buffered option to solve this; no other tools are needed.
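For instance, if there is already a grep stage in your pipeline, something like this keeps the output flowing line by line (a sketch):
tail -f /var/log/syslog | grep --line-buffered "error" | ./my_script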