Ordering Output in MPI - stdout

In a simple MPI program I have used a column-wise division of a large matrix.
How can I order the output so that each piece of the matrix appears next to the others, in order?
I have tried this simple code, but the effect is quite different from what I wanted:
for (int i = 0; i < 10; i++)
{
    for (int k = 0; k < numprocs; k++)
    {
        if (my_id == k) {
            for (int j = 1; j < 10; j++)
                printf("%d", data[i][j]);
        }
        MPI_Barrier(com);
    }
    if (my_id == 0)
        printf("\n");
}
It seems that each process has its own stdout, so it is impossible to get ordered output lines without sending all the data to one master that prints them out. Is my guess true? Or am I doing something wrong?

You guessed right. The MPI standard does not specify how stdout from different nodes should be collected for printing at the originating process. It is often the case that when multiple processes are printing, the output gets merged in an unspecified way. fflush doesn't help.
If you want the output ordered in a certain way, the most portable method would be to send the data to the master process for printing.
For example, in pseudocode:
if (rank == 0) {
    print_col(0);
    for (i = 1; i < comm_size; i++) {
        MPI_Recv(buffer, .... i, ...);
        print_col(i);
    }
} else {
    MPI_Send(data, ..., 0, ...);
}
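Fleshed out, a minimal compilable sketch of that approach might look as follows (illustrative only: each rank is assumed to own one column of N ints in a local array col, and rank 0 prints each received column on its own line; laying the columns out side by side is then just a formatting detail at the root):

// Sketch: every rank sends its column to rank 0, which prints them in rank order.
#include <mpi.h>
#include <stdio.h>

#define N 10

void print_col(const int *col) {
    for (int i = 0; i < N; i++)
        printf("%d ", col[i]);
    printf("\n");
}

int main(int argc, char **argv) {
    int rank, size;
    int col[N];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < N; i++)       /* fill the local column with something */
        col[i] = rank * 100 + i;

    if (rank == 0) {
        int buffer[N];
        print_col(col);               /* rank 0 prints its own column first */
        for (int src = 1; src < size; src++) {
            MPI_Recv(buffer, N, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            print_col(buffer);        /* then each remote column, in rank order */
        }
    } else {
        MPI_Send(col, N, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}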
Another method which can sometimes work is to use barriers to lock-step the processes so that each one prints in turn. This of course depends on the MPI implementation and how it handles stdout.
for (i = 0; i < comm_size; i++) {
    MPI_Barrier(MPI_COMM_WORLD);
    if (i == rank) {
        printf(...);
    }
}
Of course, in production code, where the data is usually too large to print sensibly anyway, it is eventually combined either by having each process write to a separate file that is merged afterwards, or by using MPI I/O (defined in the MPI-2 standard) to coordinate parallel writes.
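For the MPI-I/O route, a minimal sketch using MPI_File_write_at with fixed-width records, so every rank can compute its own offset (the filename and record format are made up for illustration):

// Sketch: each rank writes a fixed-size text record at a rank-dependent offset,
// so the file comes out ordered without funnelling data through rank 0.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    char record[32];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int len = snprintf(record, sizeof record, "rank %4d: some data\n", rank);

    MPI_File_open(MPI_COMM_WORLD, "output.txt",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* every record has the same length, so rank * len is this rank's offset */
    MPI_File_write_at(fh, (MPI_Offset)rank * len, record, len, MPI_CHAR,
                      MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}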

I have produced ordered output to a file before using exactly this method. You could try printing to a temporary file, printing the contents of that file, and then deleting it.

Have the root processor do all of the printing. Use MPI_Send/MPI_Recv or MPI_Gather (or whatever) to send the data in turn from each processor to the root.
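A minimal sketch of the MPI_Gather variant, assuming every rank contributes an equally sized chunk of ints (the chunk contents and size are illustrative):

// Sketch: gather equally sized chunks from all ranks into one buffer on rank 0,
// which then prints everything in rank order.
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT 10   /* elements contributed by each rank (illustrative) */

int main(int argc, char **argv) {
    int rank, size;
    int chunk[COUNT];
    int *all = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < COUNT; i++)
        chunk[i] = rank * COUNT + i;

    if (rank == 0)
        all = (int *)malloc((size_t)size * COUNT * sizeof(int));

    MPI_Gather(chunk, COUNT, MPI_INT, all, COUNT, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int r = 0; r < size; r++) {
            for (int i = 0; i < COUNT; i++)
                printf("%d ", all[r * COUNT + i]);
            printf("\n");
        }
        free(all);
    }

    MPI_Finalize();
    return 0;
}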

To solve this problem you can use a short sleep. I use the pattern below, and it works 99% of the time:
printf("text nr 1\n");
MPI_Barrier(MPI_COMM_WORLD);
usleep(100);
printf("text nr 2\n");
It's not very elegant, but it works.

Related

Save vector to file during debug session (Xcode)

My application has crashed in an assert, and the debugger is attached. To be able to reproduce the crash I want to save a C++ vector with 397 struct{uint64_t, uint64_t} elements to file.
My first approach was to try to print the vector. I can print the vector to the console, but it seems like only the first 256 values are written. Is it possible to remove the 256 element restriction?
I've also searched for a way to save the vector to file from within the debugger, but I've not found any way. I've not even found a way to save a memory region, but I guess that must be possible...
Since you mentioned that you're stopped in the debugger in Xcode, I'll assume you're debugging with lldb. You can use the expression command to execute essentially arbitrary code when you're stopped in the debugger, for example:
expression for(int j = 0; j < 10; j++) { (void)NSLog(@"%d", j); }
will execute a for loop and print the numbers 0 through 9. You should be able to use a similar technique to iterate over your vector and write it to a file. You can combine multiple expressions using a semicolon, just as if you were writing normal code (well, except for newlines). For example, this will write "Hello, world!" to a temporary file at /tmp/vector.dat; not exactly what you want, but I think you'll get the idea:
expression FILE *fp = (FILE*)fopen("/tmp/vector.dat", "w"); (void)fprintf(fp, "Hello, world!\n"); (void)fclose(fp);
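Applied to the vector in the question, something along these lines might do it (hypothetical names: the vector is assumed to be called myVec with struct members first and second; substitute your own, and note that calling size() or operator[] from the expression evaluator can be hit-or-miss):
expression FILE *fp = (FILE*)fopen("/tmp/vector.dat", "w"); for (size_t i = 0; i < myVec.size(); i++) { (void)fprintf(fp, "%llu %llu\n", (unsigned long long)myVec[i].first, (unsigned long long)myVec[i].second); } (void)fclose(fp);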

OpenCV parallel_for not using multiple processors

I just saw that in the new OpenCV 2.4.3 they added a universal parallel_for. So, following this example, I tried to implement it myself. I got it all functioning with my code, but when I timed its processing against a similar loop done in a typical serial fashion with a regular "for" loop, the results were not significantly faster, and often a tiny bit slower!
I thought maybe this had something to do with my pushing into vectors or something (I'm a pretty big noob to parallel processing), so I set up a test loop that just runs through a big number, and it still doesn't work.
Code:
class Parallel_Test : public cv::ParallelLoopBody
{
private:
    double* const mypointer;
public:
    Parallel_Test(double* pointer)
        : mypointer(pointer) {
    }
    void operator()(const Range& range) const
    {
        // This overload needs to be here, otherwise the class is considered abstract.
        // qDebug() << "This should never be called";
    }
    void operator()(const cv::BlockedRange& range) const
    {
        for (int x = range.begin(); x < range.end(); ++x) {
            mypointer[x] = x;
        }
    }
};
//TODO Loop pixels in parallel
double t = (double)getTickCount();

//TEST PARALLEL LOOPING AT ALL
double data1[1000000];
cv::parallel_for(BlockedRange(0, 1000000), Parallel_Test(data1));
t = ((double)getTickCount() - t) / getTickFrequency();
qDebug() << "Parallel TEST time " << t << endl;

t = (double)getTickCount();
for (int i = 0; i < 1000000; i++) {
    data1[i] = i;
}
t = ((double)getTickCount() - t) / getTickFrequency();
qDebug() << "SERIAL Scan time " << t << endl;
output:
Parallel TEST time 0.00415479
SERIAL Scan time 0.00204597
Wow! I found the answer! "parallel_for" and "parallel_for_" (with a trailing underscore!) are totally different. You need the trailing underscore to make it work! Otherwise it will just run your loop in serial, and you will have to use a BlockedRange instead of a Range! AHH!
Thanks to @Daniil Osokin and especially @Vladislav Vinogradov for pointing this out!
So again, your code will need to look something like this:
cv::parallel_for_(Range(0, 1000000), Parallel_Test(data1));
More updated details at: http://answers.opencv.org/question/3730/how-to-use-parallel_for/
The problem is most likely that your loop body is too small.
It appears all you are doing is assigning values through a pointer into an array.
You really need to think of a parallel for as an inefficient for loop: the work inside each iteration needs to be large enough that you wouldn't dream of getting speedups by unrolling the loop, because in addition to the usual decrement, compare, and jump, there are also a few interlocked instructions, perhaps a virtual function call or two, and some allocations.
So instead of copying a pointer, try doing a good amount of real math, or work on a large array of data.
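For instance, here is a hedged sketch (not from the answer above) where each iteration does a deliberately large amount of arithmetic, which is the kind of body where parallel_for_ should start to win; the class and workload are made up for illustration:

// Sketch: give each iteration enough work that the threading overhead of
// parallel_for_ is amortised. The amount of math per element is arbitrary.
#include <opencv2/core/core.hpp>
#include <cmath>
#include <vector>

class HeavyBody : public cv::ParallelLoopBody
{
    double* const out;
public:
    explicit HeavyBody(double* p) : out(p) {}
    void operator()(const cv::Range& range) const
    {
        for (int x = range.start; x < range.end; ++x) {
            double v = x;
            for (int k = 0; k < 1000; ++k)   // deliberately expensive body
                v = std::sin(v) + std::cos(v);
            out[x] = v;
        }
    }
};

// usage:
//   std::vector<double> data(1000000);
//   cv::parallel_for_(cv::Range(0, (int)data.size()), HeavyBody(data.data()));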

Cuda - selective memory store

In my kernel, if a condition is met, I update an item of the output buffer:
if (condition(input[i])) //?
    output[i] = 1;
otherwise the output may stay the same, having a value of 0.
The density of updates is quite unpredictable, depending on the input. Furthermore, which output locations will be updated is also not known. (I may force them, though, in some cases.)
My question is: is it better to write all items, to achieve coalescing, or to do a selective write?
output[i] = condition(input[i]); //?
Would you mind discussing your statements?
Coalescing is achieved even if some threads in the warp do not participate in the load or store, as long as all participating threads satisfy the requirements of coalescing. So conditional writes should have no effect on memory throughput.
However, doing a conditional write may involve additional instructions due to involving a branch (this would probably explain, for example, the difference in performance measured by Eugene in his answer).
On my setup, the kernel that does the conditional set (option 1) runs in 1.727 us and option 2 in 1.399 us. This is my code (setConditional is the faster one):
__global__ void conditionalSet(unsigned int* array) {
    if ((threadIdx.x & 3) == 0) {
        array[threadIdx.x] = 1;
    }
}

__global__ void setConditional(unsigned int* array) {
    array[threadIdx.x] = (threadIdx.x & 3) == 0 ? 1 : 0;
}

char buffers comparison

I have two char buffers and I am trying to compare parts of them. I am having a weird problem. I have the following code:
char buffer1[50], buffer2[60];

// Get buffer1 and buffer2 from the network by reading sockets

for (int i = 0; i < 20; i++)
{
    if (buffer1[15 + i] != buffer2[25 + i])
    {
        printf("%c", buffer1[15 + i]);
        printf("%c", buffer2[25 + i]);
        printf("%02x", (unsigned char)buffer1[15 + i]);
        printf("%02x", (unsigned char)buffer2[25 + i]);
        break;
    }
}
The above code is a simplified version of my actual code, which I didn't copy-paste here because it's too long. Just in case this might help: I got those two buffers over the network by reading sockets.
The problem is that the loop breaks even when both buffers are the same. To check what is in the buffers, I added the print statements inside the if statement. And the weird thing is, the printf statements print the same value for both %c and %02x, but the comparison fails and the loop breaks.
(Disclaimer: I'm not a C/++ expert)
It seems to me like the data is changing while you're looking at it. Two quick questions come to mind:
If you run this in the debugger, and go over the loop step-by-step, does it still happen? If it doesn't, then I strongly suspect my second question will lead you to the answer.
Is the read operation asynchronous? It seems like data is still being read while you're inside the for loop, meaning you didn't wait for the read to finish.
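If the read is indeed completing only partially before the comparison, one common fix is to loop on recv() until the expected number of bytes has arrived. A minimal sketch, assuming a blocking TCP socket descriptor sock (not code from the question):

// Sketch: recv() may return fewer bytes than requested, so keep reading
// until `want` bytes have arrived (or the connection closes / errors out).
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

ssize_t recv_all(int sock, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(sock, buf + got, want - got, 0);
        if (n <= 0)          /* 0 = peer closed, -1 = error */
            return n;
        got += (size_t)n;
    }
    return (ssize_t)got;
}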
The only thing I can see is a timing issue. If the values are not the same in the if statement but are the same in the print statements, then something changed them in between.

Parsing really big log files (>1Gb, <5Gb)

I need to parse very large log files (>1Gb, <5Gb) - actually I need to strip the data into objects so I can store them in a DB. The log file is sequential (no line breaks), like:
TIMESTAMP=20090101000000;PARAM1=Value11;PARAM2=Value21;PARAM3=Value31;TIMESTAMP=20090101000100;PARAM1=Value11;PARAM2=Value21;PARAM3=Value31;TIMESTAMP=20090101000152;PARAM1=Value11;PARAM2=Value21;PARAM3=Value31;...
I need to strip this into the table:
TIMESTAMP | PARAM1 | PARAM2 | PARAM3
The process needs to be as fast as possible. I'm considering using Perl, but any suggestions using C/C++ would be really welcome. Any ideas?
Best regards,
Arthur
Write a prototype in Perl and compare its performance against how fast you can read data off of the storage medium. My guess is that you'll be I/O bound, which means that using C won't offer a performance boost.
This presentation about the use of Python generators blew my mind:
http://www.dabeaz.com/generators-uk/
David M. Beazley shows how to process multi-gigabyte log files by basically defining a generator for each processing step. The generators are then 'plugged' into each other until you have some simple utility functions
lines = lines_from_dir("access-log*", "www")
log = apache_log(lines)
for r in log:
    print r
which can then be used for all sorts of querying:
stat404 = set(r['request'] for r in log
              if r['status'] == 404)

large = (r for r in log
         if r['bytes'] > 1000000)
for r in large:
    print r['request'], r['bytes']
He also shows that performance compares well to the performance of standard unix tools like grep, find etc.
Of course, this being Python, it's much easier to understand and, most importantly, easier to customise or adapt to different problem sets than Perl or awk scripts.
(The code examples above are copied from the presentation slides.)
Lex handles this sort of thing amazingly well.
But really, use AWK. Its performance is not bad, even compared with Perl, etc. Of course Map/Reduce would work quite well, but what about the overhead of splitting the file into appropriate chunks?
Try AWK
The key won't be the language because the problem is I/O bound, so pick the language that you feel most comfortable with.
The key is how it is coded. You'll be fine as long as you don't load the whole file in memory -- load chunks at a time, and save the data chunks at a time, it will be more efficient.
Java has a PushbackInputStream that may make this easier to code. The idea is that you guess how much to read, and if you read too little, then push the data back, and read a larger chunk.
Then when you've read too much, process the data and then push back the remaining bit and continue to the next iteration of the loop.
Something like this should work.
use strict;
use warnings;

my $filename = shift @ARGV;

open my $io, '<', $filename or die "Can't open $filename";

my ($match_buf, $read_buf, $count);
while (($count = sysread($io, $read_buf, 1024, 0)) != 0) {
    $match_buf .= $read_buf;
    while ($match_buf =~ s{TIMESTAMP=(\d{14});PARAM1=([^;]+);PARAM2=([^;]+);PARAM3=([^;]+);}{}) {
        my ($timestamp, @params) = ($1, $2, $3, $4);
        print $timestamp . "\n";
        last unless $timestamp;
    }
}
This is easily handled in Perl, Awk, or C. Here's a start on a version in C for you:
#include <stdio.h>
#include <string.h>
#include <err.h>

int
main(int argc, char **argv)
{
    const char *filename = "noeol.txt";
    FILE *f;
    char buffer[1024], *s, *p;
    char line[1024];
    size_t n;

    if ((f = fopen(filename, "r")) == NULL)
        err(1, "cannot open %s", filename);

    while (!feof(f)) {
        n = fread(buffer, 1, sizeof buffer, f);
        if (n == 0) {
            if (ferror(f))
                err(1, "error reading %s", filename);
            else
                continue;
        }
        for (s = p = buffer; (size_t)(p - buffer) < n; p++) {
            if (*p == ';') {
                *p = '\0';
                strncpy(line, s, p - s + 1);   /* includes the '\0' just written */
                s = p + 1;
                if (strncmp("TIMESTAMP", line, 9) != 0)
                    printf("\t");
                printf("%s\n", line);
            }
        }
    }
    fclose(f);
    return 0;
}
Sounds like a job for sed:
sed -e 's/;\?[A-Z0-9]*=/|/g' -e 's/\(^\|\)\|\(;$\)//g' < input > output
You might want to take a look at Hadoop (java) or Hadoop Streaming (runs Map/Reduce jobs with any executable or script).
If you code your own solution, you will probably benefit from reading larger chunks of data from the file and processing them in batches (rather than using, say, readline()) and looking for the newline marking the end of each row. With this approach, you need to be mindful that you may not have retrieved the entirety of the last line, so some logic would be required to handle that.
I don't know what performance benefits you'd realize, since I haven't tested it, but I've leveraged similar techniques with success.
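As a rough illustration of that chunked approach for the semicolon-delimited format above, here is a hedged C++ sketch; the buffer size is arbitrary and handle_field() is a placeholder for your own record handling:

// Sketch: read the file in large chunks, process only complete ';'-terminated
// fields, and carry the trailing partial field over to the next chunk.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

static void handle_field(const std::string& field)
{
    // e.g. split on '=' and accumulate TIMESTAMP/PARAM1/PARAM2/PARAM3 here
    std::cout << field << "\n";
}

int main()
{
    std::ifstream in("input.log", std::ios::binary);
    std::vector<char> chunk(1 << 20);        // 1 MiB per read, arbitrary
    std::string carry;                       // incomplete field from last chunk

    while (in.read(chunk.data(), (std::streamsize)chunk.size()) || in.gcount() > 0) {
        carry.append(chunk.data(), (size_t)in.gcount());
        size_t start = 0, semi;
        while ((semi = carry.find(';', start)) != std::string::npos) {
            handle_field(carry.substr(start, semi - start));
            start = semi + 1;
        }
        carry.erase(0, start);               // keep only the unfinished tail
    }
    return 0;
}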
I know this is an exotic language and may not be the best solution for this, but when I have ad hoc data, I consider PADS.
