Seemingly inconsistent results when profiling memory in Go
I have recently been running some numerical codes written in Go on large datasets and have been encountering memory management issues. While attempting to profile the problem, I have measured the memory usage of my program in three different ways: with Go's runtime/pprof package, with the unix time utility, and by manually adding up the size of the data that I allocated. These three methods do not give me consistent results.
Below is a simplified version of the code that I am profiling. It allocates several slices, puts values at every index and places each of them inside of a parent slice:
package main

import (
    "flag"
    "fmt"
    "os"
    "runtime/pprof"
    "unsafe"
)

var mprof = flag.String("mprof", "", "write memory profile to this file")

func main() {
    flag.Parse()

    N := 1 << 15
    psSlice := make([][]int64, N)

    size := 0
    for i := 0; i < N; i++ {
        ps := make([]int64, 1<<10)
        for i := range ps {
            ps[i] = int64(i)
        }
        psSlice[i] = ps
        size += int(unsafe.Sizeof(ps[0])) * len(ps)
    }

    if *mprof != "" {
        f, err := os.Create(*mprof)
        if err != nil {
            panic(err)
        }
        pprof.WriteHeapProfile(f)
        f.Close()
    }

    fmt.Printf("total allocated: %d MB\n", size>>20)
}
Running this with the command $ time time -f "%M kB" ./mem_test -mprof=out.mprof results in the output:
total allocated: 256 MB
1141216 kB
real 0m0.150s
user 0m0.031s
sys 0m0.113s
Here the first number, 256 MB, is just the size of the arrays computed from unsafe.Sizeof, and the second number, 1141216 kB (about 1114 MB), is what time reports. Running the pprof tool results in
(pprof) top1
Total: 108.2 MB
107.8 99.5% 99.5% 107.8 99.5% main.main
These results scale smoothly in the way you would expect them to for slices of smaller or larger lengths.
Why don't these three numbers line up more closely?
First, you need to provide an error-free example. Let's start with the basic numbers. For example:
package main

import (
    "fmt"
    "runtime"
    "unsafe"
)

func WriteMatrix(nm [][]int64) {
    for n := range nm {
        for m := range nm[n] {
            nm[n][m]++
        }
    }
}

func NewMatrix(n, m int) [][]int64 {
    a := make([]int64, n*m)
    nm := make([][]int64, n)
    lo, hi := 0, m
    for i := range nm {
        nm[i] = a[lo:hi:hi]
        lo, hi = hi, hi+m
    }
    return nm
}

func MatrixSize(nm [][]int64) int64 {
    size := int64(0)
    for i := range nm {
        size += int64(unsafe.Sizeof(nm[i]))
        for j := range nm[i] {
            size += int64(unsafe.Sizeof(nm[i][j]))
        }
    }
    return size
}

var nm [][]int64

func main() {
    n, m := 1<<15, 1<<10
    var ms1, ms2 runtime.MemStats
    runtime.ReadMemStats(&ms1)
    nm = NewMatrix(n, m)
    WriteMatrix(nm)
    runtime.ReadMemStats(&ms2)
    fmt.Println(runtime.GOARCH, runtime.GOOS)
    fmt.Println("Actual:  ", ms2.TotalAlloc-ms1.TotalAlloc)
    fmt.Println("Estimate:", n*3*8+n*m*8)
    fmt.Println("Total:   ", ms2.TotalAlloc)
    fmt.Println("Size:    ", MatrixSize(nm))
    // check top VIRT and RES for COMMAND peter
    for {
        WriteMatrix(nm)
    }
}
Output:
$ go build peter.go && /usr/bin/time -f "%M KiB" ./peter
amd64 linux
Actual: 269221888
Estimate: 269221888
Total: 269240592
Size: 269221888
^C
Command exited with non-zero status 2
265220 KiB
$
$ top
VIRT 284268 RES 265136 COMMAND peter
Is this what you expected?
See MatrixSize for the correct way to calculate the memory size.
The infinite loop at the end keeps the process alive so that we can inspect it with the top command; continually updating the matrix pins it as resident.
What results do you get when you run this program?
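As an aside on the question's own estimate: unsafe.Sizeof applied to a slice value measures only the slice header (pointer, length, capacity), never the backing array, which is why MatrixSize above sums the headers and the elements separately. A minimal sketch of the difference, assuming amd64 (8-byte words):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    ps := make([]int64, 1<<10)

    // The header alone: 3 words (pointer, len, cap) = 24 bytes on amd64.
    fmt.Println(unsafe.Sizeof(ps))

    // The question's per-slice estimate counts only the elements: 8192 bytes.
    fmt.Println(int(unsafe.Sizeof(ps[0])) * len(ps))

    // MatrixSize-style accounting: header plus elements.
    fmt.Println(int(unsafe.Sizeof(ps)) + int(unsafe.Sizeof(ps[0]))*len(ps))
}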
BUG:
Your result from /usr/bin/time is 1056992 KiB, which is too large by a factor of four. It's a bug in your version of /usr/bin/time: ru_maxrss is reported by the kernel in KBytes, not pages. My version of Ubuntu has been patched.
References:
Re: GNU time: incorrect results
time-1.7 counts rusage wrong on Linux
GNU Project Archives: time
"time" 1.7-24 source package in Ubuntu: ru_maxrss is reported in KBytes, not pages. (Closes: #649402)
#649402 - [PATCH] time overestimates max RSS by a factor of 4 - Debian Bug report logs

The relevant patch:

Subject: Fix ru_maxrss reporting
Author: Richard Kettlewell
Bug-Debian: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=649402
--- time-1.7.orig/time.c
+++ time-1.7/time.c
@@ -392,7 +398,7 @@
ptok ((UL) resp->ru.ru_ixrss) / MSEC_TO_TICKS (v));
break;
case 'M': /* Maximum resident set size. */
- fprintf (fp, "%lu", ptok ((UL) resp->ru.ru_maxrss));
+ fprintf (fp, "%lu", (UL) resp->ru.ru_maxrss);
break;
case 'O': /* Outputs. */
fprintf (fp, "%ld", resp->ru.ru_oublock);
Related
Why is GraalVM + native-image slower than GraalVM alone on a while loop?
Just for fun, I'm trying to compare gcc (9.4.0), OpenJDK (11.0.12), GraalVM (22.3.r19) and GraalVM + native-image (22.3.r19) performances on a "while loop n++" use case (see programs below). The bottom line on Linux is (see results below): native-image is slower than all other options. So I'm wondering: am I missing something (a magic command-line option)? Or is it just life that native-image is slower on this particular program (and that's fine)?

Count.java:

public class Count {
    public static void main(String[] args) {
        int n = 0;
        int inc = Math.random() >= 0 ? 1 : 0; // to prevent the optimizer from removing the loop
        while (n < 1000000000) {
            n += inc;
        }
        System.out.println(n);
    }
}

count.c:

#include <time.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int n = 0;
    srand(time(NULL));
    int inc = rand() >= 0 ? 1 : 0; // to prevent the optimizer from removing the loop
    while (n < 1000000000) {
        n += inc;
    }
}

gcc:

me@laptop:~/dev/java-count-graalvm$ gcc -O2 -s -DNDEBUG count.c -o count
me@laptop:~/dev/java-count-graalvm$ time ./count

real 0m0,261s
user 0m0,261s
sys 0m0,000s

OpenJDK 11:

me@laptop:~/dev/java-count-graalvm$ time java -classpath target/classes Count
1000000000

real 0m0,632s
user 0m0,612s
sys 0m0,030s

GraalVM:

me@laptop:~/dev/java-count-graalvm$ time java -classpath target/classes Count
1000000000

real 0m0,326s
user 0m0,362s
sys 0m0,013s

GraalVM native-image:

me@laptop:~/dev/java-count-graalvm$ native-image -cp target/classes Count
me@laptop:~/dev/java-count-graalvm$ time ./count
1000000000

real 0m1,283s
user 0m1,271s
sys 0m0,013s

For the sake of sanity, I commented out the while loop, and the native image returns in 3 milliseconds:

bruno@hearne:~/dev/java-count-graalvm$ time ./count
0

real 0m0,003s
user 0m0,000s
sys 0m0,003s

So I would say that the penalty is coming from the while loop and nothing else.
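For another ahead-of-time-compiled data point, the same micro-benchmark translates directly to Go. This is only a sketch under the same assumptions as the programs above (no timings claimed here; the run-time-derived inc mirrors the Math.random()/rand() trick for keeping the compiler from deleting the loop):

package main

import (
    "fmt"
    "os"
)

func main() {
    n := 0
    // Derive inc at run time so the compiler cannot prove the loop
    // trivial, mirroring the Math.random()/rand() trick above.
    inc := 1
    if len(os.Args) > 100 { // never true in practice
        inc = 0
    }
    for n < 1000000000 {
        n += inc
    }
    fmt.Println(n)
}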
Amount of local memory per CUDA thread
I read in the NVIDIA documentation (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications, table #12) that the amount of local memory per thread is 512 KB for my GPU (GTX 580, compute capability 2.0).

I tried unsuccessfully to check this limit on Linux with CUDA 6.5. Here is the code I used (its only purpose is to test the local memory limit; it doesn't do any useful computation):

#include <iostream>
#include <stdio.h>

#define MEMSIZE 65000  // 65000 -> out of memory, 60000 -> ok

inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=false)
{
    if (code != cudaSuccess) {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort)
            exit(code);
    }
}

inline void gpuCheckKernelExecutionError(const char *file, int line)
{
    gpuAssert(cudaPeekAtLastError(), file, line);
    gpuAssert(cudaDeviceSynchronize(), file, line);
}

__global__ void kernel_test_private(char *output)
{
    int c = blockIdx.x*blockDim.x + threadIdx.x; // absolute col
    int r = blockIdx.y*blockDim.y + threadIdx.y; // absolute row
    char tmp[MEMSIZE];
    for (int i = 0; i < MEMSIZE; i++)
        tmp[i] = 4*r + c; // dummy computation in local mem
    for (int i = 0; i < MEMSIZE; i++)
        output[i] = tmp[i];
}

int main(void)
{
    printf("MEMSIZE=%d bytes.\n", MEMSIZE);

    // allocate memory
    char output[MEMSIZE];
    char *gpuOutput;
    cudaMalloc((void**) &gpuOutput, MEMSIZE);

    // run kernel
    dim3 dimBlock(1, 1);
    dim3 dimGrid(1, 1);
    kernel_test_private<<<dimGrid, dimBlock>>>(gpuOutput);
    gpuCheckKernelExecutionError(__FILE__, __LINE__);

    // transfer data from GPU memory to CPU memory
    cudaMemcpy(output, gpuOutput, MEMSIZE, cudaMemcpyDeviceToHost);

    // release resources
    cudaFree(gpuOutput);
    cudaDeviceReset();
    return 0;
}

And the compilation command line:

nvcc -o cuda_test_private_memory -Xptxas -v -O2 --compiler-options -Wall cuda_test_private_memory.cu

The compilation is ok, and reports:

ptxas info : 0 bytes gmem
ptxas info : Compiling entry function '_Z19kernel_test_privatePc' for 'sm_20'
ptxas info : Function properties for _Z19kernel_test_privatePc
    65000 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 21 registers, 40 bytes cmem[0]

I got an "out of memory" error at runtime on the GTX 580 when I reached 65000 bytes per thread. Here is the exact output of the program in the console:

MEMSIZE=65000 bytes.
GPUassert: out of memory cuda_test_private_memory.cu 48

I also did a test with a GTX 770 GPU (on Linux with CUDA 6.5). It ran without error for MEMSIZE=200000, but the "out of memory" error occurred at runtime for MEMSIZE=250000.

How can this behavior be explained? Am I doing something wrong?
It seems you are running into not a local memory limitation but a stack size limitation:

ptxas info : Function properties for _Z19kernel_test_privatePc
    65000 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads

The variable that you had intended to be local is, in this case, on the (GPU thread) stack. Based on the information provided by @njuffa here, the available stack size limit is the lesser of:

1. The maximum local memory size (512 KB for cc2.x and higher)
2. GPU memory / (# of SMs) / (max threads per SM)

Clearly, the first limit is not the issue. I assume you have a "standard" GTX 580, which has 1.5 GB of memory and 16 SMs. A cc2.x device has a maximum of 1536 resident threads per multiprocessor. This means we have 1536 MB / 16 / 1536 = 65536 bytes of stack per thread. There is some overhead and other memory usage that subtracts from the total available memory, so the stack size limit is some amount below 65536 -- somewhere between 60000 and 65000 in your case, apparently. I suspect a similar calculation on your GTX 770 would yield a similar result, i.e. a maximum stack size between 200000 and 250000.
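The second bound above is plain arithmetic, so it is easy to redo for another card. A small sketch of the calculation (the GTX 580 figures are the ones assumed in the answer; specs for any other card are assumptions you would have to fill in yourself):

package main

import "fmt"

// maxStackPerThread computes the second limit from the answer above:
// GPU memory / (# of SMs) / (max resident threads per SM).
func maxStackPerThread(gpuMemBytes, numSMs, maxThreadsPerSM int64) int64 {
    return gpuMemBytes / numSMs / maxThreadsPerSM
}

func main() {
    // "Standard" GTX 580: 1.5 GB memory, 16 SMs, 1536 resident threads/SM (cc 2.x).
    limit := maxStackPerThread(1536<<20, 16, 1536)
    fmt.Println(limit) // 65536 -- the observed failures land just below this
}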
associative arrays in awk challenging memory limits
This is related to my recent post in Awk code with associative arrays -- array doesn't seem populated, but no error, and also to optimizing loop, passing parameters from external file, naming array arguments within awk.

My basic problem here is simply to compute, from detailed ancient archival financial market data, daily aggregates of #transactions, #shares, and value, BY DATE, FIRM-ID, EXCHANGE, etc. I learnt to use associative arrays in awk for this, and was thrilled to be able to process 129+ million lines in a clock time of under 11 minutes. Literally before I finished my coffee.

I became a little more ambitious, and moved from 2 array subscripts to 4, and now I am unable to process more than 6500 lines at a time. I get error messages of the form:

K:\User Folders\KRISHNANM\PAPERS\FII_Transaction_Data>zcat RAW_DATA\2003_1.zip | gawk -f CODE\FII_daily_aggregates_v2.awk > OUTPUT\2003_1.txt&
gawk: CODE\FII_daily_aggregates_v2.awk:33: (FILENAME=- FNR=49300) fatal: more_nodes: nextfree: can't allocate memory (Not enough space)

On some runs the machine has told me it lacks as little as 52 KB of memory. I have what I think of as a standard configuration with Win-7 and 8 GB RAM. (Economist by training, not computer scientist.) I realize that going from 2 to 4 arrays makes the problem computationally much more complex for the computer, but is there something one can do to improve memory management at least a little bit? I have tried closing everything else I am doing. The error always has to do only with memory, never with disk space or anything else.

Sample INPUT:

49290,C198962542782200306,6/30/2003,433581,F5811773991200306,S5405611832200306,B5086397478200306,NESTLE INDIA LTD.,INE239A01016,6/27/2003,1,E9035083824200306,REG_DL_STLD_02,591.13,5655,3342840.15,REG_DL_INSTR_EQ,REG_DL_DLAY_P,DL_RPT_TYPE_N,DL_AMDMNT_DEL_00
49291,C198962542782200306,6/30/2003,433563,F6292896459200306,S6344227311200306,B6110521493200306,GRASIM INDUSTRIES LTD.,INE047A01013,6/27/2003,1,E9035083824200306,REG_DL_STLD_02,495.33,3700,1832721,REG_DL_INSTR_EQ,REG_DL_DLAY_P,DL_RPT_TYPE_N,DL_AMDMNT_DEL_00
49292,C198962542782200306,6/30/2003,433681,F6513202607200306,S1724027402200306,B6372023178200306,HDFC BANK LTD,INE040A01018,6/26/2003,1,E745964372424200306,REG_DL_STLD_02,242,2600,629200,REG_DL_INSTR_EQ,REG_DL_DLAY_D,DL_RPT_TYPE_N,DL_AMDMNT_DEL_00
49293,C7885768925200306,6/30/2003,48128,F4406661052200306,S7376401565200306,B4576522576200306,Maruti Udyog Limited,INE585B01010,6/28/2003,3,E912851176274200306,REG_DL_STLD_04,125,44600,5575000,REG_DL_INSTR_EQ,REG_DL_DLAY_P,DL_RPT_TYPE_N,DL_AMDMNT_DEL_00
49294,C7885768925200306,6/30/2003,48129,F4500260787200306,S1312094035200306,B4576522576200306,Maruti Udyog Limited,INE585B01010,6/28/2003,4,E912851176274200306,REG_DL_STLD_04,125,445600,55700000,REG_DL_INSTR_EQ,REG_DL_DLAY_P,DL_RPT_TYPE_N,DL_AMDMNT_DEL_00
49295,C7885768925200306,6/30/2003,48130,F6425024637200306,S2872499118200306,B4576522576200306,Maruti Udyog Limited,INE585B01010,6/28/2003,3,E912851176274200306,REG_DL_STLD_04,125,48000,6000000,REG_DL_INSTR_EU,REG_DL_DLAY_P,DL_RPT_TYPE_N,DL_AMDMNT_DEL_00

Code:

BEGIN { FS = "," }

# For each array subscript variable -- DATE ($10), firm_ISIN ($9), EXCHANGE ($12), and FII_ID ($5), after checking for type = EQ, set up counts for each value, and number of unique values.
( $17~/_EQ\>/ ) {
    if (date[$10]++ == 0) date_list[d++] = $10;
    if (isin[$9]++ == 0)  isin_list[i++] = $9;
    if (exch[$12]++ == 0) exch_list[e++] = $12;
    if (fii[$5]++ == 0)   fii_list[f++] = $5;
}

# For cash-in, buy (B), or cash-out, sell (S) count NR = no of records, SH = no of shares, RV = rupee-value.

(( $17~/_EQ\>/ ) && ( $11~/1|2|3|5|9|1[24]/ )) {
    ++BNR[$10,$9,$12,$5]
    BSH[$10,$9,$12,$5] += $15
    BRV[$10,$9,$12,$5] += $16
}

(( $17~/_EQ\>/ ) && ( $11~/4|1[13]/ )) {
    ++SNR[$10,$9,$12,$5]
    SSH[$10,$9,$12,$5] += $15
    SRV[$10,$9,$12,$5] += $16
}

END {
    print NR, "records processed."
    print " "
    printf("%-11s\t%-13s\t%-20s\t%-19s\t%-7s\t%-7s\t%-14s\t%-14s\t%-18s\t%-18s\n",
        "DATE", "ISIN", "EXCH", "FII", "BNR", "SNR", "BSH", "SSH", "BRV", "SRV")
    for (u = 0; u < d; u++) {
        for (v = 0; v < i; v++) {
            for (w = 0; w < e; w++) {
                for (x = 0; x < f; x++) {
                    # check first below for records with zeroes, don't print them
                    if (BNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]] + SNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]] > 0) {
                        BR = BNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
                        SR = SNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
                        BS = BSH[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
                        BV = BRV[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
                        SS = SSH[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
                        SV = SRV[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
                        printf("%-11s\t%13s\t%20s\t%19s\t%7d\t%7d\t%14d\t%14d\t%18.2f\t%18.2f\n",
                            date_list[u], isin_list[v], exch_list[w], fii_list[x], BR, SR, BS, SS, BV, SV)
                    }
                }
            }
        }
    }
}

Expected output:

6 records processed.

DATE        ISIN          EXCH                 FII                 BNR  SNR  BSH    SSH     BRV         SRV
6/27/2003   INE239A01016  E9035083824200306    F5811773991200306   1    0    5655   0       3342840.15  0.00
6/27/2003   INE047A01013  E9035083824200306    F6292896459200306   1    0    3700   0       1832721.00  0.00
6/26/2003   INE040A01018  E745964372424200306  F6513202607200306   1    0    2600   0       629200.00   0.00
6/28/2003   INE585B01010  E912851176274200306  F4406661052200306   1    0    44600  0       5575000.00  0.00
6/28/2003   INE585B01010  E912851176274200306  F4500260787200306   0    1    0      445600  0.00        55700000.00

It is in this case that, as the number of input records exceeds 6500, I end up having memory problems. I have about 7 million records in all. For a 2-array-subscript problem, albeit on a different data set, where 129+ million lines were processed in a clock time of 11 minutes using the same GNU awk on the same machine, see optimizing loop, passing parameters from external file, naming array arguments within awk.

Question: is it the case that awk is not very smart with memory management, while some other, more modern tool (say, SQL) would accomplish this task with the same memory resources? Or is this simply a characteristic of associative arrays, which I found magical in enabling me to avoid many passes over the data, many loops and SORT procedures, but which maybe work well up to 2 array subscripts and then face exponential memory resource costs after that?
Afterword: the super-detailed, almost-idiot-proof tutorial along with the code provided by Ed Morton in comments below makes a dramatic difference, especially his GAWK script tst.awk. He taught me about (a) using SUBSEP intelligently and (b) tackling needless looping with various AWK constructs, which is crucial in this problem, which tends to have very sparse arrays.

Compared to the performance of my old code (only up to 6500 lines of input accepted on one machine; another couldn't even get that far), the performance of Ed Morton's tst.awk can be seen from the table below:

filename  start        end          min   in lines  out lines
2008_1    12:08:40 AM  12:27:18 AM  0:18  391438    301160
2008_2    12:27:18 AM  12:52:04 AM  0:24  402016    314177
2009_1    12:52:05 AM  1:05:15 AM   0:13  302081    238204
2009_2    1:05:15 AM   1:22:15 AM   0:17  360072    276768
2010_1    "slept"                         507496    397533
2010_2    3:10:26 AM   3:10:50 AM   0:00  76200     58228
2010_3    3:10:50 AM   3:11:18 AM   0:00  80988     61725
2010_4    3:11:18 AM   3:11:47 AM   0:00  86923     65885
2010_5    3:11:47 AM   3:12:15 AM   0:00  80670     63059

Times were obtained simply by using %time% on lines before and after tst.awk was executed, all put in a simple batch script; "min" is the clock time taken (per whatever rounding Excel does by default); "in lines" and "out lines" are lines of input and output, respectively. Processing the entire data that we have, from Jan 2003 to Jan 2014, the theoretical max number of output records = #dates * #ISINs * #Exchanges * #FIIs = 2992 * 2955 * 567 * 82268, while the actual number of total output lines is only 5,261,942, which is only 1.275*10^(-8) of the theoretical max -- very sparse indeed. That there was sparseness we did guess earlier, but that the arrays could be SO sparse -- which matters a lot for memory management -- we had no way of telling till something actually completed for a real data set. Time taken seems to increase exponentially in input size, but within limits that pose no practical difficulty. Thanks a ton, Ed.
There is no problem with associative arrays in general. In awk (except for gawk's true multidimensional arrays), an associative array with 4 subscripts is identical to one with 2 subscripts, since in reality it has only one subscript, which is the concatenation of the pseudo-subscripts separated by SUBSEP.

Given you say "I am unable to process more than 6500 lines at a time", the problem is far more likely to be in the way you wrote your code than in any fundamental awk issue, so if you'd like more help, post a small script with sample input and expected output that demonstrates your problem and attempted solution, and we'll see whether we have suggestions on ways to improve its memory usage.

Given your posted script, I expect the problem is the nested loops in your END section. When you do:

for (i=1; i<=maxI; i++) {
    for (j=1; j<=maxJ; j++) {
        if ( arr[i,j] != 0 ) {
            print arr[i,j]
        }
    }
}

you are CREATING arr[i,j] for every possible combination of i and j that didn't exist prior to the loop, just by testing arr[i,j] != 0. If you instead wrote:

for (i=1; i<=maxI; i++) {
    for (j=1; j<=maxJ; j++) {
        if ( (i,j) in arr ) {
            print arr[i,j]
        }
    }
}

then the loop itself would not create new entries in arr[]. So change this block:

if (BNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]] + SNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]] > 0) {
    BR = BNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
    SR = SNR[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
    BS = BSH[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
    BV = BRV[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
    SS = SSH[date_list[u],isin_list[v],exch_list[w],fii_list[x]]
    SV = SRV[date_list[u],isin_list[v],exch_list[w],fii_list[x]]

which is probably unnecessarily turning each of BNR, SNR, BSH, BRV, SSH, and SRV into huge but highly sparse arrays, to something like this:

idx = date_list[u] SUBSEP isin_list[v] SUBSEP exch_list[w] SUBSEP fii_list[x]
BR = (idx in BNR ? BNR[idx] : 0)
SR = (idx in SNR ? SNR[idx] : 0)
if ( (BR + SR) > 0 ) {
    BS = (idx in BSH ? BSH[idx] : 0)
    BV = (idx in BRV ? BRV[idx] : 0)
    SS = (idx in SSH ? SSH[idx] : 0)
    SV = (idx in SRV ? SRV[idx] : 0)

and let us know if that helps. Also check your code for other places where you might be doing the same. The reason you have this problem with 4 subscripts when you didn't with 2 is simply that you now have 4 levels of nesting in the loops, creating much larger and sparser arrays than when you just had 2.

Finally -- you have some weird syntax in your script, some of which @MarkSetchell pointed out in a comment. Your script also isn't as efficient as it could be, since you're not using else statements and so keep testing multiple conditions that can't possibly all be true, as well as testing the same condition repeatedly. And it's not robust, since you aren't anchoring your REs (e.g. you test /4|1[13]/ instead of /^(4|1[13])$/, so your 4 would match 14 or 41 etc. instead of just 4 on its own). So change your whole script to this:

$ cat tst.awk
BEGIN { FS = "," }

# For each array subscript variable -- DATE ($10), firm_ISIN ($9), EXCHANGE ($12), and FII_ID ($5), after checking for type = EQ, set up counts for each value, and number of unique values.
$17 ~ /_EQ\>/ {
    if (!seenDate[$10]++) date_list[++d] = $10
    if (!seenIsin[$9]++)  isin_list[++i] = $9
    if (!seenExch[$12]++) exch_list[++e] = $12
    if (!seenFii[$5]++)   fii_list[++f] = $5

    # For cash-in, buy (B), or cash-out, sell (S) count NR = no of records, SH = no of shares, RV = rupee-value.
    idx = $10 SUBSEP $9 SUBSEP $12 SUBSEP $5
    if ( $11 ~ /^([12359]|1[24])$/ )  { ++BNR[idx]; BSH[idx] += $15; BRV[idx] += $16 }
    else if ( $11 ~ /^(4|1[13])$/ )   { ++SNR[idx]; SSH[idx] += $15; SRV[idx] += $16 }
}

END {
    print NR, "records processed."
    print " "
    printf "%-11s\t%-13s\t%-20s\t%-19s\t%-7s\t%-7s\t%-14s\t%-14s\t%-18s\t%-18s\n",
        "DATE", "ISIN", "EXCH", "FII", "BNR", "SNR", "BSH", "SSH", "BRV", "SRV"
    for (u = 1; u <= d; u++) {
        for (v = 1; v <= i; v++) {
            for (w = 1; w <= e; w++) {
                for (x = 1; x <= f; x++) {
                    # check first below for records with zeroes, don't print them
                    idx = date_list[u] SUBSEP isin_list[v] SUBSEP exch_list[w] SUBSEP fii_list[x]
                    BR = (idx in BNR ? BNR[idx] : 0)
                    SR = (idx in SNR ? SNR[idx] : 0)
                    if ( (BR + SR) > 0 ) {
                        BS = (idx in BSH ? BSH[idx] : 0)
                        BV = (idx in BRV ? BRV[idx] : 0)
                        SS = (idx in SSH ? SSH[idx] : 0)
                        SV = (idx in SRV ? SRV[idx] : 0)
                        printf "%-11s\t%13s\t%20s\t%19s\t%7d\t%7d\t%14d\t%14d\t%18.2f\t%18.2f\n",
                            date_list[u], isin_list[v], exch_list[w], fii_list[x], BR, SR, BS, SS, BV, SV
                    }
                }
            }
        }
    }
}

I added "seen" in front of 4 array names just because, by convention, arrays testing for the pre-existence of a value are typically named seen. Also, when populating the SNR[] etc. arrays I created an idx variable first instead of repeatedly using the field numbers every time, both for ease of changing it in future and mostly because string concatenation is relatively slow in awk, and that's what's happening when you use multiple indices in an array, so it's best to do the string concatenation once explicitly.

And I changed your date_list[] etc. arrays to start at 1 instead of zero, because all awk-generated arrays, strings and field numbers start at 1. You CAN create an array manually that starts at 0 or -357 or whatever number you want, but it'll save shooting yourself in the foot some day if you always start them at 1.

I expect it could be made more efficient still by restricting the nested loops to only values that could exist for the enclosing loop index combinations (e.g. not every value of u+v+w is possible, so there will be times when you shouldn't bother looping on x). For example:

$ cat tst.awk
BEGIN { FS = "," }

# For each array subscript variable -- DATE ($10), firm_ISIN ($9), EXCHANGE ($12), and FII_ID ($5), after checking for type = EQ, set up counts for each value, and number of unique values.
$17 ~ /_EQ\>/ {
    if (!seenDate[$10]++) date_list[++d] = $10
    if (!seenIsin[$9]++)  isin_list[++i] = $9
    if (!seenExch[$12]++) exch_list[++e] = $12
    if (!seenFii[$5]++)   fii_list[++f] = $5

    # For cash-in, buy (B), or cash-out, sell (S) count NR = no of records, SH = no of shares, RV = rupee-value.
    idx = $10 SUBSEP $9 SUBSEP $12 SUBSEP $5
    if ( $11 ~ /^([12359]|1[24])$/ ) {
        seen[$10,$9]
        seen[$10,$9,$12]
        ++BNR[idx]; BSH[idx] += $15; BRV[idx] += $16
    } else if ( $11 ~ /^(4|1[13])$/ ) {
        seen[$10,$9]
        seen[$10,$9,$12]
        ++SNR[idx]; SSH[idx] += $15; SRV[idx] += $16
    }
}

END {
    printf "d = %d\n", d | "cat>&2"
    printf "i = %d\n", i | "cat>&2"
    printf "e = %d\n", e | "cat>&2"
    printf "f = %d\n", f | "cat>&2"

    print NR, "records processed."
print " " printf "%-11s\t%-13s\t%-20s\t%-19s\t%-7s\t%-7s\t%-14s\t%-14s\t%-18s\t%-18s\n", "DATE", "ISIN", "EXCH", "FII", "BNR", "SNR", "BSH", "SSH", "BRV", "SRV" for (u = 1; u <= d; u++) { date = date_list[u] for (v = 1; v <= i; v++) { isin = isin_list[v] if ( (date,isin) in seen ) { for (w = 1; w <= e; w++) { exch = exch_list[w] if ( (date,isin,exch) in seen ) { for (x = 1; x <= f; x++) { fii = fii_list[x] #check first below for records with zeroes, don't print them idx = date SUBSEP isin SUBSEP exch SUBSEP fii if ( (idx in BNR) || (idx in SNR) ) { if (idx in BNR) { bnr = BNR[idx] bsh = BSH[idx] brv = BRV[idx] } else { bnr = bsh = brv = 0 } if (idx in SNR) { snr = SNR[idx] ssh = SSH[idx] srv = SRV[idx] } else { snr = ssh = srv = 0 } printf "%-11s\t%13s\t%20s\t%19s\t%7d\t%7d\t%14d\t%14d\t%18.2f\t%18.2f\n", date, isin, exch, fii, bnr, snr, bsh, ssh, brv, srv } } } } } } } }
Infinite loop in opencv_traincascade CvCascadeClassifier::fillPassedSamples
So I have been playing around with opencv's newest LBP cascade trainer, and I keep running into an infinite loop. I believe it may be caused by my limited negative (background) image set. However, the program should not run into an infinite loop... I managed to identify the location of the infinite loop and made some modifications to the source code, not only avoiding the infinite loop but also improving detection performance in the resulting cascade file. However, I would still like someone who understands the code to tell me whether this is a proper fix and why it works (or otherwise).

Sample preparation: I have one positive image, and used "createsamples" to generate 100 distorted / rotated positive samples:

opencv_createsamples -img positive1.png -num 100 -bg neg.txt -vec samples.vec -maxidev 50 -w 100 -h 62 -maxxangle 0 -maxyangle 0.6 -maxzangle 0.4 -show 1

I have only 5 negative samples in the "negative" directory. Then my training command:

opencv_traincascade -data cascade_result -vec samples.vec -bg neg.txt -numStages 10 -numPos 100 -numNeg 200 -featureType LBP -w 100 -h 62 -bt DAB -minHitRate 0.99 -maxFalseAlarmRate 0.2 -weightTrimRate 0.95 -maxDepth 1

Note that I set -numNeg 200 even though I only have 5 negative images in "neg.txt". Later I found out numNeg does not need to match the number of negative images, as the program "crops" pieces out of your negative images repeatedly to use against positive images for training.

Stage 4 is where I run into the infinite loop, which is in the following code (see "// !!!!!"):

int CvCascadeClassifier::fillPassedSamples( int first, int count, bool isPositive, int64& consumed )
{
    int getcount = 0;
    Mat img(cascadeParams.winSize, CV_8UC1);
    cout << "isPos: " << isPositive << "; first: " << first << "; count: " << count << endl;
    for( int i = first; i < first + count; i++ )
    {
        int inner_count = 0;
        // !!!!! Here is the start of the infinite loop
        for( ; ; )
        {
            // !!!!! This is my code to fix the infinite loop:
            /*
            inner_count++;
            if (inner_count > numNeg * 200) // there should be fewer than 200 crops per negative image
            {
                cout << "force exit the loop: inner count: " << inner_count << "; consumed: " << consumed << endl;
                break;
            }
            */
            bool isGetImg = isPositive ? imgReader.getPos( img ) : imgReader.getNeg( img );
            if( !isGetImg )
                return getcount;
            consumed++;
            featureEvaluator->setImage( img, isPositive ? 1 : 0, i );
            if( predict( i ) == 1 )
            {
                getcount++;
                break;
            }
        }
    }
    return getcount;
}

I think the problem is that imgReader.getNeg(img) keeps cropping from the negative set until the "predict(i) == 1" condition is satisfied to exit the infinite loop. I do not understand what "predict(i)" does, but I do guess that if the negative set is small and limited, it will run out of "variety" of images for "predict(i)" to return 1... so the loop never finishes.

One solution is to increase the negative set, which is what I am going to try next. The other, quicker solution is the code I added at // !!!!! to limit the number of tries to 200 per negative image on average, then force an exit from the loop if no good candidate is found. With this fix, my training session went on to stage 5, then stopped there. I put the cascade xml in my app, and it performed reasonably well -- better than when I stopped at stage 4 to avoid the infinite loop. I hope someone who understands the code more will enlighten us further... thank you!
joe, you may be meeting the same problem as mine. The problem is caused because opencv_traincascade.exe doesn't get the image width and height correctly, or the original image width and height are smaller than the training window size. You can add the two lines indicated by arrows in the following code to opencv/apps/traincascade/imagestorage.cpp to solve the problem:

bool CvCascadeImageReader::NegReader::nextImg()
{
    Point _offset = Point(0,0);
    size_t count = imgFilenames.size();
    for( size_t i = 0; i < count; i++ )
    {
        src = imread( imgFilenames[last++], 0 );
        if( src.rows < winSize.height || src.cols < winSize.width )   // <-----------
            continue;                                                 // <-----------
        if( src.empty() )
            continue;
        ....

Hope this solution will help you.
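An alternative to patching the trainer is to pre-filter neg.txt so that it never lists an image smaller than the training window. A hedged sketch in Go (the 100x62 window and the one-path-per-line neg.txt format are taken from the question above; image.DecodeConfig reads only the image header, not the whole file):

package main

import (
    "bufio"
    "fmt"
    "image"
    _ "image/jpeg" // register decoders for the formats neg.txt may contain
    _ "image/png"
    "os"
)

func main() {
    const winW, winH = 100, 62 // the -w and -h from the training command

    in, err := os.Open("neg.txt")
    if err != nil {
        panic(err)
    }
    defer in.Close()

    scanner := bufio.NewScanner(in)
    for scanner.Scan() {
        path := scanner.Text()
        f, err := os.Open(path)
        if err != nil {
            continue // skip unreadable entries
        }
        cfg, _, err := image.DecodeConfig(f)
        f.Close()
        // Keep only images at least as large as the training window.
        if err == nil && cfg.Width >= winW && cfg.Height >= winH {
            fmt.Println(path)
        }
    }
}

Redirecting this program's output to a cleaned neg.txt would leave the trainer with only usable negatives.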
How to get the size of struct and its contents in bytes in golang?
I have a struct, say:

type ASDF struct {
    A uint64
    B uint64
    C uint64
    D uint64
    E uint64
    F string
}

I create a slice of that struct:

a := []ASDF{}

I do operations on that slice (adding/removing/updating structs whose contents vary); how can I get the total size in bytes (for memory) of the slice and its contents? Is there a built-in for this, or do I need to run the calculation manually with unsafe.Sizeof plus len on each string?
Sum the size of all memory, excluding garbage collector and other overhead. For example:

package main

import (
    "fmt"
    "unsafe"
)

type ASDF struct {
    A uint64
    B uint64
    C uint64
    D uint64
    E uint64
    F string
}

func (s *ASDF) size() int {
    size := int(unsafe.Sizeof(*s))
    size += len(s.F)
    return size
}

func sizeASDF(s []ASDF) int {
    size := 0
    s = s[:cap(s)]
    size += cap(s) * int(unsafe.Sizeof(s))
    for i := range s {
        size += (&s[i]).size()
    }
    return size
}

func main() {
    a := []ASDF{}

    b := ASDF{}
    b.A = 1
    b.B = 2
    b.C = 3
    b.D = 4
    b.E = 5
    b.F = "ASrtertetetetetetetDF"
    fmt.Println((&b).size())
    a = append(a, b)

    c := ASDF{}
    c.A = 10
    c.B = 20
    c.C = 30
    c.D = 40
    c.E = 50
    c.F = "ASetDF"
    fmt.Println((&c).size())
    a = append(a, c)

    fmt.Println(len(a))
    fmt.Println(cap(a))
    fmt.Println(sizeASDF(a))
}

Output:

69
54
2
2
147

http://play.golang.org/p/5z30vkyuNM
I'm afraid to say that unsafe.Sizeof is the way to go here if you want to get any result at all. The in-memory size of a structure is nothing you should rely on. Notice that even the result of unsafe.Sizeof is inaccurate: the runtime may add headers to the data that you cannot observe, to aid with garbage collection. For your particular example (finding a cache size) I suggest you go with a static size that is sensible for many processors. In almost all cases, doing such micro-optimizations is not going to pay off.
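If what you actually need is the allocation cost rather than a byte-exact structural size, one alternative is to diff runtime.MemStats.TotalAlloc around the allocation, as the matrix answer earlier on this page does. A sketch (the 1000-element slice and the string payload are illustrative assumptions; the delta includes the allocator's size-class rounding, which is part of what the heap really pays):

package main

import (
    "fmt"
    "runtime"
)

type ASDF struct {
    A, B, C, D, E uint64
    F             string
}

var keep []ASDF // package-level, so the allocation cannot be optimized away

func main() {
    var before, after runtime.MemStats
    runtime.ReadMemStats(&before)

    keep = make([]ASDF, 1000)
    for i := range keep {
        keep[i].F = "some string payload" // static literal: no extra heap allocation
    }

    runtime.ReadMemStats(&after)
    // TotalAlloc is cumulative bytes allocated; the delta covers the
    // slice's backing array, rounded up to the runtime's size classes.
    fmt.Println("allocated bytes:", after.TotalAlloc-before.TotalAlloc)
}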