How to perform calculations over a log file - parsing

I have a log file that looks like this:
I, [2009-03-04T15:03:25.502546 #17925] INFO -- : [8541, 931, 0, 0]
I, [2009-03-04T15:03:26.094855 #17925] INFO -- : [8545, 6678, 0, 0]
I, [2009-03-04T15:03:26.353079 #17925] INFO -- : [5448, 1598, 185, 0]
I, [2009-03-04T15:03:26.360148 #17925] INFO -- : [8555, 1747, 0, 0]
I, [2009-03-04T15:03:26.367523 #17925] INFO -- : [7630, 278, 0, 0]
I, [2009-03-04T15:03:26.375845 #17925] INFO -- : [7640, 286, 0, 0]
I, [2009-03-04T15:03:26.562425 #17925] INFO -- : [5721, 896, 0, 0]
I, [2009-03-04T15:03:30.951336 #17925] INFO -- : [8551, 4752, 1587, 1]
I, [2009-03-04T15:03:30.960007 #17925] INFO -- : [5709, 5295, 0, 0]
I, [2009-03-04T15:03:30.966612 #17925] INFO -- : [7252, 4928, 0, 0]
I, [2009-03-04T15:03:30.974251 #17925] INFO -- : [8561, 4883, 1, 0]
I, [2009-03-04T15:03:31.230426 #17925] INFO -- : [8563, 3866, 250, 0]
I, [2009-03-04T15:03:31.236830 #17925] INFO -- : [8567, 4122, 0, 0]
I, [2009-03-04T15:03:32.056901 #17925] INFO -- : [5696, 5902, 526, 1]
I, [2009-03-04T15:03:32.086004 #17925] INFO -- : [5805, 793, 0, 0]
I, [2009-03-04T15:03:32.110039 #17925] INFO -- : [5786, 818, 0, 0]
I, [2009-03-04T15:03:32.131433 #17925] INFO -- : [5777, 840, 0, 0]
I'd like to create a shell script that calculates the average of the 2nd and 3rd fields in brackets (840 and 0 in the last example). An even tougher question: is it possible to get the average of the 3rd field only when the last one is not 0?
I know I could use Ruby or another language to create a script, but I'd like to do it in Bash. Any good suggestions on resources, or hints on how to create such a script, would help.

Use bash and awk:
cat file | sed -ne 's:^.*INFO.*\[\([0-9, ]*\)\][ \r]*$:\1:p' | awk -F ' *, *' '{ sum2 += $2 ; sum3 += $3 } END { if (NR>0) printf "avg2=%.2f, avg3=%.2f\n", sum2/NR, sum3/NR }'
Sample output (for your original data):
avg2=2859.59, avg3=149.94
Of course, you do not need to use cat; it is included here for legibility and to illustrate that the input data can come from any pipe. If you have to operate on an existing file, run sed -ne '...' file | ... directly.
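For example, the same pipeline without cat:
sed -ne 's:^.*INFO.*\[\([0-9, ]*\)\][ \r]*$:\1:p' file | awk -F ' *, *' '{ sum2 += $2 ; sum3 += $3 } END { if (NR>0) printf "avg2=%.2f, avg3=%.2f\n", sum2/NR, sum3/NR }'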
EDIT
If you have access to gawk (GNU awk), you can eliminate the need for sed as follows:
cat file | gawk '{ if(match($0, /.*INFO.*\[([0-9, ]*)\][ \r]*$/, a)) { cnt++; split(a[1], b, / *, */); sum2+=b[2]; sum3+=b[3] } } END { if (cnt>0) printf "avg2=%.2f, avg3=%.2f\n", sum2/cnt, sum3/cnt }'
Same remarks re. cat apply.
A bit of explanation:
sed only prints lines that match the regular expression (the -n option combined with the trailing p flag): lines containing INFO followed by any combination of digits, spaces and commas between square brackets at the end of the line, allowing for trailing spaces and CR. For any matching line, only what's between the square brackets (\1, corresponding to what's between \(...\) in the regular expression) is kept before printing (p)
sed will output lines that look like: 8541, 931, 0, 0
awk uses a comma surrounded by 0 or more spaces (-F ' *, *') as the field delimiter; $1 corresponds to the first column (e.g. 8541), $2 to the second, etc. Missing columns count as value 0
at the end, awk divides the accumulators sum2 and sum3 by the number of records processed, NR
gawk does everything in one shot. It first tests whether each line matches the same regular expression passed to sed in the previous example (except that, unlike sed, awk does not require a \ in front of the round brackets delimiting the area of interest). If the line matches, what's between the round brackets ends up in a[1], which we then split using the same separator (a comma surrounded by any number of spaces) and use to accumulate. I introduced cnt instead of continuing to use NR because the number of records processed, NR, may be larger than the actual number of relevant records (cnt) if not all lines are of the form INFO ... [...comma-separated-numbers...]; this was not an issue in the sed|awk version, since sed guaranteed that all lines passed on to awk were relevant.
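As for the tougher part of the question, averaging the 3rd field only when the 4th one is not 0: the commands above don't cover it, but the same sed filter can feed a slightly different awk program. A minimal sketch (the $4 != 0 test is the only new ingredient):
sed -ne 's:^.*INFO.*\[\([0-9, ]*\)\][ \r]*$:\1:p' file |
awk -F ' *, *' '{ if ($4 != 0) { sum3 += $3; cnt++ } }
END { if (cnt > 0) printf "avg3=%.2f\n", sum3/cnt }'
On the sample data only two records have a non-zero 4th field (with 1587 and 526 in the 3rd field), so this prints avg3=1056.50.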

Posting the reply I pasted to you over IM here too, just so I get to try StackOverflow out :)
# replace $2 with the column you want to avg;
awk '{ print $2 }' < log | perl -ne 'END { printf "%.2f\n", $total/$n }; chomp; $total += $_; $n++'

Use nawk or /usr/xpg4/bin/awk on Solaris.
awk -F'[],]' 'END {
  if (NR && ct) print s/NR, t/ct
}
{
  s += $(NF-3)
  if ($(NF-1)) {
    t += $(NF-2)
    ct++
  }
}' infile
With ] or , as the field separator, $(NF-3) is the 2nd bracketed number, $(NF-2) the 3rd and $(NF-1) the 4th, so this prints the plain average of the 2nd field and the average of the 3rd field over records whose 4th field is non-zero (2859.59 and 1056.5 for the sample data).

Use Python
logfile= open( "somelogfile.log", "r" )
sum2, count2= 0, 0
sum3, count3= 0, 0
for line in logfile:
# find right-most brackets
_, bracket, fieldtext = line.rpartition('[')
datatext, bracket, _ = fieldtext.partition(']')
# split fields and convert to integers
data = map( int, datatext.split(',') )
# compute sums and counts
sum2 += data[1]
count2 += 1
if data[3] != 0:
sum3 += data[2]
count3 += 1
logfile.close()
print sum2, count2, float(sum2)/count2
print sum3, count3, float(sum3)/count3

How to parse values with AWK when column number is inconsistent

Input file:
6 31236622 HLA_C*05:01:01:01 A T . PASS AF=0.07724;MAF=0.07724;R2=0.98466;IMPUTED GT:DS:HDS:GP 1|0:0.999:0.999,0.000:0.001,0.999,0.000 0|0:0:0,0:1,0,0 1|1:1.994:0.995,1.000:0.000,0.006,0.994
6 29910248 HLA_A*01:01 A T . PASS AF=0.15969;MAF=0.15969;R2=0.97333;IMPUTED GT:DS:HDS:GP 0|0:0:0,0:1,0,0 1|0:1.000:1.000,0.000:0.000,1.000,0.000 0|0:0:0,0:1,0,0
6 31322134 HLA_B*55:01 A T . PASS AF=0.01091;MAF=0.01091;R2=0.94511;IMPUTED GT:DS:HDS:GP 0|0:0:0,0:1,0,0 0|0:0:0,0:1,0,0 0|0:0:0,0:1,0,0
6 31322132 HLA_B*55 A T . PASS AF=0.01091;MAF=0.01091;R2=0.94485;IMPUTED GT:DS:HDS:GP 0|0:0:0,0:1,0,0 0|0:0:0,0:1,0,0 0|0:0:0,0:1,0,0
6 31322006 HLA_B*44:02:01:01 A T . PASS AF=0.08074;MAF=0.08074;R2=0.97706;IMPUTED GT:DS:HDS:GP 1|0:0.999:0.999,0.000:0.001,0.999,0.000 0|0:0:0,0:1,0,0 1|1:1.997:0.998,0.999:0.000,0.003,0.997
I want to parse a specific number from each column after the "GT:DS:HDS:GP" column, specifically the numbers after "x|x:". So the desired output is:
0.999, 0, 1.994
0, 1.000, 0
0, 0, 0
0, 0, 0
0.999, 0, 1.997
To parse the desired values from (e.g.) line 4, I can use:
awk -F: '{for (i=5; i<=NF; i+=3) printf "%s%s", $i, (i+3 <= NF ? ", " : ORS)}'
Line 5 would require:
awk -F: '{for (i=9; i<=NF; i+=3) printf "%s%s", $i, (i+3 <= NF ? ", " : ORS)}'
So the problem with the input file is that column 3 (space-delimited) contains a variable number of colons, which makes the colon a poor delimiter for this particular input file, even though the desired values are themselves surrounded by colons!
I thought about using "|" as the delimiter, with substr($i,3,?), but the desired values have an inconsistent number of digits (hence the "?").
Is there a flexible awk code to get the desired output?
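To see the inconsistency, you can count the colon-separated fields on each line (assuming the sample above is saved as file):
awk -F: '{ print NF }' file
For the five sample lines this prints 16, 14, 14, 13 and 16, so no fixed field index can work.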
You may try this awk:
awk -v OFS=', ' '$9 == "GT:DS:HDS:GP" {for (i=10; i<=NF; ++i) if ($i ~ /^[0-9]+\|[0-9]+:/ && split($i, a, /:/)) printf "%s", (i == 10 ? "" : OFS) a[2]; print ""}' file
0.999, 0, 1.994
0, 1.000, 0
0, 0, 0
0, 0, 0
0.999, 0, 1.997
An expanded form:
awk -v OFS=', ' '
$9 == "GT:DS:HDS:GP" {
    for (i = 10; i <= NF; ++i)
        if ($i ~ /^[0-9]+\|[0-9]+:/ && split($i, a, /:/))
            printf "%s", (i == 10 ? "" : OFS) a[2]
    print ""
}' file
Why do you care about the space-delimited columns at all?
awk '{
    sub(/.* GT:DS:HDS:GP */, "")
    i = split($0, n, /[0-9]\|[0-9]:/)
    sep = ""
    for (x = 2; x <= i; x++) {
        sub(/:.*/, "", n[x])
        printf("%s%s", sep, n[x])
        sep = ", "
    }
    printf "\n"
}' file
We successively pick apart each line: first remove everything up to and including GT:DS:HDS:GP, then split the remaining string into n on the specified delimiter, then clean up each resulting field by removing everything from its first colon onwards, and print the results. (We skip the first element, which only contains the short or empty string before the first delimiter.)
Output for your sample:
0.999, 0, 1.994
0, 1.000, 0
0, 0, 0
0, 0, 0
0.999, 0, 1.997
I have no idea what these fields stand for, so I just picked single-letter variable names; you can probably improve readability by giving them more descriptive names.

How do I convert an integer to a list of indexes in Lua

I'm pretty new to Lua. I'm trying to convert an integer into an array of the indexes of its set bits, but cannot find a robust way to do this.
Here are two examples of what I'm trying to achieve:
Input: 0x11
Desired output: [0, 4]
Input: 0x29
Desired output: [0, 3, 5]
This will work if you're on Lua 5.3 or newer (5.3 introduced the native bitwise operators & and >> used below):
local function oneBits(n)
    local i, rv = 0, {}
    while n ~= 0 do
        if n & 1 == 1 then
            table.insert(rv, i)
        end
        i = i + 1
        n = n >> 1
    end
    return rv
end

How to use ReadFile from kernel32 with NSIS

I'm trying to open a file and read its contents into a buffer using NSIS Installer.
Unfortunately, everything works except KERNEL32::ReadFile. I read that a lot of people have had problems with this API, and I can't find a solution.
Here is my code:
StrCpy $2 $2\TOS.TXT
System::Call 'Kernel32::CreateFile(t, i, i, i, i, i, i) i (r2, 0x80000000, 0, 0, 4, 0x80, 0) .r3'
System::Call 'kernel32::GetFileSize(pr3, p0)i.r7' ; Call API to read 32-bit file size
System::Call "kernel32::VirtualAlloc(i0, ir7, i0x3000, i0x40) .r1"
System::Call "KERNEL32::ReadFile(pr3,pr1,ir7,*i,p0)i.r3"
The file is well opened and the buffer is well created with the correct size.
Any help is appreciated,
Thanks,
Chris.
Your VirtualAlloc call is missing the output type before .r1.
There is no need to use the System plug-in just to read from a simple file.
!macro MakeTestFile
FileOpen $0 "$temp\nsis_test.txt" w
FileWrite $0 "Hello$\nWorld!"
FileClose $0
!macroend
StrCpy $9 "$temp\nsis_test.txt"
!insertmacro MakeTestFile
FileOpen $0 "$9" r
FileRead $0 $1 ; Line 1
FileRead $0 $2 ; Line 2
FileClose $0
MessageBox MB_OK $1$2
!insertmacro MakeTestFile
FileOpen $0 "$9" r
FileSeek $0 1 SET
FileReadByte $0 $1
FileClose $0
IntFmt $1 "0x%.2X" $1
MessageBox MB_OK "Byte #2 is $1"
!insertmacro MakeTestFile
System::Call 'KERNEL32::CreateFile(t, i, i, p, i, i, p) p (r9, 0x80000000, 0, 0, 4, 0x80, 0) .r3'
System::Call 'KERNEL32::GetFileSize(pr3, p0)i.r7'
System::Call "KERNEL32::VirtualAlloc(p0, pr7, i0x3000, i0x40)p.r1"
System::Call "KERNEL32::ReadFile(pr3,pr1,ir7,*i,p0)i.r3"
System::Call "USER32::MessageBoxA(p$hwndparent,pr1,t 'System::Call',i0)"
System::Call "KERNEL32::VirtualFree(pr1,p0,i0x8000)"
The System plug-in also has System::Alloc/StrAlloc/Free so there is no need to call VirtualAlloc directly if you need memory.

List of lists instead of plain list?

I'm learning Erlang. Here is a simple task: convert integers like 1011, 111213 and 12345678 to the lists [10, 11], [11, 12, 13] and [12, 34, 56, 78] respectively.
Here is the function I wrote:
num_to_list(0) -> [];
num_to_list(Num) -> [Num rem 100 | [num_to_list((Num - Num rem 100) div 100)]].
But num_to_list(1234) gives me [34,[12,[]]]. For now I don't care that the list is reversed, but I don't understand why it is not a plain list.
num_to_list already returns a list, so you don't need the extra [] around the recursive call in num_to_list(Num): with those brackets, the tail of the cons cell is a one-element list whose single element is a list, which is exactly the nesting you see. I mean:
num_to_list(0) -> [];
num_to_list(Num) -> [Num rem 100 | num_to_list((Num - Num rem 100) div 100)].
Now num_to_list(1234) returns [34, 12].

How to use some text processing (awk etc.) to put some character in a text file at certain lines

I have a text file which has hex values, one value per line; a file has many such values one below another. I need to do some analysis of the values, for which I need to put some kind of delimiter/marker, say a '#', in this file before line numbers 32, 47, 62, 77, ...; the difference between two consecutive line numbers in this pattern is always 15.
I am trying to do it using awk. I tried a few things but they didn't work.
What is the command in awk to do it?
Any other solution involving some other language/script/tool is also welcome.
Thank you.
-AD
This is how you can use AWK for it,
awk 'BEGIN{ i=0 }
{
  if (FNR < 32) { print $0 }
  else {
    if (i % 15 == 0) { printf "#%s\n", $0 } else { print $0 }
    i++
  }
}' inputfile.txt > outputfile.txt
How it works,
BEGIN initializes the counter i, which counts records from your starting line 32 onwards
input lines are called records, and FNR is an AWK variable that counts them
FNR < 32 passes the first 31 records through unchanged
from record 32 onwards, i % 15 == 0 holds on records 32, 47, 62, ..., and those records are printed with a # prefixed
print $0 prints the record (the line) as is
Inside the single quotes, awk does not mind the line breaks, so you can paste the program across several lines or type it all on a single command line.
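A quick sanity check of the marker positions (a sketch, assuming seq and grep are available):
seq 1 80 | awk 'BEGIN{ i=0 }
{
  if (FNR < 32) { print $0 }
  else {
    if (i % 15 == 0) { printf "#%s\n", $0 } else { print $0 }
    i++
  }
}' | grep -n '^#'
This prints 32:#32, 47:#47, 62:#62 and 77:#77, confirming that the markers land on lines 32, 47, 62 and 77.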
Or, you can use it as an AWK file,
# File: comment.awk
BEGIN { i = 0 }
{
  if (FNR < 32) {
    print $0
  }
  else {
    if (i % 15 == 0) {
      printf "#%s\n", $0
    }
    else {
      print $0
    }
    i++
  }
}
And run it as,
awk -f comment.awk inputfile.txt > outputfile.txt
Hope this will help you get more out of AWK.
Python:
f_in = open("file.txt")
f_out = open("file_out.txt","w")
offset = 4 # 0 <= offset < 15 ; first marker after fourth line in this example
for num,line in enumerate(f_in):
if not (num-offset) % 15:
f_out.write("#\n")
f_out.write(line)
Haskell:
offset = 31
chunk_size = 15

main = do
    (h, t) <- fmap (splitAt offset . lines) getContents
    mapM_ putStrLn h
    mapM_ ((putStrLn "#" >>) . mapM_ putStrLn) $
        map (take chunk_size) $
        takeWhile (not . null) $
        iterate (drop chunk_size) t
