Combine two files of different lengths into one file - join

The situation is:
file_1
7010-1
7010-2
7010-3
file_2
7010,xxx,yyy,7123,01
7010,xxx,yyy,7122,02
7010,xxx,yyy,9101,03
7010,xxx,yyy,7123,01
7010,xxx,yyy,7122,02
7010,xxx,yyy,9101,03
7010,xxx,yyy,7123,01
7010,xxx,yyy,7122,02
7010,xxx,yyy,9101,03
7010,xxx,yyy,7119,04
7010,xxx,yyy,7117,05
7010,xxx,yyy,7112,06
desired output
7010-1,xxx,yyy,7123,01
7010-1,xxx,yyy,7122,02
7010-1,xxx,yyy,9101,03
7010-2,xxx,yyy,7123,01
7010-2,xxx,yyy,7122,02
7010-2,xxx,yyy,9101,03
7010-3,xxx,yyy,7123,01
7010-3,xxx,yyy,7122,02
7010-3,xxx,yyy,9101,03
7010-3,xxx,yyy,7119,04
7010-3,xxx,yyy,7117,05
7010-3,xxx,yyy,7112,06
I don't expect join to be the right option here, since I don't want the rows to be pre-sorted (sorting would disturb the order implied by columns 2, 3, 4, and 5); is that correct? I would rather go for awk, something like awk 'NR==FNR {h[$1] = $0; next} {print $1,$2,$3,$4,h[$1]}' file_1 file_2, but something is missing. Alternative solutions are also welcome.

awk 'BEGIN { FS=","; OFS="," }
{
  # when column 5 restarts (first record, or a value lower than the
  # previous one), read the next replacement key from file_1
  if (!n || $5 < n) { getline id < "file_1" }
  $1 = id; print; n = $5
}' file_2
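For comparison, here is a sketch that avoids getline (whose return value the script above never checks) by slurping file_1 into an array first; it assumes, as in the sample, that column 5 drops back to a lower value at each group boundary:

awk 'BEGIN { FS=OFS="," }
     NR==FNR { key[NR] = $0; next }        # pass 1: store file_1 lines in order
     FNR==1 || $5+0 < prev { idx++ }       # a drop in column 5 starts a new group
     { prev = $5+0; $1 = key[idx]; print }' file_1 file_2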

Related

AWK: take some data input from a file and set it as a variable in the output

I have some data in a file and need to print it in a specific format.
Example content to parse:
012231-33339411.sxz.ree.fg*-*
U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqv
FhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p
012236-33349111.sxz.ree.fg*-*
bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVp
DUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTm
cUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH
I need some way, in awk, to take as the first variable the text from the beginning of the line up to the "-".
Example:
variable1=012231
and
variable1=012236
Variable 2: the 4 digits after the "-" character.
Example:
variable2=3333
and
variable2=3334
Variable 3: the 2 digits after the 4 digits of variable 2.
Example:
variable3=94
and
variable3=91
Variable 4: all the text from the following line up to the next header line.
Example:
variable4=U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqv
FhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p
and
variable4=bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVp
DUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTm
cUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH
Example of the expected printed output:
'012231' '3333' '94' 'U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqv
FhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p'
'012236' '3334' '91' 'bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVp
DUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTm
cUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH'
I have tested the following code; it prints by record number and by counting the fixed width of each field, without regard to the format or shape of the content:
awk -v FIELDWIDTHS="6 1 4 2 2 15" 'NR==1{print $1" "$3" "$4}NR==2{print}NR==3{print $1" "$3" "$4}NR==4{print}' file
But it is a large file, and the long string spans a variable number of records, so fixed record numbers will not work here; I need to capture that string into a variable so I can print it later as a field wherever it is needed.
Could you help me with some code to parse the input and print output as close to this as possible? Please also explain how the positions in the input are taken.
Thanks in advance.
Using any awk in any shell on every Unix box:
$ cat tst.awk
split($0,f,"-") > 1 {             # header line: it contains a "-"
    if ( NR > 1 ) {
        prt()
        delete var
    }
    var[1] = f[1]                 # text before the "-"
    var[2] = substr(f[2],1,4)     # first 4 digits after the "-"
    var[3] = substr(f[2],5,2)     # next 2 digits
    next
}
{ var[4] = var[4] $0 }            # accumulate the long value
END { prt() }

function prt(   i) {
    for ( i=1; i<=4; i++ ) {
        printf "\047%s\047%s", var[i], (i<4 ? OFS : ORS)
    }
}
$ awk -f tst.awk file
'012231' '3333' '94' 'U2FsdGVkX1+1pfXeR/h4u6P/BrItX75L0wHVIka4yA6tqS9a5CFUWvLu1AB4x2m8NpmJ>fyoXdADqlWDiGWi6Pw1a8NgNDbdTOlMtGBz4FCi8n97UdVQX9f0a2u9d5l7lOCxVDDzd>wJXbi9x4O+Dmo/lm9DbWAjBGKwWu0tTQxsU2TIpqvFhUZmGd3E6vN+puPXz4yXeVQhMfQ+K8OpSM2ZuTpKCtDgm0SdUDyFnalA4lxHaFZqh+E>3+9JgHK7/KiiZmIJshUmqrwnkX0yKihCcOXCzaFITiByxBM/7PGeJo0IBAjyKI/GflgQ>8GsIWWRkCJnz2OMiYKr8uOMOAfTHnW57Dq+orDG1p'
'012236' '3334' '91' 'bCRIVArOSClIWrZz6KciBFT2iPjqsS/qMRSBYinBzpDmESj8kZHoGQ46BMq+LgHJiY5P>7yygNxCkEv25GKGViKTX1X6KSSLZ+RVNEts4N7jzVLoufZ+X/TAv2Ib7pnnEj7h4rWDn>y7KP1XrTynItaas5z5fpFt2zUHFNElvNmyrjbFZVpDUsnWWDuvemWUr5YwOLxeRCnwTvfw71gwGEVeBzIJq4TsZb2/G8j9vpb/L7KNybsyQNN>DlOTMW5CHzd5otyYaNBcYo9V/4ky63q2vZMzQDWtCwVPaTKREPUqPLRKea3VkQnnsUic>/iBe+6Sv5GYl+XPGbIjWbTJWLQmc1kv8LXPyvUmTmcUVypKp9fDlyFUkOkEVAxW8dMxHJ0c83BPw37GkCvsR9itkzO0FpX0Zn+OvRQRkUCyzr>dgijhcH'
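On the "how to take the positions" part: index() returns the character position of the first "-" on the line, and substr() then takes fixed-width slices relative to that position. A minimal sketch of just that piece (pos.awk is my name for it, not part of the answer above; it assumes only header lines match digits-dash-digit):

$ cat pos.awk
/^[0-9]+-[0-9]/ {                  # a header line
    p  = index($0, "-")            # character position of the "-"
    v1 = substr($0, 1, p-1)        # from the start of the line up to the "-"
    v2 = substr($0, p+1, 4)        # the 4 digits after the "-"
    v3 = substr($0, p+5, 2)        # the next 2 digits
    print v1, v2, v3
}
$ awk -f pos.awk file
012231 3333 94
012236 3334 91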

awk/join? How do I print a field from a column based on a match between two files

I have three files I'd like to join into one, semicolon-delimited (note that the first-column value can occur multiple times in file 1).
File 1:
1;FOO;BAR;NU
1;V;V;E
2;F;B;N
3;FOO;NU;BAR
File 2:
1;YES
2;NO
3;YES
File 3:
1;NO
2;NO
3;YES
Desired outcome: (file1 $0, file2 $2, file3 $2)
1;FOO;BAR;NU;YES;NO
1;V;V;E;YES;NO
2;F;B;N;NO;NO
3;FOO;NU;BAR;YES;YES
I can't get my head around how this can be done, so any help would be appreciated!
This might work for you (GNU join):
join -t\; file1 file2 | join -t\; - file3
Join file1 and file2 first, using ; as the field delimiter, then pipe the result to a second invocation of join that reads stdin and file3 with the same delimiter.
Using GNU awk you may do this:
awk 'BEGIN { FS=OFS=";" }
ARGIND == 1 {
    ++fr[$1]
    map[$1][fr[$1]] = $0
    next
}
$1 in fr {
    for (i in map[$1])
        map[$1][i] = map[$1][i] OFS $2
}
END {
    for (i in map)
        for (j in map[i])
            print map[i][j]
}' file1 file2 file3
1;FOO;BAR;NU;YES;NO
1;V;V;E;YES;NO
2;F;B;N;NO;NO
3;FOO;NU;BAR;YES;YES
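If gawk is not available, here is a sketch in POSIX awk that produces the same result while preserving file1's row order, by loading the two lookup files into arrays and then streaming file1; it assumes every key in file1 also appears in file2 and file3:

awk 'BEGIN { FS=OFS=";" }
     FILENAME == ARGV[1] { two[$1] = $2; next }    # file2: key -> YES/NO
     FILENAME == ARGV[2] { three[$1] = $2; next }  # file3: key -> YES/NO
     { print $0, two[$1], three[$1] }              # file1: append both lookups
' file2 file3 file1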

Performant comparisons in awk?

I've got a Python script that runs through some logs, and I figured it'd be instructive to do a few benchmarks against some other approaches before deploying it. When looking at awk, I'm hoping to minimize overhead to get a 'fair' shake at beating the somewhat optimized Python variant.
My log entries look like:
--------
SomeField=SomeValue
OptionallyAppearingField=WhoKnowsWhat
AnotherField=AnotherValue
ExtraStuff=OneBonusKey=1,SecondBonusKey=2,ThirdBonusKey=3,...
--------
And I'm keen to get the value of AnotherField when one of our ThirdBonusKeys exists and has a certain value (actually just the number 1).
The 'stupid' way here is to set our RS to '--------' and then just apply a regex to $0 twice, first to see if ThirdBonusKey=1 is in the record, and then to extract AnotherField=(desired_value).
But that seems like an unfair comparison, given it's just throwing a regex at the problem (twice!). Without a guaranteed ordering of fields to leverage awk's cool FS skills, is there a quicker or more appropriate approach here? It's possible that the answer is just "this is not a job for awk", and that's okay too, I guess.
Cyrus has kindly pointed out that the sketch of code I gave above is not technically code, and he's technically correct, so here's a reasonably stupid implementation:
awk 'BEGIN{RS="--------"} { if ($0 ~ /ThirdBonusKey=1/) { for(i=1;i<NF;i++) {if ($i ~ "AnotherField=") { print $i }}}}'
Given input
--------
SomeField=SomeValue
OptionallyAppearingField=WhoKnowsWhat
AnotherField=DesiredValue1
ExtraStuff=OneBonusKey=1,SecondBonusKey=2,ThirdBonusKey=1,...
--------
SomeField=SomeValue
OptionallyAppearingField=WhoKnowsWhat
AnotherField=DesiredValue2
ExtraStuff=OneBonusKey=1,SecondBonusKey=2,ThirdBonusKey=0,...
--------
SomeField=
ExtraStuff=
--------
we'd expect output
AnotherField=DesiredValue1
Most efficiently I expect:
$ awk '/^AnotherField=/{val=$0; next} /[=,]ThirdBonusKey=1(,|$)/{print val}' file
AnotherField=DesiredValue1
but more robustly (it does not depend on AnotherField appearing before ExtraStuff within a record) and easier to enhance to do anything else you want later:
$ cat tst.awk
BEGIN { FS="[,=[:space:]]"; OFS="=" }
/^-+$/ {
    if ( f["ExtraStuff_ThirdBonusKey"] == 1 ) {
        print "AnotherField", f["AnotherField"]
    }
    delete f
    next
}
{
    if ( $1 == "ExtraStuff" ) {
        pfx = $1
        sub(/[^=]+=/,"")
        f[pfx] = $0
        pfx = pfx "_"
    }
    else {
        pfx = ""
    }
    for (i=1; i<NF; i+=2) {
        f[pfx $i] = $(i+1)
    }
}
$ awk -f tst.awk file
AnotherField=DesiredValue1
That second script first stores all of the values in an array f[] so you can access the values by their names. Here's what the contents of that array look like:
$ cat tst.awk
BEGIN { FS="[,=[:space:]]"; OFS="=" }
/^-+$/ {
    for (i in f) printf "> f[%s]=%s\n", i, f[i]
    if ( f["ExtraStuff_ThirdBonusKey"] == 1 ) {
        print "AnotherField", f["AnotherField"]
    }
    print "----"
    delete f
    next
}
{
    if ( $1 == "ExtraStuff" ) {
        pfx = $1
        sub(/[^=]+=/,"")
        f[pfx] = $0
        pfx = pfx "_"
    }
    else {
        pfx = ""
    }
    for (i=1; i<NF; i+=2) {
        f[pfx $i] = $(i+1)
    }
}
$ awk -f tst.awk file
----
> f[OptionallyAppearingField]=WhoKnowsWhat
> f[AnotherField]=DesiredValue1
> f[ExtraStuff_SecondBonusKey]=2
> f[ExtraStuff_ThirdBonusKey]=1
> f[ExtraStuff_OneBonusKey]=1
> f[SomeField]=SomeValue
> f[ExtraStuff]=OneBonusKey=1,SecondBonusKey=2,ThirdBonusKey=1,...
AnotherField=DesiredValue1
----
> f[OptionallyAppearingField]=WhoKnowsWhat
> f[AnotherField]=DesiredValue2
> f[ExtraStuff_SecondBonusKey]=2
> f[ExtraStuff_ThirdBonusKey]=0
> f[ExtraStuff_OneBonusKey]=1
> f[SomeField]=SomeValue
> f[ExtraStuff]=OneBonusKey=1,SecondBonusKey=2,ThirdBonusKey=0,...
----
> f[SomeField]=
> f[ExtraStuff]=
----
Given that array, you can test whatever conditions and/or print whatever combinations of fields you want, in any input or output order.
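As for the benchmark itself, the simplest harness is to time each candidate over the same large input and discard the output. A sketch, where parse_logs.py is a hypothetical stand-in for the Python variant:

$ time awk -f tst.awk file > /dev/null
$ time python3 parse_logs.py file > /dev/null

Redirecting to /dev/null keeps terminal I/O from dominating the measurement; running each command a few times and taking the best result reduces cache noise.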

Parse and change the output of a system through PowerShell

Initially I have to state that I have little to no experience with PowerShell so far. An upstream system generates the wrong output for me, so I want to use PowerShell to change it. From the system I get output looking like this:
TEST1^|^9999^|^Y^|^NOT IN^|^('1','2','3')^|^N^|^LIKE^|^('4','5','6','7')^|^...^|^Y^|^NOT IN^|^('8','9','10','11','12')
TEST2^|^9998^|^Y^|^NOT IN^|^('4','5','6')^|^N^|^LIKE^|^('6','7','8','9')^|^...^|^Y^|^NOT IN^|^('1','2','15','16','17')^|^Y^|^NOT IN^|^('18','19','20','21','22')
When you look at it, there is a starting part on each line (TEST1^|^9999^|^) followed by 1 to n tuples (example: Y^|^NOT IN^|^('1','2','3')^|^).
The way I want this to look like is here:
TEST1^|^9999^|^Y^|^NOT IN^|^('1','2','3')
TEST1^|^9999^|^N^|^LIKE^|^('4','5','6','7')
TEST1^|^9999^|^Y^|^NOT IN^|^('8','9','10','11','12')
TEST2^|^9998^|^Y^|^NOT IN^|^('4','5','6')
TEST2^|^9998^|^N^|^LIKE^|^('6','7','8','9')
TEST2^|^9998^|^Y^|^NOT IN^|^('1','2','15','16','17')
TEST2^|^9998^|^Y^|^NOT IN^|^('18','19','20','21','22')
So each tuple should be printed on its own line, with the starting part attached in front.
My mental model is the awk equivalent in PowerShell, but so far I lack an understanding of how to deal with an undetermined number of tuples and how to repeat the starting block.
Thank you so much in advance for your help!
I'd split the lines at ^|^ and recombine the fields of the resulting array in a loop. Something like this:
$sp = '^|^'
Get-Content 'C:\path\to\input.txt' | % {
$a = $_ -split [regex]::Escape($sp)
for ($i=2; $i -lt $a.length; $i+=3) {
"{0}$sp{1}$sp{2}$sp{3}$sp{4}" -f $a[0,1,$i,($i+1),($i+2)]
}
} | Set-Content 'C:\path\to\output.txt'
The data looks quite regular, so you could loop over it using | as the delimiter and count the following cells in threes:
$data = @"
TEST1^|^9999^|^Y^|^NOT IN^|^('1','2','3')^|^N^|^LIKE^|^('4','5','6','7')^|^Y^|^NOT IN^|^('8','9','10','11','12')
TEST2^|^9998^|^Y^|^NOT IN^|^('4','5','6')^|^N^|^LIKE^|^('6','7','8','9')^|^Y^|^NOT IN^|^('1','2','15','16','17')^|^Y^|^NOT IN^|^('18','19','20','21','22')
"#
$data.split("`n") | % {
$ds = $_.split("|")
$heading = "$($ds[0])|$($ds[1])"
$j = 0
for($i = 2; $i -lt $ds.length; $i += 1) {
$line += "|$($ds[$i])" -replace "\^(\((?:'\d+',?)+\))\^?",'$1'
$j += 1
if($j -eq 3) {
write-host $heading$line
$line = ""
$j = 0
}
}
}
Parsing an arbitrary-length string record into row records is quite error-prone. A simple solution is to process the data row by row and create the output as you go.
Here is a simple illustration of how to process a single row; processing the whole input file and writing the output is left as a trivial exercise for the reader.
$s = "TEST1^|^9999^|^Y^|^NOT IN^|^('1','2','3')^|^N^|^LIKE^|^('4','5','6','7')^|^Y^|^NOT IN^|^('8','9','10','11','12')"
$t = $s.split(')', [StringSplitOptions]::RemoveEmptyEntries)
$testNum = ([regex]::match($t[0], "(?i)(test\d+\^\|\^\d+)")).value # hunt for the 1st-column values
$t[0] = $t[0] + ')' # restore the ')' removed by the split
for($i=1;$i -lt $t.Length; ++$i) { $t[$i] = $testNum + $t[$i] + ')' } # prepend the 1st columns and restore the ')'
$t
TEST1^|^9999^|^Y^|^NOT IN^|^('1','2','3')
TEST1^|^9999^|^N^|^LIKE^|^('4','5','6','7')
TEST1^|^9999^|^Y^|^NOT IN^|^('8','9','10','11','12')
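For reference, since the question mentions the awk equivalent: the same reshaping is short in awk. A sketch, assuming every tuple has exactly three cells:

awk 'BEGIN { FS = "\\^\\|\\^"; OFS = "^|^" }   # split on literal ^|^
     { for (i = 3; i <= NF; i += 3)            # tuples start at field 3
           print $1, $2, $i, $(i+1), $(i+2) }' input.txt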

Parsing issue with a comma-separated CSV file

I am trying to extract the 4th column from a csv file (comma-separated, skipping the first 2 header lines) using this command:
awk 'NR <2 {next}{FS =","}{print $4}' filename.csv | more
However, it doesn't work because the first column contains a comma, so the 4th field is not really the 4th column. Below is an example of a row:
"sdfsdfsd, sfsdf", 454,fgdfg, I_want_this_column,sdfgdg,34546, 456465, etc
Unless you have specific reasons for using awk, I would recommend using a CSV parsing library. Many scripting languages have one built-in (or at least available) and they'll save you from these headaches.
If your first column always has quotes:
$ awk 'BEGIN{ FS="\042[ ]*," } { m=split($2,a,","); print a[3] } ' file
I_want_this_column
If the column you want is always the second to last:
$ awk -F"," '{print $(NF-1)}' file
I_want_this_column
You can try this demo script to break down the columns:
awk 'BEGIN{ FS="," }
{
    for(i=1;i<=NF;i++){
        # save a normal field
        if($i !~ /^[ ]*\042|[ ]*\042[ ]*$/){
            a[++j]=$i
        }
        # if quotes at the end, close the pending field
        if(f==1 && $i ~ /[ ]*\042[ ]*$/){
            s=s","$i
            a[++j]=s
            # reset
            s="";f=0
        }
        # if quotes in front, start collecting
        if($i ~ /^[ ]*\042/){
            s=s $i
            f=1
        }
        # middle of a quoted field
        if(f==1 && ( $i !~/\042/ ) ){
            s=s","$i
        }
    }
}
END{
    # print columns
    for(p=1;p<=j;p++){
        print "Field "p,": "a[p]
    }
}' file
output
$ cat file
"sdfsdfsd, sfsdf", "454,fgdfg blah , words ", I_want_this_column,sdfgdg
$ ./shell.sh
Field 1 : "sdfsdfsd, sfsdf"
Field 2 : fgdfg blah
Field 3 : "454,fgdfg blah , words "
Field 4 : I_want_this_column
Field 5 : sdfgdg
You shouldn't use awk here. Use Python's csv module, Perl's Text::CSV or Text::CSV_XS modules, or another real CSV parser.
Related question -
parse csv file using gawk
If you can't avoid awk, this piece of code does the job you need:
BEGIN {FS=",";}
{
f=0;
j=0;
for (i = 1; i <=NF ; ++i) {
if (f) {
a[j] = a[j] "," $(i);
if ($(i) ~ "\"$") {
f = 0;
}
}
else {
++j;
a[j] = $(i);
if ((a[j] ~ "^\"[^\"]*$")) {
f = 1;
}
}
}
for (i = 1; i <= j; ++i) {
gsub("^\"","",a[i]);
gsub("\"$","",a[i]);
gsub("\"\"","\"",a[i]);
print "i = \"" a[i] "\"";
}
}
Working with CSV files that have quoted fields with commas inside can be difficult with the standard UNIX text tools.
I wrote a program called csvquote to make the data easy for them to handle. In your case, you could use it like this:
csvquote filename.csv | awk 'NR <2 {next}{FS =","}{print $4}' | csvquote -u | more
or you could use cut and tail like this:
csvquote filename.csv | tail -n +3 | cut -d, -f4 | csvquote -u | more
The code and docs are here: https://github.com/dbro/csvquote
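If GNU awk is an option, its FPAT variable, which defines what a field matches rather than what separates fields, also handles simple cases like this one. A sketch, assuming quoted fields contain no escaped quotes or embedded newlines:

gawk -v FPAT='([^,]*)|("[^"]*")' 'NR > 2 { print $4 }' filename.csv

Any spaces after a comma become part of the unquoted field, so you may want to strip them first, e.g. with gsub(/^ */, "", $4), before printing.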
