How to print the starting position of a pattern in grep

In Python's regex (re) library I can do re.search("<pattern>", string).start() to get the start of the pattern (if the pattern exists).
How can I do the same in the Unix command-line tool grep?
E.g. if pattern = "th.n" and the string is "somethingwrong", I expect to see the number 5 (1-based; 4 in a 0-based count would also be OK).
Thank you!

You can do this with grep's -o and -b options. For example:
echo "abcdefghij" | grep -aob "e"
outputs:
4:e
Here:
-b prints the byte offset of each match
-a tells grep to treat the input as text (even if it looks like binary)
-o prints only the matching part of the line
With your example:
echo ""somethingwrong"" | grep -aob "th.n"
4:thin
This works great on multiple matches:
echo "abcdefghiqsdqdqdfjjklqsdljkhqsdlf" | grep -aob "f"
5:f
16:f
32:f
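Note that -b reports 0-based byte offsets (counted over the whole input, and in bytes rather than characters for multibyte text). If you want the 1-based position the question asks for, a small post-processing step does it; a minimal sketch:
echo "somethingwrong" | grep -aob "th.n" | awk -F: '{print $1 + 1, $2}'
which prints:
5 thin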

Maybe a Perl one-liner would be a happy medium between having to write a Python program and the simplicity of a standard Unix tool.
Given this file:
$ cat foo.txt
This thing
that thing
Not here
another thing way over here that has another thing and a third thing
thank you.
You could run this Perl one-liner:
$ perl -lne'while(/th.n/g){print $.," ",$-[0]," ",$_;}' foo.txt
1 5 This thing
2 5 that thing
4 8 another thing way over here that has another thing and a third thing
4 45 another thing way over here that has another thing and a third thing
4 63 another thing way over here that has another thing and a third thing
5 0 thank you.
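In case the special variables are unfamiliar: $. is the current input line number and $-[0] is the 0-based offset at which the last successful match started, so each output line is the line number, the match offset, and the line itself. An equivalent, slightly more spelled-out version of the same one-liner:
perl -lne 'while (/th.n/g) { print join(" ", $., $-[0], $_) }' foo.txt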
Also, the grep-like search tool ack (which I wrote) has a --column option to display the column:
$ ack th.n --column foo.txt /dev/null
foo.txt
1:6:This thing
2:6:that thing
4:9:another thing way over here that has another thing and a third thing
5:1:thank you.
Or with the --nogroup option, so that the filename appears on each line:
$ ack th.n --column --nogroup foo.txt /dev/null
foo.txt:1:6:This thing
foo.txt:2:6:that thing
foo.txt:4:9:another thing way over here that has another thing and a third thing
foo.txt:5:1:thank you.
I had to add /dev/null to the search because ack's output would be different if only one file were being searched.
ripgrep has a --column option, too.
$ rg --column --line-number th.n foo.txt
1:6:This thing
2:6:that thing
4:9:another thing way over here that has another thing and a third thing
5:1:thank you.
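If you want byte offsets from ripgrep rather than columns, it also has a -b/--byte-offset flag (0-based, counted from the start of the input), which combined with -o reports the offset of each match; the exact output format depends on the other flags and your version, so check rg --help:
$ rg -ob 'th.n' foo.txt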

Related

Counting the number of times each pattern in a file appears in a separate file

I am trying to scan a file (test.txt), something like this:
make
bake
baker
makes
take
cook
sbake
for patterns listed in a separate file (ref.txt):
ake
make
bake
look
I have tried looping with grep like so:
while read seq; do grep -c "$seq" test.txt; done > out.txt < ref.txt
However, it only counts exact matches, not partial matches (or it is inconsistent in counting partial matches), and I get this output:
4
1
2
0
instead of
6
2
3
0
Thanks for any help!
See why-is-using-a-shell-loop-to-process-text-considered-bad-practice for some, but not all, of the reasons not to try to do this with a shell loop.
The standard UNIX tool for manipulating text is awk:
$ awk 'NR==FNR{cnt[$0]=0;next} {for (re in cnt) cnt[re]+=gsub(re,"&")} END{for (re in cnt) print re, cnt[re]}' ref.txt test.txt
ake 6
bake 3
look 0
make 2
The above assumes the text in your ref.txt file doesn't contain any regexp metacharacters, or that if it does, a regexp match is what you want. If it can contain metacharacters but you need string rather than regexp matching, you'd need a slightly different solution.
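For reference, a minimal sketch of such a string-match variant (plain POSIX awk; it counts each ref.txt entry as a literal substring using index() instead of treating it as a regexp):
awk 'NR==FNR{cnt[$0]=0;next} {for (s in cnt){t=$0; while ((p=index(t,s))>0) {cnt[s]++; t=substr(t,p+length(s))}}} END{for (s in cnt) print s, cnt[s]}' ref.txt test.txt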
$ while read -r line; do grep -c $line test.txt ; done < ref.txt
6
2
3
0
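If the entries in ref.txt might contain regexp metacharacters or leading dashes, a slightly more defensive variant of the same loop (fixed-string matching, quoted) would be:
while read -r line; do grep -cF -- "$line" test.txt; done < ref.txt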

Grep filenames from ls for a specific part of them

I want to extract a specific part out of the filenames to work with them.
Example:
ls -1
REZ-Name1,Surname1-02-04-2012.png
REZ-Name2,Surname2-07-08-2013.png
....
So I want to get only the part with the name.
How can this be achieved?
There are several ways to do this. Here's a loop:
for file in REZ-*-??-??-????.png
do
    name=${file#*-}
    name=${name%-??-??-????.png}
    echo "($name)"
done
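To spell out what the two parameter expansions do (standard shell behavior), for file=REZ-Name1,Surname1-02-04-2012.png:
name=${file#*-}                 # strip the shortest prefix ending in '-'  -> Name1,Surname1-02-04-2012.png
name=${name%-??-??-????.png}    # strip the trailing date and extension   -> Name1,Surname1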
Given a variety of filenames with all sorts of edge cases from spacing, additional hyphens and line feeds:
REZ-Anna-Maria,de-la-Cruz-12-32-2015.png
REZ-Bjørn,Dæhlie-01-01-2015.png
REZ-First,Last-12-32-2015.png
REZ-John Quincy,Adams-11-12-2014.png
REZ-Ridiculous example # this is one filename
is ridiculous,but fun-22-11-2000.png # spanning two lines
it outputs:
(Anna-Maria,de-la-Cruz)
(Bjørn,Dæhlie)
(First,Last)
(John Quincy,Adams)
(Ridiculous example
is ridiculous,but fun)
If you're less concerned with correctness, you can simplify it further:
$ ls | grep -o '[^-]*,[^-]*'
Maria,de
Bjørn,Dæhlie
First,Last
John Quincy,Adams
is ridiculous,but fun
In this case, cut makes more sense than grep:
ls -1 | cut -f2 -d-
This cuts the second field from the input, using '-' as the field delimiter. That other guy's answer will correctly handle some cases mine will not, but for one-off uses, I generally find the semantics of cut much easier to remember.
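With the example filenames from the question, that would print:
$ ls -1 | cut -f2 -d-
Name1,Surname1
Name2,Surname2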

duplicate grep output when comparing two files

I have literally been at this for 5 hours. I have BusyBox on my device, and unfortunately I do not have -X in grep to make my life easier.
Edit:
I have two lists, both containing MAC addresses; essentially I just want to achieve offline MAC address lookup so I don't have to keep looking them up online.
list.txt has vendor MAC prefixes; of course this isn't the complete list, but just for an example:
00:13:46
00:15:E9
00:17:9A
00:19:5B
00:1B:11
00:1C:F0
scan has a list of different (full-length) MAC addresses whose vendors are unknown. Whenever there is a match, I want the corresponding line in scan to be output.
Pretty much it does that, but it outputs everything from the scan file and then outputs the matching line at the end, causing a duplicate. I tried sort -u, but it has no effect; it's as if there were two different outputs from two different methods. The reason I say that is that it instantly outputs the scan file with everything in it, and a couple of seconds later it outputs the matching line.
From searching, I came across this:
#!/bin/bash
while read line; do
grep -F 'list' 'scan'
done < list.txt
which displays the duplicate result when/if a match is found; the output is pretty much my scan file echoed, then the matched pattern displayed, which creates the duplicate.
It is frustrating that I have not found a solution after clicking on all the links in Google up to page 9.
Please someone help me.
I don't know if the BusyBox sed supports this out of the box, but if not, it should be easy to do in Awk or Perl instead.
Create a sed script to print lines from file2 which are covered by a prefix in file1 by transforming each line in file1 into a sed command to print a match for that regular expression:
sed 's%.*%/&/p%' file1 | sed -n -f - file2
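For illustration, with the vendor prefixes from list.txt above as file1, the first sed turns each prefix into a print command, producing a script like this, which the second sed then runs against file2 (the scan file):
/00:13:46/p
/00:15:E9/p
/00:17:9A/p
and so on for the remaining prefixes.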
The same in Awk:
awk 'NR==FNR { a[++i]="^" $0; next }
{ for (j=1; j<=i; ++j) if ($0 ~ a[j]) print }' file1 file2
OK guys, I did a nested for loop (probably very inefficient), but I got it working, printing the matching MAC addresses using this:
#!/usr/bin/bash
for scanlist in `cat scan | cut -d: -f1,2,3`
do
    for listt in `cat list`
    do
        if [[ $scanlist == $listt ]]; then
            grep $scanlist scan
        fi
    done
done
If anyone can make this more elegant, please do, but it works for me for now. I think the problem I had was that one list contained just 00:11:22 while my other list contained 00:11:22:33:44:55; that is why I cut my scanlist entries down to the same length as the entries in my other list. So this only outputs the matches instead of producing duplicate output.
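For a possibly more elegant take on the same idea, assuming your grep supports -f (read patterns from a file), which most BusyBox builds do, though you should check with grep --help:
sed 's/^/^/' list > prefixes    # anchor each vendor prefix to the start of the line
grep -f prefixes scan           # print every scan line whose beginning matches a known prefix
Each matching line is printed once, so there is no duplicate output.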

Recursively grep results and pipe back

I need to find some matching conditions from a file and recursively find the next conditions in the previously matched files. I have something like this:
input.txt
123
22
33
The above terms need to be found in the following files; the challenge is that if 123 is found in, say, 10 files, then 22 should be searched in those 10 files only, and so on...
Example files are f1, f2, f3, f4, ..., f1200
so it is like I need to grep -w "123" f* | grep -w "123" | .....
It's not possible to list them manually, so is there any easier way?
You can solve this using an awk script; I've encountered a similar problem and this will work fine:
awk '{ if(NR>1) printf("|"); printf("grep -w %d f*",$1) } END{ print "" }' input.txt | sh
What it does:
it reads input.txt line by line
for every record after the first, it prints a | before the grep command, so the generated grep commands are chained into one pipeline with no trailing pipe
the resulting command line is then sent to the shell for execution
Perhaps taking a meta-programming viewpoint would help. Have grep output a series of grep commands. Or write a little Perl program. Maybe Ruby, if the mood suits.
You can use grep -lw to get the list of file names that matched (note that with -l, grep stops reading each file after its first match).
You capture the list of file names and use that for the next iteration in a loop.
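A minimal sketch of that approach (assuming the terms live in input.txt, the data files match f*, and the filenames contain no spaces):
files=$(ls f*)                              # start with every candidate file
while read -r term; do
    [ -n "$files" ] || break                # stop early if nothing is left
    files=$(grep -lw -- "$term" $files)     # keep only the files containing this term
done < input.txt
echo "$files"                               # the files that contain every term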

grep for a string which has a specific number at the end

I want to grep for the string THREAD: 2. It has a space in between, and I'm not able to figure out how.
I tried grep "THREAD:[ \2]", but it's not working.
Please let me know.
Try grep "THREAD: 2" <filename>? You just want a literal '2', right?
If you are using GNU grep you could try the alias egrep (or grep -E) with 'THREAD: 2$'
You might have to use '^.*THREAD: 2$'
grep reports back the entire line that has matched your pattern. If you wish to look at lines that contain THREAD: 2 then the following should work -
grep "THREAD: 2" filename
However, if you wish to fetch lines that could contain THREAD: and any number then you can use a character class. So in that case the answer would be -
grep "THREAD: [0-9]" filename
You can add + after the character class, which means one or more digits, so that you can match numbers like 1, 2, 3 or 11, 12, 13, etc. (with GNU basic grep that is written \+; with grep -E it is a plain +).
If you only want to fetch THREAD: 2 from your line then you will have to use the -o option of grep. It means: show me only the matched pattern from the file, not the entire line.
grep -o "THREAD: 2" filename
You can look up the man page for grep and play around with all the options.
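Putting those pieces together, a small worked example (assuming grep -E for the + quantifier); the pattern means "THREAD: followed by one or more digits at the end of the line":
grep -E "THREAD: [0-9]+$" filename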
