How to replace line number delimiter in grep output - grep

I am doing:
egrep -e "String" -in -A 2 file1.log
I am getting output as
111:some text
112-some text
113-some text
How can I replace `^[0-9]{1,9}-` with `^[0-9]{1,9}:` at the start of each context line, so that the output looks like this?
111:some text
112:some text
113:some text

Pipe the output into sed. This works here:
grep ... | sed 's/^\([0-9]\+\)-/\1:/'
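For example, with a hypothetical file1.log built inline, the whole pipeline looks like this (grep uses `-` as the delimiter for context lines, which sed rewrites to `:`):

```shell
# Hypothetical sample log, for illustration only.
printf 'junk\nsome String text\nnext line\nlast line\n' > file1.log

# -i: ignore case, -n: number lines, -A 2: two lines of trailing context.
grep -in -A 2 "String" file1.log | sed 's/^\([0-9]\{1,\}\)-/\1:/'
# → 2:some String text
#   3:next line
#   4:last line
```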

Related

How to get lines after match using grep for command line output?

Trying to trim the output of a command on terminal. I want to see only strings after blah in a command line output. I tried
<command> | grep -A "blah"
but I get this error:
grep: illegal option -- A
The -A option takes a numeric count (e.g. grep -A 1 "blah") and is not supported by every grep implementation, hence the error. Instead, I am using cut in conjunction with grep to get the strings after the keyword "blah":
echo "random text string blah strings after" | grep -o "blah.*$" | cut -c 5-
The grep portion extracts the rest of the line starting at "blah" (including "blah" itself), and the cut command removes the first four characters of that string. Only the first occurrence of "blah" on the line serves as the delimiter.
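A minimal run of that pipeline (note the leading space left over from the delimiter; `cut -c 6-` would drop it as well):

```shell
# grep -o prints only the matched part; cut then strips the "blah" prefix.
echo "random text string blah strings after" | grep -o "blah.*$" | cut -c 5-
# → " strings after" (with a leading space)
```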

grep return only one match per line

example file:
foobar random text foobar random text foobar
text
text
text
If I use grep to search for the word foobar, how can I keep grep from returning the first line three times because it finds foobar three times? What I would like is only one result per line, even if the word occurs multiple times on that line.
Simple awk alternative:
awk '/\<foobar\>/{print NR,"foobar"}' file
The output (for your example input):
1 foobar
\< and \> match word boundaries
NR holds the current line number
With perl:
perl -ne 'print $.," ",$1,"\n" if /\b(foobar)\b/' file
The output:
1 foobar
file.txt:
foobar random text foobar random text foobar
text
text
text
command:
grep foobar file.txt
output:
foobar random text foobar random text foobar
grep version: GNU grep 3.4
So, the line containing foobar is shown only once. If you see more lines, include option -n to see the line numbers of each output line, i.e.,
grep -n foobar file.txt
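A quick sanity check of that behavior, recreating the sample file inline (`-c` counts matching lines, not matches):

```shell
# Recreate the example file: three foobars on line 1.
printf 'foobar random text foobar random text foobar\ntext\ntext\ntext\n' > file.txt

# -c counts matching LINES, so three occurrences on one line still count as 1.
grep -c foobar file.txt
# → 1
```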

Use awk to parse and modify every CSV field

I need to parse and modify each field of a CSV header line for a dynamic SQLite CREATE TABLE statement. Below is what works from the command line, with the expected output:
echo ",header1,header2,header3"| awk 'BEGIN {FS=","}; {for(i=2;i<=NF;i++){printf ",%s text ", $i}; printf "\n"}'
,header1 text ,header2 text ,header3 text
Well, it breaks when run from within a bash shell script. I got it to work by writing the output to a file, as below:
echo $optionalHeaders | awk 'BEGIN {FS=","}; {for(i=2;i<=NF;i++){printf ",%s text ", $i}; printf "\n"}' > optionalHeaders.txt
This sucks! There are a lot of examples that show how to parse/modify a specific Nth field, but this problem requires every field to be modified. Is there a more concise and elegant Awk one-liner that can store its output in a variable rather than writing to a file?
sed is usually the right tool for simple substitutions on a single line. Take your pick:
$ echo ",header1,header2,header3" | sed 's/[^,][^,]*/& text/g'
,header1 text,header2 text,header3 text
$ echo ",header1,header2,header3" | sed -r 's/[^,]+/& text/g'
,header1 text,header2 text,header3 text
The second one above requires GNU sed (-r) to use EREs instead of BREs. You can do the same in awk using gsub() if you prefer:
$ echo ",header1,header2,header3" | awk '{gsub(/[^,]+/,"& text")}1'
,header1 text,header2 text,header3 text
I found the problem, and it was me... I forgot to echo the contents of the variable into the Awk command. Brianadams' comment was so simple that it forced me to re-examine my code and find the problem! Thanks!
I am OK with resolving this, but if anyone wants to propose a more concise and elegant Awk one-liner, that would be cool.
You can try the following:
#! /bin/bash
header=",header1,header2,header3"
newhead=$(awk 'BEGIN {FS=OFS=","}; {for(i=2;i<=NF;i++) $i=$i" text"}1' <<<"$header")
echo "$newhead"
with output:
,header1 text,header2 text,header3 text
Instead of modifying fields one by one, another option is with a simple substitution:
echo ",header1,header2,header3" | awk '{gsub(/[^,]+/, "& text", $0); print}'
That is, replace each sequence of non-comma characters with itself plus " text" appended.
Another alternative would be replacing the commas, but due to the irregularities of your header line (the first comma must be left alone, and there is no trailing comma), that's a bit less easy:
echo ",header1,header2,header3" | awk '{gsub(/,/, " text,", $0); sub(/^ text,/, "", $0); print $0 " text"}'
Btw, the rough equivalent of the two commands in sed:
echo ",header1,header2,header3" | sed -e 's/[^,]\{1,\}/& text/g'
echo ",header1,header2,header3" | sed -e 's/\(.\),/\1 text,/g' -e 's/$/ text/'
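Tying it back to the asker's goal, here is a sketch of building the dynamic CREATE TABLE statement; the table name mytable and the id column are assumptions for illustration:

```shell
#!/bin/bash
optionalHeaders=",header1,header2,header3"

# Append " text" to every non-empty field and capture the result in a variable.
cols=$(echo "$optionalHeaders" | sed 's/[^,][^,]*/& text/g')

# Assemble the (hypothetical) statement around the transformed column list.
stmt="CREATE TABLE mytable (id integer primary key${cols});"
echo "$stmt"
# → CREATE TABLE mytable (id integer primary key,header1 text,header2 text,header3 text);
```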

How can I grep for text and display the paragraph

The text below is in a file:
Pseudo name=Apple
Code=42B
state=fault
Pseudo name=Prance
Code=43B
state=good
I need to grep for 42B in the above file so that the output displays only:
Pseudo name=Apple
Code=42B
state=fault
perl -00ne "print if /Code=42B/i"
(-00 enables paragraph mode, so each blank-line-separated block is read as one record)
Or use grep's before- and after-context options:
grep -B 1 -A 1 42B file.txt
Unfortunately, this one works only on AIX:
grep -p Code=42B file.txt
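On non-AIX systems, awk's paragraph mode is a portable equivalent: setting RS to the empty string makes each blank-line-separated block one record (assuming the sample is saved as file.txt):

```shell
# Recreate the sample records, separated by a blank line.
printf 'Pseudo name=Apple\nCode=42B\nstate=fault\n\nPseudo name=Prance\nCode=43B\nstate=good\n' > file.txt

# Empty RS = paragraph mode; the default action prints the whole matching record.
awk -v RS= '/Code=42B/' file.txt
# → Pseudo name=Apple
#   Code=42B
#   state=fault
```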

Use grep to report back only line numbers

I have a file that possibly contains bad formatting (in this case, the occurrence of the pattern \\backslash). I would like to use grep to return only the line numbers where this occurs (as in, the match was here, go to line # x and fix it).
However, there doesn't seem to be a way to print the line number (grep -n) and not the match or line itself.
I can use another regex to extract the line numbers, but I want to make sure grep cannot do it by itself. grep -no comes closest, I think, but still displays the match.
Try:
grep -n "text to find" file.ext | cut -f1 -d:
If you're open to using AWK:
awk '/textstring/ {print FNR}' textfile
Here, FNR is the current line number within the file. awk is a great tool whenever you find yourself reaching for grep | cut, or any time you need to post-process grep output.
All of these answers require grep to generate the entire matching lines, then pipe it to another program. If your lines are very long, it might be more efficient to use just sed to output the line numbers:
sed -n '/pattern/=' filename
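For instance, on a hypothetical input (sed's `=` command prints the current line number, and -n suppresses the lines themselves):

```shell
printf 'ok\nbad line\nok\n' > f.txt
# '=' prints the line number of every line matching /bad/; -n hides the lines.
sed -n '/bad/=' f.txt
# → 2
```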
Bash version (note that this keeps only the line number of the first match):
lineno=$(grep -n "pattern" filename)
lineno=${lineno%%:*}
I recommend the answers with sed and awk for just getting the line number, rather than using grep to get the entire matching line and then removing that from the output with cut or another tool. For completeness, you can also use Perl:
perl -nE 'say $. if /pattern/' filename
or Ruby:
ruby -ne 'puts $. if /pattern/' filename
using only grep:
grep -n "text to find" file.ext | grep -Po '^[^:]+'
The line number is the first colon-separated field (the second field is the matching line itself), so to get just the numbers:
grep -n "text to find" file.txt | cut -f1 -d:
To count the number of lines matching the pattern:
grep -n "Pattern" in_file.ext | wc -l
(or, more directly, grep -c "Pattern" in_file.ext)
To extract the matched lines:
sed -n '/pattern/p' file.ext
To display the line numbers on which the pattern was matched:
grep -n "pattern" file.ext | cut -f1 -d: