grep question using backslash - grep

I have the following file:
asdasd
asd
asd
incompatible:svcnotallowed:svc\:/network/bmb/clerver\:default
incompatible:svcnotallowed:svc\:/network1/bmb/clerver\:default
incompatible:svcnotallowed:svc\:/network2/bmb/clerver\:default
asdasd
asd
asd
as
And now suppose I have the two variables v1="incompatible:svcnotallowed:" and v2="svc\:/network1/bmb/clerver\:default".
I would like to search the entire file using v1 and v2. I know this problem is caused by the file having a '\' in it; I just don't know how to deal with it. I have tried storing v1 and v2 (both the variable contents and the grep usage) using single quotes, but in vain.
These are the commands I have tried:
grep "$v1$v2" file
grep '$v1$v2' file
I need this to work in ksh.
Please let me know the right way to use grep in this scenario.
Thanks.

grep -F "$v1$v2" file should do the trick -- with the -F option, it treats the pattern as a fixed string, so backslashes don't get interpreted as escapes or backreferences.
But fgrep "$v1$v2" file would probably be the most portable solution. As tomkaith13 notes in his comment, the -F option to grep isn't universally supported. On Solaris, the default grep doesn't support -F, but the version in /usr/xpg4/bin does.
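For example, with the variables from the question and the sample file above, the fixed-string search would behave like this (the output line assumes the file contents shown in the question):
v1="incompatible:svcnotallowed:"
v2="svc\:/network1/bmb/clerver\:default"
grep -F "$v1$v2" file
incompatible:svcnotallowed:svc\:/network1/bmb/clerver\:default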

Since you are using ksh, you can just use it to read the file:
v1="incompatible:svcnotallowed:"
v2="svc\:/network1/bmb/clerver\:default"
while read -r line
do
    case "$line" in
        # A quoted case pattern is matched literally, so the backslashes are safe.
        # Use *"$v1$v2"* instead if the string may appear anywhere in the line.
        "$v1$v2" ) echo "$line";;
    esac
done < file

Related

I can't get grep to find "configname:.*:" from a file

I am trying to get strings that are separated by colons from a file with grep. I have managed fine so far but I ran into a problem where grep is just ignoring one of the characters I am trying to include in the search.
The file I am searching contains this line
configname:user:ip:password:macaddr
and the command I am running is grep -o "configname:.*:" sshutil_config
I thought this would find "configname:user:", but all it does is remove "macaddr" from the output, giving configname:user:ip:password:
The username will change, so I never know what it is and can't grep for it specifically, but I will know the configname, so I am trying to use that to find the username. I need to get the username out of the file and save it to a variable using usr=$(grep -o "whatever_this_needs_to_be" sshutil_config)
Thanks in advance,
Change "configname:.*:" to "configname:[^:]*:" -- .* is greedy and runs across the colons, while [^:]* stops at the first colon.
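With the sample line from the question, that gives:
grep -o "configname:[^:]*:" sshutil_config
configname:user: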
This is also possible without grep, assuming $data holds the line:
user="$( cut -d ':' -f 2 <<< "$data" )"
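To end up with just the username in a variable, you could combine the two (a sketch; it assumes the relevant line starts with configname: as in the question):
usr="$( grep -o '^configname:[^:]*' sshutil_config | cut -d ':' -f 2 )"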

How to grep a matching filename AND extension from pattern file to a text file?

Content of testfile.txt
/path1/abc.txt
/path2/abc.txt.1
/path3/abc.txt123
Content of pattern.txt
abc.txt$
Bash Command
grep -i -f pattern.txt testfile.txt
Output:
/path1/abc.txt
This is a working solution, but currently the $ in the pattern is manually added to each line and this edited pattern file is uploaded to users. I am trying to avoid the manual amendment.
An alternative solution is to loop and read line by line, but that requires scripting skills or uploading scripts to the user environment.
I want to keep the original pattern files in an audited environment; users just log in and run simple cut-and-paste commands.
Any one-liner solution?
You can use sed to add $ to pattern.txt and then use grep, but you might run into issues due to regexp metacharacters like the . character. For example, abc.txt$ will also match abc1txt. And unless you take care of matching only the basename from the file path, abc.txt$ will also match /some/path/foobazabc.txt.
I'd suggest to use awk instead:
$ awk '!f{a[$0]; next} $NF in a' pattern.txt f=1 FS='/' testfile.txt
/path1/abc.txt
pattern.txt f=1 FS='/' testfile.txt: here a flag f is set between the two files, and the field separator is also changed to / for the second file
!f{a[$0]; next} if flag f is not set (i.e. for the first file), build an array a with line contents as the key
$NF in a for the second file, if the last field matches a key in array a, print the line
Just noticed that you are also using the -i option, so use this for case-insensitive matching:
awk '!f{a[tolower($0)]; next} tolower($NF) in a' pattern.txt f=1 FS='/' testfile.txt
Since pattern.txt contains only a single pattern, and you don't want to change it because it is an audited file, you could do
grep -i "$(<pattern.txt)"'$' testfile.txt
instead. Note that this would break if the maintainer of the file one day decided to actually write a terminating $ there.
IMO, it would make more sense to explain to the maintainer of pattern.txt that they are supposed to place a simple regular expression there, one that is going to match your test file. In that case they can decide whether the pattern really should match only the right edge or some inner part of the lines.
If pattern.txt contains more than one line, and you want to add the $ to each line, you can likewise do
grep -i -f <(sed 's/$/$/' <pattern.txt) testfile.txt
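For illustration, if pattern.txt held the unedited pattern (just abc.txt, without the trailing $), the sed step would produce:
sed 's/$/$/' pattern.txt
abc.txt$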
As the '$' symbol indicates the end of the pattern, the following script should work.
#!/bin/bash
file_pattern='pattern.txt' # path to pattern file
file_test='testfile.txt'   # path to test file
while IFS= read -r line
do
    echo "$line"
    grep -wn "$line" "$file_test"
done < "$file_pattern"
The IFS= keeps leading/trailing spaces in each pattern line intact; drop it if you want them stripped.
Also, the grep option -w matches only whole words and -n prefixes each match with its line number.
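If you also want each pattern anchored to the end of the line, as in the original one-liner, a small variation of the loop body (a sketch, appending the $ when calling grep) would be:
while IFS= read -r line
do
    grep -in "${line}"'$' "$file_test"
done < "$file_pattern"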

How can I make grep show a line ignoring the words I want?

I am trying to use grep with the pwd command.
So, if I enter pwd, it shows me something like:
/home/hrq/my-project/
But for the purposes of a script I am making, I need it to print only what comes after hrq/, so I always need to hide the home folder part (/home/hrq/) and show only what follows (in this case, only my-project).
Is it possible?
I tried something like
pwd | grep -ov 'home', since I saw that the -v flag acts like a NOT operator, and combined it with the -o only-matching flag. But it didn't work.
Given:
$ pwd
/home/foo/tmp
$ echo "$PWD"
/home/foo/tmp
Depending on what it is you really want to do, either of these is probably what you really should be using rather than trying to use grep:
$ basename "$PWD"
tmp
$ echo "${PWD#/home/foo/}"
tmp
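Applied to the path from the question, the same parameter-expansion idea, keyed on the hrq/ component, would look like this (p is just a stand-in variable for the demonstration):
p=/home/hrq/my-project/
echo "${p#*hrq/}"
my-project/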
Use grep -Po 'hrq/\K.*', for example:
grep -Po 'hrq/\K.*' <<< '/home/hrq/my-project/'
my-project/
Here, grep uses the following options:
-P : Use Perl regexes.
-o : Print the matches only (1 match per line), not the entire lines.
\K : Cause the regex engine to "keep" everything it had matched prior to the \K and not include it in the match. Specifically, ignore the preceding part of the regex when printing the match.
SEE ALSO:
grep manual
perlre - Perl regular expressions

(bash) grep -i not making search case insensitive for input files

I am trying to search inside a folder containing several files. The names of the files are written in upper case, with a .sub extension in lower case:
AAA.sub
BBB.sub
CCC.sub
DDD.sub
I am searching for a pattern through those files using grep; however, I would like to use only lower-case letters for the input file names.
In the man page for grep it is written:
-i, --ignore-case
Ignore case distinctions in both the PATTERN and the input files. (-i is specified by POSIX.)
So, if I understood properly:
grep -i subckt /schematics/aaa
and
grep -i subckt /schematics/AAA
are both supposed to be able to search for the pattern "subckt" in the file "aaa" regardless of its case (AAA or aaa), and if two files named aaa and AAA are present in the folder at the same time, I expect grep to search through both of them.
However, when I try my search with the 1st command (lower case) it does not work, giving me a "no such file or directory" message.
When I try to search with the 2nd command (upper case) it works properly.
I obviously misunderstood something about how the -i option of grep works; can anyone give me an answer regarding this matter?
Is it possible to be case insensitive with the input files when using grep?
EDIT:
My question was lacking details; even though I have found the answer to my problem, I will add the details in case someone else stumbles upon this:
I have one file that contains a list of each file name i want to grep. My list looks like this:
aaa capacitor C_0
bbb capacitor C_0
ccc resistor R_in
...
The grep is done inside a Perl script; the script parses the list file and gets each individual file name (aaa, bbb, ccc) inside a while loop.
However, the names inside the list file are written in lower case, whereas the names of the files I want to grep are written in upper case.
This is why I wanted the input file search to be case insensitive, so that I could directly do grep -i subckt aaa and it would search inside the file 'AAA'.
However, since the grep is launched from a Perl script, and since it is apparently not possible to make grep behave like that, I used Perl's uc() function to convert aaa to AAA and do my grep with that (see my answer below).
-i affects how the contents are searched, not the name of the files.
When the man page says "Ignore case distinctions in both the PATTERN and the input files.", that really means that case is ignored in the pattern (searching for AAA and aaa is equivalent) and in the contents of the input files (a line matches if it includes "AAA" or "aaa" or even "AaA").
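A quick way to see what -i actually affects (file contents, not file names) is something like this, using a throwaway file:
printf 'AAA\naaa\nAaA\n' > /tmp/demo
grep -ic aaa /tmp/demo
3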
I think you want to either list all the filenames on the command line, or find a glob (i.e. wildcard) that matches all the filenames:
grep -i subckt *.sub
In Unix/Linux shells (bash, zsh, and so on), "*" is processed by the shell, not by the command (grep). The command receives the list of files and actually can't tell the difference between a user typing "grep foo *" and "grep foo file1 file2 file3" (if the directory contains those 3 files).
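A quick way to see that expansion is to put echo in front, so the shell prints the command line it would hand to grep (assuming the directory holds the .sub files from the question):
echo grep -i subckt *.sub
grep -i subckt AAA.sub BBB.sub CCC.sub DDD.sub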
Please try the following command
find . -iname aaa.sub | xargs grep -n subckt
find with the -iname option will list files ignoring their case. In the above case, find . -iname aaa.sub will list both aaa.sub and AAA.sub. The resulting file list is then passed to grep via xargs.
I have found a way to circumvent my problem by using the uc (upper case) function of Perl to convert the input file names for the grep command to upper case.
The grep command was launched from a perl script in the first place:
grep -i subckt /schematics/aaa
So, I just did that in my Perl script:
my $tmp = "aaa";
$tmp = uc($tmp);
system("grep -i subckt /schematics/$tmp");  # or however the grep is actually invoked
Now, the "aaa" name is just an example. In the Perl script it is recovered from another parsed file that is written in lower case.
Thanks for the answers though.
grep uses the filenames as they are listed on the command line. The -i option affects the contents of the files, not the names of the files.
You can use find to select filenames to be searched. The -iname option lets you match files ignoring case.
grep subckt $(find /schematics -iname aaa.sub -print)
If you have many filenames, or those filenames include spaces or other characters that would confuse the shell, the safe and secure way to do this is using the -print0 and -0 options:
find /schematics -iname aaa.sub -print0 | xargs -r -0 grep -i subckt

Can grep show only words that match search pattern?

Is there a way to make grep output "words" from files that match the search expression?
If I want to find all the instances of, say, "th" in a number of files, I can do:
grep "th" *
but the output will be something like this:
some-text-file : the cat sat on the mat
some-other-text-file : the quick brown fox
yet-another-text-file : i hope this explains it thoroughly
What I want it to output, using the same search, is:
the
the
the
this
thoroughly
Is this possible using grep? Or using another combination of tools?
Try grep -o:
grep -oh "\w*th\w*" *
Edit: matching from Phil's comment.
From the docs:
-h, --no-filename
Suppress the prefixing of file names on output. This is the default
when there is only one file (or only standard input) to search.
-o, --only-matching
Print only the matched (non-empty) parts of a matching line,
with each such part on a separate output line.
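With the example files from the question, that command gives (assuming a grep that understands \w, such as GNU grep):
grep -oh "\w*th\w*" *
the
the
the
this
thoroughly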
Cross-distribution safe answer (including Windows MinGW?)
grep -h "[[:alpha:]]*th[[:alpha:]]*" 'filename' | tr ' ' '\n' | grep -h "[[:alpha:]]*th[[:alpha:]]*"
If you're using an older version of grep (like 2.4.2) that does not include the -o option, then use the above. Otherwise use the simpler-to-maintain version below.
Linux cross distribution safe answer
grep -oh "[[:alpha:]]*th[[:alpha:]]*" 'filename'
To summarize: -oh outputs the regular expression matches from the file content (and not the filename), just as you would expect a regular expression to work in vim, etc. What word or regular expression you search for is then up to you, as long as you stick to POSIX rather than Perl syntax (see below).
More from the manual for grep
-o Print each match, but only the match, not the entire line.
-h Never print filename headers (i.e. filenames) with output lines.
-w The expression is searched for as a word (as if surrounded by
`[[:<:]]' and `[[:>:]]';
The reason why the original answer does not work for everyone
The usage of \w varies from platform to platform, as it's an extended "Perl" syntax. As such, grep installations that are limited to POSIX character classes use [[:alpha:]] and not its Perl equivalent \w. See the Wikipedia page on regular expressions for more.
Ultimately, the POSIX answer above will be a lot more reliable for grep regardless of platform.
As for supporting grep without the -o option: the first grep outputs the relevant lines, the tr splits the spaces into newlines, and the final grep keeps only the lines (now single words) that match the pattern.
(PS: I know most platforms by now would have been patched for \w.... but there are always those that lag behind)
Credit for the "-o" workaround goes to @AdamRosenfield's answer.
It's simpler than you think. Try this:
egrep -wo 'th.[a-z]*' filename.txt #### (Case Sensitive)
egrep -iwo 'th.[a-z]*' filename.txt ### (Case Insensitive)
Where:
egrep: grep with support for extended regular expressions.
-w: match only whole words instead of substrings.
-o: display only the matched pattern instead of the whole line.
-i: ignore case.
You could translate spaces to newlines and then grep, e.g.:
cat * | tr ' ' '\n' | grep th
Just awk, no combination of tools needed.
# awk '{for(i=1;i<=NF;i++){if($i~/^th/){print $i}}}' file
the
the
the
this
thoroughly
A grep command with only-matching output and a Perl regex:
grep -o -P 'th.*? ' filename
I was unsatisfied with awk's hard-to-remember syntax, but I liked the idea of using one utility to do this.
It seems like ack (or ack-grep if you use Ubuntu) can do this easily:
# ack-grep -ho "\bth.*?\b" *
the
the
the
this
thoroughly
If you omit the -h flag you get:
# ack-grep -o "\bth.*?\b" *
some-other-text-file
1:the
some-text-file
1:the
the
yet-another-text-file
1:this
thoroughly
As a bonus, you can use the --output flag to do this for more complex searches with just about the easiest syntax I've found:
# echo "bug: 1, id: 5, time: 12/27/2010" > test-file
# ack-grep -ho "bug: (\d*), id: (\d*), time: (.*)" --output '$1, $2, $3' test-file
1, 5, 12/27/2010
cat *-text-file | grep -Eio "th[a-z]+"
You can also try pcregrep. There is also a -w option in grep, but in some cases it doesn't work as expected.
From Wikipedia:
cat fruitlist.txt
apple
apples
pineapple
apple-
apple-fruit
fruit-apple
grep -w apple fruitlist.txt
apple
apple-
apple-fruit
fruit-apple
I had a similar problem, looking for a grep/pattern regex that prints the matched pattern as output.
In the end I used egrep with the -o option (the same regex with grep -e or -G didn't give me the same result as egrep).
So I think it could be something like this (I'm NOT a regex master):
egrep -o "the*|this{1}|thoroughly{1}" filename
To search for all the words that start with "icon-", the following command works perfectly. I am using ack here, which is similar to grep but with better options and nice formatting.
ack -oh --type=html "\w*icon-\w*" | sort | uniq
You could pipe your grep output into Perl like this:
grep "th" * | perl -n -e'while(/(\w*th\w*)/g) {print "$1\n"}'
grep --color -o -E "Begin.{0,}?End" file.txt
The ? makes the quantifier non-greedy, matching as few characters as possible before End.
Tested in the macOS Terminal.
$ grep -w
Excerpt from grep man page:
-w: Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character.
ripgrep
Here are the example using ripgrep:
rg -o "(\w+)?th(\w+)?"
It'll match all words containing th.
