Trying to grep a phrase out of multiple files as they are constantly populated (logs), but with a hint as to which file was updated with the phrase.
For example:
grep bindaddr /vservers/*/var/log
gets me:
/vservers/11010/var/log:bindaddr=xxx.xxx.xxx.xxx
/vservers/12525/var/log:bindaddr=xxx.xxx.xxx.xxx
/vservers/12593/var/log:bindaddr=xxx.xxx.xxx.xxx
Which is cool, but I need this for tail -f.
tail -fn 100 /vservers/*/var/log | grep bindaddr
gets me the lines I need but no indication of which file they came from, so I need a mix of the two.
If you use -v with tail, you get verbose mode: from man tail --> "always output headers giving file names". This way, whenever something happens in a file, you get a header line naming it just before the new content.
Together with this, you can use grep -B1 to show the match + the previous line.
All together, this should do:
tail -fvn 100 /vservers/*/var/log | grep -B1 bindaddr
Test
Doing this in one tab:
$ echo "hi" >> a2
$ echo "hi" >> a2
$ echo "hi" >> a1
$ echo "hi" >> a2
I got this in the other one:
$ tail -vfn 100 /tmp/a* | grep -B1 "h"
==> /tmp/a1 <==
==> /tmp/a2 <==
hi
hi
==> /tmp/a1 <==
hi
==> /tmp/a2 <==
hi
Something like this will put the filename at the front of each line from tail:
#!/bin/bash
# Arrange to kill all descendants on exit/interrupt
trap "kill 0" SIGINT SIGTERM EXIT
for f in *.txt; do
tail -f "$f" | sed "s/^/"$f": /" > /dev/tty &
done
# grep in stdin (i.e. /dev/tty)
grep bina -
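An alternative sketch that avoids one tail process per file: let tail print its verbose headers and have awk turn them into per-line prefixes. This assumes GNU tail's "==> file <==" header format shown above:
tail -fvn 100 /vservers/*/var/log | awk '/^==> .* <==$/ {file=$2; next} /bindaddr/ {print file ": " $0}'
Here awk remembers the filename from each header line and prepends it to every matching line that follows.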
I think some people are coming to this post looking for a way to display the filename while grepping the tail of multiple files:
for f in path/to/files*.txt; do echo "$f"; tail "$f" | grep 'SEARCH-THIS'; done
This will display output like this:
filename1.txt
search result 1
search result 2
filename2.txt
search result 3
search result 4
...
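If you would rather have the filename prefixed on each matching line instead of printed once above the results, GNU grep can label its stdin; a minimal sketch (assuming GNU grep, since -H on stdin and --label are GNU extensions):
for f in path/to/files*.txt; do tail "$f" | grep -H --label="$f" 'SEARCH-THIS'; done
This prints one line per match, in the form path/to/files1.txt:search result 1.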
Related
I have a problem with this Linux command:
ls | grep -E 'i{2,3}'
It should match a file that has at least 2 and at most 3 i's, but it doesn't work.
This is the output
ls:
life.py, viiva.txt, viiiiiiiiiva.txt
grep:
viiva.txt, viiiiiiiiiva.txt (with the first 3 i's highlighted)
Thanks for the help.
Issue with OP's attempt: grep -E 'i{2,3}' matches two or three consecutive occurrences of i anywhere in the input, so a run of 4 or more consecutive i's also contains a valid match.
Parsing ls output is not recommended, see Why not parse ls (and what to do instead)?. If you wish to pass the filenames after filtering to some other command, find is a good option.
$ ls
1i2i3i.txt aibi.txt II.txt life.py viiiiiiiiiva.txt viiva.txt
$ # files with 2 or 3 consecutive i
$ # note that the regex will act on entire filename, thus anchors are not needed
$ find -type f -regextype egrep -regex '[^i]*i{2,3}[^i]*'
./viiva.txt
$ # files with 2 or 3 i anywhere in the name
$ find -type f -regextype egrep -regex '[^i]*i[^i]*i[^i]*(i[^i]*)?'
./aibi.txt
./1i2i3i.txt
./viiva.txt
$ # files with 2 or 3 i anywhere in the name, ignoring case
$ find -type f -regextype egrep -iregex '[^i]*i[^i]*i[^i]*(i[^i]*)?'
./II.txt
./aibi.txt
./1i2i3i.txt
./viiva.txt
If the filenames won't cause an issue, you can use grep -xE or grep -ixE with the above regex; the x option makes the regex match the whole line instead of anywhere in the line (a grep sketch is shown after the awk examples below). Or you can also use awk:
$ # NF will give number of fields after splitting on i
$ ls | awk -F'i' 'NF>=3 && NF<=4'
1i2i3i.txt
aibi.txt
viiva.txt
$ ls | awk -F'[iI]' 'NF>=3 && NF<=4'
1i2i3i.txt
aibi.txt
II.txt
viiva.txt
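For completeness, here is the grep -x variant mentioned above; a sketch that still parses ls output, so it assumes well-behaved filenames:
$ # files with 2 or 3 consecutive i
$ ls | grep -xE '[^i]*i{2,3}[^i]*'
viiva.txt
$ # files with 2 or 3 i anywhere in the name
$ ls | grep -xE '[^i]*i[^i]*i[^i]*(i[^i]*)?'
1i2i3i.txt
aibi.txt
viiva.txt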
I am trying to find which words occur in logfiles, plus show the logfile name, for anything that matches the following pattern:
'BA10\|BA20\|BA21\|BA30\|BA31\|BA00'
so if file dummylogfile.log contains BA10002 I would like to get a result such as:
dummylogfile.log:BA10002
it is totally fine if the logfile shows up twice for duplicate matches.
the closest I got is:
for f in $(find . -name '*.err' -exec grep -l 'BA10\|BA20\|BA21\|BA30\|BA31\|BA00' {} \+);do printf $f;printf ':';grep -o 'BA10\|BA20\|BA21\|BA30\|BA31\|BA00' $f;done
but this gives things like:
./register-05-14-11-53-59_24154.err:BA10
BA10
./register_mdw_files_2020-05-14-11-54-32_24429.err:BA10
BA10
./process_tables.2020-05-18-11-18-09_11428.err:BA30
./status_load_2020-05-18-11-35-31_9185.err:BA30
so,
1) there are empty lines with only the second match and
2) the full match (e.g., BA10004) is not shown.
thanks for the help
There are a couple of options you can pass to grep:
-H: This will report the filename and the match
-o: only show the match, not the full line
-w: The match must represent a full word (a string built from [A-Za-z0-9_])
If we look at your regex, a pattern such as BA01 will match only the literal string BA01, which can appear anywhere in the text, even mid-word. If you want the regex to match a full word, it should read BA01[[:alnum:]_]*, which adds any sequence of word-constituent characters (equivalent to [A-Za-z0-9_]). You can test this with:
$ echo "foo BA01234 barBA012" | grep -Ho "BA01"
(standard input):BA01
(standard input):BA01
$ echo "foo BA01234 barBA012" | grep -How "BA01"
$ echo "foo BA01234 barBA012" | grep -How "BA01[[:alnum:]_]*"
(standard input):BA01234
So your grep should look like:
grep -How "\(BA10\|BA20\|BA21\|BA30\|BA31\|BA00\)[[:alnum:]_]*" *.err
From your example it seems that all files are in one directory. So the following works right away:
grep -l 'BA10\|BA20\|BA21\|BA30\|BA31\|BA00' *.err
If the files are in different directories:
find . -name '*.err' -print | xargs -I {} grep 'BA10\|BA20\|BA21\|BA30\|BA31\|BA00' {} /dev/null
Explanation: adding /dev/null after the filename {} means grep always sees more than one file, which forces it to report the matching filename.
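With GNU grep, -H achieves the same thing without the /dev/null trick, and adding -o narrows the output to the match itself, which is the dummylogfile.log:BA10002 style asked for. A sketch, assuming GNU find and grep (since -H, -o and \| alternation are GNU extensions):
find . -name '*.err' -exec grep -How '\(BA10\|BA20\|BA21\|BA30\|BA31\|BA00\)[[:alnum:]_]*' {} +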
I am new to Linux and I am experimenting with basic terminal commands. I found out that I can list all users using compgen -u, but what if I only want to display the bottom lines of the output?
OK, let's say the output of compgen -u goes like this:
extra
extra
extra
extra
extra
extra
extra
extra
extra
John
William
Kate
Harold
I only know how to use grep to find a single string (e.g. compgen -u | grep John). But what if I want to use grep to display John as well as all the remaining entries after it?
A sed or awk solution would be easier, but if you can only use grep, then the option --after-context (or -A) might do:
grep -A 5 John file
The drawback is that you need to know the number of lines to display after the match (or use an arbitrarily large number to cover the rest of the file).
compgen -u | grep -A$(compgen -u| wc -l) John
Explanation:
From man grep
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines. Places a line containing a group separator (described under --group-separator) between
contiguous groups of matches.
grep -A NUM -- print NUM rows after the matching pattern
$() -- command substitution: run a command and use its output
compgen -u | wc -l -- get the total number of rows the command outputs
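As mentioned at the top, sed makes the "everything from the first match onwards" part simpler; a minimal sketch:
compgen -u | sed -n '/John/,$p'
The address range /John/,$ selects every line from the first one matching John through the end of the input, and -n together with p prints only that range.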
You can use the following one-liner:
n=$( compgen -u | grep -n John | head -1 | cut -d ":" -f 1 ) && compgen -u | tail -n +$n
This finds the line number of the first occurrence of John and prints everything starting from that line.
I am running a Teamspeak 3 server on a Ubuntu server and I would like to fetch the clients currently connected using a script.
The script currently outputs this from the Teamspeak Server Query:
clid=1 cid=11 client_database_id=161 client_nickname=Music client_type=1|clid=3 cid=11 client_database_id=153 client_nickname=Music\sBot client_type=0|clid=5 cid=1 client_database_id=68 client_nickname=Unknown\sfrom\s127.0.0.1:52537 client_type=1|clid=12 cid=11 client_database_id=3 client_nickname=FriendlyMan client_type=0|clid=16 cid=11 client_database_id=161 client_nickname=Windows\s10\sUser client_type=0|clid=20 cid=11 client_database_id=225 client_nickname=3C2J0N47H4N client_type=0
How can I extract the nicknames from this mess?
More specifically, only the ones whose entry contains "client_type=0".
I played around with grep (grep -E -o 'client_nickname=\w+'), which gets close to what I want:
client_nickname=Music
client_nickname=Music
client_nickname=Unknown
client_nickname=FriendlyMan
client_nickname=Windows
client_nickname=3C2J0N47H4N
Desired Output:
Music Bot,FriendlyMan,Windows 10 User,3C2J0N47H4N
Our input consists of a single line:
$ cat file
clid=1 cid=11 client_database_id=161 client_nickname=Music client_type=1|clid=3 cid=11 client_database_id=153 client_nickname=Music\sBot client_type=0|clid=5 cid=1 client_database_id=68 client_nickname=Unknown\sfrom\s127.0.0.1:52537 client_type=1|clid=12 cid=11 client_database_id=3 client_nickname=FriendlyMan client_type=0|clid=16 cid=11 client_database_id=161 client_nickname=Windows\s10\sUser client_type=0|clid=20 cid=11 client_database_id=225 client_nickname=3C2J0N47H4N client_type=0
Using grep + sed
Here is one approach that starts with grep and then uses sed to clean up into the final format:
$ grep -oP '(?<=client_nickname=)[^=]+(?=client_type=0)' file | sed -nE 's/\\s/ /g; H;1h; ${x; s/ *\n/,/g;p}'
Music Bot,FriendlyMan,Windows 10 User,3C2J0N47H4N
Using awk
Here is another approach that just uses awk:
$ awk -F'[= ]' '/client_type=0/{gsub(/\\s/, " ", $8); printf (f?",":"")$8; f=1} END{print ""}' RS='|' file
Music Bot,FriendlyMan,Windows 10 User,3C2J0N47H4N
The awk code uses | as the record separator and awk reads in one record at a time. Each record is divided into fields with the field separator being either a space or an equal sign. If the record contains the text client_type=0, then we replace all occurrences of \s in field 8 with space and then print the resulting field 8.
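If the one-liner is hard to read, the same logic can be spelled out as a commented awk script; a sketch equivalent to the command above:
awk '
  BEGIN { RS = "|"; FS = "[= ]" }                 # one record per client; split fields on "=" or space
  /client_type=0/ {
      gsub(/\\s/, " ", $8)                        # turn the \s escapes back into real spaces
      names = (names == "") ? $8 : names "," $8   # collect the nicknames, comma separated
  }
  END { print names }
' file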
Using bash
#!/bin/bash
sep=
( cat file; echo "|"; ) | while read -r -d\| clid cid db name type misc
do
[ "$type" = "client_type=0" ] || continue
name=${name//\\s/ }
printf "%s%s" "$sep" "${name#client_nickname=}"
sep=,
done
echo ""
This produces the output:
Music Bot,FriendlyMan,Windows 10 User,3C2J0N47H4N
I would like to grep a specific word 'foo' inside specific files, then get N lines around my match and show only the blocks that also contain a second pattern.
I found this but it doesn't really work...
find . | grep -E '.*?\.(c|asm|mac|inc)$' | \
xargs grep --color -C3 -rie 'foo' | \
xargs -n1 --delimiter='--' | grep --color -l 'bar'
For instance I have the file 'a':
a
b
c
d
bar
f
foo
g
h
i
j
bar
l
The file b:
a
bar
c
d
e
foo
g
h
i
j
k
I expect this for grep -C2 on both files, because bar is contained in the -C2 range of foo in the first file. I do not expect any match from the second file, because bar is not in the -C2 range of foo there...
--
./foo- bar
./foo- f
./foo- **foo**
./foo- g
./foo- h
--
Any ideas?
You could do this pretty simply with a "while read line" loop:
find -regextype posix-extended -regex "./file[a-z]" | while read -r line; do grep -nHC2 "foo" "$line" | grep --color bar; done
Output:
./filea-5-bar
./filec-46-... host pwns.me [94.23.120.252]: 451 4.7.1 Local bar configuration error ...
In this example, I created the following files:
filea - your example a
fileb - your example b
filec - some random exim log output with foo and bar tossed in 2 lines apart
filed - the same exim log output, but with foo and bar tossed in 3 lines apart
You could also pipe the output after done to alter the format:
; done | sed -E 's/-([0-9]{1,6})-/: line: \1 ::: /'
Formatted output
./filea: line: 5 ::: bar
./filec: line: 46 ::: ... host pwns.me [94.23.120.252]: 451 4.7.1 Local bar configuration error ...
I think I only understand the first line of your question and this does what I think you mean!
#!/bin/bash
N=2
pattern1=a
pattern2=z
matchinglines=$(awk -v p="$pattern1" '$0~p{print NR}' file) # Get the list of matching line numbers
for x in $matchinglines
do
((start=x-N))
[[ $start -lt 1 ]] && start=1 # Avoid passing negative line numbers to sed
((end=x+N))
echo DEBUG: Checking block between lines $start and $end
sed -ne "${start},${end}p" file | grep -q "$pattern2"
[[ $? -eq 0 ]] && sed -ne "${start},${end}p" file
done
You need to set pattern1 and pattern2 at the start of the script. It basically uses awk to build a list of the line numbers that match your first pattern. Then it loops through the list, setting the start and end of a range N lines either side of each matching line number. It then uses sed to extract that block and passes it through grep to see if it contains pattern2, printing the block if it does. It may not be the most efficient, but it is easy enough to understand and maintain.
It assumes your file is called file
pipe it twice
grep -C${ntimes} "foo" file | grep "bar"