Multiple counts in grep?

So I have a big log file where each line contains a date. I would like to count the number of lines containing each date.
I came up with an awful solution, consisting of manually typing each of the following commands:
grep -c "2014-01-01" big.log
grep -c "2014-01-02" big.log
grep -c "2014-01-03" big.log
I could also have written a small Python script, but that seems overkill. Is there a quicker / more elegant solution?

You can use grep -o with a regex and then uniq -c to count the results.
See an example:
$ cat a
2014-01-03 aaa
2014-01-03 aaa
2014-01-02 aaa
2014-01-01 aaa
2014-01-04 aaa
hello
2014-01-01 aaa
Now let's look for all the 2014-01-0X dates, where X is a digit, and count them:
$ grep -o "2014-01-0[0-9]" a | sort | uniq -c
2 2014-01-01
1 2014-01-02
2 2014-01-03
1 2014-01-04
Note that piping to sort is needed for uniq -c to work properly, since uniq only collapses adjacent duplicate lines. You can see more info about it in my answer to "What is the meaning of delimiter in cut and why in this command it is sorting twice?".
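To see the difference, here is the same pipeline without the sort, run against the sample file a above; the repeated 2014-01-01 lines are not adjacent, so they get counted separately:
$ grep -o "2014-01-0[0-9]" a | uniq -c
2 2014-01-03
1 2014-01-02
1 2014-01-01
1 2014-01-04
1 2014-01-01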

Borrowing fedorqui's sample date file - thanks @fedorqui :-)
awk '/2014/{x[$1]++} END{for (k in x) print x[k],k}' file
2 2014-01-01
1 2014-01-02
2 2014-01-03
1 2014-01-04
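Note that the order of for (k in x) is unspecified in awk, so the dates are not guaranteed to come out sorted. If you need them in date order, one option is to pipe the output through sort, keyed on the date column (a minor variation on the same command):
awk '/2014/{x[$1]++} END{for (k in x) print x[k],k}' file | sort -k2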

Try this:
grep '2014-01-01' big.log | wc -l
grep '2014-01-02' big.log | wc -l
grep '2014-01-03' big.log | wc -l
Hope this solves your problem.
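This still means one command per date; if you want to avoid typing each one by hand, a shell loop is a possible middle ground (a sketch assuming bash brace expansion; adjust the date range to match your log):
for d in 2014-01-0{1..3}; do
    printf '%s %s\n' "$d" "$(grep -c "$d" big.log)"
done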

Related

Extract specific number from command output

I have the following issue.
In a script, I have to execute the hdparm command on the /dev/xvda1 path.
From the command output, I have to extract the calculated MB/sec values.
So, for example, executing the command I get this output:
/dev/xvda1:
Timing cached reads: 15900 MB in 1.99 seconds = 7986.93 MB/sec
Timing buffered disk reads: 478 MB in 3.00 seconds = 159.09 MB/sec
I have to extract 7986.93 and 159.09.
I tried:
grep -o -E '[0-9]+', but it returns all six numbers in the output
grep -o -E '[0-9]', but it returns only single digits, not the full values
grep -o -E '[0-9]+$', but the output is empty, I suppose because the numbers are not at the end of the output.
How can I achieve my purpose?
To get the last number, you can add a .* in front that will match as much as possible, eating away all the other numbers. However, to exclude that part from the output, you need GNU grep or pcregrep or sed.
grep -Po '.* \K[0-9.]+'
Or
sed -En 's/.* ([0-9.]+).*/\1/p'
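If you need the two numbers in shell variables rather than on stdout, one way is to join them onto a single line and read them (a sketch assuming bash, and assuming the output comes from something like hdparm -tT /dev/xvda1 run with sufficient privileges):
read -r cached buffered < <(hdparm -tT /dev/xvda1 | sed -En 's/.* ([0-9.]+).*/\1/p' | paste -sd' ' -)
echo "$cached"    # 7986.93 in the sample output
echo "$buffered"  # 159.09 in the sample output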
Consider using awk to just print the fields you want rather than matching on numbers. This will work using any awk in any shell on every Unix box:
$ hdparm whatever | awk 'NF>1{print $(NF-1)}'
7986.93
159.09
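You can sanity-check that field-based approach without running hdparm at all by feeding it the sample output from the question (the printf lines below just reproduce that output):
printf '%s\n' '/dev/xvda1:' \
  ' Timing cached reads: 15900 MB in 1.99 seconds = 7986.93 MB/sec' \
  ' Timing buffered disk reads: 478 MB in 3.00 seconds = 159.09 MB/sec' |
  awk 'NF>1{print $(NF-1)}'
which prints the same two numbers as above.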

Print the rest of the input along with the matching line

I am new to Linux and I am experimenting with basic terminal commands. I found out that I can list all users using compgen -u, but what if I only want to display the bottom lines of the output?
OK, let's say the output of compgen -u goes like this:
extra
extra
extra
extra
extra
extra
extra
extra
extra
John
William
Kate
Harold
I only know how to use grep to find a single string (e.g. compgen -u | grep John). But what if I want to use grep to display John as well as all the remaining entries after it?
A sed or awk solution would be easier, but if you can only use grep, then the option --after-context (or -A) might do:
grep -A 5 John file
The drawback is that you need to know the number of lines to display after the match (or use an arbitrarily big number to get the rest of the file).
compgen -u | grep -A$(compgen -u| wc -l) John
Explanation:
From man grep
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines. Places a line containing a group separator (described under --group-separator) between contiguous groups of matches.
grep -A -- print that many lines after each matching line
$() -- command substitution: run a command and substitute its output
compgen -u | wc -l -- get the total number of lines the command outputs
You can use the following one-liner:
n=$( compgen -u | grep -n John | head -1 | cut -d ":" -f 1 ) && compgen -u | tail -n +$n
This finds the line number of the first occurrence of John and prints everything starting from that line.
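Since the first answer notes that a sed or awk solution would be easier, here is what that could look like (both print from the first line containing John through the end of the input):
compgen -u | sed -n '/John/,$p'
compgen -u | awk '/John/{found=1} found'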

Search file for usernames, and sort number of instances for each user in file?

I am tasked with taking a file that has line entries that include the string username=xxxx:
$ cat file.txt
Yadayada username=jdoe blablabla
Yadayada username=jdoe blablabla
Yadayada username=jdoe blablabla
Yadayada username=dsmith blablabla
Yadayada username=dsmith blablabla
Yadayada username=sjones blablabla
And finding how many times each user in the file shows up, which I can do manually by feeding username=jdoe for example:
$ grep -r "username=jdoe" file.txt | wc -l | tr -d ' '
3
What's the best way to report each user in the file, and the number of lines for each user, sorted from highest to lowest instances:
3 jdoe
2 dsmith
1 sjones
Been thinking of how to approach this, but drawing blanks, figured I'd check with our gurus on this forum. :)
TIA,
Don
In GNU awk:
$ awk '
BEGIN { RS="[ \n]" }
/=/ {
    split($0,a,"=")
    u[a[2]]++
}
END {
    PROCINFO["sorted_in"]="#val_num_desc"
    for(i in u)
        print u[i],i
}' file
3 jdoe
2 dsmith
1 sjones
Using grep:
$ grep -o 'username=[^ ]*' file | cut -d "=" -f 2 | sort | uniq -c | sort -nr
Awk alone:
awk '
{sub(/.*username=/,""); sub(/ .*/,"")}
{a[$0]++}
END {for(i in a) printf "%d\t%s\n",a[i],i | "sort -nr"}
' file.txt
This uses awk's sub() function to achieve what grep -o does in other answers. It embeds the call to sort within the awk script. You could of course use that pipe after the awk script rather than within it if you prefer.
Oh, and unlike the other awk solutions presented here, this one (1) is portable to non-GNU-awk environments (like BSD, macOS) and (2) doesn't depend on the username being in a predictable location on each line (i.e. $2).
Why might awk be a better choice than simpler tools like uniq? It probably wouldn't be, for a super simple requirement like this. But it's good to have in your toolbox if you want something capable of a little more text processing.
Using sed, uniq, and sort:
sed 's/.*username=\([^ ]*\).*/\1/' file.txt | sort | uniq -c | sort -nr
If there are lines without usernames:
sed -n 's/.*username=\([^ ]*\).*/\1/p' input | sort | uniq -c | sort -nr
$ awk -F'[= ]' '{print $3}' file | sort | uniq -c | sort -nr
3 jdoe
2 dsmith
1 sjones
The following awk may help you with this too.
awk -F"[ =]" '{a[$3]++} END{for(i in a){print a[i],i | "sort -nr"}}' Input_file

GREP to columns along with comma separation

I'm grepping a bunch of files in a directory as below:
grep -EIho 'abc|def' *|sort|uniq -c >>counts.csv
My output is
150 abc
130 def
What I need is the current date (minus 1 day) and the result of the grep, inserted into counts.csv like below:
5/21/2018 150,130
grep .. | sort | uniq -c | awk -v d="$(date -d '1 day ago' +%D)" 'NR==1{printf "%s",d}{printf "%s",","$1;}END{print ""}'
will do it.
With your example data, it gives:
05/21/18,150,130
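If you want the exact 5/21/2018 format from the question (no leading zeros, four-digit year), GNU date can produce it with a different format string; a possible tweak to the d=$(...) part, assuming GNU date:
date -d '1 day ago' +%-m/%-d/%Y
5/21/2018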

How do I 'grep -c' and avoid printing files with zero '0' count

The command 'grep -c blah *' lists all the files, like below.
% grep -c jill *
file1:1
file2:0
file3:0
file4:0
file5:0
file6:1
%
What I want is:
% grep -c jill * | grep -v ':0'
file1:1
file6:1
%
Instead of piping and grep'ing the output like above, is there a flag to suppress listing files with 0 counts?
SJ
How to grep nonzero counts:
grep -rIcH 'string' . | grep -v ':0$'
-r Recurse subdirectories.
-I Ignore binary files (thanks @tongpu, warlock).
-c Show count of matches. Annoyingly, includes 0-count files.
-H Show file name, even if only one file (thanks @CraigEstey).
'string' your string goes here.
. Start from the current directory.
| grep -v ':0$' Remove 0-count files. (thanks @LaurentiuRoescu)
(I realize the OP was excluding the pipe trick, but this is what works for me.)
Just use awk, e.g. with GNU awk for ENDFILE:
awk '/jill/{c++} ENDFILE{if (c) print FILENAME":"c; c=0}' *
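ENDFILE is a gawk extension; on an awk without it (e.g. BSD/macOS awk), a portable sketch of the same idea is to flush the count whenever a new file starts:
awk '
FNR==1 { if (c) print f ":" c; c=0; f=FILENAME }
/jill/ { c++ }
END    { if (c) print f ":" c }
' *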
