Using exim and sed commands, I'm getting results like the following:
10 /home/user1
20 /home/user2/public_html
30 /home/user3
40 /home/user4/public_html
50 /home/user5
60 /home/user6/public_html
This shows how many mails have been sent by each user.
How can I get the result in descending order, and extract the username only?
i.e., from the above result I want to get user6 and then run /scripts/suspendacct user6.
With awk and sort:
awk -F '[/ ]' '{print $1,$4}' file | sort -n -r
Output:
60 user6
50 user5
40 user4
30 user3
20 user2
10 user1
Or use cut to get the fields you want and pipe to sort:
$ cut --output-delimiter="" -d / -f 1,3 file | sort -nr
60 user6
50 user5
40 user4
30 user3
20 user2
10 user1
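To go one step further and extract only the top sender's username, you could take the first line of the numerically sorted output (a sketch; the file name senders.txt and the rebuilt sample data are assumptions based on the output above):

```shell
# Rebuild the sample data from the question (hypothetical file name)
printf '%s\n' '10 /home/user1' '20 /home/user2/public_html' \
  '30 /home/user3' '40 /home/user4/public_html' \
  '50 /home/user5' '60 /home/user6/public_html' > senders.txt

# Numeric descending sort, take the first line, split on '/' or space
top=$(sort -nr senders.txt | head -n 1 | awk -F '[/ ]' '{print $4}')
echo "$top"
```

That username can then be passed on, e.g. /scripts/suspendacct "$top".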
I have a directory with a lot of files like this.
If I use grep to search for a string in those files, it will search in this order: file log.0, then log.1, and so on.
But I want grep to search based on time order, so I did this:
grep -i 'stg_data.li51_cicmpdtap0521' $(ls -ltr sequencer_cmbcs_seq_debug.log*) | less
but I get this error:
grep: invalid option -- -
After I changed it to this, it worked:
grep -i 'stg_data.li51_cicmpdtap0521' $(ls -tr sequencer_cmbcs_seq_debug.log*) | less
Why does ls -ltr not work, but ls -tr does? What's the difference between having -l and not having -l here?
The reason ls -ltr does not work is that grep tries to use the entire "long" output of the returned directory listing as its arguments. Essentially, that equates to something like this:
-rw-rw-rw- 1 user staff 473 May 24 18:14 file
Which would give you a grep command like this:
grep -i 'string' -rw-rw-rw- 1 user staff 473 May 24 18:14 file | less
Notice the dashes in the first column of the listing above; grep can't interpret what to make of the input file and returns "invalid option". When you change your ls command to remove the -l long output, you have just the filenames, and grep can proceed.
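A small sketch of the word splitting involved (the file names here are made up): counting the words each command substitution produces shows why the long form breaks.

```shell
# Two empty demo files (hypothetical names)
touch demo.log.0 demo.log.1

# Without -l: the substitution expands to just the two file names
set -- $(ls -tr demo.log.0 demo.log.1)
short_words=$#

# With -l: permissions, link count, owner, size, date... all become arguments
set -- $(ls -ltr demo.log.0 demo.log.1)
long_words=$#

echo "$short_words vs $long_words"
```

Every one of those extra words is handed to grep as if it were an option or a file name, which is where the "invalid option" comes from.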
I'd like to ask: is it possible to somehow combine -v with -A?
I have this example file:
abc
1
2
3
ACB
def
abc
1
2
3
ABC
xyz
With -A I can see the parts I want to "cut":
$ grep abc -A 4 grep_v_test.txt
abc
1
2
3
ACB
--
abc
1
2
3
ABC
Is there some option to specify something so that I see only
def
xyz
?
I found this answer - Combining -v flag and -A flag in grep - but it is not working for me. I tried:
$ sed -e "/abc/{2;2;d}" grep_v_test.txt
sed: -e expression #1, char 8: unknown command: `;'
also
$ sed "/abc/2d" grep_v_test.txt
sed: -e expression #1, char 6: unknown command: `2'
or
$ sed "/abc/+2d" grep_v_test.txt
sed: -e expression #1, char 6: unknown command: `+'
Sed version is:
$ sed --version
GNU sed version 4.2.1
edit1:
Based on a comment, I experimented a little with both solutions, but neither works as I want.
For grep -v -A 1 abc I would expect the lines abc and 1 to be removed, with the rest printed. awk 'c&&!--c; /abc/ {c=2}' grep_v_test.txt prints just the line containing 2, which is not what I wanted.
It is very similar with sed:
$ sed -n '/abc/{n;n;p}' grep_v_test.txt
2
2
edit2:
It seems I'm not able to describe it properly; let me try again.
What grep -A N abc file does is print abc plus the N lines after it. I want to remove what grep -A shows, so in this file:
abc
1
2
3
ACB
def
DEF
abc
1
2
3
ABC
xyz
XYZ
I'll remove both parts from abc to ABC and print the rest:
def
DEF
xyz
XYZ
so 4 lines will remain... The awk solution prints just def and xyz and skips DEF and XYZ...
To skip 5 lines of context, starting with the initial matching line:
$ awk '/abc/{c=5} c&&c--{next} 1' file
def
xyz
See Extract Nth line after matching pattern for other related scripts.
Regarding the comments below, here's the difference between this answer and @fedorqui's answer:
$ cat file
now is the Winter
of our discontent
abc
1
2
bar
$ awk '/abc/{c=3} c&&c--{next} 1' file
now is the Winter
of our discontent
bar
$ awk '/abc/ {c=0} c++>2' file
bar
See how @fedorqui's script unconditionally skips the first 2 lines of the file?
If I understand you properly, you want to print all the lines that occur more than 4 lines after a given match.
For this you can tweak the solutions in Extract Nth line after matching pattern and say:
$ awk '/abc/ {c=0} c++>4' file
def
DEF
xyz
XYZ
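Since the question's failed attempts were with GNU sed 4.2.1, it may also be worth sketching GNU sed's addr,+N address form, which deletes each match plus the N lines after it (the sample file here is rebuilt from edit2):

```shell
# Rebuild the edit2 sample file
printf '%s\n' abc 1 2 3 ACB def DEF abc 1 2 3 ABC xyz XYZ > grep_v_test.txt

# GNU sed: delete each /abc/ match and the 4 lines that follow it
sed '/abc/,+4d' grep_v_test.txt
```

Note that addr,+N is a GNU extension, so this won't work in POSIX or BSD sed.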
There are files a and b, and I want to find their common lines and differing lines.
➜ ~ cat a <(echo) b
1
2
3
4
5
1
2
a
4
5
#find common lines
➜ ~ grep -F -f a b
1
2
4
5
#find b-a
➜ ~ grep -F -v -f a b
a
Everything is OK, but when one file has an empty line, grep doesn't work; see below.
# add an empty line in file a
➜ ~ cat a
1
2
3
4
5
# 'a' is not common, yet everything is printed
➜ ~ grep -F -f a b
1
2
a
4
5
# b-a prints nothing
➜ ~ grep -F -v -f a b
Why is this so? Why does grep stop working correctly once there is an empty line?
In addition, using grep to find common elements has another problem, e.g.
➜ ~ cat a <(echo) b
1
2
3
4
5
6
1
2
a
4
5
6_id
➜ ~ grep -F -f a b
1
2
4
5
6_id
Can you use comm and diff instead of grep?
To find common lines (note that comm expects its inputs to be sorted):
comm -12 a b
To find differing lines:
diff a b
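A grep-based sketch that addresses both problems at once (the files are rebuilt from the second example, with a trailing empty line added to a): -x restricts matches to whole lines, which fixes the 6 vs 6_id substring issue, and filtering empty lines out of the pattern file fixes the empty-line issue, since an empty pattern matches every line.

```shell
# Rebuild the second example's files; note the trailing empty line in a
printf '%s\n' 1 2 3 4 5 6 '' > a
printf '%s\n' 1 2 a 4 5 6_id > b

# Drop empty patterns, then match fixed strings against whole lines only
grep -v '^$' a > a.clean
grep -Fxf a.clean b
```

This prints only the truly common whole lines of b.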
A quick question: ls . | grep -E "^[0-9]" gives me the results in the following format:
1
2
3
4
5
How can I have it simply displayed as 1 2 3 4 5?
Try
ls . | grep -E "^[0-9]" | tr '\n' ' ' ; echo
Try this with tr:
your cmd ... | tr '\n' ' '
Try ls . | grep -E "^[0-9]" | tr '\n' ' '
Using awk
ls . | awk '/^[0-9]/ {printf "%s ",$0}'
Or, more cleanly:
ls . | awk '/^[0-9]/ {printf "%s ",$0} END {print ""}'
If it is available, you can use the column command from bsdmainutils:
ls | grep '^[0-9]' | column
Output:
1 2 3 4 5
Another test:
seq 50 | column
Example output:
1 6 11 16 21 26 31 36 41 46
2 7 12 17 22 27 32 37 42 47
3 8 13 18 23 28 33 38 43 48
4 9 14 19 24 29 34 39 44 49
5 10 15 20 25 30 35 40 45 50
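Another option is paste with -s, which joins all input lines into one line with a chosen delimiter and ends it with a newline, so no trailing echo is needed (a sketch with hardcoded input standing in for the ls | grep pipeline):

```shell
# Join the lines with spaces; '-' reads from stdin
printf '%s\n' 1 2 3 4 5 | paste -sd ' ' -
```

In the question's setting this would be ls . | grep -E "^[0-9]" | paste -sd ' ' -.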
I have a list of numbers in a file with the format {integer}\n. So a possible list is:
3
12
53
23
18
32
1
4
I want to use grep to get the count of a specific number, but grep -c "1" file returns 3 because besides the 1 it also counts the 12 and the 18. How can I correct this?
Although all the answers so far are logical, and I had thought of them and tested them before, nothing actually works:
username@domain2:~/code/***/project/random/r2$ cat out.txt
2
16
11
1
13
2
1
16
16
9
username@domain2:~/code/***/project/random/r2$ grep -Pc "^1$" out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -Pc ^1$ out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -c ^1$ out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -c "^1$" out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -xc "^1$" out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -xc "1" out.txt
0
Use the -x flag:
grep -xc 1 file
This is what it means:
-x, --line-regexp
Select only those matches that exactly match the whole line.
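A quick sketch of the difference, using the sample numbers from the question (the file name nums.txt is made up):

```shell
printf '%s\n' 3 12 53 23 18 32 1 4 > nums.txt

grep -c 1 nums.txt    # substring match: counts the lines 12, 18, and 1
grep -xc 1 nums.txt   # whole-line match: counts only the line "1"
```

The first count is 3 and the second is 1, which is the behavior the question asked for.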
There are some other ways you can do this besides grep:
$ cat file
3 1 2 100
12 x x x
53
23
18
32
1
4
$ awk '{for(i=1;i<=NF;i++) if ($i=="1") c++}END{print c}' file
2
$ ruby -0777 -ne 'puts $_.scan(/\b1\b/).size' file
2
$ grep -o '\b1\b' file | wc -l
2
$ tr " " "\n" < file | grep -c "\b1\b"
2
Use this regex...
\D1\D
...or ^1$ with multiline mode on.
Tested with RegExr and they both work.
Use e.g. ^123$ to match "Beginning of line, 123, End of line"
grep -wc '23' filename.txt
It will count the number of whole-word matches of the number 23.