I have Tomcat access logs in multiple files.
All the files are under the same directory, and I am on macOS.
One of the files looks like this:
POST /context HTTP/1.1 200 266 20 <url>.com
GET /context1 HTTP/1.1 200 266 20 <url>.in
POST /context2 HTTP/1.1 200 266 20 <url>.de
Now I want to grep/search and print the lines which match both "context" and ".com".
This has to be done over the multiple files in the directory.
I tried grep "/context" . | grep ".com", but this does not work on a directory.
If the .com always follows /context, try
grep 'context.*\.com' *
(or a better wildcard if you have other files; maybe try tomcat*.log instead of *?)
If the patterns could be in any order, you could use
grep -E 'context.*\.com|\.com.*context' *
(The -E option switches to a different regex dialect which lets you use | for "or". You could also remove the -E option and use \| instead, but I think this is clumsy and confusing.)
... or switch to Awk.
awk '/context/ && /\.com/' *
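For instance, against the three sample lines above, the awk version prints only the first line, since it is the only one containing both strings (the grep versions select the same line, prefixed with the file name when several files are searched):
awk '/context/ && /\.com/' *
# POST /context HTTP/1.1 200 266 20 <url>.com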
Say I have a directory /home/, and within it I have 3 subdirectories: /home/red/, /home/blue/ and /home/green/.
Each subdirectory contains one file:
/home/red/file1 /home/blue/file2 /home/green/file3
Now I want to find how many times file1, file2 and file3 each contain the word "hello".
For example,
/home/red/file1 - 23
/home/blue/file2 - 6
/home/green/file3 - 0
Now, going to each file's location and running grep there is very inefficient once this problem scales.
I have tried using this grep command from the /home/ directory
grep -rnw '/path/to/somewhere/' -e 'pattern'
But this just prints the matching lines rather than a count.
Is there any command through which I can get what I am looking for?
If the search term occurs at most once per line, you can use grep's -c option to report the count instead of the matching lines. So the command will be grep -rc 'search' (add other options as needed).
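For example, run from /home (the counts from the question are reused here for illustration), that could look like:
grep -rc 'hello' .
# ./red/file1:23
# ./blue/file2:6
# ./green/file3:0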
If there can be more than one occurrence per line, I'd recommend ripgrep. Note that rg searches recursively by default, so you can use something like rg -co 'search' from within the home directory (add other options as needed). Add --hidden if you need to search hidden files as well, and --include-zero if you want to list files even when they have no match.
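A sketch of the ripgrep variant, again with illustrative counts (--include-zero needs a reasonably recent ripgrep):
rg -co 'hello' --include-zero
# red/file1:23
# blue/file2:6
# green/file3:0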
Instead of grep you can use this find | gnu-awk solution:
cd /home
find {red/file1,blue/file2,green/file3} -type f -exec awk '
{c += gsub(/pattern/, "&")} ENDFILE {print FILENAME, "-", c+0; c=0}' {} +
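With /hello/ substituted for /pattern/ and the counts from the question, the output (run from /home) would look like:
# red/file1 - 23
# blue/file2 - 6
# green/file3 - 0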
I'm trying to find lines with words not preceded by double colons (::).
Example
void myClass::doMyThing() // I don't want this line
myObj->doMyThing() // I want this line
My goal is to get the lines where some methods are used, but not where the methods are defined.
I tried this command:
grep --color=always -rwna "methodName" --include=*.cpp | grep -v "::methodName"
but it doesn't work: it keeps extracting lines containing
::methodName
I've also tried by writing
grep --color=always -rwna "methodName" --include=*.cpp | grep -v "\:\:methodName"
egrep --color=always -rwna "methodName" --include=*.cpp | egrep -v "\:\:methodName"
but neither works.
What should I do?
Although grep is probably the most commonly used of all Linux CLI tools, used by everyone and everywhere, that still doesn't mean it's perfect. What you are trying to achieve is not possible with grep's basic regex; you need Perl/Python-style regex here.
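That said, if your grep is built with PCRE support (GNU grep's -P flag), a negative lookbehind can do it in one pass; a sketch, assuming GNU grep and the method name from the question:
grep -Prn '(?<!::)\bmethodName\b' --include='*.cpp' .
If -P is not available, the two-step workaround below does it with plain ERE.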
As a workaround (I assume you are trying to find only the lines where the method is invoked) you can try:
grep -Eno "(::)?methodName" your_input_files | grep -v "::methodName"
-n prints the line number, which I believe you will find convenient
-o prints only the matched part; I use it here to split the output so that each match lands on a separate line (if you have methodName 5 times on one line of code, you will get 5 lines in grep's output)
(::)? lets us distinguish a declaration from an invocation of methodName; we will need that when the second grep comes into play...
grep -v ...and here it comes, to get rid of what you don't want
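For instance, on the two example lines from the question (the file name is hypothetical), the first grep splits out each match with its line number, and the second one drops the qualified match:
grep -Eno '(::)?doMyThing' myClass.cpp
# 1:::doMyThing
# 2:doMyThing
grep -Eno '(::)?doMyThing' myClass.cpp | grep -v '::doMyThing'
# 2:doMyThing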
I guess you will want to use this many times, so you could even make it a function in your .bashrc:
find_invocations () {
# the example below goes through the current dir, but you can improve it :)
grep --color=yes -Eno "(::)?$1" * 2>/dev/null | grep -v "::$1"
}
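After sourcing your .bashrc, you would call it from the source directory like this (the file name and line number in the output are illustrative):
find_invocations doMyThing
# main.cpp:2:doMyThing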
In the above function you might get risky and use $1.* instead of $1, but an unpleasant case is when you have both methodName and ::methodName on the same line. As far as I remember from my C++ lessons (ages ago, anno 2010), methodName::methodName is a constructor...
...sorry for my bad English.
I've finally managed to make it work.
I've tried linux_beginner's suggestion:
grep -Eno '(::)?myMethodName' path/to/one/of/the/files.cpp | grep -v '::myMethodName'
with a single file, and this works. (I found I prefer not using the -o option, because I also want to see how the method is used.)
In this search I anyway need to cover multiple files, so I've also tried to include more files:
grep -Eno '(::)?myMethodName' --include=*.cpp | grep -v '::myMethodName'
but in this case it just seems stuck (in hindsight: without -r or any file names, grep is simply waiting to read from standard input, so --include has nothing to act on).
I've checked RavinderSingh13's command. Taken on its own, it correctly captures the lines with the double colon (and only them), both on a single file and on multiple files:
grep -rna '::myMethodName' path/to/one/of/the/file.cpp
grep -rna '::myMethodName' --include=*.cpp
but the -w switch must not be there, as the following:
grep -rwna '::myMethodName' path/to/one/of/the/file.cpp
grep -rwna '::myMethodName' --include=*.cpp
don't get any result.
RavinderSingh13's suggestion put inside the pipeline doesn't manage to filter out the double colon lines (my original goal), either with a single file or with multiple files:
grep -rwna 'myMethodName' path/to/one/of/the/files.cpp | grep -v '::[[:alpha:]]+'
-> extracts both myMethodName and ::myMethodName from the chosen file
grep -rwna 'myMethodName' --include=*.cpp | grep -v '::[[:alpha:]]+'
-> extracts both myMethodName and ::myMethodName from all the cpp files
Now, here is how I solved it:
usually, when I chain grep commands, I also give the first of them the switch --color=always, which preserves the coloring of the results across the piping of multiple commands.
But that... was the culprit!
i.e., doing
grep --color=always -rwna 'myMethodName' --include=*.cpp | grep -v '::myMethodName'
preserves the color in results, but sadly fails to exclude lines containing ::myMethodName, while
grep -rwna 'myMethodName' --include=*.cpp | grep -v '::myMethodName'
gives colorless but correct results (it manages to filter out the double colon lines).
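The reason: with --color=always the first grep wraps each match in ANSI escape sequences, so what reaches the second grep is "::" plus escape codes plus "myMethodName", and the literal pattern ::myMethodName no longer matches anything. If you want both the filtering and the coloring, one workaround (a sketch, assuming GNU grep) is to apply the color in the last stage of the pipeline instead of the first:
grep -rwna 'myMethodName' --include='*.cpp' . | grep -v '::myMethodName' | grep --color=always -w 'myMethodName'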
The distribution on which I've experimented with these commands and behaviours is Ubuntu 20.04.1 LTS.
Grep version: grep (GNU grep) 3.4
Thanks everybody for the interest.
I have 18 CSV files, all between 1 MB and 14 MB; together they total 64 MB. I want to create a new CSV file that contains a subset of those files: only the lines featuring the pattern "Hello" (or "HELLO", or "hello", ...). Here's what I'm doing:
cat *.csv | head -n 1 > new.csv # I want to create a header first
cat *.csv | grep -i "hello" >> new.csv
I'm running Debian on WSL. The output file is much, much larger than the original 64 MB (I stopped the process after more than an hour, when the file had passed 300 GB).
How can a subset of a text file be larger than the original files? Does it have anything to do with WSL?
This is not an OS issue. Your first command already creates new.csv, so when the glob *.csv in the second command is expanded, new.csv is included in it. grep then appends its matches to the very file cat is still reading, so the output feeds back into the input. That is the root cause of the runaway growth you are seeing.
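You can see the self-inclusion with a quick experiment in an empty directory (file names hypothetical):
touch a.csv b.csv new.csv
echo *.csv
# prints: a.csv b.csv new.csv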
You are also reading all the files twice, which is unnecessary. You can make the operation a lot simpler and more efficient with a single awk command:
awk 'NR==1 {print} tolower($0) ~ /hello/ {print}' *.csv > csv.new
mv csv.new new.csv
Since the output file is named csv.new, it won't interfere with the glob *.csv.
NR==1 picks up the first line (the header) from the very first file.
The awk command can be written more succinctly as:
awk 'NR==1 || tolower($0) ~ /hello/' *.csv > csv.new
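If every input file carries its own header line, you may also want to drop the repeated headers explicitly (they would sneak in only if they happen to contain "hello"). A sketch, assuming each file starts with a header:
awk 'NR==1 { print; next } FNR==1 { next } tolower($0) ~ /hello/' *.csv > csv.new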
You are using *.csv and redirecting the output to new.csv, which itself matches *.csv; that is what causes the recursion in the grep result. Perhaps you can try:
grep -i hello *.csv --exclude="new.csv" >> new.csv
I am passing all my SVN commit log messages to a file and want to grep only the JIRA issue numbers from it.
Some lines might have more than 1 issue number, but I want to grab only the first occurrence.
The pattern is XXXX-999 (the number of alphabetic and numeric characters is not constant).
Also, I don't want the entire line to be displayed, just the JIRA number, without duplicates. I used the following command, but it didn't work.
Could someone help please?
cat /tmp/jira.txt | grep '^[A-Z]+[-]+[0-9]'
Log file sample
------------------------------------------------------------------------
r62086 | userx | 2015-05-12 11:12:52 -0600 (Tue, 12 May 2015) | 1 line
Changed paths:
M /projects/trunk/gradle.properties
ABC-1000 This is a sample commit message
------------------------------------------------------------------------
r62084 | usery | 2015-05-12 11:12:12 -0600 (Tue, 12 May 2015) | 1 line
Changed paths:
M /projects/training/package.jar
EFG-1001 Test commit
Output expected:
ABC-1000
EFG-1001
First of all, it seems like you have the second + in the wrong place: it should be at the end of the [0-9] expression. Note also that in grep's default basic-regex dialect + is an ordinary character, which is why the commands below use -E.
Second, I think all you need to do is use the -o option to grep (to display only the matching portion of the line), then pipe the grep output through sort -u, like this:
cat /tmp/jira.txt | grep -oE '^[A-Z]+-[0-9]+' | sort -u
Although if it were me, I'd skip the cat step and just give the filename to grep, like so:
grep -oE '^[A-Z]+-[0-9]+' /tmp/jira.txt | sort -u
Six of one, half a dozen of the other, really.
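As an aside: if the issue number were not always at the start of the line and you still wanted just the first occurrence per line, awk's match() would give you that, since it reports only the leftmost match; a sketch:
awk 'match($0, /[A-Z]+-[0-9]+/) { print substr($0, RSTART, RLENGTH) }' /tmp/jira.txt | sort -u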
Given a log file in the standard combined access_log format of nginx or Apache, how would you, in a UNIX shell, calculate the number of visits or page views (i.e. total requests) from each visitor (i.e. IP address) that a given referrer once brought?
In other words, the number of ALL requests by each visitor that found a link to your site on another site.
The best snippet I could come up with is the following:
fgrep http://t.co/ /var/www/logs/access.log | cut -d " " -f 1 | sort -u | \
fgrep -f /dev/fd/0 /var/www/logs/access.log | cut -d " " -f 1 | sort | uniq -c
What does this do?
We first find the unique IP addresses of visits that have http://t.co/ in the log entry. (Notice that this alone would only count visits that came directly from the referrer, not those that stayed and browsed the site further.)
Having the list of IP addresses that, at one point, were referred from the given URL, we pipe that list into another fgrep through stdin, /dev/fd/0, to find all hits from those addresses (a very inefficient alternative would have been xargs -n1 fgrep access.log -e instead of fgrep -f /dev/fd/0 access.log).
After the second fgrep we get the same set of IP addresses as in the first step, but now each repeats according to its total number of requests; then sort, uniq -c, done. :)
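For what it's worth, the same two-pass idea can be written as a single awk program that reads the log twice; a sketch, assuming the combined format with the client IP address in field 1:
awk 'NR==FNR { if (/http:\/\/t\.co\//) ref[$1] = 1; next }  # pass 1: collect referred IPs
     $1 in ref { n[$1]++ }                                  # pass 2: count all their requests
     END { for (ip in n) print n[ip], ip }' /var/www/logs/access.log /var/www/logs/access.log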