I am having an issue with using Ag (The Silver Searcher)...
In the docs it says to use -Q for exact match, but I don't understand why it does not work for my purposes. If I type something like ag -Q actions or ag -Q 'actions' into my terminal, it returns all instances of actions, including things like transactions and any other strings that actions is part of.
I have tried a couple of other combinations of flags from the docs, including -s and -S, among others, but I still cannot get it to return only the matches that are exactly actions.
I can't get this to work with grep either. Does anyone know how I can get what I need with ag? (or even with grep)...?
Thank you in advance!
Because ag (and grep) find files that contain something. ag -Q means to interpret the search as an exact literal string, not a fuzzy string or a regex. Okay. But a file that has the word "transactions" in it contains exactly, literally the character sequence actions. Sure, it contains more than that too, but that's not surprising.
Probably you're looking for a word-boundary search, grep '\bactions\b' or ag -w -Q actions (maybe ag -w -Q -s actions). But that is not at all the same thing as "just actions"; it's a specific requirement on the things surrounding "actions" (namely that they be the beginning or end of a line, or non-letter characters). You have to tell the computer what you actually mean.
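A quick way to see the difference, with a throwaway file (sample.txt is just an illustration, and \b assumes GNU grep):
printf 'transactions\nactions\n' > sample.txt
grep -n '\bactions\b' sample.txt   # prints only "2:actions"
ag -w -Q actions sample.txt        # same idea via ag's word-regexp flag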
To clarify a bit, while I am aware how to grab words with a single specific character, I'm unsure how to approach looking for multiple of them. For example, what grep command would be used to retrieve only the words containing both "b" and "p" (in any order), not just one or the other?
Using the above example, if you're given words like "bear," "pear," "biography," and "printable," it would only return the last two words. These are some of my previous attempts.
grep -E "\b[bp]\b" input
grep -E "\b(b|p)\b" input
grep -E "\bb.*p\b" input
You can do it with a regular expression. For instance, here is a code snippet for your problem.
grep '\w*[b]\w*[p]\w*\|\w*[p]\w*[b]\w*' test.txt
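Since the question asks for only the words rather than the whole lines, it's probably worth adding -o (only-matching) to that same command; with GNU grep that prints each matching word on its own line (assuming the sample words are in test.txt):
grep -o '\w*[b]\w*[p]\w*\|\w*[p]\w*[b]\w*' test.txt
# with "bear", "pear", "biography", "printable" in the file, this prints:
# biography
# printable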
Helpful links to read further:
https://www.cyberciti.biz/faq/grep-regular-expressions/
https://regexr.com/
I have a directory /dir
which has several text files in it. These files may or may not contain the words 'rock' and 'stone': some files might contain just 'rock', some just 'stone', some both, and some neither.
How can I list all files in this directory that contain both 'rock' and 'stone'? These words might not be on the same line so I don't think piping through grep twice would work.
Appreciate any help; I was not able to find a Stack Overflow post with this problem, so I figured I'd ask.
To search files that match the given two (or more) words at any line anywhere in the file, you may want to try ugrep:
ugrep -F --files -e 'rock' --and -e 'stone' dir
This only matches files that have both rock and stone in them. It outputs the lines that contain rock or stone, or you can use option -l to just list the files. The -F option searches for literal strings (like grep -F and fgrep), and --files applies the --and file-wide, which is what you want here instead of applying the --and per line. Note that there is more than one pattern in this case, so option -e must be used (as grep also requires).
A shorter form with --bool:
ugrep -F --files --bool 'rock stone' dir
where --bool formulates a Boolean query with space as AND (or use AND).
If you want to search directory dir recursively in subdirectories, use option -r.
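If ugrep isn't available, plain grep can still do the file-level AND in two passes, because -l prints file names rather than lines and the second pass narrows that list (a sketch assuming GNU grep and xargs for the -Z/-0/-r handling; drop those flags if your file names are simple):
grep -lZ 'rock' /dir/* | xargs -0 -r grep -l 'stone'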
Even though there are lots of grep questions and answers, none of them answer this, so I need help. I need to make
Title-BEX-override-8>"
expressions become
Title-BEX>"
Any letters or words between Title-BEX and >" should be removed. I need an exact grep expression for this.
One more thing: I want to do this in multiple files, and I would prefer to do it on a Mac.
grep cannot do text replacement.
try sed
sed 's/Title-BEX-override-8/Title-BEX/g' file
The -i option lets you do it "in place", but I don't know what the corresponding option is for the sed on a Mac. :(
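For what it's worth, the BSD sed that ships with macOS takes a (possibly empty) backup suffix right after -i, and a bracket expression covers "any letters or words" between Title-BEX and >" rather than just the one example (the *.txt glob is only a placeholder for your files):
sed -i '' 's/Title-BEX[^>]*>"/Title-BEX>"/g' *.txt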
I am tasked with white labeling an application so that it contains no references to our company, website, etc. The problem I am running into is that I have many different patterns to look for and would like to guarantee that all patterns are removed. Since the application was not developed in-house (entirely) we cannot simply look for occurrences in messages.properties and be done. We must go through JSP's, Java code, and xml.
I am using grep to filter results like this:
grep SOME_PATTERN . -ir | grep -v import | grep -v // | grep -v /* ...
The patterns are escaped when I'm using them on the command line; however, I don't feel this pattern matching is very robust. There could possibly be occurrences that have import in them (unlikely) or even /* (the beginning of a javadoc comment).
All of the text output to the screen must come from a string declaration somewhere or a constants file. So, I can assume I will find something like:
public static final String SOME_CONSTANT = "SOME_PATTERN is currently unavailable";
I would like to find that occurrence as well as:
public static final String SOME_CONSTANT = "
SOME_PATTERN blah blah blah";
Alternatively, if we had an internal crawler / automated tests, I could simply pull back the xhtml from each page and check the source to ensure it was clean.
To address your concern about missing some occurrences, why not filter progressively:
1. Create a text file with all possible matches as a starting point.
2. Use filter X (grep for '^import', for example) to dump probable false positives into a tmp file.
3. Use filter X again to remove those matches from your working file (a copy of [1]).
4. Do a quick visual pass of the tmp file and add any real matches back in.
5. Repeat [2]-[4] with other filters.
This might take some time, of course, but it doesn't sound like this is something you want to get wrong...
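In shell terms that loop might look roughly like this, assuming GNU grep and with SOME_PATTERN standing in for whichever escaped pattern you're checking:
grep -rin 'SOME_PATTERN' . > working.txt   # [1] every hit, as file:line:text
grep 'import' working.txt > tmp.txt        # [2] probable false positives
grep -v 'import' working.txt > pruned.txt && mv pruned.txt working.txt   # [3] remove them from the working copy
# [4] eyeball tmp.txt and paste any real matches back into working.txt
# [5] repeat [2]-[4] with the next filter, e.g. '//' or '/\*'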
I would use sed, not grep!
Sed is used to perform basic text transformations on an input stream.
Try the s/regexp/replacement/ command with sed.
You can also try the awk command. It has an -F option for field separation; you can use it with ; to split the lines of your files on ;.
The best solution, however, would be a simple script in Perl or Python.
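As a concrete starting point for the sed route, a find/sed pair can run one substitution across the JSPs, Java code, and XML in a single pass (a sketch: -i without a suffix assumes GNU sed, use -i '' on a Mac, and SOME_PATTERN/REPLACEMENT are placeholders for your escaped pattern and its white-label replacement):
find . \( -name '*.jsp' -o -name '*.java' -o -name '*.xml' \) \
  -exec sed -i 's/SOME_PATTERN/REPLACEMENT/g' {} +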
I use procmail to do extensive sorting on my inbox. My next to last recipe matches the incoming From: to a (very) long white/gold list of historically good email addresses, and patterns of email addresses. The recipe is:
# Anything on the goldlist goes straight to inbox
:0
* ? formail -zxFrom: -zxReply-To | fgrep -i -f $HOME/Mail/goldlist
{
LOG="RULE Gold: "
:0:
$DEFAULT
}
The final recipe puts everything left in a suspect folder to be examined as probable spam. Goldlist is currently 7384 lines long (yikes...). Every once in a while, I get a piece of spam that has slipped through and I want to fix the failing pattern. I thought I read a while ago about a special flag in grep that helped show the matching patterns, but I can't find that again. Is there a way to use grep that shows the pattern from a file that matched the scanned text? Or another similar tool that would answer the question short of writing a script to scan pattern by pattern?
grep -o will output only the matched text (as opposed to the whole line). That may help. Otherwise, I think you'll need to write a wrapper script to try one pattern at a time.
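Such a wrapper can stay very small: save the offending message, extract the same headers the recipe tests, and loop over the goldlist one line at a time (a sketch; message.txt as a saved copy of the mail is an assumption):
formail -zxFrom: -zxReply-To < message.txt > headers.txt
while IFS= read -r pat; do
  grep -Fiq -- "$pat" headers.txt && printf '%s\n' "$pat"   # print each goldlist entry that matches
done < "$HOME/Mail/goldlist"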
I'm not sure if this will help you or not. There is a "-o" parameter to output only the matching expression.
From the man page:
-o, --only-matching
Show only the part of a matching line that matches PATTERN.
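Combined with the recipe's existing fgrep call, -o can also be applied directly, which prints just the fragments of the extracted headers that matched something in the goldlist (assuming a grep/fgrep that supports -o, e.g. GNU grep; message.txt again stands in for a saved copy of the mail):
formail -zxFrom: -zxReply-To < message.txt | fgrep -i -o -f "$HOME/Mail/goldlist"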