How can I improve the following method of comparing certain lines (lines that start with "#") of two files? I feel certain this could be done in one line, without embarrassing temporary files. I am pretty new to Linux, so go easy on me! Thanks in advance.
grep "^#" myfile1 > temp1
grep "^#" myfile2 > temp2
diff temp1 temp2
In Bash, you can use process substitution, <(...), which handles the temporaries for you (they are usually implemented as named pipes under the hood):
diff <(grep "^#" myfile1) <(grep "^#" myfile2)
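For instance, you can see the temporary path Bash substitutes for each <(...); on a typical Linux system it is an entry under /dev/fd (the exact number varies):
$ echo <(true)
/dev/fd/63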
I'm trying to reduce a .sm file, file1 (around 10 GB), by filtering it with a fairly long list of words (around 180,108 items) stored in a text file, file2.
File1 is structured as follows:
word <http://internet.address.com> 1
i.e. one word followed by a blank space, an internet address, and a number.
File2 is a simple .txt file, a list of words, one on each line.
My aim is to create a third file, file3, containing only those lines of file1 whose first word matches a word in the list in file2, and to discard the rest.
My attempt is the following:
grep -w -F -f file2.txt file1.sm > file3.sm
I've also attempted something along this line:
gawk 'FNR==NR {a[$1]; next } !($2 in a)' file2.txt file1.sm > file3.sm
but with no success. I understand ^ and \b might play a part here, but I don't know how to fit them into the syntax. I've looked around extensively, but no solution seems to fit.
My problem is that grep matches against the entire line of file1, so the match can fall inside the web address, which I'm not interested in.
This prepends ^ to every word in file2.txt, so grep only matches them at the start of a line of file1.sm:
sed 's/^/^/' file2.txt | grep -f - file1.sm
join is the best tool for this, not grep/awk:
join -t' ' <(sort file1.sm) <(sort file2.txt) >file3.sm
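If you would rather stick with gawk, a variant of your own attempt along these lines should also work (a sketch, untested; it keeps the lines of file1.sm whose first field appears in file2.txt):
gawk 'FNR==NR { a[$1]; next } ($1 in a)' file2.txt file1.sm > file3.sm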
I am trying to scan a file (test.txt) that looks something like this:
make
bake
baker
makes
take
cook
sbake
for patterns listed in a separate file (ref.txt):
ake
make
bake
look
I have tried looping with grep like so:
while read seq; do grep -c "$seq" test.txt; done > out.txt < ref.txt
However, it doesn't count partial matches, only exact matches (or it is inconsistent in counting partial matches), and I get this output:
4
1
2
0
instead of
6
2
3
0
Thanks for any help!
See why-is-using-a-shell-loop-to-process-text-considered-bad-practice for some, but not all, of the reasons not to try to do this with a shell loop.
The standard UNIX tool for manipulating text is awk:
$ awk 'NR==FNR{cnt[$0]=0;next} {for (re in cnt) cnt[re]+=gsub(re,"&")} END{for (re in cnt) print re, cnt[re]}' ref.txt test.txt
ake 6
bake 3
look 0
make 2
The above assumes the text in your ref.txt file doesn't contain any regexp metacharacters, or that if it does, a regexp match is what you want. If it can contain metacharacters but you need string rather than regexp matching, you'd need a slightly different solution.
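For example, here is a sketch of one such string-based variant (untested), using index() so that anything in ref.txt is treated as a literal string rather than a regexp:
awk 'NR==FNR { cnt[$0]=0; next }
     { for (str in cnt) {
         s = $0
         while ((p = index(s, str)) > 0) { cnt[str]++; s = substr(s, p + length(str)) }
       } }
     END { for (str in cnt) print str, cnt[str] }' ref.txt test.txt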
$ while read -r line; do grep -c $line test.txt ; done < ref.txt
6
2
3
0
I created a test file with the following:
<cert>
</cert>
I'm now trying to find this with grep and the following command, but it takes forever to run.
How can I search quickly for files that contain adjacent lines like these?
tr -d '\n' | grep '<cert></cert>' test.test
So, from the comments, you're trying to get the filenames that contain an empty <cert>..</cert> element. You're using several tools wrong. As @iiSeymour pointed out, tr only reads from standard input, so if you want to use it to select from lots of filenames, you'll need a loop. grep prints out matching lines, not filenames, though you could use grep -l to see the filenames instead.
But you're only joining lines because grep works one line at a time; so let's use a better tool. Here's how to search with awk:
awk '/<cert>/ { started=1; }
/<\/cert>/ { if (started) { print FILENAME; nextfile;} }
!/<cert>/ { started = 0; }' file1 file2 *.txt
It checks each line and keeps track of whether the previous line matched <cert>. (!/pattern/ sets the flag back to zero on lines not matching /pattern/.) Call it with all your files (or with a wildcard like *.txt).
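If you do want the filenames straight from grep, and you have a reasonably recent GNU grep, its -z and -P extensions (NUL-terminated "lines", which in practice slurp a whole text file, plus Perl regexps) can do a multiline match; this is only a sketch under that assumption, not a portable solution:
grep -lPz '<cert>\s*</cert>' file1 file2 *.txt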
And a friendly suggestion: Next time, try each command separately (you've been stuck on this for hours and you still don't know what grep does?). And have a quick look at the manual for the tools you want to use. Unix tools are usually too complex for simple trial and error.
Any help would be greatly appreciated. I can read code and figure it out, but I have trouble writing from scratch.
I need help starting a ksh script that would search a file for multiple strings and write each line containing one of those strings to an output file.
If I use the following command:
$ grep "search pattern" file >> output file
...that does what I want it to. But I need to search for multiple strings, and write the output in the order listed in the file.
Again... any help would be great! Thank you in advance!
Have a look at the regular expression manuals. You can specify multiple strings in the search expression, such as grep -E "John|Bill".
man grep will teach you a lot about regular expressions, but there are several online sites where you can try them out, such as regex101 and (more colorful) regexr.
Sometimes you need egrep.
egrep "first substring|second substring" file
When you have a lot of substrings, you can put them in a variable first:
findalot="first substring|second substring"
findalot="${findalot}|third substring"
findalot="${findalot}|find me too"
skipsome="notme"
skipsome="${skipsome}|dirty words"
egrep "${findalot}" file | egrep -v "${skipsome}"
Use "-f" in grep .
Write all the strings you want to match in a file ( lets say pattern_file , the list of strings should be one per line)
and use grep like below
grep -f pattern_file file > output_file
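For example, with a hypothetical pattern_file holding the two strings from the earlier answer, one per line:
$ cat pattern_file
John
Bill
$ grep -f pattern_file file > output_file
Because grep reads file only once, the matching lines land in output_file in the order in which they appear in file; add -F if the strings should be matched literally rather than as regular expressions.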
I'm trying to run this command to do some cleanups.
egrep -v -f ref_file.css my_file.css
However, it is giving me an error.
egrep: Unmatched ( or \ (
How can I get around that? I'm on the Mac terminal.
Thanks,
Tee
I know I'm a bit late, but maybe this can help other users looking for the same...
If you just want to check the differences between two files, then you can use diff, as mentioned in the comments. However, if the files are somewhat similar and you are looking for a way to check the differences visually, you can use sdiff file1 file2 to get diff-like output with the two files shown side by side.
In the output, | between the two columns means that the line appears in both files, but with differences in text or formatting.
< and > mean that the line exists only in the left or the right file, respectively, and not in the other.
You may then grep the output from sdiff for ' | ', ' < ', and ' > ' and analyze that if you need to do further processing on the files...
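For instance, sticking with the file names from the question, something like this (a sketch, assuming sdiff's default column padding keeps spaces around the markers) pulls out just the lines that differ between the two files:
sdiff ref_file.css my_file.css | grep ' | '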