A confusing error when executing commands in a foreach loop in csh

The program is very simple:
#!/bin/csh -f
foreach path ( fileA.txt fileB.txt )
    wc -l $path
    grep "test" $path
end
However, the output is:
fileA.txt/wc: Not a directory.
fileA.txt/grep: Not a directory.
fileB.txt/wc: Not a directory.
fileB.txt/grep: Not a directory.
So what's wrong with the code and what's the correct way of doing it?

You should never use path as a generic variable name in csh: it is the special shell variable that holds the list of directories searched for commands, and it is kept in sync with the PATH environment variable. Your loop overwrites it with a filename, so when the shell next looks up wc or grep it searches inside fileA.txt, which is exactly what the "Not a directory" errors are telling you.
This will work much better than your code:
#!/bin/csh -f
foreach mypath ( fileA.txt fileB.txt )
    wc -l $mypath
    grep "test" $mypath
end
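To see the mechanism for yourself, here is a minimal csh snippet (filenames are just illustrative) showing that assigning to path immediately rewrites the command search path:
#!/bin/csh -f
echo $PATH                 # e.g. /usr/bin:/bin
set path = ( fileA.txt )   # same effect as the foreach assignment
echo $PATH                 # now just fileA.txt
wc -l fileA.txt            # fails: the shell looks for fileA.txt/wc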

Related

Do Bazel genrules offer a temp directory?

Does Bazel offer a variable substitution for a temp directory in genrules?
Sometimes I need a staging area before creating the final output artefact.
I am imagining something like this:
genrule(
    name = "example",
    srcs = [ "a.txt" ],
    outs = [ "b.txt" ],
    cmd = "cp $< $(TMP)/b.txt && cp $(TMP)/b.txt $@",
)
$(TMP) would be a folder generated for me by Bazel on each rule execution.
No, it doesn't (as of Bazel 0.23.1).
It does set $TMPDIR though (even with --incompatible_strict_action_env), so mktemp should work. But $TMPDIR is by no means a dedicated temp directory (it's often just /tmp), so be careful what you clobber.
I migrated my genrule to a full Starlark rule. There I can do
tmp = ctx.actions.declare_directory("TMP_" + ctx.label.name)
and just use that directory as my temp in further actions.
It is similar to what the Starlark tutorial shows, in https://docs.bazel.build/versions/2.0.0/skylark/rules-tutorial.html#creating-a-file. The difference is that I do not register that directory as an output. That is, I don't do something like
return [DefaultInfo(files = depset([tmp]))]
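For illustration, here is a minimal Starlark sketch along those lines (the rule and attribute names are made up for the example). The scratch directory is still listed as an output of the action so Bazel creates it; it is just not advertised in DefaultInfo:
def _example_impl(ctx):
    tmp = ctx.actions.declare_directory("TMP_" + ctx.label.name)
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.run_shell(
        inputs = ctx.files.srcs,
        # tmp must be an output of some action, but it is not returned below
        outputs = [out, tmp],
        command = "cp $1 $2/b.txt && cp $2/b.txt $3",
        arguments = [ctx.files.srcs[0].path, tmp.path, out.path],
    )
    return [DefaultInfo(files = depset([out]))]

example = rule(
    implementation = _example_impl,
    attrs = {"srcs": attr.label_list(allow_files = True)},
)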
You can make your own inside the genrule's bash code:
export TMP=$(mktemp -d || mktemp -d -t bazel-tmp)
trap "rm -rf $TMP" EXIT # Delete on exit
# Do things...
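Put together, the question's imagined genrule could look roughly like this (a sketch; note the $$ escaping genrules need for a literal shell $):
genrule(
    name = "example",
    srcs = [ "a.txt" ],
    outs = [ "b.txt" ],
    cmd = "TMP=$$(mktemp -d) && trap 'rm -rf \"$$TMP\"' EXIT && " +
          "cp $< $$TMP/staged.txt && cp $$TMP/staged.txt $@",
)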

Getting `No such file or directory` error on egrep in function on zsh

I've created a small function to allow me to grep through my command history on zsh. The command history 1 will display the entire command history, and running history 1 | egrep ls shows just those commands containing ls.
So my function looks like this:
h() {
    if [ -z "$*" ]
    then
        history 1
    else
        history 1 | egrep "$@"
    fi
}
Unfortunately this only results in the following error message:
$ h ls
egrep: ls: No such file or directory
I'm at a loss as to what is wrong in my script. I've tried both grep and egrep to no avail.
What is the full path of grep or egrep?
It's possible that it's running in an alternate shell which has a different PATH set. Try using an explicit /usr/bin/grep or /usr/bin/egrep and see if that fixes anything.
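For example, you can check what egrep actually resolves to, and which search path the function sees:
type -a egrep   # zsh builtin; lists every match in order of precedence
echo $PATH      # the search path in effect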
Create a file history.zsh (slightly changed from the original):
#!/bin/zsh
h() {
    if [ -z "$*" ]
    then
        history
    else
        history | fgrep "$*"
    fi
}
Now source this file (so "h" will be refreshed):
. history.zsh
And call the new function:
$ h ls
30 h ls
31 ls
I've abandoned the function. Further reading on the subject of zsh history led me to this very elegant solution that meets my needs: https://coderwall.com/p/jpj_6q
In a nutshell you add this to your .zshrc:
autoload -U up-line-or-beginning-search
autoload -U down-line-or-beginning-search
zle -N up-line-or-beginning-search
zle -N down-line-or-beginning-search
bindkey "^[[A" up-line-or-beginning-search # Up
bindkey "^[[B" down-line-or-beginning-search # Down
Now your history can be searched by entering a partial term and using the up or down arrow keys to walk through the matches from your history file.

Shell: Find Matching Lines Across Many Files

I am trying to use a shell script (well, a "one-liner") to find any common lines between around 50 files.
Edit: Note I am looking for a line (lines) that appears in all the files
So far I've tried grep -v -x -f file1.sp *, which just matches that file's contents across ALL the other files.
I've also tried grep -v -x -f file1.sp file2.sp | grep -v -x -f - file3.sp | grep -v -x -f - file4.sp | grep -v -x -f - file5.sp etc... but I believe that searches using the files to be searched as stdin, not the pattern to match on.
Does anyone know how to do this with grep or another tool?
I don't mind if it takes a while to run. I've got to add a few lines of code to around 500 files and wanted to find a common line in each of them to insert 'after' (they were originally just copied and pasted from one file, so hopefully there are some common lines!)
Thanks for your time,
When I first read this I thought you were trying to find 'any common lines'. I took this as meaning "find duplicate lines". If this is the case, the following should suffice:
sort *.sp | uniq -d
Upon re-reading your question, it seems that you are actually trying to find lines that 'appear in all the files'. If this is the case, you will need to know the number of files in your directory:
find . -type f -name "*.sp" | wc -l
If this returns the number 50, you can then use awk like this:
WHINY_USERS=1 awk '{ array[$0]++ } END { for (i in array) if (array[i] == 50) print i }' *.sp
You can consolidate this process and write a one-liner like this:
WHINY_USERS=1 awk -v find="$(find . -type f -name '*.sp' | wc -l)" '{ array[$0]++ } END { for (i in array) if (array[i] == find) print i }' *.sp
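One caveat with the array[$0]++ approach: it also counts repeats within a single file, so a line duplicated inside one file can reach the target count without appearing in every file. A variant that counts each line at most once per file (a sketch; ARGC - 1 is the number of file arguments):
awk '!seen[FILENAME, $0]++ { count[$0]++ }
     END { for (line in count) if (count[line] == ARGC - 1) print line }' *.sp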
old, bash answer (O(n); opens 2 * n files)
From @mjgpy3's answer, you just have to make a for loop and use comm, like this:
#!/bin/bash
# comm expects sorted input, so sort each file on the way in
tmp1="/tmp/tmp1$RANDOM"
tmp2="/tmp/tmp2$RANDOM"
sort "$1" > "$tmp1"
shift
for file in "$@"
do
    comm -1 -2 "$tmp1" <(sort "$file") > "$tmp2"
    mv "$tmp2" "$tmp1"
done
cat "$tmp1"
rm "$tmp1"
Save it as comm.sh, make it executable, and call
./comm.sh *.sp
assuming all your filenames end with .sp.
Updated answer, python, opens each file only once
Looking at the other answers, I wanted to give one that opens each file only once, without using any temporary file, and that supports duplicated lines. Additionally, it processes the files in parallel.
Here you go (in python3):
#!/usr/bin/env python3
import argparse
import sys
import multiprocessing
import os

EOLS = {'native': os.linesep.encode('ascii'), 'unix': b'\n', 'windows': b'\r\n'}

def extract_set(filename):
    # one set of lines per file, line endings stripped
    with open(filename, 'rb') as f:
        return set(line.rstrip(b'\r\n') for line in f)

def find_common_lines(filenames):
    # read the files in parallel, then intersect all the line sets
    pool = multiprocessing.Pool()
    line_sets = pool.map(extract_set, filenames)
    return set.intersection(*line_sets)

if __name__ == '__main__':
    # usage info and argument parsing
    parser = argparse.ArgumentParser()
    parser.add_argument("in_files", nargs='+',
                        help="find common lines in these files")
    parser.add_argument('--out', type=argparse.FileType('wb'),
                        help="the output file (default: stdout)")
    parser.add_argument('--eol-style', choices=EOLS.keys(), default='native',
                        help="(default: native)")
    args = parser.parse_args()

    # actual stuff
    common_lines = find_common_lines(args.in_files)

    # write results to output
    to_print = EOLS[args.eol_style].join(common_lines)
    if args.out is None:
        # find out stdout's encoding, falling back to utf-8 if absent
        encoding = sys.stdout.encoding or 'utf-8'
        sys.stdout.write(to_print.decode(encoding))
    else:
        args.out.write(to_print)
Save it as find_common_lines.py, and call
python ./find_common_lines.py *.sp
More usage info with the --help option.
Combining these two answers (ans1 and ans2), I think you can get the result you need without sorting the files:
#!/bin/bash
ans="matching_lines"
for file1 in *
do
    for file2 in *
    do
        if [ "$file1" != "$ans" ] && [ "$file2" != "$ans" ] && [ "$file1" != "$file2" ] ; then
            echo "Comparing: $file1 $file2 ..." >> $ans
            perl -ne 'print if ($seen{$_} .= @ARGV) =~ /10$/' $file1 $file2 >> $ans
        fi
    done
done
Simply save it, give it execution rights (chmod +x compareFiles.sh) and run it. It will take all the files present in the current working directory, make an all-vs-all comparison, and leave the result in the "matching_lines" file.
Things to be improved:
Skip directories
Avoid comparing all the files twice (file1 vs file2 and file2 vs file1); see the sketch after this list.
Maybe add the line number next to the matching string
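For instance, the first two items could be handled with a few guards (a sketch of the loop only; $ans as in the script above):
for file1 in *
do
    [ -f "$file1" ] && [ "$file1" != "$ans" ] || continue   # regular files only
    for file2 in *
    do
        [ -f "$file2" ] && [ "$file2" != "$ans" ] || continue
        [[ "$file1" < "$file2" ]] || continue               # each pair once, never self
        echo "Comparing: $file1 $file2 ..." >> $ans
        perl -ne 'print if ($seen{$_} .= @ARGV) =~ /10$/' $file1 $file2 >> $ans
    done
done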
Hope this helps.
Best,
Alan Karpovsky
See this answer. I originally thought a diff sounded like what you were asking for, but this answer seems much more appropriate.

How can I have grep not print out 'No such file or directory' errors?

I'm grepping through a large pile of code managed by git, and whenever I do a grep, I see piles and piles of messages of the form:
> grep pattern * -R -n
whatever/.git/svn: No such file or directory
Is there any way I can make those lines go away?
You can use the -s or --no-messages flag to suppress errors.
-s, --no-messages suppress error messages
grep pattern * -s -R -n
If you are grepping through a git repository, I'd recommend you use git grep. You don't need to pass in -R or the path.
git grep pattern
That will show all matches from your current directory down.
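For example (-n adds line numbers, and a pathspec after -- narrows the search; the extension here is just an illustration):
git grep -n pattern -- '*.c'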
Errors like that are usually sent to the "standard error" stream, which you can redirect to a file or simply discard on most commands:
grep pattern * -R -n 2>/dev/null
I have seen that happen several times with broken links (symlinks that point to files that do not exist): grep tries to search the target file, which does not exist (hence the correct and accurate error message).
I normally don't bother while doing sysadmin tasks over the console, but from within scripts I do look for text files with "find", and then grep each one:
find /etc -type f -exec grep -nHi -e "widehat" {} \;
Instead of:
grep -nRHi -e "widehat" /etc
I usually don't let grep do the recursion itself. There are usually a few directories you want to skip (.git, .svn...)
You can build clever aliases out of invocations like this one:
find . \( -name .svn -o -name .git \) -prune -o -type f -exec grep -Hn pattern {} \;
It may seem overkill at first glance, but when you need to filter out some patterns it is quite handy.
Have you tried the -0 option in xargs? It expects NUL-delimited input, which find can produce. Something like this:
find . -type f -print0 | xargs -0 grep 'some text'
Use -I in grep to skip binary files.
Example: grep SEARCH_ME -Irs ~/logs (the -s is what actually silences the error messages).
I redirect stderr to stdout and then use grep's invert-match (-v) to exclude the warning/error string that I want to hide:
grep -r <pattern> * 2>&1 | grep -v "No such file or directory"
I was getting lots of these errors running "M-x rgrep" from Emacs on Windows with /Git/usr/bin in my PATH. Apparently in that case, M-x rgrep uses "NUL" (the Windows null device) rather than "/dev/null". I fixed the issue by adding this to .emacs:
;; Prevent issues with the Windows null device (NUL)
;; when using cygwin find with rgrep.
(defadvice grep-compute-defaults (around grep-compute-defaults-advice-null-device)
"Use cygwin's /dev/null as the null-device."
(let ((null-device "/dev/null"))
ad-do-it))
(ad-activate 'grep-compute-defaults)
One easy way to make grep return a zero exit status all the time is to use || true:
→ echo "Hello" | grep "This won't be found" || true
→ echo $?
0
As you can see, the exit status here is 0 (success).
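This matters mainly in scripts run under set -e, where a non-matching grep (exit status 1) would otherwise abort the whole script. A minimal sketch:
#!/bin/sh
set -e
# grep exits 1 when nothing matches; || true keeps the script alive
matches=$(grep 'pattern' file.txt || true)
echo "found: $matches"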

Executing shell command from ruby console returning Permission Denied Error?

I'm getting a permission denied error while executing a shell command from the Ruby console, yet the same shell command works from the shell.
From the shell:
tests@tests-workstation:~$ "`grep '^datadir=' /etc/mysql/my.cnf | cut -f 2 -d '='`/db_backups"
bash: /db_backups: is a directory
tests@tests-workstation:~$
From the Ruby console:
>> %x["`grep '^datadir=' /etc/mysql/my.cnf | cut -f 2 -d '='`/db_backups"]
sh: /db_backups: Permission denied
=> ""
Any idea?
You're trying to execute a directory, and the shells are saying no; bash says no with "/db_backups: is a directory" whereas sh says "/db_backups: Permission denied". If you just execute the backticked part:
grep '^datadir=' /etc/mysql/my.cnf | cut -f 2 -d '='
You'll almost certainly see no output at all and the reason is probably that your regular expression is too tight, something like this:
grep '^[ ]*datadir[ ]*=' /etc/mysql/my.cnf | cut -f2 -d'='
would serve you better; each character class contains a space and a tab.
Now that you're looking for the right things we can move on to why it still won't work. The %x[] quoter tries to execute its argument using the shell. When you feed the backticked grep stuff:
`grep '^[ ]*datadir[ ]*=' /etc/mysql/my.cnf | cut -f2 -d'='`/db_backups
to the shell, you should get a directory name that ends with /db_backups but you can't execute a directory. I think you want this to produce the directory name:
d = %x[echo `grep '^[ ]*datadir[ ]*=' /etc/mysql/my.cnf | cut -f2 -d'='`/db_backups].strip
Note the leading echo and the .strip call on the returned string. The .strip is necessary to remove the newline from what echo produces.
I think you're going through a lot of trouble for something that could easily be done with just a couple lines of Ruby:
dir = nil
File.open('/etc/mysql/my.cnf').each do |line|
  if (m = line.match(/^\s*datadir\s*=\s*(\S+)/))
    dir = m[1] + '/db_backups'
    break
  end
end
You could probably tighten that up a bit if you wanted but I think that that's at least less confusing than putting shell backticks inside Ruby backticks.
It looks like you just want to get field 2 from the file, so just do it in Ruby using split:
File.open("file").each do |line|
if line[/^datadir/]
print line.split("=",2)[0]
end
end
There is no need to shell out to grep specifically; doing so is inefficient and non-portable.
