On my 100TB cluster, I need to find the directories and files that have a "deny" ACE in their ACL, then remove that ACE from each one. I'm using the following:
# find . -print0 | xargs -0 ls -led | grep deny -B4
and get this output (partial, for example only):
-r--rw---- 1 chris GroupOne 4096 Mar 6 18:12 ./directoryA/fileX.txt
OWNER: user:chris
GROUP: group:GroupOne
0: user:chris allow file_gen_read,std_write_dac,file_write_attr
1: user:chris deny file_write,append,file_write_ext_attr,execute
--
-r--rwxrwx 1 chris GroupOne 14728221 Mar 6 18:12 ./directoryA/subdirA/fileZ.txt
OWNER: user:chris
GROUP: group:GroupOne
0: user:chris allow file_gen_read,std_write_dac,file_write_attr
1: user:chris deny file_write,append,file_write_ext_attr,execute
--
OWNER: user:bob
GROUP: group:GroupTwo
0: user:bob allow dir_gen_read,dir_gen_write,dir_gen_execute,std_write_dac,delete_child,object_inherit,container_inherit
1: group:GroupTwo allow std_read_dac,std_write_dac,std_synchronize,dir_read_attr,dir_write_attr,object_inherit,container_inherit
2: group:GroupTwo deny list,add_file,add_subdir,dir_read_ext_attr,dir_write_ext_attr,traverse,delete_child,object_inherit,container_inherit
--
As you can see, depending on where the "deny" ACE sits in the ACL, the path may or may not appear within grep's -B4 context. I could increase the -B value (I've seen up to 8 ACEs on a file), but then I would have even more output to sift through...
What I need to do next is extract $ACENUMBER and $PATHTOFILE so that I can execute this command:
chmod -a# $ACENUMBER $PATHTOFILE
An additional issue is that the find command above gives a relative path, whereas I need the full path. I guess that would need to be edited somehow.
Any guidance on how to accomplish this?
For the second part of your question: to output absolute file paths, pass find a full path as its starting point, e.g.
find /my/full/path -print0
find /mydirectory -mindepth 1 -print0 | while IFS= read -r -d '' x; do
    # collect the index of each deny ACE ($1 looks like "1:"); remove from the
    # highest index down, since deleting an ACE renumbers the ones after it
    ls -led "$x" | awk '$3 == "deny" {sub(/:$/, "", $1); print $1}' | sort -rn |
    while read -r n; do
        chmod -a# "$n" "$x"
    done
done
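If you'd rather verify before modifying anything, a dry run along these lines (a sketch, assuming the same ls -led output format shown in the question) prints each path with the indices of its deny ACEs:
find /mydirectory -mindepth 1 -print0 | while IFS= read -r -d '' x; do
    # print "path: ACE n" for every deny entry, touching nothing
    ls -led "$x" | awk -v f="$x" '$3 == "deny" {sub(/:$/, "", $1); print f ": ACE " $1}'
done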
I have a problem with this Linux command:
ls | grep -E 'i{2,3}'
It should match files that have at least 2 and at most 3 i's, but it doesn't work.
This is the output
ls:
life.py, viiva.txt, viiiiiiiiiva.txt
grep:
viiva.txt, viiiiiiiiiva.txt (with the first 3 i's highlighted)
Thanks for the help.
Issue with OP's attempt: grep -E 'i{2,3}' will match two or three consecutive occurrences of i anywhere in the input, so 4 or more consecutive i's are also a valid match (grep matches the first three and prints the line).
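You can see this with a quick demonstration (any run of four or more i's behaves the same way):
$ printf 'viiiiiiiiiva.txt\n' | grep -E 'i{2,3}'
viiiiiiiiiva.txt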
Parsing ls output is not recommended, see Why not parse ls (and what to do instead)?. If you wish to pass the filenames after filtering to some other command, find is a good option.
$ ls
1i2i3i.txt aibi.txt II.txt life.py viiiiiiiiiva.txt viiva.txt
$ # files with 2 or 3 consecutive i
$ # note that -regex must match the entire path, thus anchors are not needed
$ find -type f -regextype egrep -regex '[^i]*i{2,3}[^i]*'
./viiva.txt
$ # files with 2 or 3 i anywhere in the name
$ find -type f -regextype egrep -regex '[^i]*i[^i]*i[^i]*(i[^i]*)?'
./aibi.txt
./1i2i3i.txt
./viiva.txt
$ # files with 2 or 3 i anywhere in the name, ignoring case
$ find -type f -regextype egrep -iregex '[^i]*i[^i]*i[^i]*(i[^i]*)?'
./II.txt
./aibi.txt
./1i2i3i.txt
./viiva.txt
If filenames won't cause an issue (no newlines in the names), you can use grep -xE or grep -ixE with the above regex; the -x option makes the regex match the whole line instead of anywhere in the line. For example:
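$ # same regex, anchored to the whole line by -x (using the file listing above)
$ ls | grep -xE '[^i]*i[^i]*i[^i]*(i[^i]*)?'
1i2i3i.txt
aibi.txt
viiva.txt
Or you can also use awk: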
$ # NF will give number of fields after splitting on i
$ ls | awk -F'i' 'NF>=3 && NF<=4'
1i2i3i.txt
aibi.txt
viiva.txt
$ ls | awk -F'[iI]' 'NF>=3 && NF<=4'
1i2i3i.txt
aibi.txt
II.txt
viiva.txt
The command 'grep -c blah *' lists all the files, like below.
% grep -c jill *
file1:1
file2:0
file3:0
file4:0
file5:0
file6:1
%
What I want is:
% grep -c jill * | grep -v ':0'
file1:1
file6:1
%
Instead of piping the output through a second grep as above, is there a flag to suppress listing files with 0 counts?
SJ
How to grep nonzero counts:
grep -rIcH 'string' . | grep -v ':0$'
-r Recurse subdirectories.
-I Ignore binary files (thanks @tongpu, warlock).
-c Show count of matches. Annoyingly, includes 0-count files.
-H Show file name, even if only one file (thanks @CraigEstey).
'string' your string goes here.
. Start from the current directory.
| grep -v ':0$' Remove 0-count files. (thanks @LaurentiuRoescu)
(I realize the OP was excluding the pipe trick, but this is what works for me.)
Just use awk, e.g. with GNU awk for ENDFILE:
awk '/jill/{c++} ENDFILE{if (c) print FILENAME":"c; c=0}' *
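If your awk is not GNU awk (ENDFILE is a gawk extension), a portable sketch of the same idea prints each file's count when the next file begins, plus once at the end:
awk 'FNR==1 && NR>1 {if (c) print prev ":" c; c=0}
     /jill/ {c++}
     {prev=FILENAME}
     END {if (c) print prev ":" c}' *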
#!/bin/ksh
inputpath="/home/beaadmin/SET4/Input.txt" #Give patterns to be searched
contentpath="/home/beaadmin/SET4/FILES"
outpath="/home/beaadmin/SET4/impacted"
count=1
while read line
do
    echo "Line :$count"
    echo "$line"
    return=$(find $contentpath -iname "*" | xargs grep "$line*")
    if [ $? -eq 0 ]
    then
        echo "$line" >> $outpath
    else
        echo ""
    fi
    let count=$count+1
done < $inputpath
Let's say I have string1, string2, string3 and File1, File2, File3. I want to search for string1 in File1, File2, and File3, and if a match is found, write it to the output file; likewise for string2 and string3. But the above code doesn't find the matches.
Assuming string1, string2, etc. are on separate lines of Input.txt, there's only a small bug.
Change this:
xargs grep "$line*"
To this:
xargs grep "$line.*"
Also, two suggestions about your find command. First, -iname "*" doesn't have any effect. Second, your find command will find some directories, which will cause grep some (probably non-fatal) errors. You could fix that with e.g.,
find "$contentpath" -type f
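Putting it all together, here's a minimal sketch of the loop with both fixes applied; it sidesteps find/xargs entirely by using grep's own recursion, assuming your grep supports -q and -r:
while read line
do
    # -q: exit 0 on the first match and print nothing; -r: recurse into the tree
    if grep -q -r "$line" "$contentpath"
    then
        echo "$line" >> "$outpath"
    fi
done < "$inputpath"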
I've created a small function to allow me to grep through my command history in zsh. The command history 1 displays the entire command history, and running history 1 | egrep ls shows just those commands containing ls.
So my function looks like this:
h() {
if [ -z "$*" ]
then
history 1
else
history 1 | egrep "$#"
fi
}
Unfortunately this only results in the following error message:
$ h ls
egrep: ls: No such file or directory
I'm at a loss as to what is wrong in my script. I've tried both grep and egrep to no avail.
What is the full path of grep or egrep?
It's possible that it's running in an alternate shell which has a different PATH set. Try using an explicit /usr/bin/grep or /usr/bin/egrep and see if that fixes anything.
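A quick way to check what is actually being resolved (using zsh builtins; whence is zsh's analogue of which):
whence -a grep egrep   # list every alias, function, and binary these names resolve to
print -r -- $PATH      # confirm the PATH the shell running the function is using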
Create a file history.zsh (slightly changed from the original):
#!/bin/zsh
h() {
if [ -z "$*" ]
then
history
else
history | fgrep "$*"
fi
}
Now source this file (so "h" will be refreshed):
. history.zsh
And call the new function:
$ h ls
30 h ls
31 ls
I've abandoned the function. Further reading on the subject of zsh history led me to this very elegant solution that meets my needs: https://coderwall.com/p/jpj_6q
In a nutshell you add this to your .zshrc:
autoload -U up-line-or-beginning-search
autoload -U down-line-or-beginning-search
zle -N up-line-or-beginning-search
zle -N down-line-or-beginning-search
bindkey "^[[A" up-line-or-beginning-search # Up
bindkey "^[[B" down-line-or-beginning-search # Down
Now your history can be searched by entering a partial term and using the up or down arrow keys to walk through the matches from your history file.
Is there any way I could use grep to ignore some files when searching, something equivalent to svn:ignore or .gitignore? I usually use something like this when searching source code.
grep -r something * | grep -v ignore_file1 | grep -v ignore_file2
Even if I could set up an alias to grep to ignore these files would be good.
The --exclude option of grep will also work:
grep perl * --exclude='try*' --exclude='tk*'
This searches for perl in files in the current directory, excluding files whose names begin with try or tk. The patterns are quoted so the shell doesn't expand them before grep sees them.
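If yours is GNU grep, recent versions also offer --exclude-dir to skip whole directories (check grep --help for support in your version):
grep -r perl . --exclude-dir=.git --exclude-dir=.svn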
You might also want to take a look at ack which, among many other features, by default does not search VCS directories like .svn and .git.
find . -path ./ignore -prune -o -type f -exec grep -H something {} +
What that does is walk the current directory, prune (skip) the directory or file named "ignore", and run grep on every remaining regular file; -H makes grep print the file name even when a batch contains a single file. (grep's own -r is unnecessary here, since find already does the recursion.)
Use shell expansion:
shopt -s extglob
for file in !(file1_ignore|file2_ignore)
do
    grep ..... "$file"
done
I think grep does not have filename filtering.
To accomplish what you are trying to do, you can combine find, xargs, and grep commands.
My memory is not good, so the example might not work:
find -name "foo" | xargs grep "pattern"
Find is flexible, you can use wildcards, ignore case, or use regular expressions.
You may want to read manual pages for full description.
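For instance, hypothetical variations on the same pipeline (the names and pattern are placeholders):
find . -iname "*foo*" | xargs grep "pattern"                # case-insensitive wildcard on the name
find . -regex '.*/foo[0-9]*\.txt' | xargs grep "pattern"    # regex on the whole path (GNU/BSD find)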
After reading the next post: apparently grep does have filename filtering after all.
Here's a minimalistic version of .gitignore. Requires standard utils: awk, sed (because my awk is so lame), egrep:
cat > ~/bin/grepignore #or anywhere you like in your $PATH
egrep -v "`awk '1' ORS=\| .grepignore | sed -e 's/|$//g' ; echo`"
^D
chmod 755 ~/bin/grepignore
cat >> ./.grepignore #above set to look in cwd
ignorefile_1
...
^D
grep -r something * | grepignore
grepignore builds a simple alternation clause:
egrep -v 'ignorefile_one|ignorefile_two'
Not incredibly efficient, but good for manual use.