Rename every other file - Automator

I have quite a few files I'd like to rename (too many to do it manually). Normally I'd use Automator, but the thing is, I need every other file to be named the same as the file before it, but with a 2 instead of a 1.
These are my files so far:
xxx - 401a - yyy.zzz
xxx - 401b - yyy.zzz
xxx - 402a - yyy.zzz
xxx - 402b - yyy.zzz
xxx - 403a - yyy.zzz
And so on. I'd like to rename them to something like:
xxx - 401-pt1.zzz
xxx - 401-pt2.zzz
xxx - 402-pt1.zzz
xxx - 402-pt2.zzz
xxx - 403-pt1.zzz
Any way to do this with Automator? Asking because I've really rarely used Automator, meaning I'm not exactly what you'd call an expert.
EDIT: This is what I'm trying to achieve: http://wiki.plexapp.com/index.php/Media_Naming_and_Organization_Guide#Stacked_Episodes

If you don't know much about that tool, then maybe you won't mind using something else? How about a simple shell command?
Here is the same operation in Bash, in increasing levels of generality. You probably don't care about the first two, as they have hardcoded yyy's and zzz's, but I wrote them to show you around the sed command a bit, so the third one is easier to understand.
for file in * ; do mv "$file" "$(echo "$file" | sed 's/a - yyy/-pt1/g')"; mv "$file" "$(echo "$file" | sed 's/b - yyy/-pt2/g')"; done
for file in * ; do mv "$file" "$(echo "$file" | sed 's/a - .*\.zzz/-pt1.zzz/g')"; mv "$file" "$(echo "$file" | sed 's/b - .*\.zzz/-pt2.zzz/g')"; done
for file in * ; do mv "$file" "$(echo "$file" | sed -E 's/a - .*\.([a-z]{3})/-pt1.\1/')"; mv "$file" "$(echo "$file" | sed -E 's/b - .*\.([a-z]{3})/-pt2.\1/')"; done
There must be many other ways with other commands, of course.
You'll get some harmless errors, because both mv commands run for every file while only one of the two substitutions applies to each name.
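If you'd rather avoid those errors, here is a minimal sketch of the same idea using a single sed call with both rules, renaming a file only when its name actually changes (same assumptions about the naming scheme as above):
for file in *; do
  # apply both rewrite rules; at most one matches any given name
  new="$(echo "$file" | sed -E 's/a - .*\.([a-z]{3})/-pt1.\1/; s/b - .*\.([a-z]{3})/-pt2.\1/')"
  # rename only when the name actually changed, so mv never sees identical names
  [ "$new" != "$file" ] && mv "$file" "$new"
done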

Related

Is there any way to use hadolint for multiple Dockerfiles?

Hadolint is an awesome tool for linting Dockerfiles. I am trying to integrate it into my CI, but I am struggling to run it over multiple Dockerfiles. Does someone know what the syntax looks like? Here is how my dirs appear:
dir1/Dockerfile
dir2/Dockerfile
dir3/foo/Dockerfile
In gitlab-ci:
stage: hadolint
image: hadolint/hadolint:latest-debian
script:
  - mkdir -p reports
  - |
    hadolint dir1/Dockerfile > reports/dir1.json \
    hadolint dir2/Dockerfile > reports/dir2.json \
    hadolint dir3/foo/Dockerfile > reports/dir3.json
But the sample above is not working.
So, as far as I found out, hadolint can be given several files at once. So in my case:
- hadolint */Dockerfile > reports/all_reports.json
But the problem with this approach is that all reports end up in one file, which hampers maintenance and clarity.
If you want to keep the reports separate (one per top-level directory), you may want to rely on a small shell snippet?
I mean something like:
- |
  find . -name Dockerfile -exec \
    sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
Explanation:
find . -name Dockerfile loops over all Dockerfiles in the current directory and its subdirectories;
-exec sh -c '…' runs a subshell for each Dockerfile, setting:
$0 = "sh" (dummy value)
$1 = "{}" (the full, relative path of the Dockerfile), "{}" and \; being directly related to the find … -exec pattern;
src=${1#./} trims the path, replacing ./dir1/Dockerfile with dir1/Dockerfile;
${src%%/*} extracts the top-level directory name (dir1/Dockerfile → dir1);
and | tee -a … appends hadolint's output to the report file of the matching top-level directory, for each parsed Dockerfile (> … should be avoided here, as it would overwrite the report whenever a single top-level directory contains several Dockerfiles).
I have replaced the .json extension with .txt as hadolint does not seem to output JSON data.
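As a quick local sanity check before wiring this into CI (assuming the dir1/dir2/dir3 layout from the question, and that the reports directory exists), you could run:
mkdir -p reports
find . -name Dockerfile -exec \
  sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
ls reports/   # with the layout above: dir1.txt  dir2.txt  dir3.txt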

How to search for 2 keywords in files in a directory and print the filenames only if more than one file matches

I am trying to grep or find for 2 specific words in each file in a directory. And then, only if more than one file is found with such a combination, should those file names be printed to a CSV file.
Here is what I tried so far:
find /dir/test -type f -printf "%f\n" | xargs grep -r -l -e 'ABCD1' -e 'ABCD2' > log1.csv
But this will provide all file names that have "ABCD1" and "ABCD2". In other words, this command will print the filename even if there is only one file that has this combo.
I need to grep the entire directory for those 2 words, and both words MUST appear in more than one file before any filenames are written to the CSV. I should also be able to include subdirectories.
Any help would be great!
Thanks
find + GNU grep solution:
find . -type f -exec grep -qPz 'ABCD1[\s\S]*ABCD2|ABCD2[\s\S]*ABCD1' {} \; -printf "%f\n" \
| tee /tmp/flist | [[ $(wc -l) -gt 1 ]] && cat /tmp/flist > log1.csv
Alternative way:
grep -lr 'ABCD2' /dir/test/* | xargs grep -l 'ABCD1' | tee /tmp/flist \
| [[ $(wc -l) -gt 1 ]] && sed 's/.*\/\([^\/]*\)$/\1/' /tmp/flist > log1.csv
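If you prefer something a bit more readable, here is a minimal bash sketch of the same logic (assuming GNU grep and xargs, and file names without embedded newlines):
# collect files containing both words, in either order
mapfile -t files < <(grep -rlZ 'ABCD1' /dir/test | xargs -r0 grep -l 'ABCD2')
# write the base names to the CSV only when more than one file matched
if (( ${#files[@]} > 1 )); then
    printf '%s\n' "${files[@]##*/}" > log1.csv
fi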

How to pass a URL to Wget

If I have a document with many links and I want to download one specific picture with the name www.website.de/picture/example_2015-06-15.jpeg, how can I write a command that automatically downloads exactly the one I extracted out of my document?
My idea would be this, but I get a failure message like "wget: URL is missing":
grep -E 'www.website.de/picture/example_2015-06-15.jpeg' document | wget
Use xargs:
grep etc... | xargs wget
It takes its stdin (grep's output) and passes that text as command-line arguments to whatever application you tell it to run.
For example,
echo hello | xargs echo 'from xargs '
produces:
from xargs hello
Using backticks would be the easiest way of doing it:
wget `grep -E 'www.website.de/picture/example_2015-06-15.jpeg' document`
This will do too:
wget "$(grep -E 'www.website.de/picture/example_2015-06-15.jpeg' document)"

Delete a list of files with find and grep

I want to delete all files which have names containing a specific word, e.g. "car".
So far, I came up with this:
find|grep car
How do I pass the output to rm?
find . -name '*car*' -exec rm -f {} \;
or pass the output of your pipeline to xargs:
find | grep car | xargs rm -f
Note that these are very blunt tools, and you are likely to remove files that you did not intend to remove. Also, no effort is made here to deal with files that contain characters such as whitespace (including newlines) or leading dashes. Be warned.
To view what you are going to delete first, since rm -fr is such a dangerous command:
find /path/to/file/ | grep car | xargs ls -lh
Then if the results are what you want, run the real command by removing the ls -lh, replacing it with rm -fr
find /path/to/file/ | grep car | xargs rm -fr
I like to use
rm -rf $(find . | grep car)
It does exactly what you ask, logically running rm -rf on what grep car returns from the output of find ., which is a list of every file and folder, recursively.
You can use ls and grep to find your files and rm -rf to delete the files.
rm -rf $(ls | grep car)
But it is not a good idea to use this command if there is a chance that directories or files you don't want to delete have names matching the character pattern you are specifying with grep.
You really want to use find with -print0 and rm with --:
find [dir] [options] -print0 | grep --null-data [pattern] | xargs -0 rm --
A concrete example (removing all files below the current directory containing car in their filename):
find . -print0 | grep --null-data car | xargs -0 rm --
Why is this necessary:
-print0, --null-data and -0 change the input/output handling from tokens separated by whitespace to tokens separated by the \0 character. This allows unusual filenames to be handled safely (see man find for details)
rm -- makes sure files starting with a - are actually removed instead of being treated as parameters to rm. If there is a file called -rf and you run find . -print0 | grep --null-data r | xargs -0 rm, the file -rf will possibly not be removed, but will instead alter the behaviour of rm on the other files.
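A quick sketch of that pitfall in an empty scratch directory (hypothetical file names, safe to try):
mkdir -p /tmp/rmtest && cd /tmp/rmtest
touch ./-rf car1 car2
rm -rf       # '-rf' is parsed as options, not as a file name; nothing is deleted
rm -- -rf    # '--' ends option parsing, so the file literally named '-rf' is removed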
This finds a file with matching pattern (*.xml) and greps its contents for matching string (exclude="1") and deletes that file if a match is found.
find . -type f -name "*.xml" -exec grep exclude=\"1\" {} \; -exec rm {} \;
Most of the other solutions presented here have problems with handling file names with spaces in them. Here's a solution that handles spaces properly.
grep -lRZ car . | xargs -0 rm
Notes on arguments used:
-l tells grep to print only filenames
-R enables grep recursive search in subfolders
-Z tells grep to separate results by \0 instead of \n
-0 tells xargs to separate input arguments by \0 instead of whitespace
car is the regular expression to search for
. is the folder where to search
Can also use rm -f to force the removal (as usual).
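For example, to force the removal without prompting:
grep -lRZ car . | xargs -0 rm -f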
A bit of necromancy, but you can also use find, grep, and xargs
find . -type f | grep -e "pattern1" -e "pattern2" | xargs rm -rf
The find command may need some tuning to fit your needs, such as -type f, -mindepth, -maxdepth, and any globbing.
When find | grep car | xargs rm -f gets results like:
/path/to/car
/path/to/car copy
files whose names contain whitespace will not be removed.
So my answer is:
find | grep car | while read -r line ; do
rm -rf "${line}"
done
This way, files whose names contain whitespace can be removed too.
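A more robust sketch of the same loop uses NUL delimiters, so that even newlines in file names survive (GNU find assumed; -print0 pairs with read -d ''):
find . -name '*car*' -print0 | while IFS= read -r -d '' line; do
    rm -rf "${line}"
done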
find start_dir -iname \*car\* -exec rm -v {} \;
I use:
find . | grep "car" | while IFS= read -r i; do echo "$i"; rm -f "$i"; done
This works even if there are spaces in the filenames, and it searches directories recursively as well.
Use rm with wildcard *
rm * will delete all files,
rm *.ext will delete all files which have ext as extension,
rm word* will delete all files which start with word.
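For this question specifically (names containing car anywhere, current directory only), that would be:
rm -- *car*
(the -- guards against matched names that start with a dash, as explained above).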

grep multiple extensions in current and subfolders

I'm trying to grep multiple extensions within the current and all sub-folders.
grep -i -r -n 'hello' somepath/*.{php,html}
This is only grepping the current folder but not sub-folders.
What would be a good way of doing this?
Using only grep:
grep -irn --include='*.php' --include='*.html' 'hello' somepath/
One of these:
find '(' -name '*.php' -o -name '*.html' ')' -exec grep -i -n hello {} +
find '(' -name '*.php' -o -name '*.html' ')' -print0 | xargs -0 grep -i -n hello
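With bash 4+ you can also skip find and use a recursive glob; a small sketch (brace expansion produces two globs here, and a glob with no matches is passed to grep literally unless nullglob is also set):
shopt -s globstar
grep -in 'hello' somepath/**/*.{php,html}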
I was looking for the same thing, and when I decided to write a bash script, I opened vim codesearch and, surprise, I had already done this before!
#!/bin/bash
context="$3"
# GREP_COLORS keys: sl = selected lines, mc = match in context lines,
# ms = matching text, ln = line numbers
export GREP_COLORS="sl=32:mc=00;33:ms=05;40;31:ln="
if [[ "$context" == "" ]]; then context=5; fi
grep --color=always -n -a -R -i -C"$context" \
    --exclude='*.mp*' \
    --exclude='*.avi' \
    --exclude='*.flv' \
    --exclude='*.png' \
    --exclude='*.gif' \
    --exclude='*.jpg' \
    --exclude='*.wav' \
    --exclude='*.rar' \
    --exclude='*.zip' \
    --exclude='*.gz' \
    --exclude='*.sql' "$2" "$1" | less -R
Paste this code into a file named codesearch and chmod it to 700 or 770.
I figured it would be better to keep it here for the next time I forget.
This script will show the matches in color, together with the context around them:
./codesearch '/full/path' 'string to search'
and optionally define the number of context lines around each match (the default is 5):
./codesearch '/full/path' 'string to search' 3
I edited the code and added some eye candy
example ./codesearch ./ 'eval' 2
It looks like this when you have enabled "allow blinking text" in the terminal.
