Does anyone have an idea how to flatten a certain folder in my IMAP account? There are literally hundreds of ((sub)sub)subfolders. It is a very old archive (from the time when searching was hell and subfolders were a good idea; yup, most of it last millennium) of which I want to add all messages to my standard archive folder. I can't find anything on Google. I have root access to the VPS hosting the mail account. DirectAdmin and Roundcube are installed. It is a standard CentOS 7 Apache installation. Does anyone know of any scripts, tools, Thunderbird plugins, or whatever, to do it? I am fluent in PHP but not Python, though I'm willing to look into it if needed.
Thanks in advance!
Try this:
./copy_nested_msgs.pl
http://www.athensfbc.com/imap-tools/public/copy_nested_msgs.tar.gz
-m folder (parent folder whose subfolders you want to copy)
-M folder (destination folder where messages are to be copied)
-S source server:port/user/password
-D destination server:port/user/password
-L logfile
[-d] debug mode (optional)
You can copy the messages from source folders to destination folder on the same server or to another server.
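For example, a hypothetical invocation (hostname, port, user, password, and the folder names Archive/Old and Archive are all placeholders; adjust to your setup) that flattens everything under Archive/Old into Archive on the same server might look like:

```shell
# Hypothetical example: copy all messages in the subfolders of
# "Archive/Old" into the flat "Archive" folder on the same server.
# Replace host, port, user, and password with real values.
./copy_nested_msgs.pl \
    -m "Archive/Old" \
    -M "Archive" \
    -S mail.example.com:143/user/password \
    -D mail.example.com:143/user/password \
    -L copy.log
```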
It's pretty simple. Use the IMAP RENAME command to rename each of the folders in the structure. For example:
Before
1 list "" *
* LIST (\UnMarked) "/" A
* LIST (\UnMarked) "/" A/B
* LIST (\UnMarked) "/" A/B/C
* LIST (\UnMarked) "/" A/B/C/D
* LIST (\UnMarked) "/" A/B/C/D/E
* LIST (\UnMarked) "/" A/B/C/D/E/F
* LIST (\UnMarked) "/" A/B/C/D/E/F/G
Sort the mailbox list so you rename the most deeply nested folders first.
1 RENAME "A/B/C/D/E/F/G" "G"
2 RENAME "A/B/C/D/E/F" "F"
3 RENAME "A/B/C/D/E" "E"
4 RENAME "A/B/C/D" "D"
5 RENAME "A/B/C" "C"
6 RENAME "A/B" "B"
Afterwards...
1 list "" *
* LIST (\UnMarked) "/" A
* LIST (\UnMarked) "/" B
* LIST (\UnMarked) "/" C
* LIST (\UnMarked) "/" D
* LIST (\UnMarked) "/" E
* LIST (\UnMarked) "/" F
* LIST (\UnMarked) "/" G
A simple IMAP script can be written to do this. I can post a link to mine, but it's written in Perl; you would have to convert it to the language of your choice, but it might still be of some help.
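If you script it, the only fiddly part is the depth-first ordering. Here is a shell sketch (assuming `/` as the hierarchy delimiter and that each folder keeps just its last path component) that turns a folder list into RENAME commands, deepest first:

```shell
# Example folder list, as it might come back from LIST
folders='A
A/B
A/B/C
A/B/C/D'

# Prefix each nested folder with its depth, sort deepest-first,
# then emit a RENAME to the folder's last path component.
printf '%s\n' "$folders" |
    awk -F/ 'NF > 1 {print NF, $0}' |
    sort -rn |
    cut -d' ' -f2- |
    while IFS= read -r f; do
        printf 'RENAME "%s" "%s"\n' "$f" "${f##*/}"
    done
```

This prints the deepest rename (`RENAME "A/B/C/D" "D"`) first, so no parent is renamed out from under a child.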
Related
Say I have a directory /home/ and within it three subdirectories: /home/red/, /home/blue/, and /home/green/.
Each subdirectory contains one file:
/home/red/file1 /home/blue/file2 /home/green/file3
Now I want to find how many times the word "hello" occurs in each of file1, file2, and file3.
For example,
/home/red/file1 - 23
/home/blue/file2 - 6
/home/green/file3 - 0
Now, going to each file's location and running grep there separately is very inefficient once this problem scales.
I have tried using this grep command from the /home/ directory
grep -rnw '/path/to/somewhere/' -e 'pattern'
But this is just giving the occurrences rather than the count.
Is there any command through which I can get what I am looking for?
If the search term occurs at maximum once per line, you can use grep's -c option to report the count instead of the matching lines. So, the command will be grep -rc 'search' (add other options as needed).
If there can be more than one occurrence per line, I'd recommend using ripgrep. Note that rg recursively searches by default, so you can use something like rg -co 'search' from within the home directory (add other options as needed). Add --hidden if you need to search hidden files as well. Add --include-zero if you want to show files even if they didn't have any match.
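If ripgrep isn't available, plain grep can still count every occurrence (not just matching lines) by combining -o, which prints each match on its own line, with wc -l. A self-contained sketch with made-up sample files:

```shell
# Count every occurrence of "hello" per file, not just matching
# lines, using only grep and wc. Files below are examples.
mkdir -p demo/red demo/blue
printf 'hello hello\nhello\n' > demo/red/file1   # 3 occurrences
printf 'no match here\n'      > demo/blue/file2  # 0 occurrences

for f in demo/red/file1 demo/blue/file2; do
    printf '%s - %s\n' "$f" "$(grep -o 'hello' "$f" | wc -l)"
done
```

Note that grep -c would report 2 for file1 (two matching lines), while -o with wc -l reports all 3 matches.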
Instead of grep you can use this find + GNU awk solution:
cd /home
find {red/file1,blue/file2,green/file3} -type f -exec awk '
{c += gsub(/pattern/, "&")} ENDFILE {print FILENAME, "-", c; c=0}' {} +
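ENDFILE is a GNU awk extension; if gawk isn't available, a portable POSIX-awk variant of the same per-file counting idea looks like this (sample files and the pattern hello are stand-ins, and files must be non-empty for this simple version):

```shell
# Sample tree mirroring the question; contents are made up.
mkdir -p home/red home/blue home/green
printf 'hello hello\nhello\n' > home/red/file1   # 3 occurrences
printf 'hi\n'                 > home/blue/file2  # 0 occurrences
printf 'hello\n'              > home/green/file3 # 1 occurrence

# FNR resets at each new file, so FNR == 1 marks a file boundary;
# print the previous file's total there, and the last one in END.
awk '
    FNR == 1 && NR > 1 { print prev, "-", c; c = 0 }
    { c += gsub(/hello/, "&"); prev = FILENAME }
    END { print prev, "-", c }
' home/red/file1 home/blue/file2 home/green/file3
```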
I want to find whether any script in a Unix directory uses one particular table as "select *". That is, I want to find "select * from tablename", where any number of spaces or newlines may appear between the words: "select <any number of spaces or newlines> * <any number of spaces or newlines> from <any number of spaces or newlines> tablename".
Let's take this as the test file:
$ cat file
select
*
from
tablename
Using grep:
$ grep -z 'select\s*[*]\s*from\s*tablename' -- file
select
*
from
tablename
-z tells grep to treat the input as NUL-separated. Since no sensible text file contains NUL characters, this has the effect of reading in the whole file at once, which allows us to search across multiple lines. (If the file is too big for memory, we would want to think about another approach.) To protect against file names that begin with -, the -- tells grep to stop option processing.
To obtain just the names of the matching files in the current directory:
grep -lz 'select\s*[*]\s*from\s*tablename' -- *
* tells grep to look at all files in the directory. -l tells grep to just print the names of matching files and not the matching text.
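Here is a quick runnable check of the multiline match, using GNU grep and two made-up files:

```shell
# Two sample files: one with the multiline "select * from",
# one without. Names and contents are illustrative.
printf 'select\n*\nfrom\ntablename\n' > query.sql
printf 'select id from other\n'       > other.sql

# -z slurps each file as a single NUL-delimited record, so \s* can
# match across newlines; -l prints only the matching file names.
grep -lz 'select\s*[*]\s*from\s*tablename' -- query.sql other.sql
```

This prints only query.sql.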
More on the need for --
Let's consider a directory with one file:
$ ls
-l-
Now, let's run a grep command:
$ grep 'regex' *
grep: invalid option -- '-'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
The problem here is that grep interprets the file name -l- as two options: l and -. Since the second is not a legal option, it reports an error. To protect against this, we need to use --. The following will run without error:
grep 'regex' -- *
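To see the failure and the fix side by side (the scratch directory and file name are made up):

```shell
mkdir -p scratch && cd scratch
touch -- '-l-'            # a file whose name looks like an option bundle

# Without --, grep parses -l- as options and bails out with a usage
# error (exit status 2). With --, it searches the file and simply
# finds no match (exit status 1).
without=$(grep 'regex' * 2>/dev/null; echo $?)
with=$(grep 'regex' -- * 2>/dev/null; echo $?)
echo "without --: exit $without, with --: exit $with"
```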
The Google Bazel build tool makes it easy enough to explain that each CoffeeScript file in a particular directory tree needs to be compiled to a corresponding output JavaScript file:
[genrule(
    name = 'compile-' + f,
    srcs = [f],
    outs = [f.replace('src/', 'static/').replace('.coffee', '.js')],
    cmd = 'coffee --compile --map --output $$(dirname $@) $<',
) for f in glob(['src/**/*.coffee'])]
But given, say, 100 CoffeeScript files, this will invoke the coffee tool 100 separate times, adding many seconds to the compilation process. If instead it could be explained to Bazel that the coffee command can take many input files as input, then files could be batched together and offered to fewer coffee invocations, allowing the startup time of the process to be amortized over more files than just one.
Is there any way to explain to Bazel that coffee can be invoked with many files at once?
I haven't worked with CoffeeScript, so this may need to be adjusted (particularly the --output $(@D) part), but something like this might work:
coffee_files = glob(['src/**/*.coffee'])
genrule(
    name = 'compile-coffee-files',
    srcs = coffee_files,
    outs = [f.replace('src/', 'static/').replace('.coffee', '.js') for f in coffee_files],
    cmd = 'coffee --compile --map --output $(@D) $(SRCS)',
)
Note that if just one input coffee script file is changed, the entire genrule will be rerun with all 100 files (the same as with, say, a java_library with 100 input java files).
I use
tar hczf t.tar.gz * --exclude="./test1"
where test1 is the name of a directory to exclude files from being tarred.
Unfortunately, tar still includes those directories. How can I have tar exclude directories?
The * that specifies "all files in the current directory" should be the last item on your command line, and the --exclude option has to come first:
tar --exclude='test1' -hczf t.tar.gz *
The f option expects the archive file name (t.tar.gz) immediately after it, which is why --exclude can't go in between hczf and t.tar.gz; that is why it has been moved to the front. Note also that with *, the archive member names have no leading ./, so the pattern should be test1 rather than ./test1.
IHTH
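A self-contained check (scratch directory and file names are made up):

```shell
# Build a small tree: one directory to keep, one to exclude.
mkdir -p tardemo/test1 tardemo/keep
touch tardemo/test1/secret tardemo/keep/data
cd tardemo

# --exclude comes before the file list; the pattern is 'test1'
# (no leading ./, because * yields member names without it).
tar --exclude='test1' -czf /tmp/t.tar.gz *
tar -tzf /tmp/t.tar.gz        # lists keep/ and keep/data only
```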
I'm trying to setup a grep command, that searches my current directory, but excludes a directory, only if it's the root directory.
So for the following directories, I want #1 to be excluded, and #2 to be included
1) vendor/phpunit
2) app/views/vendor
I originally started with the below command
grep -Ir --exclude-dir=vendor keywords *
I tried using ^vendor, ^vendor/, and similar anchored patterns, but nothing seems to work.
Is there a way to do this with grep? I was looking to try to do it with one grep call, but if I have to, I can pipe the results to a second grep.
With pipes:
grep -Ir keywords * | grep -v '^vendor/'
The problem with --exclude-dir is that it tests the name of the directory, not its path, before descending into it, so it cannot distinguish between two vendor directories at different depths.
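A runnable sketch of the pipe approach, with a sample tree mirroring the question (file names and contents are made up):

```shell
mkdir -p pipe-demo/vendor/phpunit pipe-demo/app/views/vendor
echo 'keywords here' > pipe-demo/vendor/phpunit/a.php
echo 'keywords here' > pipe-demo/app/views/vendor/b.php
cd pipe-demo

# grep prefixes each hit with its path, so anchoring on ^vendor/
# drops only hits under the TOP-LEVEL vendor directory.
grep -Ir keywords * | grep -v '^vendor/'
```

Only app/views/vendor/b.php survives the filter.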
Here is a better solution, which will actually ignore the specified directory:
function grepex(){
    excludedir="$1"
    shift
    for i in *; do
        if [ "$i" != "$excludedir" ]; then
            grep "$@" "$i"
        fi
    done
}
You use it as a drop-in replacement for grep: just pass the excluded dir as the first argument and leave the * off the end. So your command would look like:
grepex vendor -Ir keywords
It's not perfect, but as long as you don't have any really weird folders (e.g. with names like -- or something), it will cover most use cases. Feel free to refine it if you want something more elaborate.
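A self-contained check of the function against the example layout (file names and contents are made up; the function is repeated here so the snippet stands alone):

```shell
mkdir -p grepex-demo/vendor/phpunit grepex-demo/app/views/vendor
echo 'keywords here' > grepex-demo/vendor/phpunit/a.php
echo 'keywords here' > grepex-demo/app/views/vendor/b.php
cd grepex-demo

# Skip the top-level dir named in $1, then grep the rest with the
# remaining arguments ("$@" after the shift).
grepex() {
    excludedir="$1"
    shift
    for i in *; do
        if [ "$i" != "$excludedir" ]; then
            grep "$@" "$i"
        fi
    done
}

grepex vendor -Ir keywords
```

The top-level vendor/ is skipped, while app/views/vendor/ is still searched.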