I successfully used -verbose to find median RGB values for 1 .jpg file.
Next step is finding median RGB values for about 2000 .jpg files.
I'd like to figure out how to do this automatically rather than one at a time.
I'd also like to figure out how to export the data resulting from -verbose over 2000 files to something like .csv or .txt.
Does anyone know how to best approach this?
This will give you all details of all .jpg images in the current working directory:
identify -verbose *.jpg >verbose.txt
If you are on a Unix system, this one will dump lines of the form filename => overall RGB mean value to verbose.txt:
for f in *.jpg; do echo "$f => `identify -verbose "$f" | grep mean | tail -n1 | cut -d':' -f2 | xargs`"; done >verbose.txt
This one will dump lines of the form filename => R = mean, G = mean, B = mean to verbose.txt:
for f in *.jpg; do echo "$f => `identify -verbose "$f" | grep mean | head -n3 | cut -d':' -f2 | xargs | awk '{print "R = "$1" "$2", G = "$3" "$4", B = "$5" "$6}'`"; done >verbose.txt
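For the export the question asks about, here is a minimal sketch along the same lines that writes one CSV row per image (assuming, as above, that the first three "mean:" lines are the R, G and B channels; means.csv is just an example name):
# Header first, then one row of per-channel means per image
echo "filename,R_mean,G_mean,B_mean" > means.csv
for f in *.jpg; do
    means=$(identify -verbose "$f" | grep mean | head -n3 | cut -d':' -f2 | awk '{printf "%s%s", sep, $1; sep=","}')
    echo "$f,$means" >> means.csv
done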
Related
I have a set of many .json.gz files. In each file, there are entries such as this:
{"type":"e1","public":true, "login":"username1", "org":{"dict","of":"lots_of_things"}}
{"type":"e2","public":true, "login":"username2"}
No matter where in each nested dict "login" appears, I want to be able to detect it and take the username, only if the key "org" does not exist anywhere in the nested dict. I also want to count the number of times each username appears in the files.
My final output should be a file of dicts that looks like this:
{'username2': 1}
because of course username1 wouldn't be counted: the key "org" appears in its dict.
I'm looking for something like:
zgrep -Rv "org" . | zgrep -o 'login":"[^"]*"' /path/to/files/* | cut -d'"' -f3 | sort | uniq -c | sed '1i{
s/\s*\([0-9]*\)\s*\(.*\)/"\2": \1,/;$a}' > outputfile.txt
I'm not sure about this part:
zgrep -Rv "org" . |
The rest successfully creates the type of file I'm looking for. I'm just unsure about the order of operations here.
EDIT
I should have been more clear, I apologize. There are also often multiple instances of the key "login" per main dict object. For example (using "k" for any key that is not login and not org, and using "v" for a value):
{"k":"v","k":{"k":{"k":"v","login":"username1"},"k":"v"},"k":{"k":"v","login":"username2"}}
{"k":{"k":"v","k":"v"},"k":{"org":{"k":"v","k":v,"login":"username3"},"k":"v"},"k":{"k":"v","login":"username4"}}
{"k":{"k":"v"},"k":{"k":{"k":"v","login":"username1"},"login":"username2"}}
Since the key org appears in the second dict, I want to exclude usernames 3 and 4 from the dict I make and save to a file.
For example, I want this in a file:
{'username1': 2}
{'username2': 2}
An awk solution, replacing the recursive zgrep -R with the more reliable find:
find . -type f -name "*.json.gz" -print0 | xargs -0 zgrep -v -h '"org"' | awk '{ if ( match($0,/"login":"[^"]+"/) ) logins[substr($0,RSTART+8,RLENGTH-8)]++; } END { for ( i in logins ) print("{" i ":" logins[i] "}"); }'
Example output:
{"username2":1}
Not grep, but a GNU sed job with a small script; your data in file a:
i=0
for e in $(sed -nE '/\borg\b/!s/.*"login":"(\w+)".*/{\1:}/p' a); do
    let i++; echo "${e/:/:$i}"
done
Append > file at the end to save the output to a file.
If pcregrep (better regex support) is installed, it does the job as well:
pcregrep -io '(?!.*\borg\b.*)(?<="login":")\w+(?=".*)' a
It can replace the sed script above, with a slightly adjusted printout.
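For example, counting the matches and formatting them like the desired output (a sketch, again reading file a):
pcregrep -io '(?!.*\borg\b.*)(?<="login":")\w+(?=".*)' a | sort | uniq -c | awk '{print "{" $2 ": " $1 "}"}'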
This worked:
zgrep -v "org" *.json.gz | zgrep -o 'login":"[^"]*"' | cut -d'"' -f3 | sort | uniq -c | sed '1i{
s/\s*\([0-9]*\)\s*\(.*\)/"\2": \1,/;$a}' > usernames_2011.txt
I want to get a list of all files, in the current directory or any subdirectory, containing a certain string sorted by modification date.
I am having trouble getting the answer to
How to sort the output of "grep -l" chronologically by newest modification date last?
to work for a recursive grep search. How do I obtain such an ordered list, so that all files that would be found by grep -lr are actually included?
Assuming your file names don't contain newlines:
find dir -type f -printf '%T#\t%p\n' | sort | cut -f2- | xargs grep -l whatever
More robustly, using GNU versions of the tools to deal with dir/file names containing exotic characters:
find dir -type f -printf '%T#\t%p\0' | sort -z | cut -z -f2- | xargs -0 grep -l whatever
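An alternative sketch is to grep first and then sort only the matching files by modification time (assumes GNU grep and GNU stat):
# -lrZ: list matching files recursively, NUL-terminated; newest last
grep -lrZ whatever dir | xargs -0 stat --format '%Y %n' | sort -n | cut -d' ' -f2-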
I need to get a string that consists of the information from the image's EXIF meta data. For example, I would like:
CameraModel, Focal Length in 35mm Format: 24mm, F11, 1/500, ISO 200
I can see all the information using
identify -format '%[EXIF:*]' image.jpg
However, I'm having trouble consolidating the output and generating the information I want.
The first problem: while '%[EXIF:*]' prints all EXIF data, if I replace the star with a specific EXIF tag, it doesn't print anything. I know I could simply print all the EXIF data, grep out the values I need, and combine them, but it feels better to retrieve just the value I'm looking for.
The second problem: the aperture value FNumber is in a format like "63/10", but I need it to be 6.3. Annoyingly, the shutter speed ExposureTime is like 10/5000 and I need it to be 1/500. What kind of conversion do I need for each case?
Thanks!
Here is something to get you started, using awk to look for the EXIF keywords and save the corresponding settings as they go past. Then, at the END, it prints everything it found:
#!/bin/bash
identify -format '%[EXIF:*]' a.jpg | awk -F= '
/^exif:Model/ {model=$2}
/^exif:FocalLengthIn35mmFilm/ {focal=$2}
/^exif:FNumber/ {f=$2}
/^exif:ExposureTime/ {t=$2}
/^exif:ISOSpeedRatings/ {ISO=$2}
END {
# Check if fNumber is a rational number and refactor if it is
n=split(f,k,"/")
if(n==2){
f=sprintf("%.1f",k[1]/k[2]);
}
# Check if exposure time is a rational number and refactor if it is
n=split(t,k,"/")
if(n==2){
m=int(k[2]/k[1]);
t=sprintf("1/%d",m);
}
print model,focal,f,t,ISO
}' OFS=,
Sample Output
iPhone 4,35,2.8,1/914,80
I have not tested the conversion from rational numbers too extensively... between the festive whiskies...
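To run the same summary over a whole directory, you could parameterise the script (a sketch; exifsummary.sh is a hypothetical name for the script above with a.jpg replaced by "$1"):
for f in *.jpg; do
    printf '%s: ' "$f"      # prefix each summary line with the file name
    ./exifsummary.sh "$f"
done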
Even simpler is to use EXIFTOOL directly.
infile="P1050001.JPG"
exiftool -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile"
Camera Model Name : DMC-FZ30
Focal Length In 35mm Format : 35 mm
F Number : 2.8
Exposure Time : 1/8
ISO : 200
or
infile="P1050001.JPG"
exiftool -csv -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile"
P1050001.JPG,DMC-FZ30,35 mm,2.8,1/8,200
jzxu wrote: This is fine too. However, I noticed the parameter
-csv; how do I get rid of the commas and replace them with spaces?
One way on Unix is simply to use tr to replace the commas with spaces, as follows:
exiftool -csv -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile" | tr "," " "
SourceFile Model FocalLengthIn35mmFormat FNumber ExposureTime ISO
P1050001.JPG DMC-FZ30 35 mm 2.8 1/8 200
Or if you do not want the header line:
exiftool -csv -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile" | tail -n +2 | tr "," " "
P1050001.JPG DMC-FZ30 35 mm 2.8 1/8 200
There may be other internal EXIFTOOL formatting options. See https://sno.phy.queensu.ca/~phil/exiftool/exiftool_pod.html
I am not an expert on EXIFTOOL, so perhaps I have missed something, but I do not see a space-delimited output format. However, this produces tab-delimited output:
infile="P1050001.JPG"
exiftool -s3 -T -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile"
DMC-FZ30 35 mm 2.8 1/8 200
So one could use tr to replace tabs with spaces.
exiftool -s3 -T -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile" | tr "\t" " "
DMC-FZ30 35 mm 2.8 1/8 200
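Another exiftool option worth a look is -p, which interpolates tag values into a print-format string; a sketch (untested here) that gets close to the single string the question asked for:
exiftool -p '$Model, Focal Length in 35mm Format: $FocalLengthIn35mmFormat, F$FNumber, $ExposureTime, ISO $ISO' "$infile"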
This works fine for me in ImageMagick 6.9.9.29 Q16 on Mac OSX.
infile="P1050001.JPG"
cameramodel=`identify -ping -format "%[EXIF:model]" "$infile"`
focallenght35=`identify -ping -format "%[EXIF:FocalLengthIn35mmFilm]" "$infile"`
fnumber1=`identify -ping -format "%[EXIF:FNumber]" "$infile" | cut -d/ -f1`
fnumber2=`identify -ping -format "%[EXIF:FNumber]" "$infile" | cut -d/ -f2`
exptime1=`identify -ping -format "%[EXIF:ExposureTime]" "$infile" | cut -d/ -f1`
exptime2=`identify -ping -format "%[EXIF:ExposureTime]" "$infile" | cut -d/ -f2`
isospeed=`identify -ping -format "%[EXIF:ISOSpeedRatings]" "$infile"`
fnumber=`echo "scale=1; $fnumber1/$fnumber2" | bc`
exptime=`echo "scale=3; $exptime1/$exptime2" | bc`
echo "CameraModel=$cameramodel, FocalLengthIn35mmFilm: $focallenght35 mm, F$fnumber, $exptime sec, ISO $isospeed"
CameraModel=DMC-FZ30, FocalLengthIn35mmFilm: 35 mm, F2.8, .125 sec, ISO 200
Or alternately,
infile="P1050001.JPG"
declare `convert -ping "$infile" -format "cameramodel=%[EXIF:model]\n focallenght35=%[EXIF:FocalLengthIn35mmFilm]\n fnumber=%[EXIF:FNumber]\n exptime=%[EXIF:ExposureTime]\n isospeed=%[EXIF:ISOSpeedRatings]\n" info:`
fnumber1=`echo $fnumber | cut -d/ -f1`
fnumber2=`echo $fnumber | cut -d/ -f2`
fnumber=`echo "scale=1; $fnumber1/$fnumber2" | bc`
exptime1=`echo $exptime | cut -d/ -f1`
exptime2=`echo $exptime | cut -d/ -f2`
exptime=`echo "scale=0; $exptime2/$exptime1" | bc`
echo "CameraModel=$cameramodel, FocalLengthIn35mmFilm: $focallenght35 mm, F$fnumber, 1/$exptime sec, ISO $isospeed"
CameraModel=DMC-FZ30, FocalLengthIn35mmFilm: 35 mm, F2.8, 1/8 sec, ISO 200
I use the following command, which works in ImageMagick, to get the mean of a picture:
identify -format "%[mean]" photo.jpg
The same command does not work under GraphicsMagick. Is there an equivalent I can use?
You can do this, for example:
gm identify -verbose photo.jpg | grep -E "Mean|Red|Green|Blue"
Or, if you want Red, Green and Blue as 3 separate integers:
gm identify -verbose photo.jpg | awk '/Mean:/{s=s int($2) " "} END{print s}'
0 29 225
Or, if you want the average of all channels, like this:
gm identify -verbose photo.jpg | awk '/Mean:/{n++;t+=$2} END{print int(t/n)}'
85
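And to batch that over many files, as in the first question (a sketch; means.txt is just an example name):
for f in *.jpg; do
    printf '%s,' "$f"       # file name, then the overall mean
    gm identify -verbose "$f" | awk '/Mean:/{n++;t+=$2} END{print int(t/n)}'
done > means.txt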
I would like to grep for a specific word 'foo' inside specific files, then get the N lines around each match and show only the blocks that also contain a second pattern.
I found this but it doesn't really work...
find . | grep -E '.*?\.(c|asm|mac|inc)$' | \
xargs grep --color -C3 -rie 'foo' | \
xargs -n1 --delimiter='--' | grep --color -l 'bar'
For instance I have the file 'a':
a
b
c
d
bar
f
foo
g
h
i
j
bar
l
The file b:
a
bar
c
d
e
foo
g
h
i
j
k
I expect this for grep -C2 on both files, because in file a bar is contained in the -C2 range of foo. I do not get any match for the second file, because its bar is not in the -C2 range of foo...
--
./foo- bar
./foo- f
./foo- **foo**
./foo- g
./foo- h
--
Any ideas?
You could do this pretty simply with a "while read line" loop:
find -regextype posix-extended -regex "./file[a-z]" | while read -r line; do grep -nHC2 "foo" "$line" | grep --color bar; done
Output:
./filea-5-bar
./filec-46-... host pwns.me [94.23.120.252]: 451 4.7.1 Local bar
configuration error ...
In this example, I created the following files:
filea - your example a
fileb - your example b
filec - some random exim log output with foo and bar tossed in 2 lines apart
filed - the same exim log output, but with foo and bar tossed in 3 lines apart
You could also pipe the output after done, to alter the format:
; done | sed -E 's/-([0-9]{1,6})-/: line: \1 ::: /'
Formatted output
./filea: line: 5 ::: bar
./filec: line: 46 ::: ... host pwns.me [94.23.120.252]: 451 4.7.1 Local bar configuration error ...
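A more robust variant of the same loop, in case file names contain spaces (a sketch using find -print0):
find . -regextype posix-extended -regex "./file[a-z]" -print0 |
    while IFS= read -r -d '' line; do
        grep -nHC2 "foo" "$line" | grep --color bar
    done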
I think I only understand the first line of your question and this does what I think you mean!
#!/bin/bash
N=2
pattern1=a
pattern2=z
matchinglines=($(awk -v p="$pattern1" '$0~p{print NR}' file)) # Generate array of matching line numbers
for x in "${matchinglines[@]}"
do
((start=x-N))
[[ $start -lt 1 ]] && start=1 # Avoid passing negative line numbers to sed
((end=x+N))
echo DEBUG: Checking block between lines $start and $end
sed -ne "${start},${end}p" file | grep -q "$pattern2"
[[ $? -eq 0 ]] && sed -ne "${start},${end}p" file
done
You need to set pattern1 and pattern2 at the start of the script. It basically does some awk to build an array of the line numbers that match your first pattern. Then it loops through the array and sets the start and end of a range +/-N either side of each matching line number. It then uses sed to extract that block and passes it through grep to see if it contains pattern2, printing it if it does. It may not be the most efficient, but it is easy enough to understand and maintain.
It assumes your file is called file
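For instance, with pattern1=foo, pattern2=bar, N=2 and the question's file a saved as file, the script should print:
DEBUG: Checking block between lines 5 and 9
bar
f
foo
g
h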
Pipe it twice:
grep "[^foo\n]" | grep "\n{ntimes}foo\n{ntimes}"