Please help. Is there an easy way to take the largest layer of a TIFF and zip-compress it back into a single-layer TIFF with ImageMagick or similar?
Just a slightly easier version of Fred's answer. You can generate a list of the area (in pixels) of each layer in a TIF followed by the layer/scene number like this:
magick identify -format "%[fx:w*h] %s\n" image.tif
Sample Output
240000 0
560000 1
200000 2
So, if we do that again, sort it reverse numerically and take the second field of the first result, we will get the number of the layer with the largest area:
layer=$(magick identify -format "%[fx:w*h] %s\n" image.tif | sort -rn | awk 'NR==1{print $2}')
So, the complete solution would look like:
#!/bin/bash
# Get layer number of layer with largest area
layer=$(magick identify -format "%[fx:w*h] %s\n" image.tif | sort -rn | awk 'NR==1{print $2}')
# Extract that layer and recompress as single layer
magick image.tif[$layer] -compress zip result.tif
If you are using ImageMagick v6 or older:
magick identify ... becomes identify ...
magick image.tif ... becomes convert image.tif ...
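Putting those substitutions together, the IM6 version of the script above would look something like this (untested here, but it is the same two steps):
#!/bin/bash
# Get layer number of layer with largest area (IM6 syntax)
layer=$(identify -format "%[fx:w*h] %s\n" image.tif | sort -rn | awk 'NR==1{print $2}')
# Extract that layer and recompress as single layer
convert image.tif[$layer] -compress zip result.tif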
In concept, using ImageMagick this can be done in a single command. Here's an example...
magick input.tif -background none -virtual-pixel none ^
( -clone 0--1 +repage -layers merge ) ^
-distort affine "0,0 0,%[fx:s.w==u[-1].w&&s.h==u[-1].h?0:h]" ^
-delete -1 -layers merge output.tif
That starts by reading in the original TIF and setting the background and virtual-pixel settings to "none".
Then inside the parentheses it clones all the layers of the TIF, repages them, and merges them into a single image with the dimensions of the largest layer. That will become a gauge to measure with.
Next it uses "-distort affine" to slide each image out of the viewport and leave it transparent unless the image matches the width and height of that gauge. So after that distort, the largest image will remain unchanged, and all the others will be transparent.
Finish by deleting that gauge image and merging the rest. All the layers are transparent except the largest one, so merging them leaves just that visible one as a single layer.
The command is in Windows syntax using IM7. If you're using ImageMagick v6, use "convert" instead of "magick". To make it work on *nix, change the line-continuation carets "^" to backslashes "\" and escape the parentheses with backslashes "\(...\)". There may be other issues I've overlooked.
Obviously, if two or more layers match the largest dimensions, the output will only contain the first such layer from the original TIF.
Edited to add: This method will only work if both the greatest width and greatest height are on the same image.
How do you define largest: width, height, or file size? If largest means the larger of width and height, then on Unix you can do the following on a 3-layer TIF file: get the max dimension of each layer, find which layer is the largest, then read just that layer when writing the output file.
Arr=(`identify -format "%[fx:max(w,h)]\n" img.tif`)
echo "${Arr[*]}"
500 1024 770
num=${#Arr[*]}
dim=0
index=0
for ((i=0; i<num; i++)); do
    if [ ${Arr[$i]} -gt $dim ]; then
        dim=${Arr[$i]}
        index=$i
    fi
done
echo "$index"
2
convert img.tif[$index] -compress zip newimg.tif
identify newimg.tif
newimg.tif[2] TIFF 770x768 770x768+0+0 8-bit sRGB 3662B 0.000u 0:00.000
I cannot think of any direct and simple method to find the largest layer and extract it in the same command line.
I'm trying to combine 3 images using ImageMagick 7 (latest version), but the first layer is always missing.
convert "image_03.png" "image_02.png" "image_01.png" -background none -alpha set "product.psd"
However, I only get two layers?
Attached are the images below ...
A PSD file expects its first layer to be a flattened composite of all the other layers; Photoshop assumes the first layer is that flattened image. In ImageMagick it must be created and placed as the first image in the command sequence when writing a PSD file. So I create it last from clones and then insert it at the first position (0).
Try the following.
Unix syntax:
convert "image_03.png" "image_02.png" "image_01.png" \( -clone 0-2 -flatten \) -insert 0 -background none -alpha set "product.psd"
If on Windows, remove the backslashes from in front of the parentheses.
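So, on Windows, that would become:
convert "image_03.png" "image_02.png" "image_01.png" ( -clone 0-2 -flatten ) -insert 0 -background none -alpha set "product.psd"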
Reorder the images as desired for the layers in the PSD file.
I have a series of images from a slow motion capture of pulsing electrical discharges. Many of the frames are nearly black. I would like to selectively keep the frames that are more interesting, e.g. those with more luminosity.
I've considered using ImageMagick or GraphicsMagick (or any other; not married to any tool - I'm up for more efficient suggestions).
How would I go about selecting such images and discarding the others without appreciable luminosity? I'm assuming I first have to establish a baseline for "black", then perhaps visually find the least luminous frame and use that as the lower limit for deciding which frames are meaningful...
Example of DISCARD ("empty" frame):
Example of KEEP (frame with "data"):
I would suggest using ImageMagick to reduce the image to a monochrome binary image, erode it to clean up noise, and print its statistical mean.
convert 5HzsV.jpg -format "%[mean]" -monochrome -morphology Erode Diamond info:
# => 0
convert lLZFX.jpg -format "%[mean]" -monochrome -morphology Erode Diamond info:
# => 149.992
So a bash script might be as easy as...
for image in *.jpg
do
    L=$(convert "$image" -format "%[mean]" -monochrome -morphology Erode Diamond info:)
    # The mean can be fractional, so compare with bc rather than bash integer arithmetic
    if [ "$(echo "$L > 0" | bc -l)" -eq 1 ]; then
        echo "Image $image is not empty! # $L"
    fi
done
Of course that can be adjusted to meet your needs.
The way images are encoded, you'll likely find that the 'interesting' images are bigger, because the uniformly dark background compresses better than a random spark. For instance, your empty JPEG is 21K while the interesting one is 39K.
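If you wanted to exploit that rather than measuring luminosity, a rough sketch might look like this - the 30000-byte threshold is just a guess you would need to tune against your own frames:
# Keep frames whose JPEG file size exceeds a threshold (bytes) - tune the threshold
threshold=30000
for f in *.jpg; do
    size=$(stat -c%s "$f")          # GNU stat; on macOS/BSD use: stat -f%z "$f"
    if [ "$size" -gt "$threshold" ]; then
        echo "KEEP $f ($size bytes)"
    fi
done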
I have 200 copies of this page (a 15 x 10 matrix), and I have to write the digits 0-9 in each corresponding cell and then, after scanning each page once, extract each digit digitally into a separate 32x32-pixel image. How can I achieve this? It is required for my research. I am a CS student, so I can code too.
Update:
For Mark: here is one of the scanned images.
The digits (0-9) are written in a local language...
Update 2:
The commands for the previous image are working fine, but on new images something is going wrong (some kind of offset)...
I am attaching the image below
What changes do you suggest?
Updated Answer
I have taken your feedback and improved the algorithm to the following bash script now...
#!/bin/bash
################################################################################
# dice
#
# Trim borders off an image (twice) and then dice into 10x15 cells.
#
# Usage: ./dice image
################################################################################
# Pick up image name from first parameter
image="$1"
echo DEBUG: Processing image $image...
# Apply median filter to remove noisy black dots around image and then get the
# dimensions of the "trim box" - note we don't use the (degraded) median-filtered image in
# later steps.
trimbox=$(convert "$image" -median 9x9 -fuzz 50% -format %# info:)
echo DEBUG: trimbox $trimbox
# Now trim original unfiltered image into stage1-$$.png (for debug)
convert "$1" -crop $trimbox +repage stage1-$$.png
echo DEBUG: Trimmed outer: stage1-$$.png
# Now trim column headings
convert stage1-$$.png -crop 2000x2590+120+190 +repage stage2-$$.png
echo DEBUG: Trimmed inner: stage2-$$.png
# Now slice into 10x15 rectangles
echo DEBUG: Slicing and dicing
convert stage2-$$.png -crop 10x15# +repage rectangles-%03d.png
# Now trim the edges off the rectangles and resize all to a constant size
for f in rectangles*png; do
echo DEBUG: Trimming and resizing $f
trimbox=$(convert "$f" -median 9x9 -shave 15x15 -bordercolor black -border 15 -threshold 50% -floodfill +0+0 white -fuzz 50% -format %# info:)
echo DEBUG: Cell trimbox $trimbox
convert "$f" -crop $trimbox +repage -resize 32x32! "$f"
done
Here are the resulting cells - i.e. 150 separate image files. I have put a red border around the individual cells/files so you can see their extent:
Original Answer
I would do that with ImageMagick which is free and installed on most Linux distros and is available for OSX and Windows too. There are Perl, PHP, Java, node, .NET, Ruby, C/C++ bindings too if you prefer those languages. Here I am using the command line in Terminal.
First job is to get rid of noise and trim the outer edges:
convert scan.jpg -median 3x3 -fuzz 50% -trim +repage trimmed1.png
Now, trim again to get rid of outer frame and column titles across the top:
convert trimmed1.png -crop 2000x2590+120+190 +repage trimmed2.png
Now divide into 10 cells by 15 cells and save as rectangles-nnn.png
convert trimmed2.png -crop 10x15# rectangles-%03d.png
Check what we got - yes, 150 images:
ls -l rect*
rectangles-000.png rectangles-022.png rectangles-044.png rectangles-066.png rectangles-088.png rectangles-110.png rectangles-132.png
rectangles-001.png rectangles-023.png rectangles-045.png rectangles-067.png rectangles-089.png rectangles-111.png rectangles-133.png
rectangles-002.png rectangles-024.png rectangles-046.png rectangles-068.png rectangles-090.png rectangles-112.png rectangles-134.png
rectangles-003.png rectangles-025.png rectangles-047.png rectangles-069.png rectangles-091.png rectangles-113.png rectangles-135.png
rectangles-004.png rectangles-026.png rectangles-048.png rectangles-070.png rectangles-092.png rectangles-114.png rectangles-136.png
rectangles-005.png rectangles-027.png rectangles-049.png rectangles-071.png rectangles-093.png rectangles-115.png rectangles-137.png
rectangles-006.png rectangles-028.png rectangles-050.png rectangles-072.png rectangles-094.png rectangles-116.png rectangles-138.png
rectangles-007.png rectangles-029.png rectangles-051.png rectangles-073.png rectangles-095.png rectangles-117.png rectangles-139.png
rectangles-008.png rectangles-030.png rectangles-052.png rectangles-074.png rectangles-096.png rectangles-118.png rectangles-140.png
rectangles-009.png rectangles-031.png rectangles-053.png rectangles-075.png rectangles-097.png rectangles-119.png rectangles-141.png
rectangles-010.png rectangles-032.png rectangles-054.png rectangles-076.png rectangles-098.png rectangles-120.png rectangles-142.png
rectangles-011.png rectangles-033.png rectangles-055.png rectangles-077.png rectangles-099.png rectangles-121.png rectangles-143.png
rectangles-012.png rectangles-034.png rectangles-056.png rectangles-078.png rectangles-100.png rectangles-122.png rectangles-144.png
rectangles-013.png rectangles-035.png rectangles-057.png rectangles-079.png rectangles-101.png rectangles-123.png rectangles-145.png
rectangles-014.png rectangles-036.png rectangles-058.png rectangles-080.png rectangles-102.png rectangles-124.png rectangles-146.png
rectangles-015.png rectangles-037.png rectangles-059.png rectangles-081.png rectangles-103.png rectangles-125.png rectangles-147.png
rectangles-016.png rectangles-038.png rectangles-060.png rectangles-082.png rectangles-104.png rectangles-126.png rectangles-148.png
rectangles-017.png rectangles-039.png rectangles-061.png rectangles-083.png rectangles-105.png rectangles-127.png rectangles-149.png
rectangles-018.png rectangles-040.png rectangles-062.png rectangles-084.png rectangles-106.png rectangles-128.png
rectangles-019.png rectangles-041.png rectangles-063.png rectangles-085.png rectangles-107.png rectangles-129.png
rectangles-020.png rectangles-042.png rectangles-064.png rectangles-086.png rectangles-108.png rectangles-130.png
rectangles-021.png rectangles-043.png rectangles-065.png rectangles-087.png rectangles-109.png rectangles-131.png
Look at the first couple:
You can do those 3 steps in 1 like this:
convert scan.jpg -median 3x3 -fuzz 50% -trim +repage \
-crop 2000x2590+120+190 +repage \
-crop 10x15# rectangles-%03d.png
You may want to shave a few pixels off each side of each image and resize to 32x32 with something like (untested):
mogrify -shave 3x3 -resize 32x32! rectangles*png
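If it helps, since you have 200 pages to process, you could wrap the script in a loop something like this - only a sketch, and it assumes your scans are named scan*.jpg and that you want each page's cells in its own directory:
#!/bin/bash
# Run the "dice" script on every scanned page, keeping each page's cells in its own directory
for scan in scan*.jpg; do
    dir="${scan%.jpg}-cells"
    mkdir -p "$dir"
    cp "$scan" "$dir/"
    ( cd "$dir" && ../dice "$scan" )
done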
The following command resizes the larger dimension to 256:
convert -resize 256x256 in.jpg out.jpg
For example, if in.jpg is 1024x512, it resizes it to 256x128.
Is it possible to resize the smaller dimension to 256 (while keeping the aspect ratio) with ImageMagick convert? (I need 512x256)
If not, is there any other easy command line solution?
The fill area flag ^ seems to do exactly what you want:
convert -resize 256x256^ in.jpg out.jpg
If you're on Windows:
The fill area flag ('^') is a special character in Windows batch scripts, so you will need to escape it by doubling it ('^^') or it will not work.
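That is, on Windows:
convert -resize 256x256^^ in.jpg out.jpg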
This only works with ImageMagick 6.3.8-3 and above. For older versions, use this trick.
Maybe the command I suggested in my comment will work, namely
convert in.jpg -resize x256 out.jpg
Or, if you actually want to identify the smaller dimension and resize that explicitly, this should do the trick
#!/bin/bash
image=$1
cmd="x256"
[ $(identify -format "%[fx:w<h?1:0]" "$image") -eq 1 ] && cmd="256x"
convert "$image" -resize $cmd out.jpg
I preset the command to resize by height at line 3. Then I ask ImageMagick to output 1 if the image is taller than wide, and if it is, I change the resize command to resize by width. Then, finally, I do the actual resize. You can re-cast the script various ways to make it shorter, or leave it explicit.
E.g.
if [ $(identify -format "%[fx:w<h?1:0]" in.jpg) -eq 1 ]; then
convert in.jpg -resize x256 out.jpg;
else
convert in.jpg -resize 256x out.jpg;
fi
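If you need to do this for a whole folder of images, the same test drops straight into a loop - a sketch, and the "resized-" output prefix is just my own choice:
for f in *.jpg; do
    if [ "$(identify -format "%[fx:w<h?1:0]" "$f")" -eq 1 ]; then
        convert "$f" -resize x256 "resized-$f"
    else
        convert "$f" -resize 256x "resized-$f"
    fi
done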
I'm using ImageMagick to prepare a set of ~20,000 photos for a timelapse video. A common problem in timelapse videos is flickering, due to changing lighting conditions, passing clouds, hue changes, etc.
I've used IM to convert each image to greyscale and apply -auto-gamma, which is a drastic improvement in lighting "stability". Very good, but not yet perfect.
I would now like to do the following, but can't figure out how.
1. determine ideal auto gamma based only on the lower 30% of the image
2. apply that ideal gamma to the entire image
Each of my images has sky above and buildings below. The sky changes dramatically as clouds pass by, but the buildings' lighting is fairly stable.
I tried -region, but as expected, it only applies the gamma to the region specified. Is it possible to do what I'm hoping for? Thanks for any advice!
Yes, I think so.
You can crop the bottom 30% of the image like this:
convert image.jpg -gravity south -crop x30%+0+0 bottom.jpg
so that means you can get the mean of the bottom 30% of the image like this:
convert image.jpg -gravity south -crop x30%+0+0 -format "%[mean]" info:
and you can also get the quantum range as well all in one go if you add that in:
convert image.jpg -gravity south -crop x30%+0+0 -format "%[mean] %[fx:quantumrange]" info:
Now, the gamma is defined as the logarithm of the mean divided by the logarithm of the midpoint of the dynamic range, but we can normalize both these numbers to the range [0-1] as follows:
log(mean/quantumrange) / log(0.5)
so we'll let bc work that out for us like this:
echo "scale=4; l($mean30/$qr)/l(0.5)" | bc -l
and we can use the result of that to apply a gamma correction to the entire image. So, I have put all that together in a single script, which I call b30gamma. You save it under that name and then type:
chmod +x b30gamma
to make it executable. Then you can run it on an image like this and the result will be saved as out.jpg so as not to destroy the input image:
./b30gamma input.jpg
Here is the script:
#!/bin/bash
# Pick up image name as parameter
image=$1
# Get mean of bottom 30% of image, and quantum range (65535 for Q16, or 255 for Q8)
read mean30 qr < <(convert "$image" -gravity south -crop x30%+0+0 -format "%[mean] %[fx:quantumrange]" info:)
# Gamma = log(mean)/log(dynamic range centre point)
gcorr=$(echo "scale=4;l($mean30/$qr)/l(0.5)" | bc -l)
# Debug
echo Image: $image, Mean: $mean30, quantum range: $qr, Gamma: $gcorr
# Now apply this gamma to entire image
convert "$image" -gamma $gcorr out.jpg