I have a command:
composite -colorspace gray -quality 99 -compose plus t-1.jpg x-1.jpg 2.jpg
I would like to produce the same effect in Python. I tried this:
from PIL import Image
imga = Image.open('t-1.jpg')
imgb = Image.open('x-1.jpg')
ab = Image.blend(imga, imgb, 0.5)
ab.save("test.jpg")
The resulting test.jpg and 2.jpg do not look anything alike, which would mean that -compose plus is not equivalent to blend(imga, imgb, 0.5). The docs for the compose plus operator are here. What is a comparable operation in PIL for ImageMagick's -compose plus?
Found it right after I asked the question.
from PIL import Image, ImageChops
imga = Image.open('t-1.jpg')
imgb = Image.open('x-1.jpg')
ab = ImageChops.add(imga, imgb, 1, 0)  # scale=1, offset=0: a plain clipped addition, like -compose plus
ab.save("test.jpg")
Friends,
I have a stack of color-scanned images. Some are from regular white paper with text or images; others were scanned from colored paper (blank pages, all of the same green paper).
I'd like to identify these colored paper images. Problems:
1. the paper's color (the "background") is not scanned very uniformly and often has a wavy or structured pattern
2. the green tone is quite different depending on the scanner used
3. the scanner does not catch the full sheet, resulting in a white or shadowed "border" around the green area
My idea was to check whether, say, 90% of the image is some sort of green, so I tried using a sorted histogram. But because of (1) and especially (2) I have a hard time picking a working color value from the histogram data.
Any help appreciated!
Edit:
Here are three sample images, scanned from the same sheet of paper.
Have a look at HSV colourspace on Wikipedia - specifically this diagram.
It should be a better place to find the colour of your images, regardless of scanner and calibration.
Now, let's create lime-green, yellow, magenta and cyan blocks and derive their colours using ImageMagick:
magick -size 100x100 xc:lime -colorspace HSV -channel 0 -separate -format "%[fx:mean*360]" info:
120
magick -size 100x100 xc:yellow -colorspace HSV -channel 0 -separate -format "%[fx:mean*360]" info:
60
magick -size 100x100 xc:magenta -colorspace HSV -channel 0 -separate -format "%[fx:mean*360]" info:
300
magick -size 100x100 xc:cyan -colorspace HSV -channel 0 -separate -format "%[fx:mean*360]" info:
180
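For a quick sanity check from Python, the standard-library colorsys module gives the same hue angles (just a verification snippet, not part of the ImageMagick workflow):

import colorsys

# Hue comes back in the range 0..1, so multiply by 360 to get degrees.
for name, rgb in [('lime', (0, 255, 0)), ('yellow', (255, 255, 0)),
                  ('magenta', (255, 0, 255)), ('cyan', (0, 255, 255))]:
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    print(name, round(h * 360))   # lime 120, yellow 60, magenta 300, cyan 180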
Hopefully you can see we are correctly calculating the Hue angle. Now to your image. I have added an artificial frame so you can see how to remove the edges:
We can remove the frame like this:
magick YOURSCAN.jpg -gravity center -crop 80% cropped.jpg
So, my complete suggestion would be to crop, convert to HSV and check the mean Hue. You could also test whether the image is fairly saturated, so it doesn't pick up grey-ish, uncoloured images. And you could check the variance in the Hue channel to see whether there are many different colours, rejecting images where the spread of hues is large.
magick YOURSCAN.jpg -gravity center -crop 80% -colorspace HSV -channel 0 -separate -format "%[fx:mean*360]" info:
Just for reference, your 3 images come up with the following Hue angles on a scale of 0..360:
79, 68, 73
I would suggest you test a few more samples to establish a reasonable range.
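If you would rather do the whole check from Python, a rough sketch of the same idea (centre-crop, convert to HSV, look at mean hue and saturation) could be something like this; the hue range and saturation threshold are only guesses to be tuned against your own scans:

from PIL import Image
import numpy as np

def looks_green(path, hue_range=(60, 100), min_saturation=0.15):
    # Crop the central 80% to discard the white/shadowed border.
    img = Image.open(path).convert('RGB')
    w, h = img.size
    img = img.crop((w // 10, h // 10, w - w // 10, h - h // 10))
    # PIL stores hue as 0..255, so rescale to 0..360 degrees.
    hsv = np.asarray(img.convert('HSV'), dtype=float)
    mean_hue = hsv[..., 0].mean() * 360 / 255
    mean_sat = hsv[..., 1].mean() / 255
    return hue_range[0] <= mean_hue <= hue_range[1] and mean_sat >= min_saturation

print(looks_green('YOURSCAN.jpg'))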
I have an image (see below) with a border in the red channel and the stuff I'd like to keep in the green channel.
I'd like to:
crop the image to the extent of the red pixels (ignoring any which aren't 100% red)
crop N more pixels (4, say) from each side
delete the red channel
copy the green channel to the red and blue channels (making it white)
save the result into a new file
I've been reading the docs but am stumped; can anyone with more experience of this program help me?
I'm using windows, version: ImageMagick 7.1.0-7 Q16-HDRI x64 2021-09-12
Thanks,
Charlie
Maybe like this:
magick l1AvD.png -trim -shave 10 -channel g -separate result.png
It does the following steps:
trim to any/all channels
remove 10 pixels off all sides
extract just the green channel and save it as a single-channel greyscale image
You may want to add -threshold 50% after -separate if you want pure blacks and whites. You may want to add +repage after -separate to make it forget where on the canvas it originally came from.
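If you ever need the same steps from Python rather than the command line, a rough PIL translation would be the following; note that getbbox() only trims black borders, whereas -trim looks at the corner colour, so treat it as a sketch:

from PIL import Image

img = Image.open('l1AvD.png').convert('RGB')

# Roughly -trim: crop to the bounding box of the non-black pixels.
img = img.crop(img.getbbox())

# Roughly -shave 10: remove 10 pixels from every side.
w, h = img.size
img = img.crop((10, 10, w - 10, h - 10))

# Roughly -channel g -separate: keep just the green channel as greyscale.
img.split()[1].save('result.png')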
The answer came from the GitHub discussions page; this seems to work:
magick ^
file.png ^
-strip ^
( +clone ^
-color-threshold "red-red" ^
-set option:MYCROP "%%@" ^
+delete ^
) ^
-crop %%[MYCROP] +repage ^
-shave 4x4 +repage ^
-channel G -separate +channel ^
output.png
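For reference, the same sequence of steps from the question can also be sketched in Python with numpy, where the "only 100% red pixels" rule is explicit (file.png and the 4-pixel shave come from the question, everything else is an assumption):

from PIL import Image
import numpy as np

img = np.asarray(Image.open('file.png').convert('RGB'))

# Bounding box of the pixels that are exactly pure red (255, 0, 0).
red = (img[..., 0] == 255) & (img[..., 1] == 0) & (img[..., 2] == 0)
ys, xs = np.nonzero(red)
top, bottom = ys.min(), ys.max() + 1
left, right = xs.min(), xs.max() + 1

# Crop to the red extent, then shave 4 more pixels from each side.
cropped = img[top + 4:bottom - 4, left + 4:right - 4]

# Copy the green channel into R, G and B so the kept content shows as white.
g = cropped[..., 1]
Image.fromarray(np.stack([g, g, g], axis=-1)).save('output.png')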
I want to display a DEM file (.raw) using Python, but there may be something wrong with the result.
Below is my code:
import numpy as np
import cv2

img1 = open('DEM.raw', 'rb')
rows = 4096
cols = 4096
f1 = np.fromfile(img1, dtype = np.uint8, count = rows * cols)
image1 = f1.reshape((rows, cols)) #notice row, column format
img1.close()
image1 = cv2.resize(image1, (image1.shape[1]//4, image1.shape[0]//4))
cv2.imshow('', image1)
cv2.waitKey(0)
cv2.destroyAllWindows()
And I got this result:
display result
The original DEM file is placed here: DEM.raw
There's nothing wrong with your code; that's what's in your file. You can convert it to a JPEG or PNG with ImageMagick at the command line like this:
magick -size 4096x4096 -depth 8 GRAY:DEM.raw result.jpg
And you'll get pretty much the same:
The problem is elsewhere.
Taking the hint from Fred (@fmw42) and playing around, oops I mean "experimenting carefully and scientifically", I can get a more likely-looking result if I treat your image as 4096x2048 pixels with 16 bpp and MSB-first endianness:
magick -size 4096x2048 -depth 16 -endian MSB gray:DEM.raw -normalize result.jpg
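In Python that corresponds to something like the sketch below, reusing the same width, height and endianness guess (the min-max stretch only approximates -normalize):

import numpy as np
import cv2

# Read the raw DEM as 16-bit big-endian (MSB first), 4096 wide by 2048 high.
rows, cols = 2048, 4096
data = np.fromfile('DEM.raw', dtype='>u2', count=rows * cols).reshape(rows, cols)

# Stretch to the full 8-bit range for display.
image = cv2.normalize(data.astype(np.float32), None, 0, 255,
                      cv2.NORM_MINMAX).astype(np.uint8)

cv2.imshow('DEM', cv2.resize(image, (cols // 4, rows // 4)))
cv2.waitKey(0)
cv2.destroyAllWindows()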
I want to add a logo to a product image with an engraving effect.
The original logo is
After adding it to the product, it should look like this.
How can I do this with ImageMagick?
Using this as the trophy:
Then something along these lines:
convert trophy.jpg -gravity center \
\( G.png -colorspace gray -channel a -evaluate multiply 0.2 -resize 120x120 \) -composite result.png
So, I am basically loading the trophy, then in some "aside processing" in parentheses, loading the Google logo, converting it to greyscale, reducing the opacity by multiplying it by 0.2, resizing it and compositing it on top of the trophy.
By the way, if you were using GraphicsMagick, which doesn't have the parentheses I used to make sure only the logo (and not the trophy) gets converted to greyscale, you would do it in a different order: first load the logo and process it (greyscale, resize and so on), then load the trophy, then swap the order so the trophy goes to the background, like this:
gm convert G.png -colorspace gray -resize ... trophy.jpg -swap -composite result.png
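A rough PIL version of the same idea, in case you want to stay in Python (the filenames, the 0.2 opacity factor and the 120x120 size come from the command above, the rest is an assumption):

from PIL import Image

trophy = Image.open('trophy.jpg').convert('RGB')

# Load the logo, convert to greyscale plus alpha, resize, cut opacity to 20%.
logo = Image.open('G.png').convert('LA')
logo.thumbnail((120, 120))            # like -resize 120x120, keeps aspect ratio
l, a = logo.split()
a = a.point(lambda v: int(v * 0.2))   # like -channel a -evaluate multiply 0.2
logo = Image.merge('LA', (l, a)).convert('RGBA')

# Paste centred on the trophy, using the faded alpha as the mask.
x = (trophy.width - logo.width) // 2
y = (trophy.height - logo.height) // 2
trophy.paste(logo, (x, y), logo)
trophy.save('result.png')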
I have four separate images: 2-projected.tif, 3-projected.tif, 4-projected.tif and 5-projected.tif. These are four Landsat images. Image 2-projected.tif corresponds to the blue channel, image 3-projected.tif to the green channel, image 4-projected.tif to the red channel, and 5-projected.tif to the infrared. Now I want to create an NDVI image. To do this, I first create a combined RGB image using ImageMagick:
$ convert 4-projected.tif 3-projected.tif 2-projected.tif -combine RGB.tif
So far, so good. And then I try to follow a command from this tutorial, which is supposed to create an NDVI image. I do it like so:
$ convert 5-projected.tif RGB.tif -channel RGB -fx '(u.r-v.r)/(u.r+v.r+0.001)' -normalize NDVI.tif
But as a result, I get these error messages:
convert: unable to parse expression `(u.r-1.0*v.r)' # error/fx.c/FxGetSymbol/1831.
convert: divide by zero `(u.r-1.0*v.r)/(u.r+v.r+0.001)' # error/fx.c/FxEvaluateSubexpression/2159.
I'm not sure how I can fix this.
The two bands of interest are the red and the NIR and the formula for NDVI is:
NDVI = (NIR-red)/(NIR+red)
You have two options. First off, if you have the red and the NIR in two separate, single channel images, you can do:
convert red.tif NIR.tif -fx '(u.r-v.r)/(u.r+v.r+0.001)' -normalize -compress lzw NDVI.tif
Here, I am using u.r to refer to the first channel of the first image and v.r to refer to the first channel of the second image.
Alternatively, if the red and NIR are the first two channels in an RGB image (i.e. ImageMagick would call them the red and green channels):
convert RGB.tif -fx '(u.r-u.g)/(u.r+u.g+0.001)' -normalize -compress lzw NDVI.tif
Here I am using u.r to refer to the first channel of the first image and u.g to refer to the second channel of the first image.
The -fx method is extremely powerful, but notoriously slow. This method below should give you the same answer, but I have not checked it too thoroughly:
convert 4-projected.tif -write MPC:red +delete \
5-projected.tif -write MPC:NIR +delete \
\( mpc:red mpc:NIR -evaluate-sequence subtract \) \
\( mpc:red mpc:NIR -evaluate-sequence add \) \
-evaluate-sequence divide -normalize -compress lzw NDVI.tif
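If you would rather do the arithmetic in Python, the same formula is straightforward with numpy; this is only a sketch, and it assumes the band files are single-band TIFFs that PIL can read:

from PIL import Image
import numpy as np

red = np.asarray(Image.open('4-projected.tif'), dtype=np.float64)
nir = np.asarray(Image.open('5-projected.tif'), dtype=np.float64)

# NDVI = (NIR - red) / (NIR + red); the small constant avoids division by zero.
ndvi = (nir - red) / (nir + red + 0.001)

# Rescale from roughly [-1, 1] to 0..255 for an ordinary 8-bit output.
out = ((ndvi - ndvi.min()) / (ndvi.max() - ndvi.min()) * 255).astype(np.uint8)
Image.fromarray(out).save('NDVI.tif')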
If you want to colourise the image with false colour, you could generate a Colour Lookup Table (CLUT) and map the grayscale values in the NDVI image to those colours. So, let's say you wanted to map the darkest blacks in your NDVI image to black, the quite dark values to red, the quite bright values to orange and the very brightest values to green, you could make a CLUT like this:
convert xc:black xc:red xc:orange xc:lime +append clut.png
and apply it to the greyscale result from above like this:
convert NDVI.tif -normalize clut.png -clut falsecolour.jpg
If you want to make the orange and green tones more prevalent, you can give them longer segments in the CLUT:
convert -size 30x1 xc:black -size 40x1 xc:red -size 80x1 xc:orange -size 100x1 xc:lime +append clut.png
Then re-apply the CLUT:
convert NDVI.tif -normalize clut.png -clut result.jpg
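The same false-colouring can be sketched in Python by normalising the NDVI image and indexing into a small colour table; the colours and segment lengths below mirror the second CLUT above:

from PIL import Image
import numpy as np

ndvi = np.asarray(Image.open('NDVI.tif').convert('L'), dtype=float)
ndvi = (ndvi - ndvi.min()) / (ndvi.max() - ndvi.min())   # like -normalize

# 250-entry lookup table: 30 black, 40 red, 80 orange and 100 lime entries.
clut = np.concatenate([
    np.tile([0, 0, 0], (30, 1)),       # black
    np.tile([255, 0, 0], (40, 1)),     # red
    np.tile([255, 165, 0], (80, 1)),   # orange
    np.tile([0, 255, 0], (100, 1)),    # lime
]).astype(np.uint8)

index = (ndvi * (len(clut) - 1)).astype(int)
Image.fromarray(clut[index]).save('result.jpg')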