ImageMagick: How can I cut a circle out of an image?

I'm trying to make a round avatar out of an image. How can I cut a circle out of an image using ImageMagick?

There are various ways, but the simplest is probably the undocumented "-vignette" option:
magick CLg93.jpg -vignette 1x1 kim_vignette.jpg
yields
You can deal with non-square images by applying an "offset" to the vignette geometry, for example:
-vignette 1x1+0+50
if the input image is a portrait that is 100 pixels taller than it is wide.
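To show what the circular cut-out amounts to, here is a pure-Python sketch of building an alpha mask that is opaque inside the inscribed circle and transparent outside; `circular_alpha_mask` is a made-up helper name, and this mirrors a drawn-circle copy-opacity composite rather than what -vignette does internally:

```python
def circular_alpha_mask(size):
    """Alpha mask for a size x size image: 255 (opaque) inside the
    inscribed circle, 0 (transparent) outside."""
    cx = cy = (size - 1) / 2          # geometric centre of the pixel grid
    r = size / 2                      # radius of the inscribed circle
    mask = []
    for y in range(size):
        row = []
        for x in range(size):
            inside = (x - cx) ** 2 + (y - cy) ** 2 <= r * r
            row.append(255 if inside else 0)
        mask.append(row)
    return mask

mask = circular_alpha_mask(8)         # corners transparent, centre opaque
```

Put this mask in the image's alpha channel and the corners disappear, leaving a round avatar.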

Related

Color interpolation/smoothing in discrete-colored height map

I am currently trying to smooth a height map of a 2D world. I have multiple images of different 2D worlds, so I don't want to do this manually; I'd rather create a script.
Sample of a heightmap:
As you can see, the colors do not blend. I'm looking to blend every cell with the colors of its neighbours so that the slope of the height map is smooth.
What have I tried?
Applying a blur filter, but it's not enough and gives bad quality results.
Applying small noise filters but it's not even close to what I need.
So far...
Here is what happens if I apply the height map as it is, without interpolating each color with its neighbours.
The result is flat surfaces instead of slopes/mountains. I hope that makes my goal clear.
I believe that interpolating the heights with their neighbours and adding random noise on the surfaces will result in a good quality height-map.
I appreciate your help.
Bonus
Do you have any idea how would I create a simulated normal map from the result of this smooth height-map?
You could try resizing your image down and then back up again to take advantage of interpolation, e.g. for 5% of original size:
magick U0kEbl.png.jpeg -set option:geom "%G" -resize "5%" -resize '%[geom]!' result.png
Here are results for 3%, 5% and 8% of original size:
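The down-then-up trick works because the upscale interpolates between the coarse samples, turning hard steps into ramps. A minimal 1D sketch of the idea in plain Python (the helper names are made up; real resizing is 2D and uses better filters):

```python
def downscale(values, factor):
    """Box-average non-overlapping groups of `factor` samples."""
    return [sum(values[i:i + factor]) / factor
            for i in range(0, len(values) - factor + 1, factor)]

def upscale_linear(values, factor):
    """Insert `factor - 1` linearly interpolated samples between neighbours."""
    out = []
    for a, b in zip(values, values[1:]):
        out.extend(a + (b - a) * t / factor for t in range(factor))
    out.append(values[-1])
    return out

row = [0, 0, 0, 0, 100, 100, 100, 100]          # hard step, like the discrete map
smooth = upscale_linear(downscale(row, 4), 4)   # the step becomes a ramp
```

The smaller the intermediate size, the wider (and smoother) the resulting slopes, which matches the 3%/5%/8% comparison above.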

Image - detect low contrast edge

I have a picture with high and low contrast transitions.
I need to detect edges in the above picture, and I need a binary image. I can easily detect the black and "dark" blue edges with a Sobel operator and thresholding.
However, the edge between "light" blue and "light" yellow color is problematic.
I start by smoothing the image with a median filter on each channel to remove noise.
What I have tried already to detect edges:
Sobel operator
Canny operator
Laplace
grayscale, RGB, HSV, LUV color spaces (with multichannel spaces, edges are detected in each channel and then combined together to create one final edge image)
Preprocessing the RGB image with gamma correction (the problem with preprocessing is image compression: the source image is JPG, and if I preprocess, edge detection often ends up with a visible grid caused by JPG macroblocks).
So far, Sobel on RGB works best, but the low-contrast line comes out low-contrast as well.
Further thresholding removes this part. I consider everything below some gray value to be an edge. If I use a high threshold value like 250, the result for the low-contrast edge is better, but the remaining edges are destroyed. I also don't like the gaps in the low-contrast edge.
So if I push the threshold further and say that everything except white is an edge, I get edges all over the place.
Do you have any other idea how to combine low and high contrast edge detection so that the edges are without gaps as much as possible and also not all over the place?
Note: for testing I mostly use OpenCV, and what is not available in OpenCV I program myself.
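For reference, the Sobel pass mentioned in the question can be sketched in pure Python on a grayscale list-of-rows image (a naive sketch, not OpenCV's cv2.Sobel):

```python
def sobel(img):
    """Gradient magnitude using the 3x3 Sobel kernels.
    `img` is a grayscale image as a list of rows of numbers."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

step = [[0, 0, 100, 100, 100] for _ in range(5)]   # vertical step edge
edges = sobel(step)                                 # strong response at the step
```

The response scales with the contrast of the step, which is exactly why the light-blue/light-yellow boundary comes out weak.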
IMO this is barely doable, if doable at all, if you want an automated solution.
Here I used binarization in RGB space, assigning every pixel to the closer of two colors representative of the blue and the yellow. (I picked isolated pixels, but picking an average over a region would be better.)
Maybe a k-means classifier could achieve that?
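The closest-color assignment described above is simple to sketch in pure Python (illustrative only; the representative colors here are made-up stand-ins for the sampled blue and yellow):

```python
def closer_of_two(pixel, color_a, color_b):
    """Assign an RGB pixel to whichever representative color is nearer
    (squared Euclidean distance in RGB space)."""
    da = sum((p - c) ** 2 for p, c in zip(pixel, color_a))
    db = sum((p - c) ** 2 for p, c in zip(pixel, color_b))
    return color_a if da <= db else color_b

blue, yellow = (40, 90, 200), (230, 210, 80)    # made-up representatives
pixels = [(50, 100, 190), (220, 200, 90)]
binarized = [closer_of_two(p, blue, yellow) for p in pixels]
```

The boundary between the two assigned regions is then the detected edge.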
Update:
Here is what a k-means classifier can give, with 5 classes.
All kudos and points to Yves, please, for coming up with a possible solution. I had some fun experimenting with this and felt like sharing some actual code, as much for my own future reference as anything. I just used ImageMagick in Terminal, but you can do the same thing in Python with Wand.
So, to get a K-means clustering segmentation with 5 colours, you can do:
magick edge.png -kmeans 5 result.png
If you want a swatch of the detected colours underneath, you can do:
magick edge.png \( +clone -kmeans 5 -unique-colors -scale "%[width]x20\!" \) -background none -smush +10 result.png
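For reference, the clustering itself can be sketched in pure Python; this is a naive k-means over RGB tuples, not ImageMagick's -kmeans implementation:

```python
def dist2(a, b):
    """Squared Euclidean distance between two RGB tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(pixels, k, iters=10):
    """Naive k-means: returns (centers, labels), one label per pixel.
    Centers are seeded by evenly sampling the pixel list."""
    centers = [pixels[i * len(pixels) // k] for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # assignment step: nearest centre wins
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in pixels]
        # update step: each centre moves to the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(ch) / len(members)
                                   for ch in zip(*members))
    return centers, labels

two_tone = [(0, 0, 0)] * 5 + [(255, 255, 255)] * 5
centers, labels = kmeans(two_tone, 2)
```

With k=5 on the real image you would get the five-colour segmentation shown above.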
Keywords: Python, ImageMagick, wand, image processing, segmentation, k-means, clustering, swatch.

Stitching images using GraphicsMagick with Blending

I have to stitch a number of tiles using GraphicsMagick to create one single image. I am currently using gm convert with -mosaic and some overlap to stitch the tiles. But the stitched image has a border where the overlap is.
Following is the command I am using:
gm convert -background transparent
-page "+0+0" "E:/Images/Scan 001_TileScan_001_s00_ch00.tif"
-page "+0+948" "E:/Images/Scan 001_TileScan_001_s01_ch00.tif"
-page "+0+1896" "E:/Images/Scan 001_TileScan_001_s02_ch00.tif"
-page "+0+2844" "E:/Images/Scan 001_TileScan_001_s03_ch00.tif"
-mosaic "E:/Output/temp/0.png"
The final image looks like this:
How can I stitch and blend without a border?
I've been part of several projects to make seamless image mosaics. There are a couple of other factors you might like to consider:
Flatfielding. Take a shot of a piece of white card with your lens and lighting setup, then use that to flatten out the image lightness. I don't know if GM has a thing to do this, #fmw42 would know. A flatfield image is specific to a lighting setup, lens aperture setting, focus setting and zoom setting, so you need to lock focus/aperture/zoom after taking one. You'll need to do this correction in linear light.
Lens distortion. Some lenses, especially wide-angle ones, will introduce significant geometric distortion. Take a shot of a piece of graph paper and check that the lines are all parallel. It's possible to use a graph-paper shot to automatically generate a lens model you can use to remove geometric errors, but simply choosing a lens with low distortion is easier.
Scatter. Are you moving the object or the camera? Is the lighting moving too? You can have problems with scatter if you shift the object: bright parts of the object will scatter light into dark areas when they move under a light. You need to model and remove this or you'll see seams in darker areas.
Rotation. You can get small amounts of rotation, depending on how your translation stage works and how carefully you've set the camera up. You can also get the focus changing across the field. You might find you need to correct for this too.
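The flatfield correction from the first point above can be sketched in a few lines; this assumes grayscale images as lists of rows, already in linear light:

```python
def flatfield_correct(img, flat):
    """Divide out lighting falloff: scale each pixel by mean(flat) / flat.
    `flat` is a shot of a uniform white card under the same setup."""
    h, w = len(img), len(img[0])
    mean_flat = sum(sum(row) for row in flat) / (h * w)
    return [[img[y][x] * mean_flat / flat[y][x] for x in range(w)]
            for y in range(h)]

flat = [[100, 50], [100, 50]]               # right side receives half the light
shot = [[100, 50], [100, 50]]               # a uniform card photographed under it
corrected = flatfield_correct(shot, flat)   # lighting falloff divided out
```

Because the correction is a per-pixel division, it only makes sense in linear light; apply it before any gamma encoding.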
libvips has a package of functions for making seamless image mosaics, including all of the above features. I made an example for you: with these source images (near IR images of painting underdrawing):
Entering:
$ vips mosaic cd1.1.jpg cd1.2.jpg join.jpg horizontal 531 0 100 0
makes a horizontal join to the file join.jpg. The numbers give a guessed overlap of 100 pixels; the mosaic program will do a search and find the exact position for you. It then does a feathered join using a raised cosine to make:
Although the images have been flatfielded, you can see a join. This is because the camera sensitivity has changed as the object has moved. The libvips globalbalance operation will automatically take the mosaic apart, calculate a set of weightings for each frame that minimise average join error, and reassemble it.
For this pair I get:
nip2, the libvips GUI, has all this with a GUI interface. There's a chapter in the manual (press F1 to view) about assembling large image mosaics:
https://github.com/jcupitt/nip2/releases
Global balance won't work from the CLI, unfortunately, but it will work from any of the libvips language bindings (C#, Python, Ruby, JavaScript, C, C++, Go, Rust, PHP etc. etc.). For example, in pyvips you can write:
import pyvips
left = pyvips.Image.new_from_file("cd1.1.jpg")
right = pyvips.Image.new_from_file("cd1.2.jpg")
join = left.mosaic(right, "horizontal", 531, 0, 100, 0)
balance = join.globalbalance()
balance.write_to_file("x.jpg")
Here is an example using ImageMagick. But since the colors are different, a ramped blend will only mitigate the sharp edge. The closer the colors are and the more gradual the blend (i.e. over a larger area), the less it will show.
1) Create red and blue images
convert -size 500x500 xc:red top.png
convert -size 500x500 xc:blue btm.png
2) Create a mask that is solid white for most of the image and has a gradient where you want the overlap. Here I have a 100-pixel gradient for a 100-pixel overlap.
convert -size 500x100 gradient: -size 500x400 xc:black -append -negate mask_btm.png
convert mask_btm.png -flip mask_top.png
3) Put masks into the alpha channels of each image
convert top.png mask_top.png -alpha off -compose copy_opacity -composite top2.png
convert btm.png mask_btm.png -alpha off -compose copy_opacity -composite btm2.png
4) Mosaic the two images one above the other with an overlap of 100
convert -page +0+0 top2.png -page +0+400 btm2.png -background none -mosaic result.png
See also my tidbit about shaping the gradient at http://www.fmwconcepts.com/imagemagick/tidbits/image.php#composite1. But I would use a linear gradient for such work (as shown here), because as you overlap linear gradients they sum to a constant white, so the result will be fully opaque where they overlap.
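The reason linear gradients behave so well is that the two opacity ramps are complementary and sum to one everywhere in the overlap, so the composite stays fully opaque. A 1D sketch of the ramped blend in plain Python (`ramp_blend` is a hypothetical helper working on grayscale values rather than images):

```python
def ramp_blend(top_vals, btm_vals, overlap):
    """Join two 1D signals, cross-fading the last `overlap` samples of
    `top_vals` into the first `overlap` of `btm_vals` with complementary
    linear ramps (the two weights always sum to 1)."""
    out = list(top_vals[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)              # bottom weight ramps up
        a = top_vals[len(top_vals) - overlap + i]
        b = btm_vals[i]
        out.append((1 - w) * a + w * b)
    out.extend(btm_vals[overlap:])
    return out

merged = ramp_blend([100] * 6, [200] * 6, overlap=4)   # steps smoothly 100 -> 200
```

A wider overlap gives a gentler ramp, which is exactly the "more gradual blend" recommended above.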
One other thing to consider is trying to match the colors of the images to some common color map. This can be done by a number of methods. For example, histogram matching or mean/std (brightness/contrast) matching. See for example, my scripts: histmatch, matchimage and redist at http://www.fmwconcepts.com/imagemagick/index.php and ImageMagick -remap at https://www.imagemagick.org/Usage/quantize/#remap

How to find if the image has black rectangles of size greater than 5*5

I am trying to write a program in C# or C++ which finds whether an image (PNG or JPG) contains a rectangle that is black and larger than 5 * 5 pixels. If there are multiple such black rectangles in an image, it should give me the coordinates of all of them.
You could try an Image Correlation:
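A brute-force version of the search (without correlation) is also easy to sketch in pure Python; `black_blocks` is a hypothetical helper that reports the top-left corner of every 5x5 window whose pixels are all near-black:

```python
def black_blocks(img, size=5, thresh=10):
    """Top-left (x, y) of every size x size window whose pixels are all
    'black' (value <= thresh). `img` is a grayscale list of rows."""
    h, w = len(img), len(img[0])
    hits = []
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            if all(img[y + dy][x + dx] <= thresh
                   for dy in range(size) for dx in range(size)):
                hits.append((x, y))
    return hits

canvas = [[255] * 10 for _ in range(8)]      # white 10x8 image
for y in range(1, 6):                        # paint a 5x5 black patch at (2, 1)
    for x in range(2, 7):
        canvas[y][x] = 0
found = black_blocks(canvas)                 # -> [(2, 1)]
```

Regions larger than 5x5 report every window position they contain; merging those hits into maximal rectangles (or using a summed-area table to speed up the all-black test) is left out of the sketch.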

Cleaning up speckles around text in a scanned image

I've tried -noise radius and -noise geometry, and they don't seem to do what I want at all. I have some b&w images (TIFF, G4 fax compression) with lots of noise around the characters. This noise takes the form of pixel blobs that are one pixel wide in most cases.
My desire is to do the following 3 steps (in this order):
Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
Whiteout all black pixels that are 1 pixel tall (white pixels above and below)
Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
Do I have to write code to do this, or can ImageMagick pull it off? If it can, how do you specify the geometry to do it?
Lacking a lot of good answers here, I put this one to the ImageMagick forum, and their response was really good. You can read it here: ImageMagick Forum.
Morphology proved to be the best answer.
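The three passes described in the question map onto a very small neighbour test; here is a naive pure-Python sketch of them (not ImageMagick's -morphology, which is the better tool):

```python
def whiteout_thin(img, horizontal=True):
    """Turn a black (0) pixel white (255) when both its horizontal
    (or vertical) neighbours are white, i.e. the black run is only
    one pixel wide (or tall). Pixels off the edge count as white."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] != 0:
                continue
            if horizontal:
                nbrs = (img[y][x - 1] if x > 0 else 255,
                        img[y][x + 1] if x < w - 1 else 255)
            else:
                nbrs = (img[y - 1][x] if y > 0 else 255,
                        img[y + 1][x] if y < h - 1 else 255)
            if nbrs == (255, 255):
                out[y][x] = 255
    return out

def despeckle(img):
    """The question's three passes: wide, tall, then wide again."""
    img = whiteout_thin(img, horizontal=True)
    img = whiteout_thin(img, horizontal=False)
    return whiteout_thin(img, horizontal=True)
```

Isolated one-pixel blobs vanish, while 2x2 and larger black regions (real character strokes) survive all three passes.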
Blur then sharpen would be the normal technique for speckle noise.
ImageMagick can do both of these; you might have to play with the amount of blurring.
