Original image:
I want to isolate and remove (i.e. turn to white) small clumps of pixels with large amounts of white space around them. Examples of pixel areas I would like to turn to white:
The goal is to get larger unbroken areas of white space. Example of the final result:
Small clusters of pixels can be filtered out using the median filter with imagemagick like this:
convert gmyjf.jpg -median 5 gmyjf2.jpg
(where gmyjf.jpg would be the path to the original file)
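If you'd rather script this, a rough Python equivalent using OpenCV looks like the sketch below. Note that ImageMagick's -median radius and OpenCV's kernel size are not the same parameter, so treat the 5 as a starting point.

import cv2

# Median-filter the scan to wipe out small pixel clumps.
img = cv2.imread("gmyjf.jpg")
filtered = cv2.medianBlur(img, 5)  # kernel size must be odd
cv2.imwrite("gmyjf2.jpg", filtered)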
I am currently trying to smooth a height-map of a 2D world. I have multiple images of different 2D worlds, so this is not something I'm going to do manually; I'd rather create a script.
Sample of a heightmap:
As you can see, the colors do not blend. I'm looking to blend every area with the colors of its neighbours so that the slope of the height map is smooth.
What have I tried?
Applying a blur filter, but it's not enough and gives poor-quality results.
Applying small noise filters, but it's not even close to what I need.
So far...
Here is what happens if I apply the height-map as it is, without interpolating each color with its neighbours.
The result is flat surfaces instead of slopes/mountains. I hope this makes my goal clear.
I believe that interpolating the heights with their neighbours and adding random noise on the surfaces will result in a good quality height-map.
I appreciate your help.
Bonus
Do you have any idea how I would create a simulated normal map from the result of this smoothed height-map?
You could try resizing your image down and then back up again to take advantage of interpolation, e.g. for 5% of original size:
magick U0kEbl.png.jpeg -set option:geom "%G" -resize "5%" -resize '%[geom]!' result.png
Here are results for 3%, 5% and 8% of original size:
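The same shrink-and-enlarge trick is easy to script in Python if you prefer; a minimal OpenCV sketch (filenames are placeholders), shrinking to 5% and interpolating back up:

import cv2

img = cv2.imread("heightmap.png")
h, w = img.shape[:2]

# Shrink to 5% of the original size, then scale back up;
# the interpolation on the way back up smooths the hard color steps.
small = cv2.resize(img, (max(1, w // 20), max(1, h // 20)), interpolation=cv2.INTER_AREA)
smooth = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("result.png", smooth)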
I have a picture with high and low contrast transitions.
I need to detect edges in the above picture. I need a binary image. I can easily detect the black and "dark" blue edges with the Sobel operator and thresholding.
However, the edge between "light" blue and "light" yellow color is problematic.
I start by smoothing the image with a median filter on each channel to remove noise.
What I have tried already to detect edges:
Sobel operator
Canny operator
Laplace
grayscale, RGB, HSV, LUV color spaces (with multichannel spaces, edges are detected in each channel and then combined together to create one final edge image)
Preprocessing the RGB image with gamma correction (the problem with preprocessing is image compression: the source image is JPG, and if I preprocess, edge detection often ends up with a visible grid caused by JPG macroblocks).
So far, Sobel on RGB works best, but the response for the low-contrast edge is itself low-contrast.
Further thresholding removes this part. I consider everything under some gray value to be an edge. If I use a high threshold value like 250, the result for the low-contrast edge is better, but the remaining edges are degraded. I also don't like the gaps in the low-contrast edge.
So if I push the threshold further and say that everything except white is an edge, I get edges all over the place.
Do you have any other idea how to combine low and high contrast edge detection so that the edges are without gaps as much as possible and also not all over the place?
Note: for testing I mostly use OpenCV, and whatever is not available in OpenCV I program myself.
IMO this is barely doable, if doable at all, if you want an automated solution.
Here I used binarization in RGB space, assigning every pixel to the closer of two colors representative of the blue and the yellow. (I picked isolated pixels, but an average over a region would be better.)
Maybe a k-means classifier could achieve that?
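For reference, a minimal NumPy sketch of that nearest-color binarization (the two reference colors are placeholder values you would sample from your own image):

import cv2
import numpy as np

img = cv2.imread("edge.png").astype(np.float32)

# Representative colors sampled from the image (BGR order; placeholder values).
blue = np.array([180, 120, 40], dtype=np.float32)
yellow = np.array([60, 200, 230], dtype=np.float32)

# Squared distance of every pixel to each reference color.
d_blue = ((img - blue) ** 2).sum(axis=2)
d_yellow = ((img - yellow) ** 2).sum(axis=2)

# Assign each pixel to the closer color, giving a binary image.
binary = np.where(d_blue < d_yellow, 0, 255).astype(np.uint8)
cv2.imwrite("binary.png", binary)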
Update:
Here is what a k-means classifier can give, with 5 classes.
All kudos and points to Yves please for coming up with a possible solution. I was having some fun playing around experimenting with this and felt like sharing some actual code, as much for my own future reference as anything. I just used ImageMagick in Terminal, but you can do the same thing in Python with Wand.
So, to get a K-means clustering segmentation with 5 colours, you can do:
magick edge.png -kmeans 5 result.png
If you want a swatch of the detected colours underneath, you can do:
magick edge.png \( +clone -kmeans 5 -unique-colors -scale "%[width]x20\!" \) -background none -smush +10 result.png
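The Wand route should look roughly like this in Python, assuming a Wand version that exposes kmeans (0.6.4+, running on ImageMagick 7):

from wand.image import Image

with Image(filename="edge.png") as img:
    img.kmeans(number_colors=5)  # K-means segmentation into 5 colours
    img.save(filename="result.png")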
Keywords: Python, ImageMagick, wand, image processing, segmentation, k-means, clustering, swatch.
I have a customized camera which contains 3 individual lenses+filters arranged in a triangle, so in every shot I get 3 single-band grayscale images (r, g, b). I want to merge them to get an RGB image.
The problem is that, since the 3 lenses are physically separated, the images captured by them are not aligned. As a result, when I use the gdal_merge command from the QGIS software pack, the result looks weird. I may also need to adjust the weights of the r, g, b channels. I put the raw r, g, b images and the output I generated using QGIS in this dropbox folder.
Is there an existing open-source tool to do the alignment and merging? If not, how can I do it using OpenCV?
Combining R, G, B images is possible using a simple pixel intensity distance metric like the Sum of Squared Differences (SSD). A better metric is the Normalized Cross-Correlation (NCC) (see Wikipedia), which first normalizes an image matrix into a unit vector and then computes the dot product of such unit vectors (from the 2 input images). The higher the NCC value, the greater the similarity of the two input images.
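A minimal NumPy sketch of NCC as just described (flatten each image, scale it to a unit vector, take the dot product):

import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equal-size grayscale images.
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return float(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)))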
However, NCC similarity may be insufficient for computing the best alignment of two high-resolution images, such as the TIFF images you provide. One should therefore use a downsampling method, as described below, to align the two input images at a smaller size and then simply scale the computed offset back up to the original resolution.
So, for the input images red, green and blue, there are two approaches to aligning them into a single RGB image:

1. Consider the blue image as the reference image, for example, w.r.t. which we align the red and green images. Now consider the red and blue images. Within a certain window, compute the best alignment offset of the red and blue images using the NCC similarity metric, and find the shifted_red image. Do the same for the green and blue images. Now combine the shifted_red, shifted_green and blue images to get the final RGB image.

2. For high-resolution images, decide on a scale_count. Recursively, at each step, resize the image by half, compute the offset of the red image w.r.t. the blue image, rescale the offset and apply it. The benefit of such a recursive multi-scale alignment is a decrease in computation time and an increase in alignment accuracy (you don't know the best window size for searching for alignment offsets in approach (1), so this works better). Repeat this approach for the green and blue channels, and then combine the final results as in (1). (An illustrative sketch appears at the end of this answer.)
Since this problem is common in assignments of computational photography courses, I am not going to share any code. I have, however, implemented the two approaches and experimented with the images you provided. I don't know which of the input images is red, so I have two results (rescaled to decrease file size):
If IMG_0290_1.tif is Red, IMG_0290_2.tif is Green and IMG_0290_3.tif is blue:
RGB result if red:1, green:2, blue:3
If IMG_0290_3.tif is Red, IMG_0290_2.tif is Green and IMG_0290_1.tif is blue (this looks more correct to me):
RGB result if red:3, green:2, blue:1
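The answer above deliberately withholds its code, so as an independent illustration, a condensed sketch of the multi-scale search from approach (2) might look like the following, reusing the ncc helper above. The window sizes are illustrative and np.roll (which wraps around at the borders) is a simplification:

import numpy as np

def best_offset(ref, moving, window):
    # Exhaustively try shifts in [-window, window]^2 and keep the one maximizing NCC.
    best, best_dx, best_dy = -2.0, 0, 0
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(ref, shifted)
            if score > best:
                best, best_dx, best_dy = score, dx, dy
    return best_dx, best_dy

def align_pyramid(ref, moving, levels=4, window=8):
    # Coarse-to-fine: estimate the offset at the coarsest scale,
    # then double it and refine at each finer scale.
    if levels == 0 or min(ref.shape) < 2 * window:
        return best_offset(ref, moving, window)
    dx, dy = align_pyramid(ref[::2, ::2], moving[::2, ::2], levels - 1, window)
    dx, dy = 2 * dx, 2 * dy
    shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
    rdx, rdy = best_offset(ref, shifted, 2)  # small refinement window
    return dx + rdx, dy + rdy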
When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominant values of the hue component. Make a histogram of the hue values of all pixels and analyze in which angle regions the peaks fall. A large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image, for example.
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, the hue will not deal very well with grayscales obviously. So a chessboard image with 50% white and 50% black will suffer from two problems: the hue is arbitrary for a black/white image, and there are two colors that are exactly 50% of the image.
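A rough Python/OpenCV sketch of the hue-histogram idea; note that OpenCV stores hue as 0-179 (degrees halved), and weighting by saturation is one way to sidestep the gray-pixel problem mentioned above:

import cv2
import numpy as np

img = cv2.imread("photo.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue = hsv[..., 0]  # OpenCV hue range is 0..179 (degrees / 2)

# Weight each pixel by its saturation so near-gray pixels barely count.
sat = hsv[..., 1].astype(np.float64) / 255.0
hist, _ = np.histogram(hue, bins=180, range=(0, 180), weights=sat)

dominant_hue_deg = 2 * int(np.argmax(hist))
print("dominant hue (degrees):", dominant_hue_deg)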
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to change the image from RGB to indexed; then you could use a regular histogram and detect the peaks (Matlab does this with rgb2ind(), as you probably already know), and then the problem would be reduced to your regular "finding peaks in an array".
Then, per the Matlab documentation:
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
The values in n tell you how many elements fall in each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, deciding how many elements a bin needs before you count it as a predominant color, taking the bins that contain that many elements, calculating the index that corresponds to their middle, and converting it back to RGB.
Whatever you're using for your processing probably has similar functions.
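Outside Matlab, for instance, the same indexed-histogram idea can be sketched with Pillow and NumPy in Python (the palette size of 32 is arbitrary):

from PIL import Image
import numpy as np

img = Image.open("photo.jpg").convert("RGB")
indexed = img.quantize(colors=32)  # like rgb2ind: RGB -> palette indices
counts = np.bincount(np.asarray(indexed).ravel(), minlength=32)

palette = np.array(indexed.getpalette()[:32 * 3]).reshape(-1, 3)
top = np.argsort(counts)[::-1][:3]  # the three most frequent palette entries
for i in top:
    print(tuple(palette[i]), counts[i], "pixels")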
1. Average all pixels in the image.
2. Remove all pixels whose color is farther from the average than one standard deviation.
3. Go to step 1 with the remaining pixels, until arbitrarily few are left (1, or maybe 1%).
You might also want to pre-process the image, for example by applying a high-pass filter (removing only very low frequencies) to even out the lighting in the photo; see http://en.wikipedia.org/wiki/Checker_shadow_illusion
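A minimal NumPy sketch of the iterative trimming above, reading "standard deviation" as the standard deviation of the pixel distances to the mean (one plausible interpretation):

import numpy as np

def dominant_color(pixels, min_fraction=0.01):
    # Repeatedly average, drop pixels farther from the average than one
    # standard deviation of the distances, and stop when ~1% remain.
    pts = pixels.reshape(-1, 3).astype(np.float64)
    target = max(1, int(min_fraction * len(pts)))
    while len(pts) > target:
        mean = pts.mean(axis=0)
        dist = np.linalg.norm(pts - mean, axis=1)
        keep = dist <= dist.std()
        if keep.all() or not keep.any():  # nothing (or everything) would be trimmed
            break
        pts = pts[keep]
    return pts.mean(axis=0)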
I've tried -noise radius and -noise geometry and they don't seem to do what I want at all. I have some b&w images (TIFF G4 Fax compression) with lots of noise around the characters. This noise takes the form of pixel blobs that are 1 pixel wide in most cases.
My desire is to do the following 3 steps (in this order):
1. Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right).
2. Whiteout all black pixels that are 1 pixel tall (white pixels above and below).
3. Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right).
Do I have to write code to do this, or can Imagemagick pull it off? If it can, how do you specify the geometry to do it?
Lacking a lot of good answers here, I put this one to the ImageMagick forum, and their response was really good. You can read it here: ImageMagick Forum.
Morphology proved to be the best answer.
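I haven't reproduced the forum's exact operator, but in OpenCV terms one plausible instance is a small morphological closing, which wipes out dark specks smaller than the structuring element while leaving larger strokes intact:

import cv2
import numpy as np

img = cv2.imread("fax.tif", cv2.IMREAD_GRAYSCALE)

# Closing (dilate then erode) removes small black blobs on a white background.
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("cleaned.tif", cleaned)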
Blur then sharpen would be the normal technique for speckle noise.
ImageMagick can do both of these; you might have to play with the amount of blurring.