How to detect large galaxies using thresholding? - image-processing

I'm required to create a map of galaxies based on the following image,
http://www.nasa.gov/images/content/690958main_p1237a1.jpg
Basically I need to smooth the image with a mean filter first and then apply thresholding to it.
However, I'm also asked to detect only the large galaxies in the image. Should I adjust the smoothing mask or the thresholding to achieve that goal?

Both: by smoothing the picture first, the pixels around smaller galaxies will "blend" with the black space and, thus, shift to a lower intensity value. This lower intensity can then be thresholded, leaving only the white centres of bigger galaxies.
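A minimal OpenCV sketch of that pipeline, assuming a greyscale input; the file names, the 15x15 kernel, and the threshold of 200 are placeholder values you would tune against the actual image:

#include <opencv2/opencv.hpp>

int main() {
    // load the galaxy image as greyscale
    cv::Mat img = cv::imread("galaxies.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // mean (box) filter: a larger kernel washes out small galaxies more
    cv::Mat smoothed;
    cv::blur(img, smoothed, cv::Size(15, 15));

    // threshold: a higher value keeps only bright cores that survived the smoothing
    cv::Mat mask;
    cv::threshold(smoothed, mask, 200, 255, cv::THRESH_BINARY);

    cv::imwrite("large_galaxies.png", mask);
    return 0;
}

Enlarging the kernel suppresses small galaxies before thresholding; raising the threshold then discards whatever dim remnants survive.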

Related

find rectangle coordinates in a given image

I'm trying to blindly detect signals in a spectrum.
One way that came to my mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all the horizontal rectangles in an image? (The heights of the rectangles don't matter to me.)
An example image is shown below. (Note that I know all the rectangles are horizontal.)
I would also appreciate any other suggestions for this purpose.
E.g., I want the algorithm to give me 9 centers and 9 widths for the above image.
Since the rectangles are aligned, you can do this quite easily and efficiently (this is not the case with unaligned rectangles, since they would not be clearly separated). The idea is first to compute the average color of each line and of each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance, and apply a threshold. You can remove some artefacts with a median/blur filter beforehand.
Finally, you can just scan the resulting 1D array of binary values to locate where each rectangle starts and stops. The center of each rectangle is ((x_start+x_end)/2, (y_start+y_end)/2).
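A rough OpenCV sketch of that scan along the vertical axis (the function name and the threshold of 60 are assumptions; the same reduction along the other axis gives each rectangle's horizontal start/stop, hence its center and width):

#include <opencv2/opencv.hpp>
#include <utility>
#include <vector>

// Return (start, end) row index pairs for the bright horizontal bands.
std::vector<std::pair<int,int>> rowRuns(const cv::Mat& bgr) {
    cv::Mat grey;
    cv::cvtColor(bgr, grey, cv::COLOR_BGR2GRAY);

    // average intensity of each row -> a rows x 1 column vector
    cv::Mat rowMeans;
    cv::reduce(grey, rowMeans, 1, cv::REDUCE_AVG, CV_32F);

    // scan the 1D profile for start/stop transitions
    std::vector<std::pair<int,int>> runs;
    int start = -1;
    for (int y = 0; y < rowMeans.rows; ++y) {
        bool on = rowMeans.at<float>(y, 0) > 60.0f;  // assumed cut-off above the background level
        if (on && start < 0) start = y;
        if (!on && start >= 0) { runs.push_back({start, y - 1}); start = -1; }
    }
    if (start >= 0) runs.push_back({start, rowMeans.rows - 1});
    return runs;  // the centre of each band is (start + end) / 2
}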

How to remove the local average color from an image with OpenCV

I have an image with a gentle gradient background and sharp foreground features that I want to detect. I wish to find green arcs in the image. However, the green arcs are only green relative to the background (they are semitransparent). The green arcs are also only one or two pixels wide.
As a result, for each pixel of the image, I want to subtract the average color of the surrounding pixels. The surrounding 5x5 or 10x10 pixels would be sufficient. This should leave the foreground features relatively untouched but remove the soft background, and I can then select the green channel for further processing.
Is there any way to do this efficiently? It doesn't have to be particularly accurate, but I want to run it at 30Hz on a 1080p image.
You can achieve this by doing a simple box blur on the image and then subtracting that from the original image:
Mat blurred, diff;
blur(img, blurred, Size(15, 15));  // 15x15 box blur estimates the local average colour
diff = img - blurred;              // smooth background cancels out, sharp features remain
I tried this on a 1920x1080 image and the blur took 25ms and the subtract took 15ms on a decent spec iMac.
If your background is not changing fast, you could calculate the blur over the space of a few frames in a second thread and keep re-using it until you recalculate it a few frames later. Then you would only have the 15ms subtraction to do for each frame of your 30fps, rather than 45ms of processing.
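A hedged sketch of that caching idea using std::async (the frame source, the function name, and the 5-frame refresh interval are all assumptions):

#include <opencv2/opencv.hpp>
#include <chrono>
#include <future>

void processStream(cv::VideoCapture& cap) {
    cv::Mat frame, blurred, diff;
    std::future<cv::Mat> pending;
    int frameIdx = 0;

    while (cap.read(frame)) {
        // recompute the expensive blur in the background every few frames
        if (frameIdx % 5 == 0 && !pending.valid()) {
            pending = std::async(std::launch::async, [img = frame.clone()] {
                cv::Mat b;
                cv::blur(img, b, cv::Size(15, 15));
                return b;
            });
        }
        // swap in the fresh blur once the worker finishes
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            blurred = pending.get();
        }
        if (!blurred.empty())
            diff = frame - blurred;  // only this cheap subtraction runs every frame
        ++frameIdx;
    }
}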
Depending on how smooth the background gradient is, edge detection followed by dilation might work well for you.
Edge detection will output the 3 lines that you've shown in the sample and discard the background gradient (again, depending on smoothness). Dilating the image then will thicken the edges so that your line contours are more significant.
You can superimpose this edge map on your original image as:
Mat src, edges;
// perform edge detection on src and output in edges
Mat result;
cvtColor(edges, edges, COLOR_GRAY2BGR); // expand the mask to a 3-channel image
subtract(src, edges, result);           // black out the edge pixels
subtract(src, result, result);          // keep only the edge pixels, in their original colours
The result should contain only the 3 lines with 3 colours. From here on you can apply color filtering to select the green one.
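For that last color-filtering step, a small sketch using inRange (the HSV bounds for "green" are assumed values to tune):

Mat hsv, greenMask, greenOnly;
cvtColor(result, hsv, COLOR_BGR2HSV);
// hue roughly 35-85 covers green on OpenCV's 0-179 hue scale
inRange(hsv, Scalar(35, 40, 40), Scalar(85, 255, 255), greenMask);
result.copyTo(greenOnly, greenMask);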

Finding ROI for a periodic repetitive fringe pattern

I am trying to detect the ROI for a fixed repetitive pattern in an image using OpenCV C++.
The ROI I am trying to find is shown with a red boundary in the pic:
I tried Canny edge detection after blurring, but it detects the edges of the vertical/horizontal black and white lines, which is not what I am trying to detect.
What is the best approach to my problem?
Since you're starting with a binary image, you could use findContours() to get the contours for the individual strips. Since there are a couple of solitary pixels from noise, you should then filter for size using contourArea(contour) and merge the points of all contours meeting your size criteria into a combined contour. Then get the bounding box for the combined contour with boundingRect(combinedContour).
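Put together, a minimal sketch (the function name and the 50-pixel area cut-off for noise are assumptions):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Rect roiFromStrips(const cv::Mat& binary) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // merge the points of every contour that is big enough to be a strip
    std::vector<cv::Point> combined;
    for (const auto& c : contours) {
        if (cv::contourArea(c) > 50.0)  // assumed noise cut-off
            combined.insert(combined.end(), c.begin(), c.end());
    }
    return combined.empty() ? cv::Rect() : cv::boundingRect(combined);
}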

Are the GPUImageBilateralFilter parameters of the GPUImage iOS Library the equivalent of the Photoshop Surface Blur parameters?

I understand that a Surface Blur in Photoshop is a bilateral filter.
In the iOS Library GPUImage there is a filter GPUImageBilateralFilter with parameters of texelSpacingMultiplier and distanceNormalizationFactor.
Would these match up directly to the Photoshop Surface Blur options of radius and threshold (respectively)? And would the values of these parameters be the same?
Thanks!
Not exactly.
With your standard Gaussian blurs, you typically specify a pixel radius (or sigma value in pixels) to define the blur strength. Before the recent overhaul of the blurs, you couldn't specify something like this in GPUImage. The blurs instead used a fixed number of samples, with a fixed value of sigma. You could expand them slightly by adjusting the pixel spacing between samples (the texelSpacingMultiplier, which is 1.0 by default).
With my recent revamp of the blurs, you now can specify a true pixel radius for the blur, which in the case of a Gaussian blur sets the size of sigma. When you do this, it generates a shader on the fly that works over the appropriate number of pixels to yield a blur of that strength. The use of sigma here matches Core Image's behavior exactly, although I haven't tested it against Photoshop.
However, the bilateral blur is the last of the blurs that I've needed to update to bring it in line with the others. I haven't yet, so you don't have a great way to match Photoshop's behavior at the moment. This is on my to-do list, but that list is fairly lengthy at this point. You're welcome to take a look at how I converted over something like the box blur and try to adapt the bilateral blur math to enable this for that filter.
The distanceNormalizationFactor isn't related to the blur size, but it works to weight the way that pixel colors are processed from the sampled pixels around the central one. A bilateral blur works by blurring the central color only with surrounding colors that are close enough to the central one (thus preserving edges). This weighting value controls how close a color needs to be to the central pixel's color in order for the center pixel to be blended with it.
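To make the roles of the two parameters concrete, here is an illustrative scalar sketch of the general bilateral weighting idea; this is a conceptual approximation, not GPUImage's actual shader code:

#include <algorithm>
#include <cmath>

// Illustrative only: a bilateral-style weight combines a spatial term with a
// colour-similarity term. 'spacing' plays the role of texelSpacingMultiplier
// (how far apart the samples sit); 'distanceNormalization' plays the role of
// distanceNormalizationFactor (how quickly dissimilar colours stop contributing).
double bilateralWeight(double pixelOffset, double colourDistance,
                       double spacing, double distanceNormalization) {
    double x = pixelOffset * spacing;
    double spatial = std::exp(-x * x / 2.0);  // Gaussian falloff with distance
    // colours far from the central pixel's colour contribute little, preserving edges
    double similarity = std::max(0.0, 1.0 - colourDistance * distanceNormalization);
    return spatial * similarity;
}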

Algorithm for determining the prominant colour of a photograph

When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominant values of the hue component across the pixels. Make a histogram of the hue values and analyze which angular regions the peaks fall in. For example, a large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image.
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, hue obviously does not deal well with grayscales. So a chessboard image with 50% white and 50% black suffers from two problems: the hue is arbitrary for a black/white image, and there are two colors that each make up exactly 50% of the image.
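A sketch of that hue-histogram approach with OpenCV in C++ (the function name and the 30-bin resolution are arbitrary choices; handling of low-saturation/grayscale pixels is omitted):

#include <opencv2/opencv.hpp>

int dominantHueDegrees(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // histogram over the hue channel (0-179 in OpenCV)
    int histSize = 30;
    float range[] = {0.0f, 180.0f};
    const float* ranges[] = {range};
    int channels[] = {0};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    // the tallest bin's centre, mapped back to degrees (0-360)
    cv::Point maxLoc;
    cv::minMaxLoc(hist, nullptr, nullptr, nullptr, &maxLoc);
    return static_cast<int>((maxLoc.y + 0.5) * (180.0 / histSize) * 2.0);
}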
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to change the image from RGB to indexed; then you could use a regular histogram and detect the peaks (Matlab does this with rgb2ind(), as you probably already know), and the problem would be reduced to your regular "finding peaks in an array".
Then
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
The values in n tell you how many elements fall in each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, deciding how many elements a bin must contain before you count it as a predominant color, taking the bins that contain that many elements, calculating the index that corresponds to their middle, and converting it back to RGB.
Whatever you're using for your processing probably has similar functions to those.
1. Average all pixels in the image.
2. Remove all pixels that are farther away from the average color than one standard deviation.
3. GOTO 1 with the remaining pixels, until arbitrarily few are left (1, or maybe 1%); a sketch of this loop follows below.
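A sketch of that loop in OpenCV C++ (the function name is an assumption; the 1% stopping point comes from the steps above, everything else is an implementation choice):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

cv::Vec3f dominantColour(const cv::Mat& bgr) {
    // flatten to a list of float BGR pixels
    cv::Mat pixels;
    bgr.convertTo(pixels, CV_32F);
    pixels = pixels.reshape(3, 1);
    std::vector<cv::Vec3f> pts(pixels.begin<cv::Vec3f>(), pixels.end<cv::Vec3f>());

    size_t stopAt = std::max<size_t>(1, pts.size() / 100);  // the "1%" stopping point
    for (;;) {
        // 1. average of the remaining pixels
        cv::Vec3f mean(0, 0, 0);
        for (const auto& p : pts) mean += p * (1.0f / pts.size());
        // 2. variance of the distance to that average
        double variance = 0;
        for (const auto& p : pts) variance += (p - mean).ddot(p - mean) / pts.size();
        // 3. keep only pixels within one standard deviation of the average
        std::vector<cv::Vec3f> kept;
        for (const auto& p : pts)
            if ((p - mean).ddot(p - mean) <= variance) kept.push_back(p);
        if (kept.empty() || kept.size() == pts.size() || kept.size() <= stopAt)
            return mean;  // converged, or few enough pixels left
        pts.swap(kept);
    }
}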
You might also want to pre-process the image, for example by applying a high-pass filter (removing only the very low frequencies) to even out the lighting in the photo: http://en.wikipedia.org/wiki/Checker_shadow_illusion
