Hi, I am trying to figure out whether edge detection depends on image conditions (features).
I know there is a large mathematical basis for any edge detection operator. I also know that edge detection is sensitive to noise in a picture.
What about brightness and contrast? The point is that I am looking for a way to estimate the quality of an image. Is image quality important for edge detection?
Edges are detected where there is a change in pixel value in either the x or y direction of an image. The maths behind this is simple differentiation. Any variation or noise that changes pixel values can reduce the chances of detecting an edge, but then morphological operations can help.
For example, blur is one operation that reduces image quality by changing pixel values. Figure 1 shows an image and its edges. As already mentioned, edges are detected where the pixel value changes in one direction; you can see the white lines as the edges corresponding to these changes in pixel value.
Figure 2 is a blurred version of the input image; it has far fewer detected edges than the original.
That is just one example. Noise introduced while capturing the image, the illumination of the object, or a dark object can all give different edges. Depending on how the noise affects the image, the number of detected edges can increase or decrease.
There are some basic methods of detecting edges; I have used Canny edge detection. You can refer to a review of classic edge detectors to understand them further.
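As a minimal illustration (Python + OpenCV), running Canny on an image and on a blurred copy of it shows how blur removes edges. The file name and the Canny thresholds here are placeholders, not values taken from the figures:

```python
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

edges = cv2.Canny(img, 100, 200)                       # edges of the original image

blurred = cv2.GaussianBlur(img, (9, 9), 0)             # simulate loss of quality by blurring
edges_blurred = cv2.Canny(blurred, 100, 200)           # noticeably fewer edge pixels

print("edge pixels (original):", int((edges > 0).sum()))
print("edge pixels (blurred): ", int((edges_blurred > 0).sum()))
```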
Related
I work with the segmentation of simulated rock pile images. My input is a depth image (given below). I tried to apply the Canny edge detector, but it does not give promising results: rock pile areas that are clustered together are detected as a single large rock, and edges end abruptly.
Since a depth image of a rock pile has only small changes in intensity, am I right in saying that the Canny edge detector is not appropriate for this purpose?
I have applied an adaptive thresholding operation, and it seems to show better results because it does not really work with the gradient but with the average intensity values of a neighborhood. The resulting image is given below, and a small code sketch of this step follows the images.
The actual simulated scene
The depth image
The result of canny edge
Adaptive thresholding result
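For reference, this is roughly how that adaptive-thresholding step can be reproduced (Python + OpenCV); the file name, the block size, and the offset are assumptions that would need tuning for the actual depth image:

```python
import cv2

# Load the depth image as a single-channel 8-bit image (placeholder file name)
depth = cv2.imread("depth_image.png", cv2.IMREAD_GRAYSCALE)

# Each pixel is compared with the mean of its local neighborhood rather than a global
# gradient, so small depth differences between adjacent rocks can still separate them.
mask = cv2.adaptiveThreshold(depth, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 51, 2)   # 51x51 neighborhood, offset 2

cv2.imwrite("adaptive_result.png", mask)
```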
I have a thermal image of a human standing, carrying either a cold tool or a hot tool. I want to find where this tool is. So basically I am trying to make an image processing filter that would give me the area where a drastic change in gray-level intensity occurs against a relatively smoother background. I have tried the Canny edge detector, but it gives a lot of noise.
Hot Object To be detected: https://imgur.com/0ZyK6WP
Cold object to be detected: https://imgur.com/YYT9rHW
You might increase the Gaussian smoothing kernel to filter out the noise, but that might result in losing the edges. In that case you might want to use a filter that preserves edges while still smoothing the image. Something like a bilateral filter could help: it replaces the intensity of each pixel with a weighted average of intensity values from nearby pixels.
Also, have you tried different threshold values for non-maximum suppression? That might be helpful when dealing with false positives.
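A rough sketch of both suggestions (Python + OpenCV); the file name, the bilateral-filter parameters, and the Canny threshold pairs are assumptions to experiment with, not tuned values:

```python
import cv2

thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Edge-preserving smoothing: 9-pixel neighborhood, sigmaColor = sigmaSpace = 75
smoothed = cv2.bilateralFilter(thermal, 9, 75, 75)

# Try a few threshold pairs; higher thresholds suppress weak, noisy edges
for lo, hi in [(50, 150), (100, 200), (150, 250)]:
    edges = cv2.Canny(smoothed, lo, hi)
    cv2.imwrite(f"edges_{lo}_{hi}.png", edges)
```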
First time studying image processing...
I just don't understand what the Fourier transform of an image describes.
For example, consider the following pictures: the first one is the image, and the second one is the Fourier transform of the image.
Now my question is:
Given the Fourier-transformed image, what am I supposed to comprehend from it?
I would really appreciate any help here; I just cannot proceed with my studies without understanding this.
Thanks in advance.
The central part contains the low-frequency components. Bright pixels in the frequency image (right) indicate frequency components with strong magnitude in the spatial image (left). Note that there is no one-to-one mapping between pixels in the frequency domain and the spatial domain: each frequency pixel aggregates contributions from all the spatial pixels that contain the corresponding frequency.
The low-frequency region corresponds to the smooth areas of the image (such as the skin, the hat, the background, etc.). The high-frequency region, shown away from the central part, corresponds to sharp edges or structures whose intensity changes dramatically along one orientation (such as the hair or the boundary of the hat). But since the magnitude of those components is much lower than that of the smooth structures, the corresponding regions appear dark in the right image.
Note that the camera lens focuses on Lenna, so the background is blurred. If the focus were on the background instead, the vertical lines behind Lenna would be sharp, their edges would contribute high-frequency magnitude, and the region away from the center of the right image would be brighter.
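If it helps to see how such a magnitude image is typically produced, here is a small sketch (Python + NumPy/OpenCV); the file name is a placeholder:

```python
import cv2
import numpy as np

img = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

spectrum = np.fft.fftshift(np.fft.fft2(img))   # 2-D FFT, zero frequency moved to the center
magnitude = np.log1p(np.abs(spectrum))         # log scale, otherwise only the center is visible

# Scale to 0..255 for display: bright = frequency component with large magnitude
display = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("spectrum.png", display)
```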
I have taken on a project to automatically analyse microscope images of a specific type of micro fracture. The problem is that the camera was on an "auto" setting, so the micro fractures (which look like pin pricks) appear in a variety of shades from one photo to the next.
The background is also at various saturation levels, and there are some items (which appear very bright in the photos) that look like fractures but are something different, which I need to discount.
Could anyone recommend a technique I could investigate to help me solve this issue?
This is quite a normal situation in image recognition: different lighting conditions, different orientations of objects, different scales, different image resolutions. Methods have been developed to extract useful features from such images. I am not an expert in that area, but I suspect that any general book on the subject contains at least a brief review of image normalization and feature extraction methods.
If the micro fractures are sharp edge transitions, then a combination of simple techniques may allow you to find connected regions of strong edge points that correspond to those fractures. If the fractures also appear dark, then you should be able to distinguish them from the bright fracture-like features.
Briefly:
Generate an edge map.
(If necessary) Remove edge pixels corresponding to bright features.
Select an edge strength that separates the fractures from the background.
Clean up the edge map image.
Find connected regions in the edge map image.
If you want to find thin features with strong edges against a background, then one step could be to generate an edge map (or edge image) in which each pixel represents the local edge strength. A medium gray pixel surrounded by other medium gray pixels would have relatively low edge strength, whereas a black pixel surrounded by light gray pixels would have relatively high edge strength. Edge-finding techniques include Sobel, Prewitt, Canny, Laplacian, and Laplacian of Gaussian (LoG); I won't describe those here since Wikipedia has entries on them if you're not familiar with them.
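For example, a simple edge map can be built from Sobel gradients (Python + OpenCV; the file name is a placeholder):

```python
import cv2

img = cv2.imread("micrograph.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
edge_map = cv2.magnitude(gx, gy)                 # per-pixel edge strength
```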
Once you have an edge map, you could use a binary threshold to convert the edge map into black and white pixels. If you have evidence that fractures have an edge strength of 20, then you would use the value 20 as a binarization threshold on the image. Binarization will then leave you with a black and white edge map with white pixels for strong edges, and black pixels for the background.
Once you have the binarized edge map, you may need to perform a morphological "close" operation to ensure that white pixels that may be close to one another become part of the same connected region.
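Continuing from the edge-map sketch above, binarization and the "close" operation might look like this (the threshold 20 is the example value mentioned earlier, and the 3x3 kernel is an assumption):

```python
import cv2
import numpy as np

# edge_map is the edge-strength image from the previous sketch
edge_bw = np.where(edge_map > 20, 255, 0).astype(np.uint8)   # white = strong edge, black = background

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closed = cv2.morphologyEx(edge_bw, cv2.MORPH_CLOSE, kernel)  # joins white pixels that lie close together
```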
Once you've performed a close on the binarized edge map you can search for connected components (which may be called "contours" or "blobs"). For most applications it's better to identify 4-connected regions, in which a pixel is considered connected to the pixels to its top, left, bottom, and right, but not to its diagonal neighbors at the corners. If the features are typically single-pixel lines or meandering cracks, and if there isn't much noise, then you might be able to get away with identifying 8-connected regions.
Once you've identified connected regions you can filter based on the area, length of the longest axis, and/or other parameters.
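Again continuing from the previous sketch, labeling 4-connected regions and filtering them by area could look like this (the minimum area of 30 pixels is purely illustrative):

```python
import cv2

# closed is the cleaned-up binary edge map from the previous sketch
num, labels, stats, centroids = cv2.connectedComponentsWithStats(closed, connectivity=4)

fracture_labels = []
for i in range(1, num):                    # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 30:   # drop tiny, noise-like regions
        fracture_labels.append(i)
```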
If both dark and light features can have strong edges, and if you want to eliminate the bright features, then there are a few ways to eliminate them. In the original image you might clip the image by setting all values over a threshold brightness to that brightness. If the features you want to keep are darker than the median gray value of the image, then you could ignore all pixels brighter than the median gray value. If the background intensity varies widely, you might calculate a median for some local region.
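As a sketch of the clipping/median idea (the file name is a placeholder; using the median as the cut-off is just one of the options mentioned above):

```python
import cv2
import numpy as np

img = cv2.imread("micrograph.png", cv2.IMREAD_GRAYSCALE)

median_val = np.median(img)
clipped = np.minimum(img, median_val).astype(np.uint8)   # everything brighter than the median is clipped

# Alternatively, build a mask of bright pixels and ignore them in later steps
bright_mask = img > median_val
```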
Once we see your images I'm sure you'll get more suggestions. If the problem you're trying to solve turns out to be similar to one I worked on, which was to find cracks in highly textured surfaces, then I can be more specific about algorithms to try.
I'm trying to implement user-assisted edge detection using OpenCV.
Assume you have an image in which we need to find a polygonal shape. For the sake of discussion, let's say we need to find the top of a rectangular table in a picture. The user will click on the four corners of the table to help us narrow things down. Connecting those four points gives us a polygon, or four vectors.
But the user is not very accurate when clicking on those corners. So I'd like to use edge information from the image to increase the accuracy.
I'm using a Canny edge detector with a fairly high threshold to determine important edges in my image (more precisely, I'm scaling down, blurring, converting to grayscale, and then running Canny). How can I compute whether a vector aligns with an edge in my image? If I have a way to compute "alignment", my overall algorithm comes down to perturbing the locations of the four corner points and computing the total "alignment" of my polygon with the edges in the image, until I find an optimum.
What is a good way to define and compute this "alignment" metric?
You may want to try using FindContours to detect your table or any other contour. Then also build a contour from the user's input points. After this you can read about contour moments, which let you compare contours. You can compare all the contours from the image with the one built from the user's points and then select the closest match.
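A rough sketch of that idea (Python + OpenCV 4.x); the file name, the preprocessing values, and the click coordinates are placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("table.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 100, 200)

# OpenCV 4.x: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Contour built from the user's four (inaccurate) corner clicks
user_pts = np.array([[120, 80], [430, 95], [440, 310], [110, 300]],
                    dtype=np.int32).reshape(-1, 1, 2)

# matchShapes compares contours via their Hu moments: a smaller score means a closer match
best = min(contours,
           key=lambda c: cv2.matchShapes(c, user_pts, cv2.CONTOURS_MATCH_I1, 0.0))
```

Note that moment-based matching is invariant to translation, scale, and rotation, so it only picks a candidate contour; you would still need your perturbation/alignment scheme to snap the corner points precisely onto it.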