How to segment underwater seaweed and calculate the area?

I am trying to segment seaweed using k-means.
CLAHE fixes the exposure problem, but for seaweed close to the water surface it over-enhances the contrast, so k-means ends up clustering the same seaweed into different colors;
plain thresholding cannot separate it either.
CLAHE does handle the glare.
I have also tried adjusting the lightness (V) channel in HSV.
Is there any other image segmentation method that can be used?
Do I need deep learning?
Do I need image restoration?
Is there anything else that removes the overexposure?
Can depth be inferred from the colors in a single image?
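A minimal sketch of one way to wire this together, assuming a Lab pipeline: CLAHE applied to the lightness channel only (so the chroma that k-means clusters on is never touched by the contrast enhancement), then k-means over the Lab pixels. The file name "seaweed.jpg", k = 3 and the CLAHE parameters are assumptions to tune.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat bgr = cv::imread("seaweed.jpg");   // hypothetical input
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);

    // Equalize exposure on lightness only; CLAHE never shifts the chroma.
    std::vector<cv::Mat> ch;
    cv::split(lab, ch);
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    clahe->apply(ch[0], ch[0]);
    cv::merge(ch, lab);

    // k-means over the Lab pixels (k = 3 is a guess: seaweed, water, glare).
    cv::Mat samples = lab.reshape(1, lab.rows * lab.cols);
    samples.convertTo(samples, CV_32F);
    cv::Mat labels, centers;
    cv::kmeans(samples, 3, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // Area: count the pixels of the seaweed cluster (chosen by inspecting
    // 'centers'), then scale by the real-world area one pixel covers.
    int seaweedLabel = 0;                      // assumption: pick after inspection
    std::cout << "seaweed pixels: " << cv::countNonZero(labels == seaweedLabel) << "\n";
    return 0;
}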

Related

I have to detect a drastic change of intensity within an image

I have a thermal image of a human standing, carrying either a cold tool or a hot tool. I want to find where this tool is. Basically, I am trying to build an image-processing filter that gives me the area where a drastic change in gray-level intensity occurs against a relatively smooth background. I have tried the Canny edge detector, but it produces a lot of noise.
Hot Object To be detected: https://imgur.com/0ZyK6WP
Cold object to be detected: https://imgur.com/YYT9rHW
You might increase the Gaussian smoothing kernel to filter out the noise, but that could also lose the edges. In that case you want a filter that preserves edges while smoothing the image; something like the bilateral filter could help. It replaces the intensity of each pixel with a weighted average of the intensity values from nearby pixels.
Also, have you tried different threshold values for the non-max suppression step? That might help when dealing with false positives.
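If it helps, a quick sketch of that suggestion, assuming an 8-bit grayscale thermal image; the filter and Canny parameters are illustrative guesses, not tuned values.

#include <opencv2/opencv.hpp>

// Edge-preserving smoothing before Canny: the bilateral filter averages only
// over nearby pixels of similar intensity, so the tool/body edges survive.
cv::Mat detectToolEdges(const cv::Mat& grayThermal) {
    cv::Mat smooth, edges;
    cv::bilateralFilter(grayThermal, smooth, 9, 75.0, 75.0); // d, sigmaColor, sigmaSpace
    cv::Canny(smooth, edges, 50, 150);  // vary the hysteresis thresholds as suggested
    return edges;
}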

Alternative for Threshold in opencv

I am using threshold in OpenCV to find contours. My input is a hand image. Sometimes the threshold is not good, so I can't find the contours.
I have applied the preprocessing steps below.
1. GrabCut
cv::grabCut(image, result, rectangle, bgModel, fgModel, 3, cv::GC_INIT_WITH_RECT);
2. Grayscale conversion
cvtColor(handMat, handMat, CV_BGR2GRAY);
3. Median blur
medianBlur(handMat, handMat, MEDIAN_BLUR_K);
I used the code below to find the threshold:
threshold(handMat, handMat, 141, 255, THRESH_BINARY | CV_THRESH_OTSU); // bitwise OR, not ||; with CV_THRESH_OTSU set, the 141 is ignored
Sometimes I get good output, and sometimes the thresholded output is not good. I have attached the two output images.
Is there any way other than thresholding to find the contours?
Good threshold output:
Bad threshold output:
Have you tried an adaptive threshold? A single threshold value rarely works in real-life applications. Another truism: thresholding is a non-linear operation and hence unstable. The gradient, on the other hand, is linear, so you may want to find a contour by tracking the gradient if your background is smooth and of solid color. The gradient is also more reliable than thresholding under illumination changes or shadows.
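For reference, a minimal sketch of an adaptive threshold; the block size and the constant subtracted from the local mean are assumptions to tune.

#include <opencv2/opencv.hpp>

cv::Mat adaptiveBinarize(const cv::Mat& gray) {
    cv::Mat bin;
    // Each pixel is compared against a Gaussian-weighted local mean minus C,
    // which tolerates uneven illumination far better than one global value.
    cv::adaptiveThreshold(gray, bin, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                          cv::THRESH_BINARY, 31, 5);
    return bin;
}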
Grab-cut, by the way, uses color information to improve the segmentation at the boundary once you have already found 90% or so of the segment, so it is a post-processing step. Also, initializing grab-cut with a rectangle lets in a lot of contamination from background colors. Instead of a rectangle, use a mask: mark GC_FGD deep inside your initial segment, where you are sure the hand is; mark GC_BGD far outside your segment, where you are sure the background is; and mark GC_PR_FGD (probably foreground) everywhere else - this is what grab-cut will refine. To sum up, your initialization of grab-cut will look like a Russian doll with three layers indicating foreground (gray), probably foreground (white) and background (black). You can use dilate and erode to create these layers; see below.
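A sketch of that three-layer initialization, assuming you already have a rough binary mask of the hand; the kernel size and iteration count are guesses.

#include <opencv2/opencv.hpp>

cv::Mat refineWithGrabCut(const cv::Mat& image, const cv::Mat& roughSegment) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    cv::Mat sureFg, maybeFg;
    cv::erode(roughSegment, sureFg, kernel);    // inner doll: certain foreground
    cv::dilate(roughSegment, maybeFg, kernel);  // middle doll: segment plus a band

    cv::Mat mask(image.size(), CV_8UC1, cv::Scalar(cv::GC_BGD)); // outer doll: background
    mask.setTo(cv::GC_PR_FGD, maybeFg);  // probably foreground, to be refined
    mask.setTo(cv::GC_FGD, sureFg);      // sure foreground, deep inside

    cv::Mat bgModel, fgModel;
    cv::grabCut(image, mask, cv::Rect(), bgModel, fgModel, 3, cv::GC_INIT_WITH_MASK);
    return (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD); // refined segment
}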
Overall, my suggestion is to define what you want to do first. Are you looking for contours of arbitrary objects on an arbitrary moving background? If you are looking for the contour of a hand to find fingers on a relatively uniform background, I would:
1. use connected components or MSER to segment out the hand, possibly improving the result with grab-cut initialized with the conservative mask rather than a rectangle;
2. use convexity defects to find the fingers, if that is your goal (see the sketch below).
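A sketch of step 2, assuming a clean binary hand mask as input; the defect-depth cutoff of 20 pixels is an assumption.

#include <opencv2/opencv.hpp>
#include <algorithm>

std::vector<cv::Point> findFingerValleys(const cv::Mat& binaryHand) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binaryHand, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Point> valleys;
    if (contours.empty()) return valleys;

    // Take the largest contour as the hand.
    const auto& hand = *std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });

    std::vector<int> hullIdx;
    cv::convexHull(hand, hullIdx, false, false);  // hull as indices into 'hand'
    std::vector<cv::Vec4i> defects;               // start, end, farthest, depth*256
    cv::convexityDefects(hand, hullIdx, defects);

    for (const cv::Vec4i& d : defects)
        if (d[3] / 256.0 > 20.0)                  // deep defects sit between fingers
            valleys.push_back(hand[d[2]]);
    return valleys;
}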
One option is to try to find contours without binarizing the image.
If your input is in color, you can try changing the color space to enhance the difference between the hand and the background.
Otsu tries to find an optimal threshold; you can also set one manually, but Otsu is useful because if the illumination changes, the threshold adapts automatically.
There are also many other kinds of binarization: Sauvola, Bradley, Niblack, Kasar... but Otsu is simple and works well. I suggest pre-processing or post-processing if you want to improve the binarization result.
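If you want to try one of those local methods, the opencv_contrib module ximgproc exposes the Niblack family; a hedged sketch, assuming your build includes that module (the window size and k are guesses):

#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp>   // opencv_contrib

void binarizeBoth(const cv::Mat& gray, cv::Mat& otsu, cv::Mat& sauvola) {
    // Global Otsu: one threshold for the whole image.
    cv::threshold(gray, otsu, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    // Sauvola: a per-pixel threshold from the local mean and standard deviation.
    cv::ximgproc::niBlackThreshold(gray, sauvola, 255, cv::THRESH_BINARY,
                                   25, 0.2, cv::ximgproc::BINARIZATION_SAUVOLA);
}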

Color SURF detector

SURF by default works on a grayscale image. I am thinking of applying SURF to an HSV image. My method is to separate the channels into H, S and V, and to use S and V for keypoint detection. I compared the number of keypoints from S and V against RGB, and channel-wise, HSV gives more features.
I am not sure whether what I am doing is correct. I need some explanation of whether applying SURF to an HSV image is feasible. I have read a paper on applying SIFT to different color spaces, but nothing on SURF.
Is there a better way to achieve this?
Can we apply SURF to a color (HSV) space?
Thank you for your time.
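For concreteness, a sketch of the per-channel experiment described above; SURF lives in the opencv_contrib xfeatures2d module, and the Hessian threshold of 400 is an assumption.

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <iostream>

void surfPerChannel(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);                        // ch[0]=H, ch[1]=S, ch[2]=V

    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);
    for (int i = 0; i < 3; ++i) {
        std::vector<cv::KeyPoint> kps;
        surf->detect(ch[i], kps);              // the detector only sees one channel
        std::cout << "HSV"[i] << " channel: " << kps.size() << " keypoints\n";
    }
}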
Can we apply SURF to color, HSV space?
I didn't test it, but as far as I know, SIFT and SURF use detection techniques that are, in principle, quite similar:
The SIFT detector uses the Difference-of-Gaussians (DoG) technique to efficiently approximate the Laplacian-of-Gaussian (LoG); both are blob-detection techniques.
The SURF detector uses box filters/box blurs of arbitrary size to approximate the determinant of the Hessian, which is a blob-detection technique.
Both methods use some strategy to compute those blobs at multiple scales (SIFT: the DoG pyramid; SURF: integral images to scale the filter sizes). In the end, both methods detect blobs in the given 2D array.
So if SIFT can detect good features in your (H)SV channels, SURF should be able to do the same, because in principle both detect blobs. What you will be doing is detecting blobs in the hue/saturation/value channels:
hue blobs: regions of a similar color tone surrounded by different (all higher or all lower) color tones;
saturation blobs: regions of... yeah, of what? No idea how to interpret those;
value blobs: should give results very similar to the blobs of the RGB image converted to grayscale.
One thing to add: I'm only discussing the detector here! I have no idea how the SIFT/SURF descriptor is influenced by color data.
I didn't test it either, but what you could do is use the interest points' HSV values as an additional matching criterion. What I used in the original implementation, and what sped up matching image pairs, was the sign of the determinant of the Hessian matrix. The sign tells us whether it is a light blob on a dark background or a dark blob on a light background, and obviously one would not attempt to match a dark blob with a bright blob.
In a similar way, you could use the HSV values and their distance. Why match blue blobs with yellow blobs? It makes no sense, unless the white balance or lighting is completely messed up. Maybe my paper about matching line segments can help here; I used HSV there.
As for extracting SURF interest points from the different channels H, S and V, I agree with Micka's answer.
What you could also try is building a descriptor from the hue channel.

improving skin color detection with histogram evaluation

This is my attempt at skin color detection using OpenCV, after reading this cool tutorial:
detect a face with a Haar cascade;
build a 2D histogram of the face ROI (over hue and saturation) to model the skin color, with calcHist;
use this model to evaluate a new image with calcBackProject;
apply dilate, erode and blur filters to the resulting mask.
The best case is this one:
but there is no background clutter and no artificial lighting (only ambient sunlight in the room).
In other cases I obtain much worse results: there is a lot of noise in the background, the fingers come out black or noisy, and so on. And when I try to get a 0/1 mask to keep only the skin, the final result is not good.
Maybe I could apply other filters, like a threshold, or other techniques (some other clustering or filling method? I have looked at floodFill, but I don't have a seed point), or combine more histograms (an RGB histogram, for example)... but how?
All kinds of brainstorming are welcome.
This link suggests the use of thresholds in HSV space. You could create a mask from those thresholds and combine it with the back-projection using an AND operation.
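A sketch of that combination, assuming you already have the back-projection from calcBackProject; the HSV bounds are commonly quoted rough skin values, and the back-projection cutoff is a guess.

#include <opencv2/opencv.hpp>

cv::Mat skinMask(const cv::Mat& bgr, const cv::Mat& backProjection) {
    cv::Mat hsv, rangeMask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 48, 80), cv::Scalar(20, 255, 255), rangeMask);

    cv::Mat bpMask, combined;
    cv::threshold(backProjection, bpMask, 50, 255, cv::THRESH_BINARY);
    cv::bitwise_and(rangeMask, bpMask, combined);  // AND of the two masks

    // Clean up as in the original pipeline: dilate, erode, blur.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::dilate(combined, combined, kernel);
    cv::erode(combined, combined, kernel);
    cv::medianBlur(combined, combined, 5);
    return combined;
}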

OpenCV floor detection by segmentation

I'm working on a way to detect the floor in an image. I'm trying to accomplish this by reducing the image to areas of color and then assuming that the largest area is the floor. (We get to make some fairly extensive assumptions about the environment the robot will operate in.)
What I'm looking for is recommendations for algorithms suited to this problem. Any help would be greatly appreciated.
Edit: specifically, I am looking for an image segmentation algorithm that can reliably extract one area. Everything I've tried (mainly PyrSegmentation) seems to work by reducing the image to N colors, which causes false positives when the camera is looking at an empty area.
Since floor detection is the main aim, I'd say that instead of segmenting by color you could try separating by texture.
The eigen-transform paper describes a single-value descriptor of texture "roughness", using the average of the eigenvalues over a grayscale window of the image/video frame. On p. 78 of the paper they apply mean-shift segmentation to the eigen-transform output image, effectively separating it into different textures.
Since your images come from a video feed, there can be a lot of variation in lighting, so color segmentation might pose problems (unless you work in HSV or other color spaces, as mentioned above). The calculation of the eigenvalues is very simple and fast in OpenCV with the cvSVD() function.
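A rough sketch of that idea, not the paper's exact definition: score each window by the average of its smaller singular values, which comes out low on smooth regions (floor) and high on textured ones. The window size and the number of leading values to skip are assumptions.

#include <opencv2/opencv.hpp>

cv::Mat textureRoughness(const cv::Mat& gray, int win = 16, int skip = 4) {
    cv::Mat out(gray.size(), CV_32F, cv::Scalar(0));
    for (int y = 0; y + win <= gray.rows; y += win)
        for (int x = 0; x + win <= gray.cols; x += win) {
            cv::Mat patch, w;
            gray(cv::Rect(x, y, win, win)).convertTo(patch, CV_32F);
            cv::SVD::compute(patch, w, cv::SVD::NO_UV); // singular values, descending
            float sum = 0;
            for (int i = skip; i < w.rows; ++i)
                sum += w.at<float>(i);
            out(cv::Rect(x, y, win, win)) = sum / (w.rows - skip);
        }
    return out; // feed this into a segmentation step such as mean-shift
}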
If you can make the assumption of color constancy, your main issue is going to be changes in lighting that throw off your color detection.
To that end, convert your input image to HSV, HSL, CIE-Lab, YUV or some other luminance-separated color space, and segment the image based on just the color part (leave out the luminance value: V, L, L and Y, respectively, in the examples above). This will help you overcome the obstacle of shadows and variations in lighting.
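A sketch of that suggestion: flatten V in HSV so only hue and saturation matter, then merge similar-color regions with mean-shift; the spatial and color radii are assumptions.

#include <opencv2/opencv.hpp>

cv::Mat segmentByColorOnly(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    ch[2].setTo(128);                  // neutralize luminance (shadows, lighting)
    cv::merge(ch, hsv);

    cv::Mat flat, segmented;
    cv::cvtColor(hsv, flat, cv::COLOR_HSV2BGR);
    cv::pyrMeanShiftFiltering(flat, segmented, 20, 30, 2); // sp, sr, maxLevel
    return segmented;                  // then take the largest region as the floor
}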
