How is segmentation useful? - opencv

Recently I began to study computer vision. In a series of texts, I encountered "segmentation", which creates groups of pixels (super-pixels) based on pixel features.
Here is an example:
But I'm not sure how super-pixels are used in the first place. Do we use them for paint-like visualization, or for some kind of object detection?

The segmentation you did for this image is not very useful, since it does not split off anything meaningful. But consider this segmentation, for example:
It separates the duck and the other objects from the background.
You can find some useful applications of image segmentation here: https://en.wikipedia.org/wiki/Image_segmentation#Applications

Usually super-pixels alone are not enough to perform segmentation. They can be the first step, but further processing needs to be done to obtain the final segmentation.
In one of the papers I have read, they use seam processing to measure the energy of the edges.
There is another paper by Jitendra Malik about using super-pixels in segmentation.
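A minimal toy sketch of that idea (superpixels are only a first grouping step; a second merging step produces the segmentation). Real superpixel methods such as SLIC adapt their boundaries to image edges; here a fixed 8x8 block grid stands in for them, and the "further processing" is simply merging blocks by mean intensity:

```python
import numpy as np

# Toy image: a bright square on a dark, slightly noisy background.
img = np.zeros((64, 64), dtype=np.float64)
img[16:48, 16:48] = 1.0
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)

# Step 1: naive "superpixels" -- an 8x8 grid of blocks.
# (Real methods like SLIC snap block boundaries to image edges.)
block = 8
labels = np.zeros(img.shape, dtype=int)
for i in range(0, 64, block):
    for j in range(0, 64, block):
        labels[i:i + block, j:j + block] = (i // block) * 8 + (j // block)

# Step 2: merge superpixels by a feature (mean intensity) to get
# an actual foreground/background segmentation.
means = np.array([img[labels == k].mean() for k in range(64)])
foreground = np.isin(labels, np.where(means > 0.5)[0])

print(foreground[32, 32], foreground[0, 0])  # True False
```

The per-superpixel mean is far more robust to noise than per-pixel thresholding, which is exactly why superpixels make a good first step.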

Related

Image segmentation evaluation - single class problem

What is the best method to evaluate the quality of a segmentation algorithm when the majority of the image has multiple objects all belonging to the same class?
For example:
If I had an algorithm that segments books in this image of a bookcase - with a single bounding box per book.
Bookcase
I have had a look at various blog posts on segmentation evaluation, and the majority showcase multiclass problems where it is fairly obvious whether a prediction is accurate - the bounding boxes do or do not overlap for that class.
My first thought is that traditional IoU or thematic accuracy would not work on this kind of problem: an output containing a single 'book' polygon that covers the entire image (completely under-segmenting it) would still return high metrics, since almost all of the image is in fact 'book', even though it segments the image very poorly.
I'm not sure if I have framed my problem well, any help would be appreciated.
I would try to tackle this problem with these solutions:
1. Compute the Dice/IoU coefficients of the background class. This is a simple solution for semantic segmentation results: if the algorithm gets good results on both the foreground and background metrics, you can at least say it generally performs well.
2. Compute the average of the metrics over the separated, instance-segmented objects. Your problem looks like instance segmentation to me. Calculating the metrics for each distinct object is then easy: compute the Dice/Jaccard coefficient for every object separately and then average the results over all instances, as in this great article with more information about segmentation metrics.
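A minimal sketch of the per-instance averaging idea. The greedy best-overlap matching used here is an assumption for illustration; real benchmarks often use Hungarian matching or a fixed IoU threshold instead:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def mean_instance_dice(pred_labels, gt_labels):
    """For each ground-truth instance, take the best-overlapping
    predicted instance's Dice score, then average over instances.
    Label 0 is treated as background."""
    scores = []
    for g in np.unique(gt_labels):
        if g == 0:
            continue
        gt_mask = gt_labels == g
        cand = np.unique(pred_labels[gt_mask])
        cand = cand[cand != 0]
        best = max((dice(pred_labels == p, gt_mask) for p in cand), default=0.0)
        scores.append(best)
    return float(np.mean(scores)) if scores else 0.0

# Two "books": one segmented perfectly, one missed entirely.
gt = np.zeros((10, 10), dtype=int)
gt[1:5, 1:5] = 1
gt[6:9, 6:9] = 2
pred = np.zeros_like(gt)
pred[1:5, 1:5] = 1
print(mean_instance_dice(pred, gt))  # 0.5: perfect on book 1, 0 on book 2
```

Note how a single all-image 'book' polygon would still only get matched once per ground-truth book and would score poorly on each, which is exactly the behavior the pixel-wise metric lacks.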

What is the difference between image segmentation and feature extraction in image processing?

I have read an article about brain tumor segmentation. That article describes some methods to segment brain tumor cells from normal brain cells: pre-processing, segmentation and feature extraction. But I couldn't understand the difference between segmentation and feature extraction. I googled it, but I still didn't understand. Can anyone please explain the basic concept of these methods?
Segmentation is usually understood as the decomposition of a whole into parts. In particular, decomposing or partitioning an image into homogeneous regions.
Feature extraction is a broader concept, which can be described as finding areas with specific properties, such as corners, but it can also be any set of measurements, be they scalar, vector or other. Those features are commonly used for pattern recognition and classification.
A typical processing scheme could be to segment the cells out of the image, then characterize their shape by means of, say, edge-smoothness features, and finally tell normal cells from ill ones.
Image Segmentation vs. Feature Localization
• Image Segmentation: if R is a segmented region,
1. R is usually connected; all pixels in R are connected (8-connected or 4-connected).
2. R_i ∩ R_j = ∅ for i ≠ j; regions are disjoint.
3. ∪_{i=1}^n R_i = I, where I is the entire image; the segmentation is complete.
• Feature Localization: a coarse localization of image features based on proximity and compactness - more effective than image segmentation.
Feature extraction is a prerequisite for image segmentation.
When you face a project for segmenting a particular shape or structure in an image, one of the procedures to apply is to extract the relevant features for that region so that you can differentiate it from the other regions.
A simple, basic feature commonly used in image segmentation is intensity: you can group structures according to the intensity they show in the image.
Feature extraction is used for classification; relevant and significant features are used for labeling the different classes inside an image.

Tissue based image segmentation

I'm working on a project to do segmentation of tissue, and so far so good. But now I want to separate the damaged tissue from the good tissue. Here is an example image. As you can see, the good tissues are smooth and the damaged ones are not. My idea was to detect the edges to do the segmentation, but it gives bad results.
I'm open to any suggestions.
Use a convolutional neural network, for example any of the prebuilt ones in the Caffe package. Label the different kinds of areas in as many images as you have, then use many (thousands of) small (32x32) patches from those to train the network. This will produce much better results than any kind of handcrafted algorithm.
A very simple approach, which can be used as an intermediate test, could be the following:
1. Blur the image to reduce noise. This is an important step; OpenCV provides a built-in method for it.
2. Find contours using the OpenCV method findContours().
3. If the perimeter of a contour is greater than a set threshold (you will have to tune this value), consider it a smooth tissue; otherwise discard it.
This is a really simple approach, and a simple program can be written for it very quickly.

Differences between blob detection and image segmentation

Please, can anyone explain the main differences and the relations between these techniques? On the one hand, in many tutorials image segmentation is used as the basis of blob detection. But on the other hand, blob detection algorithms like connected-component labeling are equivalent to region-growing methods, which are related to image segmentation.
They have distinct concepts, however, sometimes they do overlap.
Let me try to explain it in layman's terms:
Blob detection refers to a specific application of image processing techniques, whose purpose is to isolate (one or more) objects (aka. regions) in the input image;
Image segmentation refers to a classification of image processing techniques used to partition an image into smaller segments (groups of pixels).
Image segmentation has many applications, and it so happens that one of them is object detection. This is where the confusion usually surfaces, because now these two terms mean similar things:
The application of image segmentation techniques for object detection, is exactly what blob detection is all about.
So I believe the main difference is: image segmentation refers to a vast group of techniques, and blob detection refers to an application of those techniques.

Object recognition and measuring size

I'd like to create a system for use in a factory to measure the size of the objects coming off the assembly line. The objects are slabs of stone, approximately rectangular, and I'd like the width and height. Each stone is photographed in the same position with a flash, so the conditions are pretty controlled. The tricky part is the stones sometimes have patterns on their surface (often marble with ripples and streaks) and they are sometimes almost black, blending in with the shadows.
I tried simply subtracting each image from a reference image of the background, but there are enough small changes in the lighting and the positions of rollers and small bits of machinery that the output is really noisy.
The approach I plan to try next is to use Canny's edge detection algorithm and then use some kind of numerical optimization (Nelder-Mead maybe) to match a 4-sided polygon to the edges. Before I home-brew something, though, is there an existing approach that works well in this kind of situation?
If it helps, it would be possible to 'seed' the algorithm with a patch of the image known to be within the slab (they're always lined up in the corner) to help identify its surface pattern and colors. I could also produce a training set of annotated images if necessary.
Some sample images of the background and some stone slabs:
Have you tried an existing image segmentation algorithm?
I would start with the maxflow algorithm for image segmentation by Vladimir Kolmogorov here: http://pub.ist.ac.at/~vnk/software.html
In the papers they fix areas of the image as belonging to a particular segment, which would help with your problem, but it may not be obvious how to do this in the software.
Deep learning algorithms for parsing scenes by Richard Socher might also help: http://www.socher.org/
And Eric Sudderth has at least one interesting method for visual scene understanding here: http://www.cs.brown.edu/~sudderth/software.html
I should add that I haven't actually used any of this software myself; it is mostly, if not all, research code and not particularly user friendly.

Resources