I'm working on a project to segment tissue. So far, so good. But now I want to separate the damaged tissue from the healthy tissue. Here is an example image. As you can see, the healthy tissue is smooth and the damaged tissue is not. My idea was to detect edges to do the segmentation, but it gives bad results.
I'm open to any suggestions.
Use a convolutional neural network, for example any of the prebuilt ones in the Caffe package. Label the different kinds of areas in as many images as you have, then use many (1000s of) small (32x32) patches from those to train the network. This will produce much better results than any kind of handcrafted algorithm.
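As a hedged sketch of the data-preparation step described above (the network itself would then be trained separately, e.g. with Caffe), the function below samples 32x32 patches centred on labelled pixels. The label-mask convention and class ids are illustrative assumptions, not a fixed API:

```python
import numpy as np

def extract_patches(image, label_mask, patch_size=32, patches_per_class=1000, seed=0):
    """Sample square training patches centred on labelled pixels.

    Assumes `label_mask` is an integer array with the same height/width
    as `image`, where each pixel holds a class id (e.g. 0 = background,
    1 = healthy tissue, 2 = damaged tissue).
    """
    rng = np.random.default_rng(seed)
    half = patch_size // 2
    patches, labels = [], []
    for cls in np.unique(label_mask):
        ys, xs = np.nonzero(label_mask == cls)
        # keep only centres far enough from the border for a full patch
        ok = ((ys >= half) & (ys < image.shape[0] - half) &
              (xs >= half) & (xs < image.shape[1] - half))
        ys, xs = ys[ok], xs[ok]
        if len(ys) == 0:
            continue
        idx = rng.choice(len(ys), size=min(patches_per_class, len(ys)), replace=False)
        for y, x in zip(ys[idx], xs[idx]):
            patches.append(image[y - half:y + half, x - half:x + half])
            labels.append(cls)
    return np.stack(patches), np.array(labels)
```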
A very simple approach which can be used as an intermediate test could be the following:
Blur the image to reduce the noise. This is an important step; OpenCV provides a built-in method for it.
Find contours using the OpenCV method findContours().
Then, if the perimeter of a contour is greater than a set threshold (you will have to tune this value), you can consider it to be smooth tissue; otherwise, discard it.
This is a really simple approach, and a simple program can be written for it very quickly; see the sketch below.
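A minimal OpenCV sketch of those three steps might look like this (the file name, the Otsu binarisation and the perimeter value are illustrative assumptions; also note that findContours() returns two values in OpenCV 4 but three in OpenCV 3):

```python
import cv2

img = cv2.imread("tissue.png", cv2.IMREAD_GRAYSCALE)  # illustrative path

# 1. Blur to suppress noise before contour detection.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# 2. Binarise (Otsu is one simple choice) and find contours.
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 3. Keep contours whose perimeter exceeds a hand-tuned threshold.
PERIMETER_THRESHOLD = 200  # must be tuned for your images
smooth = [c for c in contours if cv2.arcLength(c, True) > PERIMETER_THRESHOLD]
```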
I'm making a program to detect shapes from an R/C plane for a competition. I have no real images of the targets, but I do have computer-generated examples of them in the rules.
My question is: can I train my program to detect real-world objects based on computer-generated shapes, or should I find a different method to complete this task?
I would like to know before I foolishly generate 5k samples and find them useless in the end.
EDIT: I also don't know the exact color of the objects. If I feed the program samples of varying color, will it be a problem?
Thanks in advance!!
Edit 2: Here's what groups from my school detected in previous years:
As you can see, the detected images are not nearly as flawless as what would appear in real life. If you can suggest a better method, that would help.
If you think that the real images will have unique colors with simple geometric shapes, then you could try to create a normalized hue histogram and use it to train an SVM classifier. The benefit of using a hue histogram is that it is rotation- and scale-invariant.
Keep a few precautions in mind:
Don't forget to remove illumination effects.
Sometimes white and black pixels create problems in the hue-histogram calculation, so try to exclude them by considering only those pixels with S > 0 and V > 0 in the S and V channels of the HSV image, as in the sketch below.
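A minimal sketch of such a masked, normalized hue histogram in OpenCV (the bin count is an assumption; the resulting vectors could then be fed to any SVM implementation):

```python
import cv2
import numpy as np

def hue_histogram(bgr_image, bins=180):
    """Normalized hue histogram, ignoring pixels with S == 0 or V == 0."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)
    # hue is unreliable where saturation or value is zero, so mask those out
    mask = ((s > 0) & (v > 0)).astype(np.uint8) * 255
    hist = cv2.calcHist([hsv], [0], mask, [bins], [0, 180]).flatten()
    return hist / (hist.sum() + 1e-9)  # normalize to a probability distribution
```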
I would suggest you use real-world images, because performance depends largely on the training data (my personal experience). Also, why not try SIFT/SURF descriptors for training the SVM, since SIFT/SURF are scale- as well as rotation-invariant?
I previously asked the question "How to extract numbers from an image" (LINK), and I finally got that step working, but there are some test cases that lead to awful outputs when I try to recognize the digits. Consider this image as an example:
This image is low-contrast (from my point of view). I tried to adjust its contrast and the results were still unacceptable. I also tried to sharpen it and then applied gamma correction, but the results were still not good, so the extracted numbers are not recognized well by the classifier.
This is the image after sharpening + gamma correction:
Number 4 after separation:
Could anybody tell me the best way to solve such a problem?
Sharpening is not always the best tool to approach a problem like this. Contrary to what the name implies, sharpening does not "recover" information to add detail and edges back into an image. Instead, sharpening is a class of operations that increase local contrast along edges.
Because your original image is highly degraded, this sharpening operation looks to be adding a lot of noise in, and generally not making anything better.
There is another class of algorithms called "deblurring" algorithms that attempt to actually reconstruct image detail through (much more complex) mathematical models. Some versions of this are blind deconvolution, regularized deconvolution, and Wiener deconvolution.
However, it is important to note that all of these methods are approximations - once image content is lost through an operation such as blurring, it can (almost) never be fully recovered. Also, these methods are generally much more complex.
The best way to handle these situations is to make sure they never happen: ensure good focus during image capture, use a system with a resolution well suited to your task, and control the lighting environment. However, when these measures do not or cannot work, image reconstruction techniques are needed.
Your image is blurred, and I suggest you try Wiener deconvolution. You can assume the point spread function is a Gaussian and observe what happens during the deconvolution process. Since you do not know the blur kernel in advance, blind deconvolution is an alternative.
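A hedged sketch of Wiener deconvolution with an assumed Gaussian PSF, using scikit-image (the file name, kernel size and sigma are illustrative; unsupervised_wiener estimates the regularization parameter itself):

```python
import numpy as np
from skimage import color, io, restoration

def gaussian_psf(size=15, sigma=3.0):
    """Isotropic Gaussian point spread function, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

img = color.rgb2gray(io.imread("digits.png"))  # assumes a colour input image
deconvolved, _ = restoration.unsupervised_wiener(img, gaussian_psf())
```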
We as humans can recognize these two images as the same image:
For a computer, it would be easy to recognize these two images if they were the same size, so we have to add a preprocessing stage before recognition, like scaling. But if we look closely at the scaling process, we see that it's not an efficient approach.
Now, could you help me find some way to convert images into representations that don't depend on size or pixel location, to serve as input for a recognition method?
Thanks in advance.
I have several ideas:

- Apply several color thresholds to the image. This way you get large areas of the same color. The shapes of those areas can be traced with curves, which are mathematical objects. Do this for both the larger and the smaller image and see if the curves match.
- Try to define key spots in the area. I don't know exactly how this works, but you can look up face-detection algorithms. In such an algorithm there is a mathematical model of how a face should look. If you define enough objects this way, you can compare multiple objects in the images to see if they match at the same spots.
- You could also see if the Predator algorithm can accept images of multiple sizes. If so, your problem is solved.
It looks like you assume that the human brain recognizes images in a computationally efficient way, which is not really true. That algorithm is so complicated that we haven't figured it out yet, and a large part of your brain is devoted to dealing with visual data.
When it comes to software, there are some scale- (or affine-) invariant algorithms. One such algorithm is the LeNet-5 neural network.
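Separately from the neural-network route, a minimal sketch of a classic scale-invariant technique is SIFT keypoint matching in OpenCV (file names are illustrative; SIFT is available in the main cv2 module as of OpenCV 4.4):

```python
import cv2

img_small = cv2.imread("logo_small.png", cv2.IMREAD_GRAYSCALE)  # illustrative
img_large = cv2.imread("logo_large.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_small, None)
kp2, des2 = sift.detectAndCompute(img_large, None)

# Lowe's ratio test keeps only distinctive matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print(f"{len(good)} scale-invariant matches")
```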
Does OpenCV have an implementation of shape context matching? I've found only the matchShapes() function, which does not work for me. I want to get a set of corresponding features from shape context matching. Is it a good idea to use it to compare a detected contour across two different images and to find its rotation and displacement?
Also, some example code would be very helpful for me.
I want to detect, for example, a pink square, and in the second case a pen. Other examples could be squares with some holes, stars, etc.
The basic steps of image processing are:
Image Acquisition > Preprocessing > Segmentation > Representation > Recognition
And what you are asking for seems to lie within the representation part of this general pipeline. You want some features that describe the objects you are interested in, right? Before sharing what I've done for simple hand-gesture recognition, I would like you to consider what you actually need. A lot of the time, simplicity will make it a lot easier. Consider a fixed color on your objects, and consider background subtraction (these two mainly tie into preprocessing and segmentation). As for representation: what features are you interested in, and can you exclude the need for some of them?
My project group and I have taken a simple approach to preprocessing and segmentation, choosing a green glove for our hand. Here's an example of the glove, camera and detection on the screen:
We have used a threshold on convexity defects, tuned to find the defects between fingers, and we have calculated the side ratio of a rotated rectangular bounding box to see how square our blob is. With only four different hand gestures chosen, we are able to distinguish them with just these two features, as sketched below.
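A hedged sketch of how those two features could be computed in OpenCV (this is not our exact code; the depth threshold is an illustrative value, and the input is assumed to be the binary glove mask from the segmentation step):

```python
import cv2
import numpy as np

def gesture_features(mask):
    """Count deep convexity defects and measure bounding-box squareness."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)  # assume the glove is the largest blob

    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    # keep only defects deep enough to be gaps between fingers;
    # OpenCV stores defect depth as fixed-point (depth * 256)
    DEPTH_THRESHOLD = 10000  # illustrative value
    finger_defects = 0
    if defects is not None:
        finger_defects = int(np.sum(defects[:, 0, 3] > DEPTH_THRESHOLD))

    # squareness: side ratio of the rotated bounding box
    _, (w, h), _ = cv2.minAreaRect(contour)
    ratio = min(w, h) / max(w, h) if max(w, h) > 0 else 0.0
    return finger_defects, ratio
```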
The functions we have used and the measurements are all available in the OpenCV documentation on structural analysis, and access to values in vectors (which we've used a lot) is covered in the C++ documentation for vectors.
I hope you can use the train of thought put into this; if you want more specific info, I'll be happy to comment. Enjoy.
I'm thinking of starting a school project where I'll use genetic algorithms to optimize digital sharpening of images. I've been playing around with unsharp masking (USM) techniques in Photoshop. Basically, I want to create software that optimizes the parameters (i.e. blur radius, type of blur, blending of the image) to find the "best-fit" set of filters.
I'm quickly planning this project before starting it, and I can't think of a good fitness function for the 'selection' part. How would I determine the 'quality' of a filter set, or measure how sharp an image is?
Also, I will be programming in Python (with the Python Imaging Library), since it's the only language I'm proficient with. Should I learn a low-level language instead?
Any advice/tips on anything is greatly appreciated. Thanks in advance!
tl;dr How do I measure how 'sharp' an image is?
If it's for tuning parameters, you could take a known image and apply a known blur/low-pass filter, then sharpen it with your GA+USM algorithm and calculate your fitness function against the original image, e.g. maybe something as simple as the mean absolute error (a sketch follows below). You may need to create different datasets, e.g. landscape images (mostly sharp, in focus, with large depth of field) and portrait images (which could have large areas deliberately out of focus and "soft"), along with low-noise and noisy images. Sharpening noisy images is actually quite a challenge.
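A minimal sketch of that fitness setup in Python with OpenCV (the file name and parameter values are illustrative assumptions):

```python
import cv2
import numpy as np

def unsharp_mask(image, radius, amount):
    """Classic USM: image + amount * (image - blurred)."""
    blurred = cv2.GaussianBlur(image, (0, 0), radius)
    return cv2.addWeighted(image, 1.0 + amount, blurred, -amount, 0)

def fitness(original, degraded, params):
    """Lower is better: mean absolute error against the known-sharp original."""
    radius, amount = params
    restored = unsharp_mask(degraded, radius, amount)
    return float(np.mean(np.abs(original.astype(np.float32) -
                                restored.astype(np.float32))))

# Build a test case by blurring a known-sharp image (path is illustrative).
original = cv2.imread("sharp.png", cv2.IMREAD_GRAYSCALE)
degraded = cv2.GaussianBlur(original, (0, 0), 2.0)
print(fitness(original, degraded, params=(1.5, 0.8)))
```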
It would definitely be worth taking a look at Bruce Fraser's work on sharpening techniques for Photoshop etc.
It might also be worth checking out Imatest (www.imatest.com) to see if there is anything regarding sharpness/resolution, and you might consider resolution charts as well.
Finally, I seriously doubt one set of ideal parameters exists for USM; the optimum parameters will be image-dependent and indeed a personal preference (that's why I suggest starting from a known sharp image and blurring it). Understanding the type of image is probably just as important, and in itself a very interesting and challenging problem, although perhaps basic heuristics like image variance and an edge histogram would reveal suitable clues (one such heuristic is sketched below).
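For a quick sharpness heuristic along those lines, the variance of the Laplacian is one common choice (note it is only a relative measure, meaningful when comparing versions of the same scene):

```python
import cv2

def sharpness(image_gray):
    """Variance of the Laplacian: higher means more edge energy."""
    return cv2.Laplacian(image_gray, cv2.CV_64F).var()
```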
Anyway, just a thought; hopefully some of the above is useful.