I am developing an image segmentation algorithm. I used DeepLabv3+ to train a model, which performs fine. The question is whether I can improve its performance. I would like to know if there are preprocessing methods to apply before feeding the image to the network.
I want to detect the charging connector, as depicted in the image:
I can detect it without any problem, but I am just looking for improvements, if any exist.
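For what it's worth, here is a minimal sketch of two preprocessing steps that are commonly tried before feeding images to a segmentation network: CLAHE contrast equalization and simple resizing/normalization. Whether they actually help DeepLabv3+ depends entirely on your data, and the 513x513 input size is only an assumption taken from common DeepLab configurations:

```python
import cv2
import numpy as np

def preprocess(image_bgr, size=(513, 513)):
    """Illustrative preprocessing: CLAHE on the lightness channel, then resize and scale."""
    # Equalize local contrast in LAB space; this often helps with uneven lighting.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    image = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Resize to the network's input size and scale pixel values to [0, 1].
    image = cv2.resize(image, size, interpolation=cv2.INTER_LINEAR)
    return image.astype(np.float32) / 255.0
```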
I am trying to make a deep learning model to detect and read number plates using techniques like CNNs. I would be building the model in TensorFlow, but I still don't know what the best approach to build such a model would be.
I have checked a few models, like this one:
https://matthewearl.github.io/2016/05/06/cnn-anpr/
I have also checked some research papers, but none show the exact way.
So the steps I am planning to follow are:
Image preprocessing using OpenCV (grayscale, transformations, etc.; I don't know much about this part)
Licence plate detection (probably by the sliding window method)
Training a CNN on a synthetic dataset, as in the link above.
My questions:
Is there any better way to do this?
Can an RNN also be combined after the CNN to handle variable-length numbers?
Should I prefer detecting and recognising individual characters rather than the whole plate?
There are also many older methods that prefer image preprocessing followed by passing the result directly to OCR. Which will be best?
PS: I want to make a commercial real-time system, so I need good accuracy.
Firstly, I don't think combining an RNN and a CNN can achieve a real-time system. And I personally prefer detecting individual characters if I want a real-time system, because there will not be more than about 10 characters on a licence plate. When detecting plates of variable length, detecting individual characters is more feasible.
Before I learned deep learning, I had also tried to use OCR to detect plates. In my case, OCR was fast, but the accuracy was limited, especially when the plate was not clear enough. Even image processing cannot rescue some unclear cases.
So if I were you, I would try the following:
Simple image preprocessing on the whole image
Licence plate detection (probably by the sliding window method)
Image processing (filters and geometric transformations) on the extracted plate region to make it clearer, then separating the characters.
Applying a CNN to each character. (Maybe I would try some small CNNs for the sake of real-time performance, such as the LeNet used on the MNIST handwritten digit data; see the sketch below. Multithreading might be needed.)
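To make the last step concrete, here is a minimal Keras sketch of such a small per-character network. The 32x32 grayscale input crops and the 36-symbol alphabet (0-9, A-Z) are assumptions you would adjust to your plates:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_char_cnn(num_classes=36):
    """LeNet-style classifier for single character crops."""
    return models.Sequential([
        layers.Input(shape=(32, 32, 1)),          # grayscale character crop
        layers.Conv2D(6, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_char_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```

Trained on individual character crops, a network this small should be cheap enough to run once per segmented character in real time.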
I hope my response helps.
I started learning image recognition a few days back, and I would like to do a project that identifies different brand logos on Android.
For example: if I take a picture of the Nike logo on an Android device, then it needs to display "Nike".
Low computational time would be the main criterion for me.
For this, I have done some work and started learning from the OpenCV sample examples.
Which image recognition approach would be best for me?
1) I came to know from template matching that its applicability is limited mostly by the available computational power, as identification of big and complex templates can be time-consuming (so I don't want to use it).
2) Feature-based detectors like SIFT/SURF/STAR (as far as I know, this would be a better option for me).
3) How about deep learning and pattern recognition concepts? (I was digging into this and don't know whether it would be an option for me.) Can any of you let me know whether I can use this, and why it would be a better choice for me compared with 1 and 2?
4) Haar cascade classifiers (from one of the posts on SO, I came to know that Haar is not rotation- and scale-invariant, so I haven't concentrated much on this). Would this be a better option for me to focus on?
I'm now running one of my pet projects, and it requires face recognition: detecting the area containing a face in a photo, if one exists, on a Raspberry Pi. So I've done some analysis of that task.
And I found this approach. The key idea is to avoid scanning the entire picture with windows of different sizes, as in OpenCV, and instead to divide the entire photo into 49 (7x7) squares and train the model not only to detect the presence of one of the classes inside each square, but also to determine the location and size of the detected object.
It's only 49 runs of the trained model, so I think it's possible to execute this in less than a second even on non-state-of-the-art smartphones. Anyway, it will be a trade-off between accuracy and performance.
About the model
I will use a VGG-like model, probably a bit simpler than even VGG-11 (configuration A).
In my case a ready dataset already exists, so I can train the convolutional network with it.
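To illustrate the idea (not the exact model from the linked approach), here is a rough Keras sketch: a small VGG-style backbone that downsamples a 224x224 input to a 7x7 grid, where each cell predicts class scores plus an objectness score and a box. All sizes and channel counts here are guesses:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_grid_detector(num_classes=10):
    """Small VGG-style backbone; 224x224 input halved 5 times gives a 7x7 grid."""
    def block(x, filters):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.MaxPooling2D()(x)

    inputs = layers.Input(shape=(224, 224, 3))
    x = inputs
    for filters in (32, 64, 128, 256, 256):
        x = block(x, filters)

    # Per-cell prediction: class scores + objectness + (x, y, w, h) box.
    outputs = layers.Conv2D(num_classes + 5, 1)(x)  # shape (7, 7, num_classes + 5)
    return models.Model(inputs, outputs)

model = build_grid_detector()
model.summary()
```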
Why is the deep learning approach better than options 1-3 you mentioned? Because of its higher accuracy for this kind of task; it's practically proven. You can check this on Kaggle: the majority of the top models in classification competitions are based on convolutional networks.
The only disadvantage for you is that it would probably be necessary to create your own dataset to train the model.
Here is a post that I think can be useful for you: Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition. Another one: Logo recognition in images.
2) Feature-based detectors like SIFT/SURF/STAR (as far as I know, this would be a better option for me).
Just remember that SIFT and SURF are both patented so you will need a license for any commercial use (free for non-commercial use).
4) Haar cascade classifiers (from one of the posts on SO, I came to know that Haar is not rotation- and scale-invariant, so I haven't concentrated much on this). Would this be a better option for me to focus on?
It works (if I understand your question right); much of this depends on how you trained your classifier. You could train it to detect all kinds of rotations and scales. Anyway, I would discourage you from going for this option, as I think the other possible solutions are better suited to the case.
I am a very new student of machine learning. I just wanted to ask: what are possible ways to improve a method (Naive Bayes, for example) to get better results classifying images into text or non-text images, instead of just inputting a number of images and telling the system which have text and which do not?
Thanks in advance
The state of the art in such problems is deep neural networks with several convolutional layers. See this article for an example of image classification using deep convolutional nets. Your problem (just determining whether an image has text or not) is much easier than the general image classification problem the authors consider, so you'd probably get away with a much simpler network architecture.
Nowadays you don't need to implement these things yourself; there are efficient and GPU-accelerated implementations freely available, for instance Caffe, Torch7, Keras...
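As a starting point, here is a minimal Keras sketch of such a simpler architecture for the binary text / non-text decision; the 64x64 input size and layer widths are arbitrary choices to tune on your data:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolutional net that outputs P(image contains text).
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary text / non-text output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```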
I have been taking an image processing and machine learning course for a couple of weeks. Every day I learn something new about image manipulation, as well as how to train a system to recognize patterns in an image.
My question is: in order to do successful image recognition, what are the steps one has to follow, for example denoising, use of LDA or PCA, then use of a neural network? I am not looking for any algorithm, just a brief outline of each of the steps (5-6) from capturing an image to testing an input image for similarity.
P.S. To the mods: before labelling this question as not constructive, I know it's not constructive, but I don't know which site to put it on, so please redirect me to the right Stack Exchange site.
Thanks.
I would describe the pipeline as follows; I have omitted many bullet items.
1. Acquire images with ground-truth labeling:
   - Amazon Mechanical Turk
   - images and labels from Flickr
2. Compute features for each image:
   - directly stretch the image into a column vector
   - use more complicated features such as bag of words or LBP
3. Post-process the features to reduce the effect of noise, if needed:
   - sparse coding
   - max pooling
   - whitening
4. Train a classifier/regressor given the (feature, label) pairs:
   - SVM
   - boosting
   - neural network
   - random forest
   - spectral clustering
   - heuristic methods...
5. Use the trained model to recognize unseen images and evaluate the result with some metrics.
By the way: traditionally, we would use dimension-reduction methods such as PCA to make the problem tractable, but recent research seems to care little about that.
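As a concrete illustration of steps 2-4 (with PCA standing in for the traditional dimension-reduction step), here is a hedged scikit-learn sketch using random stand-in data; with real images you would replace X with your stretched pixel vectors or other features:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins: X holds images stretched into vectors (step 2), y the labels (step 1).
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32))        # 200 fake 32x32 grayscale images
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA plays the role of the dimension-reduction / post-processing step (step 3),
# and an SVM is the classifier (step 4).
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))  # step 5: evaluate on unseen data
```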
A few months back I developed a face recognition system using local binary patterns (LBP).
In this method I first took an image, either from local storage or the camera; then, using the local binary pattern method, I processed each block of the input image. After getting the LBP of the input image, I computed the chi-square distance between LBP feature histograms, comparing the input's histogram against those of the stored database images produced by the same process. That way I was able to match the same face.
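For illustration, here is a rough sketch of the two building blocks described above, using scikit-image's local_binary_pattern. For brevity it computes one histogram for the whole image; in the method described above you would compute one histogram per block and concatenate them:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of a grayscale (uint8) image."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2  # the uniform method yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Matching: the stored face with the smallest chi-square distance wins, e.g.
# best = min(database.items(),
#            key=lambda kv: chi_square(lbp_histogram(query), lbp_histogram(kv[1])))
```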
Amazon Mechanical Turk is a service where you pay people to do work for you.
SIFT is a descriptor for interest points. By comparing those descriptors, you can find correspondences between images. (SIFT fits into step 2.)
When doing step 4, you can choose to combine the results of different classifiers, or simply trust the result of one classifier. That depends on the situation.
Are you going to label the location of the affected region? I am not sure what you are going to do.
I couldn't comment, so I posted another answer.
My image dataset is from http://www.image-net.org. There are various synsets for different things like flora, fauna, persons, etc.
I have to train a classifier which predicts 1 if the image belongs to the floral synset and 0 otherwise.
Images belonging to floral synset can be viewed at http://www.image-net.org/explore, by clicking on the plant, flora, plant life option in the left pane.
These images include a wide variety of flora: trees, herbs, shrubs, flowers, etc.
I am not able to figure out what features to use to train the classifier. There is a lot of greenery in these images, but there are many flower images that don't have much of a green component. Another feature is the shape of the leaves and the petals.
It would be helpful if anyone could suggest how to extract this shape feature and use it to train the classifier. Also suggest what other features could be used to train the classifier.
And after extracting features, which algorithm is to be used to train the classifier?
I am not sure that shape information is the right approach for the dataset you have linked to.
Just having a quick glance at some of the images, I have a few suggestions for classification:
Natural scenes rarely have straight lines, so try line detection.
You can discount scenes which have swathes of "unnatural" colour in them.
If you want to try something more advanced, I would suggest that a hybrid of entropy and pattern recognition would form a good classifier, as natural scenes have a lot of both.
Attempting template matching/shape matching for leaves/petals will break your heart; you need to use something much more generalised.
As for which classifier to use... I'd normally advise k-means initially, and once you have some results, determine whether the extra effort to implement Bayes or a neural net would be worth it.
Hope this helps.
T.
Expanded:
"Unnatural Colors" could be highly saturated colours outside of the realms of greens and browns. They are good for detecting nature scenes as there should be ~50% of the scene in the green/brown spectrum even if a flower is at the center of it.
Additionally, straight-line detection should yield few results in nature scenes, as straight edges are rare in nature. On a basic level: generate an edge image, threshold it, and then search for line segments (pixels which approximate a straight line).
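A minimal OpenCV sketch of that edge-then-lines recipe; all thresholds here are illustrative, not tuned:

```python
import cv2
import numpy as np

def count_straight_lines(image_bgr):
    """Edge image -> threshold -> probabilistic Hough line segments."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # Canny both edges and thresholds the result
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    return 0 if lines is None else len(lines)

# Few detected segments -> more likely a natural scene.
```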
Entropy requires some machine vision knowledge. You would approach the scene by determining localised entropies and then histogramming the results; here is a similar approach that you will have to use.
You would need to be advanced at machine vision to attempt pattern recognition, as this is a difficult subject and not something you can throw together in a code sample. I would only attempt to implement these as a classifier once colour and edge (line) information has been exhausted.
If this is a commercial application, then a machine vision expert should be consulted. If this is a college assignment (unless it is a thesis), colour and edge/line information should be more than enough.
HOG features are pretty much the de facto standard for these kinds of problems, I think. They're a bit involved to compute (and I don't know what environment you're working in) but powerful.
A simpler solution which might get you up and running, depending on how hard the dataset is, is to extract all overlapping patches from the images, cluster them using k-means (or whatever you like), and then represent each image as a distribution over this set of quantised image patches for a supervised classifier like an SVM. You'd be surprised how often something like this works, and it should at least provide a competitive baseline.
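Here is a hedged scikit-learn sketch of that bag-of-visual-words baseline, with toy random images standing in for a real dataset; the patch size, stride, and vocabulary size are arbitrary choices:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_patches(image, patch=8, stride=4):
    """All overlapping patch x patch windows of a grayscale image, flattened."""
    h, w = image.shape
    out = [image[i:i + patch, j:j + patch].ravel()
           for i in range(0, h - patch + 1, stride)
           for j in range(0, w - patch + 1, stride)]
    return np.array(out, dtype=np.float32)

def bovw_histogram(image, kmeans):
    """Represent an image as a distribution over the quantised patches."""
    words = kmeans.predict(extract_patches(image))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(np.float32)
    return hist / hist.sum()

# Sketch of the full flow with toy data standing in for real images:
rng = np.random.default_rng(0)
images = [rng.random((32, 32), dtype=np.float32) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

# Build the patch vocabulary, then train an SVM on the histograms.
kmeans = KMeans(n_clusters=50, n_init=10)
kmeans.fit(np.vstack([extract_patches(im) for im in images]))
X = np.array([bovw_histogram(im, kmeans) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
```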