Does anyone know of recent academic work which has been done on logo recognition in images?
Please answer only if you are familiar with this specific subject (I can search Google for "logo recognition" myself, thank you very much).
Anyone who is knowledgeable in computer vision and has done work on object recognition is welcome to comment as well.
Update:
Please address the algorithmic aspects (what approach you think is appropriate, papers in the field, whether it should work for real-world data and has been tested on it, efficiency considerations) and not the technical side (the programming language used, or whether it was done with OpenCV...).
Work on image indexing and content-based image retrieval can also help.
You could try to use local features like SIFT here:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
It should work because logo shapes are usually constant, so the extracted features should match well.
The workflow will be like this (a rough OpenCV sketch follows the list):
Detect interest points (e.g. with the Harris corner detector); for the Nike logo these are its two sharp ends.
Compute descriptors (like SIFT, a 128-dimensional vector per keypoint).
At training time, store them; at matching time, find the nearest neighbours of every feature in the database built during training. You end up with a set of matches (some of them probably wrong).
Weed out the wrong matches using RANSAC. This gives you the matrix describing the transform from the ideal logo image to the image in which you are looking for the logo. Depending on the settings, you can allow different kinds of transforms (translation only; translation and rotation; affine).
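Here is a minimal sketch of that workflow in Python with OpenCV, assuming OpenCV 4.4+ (where SIFT is in the main module); the file names, ratio threshold and RANSAC tolerance are placeholder assumptions:

```python
# Rough sketch of the SIFT + RANSAC workflow described above.
# Assumes OpenCV >= 4.4 and two placeholder image files:
# a clean logo template and a photo to search in.
import cv2
import numpy as np

logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)    # ideal logo image
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # photo to search

sift = cv2.SIFT_create()
kp_logo, des_logo = sift.detectAndCompute(logo, None)
kp_scene, des_scene = sift.detectAndCompute(scene, None)

# Nearest-neighbour matching with Lowe's ratio test to drop weak matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des_logo, des_scene, k=2)
good = []
for pair in raw_matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

if len(good) >= 4:
    src = np.float32([kp_logo[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_scene[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC weeds out wrong matches and estimates the logo -> scene transform.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("inliers:", int(mask.sum()), "of", len(good))
else:
    print("not enough matches, logo probably not present")
```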
Szeliski's book has a chapter (4.1) on local features.
http://research.microsoft.com/en-us/um/people/szeliski/Book/
P.S.
I assumed you want to find logos in photos, for example all Pepsi billboards, so they could be distorted. If you need to find a TV channel logo on the screen (where it is not rotated or scaled), you can do it more simply, with template matching or something similar (see the sketch below).
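For that easier case, a rough template-matching sketch with OpenCV (the file names and the 0.8 score threshold are placeholder assumptions):

```python
# Simple template matching for a logo that is not rotated or scaled,
# e.g. a TV channel logo in a fixed screen corner. File names are placeholders.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("channel_logo.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# Threshold chosen ad hoc; tune it on real frames.
if max_val > 0.8:
    print("logo found at", max_loc, "score", max_val)
else:
    print("logo not found")
```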
Conventional SIFT does not consider color information. Since logos usually have constant colors (though the exact color depends on lighting and the camera), you might want to incorporate color information somehow.
We worked on logo detection/recognition in real-world images. We also created a dataset, FlickrLogos-32, and made it publicly available, including the data, ground truth and evaluation scripts.
In our work we treated logo recognition as a retrieval problem to simplify multi-class recognition and to allow such systems to easily scale to many (e.g. thousands of) logo classes.
Recently, we developed a bundling technique called Bundle min-Hashing that aggregates spatial configurations of multiple local features into highly distinctive feature bundles. The bundle representation is usable for both retrieval and recognition; example heatmaps of logo detections are shown in the papers.
You will find more details on the internal operations, potential applications of the approach, experiments on its performance and of course also many references to related work in the papers [1][2].
I worked on this: "Trademark matching and retrieval in sports video databases".
You can get a PDF of the paper here: http://scholar.google.it/scholar?cluster=9926471658203167449&hl=en&as_sdt=2000
We used SIFT as trademark and image descriptors, and normalized threshold matching to compute the distance between models and images. In our latest work we have been able to greatly reduce computation using meta-models, created by evaluating the relevance of the SIFT points that are present in different versions of the same trademark.
I'd say that in general working with videos is harder than working on photos due to the very bad visual quality of the TV standards currently used.
Marco
I worked on a project where we had to do something very similar. At first I tried Haar training techniques using OpenCV.
It worked, but was not an optimal solution for our needs. Our source images (where we were looking for the logo) were a fixed size and contained only the logo. Because of this we were able to use cvMatchShapes with a known good match and compare the returned value to decide whether it was a good match (see the sketch below).
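For reference, a rough sketch of the same idea with the modern OpenCV Python API (cv2.matchShapes is the Python counterpart of cvMatchShapes; the file names, Otsu thresholding and the 0.1 cutoff are placeholder assumptions, not our original code):

```python
# Sketch of the cvMatchShapes idea with the modern OpenCV Python API.
# Both images are placeholders: a known-good logo and a fixed-size crop
# assumed to contain only the candidate logo. Assumes OpenCV 4.x.
import cv2

def largest_contour(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

reference = largest_contour("known_good_logo.png")
candidate = largest_contour("candidate_crop.png")

# Lower score = more similar; the 0.1 threshold is an arbitrary starting point.
score = cv2.matchShapes(reference, candidate, cv2.CONTOURS_MATCH_I1, 0.0)
print("match" if score < 0.1 else "no match", score)
```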
I have been searching for an emotion detection through voice/speech solution on mobile (iOS) and the web.
I found the Moodies-iOS and Vokaturi solutions, but they are not free.
I couldn't find any open-source or paid software that I could integrate into my app to test the approach.
Could someone share any info related to this?
Is there any open-source library for iOS for emotion analysis and detection through voice/speech? Please let me know.
As a former researcher in affective computing, I highly doubt you can find a ready-for-use iOS open-source solution for emotion recognition from speech. The main reason is that it is a damn difficult task that requires a lot of research and a lot of proper data to train models. That is why companies like BeyondVerbal and Vokaturi do not share their models with others. Thus, you will be very lucky if you find anything in open source, and I am not even talking about iOS solutions.
I am aware of some toolkits you can use for this task (namely, the openEAR toolkit), but to build something working from them, you need expert knowledge in the field and data to train models. A comprehensive list of databases can be found here: http://emotion-research.net/wiki/Databases. A lot of them are freely available.
As Dmytro Prylipko said, it is very doubtful that there is any open-source library for emotion recognition from speech.
You may write your own solution. It is not hard. The trouble is, as mentioned before, proper training and/or thresholding takes a lot of time and nerves.
I will give you a short overview of how you should begin writing the algorithm, but training and so on is on you.
The first big problem is that different people relay their emotions vocally in different ways.
For example: one shocked person will respond to the shock with an over-exclaimed sentence, while another will "freeze" and respond in a very flat, almost robot-like tone.
Therefore you will need a lot of templates from which to learn how to classify your input speech by emotions.
You can remove some difficulties by using context recognition along with voice prosody.
That is what I'd advise you to do.
First make an algorithm that uses speech-recognized text to put it into an emotional context. E.g. you can use specific words and phrases that people use when expressing different emotions.
That is easily done. You may use a neural network, simple branching, or whatever; a toy sketch follows below.
So you will be able to recognize whether a person is thankful and surprised at the same time by combining context recognition with emotions from prosody.
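A toy sketch of that keyword-based context step (the emotion names and word lists are made-up examples, not a validated lexicon):

```python
# Toy sketch of the "context from recognized text" idea: map emotion-typical
# words and phrases to emotions. The word lists here are made-up examples.
EMOTION_KEYWORDS = {
    "gratitude": {"thank", "thanks", "grateful", "appreciate"},
    "surprise": {"wow", "unbelievable", "no way", "really"},
    "anger": {"hate", "furious", "terrible", "awful"},
}

def context_emotions(transcript):
    """Return the set of emotions whose keywords appear in the transcript."""
    text = transcript.lower()
    found = set()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            found.add(emotion)
    return found

# e.g. context_emotions("wow, thank you so much") -> {"gratitude", "surprise"}
```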
Now, to recognize the emotion from prosody, you have to extract prosody parameters and a few others.
For example, some emotions may be recognized by looking at duration of particular words in a sentence.
So you have the sentence and the text of that sentence. You know that the speed of normal speech is approximately 200 words per minute. Knowing this and the number of words in the sentence, you can see how fast someone is talking. Then you measure the duration of each word and get its speed. Knowing how fast the speech is and how long each word is, you can compute normalized ratios that can be used for classification in order to determine the closest guess of the emotion.
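A small sketch of those speech-rate features, assuming per-word durations are already available (e.g. from forced alignment); the 200 wpm baseline is the figure mentioned above:

```python
# Rough speech-rate features: compare the observed rate to an assumed baseline
# of ~200 words per minute, and normalize per-word durations by the average.
BASELINE_WPM = 200.0

def speech_rate_features(word_durations_sec):
    """word_durations_sec: list of per-word durations in seconds (placeholder input)."""
    total = sum(word_durations_sec)
    n_words = len(word_durations_sec)
    observed_wpm = n_words / (total / 60.0)
    rate_ratio = observed_wpm / BASELINE_WPM                 # >1 means faster than normal
    avg = total / n_words
    per_word_ratio = [d / avg for d in word_durations_sec]   # drawn-out words stand out
    return rate_ratio, per_word_ratio

# e.g. speech_rate_features([0.25, 0.30, 0.90]) flags the long third word
```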
For instance, when someone is presented with a present that he/she likes very much, the "thank you" will sound pretty long. It will also be of higher pitch than that person's usual speech.
The next step would be to get the average pitch for each word to see the relation between them, so you will be able to see how the sentence prosody modulates, from lower to higher or vice versa.
Also, look at how the prosody changes inside the phrases within the sentence.
You may go about this by comparing curves of known emotions directly, or you may use approximation to get coefficients from the prosody curve vector. A quadratic function fits normal speech prosody well (with no particular emotion in it), so some higher-order polynomial should do. You can then take the coefficients of the polynomial and use them to estimate which emotion the whole sentence or phrase conveys.
The same goes for individual words within the sentence. You get the pitch for each phoneme or syllable, or just the pitch curve for e.g. every 20 ms of the word. Then you either calculate a few coefficients to approximate the polynomial you decided is good enough, or you take the whole curve and normalize it to e.g. 30 points to use for recognition.
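A minimal sketch of the polynomial-coefficient idea with NumPy, using fake pitch values in place of real F0 measurements:

```python
# Approximate a pitch (F0) contour with a low-order polynomial and use its
# coefficients as features, as described above. `pitch_hz` is placeholder data;
# in practice it would come from a pitch tracker sampled every ~20 ms.
import numpy as np

pitch_hz = np.array([180, 185, 195, 210, 230, 245, 240, 225])  # fake F0 samples

# Normalize the time axis to [0, 1] so words of different lengths are comparable.
t = np.linspace(0.0, 1.0, len(pitch_hz))

# A quadratic is the baseline for "neutral" prosody; try higher orders for emotion.
coeffs = np.polyfit(t, pitch_hz, deg=2)

# Alternative: resample the whole curve to a fixed number of points (e.g. 30)
# and feed that to a classifier instead of the coefficients.
resampled = np.interp(np.linspace(0, 1, 30), t, pitch_hz)
print(coeffs, resampled.shape)
```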
To compare curves directly you may use the gesture recognition algorithm by Oleg Dopertchouk:
http://www.gamedev.net/reference/articles/article2039.asp
I tried it on pitch curves of melodies; it works just fine.
The trouble is, you need a database of speech with context and emotion, with clear manual classification, to give your algorithm something to compare against.
If you use polynomials instead of whole curves, you can do some recognition by using thresholds on the coefficients, but the results will be a bit shaky. The only real excuse for using coefficients at all is that you do not need to know how long the word in question is, i.e. the same polynomial should work on a word with 2 phonemes and on one with 5 (it should, anyway).
You see, the theory is nice and easy: use speech recognition, measure speech rate and the duration of each word, construct the pitch curve for the whole phrase and for each word using FFT, do some comparison between a ready database and the input, and voilà, emotion recognized.
But where will you find a database of word curves labelled with emotions?
For example, you would need, for each emotion, at least one pitch curve for words with different numbers of phonemes. At least one, because it matters whether the word starts or ends with a vowel, and because different people relay the same emotion differently even when the curve represents the same word.
OK, so you can say that you can make one. Where would you find recorded samples to build your curves or calculate coefficients from? Hm, perhaps a recording of some drama. Not a bad idea, but acted emotions aren't the same as natural ones.
It is a big job to teach a machine such a thing.
Oh yeah, I almost forgot: emotions aren't conveyed only by pitch changes (sometimes not at all); sometimes it's only the way in which the word is pronounced.
So, for some cases, you would probably need LPC or some other coefficients giving more information on how the phonemes in the word sound. Or you would need to take into account other harmonics from the FFT, not just the one representing the pitch of the excitation train.
The best you can do without following my hints and developing your own algorithm is to use NLTK (the Natural Language Toolkit) to develop a statistical (emotionally rich) speech model and use algorithms from there (perhaps a bit modified) to try to get at the emotion in question.
But I fear it would be a bigger job than starting from zero. As far as I know, NLTK doesn't support emotions, just normal speech prosody.
You may try to integrate some of the things I wrote about into Sphinx, to develop emotion-based speech models and introduce emotion recognition directly into Sphinx's voice recognition algorithm.
If you really need this, I advise you to learn enough DSP to write your own algorithm, then pay someone to build you an initial database from audiobooks, radio dramas and similar material (using a tool you provide).
After your algorithm starts to work reasonably well, implement auto-learning by giving users the option to correct the algorithm's wrong guesses. After some time you will have an algorithm that is about 90% reliable at recognizing emotions from speech.
Recently I have been studying facial recognition with OpenCV, and I am trying some simple examples based on what I've learned.
I'm considering using it in a front-door setting.
Nowadays some buildings or apartments use facial recognition to prevent intruders. When someone joins them (such as a company or a housing complex), they require the person's picture. As far as I know, they require just one picture.
I didn't pay attention to that before, but now I'm very curious about it.
Famous algorithms such as PCA and LDA use machine learning, so they improve the success rate. To use machine learning, they need as many sample images as can be provided. That's why I'm curious: buildings or companies require just one picture, yet they can recognize each person, and their accuracy is very good. How can this happen? Is there any other algorithm besides PCA or LDA?
Thanks for reading!
As far as I know, this hasn't been achieved yet, so I don't think they can develop software that recognizes a person using only one picture.
It is most likely that they train the algorithm with the authorized person's pictures, so if a face does not match the trained ones, the algorithm can say it is an intrusion.
Edit:
As linuxqwerty pointed out, those commercial products are already trained on huge datasets.
As a result of this training, the algorithm learns to extract features from all those sample faces.
Then the algorithm knows almost every kind of feature that a human face can have.
For example: thickness of eyebrows, distance between eyes, roundness of chin... These are only the features a human can name; the algorithm can extract thousands of them.
It can keep faces as a representation of those features.
So now we have this commercial software which can represent faces as binary codes with a lot of digits.
Coming back to your question:
The apartment or company buys this software.
They enroll the picture of the authorized person.
What the software does is simply convert the picture into something like a thousand-digit password.
That person then has a unique password which the system can reproduce only from his face.
To sum up:
The learning part was achieved using big face databases.
Thanks to the learning part, the recognition part can be done using only one picture; a rough sketch of the idea follows.
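To illustrate the idea (not any particular product's implementation): a model pre-trained on large face databases maps a face image to a feature vector, and recognizing the enrolled person is just a distance check against the one stored vector. The `embed_face` function below is hypothetical, standing in for whatever the commercial software ships.

```python
# Illustration of the "picture as a password" idea: a pre-trained model maps a
# face image to a feature vector, and recognition of the single enrolled person
# is a distance check. `embed_face` is hypothetical, not a real library call.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Hypothetical: returns a fixed-length feature vector for a face image."""
    raise NotImplementedError

def is_authorized(enrolled_vec: np.ndarray, probe_image, threshold: float = 0.6) -> bool:
    probe_vec = embed_face(probe_image)
    # Cosine similarity between the enrolled "password" and the live face.
    cos_sim = np.dot(enrolled_vec, probe_vec) / (
        np.linalg.norm(enrolled_vec) * np.linalg.norm(probe_vec)
    )
    return (1.0 - cos_sim) < threshold   # threshold is an arbitrary example
```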
PS: Corrections are welcome.
I happened to read about facial recognition before; at that time I wanted to do it as my semester project. And of course I have heard of and thought about using OpenCV as well.
Your question is simple: companies or homes that use facial recognition usually use a very well-developed product, which normally includes well-trained facial recognition. Since we are talking about security, companies will normally buy these security products; only if they just want a tool to deter intruders, with less focus on practical usage and recognition accuracy, would they opt for free facial recognition software.
So, when I talk about well-trained facial recognition, it means the software was trained on huge databases (the photos to be recognized that you mentioned); the training is done before the software is officially launched, during the development stage. Good facial recognition software requires both careful, complete programming and huge photo databases (taken at different ambient light intensities and with different facial features like hair styles or spectacles) to train it.
Therefore, the accuracy of the software does not depend solely on the number of pictures given while using the software, provided it is well-trained in the first place. Thanks, and I hope I answered your question.
ps: recognize is spelled this way (US); recognise (UK) =)
I have been doing an image processing and machine learning course for a couple of weeks. Every day I learn something new about image manipulation as well as how to train a system to recognize patterns in an image.
My question is: in order to do successful image recognition, what are the steps one has to follow, for example denoising, use of LDA or PCA, then use of a neural network? I am not looking for any algorithm, just a brief outline of each of the steps (5-6) from capturing an image to testing an input image for similarity.
P.S. To the mods: before labelling this question as not constructive, I know it's not constructive, but I don't know which site to put this on, so please redirect me to that Stack Exchange site.
Thanks.
I would describe the pipeline as follows (I have omitted many bullet items); a minimal end-to-end sketch follows the list.

1. Acquire images with ground-truth labels.
   - Amazon Mechanical Turk
   - images and labels from Flickr
2. Compute features for each image.
   - directly stretch the image into a column vector
   - use more complicated features such as bag of words or LBP
3. Post-process the features to reduce the effect of noise, if needed.
   - sparse coding
   - max pooling
   - whitening
4. Train a classifier/regressor given the (feature, label) pairs.
   - SVM
   - boosting
   - neural network
   - random forest
   - spectral clustering
   - heuristic methods...
5. Use the trained model to recognize unseen images and evaluate the results with some metrics.

BTW, traditionally we would use dimensionality reduction methods such as PCA to make the problem tractable, but recent research seems to care little about that.
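A minimal end-to-end version of this pipeline with scikit-learn, using the toy digits dataset as a stand-in for real labeled images:

```python
# Minimal end-to-end version of the pipeline above using scikit-learn.
# The digits dataset stands in for "images with ground-truth labels";
# raw pixels stretched into a vector stand in for the feature step.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X, y = load_digits(return_X_y=True)            # step 1: labeled images (as vectors)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                          # step 3: simple post-processing
    PCA(n_components=32),                      # optional dimensionality reduction
    SVC(kernel="rbf", C=10, gamma="scale"),    # step 4: classifier
)
model.fit(X_train, y_train)                    # training
print(classification_report(y_test, model.predict(X_test)))  # step 5: evaluation
```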
Few months back I developed a face recognition system using local binary pattern.
In this method I first took an image either from local storage or from the camera, then, using the local binary pattern method, I processed each block of the input image. After computing the LBP of the input image, I computed the chi-square distance between LBP feature histograms. By comparing this value against the stored database images, processed the same way, I was able to retrieve the same face.
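A rough sketch of that LBP + chi-square matching with scikit-image (file names are placeholders; a full system would compare per-block histograms rather than one global histogram):

```python
# Sketch of LBP + chi-square face matching using scikit-image.
# Image paths are placeholders for the probe image and the stored database.
import numpy as np
from skimage.io import imread
from skimage.util import img_as_ubyte
from skimage.feature import local_binary_pattern

P, R = 8, 1  # standard uniform LBP settings

def lbp_histogram(path):
    gray = img_as_ubyte(imread(path, as_gray=True))
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

probe = lbp_histogram("probe_face.png")
database = ["person_a.png", "person_b.png"]
best = min(database, key=lambda p: chi_square(probe, lbp_histogram(p)))
print("closest match:", best)
```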
Amazon Mechanical Turk is a service where you pay people to do work for you.
SIFT is a descriptor for interest points. By comparing those descriptors, you can find the correspondence between images. (SIFT fits into step 2.)
When doing step 4, you can choose to combine the results of different classifiers, or simply trust the result of one classifier. That depends on the situation.
Are you going to label the location of the affected region? I am not sure what you are going to do.
I couldn't comment, so I posted another answer.
My image dataset is from http://www.image-net.org. There are various synsets for different things like flora, fauna, persons, etc.
I have to train a classifier which predicts 1 if the image belongs to the floral synset and 0 otherwise.
Images belonging to the floral synset can be viewed at http://www.image-net.org/explore by clicking the plant, flora, plant life option in the left pane.
These images include a wide variety of flora, like trees, herbs, shrubs, flowers, etc.
I am not able to figure out what features to use to train the classifier. There is a lot of greenery in these images, but there are many flower images which don't have much green in them. Another possible feature is the shape of the leaves and the petals.
It would be helpful if anyone could suggest how to extract this shape feature and use it to train the classifier. Also suggest what other features could be used to train the classifier.
And after extracting features, which algorithm is to be used to train the classifier?
I'm not sure that shape information is the right approach for the data set you have linked to.
Just having a quick glance at some of the images I have a few suggestions for classification:
Natural scenes rarely have straight lines - Line detection
You can discount scenes which have swathes of "unnatural" colour in them.
If you want to try something more advanced, I would suggest that a hybrid of entropy and pattern recognition would form a good classifier, as natural scenes have a lot of both.
Attempting template-matching/shape matching for leaves/petals will break your heart - you need to use something much more generalised.
As for which classifier to use... I'd normally advise K-means initially and once you have some results determine if the extra effort to implement Bayes or a Neural Net would be worth it.
Hope this helps.
T.
Expanded:
"Unnatural Colors" could be highly saturated colours outside of the realms of greens and browns. They are good for detecting nature scenes as there should be ~50% of the scene in the green/brown spectrum even if a flower is at the center of it.
Additionally, straight line detection should yield few results in nature scenes, as straight edges are rare in nature. On a basic level, generate an edge image, threshold it, and then search for line segments (pixels which approximate a straight line).
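A rough OpenCV sketch of that line-detection cue (the file name and thresholds are placeholders to be tuned on the actual dataset):

```python
# "Straight lines are rare in nature": edge image -> probabilistic Hough
# transform -> count line segments. Thresholds are placeholders to tune.
import cv2
import numpy as np

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=40, maxLineGap=5)
n_lines = 0 if lines is None else len(lines)

# Many long straight segments suggest a man-made scene rather than flora.
print("line segments found:", n_lines)
```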
Entropy requires some machine vision knowledge. You would approach the scene by computing localised entropies and then histogramming the results; here is a similar approach to the one you will have to use.
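A small sketch of the localized-entropy idea with scikit-image (the file name is a placeholder; the disk radius and bin count are arbitrary starting points):

```python
# Localized entropy as a texture cue: compute entropy in a small neighbourhood
# around every pixel, then histogram the entropy map as a feature vector.
import numpy as np
from skimage.io import imread
from skimage.util import img_as_ubyte
from skimage.filters.rank import entropy
from skimage.morphology import disk

gray = img_as_ubyte(imread("scene.jpg", as_gray=True))  # placeholder file name
ent = entropy(gray, disk(5))                             # local entropy map

# Histogram of local entropies as a simple descriptor for a classifier.
hist, _ = np.histogram(ent, bins=16, range=(0, ent.max() + 1e-6), density=True)
print(hist)
```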
You would want to be advanced at machine vision if you are to attempt pattern recognition, as this is a difficult subject and not something you can throw together in a code sample. I would only attempt to implement these as classifiers once colour and edge (line) information has been exhausted.
If this is a commercial application then a MV expert should be consulted. If this is a college assignment (unless it is a thesis) colour and edge/line information should be more than enough.
HOG features are pretty much the de facto standard for these kinds of problems, I think. They're a bit involved to compute (and I don't know what environment you're working in) but powerful.
A simpler solution which might get you up and running, depending on how hard the dataset is, is to extract all overlapping patches from the images, cluster them using k-means (or whatever you like), and then represent an image as a distribution over this set of quantised image patches for a supervised classifier like an SVM. You'd be surprised how often something like this works, and it should at least provide a competitive baseline.
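A sketch of that patch-quantisation baseline with scikit-learn, using the toy digits dataset as a stand-in for the flora / non-flora images:

```python
# Bag-of-visual-words baseline: extract small patches, quantize them with
# k-means, represent each image as a histogram over the patch "vocabulary",
# and train a linear SVM. Toy digits data stands in for the real images.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

digits = load_digits()
images = digits.images                         # stand-in 8x8 images
labels = (digits.target == 0).astype(int)      # stand-in binary labels

def patches(img, size=(4, 4), n=16, seed=0):
    return extract_patches_2d(img, size, max_patches=n, random_state=seed).reshape(n, -1)

all_patches = np.vstack([patches(im) for im in images])
kmeans = MiniBatchKMeans(n_clusters=32, random_state=0).fit(all_patches)

def bow_histogram(img):
    words = kmeans.predict(patches(img))
    return np.bincount(words, minlength=32) / len(words)

X = np.array([bow_histogram(im) for im in images])
clf = LinearSVC(C=1.0).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```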
Can someone tell me how I can detect pictures of architecture or sculpture?
I think the Hough transform is a good approach, but I'm new to CV and maybe there are better methods to detect patterns. I have heard about Haar cascades; can I use those for architecture, too?
For example, I want to detect these kinds of pictures:
http://img842.imageshack.us/img842/4748/resizeimg0931.jpg
If you want an algorithm to detect them: detecting an object in an image needs a description of that object that a machine or computer can understand. For sculpture or architecture, how can you have such a uniform definition when they vary so much in every sense? For example, your two input images vary a lot. How can we differentiate between a house and architecture in general? A lot of problems arise from your question. Even with the Hough transform, how are you supposed to differentiate a big house from a big piece of architecture?
Check out this SOF : Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition
He wants to detect coca-cola cans, and not coca-cola bottles. But if you look at it closely, you will see that cans and bottles are almost alike and it is difficult to differentiate between them. You can find a lot of the difficulties in the subsequent answers. The major problem is that, in some cases, it is difficult even for humans to differentiate them.
As for your second image, even if you train cascades for it, there is a chance it will detect live lions if they are present in your image, since a sculpted lion and a real lion look almost the same to a machine.
Haar cascades may not be very effective, since you would have to train on a lot of these kinds of images.
If you have some sample images and want to check if those things are present in your image, maybe you can use SURF features, etc. But you need some sample images first to compare against. For a demo of SURF, check out this SOF: OpenCV 2.4.1 - computing SURF descriptors in Python
Another option is template matching. But it is slow, and it is not scale or orientation invariant. And you need some template images for this as well.
I think I have seen some papers relating to this topic (but I don't remember them now). Maybe googling will turn them up. I will update the answer if I find them.