How can I fix OpenEars' wrong recognition - iOS

I used OpenEars in my app just to recognize the letters "a" to "z".
But it recognizes single letters much worse than it recognizes words.
So, how can I use my own sound model to improve OpenEars' recognition?
And how can I use OpenEars to recognize special sounds?
For example, I give OpenEars a dog bark and I want it to give me back "dog".

So this is a two-part question which might be better split up for the community. From what I understand, OpenEars works best with words in its dictionary. If you want it to recognize alphabet letters, I would try using the phonetic spelling of each letter instead of just the letter. So instead of using 'f', use "ef".
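To make that concrete, here is a minimal Python sketch of the idea (OpenEars itself is an Objective-C framework, so this only illustrates building the word list; the letter spellings below are my own assumptions, not values from the OpenEars docs):

# Spelled-out letter names to feed the recognizer's vocabulary
# instead of single characters; whole words give the engine
# more acoustic material to distinguish.
LETTER_NAMES = {
    "a": "ay", "b": "bee", "c": "see", "d": "dee",
    "e": "ee", "f": "ef", "g": "jee", "h": "aitch",
    # ... continue through "z"
}

# The word list you would hand to the language model generator.
vocabulary = sorted(set(LETTER_NAMES.values()))
print(vocabulary)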
As for the second part of the question, you might be able to recognize specific types of dogs that go "ruff", but smaller dogs with more of a "yip!" would have to be added to the initial dictionary as well.
I would get the demo app and really just experiment with these words.

Related

How to account for variation in spelling (especially for slang) for Word Embeddings/Word2Vec generation using song lyrics?

So I am working on an artist classification project that uses hip hop lyrics from genius.com. The problem is that these lyrics are user-generated, so the same word can be spelled in various different ways, especially if it is slang, which is very common in hip hop.
I looked into spell correction using hunspell/pyhunspell, but the problem is that it doesn't fix slang misspellings. I technically could make a mini dictionary with a bunch of misspelled variations, but that is effectively useless because there could be a dozen variations of the same word across my (growing) 6,000-song corpus.
Any suggestions?
You could try to stem your words. More information on stemming here. This would help group together words with close spelling variations.
A popular stemming scheme is the Porter stemmer, implementations of which can be found in most NLP packages, e.g. NLTK.
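For instance, a quick check with NLTK's Porter stemmer (assuming you have NLTK installed) shows both what stemming buys you and its limit on g-dropped slang:

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
for word in ["tripping", "tripped", "trips", "trippin"]:
    print(word, "->", stemmer.stem(word))
# "tripping", "tripped" and "trips" all collapse to "trip",
# but the g-dropped slang form "trippin" stays unchanged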
I would discard, if possible, short words or contracted words, which are too hard to correct automatically (after checking that this won't affect your final result).
For longer words, you may want to use metrics like Levenshtein distance or Jaro similarity. The first is the minimum number of insertions, deletions, or substitutions needed to convert one candidate word into another. The second provides a similar result, between 0 and 1, putting (in its Jaro-Winkler variant) more emphasis on the first characters of a word.
If you have access to the correct version of your slang word, you could convert the closest candidates to the correct one, taking care not to apply this to words that are already correct.
If you're working with Python, some implementations are provided here.
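As an illustration, here is a self-contained sketch that snaps a slang variant to the closest word in a (hypothetical) canonical lexicon using Levenshtein distance; the max_ratio threshold is an assumption you would tune on your corpus:

def levenshtein(a, b):
    # Minimum number of insertions, deletions, or substitutions
    # needed to turn string a into string b (two-row DP).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Example lexicon of known canonical forms.
CANONICAL = ["tripping", "hustling", "money"]

def normalize(word, max_ratio=0.34):
    # Map the word to its nearest canonical form, but only when
    # the edit distance is small relative to the word's length.
    best = min(CANONICAL, key=lambda c: levenshtein(word, c))
    if levenshtein(word, best) <= max_ratio * len(best):
        return best
    return word

print(normalize("trippin"))   # -> tripping
print(normalize("mony"))      # -> money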

Apple's pinyin ranking algorithm

I'm currently developing an English-to-Chinese dictionary app to learn iOS development, and I'm kind of stuck on how to rank the more commonly used Chinese characters when the user searches in pinyin.
My question is:
Is there some way I can use Apple's ranking algorithm for how they rank the Chinese characters that come up when pinyin is typed (as they do a pretty good job of producing the right Chinese character)? Or is there some other way I can achieve this?
If you want to convert Chinese characters to pinyin, you can use CFStringTransform or PinYin4Objc.
If you want the first letter of the pinyin, you can use pinyinFirstLetter.
If you just want to sort in pinyin alphabetical order, you can use:
sortedArray = [array sortedArrayUsingSelector:@selector(localizedCaseInsensitiveCompare:)];
Note: polyphonic characters and place names may not be converted correctly.
Edit:
This sounds like an autocomplete problem:
How to create an efficient auto-complete?
Implementing Autocomplete in iOS
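Here is a small Python sketch of the ranking idea (the frequency table below is hypothetical; a real app would derive the weights from a corpus or word-frequency list):

# Hypothetical frequency table mapping a pinyin syllable to
# (character, relative frequency) pairs; a real app would load
# corpus-derived weights instead of these made-up numbers.
FREQ = {
    "shi": [("是", 0.42), ("时", 0.21), ("事", 0.11), ("十", 0.08)],
    "de":  [("的", 0.71), ("得", 0.12), ("地", 0.09)],
}

def candidates(prefix):
    # Collect characters whose pinyin starts with the typed prefix
    # and order them most-frequent first -- the ranking in question.
    hits = []
    for syllable, chars in FREQ.items():
        if syllable.startswith(prefix):
            hits.extend(chars)
    return [ch for ch, _ in sorted(hits, key=lambda p: -p[1])]

print(candidates("sh"))  # -> ['是', '时', '事', '十']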
Hope this helps.

Algorithm for keyword/phrase trend search similar to Twitter trends

I wanted some ideas about building a tool which can scan text sentences (written in English) and build a keyword ranking based on the most frequent words or phrases within the texts.
This would be very similar to Twitter trends, wherein Twitter detects and reports the top 10 words within the tweets.
I have identified the high level steps in the algorithm as follows
Scan the text and remove all the common, frequent words (such as "the", "is", "are", "what", "at", etc.)
Add the remaining words to a hashmap. If the word is already in the map then increment its count.
To get the top 10 words, iterate through the hashmap and find the top 10 counts.
Steps 2 and 3 are straightforward, but in step 1 I do not know how to detect the important words within a text and segregate them from the common words (prepositions, conjunctions, etc.).
Also, if I want to track phrases, what could the approach be?
For example, if I have a text saying "This honey is very good",
I might want to track "honey" and "good", but I may also want to track the phrases "very good" or "honey is very good".
Any suggestions would be greatly appreciated.
Thanks in advance
For detecting phrases, I suggest using a chunker. You can use one provided by an NLP tool like OpenNLP or Stanford CoreNLP.
NOTE:
"honey is very good" is not a phrase; it is a clause. "very good" is a phrase.
In information retrieval systems, those common words are called stop words.
Actually, your step 1 would be quite similar to step 3 in the sense that you may want to build an authoritative database of the most common words in the English language in the first place. Such a list is easily available on the internet (Wikipedia even has an article referencing the 100 most common words in the English language). You can store those words in a hashmap and simply ignore the common tokens while scanning your text contents.
If you don't trust Wikipedia and the existing lists of common words, you can build your own database. For that purpose, just scan thousands of tweets (the more the better) and make your own frequency chart.
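Putting the three steps together, a minimal Python sketch (the stop-word list here is a tiny placeholder for the fuller list discussed above):

from collections import Counter
import re

# A tiny illustrative stop-word list; in practice, load the
# full list discussed above into a set for O(1) lookups.
STOP_WORDS = {"the", "is", "are", "what", "at", "a", "an", "this", "very"}

def top_words(text, k=10):
    words = re.findall(r"[a-z']+", text.lower())               # tokenize
    counts = Counter(w for w in words if w not in STOP_WORDS)  # steps 1-2
    return counts.most_common(k)                               # step 3

print(top_words("This honey is very good. Honey is good at everything."))
# [('honey', 2), ('good', 2), ('everything', 1)]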
You're facing an n-gram-like problem.
Do not reinvent the wheel. What you seem to want to do has been done thousands of times; just use existing libraries or pieces of code (check the External Links section of the n-gram Wikipedia page).
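That said, if you do roll your own for the phrase case, the core of an n-gram counter is only a few lines; a sketch:

from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-word sequences in the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "this honey is very good".split()
bigram_counts = Counter(ngrams(tokens, 2))
print(ngrams(tokens, 2))
# [('this', 'honey'), ('honey', 'is'), ('is', 'very'), ('very', 'good')]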
Check out the NLTK library. It has code that does numbers one, two, and three:
1. Removing common words can be done using stopwords or a stemmer.
2-3. Getting the most common words can be done with FreqDist.
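A short example with NLTK (you may need to download the "punkt" and "stopwords" data first):

import nltk
from nltk.corpus import stopwords

text = "This honey is very good. Honey is good at everything."
# Keep alphabetic tokens only, lowercased, then drop stop words.
tokens = [t for t in nltk.word_tokenize(text.lower()) if t.isalpha()]
filtered = [t for t in tokens if t not in stopwords.words("english")]
print(nltk.FreqDist(filtered).most_common(10))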
You can also use tools from Stanford NLP for processing your text.

How do I design a heuristic for matching translated sentences?

Summary
I am trying to design a heuristic for matching up sentences in a translation (from the original language to the translated language) and would like guidance and tips. Perhaps there is a heuristic that already does something similar? So given two text files, I would like to be able to match up the sentences (so I can pick out a sentence and say this is the translation of that sentence).
Details
The input texts would be translated novels, so I do not expect the translations to be literal, although using something like Google Translate might be a good way to test the accuracy of the heuristic.
To help me, I have a library that will gloss the contents of the translated text and give me the definitions of the words in the sentence. Other things I know:
Chapters and order are preserved; I know that the first sentence in chapter three of the original will match the first sentence in chapter three of the translation. (Note, this is not strictly true: the first sentence might match up with the first two sentences, or even the second sentence.)
I can calculate the overall size (characters, sentences, paragraphs), which could give me an idea of the average difference in sentence size (for example, the translation might be 30% longer).
Looking at some books I have, the translated version has about 30% more sentences than the original text.
Implementation
(if it matters)
I am planning to do this in Java - but I am not that fussed - any language will do.
I am not greatly concerned about speed.
I guess that to be sure of the matches, some user feedback might be required, like saying "Yes, this sentence definitely matches that sentence." This would give the heuristic more ground to stand on, but it would mean the user needs a little proficiency in the languages.
Background
(for those interested)
The reason I want to make this is that I want it to assist with my foreign language study. I am studying Japanese and find it hard to find "good" material (where "good" is defined by what I like). There are already tools to do something similar with subtitles from videos (an easier task - using the timing information of the video). But nothing, as far as I know, for texts.
There are tools called "sentence aligners" used in NLP research that do exactly what you want.
I recommend hunalign:
http://mokk.bme.hu/resources/hunalign/
and MS sentence aligner:
http://research.microsoft.com/en-us/downloads/aafd5dcf-4dcc-49b2-8a22-f7055113e656/
Both are quite OK, but remember that nothing is perfect. Sentences that are too hard to align will be dropped, and some sentences may be wrongly aligned.
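If you want to experiment with the underlying idea yourself, here is a toy length-based dynamic-programming aligner in the spirit of Gale and Church (1993), which is roughly what these tools build on. The 1.3 length ratio is the 30% figure from your question, and real aligners add lexical evidence on top of this:

def align(src, tgt, ratio=1.3):
    # Cost of pairing a source span of total length ls with a
    # target span of total length lt, assuming translations run
    # about `ratio` times longer than the original.
    def cost(ls, lt):
        expected = ls * ratio
        return abs(lt - expected) / (expected + 1.0)

    n, m = len(src), len(tgt)
    INF = float("inf")
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == INF:
                continue
            # Allowed beads: 1-1, 1-2 (one source sentence maps to
            # two target sentences), and 2-1.
            for di, dj in ((1, 1), (1, 2), (2, 1)):
                ni, nj = i + di, j + dj
                if ni > n or nj > m:
                    continue
                ls = sum(len(s) for s in src[i:ni])
                lt = sum(len(t) for t in tgt[j:nj])
                c = best[i][j] + cost(ls, lt)
                if c < best[ni][nj]:
                    best[ni][nj] = c
                    back[ni][nj] = (i, j)
    # Walk the back-pointers from (n, m) to recover aligned spans.
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        pairs.append((src[pi:i], tgt[pj:j]))
        i, j = pi, pj
    return list(reversed(pairs))

original = ["A short sentence.", "A much longer second sentence here."]
translation = ["Une phrase courte.", "Une deuxième phrase.", "Beaucoup plus longue ici."]
for s, t in align(original, translation):
    print(s, "<->", t)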
