I've been doing some research on the feasibility of building a mobile/web app that lets users say a phrase and detects the user's accent (Boston, New York, Canadian, etc.). There will be about 5 to 10 predefined phrases that a user can say. I'm familiar with some of the Speech to Text APIs that are available (Nuance, Bing, Google, etc.), but none seem to offer this additional functionality. The closest examples that I've found are Google Now and Microsoft's Speaker Recognition API:
http://www.androidauthority.com/google-now-accents-515684/
https://www.microsoft.com/cognitive-services/en-us/speaker-recognition-api
Because there are going to be 5-10 predefined phrases, I'm thinking of using machine learning software like TensorFlow or Wekinator. I'd have initial audio recorded in each accent to use as the initial data. Before I dig deeper into this path, I just wanted to get some feedback on this approach, or on whether there are better approaches out there. Let me know if I need to clarify anything.
There is no public API for such a rare task.
Accent detection, like language detection, is commonly implemented with i-vectors. A tutorial is here. An implementation is available in Kaldi.
You need a significant amount of data to train the system even if your sentences are fixed. It might be easier to collect accented speech without focusing on the specific sentences you have.
An end-to-end TensorFlow implementation is also possible, but it would probably require too much data, since you need to separate speaker-intrinsic characteristics from accent-intrinsic ones (basically perform the same factorization an i-vector does). You can find descriptions of similar work like this and this one.
You could also use (this is just an idea, you will need to experiment a lot) a neural network with as many outputs as you have accents, a softmax output layer, and a cross-entropy cost function.
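A minimal sketch of that idea with TensorFlow/Keras (the number of accents, the feature dimensionality, and the layer sizes are all assumptions you would need to tune):

```python
import tensorflow as tf

NUM_ACCENTS = 6          # e.g. Boston, New York, Canadian, ... (assumption)
FEATURE_DIM = 13 * 100   # e.g. 13 MFCCs over 100 frames, flattened (assumption)

# Softmax output layer + cross-entropy loss, as described above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(FEATURE_DIM,)),
    tf.keras.layers.Dense(NUM_ACCENTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # cross-entropy over integer accent labels
              metrics=["accuracy"])

# x_train: (num_clips, FEATURE_DIM) acoustic features, y_train: integer accent labels 0..NUM_ACCENTS-1
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)
```

With only 5-10 fixed phrases, the network mainly has to separate accent from speaker identity, which is exactly why the amount and variety of training speakers matters more than the architecture.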
Related
I'm new to the ML/NLP field so my question is what technology would be most appropriate to achieve the following goal:
We have a short sentence - "Where to go for dinner?" or "What's your favorite bar?" or "What's your favorite cheap bar?"
Is there a technology that would enable me to train it by providing the following data sets:
"Where to go for dinner?" -> Dinner
"What's your favorite bar?" -> Bar
"What's your favorite cheap restaurant?" -> Cheap, Restaurant
so that next time we have a similar question about an unknown activity, say, "What is your favorite expensive [whatever]" it would be able to extract "expensive" and [whatever]?
The goal is to train it with hundreds (or thousands) of variations of the questions and the expected output, so that it can work with everyday language.
I know how to make it even without NLP/ML if we have a dictionary of expected terms like Bar, Restaurant, Pool, etc., but we also want it to work with unknown terms.
I've seen examples with Rake and scikit-learn for classification of "things", but I'm not sure how I would feed text into those, and all those examples had predefined outputs for training.
I've also tried Google's NLP API, Amazon Lex and Wit to see how good they are at extracting entities, but the results are disappointing to say the least.
Reading about summarization techniques, I'm left with the impression it won't work with small, single-sentence texts, so I haven't delved into it.
As #polm23 mentioned, for simple cases you can use POS tagging to do the extraction. The services you mentioned, like LUIS, Dialogflow, etc., use what is called Natural Language Understanding. They make use of intents and entities (a detailed explanation with examples can be found here). If you are concerned about your data going online, or you sometimes have to work offline, you can always go for RASA.
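For the simple POS-tagging route, a minimal sketch with NLTK (my choice of toolkit, not one mentioned above) that keeps adjectives and nouns could look like this:

```python
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def extract_terms(sentence):
    """Keep adjectives and nouns, which cover terms like 'cheap' and 'restaurant'."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    return [word for word, tag in tagged if tag.startswith(("JJ", "NN"))]

print(extract_terms("What's your favorite cheap restaurant?"))
# -> something like ['favorite', 'cheap', 'restaurant']
```

This works on unknown terms because it relies on the grammatical role of the word, not on a dictionary of expected terms.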
Things you can do with RASA:
Entity extraction and sentence (intent) classification. You specify which terms should be extracted by tagging the word positions across a variety of example sentences, so a word that differs from what you gave in the training set can still be detected.
It uses rule-based learning and also a Keras LSTM for detection.
One downside compared with the online services is that you have to manually tag the position numbers in the training JSON file, as opposed to the click-and-tag features of the online services.
You can find the tutorial here.
For example, I trained RASA with a variety of sentences for identifying a body part and a symptom (I limited it to 2 entities only; you can add more). When an unknown sentence such as "I am having pain in my leg." appears, it correctly identifies "pain" as the symptom and "leg" as the body part.
Hope this answers your question!
Since "hundreds to thousands" sound like you have very little data for training a model from scratch. You might want to consider training (technically fine-tuning) a DialogFlow Agent to match sentences ("Where to go for dinner?") to intents ("Dinner"), then integrating via API calls.
Alternatively, you can invest time in fine-tuning a small pre-trained model like "Distilled BERT classifier" from "HuggingFace" as you won't need the 100s of thousands to billions of data samples required to train a production-worthy model. This can also be assessed offline and will equip you to solve other NLP problems in the future without much low-level understanding of the underlying statistics.
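As a hedged illustration only (the model name and intent labels are assumptions, and the classification head is randomly initialized until you fine-tune it on your labeled sentences), loading DistilBERT from HuggingFace for sentence classification might look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["Dinner", "Bar", "Restaurant"]  # hypothetical intent labels
model_name = "distilbert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(labels)
)  # the new classification head must be fine-tuned on your labeled examples

inputs = tokenizer("Where to go for dinner?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])  # only meaningful after fine-tuning
```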
I have been searching for an emotion detection through voice/speech solution for mobile (iOS) and the web.
I found Moodies-iOS and the Vokaturi solution, but they are not free.
I couldn't find any open source or paid software available to integrate into my app and test.
Could someone share any info related to this? Is there any open source solution for iOS for emotion analysis and detection through voice/speech? Please let me know.
As a former researcher in affective computing, I highly doubt you can find a ready-to-use iOS open source solution for emotion recognition from speech. The main reason is that it is a damn difficult task that requires a lot of research and a lot of proper data to train models. That is why companies like BeyondVerbal and Vokaturi do not share their models with others. Thus, you will be very lucky if you can find anything open source, and I am not even talking about iOS solutions.
I am aware of some toolkits you can use for this task (namely, the openEAR toolkit), but to build something working from it, you need expert knowledge in the field and data to train models. A comprehensive list of databases can be found here: http://emotion-research.net/wiki/Databases. A lot of them are freely available.
As Dmytro Prylipko said it is very doubtful that there is any open-source lib for emotion recognition from speech.
You may write your own solution. It is not hard. The trouble is, as mentioned before, proper training and/or thresholding takes a lot of time and nerves.
I will give you a short theory of how to begin writing the algo, but the training and so on is on you.
The first big problem is that different people relay their emotions vocally in different ways.
For example: one shocked person will respond to the shock with an over-exclaimed sentence, while another will "freeze" and their response will sound very flat (almost robot-like).
Therefore you will need a lot of templates from which to learn how to classify your input speech by emotions.
You can remove some difficulties by using context recognition along with voice prosody.
That is what I'd advise you to do.
First, make an algorithm that will use speech-recognized text to put it into an emotional context. E.g., you can use specific words and phrases that people use when expressing different emotions.
That is easily done. You may use a neural network, simple branching, or whatever.
So you will be able to recognize whether a person is thankful and surprised at the same time by combining context recognition and emotions from prosody.
Now, to recognize the emotion from prosody you have to get prosody parameters and some others.
For example, some emotions may be recognized by looking at duration of particular words in a sentence.
So you have the sentence and the text of that sentence. You know that the speed of normal speech is approximately 200 words per minute. Knowing this and the number of words in the sentence, you can see how fast someone is talking. Then you measure the duration of each word to get its speed. Knowing how fast the speech is and how long the word is, you can compute normalized ratios that can be used for classification in order to determine the closest guess of the emotion.
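A minimal sketch of that speech-rate calculation, assuming your recognizer gives you per-word start and end times (the alignment format here is hypothetical):

```python
NORMAL_WPM = 200.0  # rough baseline for normal speech, as stated above

def speech_rate_features(word_timings, utterance_duration_s):
    """word_timings: list of (word, start_s, end_s) tuples from a forced alignment (assumption)."""
    n_words = len(word_timings)
    wpm = n_words / (utterance_duration_s / 60.0)
    rate_ratio = wpm / NORMAL_WPM  # >1: faster than normal, <1: slower

    # Per-word duration normalized by the average word duration in this utterance.
    durations = [end - start for _, start, end in word_timings]
    mean_dur = sum(durations) / n_words
    word_ratios = [d / mean_dur for d in durations]
    return rate_ratio, word_ratios

# Example: a stretched-out "thank you" after receiving a gift
print(speech_rate_features([("thank", 0.0, 0.6), ("you", 0.6, 1.5)], 1.5))
```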
For instance, when someone is presented with a present that he/she likes very much, the "thank you" will sound pretty long. It will also be of higher pitch than that person's usual speech.
So the next step would be to get the average pitch for each word to see the relation between them. That way you will be able to see how the sentence prosody modulates: from lower to higher, or vice versa.
Also, how the prosody changes inside the phrases within the sentence.
You may go about this by comparing curves of a known emotion directly, or you may use approximation to get coefficients from the prosody curve vector. A quadratic function fits normal speech prosody well (with no particular emotion in it), so some higher-order polynomial should do. You can then take the coefficients of the polynomial and use them to decide what emotion the whole sentence or phrase relays.
The same goes for individual words within the sentence. You get the pitch for each phoneme or syllable, or just the pitch curve for e.g. every 20 ms of the word. Then you either calculate a few coefficients to approximate the polynomial you decided is good enough for you, or you take the whole curve and normalize it to e.g. 30 points to use for recognition.
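A rough NumPy sketch of both options described above (polynomial coefficients and a fixed-length 30-point curve); the pitch values are assumed to come from whatever pitch tracker you use:

```python
import numpy as np

def curve_features(pitch_hz, order=3, n_points=30):
    """pitch_hz: pitch samples (e.g. one every 20 ms) for a word or phrase."""
    pitch = np.asarray(pitch_hz, dtype=float)
    t = np.linspace(0.0, 1.0, len(pitch))  # normalized time axis

    # Option 1: low-order polynomial coefficients describing the contour shape.
    coeffs = np.polyfit(t, pitch, order)

    # Option 2: resample the whole curve to a fixed number of points.
    t_fixed = np.linspace(0.0, 1.0, n_points)
    resampled = np.interp(t_fixed, t, pitch)

    return coeffs, resampled

# A rising contour, e.g. a surprised "thank you?" (synthetic example values)
coeffs, curve = curve_features([110, 115, 123, 135, 150, 170, 195])
```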
To compare curves directly you may use gesture recognition algorithm by Oleg Dopertchouk:
http://www.gamedev.net/reference/articles/article2039.asp
I tried it on pitch curves of melodies, it works just fine.
The trouble is, you need a database of speech with context and emotion with clear manually done classification to give your algo something to compare with.
If you use polynomials instead of whole curves, you can do some recognition by using thresholds on the coefficients, but the results will be a bit shaky. The only real excuse for using coefficients at all is that you do not need to know how long the word in question is, i.e. the same polynomial should work on a word with 2 phonemes and on one with 5 (should work).
You see, the theory is nice and easy: use speech recognition, measure the speech rate and the duration of each word, construct a pitch curve for the whole phrase and for each word using the FFT, do some comparison between a ready database and the input, and voilà, emotion recognized.
But where will you find a database with word curves marked with emotions?
For example, for each emotion you would need at least one pitch curve for words with different numbers of phonemes. At least one, because it matters whether the word starts or ends with a vowel, and because someone may simply relay the same emotion differently even if the curve represents the same word.
OK, so you can say that you can make one. Where would you find recorded samples to make your curves or calculate the coefficients? Hmm, perhaps a recording of some drama. Not a bad idea, but acted emotions aren't the same as natural ones.
It is a big job to teach a machine such a thing.
Oh yeah, I almost forgot: emotions aren't only, and sometimes aren't at all, transferred through pitch changes; sometimes it's only the way in which the word is pronounced.
So, for some cases, you would probably need LPC or some other coefficients showing more information about how the phonemes in the word sound. Or you would need to take into account other harmonics from the FFT, not just the one representing the pitch of the excitation train.
The best you can do without following my hints and developing your own algo is to use NLTK (the Natural Language Toolkit) to develop a statistical, emotionally rich speech model and use algorithms from there (perhaps a bit modified) to try to get at the emotion in question.
But I fear it would be a bigger job than starting from zero. As far as I know, NLTK doesn't support emotions, just normal speech prosody.
You may try to integrate some of the things I wrote about into Sphinx, to develop emotion-based speech models and introduce emotion recognition directly into Sphinx's VR algorithm.
If you really need this, I advise you to learn enough DSP to write your own algo, then pay someone to make you an initial database from audiobooks, radio dramas, and similar material (using a tool you provide).
After your algo starts to work reasonably well, implement auto-learning by giving users an option to correct the algo's wrong guesses. After some time you will get a 90% reliable algo for recognizing emotions from speech.
I am working on a personal research project.
My objective is to be able to recognize a sound and identify whether it belongs to the IPA or not by comparing its waveform to a waveform in my database. I have some skill with Mathematica, SciPy, and PyBrain.
For the first phase, I'm only using the English (US) phonetic alphabet.
I have a simple test bank of English phonetic alphabet sound files I found online. The trick here is:
I want to separate a sound file into waveforms that correspond to different syllables; this will take a learning algorithm. So, 'I like apples' would be cut up into the syllable waveforms that make up the sentence.
Each waveform is then compared against the English phonetic alphabet's waveforms. I'm not certain how to do this part. I was thinking of using Praat to detect the waveforms, capture an image of each waveform, and compare it to the one stored in the database with image analysis (which is kind of fun to do).
The problem here is that I don't know how to make Praat generate a waveform file automatically and then cut it up between syllables into waveform chunks. Logically, I would just prepare test cases for a learning algorithm and teach the computer to do it.
Instead of needing a waveform image, could I do this with a fast Fourier transform and compare the two FFTs, considering them the same syllable if they are within an x% margin of error?
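A minimal NumPy sketch of that FFT-comparison idea (the tolerance and the normalization are assumptions; as the answers below note, raw spectra are fragile features):

```python
import numpy as np

def spectra_match(wave_a, wave_b, tolerance=0.15):
    """Compare two same-rate audio snippets by their normalized magnitude spectra."""
    n = min(len(wave_a), len(wave_b))            # truncate to a common length
    spec_a = np.abs(np.fft.rfft(wave_a[:n]))
    spec_b = np.abs(np.fft.rfft(wave_b[:n]))
    spec_a /= np.linalg.norm(spec_a) + 1e-12     # normalize away loudness differences
    spec_b /= np.linalg.norm(spec_b) + 1e-12
    distance = np.linalg.norm(spec_a - spec_b)   # 0 = identical spectra
    return distance < tolerance

# wave_a, wave_b would be syllable-length sample arrays cut from your recordings.
```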
Frankly, I don't really know about Praat, but I find your project super cool and interesting. I have experience with detecting faults in car motors from their sound, which might be related to your project. I used neural networks and SVMs for the classification because multiple research papers supported that choice, so I didn't have any doubt about the approach. My advice is to research and read some papers on the topic. It really helps when you have questions like this (Will it work? Can I use something else instead? Am I using the optimal solution? etc.). Good luck, that's an awesome project :)
You could try Praat scripting.
Using just the FFT will give you rather terrible results: a very long feature vector that will be really difficult to segment and run any training on. That's thousands of points for a single syllable. Some deep neural networks are able to cope with it, but only if you design them properly and provide a huge training set. The advantage of neural networks is that they can build features for you from the "raw data" (and I would consider the FFT "raw" too). However, when you work with sound, that isn't really needed: you can engineer features manually, since for sound, science knows very well what sort of "features" matter.
You can calculate these features with libraries like Yaafe. I recommend checking it out even if you are not working in C++ or Python; the link I provided also gives the formulas for calculating them. I used some of them in my kiwi classifier.
Another good approach comes from scikit-talkbox, which provides exactly the tooling you might need.
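As a hedged sketch of what such hand-engineered features look like in practice (I'm using librosa here simply because it is easy to install; Yaafe and scikit-talkbox, suggested above, expose similar functionality):

```python
import numpy as np
import librosa  # my choice of feature library, not one named in this answer

# Placeholder file name; any syllable-length WAV clip works.
samples, sample_rate = librosa.load("syllable.wav", sr=None)

# 13 MFCCs per frame: a compact, standard description of the spectral envelope,
# far shorter than a raw FFT of the whole syllable.
mfcc = librosa.feature.mfcc(y=samples, sr=sample_rate, n_mfcc=13)

# Average over time to get one fixed-length vector per syllable for a classifier.
feature_vector = np.mean(mfcc, axis=1)
print(feature_vector.shape)  # (13,)
```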
My question is related to the project I've just started working on: a chatbot.
The bot I want to build has a pretty simple task: it has to automate the process of purchasing movie tickets. This is a pretty closed domain and the bot has all the required access to the cinema database. Of course it is okay for the bot to answer "I don't know" if the user's message is not related to the process of ordering movie tickets.
I already created a simple demo just to show it to a few people and see if they are interested in such a product. The demo uses a simple DFA approach and some easy text matching with stemming. I hacked it together in a day, and it turned out that users were impressed that they were able to successfully order the tickets they wanted. (The demo uses a connection to the cinema database to provide users all the information needed to order the tickets they desire.)
My current goal is to create the next version, a more advanced one, especially in terms of Natural Language Understanding. For example, the demo version asks users to provide only one piece of information per message and doesn't recognize when they provide more relevant information at once (movie title and time, for example). I read that a useful technique here is called "frame and slot semantics", and it seems promising, but I haven't found any details about how to use this approach.
Moreover, I don't know which approach is best for improving the Natural Language Understanding. For the most part, I'm considering:
Using “standard” NLP techniques in order to understand user messages better: for example, synonym databases, spelling correction, part-of-speech tags, training some statistical classifiers to capture similarities and other relations between words (or between whole sentences, if that's possible), etc.
Use AIML to model the conversation flow. I’m not sure if it’s a good idea to use AIML in such a closed domain. I’ve never used it, so that’s the reason I’m asking.
Use a more “modern” approach and use neural networks to train a classifier for user message classification. It might, however, require a lot of labeled data.
Any other method I don’t know about?
Which approach is the most suitable for my goal?
Do you know where I can find more resources about how “frame and slot semantics” works in detail? I'm referring to this PDF from Stanford when talking about the frame and slot approach.
The question is pretty broad, but here are some thoughts and practical advice, based on experience with NLP and text-based machine learning in similar problem domains.
I'm assuming that although this is a "more advanced" version of your chatbot, the scope of work which can feasibly go into it is quite limited. In my opinion this is a very important factor as different methods widely differ in the amount and type of manual effort needed to make them work, and state-of-the-art techniques might be largely out of reach here.
Generally the two main approaches to consider would be rule-based and statistical. The first is traditionally more focused on pattern matching, and in the setting you describe (limited effort can be invested) would involve manually dealing with rules and/or patterns. An example of this approach would be using a closed (but large) set of templates to match against user input (e.g. using regular expressions), as in the sketch below. This approach often has a "glass ceiling" in terms of performance, but can lead to pretty good results relatively quickly.
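A minimal sketch of such a template layer (the intent names, slot names, and sample phrasings are all assumptions, not something from your demo):

```python
import re

# A couple of hypothetical templates for the ticket-ordering domain.
TEMPLATES = [
    (re.compile(r"\b(?:book|buy|get)\b.+?\btickets?\b.+?\bfor\b\s+(?P<title>.+?)"
                r"(?:\s+at\s+(?P<time>\d{1,2}(?::\d{2})?\s*(?:am|pm)?))?$", re.I),
     "order_ticket"),
    (re.compile(r"\bwhat(?:'s| is) (?:playing|on)\b", re.I), "list_movies"),
]

def parse(utterance):
    """Return (intent, slots) for the first matching template, else ('unknown', {})."""
    for pattern, intent in TEMPLATES:
        match = pattern.search(utterance.strip())
        if match:
            return intent, {k: v for k, v in match.groupdict().items() if v}
    return "unknown", {}

print(parse("I want to buy two tickets for Inception at 8 pm"))
# ('order_ticket', {'title': 'Inception', 'time': '8 pm'})
```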
The statistical approach is more about giving some ML algorithm a bunch of data and letting it extract regularities from it, focusing the manual effort on collecting and labeling a good training set. In my opinion, in order to get "good enough" results, the amount of data you'll need might be prohibitively large, unless you can come up with a way to easily collect large amounts of at least partially labeled data.
Practically, I would suggest considering a hybrid approach here: use some general ML-based statistical tools to extract information from user input, then apply manually built rules/templates. For instance, you could use Google's Parsey McParseface to do syntactic parsing, then apply a rule engine on the results, e.g. match the verb against a list of possible actions like "buy", use the extracted grammatical relationships to find candidates for movie names, etc. (see the sketch below). This should get you to pretty good results quickly, as the strength of the syntactic parser would allow "understanding" even elaborate and potentially confusing sentences.
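A hedged sketch of that hybrid idea, substituting spaCy for Parsey McParseface purely because it is easy to call from Python (the action list and the entity heuristics are illustrative assumptions):

```python
import spacy  # stand-in for a syntactic parser; the answer suggests Parsey McParseface

ACTIONS = {"buy", "book", "order", "reserve"}

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def analyze(utterance):
    doc = nlp(utterance)
    action = None
    candidates = []
    for token in doc:
        # Match verbs against the list of known actions ("buy", "book", ...).
        if token.pos_ == "VERB" and token.lemma_ in ACTIONS:
            action = token.lemma_
        # Objects of verbs/prepositions are candidates for titles, counts, times.
        if token.dep_ in ("dobj", "pobj"):
            candidates.append(doc[token.left_edge.i : token.right_edge.i + 1].text)
    return action, candidates

print(analyze("I would like to book two tickets for the late showing of Dune"))
```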
I would also suggest postponing some of the elements you are thinking about, like spell correction, and even stemming and a synonym DB. Since the problem is relatively closed, you'll probably get better ROI from investing in a rule/template framework and manual rule creation. This advice also applies to explicit modeling of the conversation flow.
This is a research question not a direct programming question.
I am working on a symbol recognition algorithm. What the software currently does is take an image, divide it into contours (blobs), and match each contour against a list of predefined templates. For each contour it then keeps the template with the highest match rate.
The algorithm is doing fairly well; however, I need to train it better. What I mean is this:
I want to use a machine learning algorithm that will train my algorithm to match better. Let's take an example:
I run the recognition on a symbol and the algorithm finds that this symbol is a car. I then confirm that result (maybe by clicking "Yes" or "No") and the algorithm should learn from it. So if I click NO, the algorithm should learn that this is not a car and produce a better result next time (maybe try to match something else), while if I click YES it will know it was correct and perform better the next time it searches for a car.
This is the concept I am trying to research. I need documents or algorithms that can achieve this sort of thing. I am not looking for implementations or programming, just concepts or research.
I have done a lot of research and read a lot about machine learning, neural networks, and decision trees, but I was not able to figure out how I could use any of them in my scenario.
I hope I was clear and that this type of question is allowed on Stack Overflow; if not, I'm sorry.
Thanks a lot for any help or tips.
Image recognition is still a challenge in the community. What you described in your process of manually clicking yes/no is just creating labeled data. Since this is a very broad area, I will just point you to a few links that might be useful.
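Those yes/no confirmations can be fed to a model incrementally (online learning). As a hedged sketch only (scikit-learn's SGDClassifier is my choice here, not something this answer prescribes, and the feature extractor is hypothetical):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])  # 0 = "not a car", 1 = "car"
clf = SGDClassifier()       # linear model that supports incremental updates

def contour_features(contour_scores):
    """Hypothetical: use your existing per-template match rates as the feature vector."""
    return np.asarray(contour_scores, dtype=float).reshape(1, -1)

def learn_from_feedback(contour_scores, user_said_yes):
    """Update the model from a single Yes/No click (this is exactly labeled data)."""
    label = np.array([1 if user_said_yes else 0])
    clf.partial_fit(contour_features(contour_scores), label, classes=CLASSES)

# e.g. match rates against 4 templates, and the user clicked "Yes":
learn_from_feedback([0.91, 0.12, 0.40, 0.05], user_said_yes=True)
```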
To get started, you might want to use some existing image databases instead of creating your own, which saves you a lot of effort, e.g. the car dataset in the UIUC image database.
Since you already have a background in machine learning, you can take a look at survey papers that match your project interests, e.g. search Google for "object recognition survey paper" or "feature extraction car".
Then you can dive into some good papers and see whether they are suitable for your project. For example, you can check the two papers below, which are linked with the UIUC image database.
Shivani Agarwal, Aatif Awan, and Dan Roth. Learning to detect objects in images via a sparse, part-based representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11):1475-1490, 2004.
Shivani Agarwal and Dan Roth. Learning a sparse representation for object detection. In Proceedings of the Seventh European Conference on Computer Vision, Part IV, pages 113-130, Copenhagen, Denmark, 2002.
Also check for existing implementations instead of starting from scratch; in your case, OpenCV should be a good one to start with.
For image recognition, feature extraction is one of the most important steps. You might want to check some state-of-the-art algorithms in the community (SIFT, mean shift, Haar features, etc.).
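As a hedged illustration of the feature-extraction step with OpenCV (the file names are placeholders, and I use ORB as a free alternative to SIFT):

```python
import cv2

# Placeholder file names; substitute your own contour image and template.
image = cv2.imread("symbol.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("car_template.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()  # keypoint detector + binary descriptor
kp_img, des_img = orb.detectAndCompute(image, None)
kp_tpl, des_tpl = orb.detectAndCompute(template, None)

# Brute-force matching with Hamming distance (appropriate for ORB descriptors).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_img, des_tpl), key=lambda m: m.distance)

# A crude match score: count of low-distance correspondences.
good = [m for m in matches if m.distance < 50]
print(f"{len(good)} good matches out of {len(matches)}")
```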
Boosting algorithms might also be useful when you reach the classification step; I see a lot of scholars mention them in the image recognition community.
As #nickbar suggests, discuss more at https://stats.stackexchange.com/