Shazam for voice recognition on iPhone / iOS

I am trying to build an app that allows the user to record individual people speaking, save the recordings on the device, and tag each recording with the name of the person who spoke. Then there is a detection mode, in which I record someone and the app tells me their name if they are in the local database.
First of all - is this possible at all? I am very new to iOS development and not so familiar with the available APIs.
More importantly, which API should I use (ideally free) to match the incoming voice against the recordings I have in the local database? This should behave something like Shazam, but much simpler, since the database I am matching against is much smaller.

If you're new to iOS development, I'd start with the core app that records the audio and lets people manually choose a profile/name to attach it to, and worry about the speaker-recognition part later.
You obviously have two options for the recognition side of things: You can either tie in someone else's speech authentication/speaker recognition library (which will probably be in C or C++), or you can try to write your own.
How many people are going to use your app? You might be able to create something basic yourself. If it's the difference between a man and a woman, you could probably figure that out by doing an FFT spectral analysis of the audio and working out where the frequency peaks are. Obviously the frequencies used to enunciate different phonemes vary somewhat, so solving the general case for two people who sound fairly similar is probably hard. You'll need to train the system with a bunch of text and build some kind of model of frequency distributions. You could try clustering or something similar, but you'll run into a fair bit of maths fairly quickly (Gaussian mixture models, et al.). There are libraries/projects that'll do this; you might be able to port this one from MATLAB, for example: https://github.com/codyaray/speaker-recognition
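To make the "find the frequency peaks" idea concrete, here's a toy sketch (the function name and numbers are mine, not from any library): a naive DFT over one frame of samples that returns the dominant frequency. Typical speech fundamentals are roughly 85-180 Hz for men and 165-255 Hz for women, so even something this crude can separate some voices.

```swift
import Foundation

// Toy illustration only: a naive O(n^2) DFT over a single short frame.
// Real code would use a proper FFT (e.g. the Accelerate framework) and
// analyse many frames, not one.
func dominantFrequency(frame: [Double], sampleRate: Double) -> Double {
    let n = frame.count
    guard n >= 4 else { return 0 }
    var bestBin = 1
    var bestMagnitude = 0.0
    // Only bins below n/2 are meaningful for real-valued input.
    for k in 1..<(n / 2) {
        var re = 0.0, im = 0.0
        for t in 0..<n {
            let angle = 2.0 * Double.pi * Double(k) * Double(t) / Double(n)
            re += frame[t] * cos(angle)
            im -= frame[t] * sin(angle)
        }
        let magnitude = re * re + im * im
        if magnitude > bestMagnitude {
            bestMagnitude = magnitude
            bestBin = k
        }
    }
    // Convert the winning bin index back to Hz.
    return Double(bestBin) * sampleRate / Double(n)
}
```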
If you want to take something off-the-shelf, I'd go with a straight C library like Mistral, as it should be relatively easy to call into from Objective-C.
The SpeakHere sample code should get you started for audio recording and playback.
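SpeakHere is an Objective-C / Audio Queue sample; if all you need is basic voice recording, AVFoundation's AVAudioRecorder is an easier place to start. A minimal sketch (the file URL and settings are placeholder choices, and a real app also has to request microphone permission first):

```swift
import AVFoundation

// Minimal recording setup; error handling stripped down to keep it short.
func startRecording(to fileURL: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    // Mono AAC at 44.1 kHz is plenty for voice.
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]

    let recorder = try AVAudioRecorder(url: fileURL, settings: settings)
    recorder.record()
    return recorder
}

// When the user taps "stop", call recorder.stop() and store the file path
// alongside the name the user picked for that speaker.
```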
Also, training your app to recognise a user may well take longer than the time they'd save over just picking their name from a list. Unless you're intending their voice to be some kind of security passport, it might just not be worth bothering with.

Related

Improving Twilio Speech Recognition of Proper Nouns

I am working on an application that gathers a user's voice input for an IVR. The input we're capturing is a limited set of proper nouns, but even though we have added hints for all of the possible options, we very frequently get back unintelligible results, possibly because our users have various accents from all parts of the world. I'm looking for a way to further improve the speech recognition results beyond just using hints. The available Google adaptive classes will not be useful, as there are none that match the type of input we're gathering. I see that Twilio recently added something called experimental_utterances that may help, but I'm finding little technical documentation on what it does or how to implement it.
Any guidance on how to improve our speech recognition results?
Google does a decent job of recognising proper names, but only asynchronously, not in real time. I've not seen a PaaS tool that can do this in real time. I recommend you change your approach: identify callers based on ANI or account number, or have them record their name for manual transcription.
david

"Sound" Recognition in Swift?

I'm working on an application in Swift and I was thinking about a way to get non-speech sound recognition into my project.
What I mean is: is there a way I can take in sound input and match it against some predefined sounds already incorporated in the project, and, if a match occurs, have it trigger some particular action?
Is there any way to do the above? I'm thinking of breaking up the sounds and doing the checks, but I can't seem to get any further than that.
My personal experience follows matt's comment above: this requires serious technical knowledge.
There are several ways to do this, and one typical approach is as follows: extract some properties from the sound segment of interest (audio feature extraction), and classify this audio feature vector with some kind of machine learning technique. This typically requires a training phase in which the machine learning technique is given examples of the sounds you want to recognize (your predefined sounds) so that it can build a model from that data.
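As a rough illustration of that pipeline (the feature choice and all names here are mine, not from any SDK): extract a tiny feature vector per clip, then pick the closest of your predefined sounds.

```swift
// Hypothetical sketch: two trivial features (RMS energy and zero-crossing rate)
// and nearest-neighbour classification. Real systems use richer features
// (e.g. MFCCs) and a trained classifier, but the structure is the same.
struct SoundTemplate {
    let name: String
    let features: [Double]   // built once from your example recordings
}

// 1. Feature extraction.
func features(of samples: [Double]) -> [Double] {
    guard samples.count > 1 else { return [0, 0] }
    let rms = (samples.map { $0 * $0 }.reduce(0, +) / Double(samples.count)).squareRoot()
    var crossings = 0
    for i in 1..<samples.count where (samples[i] >= 0) != (samples[i - 1] >= 0) {
        crossings += 1
    }
    return [rms, Double(crossings) / Double(samples.count)]
}

// 2. Classification: the nearest template by Euclidean distance wins.
func classify(_ samples: [Double], against templates: [SoundTemplate]) -> String? {
    let f = features(of: samples)
    return templates.min { distance($0.features, f) < distance($1.features, f) }?.name
}

func distance(_ a: [Double], _ b: [Double]) -> Double {
    zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +).squareRoot()
}
```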
Without knowing what types of sounds you're aiming to recognize, maybe our C/C++ SDK available here might do the trick for you: http://www.samplesumo.com/percussive-sound-recognition
There's a technical demo on that page that you can download and try with your sounds. It's a C/C++ library, and there is a Mac, Windows and iOS version, so you should be able to integrate it with a Swift app on iOS. Maybe this will allow you to do what you need?
If you want to develop your own technology, you may want to start by finding and reading some scientific papers using the keywords "sound classification", "audio recognition", "machine listening", "audio feature classification", ...
Matt,
We've been developing a bunch of cool tools to speed up iOS development, especially in Swift. One of these tools is what we call TLSphinx: a Swift wrapper around Pocketsphinx which can perform speech recognition without the audio ever leaving the device.
I think TLSphinx can help you solve your problem, since it is a fully open-source library. Search for it on GitHub ('TLSphinx'); you can also download our iOS app ('Tryolabs Mobile Showcase') and try the module live to see how it works.
Hope it is useful!
Best!

Creating an auto-DJ app

I'm trying to create an 'auto DJ' application that would let smartphone users select a playlist of songs and create a seamless mix for playback. There are a few steps involved: read a playlist of audio files, calculate their waveforms/spectra, determine the BPMs, and organize the compatible songs into a new playlist in the order they will be played (based on compatible tempos and keys).
The app would have to be able to scan the waveform of a song and recognize the beginning of the 'main' part of the song (skipping slow intros/outros). I also imagine having some effects: filtering, so it can filter the bass out of the new track being mixed in and switch the basslines at an appropriate time, and perhaps reverb that the user could control as well.
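To give an idea of what the "determine the BPMs" step involves, here is a very crude sketch of what I have in mind (all names and thresholds are placeholders); real beat trackers are considerably more robust:

```swift
// Hypothetical sketch: crude tempo estimate from mono PCM samples, illustrating
// "energy onsets -> inter-onset intervals -> BPM".
func estimateBPM(samples: [Float], sampleRate: Double) -> Double? {
    let frameSize = 1024

    // 1. Energy per frame.
    var energies: [Float] = []
    var start = 0
    while start + frameSize <= samples.count {
        let frame = samples[start..<start + frameSize]
        energies.append(frame.reduce(0) { $0 + $1 * $1 })
        start += frameSize
    }
    guard energies.count > 16 else { return nil }

    // 2. Mark frames whose energy jumps well above the recent average as onsets,
    //    ignoring onsets that are less than ~0.25 s apart.
    let window = 8
    let minGap = Int(0.25 * sampleRate / Double(frameSize))
    var onsets: [Int] = []
    for j in window..<energies.count {
        let localAverage = energies[(j - window)..<j].reduce(0, +) / Float(window)
        let farEnough = onsets.last.map { j - $0 >= minGap } ?? true
        if energies[j] > 1.5 * localAverage && farEnough {
            onsets.append(j)
        }
    }
    guard onsets.count > 2 else { return nil }

    // 3. Median inter-onset interval -> seconds per beat -> BPM.
    var intervals = zip(onsets.dropFirst(), onsets).map { $0 - $1 }
    intervals.sort()
    let framesPerBeat = Double(intervals[intervals.count / 2])
    let secondsPerBeat = framesPerBeat * Double(frameSize) / sampleRate
    return secondsPerBeat > 0 ? 60.0 / secondsPerBeat : nil
}
```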
I'm just trying to see how feasible a project this is for 3-4 busy college students in the span of ~4 months. I'm not sure if it would be an Android or iOS app, or perhaps even a Windows app, and not sure what language we would use (likely Python or Java); whichever has the most useful audio analysis libraries. Obviously it would work better for certain genres of music (house, trance), but I'd still really like to try to create this.
Thanks for any feedback
As much as I would like to hear a more experienced person's opinion on this, I would say that in your situation it would be a very big undertaking. Since it sounds like you don't have experience using audio analysis libraries/programs, you might want to start experimenting with those; most of them are likely going to be in C/C++, not Java/Python. Here are a couple I know of, but I'd recommend doing your own research:
http://www.underbit.com/products/mad/
http://audacity.sourceforge.net/
It doesn't sound that feasible in your situation, but that depends on your programming/project experience and motivation to create it.
Good luck

How to translate between two languages at the same time without delay, for instance "two online games written in different languages"

I have a question about writing a script that can bridge online games built from different codebases. I think the easiest way to explain it is this: I need to make a platform on which PlayStation and Xbox players are allowed to play Modern Warfare 3 online together.
Mathematically it seems possible: in the end you have two different screens that show the same thing. On the platform, Sony and Microsoft players stream their code or screen to the platform and play together. The big problem is that it arrives in two different forms, which you have to translate into one common form in less than 0.001 seconds.
Honestly, I still have to get into this stuff, but I cannot get much further on my own.
Do you have any tips, other forums, or solutions for this problem? Maybe it requires writing a new language? (Google technically does something similar for translating phone conversations.)
Depending on the game this might not be possible even in theory. Many console games use a peer-to-peer lock-step synchronization model for multiplayer. Games that use this approach only send each other the player input from the other consoles and rely on deterministic simulation (the same inputs produce the same outputs) to keep the systems synchronized.
This only works when the exact same compiled code is running on the same CPU for all players. Games with this networking model usually have periodic desynch checks to make sure that the different systems haven't drifted out of sync with each other. A desynch failure is usually considered a fatal error and either a bug in the game or evidence of attempted cheating by one of the players.
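As a toy illustration of why lock-step only works with identical code: every peer applies the same ordered inputs to identical state, and a periodic checksum exposes any drift. (This sketch is mine, not from any actual game engine.)

```swift
// A toy deterministic simulation: every peer applies the same ordered inputs
// to identical state, so the states stay in sync without exchanging anything
// but the inputs themselves.
struct Input { let playerID: Int; let move: Int }

struct Simulation {
    private(set) var positions: [Int: Int] = [:]   // playerID -> position

    mutating func apply(_ inputs: [Input]) {
        // Inputs must be applied in a fixed order on every machine,
        // otherwise the simulations drift apart.
        for input in inputs.sorted(by: { $0.playerID < $1.playerID }) {
            positions[input.playerID, default: 0] += input.move
        }
    }

    // Periodic desynch check: peers compare this value; a mismatch means the
    // simulations have diverged (a bug, or evidence of cheating).
    func checksum() -> Int {
        positions.sorted { $0.key < $1.key }
            .reduce(17) { 31 &* $0 &+ $1.key &+ $1.value }
    }
}
```

The moment one console's compiled code rounds a number differently or orders an update differently, the checksums stop matching, which is exactly why this model can't span PlayStation and Xbox builds.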
Other multiplayer games use a client-server model, so in theory it would be possible to allow different consoles to play against each other. Reverse engineering the network protocol would be a formidable technical challenge, however, and getting this to work reliably would be difficult.
Even if you could solve the technical problems, you would likely have even bigger legal issues to overcome. Sony and Microsoft don't want to allow cross-platform play, so even though it would be possible in theory to make this work with a client-server multiplayer game, developers aren't able to implement it. A third party trying to make this work would likely face legal challenges from Microsoft, Sony, and the game developer.

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether realtime audio input matches one of 144 given (and comfortably distinct) phoneme-pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this, and it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App › Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package. Can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training by the user be necessary? I would have thought not, as it is such an elementary task compared with full language models of thousands of words and a far larger and more subtle phoneme base. However, it would be acceptable (though not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution which runs pretty much in real time, so even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then it glows to show whether they are singing the correct pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Your training database should also be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then send the recorded waveform over a data channel to a server for the recognition, pretty much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and the GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work already.
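For a vocabulary this small, one "minimum technology" option is plain template matching: record one example per phoneme pair, extract per-frame features (e.g. MFCCs), and compare the incoming feature sequence against each template with dynamic time warping. A hedged sketch of just the matching part (feature extraction not shown, and all names are mine):

```swift
// `FeatureVector` stands in for whatever per-frame features you extract.
typealias FeatureVector = [Double]

// Classic DTW: cost of the best alignment between two feature sequences.
func dtwDistance(_ a: [FeatureVector], _ b: [FeatureVector]) -> Double {
    guard !a.isEmpty, !b.isEmpty else { return .infinity }
    var cost = Array(repeating: Array(repeating: Double.infinity, count: b.count + 1),
                     count: a.count + 1)
    cost[0][0] = 0
    for i in 1...a.count {
        for j in 1...b.count {
            let d = euclidean(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        }
    }
    return cost[a.count][b.count]
}

func euclidean(_ x: FeatureVector, _ y: FeatureVector) -> Double {
    zip(x, y).map { ($0 - $1) * ($0 - $1) }.reduce(0, +).squareRoot()
}

// Recognition: the template with the smallest DTW distance wins.
func recognise(_ input: [FeatureVector],
               templates: [String: [FeatureVector]]) -> String? {
    templates.min { dtwDistance(input, $0.value) < dtwDistance(input, $1.value) }?.key
}
```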