How to play a tone using Xamarin on iOS

I would like my Xamarin-based iPhone app to play a custom tone but I'm new to iOS development and am struggling to find a simple way to do so.
Ultimately I'd like to be able to make a metal detector type of sound, where infrequent beeps become more frequent and eventually continuous but, to get started, a simple sine wave will suffice.
I've found the objectal-monotouch library (https://github.com/tescott/objectal-monotouch) and an example project in Objective-C (http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html), but the former has outdated references to MonoTouch and the latter is quite a bit of code to convert for a non-Objective-C programmer.
Before I set off on either of these paths, can anyone recommend any sample code or an up-to-date library to achieve this more simply?
Many thanks,
Richard
Edit: I went ahead and ported the cocoawithlove example. Please contact me if it's of interest. It wasn't rocket science, but it wasn't trivial either, due to significant differences in the Xamarin API. If anyone knows of any resources to aid such conversions (e.g. mappings from the native API to Xamarin or better Xamarin docs!) please let me know!

// Play a sound file with AVAudioPlayer.
// Keep the player referenced (e.g. in a field) for the duration of playback;
// disposing it right after Play() -- as a short-lived using block would -- cuts the sound off.
NSUrl soundURL = NSUrl.FromFilename(soundfile);
player = AVAudioPlayer.FromUrl(soundURL);   // player is a field, not a local
player.Volume = 1.0f;
player.PrepareToPlay();
player.Play();
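For reference, the core of a tone generator like the cocoawithlove example is just a loop filling an output buffer with sine samples. Here is a hedged, platform-neutral Python sketch of that math (the function name and signature are my own; in the real app this loop runs inside an audio render callback, e.g. an Output Audio Unit, writing into the callback's buffer):

```python
import math

def fill_sine(frames, frequency, sample_rate, phase=0.0):
    """Generate `frames` samples of a sine tone.

    Returns (samples, next_phase); carrying the phase across calls keeps
    successive buffers click-free, which is what a render callback needs.
    """
    step = 2.0 * math.pi * frequency / sample_rate
    samples = []
    for _ in range(frames):
        samples.append(math.sin(phase))
        phase = (phase + step) % (2.0 * math.pi)
    return samples, phase

# A 440 Hz tone at a 44.1 kHz sample rate; the first sample is sin(0) == 0.
buf, phase = fill_sine(64, 440.0, 44100.0)
```

The metal-detector effect then reduces to scheduling when this tone plays: lengthen or shorten the silent gaps between buffers of tone until they vanish and the beep becomes continuous.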

Related

"Sound" Recognition in Swift?

I'm working on an application in Swift and I was thinking about a way to get non-speech sound recognition into my project.
What I mean is: is there a way I can take in sound input, match it against some predefined sounds already incorporated in the project, and trigger some particular action when a match occurs?
Is there any way to do the above? I'm thinking of breaking up the sounds and doing the checks, but I can't seem to get any further than that.
My personal experience matches matt's comment above: this requires serious technical knowledge.
There are several ways to do this, and one is typically as follows: extract some properties from the sound segment of interest (audio feature extraction), and classify this audio feature vector with some kind of machine learning technique. This typically requires some training phase where the machine learning technique was given some examples to learn what sounds you want to recognize (your predefined sounds) so that it can build a model from that data.
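As a concrete (if toy) illustration of the "audio feature extraction" step, here is a hedged Python sketch computing two classic frame-level features, RMS energy and zero-crossing rate. The function name is my own, and real systems would use richer features (e.g. MFCCs) before handing the vector to a classifier:

```python
import math

def frame_features(samples):
    """Compute two simple features for one frame of audio samples
    (floats in [-1, 1]): RMS energy and zero-crossing rate."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    # Count sign changes between consecutive samples.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    zcr = crossings / (n - 1)
    return rms, zcr

# A loud alternating signal: energy 0.5, a zero crossing at every step.
rms, zcr = frame_features([0.5, -0.5, 0.5, -0.5])
```

A feature vector like `(rms, zcr)` per frame is what the machine learning model is trained on: the training phase sees many labeled frames of your predefined sounds and learns to separate them in that feature space.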
Without knowing what types of sounds you're aiming to recognize, maybe our C/C++ SDK available here might do the trick for you: http://www.samplesumo.com/percussive-sound-recognition
There's a technical demo on that page that you can download and try with your sounds. It's a C/C++ library, and there is a Mac, Windows and iOS version, so you should be able to integrate it with a Swift app on iOS. Maybe this will allow you to do what you need?
If you want to develop your own technology, you may want to start by finding and reading some scientific papers using the keywords "sound classification", "audio recognition", "machine listening", "audio feature classification", ...
Matt,
We've been developing a bunch of cool tools to speed up iOS development, especially in Swift. One of these tools is what we called TLSphinx: a Swift wrapper around Pocketsphinx which can perform speech recognition without the audio leaving the device.
I assume TLSphinx can help you solve your problem since it is a totally open source library. Search for it on GitHub ('TLSphinx'), and you can also download our iOS app ('Tryolabs Mobile Showcase') and try the module live to see how it works.
Hope it is useful!
Best!

FM synthesis in iOS

I would like to modulate the signal from the mic input with a sine wave at 200 Hz (FM only). Does anyone know of any good tutorials/articles that will help me get started?
Any info is very welcome
Thanks
I suggest you start with the Audio File Stream Services Reference.
You can also find some basic tutorials here: Getting Started with Audio & Video.
The SpeakHere example app in particular could be interesting.
Hope that helps you.
The standard way to do audio processing in iOS or OSX is Core Audio. Here's Apple's overview of the framework.
However, Core Audio has a reputation for being very difficult to learn, especially if you don't have experience with C. If you still want to learn Core Audio, then this book is the way to go: Learning Core Audio.
There are simpler ways to work with audio on iOS and OSX, one of them being AudioKit, which was developed specifically so developers can quickly prototype audio without having to deal with lower-level memory management, buffers, and pointer arithmetic.
There are examples showing both FM synthesis and audio input via the microphone, so you should have everything you need :)
Full disclosure: I am one of the developers of AudioKit.
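Whichever framework you pick, the math behind FM synthesis itself is compact: a modulator sine varies the phase (equivalently, the instantaneous frequency) of a carrier. A hedged, platform-neutral Python sketch of that loop (names are my own; AudioKit or Core Audio would run the same computation per render callback):

```python
import math

def fm_synthesize(frames, carrier_hz, modulator_hz, mod_index, sample_rate):
    """Phase-modulation FM: out[n] = sin(carrier_phase + index * sin(mod_phase)).

    mod_index controls how far the modulator pushes the carrier's phase,
    i.e. how bright/clangorous the resulting spectrum is.
    """
    out = []
    for n in range(frames):
        t = n / sample_rate
        mod = mod_index * math.sin(2.0 * math.pi * modulator_hz * t)
        out.append(math.sin(2.0 * math.pi * carrier_hz * t + mod))
    return out

# A 440 Hz carrier modulated at 200 Hz, as in the question.
samples = fm_synthesize(256, 440.0, 200.0, 2.0, 44100.0)
```

To modulate the mic input rather than a fixed carrier, you would replace the carrier sine with the incoming mic samples inside the same per-sample loop.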

how to convert stereo audio to mono?

I'm developing an app for iPhone in which I want to play audio in the left channel and right channel separately (the audio being played is multi-channel). So far I have tried many approaches, for example looking for properties (e.g. setPan:) that I could set to do this, but without success. What should I do about this problem? Could you please give me some suggestions? Thank you very much!
For manipulating audio at the channel level, see the AVAudioSession class in AVFoundation in the docs that come with Xcode.
In particular, the Audio Session Programming Guide.
I think the Novocaine library will be helpful. You can work through this example.
It'll help you for sure. In the example, you can alter the following method
- (void)filterData:(float *)data numFrames:(UInt32)numFrames numChannels:(UInt32)numChannels
in NVDSP.mm file to get what you want.
One easy, powerful, free and maintained solution to manipulate audio in iOS is AudioKit. Through it you can create something like this:
// Mute the right channel of the input, then pan and mix the result.
leftSignal = AKBooster(input)
leftSignal.rightGain = 0
leftPannedRight = AKPanner(leftSignal, pan: 0.5)  // pan: -1 = hard left, 1 = hard right
mixer = AKMixer(leftPannedRight)
AudioKit.output = mixer
It's a great solution for working with audio without having to deal with low-level frameworks. To help you get started, there are lots of tutorials online and answered AudioKit questions here on Stack Overflow. A nice starting point is AudioKit's playgrounds.
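Under the hood, both the Novocaine and AudioKit approaches come down to per-channel gain applied to interleaved stereo frames, and the stereo-to-mono conversion in the question title is just the average of each L/R pair. A hedged Python sketch of that math (function names are my own; in NVDSP.mm or a render callback you would do the same over the `float *data` buffer):

```python
def apply_channel_gains(interleaved, left_gain, right_gain):
    """Scale an interleaved stereo buffer [L0, R0, L1, R1, ...] per channel.
    left_gain=1.0, right_gain=0.0 leaves audio only in the left channel."""
    out = list(interleaved)
    for i in range(0, len(out), 2):
        out[i] *= left_gain        # even indices: left channel
        out[i + 1] *= right_gain   # odd indices: right channel
    return out

def stereo_to_mono(interleaved):
    """Downmix interleaved stereo to mono by averaging each L/R pair."""
    return [(interleaved[i] + interleaved[i + 1]) / 2.0
            for i in range(0, len(interleaved), 2)]

left_only = apply_channel_gains([0.5, 0.5, -0.25, -0.25], 1.0, 0.0)
mono = stereo_to_mono([0.5, 0.5, -0.25, 0.25])
```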

Audio Framework Confusion

I've read quite a bit both here (Audio Framework in iPhone) and abroad but am still confused as to which Audio Framework to use.
I'm able to get some easier things done, like recording and playing back, but I'm looking to the future of the app, where I'll be doing more complex things like managing past recordings (although maybe that's an NSURL bookmark thing) and editing audio.
Right now I'm using AVFoundation but have started reading the docs for Core Audio (and there's also AudioToolbox). I wish there was a developer doc called "Understanding the Different Audio Frameworks and How and When to use them" because, well, the docs are dense and I'm having trouble figuring out which path to go down.
Links to good docs would also be much appreciated!
I recommend you take a look at the recent Learning Core Audio book. Its purpose is to clear up the confusion around the audio frameworks on Mac OS and iOS. If you want "good docs", it's well worth getting.
Depending on your requirements, you might also want to consider some of the non-Apple audio frameworks, particularly the MoMu release of STK, which in many respects will be simpler and easier to use than Apple's frameworks.

XNA | C# : Record and Change the Voice

My aim is to code a project which records human sound and changes it (with effects).
E.g. a person records their voice through the microphone (speaking for a while), and then the program makes it sound like a baby.
This must run effectively and fast (the altering operation must run while recording, too).
What is the optimum way to do it ?
Thanks
If you're looking for either XNA or DirectX to do this for you, I'm pretty sure you're going to be out of luck (I don't have much experience with DirectSound; maybe somebody can correct me). What it sounds like you want to do is realtime digital signal processing, which means that you're either going to need to write your own code to manipulate the raw waveform, or find somebody else who's already written the code for you.
If you don't have experience writing this sort of thing, it's probably best to use somebody else's signal processing library, because this sort of thing can quickly get complicated. Since you're developing for the PC, you're in luck; you can use any library you like using P/Invoke. You might try out some of the solutions suggested here and here.
MSDN has some info about the Audio namespace from XNA, and the audio recording introduced in version 4:
Working with Microphones
Recording Audio from a Microphone
Keep in mind that recorded data is returned in PCM format.
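Since the microphone hands you raw 16-bit PCM bytes, any effect starts by decoding those bytes into samples, processing them, and re-encoding. Here is a hedged Python sketch of that round trip with a trivial gain effect standing in for the real voice effect (the function name is my own; in C# you would do the equivalent with `BitConverter` over the recorded `byte[]`):

```python
import struct

def process_pcm16(raw_bytes, gain):
    """Decode little-endian 16-bit PCM, apply a gain, re-encode with clipping."""
    count = len(raw_bytes) // 2
    samples = struct.unpack("<%dh" % count, raw_bytes)
    processed = []
    for s in samples:
        v = int(s * gain)
        processed.append(max(-32768, min(32767, v)))  # clip to the 16-bit range
    return struct.pack("<%dh" % count, *processed)

# Halve the volume of four test samples.
quiet = process_pcm16(struct.pack("<4h", 1000, -1000, 30000, -30000), 0.5)
```

A real baby-voice effect would replace the gain step with a pitch-shifting algorithm, but the decode/process/re-encode framing stays the same, and doing it per recorded buffer is what makes the effect run while recording continues.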
