Generate a sound (not from a file) - iOS

I'm building a small game prototype, and I'd like to be able to play simple sounds whose length/tone/pitch will vary based on what the user is doing.
This is surprisingly hard to do. The closest resource I found was:
http://www.tmroyal.com/playing-sounds-in-swift-audioengine.html
But it does not actually generate any sound on my device or in the iOS Simulator.
Does anyone know of any working code to play ANY procedurally generated audio? Simple Sine Wave would do.

https://gist.github.com/rgcottrell/5b876d9c5eea4c9e411c
This code, on the other hand, works, and it's beautifully written. Success!
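For reference, here is a minimal sine-wave sketch using AVAudioSourceNode (available since iOS 13), independent of the linked gist; the 440 Hz frequency and 0.25 amplitude are placeholder values to vary as the game state changes:

```swift
import AVFoundation

let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
let frequency = 440.0   // Hz; vary this for different tones
var phase = 0.0

// The render block runs on the audio thread whenever the engine needs samples.
let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2 * Double.pi * frequency / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase)) * 0.25   // keep the level modest
        phase += increment
        if phase > 2 * Double.pi { phase -= 2 * Double.pi }
        for buffer in buffers {                 // write to every channel
            let typed: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            typed[frame] = sample
        }
    }
    return noErr
}

engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: nil)
do {
    try engine.start()
} catch {
    print("Engine failed to start: \(error)")
}
```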

You can try AudioKit.
It's an audio framework built on top of Core Audio.
In their Continuous Control example, they use a simple FM oscillator with controlled parameters.
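As a rough sketch (assuming AudioKit 5, where FMOscillator ships in the SoundpipeAudioKit package; names and signatures differ between AudioKit versions):

```swift
import AudioKit
import SoundpipeAudioKit

let engine = AudioEngine()
let oscillator = FMOscillator()
engine.output = oscillator

// Placeholder values; in the Continuous Control example these are
// driven by the UI instead.
oscillator.baseFrequency = 440
oscillator.modulationIndex = 10   // higher values give a brighter FM timbre

do {
    try engine.start()
    oscillator.start()
} catch {
    print("AudioKit engine failed to start: \(error)")
}
```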

Related

What kind of drum sampling options does AudioKit have?

I'm working in AudioKit and looking to understand how people have incorporated drums. Obviously, the sampler is an option, but I am wondering if there is a built-in option similar to some of the basic synthesis options.
There are a few options. I personally like AppleSampler/MIDISampler, as in the example, but instead of using audio files you can create an EXS sampler instrument in Logic, where you can assign notes for different velocities. AppleSampler can also load AUPresets made in GarageBand and SoundFonts (SF2). The DunneAudioKit Sampler is an option if you are working with SFZ files, but I think that might be a work in progress in AudioKit 5. Loading WAV files directly into AppleSampler is also a good option if you just want one-shot sounds.
I'm assuming you're mostly talking about playback of samples, not recording.
The best built-in option I've seen (other than AppleSampler/MIDISampler) is AudioPlayer, which lets you load in a sample and play it back on demand (from an on-screen pad, etc.). MIDIListener can then help you respond to external MIDI events. It works (I have a pretty big branch in my app where I tried it), but I'm not sure it works well.
I wouldn't recommend the DunneAudioKit Sampler for drums. There is no one-shot playback, so playing the same note in quick succession will cut off the previous note, even if you mess with the release. If you're trying to build a complex/realistic acoustic drum instrument, you'll also want round-robins so that variations of the same hit can be played, which Dunne also doesn't have. It can load SFZ files, but only a very limited subset of SFZ's opcodes (so again, it's missing things like round-robins, mute groups, one-shot, etc.).
Having gone down all those roads, I would suggest starting with AppleSampler, and I would build the EXS or aupreset file in Logic or MainStage rather than trying to build something programmatically.
If your needs are really simple, the examples in AudioKit's recently released drum pad playground are a great place to start, loading single samples onto a specific note in AppleSampler.
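For that simplest case (a single WAV loaded as a one-shot), here is a rough sketch, assuming AudioKit 5 and a placeholder "Kick.wav" bundled with the app (method names vary between AudioKit versions):

```swift
import AudioKit
import AVFoundation

let engine = AudioEngine()
let sampler = AppleSampler()
engine.output = sampler

do {
    // Load a single sample; "Kick.wav" is a placeholder resource name.
    let url = Bundle.main.url(forResource: "Kick", withExtension: "wav")!
    try sampler.loadAudioFile(try AVAudioFile(forReading: url))
    try engine.start()
    // Trigger a hit; map different drums to different note numbers.
    sampler.play(noteNumber: 60, velocity: 127, channel: 0)
} catch {
    print("Sampler setup failed: \(error)")
}
```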

How to apply audio effects in iOS

I use AVPlayer to play audio (streaming or a local file). For this audio I want to apply some effects: boost volume, skip silence, reduce noise, and change speed (in 0.1 intervals).
I did the same thing on Android by creating my own player, decoding different audio formats into PCM data, and then using some C libraries to modify it. It was quite complicated.
Is it possible to do this with AVPlayer, and if not, how can I do it? Something like modifying the audio already decoded by AVPlayer. Is there some iOS API (AVAudioEngine?) or framework (AudioKit?) that can do this?
Thanks!
IMHO the best solution is to use https://github.com/audiokit/AudioKit, as it is well maintained and supports most of the requirements you listed.
Another approach is to import the C library you used in the Android project and write a wrapper around it so it can easily be used from Objective-C/Swift. With this approach you will have less code to maintain, and you guarantee similar results between the two platforms. Would you care to share more about this code?
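For the speed and volume-boost parts of the question specifically, plain AVAudioEngine goes a long way for local files (silence skipping and noise reduction still need custom DSP or a library). A sketch, where audioURL is a placeholder for your local file:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()    // playback speed control
let eq = AVAudioUnitEQ(numberOfBands: 1)  // globalGain gives a simple boost in dB

engine.attach(player)
engine.attach(timePitch)
engine.attach(eq)

do {
    let file = try AVAudioFile(forReading: audioURL)  // audioURL: placeholder
    engine.connect(player, to: timePitch, format: file.processingFormat)
    engine.connect(timePitch, to: eq, format: file.processingFormat)
    engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

    timePitch.rate = 1.2   // speed in 0.1 steps: 0.9, 1.0, 1.1, ...
    eq.globalGain = 6      // +6 dB volume boost

    player.scheduleFile(file, at: nil, completionHandler: nil)
    try engine.start()
    player.play()
} catch {
    print("Playback setup failed: \(error)")
}
```

Note that this replaces AVPlayer for local files; for streaming you'd have to feed the engine buffers yourself, which is where AudioKit or a C library earns its keep.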

XNA | C# : Record and Change the Voice

My aim is to code a project that records a human voice and changes it (with effects).
e.g.: a person records their voice through the microphone (speaking for a while), and then the program makes it sound like a baby.
This must run efficiently and fast (the altering operation must run while recording, too).
What is the optimum way to do it?
Thanks
If you're looking for either XNA or DirectX to do this for you, I'm pretty sure you're going to be out of luck (I don't have much experience with DirectSound; maybe somebody can correct me). What it sounds like you want to do is realtime digital signal processing, which means that you're either going to need to write your own code to manipulate the raw waveform, or find somebody else who's already written the code for you.
If you don't have experience writing this sort of thing, it's probably best to use somebody else's signal processing library, because this sort of thing can quickly get complicated. Since you're developing for the PC, you're in luck; you can use any library you like using P/Invoke. You might try out some of the solutions suggested here and here.
MSDN has some info about the Audio namespace from XNA, and the audio recording introduced in version 4:
Working with Microphones
Recording Audio from a Microphone
Keep in mind that recorded data is returned in PCM format.

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and play back the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades, etc.), which is based on OpenAL. From my research on the net, it seems I have two options:
Keep a record of the user's inputs (i.e., which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack my own low-level sound engine using AudioUnits & specifically RemoteIO so that I manually mix all the sounds and populate the final output buffer by hand and hence can save said buffer to a file. This will be able to be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me the previously played data in the buffer, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I had written tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. More than that, they're not that hard to set up and work with. If you do look for a library to do most of the work for you, look for one written on top of Core Audio, not OpenAL.
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
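For what it's worth, if you can route your sounds through AVAudioEngine instead of OpenAL, a tap on the main mixer is an easy way to capture exactly what's headed to the speakers. This is a sketch of that alternative, not the RemoteIO route described above, and it will not capture OpenAL output:

```swift
import AVFoundation

let engine = AVAudioEngine()
// ... attach and connect your instrument/player nodes here ...

do {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("take.caf")
    let outFile = try AVAudioFile(forWriting: url, settings: format.settings)

    // Every buffer that reaches the main mixer (i.e. everything the app
    // plays through the engine) is appended to the file.
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? outFile.write(from: buffer)
    }
    try engine.start()
} catch {
    print("Capture setup failed: \(error)")
}
```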

Stripping silence from AVAudioRecorder recorded audio

I'm working on an iOS app that allows a user to record some audio. The audio is recorded using AVAudioRecorder then saved to a file.
I'd like to strip the silence from the beginning and the end of the recorded audio.
Any ideas?
I am currently working on a similar task. It isn't trivial by any means, because silence is not going to be a straightforward line of zeros; there is going to be some fluctuation.
If you're guaranteed a clean signal, it would be fairly trivial to set a marker on the first sample with an absolute value greater than say 0.001.
You can set an end marker without having to walk backwards through the file: every time a sample exceeds the threshold, move the end marker to that sample.
If your input may contain blips and squeaks before it starts properly, you will need a more advanced technique. Post a comment below and I will extend the answer.
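Here is a sketch of that marker idea, assuming a mono Float32 file readable by AVAudioFile and the 0.001 threshold suggested above; you'd then write the returned buffer back out with another AVAudioFile:

```swift
import AVFoundation

// Find the first and last samples whose absolute value exceeds the
// threshold, then copy only the span between them (one forward pass:
// the end marker simply moves up on every loud sample).
func trimSilence(at url: URL, threshold: Float = 0.001) throws -> AVAudioPCMBuffer? {
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length))
    else { return nil }
    try file.read(into: buffer)
    guard let samples = buffer.floatChannelData?[0] else { return nil }

    let count = Int(buffer.frameLength)
    var start = -1
    var end = 0
    for i in 0..<count where abs(samples[i]) > threshold {
        if start < 0 { start = i }   // first loud sample
        end = i                      // last loud sample so far
    }
    guard start >= 0 else { return nil }  // the whole file is silence

    let length = AVAudioFrameCount(end - start + 1)
    guard let trimmed = AVAudioPCMBuffer(pcmFormat: buffer.format,
                                         frameCapacity: length) else { return nil }
    trimmed.frameLength = length
    memcpy(trimmed.floatChannelData![0],
           samples + start,
           Int(length) * MemoryLayout<Float>.size)
    return trimmed
}
```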
When I've built audio apps for iOS, the audio eventually ends up on a server. I don't know if your app is similar in that regard. If so, you can do what I did:
use SoX in the backend to post-process the audio, removing silence using a threshold.
If you need to do it all on the phone, it's going to be harder. You should build a power-level filter using OpenAL or an OpenAL wrapper library.
