I'm trying to make my iPhone play a tune without using prerecorded files. What are my options here? AVAudioEngine, AudioKit? I've looked at them, but the learning curve is relatively steep for something I'm hoping is easy. They also seem like tools for creating sound effects given a PCM buffer window.
I'd like to be able to do something like
pitchCreator.play(["C4", "E4", "G4"], durations: [1, 1, 1])
Preferably it should sound like an instrument, or at least not like a pure sine wave.
EDIT: The below code has been replaced by AudioKit
To anyone wondering: I did make it work (kind of) using code similar to the one below.
import AVFoundation

class PitchCreator {
    var engine: AVAudioEngine
    var player: AVAudioPlayerNode
    var mixer: AVAudioMixerNode
    var buffer: AVAudioPCMBuffer

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        mixer = engine.mainMixerNode
        // The capacity must be at least as large as the frame length we fill below.
        buffer = AVAudioPCMBuffer(pcmFormat: player.outputFormat(forBus: 0), frameCapacity: 4096)!
        buffer.frameLength = 4096
        engine.attach(player)
        engine.connect(player, to: mixer, format: player.outputFormat(forBus: 0))
    }

    func play(frequency: Float) {
        let signal = createSignal(frequency: frequency,
                                  amplitudes: [1.0, 0.5, 0.3, 0.1],
                                  bufferSize: Int(buffer.frameLength),
                                  sampleRate: Float(mixer.outputFormat(forBus: 0).sampleRate))
        for i in 0 ..< signal.count {
            buffer.floatChannelData![0][i] = 0.5 * signal[i]
        }
        do {
            try engine.start()
            player.play()
            player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
        } catch {}
    }

    func stop() {
        player.stop()
        engine.stop()
    }

    // One buffer of a simple additive tone: the fundamental plus a few
    // harmonics weighted by `amplitudes`.
    func createSignal(frequency: Float, amplitudes: [Float], bufferSize: Int, sampleRate: Float) -> [Float] {
        let π = Float.pi
        let T = sampleRate / frequency // samples per period of the fundamental
        var x = [Float](repeating: 0.0, count: bufferSize)
        for k in 0 ..< x.count {
            for h in 0 ..< amplitudes.count {
                x[k] += amplitudes[h] * sin(2.0 * π * Float(h + 1) * Float(k) / T)
            }
        }
        return x
    }
}
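For the pitchCreator.play(["C4", "E4", "G4"], ...) call described at the top, a small helper can map note names to frequencies before calling play(frequency:). A rough sketch, assuming equal temperament with A4 = 440 Hz (noteFrequency is just an illustrative name):

import Foundation

func noteFrequency(_ name: String) -> Float? {
    let names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    guard let octave = Int(String(name.suffix(1))),
          let index = names.firstIndex(of: String(name.dropLast())) else { return nil }
    // MIDI note numbering: C-1 = 0, so C4 = 60 and A4 = 69.
    let midi = (octave + 1) * 12 + index
    return 440.0 * powf(2.0, Float(midi - 69) / 12.0)
}

// noteFrequency("A4") == 440.0, noteFrequency("C4") ≈ 261.63

With that, a call could look like pitchCreator.play(frequency: noteFrequency("C4") ?? 440).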
But the synthesized tone doesn't sound good enough, so I've gone with sampling the notes I need and just using AVAudioPlayer to play them instead.
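That sampled-notes fallback can be as small as loading one bundled file into an AVAudioPlayer and keeping a reference to it. A sketch, assuming hypothetical bundled samples such as C4.wav:

import AVFoundation

// Keep a strong reference so the player isn't deallocated mid-playback.
var notePlayer: AVAudioPlayer?

func playSampledNote(_ name: String) {
    guard let url = Bundle.main.url(forResource: name, withExtension: "wav") else { return }
    notePlayer = try? AVAudioPlayer(contentsOf: url)
    notePlayer?.play()
}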
I have gone through the Apple sample code on Equalizing Audio with vDSP, where an audio file is filtered in an AVAudioSourceNode and played back.
My objective is to do exactly the same, but instead of taking the audio from an audio file, take it in real time from the microphone. Is this possible with AVAudioEngine? Two ways of doing it, based on installTap and AVAudioSinkNode, are described in the First strategy and Second strategy sections below.
So far, I have got a bit closer to my objective with the following two strategies.
First strategy
// Added new class variables
private lazy var sinkNode = AVAudioSinkNode { (timestamp, frames, audioBufferList) -> OSStatus in
    let ptr = audioBufferList.pointee.mBuffers.mData?.assumingMemoryBound(to: Float.self)
    var monoSamples = [Float]()
    monoSamples.append(contentsOf: UnsafeBufferPointer(start: ptr, count: Int(frames)))
    self.page = monoSamples
    for frame in 0..<frames {
        print("sink: " + String(monoSamples[Int(frame)]))
    }
    return noErr
}
// AVAudioEngine connections
engine.attach(sinkNode)

// Audio input is passed to the AVAudioSinkNode, and the [Float] array is passed to the AVAudioSourceNode through the `page` variable.
engine.connect(input, to: sinkNode, format: formatt)

engine.attach(srcNode)
engine.connect(srcNode,
               to: engine.mainMixerNode,
               format: format)
engine.connect(engine.mainMixerNode,
               to: engine.outputNode,
               format: format)
// The AVAudioSourceNode accesses the self.page array through the getSignalElement() function.
private func getSignalElement() -> Float {
    return page.isEmpty ? 0 : page.removeFirst()
}
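The render block of the AVAudioSourceNode itself is not listed above; a sketch of how such a block can consume getSignalElement(), following Apple's documented AVAudioSourceNode render-block pattern, looks roughly like this:

private lazy var srcNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        // Pull one sample per frame out of `page` via getSignalElement().
        let value = self.getSignalElement()
        // Write the same sample to every output channel.
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}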
This approach made it possible to play the audio through the AVAudioSourceNode, but the audio stops playing after a few seconds (even though I still successfully receive the self.page array in the AVAudioSourceNode) and the app eventually crashes.
Second strategy
In a similar approach, I used installTap:
engine.attach(srcNode)
engine.connect(srcNode,
               to: engine.mainMixerNode,
               format: format)
engine.connect(engine.mainMixerNode,
               to: engine.outputNode,
               format: format)

input.installTap(onBus: 0, bufferSize: 1024, format: formatt, block: { [weak self] buffer, when in
    let arraySize = Int(buffer.frameLength)
    let samples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count: arraySize))
    self?.page = samples
})

// The AVAudioSourceNode accesses the self.page array through the getSignalElement() function.
private func getSignalElement() -> Float {
    return page.isEmpty ? 0 : page.removeFirst()
}
The outcome of the second strategy is the same as the first. What issues could be making these approaches fail?
You can use AVAudioEngine().inputNode as follows:
let engine = AVAudioEngine()

private lazy var srcNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    return noErr
}

// Attach first
engine.attach(srcNode)

// Then connect the nodes
let input = engine.inputNode
engine.connect(input, to: srcNode, format: input.inputFormat(forBus: 0))
It is important to use input.inputFormat(forBus:) as the format.
do {
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .allowBluetooth])
    try audioSession.setActive(true)
} catch {
    print(error.localizedDescription)
}
engine.attach(player)

// Add this only if you want pitch shifting
let pitch = AVAudioUnitTimePitch()
// pitch.pitch = 1000 // filtered voice
// pitch.rate = 1     // normal rate
// engine.attach(pitch)

engine.attach(srcNode)
engine.connect(srcNode,
               to: engine.mainMixerNode,
               format: engine.inputNode.inputFormat(forBus: 0))
engine.connect(engine.mainMixerNode,
               to: engine.outputNode,
               format: engine.inputNode.inputFormat(forBus: 0))
engine.prepare()
engine.inputNode.installTap(onBus: 0, bufferSize: 512, format: engine.inputNode.inputFormat(forBus: 0)) { (buffer, time) -> Void in
    // self.player.scheduleBuffer(buffer)
    let arraySize = Int(buffer.frameLength)
    let samples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count: arraySize))
    self.page = samples
    print("samples", samples)
}
engine.mainMixerNode.outputVolume = 0.5
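One thing the snippet above does not show is starting the engine; assuming the setup shown, the last step would be:

do {
    // Start the engine once the graph is wired up and prepared.
    try engine.start()
} catch {
    print(error.localizedDescription)
}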
I'd like to use the AudioKit framework to generate a small sound sequence of some high and low sounds.
So what I'm starting with is a message that could look like this: "1100011010"
--> Every digit should be looped over, and if its value is "1", AudioKit should play a (short) high-frequency sound; otherwise it should play a (short) lower-frequency sound.
Because a simple timer loop that triggers the .play() function every 0.15 s to run a 0.1 s sound (high/low) doesn't seem to be very accurate, I decided to use the *AudioKit Sequencer*:
(o) AudioKit:
enum Sequence: Int {
    case snareDrum
}

var snareDrum = AKSynthSnare()
var sequencer = AKSequencer()
var pumper: AKCompressor?
var mixer = AKMixer()

public init() {
    snareDrum >>> mixer
    pumper = AKCompressor(mixer)
    AudioKit.output = pumper
    AudioKit.start()
}

func setupTracks() {
    _ = sequencer.newTrack()
    sequencer.tracks[Sequence.snareDrum.rawValue].setMIDIOutput(snareDrum.midiIn)
    generateMessageSequence()
    sequencer.enableLooping()
    sequencer.setTempo(2000)
    sequencer.play()
}
(o) play:
var message = "1100011010"
var counter = 0.0

for i in message {
    counter += 0.15
    if i == "1" {
        // play high sound at specific position
    } else {
        // play low sound at specific position
    }
}
(o) play low sound at specific position:
sequencer.tracks[Sequence.snareDrum.rawValue].add(noteNumber: 20,
                                                  velocity: 10,
                                                  position: AKDuration(beats: counter),
                                                  duration: AKDuration(beats: 1))
My question: how can I play local sound files at specific positions (using position: AKDuration(beats: counter) as in the code above) instead of default instruments like AKSynthSnare()?
You could create two tracks, each with an AKMIDISampler. One plays a 'low' sample, and the other plays a 'high' sample. Assign the high notes to the high track, and low notes to the low track.
let sequencer = AKSequencer()
let lowTrack = sequencer.newTrack()
let lowSampler = AKMIDISampler()
try! lowSampler.loadWav("myLowSoundFile")
lowTrack?.setMIDIOutput(lowSampler.midiIn)
let highTrack = sequencer.newTrack()
let highSampler = AKMIDISampler()
try! highSampler.loadWav("myHighSoundFile")
highTrack?.setMIDIOutput(highSampler.midiIn)
sequencer.setLength(AKDuration(beats: 4.0))
sequencer.enableLooping()
Then assign the high and low sounds to each track:
let message = "1100011010"
let dur = 4.0 / Double(message.count)
var position: Double = 0

for i in message {
    if i == "1" {
        highTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: position), duration: AKDuration(beats: dur * (2/3)))
    } else {
        lowTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: position), duration: AKDuration(beats: dur * (2/3)))
    }
    position += dur
}
(I haven't run the code, but something like this should work)
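The snippet also leaves out the output wiring; assuming the same AudioKit 4 API used in the question, the samplers still need to be routed to the output and the sequencer started, roughly:

// Route both samplers to the output and start the sequence.
let mixer = AKMixer(lowSampler, highSampler)
AudioKit.output = mixer
AudioKit.start() // newer AudioKit 4 releases require `try AudioKit.start()`
sequencer.setTempo(120)
sequencer.play()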
I was following this question, but the tone I am trying to play with an AVAudioPCMBuffer never plays. The code is pretty simple:
class Player: NSObject {
    var engine = AVAudioEngine()
    var player = AVAudioPlayerNode()
    var mixer: AVAudioMixerNode!
    var buffer: AVAudioPCMBuffer!

    override init() {
        mixer = engine.mainMixerNode
        buffer = AVAudioPCMBuffer(pcmFormat: player.outputFormat(forBus: 0), frameCapacity: 100)
        buffer.frameLength = 100

        let sr = mixer.outputFormat(forBus: 0).sampleRate
        let nChannels = mixer.outputFormat(forBus: 0).channelCount

        var i = 0
        while i < Int(buffer.frameLength) {
            let val = sin(441 * Double(i) * Double.pi / sr)
            buffer.floatChannelData?.pointee[i] = Float(val * 0.5)
            i += Int(nChannels)
        }

        engine.attach(player)
        engine.connect(player, to: mixer, format: player.outputFormat(forBus: 0))
        engine.prepare()
    }

    func play() {
        do {
            try engine.start()
        } catch {
            print(error)
        }

        player.scheduleBuffer(buffer, at: nil, options: .loops) {
            print("Played!")
        }
        player.play()
    }
}
For some reason, though, the iPhone does not make any sound. In my ViewController, I have this:
class ViewController: UIViewController {
    var player = Player()

    override func viewDidAppear(_ animated: Bool) {
        player.play()
    }
}
As you can see, player is a class variable, so it should not be deallocated from memory.
When I run the app on my physical device (an iPhone 6s running iOS 11), it does not work, yet it does work in the simulator. Why is this not making any sound, and how can I fix it?
Thanks in advance!
Make sure your device is not on silent mode.
I just created a project that you can download for testing by yourself: https://github.com/mugx/TestSound
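Related to the silent-mode point: the default .soloAmbient session category is muted by the ringer switch, while .playback is not. If the tone must also be audible with the switch muted, a minimal sketch of the session setup is:

import AVFoundation

do {
    // .playback keeps audio audible even when the ringer switch is muted.
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error)
}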
I have been doing research for four days, but I have not found any solution for calling over Bluetooth between two iOS devices within range.
I found that audio streaming is possible between two iOS devices using the Multipeer Connectivity framework, but this is not helpful for me. I want real-time voice chat between two devices over Bluetooth.
Is there any codec for voice over Bluetooth?
My code is:
var engine = AVAudioEngine()
var file: AVAudioFile?
var player = AVAudioPlayerNode()
var input: AVAudioInputNode?
var mixer: AVAudioMixerNode?

override func viewDidLoad() {
    super.viewDidLoad()
    mixer = engine.mainMixerNode
    input = engine.inputNode
    engine.connect(input!, to: mixer!, format: input!.inputFormat(forBus: 0))
}

@IBAction func btnStremeDidClicked(_ sender: UIButton) {
    mixer?.installTap(onBus: 0, bufferSize: 2048, format: mixer?.outputFormat(forBus: 0), block: { (buffer: AVAudioPCMBuffer, time: AVAudioTime) in
        let byteWritten = self.audioBufferToData(audioBuffer: buffer).withUnsafeBytes {
            self.appDelegate.mcManager.outputStream?.write($0, maxLength: self.audioBufferToData(audioBuffer: buffer).count)
        }
        print(byteWritten ?? 0)
        print("Write")
    })
    do {
        try engine.start()
    } catch {
        print(error.localizedDescription)
    }
}

func audioBufferToData(audioBuffer: AVAudioPCMBuffer) -> Data {
    let channelCount = 1
    // frameLength (the number of valid frames) is what should be serialized, not frameCapacity.
    let bufferLength = audioBuffer.frameLength * audioBuffer.format.streamDescription.pointee.mBytesPerFrame
    let channels = UnsafeBufferPointer(start: audioBuffer.floatChannelData, count: channelCount)
    let data = Data(bytes: channels[0], count: Int(bufferLength))
    return data
}
Thanks in Advance :)
Why is MultipeerConnectivity not helpful for you? It is a great way to stream audio over Bluetooth or even Wi-Fi.
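Opening the stream over an established MCSession is a single call; a sketch, where session and peerID are assumed to come from your MultipeerConnectivity setup:

do {
    // Sender side: open a byte stream to an already-connected peer.
    let outputStream = try session.startStream(withName: "audio", toPeer: peerID)
    outputStream.schedule(in: .main, forMode: .default)
    outputStream.open()
} catch {
    print(error.localizedDescription)
}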
When you call this:
audioEngine.inputNode.installTap(onBus: 0, bufferSize: 17640, format: localInputFormat) {
    (buffer, when) -> Void in
You need to use the buffer, which has type AVAudioPCMBuffer. You then need to convert that to NSData and write to the outputStream that you would've opened with the peer:
let data = someConversionMethod(buffer)
_ = stream!.write(data.bytes.assumingMemoryBound(to: UInt8.self), maxLength: data.length)
Then on the other device you need to read from the stream, convert the NSData back to an AVAudioPCMBuffer, and then schedule that buffer on an AVAudioPlayerNode to play it back.
I have done this before with a very minimal delay.
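For the receiving side described above, a sketch of turning the received Data back into an AVAudioPCMBuffer could look like the following (dataToAudioBuffer is just an illustrative name, and format is assumed to match the mono float format the sender tapped with):

import AVFoundation

func dataToAudioBuffer(_ data: Data, format: AVAudioFormat) -> AVAudioPCMBuffer? {
    // Work out how many frames the received bytes represent.
    let bytesPerFrame = Int(format.streamDescription.pointee.mBytesPerFrame)
    let frameCount = AVAudioFrameCount(data.count / bytesPerFrame)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return nil }
    buffer.frameLength = frameCount
    // Copy the raw float samples into the first (mono) channel.
    data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        if let src = raw.baseAddress, let dst = buffer.floatChannelData?[0] {
            memcpy(dst, src, data.count)
        }
    }
    return buffer
}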
I've implemented the installTap method, which provides me with the audio buffer's float samples. I've filtered them with my C++ DSP library, and now I want to "send" this buffer to the headphones/speaker. I've built an AVAudioPCMBuffer from the samples again. Does anyone know how to do that?
Code:
node.installTap(onBus: bus, bufferSize: AVAudioFrameCount(BUFFER_SIZE), format: node.inputFormat(forBus: bus), block: { (buffer: AVAudioPCMBuffer, time: AVAudioTime) in
    let root = buffer.floatChannelData!.pointee
    // The first pointer level addresses the channels,
    // the second the float sample values.
    for i in 0 ..< BUFFER_SIZE {
        self.signalData[i] = Double(root.advanced(by: i).pointee) * self.gainCorrection
    }
    let signalDataPreEq = self.signalData
    let filteredSignal = shared.EQ.filterBuffer(UnsafeMutablePointer<Double>(mutating: self.signalData), with_count: Int32(BUFFER_SIZE))
    self.signalData = Array(UnsafeBufferPointer(start: filteredSignal, count: BUFFER_SIZE))
    for i in 0 ..< BUFFER_SIZE {
        root.advanced(by: i).pointee = Float(self.signalData[i])
    }
    // HERE I WANT TO LISTEN TO (PLAY BACK) THE AUDIO FROM THE BUFFER
})
Thanks
You can use an AVAudioPlayerNode to play your AVAudioPCMBuffers:
let player = AVAudioPlayerNode()
engine.attach(player)
let bus = 0
let inputFormat = node.inputFormat(forBus: bus)
engine.connect(player, to: engine.mainMixerNode, format: inputFormat)
node.installTap(...) {
    // other stuff
    player.scheduleBuffer(filteredSignal) // filteredSignal is your AVAudioPCMBuffer?
}
// engine.start()
player.play()
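Since the tap block in the question already writes the filtered samples back into the tapped buffer's channel data in place, that same buffer can be scheduled directly; a sketch, assuming the player/engine wiring above:

node.installTap(onBus: bus, bufferSize: AVAudioFrameCount(BUFFER_SIZE),
                format: inputFormat) { buffer, _ in
    // Run the C++ EQ and copy the filtered samples back into
    // buffer.floatChannelData, exactly as in the question's tap block, then:
    player.scheduleBuffer(buffer, completionHandler: nil)
}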