I'm developing an app for sound effects. I use AudioKit, and I got the result I needed for recording voice. Now I want to add a sound effect to the voice while video is recording. I can record video with the unprocessed voice, but I don't know how to add an effect to the voice from the video.
I use AVCaptureSession for video recording, and I get the audio stream like this:
self.audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)

if let audioDevice = self.audioDevice {
    self.audioInput = try AVCaptureDeviceInput(device: audioDevice)
    if captureSession.canAddInput(self.audioInput!) {
        captureSession.addInput(self.audioInput!)
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
}
But I want to give the captureSession an AVAudioNode as its input, because that is what AudioKit works with.
I will be glad for any help. Thanks
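For what it's worth, AVCaptureSession has no API for accepting an AVAudioNode as an input, so one workaround is to tap the raw microphone buffers instead. Below is a minimal sketch of that idea (the class name EffectAudioTap is hypothetical, and the effect processing itself is only indicated by a comment): attach an AVCaptureAudioDataOutput, apply the effect to each buffer in the delegate callback, and write the processed audio alongside the video with an AVAssetWriter rather than a movie file output.

import AVFoundation

// Sketch: tap the microphone through AVCaptureAudioDataOutput so the audio
// can be processed before it is written.
class EffectAudioTap: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    private let audioOutput = AVCaptureAudioDataOutput()
    private let queue = DispatchQueue(label: "audio.effect.tap")

    func attach(to session: AVCaptureSession) {
        audioOutput.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(audioOutput) {
            session.addOutput(audioOutput)
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each sample buffer is a short run of PCM samples. Apply the effect
        // here (e.g. convert to an AVAudioPCMBuffer and run it through a
        // processing chain), then append the result to an AVAssetWriterInput.
    }
}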
I am trying to develop an audio streaming system. The client streams audio from the microphone to a server while recording in real time, and other clients subscribe to it, like a podcast.
As far as I understand, AVCaptureSession routes the AVCaptureDeviceInput to an AVCaptureAudioDataOutput.
let captureSession = AVCaptureSession()
captureSession.beginConfiguration() // pair with commitConfiguration() below

guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }
let audioInput = try AVCaptureDeviceInput(device: audioDevice)
if captureSession.canAddInput(audioInput) {
    captureSession.addInput(audioInput)
}

let audioOutput = AVCaptureAudioDataOutput()
if captureSession.canAddOutput(audioOutput) {
    captureSession.addOutput(audioOutput)
}

captureSession.commitConfiguration()
I come from a JavaScript background with the MediaRecorder API, where the recorder emits a chunk of audio data every X milliseconds.
So, is it possible to do something similar in iOS (Swift)? Is there any other way to stream the audio in real time?
Edit 1: I need a stream or a chunk of data that is emitted every X milliseconds. Other questions mention recording, but they save the audio to a file.
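As a sketch of how those chunks could be produced (hypothetical names, untested): once the AVCaptureAudioDataOutput above has a sample-buffer delegate, that delegate fires many times per second with a small CMSampleBuffer, and each one can be copied out as a Data chunk and sent to the server, much like MediaRecorder's dataavailable events.

import AVFoundation

// Sketch: register with audioOutput.setSampleBufferDelegate(self, queue:)
// after adding the output to the session, then copy each buffer into Data.
class AudioChunkStreamer: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
        let length = CMBlockBufferGetDataLength(blockBuffer)
        var bytes = [UInt8](repeating: 0, count: length)
        CMBlockBufferCopyDataBytes(blockBuffer,
                                   atOffset: 0,
                                   dataLength: length,
                                   destination: &bytes)
        let chunk = Data(bytes)
        // send `chunk` to the server here, e.g. over a web socket
    }
}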
I'm having a problem recording audio from the microphone of my test device to a .caf file in Swift, Xcode 9.4.1, using the latest version of AudioKit. In a simple test where I send the audio straight from the microphone to the output via an AKBooster, it works just fine and I can hear the mic input coming out of the speakers. I'm more or less following this example, although again using a booster node instead of an oscillator.
The following is my code:
class MicrophoneHandler
{
    var microphone : AKMicrophone!
    var booster : AKBooster!
    var mixer : AKMixer!
    var recorder : AKNodeRecorder!
    var file : AKAudioFile!
    var player : AKAudioPlayer!

    init()
    {
        setupMicrophone()

        microphone = AKMicrophone()
        booster = AKBooster(microphone) // Stereo amplifier for microphone
        mixer = AKMixer(booster)

        file = try! AKAudioFile() // File to store recorder output
        player = try? AKAudioPlayer(file: file) // Player to play back recorded audio file
        //player.looping = true

        recorder = try? AKNodeRecorder(node: mixer, file: file)
        try? recorder.record()
        sleep(5)

        let dur = String(format: "%0.3f seconds", recorder.recordedDuration)
        print("Stopped. (\(dur) recorded)")
        recorder.stop()

        //file.exportAsynchronously(name: "Test", baseDir: .documents, exportFormat: .caf){ [weak self] _, _ in
        //}

        //player.play()
        //AudioKit.output = player!
        //try? AudioKit.start()
    }

    func setupMicrophone()
    {
        // Function to initialise microphone settings
        // Adapted from AudioKit example code found here:
        // https://audiokit.io/examples/MicrophoneAnalysis

        AKSettings.bufferLength = .medium
        AKSettings.ioBufferDuration = 0.002 // TODO experiment with this to control latency

        do
        {
            try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP) // Set session type & allow streaming to Bluetooth devices
        } catch
        {
            AKLog("Could not set session category.")
        }

        AKSettings.defaultToSpeaker = true // Output to speaker when audio input is enabled
    }
}
I have commented out the export code as the problem doesn't appear to be here. The console displays the following:
AKMicrophone.swift:init():45:Mixer inputs 8
AKAudioPlayer.swift:updatePCMBuffer():533:AKAudioPlayer Warning: "BF848EC0-94F8-4E39-A211-784B001CED72.caf" is an empty file
2018-11-16 17:49:16.936169+0000 VoxBox[2258:6984570] Audio files cannot be non-interleaved. Ignoring setting AVLinearPCMIsNonInterleaved YES.
AKNodeRecorder.swift:record():104:AKNodeRecorder: recording
Stopped. (0.000 seconds recorded)
As you can see, the recorder appears not to be recording to the file for some reason. To my mind, my code should:
Initialise the microphone (including settings)
Route the microphone input through a booster followed by a mixer (mixing with an FX bank will happen later)
Create an empty .caf audio file to be written to
Set up a player to play this file when the time comes
Set up a recorder to record the output of the mixer node to the audio file
Record 5 seconds of microphone input to the audio file
Yet for some reason nothing is being recorded. Clearly I am missing something or have misunderstood how the AKNodeRecorder works in this regard. I have read as many StackOverflow questions on similar topics as I can, had a dig through the AudioKit documentation and read a couple of examples from the AudioKit site, but nothing seems to address my particular problem.
Any help would be much appreciated.
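One thing that stands out in the code above: the commented-out AudioKit.output/AudioKit.start() lines mean the engine is never started, and AKNodeRecorder can only capture from a running engine. Below is a hedged sketch (AudioKit 4.x API, not a verified fix) of the usual order: engine running before record(), with the stop scheduled asynchronously instead of blocking the thread with sleep.

microphone = AKMicrophone()
booster = AKBooster(microphone)
mixer = AKMixer(booster)

AudioKit.output = mixer   // give the engine an output before starting it
try? AudioKit.start()     // the engine must run before recording begins

recorder = try? AKNodeRecorder(node: mixer, file: file)
try? recorder.record()

// Stop after 5 seconds without blocking the main thread
DispatchQueue.main.asyncAfter(deadline: .now() + 5) { [weak self] in
    self?.recorder.stop()
}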
I am setting up a microphone on an AVCaptureSession and I need a switch for the mic. How should I proceed with this?
Do I really need to call captureSession?.removeInput(microphone), or is there an easier way?
let microphone = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)

do {
    let micInput = try AVCaptureDeviceInput(device: microphone)
    if captureSession.canAddInput(micInput) {
        captureSession.addInput(micInput)
    }
} catch {
    print("Error setting device audio input: \(error)")
    return false
}
You can always just leave the mic input attached and then use your switch to decide what to do with the audio buffer. If the switch is off, don't process the audio data. I found an objc.io article that talks about how to set up separate audio and video buffers before writing the data with an AVAssetWriter.
By default, all AVCaptureAudioChannel objects exposed by a connection are enabled. You may set enabled to false to stop the flow of data for a particular channel.
https://developer.apple.com/documentation/avfoundation/avcaptureaudiochannel/1388574-isenabled
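Building on that, here is a small sketch of what the switch might look like with the channel approach (modern Swift naming; setMicEnabled is a hypothetical helper, and audioOutput is assumed to be an output already added to your session):

// Sketch: leave the mic input attached and toggle the audio channels on the
// output's connection instead of removing the input.
func setMicEnabled(_ enabled: Bool, on audioOutput: AVCaptureOutput) {
    guard let connection = audioOutput.connection(with: .audio) else { return }
    for channel in connection.audioChannels {
        channel.isEnabled = enabled
    }
}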
I am successfully able to use Speech (speech recognition) and I can use AVFoundation to play wav files in Xcode 8/iOS 10. I just can't use them both together. I have working speech recognition code where I import Speech. When I import AVFoundation into the same app and use the following code, there is no sound and no errors are generated:
var audioPlayer: AVAudioPlayer!

func playAudio() {
    let path = Bundle.main.path(forResource: "file.wav", ofType: nil)!
    let url = URL(fileURLWithPath: path)

    do {
        let sound = try AVAudioPlayer(contentsOf: url)
        audioPlayer = sound
        sound.play()
    } catch {
        // handle error
    }
}
I assume it is because both use audio. Can anyone suggest how to use both in the same app? I also find that I cannot use speech recognition and text-to-speech together in the same app.
I just bumped into the same problem, and here is how I solved it:
Add the following lines when speech recognition is done. What they do is basically set the audio session back to the AVAudioSessionCategoryPlayback category.
let audioSession = AVAudioSession.sharedInstance()

do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayback)
    try audioSession.setActive(false, with: .notifyOthersOnDeactivation)
} catch {
    // handle errors
}
hope it helps.
You should change this line:
try audioSession.setCategory(AVAudioSessionCategoryPlayback)
to:
try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
This should work ;-)
It seems that AVAudioPlayer stops playing the sample if you're using AVAudioSession to record from the microphone, as in Apple's speech recognition example.
However, I've managed to circumvent this by using AVCaptureSession to capture audio as described in this answer.
I'm trying to add an audio input to my AVCaptureSession() and it works great. However, I would also like to support users who wish to play music in the background from other apps such as Spotify, while keeping this audio input for my recording. How is this possible?
let captureSession = AVCaptureSession()
let audioDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
audioInput = AVCaptureDeviceInput.deviceInputWithDevice(audioDevice, error: &err) as? AVCaptureDeviceInput

if captureSession.canAddInput(videoCapture) {
    captureSession.addInput(videoCapture)

    // This line kills Spotify playing in the background
    captureSession.addInput(audioInput as AVCaptureInput)
}
The category AVAudioSessionCategoryPlayback is one of the few categories that allow for backgrounding. The option AVAudioSessionCategoryOptionMixWithOthers will make sure your audio won’t stop any currently playing background audio and also make sure that when the user plays music in the future, it won’t kick off your background task.
try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, withOptions: .MixWithOthers)
Maybe this will be helpful
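One more piece that may matter here (an assumption worth verifying, not something from the answer above): by default the capture session replaces the app's audio session with its own non-mixable one when it starts running, so setting the category alone may not be enough. A sketch in modern Swift combining both pieces:

import AVFoundation

// Sketch: a mixable play-and-record session, plus telling the capture
// session not to reconfigure the audio session on its own.
func configureForBackgroundMusic(_ captureSession: AVCaptureSession) throws {
    // Keep other apps' audio (e.g. Spotify) playing alongside ours.
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                    options: [.mixWithOthers])
    try AVAudioSession.sharedInstance().setActive(true)

    // Stop AVCaptureSession from swapping in its own, non-mixable session
    // when startRunning() is called.
    captureSession.automaticallyConfiguresApplicationAudioSession = false
}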