Swift + AVAudioRecorder records very quietly - iOS

I have built an iOS app in Swift that records audio clips, which are then sent up to the server. Every recording I make is very quiet.
Initially I thought my problem was similar to this question on Stack Overflow, but after trying that solution, my recordings are still very quiet.
Routing the audio through the speaker does not make the recordings any louder, as is suggested here:
https://stackoverflow.com/a/5662478/1037617
Note that it's not playback on the device that is the issue; the issue is that the recordings themselves are too quiet. I have tested the microphone itself, and it is fine.
This is the best waveform I can produce, and that is almost shouting into the microphone. As you can see, the recorded waveform is very quiet.
Is there any way of getting my iOS app to record at a louder volume?
This is the code I am using to record the audio clips:
let recorderSettings = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue,
    AVEncoderBitRateKey: 128000,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 44100.0
]
let session: AVAudioSession = AVAudioSession.sharedInstance()
var error: NSError?
if session.respondsToSelector("requestRecordPermission:") {
    AVAudioSession.sharedInstance().requestRecordPermission({ (granted: Bool) -> Void in
        if !granted {
            println("permission not granted")
        } else {
            println("permission granted")
            if !session.setCategory(AVAudioSessionCategoryRecord, error: &error) { // also tried PlayAndRecord
                println("could not set session category")
                if let e = error {
                    println(e.localizedDescription)
                }
            }
            if !session.overrideOutputAudioPort(AVAudioSessionPortOverride.Speaker, error: &error) {
                println("could not override output audio")
                if let e = error {
                    println(e.localizedDescription)
                }
            }
            if !session.setActive(true, error: &error) {
                println("could not make active")
                if let e = error {
                    println(e.localizedDescription)
                }
            }
            self.currentFilename = "xxxx.wav"
            let dirPaths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
            let docsDir: AnyObject = dirPaths[0]
            self.recordedFilePath = docsDir.stringByAppendingPathComponent(self.currentFilename)
            self.recordedFileURL = NSURL(fileURLWithPath: self.recordedFilePath)
            self.recorder = AVAudioRecorder(URL: self.recordedFileURL, settings: self.recorderSettings, error: &error)
            if let e = error {
                println(e)
                println(e.localizedDescription)
                var err = NSError(domain: NSOSStatusErrorDomain, code: e.code, userInfo: nil)
                println(err.description)
            } else {
                self.recorder?.delegate = self
                self.recorder?.meteringEnabled = true
                self.recorder?.prepareToRecord()
                self.recorder?.record()
                self.meterTimer = NSTimer.scheduledTimerWithTimeInterval(0.1,
                    target: self,
                    selector: "updateRecordAudioMeter:",
                    userInfo: nil,
                    repeats: true)
            }
        }
    })
}

When recording audio, set the audio session category to AVAudioSessionCategoryPlayAndRecord or just AVAudioSessionCategoryRecord. When you're done, set it back to AVAudioSessionCategoryPlayback.

You need to set the AVAudioSession category back to AVAudioSessionCategoryPlayback after recording.
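A minimal sketch of that category switch in current Swift (the wrapper functions are illustrative, not from either answer; the recorder itself would be configured as in the question):

import AVFoundation

// A sketch, assuming a modern AVAudioSession API; `recorder` is an
// already-configured AVAudioRecorder like the one in the question.
func startRecording(with recorder: AVAudioRecorder) throws {
    let session = AVAudioSession.sharedInstance()
    // Record-only category while capturing (.playAndRecord also works).
    try session.setCategory(.record, mode: .default)
    try session.setActive(true)
    recorder.record()
}

func finishRecording(with recorder: AVAudioRecorder) throws {
    recorder.stop()
    let session = AVAudioSession.sharedInstance()
    // Switch back to playback-only once the clip is on disk,
    // as both answers above suggest.
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}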

Related

Tap audio output using AVAudioEngine

I'm trying to install a tap on the audio output that my app plays. I have no issue capturing buffers from the microphone input, but when it comes to capturing the sound that goes through the speaker, the earpiece, or whatever the output device is, it does not succeed. Am I missing something?
In my example I'm trying to catch the audio buffer from an audio file that an AVPlayer is playing. But let's pretend I don't have direct access to the AVPlayer instance.
The goal is to perform Speech Recognition on an audio stream.
func catchAudioBuffers() throws {
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: .allowBluetooth)
    try audioSession.setActive(true)

    let outputNode = audioEngine.outputNode
    let recordingFormat = outputNode.outputFormat(forBus: 0)
    outputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        // PROCESS AUDIO BUFFER
    }

    audioEngine.prepare()
    try audioEngine.start()

    // For example I am playing an audio conversation with an AVPlayer and a local file.
    player.playSound()
}
This code results in:
AVAEInternal.h:76 required condition is false: [AVAudioIONodeImpl.mm:1057:SetOutputFormat: (_isInput)]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _isInput'
I was facing the same problem, and after two days of brainstorming I found the following.
Apple says that for AVAudioOutputNode, the tap format must be specified as nil. I'm not sure how important that is, but in my case, what finally worked was a nil format.
You need to start recording, and don't forget to stop it.
Removing the tap is really important; otherwise you will end up with a file that you can't open.
Try to save the file with the same audio settings that you used in the source file.
Here's my code that finally worked. It was partly taken from this question: Saving Audio After Effect in iOS.
func playSound() {
    let rate: Float? = effect.speed
    let pitch: Float? = effect.pitch
    let echo: Bool? = effect.echo
    let reverb: Bool? = effect.reverb

    // initialize audio engine components
    audioEngine = AVAudioEngine()

    // node for playing audio
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attach(audioPlayerNode)

    // node for adjusting rate/pitch
    let changeRatePitchNode = AVAudioUnitTimePitch()
    if let pitch = pitch {
        changeRatePitchNode.pitch = pitch
    }
    if let rate = rate {
        changeRatePitchNode.rate = rate
    }
    audioEngine.attach(changeRatePitchNode)

    // node for echo
    let echoNode = AVAudioUnitDistortion()
    echoNode.loadFactoryPreset(.multiEcho1)
    audioEngine.attach(echoNode)

    // node for reverb
    let reverbNode = AVAudioUnitReverb()
    reverbNode.loadFactoryPreset(.cathedral)
    reverbNode.wetDryMix = 50
    audioEngine.attach(reverbNode)

    // connect nodes
    if echo == true && reverb == true {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, echoNode, reverbNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    } else if echo == true {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, echoNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    } else if reverb == true {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, reverbNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    } else {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    }

    // schedule to play and start the engine!
    audioPlayerNode.stop()
    audioPlayerNode.scheduleFile(audioFile, at: nil) {
        var delayInSeconds: Double = 0
        if let lastRenderTime = self.audioPlayerNode.lastRenderTime,
           let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {
            if let rate = rate {
                delayInSeconds = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate) / Double(rate)
            } else {
                delayInSeconds = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate)
            }
        }
        // schedule a stop timer for when audio finishes playing
        self.stopTimer = Timer(timeInterval: delayInSeconds, target: self, selector: #selector(EditViewController.stopAudio), userInfo: nil, repeats: false)
        RunLoop.main.add(self.stopTimer!, forMode: RunLoop.Mode.default)
    }

    do {
        try audioEngine.start()
    } catch {
        showAlert(Alerts.AudioEngineError, message: String(describing: error))
        return
    }

    // Try to save
    let dirPaths: String = NSSearchPathForDirectoriesInDomains(.libraryDirectory, .userDomainMask, true)[0] + "/sounds/"
    let tmpFileUrl = URL(fileURLWithPath: dirPaths + "effected.caf")

    // Save tmpFileUrl into a global variable so we don't lose it
    // (not important if you want to do something else with it)
    filteredOutputURL = tmpFileUrl

    do {
        print(dirPaths)
        // Save with the same kind of settings that were used for the source file.
        let settings = [
            AVSampleRateKey: NSNumber(value: Float(44100.0)),
            AVFormatIDKey: NSNumber(value: Int32(kAudioFormatMPEG4AAC)),
            AVNumberOfChannelsKey: NSNumber(value: 1),
            AVEncoderAudioQualityKey: NSNumber(value: Int32(AVAudioQuality.medium.rawValue))
        ]
        self.newAudio = try AVAudioFile(forWriting: tmpFileUrl, settings: settings)

        let length = self.audioFile.length
        // The tap format must be nil for the mixer/output node.
        audioEngine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { (buffer: AVAudioPCMBuffer, time: AVAudioTime) -> Void in
            // Stop saving once the output file is as long as the source,
            // otherwise we would keep writing forever.
            if self.newAudio.length <= length {
                do {
                    try self.newAudio.write(from: buffer)
                } catch {
                    print("Problem Writing Buffer")
                }
            } else {
                // If we don't remove the tap, it keeps firing indefinitely.
                self.audioEngine.mainMixerNode.removeTap(onBus: 0)
            }
        }
    } catch {
        print("Problem creating output AVAudioFile: \(error)")
    }

    // play the recording!
    audioPlayerNode.play()
}

@objc func stopAudio() {
    if let audioPlayerNode = audioPlayerNode {
        let engine = audioEngine
        audioPlayerNode.stop()
        engine?.mainMixerNode.removeTap(onBus: 0)
    }
    if let stopTimer = stopTimer {
        stopTimer.invalidate()
    }
    configureUI(.notPlaying)
    if let audioEngine = audioEngine {
        audioEngine.stop()
        audioEngine.reset()
    }
    isPlaying = false
}

Swift ReplayKit AVAssetWriter Video Audio out of Sync when Converted to HLS

In iOS/Swift I am working with ReplayKit, using AVAssetWriter to create a .mov or .mp4 video of the user's screen and microphone audio.
When I create a video, it plays fine locally, and the audio and video are in sync. However, when I convert this video to HLS (HTTP Live Streaming) format using AWS MediaConvert, the audio is out of sync with the video. Does anyone know what could be causing this? I read about timecoding; maybe I need to add a timecode to my video? Is there an easier way to fix this, or has anyone experienced similar issues?
private func startRecordingVideo() {
    // Initialize MP4 output file for the screen-recorded video
    let fileManager = FileManager.default
    let urls = fileManager.urls(for: .documentDirectory, in: .userDomainMask)
    guard let documentDirectory: NSURL = urls.first as NSURL? else {
        fatalError("documentDir Error")
    }
    videoOutputURL = documentDirectory.appendingPathComponent("OutputVideo.mov")
    if FileManager.default.fileExists(atPath: videoOutputURL!.path) {
        do {
            try FileManager.default.removeItem(atPath: videoOutputURL!.path)
        } catch {
            fatalError("Unable to delete file: \(error) : \(#function).")
        }
    }

    // Initialize asset writer to write the video to the user's storage
    assetWriter = try! AVAssetWriter(outputURL: videoOutputURL!, fileType: AVFileType.mov)
    let videoOutputSettings: Dictionary<String, Any> = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: UIScreen.main.bounds.size.width,
        AVVideoHeightKey: UIScreen.main.bounds.size.height
    ]
    let audioSettings = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 44100.0,
        AVEncoderBitRateKey: 96000
    ] as [String: Any]

    videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
    audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: audioSettings)
    videoInput?.expectsMediaDataInRealTime = true
    audioInput?.expectsMediaDataInRealTime = true
    assetWriter?.add(videoInput!)
    assetWriter?.add(audioInput!)

    let sharedRecorder = RPScreenRecorder.shared()
    sharedRecorder.isMicrophoneEnabled = true
    sharedRecorder.startCapture(handler: { (sample, bufferType, error) in
        // Audio/video buffer data returned from the screen recorder
        if CMSampleBufferDataIsReady(sample) {
            DispatchQueue.main.async { [weak self] in
                // Start the asset writer if it has not yet started
                if self?.assetWriter?.status == AVAssetWriter.Status.unknown {
                    if !(self?.assetWriter?.startWriting())! {
                        return
                    }
                    self?.assetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sample))
                    self?.startSession = true
                }
            }
            // Handle errors
            if self.assetWriter?.status == AVAssetWriter.Status.failed {
                print("Error occurred, status = \(String(describing: self.assetWriter?.status.rawValue)), \(String(describing: self.assetWriter?.error!.localizedDescription)) \(String(describing: self.assetWriter?.error))")
                return
            }
            // Add video buffer to the AVAssetWriter video input
            if bufferType == .video {
                if self.videoInput!.isReadyForMoreMediaData && self.startSession {
                    self.videoInput?.append(sample)
                }
            }
            // Add microphone audio buffer to the AVAssetWriter audio input
            if bufferType == .audioMic {
                print("MIC BUFFER RECEIVED")
                if self.audioInput!.isReadyForMoreMediaData {
                    print("Audio Buffer Came")
                    self.audioInput?.append(sample)
                }
            }
        }
    }, completionHandler: { error in
        print("COMP HANDLER ERROR", error?.localizedDescription ?? "none")
    })
}

private func stopRecordingVideo() {
    self.startSession = false
    RPScreenRecorder.shared().stopCapture { (error) in
        self.videoInput?.markAsFinished()
        self.audioInput?.markAsFinished()
        if error == nil {
            self.assetWriter?.finishWriting {
                self.startSession = false
                print("FINISHED WRITING!")
                DispatchQueue.main.async {
                    self.setUpVideoPreview()
                }
            }
        } else {
            // DELETE DIRECTORY
        }
    }
}
I'm sure you've either figured this out or moved on, but for all the Googlers: you basically have to set the mediaTimeScale on the video input. You can see an example here.
Here's the relevant part of that code (it uses an AVSampleBufferDisplayLayer, but the same concept applies):
double pts = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));
if (!timebaseSet && pts != 0)
{
    timebaseSet = true;
    CMTimebaseRef controlTimebase;
    CMTimebaseCreateWithMasterClock(CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase);
    displayLayer.controlTimebase = controlTimebase;
    CMTimebaseSetTime(displayLayer.controlTimebase, CMTimeMake(pts, 1));
    CMTimebaseSetRate(displayLayer.controlTimebase, 1.0);
}
if ([displayLayer isReadyForMoreMediaData])
{
    [displayLayer enqueueSampleBuffer:sampleBuffer];
}

Forcing iOS Recording Through the Headphone Jack When Headphones Are Plugged In

I have a bug here: when a set of headphones is plugged in during audio capture, the capture still happens through the onboard microphone, and playback is through the speakers. I am trying to capture the audio via the audio port when a headset is plugged in and in use. Here is a copy of my code:
@IBAction func recordAudio(_ sender: UIButton) {
    isPlayingRecording = false
    if endTime == nil {
        self.startTimer()
    }
    recordingInProgress.text = "Recording"
    recordingInProgress.isHidden = false
    stopButton.isHidden = false
    recordButton.isEnabled = false
    setSecondaryView()

    let dirPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as String
    let recordingName = "\(exercise!.exerciseId).m4a"
    let pathArray = [dirPath, recordingName]
    filePath = NSURL.fileURL(withPathComponents: pathArray) as NSURL?

    // Set up audio session
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    } catch _ {
    }
    do {
        // set up how audio is going to be used (i.e. audio parameters, categories, speaker)
        try session.overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    } catch _ {
    }

    // Initialize and prepare the recorder
    audioRecorder = nil
    do {
        let recordSettings: [String: AnyObject] = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC) as AnyObject,
            AVSampleRateKey: 44100.0 as AnyObject,
            AVNumberOfChannelsKey: 1 as AnyObject,
            AVEncoderAudioQualityKey: AVAudioQuality.medium.rawValue as AnyObject
        ]
        try audioRecorder = AVAudioRecorder(url: filePath! as URL, settings: recordSettings)
    } catch _ {
    }
    if audioRecorder != nil {
        // This makes "RecordViewController" a delegate of "AVAudioRecorder",
        // so that we can use "audioRecorderDidFinishRecording" later on
        audioRecorder.delegate = self
        audioRecorder.isMeteringEnabled = true
        audioRecorder.prepareToRecord()
        audioRecorder.record()
    }

    // Start count-up timer if the exercise doesn't have an attempt time constraint.
    if exercise != nil && exercise?.skill != nil && exercise!.skill.respondtime == nil && exercise!.skill.respondtime!.intValue > 0 {
        startTime = Date().timeIntervalSince1970
        timerLabel.isHidden = false
        timerLabel.text = readableSecondString(0.0) as String
        timer = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(RecordSoundsViewController.tick), userInfo: nil, repeats: true)
    }
}
How do I get this to capture the audio through the audio port/headphone jack when a set of headphones is plugged in? I'm doing this using Swift.

Volume of entire app gets quieter after accessing mic

I am having a volume issue within my iOS app. When I call setupMic(), the volume level of the entire app is lowered significantly.
Here is the code I am using:
func setupMic() {
    // make an AudioSession, set it to PlayAndRecord, and make it active
    let audioSession: AVAudioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
    } catch {
        print("There was an error setting the category")
    }
    do {
        try audioSession.setActive(true)
    } catch {
        print("There was an error setting the audio session to active")
    }

    // set up the URL for the audio file
    let documents: AnyObject = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0]
    let str = documents.stringByAppendingPathComponent("recordTest.caf")
    let url = NSURL.fileURLWithPath(str as String)

    // make a dictionary to hold the recording settings so we can instantiate our AVAudioRecorder
    let number = NSNumber(unsignedInt: kAudioFormatAppleIMA4)
    let recordSettings: [String: AnyObject] = [
        AVFormatIDKey: number,
        AVSampleRateKey: 44100.0,
        AVNumberOfChannelsKey: 2,
        AVEncoderBitRateKey: 12800,
        AVLinearPCMBitDepthKey: 16,
        AVEncoderAudioQualityKey: AVAudioQuality.Min.rawValue
    ]

    // Instantiate an AVAudioRecorder
    do {
        recorder = try AVAudioRecorder(URL: url, settings: recordSettings)
    } catch {
        print("There was an error")
    }
}

// This function is called every time our timer levelTimer fires
func levelTimerCallback() {
    recorder.updateMeters()
    // note: peakPowerForChannel returns peak power in dB, not an average
    let averagePower = self.recorder.peakPowerForChannel(0)
    if averagePower > -7 {
        stopMonitoring()
        print(recorder.peakPowerForChannel(0))
        didCompleteChallenge(true)
    }
}

func startMonitoring() {
    if self.recorder != nil {
        recorder.prepareToRecord()
        recorder.meteringEnabled = true
        // start recording
        recorder.record()
        // instantiate a timer to be called with whatever frequency we want to grab metering values
        self.levelTimer = NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: #selector(levelTimerCallback), userInfo: nil, repeats: true)
    }
}

func stopMonitoring() {
    self.recorder.stop()
    self.recorder.deleteRecording()
    self.levelTimer.invalidate()
}
I call setupMic() and startMonitoring() from an updateWith() method, and I call stopMonitoring() when the view is updated again via updateWith().
Once the microphone is accessed, the volume decreases. Any suggestions? Any fixes?
Fixed the issue. The volume was not actually getting quieter; the audio was routing to the earpiece. I just needed to set the audio session category options to .DefaultToSpeaker: AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: [.DefaultToSpeaker]). Thanks anyway, internet.
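In later SDKs the same fix reads roughly like this (a sketch; the category constants moved onto AVAudioSession.Category):

import AVFoundation

// A sketch of the fix in modern Swift: .playAndRecord routes output to the
// quiet earpiece by default; .defaultToSpeaker forces the main speaker.
func routeToSpeaker() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
}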

How to monitor audio input on iOS using Swift - example?

I want to write a simple app that 'does something' when the sound level at the mic reaches a certain level, showing the audio input levels for extra credit.
I can't find any Swift examples that get to this. I don't want to record, just monitor.
I have been checking out the docs on the AVFoundation classes, but I can't get off the ground.
Thanks.
You can use the code below:
func initalizeRecorder() {
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print(error)
    }

    let stringDir: NSString = self.getDocumentsDirectory()
    let audioFilename = stringDir.stringByAppendingPathComponent("recording.m4a")
    let audioURL = NSURL(fileURLWithPath: audioFilename)
    print("File Path : \(audioFilename)")

    // make a dictionary to hold the recording settings so we can instantiate our AVAudioRecorder
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000.0,
        AVNumberOfChannelsKey: 1 as NSNumber,
        AVEncoderBitRateKey: 12800 as NSNumber,
        AVLinearPCMBitDepthKey: 16 as NSNumber,
        AVEncoderAudioQualityKey: AVAudioQuality.High.rawValue
    ]

    do {
        if audioRecorder == nil {
            audioRecorder = try AVAudioRecorder(URL: audioURL, settings: settings)
            audioRecorder!.delegate = self
            audioRecorder!.prepareToRecord()
            audioRecorder!.meteringEnabled = true
        }
        audioRecorder!.recordForDuration(NSTimeInterval(5.0))
    } catch {
        print("Error")
    }
}

// GET DOCUMENT DIR PATH
func getDocumentsDirectory() -> String {
    let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
    let documentsDirectory = paths[0]
    return documentsDirectory
}

// START RECORDING
@IBAction func btnStartPress(sender: AnyObject) {
    recordingSession = AVAudioSession.sharedInstance()
    do {
        recordingSession.requestRecordPermission() { [unowned self] (allowed: Bool) -> Void in
            dispatch_async(dispatch_get_main_queue()) {
                if allowed {
                    print("Allowed Permission Record!!")
                    self.initalizeRecorder()
                    self.audioRecorder!.record()
                    // instantiate a timer to be called with whatever frequency we want to grab metering values
                    self.levelTimer = NSTimer.scheduledTimerWithTimeInterval(0.02, target: self, selector: Selector("levelTimerCallback"), userInfo: nil, repeats: true)
                } else {
                    // failed to record!
                    self.showPermissionAlert()
                    print("Failed Permission Record!!")
                }
            }
        }
    } catch {
        // failed to record!
        print("Failed Permission Record!!")
    }
}

// This selector/function is called every time our timer (levelTimer) fires
func levelTimerCallback() {
    // we have to update meters before we can get the metering values
    if audioRecorder != nil {
        audioRecorder!.updateMeters()
        let ALPHA: Double = 0.05
        // convert peak power in decibels (-160...0) to a linear 0...1 scale
        let peakPowerForChannel: Double = pow(Double(10.0), (0.05) * Double(audioRecorder!.peakPowerForChannel(0)))
        // simple low-pass filter to smooth out momentary spikes in the level
        lowPassResults = ALPHA * peakPowerForChannel + Double((1.0) - ALPHA) * lowPassResults
        print("low pass res = \(lowPassResults)")
        if lowPassResults > 0.7 {
            print("Mic blow detected")
        }
    }
}

// STOP RECORDING
@IBAction func btnStopPress(sender: AnyObject) {
    if audioRecorder != nil {
        audioRecorder!.stop()
        self.levelTimer.invalidate()
    }
}
With AVAudioRecorder you can "record audio" (you don't have to save it) and set meteringEnabled so you can use the function peakPowerForChannel(_:), which, per the docs:
Returns the peak power for a given channel, in decibels, for the sound being recorded.
This link may provide a sample code.
Let me know if it helps you.
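One more note for the "don't want to record" part: a common trick (my assumption, not part of the answer above) is to point the recorder at /dev/null, so metering works but nothing is kept on disk:

import AVFoundation

// A sketch: a metering-only recorder that writes to /dev/null, so no audio
// is ever kept on disk. The settings here are illustrative.
func makeMeteringRecorder() throws -> AVAudioRecorder {
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatAppleLossless),
        AVSampleRateKey: 44100.0,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: URL(fileURLWithPath: "/dev/null"),
                                       settings: settings)
    recorder.isMeteringEnabled = true
    recorder.prepareToRecord()
    recorder.record()
    return recorder
}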
