When I call the function .stopFetchingAudio() from EZAudio, my app crashes.
var microphone: EZMicrophone!

func didMove(to view: SKView) {
    /*
     * Set up all dependencies for the FFT analysis.
     */
    // Set up the audio session.
    session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setActive(true)
    } catch {
        print("Audio session setup failed")
    }
    // Create a mic instance.
    microphone = EZMicrophone(delegate: self)
}

func stopMic() {
    microphone.stopFetchingAudio()
}
I get this error:
xyz-abv[435:35687] fatal error: unexpectedly found nil while unwrapping an Optional value
But I don't know which optional it refers to.
I think it should be:
func stopMic() {
    if let _ = microphone {
        microphone.stopFetchingAudio()
    }
}
Explanation: The reason is that you move from one view (where the microphone is used) to another view without initializing it. When you then call the stop method from the second view controller, it crashes because microphone is nil.
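An equivalent, slightly shorter fix (just a sketch; it assumes the same EZMicrophone property shown above) is to declare the property as an ordinary optional and use optional chaining, so the call is simply skipped when microphone is nil:

var microphone: EZMicrophone?

func stopMic() {
    // If the microphone was never created, this call is silently skipped.
    microphone?.stopFetchingAudio()
}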
Related
I am trying to create an app that leverages both STT (Speech to Text) and TTS (Text to Speech) at the same time. However, I am running into a couple of foggy issues and would appreciate your kind expertise.
The app consists of a button at the center of the screen which, upon clicking, starts the required speech recognition functionality using the code below.
// MARK: - Constant Properties
let audioEngine = AVAudioEngine()

// MARK: - Optional Properties
var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
var recognitionTask: SFSpeechRecognitionTask?
var speechRecognizer: SFSpeechRecognizer?

// MARK: - Functions
internal func startSpeechRecognition() {
    // Instantiate the recognitionRequest property.
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

    // Set up the audio session.
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.record, mode: .measurement, options: [.defaultToSpeaker, .duckOthers])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("An error has occurred while setting the AVAudioSession.")
    }

    // Set up the audio input tap.
    let inputNode = self.audioEngine.inputNode
    let inputNodeFormat = inputNode.outputFormat(forBus: 0)
    self.audioEngine.inputNode.installTap(onBus: 0, bufferSize: 512, format: inputNodeFormat, block: { [unowned self] buffer, time in
        self.recognitionRequest?.append(buffer)
    })

    // Start the recognition task.
    guard
        let speechRecognizer = self.speechRecognizer,
        let recognitionRequest = self.recognitionRequest else {
            fatalError("One or more properties could not be instantiated.")
    }

    self.recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { [unowned self] result, error in
        if error != nil {
            // Stop the audio engine and recognition task.
            self.stopSpeechRecognition()
        } else if let result = result {
            let bestTranscriptionString = result.bestTranscription.formattedString
            self.command = bestTranscriptionString
            print(bestTranscriptionString)
        }
    })

    // Start the audioEngine.
    do {
        try self.audioEngine.start()
    } catch {
        print("Could not start the audioEngine property.")
    }
}

internal func stopSpeechRecognition() {
    // Stop the audio engine.
    self.audioEngine.stop()
    self.audioEngine.inputNode.removeTap(onBus: 0)

    // End and deallocate the recognition request.
    self.recognitionRequest?.endAudio()
    self.recognitionRequest = nil

    // Cancel and deallocate the recognition task.
    self.recognitionTask?.cancel()
    self.recognitionTask = nil
}
When used alone, this code works like a charm. However, when I want to read the transcribed text back using an AVSpeechSynthesizer object, nothing seems to work as expected.
I went through multiple Stack Overflow posts, which suggested changing
audioSession.setCategory(.record, mode: .measurement, options: [.defaultToSpeaker, .duckOthers])
to the following:
audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .duckOthers])
All to no avail: the app was still crashing after running STT and then TTS.
The solution for me was to use the following instead:
audioSession.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .duckOthers])
This left me completely baffled, as I really have no clue what is going on under the hood. I would highly appreciate any relevant explanation!
I am developing an app with both SFSpeechRecognizer and AVSpeechSynthesizer too, and for me .setCategory(.playAndRecord, mode: .default) works fine, and according to Apple it is the best category for our needs. I am even able to .speak() every transcription of the SFSpeechRecognitionTask while the audio engine is running, without any problem. My guess is that something elsewhere in your program's logic is causing the crash. It would help if you could update your question with the corresponding error.
As for why the .multiRoute category works: I suspect there is a problem with the AVAudioInputNode. If you see an error like this in the console:
Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(hwFormat)
or like this
Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: nullptr == Tap()
you probably only need to reorder some parts of the code, such as moving the audio session setup somewhere it gets called only once, or making sure the tap on the input node is always removed before installing a new one, whether or not the recognition task finishes successfully. And maybe (I have never worked with it) .multiRoute is able to reuse the same input node because of the way it works with different audio streams and routes.
Below is the logic I use in my program, following Apple's WWDC session:
Setting category
override func viewDidLoad() { // or init(), or wherever necessary
    super.viewDidLoad()
    try? AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default)
}
Validations/permissions
func shouldProcessSpeechRecognition() {
    guard AVAudioSession.sharedInstance().recordPermission == .granted,
        speechRecognizerAuthorizationStatus == .authorized,
        let speechRecognizer = speechRecognizer, speechRecognizer.isAvailable else { return }
    // Continue only if we have authorization and the recognizer is available.
    startSpeechRecognition()
}
Starting STT
func startSpeechRecognition() {
    let format = audioEngine.inputNode.outputFormat(forBus: 0)
    audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { [unowned self] (buffer, _) in
        self.recognitionRequest.append(buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
        recognitionTask = speechRecognizer!.recognitionTask(with: recognitionRequest, resultHandler: {...})
    } catch {...}
}
Ending STT
func endSpeechRecognition() {
    recognitionTask?.finish()
    stopAudioEngine()
}
Canceling STT
func cancelSpeechRecognition() {
    recognitionTask?.cancel()
    stopAudioEngine()
}
Stopping the audio engine
func stopAudioEngine() {
    audioEngine.stop()
    audioEngine.inputNode.removeTap(onBus: 0)
    recognitionRequest.endAudio()
}
And with that, anywhere in my code I can call an AVSpeechSynthesizer instance and speak an utterance.
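For example, speaking a transcription can look roughly like this (a minimal sketch; the speak(_:) helper and the chosen voice are illustrative, not part of the answer above):

import AVFoundation

let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    // Wrap the transcribed text in an utterance and hand it to the synthesizer.
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}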
I need to start playing sound when user closes app. I use applicationDidEnterBackground method. Here it is:
func applicationDidEnterBackground(application: UIApplication) {
let dispatchQueue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(dispatchQueue, {[weak self] in
var audioSessionError: NSError?
let audioSession = AVAudioSession.sharedInstance()
NSNotificationCenter.defaultCenter().addObserver(self!,
selector: "handleInterruption:",
name: AVAudioSessionInterruptionNotification,
object: nil)
audioSession.setActive(true, error: nil)
if audioSession.setCategory(AVAudioSessionCategoryPlayback,
error: &audioSessionError){
println("Successfully set the audio session")
} else {
println("Could not set the audio session")
}
let filePath = NSBundle.mainBundle().pathForResource("MySong",
ofType:"mp3")
let fileData = NSData(contentsOfFile: filePath!,
options: .DataReadingMappedIfSafe,
error: nil)
var error:NSError?
/* Start the audio player */
self!.audioPlayer = AVAudioPlayer(data: fileData, error: &error)
/* Did we get an instance of AVAudioPlayer? */
if let theAudioPlayer = self!.audioPlayer{
theAudioPlayer.delegate = self;
if theAudioPlayer.prepareToPlay() &&
theAudioPlayer.play(){
println("Successfully started playing")
} else {
println("Failed to play")
}
} else {
/* Handle the failure of instantiating the audio player */
}
})
}
func handleInterruption(notification: NSNotification){
/* Audio Session is interrupted. The player will be paused here */
let interruptionTypeAsObject =
notification.userInfo![AVAudioSessionInterruptionTypeKey] as! NSNumber
let interruptionType = AVAudioSessionInterruptionType(rawValue:
interruptionTypeAsObject.unsignedLongValue)
if let type = interruptionType{
if type == .Ended{
/* resume the audio if needed */
}
}
}
func audioPlayerDidFinishPlaying(player: AVAudioPlayer!,
successfully flag: Bool){
println("Finished playing the song")
/* The flag parameter tells us if the playback was successfully
finished or not */
if player == audioPlayer{
audioPlayer = nil
}
}
It does not work. After debugging, I see that theAudioPlayer.play() returns false. If I run this code in applicationDidFinishLaunching, for example, it plays the sound. I added the background mode "App plays audio or streams audio/video using AirPlay" to my Info.plist. What's wrong here?
At a very basic level there are three prerequisites your app should satisfy to play audio in the background:
1. Music must be playing.
2. Set up 'Background Modes' for Audio in your Target Capabilities.
3. For basic playback, initialise your audio session and set your audio session category to AVAudioSessionCategoryPlayback; if you don't, you'll get the default behaviour (a minimal sketch of this step follows at the end of this answer).
You should also configure your app to respond to:
- changes in audio output
- audio session interruptions (e.g. a phone call)
- remote control events (via Control Centre or the lock screen)
Check out this Swift example.
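To illustrate the third prerequisite only (a minimal sketch in the same AVAudioSession style as the snippets above; it is not the asker's code), set the playback category and then activate the session:

let session = AVAudioSession.sharedInstance()
do {
    // Choose the playback category before activating the session.
    try session.setCategory(AVAudioSessionCategoryPlayback)
    try session.setActive(true)
} catch {
    print("Could not configure the audio session: \(error)")
}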
I'm capturing video, audio, and photos in one view controller, ideally with one capture session.
The issue I'm currently having is with recording video. It displays output to my preview fine. I have adopted AVCaptureFileOutputRecordingDelegate and implemented the following method.
func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!)
var outputUrl = NSURL(fileURLWithPath: NSTemporaryDirectory() + "test.mp4")
movieOutput?.startRecordingToOutputFileURL(outputUrl, recordingDelegate: self)
I'm getting this error when I run the above code though:
'NSInvalidArgumentException', reason: '*** -[AVCaptureMovieFileOutput startRecordingToOutputFileURL:recordingDelegate:] - no active/enabled connections.'
My configuration:
func configureCaptureSession() {
capturedPhoto.contentMode = .ScaleAspectFill
capturedPhoto.clipsToBounds = true
capturedPhoto.hidden = true
captureSession = AVCaptureSession()
captureSession!.beginConfiguration()
captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
var error: NSError?
AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryRecord, error: &error)
AVAudioSession.sharedInstance().setActive(true, error: &error)
var audioDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
var audioInput = AVCaptureDeviceInput(device: audioDevice, error: &error)
if error == nil && captureSession!.canAddInput(audioInput) {
captureSession!.addInput(audioInput)
}
photoCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
photoDeviceInput = AVCaptureDeviceInput(device: photoCaptureDevice, error: &error)
if error == nil && captureSession!.canAddInput(photoDeviceInput) {
captureSession!.addInput(photoDeviceInput)
stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
if captureSession!.canAddOutput(stillImageOutput) {
captureSession!.addOutput(stillImageOutput)
}
movieOutput = AVCaptureMovieFileOutput()
if captureSession!.canAddOutput(movieOutput) {
captureSession!.addOutput(movieOutput)
}
photoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
photoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
photoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
cameraView.layer.addSublayer(photoPreviewLayer)
contentView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: "focusPhoto:"))
}
captureSession!.commitConfiguration()
captureSession!.startRunning()
}
I found two problems in your code.
Duplicate file
As per Apple documentation of startRecordingToOutputFileURL:recordingDelegate::
Starts recording to a given URL.
The method sets the file URL to which the receiver is currently writing output media. If a file at the given URL already exists when capturing starts, recording to the new file will fail.
In iOS, this frame accurate file switching is not supported. You must call stopRecording before calling this method again to avoid any errors.
When recording is stopped either by calling stopRecording, by changing files using this method, or because of an error, the remaining data that needs to be included to the file will be written in the background. Therefore, you must specify a delegate that will be notified when all data has been written to the file using the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.
In your case, if test.mp4 already exists when you start a new recording, it will fail. So it's better to give the recorded file a unique name each time, for example by using the current timestamp.
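A sketch of that idea, in the same Swift style as the question (the timestamp-based name is only an illustration, not part of the original answer):

// Build a unique output URL so an existing "test.mp4" never blocks a new recording.
let timestamp = Int(NSDate().timeIntervalSince1970)
let outputUrl = NSURL(fileURLWithPath: NSTemporaryDirectory() + "recording-\(timestamp).mp4")
movieOutput?.startRecordingToOutputFileURL(outputUrl, recordingDelegate: self)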
Session preset
In your code, you set sessionPreset to AVCaptureSessionPresetPhoto:
captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
But in my personal experience it is not suitable for a video output and will result in your error. Change it to AVCaptureSessionPresetHigh and try again. It's also recommended to call canSetSessionPreset: and apply the preset only if it returns YES.
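For example (again just a sketch, in the question's Swift style):

// Only apply the preset if the session actually supports it.
if captureSession!.canSetSessionPreset(AVCaptureSessionPresetHigh) {
    captureSession!.sessionPreset = AVCaptureSessionPresetHigh
}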
Sample code
Apple offers a nice sample project on the usage of AVFoundation; you may want to check it out.
Fatal error: unexpectedly found nil while unwrapping an Optional value (lldb)
This message appears because my variable is set to nil while the code expects it not to be nil. But I don't have a solution: when I remove the question mark from the cast and the assignment, other errors occur.
Thread 1
Xcode highlights the line if deviceInput == nil! in green with the fatal error, and it highlights the beginSession() call with another error.
The app starts and the camera torch turns on automatically, as per my code, but then the app crashes. It stays running, stuck on the launch screen with the torch still on.
Could you please look at how the camera session is set up and tell me what's wrong? Thanks.
import UIKit
import Foundation
import AVFoundation
import CoreMedia
import CoreVideo
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
let captureSession = AVCaptureSession()
var captureDevice : AVCaptureDevice?
var validFrameCounter: Int = 0
// for sampling from the camera
enum CurrentState {
case statePaused
case stateSampling
}
var currentState = CurrentState.statePaused
override func viewDidLoad() {
super.viewDidLoad()
captureSession.sessionPreset = AVCaptureSessionPresetHigh
let devices = AVCaptureDevice.devices()
// Loop through all the capture devices on this phone
for device in devices {
// Make sure this particular device supports video
if (device.hasMediaType(AVMediaTypeVideo)) {
// Finally check the position and confirm we've got the back camera
if(device.position == AVCaptureDevicePosition.Back) {
captureDevice = device as? AVCaptureDevice
if captureDevice != nil {
//println("Capture device found")
beginSession() // fatal error
}
}
}
}
}
// configure device for camera and focus mode
// start capturing frames
func beginSession() {
// Create the AVCapture Session
var err : NSError? = nil
captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
if err != nil {
println("error: \(err?.localizedDescription)")
}
// Automatic Switch ON torch mode
if captureDevice!.hasTorch {
// lock your device for configuration
captureDevice!.lockForConfiguration(nil)
// check if your torchMode is on or off. If on turns it off otherwise turns it on
captureDevice!.torchMode = captureDevice!.torchActive ? AVCaptureTorchMode.Off : AVCaptureTorchMode.On
// sets the torch intensity to 100%
captureDevice!.setTorchModeOnWithLevel(1.0, error: nil)
// unlock your device
captureDevice!.unlockForConfiguration()
}
// Create a AVCaptureInput with the camera device
var deviceInput : AVCaptureInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &err) as! AVCaptureInput
if deviceInput == nil! { // fatal error: unexpectedly found nil while unwrapping an Optional value (lldb)
println("error: \(err?.localizedDescription)")
}
// Set the output
var videoOutput : AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
// create a queue to run the capture on
var captureQueue : dispatch_queue_t = dispatch_queue_create("captureQueue", nil)
// setup ourself up as the capture delegate
videoOutput.setSampleBufferDelegate(self, queue: captureQueue)
// configure the pixel format
videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32BGRA)]
// kCVPixelBufferPixelFormatTypeKey is a CFString btw.
// set the minimum acceptable frame rate to 10 fps
captureDevice!.activeVideoMinFrameDuration = CMTimeMake(1, 10)
// and the size of the frames we want - we'll use the smallest frame size available
captureSession.sessionPreset = AVCaptureSessionPresetLow
// Add the input and output
captureSession.addInput(deviceInput)
captureSession.addOutput(videoOutput)
// Start the session
captureSession.startRunning()
func setState(state: CurrentState){
switch state
{
case .statePaused:
// what goes here? Something like this?
UIApplication.sharedApplication().idleTimerDisabled = false
case .stateSampling:
// what goes here? Something like this?
UIApplication.sharedApplication().idleTimerDisabled = true
}
}
// sampling from the camera
currentState = CurrentState.stateSampling
// stop the app from sleeping
UIApplication.sharedApplication().idleTimerDisabled = true
// update our UI on a timer every 0.1 seconds
NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: Selector("update"), userInfo: nil, repeats: true)
func stopCameraCapture() {
captureSession.stopRunning()
}
// pragma mark Pause and Resume of detection
func pause() {
if currentState == CurrentState.statePaused {
return
}
// switch off the torch
if captureDevice!.isTorchModeSupported(AVCaptureTorchMode.On) {
captureDevice!.lockForConfiguration(nil)
captureDevice!.torchMode = AVCaptureTorchMode.Off
captureDevice!.unlockForConfiguration()
}
currentState = CurrentState.statePaused
// let the application go to sleep if the phone is idle
UIApplication.sharedApplication().idleTimerDisabled = false
}
func resume() {
if currentState != CurrentState.statePaused {
return
}
// switch on the torch
if captureDevice!.isTorchModeSupported(AVCaptureTorchMode.On) {
captureDevice!.lockForConfiguration(nil)
captureDevice!.torchMode = AVCaptureTorchMode.On
captureDevice!.unlockForConfiguration()
}
currentState = CurrentState.stateSampling
// stop the app from sleeping
UIApplication.sharedApplication().idleTimerDisabled = true
}
}
}
Looking at your code, you should really try to get out of the habit of force-unwrapping optionals using ! at any opportunity, especially just to "make it compile". Generally speaking, if you ever find yourself writing if something != nil, there's probably a better way to write what you want. Try looking at the examples in this answer for some idioms to copy. You might also find this answer useful for a high-level explanation of why optionals are useful.
AVCaptureDeviceInput.deviceInputWithDevice returns an AnyObject, and you are force-casting it to an AVCaptureInput with this line:
var deviceInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &err) as! AVCaptureInput
(you don’t need to state the type of deviceInput by the way, Swift can deduce it from the value on the right-hand side)
When you write as!, you are telling the compiler “don’t argue with me, force the result to be of type AVCaptureInput, no questions asked”. If it turns out what is returned is something of a different type, your app will crash with an error.
But then on the next line, you write:
if deviceInput == nil! {
I'm actually quite astonished this compiles at all! But it turns out it does, and it's not surprising it crashes. Force-unwrapping a value that is nil will crash, and you are doing this in its purest form: force-unwrapping a nil literal :)
The problem is, you've already stated that deviceInput is a non-optional type AVCaptureInput. Force-casting the result is probably not the right thing to do. As the docs for deviceInputWithDevice state,
If the device cannot be opened because it is no longer available or because it is in use, for example, this method returns nil, and the optional outError parameter points to an NSError describing the problem.
The right way to handle this is to check whether the result is nil and act appropriately. So you want to do something like:
if let deviceInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &err) as? AVCaptureInput {
    // use deviceInput
}
else {
    println("error: \(err?.localizedDescription)")
}
I have built an app that records an audio clip, and saves it to a file called xxx.m4a.
Once the recording is made, I can find this xxx.m4a file on my system and play it back - and it plays back fine. The problem isn't recording this audio track.
My problem is playing this audio track back from within the app.
// to demonstrate how I am building the recordedFileURL
let currentFileName = "xxx.m4a"
let dirPaths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
let docsDir: AnyObject = dirPaths[0]
let recordedFilePath = docsDir.stringByAppendingPathComponent(currentFileName)
var recordedFileURL = NSURL(fileURLWithPath: recordedFilePath)
// quick check for my own sanity
var checkIfExists = NSFileManager.defaultManager()
if checkIfExists.fileExistsAtPath(recordedFilePath){
println("audio file exists!")
}else{
println("audio file does not exist")
}
// play the recorded audio
var error: NSError?
let player = AVAudioPlayer(contentsOfURL: recordedFileURL!, error: &error)
if player == nil {
println("error creating avaudioplayer")
if let e = error {
println(e.localizedDescription)
}
}
println("about to play")
player.delegate = self
player.prepareToPlay()
player.volume = 1.0
player.play()
println("told to play")
// -------
// further on in this class I have the following delegate methods:
func audioPlayerDidFinishPlaying(player: AVAudioPlayer!, successfully flag: Bool){
println("finished playing (successfully? \(flag))")
}
func audioPlayerDecodeErrorDidOccur(player: AVAudioPlayer!, error: NSError!){
println("audioPlayerDecodeErrorDidOccur: \(error.localizedDescription)")
}
My console output when I run this code is like this:
audio file exists!
about to play
told to play
I don't get anything logged to the console from either audioPlayerDidFinishPlaying or audioPlayerDecodeErrorDidOccur.
Can someone explain why my audio clip isn't playing?
Many thanks.
Your playing code is fine; your only problem is with the scope of your AVAudioPlayer object. Basically, you need to keep a strong reference to the player for as long as it is playing; otherwise it will be destroyed by ARC, which assumes you no longer need it.
Normally you'd make the AVAudioPlayer object a property of whatever class your code is in, rather than making it a local variable in a method.
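A minimal sketch of that idea (the class and method names are illustrative, and it uses the current throwing AVAudioPlayer initializer rather than the older contentsOfURL:error: form used in the question):

import AVFoundation
import UIKit

class PlaybackViewController: UIViewController {
    // Keep a strong reference so ARC does not deallocate the player while it is playing.
    var audioPlayer: AVAudioPlayer?

    func playRecording(at url: URL) {
        do {
            let player = try AVAudioPlayer(contentsOf: url)
            player.prepareToPlay()
            player.volume = 1.0
            player.play()
            audioPlayer = player   // retained for the duration of playback
        } catch {
            print("Could not create AVAudioPlayer: \(error)")
        }
    }
}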
Where you want playback through the speakers, add the following code before you initialize your AVAudioPlayer:
let recordingSession = AVAudioSession.sharedInstance()
do {
    try recordingSession.setCategory(AVAudioSessionCategoryPlayback)
} catch {
}
and when you are only recording with AVAudioRecorder, use this before initializing it:
do {
    recordingSession = AVAudioSession.sharedInstance()
    try recordingSession.setCategory(AVAudioSessionCategoryRecord)
} catch {
}