Saving AVAudioRecorder to NSUserDefaults - ios

I'm trying to start an audio recording on the Apple Watch and allow it to be stopped on the iPhone.
To share the information from the watch to the phone I am trying to do the following:
var recorder: AVAudioRecorder!
var error: NSError?
recorder = AVAudioRecorder(URL: soundFileURL, settings: recordSettings, error: &error)
if let watchDefaults = NSUserDefaults(suiteName: "group.spywatchkit") {
    // Archive the recorder and stash it in the shared app group defaults.
    let encodedRecorder = NSKeyedArchiver.archivedDataWithRootObject(recorder)
    watchDefaults.setObject(encodedRecorder, forKey: "test")
}
However, this results in the following error:
'NSInvalidArgumentException', reason: '-[AVAudioSession encodeWithCoder:]: unrecognized selector sent to instance
This appears to be failing because the AVAudioRecorder object doesn't conform to the NSCoding protocol. Is there another way to save this object? Can I recreate the object later?
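One workaround (a sketch, not from the original post): since the recorder itself can't be archived, persist only the pieces needed to recreate it, namely the file URL string and the settings dictionary, and rebuild an AVAudioRecorder from them on the other side. Using the same Swift 1.x era APIs as above:

if let watchDefaults = NSUserDefaults(suiteName: "group.spywatchkit") {
    // Both values are plist-compatible, so they can go straight into defaults.
    watchDefaults.setObject(soundFileURL.absoluteString, forKey: "recordingPath")
    watchDefaults.setObject(recordSettings, forKey: "recordingSettings")
}

// Later, rebuild a recorder pointing at the same file:
if let watchDefaults = NSUserDefaults(suiteName: "group.spywatchkit") {
    if let path = watchDefaults.stringForKey("recordingPath"),
           settings = watchDefaults.dictionaryForKey("recordingSettings") {
        var error: NSError?
        let rebuilt = AVAudioRecorder(URL: NSURL(string: path)!, settings: settings, error: &error)
    }
}

Note this only recreates a recorder for the same file; it does not hand over the live recording session itself.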

Related

iOS WebRTC Local Media NSInvalidArgumentException [RTCI420Frame nativeHandle]

I am trying to create a local media stream in my iOS WebRTC app. See the code below (localVideoTrack, localVideoView, and peerConnection are properties declared elsewhere):
// Create the local stream and add an audio track.
let localStream = pcFactory.mediaStream(withLabel: "ARDAMS")!
let audio = pcFactory.audioTrack(withID: "ARDAMSa0")
localStream.addAudioTrack(audio!)

// Find the front-facing camera.
var device: AVCaptureDevice?
for captureDevice in AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) {
    if let captureDevice = captureDevice as? AVCaptureDevice {
        if captureDevice.position == AVCaptureDevicePosition.front {
            device = captureDevice
        }
    }
}

// Wrap the camera in a capturer and build the video track.
if let device = device {
    let capture = RTCVideoCapturer(deviceName: device.localizedName)
    let videoSource = pcFactory.videoSource(with: capture, constraints: nil)
    localVideoTrack = pcFactory.videoTrack(withID: "ARDAMSv0", source: videoSource)
    localStream.addVideoTrack(localVideoTrack)
}

self.peerConnection?.add(localStream)
localVideoTrack?.add(localVideoView)
Everything works until I add the localVideoView to the localVideoTrack; then I get an error:
-[RTCI420Frame nativeHandle]: unrecognized selector sent to instance 0x170010620
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[RTCI420Frame nativeHandle]: unrecognized selector sent to instance 0x170010620'
All of the code is running on the main thread, and the app has the appropriate permissions and plist keys. When I step through the code line by line in the debugger, everything seems to run correctly. This code was taken from the Obj-C AppRTC demo; it has just been converted to Swift. I can't find the difference between my Swift project that crashes and the working AppRTC project. Any idea what I am doing wrong? I am testing on a 64-bit device. Thanks!

How to save recorded audio iOS?

I am developing an application in which audio is recorded and transcribed to text. I am using the SpeechKit framework provided by Nuance Developers.
The features I am adding are:
Save the recorded audio file to persistent memory
Display the audio files in a table view
Load the saved audio files later
Play the audio files
How do I save the audio files to persistent storage?
Here's the code : https://gist.github.com/buildFlash/48d143217b721823ff4c3c03a925ba55
When you record audio with AVAudioRecorder, you pass the URL of the location where the audio should be stored, so by default the recording is saved at that location.
for example,
import AVFoundation

// Configure the shared audio session for recording and playback.
var audioSession: AVAudioSession = AVAudioSession.sharedInstance()
audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
audioSession.setActive(true, error: nil)

// Build a destination URL inside the app's Documents directory.
let documents = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as! String
let str = documents.stringByAppendingPathComponent("myRecording1.caf")
let url = NSURL.fileURLWithPath(str)!

// IMA4 at 44.1 kHz, stereo, maximum encoder quality.
let recordSettings: [NSObject: AnyObject] = [
    AVFormatIDKey: Int(kAudioFormatAppleIMA4),
    AVSampleRateKey: 44100.0,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 12800,
    AVLinearPCMBitDepthKey: 16,
    AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue
]
println("url : \(url)")

var error: NSError?
audioRecorder = AVAudioRecorder(URL: url, settings: recordSettings, error: &error)
if let e = error {
    println(e.localizedDescription)
} else {
    audioRecorder.record()
}
So here, url is the location where your audio is stored, and you can use that same URL to play the audio back. You can also read the file at that URL (or path) as data if you want to send it to a server.
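For example, a minimal playback sketch reusing the url from the recording code above (same Swift 1.x era APIs; it assumes the recording has finished):

var playError: NSError?
let audioPlayer = AVAudioPlayer(contentsOfURL: url, error: &playError)
if let e = playError {
    println(e.localizedDescription)
} else {
    audioPlayer.prepareToPlay()
    audioPlayer.play()
}
// The same URL also yields the raw bytes if you want to upload them:
let audioData = NSData(contentsOfURL: url)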
So if you are using a third-party library, check where it stores the audio; you can fetch the file from there, or the library should have some method that returns its location.
PS: there is no need to use a third-party library to record audio, because you can easily manage it via AVAudioRecorder and AVAudioPlayer (for playing audio from a URL).
In short: if you are recording audio, you are necessarily storing it at the same time!
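For the "display the audio files in a table view" part of the question, a small sketch that lists the recordings saved in Documents (filtering on the .caf extension is an assumption, matching the recording code above):

let docsDir = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as! String
let allFiles = NSFileManager.defaultManager().contentsOfDirectoryAtPath(docsDir, error: nil) as? [String] ?? []
// Keep only the recordings; use this array as the table view's data source.
let recordings = allFiles.filter { $0.pathExtension == "caf" }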
You can also refer to Ravi Shankar's tutorial.
Reference: this SO post

Video compression using AVAssetWriter

I've created a function to compress a video file. It uses AVAssetWriter and adds inputs and outputs for the video and audio tracks. When writing starts, I get an error as soon as the AVAssetReader for the audio track starts reading (audioReader.startReading()). Here is the error: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetReader startReading] cannot be called again after reading has already started'.
The code: https://gist.github.com/jaumevn/9ba329aaf49c81c57a276fd135f53f20
Can anyone see what's the problem here? Thanks!
On line 77 of your code, you're starting a second AVAssetReader on the same file.
You don't need two readers; instead, hook up the audio track as a second output on the existing AVAssetReader.
Something like this:
// One reader, two track outputs: video and audio.
let videoReaderSettings: [String: Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)]
let videoReaderOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoReaderSettings)
let videoReader = try! AVAssetReader(asset: videoAssetUrl)

// Decode the audio track to linear PCM.
var settings = [String: AnyObject]()
settings[AVFormatIDKey] = Int(kAudioFormatLinearPCM)
let audioReaderOutput = AVAssetReaderTrackOutput(track: audioAssetTrack, outputSettings: settings)

videoReader.addOutput(videoReaderOutput)
videoReader.addOutput(audioReaderOutput)

videoWriter.startWriting()
videoReader.startReading()
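Once reading has started, you drain each output and feed the matching writer input. A rough sketch of the video side (not from the original answer; videoWriterInput and processingQueue are assumed names for an AVAssetWriterInput already added to videoWriter and a serial dispatch queue):

videoWriterInput.requestMediaDataWhenReadyOnQueue(processingQueue) {
    while videoWriterInput.readyForMoreMediaData {
        if let buffer = videoReaderOutput.copyNextSampleBuffer() {
            // Hand the decoded sample straight to the writer input.
            videoWriterInput.appendSampleBuffer(buffer)
        } else {
            // No more video samples: close this input.
            videoWriterInput.markAsFinished()
            break
        }
    }
}

The audio side works the same way with audioReaderOutput and its own writer input.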
Look into using AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate if you want to capture and process sample buffers live instead of reading them from an existing file.

Video upload from iPhone device fails but works on Simulator perfectly ERROR:'Cannot read file'

I am trying to upload a video from iPhone device as:
var uploadTask = self.session?.uploadTaskWithRequest(request, fromFile:NSURL(string: assetFilePath.path)!)
This code works on the simulator and returns an upload task object which I can resume, but it does not work on an iPhone device.
It fails with:
2015-05-19 18:36:44.718 myApp[327:24703] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Cannot read file at file:///var/mobile/Media/DCIM/100APPLE/IMG_0144.mp4'
I tried to check whether the video file has read access, but it returns false on the iPhone:
fileManager.fileExistsAtPath(asset.path) // returns false
Has anybody encountered this before, or am I doing something wrong here?
The code I am using to get the file path is:
let options = PHFetchOptions()
options.sortDescriptors = [
    NSSortDescriptor(key: "creationDate", ascending: true)
]
currentVideofetch = PHAsset.fetchAssetsWithMediaType(.Video, options: options)
let asset = self.currentVideofetch.objectAtIndex(indexPath.row) as? PHAsset

var assetLength: NSNumber!
var assetFilePath: NSString!
if let checkdAsset = asset {
    // Ask Photos for the underlying data and file URL of the asset.
    PHImageManager.defaultManager().requestImageDataForAsset(checkdAsset, options: nil) {
        imageData, dataUTI, orientation, info in
        assetLength = imageData.length as NSNumber
        let assetFilePathUrl = info["PHImageFileURLKey"] as? NSURL
        assetFilePath = assetFilePathUrl!.absoluteString!
        println("Assets FilePath \(assetFilePath)") // prints file:///var/mobile/Media/DCIM/100APPLE/IMG_0144.mp4
    }
}
After messing with this a lot: it turns out to be a classic permissions issue in iOS. Unfortunately I didn't find any straight answer to it. We had to copy the file into our app's own local directory; after that, everything works like a charm.
For large files, though, I moved the copying logic into a background task.
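A sketch of that workaround (not from the original answer; it reuses the imageData already delivered by requestImageDataForAsset in the question's code, and localPath is an illustrative name):

// Inside the requestImageDataForAsset callback:
// write the asset bytes into our own sandbox, then upload from there.
let localPath = NSTemporaryDirectory().stringByAppendingPathComponent("upload.mp4")
if imageData.writeToFile(localPath, atomically: true) {
    let localURL = NSURL.fileURLWithPath(localPath)!
    let uploadTask = self.session?.uploadTaskWithRequest(request, fromFile: localURL)
    uploadTask?.resume()
}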

exception com.apple.coreaudio.avfaudio reason: error -50

I get this message when I try to play audio with a different pitch, and I googled the error with no success. If I set breakpoints, it stops at the line shown in my screenshot.
I tried printing all the objects to see whether anything was nil, but I didn't find anything. The most mysterious thing is that this only happens on my iPhone 6 Plus; on the other phones I tested, it doesn't break. Then I went back to the project I had originally used to add these sound effects, which is this:
https://github.com/atikur/Pitch-Perfect
And if you run it, it works, until you change...
AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, error: &error)
To:
AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, error: &error)
And then, boom (only on a real device attached to Xcode; it works in the simulator):
2015-03-21 11:56:13.311 Pitch Perfect[1237:607678] 11:56:13.311 ERROR: [0x10320c000] AVAudioFile.mm:496: -[AVAudioFile readIntoBuffer:frameCount:error:]: error -50
2015-03-21 11:56:13.313 Pitch Perfect[1237:607678] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'error -50'
*** First throw call stack:
(0x18687a530 0x1978040e4 0x18687a3f0 0x1851ea6c0 0x185232d38 0x1852130f8 0x185212ccc 0x100584fd4 0x100584f94 0x10058fdb8 0x1005882c4 0x1005925d4 0x100592208 0x198037dc8 0x198037d24 0x198034ef8)
libc++abi.dylib: terminating with uncaught exception of type NSException
And the really weird thing is this screenshot: for some reason, after printing audioEngine, audioEngine.outputNode becomes nil?
I had the same error... I had created a "sound.swift" class that my view controller would instantiate... I decided to simplify everything and focus on making the sound work. So I put the following code in the view controller, and it works:
// Fetch the recorded file.
var pitchPlayer = AVAudioPlayerNode()
var timePitch = AVAudioUnitTimePitch()
let dirPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as! String
var pathArray = [dirPath, "son.wav"]
filePath = NSURL.fileURLWithPathComponents(pathArray)
audioFile = AVAudioFile(forReading: filePath.filePathURL, error: nil)

// Build the engine graph: player -> time pitch -> output.
audioEngine = AVAudioEngine()
audioEngine.attachNode(pitchPlayer)
audioEngine.attachNode(timePitch)

// Create a session and route output audio to the speaker.
var session = AVAudioSession.sharedInstance()
session.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
session.overrideOutputAudioPort(AVAudioSessionPortOverride.Speaker, error: nil)

audioEngine.connect(pitchPlayer, to: timePitch, format: audioFile.processingFormat)
audioEngine.connect(timePitch, to: audioEngine.outputNode, format: audioFile.processingFormat)
pitchPlayer.scheduleFile(audioFile, atTime: nil, completionHandler: nil)

var audioError: NSError?
audioEngine.startAndReturnError(&audioError)
if let e = audioError {
    println(e.localizedDescription)
}
pitchPlayer.play()
