Stream audio from the microphone via Bluetooth to another iPhone - iOS

I am trying to take incoming microphone audio and stream it to another iPhone: basically a phone call, but via Bluetooth. I have the audio coming in via AVAudioRecorder:
func startRecording() {
audioRecorder = nil
let audioSession:AVAudioSession = AVAudioSession.sharedInstance()
audioSession.setCategory(AVAudioSessionCategoryRecord, error: nil)
var recordSettings:NSMutableDictionary = NSMutableDictionary(capacity: 10)
recordSettings.setObject(NSNumber(integerLiteral: kAudioFormatLinearPCM), forKey: AVFormatIDKey)
recordSettings.setObject(NSNumber(float: 44100.0), forKey: AVSampleRateKey)
recordSettings.setObject(NSNumber(int: 2), forKey: AVNumberOfChannelsKey)
recordSettings.setObject(NSNumber(int: 16), forKey: AVLinearPCMBitDepthKey)
recordSettings.setObject(NSNumber(bool: false), forKey: AVLinearPCMIsBigEndianKey)
recordSettings.setObject(NSNumber(bool: false), forKey: AVLinearPCMIsFloatKey)
soundPath = documentsDirectory.stringByAppendingPathComponent("record.caf")
refURL = NSURL(fileURLWithPath: soundPath as String)
var error:NSError?
audioRecorder = AVAudioRecorder(URL: refURL, settings: recordSettings as [NSObject : AnyObject], error: &error)
if audioRecorder.prepareToRecord() == true {
audioRecorder.meteringEnabled = true
audioRecorder.record()
} else {
println(error?.localizedDescription)
}
}
Then I tried reading the file back with the StreamReader from HERE (StreamReader by @martin-r), using:
if let aStreamReader = StreamReader(path: documentsDirectory.stringByAppendingPathComponent("record.caf")) {
while let line = aStreamReader.nextLine() {
let dataz = line.dataUsingEncoding(NSUTF8StringEncoding)
println (line)
Then I send the data to another device using:
self.appDelegate.mpcDelegate.session.sendData(data: NSData!, toPeers: [AnyObject]!, withMode: MCSessionSendDataMode, error: NSErrorPointer )
I convert each line to NSData and, using a dispatch_after loop that fires every 0.5 seconds, send it to the other device via Bluetooth.
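In current MultipeerConnectivity syntax, the send boils down to something like this (a sketch; sendChunk is just a hypothetical helper around my session):
import Foundation
import MultipeerConnectivity

// Sketch: what the periodic send amounts to. `session` is assumed to be
// an MCSession that already has connected peers.
func sendChunk(_ chunk: Data, over session: MCSession) {
    guard !session.connectedPeers.isEmpty else { return }
    do {
        // .unreliable keeps latency low for audio-style data; .reliable also works.
        try session.send(chunk, toPeers: session.connectedPeers, with: .unreliable)
    } catch {
        print("send failed: \(error)")
    }
}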
It does not seem to work, and I don't think this is a practical way of doing it. I have done numerous searches and haven't seen much on streaming data via Bluetooth; the keyword "streaming" (understandably) sends me to pages about server streaming.
My question is: how can I take audio from the microphone and send it to another iPhone via Bluetooth? I have the Bluetooth part all set up, and it works great. My question is very similar to THIS, except with iPhones and Swift: I want to have a phone call via Bluetooth.
Thank you in advance.

To simultaneously record and redirect output you need to use the category AVAudioSessionCategoryMultiRoute.
Here's the link to the categories list:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/#//apple_ref/doc/constant_group/Audio_Session_Categories
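A minimal sketch of that session setup (current Swift syntax, iOS 10+; the function name is just illustrative):
import AVFoundation

// Sketch: put the shared audio session into multi-route mode so recording
// and output routing can happen at the same time.
func configureMultiRouteSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.multiRoute, mode: .default, options: [])
    try session.setActive(true)
}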
If all else fails you can use a pre-made solution:
http://audiob.us
It has an API that lets you integrate audio streaming from one app to another:
https://developer.audiob.us/
It supports multiple output endpoints.
Hope this helps.

Related

AVPlayer / AVAudioPlayer - Play audio fileURL from iCloud Drive (works on simulator but not iPhone) + (OSStatus error -54)

I'm making an audio player.
I'm importing the file from iCloud Drive using .fileImporter.
I get a file URL that looks like this: file:///private/var/mobile/Library/Mobile%20Documents/com~apple~CloudDocs/_Storage/Audio-books/%D0%91%D1%80%D0%B5%D0%BD%D0%B4%D1%8F%D1%82%D0%B8%D0%BD%D0%B0/Audiobook.mp3
Then I pass it to the audio player (I tried AVPlayer and AVAudioPlayer). Both work on the iOS simulator.
On the device, after import, I get the error: The operation couldn’t be completed. (OSStatus error -54.)
I know it's possible; an app called Evermusic does much the same with on-device files.
Are there permissions I need to be granted to play audio that is stored on the device?
How can I access the container for com~apple~CloudDocs?
Thank you very much for the help; any suggestions are greatly appreciated, I'm seriously stuck.
For future reference, the repo of the project: https://github.com/yaosamo/AudioPlayer
You need to use startAccessingSecurityScopedResource in order to get read access to those files. See the documentation:
https://developer.apple.com/documentation/foundation/nsurl/1417051-startaccessingsecurityscopedreso
https://developer.apple.com/documentation/corefoundation/1543318-cfurlstartaccessingsecurityscope
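The basic call pattern is roughly this (a minimal sketch that just reads the file's bytes while access is open; adapt it to however you feed your player):
import Foundation

// Sketch: wrap the read in start/stop access to the security-scoped URL.
func readAudioData(from url: URL) throws -> Data {
    let didStart = url.startAccessingSecurityScopedResource()
    defer {
        // Only stop if we actually started access.
        if didStart { url.stopAccessingSecurityScopedResource() }
    }
    return try Data(contentsOf: url)
}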
In addition to @jnpdx's answer, I want to add some details and my solution example.
A few core things:
✅ For my app, if you need to access secured audio, you need to use startAccessingSecurityScopedResource().
❌ You can't simply store the URL and use it later; in fact, you don't store the fileURL at all. You need to call bookmarkData() on your URL and store that, so you can restore the URL later.
✅ Watch the Apple presentation here.
Here's how I import the file:
.fileImporter(isPresented: $presentImporter, allowedContentTypes: [.mp3]) { result in
switch result {
case .success(let url):
// Start accessing secured url
let StartAccess = url.startAccessingSecurityScopedResource()
defer {
// Must stop accessing once stop using
if StartAccess {
url.stopAccessingSecurityScopedResource()
}
}
// Creating new book
let newBook = Book(context: viewContext)
let _ = print("---- Access Granted?", url.startAccessingSecurityScopedResource())
// Getting bookmarkData of the URL
let bookmarkData = try? url.bookmarkData()
newBook.name = "\(url.lastPathComponent)"
// Save bookmarkURL into CoreData
newBook.urldata = bookmarkData
// Specifiying parent item in CoreData
newBook.origin = playlist.self
try? viewContext.save()
case .failure(let error):
print(error)
}
}
Player retrieving URL:
func Audioplayer(bookmarkData: Data) {
// Restore security scoped bookmark
var bookmarkDataIsStale = false
let playNow = try? URL(resolvingBookmarkData: bookmarkData, bookmarkDataIsStale: &bookmarkDataIsStale)
do {
player = try AVAudioPlayer(contentsOf: playNow!)
// Delegate listen when audio is finished
player?.delegate = del
NotificationCenter.default.addObserver(forName: NSNotification.Name("ended"), object: nil, queue: .main) { (_) in
player?.stop()
ended = true
let _ = print("---- Book has ended ----")
}
} catch let error {
print("Player Error", error.localizedDescription)
}
player?.prepareToPlay()
player?.play()
}
Thank you, and again, here's the repo on GitHub.

How can I play audio through non-Apple Bluetooth earphones without distortion?

I have an iPhone app that plays back prerecorded video clips. The audio sounds fine from the phone speaker or AirPods, but when I listen through Bluetooth-connected headphones/speakers that are not made by Apple, it sounds terribly distorted. I have tried to use AVAudioSession to fix the problem, but no luck. This is the code I tried (from another, similar Stack Overflow answer):
var error: NSError?
var success: Bool?
override func viewDidLoad() {
super.viewDidLoad()
// so we can play the audio undistorted through bluetooth headphones:
do {
try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
mode: .default, options: [.mixWithOthers, .allowAirPlay,
.allowBluetoothA2DP,.defaultToSpeaker])
try AVAudioSession.sharedInstance().setActive(true)
} catch let error1 as NSError {
error = error1
success = false
}
if !success! {
print("Failed to set audio session category. Error: \(error!)")
}
I am a first-time developer, so I need things explained very simply and from the basics up. Thank you so much.
It was an embarrassing error; I hope no one else makes it.
AVPlayer's volume must be in the range 0.0 to 1.0. If you set the volume higher than this (I had player.volume = 7 instead of player.volume = 0.7), you will get distortion on non-Apple Bluetooth headphones for some reason (Apple earphones and the internal speaker accommodate this error).
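In other words, clamp the value before assigning it; a small sketch (the helper is just illustrative):
import AVFoundation

// Sketch: clamp a requested volume into AVPlayer's valid 0.0...1.0 range
// before assigning it, so an accidental 7 becomes 1.0 instead of clipping.
func setVolume(_ requested: Float, on player: AVPlayer) {
    player.volume = min(max(requested, 0.0), 1.0)
}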

Best way to make an asynchronous call effectively synchronous in iOS 10

To explain my situation a little better I'm trying to make an app which will play a ping noise when a button is pressed and then proceed to record and transcribe the user's voice immediately after.
For the ping sound I'm using System Sound Services, to record the audio I'm using AudioToolbox, and to transcribe it I'm using the Speech framework.
I believe the crux of my problem lies in the timing of the asynchronous System Sound Services play function:
//Button pressed function
let audiosession = AVAudioSession.sharedInstance()
let filename = "Ping"
let ext = "wav"
if let soundUrl = Bundle.main.url(forResource: filename, withExtension: ext){
var soundId: SystemSoundID = 0
AudioServicesCreateSystemSoundID(soundUrl as CFURL, &soundId)
AudioServicesAddSystemSoundCompletion(soundId, nil, nil, {(soundid,_) -> Void in
AudioServicesDisposeSystemSoundID(soundid)
print("Sound played!")}, nil)
AudioServicesPlaySystemSound(soundId)
}
do{
try audiosession.setCategory(AVAudioSessionCategoryRecord)
try audiosession.setMode(AVAudioSessionModeMeasurement)
try audiosession.setActive(true, with: .notifyOthersOnDeactivation)
print("Changing modes!")
}catch{
print("error with audio session")
}
recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
guard let inputNode = audioEngine.inputNode else{
fatalError("Audio engine has no input node!")
}
guard let recognitionRequest = recognitionRequest else{
fatalError("Unable to create a speech audio buffer recognition request object")
}
recognitionRequest.shouldReportPartialResults = true
recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, delegate: self)
let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
self.recognitionRequest?.append(buffer)
}
audioEngine.prepare()
do{
try audioEngine.start()
delegate?.didStartRecording()
}catch{
print("audioEngine couldn't start because of an error")
}
What happens when I run this code is that it records the voice and transcribes it successfully. However, the ping is never played. The two (non-error) print statements I have in there fire in this order:
Changing modes!
Sound played!
So to my understanding, the reason the ping sound isn't being played is that by the time it actually plays, I've already changed the audio session category from playback to record. Just to verify this is true, I tried removing everything but the System Sound Services ping, and it plays the sound as expected.
So my question is: what is the best way to work around the asynchronous nature of the AudioServicesPlaySystemSound call? I've experimented with trying to pass self into the completion function so I could have it trigger a function in my class which then runs the recording chunk. However, I haven't been able to figure out how one actually goes about converting self to an UnsafeMutableRawPointer so it can be passed as clientData. Furthermore, even if I DID know how to do that, I'm not sure if it's even a good idea or the intended use of that parameter.
Alternatively, I could probably solve this problem by relying on something like notification center. But once again that just seems like a very clunky way of solving the problem that I'm going to end up regretting later.
Does anyone know what the correct way to handle this type of situation is?
Update:
As per Gruntcake's request, here is my attempt to access self in the completion block.
First I create a userData constant which is an UnsafeMutableRawPointer to self:
var me = self
let userData = withUnsafePointer(to: &me) { ptr in
return unsafeBitCast(ptr, to: UnsafeMutableRawPointer.self)
}
Next I use that constant in my callback block, and attempt to access self from it:
AudioServicesAddSystemSoundCompletion(soundId, nil, nil, {(sounded,me) -> Void in
AudioServicesDisposeSystemSoundID(sounded)
let myself = Unmanaged<myclassname>.fromOpaque(me!).takeRetainedValue()
myself.doOtherStuff()
print("Sound played!")}, userData)
Your attempt to call doOtherStuff() in the completion block is a correct approach (the only other option is notifications, so those are the only two choices).
What complicates it in this case is the Obj-C-to-Swift bridging that is necessary. Code to do that is:
let myData = unsafeBitCast(self, UnsafeMutablePointer<Void>.self)
AudioServicesAddSystemSoundCompletion(YOUR_SOUND_ID, CFRunLoopGetMain(), kCFRunLoopDefaultMode,{ (mSound, mVoid) in
let me = unsafeBitCast(mVoid, YOURCURRENTCLASS.self)
//me it is your current object so if yo have a variable like
// var someVar you can do
print(me.someVar)
}, myData)
Credit: This code was taken from an answer to this question, though it is not the accepted answer:
How do I implement AudioServicesSystemSoundCompletionProc in Swift?
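Side note: if you can target iOS 9 or later, AudioToolbox also has a block-based call, AudioServicesPlaySystemSoundWithCompletion, which avoids the C callback and the pointer bridging entirely. A rough sketch (the helper and the startRecording closure are placeholders, not part of the question's code):
import AudioToolbox

// Sketch: play the ping and only reconfigure the session / start recording
// once the sound has actually finished. Note the completion block may not
// run on the main queue; dispatch to main if you touch UI from it.
func playPing(_ soundId: SystemSoundID, then startRecording: @escaping () -> Void) {
    AudioServicesPlaySystemSoundWithCompletion(soundId) {
        AudioServicesDisposeSystemSoundID(soundId)
        startRecording()
    }
}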

How to save recorded audio iOS?

I am developing an application in which audio is recorded and transcribed to text. I am using the SpeechKit SDK provided by Nuance Developers.
The functions I am adding are:
Save the recorded audio file to persistent memory
Display the audio files in a table view
Load the saved audio files later
Play the audio files
How do I save the audio files to persistent storage?
Here's the code : https://gist.github.com/buildFlash/48d143217b721823ff4c3c03a925ba55
When you record audio with AVAudioRecorder, you have to pass the URL of the location where you want to store your audio, so by default it stores the audio at that location.
For example:
var audioSession:AVAudioSession = AVAudioSession.sharedInstance()
audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
audioSession.setActive(true, error: nil)
var documents: AnyObject = NSSearchPathForDirectoriesInDomains( NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
var str = documents.stringByAppendingPathComponent("myRecording1.caf")
var url = NSURL.fileURLWithPath(str as String)
var recordSettings = [AVFormatIDKey:kAudioFormatAppleIMA4,
AVSampleRateKey:44100.0,
AVNumberOfChannelsKey:2,AVEncoderBitRateKey:12800,
AVLinearPCMBitDepthKey:16,
AVEncoderAudioQualityKey:AVAudioQuality.Max.rawValue
]
println("url : \(url)")
var error: NSError?
audioRecorder = AVAudioRecorder(URL:url, settings: recordSettings, error: &error)
if let e = error {
println(e.localizedDescription)
} else {
audioRecorder.record()
}
So here, url is the location where your audio is stored, and you can use that same url to play the audio back. You can also read the file at that url (or path) as data if you want to send it to a server.
So if you are using a third-party library, check where it is storing the audio and get it from there; it should also have some method that returns the file's location.
PS: there is no need to use a third-party library to record audio, because you can easily manage it with AVAudioRecorder and AVAudioPlayer (for playing audio from a url).
In short, if you are recording audio, then you are already storing it at the same time!
You can also refer to Ravi Shankar's tutorial.
Reference: this SO post
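For playback from that saved location, a rough sketch in current Swift (the file name matches the example above; adjust as needed):
import AVFoundation

// Sketch: play back the recording from the same Documents URL it was saved to.
// Keep a strong reference to the returned player, or playback stops immediately.
func playRecording(named fileName: String = "myRecording1.caf") throws -> AVAudioPlayer {
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileURL = documents.appendingPathComponent(fileName)
    let player = try AVAudioPlayer(contentsOf: fileURL)
    player.prepareToPlay()
    player.play()
    return player
}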

Audible glitches on buffer playback via AVAudioPlayerNode in iOS (Swift) *working in simulator, but not on device

When using an AVAudioPlayerNode to schedule a short buffer to play immediately on a touch event ("Touch Up Inside"), I've noticed audible glitches/artifacts on playback while testing. The audio does not glitch at all in the iOS simulator; however, there is audible distortion on playback when I run the app on an actual iOS device. The distortion occurs randomly (the triggered sound will sometimes sound great, while other times it sounds distorted).
I've tried using different audio files and file formats, and preparing the buffer for playback using the prepareWithFrameCount method, but unfortunately the result is always the same and I'm stuck wondering what could be going wrong.
I've stripped the code down to globals for clarity and simplicity. Any help or insight would be greatly appreciated. This is my first attempt at developing an iOS app and my first question posted on Stack Overflow.
let filePath = NSBundle.mainBundle().pathForResource("BD_withSilence", ofType: "caf")!
let fileURL: NSURL = NSURL(fileURLWithPath: filePath)!
var error: NSError?
let file = AVAudioFile(forReading: fileURL, error: &error)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(PCMFormat: fileFormat, frameCapacity: frameCount)
let audioEngine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
func startEngine() {
var error: NSError?
file.readIntoBuffer(buffer, error: &error)
audioEngine.attachNode(playerNode)
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
audioEngine.prepare()
func start() {
var error: NSError?
audioEngine.startAndReturnError(&error)
}
start()
}
startEngine()
let frameCapacity = AVAudioFramePosition(buffer.frameCapacity)
let frameLength = buffer.frameLength
let sampleRate: Double = 44100.0
func play() {
func scheduleBuffer() {
playerNode.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
playerNode.prepareWithFrameCount(frameLength)
}
if playerNode.playing == false {
scheduleBuffer()
let time = AVAudioTime(sampleTime: frameCapacity, atRate: sampleRate)
playerNode.playAtTime(time)
}
else {
scheduleBuffer()
}
}
// triggered by a "Touch Up Inside" event on a UIButton in my ViewController
@IBAction func triggerPlay(sender: AnyObject) {
play()
}
Update:
OK, I think I've identified the source of the distortion: the volume of the node(s) is too high at the output and causes clipping. After adding these two lines in my startEngine function, the distortion no longer occurred:
playerNode.volume = 0.8
audioEngine.mainMixerNode.volume = 0.8
However, I still don't know why I need to lower the output: my audio file itself does not clip. I'm guessing that it might be a result of the way AVAudioPlayerNodeBufferOptions.Interrupts is implemented. When a buffer interrupts another buffer, could there be an increase in output volume as a result of the interruption, causing output clipping? I'm still looking for a solid understanding as to why this occurs. If anyone is willing/able to provide any clarification about this, that would be fantastic!
Not sure if this is the problem you experienced in 2015; it may be the same issue that @suthar experienced in 2018.
I experienced a very similar problem, and it was due to the fact that the sample rate on the device is different from the one in the simulator. On macOS it is 44100, and on iOS devices (late-model ones) it is 48000.
So when you fill your buffer with 44100 samples per second on a 48000 device, you get 3900 samples of silence per second. When played back, it doesn't sound like silence; it sounds like a glitch.
I used the mainMixer format when connecting my playerNode and also when creating my pcmBuffer. Don't refer to 48000 or 44100 anywhere in the code.
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: mixerNode, format: mixerNode.outputFormat(forBus: 0))
let pcmBuffer = AVAudioPCMBuffer(pcmFormat: SynthEngine.shared.audioEngine.mainMixerNode.outputFormat(forBus: 0),
                                 frameCapacity: AVAudioFrameCount(bufferSize))
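If you want to confirm the mismatch on a device, you can log the hardware rate next to the mixer's rate; a quick sketch (assumes an iOS target):
import AVFoundation

// Sketch: compare the hardware sample rate with the mixer's output format.
func logSampleRates(for engine: AVAudioEngine) {
    let hardwareRate = AVAudioSession.sharedInstance().sampleRate
    let mixerRate = engine.mainMixerNode.outputFormat(forBus: 0).sampleRate
    print("hardware: \(hardwareRate) Hz, mixer: \(mixerRate) Hz")
}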
