iOS - Detect Blow into Mic and convert the results! (swift)

I need to develop an iOS app in Swift which detects a user blowing into the microphone. It is meant to be a challenge game where two players blow into the iPhone mic one after the other. The decibel values should be measured and converted to meters or kilometers so I can determine a winner. The player who "blows farther" (player 1: 50 km, player 2: 70 km) wins.
Is this a possible implementation?
I have this code in Swift and I don't know how to proceed:
import Foundation
import UIKit
import AVFoundation
import CoreAudio

class ViewController: UIViewController {
    // @IBOutlet weak var mainImage: UIImageView!
    var recorder: AVAudioRecorder!
    var levelTimer = NSTimer()
    var lowPassResults: Double = 0.0

    override func viewDidLoad() {
        super.viewDidLoad()
        let url = NSURL.fileURLWithPath("dev/null")
        // numbers are automatically wrapped into NSNumber objects, so I simplified that to [NSString : NSNumber]
        var settings: [NSString: NSNumber] = [
            AVSampleRateKey: 44100.0,
            AVFormatIDKey: kAudioFormatAppleLossless,
            AVNumberOfChannelsKey: 1,
            AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue
        ]
        var error: NSError?
        // mainImage?.image = UIImage(named: "flyForReal.png")
        recorder = AVAudioRecorder(URL: url, settings: settings, error: &error)
        if recorder != nil {
            recorder.prepareToRecord()
            recorder.meteringEnabled = true
            recorder.record()
            levelTimer = NSTimer.scheduledTimerWithTimeInterval(0.05, target: self, selector: Selector("levelTimerCallback"), userInfo: nil, repeats: true)
        } else {
            NSLog("%@", "Error")
        }
    }

    func levelTimerCallback(timer: NSTimer) {
        recorder.updateMeters()
        let ALPHA: Double = 0.05
        var peakPowerForChannel = pow(Double(10), (0.05 * Double(recorder.peakPowerForChannel(0))))
        lowPassResults = ALPHA * peakPowerForChannel + (1.0 - ALPHA) * lowPassResults
        if lowPassResults > 0.95 {
            NSLog("@Mic blow detected")
        }
        NSLog("@Average input: %f Peak input: %f Low pass results: %f", recorder.averagePowerForChannel(0), recorder.peakPowerForChannel(0), lowPassResults)
    }
}
Thanks ahead!

I converted Andrew's answer to Swift 4:
import Foundation
import UIKit
import AVFoundation
import CoreAudio

class ViewController: UIViewController {
    var recorder: AVAudioRecorder!
    var levelTimer = Timer()
    let LEVEL_THRESHOLD: Float = -10.0

    override func viewDidLoad() {
        super.viewDidLoad()

        let documents = URL(fileURLWithPath: NSSearchPathForDirectoriesInDomains(FileManager.SearchPathDirectory.documentDirectory, FileManager.SearchPathDomainMask.userDomainMask, true)[0])
        let url = documents.appendingPathComponent("record.caf")

        let recordSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatAppleIMA4,
            AVSampleRateKey: 44100.0,
            AVNumberOfChannelsKey: 2,
            AVEncoderBitRateKey: 12800,
            AVLinearPCMBitDepthKey: 16,
            AVEncoderAudioQualityKey: AVAudioQuality.max.rawValue
        ]

        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
            try audioSession.setActive(true)
            try recorder = AVAudioRecorder(url: url, settings: recordSettings)
        } catch {
            return
        }

        recorder.prepareToRecord()
        recorder.isMeteringEnabled = true
        recorder.record()
        levelTimer = Timer.scheduledTimer(timeInterval: 0.02, target: self, selector: #selector(levelTimerCallback), userInfo: nil, repeats: true)
    }

    @objc func levelTimerCallback() {
        recorder.updateMeters()
        let level = recorder.averagePower(forChannel: 0)
        let isLoud = level > LEVEL_THRESHOLD
        // do whatever you want with isLoud
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
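To score the game described in the original question, one option is to map the metered dBFS reading onto a distance range and keep each player's maximum during their turn. A minimal sketch (the -50 dB floor and the 100 km scale are arbitrary tuning assumptions, not from the question):
// Hypothetical helper: map average power in dBFS (roughly -160...0)
// onto a 0...100 "km" score. The constants are tuning assumptions.
func blowDistanceKm(fromAveragePower dB: Float) -> Float {
    let floor: Float = -50.0                   // anything quieter counts as 0 km
    let clamped = min(max(dB, floor), 0.0)     // clamp to floor...0 dBFS
    return (clamped - floor) / -floor * 100.0  // linear map to 0...100 km
}
Calling this from levelTimerCallback() with recorder.averagePower(forChannel: 0) and keeping the running maximum per player gives the 50 km vs. 70 km comparison the question describes.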

Close. You have a couple of issues. Your selector call crashes the app because you're not passing an argument, and levelTimerCallback() expects one.
averagePowerForChannel seems to give me more real-time metering, so I used that instead of peakPowerForChannel.
Also, you need to set up an audio session. I wasn't really sure what all that math was about, so I just got rid of it here. I've pasted the entire view controller below for basic mic detection.
import Foundation
import UIKit
import AVFoundation
import CoreAudio

class ViewController: UIViewController {
    var recorder: AVAudioRecorder!
    var levelTimer = NSTimer()
    var lowPassResults: Double = 0.0

    override func viewDidLoad() {
        super.viewDidLoad()

        // make an AVAudioSession, set it to PlayAndRecord, and make it active
        var audioSession: AVAudioSession = AVAudioSession.sharedInstance()
        audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
        audioSession.setActive(true, error: nil)

        // set up the URL for the audio file
        var documents: AnyObject = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
        var str = documents.stringByAppendingPathComponent("recordTest.caf")
        var url = NSURL.fileURLWithPath(str as String)

        // make a dictionary to hold the recording settings so we can instantiate our AVAudioRecorder
        var recordSettings: [NSObject: AnyObject] = [
            AVFormatIDKey: kAudioFormatAppleIMA4,
            AVSampleRateKey: 44100.0,
            AVNumberOfChannelsKey: 2,
            AVEncoderBitRateKey: 12800,
            AVLinearPCMBitDepthKey: 16,
            AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue
        ]

        // declare a variable to store the returned error if we have a problem instantiating our AVAudioRecorder
        var error: NSError?

        // instantiate an AVAudioRecorder
        recorder = AVAudioRecorder(URL: url, settings: recordSettings, error: &error)

        // if there's an error, print it - otherwise, run prepareToRecord and meteringEnabled to turn on metering (must be run in that order)
        if let e = error {
            println(e.localizedDescription)
        } else {
            recorder.prepareToRecord()
            recorder.meteringEnabled = true

            // start recording
            recorder.record()

            // instantiate a timer to be called with whatever frequency we want to grab metering values
            self.levelTimer = NSTimer.scheduledTimerWithTimeInterval(0.02, target: self, selector: Selector("levelTimerCallback"), userInfo: nil, repeats: true)
        }
    }

    // this selector/function is called every time our timer (levelTimer) fires
    func levelTimerCallback() {
        // we have to update meters before we can get the metering values
        recorder.updateMeters()

        // print to the console if we are beyond a threshold value. Here I've used -7
        if recorder.averagePowerForChannel(0) > -7 {
            print("Dis be da level I'm hearin' you in dat mic ")
            println(recorder.averagePowerForChannel(0))
            println("Do the thing I want, mofo")
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
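One addition from the editor, not part of the original answer: on current iOS versions none of this meters anything unless the app has microphone permission, so the Info.plist needs the NSMicrophoneUsageDescription key, and it is good practice to request access before recording. A sketch in current Swift:
// Sketch: ask for microphone access before starting the recorder.
// Requires NSMicrophoneUsageDescription in Info.plist.
AVAudioSession.sharedInstance().requestRecordPermission { granted in
    DispatchQueue.main.async {
        if granted {
            // safe to set up and start the AVAudioRecorder here
        } else {
            print("Microphone permission denied")
        }
    }
}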

Related

MTAudioProcessingTap EXC_BAD_ACCESS, doesn't always fire the finalize callback. How to release it?

I'm trying to implement MTAudioProcessingTap and it works great. The problem is when I'm done using the tap, reinstantiate my class, and create a new tap.
How I'm supposedly releasing the tap:
1. I retain the tap as a property when created, hoping I can access it and release it later.
2. In the deinit() method of the class, I set the audio mix to nil and try to do a self.tap?.release().
The thing is: sometimes it works and calls the FINALIZE callback and everything is great, and sometimes it doesn't and just crashes at the tapProcess callback line:
let selfMediaInput = Unmanaged<VideoMediaInput>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).takeUnretainedValue()
Here's the full code: https://gist.github.com/omarojo/03d08165a1a7962cb30c17ec01f809a3
import Foundation
import UIKit
import AVFoundation
import MediaToolbox

protocol VideoMediaInputDelegate: class {
    func videoFrameRefresh(sampleBuffer: CMSampleBuffer) // could be audio or video
}

class VideoMediaInput: NSObject {
    private let queue = DispatchQueue(label: "com.GenerateMetal.VideoMediaInput")

    var videoURL: URL!
    weak var delegate: VideoMediaInputDelegate?
    private var playerItemObserver: NSKeyValueObservation?
    var displayLink: CADisplayLink!
    var player = AVPlayer()
    var playerItem: AVPlayerItem!
    let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)])
    var audioProcessingFormat: AudioStreamBasicDescription? // UnsafePointer<AudioStreamBasicDescription>?
    var tap: Unmanaged<MTAudioProcessingTap>?

    override init() {
    }

    convenience init(url: URL) {
        self.init()
        self.videoURL = url
        self.playerItem = AVPlayerItem(url: url)

        playerItemObserver = playerItem.observe(\.status) { [weak self] item, _ in
            guard item.status == .readyToPlay else { return }
            self?.playerItemObserver = nil
            self?.player.play()
        }

        setupProcessingTap()

        player.replaceCurrentItem(with: playerItem)
        player.currentItem!.add(videoOutput)

        NotificationCenter.default.removeObserver(self)
        NotificationCenter.default.addObserver(forName: NSNotification.Name.AVPlayerItemDidPlayToEndTime, object: nil, queue: nil) { [weak self] notification in
            if let weakSelf = self {
                /*
                 Setting actionAtItemEnd to None prevents the movie from getting paused at item end.
                 A very simplistic, and not gapless, looped playback.
                 */
                weakSelf.player.actionAtItemEnd = .none
                weakSelf.player.seek(to: CMTime.zero)
                weakSelf.player.play()
            }
        }
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(applicationDidBecomeActive(_:)),
            name: UIApplication.didBecomeActiveNotification,
            object: nil)
    }

    func stopAllProcesses() {
        self.queue.sync {
            self.player.pause()
            self.player.isMuted = true
            self.player.currentItem?.audioMix = nil
            self.playerItem.audioMix = nil
            self.playerItem = nil
            self.tap?.release()
        }
    }

    deinit {
        print(">> VideoInput deinited !!!! 📌📌")
        if let link = self.displayLink {
            link.invalidate()
        }
        NotificationCenter.default.removeObserver(self)
        stopAllProcesses()
    }

    public func playVideo() {
        if player.currentItem != nil {
            print("Starting playback!")
            player.play()
        }
    }

    public func pauseVideo() {
        if player.currentItem != nil {
            print("Pausing playback!")
            player.pause()
        }
    }

    @objc func applicationDidBecomeActive(_ notification: NSNotification) {
        playVideo()
    }

    // MARK: GET AUDIO BUFFERS

    func setupProcessingTap() {
        var callbacks = MTAudioProcessingTapCallbacks(
            version: kMTAudioProcessingTapCallbacksVersion_0,
            clientInfo: UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()),
            init: tapInit,
            finalize: tapFinalize,
            prepare: tapPrepare,
            unprepare: tapUnprepare,
            process: tapProcess)

        var tap: Unmanaged<MTAudioProcessingTap>?
        let err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
        self.tap = tap
        print("err: \(err)\n")
        if err == noErr {
        }
        print("tracks? \(playerItem.asset.tracks)\n")

        let audioTrack = playerItem.asset.tracks(withMediaType: AVMediaType.audio).first!
        let inputParams = AVMutableAudioMixInputParameters(track: audioTrack)
        inputParams.audioTapProcessor = tap?.takeRetainedValue() // tap?.takeUnretainedValue()
        // tap?.release()
        // print("inputParms: \(inputParams), \(inputParams.audioTapProcessor)\n")

        let audioMix = AVMutableAudioMix()
        audioMix.inputParameters = [inputParams]
        playerItem.audioMix = audioMix
    }

    // MARK: TAP CALLBACKS

    let tapInit: MTAudioProcessingTapInitCallback = {
        (tap, clientInfo, tapStorageOut) in
        tapStorageOut.pointee = clientInfo
        print("init \(tap, clientInfo, tapStorageOut)\n")
    }

    let tapFinalize: MTAudioProcessingTapFinalizeCallback = {
        (tap) in
        print("finalize \(tap)\n")
    }

    let tapPrepare: MTAudioProcessingTapPrepareCallback = {
        (tap, itemCount, basicDescription) in
        print("prepare: \(tap, itemCount, basicDescription)\n")
        let selfMediaInput = Unmanaged<VideoMediaInput>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).takeUnretainedValue()
        selfMediaInput.audioProcessingFormat = AudioStreamBasicDescription(
            mSampleRate: basicDescription.pointee.mSampleRate,
            mFormatID: basicDescription.pointee.mFormatID,
            mFormatFlags: basicDescription.pointee.mFormatFlags,
            mBytesPerPacket: basicDescription.pointee.mBytesPerPacket,
            mFramesPerPacket: basicDescription.pointee.mFramesPerPacket,
            mBytesPerFrame: basicDescription.pointee.mBytesPerFrame,
            mChannelsPerFrame: basicDescription.pointee.mChannelsPerFrame,
            mBitsPerChannel: basicDescription.pointee.mBitsPerChannel,
            mReserved: basicDescription.pointee.mReserved)
    }

    let tapUnprepare: MTAudioProcessingTapUnprepareCallback = {
        (tap) in
        print("unprepare \(tap)\n")
    }

    let tapProcess: MTAudioProcessingTapProcessCallback = {
        (tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut) in
        print("callback \(bufferListInOut)\n")
        let selfMediaInput = Unmanaged<VideoMediaInput>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).takeUnretainedValue()
        let status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut)
        // print("get audio: \(status)\n")
        if status != noErr {
            print("Error TAPGetSourceAudio :\(String(describing: status.description))")
            return
        }
        selfMediaInput.processAudioData(audioData: bufferListInOut, framesNumber: UInt32(numberFrames))
    }

    func processAudioData(audioData: UnsafeMutablePointer<AudioBufferList>, framesNumber: UInt32) {
        var sbuf: CMSampleBuffer?
        var status: OSStatus?
        var format: CMFormatDescription?

        // FORMAT
        // var audioFormat = self.audioProcessingFormat // self.audioProcessingFormat?.pointee
        guard var audioFormat = self.audioProcessingFormat else {
            return
        }
        status = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault, asbd: &audioFormat, layoutSize: 0, layout: nil, magicCookieSize: 0, magicCookie: nil, extensions: nil, formatDescriptionOut: &format)
        if status != noErr {
            print("Error CMAudioFormatDescriptionCreater :\(String(describing: status?.description))")
            return
        }

        print(">> Audio Buffer mSampleRate:\(Int32(audioFormat.mSampleRate))")
        var timing = CMSampleTimingInfo(duration: CMTimeMake(value: 1, timescale: Int32(audioFormat.mSampleRate)), presentationTimeStamp: self.player.currentTime(), decodeTimeStamp: CMTime.invalid)

        status = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                      dataBuffer: nil,
                                      dataReady: Bool(truncating: 0),
                                      makeDataReadyCallback: nil,
                                      refcon: nil,
                                      formatDescription: format,
                                      sampleCount: CMItemCount(framesNumber),
                                      sampleTimingEntryCount: 1,
                                      sampleTimingArray: &timing,
                                      sampleSizeEntryCount: 0,
                                      sampleSizeArray: nil,
                                      sampleBufferOut: &sbuf)
        if status != noErr {
            print("Error CMSampleBufferCreate :\(String(describing: status?.description))")
            return
        }

        status = CMSampleBufferSetDataBufferFromAudioBufferList(sbuf!,
                                                                blockBufferAllocator: kCFAllocatorDefault,
                                                                blockBufferMemoryAllocator: kCFAllocatorDefault,
                                                                flags: 0,
                                                                bufferList: audioData)
        if status != noErr {
            print("Error CMSampleBufferSetDataBufferFromAudioBufferList :\(String(describing: status?.description))")
            return
        }

        let currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sbuf!)
        print(" audio buffer at time: \(currentSampleTime)")
        self.delegate?.videoFrameRefresh(sampleBuffer: sbuf!)
    }
}
How I use my class:
self.inputVideoMedia = nil
self.inputVideoMedia = VideoMediaInput(url: videoURL)
self.inputVideoMedia!.delegate = self
The second time I do that, it crashes (but not always). The times it doesn't crash, I can see the FINALIZE print in the console.
If VideoMediaInput is deallocated before the tap is deallocated (which can happen, as there seems to be no way to synchronously stop a tap), then the tap callbacks can choke on a reference to your deallocated class.
You can fix this by passing a (wrapped, I guess) weak reference to your class. You can do it like this:
First delete your tap instance variable and any references to it - it's not needed. Then make these changes:
class VideoMediaInput: NSObject {

    class TapCookie {
        weak var input: VideoMediaInput?

        deinit {
            print("TapCookie deinit")
        }
    }

    ...

    func setupProcessingTap() {
        let cookie = TapCookie()
        cookie.input = self

        var callbacks = MTAudioProcessingTapCallbacks(
            version: kMTAudioProcessingTapCallbacksVersion_0,
            clientInfo: UnsafeMutableRawPointer(Unmanaged.passRetained(cookie).toOpaque()),
            init: tapInit,
            finalize: tapFinalize,
            prepare: tapPrepare,
            unprepare: tapUnprepare,
            process: tapProcess)
    ...

    let tapFinalize: MTAudioProcessingTapFinalizeCallback = {
        (tap) in
        print("finalize \(tap)\n")

        // release cookie
        Unmanaged<TapCookie>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
    }

    let tapPrepare: MTAudioProcessingTapPrepareCallback = {
        (tap, itemCount, basicDescription) in
        print("prepare: \(tap, itemCount, basicDescription)\n")
        let cookie = Unmanaged<TapCookie>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).takeUnretainedValue()
        let selfMediaInput = cookie.input!
    ...

    let tapProcess: MTAudioProcessingTapProcessCallback = {
        (tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut) in
        print("callback \(bufferListInOut)\n")
        let cookie = Unmanaged<TapCookie>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).takeUnretainedValue()
        guard let selfMediaInput = cookie.input else {
            print("Tap callback: VideoMediaInput was deallocated!")
            return
        }
    ...
I'm not sure if the cookie class is necessary; it exists only to wrap the weak reference. Cutting-edge Swift experts may know how to mash the weakness through all the teenage mutant ninja raw pointers, but I don't.
The audio context runs in its own real-time thread, so audio processing doesn't stop synchronously with a stop or cancel function call, but some unknown time later (on the order of the duration of some number of audio samples in some internal audio buffers), after the real-time thread drains.
Thus, audio buffers, objects, and callbacks should not be released (or reallocated) until some (unknown, but less than a couple of seconds) time after stopping any real-time audio stream.
Depending on object deallocation messages or instance variable state (including weak references) between real-time threads is reported to be currently unsafe in Swift (see the WWDC 2018 session on audio). Thus, I recommend using semaphores (outside of a real-time context, such as audio) or POSIX memory barriers (inside a bridged call to a C function), at least until some future version of Swift figures out a real-time concurrency mechanism, especially on iOS or Apple Silicon (M1) devices, which can reorder memory writes.
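To make the semaphore suggestion concrete, here is a minimal sketch (hypothetical type and method names, not code from the question): signal from the tap's finalize callback, and wait, with a timeout and never on a real-time thread, before releasing anything the callbacks reference.
import Foundation

// Sketch: gate teardown on the tap's finalize callback.
final class TapTeardownGate {
    private let semaphore = DispatchSemaphore(value: 0)

    // Call from the MTAudioProcessingTapFinalizeCallback.
    func tapDidFinalize() {
        semaphore.signal()
    }

    // Call after setting playerItem.audioMix = nil, off any real-time thread.
    // Returns false if the tap never finalized within the timeout.
    func waitForFinalize(timeout: TimeInterval = 2.0) -> Bool {
        return semaphore.wait(timeout: .now() + timeout) == .success
    }
}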

Unable to play a very simple tone

I was following this question, but the tone that I try to play with an AVAudioPCMBuffer does not play. The code is pretty simple:
class Player: NSObject {
    var engine = AVAudioEngine()
    var player = AVAudioPlayerNode()
    var mixer: AVAudioMixerNode!
    var buffer: AVAudioPCMBuffer!

    override init() {
        mixer = engine.mainMixerNode
        buffer = AVAudioPCMBuffer(pcmFormat: player.outputFormat(forBus: 0), frameCapacity: 100)
        buffer.frameLength = 100

        let sr = mixer.outputFormat(forBus: 0).sampleRate
        let nChannels = mixer.outputFormat(forBus: 0).channelCount

        var i = 0
        while i < Int(buffer.frameLength) {
            let val = sin(441 * Double(i) * Double.pi / sr)
            buffer.floatChannelData?.pointee[i] = Float(val * 0.5)
            i += Int(nChannels)
        }

        engine.attach(player)
        engine.connect(player, to: mixer, format: player.outputFormat(forBus: 0))
        engine.prepare()
    }

    func play() {
        do {
            try engine.start()
        } catch {
            print(error)
        }

        player.scheduleBuffer(buffer, at: nil, options: .loops) {
            print("Played!")
        }
        player.play()
    }
}
For some reason, though, the iPhone does not make any sound. In my ViewController, I have this:
class ViewController: UIViewController {
    var player = Player()

    override func viewDidAppear(_ animated: Bool) {
        player.play()
    }
}
As you can see, player is a class variable, so it should not be deallocated from memory.
When I run the app on my physical device (iPhone 6s, iOS 11) it does not work, yet it does work in the simulator. Why is this not making any sound, and how can I fix it?
Thanks in advance!
Make sure your device is not on silent mode.
I just created a project that you can download for testing by yourself: https://github.com/mugx/TestSound
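One more thing worth checking (an editor's note, not from the answers above): the default session category, .soloAmbient, is silenced by the Ring/Silent switch on a real device, while the simulator ignores the switch, which matches the device-only failure described. A sketch, assuming a current SDK (iOS 10+):
import AVFoundation

// Sketch: .playback keeps output audible even when the silent switch is on.
do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error)
}
Separately, note that sin(441 * Double(i) * Double.pi / sr) produces a 220.5 Hz tone; a 441 Hz sine over sample index i would be sin(2 * Double.pi * 441 * Double(i) / sr). The loop also advances i by the channel count while writing only channel 0 of floatChannelData (which is one pointer per channel, not interleaved), so every other frame stays silent.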

Allowing background audio with Swift not working

I want to allow background audio while the app is not in focus. I currently have this code, which should allow that:
do {
    try AKSettings.setSession(category: .playback, with: .mixWithOthers)
} catch {
    print("error")
}
AKSettings.playbackWhileMuted = true
I also have the 'Audio, AirPlay and Picture in Picture' setting enabled in the Capabilities settings. However, when I press the home button on my device the audio doesn't keep playing. What am I doing wrong? I am using AudioKit to produce sounds, if that matters.
I am using a singleton to house all of the AudioKit components which I named AudioPlayer.swift. Here is what I have in my AudioPlayer.swift singleton file:
class AudioPlayer: NSObject {
    var currentFrequency = String()
    var soundIsPlaying = false
    var leftOscillator = AKOscillator()
    var rightOscillator = AKOscillator()
    var rain = try! AKAudioFile()
    var rainPlayer: AKAudioPlayer!
    var envelope = AKAmplitudeEnvelope()

    override init() {
        super.init()
        do {
            try AKSettings.setSession(category: .playback, with: .mixWithOthers)
        } catch {
            print("error")
        }
        AKSettings.playbackWhileMuted = true
        AudioKit.output = envelope
        AudioKit.start()
    }

    func setupFrequency(left: AKOscillator, right: AKOscillator, frequency: String) {
        currentFrequency = frequency
        leftOscillator = left
        rightOscillator = right
        let leftPanner = AKPanner(leftOscillator)
        leftPanner.pan = -1
        let rightPanner = AKPanner(rightOscillator)
        rightPanner.pan = 1

        // Set up rain and rainPlayer
        do {
            rain = try AKAudioFile(readFileName: "rain.wav")
            rainPlayer = try AKAudioPlayer(file: rain, looping: true, deferBuffering: false, completionHandler: nil)
        } catch {
            print(error)
        }

        let mixer = AKMixer(leftPanner, rightPanner, rainPlayer)

        // Put mixer in sound envelope
        envelope = AKAmplitudeEnvelope(mixer)
        envelope.attackDuration = 2.0
        envelope.decayDuration = 0
        envelope.sustainLevel = 1
        envelope.releaseDuration = 0.2

        // Start AudioKit stuff
        AudioKit.output = envelope
        AudioKit.start()
        leftOscillator.start()
        rightOscillator.start()
        rainPlayer.start()
        envelope.start()
        soundIsPlaying = true
    }
}
And here is an example of one of my sound effect view controllers, which references the AudioPlayer singleton to send it a certain frequency (I have about a dozen of these view controllers, each with its own frequency settings):
class CalmView: UIViewController {
    let leftOscillator = AKOscillator()
    let rightOscillator = AKOscillator()

    override func viewDidLoad() {
        super.viewDidLoad()
        leftOscillator.amplitude = 0.3
        leftOscillator.frequency = 220
        rightOscillator.amplitude = 0.3
        rightOscillator.frequency = 230
    }

    @IBAction func playSound(_ sender: Any) {
        if shared.soundIsPlaying == false {
            AudioKit.stop()
            shared.setupFrequency(left: leftOscillator, right: rightOscillator, frequency: "Calm")
        } else if shared.soundIsPlaying == true && shared.currentFrequency != "Calm" {
            AudioKit.stop()
            shared.leftOscillator.stop()
            shared.rightOscillator.stop()
            shared.rainPlayer.stop()
            shared.envelope.stop()
            shared.setupFrequency(left: leftOscillator, right: rightOscillator, frequency: "Calm")
        } else {
            shared.soundIsPlaying = false
            shared.envelope.stop()
        }
    }
}
I instantiated the AudioPlayer singleton in my ViewController.swift file.
It depends on when you do your configuration in relation to when AudioKit is started. If you're using AudioKit, you should be using its AKSettings to manage your session category - basically not only the playback category but also mixWithOthers. By default, it does this:
/// Set the audio session type
@objc open static func setSession(category: SessionCategory,
                                  with options: AVAudioSessionCategoryOptions = [.mixWithOthers]) throws {
So you'd do something like this in your ViewController:
do {
    if #available(iOS 10.0, *) {
        try AKSettings.setSession(category: .playAndRecord, with: [.defaultToSpeaker, .allowBluetooth, .allowBluetoothA2DP])
    } else {
        // Fallback on earlier versions
    }
} catch {
    print("Errored setting category.")
}
So I think it's a matter of getting that straight. It might also help to have inter-app audio set up. If you still have trouble and provide more information, I can help more, but this is as good an answer as I can muster based on the info you've given so far.
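To make the ordering concrete, here is a sketch using the names from the question (the key point being that the session is configured before AudioKit.start(), with the 'audio' background capability the asker already enabled left on):
// Sketch: configure the session first, then wire the output, then start.
do {
    try AKSettings.setSession(category: .playback, with: [.mixWithOthers])
} catch {
    print("Errored setting category: \(error)")
}
AKSettings.playbackWhileMuted = true
AudioKit.output = envelope
AudioKit.start()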

Get valid Float value to use in AVPlayer from CMTime

I'm trying to set a slider to the current time of an audio file which is playing, but I'm getting problems when setting the values for the slider.
When I use a timer from viewDidLoad:
self.timer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(PlayerViewController.audioSliderUpdate), userInfo: nil, repeats: true)
which calls my function:
func audioSliderUpdate() {
    var currentTime: CMTime = (self.audioPlayerItem?.currentTime())!
    var seconds: Float64 = CMTimeGetSeconds(currentTime)
    var time: Float = Float(seconds)
    self.audioSlider.value = time
}
The audio track starts skipping, the slider jumps about, and this is returned to the console:
[aqme] 255: AQDefaultDevice (173): skipping input stream 0 0 0x0
and when I try to set the max value with this code:
let duration : CMTime = (self.audioPlayerItem?.duration)!
let seconds : Float64 = CMTimeGetSeconds(duration)
let maxTime : Float = Float(seconds)
self.audioSlider.valueMaximum = maxTime
I get this error:
<Error>: Error: this application, or a library it uses, has passed an invalid numeric value (NaN, or not-a-number) to CoreGraphics API and this value is being ignored. Please fix this problem.
Jan 20 07:24:23 ParseStarterProject-Swift[2441] <Error>: If you want to see the backtrace, please set CG_NUMERICS_SHOW_BACKTRACE environmental variable.
I've read all over SO and the answers all have simple solutions like
let floatTime = Float(player.currentTime)
which don't work, as you can't convert a CMTime directly to Float in Swift 3.
So my question is: how do I set the values for the slider?
This is the entire class, to shed light in case my explanation was unclear:
import UIKit
import Parse
import AVFoundation
import AVKit

class PlayerViewController: UIViewController, AVAudioPlayerDelegate {

    @IBOutlet var audioSlider: MTCircularSlider!

    // passed objectID from PackViewController
    var selectedAudio: String!
    var audioPlayer: AVPlayer?
    var audioPlayerItem: AVPlayerItem?
    fileprivate var timer: Timer?
    fileprivate var direction: Float = 0.01

    // query parse to get the file and stream it
    func getAudio() {
        let query = PFQuery(className: "Part")
        query.whereKey("objectId", equalTo: selectedAudio)
        query.getFirstObjectInBackground { (object, error) in
            if error != nil || object == nil {
                print("The getFirstObject request failed.")
            } else {
                print("There is an object now get the Audio. ")
                if let audioFileURL = (object?.object(forKey: "partAudio") as! PFFile).url {
                    // create an item for use by notification center
                    self.audioPlayerItem = AVPlayerItem(url: NSURL(string: audioFileURL) as! URL)
                    self.audioPlayer = AVPlayer(playerItem: self.audioPlayerItem)
                    self.audioPlayer?.play()

                    var duration: CMTime = (self.audioPlayerItem?.duration)!
                    var seconds: Float64 = CMTimeGetSeconds(duration)
                    var maxTime: Float = Float(seconds)
                    self.audioSlider.valueMaximum = maxTime

                    self.timer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(PlayerViewController.audioSliderUpdate), userInfo: nil, repeats: true)
                }
            }
        }
    }

    @IBOutlet var playerButton: UIButton!

    func playerButtonTapped() {
        // if the player is not playing
        if audioPlayer?.rate == 0 {
            audioPlayer?.play()
            self.playerButton.setImage(UIImage(named: "play"), for: UIControlState.normal)
        } else {
            audioPlayer?.pause()
            self.playerButton.setImage(UIImage(named: "pause"), for: UIControlState.normal)
        }
    }

    func finishedPlaying(myNotification: NSNotification) {
        self.playerButton.setImage(UIImage(named: "play"), for: UIControlState.normal)
        let stoppedPlayerItem: AVPlayerItem = myNotification.object as! AVPlayerItem

        // create a CMTime for zero seconds so we can go back to the beginning
        let seconds: Int64 = 0
        let preferredTimeScale: Int32 = 1
        let seekTime: CMTime = CMTimeMake(seconds, preferredTimeScale)
        stoppedPlayerItem.seek(to: seekTime)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        self.playerButton.addTarget(self, action: #selector(PlayerViewController.playerButtonTapped), for: UIControlEvents.touchUpInside)
        getAudio()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        NotificationCenter.default.addObserver(audioPlayerItem as Any, selector: #selector(PlayerViewController.finishedPlaying), name: NSNotification.Name.AVPlayerItemDidPlayToEndTime, object: audioPlayer)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // remove the observer when leaving page
        NotificationCenter.default.removeObserver(audioPlayer?.currentItem! as Any)
    }

    @IBAction func audioSliderActioned(_ sender: Any) {
        audioPlayer?.pause()
        // audioPlayer.currentTime() = TimeInterval(audioSlider.value)
        // audioPlayer.prepareToPlay()
        audioPlayer?.play()
    }

    func audioSliderUpdate() {
        var currentTime: CMTime = (self.audioPlayerItem?.currentTime())!
        var seconds: Float64 = CMTimeGetSeconds(currentTime)
        var time: Float = Float(seconds)
        self.audioSlider.value = time
    }
}
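An editor's note on the NaN error, since no answer is included above: CMTimeGetSeconds is the right conversion, but an AVPlayerItem's duration stays indefinite until the item is ready to play, and converting an indefinite CMTime yields exactly the NaN that CoreGraphics complains about. A guarded conversion sketch:
import CoreMedia

// Sketch: convert a CMTime to Float only when it is actually numeric.
func sliderValue(from time: CMTime) -> Float? {
    guard time.isNumeric else { return nil }  // filters indefinite/invalid times
    let seconds = CMTimeGetSeconds(time)      // Float64
    return seconds.isNaN ? nil : Float(seconds)
}
With that, audioSlider.valueMaximum would only be set once sliderValue(from:) returns a value for the item's duration, e.g. after the item's status becomes .readyToPlay.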

How to detect max dB Swift

I'm trying to detect dB on an iOS device; however, I am new to AVFoundation audio and can't really figure it out. I have come across this post: iOS - Detect Blow into Mic and convert the results! (swift), but it is not working for me.
My current code is this:
import Foundation
import UIKit
import AVFoundation
import CoreAudio

class ViewController: UIViewController {
    var recorder: AVAudioRecorder!
    var levelTimer = NSTimer()
    var lowPassResults: Double = 0.0

    override func viewDidLoad() {
        super.viewDidLoad()

        // make an AudioSession, set it to PlayAndRecord and make it active
        var audioSession: AVAudioSession = AVAudioSession.sharedInstance()
        audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: nil)
        audioSession.setActive(true, error: nil)

        // set up the URL for the audio file
        var documents: AnyObject = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
        var str = documents.stringByAppendingPathComponent("recordTest.caf")
        var url = NSURL.fileURLWithPath(str as String)

        // make a dictionary to hold the recording settings so we can instantiate our AVAudioRecorder
        var recordSettings: [NSObject: AnyObject] = [
            AVFormatIDKey: kAudioFormatAppleIMA4,
            AVSampleRateKey: 44100.0,
            AVNumberOfChannelsKey: 2,
            AVEncoderBitRateKey: 12800,
            AVLinearPCMBitDepthKey: 16,
            AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue
        ]

        // declare a variable to store the returned error if we have a problem instantiating our AVAudioRecorder
        var error: NSError?

        // instantiate an AVAudioRecorder
        recorder = AVAudioRecorder(URL: url, settings: recordSettings, error: &error)

        // if there's an error, print it - otherwise, run prepareToRecord and meteringEnabled to turn on metering (must be run in that order)
        if let e = error {
            print(e.localizedDescription)
        } else {
            recorder.prepareToRecord()
            recorder.meteringEnabled = true

            // start recording
            recorder.record()

            // instantiate a timer to be called with whatever frequency we want to grab metering values
            self.levelTimer = NSTimer.scheduledTimerWithTimeInterval(0.02, target: self, selector: #selector(ViewController.levelTimerCallback), userInfo: nil, repeats: true)
        }
    }

    // this selector/function is called every time our timer (levelTimer) fires
    func levelTimerCallback() {
        // we have to update meters before we can get the metering values
        recorder.updateMeters()

        // print to the console if we are beyond a threshold value. Here I've used -7
        if recorder.averagePowerForChannel(0) > -7 {
            print("Dis be da level I'm hearin' you in dat mic ")
            print(recorder.averagePowerForChannel(0))
            print("Do the thing I want, mofo")
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
I was building my movie-making app and learned something about how to meter the sound level in dB.
The raw data from recorder.averagePowerForChannel is not really the dB level of the sound; it provides a level in the range -160 to 0, so we need some modification to make this data more reasonable.
So I went looking for something that converts this value to a usable dB-style level.
(Sorry, I forgot where I found it!)
Here is the code:
/**
 Converts a dBFS value to a normalized level (0.0 - 1.0)
 - author: RÅGE_Devil_Jåmeson
 - date: (2016-07-13) 20:07:03
 - parameter dBFSValue: raw value from averagePowerForChannel
 - returns: formatted value
 */
func dBFS_convertTo_dB(dBFSValue: Float) -> Float {
    var level: Float = 0.0
    let peak_bottom: Float = -60.0 // dBFS is -160..0, so this could be -80 or -60

    if dBFSValue < peak_bottom {
        level = 0.0
    } else if dBFSValue >= 0.0 {
        level = 1.0
    } else {
        let root: Float = 2.0
        let minAmp: Float = powf(10.0, 0.05 * peak_bottom)
        let inverseAmpRange: Float = 1.0 / (1.0 - minAmp)
        let amp: Float = powf(10.0, 0.05 * dBFSValue)
        let adjAmp: Float = (amp - minAmp) * inverseAmpRange
        level = powf(adjAmp, 1.0 / root)
    }
    return level
}
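Usage would look something like this (my example, reusing the recorder metering from the question's code):
// Sketch: feed the raw metering value through the converter.
recorder.updateMeters()
let raw = recorder.averagePowerForChannel(0)   // dBFS, -160...0
let level = dBFS_convertTo_dB(dBFSValue: raw)  // normalized 0.0...1.0
print("mic level: \(level)")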
I noticed that you are recording with 2 channels, so it will be a little different from my code.
Wish I could help you out or give you some ideas :D
LAST UPDATE: changed the coding language to Swift.
