I'm trying to run an AVCaptureSession in a view controller, but in the same view controller I'm also calling a function from a library that uses AVAudioSession. I can't get much information out of the debugger, other than that the app crashes exactly when I call this particular library's function. The library is libpd:
https://github.com/libpd
It calls AVAudioSession via sharedInstance. I call libpd like this:
[self.audioController configurePlaybackWithSampleRate:44100 numberChannels:2 inputEnabled:YES mixingEnabled:YES]
so mixing is enabled. Just in case, I've also recompiled the library so that when it initializes it does:
UInt32 doSetProperty = 1;
AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty);
but no luck. Moving the libpd calls to viewWillAppear within the view controller didn't work either. However, if I take the code that calls libpd out of my view controller and put it in the app delegate inside didFinishLaunchingWithOptions, everything starts up fine, and the two sessions seem to co-exist without crashing.
Am I missing something about AVCaptureSession and mixing? How do I get both sessions to co-exist? I'm not using AVCaptureSession to capture audio, only camera input, so shouldn't I be able to have both running at once?
Start the audio session (which can be set to support mixing) after you've started the camera session. In my testing, you need to wait for the camera to be set up before you start the audio session (e.g. wait a couple of seconds).
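A rough sketch of that ordering (shown in Swift; the startCameraSession() and startLibPdAudio() helpers are placeholders for your own setup code, and the two-second delay is only a guess):

import UIKit

class CameraViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        startCameraSession()

        // Give the capture session a moment to finish configuring before
        // bringing up the audio session (with mixing enabled).
        DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
            self.startLibPdAudio()
        }
    }

    private func startCameraSession() {
        // ... configure and start your AVCaptureSession here ...
    }

    private func startLibPdAudio() {
        // ... configure the libpd audio controller here, with mixing enabled ...
    }
}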
Is it possible that the library assumes there's no other audio session active and calls AudioSessionInitialize?
Per the docs "You may activate and deactivate your audio session as needed (see AudioSessionSetActive), but should initialize it only once."
I'd guess that the authors of this library didn't account for an audio session that is already up and running, but it would be easy enough to dive in there and comment out the initializing lines, as long as your app always hits that function call with a running audio session. (Otherwise, just check with an if statement whether you already have an audio session, and initialize one only if you don't.)
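A minimal sketch of that guard, shown in Swift for brevity and assuming you wrap the library's one-time setup in your own function (the wrapper and the flag are hypothetical, not part of libpd):

// Initialize the audio session only once, no matter how many call sites reach this point.
var audioSessionInitialized = false

func ensureAudioSessionInitialized() {
    guard !audioSessionInitialized else { return }
    // ... the library's one-time audio session initialization goes here ...
    audioSessionInitialized = true
}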
My application uses the Google WebRTC framework to make audio calls, and that part works. However, I would like to find a way to stream an audio file during a call.
Scenario:
A calls B
B answers and plays some music
A hears this music
I've downloaded the entire source code of WebRTC and am trying to understand how it works. On the iOS side it seems to be using Audio Units, and I can see a voice_processing_audio_unit file. I would (maybe wrongly) assume that I need to create a custom audio_unit that reads its data from a file?
Does anyone have an idea of which direction to go in?
After fighting with this issue for an entire week, I finally managed to find a solution to the problem.
By editing the WebRTC code, I was able to get down to the AudioUnit level and, in the audio rendering callback, catch the io_data buffer.
This callback is called every 10 ms to get the data from the mic. Therefore, in this precise callback, I was able to replace the contents of the io_data buffer with my own audio data.
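The actual change lives inside WebRTC's own sources, but as a purely conceptual sketch of the idea (not WebRTC's real code), replacing the captured mic samples with samples from a decoded file inside an AudioUnit render callback looks roughly like this, assuming 16-bit interleaved PCM and a pre-decoded fileSamples buffer:

import AudioToolbox

// Assumption: the file has already been decoded into 16-bit PCM samples.
var fileSamples = [Int16]()
var fileReadIndex = 0

// Conceptual render callback: overwrite the captured mic samples with
// samples taken from the decoded file, padding with silence at the end.
let injectFileAudio: AURenderCallback = { _, _, _, _, _, ioData in
    guard let ioData = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(ioData) {
        guard let samples = buffer.mData?.assumingMemoryBound(to: Int16.self) else { continue }
        let sampleCount = Int(buffer.mDataByteSize) / MemoryLayout<Int16>.size
        for i in 0..<sampleCount {
            samples[i] = fileReadIndex < fileSamples.count ? fileSamples[fileReadIndex] : 0
            fileReadIndex += 1
        }
    }
    return noErr
}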
I have an app in which I only want to record on one particular screen. On every other screen I would like not to be recording, mostly so that audio can still play while the app is in the background without the red recording bar being there.
The only way I've been able to do this is to stop AudioKit (AudioKit.stop()), but I'm pretty sure the starting and stopping of AudioKit is causing some very random, hard-to-track-down crashes. I have also tried calling stop on the microphone and setting input enabled to false, but neither of those has worked.
I know there is a similar question, AudioKit: Can I disable an AKMicrophone without calling AudioKit.stop()?, but the answer doesn't address this.
Is there any way to stop receiving input from the microphone without stopping the engine?
It depends a bit on your goal in disconnecting the microphone. If you just want to stop processing input and change the AVAudioSession category so that the red bar goes away, you could set AKSettings.audioInputEnabled = false, but then you'll need to get AVAudioSession's category updated. Currently, as far as I can tell, there isn't a good way to do this without stopping/starting AudioKit, although changing the category without restarting the engine should be possible. One hack that seems to work at first blush is to assign AudioKit.output = AudioKit.output to trigger the didSet side effect, which updates the category via updateSessionCategoryAndOptions().
You could always bypass AudioKit and set the shared AVAudioSession's category directly as well. I'm looking into creating a PR for AudioKit to expose updateSessionCategoryAndOptions(), but in the meantime you could make this function public in your own branch and run build_frameworks.sh. Anyway, these aren't great solutions, but this is a valid use case that AudioKit should support better in the future!
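A sketch of that bypass route, assuming a recent AudioKit where AKSettings.audioInputEnabled exists and a recent SDK for the AVAudioSession calls; going straight to AVAudioSession side-steps AudioKit's own session management, so treat it as a workaround rather than a supported path:

import AVFoundation
import AudioKit

// Stop asking AudioKit for input, then drop the session back to plain
// playback so the red recording indicator goes away.
func leaveRecordingScreen() {
    AKSettings.audioInputEnabled = false
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Failed to update audio session: \(error)")
    }
}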
As a general rule when using AudioKit, you should be employing a proper gain-staging pattern. I don't know if you have any experience with DAWs (Pro Tools, Logic, Cubase, etc.), but you should view AudioKit as a DAW... in code.
AudioKit.start()
is your master output, and I honestly see no reason to ever stop it while your application is running. Instead, you should mute AudioKit's master output to 'kill the sound'.
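A hedged sketch of that routing, assuming AudioKit 4.x naming (AKMicrophone, AKMixer, AudioKit.output):

import AudioKit

// Route the signal chain through a master mixer so it can be muted
// without stopping the engine.
let mic = AKMicrophone()
let master = AKMixer(mic)
AudioKit.output = master
AudioKit.start()   // in newer AudioKit versions this call throws, i.e. try AudioKit.start()

// Later, to 'kill the sound' without calling AudioKit.stop():
master.volume = 0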
As for your question, AKMicrophone has both start and stop methods; you should just be able to call stop!
let mic = AKMicrophone()
mic.start()
mic.stop()
I'm new here, and new to mobile dev in general. This question is more about approach than anything. I have a simple app that I'm writing to learn various things, one of which is AVFoundation. I have the app working to the point where I record audio using AVAudioRecorder, play the recorded file back with AVAudioPlayer, and all is well. There are two things I'd like to achieve, but I'm not quite sure how to go about them in the best way. I'm using Swift 3, Xcode 8.3, iOS 10.3. Lots of 3s.
First: I want to play back only X seconds of the audio. To achieve this, my thought is to use a scheduledTimer for X seconds, which will trigger a stop() call when it elapses. Is that the best method to use?
Second: I want to measure the decibel level of input coming into the microphone while it's recording. This is the one I truly have little insight into. I believe this can be obtained through an AVAudioRecorder power value (averagePower(forChannel:)?), but I'm unclear as to how I can monitor the value while recording and act on it.
Not really sure what code to include since it's pretty basic. I'm setting up the AVAudioSession in the AppDelegate, the AVAudioRecorder is set up to record in didFinishLoading, and the rest of the record, stop, and play functionality is driven by buttons.
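As an untested sketch of both ideas (the recorder is assumed to be configured elsewhere, X is a placeholder, and the metering part assumes isMeteringEnabled, updateMeters(), and averagePower(forChannel:) are the right calls):

import AVFoundation

class RecorderViewModel {
    var player: AVAudioPlayer?
    var recorder: AVAudioRecorder?   // assumed to be configured elsewhere
    var meterTimer: Timer?

    // First: stop playback after X seconds.
    func play(url: URL, forSeconds x: TimeInterval) throws {
        player = try AVAudioPlayer(contentsOf: url)
        player?.play()
        _ = Timer.scheduledTimer(withTimeInterval: x, repeats: false) { [weak self] _ in
            self?.player?.stop()
        }
    }

    // Second: poll the input level while recording.
    func startMetering() {
        recorder?.isMeteringEnabled = true
        meterTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let recorder = self?.recorder, recorder.isRecording else { return }
            recorder.updateMeters()
            let level = recorder.averagePower(forChannel: 0)   // roughly -160 dB (silence) up to 0 dB
            print("input level: \(level) dB")
        }
    }
}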
I'm just playing around with trigger.io and need some clarification on native component usage. This question is specifically about the audio player, but I assume the other APIs work in the same manner, so it's probably valid for all APIs.
To play an audio file the documentation states:
forge.file.getLocal("music.mp3", function (file) {
    forge.media.createAudioPlayer(file, function (player) {
        player.play();
    });
});
With the above code, if you have multiple audio files that the user can play within the app, a new audio player is created every time they play a file. This seems to be so that you can have multiple audio files playing together.
Surely, over time, as the person uses the app, this is going to consume a lot of memory? There doesn't seem to be any way to reuse an existing player and replace the current audio file with a new one. Is this possible once you have the "player" instance? Or is there a way to dispose of the current instance when the user stops the audio, when it finishes, or when the user navigates away from that particular audio item?
Thanks
Tyrone.
Good spot. This is actually just an oversight in our documentation; the player instance has another method, player.destroy(), which will remove the related native instance.
I'll make sure the API docs are updated in the future.
I use the AVCamCaptureManager class (from Apple's AVCam sample, built on the AVFoundation framework). At the same time, I want to use the volume-up button to take a picture.
As I understand it, the only possible solution nowadays is to use an audio session (turn it on and listen for changes in volume). I started using the RBVolumeButtons class from here: https://github.com/blladnar/RBVolumeButtons
When my app launches, AVCamCaptureManager initializes an audio session. Then I need to start listening for changes in volume, so RBVolumeButtons initializes a new audio session, which interrupts the previous one. Consequently, the camera stops, but I can use the volume buttons.
How can I avoid this interruption and use the volume buttons and the camera at the same time? Can I run two audio sessions at the same time? Or is there a way to get access to AVCamCaptureManager's audio session and reuse it?
Thanks a lot for considering my question!
P.S. I use this line to add the property listener inside the RBVolumeButtons class:
AudioSessionAddPropertyListener(kAudioSessionProperty_CurrentHardwareOutputVolume, volumeListenerCallback, self);