I use the AVCamCaptureManager class from Apple's AVCam sample code (built on the AVFoundation framework). At the same time, I want to use the volume-up button to take a picture.
As I understand it, the only possible solution nowadays is to use an audio session (activate it and listen for changes in volume). I started with the RBVolumeButtons class from https://github.com/blladnar/RBVolumeButtons
When my app launches, AVCamCaptureManager initializes an audio session. Then I need to start listening for volume changes, so RBVolumeButtons initializes a new audio session, which interrupts the previous one. Consequently, the camera stops, but I can use the volume buttons.
How can I avoid this interruption and use the volume buttons and the camera at the same time? Can I run two audio sessions at once? Or is there a way to get access to AVCamCaptureManager's audio session and reuse it?
Thanks a lot for considering my question!
P.S. I use this line to add a property listener inside the RBVolumeButtons class:
AudioSessionAddPropertyListener(kAudioSessionProperty_CurrentHardwareOutputVolume, volumeListenerCallback, self);
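For reference, newer SDKs offer an alternative to the deprecated AudioSession C API: observe the shared AVAudioSession's outputVolume via key-value observing, so no second session is ever created. A minimal Swift sketch under that assumption (startVolumeButtonListener is a hypothetical helper, and mixWithOthers is my guess at how to coexist with the capture manager's session):

import AVFoundation

// Hypothetical helper: listen for hardware volume changes on the *shared*
// session instead of initializing a second, interrupting session.
func startVolumeButtonListener() throws -> NSKeyValueObservation {
    let session = AVAudioSession.sharedInstance()
    // mixWithOthers so we don't kick out the capture manager's audio session.
    try session.setCategory(.playAndRecord, options: [.mixWithOthers])
    try session.setActive(true)

    // outputVolume is KVO-observable; keep the returned token alive.
    return session.observe(\.outputVolume, options: [.new]) { _, change in
        if let volume = change.newValue {
            print("Hardware volume is now \(volume)")
            // Trigger the still capture here.
        }
    }
}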
I am trying to create a recording app that has the ability to stop and start an audio recording.
My idea to achieve this is to have AudioKit record and save a new file (.aac) every time the stop button is clicked. Then, when it plays the full recording back, it would essentially concatenate all the different .aac files together. (My understanding is that I can't continue recording to the end of a file once it's saved.) Example:
The user records three times, so the directory contains [1.aac, 2.aac, 3.aac]. When played back, the user would think it's one file.
To achieve this, do I use a single AKPlayer or multiple? I would need a single playback slider and also a playback time label, both of which would have to correspond to the single concatenated sequence of [1.aac, 2.aac, 3.aac].
This is the first time I have used AudioKit, so I really appreciate any advice or solutions. Thanks!
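One way to make several saved files behave as a single recording, if you're open to dropping down to plain AVFoundation for playback, is to stitch them into an AVMutableComposition and drive one player, slider, and time label from that single timeline. A sketch under that assumption (concatenate is a hypothetical helper; this bypasses AKPlayer entirely):

import AVFoundation

// Hypothetical helper: splice [1.aac, 2.aac, 3.aac] into one seamless asset.
func concatenate(urls: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let track = composition.addMutableTrack(withMediaType: .audio,
                                            preferredTrackID: kCMPersistentTrackID_Invalid)
    var cursor = CMTime.zero
    for url in urls {
        let asset = AVURLAsset(url: url)
        if let assetTrack = asset.tracks(withMediaType: .audio).first {
            let range = CMTimeRange(start: .zero, duration: asset.duration)
            // Append this file's audio at the current end of the composition.
            try track?.insertTimeRange(range, of: assetTrack, at: cursor)
            cursor = CMTimeAdd(cursor, asset.duration)
        }
    }
    return composition
}

// A single AVPlayer then gives you one timeline for the slider and label:
// let player = AVPlayer(playerItem: AVPlayerItem(asset: try concatenate(urls: urls)))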
I have an app in which I only want to record from one particular screen. On every other screen I would like not to be recording, mostly so that audio can still play while the app is in the background without the red recording bar appearing.
The only way I've been able to do this is to stop AudioKit (AudioKit.stop()), but I'm pretty sure the starting and stopping of AudioKit is causing some very random, hard-to-track-down crashes. I have also tried calling stop on the microphone and setting input enabled to false, but neither has worked.
I know there is a similar question, AudioKit: Can I disable an AKMicrophone without calling AudioKit.stop()?, but the answer doesn't address this.
Is there any way to stop receiving input from the microphone without stopping the engine?
It depends a bit on your goal in disconnecting the microphone. If you just want to stop processing input and change the AVAudioSession category so that the red bar goes away, you could set AKSettings.audioInputEnabled = false, but then you'll need to get AVAudioSession's category updated. As far as I can tell, there currently isn't a good way to do this without stopping and starting AudioKit, although changing the category without restarting the engine should be possible. One hack that seems to work at first blush is to assign AudioKit.output = AudioKit.output, which triggers the didSet side effect that updates the category via updateSessionCategoryAndOptions().
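In code, the hack above would look something like this (AudioKit 4-era API, and only a sketch of the side effect just described, not a supported route):

// Stop processing input...
AKSettings.audioInputEnabled = false

// ...then re-assign the output to trigger the didSet side effect, which
// calls the internal updateSessionCategoryAndOptions() and refreshes the
// AVAudioSession category so the red bar can go away.
AudioKit.output = AudioKit.output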
You could always bypass AudioKit and set the shared AVAudioSession's category directly. I'm looking into creating a PR for AudioKit to expose updateSessionCategoryAndOptions(), but in the meantime you could make this function public in your own branch and run build_frameworks.sh. These aren't great solutions, but this is a valid use case that AudioKit should support better in the future!
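Bypassing AudioKit would look roughly like this (a sketch; pick whichever category your app actually needs):

import AVFoundation

do {
    // Switch the shared session to a non-recording category directly.
    try AVAudioSession.sharedInstance().setCategory(.playback)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Failed to update audio session: \(error)")
}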
As a general rule for using AudioKit, you should employ a proper gain-staging pattern. I don't know if you have any experience with DAWs (Pro Tools, Logic, Cubase, etc.), but you should view AudioKit as a DAW... in code.
AudioKit.start()
is your master output, and I honestly see no reason to ever stop it during your application's lifetime. Instead, you should mute AudioKit's master output to 'kill the sound'.
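For example, something like this (AudioKit 4-era API, untested; the master mixer is my own naming, not part of AudioKit):

import AudioKit

let mic = AKMicrophone()
let master = AKMixer(mic)   // route everything through one master mixer
AudioKit.output = master
do {
    try AudioKit.start()
} catch {
    print("AudioKit failed to start: \(error)")
}

master.volume = 0   // mute the master output instead of stopping the engine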
As for your question: AKMicrophone has both start and stop methods, so you should just be able to call stop!
let mic = AKMicrophone()
mic.start()
mic.stop()
Is it possible to record output audio in an app using Swift? So, for example, say I'm listening to a podcast, and I want to, within a separate app, record a small segment of the podcast's audio. Is there any way to do that?
I've looked around but have only been able to find information on recording from the microphone and the like.
It depends on how you are producing the audio. If the production of the audio is under your control, you can put a tap on the output and record to a file as it plays. The easiest way is with the newer AVAudioEngine API (there are other ways, but AVAudioEngine is basically an easy front end for them).
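A sketch of that tap, assuming the audio comes from an AVAudioEngine graph you own (recordOutput is a hypothetical helper):

import AVFoundation

// Install a tap on the main mixer and write each rendered buffer to a file
// while the audio plays.
func recordOutput(of engine: AVAudioEngine, to url: URL) throws {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let file = try AVAudioFile(forWriting: url, settings: format.settings)

    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? file.write(from: buffer)   // errors ignored in this sketch
    }
    // Call engine.mainMixerNode.removeTap(onBus: 0) when you're done recording.
}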
Of course, if the real problem is to take a copy of a podcast, then obviously all you have to do is download the podcast as opposed to listening to it. Similarly, you could buffer and save streaming audio to a file. There are many apps that do this. But this is not because the device's output is being hijacked; it is, again, because we have control of the sound data itself.
I believe you'll have to write a kernel extension to do that:
https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptIOKit/iokit_tutorial.html
You'd have to make your own audio driver to record it. It appears as though that is how Softonic made Soundflowerbed.
http://features.en.softonic.com/how-to-record-internal-sound-on-a-mac
I'm just playing around with trigger.io and need some clarification on native component usage. This question is specifically about the audio player, but I assume the other APIs work in the same manner, so it's probably valid for all APIs.
To play an audio file the documentation states:
forge.file.getLocal("music.mp3", function (file) {
    forge.media.createAudioPlayer(file, function (player) {
        player.play();
    });
});
If the user can play multiple audio files within the app, then with the above code a new audio player is created every time they play a file. This seems to be because you can have multiple audio files playing together.
Surely, over time, as the person uses the app, this is going to consume a lot of memory? There doesn't seem to be any way to reuse an existing player and replace the current audio file with a new one. Is this possible once you have the player instance? Or is there a way to dispose of the current instance when the user stops the audio, when it finishes, or when the user navigates away from that particular audio item?
Thanks
Tyrone.
Good spot! This is actually just an oversight in our documentation: the player instance has another method, player.destroy(), which will remove the related native instance.
I'll make sure the API docs are updated in the future.
I'm trying to run an AVCaptureSession in a view controller, but within the same controller I'm also calling a function from a library that uses AVAudioSession. I can't seem to get much info out of the debugger, other than that it crashes exactly when I call this particular library's function. The library is libpd:
https://github.com/libpd
and it calls AVAudioSession's sharedInstance. I call libpd like this:
[self.audioController configurePlaybackWithSampleRate:44100 numberChannels:2 inputEnabled:YES mixingEnabled:YES]
so mixing is enabled. But just in case, I've recompiled it so that when it initializes, I do:
UInt32 doSetProperty = 1;
AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty);
but no luck. Moving the libpd calls into viewWillAppear within the view controller didn't work either. However, if I take the code that calls libpd out of my view controller and put it in the app delegate's didFinishLaunchingWithOptions, it starts just fine, and the two sessions seem to coexist without crashing.
Am I missing something about AVCaptureSession and mixing? How do I get both sessions to coexist? I'm not using AVCaptureSession to capture audio, only camera input, so shouldn't I be able to have both going somehow?
Start the audio session (which can be set to support mixing) after you've started the camera session. In my testing, you need to wait for the camera to be set up before you start the audio session (e.g. wait a couple of seconds).
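In Swift terms, the ordering would look something like this (the two-second delay is a heuristic from my testing, not a documented requirement):

import AVFoundation

let captureSession = AVCaptureSession()
// ... add the camera input and outputs here ...
captureSession.startRunning()

// Give the camera time to finish setting up, then bring up a mixable session.
DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord, options: [.mixWithOthers])
    try? session.setActive(true)
    // Now it should be safe to start libpd's audio controller.
}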
Is it possible that the library assumes there's no other audio session active and calls AudioSessionInitialize?
Per the docs: "You may activate and deactivate your audio session as needed (see AudioSessionSetActive), but should initialize it only once."
I'd guess that the authors of this library didn't include functionality for an ongoing audio session... but it would be easy enough to dive in there and comment out the initialization lines, as long as your app always hits that function call with a running audio session (otherwise, just check with an if statement whether you already have an audio session and, if you don't, initialize one).