Reuse audio player - trigger.io

I'm just playing around with trigger.io, and need some clarification on native component usage. This question is specifically about the audio player, but I assume the other APIs work in the same manner, so it's probably valid for all APIs.
To play an audio file the documentation states:
forge.file.getLocal("music.mp3", function (file) {
    forge.media.createAudioPlayer(file, function (player) {
        player.play();
    });
});
If the app has multiple audio files that the user can play, then with the code above a new audio player is created every time a file is played. This appears to be by design, since it allows multiple audio files to play at the same time.
Surely, over time, as the person uses the app this is going to consume a lot of memory? There doesn't seem to be any way to reuse an existing player and replace the current audio file with a new one. Is this possible once you have the "player" instance? Or is there a way to dispose of the current instance when the user stops the audio, when it finishes, or when the user navigates away from that particular audio item?
Thanks
Tyrone.

Good spot: this is actually just an oversight in our documentation. The player instance has another method, player.destroy(), which will remove the related native instance; you can call it when the user stops the audio, when playback finishes, or when they navigate away from that item.
I'll make sure the API docs are updated in the future.

Related

Concatenate Audio Recordings for Playback in AudioKit

I am trying to create a recording app that has the ability to stop and start an audio recording.
My idea to achieve this is to have AudioKit record and save a new file (.aac) every time the stop button is clicked. Then, when the full recording is played back, it would essentially concatenate all the different AACs together. (My understanding is that I can't continue recording onto the end of a file once it's saved.) Example:
The user makes three different recordings, so the directory contains [1.aac, 2.aac, 3.aac]. When played back, the user would think it's one file.
To achieve this, do I use a single AKPlayer or multiple? I would need a single playback slider and also a playback time label, and both of these would have to correspond to the 'single concatenated' file of [1.aac, 2.aac, 3.aac].
This is the first time I have used AudioKit; I really appreciate any advice or solutions. Thanks!
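A minimal sketch of one possible approach, using plain AVFoundation (AVQueuePlayer) rather than AudioKit; the file names and the documents-directory location are assumptions taken from the example above:

import AVFoundation

// Queue the saved segments so they play back-to-back as if they were one file.
func makeQueuePlayer() -> AVQueuePlayer {
    let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let items = ["1.aac", "2.aac", "3.aac"]
        .map { docs.appendingPathComponent($0) }
        .map { AVPlayerItem(url: $0) }

    // The combined duration of all segments can drive a single slider / time label.
    let total = items.reduce(CMTime.zero) { CMTimeAdd($0, $1.asset.duration) }
    print("combined duration: \(CMTimeGetSeconds(total)) s")

    return AVQueuePlayer(items: items)
}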

Stream audio during WebRTC calls

My application uses the Google WebRTC framework to make audio calls, and that part works. However, I would like to find a way to stream an audio file during a call.
Scenario :
A calls B
B answers and plays music
A hears this music
I've downloaded the entire source code of WebRTC and am trying to understand how it works. On the iOS side it seems to be using Audio Units; I can see a voice_processing_audio_unit file. I would (maybe wrongly) assume that I need to create a custom audio unit that reads its data from a file?
Does anyone have an idea which direction to go in?
After fighting with this issue for an entire week, I finally managed to find a solution to this problem.
By editing the WebRTC code, I was able to get down to the level of the Audio Units and, in the audio rendering callback, catch the io_data buffer.
This callback is called every 10 ms to get the data from the mic. Therefore, in this precise callback, I was able to replace the io_data buffer with my own audio data.
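For illustration only (this is not WebRTC's internal code), a minimal Swift sketch of the general shape of such a render callback, where the captured buffer is overwritten with your own samples; here the replacement is just silence as a placeholder, and a real implementation would read from your audio file instead:

import AudioToolbox
import Foundation

let replaceMicAudio: AURenderCallback = { _, _, _, _, _, ioData in
    guard let ioData = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(ioData) {
        if let data = buffer.mData {
            // Overwrite the mic samples with your own audio (silence here as a placeholder).
            memset(data, 0, Int(buffer.mDataByteSize))
        }
    }
    return noErr
}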

Recording output audio with Swift

Is it possible to record output audio in an app using Swift? So, for example, say I'm listening to a podcast, and I want to, within a separate app, record a small segment of the podcast's audio. Is there any way to do that?
I've looked around but have only been able to find information on recording from the microphone and such.
It depends on how you are producing the audio. If the production of the audio is within your control, you can put a tap on the output and record to a file as it plays. The easiest way is with the new AVAudioEngine feature (there are other ways, but AVAudioEngine is basically an easy front end for them).
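For example, a minimal sketch of that tap approach, assuming your app already plays its audio through an AVAudioEngine instance; the file name and location are placeholders:

import AVFoundation

// Appends every buffer the main mixer renders to a file while playback runs.
func startRecordingOutput(of engine: AVAudioEngine) throws -> AVAudioFile {
    let mixer = engine.mainMixerNode
    let format = mixer.outputFormat(forBus: 0)
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("output-tap.caf")
    let file = try AVAudioFile(forWriting: url, settings: format.settings)

    mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? file.write(from: buffer)
    }
    return file
}

// Later, to stop capturing:
// engine.mainMixerNode.removeTap(onBus: 0)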
Of course, if the real problem is to take a copy of a podcast, then obviously all you have to do is download the podcast as opposed to listening to it. Similarly, you could buffer and save streaming audio to a file. There are many apps that do this. But this is not because the device's output is being hijacked; it is, again, because we have control of the sound data itself.
I believe you'll have to write a kernel extension to do that
https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptIOKit/iokit_tutorial.html
You'd have to make your own audio driver to record it
It appears as though that is how soundflowerbed was made, according to this Softonic article:
http://features.en.softonic.com/how-to-record-internal-sound-on-a-mac

Trying to use "The Amazing Audio Engine" and "AudioStreamer" in the same app

In the app I'm currently working on, there is a "studio" where you can make some sound effects, and for that I'm using "The Amazing Audio Engine".
There is also an option to listen to songs by streaming.
Unfortunately, The Amazing Audio Engine doesn't contain streaming functionality, so I'm using the "AudioStreamer" class.
I don't know why, but the two don't work well together for me.
Each of them alone works great, but the moment I try to play some audio with The Amazing Audio Engine, stop, move to streaming, then move back to the audio engine, the sound doesn't play any more. No sound!
I already checked that I call "stop" on every class and set it to nil.
I allocate each of them again every time before they play.
I'm out of options, and am thinking maybe it has something to do with Core Audio, which both of them use?
Any help would be much appreciated
Thanks
EDIT
What I found is that this happens only when I use the "stop" method of AudioStreamer!
Can anyone explain why?
Second edit
Found the answer!
This was solved by commenting out this:
/*
while (state != AS_INITIALIZED)
{
    [NSThread sleepForTimeInterval:0.1];
}
*/
and adding this:
AudioQueueStart(audioQueue, NULL);
to the "stop" method. I still don't really understand why...
It takes some time after calling an audio stop method or function for it to really stop all the audio units (while the buffers get emptied by the hardware, etc.). You often can't restart audio until after this short delay.

Record audio iOS

How does one record audio on iOS? Not input from the microphone; I want to be able to capture/record the audio currently playing within my app.
So, for example, I start a recording session, and I want any sound that plays within my app (and only my app) to be recorded to a file.
I have done research on this, but I am confused about what to use, as it looks like mixing audio frameworks can cause problems.
I just want to be able to capture and save the audio playing within my application.
Well, if you're looking to record just the audio that YOUR app produces, then yes, this is very much possible.
What isn't possible is recording all audio that is output through the speaker.
(EDIT: I just want to clarify that there is no way to record audio output produced by other applications. You can only record the audio samples that YOU produce).
If you want to record your app's audio output, you must use the Remote I/O audio unit (http://atastypixel.com/blog/using-remoteio-audio-unit/).
All you would really need to do is copy the playback buffer after you fill it.
For example:
memcpy(dest, ioData->mBuffers[0].mData, amount_of_bytes);
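As an illustration of the same idea in Swift (not taken from the linked article), one way to get at each rendered buffer is a post-render notify callback on your output unit; outputUnit and what you do with the copied bytes are assumptions here:

import AudioToolbox

let tapCallback: AURenderCallback = { _, ioActionFlags, _, _, inNumberFrames, ioData in
    // Only act after the unit has filled the buffer.
    guard ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
          let ioData = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(ioData) where buffer.mData != nil {
        // This is where you would copy the rendered samples into your own storage.
        print("captured \(buffer.mDataByteSize) bytes for \(inNumberFrames) frames")
    }
    return noErr
}

func installOutputTap(on outputUnit: AudioUnit) {
    _ = AudioUnitAddRenderNotify(outputUnit, tapCallback, nil)
}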
This is possible by wrapping the Core Audio public utility class CAAudioUnitOutputCapturer:
http://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
See my reply in this question ("Properly use Objective C++") for the wrapper classes.
There is no public API for capturing or recording all generic audio output from an iOS app.
Check out the MixerHostAudio sample application from Apple. It's a great way to start learning about Audio Units. Once you have a grasp of that, there are many tutorials online that talk about adding recording.
