My application uses Google's WebRTC framework to make audio calls, and that part works. However, I would like to find a way to stream an audio file during a call.
Scenario:
A calls B
B answers and plays some music
A hears this music
I've downloaded the entire source code of WebRTC and am trying to understand how it works. On the iOS side it seems to be using Audio Units; I can see a voice_processing_audio_unit file. I would (maybe wrongly) assume that I need to create a custom audio unit that reads its data from a file?
Does anyone have an idea of which direction to go in?
After fighting with this issue for an entire week, I finally managed to find a solution to this problem.
By editing the WebRTC code, I was able to get down to the level of the AudioUnits and, in the audio rendering callback, catch the io_data buffer.
This callback is called every 10 ms to get the data from the mic, so in that precise callback I was able to overwrite the io_data buffer with my own audio data.
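For anyone attempting the same thing, the idea looks roughly like this. This is only a conceptual Swift sketch of such a render callback; the real change has to be made inside WebRTC's voice_processing_audio_unit code (C++/Objective-C++), and fileSamples / fileReadIndex are placeholder names for a buffer you fill from your decoded audio file elsewhere (16-bit integer PCM is assumed, matching what the mic path delivers).

import AudioToolbox

// Placeholder state: decoded 16-bit PCM from the file, filled on another thread.
var fileSamples = [Int16]()
var fileReadIndex = 0

let micReplacementCallback: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    guard let abl = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(abl) {
        guard let raw = buffer.mData else { continue }
        let samples = raw.assumingMemoryBound(to: Int16.self)
        // Overwrite the 10 ms block of mic samples with samples from the file.
        for frame in 0..<Int(inNumberFrames) {
            if fileReadIndex < fileSamples.count {
                samples[frame] = fileSamples[fileReadIndex]
                fileReadIndex += 1
            } else {
                samples[frame] = 0   // silence once the file runs out
            }
        }
    }
    return noErr
}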
I'm having trouble controlling third-party AUv3 instruments with MIDI using AVAudioSequencer (iOS 12.1.4, Swift 4.2, Xcode 10.1) and would appreciate your help.
What I'm doing currently (sketched in code after this list):
Get all AUs of type kAudioUnitType_MusicDevice.
Instantiate one and connect it to the AVAudioEngine.
Create some notes, and put them on a MusicTrack.
Hand the track data over to an AVAudioSequencer connected to the engine.
Set the destinationAudioUnit of the track to my selected Audio Unit.
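Condensed down, the setup looks roughly like this (just a sketch: midiData stands in for the track data I build from my notes, the first found instrument is picked only for illustration, and error handling is stripped):

import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()

// 1. Find all instrument (music device) Audio Units.
let desc = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                     componentSubType: 0,
                                     componentManufacturer: 0,
                                     componentFlags: 0,
                                     componentFlagsMask: 0)
let instruments = AVAudioUnitComponentManager.shared().components(matching: desc)

// 2. Instantiate the selected one and connect it to the engine.
AVAudioUnit.instantiate(with: instruments[0].audioComponentDescription, options: []) { avUnit, _ in
    guard let instrumentUnit = avUnit else { return }
    engine.attach(instrumentUnit)
    engine.connect(instrumentUnit, to: engine.mainMixerNode, format: nil)

    // 3-4. Hand the MIDI data over to an AVAudioSequencer connected to the engine.
    let sequencer = AVAudioSequencer(audioEngine: engine)
    try? sequencer.load(from: midiData, options: [])   // midiData: built from my notes elsewhere

    // 5. Point every track at the selected Audio Unit.
    for track in sequencer.tracks {
        track.destinationAudioUnit = instrumentUnit
    }

    try? engine.start()
    sequencer.prepareToPlay()
    try? sequencer.start()
}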
So far, so good, but...
When I play the sequence using AVAudioSequencer it plays fine the first time, using the selected Audio Unit. The second time I get either silence or a sine wave sound (and I wonder what is making that). I'm thinking the Audio Unit should not be going out of scope between playbacks of the sequence, but I do stop the engine and restart it for the new round. (It should even be possible to swap AUs while the engine is running, so I think this is OK.)
Are there some steps that I'm missing? The sketch above is about as condensed as I could get it from a wall of code, but if you want to ask for specifics, I can answer. Or if you can point me to a working example that shows how to reliably send MIDI to AUv3 using AVAudioSequencer, that would be great.
Is AVAudioSequencer even supposed to work with other Audio Units than Apple's? Or should I start looking for other ways to send MIDI over to AUv3?
I should add that I can consistently send MIDI to the AUv3 using the InstrumentPlayer method from Apple's AUv3Host sample, but that involves a concurrent thread, and results in all sorts of UI sync and timing problems.
EDIT: I added an example project to GitHub:
https://github.com/jerekapyaho/so54753738
It seems to be working now in iPadOS 13.7, but I don't think I'm doing anything that different from earlier, except that this loads a MIDI file from the bundle instead of generating it from data on the fly.
If someone still has iOS 12, it would be interesting to know whether it's broken there but working on iOS 13.x (x = ?).
In case you are using AVAudioUnitSampler as the audio unit instrument, the sine tone happens when you stop and start the audio engine without reloading the preset. Whenever you start the engine you need to load any instruments back into the sampler (e.g. a SoundFont), otherwise you may hear the sine tone. This is an issue with Apple's AUSampler, not with third-party instruments.
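For example, with a SoundFont the reload looks roughly like this (a sketch; soundFontURL stands for whatever preset file you ship):

import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

func startEngine() throws {
    try engine.start()
    // Reload the instrument every time the engine is (re)started; otherwise
    // the AUSampler falls back to its default sine voice.
    try sampler.loadSoundBankInstrument(at: soundFontURL,   // your .sf2 preset, assumed
                                        program: 0,
                                        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
}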
Btw you can test it under iOS 12 using the simulator.
We're currently looking at bringing our music visualization software, which has been around for many years, to an iOS app that plays music via the new iOS Spotify SDK -- check out http://soundspectrum.com to see our visuals such as G-Force and Aeon.
Anyway, we have the demo projects in the Spotify iOS SDK all up and running and things look good, but the major step forward is to get access to the PCM audio so we can send it into our visual engines, etc.
Could a Spotify dev or someone in the know kindly suggest what possibilities are available to get hold of the PCM audio? The PCM block can be as simple as a circular buffer of a few thousand of the latest samples (which we would run an FFT on, etc.).
Thanks in advance!
Subclass SPTCoreAudioController and do one of two things:
1. Override connectOutputBus:ofNode:toInputBus:ofNode:inGraph:error: and use AudioUnitAddRenderNotify() to add a render callback to destinationNode's audio unit. The callback will be called as the output node is rendered and will give you access to the audio as it's leaving for the speakers. Once you've done that, make sure you call super's implementation for the Spotify iOS SDK's audio pipeline to work correctly.
2. Override attemptToDeliverAudioFrames:ofCount:streamDescription:. This gives you access to the PCM data as it's produced by the library. However, there's some buffering going on in the default pipeline, so the data given in this callback might be up to half a second behind what's going out to the speakers, so I'd recommend using suggestion 1 over this. Call super here to continue with the default pipeline.
Once you have your custom audio controller, initialise an SPTAudioStreamingController with it and you should be good to go.
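The Core Audio side of suggestion 1 looks roughly like this (a sketch only: grabbing destinationNode's audio unit inside the connectOutputBus override is left out, latestSamples is a placeholder for whatever buffer your visualizer reads, and Float32 samples are assumed; check the stream description for the real format):

import AudioToolbox

// Placeholder destination for the freshly rendered PCM (not real-time safe as
// written; a proper lock-free ring buffer is better).
var latestSamples = [Float](repeating: 0, count: 4096)

let visualizerRenderNotify: AURenderCallback = { _, ioActionFlags, _, _, inNumberFrames, ioData in
    // The notify fires before and after rendering; we only want the post-render pass.
    guard ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
          let abl = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(abl) {
        guard let raw = buffer.mData else { continue }
        let samples = raw.assumingMemoryBound(to: Float.self)   // Float32 assumed
        for i in 0..<min(Int(inNumberFrames), latestSamples.count) {
            latestSamples[i] = samples[i]
        }
    }
    return noErr
}

// Called from the connectOutputBus override (before calling super) with
// destinationNode's audio unit.
func attachVisualizerTap(to outputUnit: AudioUnit) {
    AudioUnitAddRenderNotify(outputUnit, visualizerRenderNotify, nil)
}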
I actually used suggestion 1 to implement iTunes' visualiser API in my Mac OS X Spotify client that was built with CocoaLibSpotify. It's not working 100% smoothly (I think I'm doing something wrong with runloops and stuff), but it drives G-Force and Whitecap pretty well. You can find the project here, and the visualiser stuff is in VivaCoreAudioController.m. The audio controller class in CocoaLibSpotify and that project is essentially the same as the one in the new iOS SDK.
I have run through an Audio Units tutorial for a sine wave generator and done a bit of reading, and I understand basically how it works. What I would actually like to do for my app is play a short sound file in response to some external event. These sounds would be about 1-2 seconds in duration and occur at a rate of about 1-2 per second.
Basically, where I am right now is trying to figure out how to play an actual audio file using my audio unit, rather than generating a sine wave. So my question is: how do I get an audio unit to play an audio file?
Do I simply read bytes from the audio file into the buffer in the render callback?
(If so, what class do I need to deal with to open / convert / decompress / read the audio file?)
Or is there some simpler method where I could maybe just hand off the entire buffer and tell it to play?
Any names of specific classes or APIs I will need to look at to accomplish this would be very helpful.
OK, check this:
http://developer.apple.com/library/ios/samplecode/MixerHost/Introduction/Intro.html
EDIT: That is a sample project. This page has detailed instructions, with inline code, for setting up common configurations: http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1
If you don't mind being tied to iOS 5+, you should look into AUFilePlayer. It is much easier than using the callbacks, and you don't have to worry about setting up your own ring buffer (something you would need to do if you want to avoid loading all of your audio data into memory on start-up).
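If you can go even higher level, AVFoundation's AVAudioEngine and AVAudioPlayerNode (which wrap the Audio Unit graph for you) reduce this to a few lines. This is a sketch of that route rather than AUFilePlayer itself; soundURL is assumed to point at your bundled sound file:

import AVFoundation

final class SoundPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let file: AVAudioFile

    init(soundURL: URL) throws {
        file = try AVAudioFile(forReading: soundURL)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
        try engine.start()
        player.play()
    }

    // Call this for each external event; the file is simply rescheduled on the
    // already-running player node.
    func playSound() {
        player.scheduleFile(file, at: nil, completionHandler: nil)
    }
}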
For a project I need to handle audio in an iPhone app in quite a special way, and I hope somebody can point me in the right direction.
Let's say you have a fixed set of up to thirty audio files of the same length (2-3 sec, uncompressed). While a cue is playing from one audio file, it should be possible to update parameters so that playback continues from another audio file at the same timestamp where the previous file left off. If the different audio files are differently (heavily) filtered versions of the same audio, it should be possible to "slide" between them and get the impression that the filter is applied directly. The filtering is at the moment not possible to achieve in real time on an iPhone, hence the prerendered files.
If A, B and C are different audio files, I would like to be able to:
Play A without interruption:
Start AAAAAAAAAAAAA Stop
Or start playing A and continue over into B and then C, initiated while playing:
Start AAABBBBBBBBCC Stop
Ideally it should be possible to play two or more cues at the same time. Latency is not that important, but switching between files should ideally not produce clicks or delays.
I have looked into using Audio Queue Services (which looks like hell to dive into) and sniffed around OpenAL. Could anyone give me a rough overview and a general direction I can spend the next days buried in?
Try using the iOS Audio Unit API, particularly a mixer unit connected to RemoteIO for audio output.
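A rough sketch of that mixer idea using AVAudioEngine (which wraps the mixer-to-RemoteIO setup for you): start every prerendered layer at the same host time so they stay locked to the same timecode, then choose which one you hear purely by volume. fileURLs is assumed to be your list of equally long prerendered files:

import AVFoundation

// One player per prerendered version of the clip, all feeding the main mixer.
func makeLayers(on engine: AVAudioEngine, fileURLs: [URL]) throws -> [AVAudioPlayerNode] {
    var players: [AVAudioPlayerNode] = []
    for url in fileURLs {
        let file = try AVAudioFile(forReading: url)
        let player = AVAudioPlayerNode()
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
        player.scheduleFile(file, at: nil, completionHandler: nil)
        player.volume = 0                    // every layer starts muted
        players.append(player)
    }
    return players
}

// After engine.start(): begin all layers on the same timestamp so they stay in sync.
func startLayers(_ players: [AVAudioPlayerNode]) {
    let start = AVAudioTime(hostTime: mach_absolute_time()
                                      + AVAudioTime.hostTime(forSeconds: 0.1))
    players.forEach { $0.play(at: start) }
    players.first?.volume = 1                // layer A is the one you hear first
}

// "Sliding" from A to B is then just a volume swap (ramp it over a few ms to avoid clicks).
func switchLayer(from a: AVAudioPlayerNode, to b: AVAudioPlayerNode) {
    a.volume = 0
    b.volume = 1
}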
I managed to do this by using FMOD Designer. FMOD (http://www.fmod.org/) is a sound design framework for game development that supports iOS. I made a multitrack-event in FMOD Designer with a different layer for each sound clip, and added a parameter in the horizontal bar that lets you control which sound clip to play in real time. The trick is to let each sound clip run over the whole bar and control which one is heard by using a volume effect (0-100%), as in the attached picture. That way you are ensured that switching between files follows the same timecode. I have tried this successfully with up to thirty layers, but experienced some double playing. This seemed to disappear when I cut the number down to fifteen.
It should be possible to use the iOS Audio Unit API if you are comfortable with it, but for those of us who like the simplest solution, FMOD is quite good :) Thanks to Ellen S for the solution tip!
Screenshot of the multitrack-event in FMOD Designer:
https://plus.google.com/photos/106278910734599034045/albums/5723469198734595793?authkey=CNSIkbyYw8PM2wE
How does one record audio on iOS? Not input recording from the microphone; I want to be able to capture/record the audio currently playing within my app.
So, e.g., I start a recording session, and any sound that plays within my app only should be recorded to a file.
I have done research on this, but I am confused about what to use, as it looks like mixing audio frameworks can cause problems.
I just want to be able to capture and save the audio playing within my application.
Well, if you're looking to record just the audio that YOUR app produces, then yes, this is very much possible.
What isn't possible is recording all audio that is output through the speaker.
(EDIT: I just want to clarify that there is no way to record audio output produced by other applications. You can only record the audio samples that YOU produce).
If you want to record your app's audio output, you must use the Remote I/O audio unit (http://atastypixel.com/blog/using-remoteio-audio-unit/).
All you would really need to do is copy the playback buffer after you fill it.
For example, inside the render callback (where recordingBuffer is a destination buffer you own, at least mDataByteSize bytes long):
memcpy(recordingBuffer, ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize);
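As an alternative, if the audio you play already goes through AVAudioEngine, you can capture the same output without touching the render callback at all by installing a tap on the main mixer and writing each buffer to a file. A sketch (outputURL is assumed, e.g. a file in your Documents directory):

import AVFoundation

// Capture everything the app plays through this engine by tapping its main mixer.
func startRecording(engine: AVAudioEngine, to outputURL: URL) throws -> AVAudioFile {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let outputFile = try AVAudioFile(forWriting: outputURL, settings: format.settings)

    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // Every buffer reaching the mixer output is appended to the file.
        try? outputFile.write(from: buffer)
    }
    return outputFile
}

func stopRecording(engine: AVAudioEngine) {
    engine.mainMixerNode.removeTap(onBus: 0)
}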
This is possible by wrapping the Core Audio public utility class CAAudioUnitOutputCapturer:
http://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
See my reply in this question for the wrapper classes (you'll need to use Objective-C++ properly).
There is no public API for capturing or recording all generic audio output from an iOS app.
Check out the MixerHostAudio sample application from Apple. It's a great way to start learning about Audio Units. Once you have a grasp of that, there are many tutorials online that talk about adding recording.