I'm currently trying to capture the outgoing audio signal of my iOS app so that I can send it to Audiobus. I need the outgoing AudioBufferLists in order to route them. I'm using OpenAL for audio playback.
Ideally, I would also be able to modify the outgoing signal to apply effects to it.
There currently appears to be no public API to access the output of OpenAL in an iOS app.
If you want access to the output, you will need to play the sound with another audio API, such as Audio Queues with uncompressed raw PCM audio or the RemoteIO Audio Unit, so that you can grab the audio output buffers.
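For illustration, here is a minimal sketch (my own, not part of the original answer) of tapping a RemoteIO unit's output with a render-notify callback; `ioUnit` is assumed to be an already configured RemoteIO instance, and `processBuffers` is a hypothetical hook for your own routing or effects code:

    #import <AudioToolbox/AudioToolbox.h>

    // Called by the system before and after each RemoteIO render cycle.
    static OSStatus OutputTapCallback(void                       *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp       *inTimeStamp,
                                      UInt32                      inBusNumber,
                                      UInt32                      inNumberFrames,
                                      AudioBufferList            *ioData)
    {
        // In the post-render phase, ioData holds the samples that are about to
        // reach the hardware; copy or modify them here (e.g. hand them to
        // Audiobus or an effects chain).
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // processBuffers(ioData, inNumberFrames);   // hypothetical hook
        }
        return noErr;
    }

    // After creating and configuring the RemoteIO unit (here called ioUnit):
    // AudioUnitAddRenderNotify(ioUnit, OutputTapCallback, NULL);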
You might want to check out how this guy made his own audio mixing object for OpenAL so he could achieve this:
http://www.cuppadev.co.uk/openal-sucks-write-your-own-audio-mixer/
Rather than using OpenAL, you could use CoreAudio with the 3D Mixer Audio Unit (kAudioUnitSubType_AU3DMixerEmbedded). Then you have control over where your output goes. Obviously doing this will sacrifice some portability (you'll be OK with Mac OS X, but not Windows, Linux or Android).
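As a rough sketch of that route (assuming an SDK where this subtype is still available; newer SDKs fold it into kAudioUnitSubType_SpatialMixer), locating and instantiating the embedded 3D mixer might look like this:

    #import <AudioToolbox/AudioToolbox.h>

    // Locate and instantiate the embedded 3D mixer unit.
    static AudioUnit CreateEmbedded3DMixer(void)
    {
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_Mixer,
            .componentSubType      = kAudioUnitSubType_AU3DMixerEmbedded,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit mixer = NULL;
        if (comp) {
            AudioComponentInstanceNew(comp, &mixer);
            // Connect the mixer's output to a RemoteIO unit (e.g. via an
            // AUGraph) so you decide, and can tap, where the mixed audio goes.
        }
        return mixer;
    }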
I understand that this question might get a bad rating, but I've been looking at questions which ask how to reroute audio output to the loudspeaker on iOS devices.
In every question I looked at, the user talked about using AVAudioSession to reroute it. However, I'm not using an AVAudioSession; I'm using an AVAudioEngine.
So basically my question is, even though I'm using an AVAudioEngine, should I still have an AVAudioSession?
If so, what is the relationship between these two objects? Or is there a way to connect an AVAudioEngine to an AVAudioSession?
If this is not the case, and there is no relation between an AVAudioEngine and an AVAudioSession, then how do you reroute audio so that it plays out of the main speakers on an iOS device rather than the earpiece?
Thank you!
AVAudioSession is specific to iOS and coordinates audio playback between apps, so that, for example, audio is stopped when a call comes in, or music playback stops when the user starts a movie. This API is needed to make sure an app behaves correctly in response to such events.
AVAudioEngine is a modern Objective-C API for playback and recording. It provides a level of control for which you previously had to drop down to the C APIs of the Audio Toolbox framework (for example, with real-time audio tasks). The audio engine APIs are built to interface well with lower-level APIs, so you can still drop down to Audio Toolbox if you have to.
The basic concept of this API is to build up a graph of audio nodes, ranging from source nodes (players and microphones) through processing nodes (mixers and effects) to destination nodes (hardware outputs). Each node has a certain number of input and output busses with well-defined data formats. This architecture makes it very flexible and powerful. And it even integrates with audio units.
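To make the node-graph idea concrete, here is a minimal Objective-C sketch (my own illustration, not taken from the linked article) that connects a player node through a reverb effect to the engine's main mixer:

    #import <AVFoundation/AVFoundation.h>

    // Build a small graph: player -> reverb -> main mixer (-> output -> hardware).
    static AVAudioEngine *MakeEngine(void)
    {
        AVAudioEngine *engine = [[AVAudioEngine alloc] init];
        AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
        AVAudioUnitReverb *reverb = [[AVAudioUnitReverb alloc] init];
        [reverb loadFactoryPreset:AVAudioUnitReverbPresetMediumHall];
        reverb.wetDryMix = 50;

        [engine attachNode:player];
        [engine attachNode:reverb];

        AVAudioFormat *format = [engine.mainMixerNode outputFormatForBus:0];
        [engine connect:player to:reverb format:format];
        [engine connect:reverb to:engine.mainMixerNode format:format];

        NSError *error = nil;
        if (![engine startAndReturnError:&error]) {
            NSLog(@"Engine failed to start: %@", error);
        }
        // Keep a reference to `player` elsewhere to schedule buffers or files on it.
        return engine;
    }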
So there is no inclusion relationship between the two.
Source link: https://www.objc.io/issues/24-audio/audio-api-overview/
Yes, it is not clearly documented; however, I found this note in the iOS developer documentation:
AVFoundation playback and recording classes automatically activate your audio session.
Documentation link: https://developer.apple.com/library/content/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/ConfiguringanAudioSession/ConfiguringanAudioSession.html
I hope this helps.
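Putting the two answers together, the routing part of the original question is handled on the session side; a hedged sketch (using standard AVAudioSession category and override API) might look like this, with the AVAudioEngine then simply rendering into whatever route the session has set up:

    #import <AVFoundation/AVFoundation.h>

    // Configure the shared session so PlayAndRecord output goes to the
    // built-in speaker instead of the receiver (earpiece).
    static void RouteOutputToSpeaker(void)
    {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *error = nil;
        [session setCategory:AVAudioSessionCategoryPlayAndRecord
                 withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                       error:&error];
        [session setActive:YES error:&error];

        // Alternatively, override the current route at any time:
        // [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker
        //                            error:&error];
    }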
I am trying to develop an iOS app which reads sound from the microphone, applies some effects, and plays it through the headset instantly, maybe with some acceptable delay.
Is this possible? As a first step, I am trying to play the sound received from the microphone through my headset at the same time, but I am struggling to do so.
I was able to record the sound, save it, and then play it back easily, but relevant questions and articles were hard to find. Any ideas or links are much appreciated.
I did check Apple's aurioTouch, but I couldn't find simultaneous recording and playback of the same signal in it.
Request the shortest buffers possible using the audio session APIs (less than 6 ms is possible on most iOS devices). Then feed the raw audio samples you get from the RemoteIO recording callbacks to the buffers in the RemoteIO play callbacks, possibly using a lock-free circular FIFO in between.
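A small sketch of the session half of this answer (the RemoteIO callbacks and the ring-buffer plumbing are left out); the 5 ms figure is just an example request, and the system may grant a longer duration:

    #import <AVFoundation/AVFoundation.h>

    // Ask the audio session for very short I/O buffers before starting RemoteIO.
    static void ConfigureLowLatencySession(void)
    {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *error = nil;
        [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
        [session setPreferredIOBufferDuration:0.005 error:&error];   // request ~5 ms
        [session setActive:YES error:&error];
        NSLog(@"Granted I/O buffer duration: %f s", session.IOBufferDuration);
    }

The record callback then pushes each incoming buffer into the lock-free FIFO, and the play callback drains it into its output buffers.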
I want to process the stereo output of an iOS device, no matter which application produces it, and visualize it in real time.
Is it possible to use the generic output device (or anything else) to get at the audio data that is currently being played? Maybe as an input to a RemoteIO unit?
In other words: I want to do what aurioTouch2 does (FFT only), but instead of using the microphone as the input source, I want to process everything that is coming out of the speakers at a given time.
Kind regards
If your own app is playing using the RemoteIO Audio Unit, you can capture that content. You cannot capture audio your app plays using many of the other audio APIs. The iOS security sandbox will prevent your app from capturing audio that any other app is playing (unless that app explicitly exports audio via the Inter-App Audio API or equivalent).
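As an illustration of the "your own app's output" case (not something the original answer spelled out): if the app happens to play through AVAudioEngine, which sits on top of RemoteIO, a tap on the main mixer hands you the stereo samples for an FFT or visualizer:

    #import <AVFoundation/AVFoundation.h>

    // Tap the app's own output at the main mixer for visualization.
    static void InstallOutputTap(AVAudioEngine *engine)
    {
        AVAudioMixerNode *mixer = engine.mainMixerNode;
        AVAudioFormat *format = [mixer outputFormatForBus:0];
        [mixer installTapOnBus:0
                    bufferSize:1024
                        format:format
                         block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
            // buffer.floatChannelData holds the samples this app is playing;
            // feed them to your FFT / visualizer here.
        }];
    }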
Say you want to play back exactly what the iPhone mic is picking up, in real time. Which framework/class would be used?
You'll need to use the Core Audio framework for this. Specifically, look into audio graphs, audio units, and RemoteIO. Plenty of sample code for those to get you started.
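A hedged fragment of that setup (only the part that differs from a playback-only RemoteIO unit; stream formats, callbacks, initialization, and start are omitted):

    #import <AudioToolbox/AudioToolbox.h>

    // Create one RemoteIO unit that both records (bus 1) and plays (bus 0).
    static AudioUnit CreateDuplexRemoteIO(void)
    {
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioUnit ioUnit = NULL;
        AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &ioUnit);

        // Bus 1 is the microphone; its input scope is disabled by default.
        UInt32 enable = 1;
        AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &enable, sizeof(enable));
        // Bus 0 (speaker/headset output) is already enabled by default.

        // Next: set matching stream formats, add a render callback on bus 0
        // that pulls mic samples with AudioUnitRender() on bus 1, processes
        // them, and copies them into ioData; then AudioUnitInitialize() and
        // AudioOutputUnitStart().
        return ioUnit;
    }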
Hey, I'm a new Objective-C developer. I'm trying to record the audio coming out of the iPhone speakers. I can capture my voice through the microphone and record it, but I cannot record the audio produced by my iPhone itself. Please help me.
Unfortunately, there is no way to directly capture from the "audio bus". You can capture audio via the internal microphone or a headset microphone, but that's it. If you are rendering the audio yourself, you could obviously also write that audio out to a file at the same time. That's pretty much your only option.
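For the "write the rendered audio to a file" option, here is a hedged sketch using Extended Audio File Services; it assumes the format passed in is the same linear PCM format your render callback produces, and the file name is a placeholder:

    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    // Create a CAF file in the same LPCM format the app renders, so buffers
    // from the render callback can be appended without conversion.
    static ExtAudioFileRef CreateOutputFile(AudioStreamBasicDescription renderFormat)
    {
        NSURL *url = [NSURL fileURLWithPath:
            [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.caf"]];
        ExtAudioFileRef fileRef = NULL;
        ExtAudioFileCreateWithURL((__bridge CFURLRef)url,
                                  kAudioFileCAFType,
                                  &renderFormat,
                                  NULL,
                                  kAudioFileFlags_EraseFile,
                                  &fileRef);
        return fileRef;
    }

    // In the render callback, after filling ioData:
    //     ExtAudioFileWriteAsync(fileRef, inNumberFrames, ioData);
    // When finished:
    //     ExtAudioFileDispose(fileRef);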
Yes, you only get a handle on the audio generated by your own process. There is no way to get the audio generated by the rest of the system.