These have been around in OS X for a little while now and just recently became available in iOS with iOS 6. I am trying to figure out what they let you do exactly. The idea is that you can tap into an audio queue and process the data before sending it on. Does this mean you can now intercept raw audio coming from different applications (such as the iOS music player) and process it before it plays? In other words, is inter-app audio possible? I have read over the AudioQueue.h header and can't quite figure out what to make of it.
Consider it a mid-level entry point for custom processing (e.g. an insert effect) or for reading the queue's sample data (e.g. for analysis or display purposes): a basic interface for reading or processing an audio queue's data.
Does this mean you can now intercept raw audio coming from different applications (such as the iOS music player) and process it before it plays? In other words, is inter-app audio possible?
Nope - it's not inter-process; you have no access to other processes' audio queues. These taps are for your own queues' sample data. They can be used to simplify common audio render or analysis chains, which is the typical case for most apps. My guess is that they were provided because a lot of people wanted an easier way to get at this sample data for processing or analysis, and because custom processing entry points on iOS can be more complicated to implement (the set of Audio Units available on iOS is restricted).
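For orientation, here is a rough sketch of installing one of these taps on a queue you own, using the AudioQueueProcessingTapNew API from AudioQueue.h (iOS 6+ / OS X 10.7+). The queue variable is assumed to be an AudioQueueRef you already created, and error handling is omitted:

#include <AudioToolbox/AudioToolbox.h>

static void MyTapCallback(void *inClientData,
                          AudioQueueProcessingTapRef inAQTap,
                          UInt32 inNumberFrames,
                          AudioTimeStamp *ioTimeStamp,
                          AudioQueueProcessingTapFlags *ioFlags,
                          UInt32 *outNumberFrames,
                          AudioBufferList *ioData)
{
    // Pull this queue's samples into ioData, then modify them in place.
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                          ioFlags, outNumberFrames, ioData);
    // ... apply your effect or analysis to ioData here ...
}

// When setting up your queue:
UInt32 maxFrames = 0;
AudioStreamBasicDescription processingFormat = {0};
AudioQueueProcessingTapRef tap = NULL;
AudioQueueProcessingTapNew(queue, MyTapCallback, NULL,
                           kAudioQueueProcessingTap_PreEffects,
                           &maxFrames, &processingFormat, &tap);

The tap only ever sees the samples flowing through that one queue; the pre/post-effects flag just controls whether your callback runs before or after the queue's own processing.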
I am building a game that lets users remix songs. I have built a mixer based upon the Apple sample code MixerHost (creating an AUGraph with a mixer audio unit), but expanded to load 12 tracks. Everything is working fine; however, it takes a really long time for the songs to load when the gamer selects the songs they want to remix, because the program has to load 12 separate MP4 files into memory before I can start playing the music.
I think what I need is to create an AUFilePlayer audio unit that is in charge of loading the file into the mixer. If the AUFilePlayer can handle loading the file on the fly, then the user will not have to wait for the files to load 100% into memory. My two questions are: 1. Can an AUFilePlayer be used this way? 2. The documentation on AUFilePlayer is very thin. Where can I find some example code demonstrating how to implement an AUFilePlayer properly on iOS (not on Mac OS)?
Thanks
I think you're right - in this case a 'direct-from-disk' buffering approach is probably what you need. I believe the correct AudioUnit subtype is AudioFilePlayer. From the documentation:
The unit reads and converts audio file data into its own internal buffers. It performs disk I/O on a high-priority thread shared among all instances of this unit within a process. Upon completion of a disk read, the unit internally schedules buffers for playback.
A working example of using this unit on Mac OS X is given in Chris Adamson's book Learning Core Audio. The code for iOS isn't much different, and is discussed in this thread on the CoreAudio-API mailing list. Adamson's working code example can be found here. You should be able to adapt this to your requirements.
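For what it's worth, here is a rough sketch of the approach (not Adamson's code verbatim): an AudioFilePlayer generator unit is added to an existing, already-initialized AUGraph and a file is scheduled on it. The graph, mixerNode, mixerBus and fileURL variables are assumed to come from your MixerHost-style setup, and error checking is omitted.

#import <AudioToolbox/AudioToolbox.h>

// Add an AudioFilePlayer node and connect it to one of the mixer's input buses.
AudioComponentDescription playerDesc = {0};
playerDesc.componentType = kAudioUnitType_Generator;
playerDesc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
playerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode filePlayerNode;
AUGraphAddNode(graph, &playerDesc, &filePlayerNode);
AUGraphConnectNodeInput(graph, filePlayerNode, 0, mixerNode, mixerBus);
AUGraphUpdate(graph, NULL);   // graph is assumed to be open and initialized already

AudioUnit filePlayerUnit;
AUGraphNodeInfo(graph, filePlayerNode, NULL, &filePlayerUnit);

// Hand the file to the unit; it performs its own high-priority disk reads,
// so the whole file never has to sit in memory.
AudioFileID audioFile;
AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, 0, &audioFile);
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs,
                     kAudioUnitScope_Global, 0, &audioFile, sizeof(audioFile));

// Schedule the whole file as one region, starting as soon as possible.
AudioStreamBasicDescription fileFormat;
UInt32 propSize = sizeof(fileFormat);
AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat, &propSize, &fileFormat);

UInt64 packetCount = 0;
propSize = sizeof(packetCount);
AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataPacketCount, &propSize, &packetCount);

ScheduledAudioFileRegion region = {0};
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;
region.mAudioFile = audioFile;
region.mFramesToPlay = (UInt32)(packetCount * fileFormat.mFramesPerPacket);
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                     kAudioUnitScope_Global, 0, &region, sizeof(region));

AudioTimeStamp startTime = {0};
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;   // -1 means "start on the next render cycle"
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));

With one of these per track, the load step shrinks to opening the file and scheduling it, and the disk I/O happens on the unit's own thread during playback.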
I'm working on a small iPhone app which is streaming movie content over a network connection using regular sockets. The video is in H.264 format. I'm however having difficulties with playing/decoding the data. I've been considering using FFmpeg, but the license makes it unsuitable for the project. I've been looking into Apple's AVFoundation framework (AVPlayer in particular), which seems to be able to handle H.264 content; however, I'm only able to find methods to initiate the movie using a URL, not by providing a memory buffer streamed from the network.
I’ve been doing some tests to make this happen anyway, using the following approaches:
Play the movie using a regular AVPlayer. Every time data is received on the network, it's written to a file using fopen in append mode. The AVPlayer's asset is then reloaded/recreated with the updated data. There seem to be two issues with this approach: firstly, the screen goes black for a short moment while the first asset is unloaded and the new one loaded; secondly, I do not know exactly where playback stopped, so I'm unsure how I would find the right place to start playing the new asset from.
The second approach is to write the data to the file as in the first approach, but with the difference that the data is loaded into a second asset. An AVQueuePlayer is then used, where the second asset is inserted/queued in the player and started once buffering is done. The first asset can then be unloaded without a black screen. However, using this approach it's even more troublesome (than the first approach) to find out where to start playing the new asset.
Has anyone done something like this and made it work? Is there a proper way of doing this using AVFoundation?
The official method to do this is the HTTP Live Streaming format, which supports multiple quality levels (among other things) and automatically switches between them (e.g. if the user moves from Wi-Fi to cellular).
You can find the docs here: Apple HTTP Live Streaming Docs
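Once the content is packaged as an HLS playlist, playback on the client is just an AVPlayer pointed at the .m3u8 URL; the player handles buffering and quality switching itself. A minimal sketch (the URL below is hypothetical):

#import <AVFoundation/AVFoundation.h>

NSURL *playlistURL = [NSURL URLWithString:@"http://example.com/live/stream.m3u8"];
AVPlayer *player = [AVPlayer playerWithURL:playlistURL];
// Attach the player to an AVPlayerLayer in your view hierarchy, then:
[player play];

The server side (segmenting the H.264 stream and generating the playlist) is where most of the work lives; Apple provides command-line segmenter tools (mediafilesegmenter, mediastreamsegmenter) for that, described in the docs above.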
What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved) and send them to an audio effect class that takes in a block of samples, and I want to be able to do this in real time.
I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item and send it to my effect class, and from there send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear it from there. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but the Apple documentation specifies that the AVAssetReader class is not meant for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or whether I am taking the right approach?
The MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you avoid having to pull blocks of samples yourself with an AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.
Instead, attach an MTAudioProcessingTap to the inputParameters of your player item's audioMix, and you'll be given samples in blocks which are then easy to throw into an effect unit.
Another benefit of this is that it will work with AVAssets created from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as items from the user's iPod library. Additionally, you get functionality like tolerance of audio interruptions, which the AVPlayer provides for free and which you would otherwise have to implement by hand if you went with an AVAssetReader solution.
To set up a tap you have to set up some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found in this tutorial.
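For reference, here is a minimal sketch of the wiring (the audioTrack and playerItem variables are assumed, and error handling is omitted). The process callback is where each block of samples arrives and can be handed to your effect:

#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags,
                       AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut,
                       MTAudioProcessingTapFlags *flagsOut)
{
    // Pull the source audio into bufferListInOut, then process it in place.
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
    // ... run bufferListInOut through your effect here ...
}

// When building the player item:
MTAudioProcessingTapCallbacks callbacks = {0};
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.process = tapProcess;   // init/prepare/unprepare/finalize left NULL in this sketch

MTAudioProcessingTapRef tap = NULL;
MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                           kMTAudioProcessingTapCreationFlag_PostEffects, &tap);

AVMutableAudioMixInputParameters *params =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
params.audioTapProcessor = tap;

AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[params];
playerItem.audioMix = audioMix;   // then hand playerItem to your AVPlayer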
There's a new MTAudioProcessingTap object in iOS 6 and Mac OS X 10.8. Check out the Session 517 video from WWDC 2012; it demonstrates exactly what you want to do.
WWDC Link
AVAssetReader is not ideal for real-time usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for unpredictable amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.
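As a rough illustration of that pattern (not a drop-in implementation): the reader thread decodes to linear PCM and pushes bytes into a ring buffer, and the RemoteIO render callback drains it. Here trackOutput is assumed to be an AVAssetReaderTrackOutput configured with linear-PCM output settings, and ringBufferWrite/scratch are hypothetical names for whatever circular-buffer implementation you use (e.g. TPCircularBuffer or your own):

// Producer thread: keep the ring buffer topped up ahead of the audio thread.
[assetReader startReading];
while (assetReader.status == AVAssetReaderStatusReading) {
    CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
    if (sampleBuffer == NULL) break;

    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    CMBlockBufferCopyDataBytes(blockBuffer, 0, length, scratch);  // scratch: a byte buffer you own
    ringBufferWrite(ringBuffer, scratch, length);                 // block or sleep here if the ring is full
    CFRelease(sampleBuffer);
}

The RemoteIO render callback then just copies the requested number of bytes out of the ring buffer into ioData (outputting silence if the producer has fallen behind), so copyNextSampleBuffer's unpredictable latency never touches the audio thread.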
I've been happily synthesizing audio (at 44.1 kHz) and sending it out through the RemoteIO audio unit. It's come to my attention that my app's audio is "garbled" when going out via HDMI to a certain model of TV. It looks to me like the problem is related to the fact that this TV is looking for audio data at 48 kHz.
Here are some questions:
Does RemoteIO adopt the sample rate of whichever device it's outputting to? If I'm sending audio via HDMI to a device that asks for 48 kHz, do my RemoteIO callback buffers become 48 kHz?
Is there some tidy way to force RemoteIO to keep thinking in terms of 44.1 kHz, and have it be smart enough to perform any necessary sample rate conversion on its own before it hands data off to the device?
If RemoteIO does indeed just defer to the device it's connected to, then presumably I need to do some sample rate conversion between my synthesis engine and RemoteIO. Is AudioConverterConvertComplexBuffer the best way to do this?
Fixed my problem. I was incorrectly assuming that the number of frames requested by the render callback would be a power of two. Changed my code to accommodate any arbitrary number of frames and all seems to work fine now.
If you want sample rate conversion, try using the Audio Queue API, or do the conversion within your own app using some DSP code.
Whether the RemoteIO buffer size or sample rate can be configured might depend on the iOS device model, OS version, audio route, background modes, etc., so an app must accommodate different buffer sizes and sample rates when using RemoteIO.
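One practical way to deal with this (assuming iOS 6+ and AVAudioSession) is to ask for the rate you want, then read back what the hardware actually gave you and adapt:

#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredSampleRate:44100.0 error:nil];   // a request, not a guarantee
[session setActive:YES error:nil];

double hwSampleRate = session.sampleRate;              // may come back as 48000 on some HDMI routes
NSTimeInterval ioDuration = session.IOBufferDuration;
UInt32 typicalFrames = (UInt32)(hwSampleRate * ioDuration + 0.5);

// The render callback must still handle whatever inNumberFrames it is given
// (not just typicalFrames, and not necessarily a power of two), and if
// hwSampleRate differs from 44100 the app must resample its 44.1 kHz material.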
Could someone explain, in terms of Audio Unit connections, how to modify the iPhone microphone data stream that other processes see, applying gain or EQ? I understand how to use a Remote I/O unit to grab mic data and do my processing. I want this new data to replace the original mic data stream, not go to the speakers or a file. "Audio Unit Hosting Fundamentals" Figure 1-3 is close.
I have read everything out there on Audio Units and used several of the online examples (Tim B, Play It Loud, Tasty Pixel) but don't see how to do this yet.
Any help?
Thanks
This doesn't seem to be clearly explained or illustrated in the documentation. However, if you look at the aurioTouch sample code, you will see how, within the Remote I/O render callback, it makes a call to retrieve data from the microphone. It then optionally processes this data and returns it.
This is doubly useful because the call that retrieves the microphone data returns already-created buffers. That means you don't have to create your own buffers, which is great because doing so is a bit of a hassle.
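To make that concrete, here is a rough sketch of such a render callback, assuming the RemoteIO unit itself was passed as inRefCon when the callback was registered and that the client stream format is 16-bit integer PCM:

#include <AudioToolbox/AudioToolbox.h>

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon;   // assumption: the RemoteIO unit was passed as inRefCon

    // Pull the microphone samples from the input element (bus 1) into the
    // buffers the system has already allocated for us in ioData.
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    if (err != noErr) { return err; }

    // Process the samples in place; whatever is left in ioData when we return
    // is what gets used downstream. Here: a crude -6 dB gain.
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        SInt16 *samples = (SInt16 *)ioData->mBuffers[b].mData;
        UInt32 count = ioData->mBuffers[b].mDataByteSize / sizeof(SInt16);
        for (UInt32 i = 0; i < count; i++) {
            samples[i] = samples[i] / 2;
        }
    }
    return noErr;
}

Note that this is the aurioTouch pattern: it changes the audio your own app renders or records from the mic, which is what the sample code demonstrates.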