Mixing and equalizing multiple streams of compressed audio on iOS

What I'm trying to do is exactly as the title says: decode multiple compressed audio streams/files - they will be extracted from a modified MP4 file - and apply EQ to them simultaneously in real time.
I have read through most of Apple's docs.
I have tried Audio Queues, but I won't be able to do equalization there: once the compressed audio goes in, it doesn't come out again, so I can't manipulate it.
Audio Units don't seem to have any component that handles decompression of AAC or MP3 - if I'm right, the converter unit only handles converting from one LPCM format to another.
I have been trying to work out a solution on and off for about a month and a half now.
I'm now thinking of using a third-party decoder (god help me; I haven't a clue how to use those, the source code is Greek to me; oh, and any recommendations? :x), then feeding the decoded LPCM into Audio Queues and doing EQ in the callback.
Maybe I'm missing something here. Suggestions? :(

I'm still trying to figure out Core Audio for my own needs, but from what I understand, you want Extended Audio File Services, which handles reading and decompression for you, producing LPCM data you can then hand off to a buffer. The MixerHost sample project provides an example of using ExtAudioFileOpenURL to do this.
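To make that concrete, here is a minimal sketch (not taken from the MixerHost sample) of using ExtAudioFile to decode a compressed file into 16-bit LPCM you could then EQ and mix. The decodeToPCM name and the 44.1 kHz stereo client format are my own assumptions:

```swift
import AudioToolbox

// Hypothetical helper: decode a compressed file (AAC/MP3/etc.) to interleaved 16-bit LPCM.
func decodeToPCM(url: URL) -> [Int16]? {
    var extFile: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url as CFURL, &extFile) == noErr, let file = extFile else { return nil }
    defer { ExtAudioFileDispose(file) }

    // Ask ExtAudioFile to hand us 44.1 kHz stereo 16-bit PCM regardless of the source codec.
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: 44100, mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
        mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)
    guard ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                  UInt32(MemoryLayout.size(ofValue: clientFormat)),
                                  &clientFormat) == noErr else { return nil }

    var samples: [Int16] = []
    let framesPerRead: UInt32 = 4096
    var buffer = [Int16](repeating: 0, count: Int(framesPerRead) * 2)
    while true {
        var frameCount = framesPerRead
        let status = buffer.withUnsafeMutableBytes { raw -> OSStatus in
            var bufList = AudioBufferList(
                mNumberBuffers: 1,
                mBuffers: AudioBuffer(mNumberChannels: 2,
                                      mDataByteSize: UInt32(raw.count),
                                      mData: raw.baseAddress))
            return ExtAudioFileRead(file, &frameCount, &bufList)
        }
        guard status == noErr, frameCount > 0 else { break }
        samples.append(contentsOf: buffer.prefix(Int(frameCount) * 2))
    }
    return samples
}
```

From there you can run each decoded stream through your EQ and sum the results before playback.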

Related

Is it possible to get frequency data from an audio file without playing it?

My iOS project requires retrieving some audio data (e.g. frequency, decibel level) from an audio file.
Using the AudioKit framework, I can get that data from the microphone with AKFrequencyTracker; however, I am struggling with how to get the frequency straight from the audio file without playing it, because I need the data to plot some graphs (e.g. frequency vs. time).
PS: I'm saving the recording in m4a format at the moment (the format can change).
Thanks in advance
You can use the Accelerate framework's FFT APIs (vDSP) to get frequency information from an audio file.
Here is a useful library for understanding vDSP API usage:
https://github.com/tomer8007/real-time-audio-fft
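For illustration, here is a rough sketch of that idea: read one window of samples with AVAudioFile and run a vDSP FFT to estimate the dominant frequency, all without playback. The dominantFrequency helper and the 4096-sample window are assumptions of mine, not part of the linked library:

```swift
import AVFoundation
import Accelerate

// Hypothetical sketch: estimate the dominant frequency of one window of an audio
// file without playing it, using AVAudioFile to read samples and vDSP for the FFT.
func dominantFrequency(of url: URL, windowSize: Int = 4096) throws -> Double? {
    let file = try AVAudioFile(forReading: url)
    let format = file.processingFormat                      // decoded float PCM
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(windowSize)) else { return nil }
    try file.read(into: buffer, frameCount: AVAudioFrameCount(windowSize))
    guard buffer.frameLength == AVAudioFrameCount(windowSize),   // file shorter than one window
          let channel = buffer.floatChannelData?[0] else { return nil }

    let log2n = vDSP_Length(log2(Double(windowSize)))
    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return nil }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    var real = [Float](repeating: 0, count: windowSize / 2)
    var imag = [Float](repeating: 0, count: windowSize / 2)
    var magnitudes = [Float](repeating: 0, count: windowSize / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // Pack the real samples into split-complex form, run the FFT, take bin magnitudes.
            channel.withMemoryRebound(to: DSPComplex.self, capacity: windowSize / 2) {
                vDSP_ctoz($0, 2, &split, 1, vDSP_Length(windowSize / 2))
            }
            vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(windowSize / 2))
        }
    }

    // The largest-magnitude bin maps to a frequency via the sample rate.
    guard let peakIndex = magnitudes.indices.max(by: { magnitudes[$0] < magnitudes[$1] }) else { return nil }
    return Double(peakIndex) * format.sampleRate / Double(windowSize)
}
```

For a frequency-vs.-time plot you would slide the window across the file and collect one estimate per window.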

Random access and decoding of an AAC audio track in an MP4 file on iOS

I'm working on a project which involves decoding the AAC track from an MP4 file into PCM format. So far, the only way I've found to do this is with AVAssetReader. However, this approach has two problems for me:
1) According to the guide, AVAssetReader is not recommended for real-time processing. However, my project requires live decoding and playback, where the decoded PCM is post-processed. Will this be a problem? If so, what is the alternative?
2) AVAssetReader seems to decode the track sequentially. It does not seem to allow jumping to a random point and decoding from there, which is something my project requires. What would be the solution?
Answer 1: If you have to deal with tracks in the iPod library, AVAssetReader is the only way. If not, you can choose another decoder such as FFmpeg.
Answer 2: AVAssetReader supports random access. It has a timeRange property; see https://stackoverflow.com/a/6719873/1060971.
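As a rough illustration of answer 2, something along these lines decodes PCM starting from an arbitrary offset. The readPCM name and the particular output settings are my own choices, not a canonical recipe:

```swift
import AVFoundation

// Hypothetical sketch: decode the AAC track of an MP4 to LPCM starting at an
// arbitrary offset, using AVAssetReader's timeRange for the random access.
func readPCM(from url: URL, startingAt seconds: Double) throws -> [CMSampleBuffer] {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .audio).first else { return [] }

    let reader = try AVAssetReader(asset: asset)
    // Restrict reading to [start, end of asset); this is the random-access part.
    reader.timeRange = CMTimeRange(
        start: CMTime(seconds: seconds, preferredTimescale: 600),
        end: asset.duration)

    // Ask for decoded 16-bit interleaved PCM rather than the raw AAC packets.
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsNonInterleaved: false
    ])
    reader.add(output)
    guard reader.startReading() else { return [] }

    var buffers: [CMSampleBuffer] = []
    while let sampleBuffer = output.copyNextSampleBuffer() {
        buffers.append(sampleBuffer)   // post-process or play these PCM buffers
    }
    return buffers
}
```

On problem 1, whether the "not recommended for real-time" caveat bites depends on how far ahead you can buffer; decoding a little in advance on a background queue is the usual workaround.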

How to access data in .caf audio file and fill up an array with that data?

Background:
I am attempting to plot the data in an audio file (file type: .caf).
What I have done so far:
I am using the AudioToolbox and AVFoundation frameworks to record, play back, open, and close the file.
I have also figured out how to read how many packets and bytes the audio file contains.
I have also plotted a simple plot (not related to the audio file) using Core Plot.
What I can't figure out:
How to access the data in the .caf file in a way that will allow me to plot the data.
My question:
How to access the data in the .caf file in a way that will allow me to create an array that can be plotted?
I apologize if this question has been addressed and answered already. If it has been, I would appreciate someone pointing me in the direction of that post.
Regards,
George
I did this a long time ago using a feature of Audio Queues called offline rendering. There may be a better way to do it these days; I'm not sure.
Here's a good (but old) technical note on how to do it this way:
Technical Q&A QA1562: Audio Queue - Offline Rendering
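Regarding the "better way these days": here is a minimal sketch of the simpler route using AVAudioFile, which reads the .caf straight into a Float array you can hand to Core Plot. The samples helper name is made up:

```swift
import AVFoundation

// Hypothetical sketch: read an entire .caf into a Float array for plotting,
// with no playback or offline rendering involved.
func samples(from url: URL) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    let frameCount = AVAudioFrameCount(file.length)          // fine for files that fit in memory
    guard frameCount > 0,
          let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: frameCount) else { return [] }
    try file.read(into: buffer)
    guard let channel = buffer.floatChannelData?[0] else { return [] }
    // Copy the first channel into a plain Swift array for Core Plot.
    return Array(UnsafeBufferPointer(start: channel, count: Int(buffer.frameLength)))
}
```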

iOS Audio Service : Read & write audio files

Hi guys,
I'm working on some audio services on iOS.
I'm trying to find any examples or tutorials about how an audio service or stream can read an existing audio file, process it somehow (e.g. apply a filter), and then write another file.
Is there anybody who can help me?
Dirac3LE (by Stephan M. Bernsee) is a great library for this job. There are examples and a manual included in the download. It is particularly intended for time and pitch manipulation, but in your case you'll be interested in its EAFRead and EAFWrite classes.
If you want to get familiar with a lower-level library that you can also use for microphone input and sound output, and that lets you get raw samples in and out, I would suggest taking a look at Audio Queue Services.
I used it in my side project to get audio from the microphone, and I also wrote some code you might find useful for fast, vectorized, FFT-based FIR filtering on input audio. You can find the code here: https://github.com/jamescarlson/FreeAPRS
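If it helps, here is a bare-bones sketch of the general read-process-write flow using AVAudioFile, with a simple gain change standing in for a real filter. It assumes an LPCM input such as .wav or .caf and is not taken from either library mentioned above:

```swift
import AVFoundation

// Hypothetical sketch of the read -> process -> write flow with AVAudioFile.
// A simple gain change stands in for whatever filter you actually need.
func process(input: URL, output: URL, gain: Float = 0.5) throws {
    let inFile = try AVAudioFile(forReading: input)
    let format = inFile.processingFormat
    // Write the output in the same file format as the input (assumes an LPCM source).
    let outFile = try AVAudioFile(forWriting: output, settings: inFile.fileFormat.settings)

    let frameCount: AVAudioFrameCount = 4096
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }

    while inFile.framePosition < inFile.length {
        try inFile.read(into: buffer, frameCount: frameCount)
        // "Filter" each channel in place; replace this with your real DSP.
        for ch in 0..<Int(format.channelCount) {
            if let data = buffer.floatChannelData?[ch] {
                for i in 0..<Int(buffer.frameLength) { data[i] *= gain }
            }
        }
        try outFile.write(from: buffer)
    }
}
```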

Converting raw pcm to speex?

For latency reasons, I would like to send Speex-encoded audio frame data to a server instead of the raw PCM I'm sending right now.
The problem is that I'm doing this in Flash, and I want to use a socket connection to stream encoded .spx frames of data.
I read the Speex manual, and unfortunately it does not go over the actual CELP algorithm used to convert PCM to SPX data; it only briefly introduces the use of excitation gains and how it grabs the filter coefficients.
Its libraries are DLLs - dead ends.
I really would like to create a conversion class in ActionScript. Is this possible? Is there any documentation on this? I've been googling to no avail. You'd think there would be more documentation on Speex out there...
And if I can't do this, what would be the most documented audio format to use?
thanks
