iOS audio manipulation - play local .caf file backwards

I want to load a local .caf audio file and reverse the audio (play it backwards). I've gathered from posts like this that I basically need to flip an array of buffer data.
However, I'm not sure how to access this buffer data for a given audio file. I have a little experience playing sounds back with AVAudioPlayer and ObjectAL (an Objective-C OpenAL library), but I don't know how to access something lower-level like this buffer data array.
Could I please get an example of how I would go about getting access to that array?

Your problem reduces to the same problem described here, which was linked by P-i in the comment under your question. Kiran answered that question and re-posted his answer for you here. Kiran's answer is accurate, but you may need a few more details to be able to decide how to proceed because you're starting with a CAF file.
The simplest audio file format, linear pulse-code modulation (LPCM), is the easiest to read byte-for-byte or sample-for-sample. This means it's the easiest to reverse. Kiran's solution does just that.
The CAF format is a container/wrapper format, however. While your CAF file could contain LPCM data, it could just as easily contain a compressed format (AAC, for example) that cannot be manipulated in the same fashion.
You should consider first converting the CAF file to WAV, then reversing it as shown in the other solution. There are various libraries that will do this conversion for you, but a good place to start might be the AudioToolbox framework, which includes Audio Converter Services. Alternatively, if you can use the WAV file format from the start, you can avoid the need to convert at all.
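If you do need that conversion step, a rough sketch using Extended Audio File Services (part of AudioToolbox) might look like the following. The 16-bit mono 44.1 kHz output format and the function name ConvertCAFToLPCMWAV are assumptions to adapt to your material; ExtAudioFile decodes whatever codec the CAF holds into the client format you ask for.
#import <AudioToolbox/AudioToolbox.h>
// Hedged sketch: decode whatever the CAF contains into 16-bit LPCM and write it
// out as a WAV file that byte-level reversing code can walk sample-by-sample.
static OSStatus ConvertCAFToLPCMWAV(CFURLRef inputURL, CFURLRef outputURL)
{
    ExtAudioFileRef inputFile = NULL;
    OSStatus err = ExtAudioFileOpenURL(inputURL, &inputFile);
    if (err != noErr) return err;
    // Client/output format: 16-bit signed integer, mono, 44.1 kHz (assumed).
    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = 44100.0;
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    pcm.mChannelsPerFrame = 1;
    pcm.mFramesPerPacket  = 1;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = 2;
    pcm.mBytesPerPacket   = 2;
    // Ask ExtAudioFile to decode the input into that PCM format for us.
    err = ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);
    ExtAudioFileRef outputFile = NULL;
    if (err == noErr) {
        err = ExtAudioFileCreateWithURL(outputURL, kAudioFileWAVEType, &pcm, NULL,
                                        kAudioFileFlags_EraseFile, &outputFile);
    }
    // Pull decoded PCM out of the input and push it into the WAV file, a chunk at a time.
    enum { kFramesPerRead = 4096 };
    SInt16 samples[kFramesPerRead];
    while (err == noErr) {
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = 1;
        bufferList.mBuffers[0].mDataByteSize   = sizeof(samples);
        bufferList.mBuffers[0].mData           = samples;
        UInt32 frames = kFramesPerRead;
        err = ExtAudioFileRead(inputFile, &frames, &bufferList);
        if (err != noErr || frames == 0) break;   // 0 frames means end of file
        err = ExtAudioFileWrite(outputFile, frames, &bufferList);
    }
    if (inputFile)  ExtAudioFileDispose(inputFile);
    if (outputFile) ExtAudioFileDispose(outputFile);
    return err;
}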
You may need to know more if you find Kiran's sample code gives you an error (Core Audio is finicky). A great place to start is with the Core Audio 'Bible', "Learning Core Audio", written by Chris Adamson and Kevin Avila. That book builds your knowledge of digital sound using great samples. You should also check out anything written by Michael Tyson, who started the Amazing Audio Engine project on github, and wrote AudioBus.

I have worked on a sample app that records what the user says and plays it back in reverse. I used Core Audio to achieve this. Link to app code.
Each sample is 16 bits (2 bytes) in size (mono channel). You can load one sample at a time by copying it into a different buffer, starting at the end of the recording and reading backwards. When you reach the start of the data, the data has been reversed, and playback will be reversed.
// set up output file
AudioFileID outputAudioFile;
AudioStreamBasicDescription myPCMFormat;
myPCMFormat.mSampleRate = 16000.00;
myPCMFormat.mFormatID = kAudioFormatLinearPCM;
myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
myPCMFormat.mChannelsPerFrame = 1;
myPCMFormat.mFramesPerPacket = 1;
myPCMFormat.mBitsPerChannel = 16;
myPCMFormat.mBytesPerPacket = 2;
myPCMFormat.mBytesPerFrame = 2;
AudioFileCreateWithURL((__bridge CFURLRef)self.flippedAudioUrl,
                       kAudioFileCAFType,
                       &myPCMFormat,
                       kAudioFileFlags_EraseFile,
                       &outputAudioFile);

// set up input file
AudioFileID inputAudioFile;
OSStatus theErr = noErr;
UInt64 fileDataSize = 0;
AudioStreamBasicDescription theFileFormat;
UInt32 thePropertySize = sizeof(theFileFormat);
theErr = AudioFileOpenURL((__bridge CFURLRef)self.recordedAudioUrl, kAudioFileReadPermission, 0, &inputAudioFile);
thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);
UInt32 dataSize = (UInt32)fileDataSize;
void *theData = malloc(dataSize);

// Walk the input data from the end toward the start, two bytes (one 16-bit mono sample)
// at a time, writing each sample to the output file from the start forward.
UInt32 readPoint  = dataSize;
UInt32 writePoint = 0;
while (readPoint > 0)
{
    UInt32 bytesToRead = 2;
    readPoint -= bytesToRead;   // step back to the previous sample before reading
    AudioFileReadBytes(inputAudioFile, false, readPoint, &bytesToRead, theData);
    AudioFileWriteBytes(outputAudioFile, false, writePoint, &bytesToRead, theData);
    writePoint += bytesToRead;
}

free(theData);
AudioFileClose(inputAudioFile);
AudioFileClose(outputAudioFile);

I think this sample code could help you.
Mixer Host Sample Code
It loads two CAF files from the bundle and plays them. It contains a function called readAudioFilesIntoMemory, which loads a CAF file into a data array, just as you described.
The whole program is an example of Core Audio; I hope it can help you :)
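The pattern readAudioFilesIntoMemory follows is roughly: open the file, tell Extended Audio File Services which client (in-memory) format you want, then read every frame into a buffer you own. The sketch below is not MixerHost's exact code (MixerHost reads into non-interleaved AudioUnitSampleType buffers); it is a hedged outline of the idea with an assumed 16-bit mono client format and an assumed NSURL called fileURL:
// Sketch: load an entire CAF into a sample array you can index (and reverse) freely.
ExtAudioFileRef audioFile = NULL;
OSStatus err = ExtAudioFileOpenURL((__bridge CFURLRef)fileURL, &audioFile);

// Ask for the file's own format so we can keep its sample rate.
AudioStreamBasicDescription fileFormat;
UInt32 propSize = sizeof(fileFormat);
err = ExtAudioFileGetProperty(audioFile, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);

// Total length in frames (at the file's sample rate).
SInt64 totalFrames = 0;
propSize = sizeof(totalFrames);
err = ExtAudioFileGetProperty(audioFile, kExtAudioFileProperty_FileLengthFrames, &propSize, &totalFrames);

// Client format we want handed back: 16-bit signed mono at the file's rate (an assumption).
AudioStreamBasicDescription clientFormat = {0};
clientFormat.mSampleRate       = fileFormat.mSampleRate;
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mChannelsPerFrame = 1;
clientFormat.mFramesPerPacket  = 1;
clientFormat.mBitsPerChannel   = 16;
clientFormat.mBytesPerFrame    = 2;
clientFormat.mBytesPerPacket   = 2;
err = ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

// One buffer big enough for the whole file: this is the "array of buffer data".
SInt16 *samples = (SInt16 *)malloc((size_t)totalFrames * sizeof(SInt16));
UInt32 framesRead = 0;
while (framesRead < (UInt32)totalFrames) {
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize   = (UInt32)(((UInt32)totalFrames - framesRead) * sizeof(SInt16));
    bufferList.mBuffers[0].mData           = samples + framesRead;
    UInt32 frames = (UInt32)totalFrames - framesRead;
    err = ExtAudioFileRead(audioFile, &frames, &bufferList);
    if (err != noErr || frames == 0) break;   // 0 frames read means end of file
    framesRead += frames;
}
ExtAudioFileDispose(audioFile);
// samples[0 .. framesRead-1] now holds the decoded audio; reverse it in place to play backwards.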

Why not let Core Audio's AudioConverter do it for you? See this post about "Getting PCM from MP3/AAC/ALAC File", and Apple's Core Audio Essentials.

You can use the libsox for iPhone framework to apply audio effects easily.
It includes a sample project that shows how to do it.
libsox ios

Related

Writing buffers of Streamed mp3 packets to wav file using ExtAudioFileWrite ios

I am working on an online radio app. I managed to play the streamed MP3 packets from the Icecast server using Audio Queue Services; what I am struggling with is implementing a recording feature.
Since the stream is in MP3 format, I cannot write the audio packets directly to a file using AudioFileWritePackets.
To leverage the automatic conversion done by Extended Audio File Services, I am using ExtAudioFileWrite to write to a WAV file. I set up the AudioStreamBasicDescription of the incoming packets in the AudioFileStreamOpen property-listener callback AudioFileStream_PropertyListenerProc, and I populated the destination format manually. The code successfully creates the file and writes the packets to it, but on playback all I hear is white noise.
Here is my code
// when the recording button is pressed this function creates the file and sets up the ASBD
- (void)startRecording {
    recording = true;
    OSStatus status;
    NSURL *baseUrl = [self applicationDocumentsDirectory]; // returns the app's Documents directory
    NSURL *audioUrl = [NSURL URLWithString:@"Recorded.wav" relativeToURL:baseUrl];

    // ASBD setup for the destination (WAV) file
    AudioStreamBasicDescription dstFormat;
    dstFormat.mSampleRate = 44100.0;
    dstFormat.mFormatID = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mBytesPerPacket = 4;
    dstFormat.mBytesPerFrame = 4;
    dstFormat.mFramesPerPacket = 1;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel = 16;
    dstFormat.mReserved = 0;

    // creating the file
    status = ExtAudioFileCreateWithURL(CFBridgingRetain(audioUrl), kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &recordingFilRef);

    // tell the ExtAudioFile API what format we will be sending samples in;
    // recordasbd is the ASBD of the incoming packets, populated in AudioFileStream_PropertyListenerProc
    status = ExtAudioFileSetProperty(recordingFilRef, kExtAudioFileProperty_ClientDataFormat, sizeof(recordasbd), &recordasbd);
}

// a handler called by the packet proc callback passed to AudioFileStreamOpen
- (void)handlePacketsProc:(const void *)inInputData numberBytes:(UInt32)inNumberBytes numberPackets:(UInt32)inNumberPackets packetDescriptions:(AudioStreamPacketDescription *)inPacketDescriptions {
    if (recording) {
        // wrap the destination buffer in an AudioBufferList
        convertedData.mNumberBuffers = 1;
        convertedData.mBuffers[0].mNumberChannels = recordasbd.mChannelsPerFrame;
        convertedData.mBuffers[0].mDataByteSize = inNumberBytes;
        convertedData.mBuffers[0].mData = (void *)inInputData;
        ExtAudioFileWrite(recordingFilRef, recordasbd.mFramesPerPacket * inNumberPackets, &convertedData);
    }
}
My questions are:
Is my approach right? Can I write MP3 packets to a WAV file this way, and if so, what am I missing?
If my approach is wrong, please tell me any other way you think is right. A nudge in the right direction is more than enough for me.
I am grateful for any help. I have read every SO question on this topic that I could get my hands on, and I also looked closely at Apple's ConvertFile example, but I could not figure out what I am missing.
Thanks in advance for any help.
Why not write the raw MP3 packets directly to a file, without using ExtAudioFile at all?
They will form a valid MP3 file and will be much smaller than the equivalent WAV file.
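For instance, a hedged sketch of the packet handler; self.rawRecordingHandle is a hypothetical NSFileHandle you would create when recording starts:
// Sketch: dump the raw MP3 packets straight to disk instead of going through ExtAudioFile.
// An Icecast MP3 stream is just a sequence of MP3 frames, so the concatenated packets
// form a playable .mp3 file.
// In startRecording you might do something like:
//   [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
//   self.rawRecordingHandle = [NSFileHandle fileHandleForWritingAtPath:path];
- (void)handlePacketsProc:(const void *)inInputData
              numberBytes:(UInt32)inNumberBytes
            numberPackets:(UInt32)inNumberPackets
       packetDescriptions:(AudioStreamPacketDescription *)inPacketDescriptions {
    if (recording) {
        [self.rawRecordingHandle writeData:[NSData dataWithBytes:inInputData length:inNumberBytes]];
    }
}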

ExtAudioFileRead is too slow. How to make it faster?

I've written my own audio library. (Silly, I know, but I enjoy it.) Right now, I'm using it to play stereo files from the iPod library on iOS. It works well except that sometimes the call to ExtAudioFileRead() takes longer than 23 ms. Since my AudioUnit output is playing audio at 44.1 kHz and asking for 1024 frames per callback, my code plus ExtAudioFileRead() must take no longer than about 23 ms to respond.
I'm quite surprised that ExtAudioFileRead() is so slow. What I'm doing seems like quite a normal thing to do. I'm thinking there must be some undocumented configuration magic that I'm not doing. Here is my relevant configuration code:
ExtAudioFileRef _extAudioFileRef;
OSStatus result;
result = ::ExtAudioFileOpenURL((__bridge CFURLRef)_url, &_extAudioFileRef);
AudioStreamBasicDescription format;
format.mBitsPerChannel = 8 * sizeof(float);
format.mBytesPerFrame = sizeof(float);
format.mBytesPerPacket = sizeof(float);
format.mChannelsPerFrame = channelCount;
format.mFormatFlags = kAudioFormatFlagIsFloat|kAudioFormatFlagIsNonInterleaved;
format.mFormatID = kAudioFormatLinearPCM;
format.mFramesPerPacket = 1;
format.mSampleRate = framesPerSecond;
result = ::ExtAudioFileSetProperty(_extAudioFileRef,
                                   kExtAudioFileProperty_ClientDataFormat,
                                   sizeof(format),
                                   &format);
I'm not setting any other properties of _extAudioFileRef.
I've traced the heck out of this. I've put time measurement around just my call to ExtAudioFileRead() so I know it's not my code slowing this process down. It's got to be a configuration issue, right?
Thanks so much for any help or even guesses!
Cheers,
Christopher
You shouldn't be reading from the audio file in your audio callback - you should be buffering in another thread and passing the samples across.
You're breaking cardinal audio rule #4:
Don't do file or network IO on the audio thread (no read, write, or sendto).
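The usual shape is a single-producer/single-consumer ring buffer: a background thread keeps it topped up with decoded frames, and the render callback only copies out of it. Below is only a hedged sketch; the capacity, the mono float format, and the function names are made up, and in production you would probably reach for something like Michael Tyson's TPCircularBuffer rather than a hand-rolled ring.
#include <stdatomic.h>
#include <unistd.h>
// Power-of-two capacity so unsigned position wraparound keeps the modulo indexing consistent.
#define kRingCapacityFrames (1u << 17)            // roughly 3 s of mono floats at 44.1 kHz
static float       gRing[kRingCapacityFrames];
static atomic_uint gWritePos;                     // total frames produced so far
static atomic_uint gReadPos;                      // total frames consumed so far

// Producer: runs on its own thread or dispatch queue, never on the audio thread.
static void FillRing(ExtAudioFileRef file)
{
    float temp[4096];
    for (;;) {
        UInt32 freeFrames = kRingCapacityFrames - (atomic_load(&gWritePos) - atomic_load(&gReadPos));
        if (freeFrames < 4096) { usleep(5000); continue; }   // buffer is comfortably full
        AudioBufferList abl;
        abl.mNumberBuffers = 1;
        abl.mBuffers[0].mNumberChannels = 1;
        abl.mBuffers[0].mDataByteSize   = sizeof(temp);
        abl.mBuffers[0].mData           = temp;
        UInt32 frames = 4096;
        if (ExtAudioFileRead(file, &frames, &abl) != noErr || frames == 0) break;  // EOF or error
        UInt32 w = atomic_load(&gWritePos);
        for (UInt32 i = 0; i < frames; i++)
            gRing[(w + i) % kRingCapacityFrames] = temp[i];
        atomic_store(&gWritePos, w + frames);
    }
}

// Consumer: called from the render callback; no file IO, no locks, no Objective-C.
static UInt32 PullFromRing(float *dst, UInt32 wantedFrames)
{
    UInt32 r = atomic_load(&gReadPos);
    UInt32 available = atomic_load(&gWritePos) - r;
    UInt32 n = (available < wantedFrames) ? available : wantedFrames;
    for (UInt32 i = 0; i < n; i++)
        dst[i] = gRing[(r + i) % kRingCapacityFrames];
    atomic_store(&gReadPos, r + n);
    return n;   // the caller should zero-fill whatever it did not get
}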

AudioUnitRender and ExtAudioFileWrite error -50 in Swift: Trying to convert MIDI to Audio File

I'm trying to convert a MIDI file to an Audio File (.m4a) in Swift.
Right now I'm using MIKMIDI as a tool to sequence and play back MIDI files; however, it does not include the ability to save the playback into a file. MIKMIDI's creator outlines the process to do this here. In an attempt to capture and save the output to an audio file, I've followed this example to try to replace the MIKMIDI graph's RemoteIO node with a GeneralIO node in Swift. When I try to save the output to a file using AudioUnitRender and ExtAudioFileWrite, they both return error -50 (kAudio_ParamError).
var channels = 2
var buffFrames = 512
var bufferList = AudioBufferList.allocate(maximumBuffers: 1)
for i in 0...bufferList.count-1 {
    var buffer = AudioBuffer()
    buffer.mNumberChannels = 2
    buffer.mDataByteSize = UInt32(buffFrames * sizeofValue(AudioUnitSampleType))
    buffer.mData = calloc(buffFrames, sizeofValue(AudioUnitSampleType))
    bufferList[i] = buffer

    result = AudioUnitRender(generalIOAudioUnit, &flags, &inTimeStamp, busNum, UInt32(buffFrames), bufferList.unsafeMutablePointer)
    inTimeStamp.mSampleTime += 1
    result = ExtAudioFileWrite(extAudioFile, UInt32(buffFrames), bufferList.unsafeMutablePointer)
}
What is causing error -50, and how can I resolve it to render the MIDI (offline) to .m4a files?
UPDATE: I have resolved the ExtAudioFileWrite error -50 by changing mNumberChannels and channels to = 1. Now I get a one second audio file with noise. AudioUnitRender still returns error -50.
There are a couple of problems with your code:
your AudioBufferList doesn't agree with the client format, try
let bufferList = AudioBufferList.allocate(maximumBuffers: Int(clientFormat.mChannelsPerFrame))
you're replacing the wrong node from the AUGraph, and connecting the remaining node to itself, resulting in an infinite loop on AudioUnitRender.
But the main problem is that you are not implementing the solution the author suggested. You wish you could call AudioUnitRender with sample timestamps, faster than realtime, but the author said no: you'll have to manually convert sample time to host time and implement the better part of a MIDI player if you want that.
So you could do that (sounds hard), or file a feature request, or maybe record to file in realtime as you listen to the music by adding a render notification to the graph's remote IO audio unit with AudioUnitAddRenderNotify and writing the samples during the kAudioUnitRenderAction_PostRender phase.
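A hedged sketch of that last approach (not MIKMIDI-specific code; it assumes you have already created an ExtAudioFileRef named extFile, for example an .m4a via ExtAudioFileCreateWithURL, whose kExtAudioFileProperty_ClientDataFormat matches the IO unit's output stream format):
// Render notification: after the remote IO unit renders each buffer, append it to the file.
static OSStatus RecordingRenderNotify(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileRef extFile = (ExtAudioFileRef)inRefCon;
        // The async variant is safe to call from a render callback.
        ExtAudioFileWriteAsync(extFile, inNumberFrames, ioData);
    }
    return noErr;
}
// During setup, after creating extFile:
//   ExtAudioFileWriteAsync(extFile, 0, NULL);                        // prime the async writer once
//   AudioUnitAddRenderNotify(remoteIOUnit, RecordingRenderNotify, extFile);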

Read encoded frames from audio file with ExtAudioFileSeek and ExtAudioFileRead

This is what I would like to do:
Get audio from the microphone
Encode it in AAC, G.711 or G.726
Write the encoded frames to a socket.
And this is how I'm trying to get there:
I'm getting audio (PCM) from the microphone using TheAmazingAudioEngine and putting it in a buffer;
Using TPAACAudioConverter I'm reading audio from my buffer and writing to a temp file (AAC);
In the processing thread of TPAACAudioConverter I replaced this:
OSStatus status = ExtAudioFileWrite(destinationFile, numFrames, &fillBufList);
with this:
OSStatus status = ExtAudioFileWrite(destinationFile, numFrames, &fillBufList);
UInt32 framesWritten = numFrames;
totalFramesWritten += framesWritten;
AudioBufferList readData;
readData.mNumberBuffers = 1;
ExtAudioFileSeek(destinationFile, totalFramesWritten - framesWritten);
OSStatus readStatus = ExtAudioFileRead(destinationFile, &numFrames, &readData);
ExtAudioFileSeek(destinationFile, totalFramesWritten);
NSLog(@"Bytes read=%d", numFrames);
but what I get is 0 numFrames read from file.
Any idea on what I may be doing wrong or any suggestion on alternative paths to achieve what I need?
The issue is that whatever ExtAudioFile does under the hood doesn't allow for seeking on a file that is open for writing. If you look at the documentation for ExtAudioFileSeek it says "This function's behavior with files open for writing is currently undefined".
You can solve this by using the more extensible (and difficult) Audio File Services and the Audio Converter Services directly instead of the convenient Extended audio file services.
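If you do go that route, the heart of it is an AudioConverter that pulls PCM from your buffer through an input callback and hands back encoded AAC packets you can push straight to the socket, with no temp file at all. The sketch below is only an outline under assumptions: 16-bit mono PCM input, a converter you have already created with AudioConverterNew (PCM source format, kAudioFormatMPEG4AAC destination), and made-up names. Note also that raw AAC packets usually need framing such as ADTS headers before a generic player will accept the stream.
#import <AudioToolbox/AudioToolbox.h>
typedef struct {
    AudioBufferList pcm;       // one buffer of 16-bit mono PCM waiting to be encoded
    Boolean         consumed;  // set once the converter has taken the buffer
} EncoderFeed;

// The converter calls this whenever it needs more PCM input.
static OSStatus FeedPCM(AudioConverterRef converter, UInt32 *ioNumberDataPackets,
                        AudioBufferList *ioData,
                        AudioStreamPacketDescription **outPacketDescriptions,
                        void *inUserData)
{
    EncoderFeed *feed = (EncoderFeed *)inUserData;
    if (feed->consumed) {
        *ioNumberDataPackets = 0;      // nothing left right now
        return 1;                      // any non-zero status means "no more input for now"
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = feed->pcm.mBuffers[0];
    *ioNumberDataPackets = feed->pcm.mBuffers[0].mDataByteSize / 2;   // 16-bit mono: 2 bytes per frame
    feed->consumed = true;
    return noErr;
}

// Call once per chunk of captured PCM; hands each encoded AAC packet to `emit`.
static void EncodeChunk(AudioConverterRef converter, EncoderFeed *feed,
                        void (^emit)(const void *bytes, UInt32 length))
{
    UInt32 maxPacketSize = 0;
    UInt32 propSize = sizeof(maxPacketSize);
    AudioConverterGetProperty(converter, kAudioConverterPropertyMaximumOutputPacketSize,
                              &propSize, &maxPacketSize);
    void *outBytes = malloc(maxPacketSize);
    for (;;) {
        AudioBufferList outList;
        outList.mNumberBuffers = 1;
        outList.mBuffers[0].mNumberChannels = 1;
        outList.mBuffers[0].mDataByteSize   = maxPacketSize;
        outList.mBuffers[0].mData           = outBytes;
        AudioStreamPacketDescription packetDesc;
        UInt32 outPackets = 1;             // ask for one AAC packet at a time
        OSStatus err = AudioConverterFillComplexBuffer(converter, FeedPCM, feed,
                                                       &outPackets, &outList, &packetDesc);
        if (outPackets > 0)
            emit((char *)outList.mBuffers[0].mData + packetDesc.mStartOffset,
                 packetDesc.mDataByteSize);
        if (err != noErr || outPackets == 0) break;   // out of input for this chunk
    }
    free(outBytes);
}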
I abandoned this approach and reused the AQRecorder class from the SpeakHere example by Apple.
The project is available here https://github.com/robovm/apple-ios-samples/tree/master/SpeakHere.

Stop AUGraph's stuttering

I hear a stutter when I first start the AUGraph and play a song with a kAudioUnitSubType_AudioFilePlayer component. The stutter lasts about 3 seconds, but it's enough to bother me, and I also notice that the music sometimes stops for a split second while playing (I guess to buffer?). I have tried changing kAudioUnitProperty_ScheduledFilePrime to random values but notice no change.
What variables or values should I be looking to change to get rid of this flaw? Is this an issue with the stream format?
I am using the YBAudioUnit from https://github.com/ronaldmannak/YBAudioFramework/tree/master/YBAudioUnit
Code:
YBAudioFilePlayer:
- (void)setFileURL:(NSURL *)fileURL typeHint:(AudioFileTypeID)typeHint {
    if (_fileURL) {
        // Release old file:
        AudioFileClose(_audioFileID);
    }
    _fileURL = fileURL;
    if (_fileURL) {
        YBAudioThrowIfErr(AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, typeHint, &_audioFileID));
        YBAudioThrowIfErr(AudioUnitSetProperty(_auAudioUnit, kAudioUnitProperty_ScheduledFileIDs, kAudioUnitScope_Global, 0, &_audioFileID, sizeof(AudioFileID)));

        // Get number of audio packets in the file:
        UInt32 propsize = sizeof(_filePacketsCount);
        YBAudioThrowIfErr(AudioFileGetProperty(_audioFileID, kAudioFilePropertyAudioDataPacketCount, &propsize, &_filePacketsCount));

        // Get file's ASBD:
        propsize = sizeof(_fileASBD);
        YBAudioThrowIfErr(AudioFileGetProperty(_audioFileID, kAudioFilePropertyDataFormat, &propsize, &_fileASBD));

        // Get unit's ASBD:
        propsize = sizeof(_unitASBD);
        AudioUnitGetProperty(_auAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &_unitASBD, &propsize);

        if (_fileASBD.mSampleRate > 0 && _unitASBD.mSampleRate > 0) {
            _sampleRateRatio = _unitASBD.mSampleRate / _fileASBD.mSampleRate;
        } else {
            _sampleRateRatio = 1.;
        }
    }
}
To play I call these methods on the YBAudioFilePlayer:
[player1 setFileURL:item.url typeHint:0];
[player1 scheduleEntireFilePrimeAndStartImmediately];
[graph start]; // on a YBAudioUnitGraph, which is really just a basic AUGraph
More than an answer this is a comment, but it's rather large, so I'll post it here.
I don't have the time and patience to study the code inside the YB.. API, but a couple of things come to mind.
First, I remember experimenting with Audio Units (using Apple's API) and I had a lot of stuttering going on. I solved the problem by removing all Objective-C calls inside the callback that feeds data to my AUGraph (well, all except one that I couldn't get rid of). I replaced the Objective-C calls with pure C and C++ calls. Example:
// ... inside the render callback
int i = [myClass someProperty];   // Objective-C message send: avoid on the audio thread
int i = myClass->someVariable;    // plain C/C++ access: preferred
This was just an example, but it improved things dramatically and I got rid of the stuttering. Maybe you can take a look at the implementation of the YBXX API and see if there are a lot of Objective-C calls in the callback; if there are, I would not use the API.
Second observation: it seems you're only trying to play an audio file, for which an AUGraph is a lot of overhead; you could use a single IO audio unit without the graph.
There are a large number of questions to ask:
First, are you using a compressed audio file? If so, you may need to take into account padding frames (kAudioFilePropertyPacketTableInfo, sketched after this list) to get the real number of audio frames in the file. Perhaps try an AIFF, CAF, or WAV file.
Have you made sure no other audio apps are running in the background?
Are there any logging messages?
Have you tried posting to their issue page on github?
The final question is why you are trying to use their framework (which hasn't been updated in two years). I would recommend The Amazing Audio Engine. It is actively developed by some of the best audio folks on iOS.
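On the first point, querying the packet table info looks roughly like this (a sketch against the _audioFileID from your setFileURL:typeHint: method):
// Compressed formats (AAC, MP3, ...) carry encoder priming and remainder padding;
// mNumberValidFrames excludes both, giving the real number of audio frames.
AudioFilePacketTableInfo packetInfo;
UInt32 ptiSize = sizeof(packetInfo);
OSStatus ptiErr = AudioFileGetProperty(_audioFileID, kAudioFilePropertyPacketTableInfo,
                                       &ptiSize, &packetInfo);
if (ptiErr == noErr) {
    NSLog(@"valid frames: %lld, priming: %d, remainder: %d",
          packetInfo.mNumberValidFrames,
          (int)packetInfo.mPrimingFrames,
          (int)packetInfo.mRemainderFrames);
}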
