How to get 3 seconds of audio buffer on iOS

How can I get 3 seconds of samples from audio that is being recorded? I have used the RemoteIO audio unit, and each callback delivers 512 samples, which is about 10 milliseconds. I need 3 seconds of samples in total. Can you give me an idea of how to do it?
Here is another post of mine with the details of my code: Concatenating Audio Buffers in ObjectiveC
My worst-case scenario would be recording the audio to a file and then reading its samples; I don't want to go that route.
Should I use AudioQueue? Any advice?
I really need help. Thanks.

Save the buffers (given to you by the Audio Unit callback) to an array (a plain C array), and increment the index used for saving data by 512 after every 512 samples of input.
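As a rough illustration of the above, here is a minimal C sketch of a RemoteIO input callback that appends each 512-frame delivery into a 3-second array. It assumes 44.1 kHz mono 16-bit samples and that the RemoteIO unit was passed in via inRefCon; the names gSampleBuffer and gSampleIndex are purely illustrative.

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

#define kSampleRate    44100
#define kSecondsNeeded 3
#define kTotalSamples  (kSampleRate * kSecondsNeeded)   // 132300 samples for 3 seconds

static SInt16 gSampleBuffer[kTotalSamples];   // illustrative global accumulation buffer
static UInt32 gSampleIndex = 0;

static OSStatus RecordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    // Pull this callback's recorded frames out of the RemoteIO unit.
    SInt16 samples[inNumberFrames];
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = samples;

    AudioUnit remoteIOUnit = *(AudioUnit *)inRefCon;   // assumes the unit was passed as inRefCon
    OSStatus status = AudioUnitRender(remoteIOUnit, ioActionFlags, inTimeStamp,
                                      inBusNumber, inNumberFrames, &bufferList);
    if (status != noErr) return status;

    // Append the frames, stopping once 3 seconds have been collected.
    UInt32 remaining = kTotalSamples - gSampleIndex;
    UInt32 framesToCopy = (inNumberFrames < remaining) ? inNumberFrames : remaining;
    memcpy(gSampleBuffer + gSampleIndex, samples, framesToCopy * sizeof(SInt16));
    gSampleIndex += framesToCopy;

    if (gSampleIndex >= kTotalSamples) {
        // 3 seconds of samples are now in gSampleBuffer -- process or copy them here,
        // then reset the index to start collecting the next 3-second block.
        gSampleIndex = 0;
    }
    return noErr;
}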

I appended the frames that arrived with each render callback, as hotpaw2 suggested. You can find the detailed code showing how I handled the buffers in my post/question:
Concatenating Audio Buffers in ObjectiveC
Thanks.

Related

How to modify the size of AudioUnit Buffer?

I'm developing a recording app, but I have a requirement that the size of the input buffer should be 882 bytes. I know that I can modify the mDataByteSize of the buffer list.
But it can only be set to a power of 2. When I tried to set it to 882, it warned me with "AudioUnitRender error: -50".
I hope somebody can help me, because I'm stuck.
You can't demand a specific input size in an Audio Unit recording callback's bufferList. In fact, the Audio Unit API is allowed to change the number of samples per audio buffer at run time, so your app has to support a number of frames that differs from what it requested, on every callback.
Instead, your app should save the samples into a temporary FIFO buffer, and later remove samples in your desired block size once that temporary FIFO becomes full enough. Typically a circular buffer is used to store the samples until it fills to the size you need or larger. Then you can pull out exactly 882, or whatever number of samples you need.
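As a rough sketch of that FIFO idea (plain C; the sizes and names are illustrative, and a real-time implementation would normally use a lock-free ring buffer such as TPCircularBuffer instead of memmove):

#include <stdint.h>
#include <string.h>

#define kFIFOCapacity (16 * 1024)   // must hold at least one callback's worth plus one block
#define kBlockBytes   882           // the fixed block size the app wants

static uint8_t gFIFO[kFIFOCapacity];
static size_t  gFIFOFill = 0;       // bytes currently queued

// Call from the recording callback with whatever byte count the Audio Unit delivered.
static void FIFOPush(const void *bytes, size_t length)
{
    if (gFIFOFill + length > kFIFOCapacity) return;   // real code should handle overflow
    memcpy(gFIFO + gFIFOFill, bytes, length);
    gFIFOFill += length;
}

// Returns 1 and fills outBlock with exactly kBlockBytes once enough data is queued.
static int FIFOPopBlock(uint8_t outBlock[kBlockBytes])
{
    if (gFIFOFill < kBlockBytes) return 0;
    memcpy(outBlock, gFIFO, kBlockBytes);
    memmove(gFIFO, gFIFO + kBlockBytes, gFIFOFill - kBlockBytes);
    gFIFOFill -= kBlockBytes;
    return 1;
}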

Record and send audio data to c++ function

I need to send audio data in real time, in PCM format, 8 kHz, 16-bit, mono.
The audio must be sent as an array of chars with a length:
(<#char *data#>, <#int len#>).
I'm a beginner in audio processing and can't really figure out how to accomplish this. My best attempt was to convert to the iLBC format, but it didn't work. Is there any sample showing how to record and convert audio to a given format? I have already read Learning Core Audio by Chris Adamson and Kevin Avila, but I didn't find a solution that works.
Simply, what I need is:
(record) -> (convert?) -> send(char *data, int length);
Because I need to send the data as arrays of chars, I can't use a player.
EDIT:
I managed to make everything work with recording and with reading buffers. What I can't manage is:
if (ref[i]->mAudioDataByteSize != 0) {
    char *data = (char *)ref[i]->mAudioData;
    sendData(mHandle, data, ref[i]->mAudioDataByteSize);
}
This is not really a beginner task. The solutions are to use the RemoteIO Audio Unit, the Audio Queue API, or an AVAudioEngine installTapOnBus block. Each of these will give you near real-time (depending on the buffer size) buffers of audio samples (Int16s, Floats, etc.) that you can convert, compress, or pack into other data types or arrays, usually via a callback function or block that you provide to do whatever you want with the incoming recorded sample buffers.
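To make the Audio Queue route concrete (the edit above already shows AudioQueueBuffer fields), here is a hedged sketch of an input callback that forwards each recorded buffer as (char *data, int len). sendData() and mHandle come from the question; the rest is the standard Audio Queue input-callback shape, and the queue is assumed to have been configured for 8 kHz / 16-bit / mono LPCM.

#include <AudioToolbox/AudioToolbox.h>

extern void sendData(void *handle, char *data, int length);   // the question's send function
extern void *mHandle;                                          // the question's handle

static void InputCallback(void *inUserData,
                          AudioQueueRef inQueue,
                          AudioQueueBufferRef inBuffer,
                          const AudioTimeStamp *inStartTime,
                          UInt32 inNumPackets,
                          const AudioStreamPacketDescription *inPacketDesc)
{
    if (inBuffer->mAudioDataByteSize > 0) {
        // The queue was set up for 8 kHz / 16-bit / mono LPCM, so mAudioData
        // already holds the raw PCM bytes the question wants to send.
        sendData(mHandle, (char *)inBuffer->mAudioData,
                 (int)inBuffer->mAudioDataByteSize);
    }
    // Re-enqueue the buffer so recording continues.
    AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
}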

Real time audio processing Swift

Our app continuously records and processes audio from the iPhone mic.
Currently I use AVAudioRecorder and AVFoundation and record the audio input into 8-second ".wav" files.
Instead, I want to continuously record the audio input into a buffer and process 8-second chunks from it.
How can I record the audio input into a buffer, and how can I read 8-second chunks from it?
Thanks!
You could receive the raw PCM a number of ways (in AVFoundation: AVCaptureAudioDataOutput from an AVCaptureDevice, or AVAudioEngine with a processing tap installed; in Audio Toolbox: Audio Queue Services or the RemoteIO audio unit). Then, to write the file, you could use Audio Toolbox's AudioFile or ExtAudioFile, just counting up how many frames you've written and deciding when it's time to start a new 8-second file.
As Rhythmic Fistman notes above, it would be safer if you did something like
capture callbacks --pushes-to--> ring buffer <--pulls-from-- file-writing code
Because when you're closing one file and opening another, the capture callbacks are still going to be coming in, and if you block on file I/O you stand a very good chance of dropping some data on the floor.
I suppose another approach would be to just fill an 8-second buffer in memory from your callbacks, and when it's full, have another thread write it out to a file while you malloc a new buffer and start recording into that (obviously, the file writer would dispose of the old buffer when it's done).
Edit: Also, I didn't see anything about Swift in your question, but any of this should work fine from Swift or C/Obj-C.
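Here is a small C sketch of that swap-the-buffer idea (as noted above, the same approach works from Swift): each capture callback appends into an 8-second buffer, and when it fills, the full buffer is handed to a background queue while a fresh one is allocated. It assumes 44.1 kHz mono 16-bit samples; WriteChunkToWavFile is a hypothetical placeholder for your own file writer, and allocating inside the audio callback is something a production version should avoid (preallocate instead).

#include <dispatch/dispatch.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define kSampleRate  44100
#define kChunkFrames (kSampleRate * 8)   // 8 seconds of mono samples

static int16_t *gChunk = NULL;
static size_t   gChunkFill = 0;

// Call from the capture callback with each batch of recorded samples.
static void AppendSamples(const int16_t *samples, size_t frameCount)
{
    if (gChunk == NULL) gChunk = malloc(kChunkFrames * sizeof(int16_t));

    size_t toCopy = frameCount;
    if (gChunkFill + toCopy > kChunkFrames) toCopy = kChunkFrames - gChunkFill;
    memcpy(gChunk + gChunkFill, samples, toCopy * sizeof(int16_t));
    gChunkFill += toCopy;

    if (gChunkFill == kChunkFrames) {
        // Chunk is full: hand it to a background queue for writing/processing and
        // immediately start filling a fresh buffer so no input is dropped.
        int16_t *fullChunk = gChunk;
        gChunk = malloc(kChunkFrames * sizeof(int16_t));
        gChunkFill = 0;
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
            // WriteChunkToWavFile(fullChunk, kChunkFrames);   // hypothetical writer
            free(fullChunk);
        });
        // Any leftover samples from this callback are dropped in this sketch; a
        // real implementation would copy them into the new buffer.
    }
}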

Is it possible to split the recorded wav file into multiple wav files on iOS, given the duration of the splits?

I want to extract a few clips from a recorded wav file. I am not finding much help online regarding this. I understand we can't split compressed formats like mp3 directly, but how do we do it with caf/wav files?
One approach to consider is to calculate and read the bytes from the audio file and write them to a new file. Because you are dealing with LPCM formats, the calculations are relatively simple.
If, for example, you have a file of 16-bit mono LPCM audio sampled at 44.1 kHz that is one minute in duration, then you have a total of (60 sec x 44100 Hz) 2,646,000 samples. At 2 bytes per sample, that gives a total of 5,292,000 bytes. And if you want the audio from 10 sec to 30 sec, then you need to read bytes 882,000 through 2,646,000 and write them to a separate file.
There is a bit of code involved, but it can be done using Audio File Services from the AudioToolbox framework.
Functions you'll need to use are AudioFileOpenURL, AudioFileCreateWithURL, AudioFileReadBytes, AudioFileWriteBytes, and AudioFileClose.
An algorithm would be something like this:
First set up an AudioFileID, which is the opaque type that gets passed to the AudioFileCreateWithURL function. Then open the file you wish to splice using AudioFileOpenURL.
Calculate the start and end bytes of what you want to copy.
Next, preferably in a loop, read in the bytes and write them to the new file. AudioFileReadBytes and AudioFileWriteBytes allow you to do this. What's good is that you can read and write whatever number of bytes you decide on each iteration of the loop.
When finished, close the new file and the original using AudioFileClose.
Then repeat for each file (audio extraction) to be written.
As an additional note, you would split a compressed format by converting it to LPCM first.
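Here is a hedged C sketch of that algorithm for the 10 sec to 30 sec example above, assuming the source really is 16-bit mono LPCM at 44.1 kHz; real code should read the source format with AudioFileGetProperty and check every OSStatus rather than assuming it.

#include <AudioToolbox/AudioToolbox.h>

static void ExtractClip(CFURLRef sourceURL, CFURLRef destURL)
{
    AudioFileID sourceFile = NULL, destFile = NULL;

    // Format assumed for both files: 16-bit mono LPCM at 44.1 kHz.
    AudioStreamBasicDescription asbd = {
        .mSampleRate       = 44100.0,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        .mBytesPerPacket   = 2,
        .mFramesPerPacket  = 1,
        .mBytesPerFrame    = 2,
        .mChannelsPerFrame = 1,
        .mBitsPerChannel   = 16
    };

    AudioFileOpenURL(sourceURL, kAudioFileReadPermission, kAudioFileWAVEType, &sourceFile);
    AudioFileCreateWithURL(destURL, kAudioFileWAVEType, &asbd,
                           kAudioFileFlags_EraseFile, &destFile);

    // 10 sec .. 30 sec  ->  bytes 882,000 .. 2,646,000 (44100 samples/sec * 2 bytes).
    SInt64 startByte = 10 * 44100 * 2;
    SInt64 endByte   = 30 * 44100 * 2;

    enum { kChunk = 32 * 1024 };
    char buffer[kChunk];
    SInt64 readPos = startByte, writePos = 0;

    while (readPos < endByte) {
        UInt32 bytesToRead = (UInt32)((endByte - readPos) < kChunk ? (endByte - readPos) : kChunk);
        AudioFileReadBytes(sourceFile, false, readPos, &bytesToRead, buffer);
        if (bytesToRead == 0) break;                       // end of file
        UInt32 bytesToWrite = bytesToRead;
        AudioFileWriteBytes(destFile, false, writePos, &bytesToWrite, buffer);
        readPos  += bytesToRead;
        writePos += bytesToWrite;
    }

    AudioFileClose(destFile);
    AudioFileClose(sourceFile);
}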

CoreAudio, collect an exact amount of samples

I'm reading an audio file with AVAssetReaderAudioMixOutput, so I get AudioBuffer objects with, normally, 2*8192 samples each. Now I want to do some analysis on exactly 2*44100 samples, shifted by 1024 samples every 1024 samples. Is there a simple way to collect an exact number of samples, or do I have to build that all on my own?
And is there a collection, like a ring buffer, that works well with AudioBuffer?
The best way I found to do this is with TPCircularBuffer (https://github.com/michaeltyson/TPCircularBuffer). It has a category that can deal directly with AudioBuffer objects. So I put them into the buffer until there are 2*44100 bytes in it, and then I remove the last 2*8192 bytes. Works like a charm!
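For reference, a rough sketch of that sliding-window pattern using TPCircularBuffer's plain byte API; the window and hop sizes follow the question (2*44100 interleaved stereo samples, advanced by 1024 frames), and AnalyzeWindow is a hypothetical placeholder for your own analysis.

#include "TPCircularBuffer.h"
#include <stdint.h>

#define kChannels     2
#define kWindowFrames 44100          // per-channel frames in the analysis window
#define kHopFrames    1024           // how far the window slides each step

static TPCircularBuffer gRing;

void SetupRing(void)
{
    // Capacity: roughly two windows' worth of interleaved float frames.
    TPCircularBufferInit(&gRing, kWindowFrames * kChannels * sizeof(float) * 2);
}

// Call with each batch of interleaved samples pulled from AVAssetReaderAudioMixOutput.
void FeedSamples(const float *interleaved, uint32_t frameCount)
{
    TPCircularBufferProduceBytes(&gRing, interleaved,
                                 frameCount * kChannels * sizeof(float));

    const int32_t windowBytes = kWindowFrames * kChannels * sizeof(float);
    const int32_t hopBytes    = kHopFrames * kChannels * sizeof(float);

    int32_t availableBytes = 0;
    float *window = (float *)TPCircularBufferTail(&gRing, &availableBytes);

    // Run the analysis whenever a full window is queued, then slide by the hop,
    // keeping the overlapping samples in the buffer.
    while (window != NULL && availableBytes >= windowBytes) {
        // AnalyzeWindow(window, kWindowFrames);   // hypothetical analysis routine
        TPCircularBufferConsume(&gRing, hopBytes);
        window = (float *)TPCircularBufferTail(&gRing, &availableBytes);
    }
}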
