Record and send audio data to a C++ function - iOS

I need to send audio data in real time in PCM format: 8 kHz, 16-bit, mono.
The audio must be sent as an array of chars together with its length:
(char *data, int len).
I'm a beginner in audio processing and can't really work out how to accomplish this. My best attempt was converting to iLBC format, but it didn't work. Is there any sample showing how to record audio and convert it to a given format? I have already read Learning Core Audio by Chris Adamson and Kevin Avila, but I didn't find a solution that works.
Simply, what I need is:
(record) -> (convert?) -> send(char *data, int length);
Because I need to send the data as arrays of chars, I can't use a player.
EDIT:
I managed to make everything work with recording and with reading the buffers. What I can't manage is:
if (ref[i]->mAudioDataByteSize != 0) {
    char *data = (char *)ref[i]->mAudioData;
    sendData(mHandle, data, ref[i]->mAudioDataByteSize);
}

This is not really a beginner task. The solutions are to use either the RemoteIO Audio Unit, the Audio Queue API, or an AVAudioEngine installTapOnBus block. These will give you near real-time buffers (depending on the buffer size) of audio samples (Int16s, Floats, etc.) that you can convert, compress, or pack into other data types or arrays, usually via a callback function or block that you provide to do whatever you want with the incoming recorded audio sample buffers.
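As a rough sketch of the Audio Queue route (assumptions: sendData and mHandle are the asker's own send function and handle, the buffer sizing is arbitrary, and audio-session setup and error checking are omitted), something like this records 8 kHz / 16-bit / mono linear PCM and hands each filled buffer to the send function as a char pointer plus a byte count:

#include <AudioToolbox/AudioToolbox.h>

// the asker's send function and handle (assumed, not part of any Apple API)
extern void sendData(void *handle, char *data, int len);
extern void *mHandle;

static void MyInputCallback(void *inUserData,
                            AudioQueueRef inAQ,
                            AudioQueueBufferRef inBuffer,
                            const AudioTimeStamp *inStartTime,
                            UInt32 inNumPackets,
                            const AudioStreamPacketDescription *inPacketDescs)
{
    if (inBuffer->mAudioDataByteSize > 0) {
        // the buffer already holds 8 kHz / 16-bit / mono PCM, so it can be sent as-is
        sendData(mHandle, (char *)inBuffer->mAudioData, (int)inBuffer->mAudioDataByteSize);
    }
    // hand the buffer back to the queue so it can be filled again
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static AudioQueueRef StartRecording(void)
{
    // describe the target format directly: 8 kHz, 16-bit signed integer, mono, packed
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 8000.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue = NULL;
    AudioQueueNewInput(&fmt, MyInputCallback, NULL, NULL, NULL, 0, &queue);

    // three buffers of ~100 ms each: 8000 samples/s * 2 bytes * 0.1 s = 1600 bytes
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buf = NULL;
        AudioQueueAllocateBuffer(queue, 1600, &buf);
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }
    AudioQueueStart(queue, NULL);
    return queue;
}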

Related

Audio Synthesis with (AVFoundation?) Using Sine Wave

Let's say I have an array of Y values for a sine wave. (Assume X is time)
In Python you can just write it to a WAV file:
wav.write("file.wav", <sample rate>, <waveform>)
Is it possible to do this in Swift using AVFoundation? If so, how? If not, what library should I be using? (I'm trying to avoid AudioKit for now.)
Thanks,
Charles
In AVFoundation there is AVAudioFile, but you'll have to provide the data as AVAudioPCMBuffers. Each of those keeps its data in an AudioBufferList, which in turn consists of AudioBuffers, all of which are imho rather complicated, since their design goal apparently was to handle every conceivable audio format (including compressed, VBR, etc.). So AVAudioFile is probably overkill for just writing some synthetic samples to a WAV file.
Alternatively, there is the Audio File Services C-API. It provides AudioFileCreateWithURL, AudioFileWriteBytes and AudioFileClose, which will probably do the trick for your task.
The most complicated part may be the AudioStreamBasicDescription required by AudioFileCreateWithURL. To help with this, a utility function exists: FillOutASBDForLPCM.
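As a rough sketch of that C-API route (the same calls can be made from Swift; the 440 Hz tone, the file name, and the sample count are arbitrary choices, error checking is omitted, and the ASBD is filled in by hand here, which FillOutASBDForLPCM can also do for you), this writes one second of a 16-bit mono sine wave to a WAV file:

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

int main(void)
{
    const double sampleRate = 44100.0;
    const int    numSamples = 44100;                 // one second of audio
    static SInt16 samples[44100];

    // synthesize the waveform (the "array of Y values" from the question)
    for (int i = 0; i < numSamples; i++) {
        samples[i] = (SInt16)(sin(2.0 * M_PI * 440.0 * i / sampleRate) * 32767.0);
    }

    // plain 16-bit signed integer mono LPCM
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = sampleRate;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    asbd.mChannelsPerFrame = 1;
    asbd.mBitsPerChannel   = 16;
    asbd.mBytesPerFrame    = 2;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = 2;

    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("file.wav"),
                                                 kCFURLPOSIXPathStyle, false);
    AudioFileID file = NULL;
    AudioFileCreateWithURL(url, kAudioFileWAVEType, &asbd,
                           kAudioFileFlags_EraseFile, &file);

    UInt32 numBytes = numSamples * sizeof(SInt16);
    AudioFileWriteBytes(file, false, 0, &numBytes, samples);
    AudioFileClose(file);
    CFRelease(url);
    return 0;
}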

Split audio track into segments by BPM and analyse each segment using Superpowered iOS

I have been using the Superpowered iOS library to analyse audio and extract BPM, loudness, and pitch data. I'm working on an iOS Swift 3.0 project and have been able to get the C classes to work with Swift using bridging headers for Obj-C.
The problem I am running into is that, whilst I can create a decoder object, extract audio from the Music Library and store it as a .WAV, I am unable to create a decoder object for just snippets of the extracted audio and get the analyser class to return data.
My approach has been to create a decoder object as follows:
var decodeAttempt = decoder!.open(self.originalFilePath, metaOnly: false, offset: offsetBytes, length: lengthBytes, stemsIndex: 0)
'offsetBytes' and 'lengthBytes' are, I think, positions within the audio file. Since I have already decompressed the audio, stored it as WAV, and am providing that to the decoder here, I am calculating the offset and length using the PCM WAV formula of 44100 x 2 x 16 / 8 = 176400 bytes per second, and then using that to specify a start point and length in bytes. I'm not sure this is the correct approach, because the decoder returns 'Unknown file format'.
Any ideas or even alternative suggestions of how to achieve the title of this question? Thanks in advance!
The offset and length parameters of the SuperpoweredDecoder are there because of the Android APK file format, where bundled audio files are simply concatenated to the package.
Although a WAV file is as "uncompressed" as it can be, there is a header at the beginning, so offset and length are not a good fit for this purpose: the header is present only at the very beginning, and without the header decoding is not possible.
You mention that you can extract the audio to PCM (and save it to WAV). Then you have the answer in your hand: just submit different extracted portions to different instances of the SuperpoweredOfflineAnalyzer.
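In sketch form (analyzeInSegments, chunkSeconds, and the 16-bit stereo assumption are my own, and the actual SuperpoweredOfflineAnalyzer construction and processing calls are deliberately left as a placeholder comment), the idea is simply to walk the decoded PCM in fixed-size slices and feed each slice to its own analyzer instance:

#include <stdint.h>
#include <stddef.h>

// pcm:       interleaved 16-bit stereo samples already decoded from the WAV
// numFrames: total number of sample frames in pcm
static void analyzeInSegments(const int16_t *pcm, size_t numFrames,
                              unsigned int sampleRate, double chunkSeconds)
{
    const size_t framesPerChunk = (size_t)(chunkSeconds * sampleRate);

    for (size_t start = 0; start < numFrames; start += framesPerChunk) {
        size_t frames = numFrames - start;
        if (frames > framesPerChunk) frames = framesPerChunk;

        const int16_t *segment = pcm + start * 2;    // 2 = stereo interleaved
        // create a fresh SuperpoweredOfflineAnalyzer here and feed it
        // `segment` / `frames` (the analyzer works on floating-point audio,
        // so a short-to-float conversion step would typically sit in between),
        // then read back the BPM / loudness / pitch results for this segment
        (void)segment;
        (void)frames;
    }
}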

How to get 3 seconds of audio buffer - iOS

How can I get 3 seconds of samples from audio that is being recorded? I have used the RemoteIO Audio Unit, and it delivers 512 samples at a time, which is about 10 milliseconds. I need 3 seconds of samples in total. Can you give me an idea of how to do it?
Here is another post of mine with the details of my code: Concatenating Audio Buffers in ObjectiveC.
My worst-case scenario would be recording the audio to a file and then getting its samples from there; I don't want to go that route.
Should I use an Audio Queue instead? Any advice?
I really need help. Thanks.
Save the buffers (given to you by the Audio Unit callback) to a C array, and increment the index of the array used for saving data by 512 after every 512 samples of input data.
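A minimal sketch of that approach (assuming 16-bit mono samples at 44.1 kHz; gSampleBuffer, gSamplesCollected, kTotalSamples, and the use of inRefCon to carry the RemoteIO unit are my own choices, and the Audio Unit setup itself is not shown):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

#define kTotalSamples (44100 * 3)               // 3 seconds at 44.1 kHz

static SInt16 gSampleBuffer[kTotalSamples];
static UInt32 gSamplesCollected = 0;

// input callback attached to the RemoteIO unit; inRefCon carries the unit itself
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    AudioUnit remoteIOUnit = (AudioUnit)inRefCon;

    // pull the freshly recorded frames out of the RemoteIO input bus (bus 1)
    SInt16 samples[4096];                        // plenty of room for a 512-frame slice
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData           = samples;
    AudioUnitRender(remoteIOUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, &bufferList);

    // append to the big array until 3 seconds' worth has been collected
    UInt32 framesToCopy = inNumberFrames;
    if (gSamplesCollected + framesToCopy > kTotalSamples)
        framesToCopy = kTotalSamples - gSamplesCollected;

    memcpy(gSampleBuffer + gSamplesCollected, samples, framesToCopy * sizeof(SInt16));
    gSamplesCollected += framesToCopy;

    if (gSamplesCollected == kTotalSamples) {
        // 3 seconds are ready; signal your processing code here (outside the audio thread)
    }
    return noErr;
}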
I have appended the frames that came with each render, like hotpaw2 said. You can now find the detailed code showing how I handled the buffers in my post/question:
Concatenating Audio Buffers in ObjectiveC
Thanks.

Core Audio get data from AudioQueue (or AudioUnits) into memory

I'm a total noob when it comes to Core Audio, so bear with me. Basically, what I want to do is record audio data from a machine's default mic, record until the user decides to stop, and then do some analysis on the entire recording. I've been learning from the book "Learning Core Audio" by Chris Adamson and Kevin Avila (which is an awesome book, found it here: http://www.amazon.com/Learning-Core-Audio-Hands-On-Programming/dp/0321636848/ref=sr_1_1?ie=UTF8&qid=1388956621&sr=8-1&keywords=learning+core+audio ). I see how the Audio Queue works, but I'm not sure how to get the data as it's coming from the buffers and store it in a global array.
The biggest problem is that I can't allocate an array a priori, because we have no idea how long the user wants to record for. I'm guessing that a global array would have to be passed to the Audio Queue's callback, where it would then append data from the latest buffer; however, I'm not exactly sure how to do that, or whether that's the correct place to be doing so.
If using Audio Units, I'm guessing that I would need to create two audio units: a RemoteIO unit to get the microphone data, and a generic output audio unit that would do the data appending in the unit's AudioUnitRender() function (I'm guessing here, really not sure).
If you know where I need to be doing these things or know any resources that could help explain how this works, that would be awesome.
I eventually want to learn how to do this in iOS and Mac OS platforms. For the time being I'm just working in the Mac OS.
Since you know the sample rate, your app can pre-allocate a sufficient number of new buffers (for example, in a linked list) on the UI run loop (for example, periodically, based on an NSTimer or CADisplayLink), into which you can then just copy data during the Audio Queue callbacks.
There are also a few async file write functions that are safe to call inside an audio callback. After recording you can copy the data back out of the file into a now-known-sized memory array (or just mmap the file).
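As an example of the file route (a sketch only; gRecordFile is a made-up name, and the ExtAudioFileCreateWithURL setup and client-format configuration are not shown), Extended Audio File Services provides an asynchronous write that is designed to be callable from the audio callback:

#include <AudioToolbox/AudioToolbox.h>

// an ExtAudioFileRef opened elsewhere with ExtAudioFileCreateWithURL and primed once
// with ExtAudioFileWriteAsync(gRecordFile, 0, NULL) before recording starts
static ExtAudioFileRef gRecordFile;

// called from inside the input/render callback, after the AudioBufferList is filled
static void writeFramesAsync(UInt32 inNumberFrames, const AudioBufferList *bufferList)
{
    // ExtAudioFileWriteAsync copies the data and performs the file I/O on its own
    // thread, which is what makes it usable from the real-time audio callback
    ExtAudioFileWriteAsync(gRecordFile, inNumberFrames, bufferList);
}

After recording stops, ExtAudioFileDispose closes the file, and the data can then be read (or mmapped) back into a now-known-sized array.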
OK everyone, so I figured out the answer to my own question, and I only had to add about 10 lines of code to get it to work. Basically, in the user data struct (or, as Apple calls it, the client data struct), I added a variable to keep track of the total number of recorded samples, the sample rate (just so I would have access to the value inside the callback), and a pointer to the audio data. In the callback function, I reallocated memory for the pointer and then copied the contents of the buffer into the newly allocated memory.
I'll post the code for my client recorder struct and the lines of code inside the callback function. I would like to post the code for the entire program, but much of it was borrowed from the book "Learning Core Audio" by Chris Adamson and Kevin Avila, and I don't want to infringe on any copyrights held by the book (can someone tell me whether it's legal to post that here or not? If it is, I'd be more than happy to post the whole program).
the client recorder struct code:
//user info struct for recording audio queue callbacks
typedef struct MyRecorder {
    AudioFileID recordFile;
    SInt64      recordPacket;
    Boolean     running;
    UInt32      totalRecordedSamples;
    Float32     *allData;
    Float64     sampleRate;
} MyRecorder;
This struct needs to be initialized in the main loop of the program. It would look something like this:
MyRecorder recoder = {0};
I know I spelled "recorder" incorrectly.
Next, what I did inside the callback function:
//also write the data to a buffer that can be accessed in the rest of the program
//first calculate the number of samples in this buffer
UInt32 nSamples = (UInt32)(RECORD_SECONDS * recorder->sampleRate);
//grow the block that recorder->allData points to so it can hold everything:
//the samples we already had plus the samples we just recorded,
//each sizeof(Float32) bytes
recorder->allData = realloc(recorder->allData, sizeof(Float32) * (nSamples + recorder->totalRecordedSamples));
//copy the current buffer into the newly allocated memory;
//remember not to overwrite what was already recorded, so using pointer
//arithmetic, offset recorder->allData in memcpy by the current value of totalRecordedSamples
memcpy(recorder->allData + recorder->totalRecordedSamples, inBuffer->mAudioData, sizeof(Float32) * nSamples);
//update the total number of recorded samples
recorder->totalRecordedSamples += nSamples;
And of course at the end of my program I freed the memory in the recoder.allData pointer.
Also, my experience with C is very limited, so if I'm making some mistakes especially with memory management, please let me know. Some of the malloc, realloc, memcpy etc. type functions in C sort of freak me out.
EDIT: Also I'm now working on how to do the same thing using AudioUnits, I'll post the solution to that when done.

Delphi: BASS.dll - how to copy part of MP3 stream to another file

I'm using the BASS.dll library, and all I want to do is "redirect" part of an MP3 I'm playing (opened with, for example, BASS_StreamCreateFile) to another file (which may be MP3 or WAV). I don't know how to start; I've been trying to use the help to find an answer, but still nothing. I can play this stream and read some data I need. Now I need to copy part of the file, for example from 2:00 to 2:10 (or by position).
Any ideas on how I should start?
Regards,
J.K.
Well, I don't know BASS specifically, but I know a little about music playing and compressed data formats in general, and copying the data around properly involves an intermediate decoding step. Here's what you'll need to do:
1. Open the file and find the correct position.
2. Decode the audio into an in-memory buffer. The size of your buffer should be (LengthInSeconds * SamplesPerSecond * Channels * BytesPerSample) bytes. So if it's 10 seconds of CD-quality audio, that's 10 * 44100 * 2 (stereo) * 2 (16-bit audio) = 1764000 bytes.
3. Take this buffer of decoded data, feed it into an MP3 encoding function, and save the resulting MP3 to a file.
If BASS has functions for decoding to an external buffer and for encoding a buffer to MP3, you're good; all you have to do is figure out which ones to use. If not, you'll have to find another library for MP3 encoding and decoding.
Also, watch out for generational loss. MP3 uses lossy compression, so if you decompress and recompress the data multiple times, it'll hurt the sound quality.
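For orientation only, here is a rough sketch of the decode step against the BASS C API (the Delphi unit exposes the same calls); the file name is made up, the 2:00 position and 10-second length come from the question, error checking is omitted, and writing the decoded buffer out as WAV or re-encoding it to MP3 (for example with the BASSenc add-on or LAME) is left out:

#include <stdlib.h>
#include "bass.h"

int main(void)
{
    BASS_Init(-1, 44100, 0, 0, NULL);

    // open the MP3 as a decoding channel instead of a playback stream
    HSTREAM stream = BASS_StreamCreateFile(FALSE, "track.mp3", 0, 0, BASS_STREAM_DECODE);

    // seek to 2:00 and work out how many decoded bytes 10 seconds correspond to
    QWORD startPos = BASS_ChannelSeconds2Bytes(stream, 120.0);
    QWORD numBytes = BASS_ChannelSeconds2Bytes(stream, 10.0);
    BASS_ChannelSetPosition(stream, startPos, BASS_POS_BYTE);

    // pull the decoded PCM (16-bit interleaved at the stream's sample rate) into
    // memory; this is the buffer to hand to a WAV writer or an MP3 encoder
    void *pcm = malloc((size_t)numBytes);
    DWORD got = BASS_ChannelGetData(stream, pcm, (DWORD)numBytes);
    (void)got;

    BASS_StreamFree(stream);
    BASS_Free();
    free(pcm);
    return 0;
}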

Resources