Is there any way to create a CMSampleBufferRef from NSData? The NSData object was itself originally constructed from a CMSampleBufferRef. I need this conversion because I want to save the CMSampleBufferRef frames captured from the live camera with AVFoundation as NSData, and later convert those NSData objects back into CMSampleBufferRef frames to assemble a video.
Thanks in advance.
-(AudioBufferList *) getBufferListFromData: (NSData *) data
{
    if (data.length > 0)
    {
        NSUInteger len = [data length];
        // Copy the raw bytes out of the NSData object.
        // Byte*, void* or Float32* would all work here; only the element size you assume later matters.
        Byte *byteData = (Byte *)malloc(len);
        if (byteData)
        {
            memcpy(byteData, [data bytes], len);
            // Wrap the copied bytes in a single-buffer AudioBufferList
            AudioBufferList *theDataBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
            theDataBuffer->mNumberBuffers = 1;
            theDataBuffer->mBuffers[0].mDataByteSize = (UInt32)len;
            theDataBuffer->mBuffers[0].mNumberChannels = 1;
            theDataBuffer->mBuffers[0].mData = byteData;
            return theDataBuffer;
        }
    }
    return NULL; // the return type is a plain C pointer, so NULL rather than nil
}
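For the other direction mentioned in the question, the raw bytes of a block-buffer-backed CMSampleBufferRef (compressed data, audio samples, and so on) can be copied into an NSData roughly as below. This is only a sketch of that half of the round trip: camera video frames are normally backed by a CVPixelBuffer rather than a CMBlockBuffer, and rebuilding a complete CMSampleBufferRef also requires the original format description and timing information, which would have to be stored alongside the bytes.

    // Sketch only: copy the bytes of a block-buffer-backed sample into NSData.
    // Requires CoreMedia; does not apply to CVPixelBuffer-backed camera frames.
    - (NSData *)dataFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        if (blockBuffer == NULL) {
            return nil; // no block buffer (e.g. a pixel-buffer-backed video frame)
        }
        size_t length = CMBlockBufferGetDataLength(blockBuffer);
        NSMutableData *data = [NSMutableData dataWithLength:length];
        if (CMBlockBufferCopyDataBytes(blockBuffer, 0, length, [data mutableBytes]) != kCMBlockBufferNoErr) {
            return nil;
        }
        return data;
    }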
Related
How to split byteArray in iOS
I am getting arrayByte data with a length of 160.
I need to split it into 4 parts, each part containing 40 bytes. I then need to copy that data and use it for decoding. I tried to convert it, but it's not working. Can someone help me do this?
The code below is the Android version; I want to do the same thing on iOS.
The incoming byte array has a length of 160 and BUFFER_LENGTH = 40.
public fun opusDataDecoder(data:ByteArray){
for (i in 0..3){
val byteArray = ByteArray(BUFFER_LENGTH)
System.arraycopy(data,i * BUFFER_LENGTH,byteArray,0, BUFFER_LENGTH) //BUFFER_LENGTH = 40
val decodeBufferArray = ShortArray(byteArray.size * 8) // decodeBufferArray = 320
val size = tntOpusUtils.decode(decoderHandler, byteArray, decodeBufferArray)
if (size > 0) {
val decodeArray = ShortArray(size)
System.arraycopy(decodeBufferArray, 0, decodeArray, 0, size)
opusDecode(decodeArray)
} else {
Log.e(TAG, "opusDecode error : $size")
}
}
}
I am getting only the first 40 bytes. I want the first 0-40 bytes, then 40-80, then 80-120, then 120-160 bytes, but I always get the same 40 bytes.
Can someone help me fix this?
Finally I found a solution for splitting the byte array and sending it in small packets. Below is the updated working code.
-(NSMutableData*)decodeOpusData:(NSData*)data
{
    NSMutableData *audioData = [[NSMutableData alloc] init];
    int bufferLength = 40;
    for (NSUInteger i = 0; i < 4; i++)
    {
        // Make sure the source data actually contains this 40-byte chunk
        if ([data length] >= (i + 1) * bufferLength)
        {
            // Take the i-th 40-byte slice: 0-40, 40-80, 80-120, 120-160
            NSData *subData = [data subdataWithRange:NSMakeRange(i * bufferLength, bufferLength)];
            Byte *byteData = (Byte *)malloc(sizeof(Byte) * bufferLength);
            memcpy(byteData, [subData bytes], bufferLength);
            // Decode the chunk with the OPUS library
            short decodedBuffer[WB_FRAME_SIZE];
            int nDecodedByte = sizeof(short) * [self decode:byteData length:bufferLength output:decodedBuffer];
            NSData *PCMData = [NSData dataWithBytes:(Byte *)decodedBuffer length:nDecodedByte];
            [audioData appendData:PCMData];
            free(byteData);
        }
    }
    return audioData;
}
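For reference, the part that actually advances through the buffer is the i*bufferLength offset passed to NSMakeRange; without it, every pass re-reads bytes 0-40. As a more general illustration (my own helper, not part of the answer above), splitting any NSData into fixed-size chunks could look like this:

    // Splits data into chunkSize-byte NSData slices; the last slice may be shorter.
    - (NSArray<NSData *> *)splitData:(NSData *)data chunkSize:(NSUInteger)chunkSize
    {
        NSMutableArray<NSData *> *chunks = [NSMutableArray array];
        NSUInteger offset = 0;
        while (offset < [data length]) {
            NSUInteger thisChunk = MIN(chunkSize, [data length] - offset);
            [chunks addObject:[data subdataWithRange:NSMakeRange(offset, thisChunk)]];
            offset += thisChunk;
        }
        return chunks;
    }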
I'm trying to play audio that I'm receiving from an RTMP stream (I have already managed to play the video part). The audio comes in AAC format, and I receive it as NSData. I then put it into a CMSampleBuffer and enqueue it into an AVSampleBufferAudioRenderer, essentially the same thing I do for the video packets.
Everything seems to go fine except that I get no sound. I'm pretty new to Objective-C and iOS programming, so the issue might come from somewhere else; all ideas are welcome.
Here is the code I use to make the format description
-(void)createFormatDescription:(NSData*)payload
{
OSStatus status;
NSData* data = [NSData dataWithData:[payload subdataWithRange:NSMakeRange(2, [payload length]-2)]];
const uint8_t* bytesBuffer = [data bytes];
_type = bytesBuffer[0]>>3;
_frequency = [self getSampleRate:(bytesBuffer[0] & 0b00000111) << 1 | (bytesBuffer[1] >> 7)];
_channel = (bytesBuffer[1] & 0b01111000) >> 3;
AudioStreamBasicDescription audioFormat;
audioFormat.mFormatID = kAudioFormatMPEG4AAC;
audioFormat.mSampleRate = _frequency;
audioFormat.mFormatFlags = _type;
audioFormat.mBytesPerPacket = 0;
audioFormat.mFramesPerPacket = 1024;
audioFormat.mBytesPerFrame = 0;
audioFormat.mChannelsPerFrame = _channel;
audioFormat.mBitsPerChannel = 0;
audioFormat.mReserved = 0;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &_formatDesc);
}
Here is the code that I use to add the ADTS data in front of the packets and create the buffers:
- (NSData*) adts:(int)length
{
int size = 7;
int fullSize =length + size;
uint8_t adts[size];
adts[0] = 0xFF;
adts[1] = 0xF9;
adts[2] = (_type - 1) << 6 | (_frequency << 2) | (_channel >> 2);
adts[3] = (_channel & 3) << 6 | (fullSize >> 11);
adts[4] = (fullSize & 0x7FF) >> 3;
adts[5] = ((fullSize & 7) << 5) + 0x1F;
adts[6] = 0xFC;
NSData* result = [NSData dataWithBytes:adts length:size];
return result;
}
-(void)enqueueBuffer:(RTMPMessage*)message {
OSStatus status;
NSData* payloadData = [NSData dataWithData:[message.payloadData
subdataWithRange:NSMakeRange(2, [message.payloadData length]-2)]];
NSData* adts = [NSData dataWithData:[self adts:(int)[payloadData length]]];
NSMutableData* data = [NSMutableData dataWithData:adts];
[data appendData:payloadData];
uint8_t bytesBuffer[[data length]];
[data getBytes:bytesBuffer length:[data length]];
const size_t sampleSize = [data length];
AudioStreamPacketDescription packetDescription;
packetDescription.mDataByteSize = (int)sampleSize;
packetDescription.mStartOffset = 0;
packetDescription.mVariableFramesInPacket = 0;
CMBlockBufferRef blockBuffer = NULL;
CMSampleBufferRef sampleBuffer = NULL;
CMTime time = CMTimeMake(5, _frequency);
status = CMBlockBufferCreateWithMemoryBlock(NULL, bytesBuffer, [data length], kCFAllocatorNull, NULL, 0, [data length], 0, &blockBuffer);
status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault, blockBuffer, true, NULL, NULL, _formatDesc, 1, time, &packetDescription, &sampleBuffer);
CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
[_audioRenderer enqueueSampleBuffer:sampleBuffer];
}
Thanks in advance for any help
The ADTS header is not required. AVSampleBufferAudioRenderer just needs the naked AAC compressed packets for playback. But the precondition is that you set the correct format description and the correct parameters when creating the sample buffer.
You need to be aware that HE-AAC (LC+SBR) is packed like AAC-LC but has a 22050 Hz sample rate, and HE-AAC v2 (LC+SBR+PS) is packed like AAC-LC but has a 22050 Hz sample rate and one channel per sample.
And for all HE-AAC (v1 and v2), samplesPerFrame is always 2048, not 1024 as with LC.
That's all I know about playing an AAC stream with AVSampleBufferAudioRenderer correctly. It was a long road to get it working.
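To illustrate the point about the format description, an ASBD for an HE-AAC stream might be filled in along the following lines. This is only a sketch: the channel count, and whether the rate field should carry the 22050 Hz core rate or the doubled output rate, depend on your stream and on how the rest of your pipeline interprets the description, so take the real values from the stream's AudioSpecificConfig rather than from this example.

    // Sketch: format description for an HE-AAC stream (values are placeholders).
    // Assumes CoreMedia / CoreAudioTypes are imported, as in the code above.
    AudioStreamBasicDescription asbd = {0};
    asbd.mFormatID         = kAudioFormatMPEG4AAC_HE;  // kAudioFormatMPEG4AAC_HE_V2 for LC+SBR+PS
    asbd.mSampleRate       = 44100;                    // placeholder; see the note above
    asbd.mFramesPerPacket  = 2048;                     // HE-AAC packets decode to 2048 frames, not LC's 1024
    asbd.mChannelsPerFrame = 2;                        // HE-AAC v2 carries one channel per sample
    // Compressed formats leave the per-byte fields at zero.
    asbd.mBytesPerPacket = 0;
    asbd.mBytesPerFrame  = 0;
    asbd.mBitsPerChannel = 0;

    CMAudioFormatDescriptionRef formatDesc = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd,
                                                     0, NULL, 0, NULL, NULL, &formatDesc);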
I need to be able to assemble audio from several files into a single (stereo) buffer. My code works as expected if I load each file into its own buffer, but looping through several files and loading them into one larger buffer only plays back the segment from the last file.
It's possible that the header info is getting copied each time, or that the same area of the buffer is simply being overwritten with each new file.
Any suggestions would be appreciated.
Some code is listed below. I'm reading from encrypted files, so I'm using NSData and AudioFileOpenWithCallbacks.
// Assign the frame count to the soundStructArray instance variable
UInt64 totalFrames = [[inputNotes.stopTimes lastObject] intValue];
self.soundStructArray[0]->frameCount = (UInt32)totalFrames;
self.soundStructArray[0]->audioDataLeft =
(AudioUnitSampleType *) calloc (totalFrames, sizeof (AudioUnitSampleType));
AudioStreamBasicDescription importFormat = {0};
// if (2 == channelCount) {
self.soundStructArray[0]->isStereo = YES;
self.soundStructArray[0]->audioDataRight =
(AudioUnitSampleType *) calloc (totalFrames, sizeof (AudioUnitSampleType));
// Allocate memory for the buffer list struct according to the number of
// channels it represents.
AudioBufferList *bufferList;
UInt32 channelCount = 2;
bufferList = (AudioBufferList *) malloc (
sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
);
if (NULL == bufferList) {DLog (@"*** malloc failure for allocating bufferList memory"); return;}
// initialize the mNumberBuffers member
bufferList->mNumberBuffers = channelCount;
// initialize the mBuffers member to 0
AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
bufferList->mBuffers[arrayIndex] = emptyBuffer;
}
// set up the AudioBuffer structs in the buffer list
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = (UInt32)totalFrames * sizeof (AudioUnitSampleType);
bufferList->mBuffers[0].mData = self.soundStructArray[0]->audioDataLeft;
if (2 == channelCount) {
bufferList->mBuffers[1].mNumberChannels = 1;
bufferList->mBuffers[1].mDataByteSize = (UInt32)totalFrames * sizeof (AudioUnitSampleType);
bufferList->mBuffers[1].mData = self.soundStructArray[0]->audioDataRight;
}
NSString *fileType = @"m4a";
for (int audioFile = 0; audioFile < inputVoicesCount; ++audioFile) {
@autoreleasepool {
NSData *encData;
NSData *audioData;
AudioFileID refAudioFileID;
DLog (#"readAudioFilesIntoMemory - file %i", audioFile);
NSString *source = [[NSBundle mainBundle] pathForResource:[inputNotes.notes objectAtIndex:audioFile] ofType:fileType];
// NSURL *url = [NSURL encryptedFileURLWithPath:source];
if ([[NSFileManager defaultManager] fileExistsAtPath:source])
{
//File exists
encData = [[NSData alloc] initWithContentsOfFile:source];
if (encData)
{
NSError *error;
audioData = [RNDecryptor decryptData:encData
withPassword:key
error:&error];
}
}
else
{
DLog(#"File does not exist");
}
OSStatus result = AudioFileOpenWithCallbacks((__bridge void *)(audioData), readProc, 0, getSizeProc, NULL, kAudioFileMPEG4Type, &refAudioFileID);
if(result != noErr){
DLog(#"problem in theAudioFileReaderWithData function: result code %i \n", result);
}
// Instantiate an extended audio file object.
ExtAudioFileRef audioFileObject = 0;
result = ExtAudioFileWrapAudioFileID(refAudioFileID, NO, &audioFileObject);
if (result != noErr){
DLog(#"problem in theAudioFileReaderWithData function Wraping the audio FileID: result code %i \n", result);
}
// Get the audio file's number of channels.
AudioStreamBasicDescription fileAudioFormat = {0};
UInt32 formatPropertySize = sizeof (fileAudioFormat);
result = ExtAudioFileGetProperty (
audioFileObject,
kExtAudioFileProperty_FileDataFormat,
&formatPropertySize,
&fileAudioFormat
);
if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}
importFormat = stereoStreamFormat;
result = ExtAudioFileSetProperty (
audioFileObject,
kExtAudioFileProperty_ClientDataFormat,
sizeof (importFormat),
&importFormat
);
if (noErr != result) {[self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}
// Assign the frame count to the soundStructArray instance variable
UInt64 desiredFrames = (UInt64) ([[inputNotes.stopTimes objectAtIndex:audioFile] intValue] - [[inputNotes.startTimes objectAtIndex:audioFile] intValue]);
// Perform a synchronous, sequential read of the audio data out of the file and
// into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
UInt32 numberOfPacketsToRead = (UInt32) desiredFrames;
result = ExtAudioFileRead (
audioFileObject,
&numberOfPacketsToRead,
bufferList
);
if (noErr != result) {
[self printErrorMessage: #"ExtAudioFileRead failure - " withStatus: result];
// If reading from the file failed, then free the memory for the sound buffer.
// free (soundStructArray[audioFile].audioDataLeft);
// soundStructArray[audioFile].audioDataLeft = 0;
free (self.soundStructArray[0]->audioDataLeft);
self.soundStructArray[0]->audioDataLeft = 0;
free (self.soundStructArray[0]->audioDataRight);
self.soundStructArray[0]->audioDataRight = 0;
ExtAudioFileDispose (audioFileObject);
return;
}
ExtAudioFileDispose (audioFileObject);
AudioFileClose(refAudioFileID);
}
}//end of @autoreleasepool
free (bufferList);
// Set the sample index to zero, so that playback starts at the
// beginning of the sound.
self.soundStructArray[0]->sampleNumber = 0;
DLog (#"Finished reading all files into memory");
readingFiles = NO;
}
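The symptom described at the top of this post, where only the last file plays back, is what you would see if every ExtAudioFileRead writes to the start of the same buffers. One way this is often handled (a sketch under my own assumptions, using a hypothetical currentFrame counter, not code from the project above) is to advance the AudioBufferList's mData pointers by the frames already read before each file's read, and then move the counter forward afterwards:

    // Hypothetical running offset, in frames, into the big destination buffers.
    UInt64 currentFrame = 0;

    // Inside the per-file loop, just before ExtAudioFileRead:
    bufferList->mBuffers[0].mData = self.soundStructArray[0]->audioDataLeft + currentFrame;
    bufferList->mBuffers[0].mDataByteSize = (UInt32)(desiredFrames * sizeof (AudioUnitSampleType));
    bufferList->mBuffers[1].mData = self.soundStructArray[0]->audioDataRight + currentFrame;
    bufferList->mBuffers[1].mDataByteSize = (UInt32)(desiredFrames * sizeof (AudioUnitSampleType));

    // After a successful read (ExtAudioFileRead reports the frames it actually
    // read back through numberOfPacketsToRead):
    currentFrame += numberOfPacketsToRead;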
I am working on an iOS project where I have to read a huge file in chunks of 4 KB.
Here is what I have so far:
NSData *fileData= [self getBytesFromInput];
pj_str_t text;
int chunkSize = 4*1024;
int fileSize = [fileData length];
while (fileSize>0){
if (fileSize<=chunkSize) {
chunkSize = fileSize;
fileSize=0;
}
else fileSize = fileSize-chunkSize;
pj_strset(&text, (char*)[fileData bytes], MIN([fileData length], chunkSize); //takes the first chunk
//BUT HOW TO TAKE THE NEXT CHUNK OF DATA?
//do something with the &text ....
}
I would refactor the code so you are also loading the file in chunks, but you can access the later chunks by adding an offset to your byte pointer:
int currentOffset = 0;
while (fileSize>0) {
...
char* bytePointer = (char*)[fileData bytes];
pj_strset(&text, bytePointer+currentOffset, MIN([fileData length], chunkSize);
currentOffset += chunkSize;
}
EDIT:
This should do the same thing, but reading the file chunk by chunk:
NSString *yourFilePath;
NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingAtPath:yourFilePath];
int chunksize = 4*1024;
pj_str_t text;
NSData *data;
while ((data = [fileHandle readDataOfLength:chunksize]) && [data length] > 0) {
pj_strset(&text, (char*)[data bytes], [data length]);
}
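One small addition worth making here (my own note, not part of the original answer): fileHandleForReadingAtPath: returns nil if the path cannot be opened, and the handle should be closed once the loop is done:

    NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingAtPath:yourFilePath];
    if (fileHandle == nil) {
        // The file could not be opened for reading; bail out before the loop.
        return;
    }

    // ... read loop from the snippet above ...

    [fileHandle closeFile]; // releases the underlying file descriptor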
I'm developing a mobile application for iOS related to voice recording.
Because of that, I'm developing several sound effects to modify the recorded voice, but I'm having trouble implementing some of them.
I'm trying to create an echo/delay effect, and I need to transform a byte array into a short array, but I have no idea how to do it in Objective-C.
Thanks.
This is my current source code. Since Byte is such a small type, applying the attenuation (which produces a float value) results in awful noise in my audio.
- (NSURL *)echo:(NSURL *)input output:(NSURL *)output{
int delay = 50000;
float attenuation = 0.5f;
NSMutableData *audioData = [NSMutableData dataWithContentsOfURL:input];
NSUInteger dataSize = [audioData length] - 44;
NSUInteger audioLength = [audioData length];
NSUInteger newAudioLength = audioLength + delay;
// Copy bytes
Byte *byteData = (Byte*)malloc(audioLength);
memcpy(byteData, [audioData bytes], audioLength);
short *shortData = (short*)malloc(audioLength/2);
// create a new array to store new modify data
Byte *newByteData = (Byte*)malloc(newAudioLength);
newByteData = byteData;
for (int i = 44; i < audioLength - delay; i++)
{
newByteData[i + delay] += byteData[i] * attenuation;
}
// Copy bytes in a new NSMutableData
NSMutableData *newAudioData = [NSMutableData dataWithBytes:newByteData length:newAudioLength];
// Store in a file
[newAudioData writeToFile:[output path] atomically:YES];
// Set WAV size
[[AudioUtils alloc] setAudioFileSize:output];
return output;
}
Finally, I was able to finish my echo effect by implementing these four methods. I hope they will be useful for you.
Byte to short array
- (short *) byte2short:(Byte *)bytes size:(int)size resultSize:(int)resultSize{
    // calloc so any shorts beyond size/2 (e.g. the delay tail) start out at zero
    short *shorts = (short *)calloc(resultSize, sizeof(short));
    for (int i = 0; i < size/2; i++){
        // little-endian: low byte first, high byte second
        shorts[i] = (bytes[i*2+1] << 8) | bytes[i*2];
    }
    return shorts;
}
Short to byte array
- (Byte *) short2byte:(short *)shorts size:(int)size resultSize:(int)resultSize{
    Byte *bytes = (Byte *)malloc(sizeof(Byte)*resultSize);
    for (int i = 0; i < size; i++)
    {
        // little-endian: low byte first, high byte second
        bytes[i * 2] = (Byte) (shorts[i] & 0x00FF);
        bytes[(i * 2) + 1] = (Byte) (shorts[i] >> 8);
        shorts[i] = 0; // clear the source sample as we go
    }
    return bytes;
}
Effect
- (NSMutableData *) effect:(NSMutableData *)data delay:(int)delay attenuation:(float)attenuation{
    NSUInteger audioLength = [data length];
    // Copy the original samples into a raw byte array
    Byte *byteData = (Byte*)malloc(sizeof(Byte)*audioLength);
    memcpy(byteData, [data bytes], audioLength);
    // Convert to 16-bit samples, leaving room for the delayed tail
    short *shortData = [self byte2short:byteData size:(int)audioLength resultSize:(int)(audioLength/2 + delay)];
    free(byteData);
    // Mix an attenuated, delayed copy of each sample back into the signal
    for (int i = 44; i < audioLength/2; i++)
    {
        shortData[i + delay] += (short)((float)shortData[i] * attenuation);
    }
    // Convert back to bytes
    Byte *newByteData = [self short2byte:shortData size:(int)(audioLength/2 + delay) resultSize:(int)(audioLength + delay*2)];
    free(shortData);
    // Copy the bytes into an NSMutableData in order to create the new file
    NSMutableData *newAudioData = [NSMutableData dataWithBytes:newByteData length:(int)(audioLength + delay*2)];
    free(newByteData);
    return newAudioData;
}
Echo effect
- (NSURL *)echo:(NSURL *)input output:(NSURL *)output{
    NSMutableData *audioData = [NSMutableData dataWithContentsOfURL:input];
    // Call the effect method, which returns an NSMutableData, and write it to the new file
    [[self effect:audioData delay:6000 attenuation:0.5f] writeToFile:[output path] atomically:YES];
    // Set the WAV file's size fields (a helper method I have implemented)
    [[AudioUtils alloc] setAudioFileSize:output];
    return output;
}
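A quick usage example (the file paths here are placeholders of mine, not from the project):

    NSURL *input  = [NSURL fileURLWithPath:@"/path/to/recording.wav"];      // placeholder
    NSURL *output = [NSURL fileURLWithPath:@"/path/to/recording-echo.wav"]; // placeholder
    [self echo:input output:output];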
There's no predefined function that will create a short array from a byte array, but it should be fairly simple to do it with a for loop:
    // create a short array (one short per source byte, so the buffer doubles in size)
    short *shortData = malloc(sizeof(short) * audioLength);
    for (int i = 0; i < audioLength; i++)
    {
        shortData[i] = byteData[i];
    }
The code is not rigorously correct (meaning I didn't compile it, just wrote it here on the fly), but it should give you an idea of how to do it.
Also be aware that storing audio data with two bytes per sample instead of one can give very different results on playback, but I'll assume you know how to handle the audio data for your specific purposes.
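If the underlying recording is 16-bit PCM, which is what the accepted byte2short method above assumes, the bytes need to be combined in pairs rather than widened one by one. A minimal sketch, assuming little-endian samples and reusing the byteData/audioLength names from earlier:

    // Pack pairs of little-endian bytes into 16-bit samples.
    // audioLength is the byte count, so the result holds audioLength/2 shorts.
    short *samples = (short *)malloc(sizeof(short) * (audioLength / 2));
    for (NSUInteger i = 0; i < audioLength / 2; i++)
    {
        samples[i] = (short)((byteData[i * 2 + 1] << 8) | byteData[i * 2]);
    }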