Media Foundation SinkWriter: enabling MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS causes WriteSample to fail with E_FAIL

I'm trying to encode RGB/NV12 samples to H.264 and stream the encoded video over a WebSocket by giving the SinkWriter a custom IMFByteStream implementation. For this experiment I convert the RGB32 samples to NV12 using pixel shaders. The output is H.264 in a fragmented MPEG-4 (FMPEG4) container.
I have also tried feeding RGB samples directly. Both RGB and NV12 samples work fine with the software path, but WriteSample fails with E_FAIL as soon as the line below is uncommented, even though MF_SINK_WRITER_D3D_MANAGER is properly set.
COM_CHECK(attribs->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE));
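For reference, the documented pattern for handing a D3D11 device to Media Foundation's hardware MFTs is roughly the sketch below. This is a minimal sketch rather than my exact code; it assumes a hardware device created with D3D11_CREATE_DEVICE_VIDEO_SUPPORT and reuses the COM_CHECK macro from the snippets further down. ResetDevice is what associates the D3D11 device with the manager before the manager is handed to the sink writer.
// minimal sketch (assumptions: hardware device with video support, COM_CHECK as below)
CComPtr<ID3D11Device> device;
CComPtr<ID3D11DeviceContext> context;
UINT flags = D3D11_CREATE_DEVICE_VIDEO_SUPPORT | D3D11_CREATE_DEVICE_BGRA_SUPPORT;
COM_CHECK(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                            nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context));

// the device must be protected for multithreaded access by the MFTs
CComQIPtr<ID3D10Multithread> multithread(device);
if (multithread) multithread->SetMultithreadProtected(TRUE);

// create the DXGI device manager and associate the device with it
UINT resetToken = 0;
CComPtr<IMFDXGIDeviceManager> deviceManager;
COM_CHECK(MFCreateDXGIDeviceManager(&resetToken, &deviceManager));
COM_CHECK(deviceManager->ResetDevice(device, resetToken));
// deviceManager is then set as MF_SINK_WRITER_D3D_MANAGER on the sink writer attributes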
MFTrace log:
59980,EB60 10:18:27.65002 ### Exiting: traced process has exited
CMFPlatExportDetours::MFStartup # Version=0x00020070, dwFlags=0x00000000
59980,EB60 10:18:27.65448 COle32ExportDetours::CoCreateInstance # Created {60F9F51E-4613-4B35-AE88-332542B567B8} MF Fragmented MPEG4 Sink Class Factory (C:\WINDOWS\System32\mfmp4srcsnk.dll) #04B80148 - traced interfaces:
59980,EB60 10:18:27.65700 COle32ExportDetours::CoCreateInstance # Created {9A02E012-6303-4E1E-B9A1-630F802592C5} Packed Property Storage Object (C:\WINDOWS\system32\propsys.dll) #012D5FEC - traced interfaces:
59980,EB60 10:18:27.65931 COle32ExportDetours::CoCreateInstance # Created {48E2ED0F-98C2-4A37-BED5-166312DDD83F} MFReadWrite Class Factory (C:\WINDOWS\System32\mfreadwrite.dll) #02D13F00 - traced interfaces: IMFReadWriteClassFactory #02D13F00,
59980,EB60 10:18:27.65935 CMFReadWriteClassFactoryDetours::CreateInstanceFromObject #02D13F00 Object #04B859F0, MF_TRANSCODE_CONTAINERTYPE={9BA876F1-419F-4B77-A1E0-35959D9D4004};MF_SINK_WRITER_ASYNC_CALLBACK=#05ECCD98;{9C27891A-ED7A-40E1-88E8-B22727A024EE}=1;MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS=1;MF_SOURCE_READER_D3D_MANAGER=#04B82770
59980,EB60 10:18:27.65940 CMFAttributesDetours::GetUINT32 #02D17618 attribute not found guidKey = MF_SINK_WRITER_DISABLE_THROTTLING
59980,EB60 10:18:27.65941 CMFAttributesDetours::GetUnknown #02D17618 attribute not found guidKey = {2ACF1917-3743-41DF-A564-E727A80EA33E}
59980,EB60 10:18:27.65942 CMFAttributesDetours::GetItemType #02D17618 attribute not found guidKey = MF_SINK_WRITER_DISABLE_THROTTLING
59980,EB60 10:18:27.65942 CMFAttributesDetours::GetItemType #02D17618 attribute not found guidKey = MF_READWRITE_DISABLE_CONVERTERS
59980,EB60 10:18:27.65943 CMFAttributesDetours::GetUINT32 #02D17618 attribute not found guidKey = {BDAD7BCA-0E5F-4B10-AB16-26DE381B6293}
59980,EB60 10:18:27.65943 CMFAttributesDetours::GetItemType #02D17618 attribute not found guidKey = {39384300-D0EB-40B1-87A0-3318871B5A53}
59980,EB60 10:18:27.65944 CMFAttributesDetours::GetItemType #02D17618 attribute not found guidKey = {430847DA-0890-4B0E-938C-054332C547E1}
59980,EB60 10:18:27.65944 CMFAttributesDetours::GetItemType #02D17618 attribute not found guidKey = {43AD19CE-F33F-4BA9-A580-E4CD12F2D144}
59980,EB60 10:18:27.65945 CMFAttributesDetours::GetItemType #02D17618 attribute not found guidKey = {273DB885-2DE2-4DB2-A6A7-FDB66FB40B61}
59980,EB60 10:18:27.65945 CMFAttributesDetours::GetString #02D17618 attribute not found guidKey = {39384300-D0EB-40B1-87A0-3318871B5A53}
59980,EB60 10:18:27.65946 CMFAttributesDetours::GetUINT32 #02D17618 attribute not found guidKey = {43AD19CE-F33F-4BA9-A580-E4CD12F2D144}
### BuffersWritten : 7
59980,EB60 10:18:27.65959 ### Trace session stopped
CMFReadWriteClassFactoryDetours::HandleObject # New sink writer #04B890F8
59980,EB60 10:18:27.65970 CMFAttributesDetours::GetUnknown #04B8A1F8 attribute not found guidKey = MFT_FIELDOFUSE_UNLOCK_Attribute
__M_F_T_R_A_C_E___LOG__
Total events received: 25
Initialize:
CComPtr<IMFAttributes> attribs;
CComPtr<IMFDXGIDeviceManager> pDeviceManager = NULL;
UINT resetToken = 0;
CComPtr<ID3D11Device> pD3dDevice = NULL;
CComQIPtr<ID3D10Multithread> pMultithread = nullptr;
VideoEncoderMF (std::array<unsigned short, 2> dimensions, unsigned int fps, IMFByteStream * stream, ID3D11Device* D3dDevice = NULL) : VideoEncoderMF(dimensions, fps) {
    const unsigned int bit_rate = static_cast<unsigned int>(0.78f*fps*m_width*m_height); // yields 40Mb/s for 1920x1080@25fps
    CComPtr<IMFAttributes> attribs;
    COM_CHECK(MFCreateAttributes(&attribs, 0));
    pD3dDevice = D3dDevice;
    pMultithread = pD3dDevice;
    if (pMultithread) {
        pMultithread->SetMultithreadProtected(TRUE);
    }
    if (pD3dDevice) {
        MFCreateDXGIDeviceManager(&resetToken, &pDeviceManager);
    }
    COM_CHECK(attribs->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_FMPEG4));
    COM_CHECK(attribs->SetUINT32(MF_LOW_LATENCY, TRUE));
    //COM_CHECK(attribs->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE)); // un-commenting this leads WriteSample to fail with E_FAIL
    if (pDeviceManager) {
        COM_CHECK(attribs->SetUnknown(MF_SINK_WRITER_D3D_MANAGER, pDeviceManager));
    }
    // create sink writer with the specified output format
    IMFMediaTypePtr mediaTypeOut = MediaTypeOutput(fps, bit_rate);
    COM_CHECK(MFCreateFMPEG4MediaSink(stream, mediaTypeOut, nullptr, &m_media_sink)); // "fragmented" MPEG4 does not require a seekable byte stream
    COM_CHECK(MFCreateSinkWriterFromMediaSink(m_media_sink, attribs, &m_sink_writer));
    // connect input to output
    IMFMediaTypePtr mediaTypeIn = MediaTypeInput(fps);
    COM_CHECK(m_sink_writer->SetInputMediaType(m_stream_index, mediaTypeIn, nullptr));
    COM_CHECK(m_sink_writer->BeginWriting());
}
Configure input media type:
IMFMediaTypePtr MediaTypeInput (unsigned int fps) {
    IMFMediaTypePtr mediaTypeIn;
    COM_CHECK(MFCreateMediaType(&mediaTypeIn));
    COM_CHECK(mediaTypeIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
    COM_CHECK(mediaTypeIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12));
    COM_CHECK(mediaTypeIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive));
    COM_CHECK(mediaTypeIn->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE));
    LONG stride = 0;
    COM_CHECK(MFGetStrideForBitmapInfoHeader(MFVideoFormat_NV12.Data1, m_width, &stride));
    COM_CHECK(mediaTypeIn->SetUINT32(MF_MT_DEFAULT_STRIDE, UINT32(stride)));
    UINT32 vidSampleSize = (UINT32)(m_height * stride + (m_height >> 1) * stride);
    COM_CHECK(mediaTypeIn->SetUINT32(MF_MT_SAMPLE_SIZE, vidSampleSize)); // set the new sample size
    COM_CHECK(MFSetAttributeSize(mediaTypeIn, MF_MT_FRAME_SIZE, m_width, m_height));
    COM_CHECK(MFSetAttributeRatio(mediaTypeIn, MF_MT_FRAME_RATE, fps, 1));
    COM_CHECK(MFSetAttributeRatio(mediaTypeIn, MF_MT_PIXEL_ASPECT_RATIO, 1, 1));
    return mediaTypeIn;
}
Configure output media type:
IMFMediaTypePtr MediaTypeOutput (unsigned int fps, unsigned int bit_rate) {
    IMFMediaTypePtr mediaTypeOut;
    COM_CHECK(MFCreateMediaType(&mediaTypeOut));
    COM_CHECK(mediaTypeOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
    COM_CHECK(mediaTypeOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264)); // H.264 format
    COM_CHECK(mediaTypeOut->SetUINT32(MF_MT_MPEG2_PROFILE, eAVEncH264VProfile_High));
    COM_CHECK(mediaTypeOut->SetUINT32(MF_MT_MPEG2_LEVEL, eAVEncH264VLevel5_1));
    COM_CHECK(mediaTypeOut->SetUINT32(MF_MT_AVG_BITRATE, bit_rate));
    COM_CHECK(mediaTypeOut->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive));
    COM_CHECK(MFSetAttributeSize(mediaTypeOut, MF_MT_FRAME_SIZE, m_width, m_height));
    COM_CHECK(MFSetAttributeRatio(mediaTypeOut, MF_MT_FRAME_RATE, fps, 1));
    COM_CHECK(MFSetAttributeRatio(mediaTypeOut, MF_MT_PIXEL_ASPECT_RATIO, 1, 1));
    return mediaTypeOut;
}
Write Sample:
HRESULT WriteSample(CComPtr<IMFSample> &sample) override {
    // set the time stamp and the duration
    COM_CHECK(sample->SetSampleTime(m_time_stamp));
    COM_CHECK(sample->SetSampleDuration(m_frame_duration));
    // send the sample to the sink writer
    HRESULT hr = m_sink_writer->WriteSample(m_stream_index, sample); // fails on I/O error
    if (FAILED(hr))
        return hr;
    // increment time
    m_time_stamp += m_frame_duration;
    return S_OK;
}
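For completeness, on the GPU path an IMFSample handed to WriteSample would wrap the NV12 texture roughly along these lines (again only a sketch, not my exact code; nv12Texture is a hypothetical ID3D11Texture2D created on the same device that was given to the sink writer):
// sketch only: wrap an NV12 ID3D11Texture2D in an IMFSample for the GPU path
CComPtr<IMFMediaBuffer> buffer;
COM_CHECK(MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), nv12Texture,
                                    0 /*subresource*/, FALSE, &buffer));

// DXGI buffers report a current length of zero until it is set explicitly
CComPtr<IMF2DBuffer> buffer2d;
COM_CHECK(buffer->QueryInterface(&buffer2d));
DWORD length = 0;
COM_CHECK(buffer2d->GetContiguousLength(&length));
COM_CHECK(buffer->SetCurrentLength(length));

CComPtr<IMFSample> sample;
COM_CHECK(MFCreateSample(&sample));
COM_CHECK(sample->AddBuffer(buffer));
// the sample time/duration are then set in WriteSample() above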
I'm puzzled by this behavior of the SinkWriter and have been unable to solve the problem for quite some time. Any help would be appreciated.

Related

Muxing AAC audio and h.264 video streams to mp4 with AVFoundation

For OS X and iOS, I have streams of real-time encoded video (H.264) and audio (AAC) data coming in, and I want to be able to mux these together into an mp4.
I'm using an AVAssetWriter to perform the muxing.
I have video working, but my audio still sounds like jumbled static. Here's what I'm trying right now (skipping some of the error checks here for brevity):
I initialize the writer:
NSURL *url = [NSURL fileURLWithPath:mContext->filename];
NSError* err = nil;
mContext->writer = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:&err];
I initialize the audio input:
NSDictionary* settings;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
settings = nil; // set output to nil so it becomes a pass-through
CMAudioFormatDescriptionRef audioFormatDesc = nil;
{
AudioStreamBasicDescription absd = {0};
absd.mSampleRate = mParameters.audioSampleRate; //known sample rate
absd.mFormatID = kAudioFormatMPEG4AAC;
absd.mFormatFlags = kMPEG4Object_AAC_Main;
CMAudioFormatDescriptionCreate(NULL, &absd, 0, NULL, 0, NULL, NULL, &audioFormatDesc);
}
mContext->aacWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:settings sourceFormatHint:audioFormatDesc];
mContext->aacWriterInput.expectsMediaDataInRealTime = YES;
[mContext->writer addInput:mContext->aacWriterInput];
And start the writer:
[mContext->writer startWriting];
[mContext->writer startSessionAtSourceTime:kCMTimeZero];
Then, I have a callback where I receive a packet with a timestamp (milliseconds), and a std::vector<uint8_t> with the data containing 1024 compressed samples. I make sure isReadyForMoreMediaData is true. Then, if this is our first time receiving the callback, I set up the CMAudioFormatDescription:
OSStatus error = 0;
AudioStreamBasicDescription streamDesc = {0};
streamDesc.mSampleRate = mParameters.audioSampleRate;
streamDesc.mFormatID = kAudioFormatMPEG4AAC;
streamDesc.mFormatFlags = kMPEG4Object_AAC_Main;
streamDesc.mChannelsPerFrame = 2; // always stereo for us
streamDesc.mBitsPerChannel = 0;
streamDesc.mBytesPerFrame = 0;
streamDesc.mFramesPerPacket = 1024; // Our AAC packets contain 1024 samples per frame
streamDesc.mBytesPerPacket = 0;
streamDesc.mReserved = 0;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
error = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &streamDesc, sizeof(acl), &acl, 0, NULL, NULL, &mContext->audioFormat);
And finally, I create a CMSampleBufferRef and send it along:
CMSampleBufferRef buffer = NULL;
CMBlockBufferRef blockBuffer;
CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, packet.data.size(), kCFAllocatorDefault, NULL, 0, packet.data.size(), kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
CMBlockBufferReplaceDataBytes((void*)packet.data.data(), blockBuffer, 0, packet.data.size());
CMTime duration = CMTimeMake(1024, mParameters.audioSampleRate);
CMTime pts = CMTimeMake(packet.timestamp, 1000);
CMSampleTimingInfo timing = {duration , pts, kCMTimeInvalid };
size_t sampleSizeArray[1] = {packet.data.size()};
error = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, nullptr, mContext->audioFormat, 1, 1, &timing, 1, sampleSizeArray, &buffer);
// The first input buffer must have an appropriate kCMSampleBufferAttachmentKey_TrimDurationAtStart since the codec has encoder delay
if (mContext->firstAudioFrame)
{
    CFDictionaryRef dict = NULL;
    dict = CMTimeCopyAsDictionary(CMTimeMake(1024, 44100), kCFAllocatorDefault);
    CMSetAttachment(buffer, kCMSampleBufferAttachmentKey_TrimDurationAtStart, dict, kCMAttachmentMode_ShouldNotPropagate);
    // we must trim the start time on the first audio frame...
    mContext->firstAudioFrame = false;
}
CMSampleBufferMakeDataReady(buffer);
BOOL ret = [mContext->aacWriterInput appendSampleBuffer:buffer];
I guess the part I'm most suspicious of is my call to CMSampleBufferCreate. It seems I have to pass in a sample sizes array, otherwise I get this error message immediately when checking my writer's status:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12735), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x604001e50770 {Error Domain=NSOSStatusErrorDomain Code=-12735 "(null)"}}
Where the underlying error appears to be kCMSampleBufferError_BufferHasNoSampleSizes.
I did notice an example in Apple's documentation for creating the buffer with AAC data:
https://developer.apple.com/documentation/coremedia/1489723-cmsamplebuffercreate?language=objc
In their example, they specify a long sampleSizeArray with an entry for every single sample. Is that necessary? I don't have that information with this callback. And in our Windows implementation we didn't need that data. So I tried sending in packet.data.size() as the sample size but that doesn't seem right and it certainly doesn't produce pleasant audio.
Any ideas? Either tweaks to my calls here or different APIs I should be using to mux together streams of encoded data.
Thanks!
If you don't want to transcode, do not pass the outputSettings dictionary. You should pass nil there:
mContext->aacWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil sourceFormatHint:audioFormatDesc];
It is explained somewhere in this article:
https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/05_Export.html

error converting AudioBufferList to CMBlockBufferRef

I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and stuff) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is properly set up, as I can hear things working. I am setting the subType to GenericOutput, though, so I can do the rendering myself and get back the adjusted audio.
I am reading in the audio and I pass the CMSampleBufferRef off to copyBuffer. This puts the audio into a circular buffer that will be read later.
- (void)copyBuffer:(CMSampleBufferRef)buf {
    if (_readyForMoreBytes == NO)
    {
        return;
    }
    AudioBufferList abl;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);
    if (!bytesCopied){
        _readyForMoreBytes = NO;
        if (size > kRescueBufferSize){
            NSLog(@"Unable to allocate enough space for rescue buffer, dropping audio frame");
        } else {
            if (rescueBuffer == nil) {
                rescueBuffer = malloc(kRescueBufferSize);
            }
            rescueBufferSize = size;
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
        }
    }
    CFRelease(blockBuffer);
    if (!self.hasBuffer && bytesCopied > 0)
    {
        self.hasBuffer = YES;
    }
}
Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList passed in. Like I said before, if the output is set as RemoteIO this causes the audio to be played correctly on the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).
-(CMSampleBufferRef)processOutput
{
    if(self.offline == NO)
    {
        return NULL;
    }
    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp inTimeStamp;
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    UInt32 busNumber = 0;
    UInt32 numberFrames = 512;
    inTimeStamp.mSampleTime = 0;
    UInt32 channelCount = 2;
    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
    bufferList->mNumberBuffers = channelCount;
    for (int j=0; j<channelCount; j++)
    {
        AudioBuffer buffer = {0};
        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = numberFrames*sizeof(SInt32);
        buffer.mData = calloc(numberFrames,sizeof(SInt32));
        bufferList->mBuffers[j] = buffer;
    }
    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");
    CMSampleBufferRef sampleBufferRef = NULL;
    CMFormatDescriptionRef format = NULL;
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
    AudioStreamBasicDescription audioFormat = self.audioFormat;
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");
    return sampleBufferRef;
}
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);
    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;
    if (p.hasBuffer){
        int32_t availableBytes;
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);
        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;
        int bytesToRead = MIN(availableBytes, requestedBytesSize);
        memcpy(outSample, bufferTail, bytesToRead);
        TPCircularBufferConsume([p getBuffer], bytesToRead);
        if (availableBytes <= requestedBytesSize*2){
            [p setReadyForMoreBytes];
        }
        if (availableBytes <= requestedBytesSize) {
            p.hasBuffer = NO;
        }
    }
    return noErr;
}
The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger)
CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
invalid = NO
dataReady = NO
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc2c
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 512
sampleTimingArray[1] = {
{PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
}
dataBuffer = 0x0
The buffer list looks like this
Printing description of bufferList:
(AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers:
(UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
[0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}
Really at a loss here, hoping someone can help. Thanks,
In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio is coming from an mp4 that I shot on my iPhone 6 and then saved to my laptop.
I have read the following issues, however still to no avail, things are not working.
How to convert AudioBufferList to CMSampleBuffer?
Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results
CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731
core audio offline rendering GenericOutput
UPDATE
I poked around some more and noticed that my AudioBufferList, right before AudioUnitRender runs, looks like this:
bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048
mDataByteSize is numberFrames*sizeof(SInt32), which is 512 * 4. When I look at the AudioBufferList passed in playbackCallback, the list looks like this:
bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024
Not really sure where that other buffer is going, or where the other 1024 bytes come from...
If, when I'm finished calling Render, I do something like this
AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;
and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.
If I try setting the BufferList to have 1 mNumberBuffers, or set its mDataByteSize to numberFrames*sizeof(SInt16), I get a -50 error when calling AudioUnitRender.
UPDATE 2
I hooked up a render callback so I can inspect the output when I play the sound over the speakers. I noticed that the output that goes to the speakers also has an AudioBufferList with 2 buffers, and that the mDataByteSize during the input callback is 1024 while in the render callback it's 2048, which is the same as I have been seeing when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList I notice that the bytes in the 2 buffers are the same, which means I can just ignore the second buffer. But I am not sure how to handle the fact that the data is 2048 in size after being rendered instead of 1024 as it's being taken in. Any ideas on why that could be happening? Is it in a more raw form after going through the audio graph, and is that why the size is doubling?
Sounds like the issue you're dealing with is because of a discrepancy in the number of channels. The reason you're seeing data in blocks of 2048 instead of 1024 is because it is feeding you back two channels (stereo). Check to make sure all of your audio units are properly configured to use mono throughout the entire audio graph, including the Pitch Unit and any audio format descriptions.
One thing to especially beware of is that calls to AudioUnitSetProperty can fail - so be sure to wrap those in CheckError() as well.
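A minimal sketch of what that could look like, assuming a 16-bit, 44.1 kHz mono LPCM format like the one in your dump (plain OSStatus checks shown here; your CheckError() helper would do the same job). Repeat it for every unit in the graph, the pitch unit in particular:
// sketch: force a mono LPCM stream format on a unit and check every call
AudioStreamBasicDescription mono = {0};
mono.mSampleRate       = 44100.0;
mono.mFormatID         = kAudioFormatLinearPCM;
mono.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mono.mChannelsPerFrame = 1;
mono.mBitsPerChannel   = 16;
mono.mBytesPerFrame    = 2;
mono.mFramesPerPacket  = 1;
mono.mBytesPerPacket   = 2;

OSStatus err = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input, 0, &mono, sizeof(mono));
if (err != noErr) { /* log and bail out instead of silently continuing */ }
err = AudioUnitSetProperty(outputUnit, kAudioUnitProperty_StreamFormat,
                           kAudioUnitScope_Output, 0, &mono, sizeof(mono));
if (err != noErr) { /* log and bail out */ }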

Swift: TCP socket keep-alive

I'm developing an iOS app (using Swift) which uses a TCP connection to a TCP server. Currently, whenever I've sent something, the connection closes automatically. I want to keep the connection open/alive until I manually close it.
From this Objective-C-based question I found that it could be done like this in Objective-C:
#include <sys/socket.h>
...
CFDataRef socketData = CFReadStreamCopyProperty((__bridge CFReadStreamRef)(stream), kCFStreamPropertySocketNativeHandle);
CFSocketNativeHandle socket;
CFDataGetBytes(socketData, CFRangeMake(0, sizeof(CFSocketNativeHandle)), (UInt8 *)&socket);
CFRelease(socketData);
int on = 1;
if (setsockopt(socket, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == -1) {
    NSLog(@"setsockopt failed: %s", strerror(errno));
}
My current Swift implementation/translation looks like this:
var socketData = CFReadStreamCopyProperty(inputStream, kCFStreamPropertySocketNativeHandle) as CFDataRef
var socket: CFSocketNativeHandle
CFDataGetBytes(socketData, CFRangeMake(0, sizeof(CFSocketNativeHandle)), (UInt8).self&socket)
var on: UInt8 = 1;
if setsockopt(socket, SOL_SOCKET, SO_KEEPALIVE, &on, 255) == -1 {
}
A few notes:
inputStream is declared as: var inputStream: NSInputStream?
I'm not sure if using 255 as an alternative to sizeof(on) is a good idea.
The code happens in the function func stream(theStream:NSStream, handleEvent streamEvent:NSStreamEvent) which is required by using the NSStreamDelegate protocol.
I'm not sure if using inputStream instead of theStream (function parameter) is a good idea.
I'm getting an Xcode error on the CFDataGetBytes function. It says the following:
'NSData' is not a subtype of 'CFData'
Any idea how to fix that?
Also, how do I import/include <sys/socket.h> in my Swift file? I've seen something called bridging headers, but isn't that only for Obj-C side-by-side with Swift implementations?
CFDataRef is "toll-free bridged" to NSData, therefore the first part can be
more simply written as
var socketData = CFReadStreamCopyProperty(inputStream, kCFStreamPropertySocketNativeHandle) as NSData
var socket: CFSocketNativeHandle = 0
socketData.getBytes(&socket, length: sizeofValue(socket))
The "option_value" argument of setsockopt() is the address of an int (which is mapped to Swift as UInt32), and the last argument has to be the size of that variable:
var on: UInt32 = 1;
if setsockopt(socket, SOL_SOCKET, SO_KEEPALIVE, &on, socklen_t(sizeofValue(on))) == -1 {
    let errmsg = String.fromCString(strerror(errno))
    println("setsockopt failed: \(errno) \(errmsg)")
}
<sys/socket.h> is imported by default.

Looping AAC file with ExtAudioFileRead - bug?

Reading audio files on iOS with ExtAudioFileRead, it seems that reaching EOF completely freezes the reader. Example (assumes the _abl AudioBufferList and the _eaf ExtAudioFileRef are allocated and correctly configured):
- ( void )testRead
{
    UInt32 requestedFrames = 1024;
    UInt32 numFrames = requestedFrames;
    OSStatus error = 0;
    error = ExtAudioFileRead( _eaf, &numFrames, _abl );
    if( numFrames < requestedFrames ) // eof: want to read enough frames from the beginning of the file to reach requestedFrames and loop gaplessly
    {
        requestedFrames = requestedFrames - numFrames;
        numFrames = requestedFrames;
        // move some pointers in _abl's buffers to write at the correct offset
        error = ExtAudioFileSeek( _eaf, 0 );
        error = ExtAudioFileRead( _eaf, &numFrames, _abl );
        if( numFrames != requestedFrames ) // now this call always sets numFrames to the same value as the previous read call...
        {
            NSLog( @"Oh no!" );
        }
    }
}
No errors, always the same behavior, exactly as if the reader was stuck at the end of the file. ExtAudioFileTell confirms the requested seek, btw. Also tried keeping track of the position in the file to request only the number of frames available at eof, same result: as soon as the last packet is read, seek seems to have no effect.
Happily seeking in other circumstances.
Bug? Feature? Imminent face palm? I'd very much appreciate any help in solving this!
I'm testing this on an iPad 3 ( iOS7.1 ).
Cheers,
Gregzo
Woozah!
Gotcha, evil AudioBufferList tinkerer.
So, in addition to informing the client of the number of frames actually read, ExtAudioFileRead also sets each AudioBuffer's mDataByteSize in the AudioBufferList to the number of bytes read. Since it clamps reading to that value, not resetting it at EOF results in perpetually getting fewer frames than requested.
So, once EOF is reached, simply reset the ABL's buffer sizes.
-( void )resetABLBuffersSize: ( AudioBufferList * )abl size: ( UInt32 )size
{
    AudioBuffer * buffer;
    UInt32 i;
    for( i = 0; i < abl->mNumberBuffers; i++ )
    {
        buffer = &( abl->mBuffers[ i ] );
        buffer->mDataByteSize = size;
    }
}
Shouldn't this be documented? The official docs only describe the AudioBufferList parameter as such: One or more buffers into which the audio data is read.
Cheers,
Gregzo

Ran Analyzer: Potential Memory Leak

Hi,
I ran the Xcode analyzer and it tells me that both of the following are potential memory leaks.
I'm not sure why.
I declared midiDevices like this
@property (assign, nonatomic) NSMutableArray* midiDevices;
Here's the code for openMidiIn
-(void)openMidiIn {
    int k, endpoints;
    CFStringRef name = NULL, cname = NULL, pname = NULL;
    CFStringEncoding defaultEncoding = CFStringGetSystemEncoding();
    MIDIPortRef mport = NULL;
    MIDIEndpointRef endpoint;
    OSStatus ret;
    /* MIDI client */
    cname = CFStringCreateWithCString(NULL, "my client", defaultEncoding);
    ret = MIDIClientCreate(cname, NULL, NULL, &mclient);
    if(!ret){
        /* MIDI output port */
        pname = CFStringCreateWithCString(NULL, "outport", defaultEncoding);
        ret = MIDIInputPortCreate(mclient, pname, MidiWidgetsManagerReadProc, self, &mport);
        if(!ret){
            /* sources, we connect to all available input sources */
            endpoints = MIDIGetNumberOfSources();
            //NSLog(@"midi srcs %d\n", endpoints);
            for(k=0; k < endpoints; k++){
                endpoint = MIDIGetSource(k);
                void *srcRefCon = endpoint;
                MIDIPortConnectSource(mport, endpoint, srcRefCon);
            }
        }
    }
    if(name) CFRelease(name);
    if(pname) CFRelease(pname);
    if(cname) CFRelease(cname);
}
Thanks for your help.
[screenshot: analyzer info]
Here's more info about the error after making a few changes.
Assuming you're using ARC, that object will actually be released and dealloc'd instantly. Why it's saying you have a memory leak is confusing, but you will have a dead reference. Use strong, not assign.
