CVOpenGLESTextureCacheCreateTextureFromImage for kCVPixelFormatType_OneComponent8 - ios

I'm trying to use CVOpenGLESTextureCacheCreateTextureFromImage in order to use the resulting reference in OpenGL, with no luck.
I have a CVPixelBufferRef pixel_bufferAlpha that gets updated via CVPixelBufferCreateWithBytes, and that call succeeds (the CVReturn is kCVReturnSuccess).
Then I try to use CVOpenGLESTextureCacheCreateTextureFromImage on a one-channel texture (alphaMatte) to create a CVOpenGLESTextureRef that I can use in OpenGL.
I have initialised my CVOpenGLESTextureCacheRef _videoTextureAlphaCache:
err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
                                   nil,
                                   eaglContext,
                                   nil,
                                   &_videoTextureAlphaCache);
And my CVPixelBufferRef pixel_bufferAlpha is initialised using:
cvret = CVPixelBufferCreate(kCFAllocatorDefault,
                            width, height,
                            kCVPixelFormatType_OneComponent8,
                            (__bridge CFDictionaryRef)cvBufferProperties,
                            &pixel_bufferAlpha);
if (cvret != kCVReturnSuccess)
{
    assert(!"Failed to create shared opengl pixel_bufferAlpha");
}
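cvBufferProperties isn't shown above; it is a dictionary along these lines (sketched from memory, so treat the exact keys as an assumption rather than my verbatim code):
// Sketch of the buffer attributes — treat the keys as an assumption.
NSDictionary *cvBufferProperties = @{
    (__bridge id)kCVPixelBufferOpenGLESCompatibilityKey : @YES,
    (__bridge id)kCVPixelBufferIOSurfacePropertiesKey : @{},
};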
I'm using kCVPixelFormatType_OneComponent8 because the MTLTexture I pass to CVPixelBufferCreateWithBytes has MTLPixelFormat 10, i.e. MTLPixelFormatR8Unorm.
When I try CVOpenGLESTextureCacheCreateTextureFromImage, I get the error "not opengl compatible":
CVPixelBufferLockBaseAddress(pixel_bufferAlpha, 0);
err = noErr;
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   _videoTextureAlphaCache,
                                                   pixel_bufferAlpha,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   // internal
                                                   // GL_RGBA,
                                                   GL_ALPHA,
                                                   width,
                                                   height,
                                                   // gl format
                                                   // GL_BGRA_EXT,
                                                   // GL_R8_EXT,
                                                   GL_ALPHA8_EXT,
                                                   // gl type
                                                   // GL_UNSIGNED_INT_8_8_8_8_REV,
                                                   GL_UNSIGNED_BYTE,
                                                   NULL,
                                                   &alphaTextureGLES);
if (err != kCVReturnSuccess) {
    CVBufferRelease(pixel_bufferAlpha);
    if (err == kCVReturnInvalidPixelFormat) {
        NSLog(@"Invalid pixel format");
    }
    if (err == kCVReturnInvalidPixelBufferAttributes) {
        NSLog(@"Invalid pixel buffer attributes");
    }
    if (err == kCVReturnInvalidSize) {
        NSLog(@"invalid size");
    }
    if (err == kCVReturnPixelBufferNotOpenGLCompatible) {
        NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage::not opengl compatible");
    }
} else {
    NSLog(@"ok CVOpenGLESTextureCacheCreateTextureFromImage SUCCESS");
}
// ================================================================================ //
// clear texture cache
CVOpenGLESTextureCacheFlush(_videoTextureAlphaCache, 0);
CVPixelBufferUnlockBaseAddress(pixel_bufferAlpha, 0);
I'm not sure what I'm doing wrong here.
I'd also appreciate any pointers, as I'm not well versed in iOS texture conversions/formats...
Best,
P
Full relevant part of the code:
alphaTexture = [matteDepthTexture generateMatteFromFrame:_session.currentFrame commandBuffer:commandBuffer];
// ===============================================================
NSUInteger texBytesPerRow = alphaTexture.bufferBytesPerRow;
NSUInteger texArrayLength = alphaTexture.arrayLength;
int width = (int) alphaTexture.width;
int height = (int) alphaTexture.height;
MTLPixelFormat texPixelFormat = alphaTexture.pixelFormat;
MTLTextureType texType = alphaTexture.textureType;
int bytesPerPixel = 8;
// MTLPixelFormatR8Unorm: ordinary format with one 8-bit normalized unsigned integer component.
NSLog(@" texPixelFormat of the texture is : %d", texPixelFormat);
NSLog(@" texType of the texture is : %d", texType);
CVReturn err = noErr;
err = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                   width,
                                   height,
                                   kCVPixelFormatType_OneComponent8,
                                   alphaTexture,
                                   bytesPerPixel * width,
                                   stillImageDataReleaseCallback,
                                   alphaTexture,
                                   NULL,
                                   &pixel_bufferAlpha);
if (err != kCVReturnSuccess) {
    if (err == kCVReturnInvalidPixelFormat) {
        NSLog(@"Invalid pixel format");
    }
    if (err == kCVReturnInvalidPixelBufferAttributes) {
        NSLog(@"Invalid pixel buffer attributes");
    }
    if (err == kCVReturnInvalidSize) {
        NSLog(@"invalid size");
    }
    if (err == kCVReturnPixelBufferNotOpenGLCompatible) {
        NSLog(@"CVPixelBufferCreateWithBytes::not opengl compatible");
    }
} else {
    NSLog(@"ok CVPixelBufferCreateWithBytes SUCCESS");
}
OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixel_bufferAlpha);
if (kCVPixelFormatType_OneComponent8 == sourcePixelFormat) {
    NSLog(@" got format kCVPixelFormatType_OneComponent8");
} else {
    NSLog(@" Unknown CoreVideo pixel format : %u", (unsigned int)sourcePixelFormat);
}
// ================================================================================ //
CVPixelBufferLockBaseAddress(pixel_bufferAlpha, 0);
err = noErr;
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   _videoTextureAlphaCache,
                                                   pixel_bufferAlpha,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   // internal
                                                   // GL_RGBA,
                                                   GL_RED_EXT,
                                                   width,
                                                   height,
                                                   // gl format
                                                   // GL_BGRA_EXT,
                                                   // GL_R8_EXT,
                                                   GL_R8_EXT,
                                                   // gl type
                                                   // GL_UNSIGNED_INT_8_8_8_8_REV,
                                                   GL_UNSIGNED_BYTE,
                                                   NULL,
                                                   &alphaTextureGLES);
if (err != kCVReturnSuccess) {
    CVBufferRelease(pixel_bufferAlpha);
    if (err == kCVReturnInvalidPixelFormat) {
        NSLog(@"Invalid pixel format");
    }
    if (err == kCVReturnInvalidPixelBufferAttributes) {
        NSLog(@"Invalid pixel buffer attributes");
    }
    if (err == kCVReturnInvalidSize) {
        NSLog(@"invalid size");
    }
    if (err == kCVReturnPixelBufferNotOpenGLCompatible) {
        NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage::not opengl compatible");
    }
} else {
    NSLog(@"ok CVOpenGLESTextureCacheCreateTextureFromImage SUCCESS");
}
// ================================================================================ //
// clear texture cache
CVOpenGLESTextureCacheFlush(_videoTextureAlphaCache, 0);
CVPixelBufferUnlockBaseAddress(pixel_bufferAlpha, 0);
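stillImageDataReleaseCallback isn't shown above; it just has to match the CVPixelBufferReleaseBytesCallback signature and release whatever owns the bytes. A minimal sketch (the body is an assumption, since it depends on who owns the memory):
void stillImageDataReleaseCallback(void *releaseRefCon, const void *baseAddress)
{
    // Release whatever owns the bytes handed to CVPixelBufferCreateWithBytes,
    // e.g. balance a CFBridgingRetain made when the buffer was created.
    if (releaseRefCon != NULL) {
        CFRelease(releaseRefCon);
    }
}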

After digging a bit, I found an extension to MTLTexture from Alloy (great work there). The idea: getBytes to put the texture in a CVPixelBufferRef, then CVOpenGLESTextureCacheCreateTextureFromImage to store it in a CVOpenGLESTextureRef.
In Objective-C it looks like this (with the same initialisation for pixel_bufferAlpha and _videoTextureAlphaCache):
alphaTexture = [matteDepthTexture generateMatteFromFrame:_session.currentFrame commandBuffer:commandBuffer];
int width = (int) alphaTexture.width;
int height = (int) alphaTexture.height;
MTLPixelFormat texPixelFormat = alphaTexture.pixelFormat;
CVPixelBufferLockBaseAddress(pixel_bufferAlpha, 0);
void * CV_NULLABLE pixelBufferBaseAddress = CVPixelBufferGetBaseAddress(pixel_bufferAlpha);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixel_bufferAlpha);
[alphaTexture getBytes:pixelBufferBaseAddress
           bytesPerRow:bytesPerRow
            fromRegion:MTLRegionMake2D(0, 0, width, height)
           mipmapLevel:0];
size_t w = CVPixelBufferGetWidth(pixel_bufferAlpha);
size_t h = CVPixelBufferGetHeight(pixel_bufferAlpha);
CVReturn err = noErr;
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   _videoTextureAlphaCache,
                                                   pixel_bufferAlpha,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_LUMINANCE,
                                                   (GLsizei)w,
                                                   (GLsizei)h,
                                                   GL_LUMINANCE,
                                                   GL_UNSIGNED_BYTE,
                                                   0,
                                                   &alphaTextureGLES);
if (err != kCVReturnSuccess) {
    CVBufferRelease(pixel_bufferAlpha);
    NSLog(@"error on CVOpenGLESTextureCacheCreateTextureFromImage");
}
CVPixelBufferUnlockBaseAddress(pixel_bufferAlpha, 0);
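Once the CVOpenGLESTextureRef exists, binding it for sampling looks roughly like this (a sketch; the filter and wrap choices are mine):
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(alphaTextureGLES),
              CVOpenGLESTextureGetName(alphaTextureGLES));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);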
Hope this helps someone along the way.

Related

iOS 13.1.3 VTDecompressionSessionDecodeFrame can't decode right

CVPixelBufferRef outputPixelBuffer = NULL;
CMBlockBufferRef blockBuffer = NULL;
void *buffer = (void *)[videoUnit bufferWithH265LengthHeader];
OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                     buffer,
                                                     videoUnit.length,
                                                     kCFAllocatorNull,
                                                     NULL, 0, videoUnit.length,
                                                     0, &blockBuffer);
if (status == kCMBlockBufferNoErr) {
    CMSampleBufferRef sampleBuffer = NULL;
    const size_t sampleSizeArray[] = {videoUnit.length};
    status = CMSampleBufferCreateReady(kCFAllocatorDefault,
                                       blockBuffer,
                                       _decoderFormatDescription,
                                       1, 0, NULL, 1, sampleSizeArray,
                                       &sampleBuffer);
    if (status == kCMBlockBufferNoErr && sampleBuffer && _deocderSession) {
        VTDecodeFrameFlags flags = 0;
        VTDecodeInfoFlags flagOut = 0;
        OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(_deocderSession,
                                                                  sampleBuffer,
                                                                  flags,
                                                                  &outputPixelBuffer,
                                                                  &flagOut);
        if (decodeStatus == kVTInvalidSessionErr) {
            NSLog(@"IOS8VT: Invalid session, reset decoder session");
        } else if (decodeStatus == kVTVideoDecoderBadDataErr) {
            NSLog(@"IOS8VT: decode failed status=%d(Bad data)", decodeStatus);
        } else if (decodeStatus != noErr) {
            NSLog(@"IOS8VT: decode failed status=%d", decodeStatus);
        }
        CFRelease(sampleBuffer);
    }
    CFRelease(blockBuffer);
}
return outputPixelBuffer;
This is my code to decode the stream data. It was working well on iPhone 6s, but when the code runs on iPhone X or iPhone 11, outputPixelBuffer comes back nil. Can anyone help?
Without seeing the code for your decompressionSession creation, it is hard to say. It could be that your decompressionSession is providing the outputBuffer to the callback function provided at creation, so I highly recommend you add that part of your code too.
Providing &outputPixelBuffer in:
OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(_deocderSession,
                                                          sampleBuffer,
                                                          flags,
                                                          &outputPixelBuffer,
                                                          &flagOut);
only means that you've passed a reference; it does not mean it will be filled synchronously for certain.
I also recommend printing out the OSStatus for CMBlockBufferCreateWithMemoryBlock and CMSampleBufferCreateReady; if there are issues at those steps, you'll be able to tell.
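For illustration, session creation typically looks something like this (a sketch with placeholder names and attributes, not your actual code); decoded frames are then delivered to the callback rather than only to the pointer you pass in:
// Illustrative only — your callback and attributes will differ.
VTDecompressionOutputCallbackRecord callbackRecord;
callbackRecord.decompressionOutputCallback = didDecompress;       // your output callback
callbackRecord.decompressionOutputRefCon = (__bridge void *)self; // handed back to the callback
OSStatus createStatus = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                     _decoderFormatDescription,
                                                     NULL, // decoder specification
                                                     NULL, // destination image buffer attributes
                                                     &callbackRecord,
                                                     &_deocderSession);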

Crash while sending ARFrame CVPixelBuffer as byte array over network using gstreamer

I want to send ARFrame pixel buffer data over the network; my setup is listed below. With this setup, if I try to send the frames, the app crashes in gstreamer's C code after a few frames, but if I send the camera's AVCaptureVideoDataOutput pixel buffer instead, the stream works fine. I have set the AVCaptureSession's pixel format type to
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
so it replicates the type ARFrame gives. Please help; I am unable to find any solution. I am sorry if my English is bad or I have missed something out; do ask me for it.
My Setup
Get the pixelBuffer of the ARFrame from ARKit's didUpdateFrame delegate (see the sketch after initEncoder below)
Encode to H264 using VTCompressionSession
- (void)SendFrames:(CVPixelBufferRef)pixelBuffer :(NSTimeInterval)timeStamp
{
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    if (session == NULL)
    {
        [self initEncoder:width height:height];
    }
    CMTime presentationTimeStamp = CMTimeMake(0, 1);
    OSStatus statusCode = VTCompressionSessionEncodeFrame(session, pixelBuffer, presentationTimeStamp, kCMTimeInvalid, NULL, NULL, NULL);
    if (statusCode != noErr) {
        // End the session
        VTCompressionSessionInvalidate(session);
        CFRelease(session);
        session = NULL;
        return;
    }
    VTCompressionSessionEndPass(session, NULL, NULL);
}
- (void)initEncoder:(size_t)width height:(size_t)height
{
    OSStatus status = VTCompressionSessionCreate(NULL, (int)width, (int)height, kCMVideoCodecType_H264, NULL, NULL, NULL, OutputCallback, NULL, &session);
    NSLog(@":VTCompressionSessionCreate %d", (int)status);
    if (status != noErr)
    {
        NSLog(@"Unable to create a H264 session");
        return;
    }
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_Baseline_AutoLevel);
    VTCompressionSessionPrepareToEncodeFrames(session);
}
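For completeness, step 1 of the setup is essentially this delegate method (a sketch; only the SendFrames call reflects the code above):
// ARSessionDelegate — feeds each ARKit frame into the encoder above.
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
{
    // frame.capturedImage is a CVPixelBufferRef in
    // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
    [self SendFrames:frame.capturedImage :frame.timestamp];
}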
Get the sampleBuffer from the callback and convert it to an elementary stream
void OutputCallback(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer)
{
    if (status != noErr) {
        NSLog(@"Error encoding video, err=%lld", (int64_t)status);
        return;
    }
    if (!CMSampleBufferDataIsReady(sampleBuffer))
    {
        NSLog(@"didCompressH264 data is not ready ");
        return;
    }
    // In this example we will use a NSMutableData object to store the
    // elementary stream.
    NSMutableData *elementaryStream = [NSMutableData data];
    // This is the start code that we will write to
    // the elementary stream before every NAL unit
    static const size_t startCodeLength = 4;
    static const uint8_t startCode[] = {0x00, 0x00, 0x00, 0x01};
    // Write the SPS and PPS NAL units to the elementary stream
    CMFormatDescriptionRef description = CMSampleBufferGetFormatDescription(sampleBuffer);
    // Find out how many parameter sets there are
    size_t numberOfParameterSets;
    int AVCCHeaderLength;
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
                                                       0, NULL, NULL,
                                                       &numberOfParameterSets,
                                                       &AVCCHeaderLength);
    // Write each parameter set to the elementary stream
    for (int i = 0; i < numberOfParameterSets; i++) {
        const uint8_t *parameterSetPointer;
        int NALUnitHeaderLengthOut = 0;
        size_t parameterSetLength;
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
                                                           i,
                                                           &parameterSetPointer,
                                                           &parameterSetLength,
                                                           NULL, &NALUnitHeaderLengthOut);
        // Write the parameter set to the elementary stream
        [elementaryStream appendBytes:startCode length:startCodeLength];
        [elementaryStream appendBytes:parameterSetPointer length:parameterSetLength];
    }
    // Get a pointer to the raw AVCC NAL unit data in the sample buffer
    size_t blockBufferLength;
    uint8_t *bufferDataPointer = NULL;
    size_t lengthAtOffset = 0;
    size_t bufferOffset = 0;
    CMBlockBufferGetDataPointer(CMSampleBufferGetDataBuffer(sampleBuffer),
                                bufferOffset,
                                &lengthAtOffset,
                                &blockBufferLength,
                                (char **)&bufferDataPointer);
    // Loop through all the NAL units in the block buffer
    // and write them to the elementary stream with
    // start codes instead of AVCC length headers
    while (bufferOffset < blockBufferLength - AVCCHeaderLength) {
        // Read the NAL unit length
        uint32_t NALUnitLength = 0;
        memcpy(&NALUnitLength, bufferDataPointer + bufferOffset, AVCCHeaderLength);
        // Convert the length value from Big-endian to Little-endian
        NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
        // Write start code to the elementary stream
        [elementaryStream appendBytes:startCode length:startCodeLength];
        // Write the NAL unit without the AVCC length header to the elementary stream
        [elementaryStream appendBytes:bufferDataPointer + bufferOffset + AVCCHeaderLength
                               length:NALUnitLength];
        // Move to the next NAL unit in the block buffer
        bufferOffset += AVCCHeaderLength + NALUnitLength;
    }
    char *bytePtr = (char *)[elementaryStream mutableBytes];
    long maxSize = (long)elementaryStream.length;
    CMTime presentationtime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    vidplayer_stream(bytePtr, maxSize, (long)presentationtime.value);
}

copy CVPixelBufferRef

The code in copyPixelBufferNow is too long. :-(
@property (nonatomic, assign) CVPixelBufferRef pixelBufferNowRef;
- (CVPixelBufferRef)copyPixelBufferNow {
    if (_pixelBufferNowRef == NULL) {
        return nil;
    }
    CVPixelBufferRef pixelBufferOut = NULL;
    CVReturn ret = kCVReturnError;
    size_t height = CVPixelBufferGetHeight(_pixelBufferNowRef);
    size_t width = CVPixelBufferGetWidth(_pixelBufferNowRef);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(_pixelBufferNowRef);
    CFDictionaryRef attrs = NULL;
    const void *keys[] = { kCVPixelBufferPixelFormatTypeKey };
    // kCVPixelFormatType_420YpCbCr8Planar is YUV420
    // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange is NV12
    uint32_t v = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
    const void *values[] = { CFNumberCreate(NULL, kCFNumberSInt32Type, &v) };
    attrs = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);
    ret = CVPixelBufferCreate(NULL,
                              width,
                              height,
                              CVPixelBufferGetPixelFormatType(_pixelBufferNowRef),
                              attrs,
                              &pixelBufferOut);
    CVPixelBufferLockBaseAddress(_pixelBufferNowRef, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(pixelBufferOut, 0); // lock for writing, so no read-only flag
    CFRelease(attrs);
    if (ret == kCVReturnSuccess) {
        memcpy(CVPixelBufferGetBaseAddress(pixelBufferOut), CVPixelBufferGetBaseAddress(_pixelBufferNowRef), height * bytesPerRow);
    } else {
        printf("why copy pixlbuffer error %d", ret);
    }
    CVPixelBufferUnlockBaseAddress(_pixelBufferNowRef, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferUnlockBaseAddress(pixelBufferOut, 0);
    return pixelBufferOut;
}
- (void)setPixelBufferNowRef:(CVPixelBufferRef)sender {
    if (_pixelBufferNowRef != sender) {
        CVPixelBufferRelease(_pixelBufferNowRef);
        _pixelBufferNowRef = sender;
        CVPixelBufferRetain(_pixelBufferNowRef);
    }
}
I have a property pixelBufferNowRef.
How can I prevent it from being modified while copyPixelBufferNow runs?
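One more thing worth flagging about the copy itself: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange is bi-planar, so a single base-address memcpy only covers the first (luma) plane. A per-plane copy sketch (assuming source and destination share format, dimensions, and bytes-per-row):
// Copy every plane of a planar/bi-planar buffer (sketch, not the original code).
CVPixelBufferLockBaseAddress(src, kCVPixelBufferLock_ReadOnly);
CVPixelBufferLockBaseAddress(dst, 0);
for (size_t i = 0; i < CVPixelBufferGetPlaneCount(src); i++) {
    memcpy(CVPixelBufferGetBaseAddressOfPlane(dst, i),
           CVPixelBufferGetBaseAddressOfPlane(src, i),
           CVPixelBufferGetBytesPerRowOfPlane(src, i) * CVPixelBufferGetHeightOfPlane(src, i));
}
CVPixelBufferUnlockBaseAddress(dst, 0);
CVPixelBufferUnlockBaseAddress(src, kCVPixelBufferLock_ReadOnly);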

Choppy audio playback with AudioQueue

I have the following code, which opens an AudioQueue to play back 16-bit PCM at 44,100 Hz. It has a very odd quirk: once the initial buffers are filled, it plays back really quickly, then gets "choppy" as it waits for more bytes to come over the network.
So either I am somehow messing up the code that copies a subrange of data into the buffer, or I have told the AudioQueue to play back faster than the data comes over the network.
Does anybody have any ideas? I've been stuck for a few days now.
//
// Created by Benjamin St Pierre on 15-01-02.
// Copyright (c) 2015 Lightning Strike Solutions. All rights reserved.
//
#import <MacTypes.h>
#import "MediaPlayer.h"
@implementation MediaPlayer
@synthesize sampleQueue;
void OutputBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    // Cast userData to MediaPlayer Objective-C class instance
    MediaPlayer *mediaPlayer = (__bridge MediaPlayer *) inUserData;
    // Fill buffer.
    [mediaPlayer fillAudioBuffer:inBuffer];
    // Re-enqueue buffer.
    OSStatus err = AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    if (err != noErr)
        NSLog(@"AudioQueueEnqueueBuffer() error %d", (int) err);
}
- (void)fillAudioBuffer:(AudioQueueBufferRef)inBuffer {
    if (self.currentAudioPiece == nil || self.currentAudioPiece.duration >= self.currentAudioPieceIndex) {
        // grab latest sample from sample queue
        self.currentAudioPiece = sampleQueue.dequeue;
        self.currentAudioPieceIndex = 0;
    }
    // Check for empty sample queue
    if (self.currentAudioPiece == nil) {
        NSLog(@"Empty sample queue");
        memset(inBuffer->mAudioData, 0, kBufferByteSize);
        return;
    }
    UInt32 bytesToRead = inBuffer->mAudioDataBytesCapacity;
    while (bytesToRead > 0) {
        UInt32 maxBytesFromCurrentPiece = self.currentAudioPiece.audioData.length - self.currentAudioPieceIndex;
        // Take the min of what the current piece can provide OR what is needed to be read
        UInt32 bytesToReadNow = MIN(maxBytesFromCurrentPiece, bytesToRead);
        NSData *subRange = [self.currentAudioPiece.audioData subdataWithRange:NSMakeRange(self.currentAudioPieceIndex, bytesToReadNow)];
        // Copy what you can before continuing loop
        memcpy(inBuffer->mAudioData, subRange.bytes, subRange.length);
        bytesToRead -= bytesToReadNow;
        if (bytesToReadNow == maxBytesFromCurrentPiece) {
            @synchronized (sampleQueue) {
                self.currentAudioPiece = self.sampleQueue.dequeue;
                self.currentAudioPieceIndex = 0;
            }
        } else {
            self.currentAudioPieceIndex += bytesToReadNow;
        }
    }
    inBuffer->mAudioDataByteSize = kBufferByteSize;
}
- (void)startMediaPlayer {
    AudioStreamBasicDescription streamFormat;
    streamFormat.mFormatID = kAudioFormatLinearPCM;
    streamFormat.mSampleRate = 44100.0;
    streamFormat.mChannelsPerFrame = 2;
    streamFormat.mBytesPerFrame = 4;
    streamFormat.mFramesPerPacket = 1;
    streamFormat.mBytesPerPacket = 4;
    streamFormat.mBitsPerChannel = 16;
    streamFormat.mReserved = 0;
    streamFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    // New output queue
    OSStatus err = AudioQueueNewOutput(&streamFormat, OutputBufferCallback, (__bridge void *) self, nil, nil, 0, &outputQueue);
    if (err != noErr) {
        NSLog(@"AudioQueueNewOutput() error: %d", (int) err);
    }
    int i;
    // Enqueue buffers
    AudioQueueBufferRef buffer;
    for (i = 0; i < kNumberBuffers; i++) {
        err = AudioQueueAllocateBuffer(outputQueue, kBufferByteSize, &buffer);
        memset(buffer->mAudioData, 0, kBufferByteSize);
        buffer->mAudioDataByteSize = kBufferByteSize;
        if (err == noErr) {
            err = AudioQueueEnqueueBuffer(outputQueue, buffer, 0, nil);
            if (err != noErr) NSLog(@"AudioQueueEnqueueBuffer() error: %d", (int) err);
        } else {
            NSLog(@"AudioQueueAllocateBuffer() error: %d", (int) err);
            return;
        }
    }
    // Start queue
    err = AudioQueueStart(outputQueue, nil);
    if (err != noErr) NSLog(@"AudioQueueStart() error: %d", (int) err);
}
@end
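For reference, kBufferByteSize and kNumberBuffers are defined elsewhere in my project; something in this ballpark (illustrative values, not the real ones):
// Hypothetical sizing: three buffers of ~0.25 s each.
// 44,100 frames/s * 4 bytes/frame * 0.25 s = 44,100 bytes.
static const int    kNumberBuffers  = 3;
static const UInt32 kBufferByteSize = 44100;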
I'm going to take a swag here and say that you're getting choppy playback because you aren't advancing the write pointer for your data. I don't know Objective-C well enough to tell you if this syntax is correct, but here's what I think you need to add:
UInt32 bytesWritten = 0;
while (bytesToRead > 0) {
    ....
    memcpy((uint8_t *)inBuffer->mAudioData + bytesWritten, subRange.bytes, subRange.length);
    bytesWritten += bytesToReadNow; // move the write position forward
    bytesToRead -= bytesToReadNow;
    ...
}
(mAudioData is declared const in AudioQueueBuffer, so you can't increment the pointer itself; tracking a separate byte offset achieves the same thing.)

VTDecompressionSessionDecodeFrame fails with code -kVTVideoDecoderBadDataErr

I have been trying to decode H264 using VTDecompressionSessionDecodeFrame but I'm getting errors. The parameter sets were created previously and look fine, and nothing errors before this point, so it may have something to do with my understanding of the timing information in the CMSampleBufferRef. Any input would be much appreciated.
void didDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef imageBuffer, CMTime presentationTimeStamp, CMTime presentationDuration) {
    NSLog(@"In decompression callback routine");
}
void decodeH264() {
    VTDecodeInfoFlags infoFlags;
    [NALPacket appendBytes:NalPacketSize length:4];
    [NALPacket appendBytes:&NALCODE length:1];
    [NALPacket appendBytes:startPointer length:buflen];
    void *samples = (void *)[NALTestPacket bytes];
    blockBuffer = NULL;
    // add the NAL raw data to the CMBlockBuffer
    status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                samples,
                                                [NALPacket length],
                                                kCFAllocatorDefault,
                                                NULL,
                                                0,
                                                [NALPacket length],
                                                0,
                                                &blockBuffer);
    const size_t *samplesizeArrayPointer;
    size_t sampleSizeArray = buflen;
    samplesizeArrayPointer = &sampleSizeArray;
    int32_t timeSpan = 1000000;
    CMTime PTime = CMTimeMake(presentationTime, timeSpan);
    CMSampleTimingInfo timingInfo;
    timingInfo.presentationTimeStamp = PTime;
    timingInfo.duration = kCMTimeZero;
    timingInfo.decodeTimeStamp = kCMTimeInvalid;
    status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, YES, NULL, NULL, formatDescription, 1, 1, &timingInfo, 0, samplesizeArrayPointer, &sampleBuffer);
    CFArrayRef attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true);
    for (CFIndex i = 0; i < CFArrayGetCount(attachmentsArray); ++i) {
        CFMutableDictionaryRef attachments = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachmentsArray, i);
        CFDictionarySetValue(attachments, kCMSampleAttachmentKey_DoNotDisplay, kCFBooleanFalse);
        CFDictionarySetValue(attachments, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
    }
    // I Frame
    status = VTDecompressionSessionDecodeFrame(decoder, sampleBuffer, kVTDecodeFrame_1xRealTimePlayback, (void *)CFBridgingRetain(currentTime), &infoFlags);
    if (status != noErr) {
        NSLog(@"Decode error");
    }
}
Discovered why this wasn't working: I had forgotten to set the CMSampleBufferRef to NULL each time a new sample was captured.
sampleBuffer = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, YES, NULL, NULL, formatDescription, 1, 1, &timingInfo, 0, samplesizeArrayPointer, &sampleBuffer);
