My app records video and analyzes the generated frames under iOS 11.4, using Xcode 10.0 as the IDE. I succeeded in recording video with AVCaptureMovieFileOutput, but since I need to analyze the frames I transitioned to AVAssetWriter and modeled my code after RosyWriter [ https://github.com/WildDylan/appleSample/tree/master/RosyWriter ]. The code is written in Objective-C.
I am stuck on a problem inside the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. After the first frame is captured, the AVAssetWriter is configured along with its inputs (video and audio), using settings extracted from that first frame. Once the user selects record, each captured sampleBuffer is analyzed and written. I tried AVAssetWriter's startSessionAtSourceTime:, but something is clearly wrong with the CMTime that CMSampleBufferGetPresentationTimeStamp returns from the sample buffer, even though the sampleBuffer log itself seems to show a CMTime with valid values.
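For reference, the writer setup triggered by that first frame follows the RosyWriter pattern and looks roughly like this (a sketch; recordingURL and videoDataOutput stand in for my actual ivars, and error handling is trimmed):

- (void)setupVideoRecorder
{
    NSError *error = nil;
    assetWriter = [[AVAssetWriter alloc] initWithURL:recordingURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];
    // Derive compression settings from the capture output, and pass the first
    // frame's format description along as the source format hint.
    NSDictionary *videoSettings =
        [videoDataOutput recommendedVideoSettingsForAssetWriterWithOutputFileType:AVFileTypeQuickTimeMovie];
    videoAWInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                      outputSettings:videoSettings
                                                    sourceFormatHint:self.outputVideoFormatDescription];
    videoAWInput.expectsMediaDataInRealTime = YES;
    if ([assetWriter canAddInput:videoAWInput]) [assetWriter addInput:videoAWInput];
    [assetWriter startWriting];
}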
If I implement:
CMTime sampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[self->assetWriter startSessionAtSourceTime:sampleTime];
the error generated is '*** -[AVAssetWriter startSessionAtSourceTime:] invalid parameter not satisfying: CMTIME_IS_NUMERIC(startTime)'.
If I use [self->assetWriter startSessionAtSourceTime:kCMTimeZero] instead, the warning "warning: could not execute support code to read Objective-C class data in the process. This may reduce the quality of type information available." is generated.
When I log sampleTime I read value=0, timescale=0, epoch=0 & flags=0, i.e. an invalid CMTime (the kCMTimeFlags_Valid flag is not set). I also log the sampleBuffer, shown below, followed by the relevant code:
SampleBuffer Content =
2018-10-17 12:07:04.540816+0300 MyApp[10664:2111852] -[CameraCaptureManager captureOutput:didOutputSampleBuffer:fromConnection:] : sampleBuffer - CMSampleBuffer 0x100e388c0 retainCount: 1 allocator: 0x1c03a95e0
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
buffer-level attachments:
Orientation(P) = 1
{Exif} (P) = <CFBasicHash 0x28161ce80 [0x1c03a95e0]>{type = mutable dict, count = 24,
entries => .....A LOT OF CAMERA DATA HERE.....
}
DPIWidth (P) = 72
{TIFF} (P) = <CFBasicHash 0x28161c540 [0x1c03a95e0]>{type = mutable dict, count = 7,
entries => .....MORE CAMERA DATA HERE.....
}
DPIHeight (P) = 72
{MakerApple}(P) = {
1 = 3;
10 = 0;
14 = 0;
3 = {
epoch = 0;
flags = 1;
timescale = 1000000000;
value = 390750488472916;
};
4 = 0;
5 = 221;
6 = 211;
7 = 1;
8 = (
"-0.04894018",
"-0.6889497",
"-0.7034443"
);
9 = 0;
}
formatDescription = <CMVideoFormatDescription 0x280ddc780 [0x1c03a95e0]> {
mediaType:'vide'
mediaSubType:'BGRA'
mediaSpecific: {
codecType: 'BGRA' dimensions: 720 x 1280
}
extensions: {<CFBasicHash 0x28161f880 [0x1c03a95e0]>{type = immutable dict, count = 5,
entries =>
0 : <CFString 0x1c0917068 [0x1c03a95e0]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x1c09170a8 [0x1c03a95e0]>{contents = "ITU_R_601_4"}
1 : <CFString 0x1c09171c8 [0x1c03a95e0]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x1c0917088 [0x1c03a95e0]>{contents = "ITU_R_709_2"}
2 : <CFString 0x1c093f348 [0x1c03a95e0]>{contents = "CVBytesPerRow"} = <CFNumber 0x81092876519e5903 [0x1c03a95e0]>{value = +2880, type = kCFNumberSInt32Type}
3 : <CFString 0x1c093f3c8 [0x1c03a95e0]>{contents = "Version"} = <CFNumber 0x81092876519eed23 [0x1c03a95e0]>{value = +2, type = kCFNumberSInt32Type}
5 : <CFString 0x1c0917148 [0x1c03a95e0]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x1c0917088 [0x1c03a95e0]>{contents = "ITU_R_709_2"}
}
}
}
sbufToTrackReadiness = 0x0
numSamples = 1
sampleTimingArray[1] = {
{PTS = {390750488483992/1000000000 = 390750.488}, DTS = {INVALID}, duration = {INVALID}},
}
imageBuffer = 0x2832ad2c0
====================================================
//AVCaptureVideoDataOutput delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (connection == videoConnection)
    {
        if (self.outputVideoFormatDescription == NULL)
        {
            self.outputVideoFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
            [self setupVideoRecorder];
        }
        else if (self.status == RecorderRecording)
        {
            NSLog(@"%s : self.outputVideoFormatDescription - %@", __FUNCTION__, self.outputVideoFormatDescription);
            [self.cmDelegate manager:self capturedFrameBuffer:sampleBuffer];
            NSLog(@"%s : sampleBuffer - %@", __FUNCTION__, sampleBuffer);
            dispatch_async(vidWriteQueue, ^
            {
                if (!self->wroteFirstFrame)
                {
                    CMTime sampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                    NSLog(@"%s : sampleTime value - %lld, timescale - %i, epoch - %lli, flags - %u", __FUNCTION__, sampleTime.value, sampleTime.timescale, sampleTime.epoch, sampleTime.flags);
                    [self->assetWriter startSessionAtSourceTime:sampleTime];
                    self->wroteFirstFrame = YES;
                }
                if (self->videoAWInput.readyForMoreMediaData)
                {
                    BOOL appendSuccess = [self->videoAWInput appendSampleBuffer:sampleBuffer];
                    NSLog(@"%s : appendSuccess - %i", __FUNCTION__, appendSuccess);
                    if (!appendSuccess) NSLog(@"%s : failed to append video buffer - %@", __FUNCTION__, self->assetWriter.error.localizedDescription);
                }
            });
        }
    }
    else if (connection == audioConnection)
    {
    }
}
My bad... my problem was that I was dispatching the frame handling onto the very queue I had already passed to AVCaptureVideoDataOutput's setSampleBufferDelegate:queue:, recursively putting work on a queue from within that same queue. Posting the answer in case another idiot, like me, makes the same stupid mistake...
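For anyone hitting the same assertion: the cure is to give the capture delegate its own serial queue, distinct from any queue you later dispatch onto, and to retain the sample buffer across any async hop, since capture buffers come from a small recycled pool. A rough sketch (queue names are mine):

// A dedicated serial queue for the AVCaptureVideoDataOutput delegate...
dispatch_queue_t captureQueue = dispatch_queue_create("com.myapp.capture", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:captureQueue];

// ...and, inside the delegate, keep the buffer alive if it crosses to the writer queue.
CFRetain(sampleBuffer);
dispatch_async(vidWriteQueue, ^{
    if (self->videoAWInput.readyForMoreMediaData) {
        BOOL ok = [self->videoAWInput appendSampleBuffer:sampleBuffer];
        if (!ok) NSLog(@"%s : failed to append video buffer - %@", __FUNCTION__, self->assetWriter.error.localizedDescription);
    }
    CFRelease(sampleBuffer);
});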
Related
I'm having some trouble streaming raw H.265 over RTSP using VTDecompressionSessionDecodeFrame. The three main steps I perform are the following:
OSStatus status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(kCFAllocatorDefault, 3, parameterSetPointers, parameterSetSizes, (int)kNALUHeaderSize, NULL, &formatDescription);
OSStatus status = VTDecompressionSessionCreate(NULL, formatDescription, NULL, NULL, &decompressionCallBack, &_decompressionSession);
VTDecompressionSessionDecodeFrame(self.decompressionSession, sampleBuffer, flags, (void *)CFBridgingRetain(currentTime), NULL);
The format description looks like this:
<CMVideoFormatDescription 0x1c0044d10 [0x1b2bb1310]> {
mediaType:'vide'
mediaSubType:'hvc1'
mediaSpecific: {
codecType: 'hvc1' dimensions: 640 x 360
}
extensions: {<CFBasicHash 0x1c00742c0 [0x1b2bb1310]>{type = immutable dict, count = 10,
entries =>
0 : <CFString 0x1ab8dd470 [0x1b2bb1310]>{contents = "SampleDescriptionExtensionAtoms"} = <CFBasicHash 0x1c0074140 [0x1b2bb1310]>{type = immutable dict, count = 1,
entries =>
0 : hvcC = <CFData 0x1c01678c0 [0x1b2bb1310]>{length = 115, capacity = 115, bytes = 0x01016000000000008000000042f000fc ... 4401d172b0942b12}
}
1 : <CFString 0x1ab8b8ff8 [0x1b2bb1310]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x1ab8b9018 [0x1b2bb1310]>{contents = "ITU_R_709_2"}
2 : <CFString 0x1ab8b92b8 [0x1b2bb1310]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x1ab8b92f8 [0x1b2bb1310]>{contents = "Left"}
3 : <CFString 0x1ab8b8f38 [0x1b2bb1310]>{contents = "CVPixelAspectRatio"} = <CFBasicHash 0x1c0074280 [0x1b2bb1310]>{type = immutable dict, count = 2,
entries =>
1 : <CFString 0x1ab8b8f58 [0x1b2bb1310]>{contents = "HorizontalSpacing"} = <CFNumber 0xb000000000000012 [0x1b2bb1310]>{value = +1, type = kCFNumberSInt32Type}
2 : <CFString 0x1ab8b8f78 [0x1b2bb1310]>{contents = "VerticalSpacing"} = <CFNumber 0xb000000000000012 [0x1b2bb1310]>{value = +1, type = kCFNumberSInt32Type}
}
5 : <CFString 0x1ab8b90d8 [0x1b2bb1310]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x1ab8b9018 [0x1b2bb1310]>{contents = "ITU_R_709_2"}
6 : <CFString 0x1ab8dd670 [0x1b2bb1310]>{contents = "FullRangeVideo"} = <CFBoolean 0x1b2bb1868 [0x1b2bb1310]>{value = true}
8 : <CFString 0x1ab8b9158 [0x1b2bb1310]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x1ab8b9018 [0x1b2bb1310]>{contents = "ITU_R_709_2"}
10 : <CFString 0x1ab8b92d8 [0x1b2bb1310]>{contents = "CVImageBufferChromaLocationBottomField"} = <CFString 0x1ab8b92f8 [0x1b2bb1310]>{contents = "Left"}
11 : <CFString 0x1ab8ddc10 [0x1b2bb1310]>{contents = "BitsPerComponent"} = <CFNumber 0xb000000000000080 [0x1b2bb1310]>{value = +8, type = kCFNumberSInt8Type}
12 : <CFString 0x1ab8b8e78 [0x1b2bb1310]>{contents = "CVFieldCount"} = <CFNumber 0xb000000000000012 [0x1b2bb1310]>{value = +1, type = kCFNumberSInt32Type}
}
}
}
When decoding a frame I get OSStatus -12909 (kVTVideoDecoderBadDataErr) in the decompression callback. The format description is created without error, so I think the VPS, SPS and PPS are handled correctly, and the decompression session is also created successfully. I can also successfully decode and render other HEVC streams.
The solution also works when streaming raw H.264, if CMVideoFormatDescriptionCreateFromHEVCParameterSets is changed to CMVideoFormatDescriptionCreateFromH264ParameterSets.
Any ideas what could be wrong? Is the format description even supported? Sadly there isn't too much documentation about HEVC decoding from Apple's side.
I can play my H.265 stream with ffmpeg, so I assume the stream itself is correctly formatted.
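One detail worth double-checking in setups like this (not confirmed as the cause here): a format description created from parameter sets describes length-prefixed NAL units, so Annex B start codes (00 00 00 01) coming off the RTSP wire have to be rewritten as big-endian length fields matching the NALUnitHeaderLength passed to CMVideoFormatDescriptionCreateFromHEVCParameterSets. A minimal sketch, assuming a 4-byte header and one complete NAL unit per buffer:

#import <CoreFoundation/CoreFoundation.h>

// Overwrite a 4-byte Annex B start code with a 4-byte big-endian NAL length, in place.
// `nalu` points at 00 00 00 01 followed by one complete NAL unit; `naluSize` is the total size.
static void RewriteStartCodeAsLength(uint8_t *nalu, size_t naluSize)
{
    uint32_t payloadLength = CFSwapInt32HostToBig((uint32_t)(naluSize - 4));
    memcpy(nalu, &payloadLength, sizeof(payloadLength));
}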
I'm manually decoding an H.264 RTSP stream using ffmpeg and trying to save the uncompressed frames using AVAssetWriter and AVAssetWriterInput.
I'm getting the following error when calling -[AVAssetWriterInput appendSampleBuffer:] -
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x170059530 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed}
The CMSampleBuffer contains BGRA frames and looks like this -
CMSampleBuffer 0x159d12900 retainCount: 1 allocator: 0x1b3aa3bb8
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMVideoFormatDescription 0x17405bd50 [0x1b3aa3bb8]> {
mediaType:'vide'
mediaSubType:'BGRA'
mediaSpecific: {
codecType: 'BGRA'
dimensions: 720 x 1280
}
extensions: {<CFBasicHash 0x1742652c0 [0x1b3aa3bb8]>{type = immutable dict, count = 4,
entries =>
0 : <CFString 0x1addb17c8 [0x1b3aa3bb8]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x1addb1808 [0x1b3aa3bb8]>{contents = "ITU_R_601_4"}
1 : <CFString 0x1addb1928 [0x1b3aa3bb8]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x1addb17e8 [0x1b3aa3bb8]>{contents = "ITU_R_709_2"}
2 : <CFString 0x1adde3800 [0x1b3aa3bb8]>{contents = "CVBytesPerRow"} = <CFNumber 0xb00000000000b402 [0x1b3aa3bb8]>{value = +2880, type = kCFNumberSInt32Type}
3 : <CFString 0x1adde3880 [0x1b3aa3bb8]>{contents = "Version"} = <CFNumber 0xb000000000000022 [0x1b3aa3bb8]>{value = +2, type = kCFNumberSInt32Type}
}
}
}
sbufToTrackReadiness = 0x0
numSamples = 1
sampleTimingArray[1] = {
{PTS = {3000/90000 = 0.033}, DTS = {INVALID}, duration = {INVALID}},
}
imageBuffer = 0x17413ebe0
I've looked at the following question and its answers as well, but they don't seem to explain the issue I'm having (the format I use is a supported pixel format):
Why won't AVFoundation accept my planar pixel buffers on an iOS device?
Any help will be appreciated!
FYI - when I save BGRA CMSampleBuffers coming from the iPhone camera it just works; if needed I can paste an example of such a CMSampleBuffer as well.
I'll answer myself, as I've found the issue -
The CMSampleBuffer wasn't IOSurface-backed. I had used CVPixelBufferCreateWithBytes, which creates a CVPixelBuffer without IOSurface backing; as soon as I switched to CVPixelBufferCreate and passed the kCVPixelBufferIOSurfacePropertiesKey key, it worked.
https://developer.apple.com/library/content/qa/qa1781/_index.html has all the information about creating IOSurface-backed CVPixelBuffers.
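A minimal sketch of the working path (width, height, srcBytes and srcBytesPerRow stand in for the decoded ffmpeg frame's properties):

// Create an IOSurface-backed pixel buffer, then copy the decoded BGRA rows into it.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn rc = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                  kCVPixelFormatType_32BGRA,
                                  (__bridge CFDictionaryRef)attrs, &pixelBuffer);
if (rc == kCVReturnSuccess) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    for (size_t row = 0; row < height; row++) {
        // Copy row by row, since the buffer's bytes-per-row may differ from the source's.
        memcpy(dst + row * dstBytesPerRow, srcBytes + row * srcBytesPerRow,
               MIN(dstBytesPerRow, srcBytesPerRow));
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}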
I am trying to decode video with the "PNG, Timecode" codecs with AVFoundation and get the error
Error Domain=AVFoundationErrorDomain Code=-11833 "Cannot Decode" UserInfo={NSLocalizedFailureReason=The decoder required for this media cannot be found., NSUnderlyingError=0x610000044050 {Error Domain=NSOSStatusErrorDomain Code=-12906 "(null)"}, AVErrorMediaTypeKey=vide, NSLocalizedDescription=Cannot Decode}
from AVAssetReader. Maybe I need to use a specific pixel format type for the AVAssetReaderTrackOutput?
Info from videoTrack.formatDescriptions:
<CMVideoFormatDescription 0x618000042ac0 [0x7fff7b281390]> {
mediaType:'vide'
mediaSubType:'png '
mediaSpecific: {
codecType: 'png ' dimensions: 2852 x 1871
}
extensions: {<CFBasicHash 0x61800006c180 [0x7fff7b281390]>{type = immutable dict, count = 9,
entries =>
0 : <CFString 0x7fff7eee0750 [0x7fff7b281390]>{contents = "TemporalQuality"} = <CFNumber 0x27 [0x7fff7b281390]>{value = +0, type = kCFNumberSInt32Type}
1 : <CFString 0x7fff7eee0790 [0x7fff7b281390]>{contents = "Version"} = <CFNumber 0x117 [0x7fff7b281390]>{value = +1, type = kCFNumberSInt16Type}
2 : <CFString 0x7fff7eee0590 [0x7fff7b281390]>{contents = "FormatName"} = PNG
3 : <CFString 0x7fff7ad964d8 [0x7fff7b281390]>{contents = "CVPixelAspectRatio"} = <CFBasicHash 0x61800006c140 [0x7fff7b281390]>{type = immutable dict, count = 2,
entries =>
1 : <CFString 0x7fff7ad964f8 [0x7fff7b281390]>{contents = "HorizontalSpacing"} = <CFNumber 0xb2427 [0x7fff7b281390]>{value = +2852, type = kCFNumberSInt32Type}
2 : <CFString 0x7fff7ad96518 [0x7fff7b281390]>{contents = "VerticalSpacing"} = <CFNumber 0xb2427 [0x7fff7b281390]>{value = +2852, type = kCFNumberSInt32Type}
}
4 : <CFString 0x7fff7eee0550 [0x7fff7b281390]>{contents = "VerbatimSampleDescription"} = <CFData 0x618000140370 [0x7fff7b281390]>{length = 106, capacity = 106, bytes = 0x0000006a706e67200000000000000001 ... 00000b2400000000}
5 : <CFString 0x7fff7eee07b0 [0x7fff7b281390]>{contents = "RevisionLevel"} = <CFNumber 0x117 [0x7fff7b281390]>{value = +1, type = kCFNumberSInt16Type}
6 : <CFString 0x7fff7eee0770 [0x7fff7b281390]>{contents = "SpatialQuality"} = <CFNumber 0x40027 [0x7fff7b281390]>{value = +1024, type = kCFNumberSInt32Type}
7 : <CFString 0x7fff7eee07d0 [0x7fff7b281390]>{contents = "Vendor"} = appl
8 : <CFString 0x7fff7eee05b0 [0x7fff7b281390]>{contents = "Depth"} = <CFNumber 0x2017 [0x7fff7b281390]>{value = +32, type = kCFNumberSInt16Type}
}
}
}
Can I decode this video with AVFoundation?
Also, if I open this video with QuickTime Player and re-save it, the result uses the "Apple ProRes 4444, Timecode" codecs and can be decoded with AVFoundation, but the file size increases from 800 KB to 2 MB.
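For reference, the AVAssetReaderTrackOutput configuration I am referring to would look something like this (a sketch; videoTrack is the video track of the asset):

// Ask the reader to decompress to BGRA rather than hand back raw 'png ' samples.
NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:outputSettings];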
Thanks for any help!
You can only open H.264-encoded video with those APIs.
I've got a problem I can't get my head around.
First I create an H.264 compression session with VTCompressionSessionCreate; then, once I start feeding images, I get a CMSampleBufferRef sampleBuffer in my compression callback, as expected.
Just for debugging the code stream, I then create a decompression session with VTDecompressionSessionCreate and feed the 'sampleBuffer' containing the H.264 stream to VTDecompressionSessionDecodeFrame, expecting a CVImageBufferRef imageBuffer in my decompression callback.
Now to the problem:
If I create the VTDecompressionSession using the format description taken straight from the compression callback's 'sampleBuffer', like this:
CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
Everything works as expected and I get CVImageBufferRefs in my decompression callback.
However, my intention is to send the data over a network, so I need to build my format description from the in-stream SPS and PPS information.
So I must 'fake' it by first extracting the SPS and PPS and then using them like this:
CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
size_t spsSize, ppsSize;
size_t parmCount;
const uint8_t* sps, *pps;
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sps, &spsSize, &parmCount, NULL );
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pps, &ppsSize, &parmCount, NULL );
const uint8_t* const parameterSetPointers[2] = {sps, pps};
const size_t parameterSetSizes[2] = {spsSize, ppsSize};
CMFormatDescriptionRef format2;
OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &format2);
I would expect format and format2 to contain the same information but:
format = <CMVideoFormatDescription 0x17004fd50 [0x19483ac80]> {
mediaType:'vide'
mediaSubType:'avc1'
mediaSpecific: {
codecType: 'avc1' dimensions: 1280 x 720
}
extensions: {<CFBasicHash 0x170270cc0 [0x19483ac80]>{type = immutable dict, count = 2,
entries =>
0 : <CFString 0x194935fa0 [0x19483ac80]>{contents = "SampleDescriptionExtensionAtoms"} = <CFBasicHash 0x170270c40 [0x19483ac80]>{type = immutable dict, count = 1,
entries =>
2 : <CFString 0x194939fa0 [0x19483ac80]>{contents = "avcC"} = <CFData 0x1700c9920 [0x19483ac80]>{length = 35, capacity = 35, bytes = 0x0164001fffe100106764001fac56c050 ... 28ee3cb0fdf8f800}
}
2 : <CFString 0x194936000 [0x19483ac80]>{contents = "FormatName"} = <CFString 0x17003a160 [0x19483ac80]>{contents = "H.264"}
}
}
}
format2:
format2 = <CMVideoFormatDescription 0x174051c70 [0x19483ac80]> {
mediaType:'vide'
mediaSubType:'avc1'
mediaSpecific: {
codecType: 'avc1' dimensions: 1280 x 720
}
extensions: {<CFBasicHash 0x17426f9c0 [0x19483ac80]>{type = immutable dict, count = 5,
entries =>
0 : <CFString 0x19499a608 [0x19483ac80]>{contents = "CVImageBufferChromaLocationBottomField"} = <CFString 0x19499a648 [0x19483ac80]>{contents = "Center"}
1 : <CFString 0x19499a328 [0x19483ac80]>{contents = "CVFieldCount"} = <CFNumber 0xb000000000000012 [0x19483ac80]>{value = +1, type = kCFNumberSInt32Type}
3 : <CFString 0x194935fa0 [0x19483ac80]>{contents = "SampleDescriptionExtensionAtoms"} = <CFBasicHash 0x17426b100 [0x19483ac80]>{type = immutable dict, count = 1,
entries =>
2 : <CFString 0x174031560 [0x19483ac80]>{contents = "avcC"} = <CFData 0x1740c4910 [0x19483ac80]>{length = 35, capacity = 35, bytes = 0x0164001fffe100106764001fac56c050 ... 28ee3cb0fdf8f800}
}
5 : <CFString 0x19499a5e8 [0x19483ac80]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x19499a648 [0x19483ac80]>{contents = "Center"}
6 : <CFString 0x1949360e0 [0x19483ac80]>{contents = "FullRangeVideo"} = <CFBoolean 0x19483b030 [0x19483ac80]>{value = false}
}
}
}
format works; format2 doesn't, and VTDecompressionSessionDecodeFrame returns error -12916.
Thank you for helping.
Solved it. It was the way I created the CMFormatDescriptionRef describing the code stream that was causing the error.
The SPS and PPS were taken from a CMSampleBuffer, and creating a format description from them with CMVideoFormatDescriptionCreateFromH264ParameterSets worked; so far so good. But in the same application I turned the stream around and decoded the picture using that same CMSampleBuffer. That does not work and was causing the error. I had to copy the payload into an NSData first and then create a new CMSampleBuffer from that NSData. Then it works.
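A rough sketch of that copy step (error handling omitted; encodedBytes and encodedLength stand in for the extracted payload, and the NSData must stay alive as long as the block buffer is in use):

// Copy the payload into memory we own, wrap it in a CMBlockBuffer, and
// rebuild the sample buffer against the parameter-set format description.
NSData *payload = [NSData dataWithBytes:encodedBytes length:encodedLength]; // a real copy

CMBlockBufferRef blockBuffer = NULL;
CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                   (void *)payload.bytes, payload.length,
                                   kCFAllocatorNull, // NSData keeps ownership of the bytes
                                   NULL, 0, payload.length, 0, &blockBuffer);

CMSampleBufferRef rebuiltBuffer = NULL;
const size_t sampleSizes[] = { payload.length };
CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, NULL,
                     format2, // from CMVideoFormatDescriptionCreateFromH264ParameterSets
                     1, 0, NULL, 1, sampleSizes, &rebuiltBuffer);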
I am trying to convert a pixelBuffer extracted from AVPlayerItemVideoOutput to a CIImage but always get nil.
The code:
if ([videoOutput_ hasNewPixelBufferForItemTime:player_.internalPlayer.currentItem.currentTime])
{
    CVPixelBufferRef pixelBuffer = [videoOutput_ copyPixelBufferForItemTime:player_.internalPlayer.currentItem.currentTime
                                                          itemTimeForDisplay:nil];
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // image is always nil
    CIFilter *filter = [FilterCollection filterSepiaForImage:image];
    image = filter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimg = [context createCGImage:image fromRect:[image extent]];
    [pipLayer_ setContents:(id)CFBridgingRelease(cgimg)];
    CVBufferRelease(pixelBuffer); // copyPixelBufferForItemTime: returns a +1 reference
}
Below are the details of a pixelBuffer used to create the CIImage (which always comes back nil):
$0 = 0x09b48720 <CVPixelBuffer 0x9b48720 width=624 height=352 bytesPerRow=2496 pixelFormat=BGRA iosurface=0x0 attributes=<CFBasicHash 0x98241d0 [0x1d244d8]>{type = immutable dict, count = 3,
entries =>
0 : <CFString 0x174cf4 [0x1d244d8]>{contents = "Height"} = <CFNumber 0x9a16e70 [0x1d244d8]>{value = +352, type = kCFNumberSInt32Type}
1 : <CFString 0x174ce4 [0x1d244d8]>{contents = "Width"} = <CFNumber 0x9a109d0 [0x1d244d8]>{value = +624, type = kCFNumberSInt32Type}
2 : <CFString 0x1750e4 [0x1d244d8]>{contents = "PixelFormatType"} = <CFArray 0x1090ddd0 [0x1d244d8]>{type = mutable-small, count = 1, values = (
0 : <CFNumber 0x7a28050 [0x1d244d8]>{value = +1111970369, type = kCFNumberSInt32Type}
)}
}
propagatedAttachments=<CFBasicHash 0x9b485c0 [0x1d244d8]>{type = mutable dict, count = 6,
entries =>
0 : <CFString 0x174ff4 [0x1d244d8]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x174f84 [0x1d244d8]>{contents = "ITU_R_709_2"}
2 : <CFString 0x174f74 [0x1d244d8]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x174f94 [0x1d244d8]>{contents = "ITU_R_601_4"}
9 : <CFString 0x174f14 [0x1d244d8]>{contents = "CVPixelAspectRatio"} = <CFBasicHash 0x9b1bc30 [0x1d244d8]>{type = immutable dict, count = 2,
entries =>
0 : <CFString 0x174f34 [0x1d244d8]>{contents = "VerticalSpacing"} = <CFNumber 0x9b0f730 [0x1d244d8]>{value = +1, type = kCFNumberSInt32Type}
2 : <CFString 0x174f24 [0x1d244d8]>{contents = "HorizontalSpacing"} = <CFNumber 0x9b0f730 [0x1d244d8]>{value = +1, type = kCFNumberSInt32Type}
}
10 : <CFString 0x174fb4 [0x1d244d8]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x174fd4 [0x1d244d8]>{contents = "SMPTE_C"}
11 : <CFString 0x174e24 [0x1d244d8]>{contents = "QTMovieTime"} = <CFBasicHash 0x7a47940 [0x1d244d8]>{type = immutable dict, count = 2,
entries =>
0 : <CFString 0x174e44 [0x1d244d8]>{contents = "TimeScale"} = <CFNumber 0x7a443d0 [0x1d244d8]>{value = +90000, type = kCFNumberSInt32Type}
2 : <CFString 0x174e34 [0x1d244d8]>{contents = "TimeValue"} = <CFNumber 0x7a476e0 [0x1d244d8]>{value = +1047297, type = kCFNumberSInt64Type}
}
12 : <CFString 0x174eb4 [0x1d244d8]>{contents = "CVFieldCount"} = <CFNumber 0x9b0f730 [0x1d244d8]>{value = +1, type = kCFNumberSInt32Type}
}
nonPropagatedAttachments=<CFBasicHash 0x9b44b40 [0x1d244d8]>{type = mutable dict, count = 0,
entries =>
}
>
Solved this problem - I was trying it on the simulator, while it seems this is only supported on real devices.
It seems that you do not have the IOSurface properties key defined. Not 100% sure it will solve your issue, but try this:
CFDictionaryRef attrs = (__bridge CFDictionaryRef)@{
    (id)kCVPixelBufferWidthKey: @(WIDTH_OF_YOUR_VIDEO_GOES_HERE),
    (id)kCVPixelBufferHeightKey: @(HEIGHT_OF_YOUR_VIDEO_GOES_HERE),
    (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange), // or kCVPixelFormatType_32BGRA
    (id)kCVPixelBufferIOSurfacePropertiesKey: @{},
};
And pass this dictionary along with the other keys when creating your pixel buffer.
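For example (a usage sketch):
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    WIDTH_OF_YOUR_VIDEO_GOES_HERE, HEIGHT_OF_YOUR_VIDEO_GOES_HERE,
                    kCVPixelFormatType_32BGRA, attrs, &pixelBuffer);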
That's not the way you set the contents of a layer; this is:
(__bridge id)uiImage.CGImage;
There's no way you got what you had working on a device or in the simulator.