iOS VTDecompressionSessionDecodeFrame error -12909 when decoding HEVC - ios

I am having trouble streaming raw H.265 over RTSP and decoding it with VTDecompressionSessionDecodeFrame. The three main steps I perform are the following:
OSStatus status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(kCFAllocatorDefault, 3, parameterSetPointers, parameterSetSizes, (int)kNALUHeaderSize, NULL, &formatDescription);
OSStatus status = VTDecompressionSessionCreate(NULL, formatDescription, NULL, NULL, &decompressionCallBack, &_decompressionSession);
VTDecompressionSessionDecodeFrame(self.decompressionSession, sampleBuffer, flags, (void *)CFBridgingRetain(currentTime), NULL);
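(Editor's note: -12909 is kVTVideoDecoderBadDataErr, and with RTSP elementary streams a common cause is leaving Annex B start codes in the sample data. VideoToolbox expects each NAL unit to carry a big-endian length prefix whose size matches the NALUnitHeaderLength passed to CMVideoFormatDescriptionCreateFromHEVCParameterSets, 4 here. A minimal C sketch of that conversion for a single NAL unit; `annexb_to_avcc` is a hypothetical helper, not part of the poster's code.)

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Replace a 4-byte Annex B start code (00 00 00 01) with a 4-byte
 * big-endian NAL unit length, as required when the format description
 * was created with an NALUnitHeaderLength of 4. Returns a malloc'd
 * buffer the caller must free, or NULL on malformed input. */
static uint8_t *annexb_to_avcc(const uint8_t *nalu, size_t len, size_t *out_len)
{
    static const uint8_t start_code[4] = {0x00, 0x00, 0x00, 0x01};
    if (len < 5 || memcmp(nalu, start_code, 4) != 0)
        return NULL;

    uint32_t payload_len = (uint32_t)(len - 4);
    uint8_t *out = malloc(len);
    if (!out)
        return NULL;

    /* Big-endian length prefix takes the place of the start code. */
    out[0] = (uint8_t)(payload_len >> 24);
    out[1] = (uint8_t)(payload_len >> 16);
    out[2] = (uint8_t)(payload_len >> 8);
    out[3] = (uint8_t)(payload_len);
    memcpy(out + 4, nalu + 4, payload_len);

    *out_len = len;
    return out;
}
```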
The format description looks like this:
<CMVideoFormatDescription 0x1c0044d10 [0x1b2bb1310]> {
mediaType:'vide'
mediaSubType:'hvc1'
mediaSpecific: {
codecType: 'hvc1' dimensions: 640 x 360
}
extensions: {<CFBasicHash 0x1c00742c0 [0x1b2bb1310]>{type = immutable dict, count = 10,
entries =>
0 : <CFString 0x1ab8dd470 [0x1b2bb1310]>{contents = "SampleDescriptionExtensionAtoms"} = <CFBasicHash 0x1c0074140 [0x1b2bb1310]>{type = immutable dict, count = 1,
entries =>
0 : hvcC = <CFData 0x1c01678c0 [0x1b2bb1310]>{length = 115, capacity = 115, bytes = 0x01016000000000008000000042f000fc ... 4401d172b0942b12}
}
1 : <CFString 0x1ab8b8ff8 [0x1b2bb1310]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x1ab8b9018 [0x1b2bb1310]>{contents = "ITU_R_709_2"}
2 : <CFString 0x1ab8b92b8 [0x1b2bb1310]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x1ab8b92f8 [0x1b2bb1310]>{contents = "Left"}
3 : <CFString 0x1ab8b8f38 [0x1b2bb1310]>{contents = "CVPixelAspectRatio"} = <CFBasicHash 0x1c0074280 [0x1b2bb1310]>{type = immutable dict, count = 2,
entries =>
1 : <CFString 0x1ab8b8f58 [0x1b2bb1310]>{contents = "HorizontalSpacing"} = <CFNumber 0xb000000000000012 [0x1b2bb1310]>{value = +1, type = kCFNumberSInt32Type}
2 : <CFString 0x1ab8b8f78 [0x1b2bb1310]>{contents = "VerticalSpacing"} = <CFNumber 0xb000000000000012 [0x1b2bb1310]>{value = +1, type = kCFNumberSInt32Type}
}
5 : <CFString 0x1ab8b90d8 [0x1b2bb1310]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x1ab8b9018 [0x1b2bb1310]>{contents = "ITU_R_709_2"}
6 : <CFString 0x1ab8dd670 [0x1b2bb1310]>{contents = "FullRangeVideo"} = <CFBoolean 0x1b2bb1868 [0x1b2bb1310]>{value = true}
8 : <CFString 0x1ab8b9158 [0x1b2bb1310]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x1ab8b9018 [0x1b2bb1310]>{contents = "ITU_R_709_2"}
10 : <CFString 0x1ab8b92d8 [0x1b2bb1310]>{contents = "CVImageBufferChromaLocationBottomField"} = <CFString 0x1ab8b92f8 [0x1b2bb1310]>{contents = "Left"}
11 : <CFString 0x1ab8ddc10 [0x1b2bb1310]>{contents = "BitsPerComponent"} = <CFNumber 0xb000000000000080 [0x1b2bb1310]>{value = +8, type = kCFNumberSInt8Type}
12 : <CFString 0x1ab8b8e78 [0x1b2bb1310]>{contents = "CVFieldCount"} = <CFNumber 0xb000000000000012 [0x1b2bb1310]>{value = +1, type = kCFNumberSInt32Type}
}
}
}
When decoding a frame I get OSStatus -12909 in the decompression callback. Given the format description above, I think the VPS, SPS, and PPS are handled correctly when creating the format description, and the decompression session is also created successfully. I can also successfully decode and render a different HEVC stream, for example this one:
The same code also works for streaming raw H.264 when CMVideoFormatDescriptionCreateFromHEVCParameterSets is replaced with CMVideoFormatDescriptionCreateFromH264ParameterSets.
Any ideas what could be wrong? Is this format description even supported? Sadly there isn't much documentation about HEVC decoding from Apple's side.
I can play my H.265 stream with ffmpeg, so I assume the stream itself is correctly formatted.
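(Editor's note: when splitting the incoming stream into NAL units, it is easy to mix up the H.265 and H.264 header layouts, and feeding the wrong parameter sets will also produce decode errors. HEVC keeps nal_unit_type in bits 6..1 of the first header byte, H.264 in the low five bits. A small illustrative sketch, with type values per the respective specs:)

```c
#include <stdint.h>

/* HEVC (ITU-T H.265): nal_unit_type occupies bits 6..1 of the first
 * header byte. VPS = 32, SPS = 33, PPS = 34. */
static int hevc_nal_type(uint8_t first_byte)
{
    return (first_byte >> 1) & 0x3F;
}

/* H.264 (ITU-T H.264): nal_unit_type is the low five bits.
 * SPS = 7, PPS = 8, IDR slice = 5. */
static int h264_nal_type(uint8_t first_byte)
{
    return first_byte & 0x1F;
}
```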

Related

Why can I get the GL_LUMINANCE but not the GL_LUMINANCE_ALPHA from my camera roll's video?

In iOS, I would like to play a video from the camera roll, and for a number of reasons I need to do some OpenGL work on each frame.
My code works when the video comes from the camera, but not when it comes from a video in the camera roll.
Here is the code, trying to keep the bare minimum.
Setting up AVAssetReaderTrackOutput and _textureCache after user selects a video from the camera roll:
// Creating _textureCache
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _glContext, NULL, &_textureCache);
// Reading the video track
NSDictionary *settings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)};
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
_assetReaderTrackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTracks.firstObject outputSettings:settings];
NSError *assetReaderCreationError;
_assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&assetReaderCreationError];
[_assetReader addOutput:_assetReaderTrackOutput];
[_assetReader startReading];
For each frame (timer based)
if (_assetReader.status == AVAssetReaderStatusReading) {
sampleBuffer = [_assetReaderTrackOutput copyNextSampleBuffer];
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
// This works
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
// This doesn't work (err 6833)
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, bufferWidth, bufferHeight, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &luminanceTextureRef);
The last line here ^ doesn't work; I get error 6833. I found this CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683 but couldn't get any further.
I have tried different settings (replacing kCVPixelFormatType_32BGRA with other constants), but it doesn't get me anywhere. Any help?
EDIT:
frame <CVPixelBuffer 0x1565796b0 width=720 height=1280 bytesPerRow=2880 pixelFormat=BGRA iosurface=0x158300048 attributes=<CFBasicHash 0x156578a40 [0x1a115e150]>{type = immutable dict, count = 1,
entries =>
0 : <CFString 0x19ca7fca8 [0x1a115e150]>{contents = "PixelFormatDescription"} = <CFBasicHash 0x156551460 [0x1a115e150]>{type = immutable dict, count = 15,
entries =>
1 : <CFString 0x19ca7fae8 [0x1a115e150]>{contents = "CGImageCompatibility"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
2 : <CFString 0x19ca801a8 [0x1a115e150]>{contents = "FillExtendedPixelsCallback"} = <CFData 0x156569340 [0x1a115e150]>{length = 24, capacity = 24, bytes = 0x0000000000000000484e1f85010000000000000000000000}
5 : <CFString 0x19ca7fee8 [0x1a115e150]>{contents = "ContainsAlpha"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
6 : <CFString 0x19ca7fac8 [0x1a115e150]>{contents = "CGBitmapContextCompatibility"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
7 : <CFString 0x19ca7ffa8 [0x1a115e150]>{contents = "BitsPerBlock"} = <CFNumber 0xb000000000000202 [0x1a115e150]>{value = +32, type = kCFNumberSInt32Type}
8 : <CFString 0x19ca7ffc8 [0x1a115e150]>{contents = "BlackBlock"} = <CFData 0x156568660 [0x1a115e150]>{length = 4, capacity = 4, bytes = 0x000000ff}
9 : <CFString 0x19ca7fbc8 [0x1a115e150]>{contents = "IOSurfaceOpenGLESTextureCompatibility"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
10 : <CFString 0x19ca7fc08 [0x1a115e150]>{contents = "OpenGLESCompatibility"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
11 : <CFString 0x19ca80088 [0x1a115e150]>{contents = "CGBitmapInfo"} = <CFNumber 0xb000000000020042 [0x1a115e150]>{value = +8196, type = kCFNumberSInt32Type}
12 : <CFString 0x19ca7fba8 [0x1a115e150]>{contents = "IOSurfaceCoreAnimationCompatibility"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
13 : <CFString 0x19ca7fbe8 [0x1a115e150]>{contents = "IOSurfaceOpenGLESFBOCompatibility"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
14 : <CFString 0x19ca800a8 [0x1a115e150]>{contents = "ContainsYCbCr"} = <CFBoolean 0x1a115e6d8 [0x1a115e150]>{value = false}
15 : <CFString 0x19ca7fe88 [0x1a115e150]>{contents = "PixelFormat"} = <CFNumber 0xb000000424752412 [0x1a115e150]>{value = +1111970369, type = kCFNumberSInt32Type}
16 : <CFString 0x19ca80108 [0x1a115e150]>{contents = "ComponentRange"} = <CFString 0x19ca80148 [0x1a115e150]>{contents = "FullRange"}
21 : <CFString 0x19ca800c8 [0x1a115e150]>{contents = "ContainsRGB"} = <CFBoolean 0x1a115e6c8 [0x1a115e150]>{value = true}
}
}
propagatedAttachments=<CFBasicHash 0x156579910 [0x1a115e150]>{type = mutable dict, count = 5,
entries =>
0 : <CFString 0x19ca7f688 [0x1a115e150]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x19ca7f6c8 [0x1a115e150]>{contents = "ITU_R_601_4"}
1 : <CFString 0x19ca7f7e8 [0x1a115e150]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x19ca7f6a8 [0x1a115e150]>{contents = "ITU_R_709_2"}
2 : <CFString 0x19cab2690 [0x1a115e150]>{contents = "MetadataDictionary"} = <CFBasicHash 0x15654c060 [0x1a115e150]>{type = mutable dict, count = 3,
entries =>
0 : <CFString 0x19cab9970 [0x1a115e150]>{contents = "SNR"} = <CFNumber 0x156515ad0 [0x1a115e150]>{value = +20.18363643733977141892, type = kCFNumberFloat64Type}
1 : <CFString 0x19cab7cb0 [0x1a115e150]>{contents = "ExposureTime"} = <CFNumber 0x1565623b0 [0x1a115e150]>{value = +0.01000000000000000021, type = kCFNumberFloat64Type}
2 : <CFString 0x19cab9950 [0x1a115e150]>{contents = "SensorID"} = <CFNumber 0xb000000000002372 [0x1a115e150]>{value = +567, type = kCFNumberSInt32Type}
}
5 : <CFString 0x19ca7f768 [0x1a115e150]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x19ca7f6a8 [0x1a115e150]>{contents = "ITU_R_709_2"}
6 : <CFString 0x19ca7f828 [0x1a115e150]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x19ca7f888 [0x1a115e150]>{contents = "Center"}
}
nonPropagatedAttachments=<CFBasicHash 0x1565798d0 [0x1a115e150]>{type = mutable dict, count = 0,
entries =>
}
>
Failed to create IOSurface image (texture)
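(Editor's note: the raw PixelFormat number in dumps like this is just a FourCC packed into a 32-bit integer; 1111970369 decodes to 'BGRA'. A tiny illustrative helper for reading such dumps:)

```c
#include <stdint.h>

/* Unpack a FourCC integer (as printed in CF/CV dumps) into four
 * ASCII characters plus a terminating NUL. */
static void fourcc_to_string(uint32_t code, char out[5])
{
    out[0] = (char)((code >> 24) & 0xFF);
    out[1] = (char)((code >> 16) & 0xFF);
    out[2] = (char)((code >> 8) & 0xFF);
    out[3] = (char)(code & 0xFF);
    out[4] = '\0';
}
```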
You should be using either kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in your AVAssetReaderTrackOutput output settings.
I would also try this on a device. The Simulator did not use to support these formats, although that may have changed.
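(Editor's note on why plane index 1 pairs with GL_LUMINANCE_ALPHA: the bi-planar 4:2:0 formats recommended above store full-resolution 8-bit luma in plane 0, one component per texel, hence GL_LUMINANCE, and half-resolution interleaved CbCr in plane 1, two components per texel, hence GL_LUMINANCE_ALPHA. A BGRA buffer has only one plane, so a planeIndex of 1 has nothing to map. A rough C sketch of the plane sizes, assuming tightly packed rows; real CVPixelBuffers may pad bytesPerRow:)

```c
#include <stddef.h>

/* Byte sizes of the two planes of a 420YpCbCr8BiPlanar buffer,
 * assuming no row padding. */
static void nv12_plane_sizes(size_t w, size_t h, size_t *luma, size_t *chroma)
{
    *luma = w * h;                   /* plane 0: 8-bit Y at full resolution */
    *chroma = (w / 2) * (h / 2) * 2; /* plane 1: interleaved Cb/Cr at half resolution */
}
```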

Decode .mov video file with PNG codec type

I am trying to decode a video that uses the "PNG, Timecode" codecs with AVFoundation and I get this error
Error Domain=AVFoundationErrorDomain Code=-11833 "Cannot Decode" UserInfo={NSLocalizedFailureReason=The decoder required for this media cannot be found.,NSUnderlyingError=0x610000044050 {Error Domain=NSOSStatusErrorDomain Code=-12906 "(null)"}, AVErrorMediaTypeKey=vide, NSLocalizedDescription=Cannot Decode}
from AVAssetReader. Do I perhaps need to use a specific pixel format type for the AVAssetReaderTrackOutput?
Info from videoTrack.formatDescriptions:
<CMVideoFormatDescription 0x618000042ac0 [0x7fff7b281390]> {
mediaType:'vide'
mediaSubType:'png '
mediaSpecific: {
codecType: 'png ' dimensions: 2852 x 1871
}
extensions: {<CFBasicHash 0x61800006c180 [0x7fff7b281390]>{type = immutable dict, count = 9,
entries =>
0 : <CFString 0x7fff7eee0750 [0x7fff7b281390]>{contents = "TemporalQuality"} = <CFNumber 0x27 [0x7fff7b281390]>{value = +0, type = kCFNumberSInt32Type}
1 : <CFString 0x7fff7eee0790 [0x7fff7b281390]>{contents = "Version"} = <CFNumber 0x117 [0x7fff7b281390]>{value = +1, type = kCFNumberSInt16Type}
2 : <CFString 0x7fff7eee0590 [0x7fff7b281390]>{contents = "FormatName"} = PNG
3 : <CFString 0x7fff7ad964d8 [0x7fff7b281390]>{contents = "CVPixelAspectRatio"} = <CFBasicHash 0x61800006c140 [0x7fff7b281390]>{type = immutable dict, count = 2,
entries =>
1 : <CFString 0x7fff7ad964f8 [0x7fff7b281390]>{contents = "HorizontalSpacing"} = <CFNumber 0xb2427 [0x7fff7b281390]>{value = +2852, type = kCFNumberSInt32Type}
2 : <CFString 0x7fff7ad96518 [0x7fff7b281390]>{contents = "VerticalSpacing"} = <CFNumber 0xb2427 [0x7fff7b281390]>{value = +2852, type = kCFNumberSInt32Type}
}
4 : <CFString 0x7fff7eee0550 [0x7fff7b281390]>{contents = "VerbatimSampleDescription"} = <CFData 0x618000140370 [0x7fff7b281390]>{length = 106, capacity = 106, bytes = 0x0000006a706e67200000000000000001 ... 00000b2400000000}
5 : <CFString 0x7fff7eee07b0 [0x7fff7b281390]>{contents = "RevisionLevel"} = <CFNumber 0x117 [0x7fff7b281390]>{value = +1, type = kCFNumberSInt16Type}
6 : <CFString 0x7fff7eee0770 [0x7fff7b281390]>{contents = "SpatialQuality"} = <CFNumber 0x40027 [0x7fff7b281390]>{value = +1024, type = kCFNumberSInt32Type}
7 : <CFString 0x7fff7eee07d0 [0x7fff7b281390]>{contents = "Vendor"} = appl
8 : <CFString 0x7fff7eee05b0 [0x7fff7b281390]>{contents = "Depth"} = <CFNumber 0x2017 [0x7fff7b281390]>{value = +32, type = kCFNumberSInt16Type}
}
}
}
Can I decode this video with AVFoundation?
Also, if I open this video with QuickTime Player and re-save it, it is saved with the "Apple ProRes 4444, Timecode" codecs and can then be decoded with AVFoundation, but the file size increases from 800 KB to 2 MB.
Thanks for any help!
You can only open H.264-encoded video with those APIs.

Can't attach new metadata to captured image

I am trying to attach some of my own fields to an image I capture. I seem to be able to change existing EXIF entries, but I can't add new ones, either within the EXIF dictionary or as a separate dictionary added to the image. When I make my additions, I can see them as part of the image data, but they never get saved to the image file. However, when I change existing EXIF entries, those do get saved to the image file.
I have already studied:
https://stackoverflow.com/a/5294574/86020
http://blog.codecropper.com/2011/05/adding-metadata-to-ios-images-the-easy-way/
https://stackoverflow.com/a/5967345/86020
Here's my code:
CMAttachmentMode attachmentMode;
CFDictionaryRef exifDict = CMGetAttachment(imageDataSampleBuffer, (CFStringRef)@"{Exif}", &attachmentMode);
NSMutableDictionary *exifMutableDict = [((__bridge NSDictionary *)exifDict) mutableCopy];
[exifMutableDict setObject:@"MyApp" forKey:@"AppName"];
[exifMutableDict setObject:lockedDate forKey:@"CaptureDate"];
[exifMutableDict setObject:@(55555) forKey:@"MeteringMode"];
[exifMutableDict setObject:@"Peach" forKey:@"LensMake"];
CFMutableDictionaryRef newCFDict = (__bridge CFMutableDictionaryRef)exifMutableDict;
CMSetAttachment(imageDataSampleBuffer, (CFStringRef)@"{Exif}", newCFDict, attachmentMode);
NSMutableDictionary *anotherMutableDict = [@{} mutableCopy];
[anotherMutableDict setObject:@"MyApp" forKey:@"AppName"];
[anotherMutableDict setObject:lockedDate forKey:@"CaptureDate"];
[anotherMutableDict setObject:@(55555) forKey:@"MeteringMode"];
[anotherMutableDict setObject:@"Peach" forKey:@"Lens Make"];
CMSetAttachment(imageDataSampleBuffer, (CFStringRef)@"another", (__bridge CFMutableDictionaryRef)anotherMutableDict, kCMAttachmentMode_ShouldPropagate);
NSLog(@"metadata: %@ mutable dict %@, another %@", imageDataSampleBuffer, newCFDict, anotherMutableDict);
imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
I successfully get the EXIF dictionary. I add a couple of fields, AppName and CaptureDate, and then modify two existing ones, MeteringMode and LensMake.
I add this modified EXIF dictionary to the imageDataSampleBuffer.
I also create a new dictionary, in case the EXIF dictionary is somehow restrictive to new additions, anotherMutableDict and add it to the imageDataSampleBuffer.
I then get the JPEG representation and write it to a file in my sandbox.
When I look at the resulting image using image editing apps, including Photoshop, Preview and Pixelmator, I see these changes:
LensMake = Peach
MeteringMode = 55555
I don't see other additions to the EXIF, nor do I see another dictionary.
I have also tried completely removing the EXIF dictionary and replacing it with a different one; the only effect is that the resulting file has a significantly smaller EXIF block without any of my additions, containing only the changes to existing fields.
I'm at a loss as to what is going on. Can anyone see what I am doing wrong?
Here is the full output of the NSLog statement. As you can see, the changes and additions are clearly reflected in the EXIF dictionary and are also hanging off of another dictionary that is added to the imageDataSampleBuffer. Yet none of the additions make it into the final saved file, and I only see the changes to existing fields.
2015-02-09 12:18:42.960 MyApp[1720:209034] metadata: CMSampleBuffer 0x15cd2a060 retainCount: 1 allocator: 0x1991dfc80
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
buffer-level attachments:
another (P) = {
AppName = MyApp;
CaptureDate = "2015-02-09 20:18:25 +0000";
"Lens Make" = Peach;
MeteringMode = 55555;
}
Orientation(P) = 6
{Exif} (P) = <CFBasicHash 0x171479200 [0x1991dfc80]>{type = mutable dict, count = 24,
entries =>
0 : <CFString 0x100163398 [0x1991dfc80]>{contents = "MeteringMode"} = <CFNumber 0xb0000000000d9032 [0x1991dfc80]>{value = +55555, type = kCFNumberSInt32Type}
1 : <CFString 0x1001633d8 [0x1991dfc80]>{contents = "LensMake"} = <CFString 0x1001633b8 [0x1991dfc80]>{contents = "Peach"}
3 : <CFString 0x1992e4fa0 [0x1991dfc80]>{contents = "BrightnessValue"} = <CFNumber 0x17423b620 [0x1991dfc80]>{value = +3.50483639710326944083, type = kCFNumberFloat64Type}
4 : <CFString 0x100163358 [0x1991dfc80]>{contents = "AppName"} = <CFString 0x100163338 [0x1991dfc80]>{contents = "MyApp"}
6 : <CFString 0x19950b368 [0x1991dfc80]>{contents = "FNumber"} = <CFNumber 0x174223b00 [0x1991dfc80]>{value = +2.20000000000000017764, type = kCFNumberFloat64Type}
7 : <CFString 0x19950b688 [0x1991dfc80]>{contents = "FocalLength"} = <CFNumber 0x1742295c0 [0x1991dfc80]>{value = +4.15000000000000035527, type = kCFNumberFloat64Type}
8 : <CFString 0x19950b568 [0x1991dfc80]>{contents = "ShutterSpeedValue"} = <CFNumber 0x1742386e0 [0x1991dfc80]>{value = +4.90764099214979498953, type = kCFNumberFloat64Type}
9 : <CFString 0x19950b908 [0x1991dfc80]>{contents = "SceneType"} = <CFNumber 0xb000000000000012 [0x1991dfc80]>{value = +1, type = kCFNumberSInt32Type}
10 : <CFString 0x19950b588 [0x1991dfc80]>{contents = "ApertureValue"} = <CFNumber 0x17423c080 [0x1991dfc80]>{value = +2.27500704749987026076, type = kCFNumberFloat64Type}
11 : <CFString 0x19950b6a8 [0x1991dfc80]>{contents = "SubjectArea"} = <CFArray 0x174443720 [0x1991dfc80]>{type = mutable-small, count = 4, values = (
0 : <CFNumber 0xb0000000000065f1 [0x1991dfc80]>{value = +1631, type = kCFNumberSInt16Type}
1 : <CFNumber 0xb000000000004c71 [0x1991dfc80]>{value = +1223, type = kCFNumberSInt16Type}
2 : <CFNumber 0xb000000000007031 [0x1991dfc80]>{value = +1795, type = kCFNumberSInt16Type}
3 : <CFNumber 0xb000000000004351 [0x1991dfc80]>{value = +1077, type = kCFNumberSInt16Type}
)}
17 : <CFString 0x19950bb28 [0x1991dfc80]>{contents = "LensSpecification"} = <CFArray 0x174245760 [0x1991dfc80]>{type = mutable-small, count = 4, values = (
0 : <CFNumber 0x17423bba0 [0x1991dfc80]>{value = +4.15000000000000035527, type = kCFNumberFloat64Type}
1 : <CFNumber 0x17423a0e0 [0x1991dfc80]>{value = +4.15000000000000035527, type = kCFNumberFloat64Type}
2 : <CFNumber 0x174232900 [0x1991dfc80]>{value = +2.20000000000000017764, type = kCFNumberFloat64Type}
3 : <CFNumber 0x174235560 [0x1991dfc80]>{value = +2.20000000000000017764, type = kCFNumberFloat64Type}
)}
18 : <CFString 0x19950b7c8 [0x1991dfc80]>{contents = "PixelYDimension"} = <CFNumber 0xb000000000009902 [0x1991dfc80]>{value = +2448, type = kCFNumberSInt32Type}
19 : <CFString 0x19950b988 [0x1991dfc80]>{contents = "WhiteBalance"} = <CFNumber 0xb000000000000002 [0x1991dfc80]>{value = +0, type = kCFNumberSInt32Type}
22 : <CFString 0x100163378 [0x1991dfc80]>{contents = "CaptureDate"} = 2015-02-09 20:18:25 +0000
28 : <CFString 0x19950b3c8 [0x1991dfc80]>{contents = "ISOSpeedRatings"} = <CFArray 0x174249b70 [0x1991dfc80]>{type = mutable-small, count = 1, values = (
0 : <CFNumber 0xb000000000000401 [0x1991dfc80]>{value = +64, type = kCFNumberSInt16Type}
)}
29 : <CFString 0x19950b968 [0x1991dfc80]>{contents = "ExposureMode"} = <CFNumber 0xb000000000000002 [0x1991dfc80]>{value = +0, type = kCFNumberSInt32Type}
31 : <CFString 0x19950b7a8 [0x1991dfc80]>{contents = "PixelXDimension"} = <CFNumber 0xb00000000000cc02 [0x1991dfc80]>{value = +3264, type = kCFNumberSInt32Type}
32 : <CFString 0x19950bb68 [0x1991dfc80]>{contents = "LensModel"} = <CFString 0x17446a940 [0x1991dfc80]>{contents = "iPhone 6 back camera 4.15mm f/2.2"}
34 : <CFString 0x19950b388 [0x1991dfc80]>{contents = "ExposureProgram"} = <CFNumber 0xb000000000000022 [0x1991dfc80]>{value = +2, type = kCFNumberSInt32Type}
35 : <CFString 0x19950b9c8 [0x1991dfc80]>{contents = "FocalLenIn35mmFilm"} = <CFNumber 0xb0000000000001d2 [0x1991dfc80]>{value = +29, type = kCFNumberSInt32Type}
36 : <CFString 0x1992e3420 [0x1991dfc80]>{contents = "ExposureTime"} = <CFNumber 0x17423b540 [0x1991dfc80]>{value = +0.03333333333333333287, type = kCFNumberFloat64Type}
37 : <CFString 0x19950b668 [0x1991dfc80]>{contents = "Flash"} = <CFNumber 0xb000000000000102 [0x1991dfc80]>{value = +16, type = kCFNumberSInt32Type}
38 : <CFString 0x19950b8c8 [0x1991dfc80]>{contents = "SensingMethod"} = <CFNumber 0xb000000000000022 [0x1991dfc80]>{value = +2, type = kCFNumberSInt32Type}
40 : <CFString 0x19950b5c8 [0x1991dfc80]>{contents = "ExposureBiasValue"} = <CFNumber 0x17422a8a0 [0x1991dfc80]>{value = +0.0, type = kCFNumberFloat64Type}
}
{MakerApple}(P) = <CFBasicHash 0x174468500 [0x1991dfc80]>{type = mutable dict, count = 10,
entries =>
0 : <CFString 0x1992dfd60 [0x1991dfc80]>{contents = "7"} = <CFNumber 0xb000000000000012 [0x1991dfc80]>{value = +1, type = kCFNumberSInt32Type}
1 : <CFString 0x1992dfce0 [0x1991dfc80]>{contents = "3"} = <CFBasicHash 0x174474e40 [0x1991dfc80]>{type = mutable dict, count = 4,
entries =>
0 : <CFString 0x1992dcb80 [0x1991dfc80]>{contents = "flags"} = <CFNumber 0xb000000000000012 [0x1991dfc80]>{value = +1, type = kCFNumberSInt32Type}
1 : <CFString 0x1992dcb20 [0x1991dfc80]>{contents = "value"} = <CFNumber 0xb00371370a6edbd3 [0x1991dfc80]>{value = +60556633894333, type = kCFNumberSInt64Type}
4 : <CFString 0x1992dcb40 [0x1991dfc80]>{contents = "timescale"} = <CFNumber 0xb0000003b9aca002 [0x1991dfc80]>{value = +1000000000, type = kCFNumberSInt32Type}
5 : <CFString 0x1992dcb60 [0x1991dfc80]>{contents = "epoch"} = <CFNumber 0xb000000000000003 [0x1991dfc80]>{value = +0, type = kCFNumberSInt64Type}
}
3 : <CFString 0x1992dfd80 [0x1991dfc80]>{contents = "8"} = <CFArray 0x1742541c0 [0x1991dfc80]>{type = mutable-small, count = 3, values = (
0 : <CFNumber 0x1742371e0 [0x1991dfc80]>{value = -0.0552164577, type = kCFNumberFloat32Type}
1 : <CFNumber 0x17423bf60 [0x1991dfc80]>{value = -0.0018360948, type = kCFNumberFloat32Type}
2 : <CFNumber 0x17423c000 [0x1991dfc80]>{value = -0.9881702065, type = kCFNumberFloat32Type}
)}
4 : <CFString 0x1992dfd00 [0x1991dfc80]>{contents = "4"} = <CFNumber 0xb000000000000012 [0x1991dfc80]>{value = +1, type = kCFNumberSInt32Type}
5 : <CFString 0x1992dfe40 [0x1991dfc80]>{contents = "14"} = <CFNumber 0xb000000000000002 [0x1991dfc80]>{value = +0, type = kCFNumberSInt32Type}
6 : <CFString 0x1992dfda0 [0x1991dfc80]>{contents = "9"} = <CFNumber 0xb000000000001132 [0x1991dfc80]>{value = +275, type = kCFNumberSInt32Type}
7 : <CFString 0x1992dfd20 [0x1991dfc80]>{contents = "5"} = <CFNumber 0xb000000000000c82 [0x1991dfc80]>{value = +200, type = kCFNumberSInt32Type}
8 : <CFString 0x1992dfca0 [0x1991dfc80]>{contents = "1"} = <CFNumber 0xb000000000000022 [0x1991dfc80]>{value = +2, type = kCFNumberSInt32Type}
10 : <CFString 0x1992dfd40 [0x1991dfc80]>{contents = "6"} = <CFNumber 0xb000000000000c42 [0x1991dfc80]>{value = +196, type = kCFNumberSInt32Type}
11 : <CFString 0x1992dfcc0 [0x1991dfc80]>{contents = "2"} = <CFData 0x15cd0f070 [0x1991dfc80]>{length = 512, capacity = 512, bytes = 0x4c013801250113010101f100e100d300 ... 8d0084007b007300}
}
formatDescription = <CMVideoFormatDescription 0x174249c90 [0x1991dfc80]> {
mediaType:'vide'
mediaSubType:'jpeg'
mediaSpecific: {
codecType: 'jpeg' dimensions: 3264 x 2448
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 1
sampleTimingArray[1] = {
{PTS = {60556633894333/1000000000 = 60556.634}, DTS = {INVALID}, duration = {INVALID}},
}
sampleSizeArray[1] = {
sampleSize = 1301228,
}
dataBuffer = 0x174301290
mutable dict {
ApertureValue = "2.27500704749987";
AppName = MyApp;
BrightnessValue = "3.504836397103269";
CaptureDate = "2015-02-09 20:18:25 +0000";
ExposureBiasValue = 0;
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.03333333333333333";
FNumber = "2.2";
Flash = 16;
FocalLenIn35mmFilm = 29;
FocalLength = "4.15";
ISOSpeedRatings = (
64
);
LensMake = Peach;
LensModel = "iPhone 6 back camera 4.15mm f/2.2";
LensSpecification = (
"4.15",
"4.15",
"2.2",
"2.2"
);
MeteringMode = 55555;
PixelXDimension = 3264;
PixelYDimension = 2448;
SceneType = 1;
SensingMethod = 2;
ShutterSpeedValue = "4.907640992149795";
SubjectArea = (
1631,
1223,
1795,
1077
);
WhiteBalance = 0;
}, another {
AppName = MyApp;
CaptureDate = "2015-02-09 20:18:25 +0000";
"Lens Make" = Peach;
MeteringMode = 55555;
}

VTDecompressionSessionDecodeFrame ERROR -12916 (kVTFormatDescriptionChangeNotSupportedErr)

I have a problem I can't get my head around.
First I create a compression session with VTCompressionSessionCreate (H.264); then, in my compression callback, when I start feeding images, I get a CMSampleBufferRef sampleBuffer as expected.
Just to debug the coded stream, I then create a decompression session with VTDecompressionSessionCreate and feed the sampleBuffer containing the H.264 stream to VTDecompressionSessionDecodeFrame, where I would expect a CVImageBufferRef imageBuffer in my decompression callback.
Now to the problem:
If I create the decompression session using the format description from the compression callback's sampleBuffer, like this:
CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
Everything works as expected and I get CVImageBufferRef's in my decompression callback.
However, my intention is to send the data over a network, so I need to build the format description from the in-stream SPS and PPS information.
So I 'fake' this by first extracting the SPS and PPS and then using them like this:
CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
size_t spsSize, ppsSize;
size_t parmCount;
const uint8_t* sps, *pps;
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sps, &spsSize, &parmCount, NULL );
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pps, &ppsSize, &parmCount, NULL );
const uint8_t* const parameterSetPointers[2] = {sps, pps};
const size_t parameterSetSizes[2] = {spsSize, ppsSize};
CMFormatDescriptionRef format2;
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &format2);
I would expect format and format2 to contain the same information but:
format = <CMVideoFormatDescription 0x17004fd50 [0x19483ac80]> {
mediaType:'vide'
mediaSubType:'avc1'
mediaSpecific: {
codecType: 'avc1' dimensions: 1280 x 720
}
extensions: {<CFBasicHash 0x170270cc0 [0x19483ac80]>{type = immutable dict, count = 2,
entries =>
0 : <CFString 0x194935fa0 [0x19483ac80]>{contents = "SampleDescriptionExtensionAtoms"} = <CFBasicHash 0x170270c40 [0x19483ac80]>{type = immutable dict, count = 1,
entries =>
2 : <CFString 0x194939fa0 [0x19483ac80]>{contents = "avcC"} = <CFData 0x1700c9920 [0x19483ac80]>{length = 35, capacity = 35, bytes = 0x0164001fffe100106764001fac56c050 ... 28ee3cb0fdf8f800}
}
2 : <CFString 0x194936000 [0x19483ac80]>{contents = "FormatName"} = <CFString 0x17003a160 [0x19483ac80]>{contents = "H.264"}
}
}
}
format2:
format2 = <CMVideoFormatDescription 0x174051c70 [0x19483ac80]> {
mediaType:'vide'
mediaSubType:'avc1'
mediaSpecific: {
codecType: 'avc1' dimensions: 1280 x 720
}
extensions: {<CFBasicHash 0x17426f9c0 [0x19483ac80]>{type = immutable dict, count = 5,
entries =>
0 : <CFString 0x19499a608 [0x19483ac80]>{contents = "CVImageBufferChromaLocationBottomField"} = <CFString 0x19499a648 [0x19483ac80]>{contents = "Center"}
1 : <CFString 0x19499a328 [0x19483ac80]>{contents = "CVFieldCount"} = <CFNumber 0xb000000000000012 [0x19483ac80]>{value = +1, type = kCFNumberSInt32Type}
3 : <CFString 0x194935fa0 [0x19483ac80]>{contents = "SampleDescriptionExtensionAtoms"} = <CFBasicHash 0x17426b100 [0x19483ac80]>{type = immutable dict, count = 1,
entries =>
2 : <CFString 0x174031560 [0x19483ac80]>{contents = "avcC"} = <CFData 0x1740c4910 [0x19483ac80]>{length = 35, capacity = 35, bytes = 0x0164001fffe100106764001fac56c050 ... 28ee3cb0fdf8f800}
}
5 : <CFString 0x19499a5e8 [0x19483ac80]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x19499a648 [0x19483ac80]>{contents = "Center"}
6 : <CFString 0x1949360e0 [0x19483ac80]>{contents = "FullRangeVideo"} = <CFBoolean 0x19483b030 [0x19483ac80]>{value = false}
}
}
}
format works.
format2 doesn't, and VTDecompressionSessionDecodeFrame throws error -12916.
Thank you for helping.
Solved it. It was the way I created the CMSampleBuffer containing the coded stream that was causing the error.
The SPS and PPS were taken from a CMSampleBuffer, and then I created the format description with CMVideoFormatDescriptionCreateFromH264ParameterSets; so far so good. But in the same application I turned the stream around and decoded the pictures using that same CMSampleBuffer. That doesn't work and was causing the error. I had to copy the payload into an NSData first and then create a new CMSampleBuffer from that NSData. Then it works.
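(Editor's note: the avcC extension dumped above is a standard AVCDecoderConfigurationRecord; the SPS/PPS that CMVideoFormatDescriptionGetH264ParameterSetAtIndex returns can equally be pulled out of those bytes directly. A minimal C sketch for the common one-SPS/one-PPS case; `parse_avcc` is a hypothetical helper with only basic bounds checks:)

```c
#include <stddef.h>
#include <stdint.h>

/* Parse an AVCDecoderConfigurationRecord ('avcC' atom payload) that
 * carries exactly one SPS and one PPS. Returns 0 on success, -1 on
 * malformed or unsupported input. */
static int parse_avcc(const uint8_t *avcc, size_t len,
                      const uint8_t **sps, size_t *sps_len,
                      const uint8_t **pps, size_t *pps_len)
{
    if (len < 8 || avcc[0] != 1)            /* configurationVersion must be 1 */
        return -1;
    if ((avcc[5] & 0x1F) != 1)              /* low 5 bits: SPS count */
        return -1;                          /* keep the sketch simple */

    size_t off = 6;
    *sps_len = ((size_t)avcc[off] << 8) | avcc[off + 1]; /* big-endian length */
    off += 2;
    if (off + *sps_len + 1 > len) return -1;
    *sps = avcc + off;
    off += *sps_len;

    if (avcc[off++] != 1 || off + 2 > len)  /* PPS count */
        return -1;
    *pps_len = ((size_t)avcc[off] << 8) | avcc[off + 1];
    off += 2;
    if (off + *pps_len > len) return -1;
    *pps = avcc + off;
    return 0;
}
```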

CVPixelBuffer to CIImage always returning nil

I am trying to convert a pixel buffer extracted from AVPlayerItemVideoOutput to a CIImage, but I always get nil.
The Code
if([videoOutput_ hasNewPixelBufferForItemTime:player_.internalPlayer.currentItem.currentTime])
{
CVPixelBufferRef pixelBuffer = [videoOutput_ copyPixelBufferForItemTime:player_.internalPlayer.currentItem.currentTime
itemTimeForDisplay:nil];
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // image is always nil
CIFilter *filter = [FilterCollection filterSepiaForImage:image];
image = filter.outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimg = [context createCGImage:image fromRect:[image extent]];
[pipLayer_ setContents:(id)CFBridgingRelease(cgimg)];
}
Below are the details of a pixelBuffer used to create the CIImage (which always results in nil):
$0 = 0x09b48720 <CVPixelBuffer 0x9b48720 width=624 height=352 bytesPerRow=2496 pixelFormat=BGRA iosurface=0x0 attributes=<CFBasicHash 0x98241d0 [0x1d244d8]>{type = immutable dict, count = 3,
entries =>
0 : <CFString 0x174cf4 [0x1d244d8]>{contents = "Height"} = <CFNumber 0x9a16e70 [0x1d244d8]>{value = +352, type = kCFNumberSInt32Type}
1 : <CFString 0x174ce4 [0x1d244d8]>{contents = "Width"} = <CFNumber 0x9a109d0 [0x1d244d8]>{value = +624, type = kCFNumberSInt32Type}
2 : <CFString 0x1750e4 [0x1d244d8]>{contents = "PixelFormatType"} = <CFArray 0x1090ddd0 [0x1d244d8]>{type = mutable-small, count = 1, values = (
0 : <CFNumber 0x7a28050 [0x1d244d8]>{value = +1111970369, type = kCFNumberSInt32Type}
)}
}
propagatedAttachments=<CFBasicHash 0x9b485c0 [0x1d244d8]>{type = mutable dict, count = 6,
entries =>
0 : <CFString 0x174ff4 [0x1d244d8]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x174f84 [0x1d244d8]>{contents = "ITU_R_709_2"}
2 : <CFString 0x174f74 [0x1d244d8]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x174f94 [0x1d244d8]>{contents = "ITU_R_601_4"}
9 : <CFString 0x174f14 [0x1d244d8]>{contents = "CVPixelAspectRatio"} = <CFBasicHash 0x9b1bc30 [0x1d244d8]>{type = immutable dict, count = 2,
entries =>
0 : <CFString 0x174f34 [0x1d244d8]>{contents = "VerticalSpacing"} = <CFNumber 0x9b0f730 [0x1d244d8]>{value = +1, type = kCFNumberSInt32Type}
2 : <CFString 0x174f24 [0x1d244d8]>{contents = "HorizontalSpacing"} = <CFNumber 0x9b0f730 [0x1d244d8]>{value = +1, type = kCFNumberSInt32Type}
}
10 : <CFString 0x174fb4 [0x1d244d8]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x174fd4 [0x1d244d8]>{contents = "SMPTE_C"}
11 : <CFString 0x174e24 [0x1d244d8]>{contents = "QTMovieTime"} = <CFBasicHash 0x7a47940 [0x1d244d8]>{type = immutable dict, count = 2,
entries =>
0 : <CFString 0x174e44 [0x1d244d8]>{contents = "TimeScale"} = <CFNumber 0x7a443d0 [0x1d244d8]>{value = +90000, type = kCFNumberSInt32Type}
2 : <CFString 0x174e34 [0x1d244d8]>{contents = "TimeValue"} = <CFNumber 0x7a476e0 [0x1d244d8]>{value = +1047297, type = kCFNumberSInt64Type}
}
12 : <CFString 0x174eb4 [0x1d244d8]>{contents = "CVFieldCount"} = <CFNumber 0x9b0f730 [0x1d244d8]>{value = +1, type = kCFNumberSInt32Type}
}
nonPropagatedAttachments=<CFBasicHash 0x9b44b40 [0x1d244d8]>{type = mutable dict, count = 0,
entries =>
}
>
Solved this problem: I was testing on the Simulator, and it seems this is only supported on devices.
It seems that you do not have the IOSurface properties key defined. I'm not 100% sure it will solve your issue, but try this:
CFDictionaryRef attrs = (__bridge CFDictionaryRef)@{
(id)kCVPixelBufferWidthKey: @(WIDTH_OF_YOUR_VIDEO_GOES_HERE),
(id)kCVPixelBufferHeightKey: @(HEIGHT_OF_YOUR_VIDEO_GOES_HERE),
(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange), // or kCVPixelFormatType_32BGRA
(id)kCVPixelBufferIOSurfacePropertiesKey: @{},
};
And pass this dictionary along with the other keys when creating your pixel buffer.
That's not the way you set the contents of a layer; this is:
(__bridge id)uiImage.CGImage;
There's no way you got what you had working on a device or a simulator.
