initWithCVPixelBuffer failed because the CVPixelBufferRef is not non-IOSurface backed - ios

I receive YUV frames (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) and when creating a CIImage from a CVPixelBufferRef I get:
initWithCVPixelBuffer failed because the CVPixelBufferRef is not non-IOSurface backed.
CVPixelBufferRef pixelBuffer;
size_t planeWidth[] = { width, width / 2 };
size_t planeHeight[] = { height, height / 2};
size_t planeBytesPerRow[] = { width, width / 2 };
CVReturn ret = CVPixelBufferCreateWithBytes(
kCFAllocatorDefault, width, height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
data, bytesPerRow, 0, 0, 0, &pixelBuffer
);
if (ret != kCVReturnSuccess)
{
NSLog(@"FAILED");
CVPixelBufferRelease(pixelBuffer);
return;
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// fails
CIImage * image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CVPixelBufferRelease(pixelBuffer);
[image release];

I'll assume the question is: "Why do I get this error?"
To make a CVPixelBuffer IOSurface-backed you need to set properties on it when you create it. Right now you are passing 0 as the second-to-last parameter (the pixel buffer attributes) of CVPixelBufferCreateWithBytes.
Pass a dictionary with the key kCVPixelBufferIOSurfacePropertiesKey and an empty dictionary as its value (to use the default IOSurface options; others are not documented) to CVPixelBufferCreate, since kCVPixelBufferIOSurfacePropertiesKey can't be used with CVPixelBufferCreateWithBytes. Then copy the correct bytes into the created CVPixelBuffer (don't forget byte alignment). That is how you make it IOSurface-backed.
Although I'm not sure if it will remove all errors for you because of the pixel format. My understanding is that the GPU has to be able to hold textures in that pixel format in order to be used as IOSurfaces though I'm not 100% sure.
Note: correct copying pixel bytes for kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange could be found in this SO answer.
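For illustration, a minimal sketch of that approach (not part of the original answer): create the buffer with kCVPixelBufferIOSurfacePropertiesKey via CVPixelBufferCreate, then copy the two 420f planes into it. The srcPlanes and srcRowBytes names describing the incoming YUV data are placeholders for however your frames arrive.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                   kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                   (__bridge CFDictionaryRef)attrs, &pixelBuffer);
if (ret != kCVReturnSuccess) return;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
for (size_t plane = 0; plane < 2; plane++) {
    // Destination rows may be padded, so copy row by row using the per-plane getters.
    uint8_t *dst = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
    size_t planeHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane);
    size_t planeWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, plane);
    const uint8_t *src = srcPlanes[plane];       // placeholder: your incoming plane data
    size_t srcBytesPerRow = srcRowBytes[plane];  // placeholder: your incoming row strides
    size_t bytesPerPixel = (plane == 0) ? 1 : 2; // Y plane: 1 byte/pixel, interleaved CbCr: 2
    for (size_t row = 0; row < planeHeight; row++) {
        memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, planeWidth * bytesPerPixel);
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// The buffer is now IOSurface backed, so CIImage accepts it.
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];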

Related

How do I draw onto a CVPixelBufferRef that is planar/ycbcr/420f/yuv/NV12/not rgb?

I have received a CMSampleBufferRef from a system API that contains CVPixelBufferRefs that are not RGBA (linear pixels). The buffer contains planar pixels (such as 420f aka kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange aka yCbCr aka YUV).
I would like to do some manipulation of this video data before sending it off to VideoToolbox to be encoded to h264 (drawing some text, overlaying a logo, rotating the image, etc.), but I'd like for it to be efficient and real-time. But planar image data looks messy to work with: there's the chroma plane and the luma plane, they're different sizes, and working with this on a byte level seems like a lot of work.
I could probably use a CGContextRef and just paint right on top of the pixels, but from what I can gather it only supports RGBA pixels. Any advice on how I can do this with as little data copying as possible, yet as few lines of code as possible?
A CGBitmapContext can only paint into something like 32ARGB, correct. This means that you will want to create ARGB (or RGBA) buffers, and then find a way to transfer YUV pixels onto that ARGB surface very quickly. The recipe involves CoreImage, a home-made CVPixelBufferRef obtained through a pool, a CGBitmapContext referencing your home-made pixel buffer, and then recreating a CMSampleBufferRef resembling your input buffer but referencing your output pixels. In other words:
1. Fetch the incoming pixels into a CIImage.
2. Create a CVPixelBufferPool with the pixel format and output dimensions you are creating. You don't want to create CVPixelBuffers without a pool in real time: you will run out of memory if your producer is too fast, you'll fragment your RAM as you won't be reusing buffers, and it's a waste of cycles.
3. Create a CIContext with the default constructor that you'll share between buffers. It contains no external state, but the documentation says that recreating it on every frame is very expensive.
4. On each incoming frame, create a new pixel buffer. Make sure to use an allocation threshold so you don't get runaway RAM usage.
5. Lock the pixel buffer.
6. Create a bitmap context referencing the bytes in the pixel buffer.
7. Use the CIContext to render the planar image data into the linear buffer.
8. Perform your app-specific drawing in the CGContext!
9. Unlock the pixel buffer.
10. Fetch the timing info of the original sample buffer.
11. Create a CMVideoFormatDescriptionRef by asking the pixel buffer for its exact format.
12. Create a sample buffer for the pixel buffer. Done!
Here's a sample implementation, where I have chosen 32ARGB as the image format to work with, as that's something that both CGBitmapContext and CoreVideo enjoy working with on iOS:
{
CVPixelBufferPoolRef _pool;
CGSize _poolBufferDimensions;
}
- (void)_processSampleBuffer:(CMSampleBufferRef)inputBuffer
{
// 1. Input data
CVPixelBufferRef inputPixels = CMSampleBufferGetImageBuffer(inputBuffer);
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:inputPixels];
// 2. Create a new pool if the old pool doesn't have the right format.
CGSize bufferDimensions = {CVPixelBufferGetWidth(inputPixels), CVPixelBufferGetHeight(inputPixels)};
if(!_pool || !CGSizeEqualToSize(bufferDimensions, _poolBufferDimensions)) {
if(_pool) {
CFRelease(_pool);
}
OSStatus ok0 = CVPixelBufferPoolCreate(NULL,
NULL, // pool attrs
(__bridge CFDictionaryRef)(@{
(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB),
(id)kCVPixelBufferWidthKey: @(bufferDimensions.width),
(id)kCVPixelBufferHeightKey: @(bufferDimensions.height),
}), // buffer attrs
&_pool
);
_poolBufferDimensions = bufferDimensions;
assert(ok0 == noErr);
}
// 4. Create pixel buffer
CVPixelBufferRef outputPixels;
OSStatus ok1 = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(NULL,
_pool,
(__bridge CFDictionaryRef)@{
// Opt to fail buffer creation in case of slow buffer consumption
// rather than to exhaust all memory.
(__bridge id)kCVPixelBufferPoolAllocationThresholdKey: @20
}, // aux attributes
&outputPixels
);
if(ok1 == kCVReturnWouldExceedAllocationThreshold) {
// Dropping frame because consumer is too slow
return;
}
assert(ok1 == noErr);
// 5, 6. Graphics context to draw in
CGColorSpaceRef deviceColors = CGColorSpaceCreateDeviceRGB();
OSStatus ok2 = CVPixelBufferLockBaseAddress(outputPixels, 0);
assert(ok2 == noErr);
CGContextRef cg = CGBitmapContextCreate(
CVPixelBufferGetBaseAddress(outputPixels), // bytes
CVPixelBufferGetWidth(inputPixels), CVPixelBufferGetHeight(inputPixels), // dimensions
8, // bits per component
CVPixelBufferGetBytesPerRow(outputPixels), // bytes per row
deviceColors, // color space
kCGImageAlphaPremultipliedFirst // bitmap info
);
CFRelease(deviceColors);
assert(cg != NULL);
// 7
[_imageContext render:inputImage toCVPixelBuffer:outputPixels];
// 8. DRAW
CGContextSetRGBFillColor(cg, 0.5, 0, 0, 1);
CGContextSetTextDrawingMode(cg, kCGTextFill);
NSAttributedString *text = [[NSAttributedString alloc] initWithString:@"Hello world" attributes:NULL];
CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)text);
CTLineDraw(line, cg);
CFRelease(line);
// 9. Unlock and stop drawing
CFRelease(cg);
CVPixelBufferUnlockBaseAddress(outputPixels, 0);
// 10. Timings
CMSampleTimingInfo timingInfo;
OSStatus ok4 = CMSampleBufferGetSampleTimingInfo(inputBuffer, 0, &timingInfo);
assert(ok4 == noErr);
// 11. Video format
CMVideoFormatDescriptionRef videoFormat;
OSStatus ok5 = CMVideoFormatDescriptionCreateForImageBuffer(NULL, outputPixels, &videoFormat);
assert(ok5 == noErr);
// 12. Output sample buffer
CMSampleBufferRef outputBuffer;
OSStatus ok3 = CMSampleBufferCreateForImageBuffer(NULL, // allocator
outputPixels, // image buffer
YES, // data ready
NULL, // make ready callback
NULL, // make ready refcon
videoFormat,
&timingInfo, // timing info
&outputBuffer // out
);
assert(ok3 == noErr);
[_consumer consumeSampleBuffer:outputBuffer];
CFRelease(outputPixels);
CFRelease(videoFormat);
CFRelease(outputBuffer);
}
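The sample above references an _imageContext ivar without showing where it comes from; a minimal sketch, assuming it is the shared CIContext from step 3, created once (for example in init) and reused for every frame under ARC:
- (instancetype)init
{
    if ((self = [super init])) {
        // Default CIContext: cheap to keep around, expensive to recreate per frame.
        _imageContext = [CIContext contextWithOptions:nil];
    }
    return self;
}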

iOS: Overlay two images with Alpha offscreen

Sorry for this question; I know there is a similar one, but I cannot get the answer to work. Probably some dumb error on my side ;-)
I want to overlay two images with alpha on iOS. The images are taken from two videos, read by an AVAssetReader and stored in two CVPixelBuffers. I know that the alpha channel is not stored in the video, so I get it from a third file. All data looks fine. The problem is the overlay: if I do it on-screen with [CIContext drawImage:...] everything is fine!
But if I do it off-screen, because the format of the video is not identical to the screen format, I cannot get it to work:
1. drawImage does work, but only on-screen
2. render:toCVPixelBuffer works, but ignores Alpha
3. CGContextDrawImage seems to do nothing at all (not even an error message)
So can somebody give me an idea what is wrong:
Init:
...(a lot of code before)
// Set up color space and bitmap context
if(outputContext)
{
CGContextRelease(outputContext);
CGColorSpaceRelease(outputColorSpace);
}
outputColorSpace = CGColorSpaceCreateDeviceRGB();
outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer), videoFormatSize.width, videoFormatSize.height, 8, CVPixelBufferGetBytesPerRow(pixelBuffer), outputColorSpace,(CGBitmapInfo) kCGBitmapByteOrderDefault |kCGImageAlphaPremultipliedFirst);
...
(a lot code after)
Drawing:
CIImage *backImageFromSample;
CGImageRef frontImageFromSample;
CVImageBufferRef nextImageBuffer = myPixelBufferArray[0];
CMSampleBufferRef sampleBuffer = NULL;
CMSampleTimingInfo timingInfo;
//draw the frame
CGRect toRect;
toRect.origin.x = 0;
toRect.origin.y = 0;
toRect.size = videoFormatSize;
//background image always full size, this part seems to work
if(drawBack)
{
CVPixelBufferLockBaseAddress( backImageBuffer, kCVPixelBufferLock_ReadOnly );
backImageFromSample = [CIImage imageWithCVPixelBuffer:backImageBuffer];
[coreImageContext render:backImageFromSample toCVPixelBuffer:nextImageBuffer bounds:toRect colorSpace:rgbSpace];
CVPixelBufferUnlockBaseAddress( backImageBuffer, kCVPixelBufferLock_ReadOnly );
}
else
[self clearBuffer:nextImageBuffer];
//Front image doesn't seem to do anything
if(drawFront)
{
unsigned long int numBytes = CVPixelBufferGetBytesPerRow(frontImageBuffer)*CVPixelBufferGetHeight(frontImageBuffer);
CVPixelBufferLockBaseAddress( frontImageBuffer, kCVPixelBufferLock_ReadOnly );
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, CVPixelBufferGetBaseAddress(frontImageBuffer), numBytes, NULL);
frontImageFromSample = CGImageCreate (CVPixelBufferGetWidth(frontImageBuffer) , CVPixelBufferGetHeight(frontImageBuffer), 8, 32, CVPixelBufferGetBytesPerRow(frontImageBuffer), outputColorSpace, (CGBitmapInfo) kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst, provider, NULL, NO, kCGRenderingIntentDefault);
CGContextDrawImage ( outputContext, inrect, frontImageFromSample);
CVPixelBufferUnlockBaseAddress( frontImageBuffer, kCVPixelBufferLock_ReadOnly );
CGImageRelease(frontImageFromSample);
}
Any ideas anyone ?
So obviously I should stop asking questions on Stack Overflow. Every time I do, after hours of debugging, I find the answer myself shortly afterwards. Sorry for that. The problem is in the initialisation: you can't call CVPixelBufferGetBaseAddress without locking the address first O_o. The address comes back NULL, and that is apparently allowed, with the effect that subsequent drawing simply does nothing. So the correct code is:
if(outputContext)
{
CGContextRelease(outputContext);
CGColorSpaceRelease(outputColorSpace);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
outputColorSpace = CGColorSpaceCreateDeviceRGB();
outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer), videoFormatSize.width, videoFormatSize.height, 8, CVPixelBufferGetBytesPerRow(pixelBuffer), outputColorSpace,(CGBitmapInfo) kCGBitmapByteOrderDefault |kCGImageAlphaPremultipliedFirst);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

High dynamic range imaging using openCV on iOS produces garbled output

I'm trying to use openCV 3 on iOS to produce an HDR image from multiple exposures that will eventually be output as an EXR file. I noticed I was getting garbled output when I tried to create an HDR image. Thinking it was a mistake in trying to create a camera response, I started from scratch and adapted the HDR imaging tutorial material from the openCV site to iOS, but it produces similar results. The following C++ code returns a garbled image:
cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
imgs = images;
Mat response;
//Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
//calibrate->process(images, response, times);
Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
calibrate->process(images, response, times);
// create HDR
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
merge_debevec->process(images, hdr, times, response);
// create LDR
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr);
// create fusion
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion);
/*
Uncomment what kind of tonemapped image or hdr to return
Returning one of the images in the array produces ungarbled output
so we know the problem is unlikely with the openCV to UIImage conversion
*/
//give back one of the images from the image array
//return images[0];
//give back one of the hdr images
return fusion * 255;
//return ldr * 255;
//return hdr
}
This is what the image looks like:
Bad image output
I've analysed the image, tried various colour space conversions, but the data appears to be junk.
The openCV framework is the latest compiled 3.0.0 version from the openCV.org website. The RC and alpha produce the same results, and the current version won't build (for iOS or OSX). I was thinking my next steps would be to try and get the framework to compile from scratch, or to get the example working under another platform to see if the issue is platform specific or with the openCV HDR functions themselves. But before I do that I thought I would throw the issue up on stack overflow to see if anyone had come across the same issue or if I am missing something blindingly obvious.
I have uploaded the example xcode project to here:
https://github.com/artandmath/openCVHDRSwiftExample
Getting openCV to work with Swift was done with help from user foundry on GitHub.
Thanks foundry for pointing me in the right direction. The UIImage+OpenCV class extension is expecting 8 bits per colour channel; however, the HDR functions are spitting out 32 bits per channel (which is actually what I want). Converting the image matrix back to 8 bits per channel for display purposes, before converting it to a UIImage, fixes the issue.
Here is the resulting image:
The expected result!
Here is the fixed function:
cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
imgs = images;
Mat response;
//Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
//calibrate->process(images, response, times);
Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
calibrate->process(images, response, times);
// create HDR
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
merge_debevec->process(images, hdr, times, response);
// create LDR
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr);
// create fusion
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion);
/*
Uncomment what kind of tonemapped image or hdr to return
Convert back to 8-bits per channel because that is what
the UIImage+OpenCV class extension is expecting
*/
// tone mapped
/*
Mat ldr8bit;
ldr = ldr * 255;
ldr.convertTo(ldr8bit, CV_8U);
return ldr8bit;
*/
// fusion
Mat fusion8bit;
fusion = fusion * 255;
fusion.convertTo(fusion8bit, CV_8U);
return fusion8bit;
// hdr
/*
Mat hdr8bit;
hdr = hdr * 255;
hdr.convertTo(hdr8bit, CV_8U);
return hdr8bit;
*/
}
Alternatively, here is a fix for the initWithCVMat: method in the UIImage+OpenCV class extension, based on one of the iOS tutorials in the iOS section on opencv.org:
http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation
When creating a new CGImageRef with floating-point data, it needs to be told explicitly that it contains floating-point data, and the byte order of the image data from openCV needs to be reversed. Now iOS/Quartz has the float data! It's a bit of a hacky fix, because the method still only deals with 8-bit or 32-bit-per-channel images without alpha and doesn't take into account every kind of image that could be passed from Mat to UIImage.
- (id)initWithCVMat:(const cv::Mat&)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
CGColorSpaceRef colorSpace;
size_t elemSize = cvMat.elemSize();
size_t elemSize1 = cvMat.elemSize1();
size_t channelCount = elemSize/elemSize1;
size_t bitsPerChannel = 8 * elemSize1;
size_t bitsPerPixel = bitsPerChannel * channelCount;
if (channelCount == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
// Tell CGImageRef different bitmap info if handed 32-bit data
uint32_t bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
if (bitsPerChannel == 32 ){
bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapFloatComponents | kCGBitmapByteOrder32Little;
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(cvMat.cols, //width
cvMat.rows, //height
bitsPerChannel, //bits per component
bitsPerPixel, //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
bitmapInfo, // bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
self = [self initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return self;
}

Why is there a "potential leak"?

Xcode's analyser is complaining that there is a "potential leak of an object". The first line within the following method is highlighted:
- (void)retrieveBeginRestoreData {
self.restoreContext = [self.image newARGBBitmapContext];
if (!self.restoreContext) self.restoreData = nil;
CGRect rect = {{0,0},self.image.size};
CGContextDrawImage(self.restoreContext, rect, self.image.CGImage);
self.restoreData = CGBitmapContextGetData(self.restoreContext);
}
I have a property declared as such:
@property (nonatomic, assign) CGContextRef restoreContext;
The newARGBBitmapContext is defined by the following:
- (CGContextRef)newARGBBitmapContext {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
size_t bitmapByteCount;
size_t bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(self.CGImage);
size_t pixelsHigh = CGImageGetHeight(self.CGImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
// colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL)
{
fprintf(stderr, "Error allocating color space\n");
return NULL;
}
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
I managed to resolve this issue by instead declaring restoreContext as an instance variable in the header file; the "potential leak" warning goes away.
Questions:
What was the issue in the first place?
How was the issue fixed when I stopped declaring restoreContext as a property?
What is the correct way to fix the issue with restoreContext being declared as a property?
This line
self.restoreContext = [self.image newARGBBitmapContext];
does the following:
It (potentially) creates a CGContext instance.
Since the method name starts with new, an ownership transfer is applied. That means that the receiver (your code) is responsible for releasing it.
When the line of code is run a second time, the reference to the already existing CGContext instance is overwritten without releasing the instance it points to. The older instance leaks.
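For completeness, a minimal sketch (my assumption, not part of the original answer) of how the property-based version could avoid the leak: release the previously owned context before storing the new one.
CGContextRef newContext = [self.image newARGBBitmapContext]; // +1, we own it (method name starts with "new")
if (self.restoreContext) {
    CGContextRelease(self.restoreContext); // balance the earlier +1 before overwriting the reference
}
self.restoreContext = newContext;
Releasing the context once it is no longer needed (for example in dealloc) is of course still required.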

Rotating video without rotating AVCaptureConnection and in the middle of AVAssetWriter session

I'm using PBJVision to implement tap-to-record video functionality. The library doesn't support orientation yet so I'm in the process of trying to engineer it in. From what I see, there are three ways to rotate the video - I need help on deciding the best way forward and how to implement it. Note that rotation can happen between tap-to-record segments. So in a recording session, the orientation is locked to what it was when the user tapped the button. The next time the user taps the button to record, it should re-set the orientation to whatever the device's orientation is (so the resulting video shows right-side-up).
The approaches are outlined in the issue page on GitHub as well
Method 1
Rotate the AVCaptureConnection using setVideoOrientation: - this causes the video preview to flicker every time it's switched, since this switches the actual hardware it seems. Not cool, not acceptable.
Method 2
Set the transform property on the AVAssetWriterInput object used to write the video. The problem is, once the asset writer starts writing, the transform property can't be changed, so this only works for the first segment of the video.
Method 3
Rotate the image buffer being appended using something like this: How to directly rotate CVImageBuffer image in IOS 4 without converting to UIImage? but it keeps crashing and I'm not even sure if I'm barking up the right tree. There's an exception that is thrown and I can't really trace it back to much more than the fact that I'm using the vImageRotate90_ARGB8888 function incorrectly.
The explanation is a bit more detailed on the GitHub issue page I linked to above. Any suggestions would be welcome - to be honest, I'm not hugely experienced at AVFoundation and so I'm hoping that there's some miraculous way to do this that I don't even know about!
Method 1 isn't the preferred method according to Apple's documentation ("Physically rotating buffers does come with a performance cost, so only request rotation if it's necessary"). Method 2 worked for me, but if I played my video in an app that doesn't support the transformation "metadata", the video wasn't rotated properly. Method 3 is what I did.
I think it's crashing because you're trying to pass the image data directly from vImageRotate... to the AVAssetWriterInputPixelBufferAdaptor. You have to create a CVPixelBufferRef first. Here's my code:
Inside of captureOutput:didOutputSampleBuffer:fromConnection: I rotate the frame before writing it into the adaptor:
if ([self.videoWriterInput isReadyForMoreMediaData])
{
// Rotate buffer first and then write to adaptor
CMTime sampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CVPixelBufferRef rotatedBuffer = [self correctBufferOrientation:sampleBuffer];
[self.videoWriterInputAdaptor appendPixelBuffer:rotatedBuffer withPresentationTime:sampleTime];
CVBufferRelease(rotatedBuffer);
}
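The correctBufferOrientation: helper used above isn't shown in the answer; a hypothetical sketch, assuming it maps an orientation captured at the start of the segment (here a made-up recordingOrientation property) to a rotation constant and forwards to the rotation function below:
- (CVPixelBufferRef)correctBufferOrientation:(CMSampleBufferRef)sampleBuffer
{
    uint8_t rotationConstant = 0;
    switch (self.recordingOrientation) { // assumed property, captured when recording starts
        case UIDeviceOrientationPortrait:           rotationConstant = 1; break; // 90° CCW
        case UIDeviceOrientationPortraitUpsideDown: rotationConstant = 3; break; // 270° CCW
        case UIDeviceOrientationLandscapeRight:     rotationConstant = 2; break; // 180°
        case UIDeviceOrientationLandscapeLeft:      // no rotation needed
        default:                                    rotationConstant = 0; break;
    }
    return [self rotateBuffer:sampleBuffer withConstant:rotationConstant];
}
The exact mapping depends on which camera and capture orientation you use, so treat these values as placeholders.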
The referenced function that performs the vImage rotation is:
/* rotationConstant:
* 0 -- rotate 0 degrees (simply copy the data from src to dest)
* 1 -- rotate 90 degrees counterclockwise
* 2 -- rotate 180 degrees
* 3 -- rotate 270 degrees counterclockwise
*/
- (CVPixelBufferRef)rotateBuffer:(CMSampleBufferRef)sampleBuffer withConstant:(uint8_t)rotationConstant
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
OSType pixelFormatType = CVPixelBufferGetPixelFormatType(imageBuffer);
NSAssert(pixelFormatType == kCVPixelFormatType_32ARGB, @"Code works only with 32ARGB format. Test/adapt for other formats!");
const size_t kAlignment_32ARGB = 32;
const size_t kBytesPerPixel_32ARGB = 4;
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
BOOL rotatePerpendicular = (rotationConstant == 1) || (rotationConstant == 3); // Use enumeration values here
const size_t outWidth = rotatePerpendicular ? height : width;
const size_t outHeight = rotatePerpendicular ? width : height;
size_t bytesPerRowOut = kBytesPerPixel_32ARGB * ceil(outWidth * 1.0 / kAlignment_32ARGB) * kAlignment_32ARGB;
const size_t dstSize = bytesPerRowOut * outHeight * sizeof(unsigned char);
void *srcBuff = CVPixelBufferGetBaseAddress(imageBuffer);
unsigned char *dstBuff = (unsigned char *)malloc(dstSize);
vImage_Buffer inbuff = {srcBuff, height, width, bytesPerRow};
vImage_Buffer outbuff = {dstBuff, outHeight, outWidth, bytesPerRowOut};
uint8_t bgColor[4] = {0, 0, 0, 0};
vImage_Error err = vImageRotate90_ARGB8888(&inbuff, &outbuff, rotationConstant, bgColor, 0);
if (err != kvImageNoError)
{
NSLog(@"%ld", err);
}
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRef rotatedBuffer = NULL;
CVPixelBufferCreateWithBytes(NULL,
outWidth,
outHeight,
pixelFormatType,
outbuff.data,
bytesPerRowOut,
freePixelBufferDataAfterRelease,
NULL,
NULL,
&rotatedBuffer);
return rotatedBuffer;
}
void freePixelBufferDataAfterRelease(void *releaseRefCon, const void *baseAddress)
{
// Free the memory we malloced for the vImage rotation
free((void *)baseAddress);
}
Note: You may like to use an enumeration for rotationConstant, something like this (don't call this function with MOVRotateDirectionUnknown):
typedef NS_ENUM(uint8_t, MOVRotateDirection)
{
MOVRotateDirectionNone = 0,
MOVRotateDirectionCounterclockwise90,
MOVRotateDirectionCounterclockwise180,
MOVRotateDirectionCounterclockwise270,
MOVRotateDirectionUnknown
};
Note: If you need IOSurface support, you should use CVPixelBufferCreate instead of CVPixelBufferCreateWithBytes and pass bytes data into it directly:
NSDictionary *pixelBufferAttributes = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferCreate(kCFAllocatorDefault,
outWidth,
outHeight,
pixelFormatType,
(__bridge CFDictionaryRef)(pixelBufferAttributes),
&rotatedBuffer);
CVPixelBufferLockBaseAddress(rotatedBuffer, 0);
uint8_t *dest = CVPixelBufferGetBaseAddress(rotatedBuffer);
memcpy(dest, outbuff.data, bytesPerRowOut * outHeight);
CVPixelBufferUnlockBaseAddress(rotatedBuffer, 0);
There is an easy and safe way:
#define degreeToRadian(x) (M_PI * (x) / 180.0)
self.assetWriterInputVideo.transform =
CGAffineTransformMakeRotation(degreeToRadian(-90));
Method 3 does work to rotate the video frames, but I found that it can cause a memory leak. To avoid it, I moved the rotation onto the same thread that merges the video frames, and that worked. Please keep this in mind if you hit the issue.
