I'm recording live video in my iOS app. On another Stack Overflow page, I found that I can use vImage_Buffer to work on my frames.
The problem is that I have no idea how to get back to a CVPixelBufferRef from the output vImage_Buffer.
Here is the code that is given in the other article:
NSInteger cropX0 = 100,
cropY0 = 100,
cropHeight = 100,
cropWidth = 100,
outWidth = 480,
outHeight = 480;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;
int startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;
unsigned char *outImg = (unsigned char*)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};
vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);
And now I need to convert outBuff to a CVPixelBufferRef.
I assume I need to use vImageBuffer_CopyToCVPixelBuffer, but I'm not sure how.
My first attempts failed with an EXC_BAD_ACCESS:
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst, //BGRX8888
.colorSpace = NULL, //sRGB
};
vImageBuffer_CopyToCVPixelBuffer(&outBuff,
&format,
pixelBuffer,
NULL,
NULL,
kvImageNoFlags); // Here is the crash!
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Any idea?
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool : YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool : YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
[NSNumber numberWithInt : 480], kCVPixelBufferWidthKey,
[NSNumber numberWithInt : 480], kCVPixelBufferHeightKey,
nil];
CVPixelBufferRef pixbuffer = NULL;
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
480,
480,
kCVPixelFormatType_32BGRA,
outImg,
4 * 480, // row bytes of the scaled output, not the source frame's bytesPerRow
NULL,
NULL,
(__bridge CFDictionaryRef)options,
&pixbuffer);
You should generate a new pixelBuffer like above.
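Note that CVPixelBufferCreateWithBytes does not copy or take ownership of outImg, so the malloc'd scale output must stay valid for the lifetime of the buffer. A minimal sketch, assuming the call above, of a release callback (releaseScaledBytes is a hypothetical name) that could be passed in place of the first NULL so the memory is freed when the buffer goes away:
// Hypothetical release callback: frees the malloc'd vImage output when the
// pixel buffer created with CVPixelBufferCreateWithBytes is released.
static void releaseScaledBytes(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}
// Then pass releaseScaledBytes as the releaseCallback argument (the first NULL above).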
Just in case... if you only need a cropped live video feed in your interface, use an AVPlayerLayer, AVCaptureVideoPreviewLayer and/or other CALayer subclasses, and use the layer's bounds, frame and position to map your 100x100 pixel area into the 480x480 area.
Notes on vImage for your question (other circumstances may call for different settings):
CVPixelBufferCreateWithBytes will not work with vImageBuffer_CopyToCVPixelBuffer() because you need to copy the vImage_Buffer data into a "clean" or "empty" CVPixelBuffer.
No need for locking/unlocking - make sure you know when to lock & when not to lock pixel buffers.
Your inBuff vImage_Buffer just needs to be initialized from the pixel buffer data, not manually (unless you know how to use CGContexts etc, to init the pixel grid)
use vImageBuffer_InitWithCVPixelBuffer()
vImageScale_ARGB8888 will scale the entire CVPixel data to a smaller/larger rectangle. It won't SCALE a portion/crop area of the buffer to another buffer.
When you use vImageBuffer_CopyToCVPixelBuffer(),
vImageCVImageFormatRef & vImage_CGImageFormat need to be filled out correctly.
CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
vImage_CGImageFormat format = {
.bitsPerComponent = 16,
.bitsPerPixel = 64,
.bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder16Big ,
.colorSpace = dstColorSpace
};
vImageCVImageFormatRef vformat = vImageCVImageFormat_Create(kCVPixelFormatType_4444AYpCbCr16,
kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2,
kCVImageBufferChromaLocation_Center,
format.colorSpace,
0);
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
480,
480,
kCVPixelFormatType_4444AYpCbCr16,
NULL,
&destBuffer);
NSParameterAssert(status == kCVReturnSuccess && destBuffer != NULL);
err = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format, destBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);
NOTE: these are settings for 64-bit ProRes with alpha; adjust for 32 bit (one possible 32-bit variant is sketched below).
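For the 32-bit BGRA case in this question, a hedged, untested sketch of how the same copy might look (the format values and the explicit color-space step are assumptions, not verified code):
CVPixelBufferRef bgraBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 480, 480, kCVPixelFormatType_32BGRA, NULL, &bgraBuffer);
vImage_CGImageFormat format32 = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little, // BGRX8888
    .colorSpace = CGColorSpaceCreateDeviceRGB()
};
// Derive the CV format from the destination buffer and attach an explicit color
// space, since buffers made with CVPixelBufferCreate carry no color space of their own.
vImageCVImageFormatRef vformat32 = vImageCVImageFormat_CreateWithCVPixelBuffer(bgraBuffer);
vImageCVImageFormat_SetColorSpace(vformat32, format32.colorSpace);
vImage_Error err32 = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &format32, bgraBuffer,
                                                      vformat32, NULL, kvImagePrintDiagnosticsToConsole);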
I am trying to resize a CMSampleBufferRef as quickly as possible on an iOS 8 device for use in image processing. From what I have found online, the way to do this seems to be by using the vImage API in the Accelerate framework. However, I haven't done much with the Accelerate framework and I can't quite figure out how to do this. Here is what I have so far to scale an image to 200x200:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef cvimgRef = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cvimgRef,0);
void *imageData = CVPixelBufferGetBaseAddress(cvimgRef);
NSInteger width = CVPixelBufferGetWidth(cvimgRef);
NSInteger height = CVPixelBufferGetHeight(cvimgRef);
unsigned char *newData= // NOT SURE WHAT THIS SHOULD BE...
vImage_Buffer inBuff = { imageData, height, width, 4*width };
vImage_Buffer outBuff = { newData, 200, 200, 4*200 };
// NOT SURE IF THIS IS THE CORRECT METHOD... video output settings for kCVPixelBufferPixelFormatTypeKey is set to kCVPixelFormatType_32BGRA
// This seems wrong since the image scale is ARGB, not BGRA.
vImageScale_ARGB8888(&inBuff, &outBuff, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(cvimgRef,0);
}
Where outBuff is the result. After that, I am also not sure how to convert outBuff back to a CVImageBufferRef for further image processing. Any suggestions would be appreciated!
vImageScale just fills a destination buffer with raw data; pay attention that the buffers you allocate need to be freed.
I don't know if there is a faster way that works directly on that output buffer, but I would convert the buffer into a CGImage. Something like this, taken from here, so take it as a reference:
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.colorSpace = NULL,
.bitmapInfo = (CGBitmapInfo)kCGImageAlphaFirst,
.version = 0,
.decode = NULL,
.renderingIntent = kCGRenderingIntentDefault,
};
vImage_Error ret = kvImageNoError;
CGImageRef destRef = vImageCreateCGImageFromBuffer(&dstBuffer, &format, NULL, NULL, kvImageNoFlags, &ret);
Later I will convert it into a CVPixelBuffer.
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = @{
(NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status!=kCVReturnSuccess) {
DLog(@"Operation failed");
}
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I'm pretty sure it is possible to avoid the conversion into a CGImage and work on the buffer directly, but I never tried.
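For reference, a hedged sketch of that untried direct route (assuming dstBuffer holds the 200x200 BGRA output of the scale, as in the question): create an empty pixel buffer of the scaled size and copy the vImage output into it row by row, honoring the destination's bytes-per-row.
CVPixelBufferRef scaledBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 200, 200, kCVPixelFormatType_32BGRA, NULL, &scaledBuffer);
CVPixelBufferLockBaseAddress(scaledBuffer, 0);
uint8_t *dstBase = (uint8_t *)CVPixelBufferGetBaseAddress(scaledBuffer);
size_t dstRowBytes = CVPixelBufferGetBytesPerRow(scaledBuffer);
for (int row = 0; row < 200; ++row) {
    // Copy one 200-pixel row (4 bytes per pixel) from the vImage output.
    memcpy(dstBase + row * dstRowBytes,
           (uint8_t *)dstBuffer.data + row * dstBuffer.rowBytes,
           4 * 200);
}
CVPixelBufferUnlockBaseAddress(scaledBuffer, 0);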
You have to use a resampling filter in conjunction with any vImage operations that alter image geometry; see page 32 of the vImage Programming Guide.
- (CVPixelBufferRef)copyRenderedPixelBuffer:(CVPixelBufferRef)pixelBuffer {
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
// vImage processing
vImage_Error err;
vImage_Buffer buffer;
buffer.data = (unsigned char *)CVPixelBufferGetBaseAddress( pixelBuffer );
buffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
buffer.width = CVPixelBufferGetWidth( pixelBuffer );
buffer.height = CVPixelBufferGetHeight( pixelBuffer );
vImageCVImageFormatRef vformat = vImageCVImageFormat_CreateWithCVPixelBuffer( pixelBuffer );
vImage_CGImageFormat cgformat = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrderDefault,
.colorSpace = NULL, //sRGB
};
const CGFloat bgColor[3] = {0.0, 0.0, 0.0};
vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoAllocate);
vImage_Buffer outbuffer;
void *tempBuffer;
tempBuffer = malloc(CVPixelBufferGetBytesPerRow( pixelBuffer ) * CVPixelBufferGetHeight( pixelBuffer ));
outbuffer.data = tempBuffer;
outbuffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
outbuffer.width = CVPixelBufferGetWidth( pixelBuffer );
outbuffer.height = CVPixelBufferGetHeight( pixelBuffer );
// PROCESS vIMAGE HERE
err = vImageBuffer_CopyToCVPixelBuffer(&outbuffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoFlags);
if(err != -1)
free(tempBuffer);
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
return (CVPixelBufferRef)CFRetain( pixelBuffer );
}
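As one hedged illustration of the resampling point above (not part of the original function), the "PROCESS vIMAGE HERE" step could run a geometry operation such as a rotation from buffer into outbuffer; kvImageHighQualityResampling asks vImage for its higher-quality resampling kernel:
const Pixel_8888 black = {0, 0, 0, 0};
// Rotate the frame by roughly 22.5 degrees into the temporary output buffer;
// vImage resamples the pixels internally according to the flags passed.
vImage_Error rotErr = vImageRotate_ARGB8888(&buffer,
                                            &outbuffer,
                                            NULL,              // let vImage allocate temp storage
                                            (float)(M_PI / 8.0),
                                            black,
                                            kvImageHighQualityResampling);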
I am trying to convert a YUV image to a CIImage and ultimately a UIImage. I am fairly new to this and trying to figure out an easy way to do it. From what I have learned, since iOS 6 YUV can be used directly to create a CIImage, but as I try to create it the CIImage only holds a nil value. My code is like this:
NSLog(#"Started DrawVideoFrame\n");
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithBytes(
kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
lpData, bytesPerRow, 0, 0, 0, &pixelBuffer
);
if(ret != kCVReturnSuccess)
{
NSLog(#"CVPixelBufferRelease Failed");
CVPixelBufferRelease(pixelBuffer);
}
NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
@(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(#"CURRENT CIImage -> %p\n", cimage);
UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(#"CURRENT UIImage -> %p\n", image);
Here lpData is the YUV data, which is an array of unsigned char.
This also looks interesting: vImageMatrixMultiply, but I can't find any example of it. Can anyone help me with this?
I have also faced this problem. I was trying to display YUV (NV12) formatted data on the screen. This solution works in my project...
//YUV(NV12)-->CIImage--->UIImage Conversion
NSDictionary *pixelAttributes = @{(__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
640,
480,
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// Here y_ch0 is the Y plane of the YUV (NV12) data.
// Note: the memcpy assumes the plane's bytes-per-row equals the width (640);
// if the buffer pads its rows, copy row by row instead.
memcpy(yDestPlane, y_ch0, 640 * 480);
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// Here y_ch1 is the interleaved UV plane of the YUV (NV12) data.
memcpy(uvDestPlane, y_ch1, 640 * 480 / 2);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
if (result != kCVReturnSuccess) {
NSLog(#"Unable to create cvpixelbuffer %d", result);
}
// CIImage Conversion
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *MytemporaryContext = [CIContext contextWithOptions:nil];
CGImageRef MyvideoImage = [MytemporaryContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, 640, 480)];
// UIImage Conversion
UIImage *Mynnnimage = [[UIImage alloc] initWithCGImage:MyvideoImage
scale:1.0
orientation:UIImageOrientationRight];
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(MyvideoImage);
Here I am showing the data structure of YUV (NV12) data and how we can get the Y-Plane (y_ch0) and UV-Plane (y_ch1) that are used to create the CVPixelBufferRef. Let's look at the YUV (NV12) data structure.
From this layout we can get the following information about YUV (NV12):
Total Frame Size = Width * Height * 3/2,
Y-Plane Size = Frame Size * 2/3,
UV-Plane Size = Frame Size * 1/3,
Data stored in Y-Plane --> {Y1, Y2, Y3, Y4, Y5, ...},
Data stored in UV-Plane --> {U1, V1, U2, V2, U3, V3, ...}.
I hope it will be helpful to all. :) Have fun with iOS development :D
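If the NV12 frame arrives as one contiguous block, the two planes used above are just offsets into it; a minimal sketch (yuvData, width and height are assumed names):
unsigned char *y_ch0 = yuvData;                    // Y plane: width * height bytes
unsigned char *y_ch1 = yuvData + width * height;   // interleaved UV plane: width * height / 2 bytes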
If you have a video frame object that looks like this:
int width,
int height,
unsigned long long time_stamp,
unsigned char *yData,
unsigned char *uData,
unsigned char *vData,
int yStride,
int uStride,
int vStride
You can use the following to fill up a pixelBuffer:
NSDictionary *pixelAttributes = @{(__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
width,
height,
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, // NV12
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
if (result != kCVReturnSuccess) {
NSLog(#"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i ++) {
for (int j = 0; j < width; j ++) {
yDestPlane[k++] = yData[j + i * yStride];
}
}
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int i = 0, k = 0; i < height / 2; i ++) {
for (int j = 0; j < width / 2; j ++) {
uvDestPlane[k++] = uData[j + i * uStride];
uvDestPlane[k++] = vData[j + i * vStride];
}
}
Now you can convert it to CIImage:
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *tempContext = [CIContext contextWithOptions:nil];
CGImageRef coreImageRef = [tempContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, width, height)];
And UIImage if you need that. (image orientation can vary depending on your input)
UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
scale:1.0
orientation:UIImageOrientationUp];
Don't forget to release the variables:
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(coreImageRef);
I can successfully create a movie from a single still image. However I am also given an array of smaller images that I need to superimpose on top of the background image. I've tried just repeating the process of appending frames with the assetWriter, but I get errors because you can't write to the same frame you've already written to.
So, I assume you have to compose the entire pixel buffer for each frame completely before you write the frame. But how would you do that?
Here's my code that works for rendering one background image:
CGSize renderSize = CGSizeMake(320, 568);
NSUInteger fps = 30;
self.assetWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(self.assetWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:renderSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:renderSize.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([self.assetWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[self.assetWriter addInput:videoWriterInput];
//Start a session:
[self.assetWriter startWriting];
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
NSInteger totalFrames = 90; //3 seconds
//process the bg image
int frameCount = 0;
UIImage* resizedImage = [UIImage resizeImage:self.bgImage size:renderSize];
buffer = [self pixelBufferFromCGImage:[resizedImage CGImage]];
BOOL append_ok = YES;
int j = 0;
while (append_ok && j < totalFrames) {
if (adaptor.assetWriterInput.readyForMoreMediaData) {
CMTime frameTime = CMTimeMake(frameCount,(int32_t) fps);
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(!append_ok){
NSError *error = self.assetWriter.error;
if(error!=nil) {
NSLog(#"Unresolved error %#,%#.", error, [error userInfo]);
}
}
}
else {
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
frameCount++;
}
if (!append_ok) {
printf("error appending image %d times %d\n, with error.", frameCount, j);
}
//Finish the session:
[videoWriterInput markAsFinished];
[self.assetWriter finishWritingWithCompletionHandler:^() {
self.assetWriter = nil;
}];
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
CGSize size = CGSizeMake(320,568);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
size.width,
size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess){
NSLog(#"Failed to create pixel buffer");
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Again, the question is how to create a pixel buffer for a background image and an array of N small images that will be layered on top of the bg image. The next step after this will be to also superimpose a small video.
You can add the pixel info from the image list over the pixel buffer.
This example code shows how to add BGRA data over an ARGB pixel buffer.
// Try to create a pixel buffer with the image mat
uint8_t* videobuffer = m_imageBGRA.data;
// From image buffer (BGRA) to pixel buffer
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate (NULL, m_width, m_height, kCVPixelFormatType_32ARGB, NULL, &pixelBuffer);
if ((pixelBuffer == NULL) || (status != kCVReturnSuccess))
{
NSLog(#"Error CVPixelBufferPoolCreatePixelBuffer[pixelBuffer=%#][status=%d]", pixelBuffer, status);
return;
}
else
{
uint8_t *videobuffertmp = videobuffer;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);
// Add data for all the pixels in the image
for( int row=0 ; row<m_width ; ++row )
{
for( int col=0 ; col<m_height ; ++col )
{
memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
// Move the buffer pointer to the next pixel
pixelBufferData += 4*sizeof(uint8_t);
videobuffertmp += 4*sizeof(uint8_t);
}
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
So, in this example, the data from an image (videobuffer) is added to the pixel buffer. Usually, the pixel data is stored as one contiguous block, row after row, so for each pixel we have 4 bytes (represented as 'uint8_t' in this case): first blue, then green, then red, and last the alpha value (remember that the original image is in BGRA format).
The pixel buffer works the same way, except the data is stored as ARGB in this case, as defined by the 'kCVPixelFormatType_32ARGB' parameter.
This piece of code reorders the pixel data to match the pixel buffer configuration:
memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
And once we have the pixel added, we can move forward a pixel by:
// Move the buffer pointer to the next pixel
pixelBufferData += 4*sizeof(uint8_t);
videobuffertmp += 4*sizeof(uint8_t);
Moving the pointers 4 bytes forward.
If your images are smaller, you can copy them into a smaller region of the buffer (a sketch follows after this snippet), or add an 'if' that uses the alpha value to decide which pixels to overwrite. For example:
// Add data for all the pixels in the image
for( int row=0 ; row<m_width ; ++row )
{
for( int col=0 ; col<m_height ; ++col )
{
if( videobuffertmp[3] > 10 ) // check alpha channel
{
memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
}
// Move the buffer pointer to the next pixel
pixelBufferData += 4*sizeof(uint8_t);
videobuffertmp += 4*sizeof(uint8_t);
}
}
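And a hedged sketch of the "smaller region" variant mentioned above: copy a small BGRA overlay into the ARGB pixel buffer starting at (x0, y0), honoring the buffer's bytes-per-row. Here overlayData, overlayWidth, overlayHeight, x0 and y0 are assumed names, and the buffer must be locked, as in the code above.
size_t destBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
uint8_t *destBase = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
for (int row = 0; row < overlayHeight; ++row)
{
    uint8_t *destPixel = destBase + (y0 + row) * destBytesPerRow + 4 * x0;
    const uint8_t *srcPixel = overlayData + row * 4 * overlayWidth;
    for (int col = 0; col < overlayWidth; ++col)
    {
        destPixel[0] = srcPixel[3]; // alpha (destination is ARGB, overlay is BGRA)
        destPixel[1] = srcPixel[2]; // red
        destPixel[2] = srcPixel[1]; // green
        destPixel[3] = srcPixel[0]; // blue
        destPixel += 4;
        srcPixel += 4;
    }
}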
I want to convert a YUV 420SP image (captured directly from the camera, YCbCr format) to JPEG in iOS. What I have found is the CGImageCreate() function https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CGImage/Reference/reference.html#//apple_ref/doc/uid/TP30000956-CH1g-F17167 , which takes a few parameters, including the byte array containing the image data, and should return a CGImage. A UIImage made from that, when passed to UIImageJPEGRepresentation(), should return JPEG data, but that is not really happening.
The output image data is far from what is required. At least the output is not nil.
As input to CGImageCreate(), I am setting bits per component to 4, bits per pixel to 12, and some default values.
Can it really convert a YUV YCbCr image and not only RGB? If yes, then I think I am doing something wrong with the input values to the CGImageCreate function.
From what I can see here, the CGColorSpaceRef colorspace parameter can refer to RGB, CMYK, or grayscale only.
So I think first you need to convert your YCbCr420 image to RGB, for example, using IPP function YCbCr420toRGB (doc). Alternatively, you can write your own conversion routine, it's not that hard.
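For the "write your own" route, a naive, hedged sketch of a YCbCr 4:2:0 bi-planar (NV12-style) to RGB conversion, assuming full-range samples and no row padding; yuv420spToRGB and its parameter names are made up for illustration:
static inline uint8_t clamp8(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

void yuv420spToRGB(const uint8_t *yPlane, const uint8_t *cbcrPlane,
                   int width, int height, uint8_t *rgbOut /* 3 * width * height bytes */)
{
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            int y  = yPlane[row * width + col];
            // One Cb/Cr pair is shared by a 2x2 block of luma samples.
            int cb = cbcrPlane[(row / 2) * width + (col & ~1)]     - 128;
            int cr = cbcrPlane[(row / 2) * width + (col & ~1) + 1] - 128;
            uint8_t *p = rgbOut + 3 * (row * width + col);
            p[0] = clamp8(y + ((91881 * cr) >> 16));              // R = Y + 1.402 * Cr
            p[1] = clamp8(y - ((22554 * cb + 46802 * cr) >> 16)); // G = Y - 0.344 * Cb - 0.714 * Cr
            p[2] = clamp8(y + ((116130 * cb) >> 16));             // B = Y + 1.772 * Cb
        }
    }
}
The resulting RGB buffer can then be handed to CGBitmapContextCreate or CGImageCreate as an ordinary RGB image.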
Here's the code for converting a sample buffer returned by the captureOutput:didOutputSampleBuffer:fromConnection: method of AVCaptureVideoDataOutput:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); //2560 == (640 * 4)
size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer); //480
size_t dataSize = CVPixelBufferGetDataSize(pixelBuffer); //1_228_808 = (2560 * 480) + 8
CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawImageBytes, bufferWidth, bufferHeight, 8, bytesPerRow, defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef image = CGBitmapContextCreateImage(context);
CFMutableDataRef imageData = CFDataCreateMutable(NULL, 0);
CGImageDestinationRef destination = CGImageDestinationCreateWithData(imageData, kUTTypeJPEG, 1, NULL);
NSDictionary *properties = @{(__bridge id)kCGImageDestinationLossyCompressionQuality: @(0.25),
(__bridge id)kCGImageDestinationBackgroundColor: (__bridge id)CLEAR_COLOR,
(__bridge id)kCGImageDestinationOptimizeColorForSharing : @(TRUE)
};
CGImageDestinationAddImage(destination, image, (__bridge CFDictionaryRef)properties);
if (!CGImageDestinationFinalize(destination))
{
CFRelease(imageData);
imageData = NULL;
}
CFRelease(destination);
UIImage *frame = [[UIImage alloc] initWithCGImage:image];
CGContextRelease(context);
CGImageRelease(image);
renderFrame([self.childViewControllers.lastObject.view viewWithTag:1].layer, frame);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
Here are your three options for pixel format types:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA
If _captureOutput is a reference to my instance of AVCaptureVideoDataOutput, this is how you set the pixel format type:
[_captureOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];