I want to capture raw pixel data for manipulation using the GPUImage framework. I capture the data like this:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
//raw values
UInt32 *values = (UInt32 *)[dataForRawBytes bytes]; // cnt = [dataForRawBytes length] / sizeof(UInt32);
//test out dropbox upload here
[self uploadDropbox:dataForRawBytes];
//end of dropbox upload
// Do whatever with your bytes
// [self processImages:dataForRawBytes];
CVPixelBufferUnlockBaseAddress(cameraFrame, 0); }];
I am using the following settings for camera:
NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey,[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
For testing purposes I want to save the image I capture to Dropbox; to do that I need to save it to a temporary directory first. How would I save dataForRawBytes?
Any help would be very appreciated!
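For what it's worth, the temporary-file step can be as simple as writing the NSData into NSTemporaryDirectory() and handing that path to the uploader. A minimal sketch (the file name "frame.raw" and the idea of passing a path to the uploader are my assumptions, not part of the original code):
// write the raw frame bytes into the app's tmp directory (file name is arbitrary)
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"frame.raw"];
NSError *writeError = nil;
if (![dataForRawBytes writeToFile:tmpPath options:NSDataWritingAtomic error:&writeError]) {
    NSLog(@"Could not write raw frame: %@", writeError);
}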
So I was able to figure out how to get a UIImage from the raw data; here is my modified code:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
Byte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
// Do whatever with your bytes
// create suitable color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
//Create suitable context (suitable for camera output setting kCVPixelFormatType_32BGRA)
CGContextRef newContext = CGBitmapContextCreate(rawImageBytes, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
// release color space
CGColorSpaceRelease(colorSpace);
//Create a CGImageRef from the CVImageBufferRef
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
UIImage *FinalImage = [[UIImage alloc] initWithCGImage:newImage];
// release the CGImage and the context now that the UIImage holds its own reference
CGImageRelease(newImage);
CGContextRelease(newContext);
// this is the captured image; now we can test saving it.
I needed to create things such as a color space, generate a CGContextRef, and work with that to finally get a UIImage, and when debugging I can properly see the image I captured.
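From here, saving the captured UIImage to the temporary directory for the Dropbox test is a short step; a rough sketch (the file name and the 0.9 JPEG quality are arbitrary choices on my part):
NSData *jpegData = UIImageJPEGRepresentation(FinalImage, 0.9);
NSString *jpegPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.jpg"];
// write atomically so a partially written file is never uploaded
[jpegData writeToFile:jpegPath atomically:YES];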
Related
I am developing an iPad app which presents pictures from a photographer. Those photos are uploaded to a web server and served directly through the app, where they are downloaded and displayed using the method below:
if([[NSFileManager defaultManager] fileExistsAtPath:[url path]]){
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CGImageRef cgImage = nil;
if(source){
cgImage = CGImageSourceCreateImageAtIndex(source, 0, (CFDictionaryRef)dict);
}
UIImage *retImage = [UIImage imageWithCGImage:cgImage];
if(cgImage){
CGImageRelease(cgImage);
}
if(source){
CFRelease(source);
}
return retImage;
}
I can observe a serious change in the photos' colours between the original picture (which looks the same whether displayed from disk or from the web on my Mac and the photographer's Mac) and the iPad (the result is wrong in the app and even in Safari).
After some searching I found posts explaining that iDevices do not use the embedded color profile, so I figured that was the direction to investigate. The photos are saved with the following info:
I found out from some articles (for example this link from imageoptim or analogsenses) that I should save the picture for device export by converting it to sRGB without embedding the color profile, but I can't figure out how to do that. Each time I tried (I don't have Photoshop, so I used command-line ImageMagick), the resulting picture has the following information and is still not displayed correctly on my iPad (or any other iPads I've tested):
Here is an example of a picture that does not display correctly on the iPhone or iPad, but does on the web.
I would like to transform it so it displays correctly; any idea would be really welcome :)
[EDIT] I have managed to obtain a correct image using Photoshop's "Save for Web" options with the following parameters:
But I'm still unable to apply those settings automatically to all my pictures.
To read an image, just use:
UIImage *image = [UIImage imageWithContentsOfFile:path];
As for the color profile issue, try the sips command-line tool to fix the image files. Something like:
mkdir converted
sips -m "/System/Library/ColorSync/Profiles/sRGB Profile.icc" *.JPG --out converted
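If you would rather handle it inside the app instead of preprocessing the files, one alternative (a sketch only, reusing the cgImage decoded in the question's snippet; kCGColorSpaceSRGB needs iOS 9 or later) is to redraw each downloaded image into an sRGB bitmap context so the pixels end up in a known color space:
CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
size_t w = CGImageGetWidth(cgImage);
size_t h = CGImageGetHeight(cgImage);
// let Core Graphics allocate the backing store (bytesPerRow = 0)
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, srgb,
                                         kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cgImage);
CGImageRef srgbImage = CGBitmapContextCreateImage(ctx);
UIImage *converted = [UIImage imageWithCGImage:srgbImage];
CGImageRelease(srgbImage);
CGContextRelease(ctx);
CGColorSpaceRelease(srgb);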
You can first get the color space through the CGImage property.
@property(nonatomic, readonly) CGImageRef CGImage;
CGColorSpaceRef CGImageGetColorSpace(CGImageRef image);
And depending on the colorSpace, apply a format conversion. So to get the color space of an image, you'd do:
CGColorSpaceRef colorspace = CGImageGetColorSpace([myUIImage CGImage]);
Note: make sure to follow the get/create/copy rules for CG objects.
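For instance, a quick check of the color-space model before deciding whether a conversion is needed could look like this (a sketch; kCGColorSpaceModelRGB is the Core Graphics constant for RGB-family spaces):
// CGImageGetColorSpace follows the "Get" rule, so the returned reference is not released here
CGColorSpaceRef cs = CGImageGetColorSpace([myUIImage CGImage]);
if (CGColorSpaceGetModel(cs) != kCGColorSpaceModelRGB) {
    // not an RGB-family image; redraw it into an RGBA8 context, e.g. with the ImageHelper below
}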
Color conversion to RGB8 (this can also be applied to RGB16 or RGB32 by changing the bits per component in the method newBitmapRGBA8ContextFromImage):
// Create a bitmap
unsigned char *bitmap = [ImageHelper convertUIImageToBitmapRGBA8:image];
// Create a UIImage using the bitmap
UIImage *imageCopy = [ImageHelper convertBitmapRGBA8ToUIImage:bitmap withWidth:width withHeight:height];
// Display the image copy on the GUI
UIImageView *imageView = [[UIImageView alloc] initWithImage:imageCopy];
ImageHelper.h
#import <Foundation/Foundation.h>
@interface ImageHelper : NSObject {
}
/** Converts a UIImage to an RGBA8 bitmap.
@param image - a UIImage to be converted
@return an RGBA8 bitmap, or NULL if there are any memory allocation issues. Clean up the memory with free() when done.
*/
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *)image;
/** A helper routine used to create an RGBA8 bitmap context from a CGImage.
@return a new context that is owned by the caller
*/
+ (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef)image;
/** Converts an RGBA8 bitmap to a UIImage.
@param buffer - the RGBA8 unsigned char * bitmap
@param width - the number of pixels wide
@param height - the number of pixels tall
@return a UIImage that is autoreleased, or nil if there are memory allocation issues
*/
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *)buffer
withWidth:(int)width
withHeight:(int)height;
@end
ImageHelper.m
#import "ImageHelper.h"
@implementation ImageHelper
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image {
CGImageRef imageRef = image.CGImage;
// Create a bitmap context to draw the uiimage into
CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
if(!context) {
return NULL;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = CGRectMake(0, 0, width, height);
// Draw image into the context to get the raw image data
CGContextDrawImage(context, rect, imageRef);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);
// Copy the data and release the memory (the returned buffer is allocated with malloc; the caller must free it)
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
size_t bufferLength = bytesPerRow * height;
unsigned char *newBitmap = NULL;
if(bitmapData) {
newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
if(newBitmap) { // Copy the data
for(int i = 0; i < bufferLength; ++i) {
newBitmap[i] = bitmapData[i];
}
}
free(bitmapData);
} else {
NSLog(#"Error getting bitmap pixel data\n");
}
CGContextRelease(context);
return newBitmap;
}
+ (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef) image {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
uint32_t *bitmapData;
size_t bitsPerPixel = 32;
size_t bitsPerComponent = 8;
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * bytesPerPixel;
size_t bufferLength = bytesPerRow * height;
colorSpace = CGColorSpaceCreateDeviceRGB();
if(!colorSpace) {
NSLog(#"Error allocating color space RGB\n");
return NULL;
}
// Allocate memory for image data
bitmapData = (uint32_t *)malloc(bufferLength);
if(!bitmapData) {
NSLog(#"Error allocating memory for bitmap\n");
CGColorSpaceRelease(colorSpace);
return NULL;
}
//Create bitmap context
context = CGBitmapContextCreate(bitmapData,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast); // RGBA
if(!context) {
free(bitmapData);
NSLog(#"Bitmap context not created");
}
CGColorSpaceRelease(colorSpace);
return context;
}
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *) buffer
withWidth:(int) width
withHeight:(int) height {
size_t bufferLength = width * height * 4;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
if(colorSpaceRef == NULL) {
NSLog(#"Error allocating color space");
CGDataProviderRelease(provider);
return nil;
}
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider, // data provider
NULL, // decode
YES, // should interpolate
renderingIntent);
uint32_t* pixels = (uint32_t*)malloc(bufferLength);
if(pixels == NULL) {
NSLog(#"Error: Memory not allocated for bitmap");
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
return nil;
}
CGContextRef context = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpaceRef,
bitmapInfo);
if(context == NULL) {
NSLog(#"Error context not created");
free(pixels);
}
UIImage *image = nil;
if(context) {
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Support both iPad 3.2 and iPhone 4 Retina displays with the correct scale
if([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
float scale = [[UIScreen mainScreen] scale];
image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
} else {
image = [UIImage imageWithCGImage:imageRef];
}
CGImageRelease(imageRef);
CGContextRelease(context);
}
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
CGDataProviderRelease(provider);
if(pixels) {
free(pixels);
}
return image;
}
@end
@PhilippeAuriach
I think you might have a problem with [UIImage imageWithCGImage:cgImage]. My suggestion is to use [UIImage imageWithContentsOfFile:path] instead.
The code below might help you.
if([[NSFileManager defaultManager] fileExistsAtPath:[url path]]){
//Provide image path here...
UIImage *image = [UIImage imageWithContentsOfFile:path];
if(image){
return image;
}else{
//Return default image
return image;
}
}
I already got my problem solved by using different code; I just want to know what is wrong with the following one.
I wanted to change the colour of every pixel in a UIImage using bitmap data. My code is as follows:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
UIImage *image = self.imageViewMain.image;
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
char *pixels = (char *)[data bytes];
// this is where we manipulate the individual pixels
for(int i = 1; i < [data length]; i += 3)
{
int r = i;
int g = i+1;
int b = i+2;
int a = i+3;
pixels[r] = 0; // eg. remove red
pixels[g] = pixels[g];
pixels[b] = pixels[b];
pixels[a] = pixels[a];
}
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
CGImageRef newImageRef = CGImageCreate (
width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// cleanup
free(pixels);
CGImageRelease(imageRef);
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
}
But when this code runs, I get an EXC_BAD_ACCESS error, shown in the following image:
And here is some more information from debugging:
What is it that I'm missing or doing wrong?
Try allocating memory for the pixels array, as in the following code:
char *pixels = (char *)malloc(data.length);
memcpy(pixels, [data bytes], data.length);
When pixels is no longer needed, release this memory by calling free(pixels).
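One way to wire that up without leaking or double-freeing is to let the data provider own the malloc'd copy through a release callback; a sketch of that variant (the callback name releasePixels is mine, not from the original code):
// release callback: the provider calls this once nothing references the image data any more
static void releasePixels(void *info, const void *data, size_t size) {
    free((void *)data);
}
// ... then, after modifying the copied pixel buffer:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, data.length, releasePixels);
// do not free(pixels) here, and do not CGImageRelease(imageRef): image.CGImage was never retained by this method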
I use this code, but it is very slow. Is there any other way to do it?
I tried using the indexOfObject and containsObject methods on an array of images, but that does not work for me.
BOOL haveDublicate = NO;
UIImage *i = [ImageManager imageFromPath:path];
NSArray *photoImages = [ImageManager imagesFromPaths:photoPaths];
for (UIImage *saved in photoImages)
{
if ([ UIImagePNGRepresentation( saved ) isEqualToData:
UIImagePNGRepresentation( i ) ])
{
haveDublicate = YES;
}
}
I think you should check the size of the image first. If the size and scale of both images are equal, check the pixel data directly for equality rather than the images' PNG representations; this will be much faster. (The linked post shows how to get the pixel data. To compare it, use memcmp.)
From that post (slightly modified):
NSData *rawDataFromUIImage(UIImage *image)
{
assert(image);
// Get the image into the data buffer
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int byteSize = height * width * 4;
unsigned char *rawData = (unsigned char*) malloc(byteSize);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// transfer ownership of the malloc'd buffer to NSData so it is not leaked
return [NSData dataWithBytesNoCopy:rawData length:byteSize freeWhenDone:YES];
}
About why this is faster: UIImagePNGRepresentation (1) fetches the raw binary data and then (2) converts it to PNG format. Skipping the second step can only improve performance, because it is much more work than just doing step 1. And memcmp is faster than everything else in this example.
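Putting it together, the duplicate check from the question could be rewritten roughly like this (a sketch; it assumes the size/scale check mentioned above has already passed):
NSData *savedPixels = rawDataFromUIImage(saved);
NSData *candidatePixels = rawDataFromUIImage(i);
// memcmp comes from <string.h>; lengths are equal when the dimensions match
if (savedPixels.length == candidatePixels.length &&
    memcmp(savedPixels.bytes, candidatePixels.bytes, savedPixels.length) == 0)
{
    haveDublicate = YES;
}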
I tried to answer this in the original thread, however SO would not let me. Hopefully someone with more authority can merge this into the original question.
OK, here is a more complete answer. First, set up the capture:
// Create capture session
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
// Setup capture input
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice
error:nil];
[self.captureSession addInput:captureInput];
// Setup video processing (capture output)
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
// Don't add frames to the queue if frames are already processing
captureOutput.alwaysDiscardsLateVideoFrames = YES;
// Create a serial queue to handle processing of frames
_videoQueue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:_videoQueue];
// Set the video output to store frame in YUV
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
[self.captureSession addOutput:captureOutput];
OK, now the implementation of the delegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create autorelease pool because we are not in the main_queue
@autoreleasepool {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//Lock the imagebuffer
CVPixelBufferLockBaseAddress(imageBuffer,0);
// Get information about the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
// size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
// This just moved the pointer past the offset
baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
// convert the image
_prefImageView.image = [self makeUIImage:baseAddress bufferInfo:bufferInfo width:width height:height bytesPerRow:bytesPerRow];
// Update the display with the captured image for DEBUG purposes
dispatch_async(dispatch_get_main_queue(), ^{
[_myMainView.yUVImage setImage:_prefImageView.image];
});
} // end autoreleasepool
} // end captureOutput:didOutputSampleBuffer:fromConnection:
And finally, here is the method to convert from YUV to a UIImage:
- (UIImage *)makeUIImage:(uint8_t *)inBaseAddress bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo width:(size_t)inWidth height:(size_t)inHeight bytesPerRow:(size_t)inBytesPerRow {
NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);
uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4);
uint8_t *yBuffer = (uint8_t *)inBaseAddress;
uint8_t val;
int bytesPerPixel = 4;
// for each byte in the input buffer, fill in the output buffer with four bytes
// the first byte is the Alpha channel, then the next three contain the same
// value of the input buffer
for(int y = 0; y < inHeight*inWidth; y++)
{
val = yBuffer[y];
// Alpha channel
rgbBuffer[(y*bytesPerPixel)] = 0xff;
// next three bytes same as input
rgbBuffer[(y*bytesPerPixel)+1] = rgbBuffer[(y*bytesPerPixel)+2] = rgbBuffer[y*bytesPerPixel+3] = val;
}
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rgbBuffer, yPitch, inHeight, 8,
yPitch*bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
free(rgbBuffer);
return image;
}
You will also need to #import "Endian.h"
Note that the call to CGBitmapContextCreate is much trickier than I expected. I'm not very savvy on video processing at all, yet this call stumped me for a while. Then, when it finally worked, it was like magic.
Background info: @Michaelg's version only accesses the Y buffer, so you only get luminance and not color. It also has a buffer-overrun bug if the pitch of the buffers and the number of pixels don't match (padding bytes at the end of a line for whatever reason). The background on what is going on here is that this is a planar image format, which allocates one byte per pixel for luminance and 2 bytes per 4 pixels for color information. Rather than being stored contiguously in memory, these are stored as "planes", where the Y (luminance) plane has its own block of memory and the CbCr (color) plane also has its own block of memory. The CbCr plane has 1/4 the number of samples (half the height and width) of the Y plane, and each pixel in the CbCr plane corresponds to a 2x2 block in the Y plane. Hopefully this background helps.
Edit: Both his version and my old version had the potential to overrun buffers and would not work if the rows in the image buffer have padding bytes at the end of each row. Furthermore, my CbCr plane buffer was not created with the correct offset. To do this correctly you should always use the Core Video functions such as CVPixelBufferGetWidthOfPlane and CVPixelBufferGetBaseAddressOfPlane. This ensures that you are interpreting the buffer correctly, and it will work regardless of whether the buffer has a header and whether you screw up the pointer math. You should use the row sizes and the buffer base addresses from Apple's functions as well. These are documented at: https://developer.apple.com/library/prerelease/ios/documentation/QuartzCore/Reference/CVPixelBufferRef/index.html Note that while the version here makes some use of Apple's functions and some use of the header, it is best to use only Apple's functions, as in the sketch below. I may update this in the future to not use the header at all.
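For reference, a minimal sketch of reading the plane geometry purely through those Core Video accessors, with no reliance on the CVPlanarPixelBufferInfo_YCbCrBiPlanar header:
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *yPlane    = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
uint8_t *cbCrPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
size_t yPitch      = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
size_t cbCrPitch   = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
size_t yWidth      = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t yHeight     = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
// for 420 bi-planar formats the CbCr plane is half the width and height of the Y plane,
// with Cb and Cr interleaved: two bytes of chroma per 2x2 block of Y samples
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);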
This will convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer into a UIImage which you can then use.
First, set up the capture:
// Create capture session
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
// Setup capture input
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice
error:nil];
[self.captureSession addInput:captureInput];
// Setup video processing (capture output)
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
// Don't add frames to the queue if frames are already processing
captureOutput.alwaysDiscardsLateVideoFrames = YES;
// Create a serial queue to handle processing of frames
_videoQueue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:_videoQueue];
// Set the video output to store frame in YUV
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
[self.captureSession addOutput:captureOutput];
OK, now the implementation of the delegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create autorelease pool because we are not in the main_queue
@autoreleasepool {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//Lock the imagebuffer
CVPixelBufferLockBaseAddress(imageBuffer,0);
// Get information about the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
// size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
//get the cbrbuffer base address
uint8_t* cbrBuff = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
// This just moved the pointer past the offset
baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
// convert the image
_prefImageView.image = [self makeUIImage:baseAddress cBCrBuffer:cbrBuff bufferInfo:bufferInfo width:width height:height bytesPerRow:bytesPerRow];
// Update the display with the captured image for DEBUG purposes
dispatch_async(dispatch_get_main_queue(), ^{
[_myMainView.yUVImage setImage:_prefImageView.image];
});
} // end autoreleasepool
} // end captureOutput:didOutputSampleBuffer:fromConnection:
And finally, here is the method to convert from YUV to a UIImage:
- (UIImage *)makeUIImage:(uint8_t *)inBaseAddress cBCrBuffer:(uint8_t*)cbCrBuffer bufferInfo:(CVPlanarPixelBufferInfo_YCbCrBiPlanar *)inBufferInfo width:(size_t)inWidth height:(size_t)inHeight bytesPerRow:(size_t)inBytesPerRow {
NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);
NSUInteger cbCrOffset = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.offset);
uint8_t *rgbBuffer = (uint8_t *)malloc(inWidth * inHeight * 4);
NSUInteger cbCrPitch = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.rowBytes);
uint8_t *yBuffer = (uint8_t *)inBaseAddress;
//uint8_t *cbCrBuffer = inBaseAddress + cbCrOffset;
uint8_t val;
int bytesPerPixel = 4;
for(int y = 0; y < inHeight; y++)
{
uint8_t *rgbBufferLine = &rgbBuffer[y * inWidth * bytesPerPixel];
uint8_t *yBufferLine = &yBuffer[y * yPitch];
uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
for(int x = 0; x < inWidth; x++)
{
int16_t y = yBufferLine[x];
int16_t cb = cbCrBufferLine[x & ~1] - 128;
int16_t cr = cbCrBufferLine[x | 1] - 128;
uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel];
int16_t r = (int16_t)roundf( y + cr * 1.4 );
int16_t g = (int16_t)roundf( y + cb * -0.343 + cr * -0.711 );
int16_t b = (int16_t)roundf( y + cb * 1.765);
//ABGR
rgbOutput[0] = 0xff;
rgbOutput[1] = clamp(b);
rgbOutput[2] = clamp(g);
rgbOutput[3] = clamp(r);
}
}
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSLog(#"ypitch:%lu inHeight:%zu bytesPerPixel:%d",(unsigned long)yPitch,inHeight,bytesPerPixel);
NSLog(#"cbcrPitch:%lu",cbCrPitch);
CGContextRef context = CGBitmapContextCreate(rgbBuffer, inWidth, inHeight, 8,
inWidth*bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
free(rgbBuffer);
return image;
}
You will also need to #import "Endian.h" and to define the clamp macro: #define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))
Note that the call to CGBitmapContextCreate is much trickier than I expected. I'm not very savvy on video processing at all, yet this call stumped me for a while. Then, when it finally worked, it was like magic.
I want to convert a YUV 420SP image (captured directly from the camera, in YCbCr format) to JPEG on iOS. What I have found is the CGImageCreate() function https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CGImage/Reference/reference.html#//apple_ref/doc/uid/TP30000956-CH1g-F17167 , which takes a few parameters, including the byte array containing the image data, and should return a CGImage whose UIImage, when passed to UIImageJPEGRepresentation(), returns JPEG data, but that is not really happening.
The output image data is far from what is required. At least the output is not nil.
As input to CGImageCreate(), I am setting bits per component to 4, bits per pixel to 12, and some default values.
Can it really convert a YUV YCbCr image, and not only RGB? If yes, then I think I am doing something wrong with the input values to the CGImageCreate function.
From what I can see here, the CGColorSpaceRef colorspace parameter can refer to RGB, CMYK, or grayscale only.
So I think you first need to convert your YCbCr420 image to RGB, for example using the IPP function YCbCr420toRGB (doc). Alternatively, you can write your own conversion routine; it's not that hard.
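For example, a plain C per-pixel conversion with approximate full-range BT.601 coefficients (a sketch only; the exact coefficients depend on which YUV variant your camera actually delivers) could look like:
// convert one full-range YCbCr sample to 8-bit RGB
static inline void ycbcrToRGB(uint8_t y, uint8_t cb, uint8_t cr,
                              uint8_t *r, uint8_t *g, uint8_t *b) {
    int d = cb - 128, e = cr - 128;
    int rv = y + (int)(1.402f * e);
    int gv = y - (int)(0.344f * d) - (int)(0.714f * e);
    int bv = y + (int)(1.772f * d);
    *r = (uint8_t)(rv < 0 ? 0 : (rv > 255 ? 255 : rv));
    *g = (uint8_t)(gv < 0 ? 0 : (gv > 255 ? 255 : gv));
    *b = (uint8_t)(bv < 0 ? 0 : (bv > 255 ? 255 : bv));
}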
Here's the code for converting a sample buffer delivered by the captureOutput:didOutputSampleBuffer:fromConnection: method of AVCaptureVideoDataOutput:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); //2560 == (640 * 4)
size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer); //480
size_t dataSize = CVPixelBufferGetDataSize(pixelBuffer); //1_228_808 = (2560 * 480) + 8
CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawImageBytes, bufferWidth, bufferHeight, 8, bytesPerRow, defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef image = CGBitmapContextCreateImage(context);
CFMutableDataRef imageData = CFDataCreateMutable(NULL, 0);
CGImageDestinationRef destination = CGImageDestinationCreateWithData(imageData, kUTTypeJPEG, 1, NULL);
NSDictionary *properties = @{(__bridge id)kCGImageDestinationLossyCompressionQuality: @(0.25),
(__bridge id)kCGImageDestinationBackgroundColor: (__bridge id)CLEAR_COLOR,
(__bridge id)kCGImageDestinationOptimizeColorForSharing : @(TRUE)
};
CGImageDestinationAddImage(destination, image, (__bridge CFDictionaryRef)properties);
if (!CGImageDestinationFinalize(destination))
{
CFRelease(imageData);
imageData = NULL;
}
CFRelease(destination);
UIImage *frame = [[UIImage alloc] initWithCGImage:image];
CGContextRelease(context);
CGImageRelease(image);
renderFrame([self.childViewControllers.lastObject.view viewWithTag:1].layer, frame);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
Here are your three options for pixel format types:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA
If _captureOutput is the reference to my instance of AVCaptureVideoDataOutput, this is how you set the pixel format type:
[_captureOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
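Inside the delegate you can also double-check which format the session actually delivered before picking a conversion path; a quick sketch:
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
if (format == kCVPixelFormatType_32BGRA) {
    // the BGRA path shown above applies directly
} else if (format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange ||
           format == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
    // bi-planar YUV; use a plane-based conversion instead
}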