What is the proper colorspace conversion for this image on iOS?

How do we fix our code below so that the image we get from our incoming sampleBuffer comes out with the proper colors?
We are attempting to convert the incoming sampleBuffer image to a UIImage, but the result is an inverted, off-color image.
You can see our attempt to use the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange option in the code - but the results were the same.
The incoming image itself has all the right colors: if we render it into a GLKView, all the colors are there.
Could this be a YUV420 conversion issue?
Here is the conversion code we are using:
- (CGImageRef) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage;
}
Here is the setup code we use for the incoming CVPixelBuffer:
// Now create the CVPixelBuffer to which we will render
CVReturn retVal = CVPixelBufferCreate(kCFAllocatorDefault,
self.screenWidth,
self.screenHeight,
kCVPixelFormatType_32BGRA,
attrs,
&_outputRenderTarget);
Any suggestions for what to try to restore and display all the proper colors?

There is no problem with the code shown, apart from the fact that you need to swap back your commented/uncommented lines: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange is a pixel-format constant, not a valid CGBitmapInfo value, so the commented-out kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst variant is the one to use.
I would take a step back and look at what is writing the data into your incoming buffer. For example, I use an AVAssetReaderVideoCompositionOutput object to generate the sample buffers. The initialisation of the video composition output uses video settings that match the ones you use when creating your incoming CVPixelBuffer, plus the CGImage and CGBitmapContext compatibility keys.
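For reference, pixel-buffer attributes with those compatibility keys look roughly like this (a sketch only, reusing the names from the setup code above; adapt the dictionary to your own pipeline):
NSDictionary *attrs = @{
    (id)kCVPixelBufferPixelFormatTypeKey              : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferCGImageCompatibilityKey         : @YES,
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES
};
CVPixelBufferRef renderTarget = NULL;
CVReturn retVal = CVPixelBufferCreate(kCFAllocatorDefault,
                                      self.screenWidth,
                                      self.screenHeight,
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)attrs,
                                      &renderTarget);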

It turns out we were inverting all the colors with a 1-r, 1-g, 1-b.
Once we removed the inversion, the colors appeared normal again.
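For anyone who hits the same symptom: on a 32BGRA buffer, the inversion we had to remove amounted to something like the following (a hypothetical illustration, not our exact code; baseAddress, bytesPerRow, width and height come from the CVPixelBuffer as in the code above):
// Hypothetical: per-channel inversion of a 32BGRA buffer.
// Deleting a step like this is what restored the correct colors.
for (size_t row = 0; row < height; row++) {
    uint8_t *p = (uint8_t *)baseAddress + row * bytesPerRow;
    for (size_t col = 0; col < width; col++, p += 4) {
        p[0] = 255 - p[0]; // B
        p[1] = 255 - p[1]; // G
        p[2] = 255 - p[2]; // R (alpha in p[3] is left alone)
    }
}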

Related

What is the most efficient way to display CVImageBufferRef on iOS

I have CMSampleBufferRef(s) which I decode using VTDecompressionSessionDecodeFrame, which results in a CVImageBufferRef once decoding of a frame has completed. So my question is:
What would be the most efficient way to display these CVImageBufferRefs in a UIView?
I have succeeded in converting the CVImageBufferRef to a CGImageRef and displaying it by setting the CGImageRef as a CALayer's contents, but then the DecompressionSession has to be configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
Here is example code for how I've converted CVImageBufferRef to CGImageRef (note: the CVPixelBuffer data has to be in 32BGRA format for this to work):
CVPixelBufferLockBaseAddress(cvImageBuffer, 0);
// get image properties
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cvImageBuffer);
size_t width = CVPixelBufferGetWidth(cvImageBuffer);
size_t height = CVPixelBufferGetHeight(cvImageBuffer);
/* Create a CGImageRef from the CVImageBufferRef */
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);
// release context and colorspace, and unlock the pixel buffer
CGContextRelease(cgContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);
// now the CGImageRef can be displayed either by setting it as a CALayer's contents
// or by creating a [UIImage imageWithCGImage:cgImage] that can be displayed in a
// UIImageView ...
The #WWDC14 session 513 (https://developer.apple.com/videos/wwdc/2014/#513) hints that the YUV -> RGB colorspace conversion (on the CPU?) can be avoided if some YUV-capable GLES magic is used. I wonder what that might be and how it could be accomplished.
Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera using OpenGL ES, with two separate textures for the Y and UV components and a fragment shader program that does the YUV-to-RGB colorspace conversion on the GPU. Is all of that really required, or is there a more straightforward way it can be done?
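For concreteness, the core of that sample's approach looks roughly like this (a sketch only; names such as _videoTextureCache are assumptions, and texture filtering setup and error handling are omitted):
CVOpenGLESTextureRef lumaTexture = NULL;
CVOpenGLESTextureRef chromaTexture = NULL;
size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

// Plane 0: full-resolution Y (luma) as a single-channel texture.
glActiveTexture(GL_TEXTURE0);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D,
    GL_RED_EXT, (GLsizei)width, (GLsizei)height,
    GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &lumaTexture);
glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture),
              CVOpenGLESTextureGetName(lumaTexture));

// Plane 1: half-resolution interleaved CbCr as a two-channel texture.
glActiveTexture(GL_TEXTURE1);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D,
    GL_RG_EXT, (GLsizei)(width / 2), (GLsizei)(height / 2),
    GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &chromaTexture);
glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture),
              CVOpenGLESTextureGetName(chromaTexture));

// The fragment shader then samples both textures and applies the BT.601/BT.709
// YUV-to-RGB matrix on the GPU; the textures are CFReleased and the cache
// flushed after each frame.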
NOTE: In my use case I'm unable to use AVSampleBufferDisplayLayer, due to how the input to the decompression becomes available.
Update: The original answer below does not work because kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey is unavailable for iOS.
UIView is backed by a CALayer whose contents property supports multiple types of images. As detailed in my answer to a similar question for macOS, it is possible to use CALayer to render a CVPixelBuffer’s backing IOSurface. (Caveat: I have only tested this on macOS.)
If you're getting your CVImageBufferRef from CMSampleBufferRef, which you're receiving from captureOutput:didOutputSampleBuffer:fromConnection:, you don't need to make that conversion and can directly get the imageData out of CMSampleBufferRef. Here's the code:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
UIImage *frameImage = [UIImage imageWithData:imageData];
The API description doesn't provide any info about whether 32BGRA is supported or not, and it produces imageData, along with any metadata, in JPEG format without any additional compression applied. If your goal is to display the image on screen or use it with a UIImageView, this is the quick way.

Get grayscale image from CMSampleBufferRef

I'm developing an application to detect text in an image. I use AVCaptureSession to get sample buffer images, and the Tesseract library for recognition, but detection quality is poor. I use the following code to get an image from a CMSampleBufferRef. Can anyone edit this code so it outputs a grayscale image? I have heard that grayscale input works better for recognition in Tesseract.
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
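For reference, a minimal sketch of a grayscale variant, assuming the AVCaptureVideoDataOutput is configured with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (or the video-range equivalent), so that plane 0 is the luma (Y) plane and can be wrapped directly in a gray color space:
- (UIImage *)grayscaleImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Plane 0 of a bi-planar YUV buffer holds one 8-bit luma sample per pixel.
    void *lumaBase = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(lumaBase, width, height, 8,
                                                 bytesPerRow, graySpace, kCGImageAlphaNone);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(graySpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}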

Convert CMSampleBufferRef to UIImage with YUV color space?

I'm working with AVCaptureVideoDataOutput and want to convert CMSampleBufferRef to UIImage. Many answers are the same, like this UIImage created from CMSampleBufferRef not displayed in UIImageView? and AVCaptureSession with multiple previews
It works fine if I set the VideoDataOutput color space to BGRA (credited to this answer CGBitmapContextCreateImage error)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[dataOutput setVideoSettings:videoSettings];
Without the above videoSettings, I will receive the following errors
CGBitmapContextCreate: invalid data bytes/row: should be at least 2560 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
<Error>: CGBitmapContextCreateImage: invalid context 0x0
Working with BGRA is not a good choice, since there is conversion overhead from YUV (default AVCaptureSession color space) to BGRA, as stated by Brad and Codo in How to get the Y component from CMSampleBuffer resulted from the AVCaptureSession?
So is there a way to convert CMSampleBufferRef to UIImage and working with YUV color space ?
After doing a lot of research and reading the Apple documentation and Wikipedia, I figured out the answer, and it works perfectly for me. So, for future readers, I'm sharing the code to convert CMSampleBufferRef to UIImage when the video pixel format is set to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange:
// Create a UIImage from sample buffer data
// Works only if pixel format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
-(UIImage *) imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef) sampleBuffer{
#autoreleasepool {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the plane pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
// Get the number of bytes per row for the plane pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer,0);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent gray color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGImageAlphaNone);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
}
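For context, a typical call site looks like this (a sketch; the AVCaptureVideoDataOutput delegate wiring and self.imageView are assumed):
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSamplePlanerPixelBuffer:sampleBuffer];
    // UIKit must be touched on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
    });
}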
// It works for me (Swift, using VideoToolbox; requires `import VideoToolbox`).
var image: CGImage?
VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &image)
DispatchQueue.main.async {
    self.imageView.image = UIImage(cgImage: image!)
}

GLKView snapshot method: null return val, getting an error

I can't figure out how to use the GLKView:snapshot method.
I'm using a GLKView to render some OpenGL stuff. It all works; seems like I have it all set up correctly.
But, when I try to do a snapshot, it fails: I get a null return value, and the following log message:
Error: CGImageCreate: invalid image size: 0 x 0.
Seems like this would mean the view itself is invalid for some reason, but it's not -- everything is working, aside from this.
I've looked at a few code samples, and I'm not doing anything different.
So... anyone seen this before? Ideas?
I never figured out the problem above; however, I found an excellent workaround: this chunk of code just reads the render buffer and saves it to a UIImage. Problem solved!
- (UIImage*)snapshotRenderBuffer {
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
NSInteger dataLength = backingWidth * backingHeight * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(
backingWidth, backingHeight, 8, 32, backingWidth * 4, colorspace,
kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
ref, NULL, true, kCGRenderingIntentDefault);
// (sayeth abd)
// This creates a context with the device pixel dimensions -- not points.
// To be compatible with all devices, you're meant to keep everything as points and a scale factor; but,
// this gives us a scaled down image for purposes of saving. So, keep everything in device resolution,
// and worry about it later...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(backingWidth, backingHeight), NO, 0.0f);
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, backingWidth, backingHeight), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up: release the image, color space and data provider, then free the pixel data
CGImageRelease(iref);
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(ref);
free(data);
return image;
}
Maybe this doesn't apply in your case, but the docs for GLKView:snapshot say:
Never call this method inside your drawing function.
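In other words, take the snapshot from outside the draw cycle, for example from a button action or a timer (a sketch; self.glkView and the action name are assumptions):
- (IBAction)takeSnapshot:(id)sender
{
    // -snapshot renders the view's current framebuffer contents into a UIImage.
    UIImage *image = [self.glkView snapshot];
    if (image) {
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, NULL);
    }
}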

Converting Images from camera buffer iOS. Capture still image using AVFoundation

I'm using this well known sample code from Apple to convert camera buffer still images into UIImages.
-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer==NULL) {
NSLog(#"No buffer");
}
// Lock the base address of the pixel buffer
if((CVPixelBufferLockBaseAddress(imageBuffer, 0))==kCVReturnSuccess){
NSLog(#"Buffer locked successfully");
}
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
NSLog(#"bytes per row %zu",bytesPerRow );
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
NSLog(#"width %zu",width);
size_t height = CVPixelBufferGetHeight(imageBuffer);
NSLog(#"height %zu",height);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image= [UIImage imageWithCGImage:quartzImage scale:SCALE_IMAGE_RATIO orientation:UIImageOrientationRight];
// Release the Quartz image
CGImageRelease(quartzImage);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
return (image);
}
The problem is that the image you obtain is usually rotated 90°. Using the method +imageWithCGImage:scale:orientation: I'm able to rotate it, but before settling on this method I was trying to rotate and scale the image by applying a CTM transform to the context before passing it to the UIImage. The problem was that the CTM transformation didn't affect the image.
I'm asking myself why. Is it because I'm locking the buffer? Or because the context is created with the image data already inside, so the changes only affect further drawing?
Thank you
The answer is that the transform affects only further drawing, and it has nothing to do with the buffer locking.
As you can read in this answer, modifications to the context are applied at the time you draw: Vertical flip of CGContext
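In other words, the transform has to be set on the context before the draw call (a sketch; ctx, image, width and height come from your own setup):
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, 0, height);   // move the origin to the bottom edge
CGContextScaleCTM(ctx, 1.0, -1.0);       // flip vertically
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
CGContextRestoreGState(ctx);
// A CTM applied after CGContextDrawImage (or after CGBitmapContextCreateImage)
// has no effect on pixels that have already been drawn.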
