Does anyone know if it's possible to do the Photoshop layer composite mode "negative multiply"?
Multiply is possible, but I need the negative variant.
MagickBooleanType aStat = MagickCompositeImage(magick_wand_local, magick_wand_local_comp, MultiplyCompositeOp, 0, 0);
Thanks
Okay. Found it myself.
It's called: ScreenCompositeOp
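For reference, screen is multiply applied to the inverted channels: result = 1 - (1 - A) * (1 - B), which is exactly the "negative multiply" behaviour.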
Some sample code (not cleaned!):
CGImageRef standardized = srcCGImage; //createStandardImage(srcCGImage);
// could use the image directly if it has 8/16 bits per component,
// otherwise the image must be converted into something more common (such as images with 5-bits per component)
// here we’ll be simple and always convert
const char *map = "ARGB"; // hard coded
const StorageType inputStorage = CharPixel;
NSData *srcData = (NSData *) CGDataProviderCopyData(CGImageGetDataProvider(standardized));
const void *bytes = [srcData bytes];
MagickWandGenesis();
MagickWand * magick_wand_local= NewMagickWand();
MagickBooleanType status = MagickConstituteImage(magick_wand_local, width, height, map, inputStorage, bytes);
if (status == MagickFalse) {
ThrowWandException(magick_wand_local);
}
/*
status = MagickOrderedPosterizeImage(magick_wand_local, "h8x8o");
if (status == MagickFalse) {
ThrowWandException(magick_wand_local);
}
*/
//status = MagickThresholdImage(magick_wand_local, 100.0);
MagickWand * magick_wand_local_comp = NewMagickWand();
NSString *file = #"winter_over.jpg";
if(export == YES) {
file = #"winter_over_large.jpg";
}
if(MagickReadImage(magick_wand_local_comp,[[[NSBundle mainBundle] pathForResource:file ofType:#""] UTF8String]) == MagickFalse) {
ExceptionType severity;
char *err = MagickGetException(magick_wand_local_comp, &severity);
printf("%s\n",err);
NSLog(#"error");
}
MagickBooleanType aStat = MagickCompositeImage(magick_wand_local, magick_wand_local_comp, ScreenCompositeOp, 0, 0);
if(aStat == MagickFalse) {
NSLog(#"error");
}
status = MagickModulateImage(magick_wand_local, 100, 29, 100);
if (status == MagickFalse) {
ThrowWandException(magick_wand_local);
}
const int bitmapBytesPerRow = (width * strlen(map));
const int bitmapByteCount = (bitmapBytesPerRow * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
char *trgt_image = malloc(bitmapByteCount);
status = MagickExportImagePixels(magick_wand_local, 0, 0, width, height, map, CharPixel, trgt_image);
if (status == MagickFalse) {
ThrowWandException(magick_wand_local);
}
magick_wand_local = DestroyMagickWand(magick_wand_local);
magick_wand_local_comp = DestroyMagickWand(magick_wand_local_comp);
MagickWandTerminus();
CGContextRef context = CGBitmapContextCreate (trgt_image,
width,
height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
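The sample stops after creating the bitmap context; to actually get an image back (and not leak the buffer) you would presumably finish along these lines (a minimal sketch, not part of the original snippet):
CGImageRef composited = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// ... use `composited` (e.g. wrap it in a UIImage), then:
CGImageRelease(composited);
free(trgt_image); // only after the last use of anything backed by this buffer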
I am trying to resize an image from a CVPixelBufferRef to 299x299.
Ideally it would also crop the image. The original pixel buffer is 640x320; the goal is to scale/crop to 299x299 without losing aspect ratio (crop to center).
I found code to resize a UIImage in Objective-C, but none to resize a CVPixelBufferRef. I have found various very complicated Objective-C examples for many different image types, but none specifically for resizing a CVPixelBufferRef.
What is the easiest/best way to do this? Please include the exact code.
... I tried the answer from selton, but it did not work, as the resulting type in the scaled buffer is not correct (it goes into the assert code below):
OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
int doReverseChannels;
if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
doReverseChannels = 1;
} else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
doReverseChannels = 0;
} else {
assert(false); // Unknown source format
}
Using CoreMLHelpers as inspiration, we can create a C function that does what you need. Based on your pixel format requirements, I think this solution will be the most efficient option. I used an AVCaptureVideoDataOutput for testing.
I hope this helps!
AVCaptureVideoDataOutputSampleBufferDelegate implementation. The majority of the work here is creating a centered-cropping rectangle. Making use of AVMakeRectWithAspectRatioInsideRect is key (it does exactly what you want).
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (pixelBuffer == NULL) { return; }
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
CGRect videoRect = CGRectMake(0, 0, width, height);
CGSize scaledSize = CGSizeMake(299, 299);
// Create a rectangle that meets the output size's aspect ratio, centered in the original video frame
CGRect centerCroppingRect = AVMakeRectWithAspectRatioInsideRect(scaledSize, videoRect);
CVPixelBufferRef croppedAndScaled = createCroppedPixelBuffer(pixelBuffer, centerCroppingRect, scaledSize);
// Do other things here
// For example
CIImage *image = [CIImage imageWithCVImageBuffer:croppedAndScaled];
// End example
CVPixelBufferRelease(croppedAndScaled);
}
Method 1: Data manipulation and Accelerate
The basic premise of this function is that it first crops to the specified rectangle then scales to the final desired size. The cropping is achieved by simply ignoring the data outside the rectangle. Scaling is achieved through Accelerate's vImageScale_ARGB8888 function. Again, thanks to CoreMLHelpers for the insight.
void assertCropAndScaleValid(CVPixelBufferRef pixelBuffer, CGRect cropRect, CGSize scaleSize) {
CGFloat originalWidth = (CGFloat)CVPixelBufferGetWidth(pixelBuffer);
CGFloat originalHeight = (CGFloat)CVPixelBufferGetHeight(pixelBuffer);
assert(CGRectContainsRect(CGRectMake(0, 0, originalWidth, originalHeight), cropRect));
assert(scaleSize.width > 0 && scaleSize.height > 0);
}
void pixelBufferReleaseCallBack(void *releaseRefCon, const void *baseAddress) {
if (baseAddress != NULL) {
free((void *)baseAddress);
}
}
// Returns a CVPixelBufferRef with +1 retain count
CVPixelBufferRef createCroppedPixelBuffer(CVPixelBufferRef sourcePixelBuffer, CGRect croppingRect, CGSize scaledSize) {
OSType inputPixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
assert(inputPixelFormat == kCVPixelFormatType_32BGRA
|| inputPixelFormat == kCVPixelFormatType_32ABGR
|| inputPixelFormat == kCVPixelFormatType_32ARGB
|| inputPixelFormat == kCVPixelFormatType_32RGBA);
assertCropAndScaleValid(sourcePixelBuffer, croppingRect, scaledSize);
if (CVPixelBufferLockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
NSLog(#"Could not lock base address");
return nil;
}
void *sourceData = CVPixelBufferGetBaseAddress(sourcePixelBuffer);
if (sourceData == NULL) {
NSLog(#"Error: could not get pixel buffer base address");
CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
return nil;
}
size_t sourceBytesPerRow = CVPixelBufferGetBytesPerRow(sourcePixelBuffer);
size_t offset = CGRectGetMinY(croppingRect) * sourceBytesPerRow + CGRectGetMinX(croppingRect) * 4;
vImage_Buffer croppedvImageBuffer = {
.data = ((char *)sourceData) + offset,
.height = (vImagePixelCount)CGRectGetHeight(croppingRect),
.width = (vImagePixelCount)CGRectGetWidth(croppingRect),
.rowBytes = sourceBytesPerRow
};
size_t scaledBytesPerRow = scaledSize.width * 4;
void *scaledData = malloc(scaledSize.height * scaledBytesPerRow);
if (scaledData == NULL) {
NSLog(#"Error: out of memory");
CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
return nil;
}
vImage_Buffer scaledvImageBuffer = {
.data = scaledData,
.height = (vImagePixelCount)scaledSize.height,
.width = (vImagePixelCount)scaledSize.width,
.rowBytes = scaledBytesPerRow
};
/* The ARGB8888, ARGB16U, ARGB16S and ARGBFFFF functions work equally well on
* other channel orderings of 4-channel images, such as RGBA or BGRA.*/
vImage_Error error = vImageScale_ARGB8888(&croppedvImageBuffer, &scaledvImageBuffer, nil, 0);
CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
if (error != kvImageNoError) {
NSLog(#"Error: %ld", error);
free(scaledData);
return nil;
}
OSType pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
CVPixelBufferRef outputPixelBuffer = NULL;
CVReturn status = CVPixelBufferCreateWithBytes(nil, scaledSize.width, scaledSize.height, pixelFormat, scaledData, scaledBytesPerRow, pixelBufferReleaseCallBack, nil, nil, &outputPixelBuffer);
if (status != kCVReturnSuccess) {
NSLog(#"Error: could not create new pixel buffer");
free(scaledData);
return nil;
}
return outputPixelBuffer;
}
Method 2: CoreImage
This method is much simpler to read, and has the benefit of being pretty agnostic to the pixel buffer format you pass in, which is a plus for certain use cases. Granted, you're limited to which formats CoreImage supports.
CVPixelBufferRef createCroppedPixelBufferCoreImage(CVPixelBufferRef pixelBuffer,
CGRect cropRect,
CGSize scaleSize,
CIContext *context) {
assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize);
CIImage *image = [CIImage imageWithCVImageBuffer:pixelBuffer];
image = [image imageByCroppingToRect:cropRect];
CGFloat scaleX = scaleSize.width / CGRectGetWidth(image.extent);
CGFloat scaleY = scaleSize.height / CGRectGetHeight(image.extent);
image = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];
// Due to the way [CIContext render:toCVPixelBuffer:] works, we need to translate the image so the cropped section is at the origin
image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-image.extent.origin.x, -image.extent.origin.y)];
CVPixelBufferRef output = NULL;
CVPixelBufferCreate(nil,
CGRectGetWidth(image.extent),
CGRectGetHeight(image.extent),
CVPixelBufferGetPixelFormatType(pixelBuffer),
nil,
&output);
if (output != NULL) {
[context render:image toCVPixelBuffer:output];
}
return output;
}
Creating the CIContext can be done at the call site or it can be created and stored on a property. For information about options, see the documentation.
// Create a CIContext using default settings, this will
// typically use the GPU and Metal by default if supported
if (self.context == nil) {
self.context = [CIContext context];
}
Swift version of @allenh's answer:
func assertCropAndScaleValid(_ pixelBuffer: CVPixelBuffer, _ cropRect: CGRect, _ scaleSize: CGSize) {
let originalWidth: CGFloat = CGFloat(CVPixelBufferGetWidth(pixelBuffer))
let originalHeight: CGFloat = CGFloat(CVPixelBufferGetHeight(pixelBuffer))
assert(CGRect(x: 0, y: 0, width: originalWidth, height: originalHeight).contains(cropRect))
assert(scaleSize.width > 0 && scaleSize.height > 0)
}
func createCroppedPixelBufferCoreImage(pixelBuffer: CVPixelBuffer,
cropRect: CGRect,
scaleSize: CGSize,
context: inout CIContext
) -> CVPixelBuffer {
assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize)
var image = CIImage(cvImageBuffer: pixelBuffer)
image = image.cropped(to: cropRect)
let scaleX = scaleSize.width / image.extent.width
let scaleY = scaleSize.height / image.extent.height
image = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
image = image.transformed(by: CGAffineTransform(translationX: -image.extent.origin.x, y: -image.extent.origin.y))
var output: CVPixelBuffer? = nil
CVPixelBufferCreate(nil, Int(image.extent.width), Int(image.extent.height), CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &output)
if output != nil {
context.render(image, to: output!)
} else {
fatalError("Error")
}
return output!
}
Step 1
Convert the CVPixelBuffer to a UIImage: start with [CIImage imageWithCVPixelBuffer:], convert that CIImage to a CGImage, then convert that CGImage to a UIImage using the standard methods.
CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimage = [context
createCGImage:ciimage
fromRect:CGRectMake(0, 0,
CVPixelBufferGetWidth(pixelBuffer),
CVPixelBufferGetHeight(pixelBuffer))];
UIImage *uiimage = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);
Step 2
Scale the image to desired size/cropping by placing it in a UIImageView
UIImageView *imageView = [[UIImageView alloc] initWithFrame:/*CGRect with new dimensions*/];
imageView.contentMode = /*UIViewContentMode with desired scaling/clipping style*/;
imageView.image = uiimage;
Step 3
Snapshot the CALayer of said imageView with something like this:
#define snapshotOfView(__view) (\
(^UIImage *(void) {\
CGRect __rect = [__view bounds];\
UIGraphicsBeginImageContextWithOptions(__rect.size, /*(BOOL)Opaque*/, /*(float)scaleResolution*/);\
CGContextRef __context = UIGraphicsGetCurrentContext();\
[__view.layer renderInContext:__context];\
UIImage *__image = UIGraphicsGetImageFromCurrentImageContext();\
UIGraphicsEndImageContext();\
return __image;\
})()\
)
In use:
uiimage = snapshotOfView(imageView);
Step 4
Convert said UIImage-snapshot image (cropped/scaled) back into a CVPixelBuffer using a method like this: https://stackoverflow.com/a/34990820/2057171
That is,
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = @{
(NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status!=kCVReturnSuccess) {
NSLog(#"Operation failed");
}
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGAffineTransform flipVertical = CGAffineTransformMake( 1, 0, 0, -1, 0, CGImageGetHeight(image) );
CGContextConcatCTM(context, flipVertical);
CGAffineTransform flipHorizontal = CGAffineTransformMake( -1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0 );
CGContextConcatCTM(context, flipHorizontal);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
In use:
pixelBuffer = [self pixelBufferFromCGImage:uiimage];
You can consider using CIImage:
CIImage *image = [CIImage imageWithCVPixelBuffer:pxbuffer];
CIImage *scaledImage = [image imageByApplyingTransform:(CGAffineTransformMakeScale(0.1, 0.1))];
CVPixelBufferRef scaledBuf = [scaledImage pixelBuffer];
Adjust the scale factors to fit your destination size.
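A minimal sketch of deriving those scale factors from the source buffer instead of the hard-coded 0.1 (destWidth/destHeight are hypothetical target values, e.g. 299):
CGFloat destWidth = 299.0, destHeight = 299.0; // hypothetical target size
CGFloat sx = destWidth / (CGFloat)CVPixelBufferGetWidth(pxbuffer);
CGFloat sy = destHeight / (CGFloat)CVPixelBufferGetHeight(pxbuffer);
CIImage *scaledImage = [image imageByApplyingTransform:CGAffineTransformMakeScale(sx, sy)];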
In my application, I display high-resolution images and there are some memory leaks, so I would like to calculate the memory consumption at each statement to find the leaks.
Is there any method to calculate the memory (in MB or KB)?
I need something like this:
// this is my method
+ (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage *) image {
//Run a method to calculate the memory (MB or KB) --- Before
CGImageRef imageRef = image.CGImage;
// Create a bitmap context to draw the uiimage into
CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
if(!context) {
return NULL;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = CGRectMake(0, 0, width, height);
// Draw image into the context to get the raw image data
CGContextDrawImage(context, rect, imageRef);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);
// Copy the data and release the memory (return memory allocated with new)
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
size_t bufferLength = bytesPerRow * height;
unsigned char *newBitmap = NULL;
if(bitmapData) {
newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
if(newBitmap) { // Copy the data
for(int i = 0; i < bufferLength; ++i) {
newBitmap[i] = bitmapData[i];
}
}
free(bitmapData);
} else {
NSLog(#"Error getting bitmap pixel data\n");
}
CGContextRelease(context);
//Run a method to calculate the memory(MB or KB) --- After
return newBitmap;
}
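For the "Run a method to calculate the memory" markers, one rough option (not from the original post) is to log the process's resident memory via mach task_info and diff it before and after the statement; a hedged sketch, where residentMemoryBytes is just an illustrative helper name:
#import <mach/mach.h>

static int64_t residentMemoryBytes(void) {
    mach_task_basic_info_data_t info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                                 (task_info_t)&info, &count);
    return (kr == KERN_SUCCESS) ? (int64_t)info.resident_size : -1;
}

// Usage around a statement:
int64_t before = residentMemoryBytes();
// ... statement under test ...
int64_t after = residentMemoryBytes();
NSLog(@"delta: %.2f KB", (after - before) / 1024.0);
Note this measures the whole process, so it is only a coarse indicator of what a single statement allocated.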
I am trying to resize a CMSampleBufferRef as quickly as possible on an iOS 8 device for use in image processing. From what I have found online, the way to do this seems to be by using the vImage API in the Accelerate framework. However, I haven't done much with the Accelerate framework and I can't quite figure out how to do this. Here is what I have so far to scale an image to 200x200:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef cvimgRef = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cvimgRef,0);
void *imageData = CVPixelBufferGetBaseAddress(cvimgRef);
NSInteger width = CVPixelBufferGetWidth(cvimgRef);
NSInteger height = CVPixelBufferGetHeight(cvimgRef);
unsigned char *newData = // NOT SURE WHAT THIS SHOULD BE...
vImage_Buffer inBuffer = { imageData, height, width, 4*width };
vImage_Buffer outBuffer = { newData, 200, 200, 4*200 };
// NOT SURE IF THIS IS THE CORRECT METHOD... video output settings for kCVPixelBufferPixelFormatTypeKey is set to kCVPixelFormatType_32BGRA
// This seems wrong since the scale function is ARGB, not BGRA.
vImageScale_ARGB8888(&inBuffer, &outBuffer, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(cvimgRef,0);
}
Where outBuffer is the result. After that, I am also not sure how to convert the outBuffer back to a CVImageBufferRef for further image processing. Any suggestions would be appreciated!
vImageScale just fills the destination buffer you give it with raw pixel data, and pay attention that buffers need to be freed.
I don't know if there is a faster way to use that output buffer directly, but I would convert the buffer into a CGImage. Something like the following, taken from here, so take it as a reference:
vImage_CGImageFormat format = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.colorSpace = NULL,
.bitmapInfo = (CGBitmapInfo)kCGImageAlphaFirst,
.version = 0,
.decode = NULL,
.renderingIntent = kCGRenderingIntentDefault,
};
vImage_Error ret = kvImageNoError;
CGImageRef destRef = vImageCreateCGImageFromBuffer(&dstBuffer, &format, NULL, NULL, kvImageNoFlags, &ret);
Later I will convert it into a CVPixelBuffer.
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = @{
(NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status!=kCVReturnSuccess) {
DLog(#"Operation failed");
}
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I'm pretty sure it is possible to avoid the conversion into a CGImage and use the buffer directly, but I have never tried it.
You have to use a resampling filter in conjunction with any vImage operations that alter image geometry: page 32, vImage Programming Guide.
- (CVPixelBufferRef)copyRenderedPixelBuffer:(CVPixelBufferRef)pixelBuffer {
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
// vImage processing
vImage_Error err;
vImage_Buffer buffer;
buffer.data = (unsigned char *)CVPixelBufferGetBaseAddress( pixelBuffer );
buffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
buffer.width = CVPixelBufferGetWidth( pixelBuffer );
buffer.height = CVPixelBufferGetHeight( pixelBuffer );
vImageCVImageFormatRef vformat = vImageCVImageFormat_CreateWithCVPixelBuffer( pixelBuffer );
vImage_CGImageFormat cgformat = {
.bitsPerComponent = 8,
.bitsPerPixel = 32,
.bitmapInfo = kCGBitmapByteOrderDefault,
.colorSpace = NULL, //sRGB
};
const CGFloat bgColor[3] = {0.0, 0.0, 0.0};
vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoAllocate);
vImage_Buffer outbuffer;
void *tempBuffer;
tempBuffer = malloc(CVPixelBufferGetBytesPerRow( pixelBuffer ) * CVPixelBufferGetHeight( pixelBuffer ));
outbuffer.data = tempBuffer;
outbuffer.rowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
outbuffer.width = CVPixelBufferGetWidth( pixelBuffer );
outbuffer.height = CVPixelBufferGetHeight( pixelBuffer );
// PROCESS vIMAGE HERE
err = vImageBuffer_CopyToCVPixelBuffer(&outbuffer, &cgformat, pixelBuffer, vformat, bgColor, kvImageNoFlags);
if (err != kvImageNoError)
NSLog(@"vImageBuffer_CopyToCVPixelBuffer error: %ld", err);
free(tempBuffer);
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
return (CVPixelBufferRef)CFRetain( pixelBuffer );
}
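Note that this skeleton keeps the output the same size as the source. For the actual resize asked about, a hedged sketch of what could go at the "// PROCESS vIMAGE HERE" step would give the destination buffer the target dimensions and scale into it (the 200x200 size here is just illustrative):
vImage_Buffer resized;
resized.width = 200;
resized.height = 200;
resized.rowBytes = 200 * 4;
resized.data = malloc(resized.rowBytes * resized.height);
err = vImageScale_ARGB8888(&buffer, &resized, NULL, kvImageNoFlags);
// resized.data would then be wrapped in a new 200x200 CVPixelBuffer
// (e.g. via CVPixelBufferCreateWithBytes) instead of being copied back
// into the original-sized pixelBuffer, and freed when that buffer is released.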
I'm working on a WebRTC app for iOS. My goal is to record video from WebRTC objects.
I have the RTCVideoRenderer delegate, which provides me this method:
-(void)renderFrame:(RTCI420Frame *)frame{
}
My question is: how can I convert the RTCI420Frame object into something useful to show the image or save it to disk?
RTCI420Frames use the YUV420 format. You can easily convert them to RGB using OpenCV, then convert them to a UIImage. Make sure you #import <RTCI420Frame.h>
-(void) processFrame:(RTCI420Frame *)frame {
cv::Mat mYUV((int)frame.height + (int)frame.chromaHeight,(int)frame.width, CV_8UC1, (void*) frame.yPlane);
cv::Mat mRGB((int)frame.height, (int)frame.width, CV_8UC3);
cvtColor(mYUV, mRGB, CV_YUV2RGB_I420);
UIImage *image = [self UIImageFromCVMat:mRGB];
}
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(cvMat.cols,
cvMat.rows,
8,
8 * cvMat.elemSize(),
cvMat.step[0],
colorSpace,
kCGImageAlphaNone|kCGBitmapByteOrderDefault,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
You may want to do this on a separate thread, especially if you are doing any video processing. Also, remember to use the .mm file extension so you can use C++.
If you don't want to use OpenCV, it is possible to do it manually. The following code kind of works, but the colors are messed up and it crashes after a few seconds.
int width = (int)frame.width;
int height = (int)frame.height;
uint8_t *data = (uint8_t *)malloc(width * height * 4);
const uint8_t* yPlane = frame.yPlane;
const uint8_t* uPlane = frame.uPlane;
const uint8_t* vPlane = frame.vPlane;
for (int i = 0; i < width * height; i++) {
int rgbOffset = i * 4;
uint8_t y = yPlane[i];
uint8_t u = uPlane[i/4];
uint8_t v = vPlane[i/4];
uint8_t r = y + 1.402 * (v - 128);
uint8_t g = y - 0.344 * (u - 128) - 0.714 * (v - 128);
uint8_t b = y + 1.772 * (u - 128);
data[rgbOffset] = r;
data[rgbOffset + 1] = g;
data[rgbOffset + 2] = b;
data[rgbOffset + 3] = UINT8_MAX;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
free(data);
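The messed-up colors come mainly from the chroma indexing (in I420 each U/V sample covers a 2x2 block, so uPlane[i/4] is not the right index) and from the missing clamping; the crash is likely related to the CG objects never being released (and to data being freed while the CGImage may still reference it). A hedged sketch of a corrected inner loop, assuming tightly packed planes (use the frame's strides if they differ from width):
static inline uint8_t clamp8(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int yIndex = row * width + col;
        int uvIndex = (row / 2) * (width / 2) + (col / 2);
        int y = yPlane[yIndex];
        int u = uPlane[uvIndex] - 128;
        int v = vPlane[uvIndex] - 128;
        int rgbOffset = yIndex * 4;
        data[rgbOffset]     = clamp8(y + 1.402 * v);
        data[rgbOffset + 1] = clamp8(y - 0.344 * u - 0.714 * v);
        data[rgbOffset + 2] = clamp8(y + 1.772 * u);
        data[rgbOffset + 3] = UINT8_MAX;
    }
}
// ...and release what you create on every frame, after the last use of the image:
CGImageRelease(cgImage);
CGContextRelease(gtx);
CGColorSpaceRelease(colorSpace);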
I am trying to convert a YUV image to a CIImage and ultimately a UIImage. I am fairly new to this and trying to figure out an easy way to do it. From what I have learnt, from iOS 6 YUV can be used directly to create a CIImage, but as I am trying to create it, the CIImage only holds a nil value. My code is like this ->
NSLog(#"Started DrawVideoFrame\n");
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithBytes(
kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
lpData, bytesPerRow, 0, 0, 0, &pixelBuffer
);
if(ret != kCVReturnSuccess)
{
NSLog(#"CVPixelBufferRelease Failed");
CVPixelBufferRelease(pixelBuffer);
}
NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
@(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(#"CURRENT CIImage -> %p\n", cimage);
UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(#"CURRENT UIImage -> %p\n", image);
Here lpData is the YUV data, an array of unsigned char.
This also looks interesting: vImageMatrixMultiply, but I can't find any example of it. Can anyone help me with this?
I have also faced this problem. I was trying to display YUV (NV12) formatted data on the screen. This solution works in my project...
//YUV(NV12)-->CIImage--->UIImage Conversion
NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
640,
480,
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
if (result != kCVReturnSuccess) {
NSLog(@"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// Here y_ch0 is the Y-Plane of the YUV(NV12) data.
memcpy(yDestPlane, y_ch0, 640 * 480);
unsigned char *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// Here y_ch1 is the interleaved UV-Plane of the YUV(NV12) data.
memcpy(uvDestPlane, y_ch1, 640 * 480 / 2);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// CIImage Conversion
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *MytemporaryContext = [CIContext contextWithOptions:nil];
CGImageRef MyvideoImage = [MytemporaryContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, 640, 480)];
// UIImage Conversion
UIImage *Mynnnimage = [[UIImage alloc] initWithCGImage:MyvideoImage
scale:1.0
orientation:UIImageOrientationRight];
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(MyvideoImage);
Here I am showing the data structure of YUV (NV12) data and how we can get the Y-Plane (y_ch0) and UV-Plane (y_ch1) that are used to create the CVPixelBufferRef. Let's look at the YUV (NV12) data structure.
From that layout we get the following information about YUV (NV12):
Total Frame Size = Width * Height * 3/2,
Y-Plane Size = Frame Size * 2/3,
UV-Plane size = Frame Size * 1/3,
Data stored in Y-Plane -->{Y1, Y2, Y3, Y4, Y5.....}.
UV-Plane --> {U1, V1, U2, V2, U3, V3, ......}.
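For example, for a 640x480 frame: total frame size = 640 * 480 * 3/2 = 460,800 bytes, Y-Plane = 640 * 480 = 307,200 bytes, UV-Plane = 640 * 480 / 2 = 153,600 bytes, which matches the memcpy sizes used above.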
I hope it will be helpful to all. :) Have fun with iOS development :D
If you have a video frame object that looks like this:
int width,
int height,
unsigned long long time_stamp,
unsigned char *yData,
unsigned char *uData,
unsigned char *vData,
int yStride,
int uStride,
int vStride
You can use the following to fill up a pixelBuffer:
NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
width,
height,
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, // NV12
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
if (result != kCVReturnSuccess) {
NSLog(#"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
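// Note: the copies below assume the destination planes are tightly packed;
// strictly, each destination row should advance by CVPixelBufferGetBytesPerRowOfPlane().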
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i ++) {
for (int j = 0; j < width; j ++) {
yDestPlane[k++] = yData[j + i * yStride];
}
}
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int i = 0, k = 0; i < height / 2; i ++) {
for (int j = 0; j < width / 2; j ++) {
uvDestPlane[k++] = uData[j + i * uStride];
uvDestPlane[k++] = vData[j + i * vStride];
}
}
Now you can convert it to CIImage:
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *tempContext = [CIContext contextWithOptions:nil];
CGImageRef coreImageRef = [tempContext createCGImage:coreImage
fromRect:CGRectMake(0, 0, width, height)];
And a UIImage if you need one (the image orientation can vary depending on your input):
UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
scale:1.0
orientation:UIImageOrientationUp];
Don't forget to release the variables:
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(coreImageRef);