Render to CVPixelBuffer produces black image - iOS

I am grabbing CIImages from CVPixelBufferRefs and then rendering those CIImages back to new CVPixelBufferRefs. The result is a black movie. I have tried several variations of creating the new CVPixelBufferRef, but the result is always the same.
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CVPixelBufferRef pbuff = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      640,
                                      480,
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)(options),
                                      &pbuff);
if (status == kCVReturnSuccess) {
    [temporaryContext render:ciImage
             toCVPixelBuffer:pbuff
                      bounds:ciImage.extent
                  colorSpace:CGColorSpaceCreateDeviceRGB()];
} else {
    NSLog(@"Failed to create pbuff");
}
What am I doing wrong?

It turns out that the CIImage becomes nil right after it is created when running in the Simulator. If I run the same code on a device, it works.

You need to use glReadPixels to manually read the pixels into the buffer. You can find more about this here.
A link to an implementation is here.
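For reference, a rough sketch of that idea (my own illustration, not the linked implementation), assuming the CIImage has been drawn with a CIContext created from an EAGLContext so there is a GL framebuffer to read from, and that the destination is the 640x480 BGRA buffer created above:

#import <OpenGLES/ES2/gl.h>

// Assumes the CIImage has just been rendered into the currently bound GL framebuffer.
CVPixelBufferLockBaseAddress(pbuff, 0);
void *base = CVPixelBufferGetBaseAddress(pbuff);

// Assumes CVPixelBufferGetBytesPerRow(pbuff) == 640 * 4 (true here, since 2560 is 64-byte aligned).
// glReadPixels returns RGBA, while pbuff was created as 32BGRA, so the red and blue
// channels may still need swapping afterwards.
glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, base);

CVPixelBufferUnlockBaseAddress(pbuff, 0);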

Related

objective-c: cv::Mat to CVPixelBuffer

The code below converts a cv::Mat to a CVPixelBufferRef:
CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
    cv::cvtColor(matimg, matimg, CV_BGR2BGRA);

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferMetalCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:matimg.cols], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:matimg.rows], kCVPixelBufferHeightKey,
                             [NSNumber numberWithInt:matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
                             nil];

    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (CFDictionaryRef)CFBridgingRetain(options), &imageBuffer);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return imageBuffer;
}
The problem is that I am getting only half of the image.
Original image
After conversion (I convert the CVPixelBufferRef back to a UIImage and store it using UIImageWriteToSavedPhotosAlbum, just for checking)
Interestingly, the image sizes of the Mat and the CVPixelBufferRef are the same.
What I then tried was resizing the image just before the memcpy, doubling its height:
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *base = CVPixelBufferGetBaseAddress(imageBuffer);
cv::resize(matimg, matimg, cv::Size(), 1, 2);
memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
Now the image size is still the same...
I badly want to know what is causing this behavior; I am sure I am missing something.
I found a solution to this problem after reading this.
The system likes rows to be a multiple of 64 bytes, presumably for better performance due to cache-line alignment. Since the image resolution is 1000 × 1000, the width in bytes is not a multiple of 64, so CVPixelBufferCreate pads the bytes per row (it defaulted to 27840 in my case, I don't know why that exact value), and a straight memcpy of the tightly packed Mat data no longer lines up with the buffer's rows. This was causing the problem.
Anyway, if anyone is looking for the solution (a row-by-row copy that avoids resizing is sketched after the code below):
CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
    cv::cvtColor(matimg, matimg, CV_BGR2BGRA);

    int widthReminder = matimg.cols % 64, heightReminder = matimg.rows % 64;
    if (widthReminder != 0 || heightReminder != 0) {
        cv::resize(matimg, matimg, cv::Size(matimg.cols + (64 - widthReminder), matimg.rows + (64 - heightReminder)));
    }

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferMetalCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:matimg.cols], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:matimg.rows], kCVPixelBufferHeightKey,
                             [NSNumber numberWithInt:matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
                             nil];

    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (CFDictionaryRef)CFBridgingRetain(options), &imageBuffer);
    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // UIImageWriteToSavedPhotosAlbum(converts(imageBuffer), nil, nil, nil);
    return imageBuffer;
}
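Alternatively, a minimal sketch of the row-by-row copy mentioned above (my addition, not from the original answer): instead of resizing the image, keep its original dimensions and copy one row at a time, using CVPixelBufferGetBytesPerRow as the destination stride so the padded rows line up:

// Sketch: copy a tightly packed BGRA cv::Mat into a possibly row-padded CVPixelBuffer.
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); // may be larger than cols * 4
size_t srcBytesPerRow = matimg.step[0];                           // tightly packed source stride
size_t bytesToCopyPerRow = MIN(srcBytesPerRow, dstBytesPerRow);
for (int row = 0; row < matimg.rows; row++) {
    memcpy(dst + row * dstBytesPerRow, matimg.data + row * srcBytesPerRow, bytesToCopyPerRow);
}
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);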

Converting MTLTexture to CVPixelBuffer

I am currently working on live filters using Metal.
After defining my CIImage, I render the image to an MTLTexture.
Below is my rendering code. context is a CIContext backed by Metal, and
targetTexture refers to the texture attached to the currentDrawable of my MTKView instance:
context?.render(drawImage, to: targetTexture, commandBuffer: commandBuffer, bounds: targetRect, colorSpace: colorSpace)
It renders correctly, as I can see the image being displayed on the Metal view.
The problem is that after rendering the image (and displaying it), I want to extract a CVPixelBuffer and save it to disk using AVAssetWriter.
Another option would be to have two rendering steps: one rendering to the texture and another rendering to a CVPixelBuffer. (But it isn't clear how to create such a buffer, or what impact two rendering steps would have on the frame rate; a sketch of that step follows below.)
Any help would be appreciated, thanks!
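For the two-step alternative mentioned in the question, a minimal sketch of creating such a buffer and rendering into it might look like the following (written in Objective-C for consistency with the other snippets here; it reuses the question's context, drawImage, targetRect, and colorSpace names, everything else is an assumption, not a verified implementation):

// Sketch: create a Metal/IOSurface-backed CVPixelBuffer sized to targetRect,
// then render the same CIImage into it as a second pass.
NSDictionary *attrs = @{ (id)kCVPixelBufferMetalCompatibilityKey : @YES,
                         (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      (size_t)targetRect.size.width,
                                      (size_t)targetRect.size.height,
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)attrs,
                                      &pixelBuffer);
if (status == kCVReturnSuccess) {
    // Second render pass: same CIImage, same Metal-backed CIContext, but into the pixel buffer.
    [context render:drawImage
    toCVPixelBuffer:pixelBuffer
             bounds:targetRect
         colorSpace:colorSpace];
    // pixelBuffer can now be handed to an AVAssetWriterInputPixelBufferAdaptor.
}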
You can try to copy the raw data from the MTLTexture like this:
var outPixelbuffer: CVPixelBuffer?
if let datas = targetTexture.texture.buffer?.contents() {
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, targetTexture.width,
                                 targetTexture.height, kCVPixelFormatType_64RGBAHalf, datas,
                                 targetTexture.texture.bufferBytesPerRow, nil, nil, nil, &outPixelbuffer)
}
+ (void)getPixelBufferFromBGRAMTLTexture:(id<MTLTexture>)texture result:(void (^)(CVPixelBufferRef pixelBuffer))block {
    CVPixelBufferRef pxbuffer = NULL;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    size_t imageByteCount = texture.width * texture.height * 4;
    void *imageBytes = malloc(imageByteCount);
    NSUInteger bytesPerRow = texture.width * 4;
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture getBytes:imageBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];

    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, texture.width, texture.height, kCVPixelFormatType_32BGRA, imageBytes, bytesPerRow, NULL, NULL, (__bridge CFDictionaryRef)options, &pxbuffer);
    if (block) {
        block(pxbuffer);
    }
    CVPixelBufferRelease(pxbuffer);
    free(imageBytes);
}
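One caveat with the method above (my observation, not part of the original answer): CVPixelBufferCreateWithBytes does not copy imageBytes, so the buffer is only valid inside the block and must not be retained past the free(). If the buffer needs to outlive the call, a release callback can hand ownership of the bytes to the pixel buffer, roughly like this:

// Sketch: let the CVPixelBuffer own the malloc'd bytes via a release callback.
static void releaseMallocedBytes(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);
}

// ... then create the buffer with the callback instead of freeing the bytes yourself:
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, texture.width, texture.height,
                             kCVPixelFormatType_32BGRA, imageBytes, bytesPerRow,
                             releaseMallocedBytes, NULL,
                             (__bridge CFDictionaryRef)options, &pxbuffer);
// Do NOT call free(imageBytes) afterwards; the bytes are freed when pxbuffer is released.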

Create CIImage from CVImageBufferRef

I am trying to create a CIImage object from a CVPixelBuffer. Everything went well up to step 3.
But in step 4,
[[CIImage alloc] initWithCVPixelBuffer:imageBuffer options:(__bridge_transfer NSDictionary *)attachments]
returns nil.
Please suggest a solution to my problem. Thanks in advance.
CMSampleBufferRef sampleBuffer = [readerVideoTrackOutput copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                            sampleBuffer,
                                                            kCMAttachmentMode_ShouldPropagate);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:imageBuffer
                                                options:(__bridge_transfer NSDictionary *)attachments];

UIImage color correction

I am using AVFoundation to play a video by creating a CGImage from the AVFoundation callback, creating a UIImage from the CGImage and then displaying the UIImage in a UIImageView.
I want to apply some color correction to the images before I display them on the screen. What is the best way to colorize the images I'm getting?
I tried using the CIFilters, but that requires me to first create a CIImage from AVFoundation, then colorize it, then create a CGImage, and then create a UIImage, and I'd rather avoid the extra step of creating the CIImage if possible.
Additionally, it doesn't seem like the performance of the CIFilters is fast enough - at least not when also having to create the additional CIImage. Any suggestions for a faster way to go about doing this?
Thanks in advance.
It seems that using a CIContext backed by an EAGLContext instead of a standard CIContext is the answer. That gives performance fast enough at creating the colorized images for my needs.
Simple code example here:
On Init:
NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
[options setObject: [NSNull null] forKey: kCIContextWorkingColorSpace];
m_EAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
m_CIContext = [CIContext contextWithEAGLContext:m_EAGLContext options:options];
Set the color:
- (void)setColorCorrection:(UIColor *)color
{
    CGFloat r, g, b, a;
    [color getRed:&r green:&g blue:&b alpha:&a];
    CIVector *redVector = [CIVector vectorWithX:r Y:0 Z:0];
    CIVector *greenVector = [CIVector vectorWithX:0 Y:g Z:0];
    CIVector *blueVector = [CIVector vectorWithX:0 Y:0 Z:b];
    m_ColorFilter = [CIFilter filterWithName:@"CIColorMatrix"];
    [m_ColorFilter setDefaults];
    [m_ColorFilter setValue:redVector forKey:@"inputRVector"];
    [m_ColorFilter setValue:greenVector forKey:@"inputGVector"];
    [m_ColorFilter setValue:blueVector forKey:@"inputBVector"];
}
On each video frame:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGImageRef cgImage = nil;
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
[m_ColorFilter setValue:ciImage forKey:kCIInputImageKey];
CIImage *adjustedImage = [m_ColorFilter valueForKey:kCIOutputImageKey];
cgImage = [m_CIContext createCGImage:adjustedImage fromRect:ciImage.extent];
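The snippet ends with the CGImage. Based on the question's description of the pipeline (a UIImage displayed in a UIImageView), the remaining per-frame steps presumably look roughly like this (my sketch, not part of the original answer; imageView is a hypothetical outlet):

// Wrap the filtered CGImage and hand it to the image view on the main thread.
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
dispatch_async(dispatch_get_main_queue(), ^{
    self.imageView.image = uiImage;   // hypothetical UIImageView property
});

// Balance the earlier lock and release the CGImage created by the CIContext.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CGImageRelease(cgImage);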

Interpret XMP-Metadata in ALAssetRepresentation

When a user makes some changes (cropping, red-eye removal, ...) to photos in the built-in Photos.app on iOS, the changes are not applied to the fullResolutionImage returned by the corresponding ALAssetRepresentation.
However, the changes are applied to the thumbnail and the fullScreenImage returned by the ALAssetRepresentation.
Furthermore, information about the applied changes can be found in the ALAssetRepresentation's metadata dictionary via the key @"AdjustmentXMP".
I would like to apply these changes to the fullResolutionImage myself to preserve consistency. I've found out that on iOS 6+, CIFilter's filterArrayFromSerializedXMP:inputImageExtent:error: can convert this XMP metadata to an array of CIFilters:
ALAssetRepresentation *rep;
NSString *xmpString = rep.metadata[@"AdjustmentXMP"];
NSData *xmpData = [xmpString dataUsingEncoding:NSUTF8StringEncoding];

CIImage *image = [CIImage imageWithCGImage:rep.fullResolutionImage];

NSError *error = nil;
NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                             inputImageExtent:image.extent
                                                        error:&error];
if (error) {
    NSLog(@"Error during CIFilter creation: %@", [error localizedDescription]);
}

CIContext *context = [CIContext contextWithOptions:nil];

for (CIFilter *filter in filterArray) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}
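(The context created above is never used in this snippet; presumably the filtered result is then rendered back out along these lines. This is my assumption, not part of the original question:)

// Render the filtered CIImage to a CGImage using the CIContext created above.
CGImageRef adjustedCGImage = [context createCGImage:image fromRect:image.extent];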
However, this works only for some filters (cropping, auto-enhance) but not for others like red-eye removal. In these cases, the CIFilters have no visible effect. Therefore, my questions:
Is anyone aware of a way to create a red-eye removal CIFilter (in a way consistent with the Photos app)? The filter with the key kCIImageAutoAdjustRedEye is not enough; e.g., it does not take parameters for the positions of the eyes.
Is there a possibility to generate and apply these filters under iOS 5?
ALAssetRepresentation *representation = [[self assetAtIndex:index] defaultRepresentation];

// Create a buffer to hold the data for the asset's image
uint8_t *buffer = (Byte *)malloc(representation.size);

// Copy the data from the asset into the buffer
NSUInteger length = [representation getBytes:buffer fromOffset:0 length:representation.size error:nil];
if (length == 0)
    return nil;

// Convert the buffer into an NSData object, and free the buffer after.
NSData *adata = [[NSData alloc] initWithBytesNoCopy:buffer length:representation.size freeWhenDone:YES];
// Set up a dictionary with a UTI hint. The UTI hint identifies the type
// of image we are dealing with (that is, a JPEG, PNG, or possibly a
// RAW file).
// Specify the source hint.
NSDictionary *sourceOptionsDict = [NSDictionary dictionaryWithObjectsAndKeys:
                                   (id)[representation UTI], kCGImageSourceTypeIdentifierHint, nil];

// Create a CGImageSource with the NSData. An image source can
// contain any number of thumbnails and full images.
CGImageSourceRef sourceRef = CGImageSourceCreateWithData((CFDataRef)adata, (CFDictionaryRef)sourceOptionsDict);
[adata release];
CFDictionaryRef imagePropertiesDictionary;
// Get a copy of the image properties from the CGImageSourceRef.
imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(sourceRef,0, NULL);
CFNumberRef imageWidth = (CFNumberRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyPixelWidth);
CFNumberRef imageHeight = (CFNumberRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyPixelHeight);
int w = 0;
int h = 0;
CFNumberGetValue(imageWidth, kCFNumberIntType, &w);
CFNumberGetValue(imageHeight, kCFNumberIntType, &h);
// Clean up memory
CFRelease(imagePropertiesDictionary);
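(The snippet stops after reading the dimensions. Presumably the full-resolution image itself would then be created from the same image source, roughly as sketched below; this is my addition, not part of the original code:)

// Create the full-resolution CGImage from the image source and clean up.
CGImageRef fullImage = CGImageSourceCreateImageAtIndex(sourceRef, 0, NULL);
CFRelease(sourceRef);
// ... use fullImage (for example as input to the XMP filter chain above), then:
CGImageRelease(fullImage);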
