I am trying to create a CIImage object from a CVPixelBuffer. Everything works up to step 3, but step 4:
[[CIImage alloc] initWithCVPixelBuffer:imageBuffer options:(__bridge_transfer NSDictionary *)attachments]
returns nil.
Please suggest a solution to this problem. Thanks in advance.
CMSampleBufferRef sampleBuffer = [readerVideoTrackOutput copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                            sampleBuffer,
                                                            kCMAttachmentMode_ShouldPropagate);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:imageBuffer
                                                options:(__bridge_transfer NSDictionary *)attachments];
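For reference, a diagnostic sketch (my addition, not from the thread): check each intermediate object for NULL and try options:nil first, so the attachments dictionary can be ruled out as the cause.
CMSampleBufferRef sampleBuffer = [readerVideoTrackOutput copyNextSampleBuffer];
if (sampleBuffer == NULL) {
    NSLog(@"copyNextSampleBuffer returned NULL (reader finished or failed?)");
    return;
}

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (imageBuffer == NULL) {
    // Sample buffers that carry compressed data have no image buffer.
    NSLog(@"CMSampleBufferGetImageBuffer returned NULL");
    CFRelease(sampleBuffer);
    return;
}

// Try without the attachments dictionary first, to rule it out as the cause.
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:imageBuffer options:nil];
NSLog(@"image = %@, pixel format = %u",
      image, (unsigned)CVPixelBufferGetPixelFormatType(imageBuffer));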
I'm trying to resize a JPEG image in Objective-C (iOS). The input is a JPG file and the output should be a UIImage.
I have this code:
// Load image from a file
NSData *imageData = [NSData dataWithContentsOfFile:jpgFile];
UIImage *inputImage = [UIImage imageWithData:imageData];
CIImage *ciImage = inputImage.CIImage;

// Set up the Lanczos scale filter
CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scaleFilter setValue:ciImage forKey:@"inputImage"];
[scaleFilter setValue:[NSNumber numberWithFloat:0.5] forKey:@"inputScale"];
[scaleFilter setValue:[NSNumber numberWithFloat:1.0] forKey:@"inputAspectRatio"];

// Get the output
CIImage *finalImage = [scaleFilter valueForKey:@"outputImage"];
UIImage *outputImage = [[UIImage alloc] initWithCIImage:finalImage];
But the output image is invalid (outputImage.size.height is 0), and it causes the following errors during later processing:
CGContextDrawImage: invalid context 0x0. If you want to see the
backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental
variable. ImageIO: JPEG Empty JPEG image (DNL not supported)
Update:
I don't know what is wrong with the code above (apart from the initialization issue mentioned by Sulthan below; thanks to him for that). In the end I used the following code, which works fine:
CIImage *input_ciimage = [[CIImage alloc] initWithImage:inputImage];
CIImage *output_ciimage =
    [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
        kCIInputImageKey, input_ciimage,
        kCIInputScaleKey, [NSNumber numberWithFloat:0.5],
        nil] outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef output_cgimage = [context createCGImage:output_ciimage fromRect:[output_ciimage extent]];
UIImage *output_uiimage = [UIImage imageWithCGImage:output_cgimage];
CGImageRelease(output_cgimage);
This line is the problem:
CIImage *ciImage = inputImage.CIImage
If the image was not created from a CIImage, then its own CIImage property is nil.
A safer approach is:
CIImage *ciImage = [[CIImage alloc] initWithImage:inputImage];
Also, make sure the image has been loaded successfully from your data.
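For example, a minimal guard (my sketch, using the jpgFile path from the question):
NSData *imageData = [NSData dataWithContentsOfFile:jpgFile];
UIImage *inputImage = [UIImage imageWithData:imageData];
if (!inputImage) {
    // The file is missing or does not contain valid image data.
    NSLog(@"Could not load an image from %@", jpgFile);
    return;
}
CIImage *ciImage = [[CIImage alloc] initWithImage:inputImage];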
I'm trying to convert a CVPixelBufferRef into a UIImage using the following snippet:
UIImage *image = nil;

CMSampleBufferRef sampleBuffer = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(_queue);
if (sampleBuffer)
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    NSUInteger width = CVPixelBufferGetWidth(pixelBuffer);
    NSUInteger height = CVPixelBufferGetHeight(pixelBuffer);

    CIImage *coreImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:nil];
    CGImageRef imageRef = [_context createCGImage:coreImage fromRect:CGRectMake(0, 0, width, height)];
    image = [UIImage imageWithCGImage:imageRef];

    CFRelease(sampleBuffer);
    CFRelease(imageRef);
}
My problem is that it works fine when I run the code on a device, but it fails to render when run in the simulator; the console outputs the following:
Render failed because a pixel format YCC420f is not supported
Any ideas?
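No accepted fix is shown here, but one commonly tried workaround (my suggestion, not from this thread) is to request 32BGRA pixel buffers from wherever the sample buffers originate, so the simulator's software renderer never sees the biplanar YCC420f format. A sketch, assuming the frames come from an AVCaptureVideoDataOutput called videoOutput:
// Ask for BGRA frames instead of the default 420f biplanar YCbCr format.
NSDictionary *settings = [NSDictionary dictionaryWithObject:
                              [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                      forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoOutput setVideoSettings:settings];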
I am grabbing CIImages from CVPixelBufferRefs and then rendering those CIImages back to CVPixelBufferRefs. The result is a black movie. I have tried several variations of creating the new CVPixelBufferRef, but the result is always the same.
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CVPixelBufferRef pbuff = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      640,
                                      480,
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)(options),
                                      &pbuff);
if (status == kCVReturnSuccess) {
    [temporaryContext render:ciImage
             toCVPixelBuffer:pbuff
                      bounds:ciImage.extent
                  colorSpace:CGColorSpaceCreateDeviceRGB()];
} else {
    NSLog(@"Failed create pbuff");
}
What am I doing wrong?
It turns out that the CIImage becomes nil right after I create it in the simulator. I did find that the same code works if I run it on a device.
You need to use glReadPixels to manually read the pixels into the buffer. You can find more about this here.
A link to an implementation is here.
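The linked implementation is not reproduced here; a rough sketch of the readback step (my assumption of its shape, not the linked code itself): after rendering into the currently bound OpenGL ES framebuffer, copy the pixels straight into the destination buffer's memory.
// Assumes a 32-bit destination pixel buffer (pbuff) whose bytes-per-row equals
// width * 4; otherwise read into a temporary buffer and copy row by row.
CVPixelBufferLockBaseAddress(pbuff, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(pbuff);

// GL_RGBA / GL_UNSIGNED_BYTE is the combination glReadPixels always supports,
// so a channel swap may be needed if the destination buffer is BGRA.
glReadPixels(0, 0,
             (GLsizei)CVPixelBufferGetWidth(pbuff),
             (GLsizei)CVPixelBufferGetHeight(pbuff),
             GL_RGBA, GL_UNSIGNED_BYTE,
             baseAddress);

CVPixelBufferUnlockBaseAddress(pbuff, 0);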
I have an issue with the new iOS 7 photo filters feature.
I have a photo library in my app. While showing photo thumbnails in a UICollectionView, I receive images with the filters and crops already applied. There are two methods that return "ready to use" images:
[asset thumbnail]
[[asset defaultRepresentation] fullScreenImage]
In contrast, when I want to share the full-size image, I receive the unchanged photo without any filters applied, whether I call:
[[asset defaultRepresentation] fullResolutionImage]
or read the image data through getBytes:fromOffset:length:error:.
Is it possible to get the full-size image with the appropriate filters applied?
So far I have figured out only one way to get what I want. All assets store their modification info (filters, crops, etc.) in the metadata dictionary under the key @"AdjustmentXMP". We can interpret this data and apply all the filters to the fullResolutionImage, as in this SO answer. Here is my complete solution:
...
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef fullResImage = [assetRepresentation fullResolutionImage];
NSString *adjustment = [[assetRepresentation metadata] objectForKey:@"AdjustmentXMP"];
if (adjustment) {
    NSData *xmpData = [adjustment dataUsingEncoding:NSUTF8StringEncoding];
    CIImage *image = [CIImage imageWithCGImage:fullResImage];

    NSError *error = nil;
    NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                                 inputImageExtent:image.extent
                                                            error:&error];
    CIContext *context = [CIContext contextWithOptions:nil];
    if (filterArray && !error) {
        for (CIFilter *filter in filterArray) {
            [filter setValue:image forKey:kCIInputImageKey];
            image = [filter outputImage];
        }
        fullResImage = [context createCGImage:image fromRect:[image extent]];
    }
}
UIImage *result = [UIImage imageWithCGImage:fullResImage
                                      scale:[assetRepresentation scale]
                                orientation:(UIImageOrientation)[assetRepresentation orientation]];
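One small follow-up (my note, not part of the original answer): createCGImage:fromRect: returns an owned CGImageRef, which the code above never releases in the branch where filters were applied. A hedged variant of that branch could release the copy once the UIImage has been built, roughly:
CGImageRef filteredImage = [context createCGImage:image fromRect:[image extent]];
UIImage *result = [UIImage imageWithCGImage:filteredImage
                                      scale:[assetRepresentation scale]
                                orientation:(UIImageOrientation)[assetRepresentation orientation]];
CGImageRelease(filteredImage);   // UIImage retains the CGImage, so the owned copy can be released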
I am using AVFoundation to play a video by creating a CGImage from the AVFoundation callback, creating a UIImage from the CGImage and then displaying the UIImage in a UIImageView.
I want to apply some color correction to the images before I display them on the screen. What is the best way to colorize the images I'm getting?
I tried using CIFilter, but that requires me to first create a CIImage from the AVFoundation output, then colorize it, then create a CGImage, and then create a UIImage; I'd rather avoid the extra step of creating the CIImage if possible.
Additionally, the performance of the CIFilter approach doesn't seem fast enough, at least not when it also has to create the extra CIImage. Any suggestions for a faster way to do this?
Thanks in advance.
It seems that using a CIContext backed by an EAGLContext, instead of a standard CIContext, is the answer. It is fast enough at creating the colorized images for my needs.
Here is a simple code example.
On Init:
NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
[options setObject: [NSNull null] forKey: kCIContextWorkingColorSpace];
m_EAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
m_CIContext = [CIContext contextWithEAGLContext:m_EAGLContext options:options];
Set the color:
- (void)setColorCorrection:(UIColor *)color
{
    CGFloat r, g, b, a;
    [color getRed:&r green:&g blue:&b alpha:&a];

    CIVector *redVector   = [CIVector vectorWithX:r Y:0 Z:0];
    CIVector *greenVector = [CIVector vectorWithX:0 Y:g Z:0];
    CIVector *blueVector  = [CIVector vectorWithX:0 Y:0 Z:b];

    m_ColorFilter = [CIFilter filterWithName:@"CIColorMatrix"];
    [m_ColorFilter setDefaults];
    [m_ColorFilter setValue:redVector forKey:@"inputRVector"];
    [m_ColorFilter setValue:greenVector forKey:@"inputGVector"];
    [m_ColorFilter setValue:blueVector forKey:@"inputBVector"];
}
On each video frame:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGImageRef cgImage = nil;
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
[m_ColorFilter setValue:ciImage forKey:kCIInputImageKey];
CIImage *adjustedImage = [m_ColorFilter valueForKey:kCIOutputImageKey];
cgImage = [m_CIContext createCGImage:adjustedImage fromRect:ciImage.extent];
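The frame handler above stops at createCGImage:; a minimal continuation (my sketch) that balances the earlier lock and wraps the result for display would be:
// Balance the CVPixelBufferLockBaseAddress call above.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

UIImage *frameImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);   // createCGImage: returns an owned reference; UIImage retains it

// Display frameImage, e.g. by assigning it to a UIImageView on the main thread.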