I am trying to crop an area out of a UIImage to use with GLKTextureLoader. I can set the texture directly from the UIImage with the following:
- (void)setTextureImage:(UIImage *)image
{
NSError *error;
texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
if(error)
{
NSLog(@"Texture Error:%@", error);
} else {
self.textureCoordinates[0] = GLKVector2Make(1.0f, 1.0f);
self.textureCoordinates[1] = GLKVector2Make(1.0f, 0);
self.textureCoordinates[2] = GLKVector2Make(0, 0);
self.textureCoordinates[3] = GLKVector2Make(0, 1.0f);
}
}
However, if I try to crop the image this way:
- (void)setTextureImage:(UIImage *)image
{
NSError *error;
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, CGRectMake(0.0, 64.0f, 64.0f, 64.0f));
texture = [GLKTextureLoader textureWithCGImage:imageRef options:nil error:&error];
if(error)
{
NSLog(@"[SF] Texture Error:%@", error);
} else {
self.textureCoordinates[0] = GLKVector2Make(1.0f, 1.0f);
self.textureCoordinates[1] = GLKVector2Make(1.0f, 0);
self.textureCoordinates[2] = GLKVector2Make(0, 0);
self.textureCoordinates[3] = GLKVector2Make(0, 1.0f);
}
}
I get this error:
Texture Error:Error Domain=GLKTextureLoaderErrorDomain Code=12 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 12.)" UserInfo=0x6a6b550 {GLKTextureLoaderErrorKey=Image decoding failed}
How can I take a section of the UIImage to use as a texture with GLKTextureLoader?
In the end I got it to work with UIImagePNGRepresentation. I'm only learning OpenGL/GLKit at the moment, so I'm not an expert, but I'm guessing the cropped CGImage was missing a colorspace or some other data that is required for textures.
- (void)setTextureImage:(UIImage *)image withFrame:(CGRect)rect
{
NSError *error;
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect);
image = [UIImage imageWithData:UIImagePNGRepresentation([UIImage imageWithCGImage:imageRef])];
CGImageRelease(imageRef); // release the cropped CGImage created above
texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
if(error)
{
NSLog(@"[SF] Texture Error:%@", error);
} else {
self.textureCoordinates[0] = GLKVector2Make(1.0f, 1.0f);
self.textureCoordinates[1] = GLKVector2Make(1.0f, 0);
self.textureCoordinates[2] = GLKVector2Make(0, 0);
self.textureCoordinates[3] = GLKVector2Make(0, 1.0f);
}
}
I have the same issue. It seems this bug comes from the way I redraw the image.
The error appears if I create the CGContext like this:
CGBitmapContextCreate(NULL, width, height, 8, 4 * width, colorSpace, kCGBitmapByteOrderDefault|kCGImageAlphaPremultipliedLast);
However, it will load the texture successfully if I create the CGContext like this:
UIGraphicsBeginImageContextWithOptions(aImage.size, NO, aImage.scale);
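For clarity, here is a minimal sketch (my own, not the answerer's exact code) of redrawing the cropped region through UIGraphicsBeginImageContextWithOptions so the resulting CGImage has a bitmap layout GLKTextureLoader accepts; sourceImage and cropRect are placeholder names:
- (UIImage *)textureReadyImageFromImage:(UIImage *)sourceImage cropRect:(CGRect)cropRect
{
    // Draw into a UIKit-managed bitmap context, which produces a CGImage
    // in a format GLKTextureLoader is known to accept.
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, sourceImage.scale);
    // Shift the drawing so only the requested rect lands in the context.
    [sourceImage drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}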
I've had the same error because of incorrect alpha settings for a CGImageRef.
The documentation for GLKTextureLoader lists the supported CGImage formats in Table 1 (Apple Documentation).
I've changed code used to create context from
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * (CGColorSpaceGetNumberOfComponents(space) + 1), space, kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);
to
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * (CGColorSpaceGetNumberOfComponents(space) + 1), space, kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedFirst);
Everything is fine for me now, and this should fix Feng Stone's problem.
For you, I would recommend skipping the CGImageCreateWithImageInRect function and creating a new CGContext with the correct parameters, where you can draw your image with CGContextDrawImage, as in the sketch below.
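A rough sketch of that suggestion (the exact bitmap parameters are my assumption, chosen to match a premultiplied BGRA layout from the supported-format table): create a bitmap context with a known-good format, draw the region you want into it, and hand the resulting CGImage to GLKTextureLoader:
- (CGImageRef)newTextureCGImageFromImage:(CGImageRef)source cropRect:(CGRect)cropRect
{
    size_t cropWidth   = (size_t)cropRect.size.width;
    size_t cropHeight  = (size_t)cropRect.size.height;
    size_t imageWidth  = CGImageGetWidth(source);
    size_t imageHeight = CGImageGetHeight(source);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Premultiplied BGRA, one of the layouts GLKTextureLoader handles.
    CGContextRef context = CGBitmapContextCreate(NULL, cropWidth, cropHeight, 8, 4 * cropWidth,
        colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) return NULL;
    // Core Graphics uses a bottom-left origin, so offset the draw so that the
    // requested (top-left origin) crop rect fills the context.
    CGFloat dx = -cropRect.origin.x;
    CGFloat dy = -((CGFloat)imageHeight - cropRect.origin.y - cropRect.size.height);
    CGContextDrawImage(context, CGRectMake(dx, dy, imageWidth, imageHeight), source);
    CGImageRef result = CGBitmapContextCreateImage(context); // caller must CGImageRelease this
    CGContextRelease(context);
    return result;
}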
My project uses OpenCV for iOS (2.4.9). I found that the function MatToUIImage causes memory leaks, and only on iOS 10.x.
After I updated this function from the 2.4.9 version to the latest (3.2.0) version, everything worked. The only difference is the CGBitmapInfo.
So can anyone tell me why?
2.4.9
UIImage* MatToUIImage(const cv::Mat& image) {
NSData *data = [NSData dataWithBytes:image.data
length:image.elemSize()*image.total()];
CGColorSpaceRef colorSpace;
if (image.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider =
CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(image.cols,
image.rows,
8,
8 * image.elemSize(),
image.step.p[0],
colorSpace,
kCGImageAlphaNone|
kCGBitmapByteOrderDefault,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
3.2.0
UIImage* MatToUIImage(const cv::Mat& image) {
NSData *data = [NSData dataWithBytes:image.data
length:image.elemSize()*image.total()];
CGColorSpaceRef colorSpace;
if (image.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider =
CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Preserve alpha transparency, if exists
bool alpha = image.channels() == 4;
CGBitmapInfo bitmapInfo = (alpha ? kCGImageAlphaLast : kCGImageAlphaNone) | kCGBitmapByteOrderDefault;
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(image.cols,
image.rows,
8,
8 * image.elemSize(),
image.step.p[0],
colorSpace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
Important update (5 June 2017): performing CFRelease manually finally turned out to be a bad idea, as it can cause more trouble than it solves. Still, it gave me a clue that the leaks are somehow connected with the NSData not being released.
I noticed that it is released automatically, as expected with ARC, when called from a block on a background thread, like this:
- (void)runInBackgroundWithImageBuffer:(CVImageBufferRef)imageBuffer
callback:(void (^)())callback {
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
[self processImageBuffer:imageBuffer];
if (callback != nil) {
callback();
}
});
}
- (void)previewOpenCVImage:(cv::Mat *)image {
UIImage *preview = MatToUIImage(*image);
dispatch_async(dispatch_get_main_queue(), ^{
// _imagePreview has (UIImageView *) type
[_imagePreview setImage:preview];
});
}
I can confirm this for the iPhone Simulator. The current MatToUIImage implementation seems to cause memory leaks on the Simulator, and I can't reproduce them on a device.
For some reason they are not detected by the profiler, but memory usage just blows up after multiple calls.
I added some tweaks to make it work:
Add the line CFRelease((CFTypeRef)data) before returning the final image.
When the image is no longer needed, perform CFRelease(image.CGImage) and probably CFRelease((CFTypeRef)image).
Hope that helps. I don't completely understand why this happens, who is holding the references, or why we need to perform the release manually.
I'm developing a movie maker application that applies effects to imported videos.
I'm using AVAssetWriter to code my application.
Everything works very well, but I have a big memory problem: my app takes over 500 MB of RAM during the buffering process.
The algorithm for making a filtered video goes roughly like this:
1- Import the video.
2- Extract all the frames of the video as CMSampleBuffer objects.
3- Convert each CMSampleBuffer to a UIImage.
4- Apply the filter to the UIImage.
5- Convert the UIImage back to a new CMSampleBuffer.
6- Append the new buffer to a writer output.
7- Finally, save the new movie to the photo gallery.
The problem is in step 5: I have a function that converts a UIImage to a CVPixelBuffer object and returns it. Then I convert the CVPixelBuffer object to a CMSampleBuffer.
This function increases memory usage a lot, and the application crashes at the end.
This is my code:
-(CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize)size
{
double height = CGImageGetHeight(image);
double width = CGImageGetWidth(image);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess) {
return NULL;
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata,size.width ,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
CGFloat Y ;
if (height == size.height)
Y = 0;
else
Y = (size.height /2) - (height/2) ;
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, Y,width,height), image);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
CGContextDrawImage increases the memory by 2~5 MB per frame conversion.
I tried the following solutions:
1- releasing pxbuffer using CFRelease.
2- I used CGImageRelease to release the image ref.
3- I surrounded the code with an @autoreleasepool block.
4- I used CGContextRelease.
5- UIGraphicsEndImageContext.
6- Used the Analyze tool in Xcode and fixed all the issues it found.
Here is the full code for Video filtering:
- (void)assetFilteringMethod:(FilterType)filterType AndAssetURL:(NSURL *)assetURL{
CMSampleBufferRef sbuff ;
[areader addOutput:rout];
[areader startReading];
UIImage* bufferedImage;
while ([areader status] != AVAssetReaderStatusCompleted) {
sbuff = [rout copyNextSampleBuffer];
if (sbuff == nil)
[areader cancelReading];
else{
if (writerInput.readyForMoreMediaData) {
@autoreleasepool {
bufferedImage = [self imageFromSampleBuffer:sbuff];
bufferedImage = [FrameFilterClass convertImageToFilterWithFilterType:filterType andImage: bufferedImage];
CVPixelBufferRef buffer = NULL;
buffer = [self pixelBufferFromCGImage:[bufferedImage CGImage] andSize:CGSizeMake(320,240)];
[adaptor appendPixelBuffer:buffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
CFRelease(buffer);
CFRelease(sbuff);
}
}
}
}
//Finished buffering
[videoWriter finishWritingWithCompletionHandler:^{
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted){
dispatch_async(dispatch_get_main_queue(), ^{
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library
videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]]) {
[library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]
completionBlock:^(NSURL *assetURL, NSError *error){
}];
}
});
}
else
NSLog(@"Video writing failed: %@", videoWriter.error);
}];
}
I spent around 3 to 4 days trying to solve this problem...
Any help would be appreciated.
You have to release the image using this line:
CGImageRelease(image.CGImage);
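As a hedged illustration only: this is how that suggestion could look inside the buffering loop from the question, assuming (since imageFromSampleBuffer: is not shown) that it returns a UIImage wrapped around a CGImage you created and still own; releasing a CGImage owned by an image you did not create would be an over-release:
@autoreleasepool {
    bufferedImage = [self imageFromSampleBuffer:sbuff];
    bufferedImage = [FrameFilterClass convertImageToFilterWithFilterType:filterType andImage:bufferedImage];
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[bufferedImage CGImage] andSize:CGSizeMake(320, 240)];
    [adaptor appendPixelBuffer:buffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
    CGImageRelease(bufferedImage.CGImage); // the release this answer refers to
    CFRelease(buffer);
    CFRelease(sbuff);
}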
I'm getting a UIImage from a CMSampleBufferRef video buffer every N video frames, like this:
- (void)imageFromVideoBuffer:(void(^)(UIImage* image))completion {
CMSampleBufferRef sampleBuffer = _myLastSampleBuffer;
if (sampleBuffer != nil) {
CFRetain(sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
_lastAppendedVideoBuffer.sampleBuffer = nil;
if (_context == nil) {
_context = [CIContext contextWithOptions:nil];
}
CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CGImageRef cgImage = [_context createCGImage:ciImage fromRect:
CGRectMake(0, 0, CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer))];
__block UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CFRelease(sampleBuffer);
if(completion) completion(image);
return;
}
if(completion) completion(nil);
}
XCode and Instruments detect a Memory Leak, but I'm not able to get rid of it.
I'm releasing the CGImageRef and CMSampleBufferRef as usual:
CGImageRelease(cgImage);
CFRelease(sampleBuffer);
[UPDATE]
Here is what I put in the AVCapture output callback to get the sampleBuffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
if (captureOutput == _videoOutput) {
_lastVideoBuffer.sampleBuffer = sampleBuffer;
id<CIImageRenderer> imageRenderer = _CIImageRenderer;
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
CIImage *ciImage = nil;
ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
if(_context==nil) {
_context = [CIContext contextWithOptions:nil];
}
CGImageRef processedCGImage = [_context createCGImage:ciImage
fromRect:[ciImage extent]];
//UIImage *image=[UIImage imageWithCGImage:processedCGImage];
CGImageRelease(processedCGImage);
NSLog(@"Captured image %@", ciImage);
}
});
}
}
The code that leaks is the createCGImage:fromRect: call:
CGImageRef processedCGImage = [_context createCGImage:ciImage
fromRect:[ciImage extent]];
even with an @autoreleasepool, the CGImageRelease of the CGImage reference, and the CIContext kept as an instance property.
This seems to be the same issue addressed here: Can't save CIImage to file on iOS without memory leaks
[UPDATE]
The leak seems to be due to a bug. The issue is well described in
Memory leak on CIContext createCGImage at iOS 9?
A sample project shows how to reproduce this leak: http://www.osamu.co.jp/DataArea/VideoCameraTest.zip
The last comments there confirm that
It looks like they fixed this in 9.1b3. If anyone needs a workaround
that works on iOS 9.0.x, I was able to get it working with this:
in test code (Objective-C in this case):
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
if (error) return;
__block NSString *filePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"ipdf_pic_%i.jpeg",(int)[NSDate date].timeIntervalSince1970]];
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^
{
@autoreleasepool
{
CIImage *enhancedImage = [CIImage imageWithData:imageData];
if (!enhancedImage) return;
static CIContext *ctx = nil; if (!ctx) ctx = [CIContext contextWithOptions:nil];
CGImageRef imageRef = [ctx createCGImage:enhancedImage fromRect:enhancedImage.extent format:kCIFormatBGRA8 colorSpace:nil];
UIImage *image = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationRight];
[[NSFileManager defaultManager] createFileAtPath:filePath contents:UIImageJPEGRepresentation(image, 0.8) attributes:nil];
CGImageRelease(imageRef);
}
});
}];
and the workaround for iOS 9.0 (in Swift) should be:
extension CIContext {
func createCGImage_(image:CIImage, fromRect:CGRect) -> CGImage {
let width = Int(fromRect.width)
let height = Int(fromRect.height)
let rawData = UnsafeMutablePointer<UInt8>.alloc(width * height * 4)
render(image, toBitmap: rawData, rowBytes: width * 4, bounds: fromRect, format: kCIFormatRGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())
let dataProvider = CGDataProviderCreateWithData(nil, rawData, height * width * 4) {info, data, size in UnsafeMutablePointer<UInt8>(data).dealloc(size)}
return CGImageCreate(width, height, 8, 32, width * 4, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue), dataProvider, nil, false, .RenderingIntentDefault)!
}
}
We were experiencing a similar issue in an app we created, where we are processing each frame for feature keypoints with OpenCV, and sending off a frame every couple of seconds. After a while of running we would end up with quite a few memory pressure messages.
We managed to rectify this by running our processing code in its own autorelease pool, like so (jpegDataFromSampleBufferAndCrop does something similar to what you are doing, with added cropping):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {
if ([self.lastFrameSentAt timeIntervalSinceNow] < -kContinuousRateInSeconds) {
NSData *imageData = [self jpegDataFromSampleBufferAndCrop:sampleBuffer];
if (imageData) {
[self processImageData:imageData];
}
self.lastFrameSentAt = [NSDate date];
imageData = nil;
}
}
}
I can confirm that this memory leak still exists on iOS 9.2. (I've also posted on the Apple Developer Forum.)
I get the same memory leak on iOS 9.2. I've tested dropping the EAGLContext and using MetalKit and MTLDevice instead. I've tested different methods of CIContext like drawImage, createCGImage and render, but nothing seems to work.
It is very clear that this is a bug as of iOS 9. Try it out yourself by downloading the example app from Apple (see below), then run the same project on a device with iOS 8.4 and on a device with iOS 9.2, and pay attention to the memory gauge in Xcode.
Download https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013109
Add this to the APLEAGLView.h:20
@property (strong, nonatomic) CIContext* ciContext;
Replace APLEAGLView.m:118 with this
[EAGLContext setCurrentContext:_context];
_ciContext = [CIContext contextWithEAGLContext:_context];
And finally replace APLEAGLView.m:341-343 with this
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
@autoreleasepool
{
CIImage* sourceImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIFilter* filter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, sourceImage, nil];
CIImage* filteredImage = filter.outputImage;
[_ciContext render:filteredImage toCVPixelBuffer:pixelBuffer];
}
glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
I am working on a rich graphics iOS application. At one point, the memory taken by our application is 250 MB. I take each frame from the camera, process it with OpenGL shaders, and extract some data. Each time I use the camera to get frames for processing, I see the memory increase up to 280 MB. When I stop capturing frames, memory comes back to normal at 250 MB. If I repeat the process of starting the camera and exiting, say, 10 times, I receive a memory warning (though no memory leak is observed). I am not using ARC here. I maintain an autorelease pool that covers the entire processing of a frame. I don't see any leaks while profiling. After 10 times, the memory seems to stand at 250 MB. I am not sure of the reason for the memory warning. Any insights? I am happy to provide further information. OpenGL version: ES 2.0, iOS version: 7.0.
You have to use ARC; it will automatically release memory that is no longer needed and keep your application optimized.
According to some other questions like this one (Crash running OpenGL on iOS after memory warning) and this one (instruments with iOS: Why does Memory Monitor disagree with Allocations?) the problem may be that you aren't deleting OpenGL resources (VBOs, textures, renderbuffers, whatever) when you're done with them.
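To make that concrete, here is a minimal teardown sketch (my own illustration, not code from the linked questions); it assumes GLuint instance variables and that OpenGLES/ES2/gl.h is already imported, and the names are placeholders for whatever GL objects your renderer owns:
- (void)teardownGL
{
    [EAGLContext setCurrentContext:_context];
    // Delete every GL object this renderer created, then zero the handles.
    if (_texture)           { glDeleteTextures(1, &_texture);                 _texture = 0; }
    if (_vbo)               { glDeleteBuffers(1, &_vbo);                      _vbo = 0; }
    if (_colorRenderbuffer) { glDeleteRenderbuffers(1, &_colorRenderbuffer);  _colorRenderbuffer = 0; }
    if (_framebuffer)       { glDeleteFramebuffers(1, &_framebuffer);         _framebuffer = 0; }
    if ([EAGLContext currentContext] == _context) {
        [EAGLContext setCurrentContext:nil];
    }
}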
Without seeing code, who knows? Are you simply rendering the frame buffer using the presentRenderbuffer method of EAGLContext? Then, what are you doing with the pixelBuffer you passed to CVOpenGLESTextureCacheCreateTextureFromImage? The pixel buffer is the only source of substantial memory in a typical use scenario.
However, if you're swapping the data in the render buffer to another buffer with, say, glReadPixels, then you've introduced one of several memory hogs. If the buffer you swapped to was a CoreGraphics buffer via, say, a CGDataProvider, did you include a data release callback, or did you pass nil as the parameter when you created the provider? Did you glFlush after you swapped buffers?
These are questions for which I could ascertain answers if you provided code; if you think you can tackle this without doing so, but would like to see working code that successfully manages memory in the most arduous use-case scenario there could possibly be:
https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html
For your convenience, I've provided some code below. Place it after any call to the presentRenderbuffer method, commenting out the call if you do not want to render the buffer to the display in the CAEAGLLayer (as I did in the sample below):
// [_context presentRenderbuffer:GL_RENDERBUFFER];
dispatch_async(dispatch_get_main_queue(), ^{
#autoreleasepool {
// To capture the output to an OpenGL render buffer...
NSInteger myDataLength = _backingWidth * _backingHeight * 4;
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// To swap the pixel buffer to a CoreGraphics context (as a CGImage)
CGDataProviderRef provider;
CGColorSpaceRef colorSpaceRef;
CGImageRef imageRef;
CVPixelBufferRef pixelBuffer;
@try {
provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * _backingWidth;
colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
} @catch (NSException *exception) {
NSLog(@"Exception: %@", [exception reason]);
} @finally {
if (imageRef) {
// To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
// To verify the integrity of the pixel buffer (by converting it back to a CGImage, and then displaying it in a layer)
imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
}
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(imageRef);
}
}
});
.
.
.
The callback to free the data in the instance of the CGDataProvider class:
static void releaseDataCallback (void *info, const void *data, size_t size) {
free((void*)data);
}
The CVCGImageUtil class interface and implementation files, respectively:
@import Foundation;
@import CoreMedia;
@import CoreGraphics;
@import QuartzCore;
@import CoreImage;
@import UIKit;
@interface CVCGImageUtil : NSObject
+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;
@end
#import "CVCGImageUtil.h"
@implementation CVCGImageUtil
+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
{
// CVPixelBuffer to CoreImage
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
CGPoint origin = [image extent].origin;
image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];
// CoreImage to CGImage via CoreImage context
CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];
// CGImage to UIImage (OPTIONAL)
//UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
//return (CGImageRef)uiImage.CGImage;
return cgImage;
}
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
CGSize frameSize = CGSizeMake(CGImageGetWidth(image),
CGImageGetHeight(image));
NSDictionary *options =
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES],
kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status =
CVPixelBufferCreate(
kCFAllocatorDefault, frameSize.width, frameSize.height,
kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(
pxdata, frameSize.width, frameSize.height,
8, CVPixelBufferGetBytesPerRow(pxbuffer),
rgbColorSpace,
(CGBitmapInfo)kCGBitmapByteOrder32Little |
kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
{
CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
CMSampleBufferRef newSampleBuffer = NULL;
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(
NULL, pixelBuffer, &videoInfo);
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
pixelBuffer,
true,
NULL,
NULL,
videoInfo,
&timingInfo,
&newSampleBuffer);
CFRelease(videoInfo); // release the format description created above
return newSampleBuffer;
}
@end
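A short usage sketch (my own, not from the original post): round-trip a CGImage through the helpers above. someCGImage and ciContext are placeholders, and the helpers follow the Core Foundation Create rule, so the caller releases what they return:
CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:someCGImage];
CMSampleBufferRef sampleBuffer = [CVCGImageUtil sampleBufferFromCGImage:someCGImage];
CGImageRef roundTripped = [CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:ciContext];
// ...use the buffers, e.g. append sampleBuffer to an AVAssetWriterInput...
CGImageRelease(roundTripped);
CFRelease(sampleBuffer);
CVPixelBufferRelease(pixelBuffer);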
I have the pieces for accomplishing both of these tasks; I'm just not sure how to put them together. The first block of code captures an image, but it only gives me an image buffer and not something I can convert to a UIImage.
- (void) captureStillImage
{
AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer != NULL) {
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *captureImage = [[UIImage alloc] initWithData:imageData];
}
if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
[[self delegate] captureManagerStillImageCaptured:self];
}
}];
}
Here is code from an Apple example that takes an image buffer and converts it to a UIImage. How do I combine these two methods to work together?
-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer==NULL) {
NSLog(@"No buffer");
}
// Lock the base address of the pixel buffer
if((CVPixelBufferLockBaseAddress(imageBuffer, 0))==kCVReturnSuccess){
NSLog(@"Buffer locked successfully");
}
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
NSLog(@"bytes per row %zu", bytesPerRow);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
NSLog(@"width %zu", width);
size_t height = CVPixelBufferGetHeight(imageBuffer);
NSLog(@"height %zu", height);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image= [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
return (image );
}
The first block of code does exactly what you need and is an acceptable way of doing it. What are you trying to do with the second block?
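If it helps, here is a hedged sketch of wiring the two together: call the conversion method from inside the completion handler. One assumption on my part: getUIImageFromBuffer: reads an uncompressed BGRA pixel buffer, so the still image output would need outputSettings requesting kCVPixelFormatType_32BGRA rather than the default JPEG; with JPEG output, the jpegStillImageNSDataRepresentation path in the first block is already the right tool. The capturedImage property is hypothetical:
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer != NULL) {
            // Reuse the Apple sample's conversion instead of the JPEG round trip.
            UIImage *captureImage = [self getUIImageFromBuffer:imageDataSampleBuffer];
            self.capturedImage = captureImage; // hypothetical property to hand the image onward
        }
        if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
            [[self delegate] captureManagerStillImageCaptured:self];
        }
    }];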