Capture still UIImage without compression (from CMSampleBufferRef)?

I need to obtain a UIImage from the uncompressed image data in a CMSampleBufferRef. I'm using this code:
[captureStillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                      completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    // that famous function from Apple docs found on a lot of websites
    // does NOT work for still images
    UIImage *capturedImage = [self imageFromSampleBuffer:imageSampleBuffer];
}];
Here is a link to the imageFromSampleBuffer function: http://developer.apple.com/library/ios/#qa/qa1702/_index.html
But it does not work properly. :(
There is a jpegStillImageNSDataRepresentation:imageSampleBuffer method, but it gives compressed data (well, because JPEG).
How can I get a UIImage created from the most raw, non-compressed data after capturing a still image?
Maybe I should specify some settings for the video output? I'm currently using these:
captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
captureStillImageOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
I've noticed that the output has a default value for AVVideoCodecKey, which is AVVideoCodecJPEG. Can it be avoided in any way, and does it even matter when capturing a still image?
I found something here: Raw image data from camera like "645 PRO", but I need just a UIImage, without using OpenCV or OpenGL ES or other third-party libraries.

The imageFromSampleBuffer method does work; in fact, I'm using a modified version of it. But if I remember correctly, you need to set the outputSettings right: the key should be kCVPixelBufferPixelFormatTypeKey and the value kCVPixelFormatType_32BGRA.
So for example:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* outputSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[newStillImageOutput setOutputSettings:outputSettings];
EDIT
I am using those settings to take still images, not video.
Is your sessionPreset AVCaptureSessionPresetPhoto? There may be problems with that:
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto];
EDIT 2
The part about saving it to a UIImage is identical to the one from the documentation. That's why I was asking about other origins of the problem, but I guess that was just grasping at straws.
There is another way I know of, but that requires OpenCV.
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
I guess that is of no help to you, sorry. I don't know enough to think of other origins for your problem.

Here's a more efficient way:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];
- (NSData *) imageToBuffer:(CMSampleBufferRef)source {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return data;
}
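One caveat: the NSData returned here contains raw pixel bytes, not encoded PNG/JPEG data, so +[UIImage imageWithData:] generally can't decode it on its own. If you go this route with the 32BGRA output settings discussed above, a rough sketch of rebuilding a UIImage from those bytes might look like the following (width, height and bytesPerRow would have to travel with the data; this helper is hypothetical, not part of the answer):
- (UIImage *)imageFromBGRAData:(NSData *)data width:(size_t)width height:(size_t)height bytesPerRow:(size_t)bytesPerRow {
    // Wrap the raw bytes in a data provider and describe their layout to CoreGraphics.
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                       kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                                       provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}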

Related

Peer to peer video in iOS

I am trying to transmit real-time video buffers from one iPhone to another iPhone (the client iPhone) for preview display, and also to accept commands from the client iPhone. I am looking for a standard way to achieve this. The closest thing I found is AVCaptureMultipeerVideoDataOutput on GitHub.
However, that still uses the Multipeer Connectivity framework, and I think it still requires some setup on both iPhones. What I want is for there to be ideally no setup required on either iPhone: as long as Wi-Fi (or, if possible, Bluetooth) is enabled on both, the peers should recognize each other within the app and prompt the user about device discovery. What are the standard ways to achieve this, and are there any links to sample code?
EDIT: I got it working through Multipeer Connectivity after writing code from scratch. As of now, I am sending the pixel buffers to the peer device by downscaling and compressing the data as JPEG. On the remote device, I have a UIImage set up where I display the data every frame. However, I think UIKit may not be the best way to display the data, even though the images are small. How do I display this data using OpenGL ES? Is direct decoding of JPEG possible in OpenGL ES?
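For the discovery part of the original question (which the edit above says is now working), Multipeer Connectivity's advertiser/browser pair is the standard zero-configuration route over Wi-Fi and Bluetooth: each device advertises and/or browses for a shared service type and the framework handles peer discovery. A minimal sketch, assuming properties for the session, advertiser and browser, and a made-up service type:
- (void)startPeerDiscovery
{
    // Both devices share the same session setup and service type string.
    self.peerID = [[MCPeerID alloc] initWithDisplayName:[UIDevice currentDevice].name];
    self.session = [[MCSession alloc] initWithPeer:self.peerID];
    self.session.delegate = self;

    // The camera (sending) side advertises itself...
    self.advertiser = [[MCNearbyServiceAdvertiser alloc] initWithPeer:self.peerID
                                                        discoveryInfo:nil
                                                          serviceType:@"cam-preview"];
    self.advertiser.delegate = self;
    [self.advertiser startAdvertisingPeer];

    // ...and the client side browses for it.
    self.browser = [[MCNearbyServiceBrowser alloc] initWithPeer:self.peerID serviceType:@"cam-preview"];
    self.browser.delegate = self;
    [self.browser startBrowsingForPeers];
}

// Browser delegate: invite any discovered peer (or prompt the user first).
- (void)browser:(MCNearbyServiceBrowser *)browser foundPeer:(MCPeerID *)peerID withDiscoveryInfo:(NSDictionary *)info
{
    [browser invitePeer:peerID toSession:self.session withContext:nil timeout:30];
}

// Advertiser delegate: accept the invitation into the shared session.
- (void)advertiser:(MCNearbyServiceAdvertiser *)advertiser didReceiveInvitationFromPeer:(MCPeerID *)peerID withContext:(NSData *)context invitationHandler:(void (^)(BOOL, MCSession *))invitationHandler
{
    invitationHandler(YES, self.session);
}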
Comments:
As of now, I am sending the pixel buffers to the peer device by downscaling and compressing the data as JPEG. On the remote device, I have a UIImage set up where I display the data every frame. However, I think UIKit may not be the best way to display the data, even though the images are small.
Turns out, this is the best way to transmit an image via the Multipeer Connectivity framework. I have tried all the alternatives:
I've compressed frames using VideoToolbox. Too slow.
I've compressed frames using the Compression framework. Too slow, but better.
Let me provide some code for the second approach:
On the iOS device transmitting image data:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    __block uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

    dispatch_async(self.compressionQueue, ^{
        uint8_t *compressed = malloc(sizeof(uint8_t) * 1228808);
        size_t compressedSize = compression_encode_buffer(compressed, 1228808, baseAddress, 1228808, NULL, COMPRESSION_ZLIB);
        NSData *data = [NSData dataWithBytes:compressed length:compressedSize];
        NSLog(@"Sending size: %lu", [data length]);
        dispatch_async(dispatch_get_main_queue(), ^{
            __autoreleasing NSError *err;
            [((ViewController *)self.parentViewController).session sendData:data toPeers:((ViewController *)self.parentViewController).session.connectedPeers withMode:MCSessionSendDataReliable error:&err];
        });
    });

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
On the iOS device displaying image data:
typedef struct {
    size_t length;
    void *data;
} ImageCacheDataStruct;

- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
    NSLog(@"Receiving size: %lu", [data length]);
    uint8_t *original = malloc(sizeof(uint8_t) * 1228808);
    size_t originalSize = compression_decode_buffer(original, 1228808, [data bytes], [data length], NULL, COMPRESSION_ZLIB);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(original, 640, 480, 8, 2560, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(newImage);

    if (image) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
        });
    }
}
Although this code produces original-quality images on the receiving end, you'll find this far too slow for real-time playback.
Here's the best way to do it:
On the iOS device sending the image data:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];

    CGImageRelease(newImage);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    if (image) {
        NSData *data = UIImageJPEGRepresentation(image, 0.7);
        NSError *err;
        [((ViewController *)self.parentViewController).session sendData:data toPeers:((ViewController *)self.parentViewController).session.connectedPeers withMode:MCSessionSendDataReliable error:&err];
    }
}
On the iOS device receiving the image data:
- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
    dispatch_async(self.imageCacheDataQueue, ^{
        dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);

        const void *dataBuffer = [data bytes];
        size_t dataLength = [data length];

        ImageCacheDataStruct *imageCacheDataStruct = calloc(1, sizeof(ImageCacheDataStruct));
        imageCacheDataStruct->data = (void *)dataBuffer;
        imageCacheDataStruct->length = dataLength;

        __block const void *kMyKey;
        dispatch_queue_set_specific(self.imageDisplayQueue, &kMyKey, (void *)imageCacheDataStruct, NULL);

        dispatch_sync(self.imageDisplayQueue, ^{
            ImageCacheDataStruct *cachedStruct = dispatch_queue_get_specific(self.imageDisplayQueue, &kMyKey);
            const void *dataBytes = cachedStruct->data;
            size_t length = cachedStruct->length;

            NSData *imageData = [NSData dataWithBytes:dataBytes length:length];
            UIImage *image = [UIImage imageWithData:imageData];

            if (image) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    [((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
                    dispatch_semaphore_signal(self.semaphore);
                });
            }
        });
    });
}
The reason for the semaphores and the separate GCD queues is simple: you want the frames to display at equal time intervals. Otherwise, the video will seem to slow down at times, right before speeding up way past normal to catch up. My scheme ensures that each frame plays one after another at the same pace, regardless of network bandwidth bottlenecks.
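Stripped of the queue-specific bookkeeping, the pacing idea boils down to something like this (a sketch only; self.frameQueue, self.semaphore and self.imageLayer are assumed to be a serial queue, a dispatch_semaphore_create(1) semaphore, and the layer being drawn into):
- (void)session:(MCSession *)session didReceiveData:(NSData *)data fromPeer:(MCPeerID *)peerID
{
    dispatch_async(self.frameQueue, ^{                                   // serial queue keeps frames in arrival order
        dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);  // block until the previous frame is on screen
        UIImage *image = [UIImage imageWithData:data];
        dispatch_async(dispatch_get_main_queue(), ^{
            if (image) {
                self.imageLayer.contents = (__bridge id)image.CGImage;
            }
            dispatch_semaphore_signal(self.semaphore);                   // let the next frame through
        });
    });
}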

iPhone 6+ (Wrong scale?)

I have an iOS app using the camera to take pictures.
It uses a path (CGPath) drawn on the screen (for example a rectangle), and it takes a photo within that path. The app supports only portrait orientation.
For that to happen I use: AVCaptureSession, AVCaptureStillImageOutput, AVCaptureDevice, AVCaptureVideoPreviewLayer
(I guess all familiar to developers making this kind of app).
My code uses UIScreen.mainScreen().bounds and UIScreen.mainScreen().scale to adapt to various devices and do its job.
It all goes fine (on iPhone 5, iPhone 6), until I try the app on an iPhone 6+ (running iOS 9.3.1) and see that something is wrong.
The picture taken is not laid out in the right place anymore.
I had someone try on an iPhone 6+, and by putting in an appropriate message I was able to confirm that UIScreen.mainScreen().scale is what it should be: 3.0.
I have put the proper-size launch images (640 × 960, 640 × 1136, 750 × 1334, 1242 × 2208) in the project.
So what could be the problem?
I use the code below in an app; it works on the 6+.
The code starts an AVCaptureSession, pulling video input from the device's camera.
As it does so, it continuously updates the runImage var from the captureOutput delegate function.
When the user wants to take a picture, the takePhoto method is called. This method creates a temporary UIImageView and feeds the runImage into it. This temp UIImageView is then used to draw another variable called currentImage at the scale of the device.
The currentImage, in my case, is square, matching the previewHolder frame, but I suppose you can make anything you want.
Declare these:
AVCaptureDevice * device;
AVCaptureDeviceInput * input;
AVCaptureVideoDataOutput * output;
AVCaptureSession * session;
AVCaptureVideoPreviewLayer * preview;
AVCaptureConnection * connection;
UIImage * runImage;
Load scanner:
-(void)loadScanner
{
    device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    output = [AVCaptureVideoDataOutput new];
    session = [AVCaptureSession new];

    [session setSessionPreset:AVCaptureSessionPresetPhoto];
    [session addInput:input];
    [session addOutput:output];

    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    [output setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
    preview.frame = previewHolder.bounds;

    connection = preview.connection;
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait];

    [previewHolder.layer insertSublayer:preview atIndex:0];
}
Ongoing image capture, updates runImage var.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    runImage = [self imageForBuffer:sampleBuffer];
}
Related to above.
-(UIImage *)imageForBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);

    UIImage *rotated = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0 orientation:UIImageOrientationRight];

    return rotated;
}
On take photo:
-(void)takePhoto
{
    UIImageView *temp = [UIImageView new];
    temp.frame = previewHolder.frame;
    temp.image = runImage;
    temp.contentMode = UIViewContentModeScaleAspectFill;
    temp.clipsToBounds = true;
    [self.view addSubview:temp];

    UIGraphicsBeginImageContextWithOptions(temp.bounds.size, NO, [UIScreen mainScreen].scale);
    [temp drawViewHierarchyInRect:temp.bounds afterScreenUpdates:YES];
    currentImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [temp removeFromSuperview];

    //further code...
}
In case someone else has the same issue, here is what made things go wrong for me:
I was naming a file xyz@2x.png.
When UIScreen.mainScreen().scale == 3.0 (the case of an iPhone 6+),
it has to be named xyz@3x.png.
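In other words, the file only needs the suffix that matches the screen scale; UIKit resolves it automatically when you load by base name. A quick check, using the file name from above (this assumes the images are bundled as loose resources rather than in an asset catalog):
// On a 3.0-scale screen (iPhone 6+), +imageNamed: resolves "xyz" to xyz@3x.png,
// falling back to xyz@2x.png / xyz.png if the 3x file is missing.
UIImage *img = [UIImage imageNamed:@"xyz"];
NSLog(@"scale of loaded image: %.1f", img.scale); // 3.0 when xyz@3x.png was picked up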

How can I get full resolution images from an AVCaptureSession without using the jpeg encoder?

I'm trying to save full-resolution TIFF files from the camera. I've seen a bunch of really helpful guides around here on capturing still images either as straight pixel data or using the JPEG hardware compressor. Thus far I'm able to capture straight pixel data in BGRA format, but I can't seem to get a sample buffer larger than 1920x1080 on a 5s. When I switch to the JPEG compressor route I get the full 5 MP image... just in JPEG format.
Here's my setup:
// Create the AVCaptureSession
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[self.session setSessionPreset:AVCaptureSessionPresetPhoto];
and later on for output settings:
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,nil];
Or:
[stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG}];
Before I ask for the sample buffer I set:
setHighResolutionStillImageOutputEnabled:YES
Then I'm using:
-captureStillImageAsynchronouslyFromConnection
to get the sample buffer.
Just to finish up...
Within the completion block of -captureStillImageAsynchronouslyFromConnection:
For Jpegs I use:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
For BGRA I use:
CFDictionaryRef metadata = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
// >>>>>>>>>> lock buffer address
CVPixelBufferLockBaseAddress(imageBuffer, 0);
//Get information about the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// create suitable color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
//Create suitable context (suitable for camera output setting kCVPixelFormatType_32BGRA)
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// <<<<<<<<<< unlock buffer address
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// release color space
CGColorSpaceRelease(colorSpace);
//Create a CGImageRef from the CVImageBufferRef
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
Have I overlooked something? Everything in the pipeline seems to work and I can't find any extra settings in the Apple docs. Is there a better/different way to do this?
TL;DR: I can't seem to get more than 1920x1080 from a still image capture session using the BGRA pixel format output settings. Hoping someone can point me in the right direction.
Oops! I set the class var instead of the local one and overwrote the class var later on, so no session preset was used.
Code should be:
// Create the AVCaptureSession
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
Super simple mistake.

Memory warning OpenGL iOS Application

I am working on a rich graphics iOS application. At a given instant, the memory taken by our application is 250 MB. I take each frame from the camera, process it with OpenGL shaders and extract some data. Each time I use the camera to get frames for processing I see an increase in memory up to 280 MB. When I stop capturing the frames, memory comes back to normal at 250 MB. If I repeat the process of starting the camera and exiting 10 times (let's say), I receive a memory warning (though no memory leak is observed). I am not using ARC here. I am maintaining an autorelease pool that includes the entire processing of a frame. I don't see any leaks while profiling. After 10 times, the memory seems to stay at 250 MB. I am not sure of the reason for the memory warning. Any insights? I am happy to provide further information. OpenGL version - ES 2.0, iOS version - 7.0
You have to use ARC; it will automatically release memory that is no longer needed and make your application better optimized.
According to some other questions like this one (Crash running OpenGL on iOS after memory warning) and this one (instruments with iOS: Why does Memory Monitor disagree with Allocations?) the problem may be that you aren't deleting OpenGL resources (VBOs, textures, renderbuffers, whatever) when you're done with them.
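For illustration, a rough sketch of the kind of cleanup those answers are talking about, run when a capture pass finishes or on a memory warning (the handles here are placeholders for whatever your renderer actually owns, not code from the question):
- (void)tearDownGLResources
{
    // Delete whatever GL objects the renderer created; names are placeholders.
    if (_texture)      { glDeleteTextures(1, &_texture);           _texture = 0; }
    if (_vbo)          { glDeleteBuffers(1, &_vbo);                _vbo = 0; }
    if (_renderbuffer) { glDeleteRenderbuffers(1, &_renderbuffer); _renderbuffer = 0; }
    if (_framebuffer)  { glDeleteFramebuffers(1, &_framebuffer);   _framebuffer = 0; }

    // If camera frames go through a CVOpenGLESTextureCache, flush it too.
    if (_textureCache) { CVOpenGLESTextureCacheFlush(_textureCache, 0); }

    glFinish();
}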
Without seeing code, who knows? Are you simply rendering the frame buffer using the presentRenderbuffer method of EAGLContext? Then, what are you doing with the pixelBuffer you passed to CVOpenGLESTextureCacheCreateTextureFromImage? The pixel buffer is the only source of substantial memory in a typical use scenario.
However, if you're swapping the data in the render buffer to another buffer with, say, glReadPixels, then you've introduced one of several memory hogs. If the buffer you swapped to was a CoreGraphics buffer via, say, a CGDataProvider, did you include a data release callback, or did you pass nil as the parameter when you created the provider? Did you glFlush after you swapped buffers?
These are questions for which I could ascertain answers if you provided code; if you think you can tackle this without doing so, but would like to see working code that successfully manages memory in the most arduous use-case scenario there could possibly be:
https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html
For your convenience, I've provided some code below. Place it after any call to the presentRenderbuffer method, commenting out the call if you do not want to render the buffer to the display in the CAEAGLLayer (as I did in the sample below):
// [_context presentRenderbuffer:GL_RENDERBUFFER];

dispatch_async(dispatch_get_main_queue(), ^{
    @autoreleasepool {
        // To capture the output to an OpenGL render buffer...
        NSInteger myDataLength = _backingWidth * _backingHeight * 4;
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
        glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        // To swap the pixel buffer to a CoreGraphics context (as a CGImage)
        CGDataProviderRef provider;
        CGColorSpaceRef colorSpaceRef;
        CGImageRef imageRef;
        CVPixelBufferRef pixelBuffer;
        @try {
            provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
            int bitsPerComponent = 8;
            int bitsPerPixel = 32;
            int bytesPerRow = 4 * _backingWidth;
            colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
            CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
            imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        } @catch (NSException *exception) {
            NSLog(@"Exception: %@", [exception reason]);
        } @finally {
            if (imageRef) {
                // To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
                pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
                // To verify the integrity of the pixel buffer (by converting it back to a CGImage, and then displaying it in a layer)
                imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
            }
            CGDataProviderRelease(provider);
            CGColorSpaceRelease(colorSpaceRef);
            CGImageRelease(imageRef);
        }
    }
});
...
The callback to free the data in the instance of the CGDataProvider class:
static void releaseDataCallback (void *info, const void *data, size_t size) {
    free((void *)data);
}
The CVCGImageUtil class interface and implementation files, respectively:
@import Foundation;
@import CoreMedia;
@import CoreGraphics;
@import QuartzCore;
@import CoreImage;
@import UIKit;

@interface CVCGImageUtil : NSObject

+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;

@end
#import "CVCGImageUtil.h"
@implementation CVCGImageUtil

+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
{
    // CVPixelBuffer to CoreImage
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
    CGPoint origin = [image extent].origin;
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

    // CoreImage to CGImage via CoreImage context
    CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];

    // CGImage to UIImage (OPTIONAL)
    //UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    //return (CGImageRef)uiImage.CGImage;

    return cgImage;
}

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(
        kCFAllocatorDefault, frameSize.width, frameSize.height,
        kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        pxdata, frameSize.width, frameSize.height,
        8, CVPixelBufferGetBytesPerRow(pxbuffer),
        rgbColorSpace,
        (CGBitmapInfo)kCGBitmapByteOrder32Little |
        kCGImageAlphaPremultipliedFirst);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
        CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
    CMSampleBufferRef newSampleBuffer = NULL;
    CMSampleTimingInfo timimgInfo = kCMTimingInfoInvalid;
    CMVideoFormatDescriptionRef videoInfo = NULL;

    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                       pixelBuffer,
                                       true,
                                       NULL,
                                       NULL,
                                       videoInfo,
                                       &timimgInfo,
                                       &newSampleBuffer);
    return newSampleBuffer;
}

@end

Capture Still Image with AVFoundation and Convert To UIImage

I have the pieces for accomplishing both of these tasks; I'm just not sure how to put them together. The first block of code captures an image; however, it is only an image buffer and not something I can convert to a UIImage.
- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];
    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                          completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer != NULL) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *captureImage = [[UIImage alloc] initWithData:imageData];
        }

        if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
            [[self delegate] captureManagerStillImageCaptured:self];
        }
    }];
}
Here is an Apple example of taking an image buffer and converting it to a UIImage. How do I combine these two methods to work together?
-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    if (imageBuffer == NULL) {
        NSLog(@"No buffer");
    }

    // Lock the base address of the pixel buffer
    if ((CVPixelBufferLockBaseAddress(imageBuffer, 0)) == kCVReturnSuccess) {
        NSLog(@"Buffer locked successfully");
    }

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    NSLog(@"bytes per row %zu", bytesPerRow);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    NSLog(@"width %zu", width);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"height %zu", height);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return (image);
}
The first block of code does exactly what you need and is an acceptable way of doing it. What are you trying to do with the second block?
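That said, if the goal is to skip the JPEG round trip and feed the second method instead, a sketch of wiring the two together (this assumes stillImageOutput's outputSettings request kCVPixelFormatType_32BGRA, so the sample buffer carries an uncompressed pixel buffer that getUIImageFromBuffer: can read):
// Configure the output for uncompressed BGRA instead of the default JPEG (done once at setup).
[[self stillImageOutput] setOutputSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];

AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                      completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer != NULL) {
        // Convert the uncompressed buffer with the second method instead of jpegStillImageNSDataRepresentation:.
        UIImage *captureImage = [self getUIImageFromBuffer:imageDataSampleBuffer];
        if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
            [[self delegate] captureManagerStillImageCaptured:self];
        }
    }
}];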
