Capturing video with AVCaptureSession, no visible output with EAGLContext - iOS

I'm capturing live video with the back camera on the iPhone with AVCaptureSession, applying some filters with CoreImage and then trying to output the resulting video with OpenGL ES. Most of the code is from an example from the WWDC 2012 session 'Core Image Techniques'.
Displaying the output of the filter chain using [UIImage imageWithCIImage:...] or by creating a CGImageRef for every frame works fine. However, when trying to display with OpenGL ES all I get is a black screen.
In the session they use a custom view class to display the output, but the code for that class isn't available. My view controller extends GLKViewController, and its view's class is set to GLKView.
I've searched for and downloaded all the GLKit tutorials and examples I can find, but nothing is helping. In particular, I can't get any video output when I try to run the example from here either. Can anyone point me in the right direction?
#import "VideoViewController.h"
#interface VideoViewController ()
{
AVCaptureSession *_session;
EAGLContext *_eaglContext;
CIContext *_ciContext;
CIFilter *_sepia;
CIFilter *_bumpDistortion;
}
- (void)setupCamera;
- (void)setupFilters;
#end
@implementation VideoViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    GLKView *view = (GLKView *)self.view;

    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    [EAGLContext setCurrentContext:_eaglContext];
    view.context = _eaglContext;

    // Configure renderbuffers created by the view
    view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    view.drawableStencilFormat = GLKViewDrawableStencilFormat8;

    [self setupCamera];
    [self setupFilters];
}
- (void)setupCamera {
    _session = [AVCaptureSession new];
    [_session beginConfiguration];
    [_session setSessionPreset:AVCaptureSessionPreset640x480];

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    [_session addInput:input];

    AVCaptureVideoDataOutput *dataOutput = [AVCaptureVideoDataOutput new];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES];

    NSDictionary *options;
    options = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] };
    [dataOutput setVideoSettings:options];

    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    [_session addOutput:dataOutput];
    [_session commitConfiguration];
}
#pragma mark Setup Filters
- (void)setupFilters {
    _sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [_sepia setValue:@0.7 forKey:@"inputIntensity"];

    _bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];
    [_bumpDistortion setValue:[CIVector vectorWithX:240 Y:320] forKey:@"inputCenter"];
    [_bumpDistortion setValue:[NSNumber numberWithFloat:200] forKey:@"inputRadius"];
    [_bumpDistortion setValue:[NSNumber numberWithFloat:3.0] forKey:@"inputScale"];
}
#pragma mark Main Loop
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Grab the pixel buffer
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

    // null colorspace to avoid colormatching
    NSDictionary *options = @{ (id)kCIImageColorSpace : (id)kCFNull };
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer options:options];
    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(-M_PI/2.0)];
    CGPoint origin = [image extent].origin;
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

    // Pass it through the filter chain
    [_sepia setValue:image forKey:@"inputImage"];
    [_bumpDistortion setValue:_sepia.outputImage forKey:@"inputImage"];

    // Grab the final output image
    image = _bumpDistortion.outputImage;

    // draw to GLES context
    [_ciContext drawImage:image inRect:CGRectMake(0, 0, 480, 640) fromRect:[image extent]];

    // and present to screen
    [_eaglContext presentRenderbuffer:GL_RENDERBUFFER];

    NSLog(@"frame hatched");

    [_sepia setValue:nil forKey:@"inputImage"];
}
- (void)loadView {
    [super loadView];

    // Initialize the CIContext with a null working space
    NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
    _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
}

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];

    [_session startRunning];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end

Wow, actually figured it out myself. This line of work may suit me after all ;)
First, for whatever reason, this code only works with OpenGL ES 2, not 3. I have yet to figure out why.
Second, I was setting up the CIContext in the loadView method, which runs before viewDidLoad and therefore used a not-yet-initialized EAGLContext.
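For reference, a minimal sketch of the corrected setup: the EAGLContext is created with ES 2, the CIContext is created right after it in viewDidLoad, and the loadView override is removed; everything else in the class above stays the same.
- (void)viewDidLoad
{
    [super viewDidLoad];

    GLKView *view = (GLKView *)self.view;

    // OpenGL ES 2, not 3
    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:_eaglContext];
    view.context = _eaglContext;

    view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    view.drawableStencilFormat = GLKViewDrawableStencilFormat8;

    // Create the Core Image context here, after the EAGLContext exists
    NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
    _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];

    [self setupCamera];
    [self setupFilters];
}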

Related

iOS Objective-C screenshot: sublayer not visible

I'm building an app where I want to take a snapshot from the camera and show it in a UIImageView. I'm able to take the snapshot, but the AVCaptureVideoPreviewLayer is not visible in the screenshot. Does anyone know how to do that?
Here is my code:
@implementation ViewController

CGRect imgRect;
AVCaptureVideoPreviewLayer *previewLayer;
AVCaptureVideoDataOutput *output;

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    // Capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    // Add device
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Input
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    if (!input) {
        NSLog(@"No Input");
    }
    [session addInput:input];

    // Output
    output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // Preview
    previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    CGFloat x = self.view.bounds.size.width * 0.5 - 128;
    imgRect = CGRectMake(x, 64, 256, 256);
    previewLayer.frame = imgRect;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:previewLayer];

    // Start capture session
    [session startRunning];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)TakeSnapshot:(id)sender {
    self.imgResult.image = self.pb_takeSnapshot;
}

- (UIImage *)pb_takeSnapshot {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
A bit of help would be very much appreciated. Thank you in advance.
You should use AVCaptureStillImageOutput to get an image from the camera connection. Here is how you could do it:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
// outputSettings accepts either a codec or a pixel format, not both;
// JPEG is what jpegStillImageNSDataRepresentation: expects below.
stillImageOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };

// Add the output to your capture session, then grab its video connection.
[session addOutput:stillImageOutput];
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];

[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [UIImage imageWithData:imageData];
}];
First check whether an image is returned or not. If it is, then:
- (IBAction)TakeSnapshot:(id)sender {
    self.imgResult.image = self.pb_takeSnapshot;
    [self.view bringSubviewToFront:self.imgResult];
}
Hope it helps you.

Applying CIFilter on AVFoundation camera feed fails

I am trying to apply a CIFilter to a live camera feed (and be able to capture a filtered still image).
I have seen some code on Stack Overflow pertaining to the issue, but I haven't been able to get it to work.
My issue is that in the captureOutput method the filter seems to be applied correctly (I put a breakpoint in there and Quick Looked it), but I don't see it in my UIView (I see the original feed, without the filter).
Also I am not sure which output I should add to the session:
[self.session addOutput: self.stillOutput]; //AVCaptureStillImageOutput
[self.session addOutput: self.videoDataOut]; //AVCaptureVideoDataOutput
And which of those should I iterate through when looking for a connection (in findVideoConnection)?
I am totally confused.
Here's some code:
viewDidLoad
-(void)viewDidLoad {
    [super viewDidLoad];
    self.shutterButton.userInteractionEnabled = YES;
    self.context = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @(YES) }];
    self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [self.filter setValue:@15 forKey:kCIInputRadiusKey];
}
prepare session
-(void)prepareSessionWithDevicePosition:(AVCaptureDevicePosition)position {
    AVCaptureDevice *device = [self videoDeviceWithPosition:position];
    self.currentPosition = position;
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetPhoto;

    NSError *error = nil;
    self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if ([self.session canAddInput:self.deviceInput]) {
        [self.session addInput:self.deviceInput];
    }

    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;

    self.videoDataOut = [AVCaptureVideoDataOutput new];
    [self.videoDataOut setSampleBufferDelegate:self queue:dispatch_queue_create("bufferQueue", DISPATCH_QUEUE_SERIAL)];
    self.videoDataOut.alwaysDiscardsLateVideoFrames = YES;

    CALayer *rootLayer = [[self view] layer];
    rootLayer.masksToBounds = YES;
    CGRect frame = self.previewView.frame;
    previewLayer.frame = frame;
    [rootLayer insertSublayer:previewLayer atIndex:1];

    self.stillOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    self.stillOutput.outputSettings = outputSettings;
    [self.session addOutput:self.stillOutput];
    //tried [self.session addOutput: self.videoDataOut];
    //and didn't work (filtered image didn't show, and also couldn't take pictures)

    [self findVideoConnection];
}
find video connection
-(void)findVideoConnection {
    for (AVCaptureConnection *connection in self.stillOutput.connections) {
        //also tried self.videoDataOut.connections
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqualToString:AVMediaTypeVideo]) {
                self.videoConnection = connection;
                break;
            }
        }
        if (self.videoConnection != nil) {
            break;
        }
    }
}
capture output, apply filter and put it in the CALayer
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // turn buffer into an image we can manipulate
    CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];

    // filter
    [self.filter setValue:result forKey:@"inputImage"];

    // render image
    CGImageRef blurredImage = [self.context createCGImage:self.filter.outputImage fromRect:result.extent];
    UIImage *img = [UIImage imageWithCGImage:blurredImage];
    //Did this to check whether the image was actually filtered.
    //And surprisingly it was.

    dispatch_async(dispatch_get_main_queue(), ^{
        //The image present in my UIView is for some reason not blurred.
        self.previewView.layer.contents = (__bridge id)blurredImage;
        CGImageRelease(blurredImage);
    });
}

How to output a CIFilter to a Camera view?

I'm just starting out in Objective-C and I'm trying to create a simple app that shows the camera view with a blur effect on it. I got the camera output working with the AVFoundation framework. Now I'm trying to hook up the Core Image framework, but I don't know how to: Apple's documentation is confusing for me, and searching for guides and tutorials online turns up no results. Thanks in advance for the help.
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
#interface ViewController ()
#property (strong ,nonatomic) CIContext *context;
#end
#implementation ViewController
AVCaptureSession *session;
AVCaptureStillImageOutput *stillImageOutput;
-(CIContext *)context
{
if(!_context)
{
_context = [CIContext contextWithOptions:nil];
}
return _context;
}
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}
-(void)viewWillAppear:(BOOL)animated{
session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput]) {
[session addInput:deviceInput];
}
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = self.imageView.frame;
[previewLayer setFrame:frame];
[previewLayer.connection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
[rootLayer insertSublayer:previewLayer atIndex:0];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
}
#end
Here's something to get you started. This is an updated version of the code from the following link.
https://gist.github.com/eladb/9662102
The trick is to use the AVCaptureVideoDataOutputSampleBufferDelegate.
With this delegate, you can use imageWithCVPixelBuffer to construct a CIImage from your camera buffer.
Right now though I'm trying to figure out how to reduce lag. I'll update asap.
Update: Latency is now minimal, and on some effects unnoticeable. Unfortunately, it seems that blur is one of the slowest. You may want to look into vImage.
#import "ViewController.h"
#import <CoreImage/CoreImage.h>
#import <AVFoundation/AVFoundation.h>
#interface ViewController () {
}
#property (strong, nonatomic) CIContext *coreImageContext;
#property (strong, nonatomic) AVCaptureSession *cameraSession;
#property (strong, nonatomic) AVCaptureVideoDataOutput *videoOutput;
#property (strong, nonatomic) UIView *blurCameraView;
#property (strong, nonatomic) CIFilter *filter;
#property BOOL cameraOpen;
#end
#implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.blurCameraView = [[UIView alloc]initWithFrame:[[UIScreen mainScreen] bounds]];
[self.view addSubview:self.blurCameraView];
//setup filter
self.filter = [CIFilter filterWithName:#"CIGaussianBlur"];
[self.filter setDefaults];
[self.filter setValue:#(3.0f) forKey:#"inputRadius"];
[self setupCamera];
[self openCamera];
// Do any additional setup after loading the view, typically from a nib.
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)setupCamera
{
self.coreImageContext = [CIContext contextWithOptions:#{kCIContextUseSoftwareRenderer : #(YES)}];
// session
self.cameraSession = [[AVCaptureSession alloc] init];
[self.cameraSession setSessionPreset:AVCaptureSessionPresetLow];
[self.cameraSession commitConfiguration];
// input
AVCaptureDevice *shootingCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *shootingDevice = [AVCaptureDeviceInput deviceInputWithDevice:shootingCamera error:NULL];
if ([self.cameraSession canAddInput:shootingDevice]) {
[self.cameraSession addInput:shootingDevice];
}
// video output
self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoOutput.alwaysDiscardsLateVideoFrames = YES;
[self.videoOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)];
if ([self.cameraSession canAddOutput:self.videoOutput]) {
[self.cameraSession addOutput:self.videoOutput];
}
if (self.videoOutput.connections.count > 0) {
AVCaptureConnection *connection = self.videoOutput.connections[0];
connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
self.cameraOpen = NO;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// turn buffer into an image we can manipulate
CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];
// filter
[self.filter setValue:result forKey:#"inputImage"];
// render image
CGImageRef blurredImage = [self.coreImageContext createCGImage:self.filter.outputImage fromRect:result.extent];
dispatch_async(dispatch_get_main_queue(), ^{
self.blurCameraView.layer.contents = (__bridge id)blurredImage;
CGImageRelease(blurredImage);
});
}
- (void)openCamera {
if (self.cameraOpen) {
return;
}
self.blurCameraView.alpha = 0.0f;
[self.cameraSession startRunning];
[self.view layoutIfNeeded];
[UIView animateWithDuration:3.0f animations:^{
self.blurCameraView.alpha = 1.0f;
}];
self.cameraOpen = YES;
}

How to capture frame-by-frame images from iPhone video recording in real time

I am trying to measure the saturation of a selected color in real time.
I am following this guide from Apple. I updated the code to work with ARC, and of course made my view controller an AVCaptureVideoDataOutputSampleBufferDelegate, but I don't know how to actually start capturing the data, as in starting up the camera to get some actual input.
Here is my code:
#import "ViewController.h"
#interface ViewController ()
#property (nonatomic, strong) AVCaptureSession *session;
#property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
#end
#implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib
[self setupCaptureSession];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
session.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[session addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Start the session running to start the flow of data
[self startCapturingWithSession:session];
// Assign session to an ivar.
[self setSession:session];
}
- (void)startCapturingWithSession: (AVCaptureSession *) captureSession
{
//----- DISPLAY THE PREVIEW LAYER -----
//Display it full screen under out view controller existing controls
NSLog(#"Display the preview layer");
CGRect layerRect = [[[self view] layer] bounds];
[self.previewLayer setBounds:layerRect];
[self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
//[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
//We use this instead so it goes on a layer behind our UI controls (avoids us having to manually bring each control to the front):
UIView *CameraView = [[UIView alloc] init];
[[self view] addSubview:CameraView];
[self.view sendSubviewToBack:CameraView];
[[CameraView layer] addSublayer:self.previewLayer];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession startRunning];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
#end
This did it for me; it was all about setting up a video preview:
#import "ViewController.h"
#interface ViewController ()
#property (nonatomic, strong) AVCaptureSession *session;
#property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
#end
#implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib
[self setupCaptureSession];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
session.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[session addInput:input];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Start the session running to start the flow of data
[self startCapturingWithSession:session];
// Assign session to an ivar.
[self setSession:session];
}
- (void)startCapturingWithSession: (AVCaptureSession *) captureSession
{
NSLog(#"Adding video preview layer");
[self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//----- DISPLAY THE PREVIEW LAYER -----
//Display it full screen under out view controller existing controls
NSLog(#"Display the preview layer");
CGRect layerRect = [[[self view] layer] bounds];
[self.previewLayer setBounds:layerRect];
[self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
//[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
//We use this instead so it goes on a layer behind our UI controls (avoids us having to manually bring each control to the front):
UIView *CameraView = [[UIView alloc] init];
[[self view] addSubview:CameraView];
[self.view sendSubviewToBack:CameraView];
[[CameraView layer] addSublayer:self.previewLayer];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession startRunning];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
[connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
#end

CIContext drawImage causes EXC_BAD_ACCESS - iOS 6

I'm trying to apply a simple Core Image filter to the live camera input. I think my code is OK, but using the method drawImage:inRect:fromRect in the captureOutput method causes either an EXC_BAD_ACCESS or a [__NSCFNumber drawImage:inRect:fromRect:]: unrecognized selector crash, which makes me think that my context has been deallocated by the time I call drawImage on it. This doesn't make sense to me, since my CIContext is a class member.
The problem doesn't seem to come from OpenGL, since I tried with a simple context (not created from an EAGLContext) and got the same issue.
I'm testing on an iPhone 5 with iOS 6, since the camera doesn't work in the simulator.
Could you help me with that? Thank you very much for your time.
Here is my .h file:
// CameraController.h

#import <UIKit/UIKit.h>
#import <OpenGLES/EAGL.h>
#import <AVFoundation/AVFoundation.h>
#import <GLKit/GLKit.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreImage/CoreImage.h>
#import <ImageIO/ImageIO.h>

@interface CameraController : GLKViewController <AVCaptureVideoDataOutputSampleBufferDelegate> {
    AVCaptureSession *avCaptureSession;
    CIContext *coreImageContext;
    CIContext *ciTestContext;
    GLuint _renderBuffer;
    EAGLContext *glContext;
}

@end
And here is my .m file:
// CameraController.m

#import "CameraController.h"

@interface CameraController ()

@end

@implementation CameraController

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
    }
    return self;
}

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Initialize Open GL ES2 Context
    glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!glContext) {
        NSLog(@"Failed to create ES context");
    }
    [EAGLContext setCurrentContext:nil];

    // Gets the GL View and sets the depth format to 24 bits, and the context of the view to be the Open GL context created above
    GLKView *view = (GLKView *)self.view;
    view.context = glContext;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    // Creates CI Context from EAGLContext
    NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
    [options setObject:[NSNull null] forKey:kCIContextWorkingColorSpace];
    coreImageContext = [CIContext contextWithEAGLContext:glContext options:options];

    glGenRenderbuffers(1, &_renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);

    // Initialize Video Capture Device
    NSError *error;
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

    // Initialize Video Output object and set output settings
    AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES];
    [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                             forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    // Delegates the SampleBuffer to the current object which implements the AVCaptureVideoDataOutputSampleBufferDelegate interface via the captureOutput method
    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // Initialize the capture session, add input, output, start running
    avCaptureSession = [[AVCaptureSession alloc] init];
    [avCaptureSession beginConfiguration];
    [avCaptureSession setSessionPreset:AVCaptureSessionPreset1280x720];
    [avCaptureSession addInput:input];
    [avCaptureSession addOutput:dataOutput];
    [avCaptureSession commitConfiguration];
    [avCaptureSession startRunning];
}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Creates a CIImage from the sample buffer of the camera frame
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Creates the relevant filter
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];

    // Creates a reference to the output of the filter
    CIImage *result = [filter valueForKey:kCIOutputImageKey];

    // Draw to the context
    [coreImageContext drawImage:result inRect:[result extent] fromRect:[result extent]]; // 5
    [glContext presentRenderbuffer:GL_RENDERBUFFER];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end
In your viewDidLoad method, you have:
coreImageContext = [CIContext contextWithEAGLContext:glContext options:options];
coreImageContext needs to be retained if you want to use it in the captureOutput method.
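For example, under ARC one way to keep the context alive is to hold it in a strong property instead of the plain ivar from the header, and assign it in viewDidLoad. A minimal sketch of the idea (assuming the ivar declaration in CameraController.h is removed in favor of the property):
// In the class extension in CameraController.m, declare a strong property:
@interface CameraController ()
@property (strong, nonatomic) CIContext *coreImageContext;
@end

// In viewDidLoad, after creating glContext and options:
self.coreImageContext = [CIContext contextWithEAGLContext:glContext options:options];

// In captureOutput:didOutputSampleBuffer:fromConnection:, use the property:
[self.coreImageContext drawImage:result inRect:[result extent] fromRect:[result extent]];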
