I have a UIView on my Storyboard with four simple constraints set to fill the entire screen; my application is for landscape use only. This UIView is meant to show the camera's live preview using AVCaptureVideoPreviewLayer.
I have tried several code examples from Stack Overflow and the wider internet that set up an AVCaptureSession and AVCaptureVideoPreviewLayer, but none have worked: the UIView on the storyboard always fails to display anything (a white screen when the view loads).
I wish to create my app's entire interface with the interface builder using constraints, yet reap the functionality of AVCaptureVideoPreviewLayer. Can someone provide a full code example to assist me if they have a possible solution?
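Roughly, the setup I am attempting boils down to something like the sketch below (previewView is an IBOutlet to the constrained UIView, and session/previewLayer are properties on my view controller; this is illustrative rather than my exact failing code):
#import <AVFoundation/AVFoundation.h>
- (void)viewDidLoad {
    [super viewDidLoad];
    self.session = [[AVCaptureSession alloc] init];
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [self.session canAddInput:input]) {
        [self.session addInput:input];
    }
    // The preview layer is added as a sublayer of the storyboard-constrained view.
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.previewView.layer addSublayer:self.previewLayer];
    [self.session startRunning];
}
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // Layers are not resized by Auto Layout, so match the view's bounds here.
    self.previewLayer.frame = self.previewView.bounds;
}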
This is my code; it works for me. The header file AVCameraCaptureData.h:
#import <AVFoundation/AVFoundation.h>
@interface AVCameraCaptureData : NSObject
@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureConnection *captureConnection;
@property (nonatomic, strong) AVCaptureDeviceInput *deviceInput;
@property (nonatomic, strong) AVCaptureStillImageOutput *captureOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
@end
#if defined(__cplusplus)
extern "C"
{
#endif /* defined(__cplusplus) */
AVCaptureDevice* getCaptureDeviceByPosition(AVCaptureDevicePosition position);
AVCaptureStillImageOutput* getCaptureOutput();
AVCaptureConnection* ConnectionWithMediaType (NSString* mediaType, NSArray* fromConnections);
AVCaptureVideoPreviewLayer* getCapturePreviewLayer(AVCaptureSession* captureSession);
AVCameraCaptureData* CreateCaptureData (AVCaptureDevicePosition position, CGRect presentLayerFrame, NSString* sessionPreset);
#if defined(__cplusplus)
}
#endif /* defined(__cplusplus) */
AVCameraCaptureData.m file:
#import "AVCameraCaptureData.h"
#import "AVCameraUtil.h"
@implementation AVCameraCaptureData
AVCaptureDevice* getCaptureDeviceByPosition(AVCaptureDevicePosition position)
{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == position)
return device;
}
return nil;
}
AVCaptureStillImageOutput* getCaptureOutput()
{
AVCaptureStillImageOutput* captureOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary* outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[captureOutput setOutputSettings:outputSettings];
return captureOutput;
}
AVCaptureConnection* ConnectionWithMediaType (NSString* mediaType, NSArray* fromConnections)
{
for (AVCaptureConnection* connection in fromConnections)
{
for (AVCaptureInputPort* port in [connection inputPorts])
{
if ([[port mediaType] isEqual:mediaType])
return connection;
}
}
return nil;
}
AVCaptureVideoPreviewLayer* getCapturePreviewLayer(AVCaptureSession* captureSession)
{
AVCaptureVideoPreviewLayer* previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
//please set frame for yourself
[previewLayer setFrame:[AVCameraUtil screenBoundsFixedToPortraitOrientation]];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
return previewLayer;
}
AVCameraCaptureData* CreateCaptureData (AVCaptureDevicePosition position, CGRect presentLayerFrame, NSString* sessionPreset)
{
AVCaptureSession* captureSession = [[AVCaptureSession alloc] init];
if (sessionPreset == nil)
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
else
[captureSession setSessionPreset:sessionPreset];
AVCaptureDevice *device = getCaptureDeviceByPosition (position);
AVCaptureDeviceInput *videoDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:device error:nil];
if ([captureSession canAddInput:videoDeviceInput])
[captureSession addInput:videoDeviceInput];
AVCaptureStillImageOutput* captureOutput = getCaptureOutput();
if ([captureSession canAddOutput:captureOutput])
[captureSession addOutput:captureOutput];
AVCameraCaptureData *data = [[AVCameraCaptureData alloc] init];
data.captureSession = captureSession;
data.deviceInput = videoDeviceInput;
data.captureOutput = captureOutput;
data.captureConnection = ConnectionWithMediaType(AVMediaTypeVideo, [captureOutput connections]);
data.previewLayer = getCapturePreviewLayer(captureSession);
return data;
}
@end
Then you can create an AVCameraCaptureData instance with this function in your view controller (store the result in the captureData property used below):
CreateCaptureData(AVCaptureDevicePositionBack, self.view.frame,nil);
Start the session running:
[self.captureData.captureSession startRunning];
Add AVCaptureVideoPreviewLayer to view.layer:
[self.view.layer addSublayer:self.captureData.previewLayer];
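One thing to watch (a suggestion on top of the answer, not part of it): the preview layer's frame is set once from the screen bounds inside getCapturePreviewLayer, so if the hosting view is later resized by Auto Layout you may want to re-sync it, for example:
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // Keep the preview layer matched to the constraint-driven view bounds.
    self.captureData.previewLayer.frame = self.view.bounds;
}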
I am trying to speed up the reading of barcodes in my app; the app works fine but is a tad slow at reading barcodes.
How do I improve the speed of reading the barcodes?
Here is the code I have so far.
#import "ScanViewController.h"
#import "Utils.h"
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>
@interface ScanViewController ()<AVCaptureMetadataOutputObjectsDelegate>
@property (nonatomic, readwrite) AVCaptureSession *captureSession;
@property (nonatomic, readwrite) AVCaptureVideoPreviewLayer *videoPreviewLayer;
@property (nonatomic, readwrite) UIView *qrCodeFrameView;
@property (nonatomic, readwrite) UILabel *qrCodeTextView;
@property (nonatomic, readwrite) NSArray *supportedCodeTypes;
@property (nonatomic, readwrite) long long lastScanTime;
@property (nonatomic, readwrite) NSString *lastScanCode;
@property (nonatomic, readwrite) AVAudioPlayer *audioPlayer;
@end
@implementation ScanViewController
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view.
self.captureSession = [AVCaptureSession new];
self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
self.supportedCodeTypes = @[AVMetadataObjectTypeUPCECode,
AVMetadataObjectTypeEAN13Code,
AVMetadataObjectTypeEAN8Code];
// AVMetadataObjectTypeCode39Code,
// AVMetadataObjectTypeCode39Mod43Code,
// AVMetadataObjectTypeCode93Code,
// AVMetadataObjectTypeCode128Code,
// AVMetadataObjectTypeAztecCode,
// AVMetadataObjectTypePDF417Code,
// AVMetadataObjectTypeITF14Code,
// AVMetadataObjectTypeDataMatrixCode,
// AVMetadataObjectTypeInterleaved2of5Code,
// AVMetadataObjectTypeQRCode];
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if(captureDevice == nil) {
NSLog(@"Failed to get the camera device");
return;
}
@try {
// Get an instance of the AVCaptureDeviceInput class using the previous device object.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:nil];
// Set the input device on the capture session.
[self.captureSession addInput:input];
// Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session.
AVCaptureMetadataOutput *captureMetadataOutput = [[AVCaptureMetadataOutput alloc] init];
[self.captureSession addOutput:captureMetadataOutput];
// Set delegate and use the default dispatch queue to execute the call back
[captureMetadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
captureMetadataOutput.metadataObjectTypes = self.supportedCodeTypes;
// captureMetadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.qr]
} @catch (NSException *error) {
// If any error occurs, simply print it out and don't continue any more.
NSLog(@"%@", error);
return;
}
// Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
self.videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
self.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
self.videoPreviewLayer.frame = self.view.layer.bounds;
[self.view.layer addSublayer:self.videoPreviewLayer];
// Start video capture.
[self.captureSession startRunning];
// Move the result view and loading view to the front
[self.view bringSubviewToFront:self.resultView];
[self.view bringSubviewToFront:self.loadingView];
// Initialize QR Code Frame to highlight the QR code
self.qrCodeFrameView = [[UIView alloc] init];
if (self.qrCodeFrameView) {
self.qrCodeFrameView.layer.borderColor = UIColor.greenColor.CGColor;
self.qrCodeFrameView.layer.borderWidth = 2;
[self.view addSubview:self.qrCodeFrameView];
[self.view bringSubviewToFront:self.qrCodeFrameView];
}
self.qrCodeTextView = [[UILabel alloc] init];
if (self.qrCodeTextView) {
[self.qrCodeTextView setTextColor:UIColor.greenColor];
[self.qrCodeTextView setFont:[UIFont systemFontOfSize:20]];
[self.qrCodeFrameView addSubview:self.qrCodeTextView];
}
[self rotateLoadingImage];
[self setResultType:RESULT_TYPE_WORKING codeContent:@"Ready" price:0.00];
[self.loadingView setHidden:YES];
}
-(void)viewWillDisappear:(BOOL)animated {
if (self.audioPlayer != nil) {
[self.audioPlayer stop];
self.audioPlayer = nil;
}
[super viewWillDisappear:animated];
}
/*
#pragma mark - Navigation
// In a storyboard-based application, you will often want to do a little preparation before navigation
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
// Get the new view controller using [segue destinationViewController].
// Pass the selected object to the new view controller.
}
*/
-(void) updatePreviewLayer:(AVCaptureConnection*)layer orientation:(AVCaptureVideoOrientation)orientation {
layer.videoOrientation = orientation;
self.videoPreviewLayer.frame = self.view.bounds;
}
-(void)viewDidLayoutSubviews {
[super viewDidLayoutSubviews];
if(self.videoPreviewLayer.connection != nil) {
UIDevice *currentDevice = [UIDevice currentDevice];
UIDeviceOrientation orientation = [currentDevice orientation];
AVCaptureConnection *previewLayerConnection = self.videoPreviewLayer.connection;
if(previewLayerConnection.isVideoOrientationSupported) {
switch (orientation) {
case UIDeviceOrientationPortrait:
[self updatePreviewLayer:previewLayerConnection orientation:AVCaptureVideoOrientationPortrait];
break;
case UIDeviceOrientationLandscapeRight:
[self updatePreviewLayer:previewLayerConnection orientation:AVCaptureVideoOrientationLandscapeLeft];
break;
case UIDeviceOrientationLandscapeLeft:
[self updatePreviewLayer:previewLayerConnection orientation:AVCaptureVideoOrientationLandscapeRight];
break;
case UIDeviceOrientationPortraitUpsideDown:
[self updatePreviewLayer:previewLayerConnection orientation:AVCaptureVideoOrientationPortraitUpsideDown];
break;
default:
[self updatePreviewLayer:previewLayerConnection orientation:AVCaptureVideoOrientationPortrait];
break;
}
}
}
}
-(void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
// Check if the metadataObjects array is not nil and it contains at least one object.
if (metadataObjects.count == 0) {
self.qrCodeFrameView.frame = CGRectZero;
return;
}
// Get the metadata object.
AVMetadataMachineReadableCodeObject *metadataObj = (AVMetadataMachineReadableCodeObject*)(metadataObjects[0]);
if ([self.supportedCodeTypes containsObject:metadataObj.type]) {
// If the found metadata is equal to the QR code metadata (or barcode) then update the status label's text and set the bounds
AVMetadataObject *barCodeObject = [self.videoPreviewLayer transformedMetadataObjectForMetadataObject:metadataObj];
NSString *code = metadataObj.stringValue;
if (code != nil) {
// check upc a code
if ([self checkUpcACode:metadataObj.type code:code] == NO) {
self.qrCodeTextView.text = @"";
return;
}
int i=0;
for (i=0; i<code.length; i++) {
char ch = [code characterAtIndex:i];
if (ch != '0') break;
}
if (i>0) i--;
code = [code substringFromIndex:i];
self.qrCodeFrameView.frame = barCodeObject.bounds;
[self.qrCodeTextView setText:code];
self.qrCodeTextView.frame = CGRectMake(0, self.qrCodeFrameView.frame.size.height-20, self.qrCodeFrameView.frame.size.width, 20);
NSLog(@"%@", code);
[self handleBarcode:code];
} else {
self.qrCodeTextView.text = @"";
}
}
}
-(BOOL)checkUpcACode:(AVMetadataObjectType)type code:(NSString*)code {
if ([type isEqualToString:AVMetadataObjectTypeEAN13Code]) {
if ([code hasPrefix:@"0"] && [code length] > 0) {
return YES;
}
}
return NO;
}
@end
The solution came from PBK on the Apple Developer Forums:
//change [self handleBarcode:code];
// to
dispatch_async(dispatch_get_main_queue(), ^{
[self handleBarcode:code];
});
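Another change that often helps scan speed (not part of PBK's fix, and only a sketch) is to restrict detection to the part of the preview where the barcode will actually appear, so the metadata output scans a smaller region. rectOfInterest is expressed in the output's own coordinate space, so convert from a view rectangle via the preview layer, after the session has started and the layer has its final frame (this assumes you keep a reference to the captureMetadataOutput created in viewDidLoad, e.g. in a property):
// scanRect is a hypothetical on-screen region; adjust to suit your UI.
CGRect scanRect = CGRectMake(0, 100, self.view.bounds.size.width, 200);
self.captureMetadataOutput.rectOfInterest =
    [self.videoPreviewLayer metadataOutputRectOfInterestForRect:scanRect];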
I was wondering: is it possible to add multiple AVCaptureVideoDataOutput objects to an AVCaptureSession with a single camera device input?
My experiments indicate that adding a second video data output causes canAddOutput: to return NO, but I couldn't find anything in Apple's documentation that says multiple data outputs are disallowed.
You cannot use a single AVCaptureSession for multiple AVCaptureVideoDataOutput objects.
What you can do is create multiple AVCaptureVideoDataOutput objects with multiple AVCaptureSession objects.
You can create two different setups of AVCaptureVideoDataOutput and AVCaptureSession, then use them one after another in the app, and you will be able to achieve the goal.
In my case, I had to capture a front and a back image using the camera at the same time.
I created two different sets of AVCaptureSession and output objects, as given below.
/* Front camera settings */
@property bool isFrontRecording;
@property (strong, nonatomic) AVCaptureDeviceInput *videoInputBack;
@property (strong, nonatomic) AVCaptureStillImageOutput *imageOutputBack;
@property (strong, nonatomic) AVCaptureSession *sessionBack;
/* Back camera settings */
@property bool isBackRecording;
@property (strong, nonatomic) AVCaptureDeviceInput *videoInputFront;
@property (strong, nonatomic) AVCaptureStillImageOutput *imageOutputFront;
@property (strong, nonatomic) AVCaptureSession *sessionFront;
Now, in viewDidLoad, I initially set up the back camera and set flags that track whether each session has started recording, as follows.
- (void)viewDidLoad {
[super viewDidLoad];
[self setupBackAVCapture];
self.isFrontRecording = NO;
self.isBackRecording = NO;
}
- (void)setupBackAVCapture
{
NSError *error = nil;
self.sessionBack = [[AVCaptureSession alloc] init];
self.sessionBack.sessionPreset = AVCaptureSessionPresetPhoto;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
self.videoInputBack = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
[self.sessionBack addInput:self.videoInputBack];
self.imageOutputBack = [[AVCaptureStillImageOutput alloc] init];
[self.sessionBack addOutput:self.imageOutputBack];
}
Now, whenever the user starts capturing a photo, we first capture the back photo using the code below.
- (IBAction)buttonCapture:(id)sender {
[self takeBackPhoto];
}
- (void)takeBackPhoto
{
[self.sessionBack startRunning];
if (!self.isFrontRecording) {
self.isFrontRecording = YES;
AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
AVCaptureConnection *videoConnection = [self.imageOutputBack connectionWithMediaType:AVMediaTypeVideo];
if (videoConnection == nil) {
return;
}
[self.imageOutputBack
captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer == NULL) {
return;
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
[self.imageView setImage:image];
[self.sessionBack stopRunning];
// Set up front camera setting and capture photo.
[self setupFrontAVCapture];
[self takeFrontPhoto];
}];
self.isFrontRecording = NO;
}
}
As soon as the back image has been captured, we set up the session for the front camera using the setupFrontAVCapture method and then capture the front image using the takeFrontPhoto method, as given below.
- (void)setupFrontAVCapture
{
NSError *error = nil;
self.sessionFront = [[AVCaptureSession alloc] init];
self.sessionFront.sessionPreset = AVCaptureSessionPresetPhoto;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
camera = [self cameraWithPosition:AVCaptureDevicePositionFront];
self.videoInputFront = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
[self.sessionFront addInput:self.videoInputFront];
self.imageOutputFront = [[AVCaptureStillImageOutput alloc] init];
[self.sessionFront addOutput:self.imageOutputFront];
}
- (void)takeFrontPhoto
{
[self.sessionFront startRunning];
if (!self.isBackRecording) {
self.isBackRecording = YES;
AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
AVCaptureConnection *videoConnection = [self.imageOutputFront connectionWithMediaType:AVMediaTypeVideo];
if (videoConnection == nil) {
return;
}
[self.imageOutputFront
captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
if (imageDataSampleBuffer == NULL) {
return;
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
[self.imageViewBack setImage:image];
[self.sessionFront stopRunning];
}];
self.isBackRecording = NO;
}
}
This way you can use two different sets of AVCaptureSession and AVCaptureStillImageOutput objects to achieve your goal.
Please let me know if anything is unclear.
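If the underlying goal is only to run several processing paths on the same frames, another possible pattern (a rough sketch, not what the answer above does) is to keep a single AVCaptureVideoDataOutput on one session and fan its sample buffers out from the delegate callback:
// faceDetector and frameRecorder are hypothetical consumer objects with a
// processSampleBuffer: method; each receives the same buffer.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    [self.faceDetector processSampleBuffer:sampleBuffer];
    [self.frameRecorder processSampleBuffer:sampleBuffer];
}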
I'm just starting out in Objective-C, and I'm trying to create a simple app that shows the camera view with a blur effect on it. I got the camera output working with the AVFoundation framework. Now I'm trying to hook up the Core Image framework, but I don't know how: Apple's documentation is confusing to me, and searching for guides and tutorials online has led nowhere. Thanks in advance for the help.
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()
@property (strong, nonatomic) CIContext *context;
@end
@implementation ViewController
AVCaptureSession *session;
AVCaptureStillImageOutput *stillImageOutput;
-(CIContext *)context
{
if(!_context)
{
_context = [CIContext contextWithOptions:nil];
}
return _context;
}
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}
-(void)viewWillAppear:(BOOL)animated{
session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput]) {
[session addInput:deviceInput];
}
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = self.imageView.frame;
[previewLayer setFrame:frame];
[previewLayer.connection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
[rootLayer insertSublayer:previewLayer atIndex:0];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
}
@end
Here's something to get you started. This is an updated version of the code from the following link.
https://gist.github.com/eladb/9662102
The trick is to use the AVCaptureVideoDataOutputSampleBufferDelegate.
With this delegate, you can use imageWithCVPixelBuffer to construct a CIImage from your camera buffer.
Right now though I'm trying to figure out how to reduce lag. I'll update asap.
Update: Latency is now minimal, and on some effects unnoticeable. Unfortunately, it seems that blur is one of the slowest. You may want to look into vImage.
#import "ViewController.h"
#import <CoreImage/CoreImage.h>
#import <AVFoundation/AVFoundation.h>
@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate> {
}
@property (strong, nonatomic) CIContext *coreImageContext;
@property (strong, nonatomic) AVCaptureSession *cameraSession;
@property (strong, nonatomic) AVCaptureVideoDataOutput *videoOutput;
@property (strong, nonatomic) UIView *blurCameraView;
@property (strong, nonatomic) CIFilter *filter;
@property BOOL cameraOpen;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.blurCameraView = [[UIView alloc]initWithFrame:[[UIScreen mainScreen] bounds]];
[self.view addSubview:self.blurCameraView];
//setup filter
self.filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[self.filter setDefaults];
[self.filter setValue:@(3.0f) forKey:@"inputRadius"];
[self setupCamera];
[self openCamera];
// Do any additional setup after loading the view, typically from a nib.
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)setupCamera
{
self.coreImageContext = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @(YES)}];
// session
self.cameraSession = [[AVCaptureSession alloc] init];
[self.cameraSession setSessionPreset:AVCaptureSessionPresetLow];
[self.cameraSession commitConfiguration];
// input
AVCaptureDevice *shootingCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *shootingDevice = [AVCaptureDeviceInput deviceInputWithDevice:shootingCamera error:NULL];
if ([self.cameraSession canAddInput:shootingDevice]) {
[self.cameraSession addInput:shootingDevice];
}
// video output
self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoOutput.alwaysDiscardsLateVideoFrames = YES;
[self.videoOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)];
if ([self.cameraSession canAddOutput:self.videoOutput]) {
[self.cameraSession addOutput:self.videoOutput];
}
if (self.videoOutput.connections.count > 0) {
AVCaptureConnection *connection = self.videoOutput.connections[0];
connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
self.cameraOpen = NO;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// turn buffer into an image we can manipulate
CIImage *result = [CIImage imageWithCVPixelBuffer:imageBuffer];
// filter
[self.filter setValue:result forKey:@"inputImage"];
// render image
CGImageRef blurredImage = [self.coreImageContext createCGImage:self.filter.outputImage fromRect:result.extent];
dispatch_async(dispatch_get_main_queue(), ^{
self.blurCameraView.layer.contents = (__bridge id)blurredImage;
CGImageRelease(blurredImage);
});
}
- (void)openCamera {
if (self.cameraOpen) {
return;
}
self.blurCameraView.alpha = 0.0f;
[self.cameraSession startRunning];
[self.view layoutIfNeeded];
[UIView animateWithDuration:3.0f animations:^{
self.blurCameraView.alpha = 1.0f;
}];
self.cameraOpen = YES;
}
@end
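One detail worth experimenting with (my note, not something the original gist claims): the CIContext above is created with kCIContextUseSoftwareRenderer set to YES, which forces CPU rendering. CIGaussianBlur is usually much faster on a GPU-backed context, so letting Core Image pick its default renderer may reduce the lag further:
// GPU-backed where available; simply drop the software-renderer option.
self.coreImageContext = [CIContext contextWithOptions:nil];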
I'm struggling with delegate creation and usage. Could someone help me understand what I'm doing wrong? According to all the examples I have read this is correct but my data is not being returned.
ViewController (Parent)
.h
#import <UIKit/UIKit.h>
#import "BarcodeViewController.h"
@interface ViewController: UIViewController <BarcodeViewDelegate> {
IBOutlet UILabel *bcode;
}
@end
.m
-(void)setbarcode:(NSString*)barcode
{
NSLog(#" data %#", barcode);
bcode.text = barcode;
}
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
// Make sure your segue name in storyboard is the same as this line
if ([[segue identifier] isEqualToString:@"scanbarcode"])
{
BarcodeViewController *cv = [segue destinationViewController];
cv.delegate = self;
}
}
The Barcode view controller (Child view)
.h
@protocol BarcodeViewDelegate <NSObject>
-(void)setbarcode:(NSString*)barcode;
@end
@interface BarcodeViewController : UIViewController
{
id<BarcodeViewDelegate> delegate;
}
@property(nonatomic,assign)id delegate;
@end
.m
#import "BarcodeViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface BarcodeViewController () <AVCaptureMetadataOutputObjectsDelegate>
{
AVCaptureSession *_session;
AVCaptureDevice *_device;
AVCaptureDeviceInput *_input;
AVCaptureMetadataOutput *_output;
AVCaptureVideoPreviewLayer *_prevLayer;
UIView *_highlightView;
UIImageView *_imageOverlay;
}
@end
@implementation BarcodeViewController
- (void)viewDidLoad
{
[super viewDidLoad];
/*
* setup scanner view
*/
_highlightView = [[UIView alloc] init];
_highlightView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
_highlightView.layer.borderColor = [UIColor greenColor].CGColor;
_highlightView.layer.borderWidth = 3;
[self.view addSubview:_highlightView];
/*
* setup a overlay guide
*/
_imageOverlay = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"camera_overlay"]];
[self.view addSubview:_imageOverlay];
_session = [[AVCaptureSession alloc] init];
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
_input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
if (_input) {
[_session addInput:_input];
} else {
NSLog(#"Error: %#", error);
}
_output = [[AVCaptureMetadataOutput alloc] init];
[_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:_output];
_output.metadataObjectTypes = [_output availableMetadataObjectTypes];
_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.frame = self.view.bounds;
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:_prevLayer];
[_session startRunning];
[self.view bringSubviewToFront:_highlightView];
[self.view bringSubviewToFront:_imageOverlay];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
CGRect highlightViewRect = CGRectZero;
AVMetadataMachineReadableCodeObject *barCodeObject;
NSString *detectionString = nil;
NSArray *barCodeTypes = @[AVMetadataObjectTypeUPCECode, AVMetadataObjectTypeCode39Code, AVMetadataObjectTypeCode39Mod43Code,
AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode93Code, AVMetadataObjectTypeCode128Code,
AVMetadataObjectTypePDF417Code, AVMetadataObjectTypeQRCode, AVMetadataObjectTypeAztecCode];
/*
* keep looking around while we look for the barcode
*/
for (AVMetadataObject *metadata in metadataObjects) {
for (NSString *type in barCodeTypes) {
if ([metadata.type isEqualToString:type])
{
barCodeObject = (AVMetadataMachineReadableCodeObject *)[_prevLayer transformedMetadataObjectForMetadataObject:(AVMetadataMachineReadableCodeObject *)metadata];
//highlightViewRect = barCodeObject.bounds;
detectionString = [(AVMetadataMachineReadableCodeObject *)metadata stringValue];
break;
}
}
if (detectionString != nil)
{
/*
* Set the detected barcode so we can use it.
* - perform segway to another view
*/
//barcode = detectionString;
//NSLog(@" data %@", detectionString);
[delegate setbarcode: detectionString];
[self.navigationController popViewControllerAnimated:YES];
break;
}
//else
/*
* Just reset the barcode value for now
*/
//_barcode = false;
}
_highlightView.frame = highlightViewRect;
}
@end
Your problem is simple.
You define an instance variable and a property to hold the delegate:
{
id delegate;
}
@property(nonatomic,assign)id delegate;
You set the delegate via the property:
cv.delegate = self;
Then access it via the instance variable:
[delegate setbarcode: detectionString];
The property is backed by a different instance variable, automatically defined as _delegate, which you should not be accessing. You should always access via the property, as self.delegate. When calling your delegate ivar, it will be nil, so nothing is being sent back.
Remove the unnecessary instance variable declaration, type the property correctly (as id<BarcodeViewDelegate> rather than just id) and always access it via the property, and you'll be fine.
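Concretely, the corrected declaration and call would look something like this (a sketch of the fix just described; weak is the usual ownership for delegates under ARC):
// BarcodeViewController.h
@protocol BarcodeViewDelegate <NSObject>
- (void)setbarcode:(NSString *)barcode;
@end
@interface BarcodeViewController : UIViewController
@property (nonatomic, weak) id<BarcodeViewDelegate> delegate;
@end
// BarcodeViewController.m, inside the metadata callback
[self.delegate setbarcode:detectionString];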
I'm trying to build a non-realtime face detection application.
Following this article: http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/ I can load in a jpg and detect faces.
I would like to automatically take a picture every 20 seconds, then display the image in a UIImageView* and then run the existing detect face function on it.
My question is twofold.
Is there an easy way to take a sample picture from the camera and load it into a UIImageView without saving it?
How can I automate this to happen every 30 seconds with no user interaction?
Thanks!
Look at the AVFoundation Programming Guide.
This guide shows you how to use AVFoundation to capture media.
You will need to take device rotation into account, as the camera will display only its raw output until you rotate the output via a CATransform3D, but that is a bit more in depth than you want. You may be able to get away with just knowing how far to rotate from the starting orientation to the final one; the orientation handler in the code below applies that rotation in 90° steps.
Here is my code for my little camera testing utility.
Build a UIView and connect the IBOutlets and IBActions
ViewController.h
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@property (weak, nonatomic) IBOutlet UIView *previewViewContainer;
@property (weak, nonatomic) IBOutlet UIView *playerViewContainer;
- (IBAction)button1Pressed:(id)sender;
- (IBAction)button2Pressed:(id)sender;
- (IBAction)button3Pressed:(id)sender;
- (IBAction)button4Pressed:(id)sender;
- (IBAction)startPressed:(id)sender;
- (IBAction)stopPressed:(id)sender;
- (IBAction)swapInputsPressed:(id)sender;
- (IBAction)recordPressed:(id)sender;
@end
ViewController.m
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()
@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *capturePreviewLayer;
@property (nonatomic, strong) AVCaptureDeviceInput *frontCam;
@property (nonatomic, readonly) BOOL frontCamIsSet;
@property (nonatomic, readonly) BOOL hasFrontCam;
@property (nonatomic, readonly) BOOL isUsingFrontCam;
@property (nonatomic, strong) AVCaptureDeviceInput *backCam;
@property (nonatomic, readonly) BOOL backCamIsSet;
@property (nonatomic, readonly) BOOL hasBackCam;
@property (nonatomic, readonly) BOOL isUsingBackCam;
@property (nonatomic, strong) AVCaptureDeviceInput *mic;
@property (nonatomic, readonly) BOOL micIsSet;
@property (nonatomic, readonly) BOOL hasMic;
@end
CGFloat DegreesToRadians(CGFloat degrees)
{
return degrees * M_PI / 180;
};
CGFloat RadiansToDegrees(CGFloat radians)
{
return radians * 180 / M_PI;
};
@implementation ViewController
#pragma mark - Helper Methods
- (NSArray *) inputDevices{
return [AVCaptureDevice devices];
}
- (NSArray *) videoInputDevices{
return [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
}
- (NSArray *) audioInputDevices{
return [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio];
}
#pragma mark - Properties
@synthesize captureSession = _captureSession;
- (AVCaptureSession *)captureSession{
if (_captureSession == nil){
_captureSession = [[AVCaptureSession alloc] init];
}
return _captureSession;
}
@synthesize capturePreviewLayer = _capturePreviewLayer;
- (AVCaptureVideoPreviewLayer *)capturePreviewLayer{
if (_capturePreviewLayer == nil){
_capturePreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
}
return _capturePreviewLayer;
}
@synthesize frontCam = _frontCam;
- (AVCaptureDeviceInput *)frontCam{
if (_frontCam == nil && !self.frontCamIsSet){
_frontCamIsSet = YES;
NSArray *videoDevices = [self videoInputDevices];
for (AVCaptureDevice *inputDevice in videoDevices) {
if ([inputDevice position] == AVCaptureDevicePositionFront){
NSError *error = nil;
_frontCam = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if (!_frontCam){
NSLog(#"Error Attaching Front Cam %#",error);
}
}
}
}
return _frontCam;
}
- (BOOL)hasFrontCam{
return self.frontCam != nil;
}
@synthesize isUsingFrontCam = _isUsingFrontCam;
@synthesize backCam = _backCam;
- (AVCaptureDeviceInput *)backCam{
if (_backCam == nil && !self.backCamIsSet){
_backCamIsSet = YES;
NSArray *videoDevices = [self videoInputDevices];
for (AVCaptureDevice *inputDevice in videoDevices) {
if ([inputDevice position] == AVCaptureDevicePositionBack){
NSError *error = nil;
_backCam = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if (!_backCam){
NSLog(#"Error Attaching Back Cam %#",error);
}
}
}
}
return _backCam;
}
- (BOOL)hasBackCam{
return self.backCam != nil;
}
@synthesize mic = _mic;
- (AVCaptureDeviceInput *)mic{
if (_mic == nil && !self.micIsSet){
_micIsSet = YES;
NSArray *audioDevices = [self audioInputDevices];
for (AVCaptureDevice *inputDevice in audioDevices) {
NSError *error = nil;
_mic = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if (!_mic){
NSLog(#"Error Attaching Mic %#",error);
}
}
}
return _mic;
}
- (BOOL)hasMic{
return self.mic != nil;
}
- (BOOL)isUsingBackCam{
return !self.isUsingFrontCam;
}
- (IBAction)button1Pressed:(id)sender {
if (NO && self.hasFrontCam && [self.captureSession canAddInput:self.frontCam]){
_isUsingFrontCam = YES;
[self.captureSession addInput:self.frontCam];
}
else if(self.hasBackCam && [self.captureSession canAddInput:self.backCam]){
_isUsingFrontCam = NO;
[self.captureSession addInput:self.backCam];
}
if (self.hasMic && [self.captureSession canAddInput:self.mic]) {
[self.captureSession addInput:self.mic];
}
}
- (IBAction)button2Pressed:(id)sender {
self.capturePreviewLayer.frame = self.previewViewContainer.layer.bounds;
[self.previewViewContainer.layer addSublayer:self.capturePreviewLayer];
}
- (void) orientationChanged:(NSNotification*) notification{
NSLog(#"Notification Of Orientation Change\n\n%#",notification.userInfo);
if (_capturePreviewLayer != nil){
CGFloat rotate90 = DegreesToRadians(90);
CGFloat rotateFinish = 0;
UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
switch (orientation) {
case UIDeviceOrientationLandscapeLeft:
rotateFinish += rotate90;
case UIDeviceOrientationPortraitUpsideDown:
rotateFinish += rotate90;
case UIDeviceOrientationLandscapeRight:
rotateFinish += rotate90;
case UIDeviceOrientationPortrait:
default:
break;
}
_capturePreviewLayer.transform = CATransform3DMakeRotation(rotateFinish, 0.0, 0.0, 1.0);
}
}
- (IBAction)button3Pressed:(id)sender {
}
- (IBAction)button4Pressed:(id)sender {
}
- (IBAction)startPressed:(id)sender {
[self.captureSession startRunning];
}
- (IBAction)stopPressed:(id)sender {
[self.captureSession stopRunning];
}
- (IBAction)swapInputsPressed:(id)sender {
if (!self.isUsingFrontCam){
_isUsingFrontCam = YES;
[self.captureSession removeInput:self.backCam];
[self.captureSession addInput:self.frontCam];
}
else {
_isUsingFrontCam = NO;
[self.captureSession removeInput:self.frontCam];
[self.captureSession addInput:self.backCam];
}
}
- (IBAction)recordPressed:(id)sender {
}
- (NSString *) applicationDocumentsDirectory{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
return basePath;
}
- (void)viewDidLoad{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(orientationChanged:)
name:UIDeviceOrientationDidChangeNotification
object:nil];
}
- (void) dealloc{
[[UIDevice currentDevice] endGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] removeObserver:self
name:UIDeviceOrientationDidChangeNotification
object:nil];
}
- (void)didReceiveMemoryWarning{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end
Fortunately for you, I just built this test app for grabbing photos.
Oh, before I forget: rendering a CALayer into an image is as simple as
+ (UIImage *) captureImageOfView:(UIView *)srcView{
UIGraphicsBeginImageContext(srcView.bounds.size);
[srcView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *anImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return anImage;
}
However, I recommend you look into the AVFoundation Programming Guide to see how they actually capture it. This was just my own demo app and, as I said, it's not complete.
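To cover the timed-capture part of the question, here is a minimal sketch. It assumes the session is running and an AVCaptureStillImageOutput has already been added to it (as in some of the earlier snippets in this document); captureTimer, stillImageOutput, and imageView are placeholder property names, and the existing face-detection routine would be called where indicated:
// e.g. in viewDidLoad: schedule a capture every 20 seconds
self.captureTimer = [NSTimer scheduledTimerWithTimeInterval:20.0
                                                     target:self
                                                   selector:@selector(captureStill)
                                                   userInfo:nil
                                                    repeats:YES];
- (void)captureStill {
    AVCaptureConnection *connection =
        [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
        completionHandler:^(CMSampleBufferRef sampleBuffer, NSError *error) {
            if (sampleBuffer == NULL) {
                return;
            }
            NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:jpegData];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.imageView.image = image;   // display without saving to disk
                // run the existing face-detection code on image here
            });
        }];
}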