How to show the same camera video in two views - iOS

I am trying to show the same camera video in two different views; however, I only get the video in one view. Could you help? The code is below:
- (void)showCameraPreview {
    self.camerPreviewCaptureSession = [[AVCaptureSession alloc] init];
    self.camerPreviewCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;

    // Single camera input shared by both preview layers.
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput1 = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    [self.camerPreviewCaptureSession addInput:videoInput1];

    // First preview layer.
    AVCaptureVideoPreviewLayer *newCaptureVideoViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCaptureVideoViewLayer.frame = self.viewPreview.bounds;
    newCaptureVideoViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    // Second preview layer on the same session.
    AVCaptureVideoPreviewLayer *newCameraViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCameraViewLayer.frame = self.viewPreview1.bounds;
    newCameraViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    [self.viewPreview1.layer addSublayer:newCameraViewLayer];
    [self.viewPreview.layer addSublayer:newCaptureVideoViewLayer];
    [self.camerPreviewCaptureSession startRunning];
}
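
A workaround sometimes suggested for this (not from the question itself) is to skip the second preview layer and render frames manually through an AVCaptureVideoDataOutput, since the same CGImage can back any number of layers. A rough sketch, assuming a CIContext property named ciContext and the viewPreview/viewPreview1 outlets above:

// Sketch: add a data output to the same session (assumption, not the asker's code).
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)];
if ([self.camerPreviewCaptureSession canAddOutput:dataOutput]) {
    [self.camerPreviewCaptureSession addOutput:dataOutput];
}

// AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGImageRef cgImage = [self.ciContext createCGImage:frame fromRect:frame.extent];
    dispatch_async(dispatch_get_main_queue(), ^{
        // The same CGImage can be the contents of both layers at once.
        self.viewPreview.layer.contents = (__bridge id)cgImage;
        self.viewPreview1.layer.contents = (__bridge id)cgImage;
        CGImageRelease(cgImage); // the layers retain the contents
    });
}

Note this sketch ignores rotation/orientation handling, which a real implementation would need.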

Related

How to correctly start a camera session using AVCaptureSession/AVCapture

I want to make an iOS app in Objective-C. Right now I'm stuck on attaching the preview layer to the AVCapture preview output. Could someone please tell me how to successfully start an image capture session using the AVCapture camera session in iOS Objective-C? Any help is much appreciated. Thank you.
Here is an answer using AVCaptureSession:
- (void)capture
{
    NSError *error = nil;

    // Capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    // Find the back camera
    AVCaptureDevice *inputDevice = nil;
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in devices)
    {
        if ([camera position] == AVCaptureDevicePositionBack)
        {
            inputDevice = camera;
            break;
        }
    }

    // Wrap the device in an AVCaptureDeviceInput before adding it;
    // addInput: takes an input object, not the device itself.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
    if (input && [session canAddInput:input])
    {
        [session addInput:input];
    }

    // Output
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // Preview layer
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    previewLayer.frame = viewForCamera.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [viewForCamera.layer addSublayer:previewLayer];

    // Start capture session
    [session startRunning];
}
Try this code to get the camera ID:
NSString *cameraID = nil;
NSArray *captureDeviceType = @[AVCaptureDeviceTypeBuiltInWideAngleCamera];
AVCaptureDeviceDiscoverySession *discoverySession =
    [AVCaptureDeviceDiscoverySession
        discoverySessionWithDeviceTypes:captureDeviceType
                              mediaType:AVMediaTypeVideo
                               position:AVCaptureDevicePositionUnspecified];
cameraID = [discoverySession.devices.lastObject localizedName];
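
As a follow-on (my addition, not from the original answer): the discovery session returns the devices themselves, so you can build the session input straight from one of them rather than going through its name. A minimal sketch, assuming session is an existing AVCaptureSession:

// Pick the back wide-angle camera and make an input from it.
AVCaptureDevice *backCamera =
    [AVCaptureDeviceDiscoverySession
        discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                              mediaType:AVMediaTypeVideo
                               position:AVCaptureDevicePositionBack].devices.firstObject;
NSError *error = nil;
AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
if (cameraInput && [session canAddInput:cameraInput]) {
    [session addInput:cameraInput];
}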

AVCaptureMetadataOutput Inverse Colors

I am making an app that scans a barcode with inverted colors (black background & white bars). I have to use AVFoundation. Currently I am using AVCaptureMetadataOutput, and I can get it to work perfectly with a normal barcode. I need to invert the colors (white -> black & black -> white, etc.). Can I add a CIColorInvert filter to the input of the AVCaptureSession?
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view from its nib.
    mCaptureSession = [[AVCaptureSession alloc] init];

    AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if ([mCaptureSession canAddInput:videoInput]) {
        [mCaptureSession addInput:videoInput];
    } else {
        NSLog(@"Could not add video input: %@", [error localizedDescription]);
    }

    // Set up the metadata output with this class as its delegate, so that
    // detected metadata (Code 39 barcodes) is sent to this class.
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([mCaptureSession canAddOutput:metadataOutput]) {
        [mCaptureSession addOutput:metadataOutput];
        [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
        [metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeCode39Code]];
    } else {
        NSLog(@"Could not add metadata output");
    }

    // Sets up what the camera sees as a layer of the view.
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:mCaptureSession];
    //CGRect frame = CGRectMake(0.0 - 50, 0.0, 1024.0, 1024.0 + 720.0);
    CGRect bounds = self.view.layer.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.bounds = bounds;
    previewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));

    NSArray *filters = [[NSArray alloc] initWithObjects:[CIFilter filterWithName:@"CIColorInvert"], nil];
    [previewLayer setFilters:filters];
    //[previewLayer setFrame:self.view.bounds];
    [self.view.layer addSublayer:previewLayer];

    // Starts the camera session.
    [mCaptureSession startRunning];
}
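
A note of caution (my addition, not from the question): as I understand it, CALayer's filters property is ignored on iOS, and in any case AVCaptureMetadataOutput scans the raw camera frames, not the filtered preview. If the frames really need inverting first, one option is to do it with Core Image on frames from an AVCaptureVideoDataOutput and hand the result to a separate barcode detector. A minimal sketch of just the inversion step:

// Sketch (assumes a video data output delegate); the inverted image would
// still need to be passed to a separate barcode detector of your choice.
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIFilter *invertFilter = [CIFilter filterWithName:@"CIColorInvert"];
    [invertFilter setValue:frame forKey:kCIInputImageKey];
    CIImage *inverted = invertFilter.outputImage;
    // ... feed `inverted` to a detector ...
}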

Displaying camera in custom frame

I'm trying to load the camera into a custom CGRect, but I'm not able to do so, as it appears the view is bound by the camera's aspect ratio. This is the code I'm using:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice)
{
    NSError *error;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (!error)
    {
        if ([session canAddInput:videoInput])
        {
            [session addInput:videoInput];

            AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
            // Size the preview to the full width of the view, 206 points tall.
            CGFloat x = self.view.bounds.origin.x;
            CGFloat y = self.view.bounds.origin.y;
            CGFloat width = self.view.bounds.size.width;
            CGFloat height = 206;
            previewLayer.frame = CGRectMake(x, y, width, height);

            [self.view.layer addSublayer:previewLayer];
            [session startRunning];
        }
    }
}
[Screenshots comparing the current full-screen framing and the desired 206-point-tall framing omitted.]
I can't figure out how to "unlock" the camera frame or adjust the aspect. Is my desired result possible without lowering image quality, and if so - how?
Add this line at the end of your code:
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
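
For context (my note, not part of the original answer), AspectFill preserves image quality by cropping rather than stretching. The three gravity options behave like this:

previewLayer.videoGravity = AVLayerVideoGravityResize;           // stretch to fill the frame, may distort
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;     // fit inside the frame, may letterbox
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill the frame, may crop edges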

AVCapture doesn't fill whole view

I'm trying to display live camera video in a view, but when I run the following code, it doesn't fill up the whole screen: it only fills a 3.5-inch screen's worth of space (running on a 5.5-inch screen). I have pinned the view's Auto Layout constraints to 0 on each side.
session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];

AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput]) {
    [session addInput:deviceInput];
}

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = frameForCapture.frame;
[previewLayer setFrame:frame];
[rootLayer insertSublayer:previewLayer atIndex:0];

stillImage = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImage setOutputSettings:outputSettings];
[session addOutput:stillImage];

[session startRunning];
When I set the view controller's size from 3.5-inch to 5.5-inch and run, it fills up the screen. Auto Layout works when the size changes, but not when the app first runs.
Layers don't adhere to autoresizing/Auto Layout the way views do; you'll need to set the layer's frame manually on resize. A nice way to do this is to introduce a view subclass that contains the preview layer and sets its frame to the view's bounds in -layoutSubviews.
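
A minimal sketch of that subclass (the class name and property are illustrative):

@interface CameraPreviewView : UIView
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
@end

@implementation CameraPreviewView
- (void)layoutSubviews
{
    [super layoutSubviews];
    // Re-sync the layer with the view's bounds on every layout pass,
    // including the first one after Auto Layout resolves the real screen size.
    self.previewLayer.frame = self.bounds;
}
@end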

Weird UIScrollView behaviour in iOS 7 on iPhone 4

I am working on a video-related application and use the AVFoundation framework to capture images and video. I capture an image and show it in the next view, where an image view is a subview of a scroll view. This works fine on iPhone 5 and 5s, and also on iPad, but on iPhone 4, after capturing the image and attaching it to the scroll view, the app becomes slow: the scroll view does not scroll as smoothly as on other devices. I can't figure out where it has gone wrong. I am using the code below to capture images:
- (void)capturePhoto
{
    _ciContext = [CIContext contextWithOptions:nil];
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // - input
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:NULL];
    NSError *error = nil;
    if ([device lockForConfiguration:&error])
    {
        if ([device isFlashModeSupported:AVCaptureFlashModeOff])
        {
            device.flashMode = AVCaptureFlashModeOff;
        }
        [device unlockForConfiguration];
    }

    // - still image output
    _dataOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey, nil];
    _dataOutput.outputSettings = outputSettings;

    // - video data output
    NSMutableDictionary *settings = [NSMutableDictionary dictionary];
    [settings setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                 forKey:(__bridge id)kCVPixelBufferPixelFormatTypeKey];
    _dataOutputVideo = [[AVCaptureVideoDataOutput alloc] init];
    _dataOutputVideo.videoSettings = settings;
    [_dataOutputVideo setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    _session = [[AVCaptureSession alloc] init];
    [_session addInput:deviceInput];
    [_session addOutput:_dataOutput];
    [_session addOutput:_dataOutputVideo];
    // _session.sessionPreset = AVCaptureSessionPresetPhoto;
    _session.sessionPreset = AVCaptureSessionPresetHigh;
    // _session.sessionPreset = AVCaptureSessionPresetMedium;
    [_session startRunning];

    // add gesture
    // UIGestureRecognizer *gr = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(didTapGesture:)];
    // gr.delegate = self;
    // [self.touchView addGestureRecognizer:gr];

    _focusView = [[UIView alloc] init];
    CGRect imageFrame = _focusView.frame;
    imageFrame.size.width = 80;
    imageFrame.size.height = 80;
    _focusView.frame = imageFrame;
    _focusView.center = CGPointMake(160, 202);

    CALayer *layer = _focusView.layer;
    layer.shadowOffset = CGSizeMake(2.5, 2.5);
    layer.shadowColor = [[UIColor blackColor] CGColor];
    layer.shadowOpacity = 0.5;
    layer.borderWidth = 2;
    layer.borderColor = [UIColor yellowColor].CGColor;
    [self.touchView addSubview:_focusView];
    _focusView.alpha = 0;

    _isShowFlash = NO;
    [self.view bringSubviewToFront:self.touchView];
    UIView *footerView = [self.view viewWithTag:2];
    [self.view bringSubviewToFront:footerView];
}
Later I attach it to the scroll view like this:
scrollImgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 340)];
UIImage *image = [UIImage imageWithData:appdelegate.capturedImgData];
UIImage *tempImage = [self resizeImage:image withWidth:320 withHeight:340];
NSData *imgData = UIImageJPEGRepresentation(tempImage, 1.0); //0.25f
NSLog(@"image is %@", image);
scrollImgView.image = [UIImage imageWithData:imgData];
// scrollImgView.contentMode = UIViewContentModeScaleAspectFit;
[postScrollView addSubview:scrollImgView];
Please give me suggestions if anyone has faced the same problem.
Your code is fine, and it is not a problem with the device. The slowdown may come from:
1. a network problem
2. the device's memory already being fully loaded
3. data conversions, which also take time
Look at this part:
UIImage *image = [UIImage imageWithData:appdelegate.capturedImgData];
UIImage *tempImage = [self resizeImage:image withWidth:320 withHeight:340];
NSData *imgData = UIImageJPEGRepresentation(tempImage, 1.0); //0.25f
In the code above:
1. the data conversion cost is high, and the image is converted twice; optimize any `NSData` handling
2. the third line re-encodes at full `quality`, which also takes time.
My suggestion: do the work asynchronously, for example with
-- SDWebImage
-- an asynchronous image view
-- NSOperationQueue
-- GCD dispatch queues (see the sketch below)
Use any one of these and it should respond better; my suggestion is SDWebImage. See this link: Loading takes a while when i set UIImage to a NSData with a url.
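
A minimal sketch of the GCD suggestion, reusing the question's own resizeImage:withWidth:withHeight: helper and views; it also skips the full-quality JPEG round-trip, which was the expensive step:

// Do the expensive decode/resize off the main thread (a sketch, not the asker's code).
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [UIImage imageWithData:appdelegate.capturedImgData];
    UIImage *tempImage = [self resizeImage:image withWidth:320 withHeight:340];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Touch UIKit only on the main thread.
        scrollImgView.image = tempImage;
        [postScrollView addSubview:scrollImgView];
    });
});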
I solved the issue by changing
_session.sessionPreset = AVCaptureSessionPresetHigh;
to
_session.sessionPreset = AVCaptureSessionPresetMedium;