Displaying camera in a custom frame - iOS

I'm trying to load the camera into a custom CGRect, but I'm not able to do so because the preview appears to be constrained by the camera's aspect ratio. This is the code I'm using:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice)
{
    NSError *error;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (!error)
    {
        if ([session canAddInput:videoInput])
        {
            [session addInput:videoInput];
            AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
            // Constrain the preview to a 206-point-tall strip at the top of the view.
            CGFloat x = self.view.bounds.origin.x;
            CGFloat y = self.view.bounds.origin.y;
            CGFloat width = self.view.bounds.size.width;
            CGFloat height = 206;
            previewLayer.frame = CGRectMake(x, y, width, height);
            [self.view.layer addSublayer:previewLayer];
            [session startRunning];
        }
    }
}
This is the frame that the app currently displays, versus how I need it framed (screenshots omitted here).
I can't figure out how to "unlock" the camera frame or adjust the aspect. Is my desired result possible without lowering image quality, and if so - how?

Add this line at the end of your code:
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
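For what it's worth, videoGravity only changes how the video is drawn into the layer; the capture itself is untouched, so image quality is not reduced. The three gravity values behave like this:

previewLayer.videoGravity = AVLayerVideoGravityResize;           // stretch to fill the frame; may distort
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;     // fit inside the frame; may letterbox
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill the frame; crops the overflow

With AVLayerVideoGravityResizeAspectFill the video keeps its aspect ratio and anything that falls outside your 206-point-tall frame is simply cropped away.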

Related

How to show same camera video in two views

I am trying to show the same camera video in two different views; however, I only get the video in one view. Could you help? The code is below:
- (void)showCameraPreview {
    self.camerPreviewCaptureSession = [[AVCaptureSession alloc] init];
    self.camerPreviewCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput1 = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    [self.camerPreviewCaptureSession addInput:videoInput1];

    // First preview layer.
    AVCaptureVideoPreviewLayer *newCaptureVideoViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCaptureVideoViewLayer.frame = self.viewPreview.bounds;
    newCaptureVideoViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    // Second preview layer on the same session.
    AVCaptureVideoPreviewLayer *newCameraViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCameraViewLayer.frame = self.viewPreview1.bounds;
    newCameraViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    [self.viewPreview1.layer addSublayer:newCameraViewLayer];
    [self.viewPreview.layer addSublayer:newCaptureVideoViewLayer];
    [self.camerPreviewCaptureSession startRunning];
}
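If the second preview layer stays blank, one workaround is to keep a single AVCaptureVideoPreviewLayer for one view and drive the other view yourself from an AVCaptureVideoDataOutput, mirroring each frame into its layer. A minimal sketch, assuming self adopts AVCaptureVideoDataOutputSampleBufferDelegate and owns a reused CIContext in a hypothetical ciContext property (neither is in the original code):

// Extra output on the same session (e.g. at the end of showCameraPreview):
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
dataOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0)];
if ([self.camerPreviewCaptureSession canAddOutput:dataOutput]) {
    [self.camerPreviewCaptureSession addOutput:dataOutput];
}

// Delegate callback: render each frame into the second view's layer.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGImageRef cgImage = [self.ciContext createCGImage:frame fromRect:frame.extent]; // ciContext: illustrative, reused CIContext
    dispatch_async(dispatch_get_main_queue(), ^{
        self.viewPreview1.layer.contents = (__bridge id)cgImage;
        CGImageRelease(cgImage);
    });
}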

AVCaptureMetadataOutput Inverse Colors

I am making an app that scans a barcode with inverted colors (black background & white bars). I have to use AVFoundation, and I am currently using AVCaptureMetadataOutput. I can get it to work perfectly with a normal barcode, but I need to invert the colors (white -> black and black -> white). Can I add a CIColorInvert filter to the input of the AVCaptureSession?
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view from its nib.
    mCaptureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if ([mCaptureSession canAddInput:videoInput]) {
        [mCaptureSession addInput:videoInput];
    } else {
        NSLog(@"Could not add video input: %@", [error localizedDescription]);
    }
    // Set up metadata output and this class as its delegate so that if metadata
    // (barcode 39) is detected it will send the data to this class.
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([mCaptureSession canAddOutput:metadataOutput]) {
        [mCaptureSession addOutput:metadataOutput];
        [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
        [metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeCode39Code]];
    } else {
        NSLog(@"Could not add metadata output");
    }
    // Sets up what the camera sees as a layer of the view.
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:mCaptureSession];
    CGRect bounds = self.view.layer.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.bounds = bounds;
    previewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
    NSArray *filters = [[NSArray alloc] initWithObjects:[CIFilter filterWithName:@"CIColorInvert"], nil];
    [previewLayer setFilters:filters];
    [self.view.layer addSublayer:previewLayer];
    // Starts the camera session.
    [mCaptureSession startRunning];
}
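One thing to be aware of: on iOS, CALayer's filters property is ignored (it is only honored on macOS), so the setFilters: call above has no visible effect. Even if it did, it would only change what is displayed; AVCaptureMetadataOutput always analyzes the raw camera frames. To hand inverted frames to a decoder you would have to capture them yourself with an AVCaptureVideoDataOutput and invert them in Core Image. A sketch of just the inversion step (the surrounding data-output setup is omitted, and the decoder is left to you):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // Swap black and white with Core Image.
    CIFilter *invert = [CIFilter filterWithName:@"CIColorInvert"];
    [invert setValue:frame forKey:kCIInputImageKey];
    CIImage *inverted = invert.outputImage;
    // `inverted` would then go to a barcode decoder of your choice;
    // AVCaptureMetadataOutput itself cannot consume modified frames.
}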

iOS Objective-C screenshot sublayer not visible

I'm building an app where I want to take a snapshot from the camera and show it in a UIImageView. I'm able to take the snapshot, but the AVCaptureVideoPreviewLayer is not visible in the screenshot. Does anyone know how to do that?
Here is my code:
@implementation ViewController

CGRect imgRect;
AVCaptureVideoPreviewLayer *previewLayer;
AVCaptureVideoDataOutput *output;

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    // Capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;
    // Add device
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Input
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    if (!input) {
        NSLog(@"No Input");
    }
    [session addInput:input];
    // Output
    output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    // Preview
    previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    CGFloat x = self.view.bounds.size.width * 0.5 - 128;
    imgRect = CGRectMake(x, 64, 256, 256);
    previewLayer.frame = imgRect;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:previewLayer];
    // Start capture session
    [session startRunning];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)TakeSnapshot:(id)sender {
    self.imgResult.image = self.pb_takeSnapshot;
}

- (UIImage *)pb_takeSnapshot {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
A bit of help is very much appreciated. Thank you in advance,
Gilbert Avezaat
You should use AVCaptureStillImageOutput to get an image from the camera connection. Here is how you could do it:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
// JPEG output (the codec key and the pixel-format key are mutually exclusive).
stillImageOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[session addOutput:stillImageOutput]; // add the output to your session before capturing
// Obtain the video connection used below.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [UIImage imageWithData:imageData];
}];
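For context, the preview is missing from the original snapshot because drawViewHierarchyInRect:afterScreenUpdates: generally does not capture AVCaptureVideoPreviewLayer content; the camera preview is composited outside the app's normal drawing, so it comes out blank. Note also that AVCaptureStillImageOutput was deprecated in iOS 10 in favor of AVCapturePhotoOutput, so newer code should use the latter.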
First check whether the image is returned or not. If it is, then:
- (IBAction)TakeSnapshot:(id)sender {
    self.imgResult.image = self.pb_takeSnapshot;
    [self.view bringSubviewToFront:self.imgResult];
}
Hope it helps you.

AVCapture doesn't fill whole view

I'm trying to display live camera video in a view, but when I run the following code, it doesn't fill the whole screen. It only fills a 3.5-inch screen area (running on a 5.5-inch screen). I have pinned the view's Auto Layout constraints to 0 on each side.
session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput]) {
    [session addInput:deviceInput];
}
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = frameForCapture.frame;
[previewLayer setFrame:frame];
[rootLayer insertSublayer:previewLayer atIndex:0];
stillImage = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImage setOutputSettings:outputSettings];
[session addOutput:stillImage];
[session startRunning];
When I set the view controller's size from 3.5-inch to 5.5-inch and run, it fills the screen, though. So Auto Layout is working when the size changes, but not on the initial run.
Layers don't adhere to autoresizing/Auto Layout the way views do; you'll need to set the layer's frame manually on resize. A nice way to do this is to introduce a view subclass that contains the preview layer and sets the layer's frame to the view's bounds in -layoutSubviews, as sketched below.
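A minimal sketch of such a container view (the class and property names are illustrative, not from the question):

@interface PreviewView : UIView
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
@end

@implementation PreviewView
- (void)layoutSubviews {
    [super layoutSubviews];
    // Runs whenever Auto Layout/autoresizing updates this view's bounds,
    // so the layer always tracks the view's final size.
    self.previewLayer.frame = self.bounds;
}
@end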

AVCaptureVideoPreviewLayer not filling screen

I read about one million threads about how to make a VideoPreviewLayer fill the complete screen of an iPhone, but nothing works... maybe you can help me, because I'm really stuck.
This is my Preview layer init:
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
    // Choosing bigger preset for bigger screen.
    _sessionPreset = AVCaptureSessionPreset1280x720;
}
else
{
    _sessionPreset = AVCaptureSessionPresetHigh;
}
[self setupAVCapture];
AVCaptureSession *captureSession = _session;
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView *aView = self.view;
previewLayer.frame = aView.bounds;
previewLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
[aView.layer addSublayer:previewLayer];
That's my setupAVCapture method:
//-- Setup capture session.
_session = [[AVCaptureSession alloc] init];
[_session beginConfiguration];
//-- Set preset session size.
[_session setSessionPreset:_sessionPreset];
//-- Create a video device and an input from that device. Add the input to the capture session.
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice == nil)
    assert(0);
//-- Add the device to the session.
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (error)
    assert(0);
[_session addInput:input];
//-- Create the output for the capture session.
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording
//-- Set to YUV420.
[dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
                                                         forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview
// Set dispatch to be on the main thread so OpenGL can do things with the data.
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:dataOutput];
[_session commitConfiguration];
[_session startRunning];
I already tried using different AVCaptureSessionPresets and resize/fit options, but it always looks like this:
http://imageshack.us/photo/my-images/707/img0013g.png/
Or like this if I use previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill. If I log the size of the layer, the correct full-screen size is returned:
http://imageshack.us/photo/my-images/194/img0014k.png/
try:
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.session];
[captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
Swift Update
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
Swift 4.0 Update
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspectFill
In case someone else has this issue: you just need to use the bounds of the screen.
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.frame = UIScreen.main.bounds
previewLayer.videoGravity = .resizeAspectFill
camPreview.layer.addSublayer(previewLayer)
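One caveat: pinning the layer to UIScreen.main.bounds only works when the preview view itself fills the screen, and the frame won't follow later layout changes such as rotation; the layoutSubviews approach from the previous question handles that more generally.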
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
DispatchQueue.main.async {
    self.videoPreviewLayer?.frame = self.captureImageView.bounds
}
cropImageRect = captureImageView.frame
videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
videoPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
captureImageView.layer.addSublayer(videoPreviewLayer!)
session!.startRunning()
This works for me. The problem I faced was that I had put the following line outside of the main thread; I think that's what caused the issue:
self.videoPreviewLayer?.frame = self.captureImageView.bounds
Swift 5 and Xcode 13
This worked for me.
Create the AVCaptureVideoPreviewLayer:
class ViewController: UIViewController {
    let previewLayer = AVCaptureVideoPreviewLayer()
}
Add the preview layer to the view:
override func viewDidLoad() {
    super.viewDidLoad()
    // Adds the layer to the view's layer hierarchy
    view.layer.addSublayer(previewLayer)
}
Make the preview layer fill the view's bounds (viewDidLayoutSubviews runs after Auto Layout has set the final frames):
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    previewLayer.frame = view.bounds
}
