AVCaptureVideoPreviewLayer not filling screen - iOS

I have read about a million threads about how to make a video preview layer fill the complete screen of an iPhone, but nothing works ... maybe you can help me, because I'm really stuck.
This is my preview layer init:
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
    // Choosing bigger preset for bigger screen.
    _sessionPreset = AVCaptureSessionPreset1280x720;
}
else
{
    _sessionPreset = AVCaptureSessionPresetHigh;
}

[self setupAVCapture];

AVCaptureSession *captureSession = _session;
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView *aView = self.view;
previewLayer.frame = aView.bounds;
previewLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
[aView.layer addSublayer:previewLayer];
That's my setupAVCapture method:
//-- Set up capture session.
_session = [[AVCaptureSession alloc] init];
[_session beginConfiguration];

//-- Set preset session size.
[_session setSessionPreset:_sessionPreset];

//-- Create a video device and an input from that device. Add the input to the capture session.
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice == nil)
    assert(0);

//-- Add the device to the session.
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (error)
    assert(0);
[_session addInput:input];

//-- Create the output for the capture session.
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

//-- Set to YUV420.
[dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
                                                         forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview

// Set dispatch to be on the main thread so OpenGL can do things with the data.
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

[_session addOutput:dataOutput];
[_session commitConfiguration];
[_session startRunning];
I already tried using different AVCaptureSession presets and resize/fit options, but it always looks like this:
http://imageshack.us/photo/my-images/707/img0013g.png/
Or like this if I use previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill. If I log the size of the layer, the correct full-screen size is returned:
http://imageshack.us/photo/my-images/194/img0014k.png/

Try:
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.session];
[captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
Swift Update
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
Swift 4.0 Update
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspectFill

In case someone has this issue: you just need to take the bounds of the screen.
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.frame = UIScreen.main.bounds
previewLayer.videoGravity = .resizeAspectFill
camPreview.layer.addSublayer(previewLayer)

videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
DispatchQueue.main.async {
    self.videoPreviewLayer?.frame = self.captureImageView.bounds
}
cropImageRect = captureImageView.frame
videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
videoPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
captureImageView.layer.addSublayer(videoPreviewLayer!)
session!.startRunning()
This works for me. The problem I faced was that I set the following line outside the main thread; I think that's what caused the problem.
self.videoPreviewLayer?.frame = self.captureImageView.bounds

Swift 5 and Xcode 13
This worked for me
Create the AVCaptureVideoPreviewLayer (note the class is AVCaptureVideoPreviewLayer, and its session still has to be assigned before capture starts):
class ViewController: UIViewController {
    let previewLayer = AVCaptureVideoPreviewLayer()
}
Add the preview layer to the view:
override func viewDidLoad() {
    super.viewDidLoad()
    // Adds the layer to the view's layer tree
    view.layer.addSublayer(previewLayer)
}
Make the preview layer fill the full bounds of the view:
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    previewLayer.frame = view.bounds
}

Related

How to show same camera video in two views

I am trying to show the same camera video in two different views; however, I only get the video in one view. Could you help? The code is below:
- (void)showCameraPreview {
    self.camerPreviewCaptureSession = [[AVCaptureSession alloc] init];
    self.camerPreviewCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput1 = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    [self.camerPreviewCaptureSession addInput:videoInput1];

    AVCaptureVideoPreviewLayer *newCaptureVideoViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCaptureVideoViewLayer.frame = self.viewPreview.bounds;
    newCaptureVideoViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [newCaptureVideoViewLayer setFrame:CGRectMake(0.0, 0.0, self.viewPreview.bounds.size.width, self.viewPreview.bounds.size.height)];

    AVCaptureVideoPreviewLayer *newCameraViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.camerPreviewCaptureSession];
    newCameraViewLayer.frame = self.viewPreview1.bounds;
    newCameraViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [newCameraViewLayer setFrame:CGRectMake(0.0, 0.0, self.viewPreview1.bounds.size.width, self.viewPreview1.bounds.size.height)];

    [self.viewPreview1.layer addSublayer:newCameraViewLayer];
    [self.viewPreview.layer addSublayer:newCaptureVideoViewLayer];

    [self.camerPreviewCaptureSession startRunning];
}
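For reference, a single AVCaptureSession can, as far as I know, drive multiple preview layers at once, so the approach above is sound. A common pitfall is calling showCameraPreview before Auto Layout has sized viewPreview and viewPreview1, which leaves both layers with zero-sized frames. A hedged sketch of a fix, assuming the two layers are retained in hypothetical properties, is to re-apply the frames after layout:

// Hypothetical: the two layers created in showCameraPreview are assumed
// to be stored as self.captureVideoViewLayer and self.cameraViewLayer.
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // Size each preview layer once the views have their final bounds.
    self.captureVideoViewLayer.frame = self.viewPreview.bounds;
    self.cameraViewLayer.frame = self.viewPreview1.bounds;
}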

How to correctly start a camera session using AVCaptureSession/AVCapture

I want to make an iOS app in Objective-C. Right now I'm stuck on attaching the preview layer to the AVCapture preview output. Could someone please tell me how to successfully start an image capture session using an AVCapture camera session in Objective-C on iOS? Any help is much appreciated. Thank you.
Here is an answer using AVCaptureSession:
- (void)capture
{
    NSError *error = nil;

    // Capture session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    // Add device
    AVCaptureDevice *inputDevice = nil;
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in devices)
    {
        if ([camera position] == AVCaptureDevicePositionBack) // is back camera
        {
            inputDevice = camera;
            break;
        }
    }

    // Wrap the device in an input; a bare AVCaptureDevice cannot be added to a session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
    if (input)
    {
        [session addInput:input];
    }

    // Output
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // Preview layer
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    previewLayer.frame = viewForCamera.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [viewForCamera.layer addSublayer:previewLayer];

    // Start capture session
    [session startRunning];
}
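One caveat: as written, the AVCaptureVideoDataOutput has no sample buffer delegate, so frames are captured but delivered nowhere. If you actually want the frames, a hedged addition before [session startRunning] might look like this (it assumes the class adopts AVCaptureVideoDataOutputSampleBufferDelegate; the queue label is made up):

// Deliver frames to self on a background serial queue.
// Requires: the class declares <AVCaptureVideoDataOutputSampleBufferDelegate>.
dispatch_queue_t frameQueue = dispatch_queue_create("com.example.frameQueue", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:frameQueue];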
Try this code to get the camera ID:
NSString *cameraID = nil;
NSArray *captureDeviceType = @[AVCaptureDeviceTypeBuiltInWideAngleCamera];
AVCaptureDeviceDiscoverySession *captureDevice =
    [AVCaptureDeviceDiscoverySession
        discoverySessionWithDeviceTypes:captureDeviceType
                              mediaType:AVMediaTypeVideo
                               position:AVCaptureDevicePositionUnspecified];
cameraID = [captureDevice.devices.lastObject localizedName];
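To actually use the discovered camera rather than just its name, you can wrap it in a device input as usual. A short sketch, assuming a session variable like the one in the capture method above:

// Wrap the discovered device in an input and attach it to an existing session.
AVCaptureDevice *camera = captureDevice.devices.lastObject;
NSError *inputError = nil;
AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&inputError];
if (cameraInput && [session canAddInput:cameraInput]) {
    [session addInput:cameraInput];
}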

AVCaptureMetadataOutput Inverse Colors

I am making an app that scans a barcode with inverted colors (black background & white bars). I have to use AVFoundation. Currently, I am using AVCaptureMetadataOutput, and I can get it to work perfectly with a normal barcode. I need to invert the colors, white -> black & black -> white, etc. Can I add a CIColorInvert to the input in the AVCaptureSession?
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view from its nib.
    mCaptureSession = [[AVCaptureSession alloc] init];

    AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if ([mCaptureSession canAddInput:videoInput]) {
        [mCaptureSession addInput:videoInput];
    } else {
        NSLog(@"Could not add video input: %@", [error localizedDescription]);
    }

    // Set up the metadata output with this class as its delegate, so that if metadata
    // (barcode 39) is detected it will send the data to this class.
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([mCaptureSession canAddOutput:metadataOutput]) {
        [mCaptureSession addOutput:metadataOutput];
        [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
        [metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeCode39Code]];
    } else {
        NSLog(@"Could not add metadata output");
    }

    // Sets up what the camera sees as a layer of the view.
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:mCaptureSession];
    //CGRect frame = CGRectMake(0.0 - 50, 0.0, 1024.0, 1024.0 + 720.0);
    CGRect bounds = self.view.layer.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.bounds = bounds;
    previewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));

    NSArray *filters = [[NSArray alloc] initWithObjects:[CIFilter filterWithName:@"CIColorInvert"], nil];
    [previewLayer setFilters:filters];
    //[previewLayer setFrame:self.view.bounds];
    [self.view.layer addSublayer:previewLayer];

    // Starts the camera session.
    [mCaptureSession startRunning];
}
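Note that CALayer's filters property is documented as unsupported on iOS, so the CIColorInvert set on the preview layer above will have no visible effect on the device, and it would not change the frames AVCaptureMetadataOutput actually analyzes anyway. One hedged alternative is to add an AVCaptureVideoDataOutput and invert each frame yourself with Core Image; a sketch, where previewImageView is an assumed UIImageView property and self is assumed to be the data output's sample buffer delegate:

// Sketch: invert frames with Core Image instead of layer filters.
// Assumes an AVCaptureVideoDataOutput whose sampleBufferDelegate is self,
// and a hypothetical UIImageView property named previewImageView.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CIFilter *invert = [CIFilter filterWithName:@"CIColorInvert"];
    [invert setValue:frame forKey:kCIInputImageKey];
    CIImage *inverted = invert.outputImage;

    // Rendering via UIImage is the simplest (not the fastest) option.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.previewImageView.image = [UIImage imageWithCIImage:inverted];
    });
}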

Displaying camera in custom frame

I'm trying to load the camera into a custom CGRect, but I'm not able to do so as it appears the view is bound by the camera aspect. This is the code I'm using:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice)
{
    NSError *error;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (!error)
    {
        if ([session canAddInput:videoInput])
        {
            [session addInput:videoInput];

            AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
            previewLayer.frame = self.view.bounds;

            CGFloat x = self.view.bounds.origin.x;
            CGFloat y = self.view.bounds.origin.y;
            CGFloat width = self.view.bounds.size.width;
            CGFloat height = 206;
            CGRect newFrame = CGRectMake(x, y, width, height);
            previewLayer.frame = newFrame;

            [self.view.layer addSublayer:previewLayer];
            [session startRunning];
        }
    }
}
This is the frame that the app is currently displaying:
But I need it to be framed like this:
I can't figure out how to "unlock" the camera frame or adjust the aspect. Is my desired result possible without lowering image quality, and if so, how?
Add this line at the end of your code:
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
With AVLayerVideoGravityResizeAspectFill, the layer crops the video to fill your custom frame instead of letterboxing it, so image quality is unchanged; only the visible region changes.

iOS7 AVCapture captureOutput never gets called

Please understand that I cannot upload the whole code here.
I have
@interface BcsProcessor : NSObject <AVCaptureMetadataOutputObjectsDelegate> {}
and BcsProcessor has setUpCaptureSession and captureOutput methods:
- (void)captureOutput:(AVCaptureOutput*)captureOutput didOutputMetadataObjects:(NSArray*)metadataObjects fromConnection:(AVCaptureConnection*)connection
- (NSString *)setUpCaptureSession {
    NSError *error = nil;
    AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession = captureSession;

    AVCaptureDevice *__block device = nil;
    if (self.isFrontCamera) {
        NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        [devices enumerateObjectsUsingBlock:^(AVCaptureDevice *obj, NSUInteger idx, BOOL *stop) {
            if (obj.position == AVCaptureDevicePositionFront) {
                device = obj;
            }
        }];
    } else {
        device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }

    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

    AVCaptureMetadataOutput *output = [[[AVCaptureMetadataOutput alloc] init] autorelease];
    output.metadataObjectTypes = output.availableMetadataObjectTypes;
    dispatch_queue_t outputQueue = dispatch_queue_create("com.1337labz.featurebuild.metadata", 0);
    [output setMetadataObjectsDelegate:self queue:outputQueue];

    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
    if ([captureSession canAddInput:input]) {
        [captureSession addInput:input];
    }
    if ([captureSession canAddOutput:output]) {
        [captureSession addOutput:output];
    }

    // Set up the capture preview layer.
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];

    // Run [captureSession startRunning] on the next event loop pass.
    [captureSession performSelector:@selector(startRunning) withObject:nil afterDelay:0];

    return nil;
}
So the code above sets up the session and adds the AVCaptureMetadataOutput, and BcsProcessor is supposed to receive the captured metadata, but my captureOutput method never receives any data or gets called.
I'll appreciate any help or comments.
First, make sure your input and output are correctly added to the session. You can check by logging captureSession.inputs and captureSession.outputs.
Second, make sure output.metadataObjectTypes is correctly set up, meaning availableMetadataObjectTypes is not empty. I believe it will be empty if you query it before adding the output to the session.
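For example, a quick sanity check might look like this (assuming the session is stored in self.captureSession as above):

// Quick sanity check, e.g. at the end of -setUpCaptureSession.
NSLog(@"inputs: %@", self.captureSession.inputs);
NSLog(@"outputs: %@", self.captureSession.outputs);
AVCaptureMetadataOutput *addedOutput = (AVCaptureMetadataOutput *)self.captureSession.outputs.firstObject;
NSLog(@"available metadata types: %@", addedOutput.availableMetadataObjectTypes);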
And finally, I don't see you adding the preview layer to the view's layer.
Try this after you init your layer with the session:
self.previewLayer.frame = self.view.layer.bounds;
[self.view.layer addSublayer:self.previewLayer];