I have a view controller with a button that presents a second view controller, which adds a video sublayer and starts the camera.
The code had been working fine until I tried adding other things, like another button, to the second view controller; after that it would sometimes work and sometimes not.
By "not work" I mean it opens up a black screen with nothing on it at all and does not respond to anything.
I've deleted the buttons, code, etc., but that hasn't fixed anything.
It seems it will sometimes just work, i.e. after it works I can add a button or change code and it keeps working, and then it shows the black screen again.
There are no build errors, and the trace shows the app basically sitting there waiting for me to do something (like press the record button), but nothing is displayed.
I've read that I should call bringSubviewToFront, but that doesn't seem to do anything.
Any suggestions?
Thanks in advance.
UPDATE: I think I found something related. I was trying to programmatically position a button on the screen using CGRect, and part of that involved getting the text view's width and height.
I found that the code crashed with an "unexpectedly found nil while unwrapping an Optional value" message, i.e. I couldn't do anything like textView.frame.width, textView.frame.height, or textView.translatesAutoresizingMaskIntoConstraints = false.
At first I thought it was my code, but after trying the same code on another VC, it suddenly started working again, i.e. I get values for textView.frame.width and textView.frame.height.
And my camera started showing the preview!
So I reckon that when the preview is black, my buttons and text views have no values either.
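For what it's worth, a minimal defensive sketch for that crash, assuming textView is an implicitly unwrapped @IBOutlet (a broken outlet connection fails with exactly this kind of message):

// Hypothetical guard: surfaces a disconnected outlet instead of crashing
// on textView.frame. Assumes: @IBOutlet weak var textView: UITextView!
guard let textView = textView else {
    assertionFailure("textView outlet is not connected")
    return
}
let buttonFrame = CGRect(x: 0, y: 0,
                         width: textView.frame.width,
                         height: textView.frame.height)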
let captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    captureSession.sessionPreset = AVCaptureSession.Preset.high

    // Loop through all devices looking for cameras.
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
    let devices = deviceDiscoverySession.devices
    for device in devices {
        if device.hasMediaType(AVMediaType.video) {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            } else if device.position == AVCaptureDevice.Position.front {
                frontCamera = device
            }
        }
    }
    currentDevice = frontCamera

    // Loop through all devices looking for a microphone.
    let audioDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInMicrophone], mediaType: AVMediaType.audio, position: AVCaptureDevice.Position.unspecified)
    let audioDevices = audioDiscoverySession.devices
    for audioDevice in audioDevices {
        if audioDevice.hasMediaType(AVMediaType.audio) {
            audioCapture = audioDevice
        }
    }

    // Set up input and output.
    do {
        // Camera input.
        let captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice!)
        captureSession.addInput(captureDeviceInput)
        // Audio input.
        let captureDeviceAudio = try AVCaptureDeviceInput(device: audioCapture!)
        captureSession.addInput(captureDeviceAudio)
        // Movie file output.
        videoFileOutput = AVCaptureMovieFileOutput()
        captureSession.addOutput(videoFileOutput!)
    } catch {
        print(error)
    }

    // Attach the preview layer and start the session.
    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    cameraPreviewLayer?.connection?.videoOrientation = currentVideoOrientation()
    cameraPreviewLayer?.frame = self.view.frame
    self.view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
    captureSession.startRunning()
}
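One thing worth hardening in the snippet above, though it isn't necessarily the cause of the black screen: the inputs and outputs are added without canAddInput/canAddOutput checks. A hedged sketch of the camera-input hookup with those guards, using the same property names as above:

do {
    let captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice!)
    // Adding an input the session can't accept raises an exception,
    // so check first (e.g. permission denied, simulator, device busy).
    if captureSession.canAddInput(captureDeviceInput) {
        captureSession.addInput(captureDeviceInput)
    } else {
        print("Could not add camera input")
    }
} catch {
    print(error)
}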
OK, I found out how to resolve it, but I don't know why it happens, other than it being a bug in Xcode.
It seems the problem has nothing to do with the video sublayer and its code.
I have text views and buttons etc. on this view controller.
I found that if I change the size of a button or text view, e.g. increase and then decrease the size of a text view, the problem goes away.
The problem comes back if you then change something else, e.g. edit code or move buttons around, but if you go back to the text view and change its size, it works again.
That's the workaround, but I don't know what triggers the problem.
Check whether you have provided the code that asks for permission for your app to access the camera. If you added the code but denied access when the app asked, go to Settings > [app name] > Camera and switch the access on. If you haven't added it, add this:
// Permission for camera
AVCaptureDevice.requestAccess(for: AVMediaType.video) { response in
    if response {
        // Access granted.
    } else {
        // Take required action.
    }
}
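One caveat worth adding: the requestAccess completion handler is not guaranteed to run on the main queue, so any UI or session setup done in response should hop back first. A sketch, where setupSession() is a hypothetical stand-in for your own setup method:

AVCaptureDevice.requestAccess(for: AVMediaType.video) { response in
    // The handler runs on an arbitrary queue; dispatch before touching UI.
    DispatchQueue.main.async {
        if response {
            self.setupSession() // hypothetical: build session + preview here
        } else {
            // e.g. show an alert pointing the user at Settings.
        }
    }
}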
I'm planning to use ARKit's camera feed as input into Apple's Vision API so I can recognize people's faces in screen-space, no depth information required. To simplify the process, I'm attempting to modify Apple's face tracking over frames example here: Tracking the User’s Face in Real Time
I thought that I could simply change the function here:
fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .front)
    if let device = deviceDiscoverySession.devices.first {
        if let deviceInput = try? AVCaptureDeviceInput(device: device) {
            if captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }
            if let highestResolution = self.highestResolution420Format(for: device) {
                try device.lockForConfiguration()
                device.activeFormat = highestResolution.format
                device.unlockForConfiguration()
                return (device, highestResolution.resolution)
            }
        }
    }
    throw NSError(domain: "ViewController", code: 1, userInfo: nil)
}
In the first line of the function, one of the arguments is .front, for the front-facing camera. I changed this to .back, which successfully gives me the rear-facing camera. However, the recognition region seems a little choppy, and as soon as it fixates on a face in the image, Xcode reports the error:
VisionFaceTrack[877:54517] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
Message from debugger: Terminated due to memory issue
In other words, the program crashes when a face is recognized. Clearly there is more to this than simply changing that constant. Perhaps there is a buffer somewhere with the wrong size, or a wrong resolution. May I have help figuring out what may be wrong here?
A better answer would also include information about how to achieve this with ARKit's camera feed, but I'm pretty sure it's the same idea with the CVPixelBuffer.
How would I adapt this example to use the rear camera?
EDIT: I think the issue is that my device has too little memory to support the algorithm using the back camera, since the back camera has a higher resolution.
However, even on another higher-performance device, the tracking quality is pretty bad. Yet the Vision algorithm only needs raw images, doesn't it? In that case, shouldn't this work? I can't find any examples online of using the back camera for face tracking.
Here's how I adapted the sample to make it work on my iPad Pro.
1) Download the sample project from here: Tracking the User’s Face in Real Time.
2) Change the function that loads the front-facing camera so it uses the back-facing one. Rename it to configureBackCamera and call it from setupAVCaptureSession:
fileprivate func configureBackCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
    if let device = deviceDiscoverySession.devices.first {
        if let deviceInput = try? AVCaptureDeviceInput(device: device) {
            if captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }
            if let highestResolution = self.highestResolution420Format(for: device) {
                try device.lockForConfiguration()
                device.activeFormat = highestResolution.format
                device.unlockForConfiguration()
                return (device, highestResolution.resolution)
            }
        }
    }
    throw NSError(domain: "ViewController", code: 1, userInfo: nil)
}
3) Change the implementation of highestResolution420Format. The problem is that with the back-facing camera you have access to formats with much higher resolutions than with the front-facing camera, which can hurt the tracking's performance. You'll need to adapt this to your use case, but here's an example that limits the resolution to 1080p:
fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
    var highestResolutionFormat: AVCaptureDevice.Format? = nil
    var highestResolutionDimensions = CMVideoDimensions(width: 0, height: 0)
    for format in device.formats {
        let deviceFormat = format as AVCaptureDevice.Format
        let deviceFormatDescription = deviceFormat.formatDescription
        if CMFormatDescriptionGetMediaSubType(deviceFormatDescription) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange {
            let candidateDimensions = CMVideoFormatDescriptionGetDimensions(deviceFormatDescription)
            // Skip formats taller than 1080p to keep tracking performant.
            if candidateDimensions.height > 1080 {
                continue
            }
            if (highestResolutionFormat == nil) || (candidateDimensions.width > highestResolutionDimensions.width) {
                highestResolutionFormat = deviceFormat
                highestResolutionDimensions = candidateDimensions
            }
        }
    }
    if highestResolutionFormat != nil {
        let resolution = CGSize(width: CGFloat(highestResolutionDimensions.width), height: CGFloat(highestResolutionDimensions.height))
        return (highestResolutionFormat!, resolution)
    }
    return nil
}
4) Now the tracking will work, but the face positions will not be correct. The reason is that the UI presentation is wrong: the original sample was designed for the front-facing camera with a mirrored display, while the back-facing camera doesn't need mirroring.
To account for this, change the updateLayerGeometry() method. Specifically, you need to change this:
// Scale and mirror the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)
into this:
// Scale the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: -scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)
After this, the tracking should work and the results should be correct.
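On the ARKit route mentioned in the question: a minimal sketch, under the assumption that you drive Vision from ARSessionDelegate. ARFrame.capturedImage is a CVPixelBuffer, so the same Vision requests apply; FaceTracker is a hypothetical helper name, and the .right orientation assumes a portrait device.

import ARKit
import Vision

final class FaceTracker: NSObject, ARSessionDelegate {
    // Set an instance of this class as the ARSession's delegate.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The camera buffer is landscape; .right maps it to portrait.
        // In production, throttle this and run it off the main thread.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right,
                                            options: [:])
        let request = VNDetectFaceRectanglesRequest { request, _ in
            guard let faces = request.results as? [VNFaceObservation] else { return }
            // Bounding boxes are normalized to the captured image, not the screen.
            print(faces.map { $0.boundingBox })
        }
        try? handler.perform([request])
    }
}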
I am creating a view controller in which I want a fairly small UIView in the corner to display the camera preview. I am using a function to do this. However, when I pass the small UIView into the function, the camera preview does not show up. The weird thing is that if I tell the function to display the preview on self.view, everything works fine and I can see the camera preview. For this reason I think the problem is with the way I insert the layer, or something similar.
Here is the function I am using to display the preview...
func displayPreview(on view: UIView) throws {
    guard let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }
    self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    self.previewLayer?.connection?.videoOrientation = .portrait
    view.layer.insertSublayer(self.previewLayer!, at: 0)
    self.previewLayer?.frame = view.frame
}
I call this function from inside another function which handles setting up the capture session and other similar things.
func configureCameraController() {
    cameraController.prepare { (error) in
        if let error = error {
            print("ERROR")
            print(error)
        } else {
        }
        print("hello")
        try! self.cameraController.displayPreview(on: self.mirrorView)
    }
}

configureCameraController()
How can I get the camera preview layer to show up on the smaller UIView?
Can you try adding the following
let rootLayer: CALayer = self.yourSmallerView.layer
rootLayer.masksToBounds = true
self.previewLayer.frame = rootLayer.bounds
rootLayer.addSublayer(self.previewLayer)
in place of
view.layer.insertSublayer(self.previewLayer!, at: 0)
Also ensure yourSmallerView.contentMode = UIViewContentMode.scaleToFill
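For what it's worth, the likely root cause here: a sublayer's frame is expressed in its superlayer's coordinate space, while view.frame is the view's rectangle in its superview's space. Those coincide for self.view but not for a small subview, so an equivalent minimal fix to the original displayPreview(on:) would be:

// Use the view's own coordinate space (bounds), not its position
// in the superview (frame), when framing a sublayer of view.layer.
view.layer.insertSublayer(self.previewLayer!, at: 0)
self.previewLayer?.frame = view.bounds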
Of course with iOS 10, you now have to do THIS
to use the phone's camera. On first launch, the user gets a permission question, and all is well.
BUT we have a client app that is a "camera app": when you launch the app it immediately launches the camera, and while the app is running, the camera is running and shown fullscreen. The code to do so is the usual way; see below.
The problem is the first launch of the app on a phone: the user is asked the question, and the user says yes. But then the camera is just black on the devices we have tried. It does not crash (as it would if you forgot the plist item), but it goes black and stays black.
If the user quits the app and launches it again - it's fine, everything works.
What the heck is the workflow for a "camera app"? I can't see a good solution, but there must be one for the various camera apps out there - which immediately go to fullscreen camera when you launch the app.
class CameraPlane: UIViewController {
    ...
    func cameraBegin() {
        captureSession = AVCaptureSession()
        captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
        let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
        } catch let error1 as NSError {
            error = error1
            input = nil
        }
        if error != nil {
            print("probably on simulator? no camera?")
            return
        }
        if captureSession!.canAddInput(input) == false {
            print("capture session problem?")
            return
        }
        captureSession!.addInput(input)
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) == false {
            print("capture session with stillImageOutput problem?")
            return
        }
        captureSession!.addOutput(stillImageOutput)
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
        // or AVLayerVideoGravityResizeAspect
        fixConnectionOrientation()
        view.layer.addSublayer(previewLayer!)
        captureSession!.startRunning()
        previewLayer!.frame = view.bounds
    }
Note: it's likely the OP's code was actually working correctly in terms of the new iOS 10 permission string, and the OP had another problem causing the black screen.
From the code you've posted, I can't tell why you experience this kind of behavior. I can only give you the code that is working for me.
This code also runs on iOS 9. Note that I am loading the camera in viewDidAppear to make sure that all constraints are set.
import AVFoundation

class ViewController: UIViewController {
    // The view where the camera feed is shown.
    @IBOutlet weak var cameraView: UIView!

    var captureSession: AVCaptureSession = {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSessionPresetPhoto
        return session
    }()

    var sessionOutput = AVCaptureStillImageOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice]
        guard let backCamera = (devices?.first { $0.position == .back }) else {
            print("The back camera is currently not available")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                sessionOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
                if captureSession.canAddOutput(sessionOutput) {
                    captureSession.addOutput(sessionOutput)
                    captureSession.startRunning()
                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    previewLayer.connection.videoOrientation = .portrait
                    cameraView.layer.addSublayer(previewLayer)
                    previewLayer.position = CGPoint(x: cameraView.frame.width / 2, y: cameraView.frame.height / 2)
                    previewLayer.bounds = cameraView.frame
                }
            }
        } catch {
            print("Could not create a AVCaptureSession")
        }
    }
}
If you want the camera to show fullscreen, you can simply use view instead of cameraView. In my camera implementation the camera feed does not cover the entire view; there's still some navigation stuff.
What's happening is that on that first launch you're activating the camera before it has checked the permissions and displayed the appropriate UIAlertController. What you want to do is wrap this code in an if statement that checks the camera permission status (AVAuthorizationStatus) and, if permission hasn't been determined yet, asks for it before displaying the camera. See this question for more help.
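A minimal sketch of that gating, in current Swift spelling; cameraBegin() is the method from the question. The hop back to the main queue matters because requestAccess's completion handler runs on an arbitrary queue, which is one plausible cause of a first-launch black preview.

switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    cameraBegin()
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        // Not called on the main queue; dispatch before starting the camera.
        DispatchQueue.main.async {
            if granted { self.cameraBegin() }
        }
    }
default:
    // .denied / .restricted: show UI pointing the user at Settings
    // instead of a silently black preview.
    break
}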
Here's the code I'm working with:
func toggleflash(On on: Bool) {
    guard let session = captureSession where session.running == true else {
        return
    }
    session.beginConfiguration()
    let input = session.inputs[0] as! AVCaptureDeviceInput
    if input.device.isFlashModeSupported(.On) {
        do {
            try input.device.lockForConfiguration()
            input.device.flashMode = .On
            input.device.unlockForConfiguration()
        } catch {}
    }
    session.commitConfiguration()
}
When I use this function to turn on flash for the front camera, the iPhone screen does everything you'd expect: it flashes, but it doesn't illuminate the subject the way the iPhone camera app, Snapchat, or Instagram do. So I wonder whether it's just not bright enough or it's not working correctly.
I am using AVFoundation for the camera.
This is my live preview:
It looks good. When the user presses the "Button", I create a snapshot on the same screen (like Snapchat).
I am using the following code to capture the image and show it on the screen:
self.stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
    (imageSampleBuffer: CMSampleBuffer!, _) in
    let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
    let pickedImage: UIImage = UIImage(data: imageDataJpeg)!
    self.captureSession.stopRunning()
    self.previewImageView.frame = CGRect(x: 0, y: 0, width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)
    self.previewImageView.image = pickedImage
    self.previewImageView.layer.zPosition = 100
}
After the user captures an image, the screen looks like this:
Please look at the marked area. It wasn't visible on the live preview screen (screenshot 1).
I mean the live preview is not showing everything. But I am sure my live preview works well, because I compared it with other camera apps and everything was the same as my live preview screen. I guess the problem is with the captured image.
I am creating the live preview with the following code:
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video.
        if device.hasMediaType(AVMediaTypeVideo) {
            // Finally check the position and confirm we've got the back camera.
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }
    if captureDevice != nil {
        beginSession()
    }
}

func beginSession() {
    let err: NSError? = nil
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
    } catch {
    }
    captureSession.addOutput(stillOutput)
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.cameraLayer.layer.addSublayer(previewLayer!)
    previewLayer?.frame = self.cameraLayer.frame
    captureSession.startRunning()
}
My cameraLayer:
How can I resolve this problem?
Presumably you are using an AVCaptureVideoPreviewLayer. So it sounds like this layer is incorrectly placed or incorrectly sized, or it has the wrong AVLayerVideoGravity setting. Part of the image is off the screen or cropped; that's why you don't see that part of what the camera sees while you are previewing.
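If the goal is instead to preview everything the sensor captures (letterboxed rather than cropped), one option, using the same-era constant as the question's code, is:

// Show the full frame, letterboxed, instead of cropping to fill the layer:
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect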
OK, I found the solution.
I used
captureSession.sessionPreset = AVCaptureSessionPresetHigh
instead of
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
and that fixed the problem. Presumably the Photo preset's 4:3 frames get cropped more aggressively by the resizeAspectFill preview than the High preset's 16:9 frames, so the captured photo contained areas the preview never showed.