I am trying to develop an augmented reality app using Swift and SceneKit on iOS. Is there a way to draw the video captured by the device camera as the background of the scene?
This worked for me. I used AVFoundation to capture the video input from the device camera:
let captureSession = AVCaptureSession()
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

if let videoDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo) {
    var err: NSError? = nil
    if let videoIn = AVCaptureDeviceInput.deviceInputWithDevice(videoDevice, error: &err) as? AVCaptureDeviceInput {
        if err == nil {
            if captureSession.canAddInput(videoIn as AVCaptureInput) {
                captureSession.addInput(videoIn as AVCaptureDeviceInput)
            } else {
                println("Failed to add video input.")
            }
        } else {
            println("Failed to create video input.")
        }
    } else {
        println("Failed to create video capture device.")
    }
}

captureSession.startRunning()
At this point, based on Apple's documentation of the background property of SCNScene, I expected to be able to assign the AVCaptureVideoPreviewLayer instance to the scene's background.contents, with something like:
previewLayer.frame = sceneView.bounds
sceneView.scene.background.contents = previewLayer
This changed the background color of my scene from the default white to black, but showed no video. This may be an iOS bug.
So, Plan B: instead, I added the AVCaptureVideoPreviewLayer as a sublayer of a UIView's layer:
previewLayer.frame = self.view.bounds
self.view.layer.addSublayer(previewLayer)
I then added an SCNView as a subview of that same UIView, setting the SCNView's background color to clear:
let sceneView = SCNView()
sceneView.frame = self.view.bounds
sceneView.backgroundColor = UIColor.clearColor()
self.view.addSubview(sceneView)
The device camera's video is now visible as the background of the scene.
I've created a small demo.
As of iOS 11, you can now use an AVCaptureDevice as a material on an object or the scene's background (see "Using Animated Content" here)
Example:
let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
scnScene.background.contents = captureDevice
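The same approach also works for a material on an object in the scene, not just the background. A minimal sketch (iOS 11+), assuming a node that already has geometry and is part of the scene:

import SceneKit
import AVFoundation

// Show the rear camera feed on a node's material instead of the scene
// background. `node` is assumed to already be in the scene.
func showCameraFeed(on node: SCNNode) {
    guard let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                      for: .video,
                                                      position: .back) else { return }
    node.geometry?.firstMaterial?.diffuse.contents = captureDevice
}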
Yes, you can use an AVCaptureVideoPreviewLayer as the contents of a material property (just make sure that you give the layer a bounds).
The scene view has a material property for the background that you can assign the video preview to (assign the layer to the background's contents).
var background: SCNMaterialProperty! { get }
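For example (a minimal sketch in current Swift, assuming captureSession and sceneView are already set up as in the answer above):

// Give the preview layer an explicit bounds before assigning it to the
// scene's background material property.
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.videoGravity = .resizeAspectFill
previewLayer.bounds = sceneView.bounds
sceneView.scene?.background.contents = previewLayer
captureSession.startRunning()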
Related
I am creating a ViewController in which I want to have a somewhat small UIView in the corner to display the camera preview. I am using a function to do this. However, when I pass the small UIView into the function, the camera preview does not show up. The weird thing is that if I tell the function to display the preview on self.view, everything works fine and I can see the camera preview. For this reason I think the problem is with the way I insert the layer, or something similar.
Here is the function I am using to display the preview...
func displayPreview(on view: UIView) throws {
    guard let captureSession = self.captureSession, captureSession.isRunning else {
        throw CameraControllerError.captureSessionIsMissing
    }

    self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    self.previewLayer?.connection?.videoOrientation = .portrait

    view.layer.insertSublayer(self.previewLayer!, at: 0)
    self.previewLayer?.frame = view.frame
}
I call this function from inside another function which handles setting up the capture session and other similar things.
func configureCameraController() {
    cameraController.prepare { (error) in
        if let error = error {
            print("ERROR")
            print(error)
        } else {
        }
        print("hello")
        try! self.cameraController.displayPreview(on: self.mirrorView)
    }
}
configureCameraController()
How can I get the camera preview layer to show up on the smaller UIView?
Can you try adding the following
let rootLayer: CALayer = self.yourSmallerView.layer
rootLayer.masksToBounds = true
self.previewLayer.frame = rootLayer.bounds
rootLayer.addSublayer(self.previewLayer)
in place of
view.layer.insertSublayer(self.previewLayer!, at: 0)
Also ensure that yourSmallerView.contentMode = UIViewContentMode.scaleToFill.
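One more thing worth checking: if the smaller view is resized later by Auto Layout, the layer's frame won't follow it automatically. A minimal sketch of keeping them in sync, assuming the view controller that owns mirrorView can reach the controller's previewLayer:

// CALayers don't participate in Auto Layout, so re-apply the frame whenever
// the container view is laid out. Assumes `cameraController.previewLayer`
// is accessible from the view controller that owns `mirrorView`.
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    cameraController.previewLayer?.frame = mirrorView.layer.bounds
}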
I am looking to display a UIView subclass within a UIStackView. The subclass is called PLBarcodeScannerView and uses AVCaptureMetadataOutput to detect barcodes within the camera's field of view. Because this view is not the entire screen, I need to set the region of interest to match the frame of the PLBarcodeScannerView, since the user only sees a portion of the camera view and we want to be sure that the barcode visible in the view is the one being scanned.
Issue
I cannot seem to set the metadataOutputRectOfInterest properly, nor does the "zoom level" of the preview layer on this view seem correct, although the aspect ratio is correct. The system does receive barcodes successfully, but they are not always visible within the preview window; codes are still scanned even when they lie outside the visible preview window.
Screenshot:
The colorful photo is the PLBarcodeScannerView. Only codes which are fully visible inside this view should be considered.
Below is the code that initializes the view:
This is called within the init methods of PLBarcodeScannerView:UIView
func setupView() {
    session = AVCaptureSession()

    let tap = UITapGestureRecognizer(target: self, action: #selector(self.resume))
    addGestureRecognizer(tap)

    // Set the captureDevice.
    let videoCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

    // Create input object.
    let videoInput: AVCaptureDeviceInput?
    do {
        videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
    } catch {
        return
    }

    // Add input to the session.
    if (session!.canAddInput(videoInput)) {
        session!.addInput(videoInput)
    } else {
        scanningNotPossible()
    }

    // Create output object.
    let metadataOutput = AVCaptureMetadataOutput()

    // Add output to the session.
    if (session!.canAddOutput(metadataOutput)) {
        session!.addOutput(metadataOutput)

        // Send captured data to the delegate object via a serial queue.
        metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)

        // Set barcode type for which to scan: EAN-13.
        metadataOutput.metadataObjectTypes = [
            AVMetadataObjectTypeCode128Code
        ]
    } else {
        scanningNotPossible()
    }

    // Determine the size of the region of interest
    let x = self.frame.origin.x/UIScreen.main.bounds.width
    let y = self.frame.origin.y/UIScreen.main.bounds.height
    let width = self.frame.width/UIScreen.main.bounds.height
    let height = self.frame.height/UIScreen.main.bounds.height
    let scanRectTransformed = CGRect(x: x, y: y, width: 1, height: height)
    metadataOutput.metadataOutputRectOfInterest(for: scanRectTransformed)

    // Add previewLayer and have it show the video data.
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = self.bounds
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    layer.addSublayer(previewLayer)

    // Begin the capture session.
    session!.startRunning()
}
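For reference, the usual way to get this rectangle is to let the preview layer do the conversion once the session is running, and to assign the result to the output's rectOfInterest. A minimal sketch in current Swift, reusing the names from the code above (session, previewLayer, metadataOutput):

// The conversion may not return meaningful values until the session is
// delivering frames, so do it after startRunning() (or from the
// .AVCaptureInputPortFormatDescriptionDidChange notification).
session?.startRunning()
let visibleRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
metadataOutput.rectOfInterest = visibleRect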
Of course, with iOS 10 you now have to add the camera usage key to your Info.plist to use the phone's camera. On first launch, the user gets the system permission prompt, and all is well.
BUT we have a client app that is a "camera app": when you launch the app, it immediately launches the camera, and while the app is running, the camera is running and shown fullscreen. The code to do so is the usual way; see below.
The problem is that on the first launch of the app on a phone, the user is asked the question and says yes. But then the camera is just black on the devices we have tried. It does not crash (as it would if you forgot the plist item); it just goes black and stays black.
If the user quits the app and launches it again - it's fine, everything works.
What the heck is the workflow for a "camera app"? I can't see a good solution, but there must be one for the various camera apps out there - which immediately go to fullscreen camera when you launch the app.
class CameraPlane: UIViewController {
    ...

    func cameraBegin() {
        captureSession = AVCaptureSession()
        captureSession!.sessionPreset = AVCaptureSessionPresetPhoto

        let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
        } catch let error1 as NSError {
            error = error1
            input = nil
        }

        if error != nil {
            print("probably on simulator? no camera?")
            return
        }

        if captureSession!.canAddInput(input) == false {
            print("capture session problem?")
            return
        }
        captureSession!.addInput(input)

        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

        if captureSession!.canAddOutput(stillImageOutput) == false {
            print("capture session with stillImageOutput problem?")
            return
        }
        captureSession!.addOutput(stillImageOutput)

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
        // or, AVLayerVideoGravityResizeAspect
        fixConnectionOrientation()
        view.layer.addSublayer(previewLayer!)
        captureSession!.startRunning()
        previewLayer!.frame = view.bounds
    }
}
Note: it's likely the OP's code was actually working correctly in terms of the new iOS 10 permission string, and the OP had another problem causing the black screen.
From the code you've posted, I can't tell why you experience this kind of behavior. I can only give you the code that is working for me.
This code also runs on iOS 9. Note that I am loading the camera in viewDidAppear to make sure that all constraints are set.
import AVFoundation

class ViewController: UIViewController {

    // The view where the camera feed is shown.
    @IBOutlet weak var cameraView: UIView!

    var captureSession: AVCaptureSession = {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSessionPresetPhoto
        return session
    }()

    var sessionOutput = AVCaptureStillImageOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice]
        guard let backCamera = (devices?.first { $0.position == .back }) else {
            print("The back camera is currently not available")
            return
        }

        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                sessionOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

                if captureSession.canAddOutput(sessionOutput) {
                    captureSession.addOutput(sessionOutput)
                    captureSession.startRunning()

                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    previewLayer.connection.videoOrientation = .portrait
                    cameraView.layer.addSublayer(previewLayer)
                    previewLayer.position = CGPoint(x: cameraView.frame.width / 2, y: cameraView.frame.height / 2)
                    previewLayer.bounds = cameraView.frame
                }
            }
        } catch {
            print("Could not create a AVCaptureSession")
        }
    }
}
If you want the camera to show fullscreen, you can simply use view instead of cameraView. In my camera implementation the camera feed does not cover the entire view; there's still some navigation stuff.
What's happening is that on that first launch, you're activating the camera before the system has checked the permissions and displayed the appropriate UIAlertController. What you want to do is wrap this code in a check of the camera permission status (AVAuthorizationStatus) and, if permission hasn't been determined yet, ask for it before displaying the camera. See this question for more help.
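A minimal sketch of that check in current Swift, assuming it lives in the same controller as the cameraBegin() method from the question:

// Only start the camera once permission is actually granted; on first launch,
// defer starting until the user has answered the system prompt.
func startCameraWhenAuthorized() {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        cameraBegin()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async {
                if granted { self.cameraBegin() }
            }
        }
    default:
        print("Camera access denied or restricted")
    }
}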
I am using AVFoundation for the camera.
This is my live preview:
It looks good. When the user presses the "Button", I create a snapshot on the same screen (like Snapchat).
I am using the following code to capture the image and show it on the screen:
self.stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
    (imageSampleBuffer: CMSampleBuffer!, _) in

    let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
    let pickedImage: UIImage = UIImage(data: imageDataJpeg)!

    self.captureSession.stopRunning()
    self.previewImageView.frame = CGRect(x: 0, y: 0, width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)
    self.previewImageView.image = pickedImage
    self.previewImageView.layer.zPosition = 100
}
After user captures an image screen looks like this:
Please look at the marked area. It wasn't visible on the live preview screen (Screenshot 1).
I mean the live preview is not showing everything. But I am sure my live preview works well, because I compared it with other camera apps and everything was the same as on my live preview screen. I guess I have a problem with the captured image.
I am creating the live preview with the following code:
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)

    captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the back camera
            if (device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }

    if captureDevice != nil {
        beginSession()
    }
}

func beginSession() {
    let err: NSError? = nil
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
    } catch {
    }
    captureSession.addOutput(stillOutput)
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.cameraLayer.layer.addSublayer(previewLayer!)
    previewLayer?.frame = self.cameraLayer.frame
    captureSession.startRunning()
}
My cameraLayer:
How can I resolve this problem?
Presumably you are using an AVCaptureVideoPreviewLayer. So it sounds like this layer is incorrectly placed or incorrectly sized, or it has the wrong AVLayerVideoGravity setting. Part of the image is off the screen or cropped; that's why you don't see that part of what the camera sees while you are previewing.
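One quick way to confirm that, using the names from the question (this is a diagnostic sketch, not necessarily the final fix): switch the preview layer to aspect-fit so it letterboxes instead of crops, and size it against the container's bounds.

// Aspect-fill crops the preview to fill the layer, so parts of the frame that
// end up in the captured photo are never shown on screen; aspect-fit shows the
// whole frame (letterboxed). Use bounds, not frame, because the preview layer
// is a sublayer of cameraLayer's own layer.
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
previewLayer?.frame = self.cameraLayer.bounds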
OK, I found the solution. I used
captureSession.sessionPreset = AVCaptureSessionPresetHigh
instead of
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
and that fixed the problem.
Is it possible to apply a filter to an AVLayer and add it to a view with addSublayer? I want to change the colors and add some noise to the video from the camera using Swift, and I don't know how.
I thought it might be possible to add a filterLayer and a previewLayer like this:
self.view.layer.addSublayer(previewLayer)
self.view.layer.addSublayer(filterLayer)
and that this might produce video with my custom filter, but I think it could be done more effectively using AVComposition.
So what I need to know:
What is the simplest way to apply a filter to the camera video output in real time?
Is it possible to merge an AVCaptureVideoPreviewLayer and a CALayer?
Thanks for any suggestions.
There's another alternative: use an AVCaptureSession to create instances of CIImage to which you can apply CIFilters (of which there are loads, from blurs to color correction to VFX).
Here's an example using the ComicBook effect. In a nutshell, create an AVCaptureSession:
let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
Create an AVCaptureDevice to represent the camera; here I'm selecting the back camera:
let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
Then create a concrete implementation of the device and attach it to the session. In Swift 2, instantiating AVCaptureDeviceInput can throw an error, so we need to catch that:
do
{
let input = try AVCaptureDeviceInput(device: backCamera)
captureSession.addInput(input)
}
catch
{
print("can't access camera")
return
}
Now, here's a little 'gotcha': although we don't actually use an AVCaptureVideoPreviewLayer, it's required to get the sample delegate working, so we create one:
// although we don't use this, it's required to get captureOutput invoked
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
view.layer.addSublayer(previewLayer)
Next, we create a video output, AVCaptureVideoDataOutput, which we'll use to access the video feed:
let videoOutput = AVCaptureVideoDataOutput()
Ensuring that self implements AVCaptureVideoDataOutputSampleBufferDelegate, we can set the sample buffer delegate on the video output:
videoOutput.setSampleBufferDelegate(self,
    queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))
The video output is then attached to the capture session:
captureSession.addOutput(videoOutput)
...and, finally, we start the capture session:
captureSession.startRunning()
Because we've set the delegate, captureOutput will be invoked with each frame capture. captureOutput is passed a sample buffer of type CMSampleBuffer and it just takes two lines of code to convert that data to a CIImage for Core Image to handle:
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)
...and that image data is passed to our Comic Book effect which, in turn, is used to populate an image view:
let comicEffect = CIFilter(name: "CIComicEffect")
comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)
let filteredImage = UIImage(CIImage: comicEffect!.valueForKey(kCIOutputImageKey) as! CIImage!)
dispatch_async(dispatch_get_main_queue()) {
    self.imageView.image = filteredImage
}
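Tying the delegate pieces together in current Swift syntax (the Swift 2 calls above map directly onto these; the class name and the imageView property here are assumptions, not part of the original project):

import AVFoundation
import CoreImage
import UIKit

// Hypothetical host: a view controller that owns the capture session from the
// walkthrough above and an image view for the filtered frames.
class FilteredCameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let imageView = UIImageView()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)

        // Apply the comic book filter to the incoming frame.
        let comicEffect = CIFilter(name: "CIComicEffect")!
        comicEffect.setValue(cameraImage, forKey: kCIInputImageKey)
        guard let filtered = comicEffect.outputImage else { return }

        // UIKit updates must happen on the main queue.
        DispatchQueue.main.async {
            self.imageView.image = UIImage(ciImage: filtered)
        }
    }
}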
I have the source code for this project available in my GitHub repo here.
If you're using an AVPlayerViewController, you can set the compositingFilter property of the view's layer:
playerController.view.layer.compositingFilter = "multiplyBlendMode"
See here for the compositing filter options you can use, e.g. "multiplyBlendMode", "screenBlendMode", etc.
Example of doing this in a UIViewController:
import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Load a movie called my_movie.mp4 that's in your Xcode project.
        let path = Bundle.main.path(forResource: "my_movie", ofType: "mp4")
        let player = AVPlayer(url: URL(fileURLWithPath: path!))

        // Make a movie player and set the filter.
        let playerController = AVPlayerViewController()
        playerController.player = player
        playerController.view.layer.compositingFilter = "multiplyBlendMode"

        // Add the player view controller to this view controller.
        self.addChild(playerController)
        view.addSubview(playerController.view)
        playerController.didMove(toParent: self)

        // Play the movie.
        player.play()
    }
}
For let path = Bundle.main.path(forResource: "my_movie", ofType:"mp4"), make sure you add the .mp4 file to Build Phases > Copy Bundle Resources in your Xcode project. Or check the 'add to target' boxes when you import the file.