I'd like to replace the camera feed in ARSCNView, as its quality is not as good as the native camera feed.
Does anyone know how to remove the camera feed in ARSCNView?
The above answer does not work for me; what does work is:
sceneView.scene.background.contents = UIColor.clear
You can simply do this on an instance of ARSCNView:
scene.background.contents = nil
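Putting both suggestions together, a minimal sketch (assuming an ARSCNView outlet named sceneView; reports in this thread differ on which value works, so try both):
import ARKit

// Hide the rendered camera image behind the AR scene.
sceneView.scene.background.contents = nil
// If `nil` has no effect on your iOS version, try:
// sceneView.scene.background.contents = UIColor.clear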
I have managed to setup a basic AVCaptureSession which records a video and saves it on device by using AVCaptureFileOutputRecordingDelegate. I have been searching through docs to understand how we can add statistics overlays on top of the video which is being recorded.
[Image: video preview with multiple statistics overlays]
As you can see in the image above, I have multiple overlays on top of the video preview layer. Now, when I save my video output, I would like to compose those views onto the video as well.
What have I tried so far?
Honestly, I have just been jumping around the internet trying to find a reputable blog post explaining how one would do this, but I have failed to find one.
I have read in a few places that one could render text overlays by creating a CALayer and adding it as a sublayer, as described in the following post.
But what if I want to render a MapView on top of the video being recorded? Also, I am not looking for screen capture: some of the content on the screen will not be part of the final recording, so I want to be able to cherry-pick the views that get composed.
What am I looking for?
Direction.
Not a straight-up solution.
Documentation links and the class names I should read up on to build this.
Progress So Far:
I have managed to understand that I need to get hold of the CVImageBuffer from the CMSampleBuffer and draw text over it. What is still unclear to me is whether it is possible to somehow overlay a MapView onto the video that is being recorded.
The best way to achieve your goal is to use the Metal framework. A Metal-based camera pipeline minimises the impact on the device's limited computational resources, and if you want the lowest-overhead access to the camera sensor, AVCaptureSession is a really good start.
You need to grab each frame's data from the CMSampleBuffer (you're right about that) and then convert the frame to an MTLTexture. AVCaptureSession will continuously send frames from the device's camera via a delegate callback (sketched below).
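A minimal sketch of that callback (assuming a configured AVCaptureSession with an AVCaptureVideoDataOutput whose sample buffer delegate is set to this class):
import AVFoundation

final class FrameReceiver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each camera frame arrives here as a CMSampleBuffer; convert it
        // to an MTLTexture as shown in the excerpt below.
    }
}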
All overlays must be converted to MTLTextures too. Then you can composite all the texture layers with an over operation (a sketch of that step follows the code excerpt below).
You'll find all the necessary info in this four-part Metal Camera series.
And here's a link to a blog post: About Compositing in Metal.
Also, here's a code excerpt (working with AVCaptureSession in Metal):
import AVFoundation
import Metal

enum MetalCameraSessionError: Error {
    case failedToGetImageBuffer
    case failedToCreateTextureCache
    case failedToCreateTextureFromImage
}

func texture(from sampleBuffer: CMSampleBuffer) throws -> MTLTexture {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        throw MetalCameraSessionError.failedToGetImageBuffer
    }

    // Texture cache for converting frame images to textures
    var textureCache: CVMetalTextureCache?

    // `MTLDevice` for initializing the texture cache
    guard
        let metalDevice = MTLCreateSystemDefaultDevice(),
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &textureCache) == kCVReturnSuccess,
        let textureCache = textureCache
    else {
        throw MetalCameraSessionError.failedToCreateTextureCache
    }

    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)

    // Assumes a single-plane BGRA pixel buffer; adjust for your capture format.
    let pixelFormat = MTLPixelFormat.bgra8Unorm
    let planeIndex = 0

    var imageTexture: CVMetalTexture?
    let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, imageBuffer, nil, pixelFormat, width, height, planeIndex, &imageTexture)

    // The `MTLTexture` ends up in the `texture` constant.
    guard
        let unwrappedImageTexture = imageTexture,
        let texture = CVMetalTextureGetTexture(unwrappedImageTexture),
        result == kCVReturnSuccess
    else {
        throw MetalCameraSessionError.failedToCreateTextureFromImage
    }

    return texture
}
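For the over compositing step itself, here is a minimal sketch of my own (not from the linked project), assuming iOS 11+ and that the camera frame and an overlay are already MTLTextures; it uses Core Image's composited(over:) on top of Metal rather than a hand-written kernel, and outputTexture is assumed to be a writable (renderable) texture:
import CoreImage
import Metal

// Composite `overlayTexture` over `cameraTexture` into `outputTexture`
// using the Porter-Duff "over" operation via Core Image.
func compositeOver(cameraTexture: MTLTexture,
                   overlayTexture: MTLTexture,
                   outputTexture: MTLTexture,
                   device: MTLDevice,
                   commandQueue: MTLCommandQueue) {
    let context = CIContext(mtlDevice: device)
    guard
        let background = CIImage(mtlTexture: cameraTexture, options: nil),
        let overlay = CIImage(mtlTexture: overlayTexture, options: nil),
        let commandBuffer = commandQueue.makeCommandBuffer()
    else { return }
    // `composited(over:)` is the standard "over" operation.
    let result = overlay.composited(over: background)
    context.render(result,
                   to: outputTexture,
                   commandBuffer: commandBuffer,
                   bounds: result.extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    commandBuffer.commit()
}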
And here you can find the final project on GitHub: MetalRenderCamera
I'm trying to create some animations to be shown over the front-facing camera in iOS, with SceneKit.
Something similar to what ARKit does on the back camera.
I am not using an iPhone X or ARKit (ARKit does not work with the front-facing camera).
Right now I'm not able to combine the camera and the SceneKit scene so that both appear. I can see either the front camera view from:
AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .front)
or I can see the scene view.
It should look like the live front-facing camera feed as a backdrop, with all the nodes in the scene appearing on top of it.
Another way to think of it is that I want the background of the scene to be a live feed from the front-facing camera.
How would I do this?
SCNScene.background is an instance of SCNMaterialProperty, which can take an AVCaptureDevice instance as its contents. The online documentation doesn't reflect this iOS 11 addition yet, but it's mentioned in the SceneKit headers as well as in the WWDC '17 presentation.
// Setup background video
let captureDevice: AVCaptureDevice = ...
scene.background.contents = captureDevice
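A fuller sketch, combining the device lookup from the question with that assignment (assuming iOS 11+ and a device that has a front-facing wide-angle camera):
import SceneKit
import AVFoundation

func setUpFrontCameraBackground(for sceneView: SCNView) {
    // Grab the front-facing wide-angle camera, if present.
    guard let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                      for: .video,
                                                      position: .front) else {
        return
    }
    // SceneKit manages the capture session itself once the device is
    // assigned as the scene's background contents.
    sceneView.scene?.background.contents = captureDevice
}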
Are there any ways in Swift to take high-quality pictures with the scene view's camera session, like Snapchat does?
Or at least without the SCNNode objects in the picture?
Because I don't want to initialize a second camera frame to take pictures.
I want it to be integrated like the Snapchat camera.
Best regards,
moe
Yep.
ARSCNView has a built-in snapshot method that returns a UIImage of the AR scene.
So in Objective-C you could do:
UIImage *image = [sceneView snapshot];
and in Swift:
let image = sceneView.snapshot()
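For example, to save that snapshot to the photo library (a usage sketch; it assumes your Info.plist contains the NSPhotoLibraryAddUsageDescription key):
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)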
I am building an iOS app using Swift which requires QR code scanner functionality.
I have implemented a QR code scanner using AVFoundation; right now my capture screen looks the same as a video recording screen, i.e. the AVCaptureVideoPreviewLayer shows whatever the camera is capturing.
But since it is a QR code scanner and not a regular image or video capture, I would like my VideoPreviewLayer to look like this:
I understand this can be achieved by adding another VideoPreviewLayer on top of the first one.
My questions are:
How do I add borders only to the edges of the upper (smaller) preview layer?
How do I lower the brightness of the VideoPreviewLayer in the background?
How do I ignore media captured by the background layer?
You shouldn't use another VideoPreviewLayer. Instead, you should add two sublayers: one for the masked background area and one for the corners.
Have a look at the source code in this repo for an example.
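For the masked background, a minimal sketch of my own (not from the repo), using a CAShapeLayer with the even-odd fill rule; scanRect is an assumed CGRect for the scan window inside the preview layer's bounds:
import UIKit

// Dim everything except the scan window.
let dimLayer = CAShapeLayer()
let path = UIBezierPath(rect: view.bounds)
path.append(UIBezierPath(roundedRect: scanRect, cornerRadius: 8))
dimLayer.path = path.cgPath
dimLayer.fillRule = .evenOdd            // punches a hole over scanRect
dimLayer.fillColor = UIColor.black.cgColor
dimLayer.opacity = 0.5                  // controls the background "brightness"
view.layer.insertSublayer(dimLayer, above: videoPreviewLayer)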
To limit the video capturing to the masked area, you have to set the rectOfInterest of your AVCaptureMetadataOutput:
let rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: rect)
metadataOutput.rectOfInterest = rectOfInterest
Long story short: you can use an AVCaptureVideoPreviewLayer for the video capture, create another CALayer, and use layer.insertSublayer(..., above: ...) to insert your "custom" layer above the video layer. By "custom" I mean just yet another CALayer with, say,
layer.contents = spinner.cgImage
Here are some more detailed instructions.
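Put together, a minimal sketch (assuming spinner is a UIImage and videoPreviewLayer is your AVCaptureVideoPreviewLayer, already added to the view):
import UIKit

let overlayLayer = CALayer()
overlayLayer.frame = view.bounds
overlayLayer.contents = spinner.cgImage
view.layer.insertSublayer(overlayLayer, above: videoPreviewLayer)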
I am currently dealing with an issue related to UIKit/UIGraphics in Swift (the same APIs also exist in Objective-C's UIKit/UIGraphics).
I am programmatically capturing a UIView to save it locally. The simple code goes like this:
UIGraphicsBeginImageContextWithOptions(view.frame.size, false, UIScreen.main.scale)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// use image var
However, this code does not capture other content on the screen, such as a video player: the video player ends up as a black frame and does not render into the image context.
Native iOS clearly can do this; you can screenshot video easily.
What's the solution here? Thanks for your help.