Are there any ways in Swift to take high-quality pictures with the SceneView camera session, like Snapchat does?
Or at least without the SCNNode objects in the picture?
I don't want to set up a second camera session just to take pictures;
I want it integrated like the Snapchat camera.
Best regards,
moe
Yep
ARSCNView has a built-in snapshot method that returns a UIImage of the AR scene.
So in Objective-C you could do:
UIImage *image = [sceneView snapshot];
and in Swift:
let image = sceneView.snapshot()
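If you want the raw camera image with no SCNNodes rendered at all, one option is to read the current ARFrame's capturedImage instead of snapshotting the view. A minimal sketch, assuming sceneView is the ARSCNView running the session:

import ARKit
import CoreImage
import UIKit

// A sketch: capturedImage is the raw camera buffer, so no SCNNodes
// appear in it, and it is typically higher resolution than the
// rendered scene.
func cameraOnlyImage(from sceneView: ARSCNView) -> UIImage? {
    guard let frame = sceneView.session.currentFrame else { return nil }
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return nil }
    // The buffer is landscape-oriented; rotate to match your UI if needed.
    return UIImage(cgImage: cgImage)
}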
In my project I need to draw a 2D image in real time, following UIGestureRecognizer updates.
The image would be the same UIImage, drawn at various positions.
let arrayOfPositions = [pos1,pos2,pos3]
And I need to transfer the resulting image on the MetalLayer into a single UIImage; the result will have the same size as the device's screen.
Something similar to:
let resultImage = UIGraphicsGetImageFromCurrentImageContext()
I'm new to Metal, and after watching Realm's video and reading Apple's documentation my sanity descended into chaos. Most tutorials focus on 3D rendering, which is beyond my needs (and my knowledge).
Could anyone show me some simple code for drawing a UIImage into a MetalLayer and then converting the whole thing into a single UIImage? Thanks.
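If the Metal requirement turns out to be negotiable, here is a minimal Core Graphics sketch of the compositing step, assuming stamp is the repeated UIImage and arrayOfPositions holds CGPoints in screen coordinates (both names are placeholders):

import UIKit

// Draw the same image at each recorded position, then read everything
// back as one UIImage the size of the screen.
func composite(stamp: UIImage, at arrayOfPositions: [CGPoint]) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: UIScreen.main.bounds.size)
    return renderer.image { _ in
        for position in arrayOfPositions {
            stamp.draw(at: position) // `position` is the image's top-left corner
        }
    }
}

A Metal version would instead draw the image as a textured quad at each position and then read the drawable's texture back into a CGImage.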
I'd like to replace the camera feed in ARSCNView as the camera quality is not as good as the native camera feed.
Does anyone know how to remove the camera feed in ARSCNView?
The other answer here, setting the background contents to nil, does not work for me, but
sceneView.scene.background.contents = UIColor.clear
does work for me.
You can simply do this on an instance of ARSCNView:
sceneView.scene.background.contents = nil
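For completeness, a minimal sketch combining both suggestions from this thread, assuming sceneView is the ARSCNView in question:

import ARKit
import UIKit

// A sketch: both lines below aim to remove the rendered camera feed.
// Reports in this thread are mixed, so try nil first and fall back to
// a clear color if the feed still shows.
func removeCameraFeed(from sceneView: ARSCNView) {
    sceneView.scene.background.contents = nil
    // Fallback that worked for others:
    // sceneView.scene.background.contents = UIColor.clear
}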
I am building an iOS app in Swift that requires QR code scanner functionality.
I have implemented a QR code scanner using AVFoundation; right now my capture screen looks the same as a video recording screen, i.e. the AVCaptureVideoPreviewLayer shows whatever the camera is capturing.
But since it is a QR code scanner and not a regular image or video capture, I would like my VideoPreviewLayer to look like this:
I understand this can be achieved by adding another VideoPreviewLayer on top of the first one.
My questions are:
How do I add borders only to the edges of the upper (smaller) preview layer?
How do I change the brightness level of the VideoPreviewLayer in the background?
How do I ignore media captured by the background layer?
You shouldn't use another VideoPreviewLayer. Instead, add two sublayers: one for the masked background area and one for the corners.
Have a look at the source code in this repo for an example.
To limit video capturing to the masked area, set the rectOfInterest of your AVCaptureMetadataOutput:
// `rect` is the scan area in the preview layer's coordinate space.
let rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: rect)
metadataOutput.rectOfInterest = rectOfInterest
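A minimal sketch of the two-sublayer idea, assuming previewView hosts the video preview layer and scanRect is the transparent scan window (both names are placeholders):

import UIKit

// Sublayer 1 dims everything outside the scan window using the
// even-odd fill rule; sublayer 2 draws a border around the window.
func addScannerOverlay(to previewView: UIView, scanRect: CGRect) {
    let maskPath = UIBezierPath(rect: previewView.bounds)
    maskPath.append(UIBezierPath(rect: scanRect))
    let dimLayer = CAShapeLayer()
    dimLayer.path = maskPath.cgPath
    dimLayer.fillRule = .evenOdd // the inner rect becomes a hole
    dimLayer.fillColor = UIColor.black.withAlphaComponent(0.5).cgColor
    previewView.layer.addSublayer(dimLayer)

    // A plain border keeps the sketch short; real corner brackets would
    // use a custom path stroking only the corners.
    let borderLayer = CAShapeLayer()
    borderLayer.path = UIBezierPath(rect: scanRect).cgPath
    borderLayer.strokeColor = UIColor.green.cgColor
    borderLayer.fillColor = UIColor.clear.cgColor
    borderLayer.lineWidth = 2
    previewView.layer.addSublayer(borderLayer)
}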
Long story short: you can use AVCaptureVideoPreviewLayer for video capturing, create another CALayer(), and use layer.insertSublayer(..., above: ...) to insert your "custom" layer above the video layer. By custom I mean just yet another CALayer with, say,
layer.contents = spinner.cgImage
Here are some more detailed instructions.
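A short sketch of that layering, assuming session is a configured AVCaptureSession and overlayImage is the UIImage to draw on top (hypothetical names):

import AVFoundation
import UIKit

// Put the live video at the bottom, then insert a plain CALayer with
// a CGImage as its contents above it.
func setUpLayers(on view: UIView, session: AVCaptureSession, overlayImage: UIImage) {
    let videoLayer = AVCaptureVideoPreviewLayer(session: session)
    videoLayer.frame = view.bounds
    videoLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(videoLayer)

    let overlay = CALayer()
    overlay.frame = view.bounds
    overlay.contents = overlayImage.cgImage
    view.layer.insertSublayer(overlay, above: videoLayer)
}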
I am currently dealing with an issue related to UIKit/UIGraphics in Swift (the same APIs also exist in Objective-C's UIKit/UIGraphics).
I am programmatically capturing a UIView to save it locally. The simple code goes like this:
// Render the view's layer into a bitmap context at screen scale.
UIGraphicsBeginImageContextWithOptions(view.frame.size, false, UIScreen.main.scale)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// use the image
However, this code does not capture certain on-screen content, such as a video player: the video ends up as a black frame and does not render into the image context.
Native iOS clearly can do this; you can screenshot a video easily.
What's the solution here? Thanks for your help.
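AVPlayerLayer is composited outside of the normal view rendering, which is why renderInContext leaves it black. One workaround is to pull the current frame from an AVPlayerItemVideoOutput and draw it into the context yourself. A sketch, assuming player is the AVPlayer behind the player layer:

import AVFoundation
import CoreImage
import UIKit

// Attach a video output to the player item up front, then copy the
// frame that is currently being displayed when you take the capture.
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: nil)

func attachOutput(to player: AVPlayer) {
    player.currentItem?.add(videoOutput)
}

func currentVideoFrame(of player: AVPlayer) -> UIImage? {
    let time = player.currentTime()
    guard videoOutput.hasNewPixelBuffer(forItemTime: time),
          let buffer = videoOutput.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil)
    else { return nil }
    let ciImage = CIImage(cvPixelBuffer: buffer)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

You can then draw the returned UIImage into the same image context at the player layer's frame before calling UIGraphicsGetImageFromCurrentImageContext.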
Before Stack Overflow members answer with "You shouldn't, it's a privacy violation," let me counter with why there is a legitimate need for this.
I have a scenario where a user can change the camera device by swiping left and right. To keep this animation from looking like absolute crap, I need to grab a freeze frame before starting it.
The only sane answer I have seen is capturing the buffer of AVCaptureVideoDataOutput, which is fine, but then I can't let the user take video/photos with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, which is a nightmare to get a CGImage from with CGBitmapContextCreate. See: How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to UIImage in iOS
When capturing a still photo, are there any serious quality considerations when using AVCaptureVideoDataOutput instead of AVCaptureStillImageOutput? (The user will be taking both video and still photos, not just freeze-frame preview stills.) Also, can someone explain it to me like I'm five: what is the difference between kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA, besides that one doesn't work on old hardware?
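For the buffer conversion mentioned above, a minimal sketch: CIImage accepts biplanar YCbCr pixel buffers directly, which sidesteps CGBitmapContextCreate entirely (assuming the sample buffer comes from your AVCaptureVideoDataOutput callback):

import AVFoundation
import CoreImage
import UIKit

// CIImage understands kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
// so no manual bitmap plumbing is needed.
let ciContext = CIContext()

func image(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}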
I don't think there is a way to directly capture a preview image using AVFoundation. You could, however, capture the preview layer by doing the following:
// Render the preview layer into an image context and read it back.
UIGraphicsBeginImageContext(previewView.frame.size);
[previewLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Here previewLayer is the AVCaptureVideoPreviewLayer added as a sublayer of previewView's layer. "image" is rendered from this layer and can be used for your animation.
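The same capture in Swift, assuming previewView hosts the AVCaptureVideoPreviewLayer as a sublayer:

import UIKit

// Render the preview view's layer tree into a bitmap and return it.
func freezeFrame(of previewView: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: previewView.frame.size)
    return renderer.image { context in
        previewView.layer.render(in: context.cgContext)
    }
}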