How to add an overlay on AVCaptureVideoPreviewLayer? (iOS)

I am building an iOS app using Swift which requires QR code scanner functionality.
I have implemented a QR code scanner using AVFoundation. Right now my capture screen looks the same as a video recording screen, i.e. the AVCaptureVideoPreviewLayer shows whatever the camera is capturing.
But since it is a QR code scanner and not a regular image or video capture, I would like my preview layer to look like a typical scanner screen: a small, bright scan area with a border, over a dimmed full-screen preview.
I understand this can be achieved by adding a second VideoPreviewLayer on top of the first one.
My questions are:
How do I add the borders only to the edges in the upper (or smaller) preview layer?
How do I change the brightness level for the VideoPreviewLayer in the background?
How do I ignore media captured by the background layer?

You shouldn't use another VideoPreviewLayer. Instead you should add two sublayers - one for the masked background area and one for the corners.
Have a look at the source code in this repo for an example.
To limit the video capturing to the masked area you have to set the rectOfInterest of your AVCaptureMetadataOutput.
let rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: rect)
metadataOutput.rectOfInterest = rectOfInterest
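Putting those pieces together, here is a minimal sketch. All names (previewLayer, scanRect, metadataOutput) are placeholders for objects your controller already owns. The dimming uses a CAShapeLayer with an even-odd fill rule, so the scan area is punched out of a translucent black overlay. The small normalizedRect helper only illustrates the 0...1 normalization that rectOfInterest expects; on a device you should prefer metadataOutputRectConverted(fromLayerRect:), which also accounts for rotation and video gravity.

```swift
import Foundation

// Illustrative helper: normalize a layer-space rect into the 0...1 space
// that AVCaptureMetadataOutput.rectOfInterest expects. On a device, prefer
// previewLayer.metadataOutputRectConverted(fromLayerRect:), which also
// accounts for rotation and video gravity.
func normalizedRect(for rect: CGRect, in bounds: CGRect) -> CGRect {
    CGRect(x: rect.minX / bounds.width,
           y: rect.minY / bounds.height,
           width: rect.width / bounds.width,
           height: rect.height / bounds.height)
}

// The framework-specific part is wrapped in canImport so the snippet also
// compiles where UIKit/AVFoundation are unavailable.
#if canImport(UIKit) && canImport(AVFoundation)
import UIKit
import AVFoundation

func addScannerOverlay(to previewLayer: AVCaptureVideoPreviewLayer,
                       scanRect: CGRect,
                       metadataOutput: AVCaptureMetadataOutput) {
    // 1. Dimmed background: a path covering the whole layer plus the scan
    //    rect; the even-odd fill rule punches the scan area out.
    let dimLayer = CAShapeLayer()
    let path = UIBezierPath(rect: previewLayer.bounds)
    path.append(UIBezierPath(rect: scanRect))
    dimLayer.path = path.cgPath
    dimLayer.fillRule = .evenOdd
    dimLayer.fillColor = UIColor.black.withAlphaComponent(0.5).cgColor
    previewLayer.addSublayer(dimLayer)

    // 2. Border around the scan area; a more elaborate UIBezierPath could
    //    stroke only the four corners instead of the full rectangle.
    let borderLayer = CAShapeLayer()
    borderLayer.path = UIBezierPath(rect: scanRect).cgPath
    borderLayer.strokeColor = UIColor.green.cgColor
    borderLayer.fillColor = nil
    borderLayer.lineWidth = 2
    previewLayer.addSublayer(borderLayer)

    // 3. Restrict metadata detection to the scan area.
    metadataOutput.rectOfInterest =
        previewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
}
#endif
```

Call this after layout (e.g. from viewDidLayoutSubviews) so previewLayer.bounds is final before the conversion.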

Long story short: you can use AVCaptureVideoPreviewLayer for video capturing, create another CALayer, and use layer.insertSublayer(_:above:) to insert your "custom" layer above the video layer. By "custom" I mean just yet another CALayer with, say,
layer.contents = spinner.cgImage
Here are some more detailed instructions.
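A compact sketch of that insertion, under the assumption that containerLayer is the layer already hosting the preview layer and image is any UIImage (both names are placeholders):

```swift
import Foundation

// Plain geometry helper: a rect of the given size centered in `bounds`,
// used below to position the overlay layer.
func centeredRect(size: CGSize, in bounds: CGRect) -> CGRect {
    CGRect(x: bounds.midX - size.width / 2,
           y: bounds.midY - size.height / 2,
           width: size.width,
           height: size.height)
}

// Wrapped in canImport so the snippet also compiles where UIKit/AVFoundation
// are unavailable.
#if canImport(UIKit) && canImport(AVFoundation)
import UIKit
import AVFoundation

func addOverlay(image: UIImage,
                above previewLayer: AVCaptureVideoPreviewLayer,
                in containerLayer: CALayer) {
    let overlay = CALayer()
    overlay.contents = image.cgImage
    overlay.frame = centeredRect(size: image.size, in: containerLayer.bounds)
    containerLayer.insertSublayer(overlay, above: previewLayer)
}
#endif
```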

Related

Show custom camera in shape of image with custom filters (iOS)

I am creating a feature that opens the front camera in a custom image shape. I am not sure how exactly this should be achieved; I have googled extensively for a way to do it, but with no success yet.
I have tried to make a layer using an AVCaptureSession and add it to the image layer, but it takes the whole square frame of the image rather than the image shape only (it does not ignore transparent pixels).
This is what I need to achieve: http://apple.co/2h7Oe8L. Please let me know if any library or framework is available, or whether I can do it using core Objective-C features.
Any reference or hint will be highly appreciated.
Thanks in advance.
Instead of adding the AVCapture layer to the image layer, add it to another view and then set an image view in the mask property of that view (UIView's mask expects a view, not a UIImage), i.e.:
let view = UIView()
view.layer.addSublayer(avCaptureLayer)
view.mask = UIImageView(image: maskImage)
addSubview(view)
You need to add the new view to the hierarchy, but the mask image view doesn't need to be added.
You can use Auto Layout to position and resize the new view, but the mask image view must be resized/repositioned directly via its frame.
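The frame arithmetic for the mask can be sketched as follows; makeMaskedCameraView and its parameter names are illustrative, and aspectFitRect is a plain helper for the manual sizing mentioned above:

```swift
import Foundation

// Plain helper: aspect-fit `content` into `container`, i.e. the frame you
// would give the mask image view when sizing it directly.
func aspectFitRect(content: CGSize, in container: CGRect) -> CGRect {
    let scale = min(container.width / content.width,
                    container.height / content.height)
    let size = CGSize(width: content.width * scale,
                      height: content.height * scale)
    return CGRect(x: container.midX - size.width / 2,
                  y: container.midY - size.height / 2,
                  width: size.width,
                  height: size.height)
}

// Wrapped in canImport so the snippet also compiles where UIKit/AVFoundation
// are unavailable.
#if canImport(UIKit) && canImport(AVFoundation)
import UIKit
import AVFoundation

func makeMaskedCameraView(captureLayer: AVCaptureVideoPreviewLayer,
                          maskImage: UIImage,
                          frame: CGRect) -> UIView {
    let container = UIView(frame: frame)
    captureLayer.frame = container.bounds
    container.layer.addSublayer(captureLayer)

    // Only the opaque pixels of the mask image remain visible.
    let maskView = UIImageView(image: maskImage)
    maskView.frame = aspectFitRect(content: maskImage.size, in: container.bounds)
    container.mask = maskView
    return container
}
#endif
```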

iOS: drawViewHierarchyInRect:afterScreenUpdates: doesn't draw a AVPlayerLayer content

I tried to take a snapshot of a UIView which contains an AVPlayerLayer, however the video part was drawn black. On the other hand, when I use resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: then the resulting view contains the video snapshot correctly. However, I cannot get an image out of it, so I'm stuck with UIView instead of UIImage.
I thought that, unlike renderInContext, drawViewHierarchyInRect should also capture the "special" OpenGL and video layers. Is this not possible?

Camera Output onto SceneKit Object

I'm trying to use SceneKit in an application and am wanting to use images captured from an iPhone/iPad's camera (front or back) as a texture on an object in my SCNScene.
From everything that I can tell from the documentation as well as other questions here on Stack Overflow, I should just be able to create an AVCaptureVideoPreviewLayer with an appropriate AVCaptureSession and have it "just work". Unfortunately, it does not.
I'm using a line of code like this:
cubeGeometry.firstMaterial?.diffuse.contents = layer
Having the layer as the material seems to work because if I set the layer's backgroundColor, I see the background color, but the camera capturing does not appear to work. The layer is set up properly, because if I use the layer as a sublayer of the SCNView instead of on the object in the SCNScene, the layer appears properly in UIKit.
An example project can be found here.
You can use the USE_FRONT_CAMERA constant in GameViewController.swift to toggle between using front and back camera. You can use the USE_LAYER_AS_MATERIAL constant to toggle between using the AVCaptureVideoPreviewLayer as the texture for a material or as a sub layer in the SCNView.
I've found a pretty hacky workaround for this using some OpenGL calls, but I'd prefer to have this code working as a more general and less fragile solution. Anyone know how to get this working properly on device? I've tried both iOS 8.0 and iOS 9.0 devices.

Capturing a preview image with AVCaptureStillImageOutput

Before stackoverflow members answer with "You shouldn't. It's a privacy violation" let me counter with why there is a legitimate need for this.
I have a scenario where a user can change the camera device by swiping left and right. In order to make this animation not look like absolute crap, I need to grab a freeze frame before making this animation.
The only sane answer I have seen is capturing the buffer of AVCaptureVideoDataOutput, which is fine, but then I can't let the user take the video/photo with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, which is a nightmare to get a CGImage from with CGBitmapContextCreate (see How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to UIImage in iOS).
When capturing a still photo, are there any serious quality considerations when using AVCaptureVideoDataOutput instead of AVCaptureStillImageOutput? (The user will be taking both videos and still photos, not just freeze-frame preview stills.) Also, can someone "explain it to me like I'm five" regarding the differences between kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA, besides the fact that one doesn't work on old hardware?
I don't think there is a way to directly capture a preview image using AVFoundation. You could, however, capture the preview layer by doing the following:
UIGraphicsBeginImageContextWithOptions(previewView.frame.size, NO, 0); // 0 = use the screen scale
[previewLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
where previewLayer is the AVCaptureVideoPreviewLayer added to previewView. "image" is rendered from this layer and can be used for your animation.
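On the pixel-format part of the question: kCVPixelFormatType_32BGRA stores four bytes per pixel, while kCVPixelFormatType_420YpCbCr8BiPlanarFullRange stores a full-resolution luma (Y) plane plus one half-resolution interleaved CbCr plane, about 1.5 bytes per pixel, because each 2x2 block of pixels shares one color sample. That is roughly why the camera pipeline favors it. A back-of-the-envelope calculation (ignoring any per-row padding a real CVPixelBuffer may add):

```swift
import Foundation

// 32BGRA: 4 bytes (blue, green, red, alpha) for every pixel.
func bytesPerFrameBGRA(width: Int, height: Int) -> Int {
    width * height * 4
}

// 420YpCbCr8BiPlanarFullRange: a full-resolution Y plane plus one
// half-resolution interleaved CbCr plane, i.e. ~1.5 bytes per pixel.
func bytesPerFrame420BiPlanar(width: Int, height: Int) -> Int {
    let luma = width * height                   // Y plane, full resolution
    let chroma = 2 * (width / 2) * (height / 2) // interleaved CbCr plane
    return luma + chroma
}

let w = 1920, h = 1080
print(bytesPerFrameBGRA(width: w, height: h))        // prints 8294400
print(bytesPerFrame420BiPlanar(width: w, height: h)) // prints 3110400
```

So a 1080p frame costs about 8.3 MB in BGRA but about 3.1 MB in 4:2:0, which matters at 30 or 60 frames per second.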

How do you get a UIImage from an AVCaptureVideoPreviewLayer?

I have already tried this solution: CGImage (or UIImage) from a CALayer. However, I do not get anything.
Like the question says, I am trying to get a UIImage from the camera's preview layer. I know I can either capture a still image or use the output sample buffer, but my session's video quality is set to photo, so either of these two approaches is slow and will give me a big image.
So what I thought could work is to get the image directly from the preview layer, since it has exactly the size I need and the operations have already been applied to it. I just don't know how to get this layer to draw into my context so that I can get it as a UIImage.
Perhaps another solution would be to use OpenGL to get this layer directly as a texture?
Any help will be appreciated, thanks.
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method, -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc.) in which the views are rendered.
In my experience with AVFoundation it is not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view, without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer, and if you try to make an image from that view you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each has its own limitations: the former can't work simultaneously with an AVCaptureMovieFileOutput acquisition, and the latter makes the shutter noise.
