I am currently dealing with an issue associated with UIKit/UIGraphics in Swift (the same APIs exist in Objective-C UIKit/UIGraphics).
I am programmatically capturing a UIView to save it locally. The simple code goes like this:
UIGraphicsBeginImageContextWithOptions(view.frame.size, false, UIScreen.main.scale)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// use image var
However, this code does not capture certain content on the screen, such as a video player: the video ends up as a black frame because the player's layer does not render into the image context.
Native iOS can clearly do this: the system screenshot captures video without any trouble.
What's the solution here? Thanks for your help.
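For what it's worth, one API commonly suggested for snapshotting video-backed views is UIView.drawHierarchy(in:afterScreenUpdates:), which draws the view's on-screen appearance rather than its layer tree. A minimal sketch follows; whether it actually picks up player content varies by player type and iOS version, so treat it as something to try rather than a guaranteed fix:
import UIKit

// Snapshot the view's on-screen appearance instead of its layer tree.
let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let snapshot = renderer.image { _ in
    _ = view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}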
Related
In my project I need to draw a 2D image in real time, following UIGestureRecognizer updates.
The image would be the same UIImage, drawn at various positions.
let arrayOfPositions = [pos1,pos2,pos3]
And I need to transfer the rendered result from the CAMetalLayer into a single UIImage; the result image will have the same size as the device's screen.
something similar to
let resultImage = UIGraphicsGetImageFromCurrentImageContext()
I'm new to Metal, and after watching Realm's video and reading Apple's documentation, my sanity descended into chaos. Most tutorials focus on 3D rendering, which is beyond my needs (and my knowledge).
Could anyone show me simple code for drawing a UIImage into a CAMetalLayer and then converting the whole thing into a single UIImage as the result? Thanks.
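For the purely 2D part of this (the same image stamped at several gesture positions, flattened into one screen-sized UIImage), plain Core Graphics may already be enough. A minimal sketch with no Metal involved; stampImage and the positions array are hypothetical stand-ins for the question's values:
import UIKit

// Stamp the same UIImage at several positions and flatten the result
// into a single screen-sized UIImage.
func composite(stampImage: UIImage, at positions: [CGPoint]) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: UIScreen.main.bounds)
    return renderer.image { _ in
        for position in positions {
            stampImage.draw(at: position)
        }
    }
}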
I have managed to set up a basic AVCaptureSession which records a video and saves it on the device using AVCaptureFileOutputRecordingDelegate. I have been searching through the docs to understand how to add statistics overlays on top of the video being recorded.
For example, as in the screenshot above, I have multiple overlays on top of the video preview layer. Now, when I save my video output, I would like to compose those views onto the video as well.
What have I tried so far?
Honestly, I have just been jumping around the internet trying to find a reputable blog explaining how one would do this, but have failed to find one.
I have read in a few places that one could render text-layer overlays, as described in the following post, by creating a CALayer and adding it as a sublayer.
But what if I want to render a MapView on top of the video being recorded? Also, I am not looking for screen capture: some of the content on the screen will not be part of the final recording, so I want to be able to cherry-pick the views that get composed.
What am I looking for?
Direction, not a straight-up solution.
Documentation links and class names I should be reading more about to create this.
Progress So Far:
I have managed to understand that I need to get hold of the CVImageBuffer from the CMSampleBuffer and draw text over it. It is still unclear to me whether it is possible to somehow overlay a MapView over the video that is being recorded.
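A minimal sketch of that CVImageBuffer approach (assuming 32BGRA frames; the overlay CGImage could come from snapshotting the MapView with something like UIGraphicsImageRenderer, which is the unverified part of this idea):
import AVFoundation
import CoreGraphics

// Draw an overlay image directly into a frame's pixel buffer.
func draw(overlay: CGImage, in sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    // Wrap the buffer's memory in a CGContext (valid for 32BGRA buffers).
    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: CVPixelBufferGetWidth(pixelBuffer),
        height: CVPixelBufferGetHeight(pixelBuffer),
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
    ) else { return }
    context.draw(overlay, in: CGRect(x: 0, y: 0, width: overlay.width, height: overlay.height))
}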
The best way to achieve your goal is to use the Metal framework. Using a Metal camera is good for minimising the impact on the device's limited computational resources, and if you are trying to achieve the lowest-overhead access to the camera sensor, an AVCaptureSession is a really good start.
You need to grab each frame's data from the CMSampleBuffer (you're right about that) and then convert it to an MTLTexture. The AVCaptureSession will continuously send frames from the device's camera via a delegate callback.
All of your overlays must be converted to MTLTextures too. Then you can composite all the MTLTexture layers with an "over" operation.
You'll find all the necessary info in the four-part Metal Camera series.
And here's a link to a blog post: About Compositing in Metal.
Also, I'd like to share a code excerpt (working with an AVCaptureSession in Metal):
import AVFoundation
import Metal

guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    // Handle an error here: the sample buffer carries no pixel buffer.
    return
}

// Texture cache for converting frame images to textures.
var textureCache: CVMetalTextureCache?

// `MTLDevice` for initializing the texture cache.
let metalDevice = MTLCreateSystemDefaultDevice()

guard
    let metalDevice = metalDevice,
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &textureCache) == kCVReturnSuccess,
    let textureCache = textureCache
else {
    // Handle an error here: failed to create the texture cache.
    return
}

let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)

var imageTexture: CVMetalTexture?

// `pixelFormat` and `planeIndex` depend on the buffer's format,
// e.g. `.bgra8Unorm` and plane 0 for a 32BGRA buffer.
let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, imageBuffer, nil, pixelFormat, width, height, planeIndex, &imageTexture)

// The `MTLTexture` is in the `texture` variable now.
guard
    let imageTexture = imageTexture,
    let texture = CVMetalTextureGetTexture(imageTexture),
    result == kCVReturnSuccess
else {
    throw MetalCameraSessionError.failedToCreateTextureFromImage
}
And here you can find the final project on GitHub: MetalRenderCamera.
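For the compositing step itself, here is one minimal sketch using Core Image's source-over operator between Metal textures (just one of several ways to do an "over" composite; it assumes both textures are .bgra8Unorm, equally sized, and created with the usage flags Core Image needs):
import CoreImage
import Metal

// Composite `overlay` over `background` and write the result into `output`.
func composite(overlay: MTLTexture, over background: MTLTexture,
               into output: MTLTexture, device: MTLDevice,
               commandQueue: MTLCommandQueue) {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard
        let overlayImage = CIImage(mtlTexture: overlay, options: [.colorSpace: colorSpace]),
        let backgroundImage = CIImage(mtlTexture: background, options: [.colorSpace: colorSpace]),
        let commandBuffer = commandQueue.makeCommandBuffer()
    else { return }
    let composited = overlayImage.composited(over: backgroundImage)
    let context = CIContext(mtlDevice: device)
    context.render(composited, to: output, commandBuffer: commandBuffer,
                   bounds: composited.extent, colorSpace: colorSpace)
    commandBuffer.commit()
}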
Are there any ways in Swift to take high-quality pictures with the SceneView camera session, like Snapchat does?
Or at least without the SCNNode objects in the picture?
I don't want to init a second camera frame to take pictures; I want it to be integrated like the Snapchat camera.
Best regards,
moe
Yep.
ARSCNView has a built-in snapshot method that returns a UIImage of the AR scene.
So in Objective-C you could do:
UIImage *image = [sceneView snapshot];
and in Swift:
let image = sceneView.snapshot()
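If the goal is specifically a picture without the SCNNode objects, one option (a sketch; the raw camera frame comes back in sensor orientation and resolution, so it may need rotating or cropping) is to read capturedImage from the current ARFrame:
import ARKit
import CoreImage

// Grab the raw camera image, with no SceneKit content rendered on top.
if let frame = sceneView.session.currentFrame {
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext()
    if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
        let cameraImage = UIImage(cgImage: cgImage)
        // Use cameraImage here.
    }
}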
I am building an iOS app using Swift which requires QR code scanner functionality.
I have implemented a QR code scanner using AVFoundation. Right now my capture screen looks the same as a video-recording screen, i.e. the AVCaptureVideoPreviewLayer shows whatever the camera is capturing.
But since it is a QR code scanner and not a regular image or video capture, I would like my VideoPreviewLayer to look like this:
I understand this can be achieved by adding another VideoPreviewLayer on top of the first VideoPreviewLayer.
My questions are:
How do I add the borders only to the edges of the upper (smaller) preview layer?
How do I change the brightness level of the VideoPreviewLayer in the background?
How do I ignore media captured by the background layer?
You shouldn't use another VideoPreviewLayer. Instead, you should add two sublayers: one for the masked background area and one for the corners.
Have a look at the source code in this repo for an example.
To limit the video capturing to the masked area, you have to set the rectOfInterest of your AVCaptureMetadataOutput:
let rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: rect)
metadataOutput.rectOfInterest = rectOfInterest
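As for the masked background sublayer mentioned above, something along these lines should work (a sketch; scanRect is a hypothetical rect for the cutout, and the corner layer is left out):
// Dim the whole preview except a transparent cutout, using an even-odd
// fill so the inner path punches a hole in the outer one.
let scanRect = CGRect(x: 40, y: 200, width: 240, height: 240) // hypothetical
let maskLayer = CAShapeLayer()
maskLayer.frame = videoPreviewLayer.bounds
let path = UIBezierPath(rect: videoPreviewLayer.bounds)
path.append(UIBezierPath(rect: scanRect))
maskLayer.path = path.cgPath
maskLayer.fillRule = .evenOdd
maskLayer.fillColor = UIColor.black.withAlphaComponent(0.5).cgColor
videoPreviewLayer.addSublayer(maskLayer)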
Long story short: you can use an AVCaptureVideoPreviewLayer for video capturing, create another CALayer(), and use layer.insertSublayer(..., above: ...) to insert your "custom" layer above the video layer. By "custom" I mean just yet another CALayer with, say,
layer.contents = spinner.cgImage
Here are some more detailed instructions.
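Putting that together, a minimal sketch (assuming spinner is a UIImage you already have and videoPreviewLayer is already installed in the view's layer):
// Insert an image-backed layer above the video preview layer.
let overlayLayer = CALayer()
overlayLayer.frame = CGRect(x: 20, y: 20, width: 100, height: 100)
overlayLayer.contents = spinner.cgImage
view.layer.insertSublayer(overlayLayer, above: videoPreviewLayer)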
Before Stack Overflow members answer with "You shouldn't. It's a privacy violation," let me counter with why there is a legitimate need for this.
I have a scenario where a user can change the camera device by swiping left and right. In order to make this animation not look like absolute crap, I need to grab a freeze frame before starting the animation.
The only sane answer I have seen is capturing the buffer from AVCaptureVideoDataOutput, which is fine, but then I can't let the user take the video/photo with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, which is a nightmare to get a CGImage from with CGBitmapContextCreate. See How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to UIImage in iOS.
When capturing a still photo, are there any serious quality considerations when using AVCaptureVideoDataOutput instead of AVCaptureStillImageOutput? (The user will be taking both video and still photos, not just freeze-frame preview stills.) Also, can someone "explain it to me like I'm five": what are the differences between kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA, besides the fact that one doesn't work on old hardware?
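One common way around the pixel-format nightmare (a sketch, assuming you control the data output's configuration) is simply to ask AVCaptureVideoDataOutput for BGRA frames up front:
import AVFoundation

// Request BGRA frames so each pixel buffer maps directly onto a
// CGBitmapContext, avoiding the biplanar YCbCr conversion entirely.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]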
I don't think there is a way to directly capture a preview image using AVFoundation. You could, however, capture the preview layer by doing the following:
UIGraphicsBeginImageContext(previewView.frame.size);
[previewLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
where previewLayer is the AVCaptureVideoPreviewLayer added to previewView. "image" is rendered from this layer and can be used for your animation.
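For reference, a Swift version of the same capture would look something like this (a sketch; note that on recent iOS versions the preview layer may render empty through this path, so test on a device):
let renderer = UIGraphicsImageRenderer(bounds: previewView.bounds)
let image = renderer.image { context in
    previewLayer.render(in: context.cgContext)
}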