AVFoundation -> record camera video with animated overlay - iOS

In my app I am recording camera output with AVFoundation. I am trying to save not just the camera output, but also the GUI layout (overlay) presented over the camera surface.
I am running some basic animations on the GUI layer (showing and hiding a UIView, and a UIProgressView animation).
So is it somehow possible to record the camera output together with the animated overlay?
My research:
1.) https://www.raywenderlich.com/2734-avfoundation-tutorial-adding-overlays-and-animations-to-videos
Post-processing is not an option, and this solution would not work for my problem.
2.) iPhone Watermark on recorded Video.
So it is possible to add a watermark. Maybe it would be possible to capture a frame from the camera, capture a frame from the overlay, and then composite the captured overlay frame on top of the captured camera frame? :(

My solution (I am working on it right now) is to create an object that subclasses CALayer and add it to your preview view's layer.
protocol VideoCameraLayer {
    func playAnimation()
    func stopAnimation()
}

class CircleFaceLayer: CALayer, VideoCameraLayer {
    override init() {
        super.init()
        // ... add sublayers to this layer or whatever you need
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func playAnimation() {
        // Add animation(s) to your layer/sublayers
    }

    func stopAnimation() {
        // Pause animation(s) on your layer/sublayers
    }
}
You could use this object, with its sublayers, both in the animationTool of the video composition you pass to your AVAssetExportSession (via AVVideoCompositionCoreAnimationTool) and in your preview layer in your UIView:
let animatedLayer = CircleFaceLayer() // keep the concrete CALayer type so it can be added as a sublayer
self.view.layer.addSublayer(animatedLayer)
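For the export side, a hedged sketch of what that animationTool wiring might look like (the asset URL, variable names, and preset here are my assumptions, not the author's code):

import AVFoundation
import UIKit

let asset = AVAsset(url: recordedMovieURL) // recordedMovieURL: your captured file (assumption)
let videoComposition = AVMutableVideoComposition(propertiesOf: asset)

// The video frames render into videoLayer; the animated overlay sits on top.
let videoLayer = CALayer()
videoLayer.frame = CGRect(origin: .zero, size: videoComposition.renderSize)
let overlayLayer = CircleFaceLayer()
overlayLayer.frame = videoLayer.frame

let parentLayer = CALayer()
parentLayer.frame = videoLayer.frame
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(overlayLayer)

videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, in: parentLayer)

let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
export?.videoComposition = videoComposition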

Related

setting CVImageBuffer to all black

I am trying to modify some sample code from Apple Developer for my own purposes (I am very new to iOS programming). I am trying to get images from the camera, apply some detection, and show just the detections.
Currently I am using AVCaptureVideoPreviewLayer, so the camera feed basically gets displayed on the screen. What I actually want is to zero out the camera feed and draw only some detections, so I am trying to handle this in the captureOutput function. Something like:
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Grab the pixel buffer frame from the camera output
        guard let pixelBuffer = sampleBuffer.imageBuffer else { return }
        // Here now I should be able to set it to 0s (all black)
    }
}
I am trying to do something basic like setting this CVImageBuffer to a black background, but I have not been able to figure that out in the last few hours!
EDIT
So, I discovered that I can do something like:
import VideoToolbox

var image: CGImage?
// Create a Core Graphics bitmap image from the buffer.
VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &image)
This copies the buffer data into a CGImage, which I can then use for my purposes. Now, is there an API that can basically make an all-black image with the same size as the one represented by the input image buffer?
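One way to do that, as a minimal sketch (the function name is mine, not an existing API), is to fill a plain Core Graphics context sized like the pixel buffer; alternatively you could lock the buffer and memset its bytes to zero, but this stays at the CGImage level the EDIT already uses:

import CoreGraphics
import CoreVideo
import UIKit

func makeBlackImage(matching pixelBuffer: CVPixelBuffer) -> CGImage? {
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // An opaque RGB context; fill it explicitly so every pixel is black.
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }
    context.setFillColor(UIColor.black.cgColor)
    context.fill(CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()
}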

Take ARSCNView snapshot with subview

I am using ARSCNView.snapshot() to take a snapshot in my iOS app, with a picture used as a frame. The picture frame is a UIImageView added as a subview of the ARSCNView. However, the taken picture only contains the scene, without the frame image. I also tried this method (https://stackoverflow.com/a/45346555/10057340) to render the ARSCNView as an image, but I only got the image frame on a white background rather than the image frame over the camera scene. Can anyone suggest a way of doing this? Thanks!!
Below is how I used snapshot():
var frameImageView: UIImageView!
var sceneView: ARSCNView!

func takeSnapShot() {
    frameImageView.image = UIImage(named: "frame")
    frameImageView.isHidden = false
    sceneView.addSubview(frameImageView)
    UIImageWriteToSavedPhotosAlbum(sceneView.snapshot(), nil, nil, nil)
}
I don't fully understand what you are asking specifically, but you might find this video from WWDC 2017 useful: https://www.youtube.com/watch?v=eBt1p799L3Q
Check out the project they build around the 18-minute mark.
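Another direction worth trying (a sketch of my own, not from the answers above): snapshot() renders only the AR scene, so composite the snapshot and the frame image yourself with UIGraphicsImageRenderer:

import ARKit
import UIKit

func snapshotWithFrame(sceneView: ARSCNView, frameImage: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: sceneView.bounds)
    return renderer.image { _ in
        // The AR scene, without any UIKit subviews.
        sceneView.snapshot().draw(in: sceneView.bounds)
        // The picture frame drawn on top, matching its on-screen placement.
        frameImage.draw(in: sceneView.bounds)
    }
}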

How to control multiple skeletal animations independently with SceneKit?

I am trying to control multiple skeletal animations at the same time with multiple inputs, each of which points to a different frame of an animation. Imagine the way you set the weight of a morpher with setWeight(_:forTargetAt:), but with the input being the frame I want the animation to be at.
I achieved this for just one animation by doing something like
@IBOutlet weak var sceneView: SCNView!
var animationPlayer: SCNAnimationPlayer! = nil

override func viewDidLoad() {
    super.viewDidLoad()
    // add a node to the scene and give animationPlayer the node's animation player
    animationPlayer.animation.usesSceneTimeBase = true
}

@IBAction func sliderChanged(_ sender: UISlider) {
    sceneView.sceneTime = Double(sender.value) * animationPlayer.animation.duration
}
where a slider is used to change the sceneTime of the sceneView. This only works if the node has just one animation.
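For reference, the setup elided in the comment above might look roughly like this (the scene file and node name are hypothetical):

// Hypothetical scene/node names; grab the node's own SCNAnimationPlayer by key.
let scene = SCNScene(named: "character.scn")!
let node = scene.rootNode.childNode(withName: "character", recursively: true)!
sceneView.scene = scene
if let key = node.animationKeys.first,
   let player = node.animationPlayer(forKey: key) {
    animationPlayer = player
    animationPlayer.animation.usesSceneTimeBase = true // scrubbed via sceneView.sceneTime
}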
But what I want to achieve is controlling all the animations independently, on a single node. I cannot simply play with sceneTime, because that causes all the animations to be played in sync on the same time base.
Is there any way to play each animation with a different input time, or any value that works as a pointer to a specific frame of that animation?
(e.g. at one point the first animation is showing its 5th frame while the second is at its 8th; at the next rendered frame, the first animation shows its 4th frame and the second its 12th.)
Thanks.

Is there a way to display camera images without using AVCaptureVideoPreviewLayer?

Is there a way to display camera images without using AVCaptureVideoPreviewLayer?
I want to do a screen capture, but I cannot get it to work.
session = AVCaptureSession()
camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                 for: .video,
                                 position: .front)
do {
    input = try AVCaptureDeviceInput(device: camera)
} catch let error as NSError {
    print(error)
}
if session.canAddInput(input) {
    session.addInput(input)
}

let previewLayer = AVCaptureVideoPreviewLayer(session: session)
cameraView.backgroundColor = UIColor.red
previewLayer.frame = cameraView.bounds
previewLayer.videoGravity = .resizeAspect
cameraView.layer.addSublayer(previewLayer)
session.startRunning()
I am currently trying to broadcast a screen capture, compositing the camera image with some UIViews. However, when using AVCaptureVideoPreviewLayer, the screen capture does not include the camera image at all. Therefore, I want to display the camera image in a way that allows screen capture.
Generally, views that are rendered directly on the GPU may not be redrawn on the CPU. This includes things like OpenGL content and these preview layers.
A "screen capture" redraws the screen into a new context on the CPU, which obviously misses the GPU part.
You should try playing around with adding some outputs to the session, which will give you images, or rather CMSampleBuffer shots, that can be used to generate the images.
There are plenty of ways to do this, but you will most likely need to go a step lower. You can add an output to your session to receive samples directly. Doing this takes a bit of code, so please refer to some other posts like this one. The point is that you will have a didOutput sample buffer delegate method that feeds you CMSampleBufferRef objects, which may be used to construct pretty much anything in terms of images.
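As a rough sketch of that step, assuming a session that is already configured and a view controller that adopts AVCaptureVideoDataOutputSampleBufferDelegate:

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}
// Frames now arrive in captureOutput(_:didOutput:from:) on the queue above.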
Now, in your case, I assume you will be aiming to get a UIImage from the sample buffer. To do so you may again need a bit of code, so refer to some other post like this one.
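A commonly used sketch of that conversion (via Core Image; not the exact code from the linked post):

import AVFoundation
import CoreImage
import UIKit

func image(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    // Reuse a single CIContext in real code; creating one per frame is expensive.
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}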
To put it all together, you could as well simply use an image view and drop the preview layer: as you receive each sample buffer, create an image and update your image view. I am not sure what the performance of this would be, but I would discourage you from doing it. If the image itself is enough for your case, then you don't need a view snapshot at all.
But IF you do:
On snapshot, create this image. Then overlay your preview layer with an image view showing the generated image (add a subview), create the snapshot, and remove the image view, all in a single chunk:
func snapshot() -> UIImage? {
    let imageView = UIImageView(frame: self.previewPanelView.bounds)
    imageView.image = self.imageFromLatestSampleBuffer()
    imageView.contentMode = .scaleAspectFill // Not sure
    self.previewPanelView.addSubview(imageView)
    let image = createSnapshot()
    imageView.removeFromSuperview()
    return image
}
Let us know how things turn out, what you tried, and what did or did not work.

Swift: Make control buttons not move with camera

I'm building a platform game, and I made the camera follow the player when he walks:
let cam = SKCameraNode()

override func didMoveToView(view: SKView) {
    self.camera = cam
    ...
}

override func update(currentTime: CFTimeInterval) {
    /* Called before each frame is rendered */
    cam.position = Player.player.position
    ...
But when the camera moves, the control buttons move as well.
What should I do to keep the control buttons static?
See this note in the SKCameraNode docs:
A camera’s descendants are always rendered relative to the camera node’s origin and without applying the camera’s scaling or rotation to them. For example, if your game wants to display scores or other data floating above the gameplay, the nodes that render these elements should be descendants of the current camera node.
If you want HUD elements that stay fixed relative to the screen even as the camera moves/scales/rotates, make them child nodes of the camera.
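For example, a minimal sketch (the button node and its position are placeholders):

let cam = SKCameraNode()
scene.camera = cam
scene.addChild(cam)

// Children of the camera are laid out in camera space, so they stay
// fixed on screen no matter where the camera moves.
let pauseButton = SKSpriteNode(imageNamed: "pause") // hypothetical asset
pauseButton.position = CGPoint(x: 0, y: -280)       // camera-relative offset
cam.addChild(pauseButton)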
By the way, you don't need to change the camera's position on every update(). Instead, just constrain the camera's position to match that of the player:
let constraint = SKConstraint.distance(SKRange(constantValue: 0), toNode: player)
camera.constraints = [ constraint ]
Then, SpriteKit will automatically keep the camera centered on the player without any per-frame work from you. You can even add more than one constraint — say, to follow the player but keep the camera from getting too close to the edge of the world (and showing empty space).
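A hedged sketch of that last idea, in Swift 4 syntax (the world rect and margins are assumptions; the margins would normally be half the visible width/height):

let worldRect = CGRect(x: 0, y: 0, width: 2000, height: 1500)
let follow = SKConstraint.distance(SKRange(constantValue: 0), to: player)
let stayInside = SKConstraint.positionX(
    SKRange(lowerLimit: worldRect.minX + 375, upperLimit: worldRect.maxX - 375),
    y: SKRange(lowerLimit: worldRect.minY + 333, upperLimit: worldRect.maxY - 333))
// Constraints apply in array order, so the edge clamp placed last
// overrides the follow constraint where the two conflict.
camera.constraints = [follow, stayInside]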
Add the buttons as children of the camera, like cam.addChild(yourButton).
From rickster's answer I made these constraints, where the camera only moves horizontally, even if the player jumps. The order in which they are added is important. In case somebody else finds them useful:
Swift 4.2
let camera = SKCameraNode()
scene.addChild(camera)
scene.camera = camera // assign it so the scene actually renders through this camera
camera.constraints = [SKConstraint.distance(SKRange(upperLimit: 200), to: player),
                      SKConstraint.positionY(SKRange(constantValue: 0))]
