Different methods of displaying camera under SceneKit - iOS

I'm developing an AR application that can use a few different engines. One of them is based on SceneKit (not ARKit).
I used to make the SceneKit view's background transparent and just display an AVCaptureVideoPreviewLayer under it. But this created a problem later: it turns out that if you use a clear backgroundColor for the scene view and then add a floor node with diffuse.contents = UIColor.clear (a transparent floor), shadows are not displayed on it. And the goal now is to have shadows in this engine.
I think the best way to get shadows working is to set the camera preview as SCNScene.background.contents. For this I tried using AVCaptureDevice.default(for: .video). This worked, but it has one issue: you can't use the video format you want, because SceneKit automatically changes the device's format when it's assigned. I even asked Apple for help using one of the two technical support requests you can send them, but they replied that for now there is no public API that would let me use this with the format I'd like. On an iPhone 6s the format changes to 30 FPS, and I need 60 FPS. So this option is no good.
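For reference, the assignment described above is a one-liner (a minimal sketch; the scene is assumed to come from an existing SCNView):

```swift
import SceneKit
import AVFoundation

// Hand the capture device to SceneKit as the scene background.
// Note: SceneKit runs its own capture session internally and, as
// described above, overrides the device's active format (e.g. it
// drops to 30 FPS on an iPhone 6s), with no public API to prevent it.
func useCameraBackground(in scene: SCNScene) {
    if let camera = AVCaptureDevice.default(for: .video) {
        scene.background.contents = camera
    }
}
```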
Is there some other way to assign the camera preview to the scene's background property? From what I've read, a CALayer can also be used for this property, so I tried assigning an AVCaptureVideoPreviewLayer, but this resulted in a black background only, and no video. I updated the layer's frame to the correct size, but that didn't help. Maybe I did something wrong and there is a way to use AVCaptureVideoPreviewLayer, or something else?
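One possible workaround, not confirmed by this thread, is to own the capture session yourself: lock the device to a 60 FPS format, receive frames via AVCaptureVideoDataOutput, and push each frame into background.contents manually. A sketch follows; whether the per-frame CGImage conversion keeps up at 60 FPS on older hardware is untested:

```swift
import SceneKit
import AVFoundation
import CoreImage

final class CameraBackgroundFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let context = CIContext()
    private weak var scene: SCNScene?

    init?(scene: SCNScene) {
        self.scene = scene
        super.init()
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return nil }
        session.addInput(input)

        // Lock in a 60 FPS format -- the whole point of owning the
        // session instead of handing the device to SceneKit.
        if let format = device.formats.first(where: { f in
            f.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        }) {
            try? device.lockForConfiguration()
            device.activeFormat = format
            device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
            device.unlockForConfiguration()
        }

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.feed"))
        guard session.canAddOutput(output) else { return nil }
        session.addOutput(output)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: buffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        DispatchQueue.main.async {
            // Assign the latest frame as a plain image each time.
            self.scene?.background.contents = cgImage
        }
    }
}
```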
Can you suggest some possible solutions? I know I could use ARKit, and I do for another engine, but for this particular one I need to keep using SceneKit.

Related

AVCaptureVideoPreviewLayer issues with Video Gravity and Face Detection Accuracy

I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This code was written a long time back, so I needed to make some modifications to use it the way I want to in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes, as I wanted to support different screen sizes. I also changed the dimensions of the preview layer to be full screen.
The session preset is set to AVCaptureSessionPresetPhoto for iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
Now when I run the application, the faces get detected, but there seems to be an error in the coordinates where the rectangles are drawn.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything seems to work fine again.
The only problem then is that the camera preview is not the desired size, which is the full screen.
So now I am wondering why this happens. I noticed a function in SquareCam, videoPreviewBoxForGravity, which processes the gravity type and seems to make adjustments.
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here, the frame size stays the same regardless of gravity type.
Finally, I read somewhere else that when setting the gravity to AspectFill, some part of the feed gets cropped, which is understandable (similar to UIImageView's scaleToFill).
My question is: how can I make the right adjustments so this app works for any video gravity type and any size of preview layer?
I have had a look at some related questions; for example, CIDetector give wrong position on facial features seems to have a similar issue, but it does not help.
Thanks in advance.
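For reference, on modern AVFoundation the gravity-dependent math in videoPreviewBoxForGravity can be avoided entirely: AVCaptureVideoPreviewLayer can convert metadata-space rectangles into its own coordinate space, taking the current videoGravity into account. A sketch, assuming the face object comes from an AVCaptureMetadataOutput delegate callback:

```swift
import AVFoundation
import UIKit

// The preview layer does the gravity-aware mapping itself, so the
// same code works for resizeAspect and resizeAspectFill, including
// the cropping that AspectFill introduces.
func rect(for faceObject: AVMetadataFaceObject,
          in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    // faceObject.bounds is in normalized metadata coordinates (0...1).
    return previewLayer.layerRectConverted(fromMetadataOutputRect: faceObject.bounds)
}
```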

Adding cross dissolve effect - iOS video creation

I would like to create a video from a number of images and add a cross dissolve effect between the images.
How can this be done? I know images can be written into a video file, but I don't see where to apply an effect. Should each image be turned into a video, and then the videos written into the final video with the transition effect?
I have searched around and cannot find much information on how this can be done, e.g. how to use AVMutableComposition, and whether it is viable to create videos consisting of individual images and then apply the cross dissolve effect.
Any information will be greatly appreciated.
If you want to dig around in the bowels of AVFoundation for this, I strongly suggest you take a look at this presentation, especially starting at slide 74. Be prepared to do a large amount of work to pull this off...
If you'd like to get down to business several orders of magnitude faster, and don't mind incorporating a third-party library, I'd highly recommend you try GPUImage.
You'll find it quite simple to push images into a video and swap them out at will, as well as apply any number of blend filters to the transitions, simply by varying a single mix property of your blend filter over the time your transition happens.
I'm currently doing this right now. To make things short: you need an AVPlayer into which you put an AVPlayerItem, which can be empty. You then need to set the forwardPlaybackEndTime of your AVPlayerItem to the duration of your animation. You then create an AVPlayerLayer that you initialize with your AVPlayer (actually, you may not need the AVPlayerLayer if you won't put video in your animation). Then the important part: you create an AVSynchronizedLayer that you initialize with your previous AVPlayerItem. This AVSynchronizedLayer, and any sublayer it holds, will be synchronized with your AVPlayerItem. You can then create simple CALayers holding your images (through the contents property) and add your CAKeyframeAnimation to each layer on the opacity property. Now any animation on those sublayers will follow the time of your AVPlayerItem. To start the animation, simply call play on your AVPlayer. That's the theory for playback. If you want to export this animation to an mp4, you will need to use AVVideoCompositionCoreAnimationTool, but it's pretty similar.
For a code example, see the linked code snippet to create the animation.
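The steps above can be sketched as follows (an illustrative sketch; the dimensions, durations, and fade key times are made up, and the player item is assumed to be the empty one whose forwardPlaybackEndTime drives the timeline):

```swift
import AVFoundation
import UIKit

func makeCrossDissolveLayer(images: [UIImage],
                            slideDuration: CFTimeInterval,
                            item: AVPlayerItem) -> AVSynchronizedLayer {
    let total = slideDuration * CFTimeInterval(images.count)
    // The (possibly empty) player item's timeline drives the animation.
    item.forwardPlaybackEndTime = CMTime(seconds: total, preferredTimescale: 600)

    // All sublayers of this layer follow the player item's clock.
    let syncLayer = AVSynchronizedLayer(playerItem: item)
    syncLayer.frame = CGRect(x: 0, y: 0, width: 640, height: 480)

    for (index, image) in images.enumerated() {
        let imageLayer = CALayer()
        imageLayer.frame = syncLayer.bounds
        imageLayer.contents = image.cgImage

        // Fade each slide in and out on the opacity property.
        let fade = CAKeyframeAnimation(keyPath: "opacity")
        fade.values = [0.0, 1.0, 1.0, 0.0]
        fade.keyTimes = [0.0, 0.2, 0.8, 1.0]
        fade.beginTime = slideDuration * CFTimeInterval(index)
        fade.duration = slideDuration
        fade.isRemovedOnCompletion = false
        fade.fillMode = .backwards
        imageLayer.add(fade, forKey: "fade")
        syncLayer.addSublayer(imageLayer)
    }
    return syncLayer
}
// Attach the returned layer to your view hierarchy, then call play()
// on the AVPlayer that owns the item to run the cross dissolves.
```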

iOS: Draw on top of AV video, then save the drawing in the video file

I'm working on an iPad app that records and plays videos using AVFoundation classes. I have all of the code for basic record/playback in place, and now I would like to add a feature that allows the user to draw and make annotations on the video, something I believe will not be too difficult. The harder part, and something that I have not been able to find any examples of, will be to combine the drawing and annotations into the video file itself. I suspect this part is accomplished with AVComposition, but I have no idea exactly how. Your help would be greatly appreciated.
Mark
I do not think that you can actually save a drawing into a video file on iOS. You could, however, consider saving the drawing in a separate view and synchronizing the overlay onto the video using a transparent view. In other words, say the user circled something at 3 minutes 42 seconds into the video. Then, when the video is played back, you overlay the saved drawing onto the video at the 3:42 mark. It's not what you want, but I think it is as close as you can get right now.
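A minimal sketch of that playback-time synchronization, assuming each annotation is stored with the timestamp at which it was drawn (the Annotation type and the one-second display window are made up for illustration):

```swift
import AVFoundation
import UIKit

struct Annotation {
    let time: CMTime      // when the drawing was made
    let layer: CALayer    // the saved drawing, rendered into a layer
}

// Show each annotation while playback is within a second of its
// timestamp; keep the returned observer token alive during playback.
func syncAnnotations(_ annotations: [Annotation],
                     player: AVPlayer,
                     overlayView: UIView) -> Any {
    annotations.forEach { overlayView.layer.addSublayer($0.layer) }
    let interval = CMTime(value: 1, timescale: 10) // check 10x per second
    return player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { now in
        for annotation in annotations {
            let delta = CMTimeSubtract(now, annotation.time).seconds
            annotation.layer.isHidden = !(0.0...1.0).contains(delta)
        }
    }
}
```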
EDIT: Actually, there might be a way after all. Take a look at this tutorial. I have not read the whole thing, but it seems to incorporate the overlay function you need.
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
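The technique in that tutorial boils down to AVVideoCompositionCoreAnimationTool, which bakes a CALayer tree into the exported video. A sketch, assuming the asset, its video track, and the drawing layer already exist:

```swift
import AVFoundation
import UIKit

// Burn an overlay layer (the user's drawing) into an exported video.
func makeVideoComposition(for asset: AVAsset,
                          track: AVAssetTrack,
                          overlay: CALayer) -> AVMutableVideoComposition {
    let size = track.naturalSize

    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: size)
    let parentLayer = CALayer()
    parentLayer.frame = videoLayer.frame
    overlay.frame = videoLayer.frame
    parentLayer.addSublayer(videoLayer)   // the video is rendered in here
    parentLayer.addSublayer(overlay)      // the drawing is composited on top

    let composition = AVMutableVideoComposition()
    composition.renderSize = size
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions =
        [AVMutableVideoCompositionLayerInstruction(assetTrack: track)]
    composition.instructions = [instruction]
    return composition
}
// Assign the result to AVAssetExportSession.videoComposition before exporting.
```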

iOS: UIImagePickerController Overlay property Detects CameraSource Change

Background: I am implementing a face mask to help people focus their camera and to produce a uniform result across every picture. Sadly, the face mask needs to adjust its size when switching between the front- and back-facing cameras to provide a good guideline.
Problem: I have been trying to detect this switch between camera to adjust my face mask accordingly. I have not yet found how to detect it.
Additional Info: I have tried looking into the delegate and subclassing the picker controller; there are no methods exposed for this detection. My last resort would be a thread that keeps checking the camera source and adjusts if needed. I welcome anything better :)
I would take a look at the UIImagePickerController documentation for the cameraDevice property.
https://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIImagePickerController_Class/UIImagePickerController_Class.pdf
You can create an observer to run a selector when it changes:
http://farwestab.wordpress.com/2010/09/09/using-observers-on-ios/
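A sketch of that observer approach, assuming cameraDevice is KVO-observable (the linked post uses classic addObserver/observeValue KVO; the block-based API below is the equivalent in modern Swift):

```swift
import UIKit

let picker = UIImagePickerController()
picker.sourceType = .camera

// Fires whenever the user flips between front and back cameras;
// adjust the face-mask overlay here instead of polling.
let observation = picker.observe(\.cameraDevice, options: [.new]) { picker, _ in
    let isFront = picker.cameraDevice == .front
    // resizeFaceMask(forFrontCamera:) is a hypothetical helper that
    // scales the overlay to match the active camera.
    // resizeFaceMask(forFrontCamera: isFront)
    print("Camera switched; front-facing: \(isFront)")
}
// Keep `observation` alive for as long as the picker is on screen.
```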

Blur effect in a view of iOS

I want to use a UIImagePickerController to display a camera preview. Over this preview I want to place an overlay view with controls.
Is it possible to apply any effects to the preview which will be displayed from camera? I particularly need to apply a blur effect to the camera preview.
So I want a blurred preview from the camera plus an overlay view with controls. If I decide to capture a still image from the camera, I need it original, without the blur effect. So the blur must be applied only to the preview.
Is this possible with such a configuration, or maybe with AVFoundation used for accessing the camera preview, or somehow else, or is it impossible at all?
With AVFoundation you can do almost everything you want, since you can obtain single frames from the camera and process them. But it could lead you to a dead end: applying a blur to an image in real time is a pretty intensive task, with laggy video as a result, and that could cost you hours of coding. I would suggest the solution of James WebSster, or OpenGL shaders. Take a look at this awesome free library written by one of my favorite gurus, Brad: http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework. Even if you do not find the right filter, it will probably lead you to a correct implementation of what you want to do.
The right filter is a Gaussian blur, of course. I don't know if it is supported, but you could write it yourself.
I almost forgot to say that since iOS 5 you have full access to the Accelerate framework, made by Apple; you should look into that as well.
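A sketch of the AVFoundation route described above, applying a Core Image Gaussian blur to preview frames only (the captured still would come from a separate, unfiltered output on the same session; the fixed blur radius is an arbitrary choice):

```swift
import AVFoundation
import CoreImage
import UIKit

final class BlurredPreview: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()
    weak var previewImageView: UIImageView?

    // Called for every camera frame. The blur touches only the preview;
    // frames delivered to any still-image output remain untouched.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let input = CIImage(cvPixelBuffer: buffer)

        let blur = CIFilter(name: "CIGaussianBlur")!
        blur.setValue(input, forKey: kCIInputImageKey)
        blur.setValue(8.0, forKey: kCIInputRadiusKey)

        guard let blurred = blur.outputImage,
              let cgImage = context.createCGImage(blurred, from: input.extent)
        else { return }
        DispatchQueue.main.async {
            self.previewImageView?.image = UIImage(cgImage: cgImage)
        }
    }
}
```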
From the reasonably limited amount of work I've done with UIImagePicker, I don't think it is possible to apply the blur to the image you see using programmatic filters.
What you might be able to do is to use the overlay to estimate blur. You could do this, for example, by adding an overlay which contains an image of semi-transparent frosted glass.
