iOS: UIImagePickerController overlay: detecting a camera source change

Background: I am implementing a face mask overlay to help people frame their faces and to produce a uniform result across every picture. The mask needs to change size when switching between the front and back cameras so that it remains a useful guideline.
Problem: I have been trying to detect this switch between cameras so I can adjust the face mask accordingly, but I have not yet found a way to detect it.
Additional info: I have looked into the delegate and into subclassing the picker controller; neither exposes a method for this. My last resort would be a thread that keeps polling the camera source and adjusts the mask when it changes. I welcome anything better :)

I would take a look at the UIImagePickerController documentation around the cameraDevice property:
https://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIImagePickerController_Class/UIImagePickerController_Class.pdf
You can create an observer to run a selector when it changes:
http://farwestab.wordpress.com/2010/09/09/using-observers-on-ios/
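A minimal Swift sketch of that idea, assuming cameraDevice actually emits KVO notifications (it is not documented as KVO-compliant, so verify on a device); the subclass and the adjustFaceMask(for:) helper are illustrative, not part of the original answer:

import UIKit

final class MaskedPickerController: UIImagePickerController {
    private var cameraObservation: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Assumption: cameraDevice posts KVO change notifications when the
        // user taps the camera-toggle control. Verify this on a real device.
        cameraObservation = observe(\.cameraDevice) { picker, _ in
            picker.adjustFaceMask(for: picker.cameraDevice)
        }
    }

    // Placeholder: resize the overlay mask for the front or rear camera.
    private func adjustFaceMask(for device: UIImagePickerController.CameraDevice) {
        let scale: CGFloat = (device == .front) ? 0.8 : 1.0
        cameraOverlayView?.transform = CGAffineTransform(scaleX: scale, y: scale)
    }
}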

Related

Different methods of displaying camera under SceneKit

I'm developing an AR application which can use a few different engines. One of them is based on SceneKit (not ARKit).
I used to make the SceneView background transparent and just display an AVCaptureVideoPreviewLayer under it. But this created a problem later: it turns out that if you use a clear backgroundColor for the SceneView and then add a floor node with diffuse.contents = UIColor.clear (a transparent floor), shadows are not displayed on it. And the goal for now is to have shadows in this engine.
I think the best way to get shadows working is to set the camera preview as SCNScene.background.contents. For this I tried using AVCaptureDevice.default(for: .video). This worked, but it has one issue: you can't use the video format you want, because SceneKit automatically changes the format once the device is assigned. I even asked Apple for help using one of the two technical support requests you can send them, but they replied that for now there is no public API that would let me use this with the format I would like. On iPhone 6s the format changes to 30 FPS, and I need 60 FPS. So this option is no good.
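For reference, a minimal sketch of the assignment described above (not from the original post):

import SceneKit
import AVFoundation

// SceneKit accepts an AVCaptureDevice directly as the scene background, but it
// then manages the capture format itself, which is exactly the FPS limitation
// described in the question.
func attachCameraBackground(to scene: SCNScene) {
    guard let camera = AVCaptureDevice.default(for: .video) else { return }
    scene.background.contents = camera
}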
Is there some other way to assign the camera preview to the scene's background property? From what I read I can also use a CALayer for this property, so I tried assigning an AVCaptureVideoPreviewLayer, but this resulted in a black background and no video. I updated the layer's frame to the correct size, but that didn't help. Maybe I did something wrong and there is a way to use this AVCaptureVideoPreviewLayer, or something else?
Can you suggest some possible solutions? I know I could use ARKit, and I do for another engine, but for this particular one I need to keep using SceneKit.

How to track an opened hand in any environment with RGB camera?

I want to make a movable camera that tracks an opened hand (facing the floor). It only needs to track the open hand, but it also has to know its rotation (2D rotation).
This is what I searched for so far:
Contours: as the camera is movable, the background is unknown and even the lighting is not fixed, so it is hard to get a clean hand segment in real time.
Haar cascades: these seem to return only a rect and cannot deal with rotation.
Feature detection: a hand does not have enough detail for this.
I am using the OpenCV Unity plugin to do this.
EDIT
https://www.codeproject.com/Articles/826377/Rapid-Object-Detection-in-Csharp
I see another library can do something like this. Can OpenCV also do this?

AVCaptureVideoPreviewLayer issues with Video Gravity and Face Detection Accuracy

I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This code was written a long time back, so I needed to make some modifications to use it the way I want in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes because I wanted to support different screen sizes. I also changed the dimensions of the preview layer to fill the screen.
The session preset is set to AVCaptureSessionPresetPhoto for iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
Now when I run the application, the faces are detected, but the rectangles are drawn at the wrong coordinates.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything seems to work fine again.
The only problem then is that the camera preview is not the desired size, which is the full screen.
So now I am wondering why this happens. I notice a function in SquareCam, videoPreviewBoxForGravity, which processes the gravity type and seems to make adjustments:
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here is that the frame size stays the same regardless of the gravity type.
Finally, I read elsewhere that when setting the gravity to AspectFill, part of the feed gets cropped, which is understandable and similar to UIImageView's scaleAspectFill.
My question is: how can I make the right adjustments so that this app works for any video gravity and any size of preview layer?
I have had a look at some related questions; for example, CIDetector give wrong position on facial features seems to have a similar issue, but it does not help.
Thanks in advance.
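One way to sidestep the manual aperture math (a sketch, not taken from SquareCam; the function name and parameters below are illustrative) is to let the preview layer itself convert a detected face rectangle, since layerRectConverted(fromMetadataOutputRect:) accounts for whichever videoGravity is set:

import AVFoundation
import UIKit

// Illustrative sketch: convert a CIDetector face rectangle, given in image
// pixel coordinates, into preview-layer coordinates. The normalization below
// assumes the detector ran on the unrotated buffer; adjust it to match your
// own orientation handling.
func previewRect(forFaceBounds faceBounds: CGRect,
                 imageExtent: CGRect,
                 previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    // Normalize to 0...1 metadata-output coordinates (Core Image uses a
    // bottom-left origin, so the y axis is flipped here).
    let normalized = CGRect(x: faceBounds.minX / imageExtent.width,
                            y: 1 - (faceBounds.maxY / imageExtent.height),
                            width: faceBounds.width / imageExtent.width,
                            height: faceBounds.height / imageExtent.height)
    // The preview layer applies its own videoGravity (aspect or aspect-fill)
    // when mapping the normalized rect into its coordinate space.
    return previewLayer.layerRectConverted(fromMetadataOutputRect: normalized)
}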

How to put the Camera input on the screen and blur it?

I'm fairly new to iOS development.
What I want to achieve is to put the stream from the camera into a UIView (and size it with a frame).
So I don't need controls or the ability to capture images, just what the camera sees on the screen.
Furthermore, I want that view to be blurred. Is there a way (or a library) to put a Gaussian blur on that video stream?
Thank you!
You can use GPUImage (https://github.com/BradLarson/GPUImage); try the realtime effects it provides. That will solve your problem for sure.
To display the camera on screen without controls you will need to use AVFoundation. Take a look at Apple's SquareCam sample code.
As for the blur, a simpler solution might be creating a semi-transparent image with a blur effect and placing it above the camera view.
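A rough sketch of the AVFoundation route (the class name and layout are illustrative, and it uses a UIVisualEffectView blur instead of the static frosted image suggested above, so check that the effect over a live preview looks acceptable on a device):

import UIKit
import AVFoundation

// Illustrative sketch: a plain UIView that shows the camera feed with no
// controls, sized by its frame, with a system blur placed on top.
final class CameraPreviewView: UIView {
    private let session = AVCaptureSession()
    private let previewLayer = AVCaptureVideoPreviewLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)

        if let camera = AVCaptureDevice.default(for: .video),
           let input = try? AVCaptureDeviceInput(device: camera),
           session.canAddInput(input) {
            session.addInput(input)
        }

        previewLayer.session = session
        previewLayer.videoGravity = .resizeAspectFill
        layer.addSublayer(previewLayer)

        // Blur overlay covering the whole preview.
        let blur = UIVisualEffectView(effect: UIBlurEffect(style: .light))
        blur.frame = bounds
        blur.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        addSubview(blur)

        session.startRunning()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        previewLayer.frame = bounds
    }
}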

Blur effect in a view of iOS

I want to use a UIImagePickerController to display a camera preview. Over this preview I want to place an overlay view with controls.
Is it possible to apply any effects to the preview that is displayed from the camera? In particular, I need to apply a blur effect to the camera preview.
So I want a blurred preview from the camera plus an overlay view with controls. If I decide to capture a still image from the camera, I need the original without the blur effect. So the blur must be applied only to the preview.
Is this possible with such a configuration, or perhaps with AVFoundation being used to access the camera preview, or in some other way, or is it impossible altogether?
With AVFoundation you could do almost everything you want, since you can obtain single frames from the camera and process them, but it could lead you to a dead end: applying a blur to an image in real time is a pretty intensive task with laggy video as the likely result, and you could waste hours of coding on it. I would suggest using James WebSster's solution or OpenGL shaders. Take a look at this awesome free library written by one of my favorite gurus, Brad: http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework
Even if you do not find the right filter, it will probably lead you to a correct implementation of what you want to do.
The right filter is a Gaussian blur, of course. I don't know if it is supported out of the box, but you could write it yourself.
I almost forgot to say that in iOS 5 you have full access to Apple's Accelerate framework; you should look into that as well.
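If you do go down the frame-by-frame route the answer above warns about, a Core Image sketch (not GPUImage; the class and property names are illustrative, and the performance caveat still applies) could look like this, blurring only what is shown on screen while the underlying frames stay untouched:

import UIKit
import AVFoundation
import CoreImage

// Illustrative sketch: blur only the live preview with CIGaussianBlur. A still
// capture (not shown) would use the unfiltered frames, so the saved image
// keeps its original appearance.
final class BlurredPreviewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let ciContext = CIContext()
    private let previewImageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        previewImageView.frame = view.bounds
        previewImageView.contentMode = .scaleAspectFill
        view.addSubview(previewImageView)

        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let original = CIImage(cvPixelBuffer: pixelBuffer)
        // Blur is applied to the preview only; `original` (or the raw buffer)
        // is what a still-capture path would keep.
        let blurred = original.applyingFilter("CIGaussianBlur",
                                              parameters: [kCIInputRadiusKey: 8.0])
        guard let cgImage = ciContext.createCGImage(blurred, from: original.extent) else { return }
        DispatchQueue.main.async {
            self.previewImageView.image = UIImage(cgImage: cgImage)
        }
    }
}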
From the reasonably limited amount of work I've done with UIImagePicker, I don't think it is possible to apply the blur to the image you see using programmatic filters.
What you might be able to do is use the overlay to simulate blur. You could do this, for example, by adding an overlay which contains an image of semi-transparent frosted glass.