Put "grid" on camera in Swift 2 (using UIimagePickerController) - ios

So I have followed one of the few tutorials on how to access the camera with UIImagePickerController, and that works fine. (It even implements face detection, which was my next step!)
But now I would like to create something like Apple's "grid" view with my own custom grid. I know how to make a UIImageView, but after that I have a few more questions:
Would I make a separate image view and somehow layer it over the UIImagePickerController image view that shows the camera feed?
Could I just make one UIImageView that already has the lines and then accesses the camera? If so, how would I do that?
My final goal is to detect when something (like a face that the face detector has found) has crossed into the grid, or possibly just to map where it is on the screen. Is this possible? How would I get an object's location on the camera screen?
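One common approach (a minimal sketch, assuming Swift 2 / iOS 9-era APIs; GridOverlayView is an illustrative name) is to draw the grid in a UIView subclass and attach it as the picker's cameraOverlayView:

class GridOverlayView: UIView {
    override func drawRect(rect: CGRect) {
        // Rule-of-thirds grid: two vertical and two horizontal lines
        let path = UIBezierPath()
        for i in 1...2 {
            let x = rect.width * CGFloat(i) / 3.0
            path.moveToPoint(CGPoint(x: x, y: 0))
            path.addLineToPoint(CGPoint(x: x, y: rect.height))
            let y = rect.height * CGFloat(i) / 3.0
            path.moveToPoint(CGPoint(x: 0, y: y))
            path.addLineToPoint(CGPoint(x: rect.width, y: y))
        }
        UIColor.whiteColor().setStroke()
        path.lineWidth = 1.0
        path.stroke()
    }
}

let picker = UIImagePickerController()
picker.sourceType = .Camera
let overlay = GridOverlayView(frame: picker.view.bounds)
overlay.backgroundColor = UIColor.clearColor()
overlay.opaque = false
overlay.userInteractionEnabled = false  // let touches fall through to the camera controls
picker.cameraOverlayView = overlay

For the detection part, once the face rectangle and the grid cells are expressed in the overlay's coordinate space, checking whether a face has crossed into a cell is just a CGRect intersection test (CGRectIntersectsRect).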

Related

Camera Output onto SceneKit Object

I'm trying to use SceneKit in an application and want to use images captured from an iPhone/iPad camera (front or back) as a texture on an object in my SCNScene.
From everything I can tell from the documentation, as well as other questions here on StackOverflow, I should be able to create an AVCaptureVideoPreviewLayer with an appropriate AVCaptureSession and have it "just work". Unfortunately, it does not.
I'm using a line of code like this:
cubeGeometry.firstMaterial?.diffuse.contents = layer
Having the layer as the material seems to work, because if I set the layer's backgroundColor I see the background color, but the camera capture does not appear. The layer is set up properly, because if I use it as a sublayer of the SCNView instead of as the material in the SCNScene, it appears properly in UIKit.
An example project can be found here.
You can use the USE_FRONT_CAMERA constant in GameViewController.swift to toggle between the front and back cameras, and the USE_LAYER_AS_MATERIAL constant to toggle between using the AVCaptureVideoPreviewLayer as the texture for a material or as a sublayer of the SCNView.
I've found a pretty hacky workaround for this using some OpenGL calls, but I'd prefer a more general and less fragile solution. Does anyone know how to get this working properly on device? I've tried both iOS 8.0 and iOS 9.0 devices.
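For reference, a sketch of the setup being described (Swift 2-era AVFoundation names; error handling and canAddInput checks trimmed):

let session = AVCaptureSession()
session.sessionPreset = AVCaptureSessionPresetHigh
let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
if let input = try? AVCaptureDeviceInput(device: device) {
    session.addInput(input)
}
let layer = AVCaptureVideoPreviewLayer(session: session)
layer.frame = CGRect(x: 0, y: 0, width: 512, height: 512)
session.startRunning()

// The failing step: the layer's backgroundColor shows up on the cube,
// but the live camera feed does not.
cubeGeometry.firstMaterial?.diffuse.contents = layer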

Unity3D Displaying a RenderTexture overlaid on top of another Camera

To keep it simple: I have a RenderTexture from one camera, and I need to overlay it onto another camera, either through:
a) A RenderTexture of that camera
or
b) directly to the camera's rendering
What I'm trying to do can also be seen in this representation:
fig1 shows the main render, fig2 the desired overlay to be applied, fig3 the two combined as an overlay, and fig4 the post-processing of the newly combined image.
In that image, the first box is the main camera, and the second is what I want overlaid onto it as a RenderTexture in OnRenderObject(), i.e. when these two get rendered. Then in OnPostRender() they are combined, with the overlay on top. Then in OnRenderImage(), image effects can freely modify the combined image.
So, as a list, what I need help with:
I do not know how to either:
Access the camera's rendering directly
or
Set a RenderTexture as a camera's rendering in OnPostRender()
I also need help, through explanation, with correctly overlaying a RenderTexture onto either of the above (using the depth rendered to the RenderTexture as alpha), just as shown in fig3 of the image.
This is the method I've thought up to overlay forward rendering onto deferred rendering for image effects. If you have any other solutions or ideas, it'd be very much appreciated if you could post them as a comment.
Just to clarify, I'm not asking for source code, just methods and/or links to Unity3D's documentation for the methods I'm asking about.
Thank you very much in advance. :)

Take a picture, choose between Full or Square Mode in camera

Is there a way to use overlays in UIImagePickerController to show the square picture a user might take, with a toggle button somewhere to switch between modes on the fly?
Currently the iOS 7 camera app has this capability, but UIImagePickerController does not (thanks, Apple), so is there a way to add this functionality?
Would using AVCaptureSession be necessary? It seems like serious overkill, and I'd have to reimplement flash/focus/etc. all over again, I think.
Failing that, is there a customizable class that already exists that I could just implement?
I'm banging my head against the wall trying to figure out the best course of action here, so any help would be appreciated.
You can adjust the camera view by transforming its scale:
// The arguments are the scale factors for the camera view's x and y axes
CGAffineTransform transform = CGAffineTransformMakeScale(1.0, 0.5);
imagePickerController.cameraViewTransform = transform;
You could add a button that re-presents the camera view within the view controller at the top of your hierarchy. It may not perfectly recapitulate the native camera app, but it will be a pretty good approximation.
I would probably do this with a custom cameraOverlayView that letterboxes the viewfinder. Once the picture has been taken, you'll have to crop the resulting image.
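Cropping to a centered square afterward might look like this (a sketch in Swift 2-era syntax; it ignores imageOrientation corner cases for brevity):

func cropToSquare(image: UIImage) -> UIImage {
    let side = min(image.size.width, image.size.height)
    let scale = image.scale
    // Centered square, in pixel coordinates as CGImage cropping expects
    let cropRect = CGRect(x: (image.size.width - side) / 2.0 * scale,
                          y: (image.size.height - side) / 2.0 * scale,
                          width: side * scale,
                          height: side * scale)
    guard let cgImage = CGImageCreateWithImageInRect(image.CGImage, cropRect) else { return image }
    return UIImage(CGImage: cgImage, scale: scale, orientation: image.imageOrientation)
}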

Xcode custom overlay capture

I am working on an OCR recognition app and I want to give the user the option to manually select the area (during camera capture) on which to perform the OCR. I provide a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method. However, despite the rectangle being there, the camera processes the entire captured area rather than just the part inside the rectangle.
In other words, I do not want the entire picture to be sent for processing, but only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no functionality yet.
I hope this makes sense, since I have tried my best to explain it.
Thanks, and let me know.
Stream the camera's frames to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera into the proper place... now use a UIGraphics image context to take a "screenshot" of this area and send that UIImage.CGImage in for processing.
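A sketch of that "screenshot" step (Swift 2-era syntax; snapshotRegion and the parameter names are illustrative, and this assumes the frames are drawn into ordinary UIKit views rather than an AVCaptureVideoPreviewLayer):

func snapshotRegion(viewToCapture: UIView, rectOfInterest: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(rectOfInterest.size, true, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    // Shift the context so only the rect of interest lands in the bitmap
    CGContextTranslateCTM(context, -rectOfInterest.origin.x, -rectOfInterest.origin.y)
    viewToCapture.drawViewHierarchyInRect(viewToCapture.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}

The resulting UIImage's CGImage can then be handed to the OCR engine.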

How do you get a UIImage from an AVCaptureVideoPreviewLayer?

I have already tried this solution: CGImage (or UIImage) from a CALayer.
However, I do not get anything.
As the question says, I am trying to get a UIImage from the camera's preview layer. I know I can either capture a still image or use the output sample buffer, but since my session preset is set to photo quality, both of these approaches are slow and give me a large image.
So what I thought could work is to get the image directly from the preview layer, since it has exactly the size I need and the operations have already been applied to it. I just don't know how to get this layer to draw into my context so that I can get it as a UIImage.
Perhaps another solution would be to use OpenGL to get this layer directly as a texture?
Any help will be appreciated, thanks.
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc.) in which the views are rendered.
In my experience, AVFoundation does not behave like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer; if you try to make an image from that view, you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each has its own limitations: the first can't run simultaneously with an AVCaptureMovieFileOutput acquisition, and the latter makes the shutter noise.
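If latency matters more than resolution, the AVCaptureVideoDataOutput route can look roughly like this (Swift 2-era delegate signature; in real code the CIContext should be reused rather than recreated per frame):

func captureOutput(captureOutput: AVCaptureOutput!,
                   didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                   fromConnection connection: AVCaptureConnection!) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciImage = CIImage(CVPixelBuffer: pixelBuffer)
    let context = CIContext()
    let rect = CGRect(x: 0, y: 0,
                      width: CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
                      height: CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
    let cgImage = context.createCGImage(ciImage, fromRect: rect)
    let image = UIImage(CGImage: cgImage)
    // 'image' is the current frame; resize or crop it here to match the preview layer's bounds
}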
