How do you get a UIImage from an AVCaptureVideoPreviewLayer?

I have already tried this solution: CGImage (or UIImage) from a CALayer. However, I do not get anything.
Like the question says, I am trying to get a UIImage from the preview layer of the camera. I know I could either capture a still image or use the output sample buffer, but my session preset is set to photo, so both of those approaches are slow and give me a large image.
So what I thought could work is to get the image directly from the preview layer, since it has exactly the size I need and the operations have already been applied to it. I just don't know how to get this layer to draw into my context so that I can get it as a UIImage.
Perhaps another solution would be to use OpenGL to get this layer directly as a texture?
Any help will be appreciated, thanks.

Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc.) in which the views are rendered.
In my experience with AVFoundation it is not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer; if you try to make an image from that view, you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput, and each has its own limitations: the former can't work simultaneously with an AVCaptureMovieFileOutput recording, and the latter makes the shutter noise.
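For reference, here is a rough Swift sketch of the AVCaptureVideoDataOutput route (the first of the two options above): receive the delegate's sample buffers and turn them into UIImages. The class and queue names are placeholders, not anything from the original project.

import AVFoundation
import CoreImage
import UIKit

// A rough sketch of the AVCaptureVideoDataOutput approach mentioned above:
// convert each delivered CMSampleBuffer into a UIImage.
final class PreviewFrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let ciContext = CIContext()
    private(set) var latestImage: UIImage?

    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frame.grabber"))
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Pull the pixel buffer out of the sample buffer and wrap it as a UIImage.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        latestImage = UIImage(cgImage: cgImage)
    }
}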

Related

Camera Output onto SceneKit Object

I'm trying to use SceneKit in an application and want to use images captured from an iPhone/iPad's camera (front or back) as a texture on an object in my SCNScene.
From everything that I can tell from the documentation as well as other questions here on StackOverflow, I should just be able to create an AVCaptureVideoPreviewLayer with an appropriate AVCaptureSession and have it "just work". Unfortunately, it does not.
I'm using a line of code like this:
cubeGeometry.firstMaterial?.diffuse.contents = layer
Having the layer as the material seems to work because if I set the layer's backgroundColor, I see the background color, but the camera capturing does not appear to work. The layer is set up properly, because if I use the layer as a sublayer of the SCNView instead of on the object in the SCNScene, the layer appears properly in UIKit.
An example project can be found here.
You can use the USE_FRONT_CAMERA constant in GameViewController.swift to toggle between using the front and back camera. You can use the USE_LAYER_AS_MATERIAL constant to toggle between using the AVCaptureVideoPreviewLayer as the texture for a material or as a sublayer of the SCNView.
I've found a pretty hacky workaround for this using some OpenGL calls, but I'd prefer to have this code working as a more general and less fragile solution. Anyone know how to get this working properly on device? I've tried both iOS 8.0 and iOS 9.0 devices.
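For reference, a minimal sketch of the setup described above, assuming session is an already-configured, running AVCaptureSession (cubeGeometry is the name used in the snippet above):

import SceneKit
import AVFoundation

// A rough sketch of the approach the question describes; `session` is assumed to be
// an already-configured, running AVCaptureSession.
func attachPreviewLayer(to cubeGeometry: SCNGeometry, using session: AVCaptureSession) -> AVCaptureVideoPreviewLayer {
    let layer = AVCaptureVideoPreviewLayer(session: session)
    layer.frame = CGRect(x: 0, y: 0, width: 512, height: 512)
    layer.videoGravity = .resizeAspectFill

    // The step under discussion: use the preview layer as the material's contents.
    cubeGeometry.firstMaterial?.diffuse.contents = layer
    return layer
}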

xcode custom overlay capture

I am working on an OCR recognition app and I want to give the user the option to manually select the area (while the camera is showing) on which to perform the OCR. Right now I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method; however, despite there being a rectangle, the camera focuses on the entire captured area rather than just the area within the specified rectangle.
In other words, I do not want the entire picture to be sent for processing, but rather only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no functionality. I do not want the entire screen area to be in focus, only the area under the rectangle.
I hope this makes sense, since I have tried my best to explain it.
Thanks, and let me know.
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera into the proper place. Now use a UIGraphics image context to take a "screen-shot" of this area and send that UIImage's CGImage in for processing.
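A rough Swift sketch of that "screen-shot" step, assuming the camera frames end up in an ordinary view (for example an image view inside the scroll view) rather than in an AVCaptureVideoPreviewLayer; containerView and selectionRect are placeholder names:

import UIKit

// Render only the user-selected rectangle of `containerView` into a UIImage; its
// cgImage can then be handed to the OCR step.
func snapshot(of containerView: UIView, croppedTo selectionRect: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(selectionRect.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // Shift the drawing so that only `selectionRect` lands inside the image bounds.
    context.translateBy(x: -selectionRect.origin.x, y: -selectionRect.origin.y)
    containerView.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}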

CAEmitterLayer not rendering when -renderInContext: of superlayer is called

I have a drawing app and I would like for my users to be able to use particle effects as part of their drawing. Basically, the point of the app is to perform custom drawing and save to Camera Roll or share over the World Wide Web.
I encountered the CAEmitterLayer class recently, which I reckon would be a simple and effective way to add particle effects.
I have been able to draw the particles onscreen in the app using the CAEmitterLayer implementation. So rendering onscreen works fine.
When I go about rendering the contents of the drawing using
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// The instance drawingView has a CAEmitterLayer instance in its layer/view hierarchy
[drawingView.layer renderInContext:context];
// Note: I have also tried using the layer.presentationLayer and still nada
// Get the image from the current image context here for saving to Camera Roll or sharing
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
...the particles are never rendered in the image.
What I think is happening
The CAEmitterLayer is in a constant state of "animating" the particles. That's why when I attempt to render the layer (I have also tried rendering the presentationLayer and the modelLayer), the animations are never committed, and so the offscreen image render does not contain the particles.
Question
Has anyone rendered the contents of a CAEmitterLayer offscreen? If so, how did you do it?
Alternate Question
Does anyone know of any particle effect libraries that don't use OpenGL and aren't Cocos2D?
-[CALayer renderInContext:] is useful in a few simple cases, but will not work as expected in more complicated situations. You will need to find some other way to do your drawing.
The documentation for -[CALayer renderInContext:] says:
The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of Mac OS X may add support for rendering these layers and properties.
(These limitations apply to iOS, too.)
The header CALayer.h also says:
* WARNING: currently this method does not implement the full
* CoreAnimation composition model, use with caution. */
I was able to get my CAEmitterLayer rendered as an image correctly in its current animation state with
Swift
func drawViewHierarchyInRect(_ rect: CGRect,
afterScreenUpdates afterUpdates: Bool) -> Bool
Objective-C
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
within a current image context:
UIGraphicsBeginImageContextWithOptions(size, false, 0)
with afterScreenUpdates set to true/YES.
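In current Swift the method is spelled drawHierarchy(in:afterScreenUpdates:). A minimal sketch of the whole snapshot, with emitterHostView as a placeholder for whatever view hosts the CAEmitterLayer:

import UIKit

// Snapshot the view hosting the CAEmitterLayer in its current animation state.
func snapshotEmitter(from emitterHostView: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(emitterHostView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }

    // afterScreenUpdates: true is what makes the particles show up, as noted above.
    emitterHostView.drawHierarchy(in: emitterHostView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}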
Good luck with that one :D

Is there another way to display OpenGL content than using a Core Animation aware renderbuffer?

According to Apple's OpenGL ES Programming Guide, "If [a] framebuffer is intended to be displayed to the user, use a special Core Animation-aware renderbuffer."
The text goes on to say that to make this Core Animation aware renderbuffer, one needs to "Subclass UIView to create an OpenGL ES view for [the] iOS application [and] Override the layerClass" by using this code:
+ (Class) layerClass
{
return [CAEAGLLayer class];
}
However, if one examines Apple's GLCameraRipple example which displays OpenGL to the end user, the layerClass never appears to be overridden. A text search on layerClass or CAEAGLLayer reveals they are missing.
If you look for other approaches to display directly to users, Apple gives two other OpenGL approaches, but both seem to imply that they are not for displaying directly to users but rather are for off-screen rendering. (i.e. "If the framebuffer is used to perform offscreen image processing, attach a renderbuffer. If the framebuffer image is used as an input to a later rendering step, attach a texture.")
Is there another way to display OpenGL content than using a Core Animation aware renderbuffer, or is Apple somehow overriding the layer class so that the OpenGL content becomes Core Animation aware in another way?
The reason you don't see a subclassed UIView with a CAEAGLLayer backing it in the GLCameraRipple example is because it uses a GLKView. GLKView is a class introduced in iOS 5.0 as part of GLKit, and it wraps some common code, such as the explicit override to use a CAEAGLLayer and the setup of its matching renderbuffer.
This is still being done, it's just abstracted away from you. For displaying OpenGL ES content to the screen, you still need to go through a CAEAGLLayer one way or another.
Offscreen rendering is a different animal, because there you aren't attaching to a layer for display, so there's no layer needed. If you want to render to a texture, attach a texture as a target for your FBO, and that's it.
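For illustration, here is a generic GLKit sketch (not code from GLCameraRipple) showing how little is left to write once GLKView hides the layerClass override and the renderbuffer setup:

import GLKit
import OpenGLES

// A generic sketch: GLKView/GLKViewController hide the CAEAGLLayer layerClass override
// and the renderbuffer setup that a hand-rolled OpenGL ES view would need.
class MinimalGLViewController: GLKViewController {
    private var context: EAGLContext?

    override func viewDidLoad() {
        super.viewDidLoad()
        context = EAGLContext(api: .openGLES2)
        if let glkView = view as? GLKView, let context = context {
            glkView.context = context               // GLKView creates the CAEAGLLayer-backed drawable
            glkView.drawableColorFormat = .RGBA8888
        }
    }

    override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        // GLKView has already bound its framebuffer; just issue GL calls here.
        glClearColor(0.0, 0.0, 0.0, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    }
}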

How to Display ngmoco's Fireworks App Over a UIImageView?

Tim Omernick from ngmoco recently gave a talk at Stanford and demonstrated an interesting fireworks app for the iPhone that he posted up the code for here:
gamemakers.ngmoco.com/post/111712416/stanford-university-and-apple-were-kind-enough-to
I can get the app to run when I specify the EAGLView's parent class as a UIView in its header file. However, I want to be able to display the fireworks over an image, and when I try to specify the parent class as a UIImageView, the background picture I specify hides the firework animation.
Basically, I want to be able to display a UIImage and an EAGLView at the same time. Is this possible? Thanks
I suggest you take the time to learn some OpenGL. It's pretty basic once you understand how it works.
Here's an overview of what you'll need to do (a rough code sketch follows the list).
Create a texture (must be power-of-two size) to hold your image
Upload image pixel data to texture
On each frame:
Bind the texture
Take a quad made out of two triangles and corresponding texture coords and render it
Render the fireworks as before
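Here is a rough GLKit-based sketch of those steps using GLKBaseEffect. The original post uses a bare EAGLView with OpenGL ES 1.1, so treat this as the general idea rather than drop-in code; backgroundImage is a placeholder for your image.

import GLKit
import OpenGLES

// Load a UIImage into an OpenGL ES texture once; GLKTextureLoader handles the pixel
// upload (a power-of-two image is the safe choice, as noted above).
func loadBackgroundTexture(from backgroundImage: UIImage) throws -> GLKTextureInfo {
    return try GLKTextureLoader.texture(with: backgroundImage.cgImage!, options: nil)
}

// Each frame: bind the texture and draw a full-screen quad (two triangles), then
// render the fireworks as before on top of it.
func drawBackground(texture: GLKTextureInfo, effect: GLKBaseEffect) {
    effect.texture2d0.name = texture.name
    effect.texture2d0.enabled = GLboolean(GL_TRUE)
    effect.prepareToDraw()

    // Full-screen quad in normalized device coordinates, drawn as a triangle strip.
    let positions: [GLfloat] = [-1, -1,   1, -1,   -1, 1,   1, 1]
    let texCoords: [GLfloat] = [ 0,  1,   1,  1,    0, 0,   1, 0]

    glEnableVertexAttribArray(GLuint(GLKVertexAttrib.position.rawValue))
    glEnableVertexAttribArray(GLuint(GLKVertexAttrib.texCoord0.rawValue))
    positions.withUnsafeBufferPointer { p in
        texCoords.withUnsafeBufferPointer { t in
            glVertexAttribPointer(GLuint(GLKVertexAttrib.position.rawValue), 2,
                                  GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0,
                                  UnsafeRawPointer(p.baseAddress!))
            glVertexAttribPointer(GLuint(GLKVertexAttrib.texCoord0.rawValue), 2,
                                  GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0,
                                  UnsafeRawPointer(t.baseAddress!))
            glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
        }
    }
}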
