I'm trying to use SceneKit in an application and am wanting to use images captured from an iPhone/iPad's camera (front or back) as a texture on an object in my SCNScene.
From everything I can tell from the documentation, as well as other questions here on Stack Overflow, I should just be able to create an AVCaptureVideoPreviewLayer with an appropriate AVCaptureSession and have it "just work". Unfortunately, it does not.
I'm using a line of code like this:
cubeGeometry.firstMaterial?.diffuse.contents = layer
Using the layer as the material's contents partly works: if I set the layer's backgroundColor, I see that background color on the object, but the camera capture never appears. The layer itself is set up properly, because if I add it as a sublayer of the SCNView instead of using it on the object in the SCNScene, it displays correctly in UIKit.
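For context, the whole setup amounts to roughly the following sketch (identifiers are illustrative rather than the exact code in the project):

import SceneKit
import AVFoundation

// Sketch of the setup: a capture session feeding a preview layer,
// which is then used as the diffuse contents of a cube's material.
let session = AVCaptureSession()
session.sessionPreset = .medium
if let camera = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)
}

let layer = AVCaptureVideoPreviewLayer(session: session)
layer.frame = CGRect(x: 0, y: 0, width: 640, height: 480)
session.startRunning()

let cubeGeometry = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
// The layer is accepted as material contents (a backgroundColor shows up),
// but the live camera feed never appears on the cube.
cubeGeometry.firstMaterial?.diffuse.contents = layer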
An example project can be found here.
You can use the USE_FRONT_CAMERA constant in GameViewController.swift to toggle between the front and back camera, and the USE_LAYER_AS_MATERIAL constant to toggle between using the AVCaptureVideoPreviewLayer as the texture for a material or as a sublayer of the SCNView.
I've found a pretty hacky workaround for this using some OpenGL calls, but I'd prefer to have this code working as a more general and less fragile solution. Anyone know how to get this working properly on device? I've tried both iOS 8.0 and iOS 9.0 devices.
So I have followed one of the few tutorials on how to access the camera with UIImagePickerController, and that works fine. (It even implements face detection, which was my next step!)
But now I would like to create something like Apple's "grid" view with my own personally made grid. I know how to make a UIImageView, but after that I have a few more questions:
Would I make a separate image view and somehow layer it over the UIImagePickerController's image view that shows the camera?
Could I just make one UIImageView that already has the lines and then accesses the camera? If so, how would I do that?
My final goal would be to detect if something (like a face that the face detector has found) has crossed over into the grid or possibly just map where it is on the screen. Is this possible? How would I get an object's location on the camera screen?
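To make the first option concrete, here is a rough sketch of what I imagine, drawing the grid in a transparent view and handing it to the picker via cameraOverlayView (GridOverlayView is just an illustrative name, not an existing class):

import UIKit

// Hypothetical overlay view that draws a rule-of-thirds style grid.
final class GridOverlayView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setStrokeColor(UIColor.white.withAlphaComponent(0.6).cgColor)
        context.setLineWidth(1)
        // Two vertical and two horizontal lines dividing the view into thirds.
        for i in 1...2 {
            let x = bounds.width * CGFloat(i) / 3
            let y = bounds.height * CGFloat(i) / 3
            context.move(to: CGPoint(x: x, y: 0))
            context.addLine(to: CGPoint(x: x, y: bounds.height))
            context.move(to: CGPoint(x: 0, y: y))
            context.addLine(to: CGPoint(x: bounds.width, y: y))
        }
        context.strokePath()
    }
}

let picker = UIImagePickerController()
picker.sourceType = .camera
let overlay = GridOverlayView(frame: UIScreen.main.bounds)
overlay.backgroundColor = .clear
overlay.isUserInteractionEnabled = false   // let touches reach the camera controls
picker.cameraOverlayView = overlay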
I'm trying to recreate the barrel effect that can be seen on the camera mode picker below:
[image of the camera mode picker] (source: androidnova.org)
Do I have to use OpenGL in order to achieve this effect? What is the best approach?
I found a great library on GitHub that can be used to achieve this effect (https://github.com/Ciechan/BCMeshTransformView), but unfortunately it doesn't support animation and is therefore not usable.
I bet Apple used the private CAMeshTransform API. It's just like BCMeshTransform, except it's private and fully integrated with Core Animation; BCMeshTransformView was born when its developer discovered it.
The only easy option I see is:
Use CALayer.transform, which is a CATransform3D. You can simulate the barrel effect by adjusting the z position and y rotation of each item on the barrel. Also add a semitransparent dark gradient (CAGradientLayer) over the wheel so the choices get darker towards the edges. This is simple to do, but it won't look as smooth and realistic as an actual 3D barrel; maybe it will still be good enough to create a convincing illusion. (To get perspective, set the m34 field of the transform, typically something like -1.0/500.0 on the container layer's sublayerTransform.)
http://www.thinkandbuild.it/introduction-to-3d-drawing-in-core-animation-part-1/
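A rough sketch of that easy option, assuming the picker items are ordinary subviews laid out horizontally inside a wheel view (all names are illustrative, not a drop-in implementation):

import UIKit

// Fake the barrel with a per-item CATransform3D plus perspective on the container.
func applyBarrelEffect(to items: [UIView], in wheelView: UIView) {
    // Perspective: a small negative m34 on the container's sublayerTransform.
    var perspective = CATransform3DIdentity
    perspective.m34 = -1.0 / 500.0
    wheelView.layer.sublayerTransform = perspective

    let centerX = wheelView.bounds.midX
    for item in items {
        // Rotate each item around the y axis in proportion to its distance from the
        // center, and push it back in z so the edges recede like a barrel.
        let offset = (item.center.x - centerX) / centerX      // roughly -1 ... 1
        let angle = offset * CGFloat.pi / 5                    // up to ~36 degrees at the edges
        var transform = CATransform3DMakeRotation(angle, 0, 1, 0)
        transform = CATransform3DTranslate(transform, 0, 0, -abs(offset) * 40)
        item.layer.transform = transform
    }
}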
The hardest option is using a custom OpenGL view that makes a barrel shape and applies your contents on top of it as a texture. I would expect you to run into most of the complexities behind creating BCMeshTransformView, and to have the same difficulty supporting animations that BCMeshTransformView did.
You may still be able to use BCMeshTransformView though. BCMeshTransformView is slow at processing content animations such as color changes, but is very fast at processing geometry changes. So you could use it to do a barrel effect, as long as you define the barrel effect entirely in terms of mesh geometry changes (instead of as content changes like using a scroll view or adjusting subview positions). You would need to do gesture handling + scrolling yourself instead of using UIScrollView though, which is tricky and tedious to get right.
Considering the options, I would want to fudge it by using 3D transforms, then move to other options only if I can't create a convincing illusion using 3D transforms.
I have a drawing app and I would like for my users to be able to use particle effects as part of their drawing. Basically, the point of the app is to perform custom drawing and save to Camera Roll or share over the World Wide Web.
I encountered the CAEmitterLayer class recently, which I reckon would be a simple and effective way to add particle effects.
I have been able to draw the particles onscreen in the app using the CAEmitterLayer implementation. So rendering onscreen works fine.
When I go about rendering the contents of the drawing using
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// The instance drawingView has a CAEmitterLayer instance in its layer/view hierarchy
[drawingView.layer renderInContext:context];
// Note: I have also tried using the layer's presentationLayer and still nada
....
// Get the image from the current image context here for saving to Camera Roll or sharing
....the particles are never rendered in the image.
What I think is happening
The CAEmitterLayer is in a constant state of "animating" the particles. That's why, when I attempt to render the layer (I have also tried rendering the presentationLayer and the modelLayer), the animations are never committed, so the offscreen render does not contain the particles.
Question
Has anyone rendered the contents of a CAEmitterLayer offscreen? If so, how did you do it?
Alternate Question
Does anyone know of any particle effect system libraries that don't use OpenGL and aren't Cocos2D?
-[CALayer renderInContext:] is useful in a few simple cases, but will not work as expected in more complicated situations. You will need to find some other way to do your drawing.
The documentation for -[CALayer renderInContext:] says:
The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of Mac OS X may add support for rendering these layers and properties.
(These limitations apply to iOS, too.)
The header CALayer.h also says:
* WARNING: currently this method does not implement the full
* CoreAnimation composition model, use with caution. */
I was able to get my CAEmitterLayer rendered as an image correctly in its current animation state with
Swift
func drawViewHierarchyInRect(_ rect: CGRect,
afterScreenUpdates afterUpdates: Bool) -> Bool
Objective-C
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
within a current image context created with UIGraphicsBeginImageContextWithOptions(size, false, 0), with afterScreenUpdates set to true / YES.
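For example, a minimal sketch of that approach (using the current Swift spelling drawHierarchy(in:afterScreenUpdates:); emitterHostView is an illustrative name for the view whose layer tree contains the CAEmitterLayer):

import UIKit

func snapshotImage(of emitterHostView: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(emitterHostView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }

    // afterScreenUpdates: true forces a screen update before drawing, so the
    // emitter's current animation state makes it into the bitmap.
    emitterHostView.drawHierarchy(in: emitterHostView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}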
Good luck with that one :D
I want to make a 3D metal compass in iOS which will have a movable cover.
That is, when you touch it with 3 fingers and move your fingers upward, the cover keeps moving with your fingers, and after a certain distance it opens. Once you pull it down with 3 fingers again, it closes. I have attached a sketch of what I'm thinking.
Is it possible using core animations and CALayers? Or would I have to use OpenGL ES?
First you should obviously create a textured 3D model in an app like 3ds Max or Maya. Then export it to some suitable format. The simplest one is OBJ (there are lots of examples of how to load it). There are two options for animation:
Do the animation manually by rotating the cover object. It's probably the easiest way to do it; see the sketch after this list.
Create the animation in your 3D editor and then interpolate between frames. This gives a much more realistic result, but in this case the OBJ format is not suitable; COLLADA is. To load it I suggest using the Assimp library.
And if you don't need any advanced interaction, another option is pseudo 3D: just pre-render all the compass animation frames and play them back on a 2D texture.
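One way to do option 1 without dropping to OpenGL is SceneKit. The following is a hypothetical sketch that assumes the exported model has been imported into an SCNScene with the lid as a child node named "cover", whose pivot sits on the hinge edge (all names are illustrative):

import SceneKit

// Swing the lid open or closed by rotating the cover node around the hinge axis.
func setCover(open: Bool, in scene: SCNScene) {
    guard let cover = scene.rootNode.childNode(withName: "cover", recursively: true) else { return }
    let targetAngle: CGFloat = open ? -CGFloat.pi / 2 : 0   // lid up 90 degrees when open
    let rotate = SCNAction.rotateTo(x: targetAngle, y: 0, z: 0, duration: 0.4)
    rotate.timingMode = .easeInEaseOut
    cover.runAction(rotate)
}

You could drive targetAngle incrementally from the 3-finger pan gesture instead of animating to a fixed angle.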
I have already tried this solution: CGImage (or UIImage) from a CALayer.
However, I do not get anything.
As the question says, I am trying to get a UIImage from the camera's preview layer. I know I can either capture a still image or use the output sample buffer, but my session preset is set to Photo, so either of these approaches is slow and gives me a large image.
What I thought could work is to get the image directly from the preview layer, since it has exactly the size I need and the operations have already been applied to it. I just don't know how to get this layer to draw into my context so that I can get it as a UIImage.
Perhaps another solution would be to use OpenGL to get this layer directly as a texture?
Any help will be appreciated, thanks.
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc) in which the views are rendered.
In my experience, with AV Foundation it is not like that: if you use that method on a view that hosts a preview layer, you only get the content of the view without the image from the preview layer. Using -snapshotViewAfterScreenUpdates: returns a UIView that hosts a special layer, and if you try to make an image from that view you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each one has its own limitation: the former can't work simultaneously with an AVCaptureMovieFileOutput recording, and the latter makes the shutter sound.
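As an illustration of the AVCaptureVideoDataOutput route, here is a rough sketch that grabs frames as they arrive and converts one to a UIImage (FrameGrabber and its members are illustrative names; orientation/mirroring handling is omitted):

import AVFoundation
import CoreImage
import UIKit

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "frame-grabber")
    var latestImage: UIImage?   // note: written from the capture queue in this sketch

    func attach(to session: AVCaptureSession) {
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Convert the pixel buffer to a UIImage; a real implementation would also
        // handle rotation and mirroring to match the preview layer.
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        latestImage = UIImage(ciImage: ciImage)
    }
}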