Transform to create barrel effect seen in Apple's Camera app - iOS

I'm trying to recreate the barrel effect that can be seen on the camera mode picker below:
(image: the Camera app's mode picker wheel; source: androidnova.org)
Do I have to use OpenGL in order to achieve this effect? What is the best approach?
I found a great library on GitHub that can be used to achieve this effect (https://github.com/Ciechan/BCMeshTransformView), but unfortunately it doesn't support animation and is therefore not usable.

I bet Apple used CGMeshTransform. It's just like BCMeshTransform, except it is a private API and fully integrates with Core Graphics. BCMeshTransformView was born when a developer discovered this.
The only easy option I see is:
Use CALayer.transform, which is a CATransform3D. You can simulate the barrel effect by adjusting the z position and y rotation of each item on the barrel. Also add a semi-transparent dark gradient (CAGradientLayer) over the wheel to simulate the choices getting darker towards the edges. This will be simple to do, but won't look as smooth and realistic as an actual 3D barrel. Maybe it will look good enough to create a convincing illusion, though? (To get the depth effect, set the m34 component of a CATransform3D, typically as a sublayerTransform on the container with m34 = -1.0/500.0 or similar, as in the sketch below.)
http://www.thinkandbuild.it/introduction-to-3d-drawing-in-core-animation-part-1/
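A minimal sketch of that approach, assuming a container wheelView and an NSArray of itemViews to arrange around the barrel (the radius and angular step are placeholder values):

// Enable perspective for the container's sublayers.
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 500.0;
wheelView.layer.sublayerTransform = perspective;

CGFloat radius = 120.0; // assumed barrel radius in points
CGFloat step = 0.35;    // assumed angle between items, in radians
NSInteger count = (NSInteger)itemViews.count;
[itemViews enumerateObjectsUsingBlock:^(UIView *item, NSUInteger i, BOOL *stop) {
    // Rotate each item around the barrel's vertical axis,
    // then push it out to the barrel's surface.
    CGFloat angle = ((CGFloat)i - (count - 1) / 2.0) * step;
    CATransform3D t = CATransform3DMakeRotation(angle, 0, 1, 0);
    item.layer.transform = CATransform3DTranslate(t, 0, 0, radius);
}];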
The hardest option is using a custom OpenGL view that makes a barrel shape and applies your contents on top of it as a texture. I would expect that you run into most of the complexities behind creating BCMeshTransformView, and have difficulty supporting animations just like BCMeshTransformView did.
You may still be able to use BCMeshTransformView though. BCMeshTransformView is slow at processing content animations such as color changes, but is very fast at processing geometry changes. So you could use it to do a barrel effect, as long as you define the barrel effect entirely in terms of mesh geometry changes (instead of as content changes like using a scroll view or adjusting subview positions). You would need to do gesture handling + scrolling yourself instead of using UIScrollView though, which is tricky and tedious to get right.
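For instance, going by the sample code in the BCMeshTransformView README, a barrel bulge could be expressed purely as vertex displacement; the mesh density and the bulge profile below are my own guesses:

BCMutableMeshTransform *transform =
    [BCMutableMeshTransform identityMeshTransformWithNumberOfRows:20
                                                  numberOfColumns:20];
[transform mapVerticesUsingBlock:^BCMeshVertex(BCMeshVertex vertex, NSUInteger vertexIndex) {
    // Push vertices toward the viewer most strongly at the horizontal
    // center, falling off toward the edges: a barrel cross-section.
    float x = vertex.from.x - 0.5f;             // -0.5 … 0.5 across the view
    vertex.to.z = 0.2f * cosf(x * (float)M_PI); // assumed bulge profile
    return vertex;
}];
meshTransformView.meshTransform = transform;

Scrolling would then be a matter of shifting the mesh in response to your own gesture handling, per the caveat above.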
Considering the options, I would want to fudge it by using 3D transforms, then move to other options only if I can't create a convincing illusion using 3D transforms.

Related

How to change the appearance of ARSCNDebugOptions FeaturePoints?

Is there a way to change the appearance (size, color, etc.) of the feature points in ARKit easily? After setting debugOptions on the sceneView to ARSCNDebugOptions.showFeaturePoints, I'm thinking I might have to iterate over the rawFeaturePoints and manually add custom objects into the scene at those points.
As its name suggests, ARSCNDebugOptions.showFeaturePoints is a tool to aid in debugging your app. Because the size and color of feature point indicators aren't essential to knowing where feature points are (for the sake of making sure your app is behaving correctly), Apple doesn't offer an API to change their appearance. (Any more than they offer APIs for changing the colors of bounding boxes, physics shapes, and other indicators available in SceneKit debug options.)
If you want to create your own visualization for feature points, you'll need to do exactly as you suggest: read the rawFeaturePoints from the current ARFrame and use those to position content in the SceneKit scene. You might do this by creating a bunch of nodes with geometry and setting their positions. You might also look into whether it's easy to pass the entire buffer of points to create an SCNGeometry that renders in point-cloud mode.
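A rough sketch of that first approach (the sphere radius and color are arbitrary choices, and a real app would reuse or prune marker nodes between frames):

// Place a small marker at each raw feature point of the current frame.
ARPointCloud *cloud = self.sceneView.session.currentFrame.rawFeaturePoints;
for (NSUInteger i = 0; i < cloud.count; i++) {
    vector_float3 p = cloud.points[i];
    SCNSphere *sphere = [SCNSphere sphereWithRadius:0.004]; // ~4 mm marker
    sphere.firstMaterial.diffuse.contents = [UIColor yellowColor];
    SCNNode *marker = [SCNNode nodeWithGeometry:sphere];
    marker.position = SCNVector3Make(p.x, p.y, p.z);
    [self.sceneView.scene.rootNode addChildNode:marker];
}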

Interact with complex figure in iOS

I need to be able to interact with a representation of a cylinder that has many different parts on it. When the user taps one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; the 3D is NOT required (it would be really cool though). A representation that meets the functional needs will suffice.
The info about the parts needed to make the drawing (size, position, etc.) comes from an API.
I don't need it to be realistic, really. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of interactable small rectangles.
So, as I mentioned, there are (as I see it) two opposite approaches: realistic or simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, or frameworks should I look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. This effect can be achieved with standard UIKit and UIView and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection, meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt this to perspective if needed). You could draw this with Core Graphics in a UIView's drawRect:.
A square slice represents an angle piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
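In Core Graphics terms that might look something like this, where frontCenter, backCenter, radius, and the two angles stand in for your API-supplied values:

// Outline one slice: an arc on the front circle, the matching arc on the
// offset (back) circle, with lines connecting the arcs' endpoints.
UIBezierPath *path = [UIBezierPath bezierPathWithArcCenter:frontCenter
                                                    radius:radius
                                                startAngle:startAngle
                                                  endAngle:endAngle
                                                 clockwise:YES];
[path addArcWithCenter:backCenter radius:radius
            startAngle:endAngle endAngle:startAngle clockwise:NO];
[path closePath]; // line back to the first arc's start point
[[UIColor redColor] setStroke];
[path stroke];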
To check if a touch is inside one of these square slices (a sketch follows the list):
Check if the touch point's angle from the front circle's center lies between the slice's start and end angles.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
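A minimal sketch of that test, assuming the two circle centers, their shared radius, and the slice angles come from your part data (wrap-around at 2π is ignored for brevity):

BOOL PointInSlice(CGPoint touch, CGPoint frontCenter, CGPoint backCenter,
                  CGFloat radius, CGFloat startAngle, CGFloat endAngle) {
    // Outside the front circle...
    if (hypot(touch.x - frontCenter.x, touch.y - frontCenter.y) < radius) return NO;
    // ...but inside the offset (back) circle...
    if (hypot(touch.x - backCenter.x, touch.y - backCenter.y) > radius) return NO;
    // ...and within the slice's angular range, measured from the front center.
    CGFloat angle = atan2(touch.y - frontCenter.y, touch.x - frontCenter.x);
    if (angle < 0) angle += 2 * M_PI; // normalize to 0 … 2π
    return angle >= startAngle && angle <= endAngle;
}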
To find a point at which to display the popover, you could average the end points of the slice, or find the middle angle between the two edges and offset it by half the slice length.
Theoretically, doing this in Scene Kit with either SpriteKit or UIKit Popovers is ideal.
However, Scene Kit (and Sprite Kit) seem to be in a state of flux, wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from a relatively stable and performant Sprite Kit in iOS 8.4 to a lot of lost performance in iOS 9 seems to be a common experience. Scene Kit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object, and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate that it's currently selected.
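Something along these lines might work, assuming a tap gesture recognizer is already wired to the SCNView (the emission highlight is just an arbitrary way to mark the selection):

- (void)handleTap:(UITapGestureRecognizer *)gesture {
    CGPoint point = [gesture locationInView:self.scnView];
    NSArray<SCNHitTestResult *> *hits = [self.scnView hitTest:point options:nil];
    SCNHitTestResult *hit = hits.firstObject;
    if (!hit) return;
    // geometryIndex identifies which geometry element was hit;
    // SceneKit assigns materials to elements modulo the material count.
    NSArray<SCNMaterial *> *materials = hit.node.geometry.materials;
    SCNMaterial *material = materials[(NSUInteger)hit.geometryIndex % materials.count];
    material.emission.contents = [UIColor yellowColor];
}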
When wanting a smooth, well rounded cylinder as per your example, start with a cylinder that's made of only enough segments to describe/define the material IDs you need for your "rectangular" sections to be touched.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should be responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can have a defined size thanks to
collectionView:layout:sizeForItemAtIndexPath:
Then each cell of the collection could be a small rectangle representing a touchable part of the cylinder.
And use
collectionView:(UICollectionView *)collectionView
didSelectItemAtIndexPath:(NSIndexPath *)indexPath
to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
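Putting it together might look like this; the popover's content controller is whatever your app needs to show, and on iPhone you would additionally need the adaptive-presentation delegate (or one of the third-party controls below) to keep it looking like a popover:

- (void)collectionView:(UICollectionView *)collectionView
didSelectItemAtIndexPath:(NSIndexPath *)indexPath {
    CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
    UIViewController *content = [[UIViewController alloc] init]; // your detail VC
    content.modalPresentationStyle = UIModalPresentationPopover;
    content.popoverPresentationController.sourceView = collectionView;
    content.popoverPresentationController.sourceRect = rect;
    [self presentViewController:content animated:YES completion:nil];
}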
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you know the 2D coordinate on screen, so your only decision is whether to show a popover or not; a 3D model doesn't even need to exist.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
I think the use of the 3D model is to determine which piece of data to show, if I understand you correctly. In that case, SCNView's hit-test method will do most of the work for you. What you should do is perform a hit test, take the hit node and the hit's local 3D coordinate on that node; you can then calculate which segment was hit by the touch and make the decision.
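A sketch of that calculation, assuming an SCNCylinder (whose axis runs along y and which is centered on its node's origin) divided into a hypothetical kRows × kColumns grid of segments:

NSArray<SCNHitTestResult *> *hits = [scnView hitTest:touchPoint options:nil];
SCNHitTestResult *hit = hits.firstObject;
if (hit) {
    SCNVector3 p = hit.localCoordinates; // in the cylinder's own space
    // The angle around the y axis picks the column...
    CGFloat angle = atan2(p.z, p.x) + M_PI; // 0 … 2π
    NSInteger column = (NSInteger)(angle / (2 * M_PI) * kColumns);
    // ...and the height along the axis picks the row.
    NSInteger row = (NSInteger)((p.y / kCylinderHeight + 0.5) * kRows);
    // row and column now identify the tapped segment.
}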
Now, how to draw the surface of the cylinder is the only question left, right? There are various ways to do it: for example, simply paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the material for the cylinder...
I think the problem would be basically solved.

iOS : Creating a 3D Compass

I want to make a 3D metal compass in iOS which will have a movable cover.
That is, when you touch it with 3 fingers and move your fingers upward, the cover keeps moving with your fingers, and after a certain distance it opens. Once you pull it down with 3 fingers again, it closes. I have attached a sketch of what I'm thinking.
Is it possible using Core Animation and CALayers? Or would I have to use OpenGL ES?
First you should obviously create a textured 3D model in an app like 3ds Max or Maya, then export it to some suitable format. The simplest one is OBJ (there are lots of examples of how to load it). There are two options for animation:
Do animation manually by rotating the cover object. It's probably the easiest way to do that.
Create the animation in your 3D editor and then interpolate between frames. This way you can get a much more realistic result. However, in this case the OBJ format is not suitable, but COLLADA is. To load it, I suggest using the Assimp library (a loading sketch follows the list).
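For example, Assimp's C API can pull a COLLADA file apart into meshes and animation channels; the file name and post-processing flags here are only illustrative:

#include <assimp/cimport.h>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

const struct aiScene *scene = aiImportFile("compass.dae",
        aiProcess_Triangulate | aiProcess_GenSmoothNormals);
if (scene) {
    // scene->mMeshes[i]->mVertices / mNormals / mTextureCoords feed your
    // GL vertex buffers; scene->mAnimations holds the keyframe channels
    // you would interpolate between.
    aiReleaseImport(scene);
}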
And if you don't need advanced interaction, another option is to use pseudo-3D: just pre-render all the compass animation frames and apply the animation to a 2D texture.

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree.
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and decide which path to take...
My tangled efforts so far... ( I will delete all of this, or move it into possibly several answers as a solution unfolds... )
I can see 3 potential solution paths:
GL
it looks as though GL has everything it takes to get complete fine-grained control over the process, although functions exposed by Core Graphics come tantalisingly close, and using them would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
Core Graphics, and Core Animation to accomplish the blending
documentation goes on to say
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
so I can chroma-key mask the left image, setting {0,0,0}, i.e. black, as the key.
this at least secures a transparent background; now I have to work on making it yellow instead of grey.
so maybe I could have started instead by setting a yellow background colour for my image context, then used some CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN used chroma-key masking to get a transparent background
ok, this covers at least getting the basic unlit image on-screen
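For reference, the chroma-key step might look like this in Core Graphics; the asset name and threshold values are guesses, and note that CGImageCreateWithMaskingColors requires a source image without an alpha channel:

// Mask out near-black pixels so whatever is underneath shows through.
UIImage *source = [UIImage imageNamed:@"diamond_inert"]; // hypothetical asset
const CGFloat maskingRange[6] = { 0, 16, 0, 16, 0, 16 }; // r, g, b min/max pairs
CGImageRef masked = CGImageCreateWithMaskingColors(source.CGImage, maskingRange);
UIImage *keyed = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);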
now I could overlay the sparkly image, using some blend mode, maybe I could keep it in its current greyscale state, and that would just boost the colours of the original
only problem with this is that it is a lot of heavy real-time blending
so maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
if this allows me to set the blend mode to additive blending, then I could just composite the glowing image over the original image with an appropriate alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use Core Graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader
The obvious solution might seem to be simply

out_rgb = in_rgb * ((1 - glow_factor) * inertPixel + glow_factor * shinyPixel)

(where inertPixel is the appropriate pixel of the inert diamond, etc.)...
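As a fragment-shader sketch, assuming the two greyscale images are packed into a single luminance+alpha texture and the diamond's characteristic colour arrives as a uniform (all of these names are mine):

static NSString *const kFragmentShader = @""
"precision mediump float;                                           \n"
"uniform sampler2D u_diamondTexture; // L = inert, A = shiny        \n"
"uniform float u_glowFactor;                                        \n"
"uniform vec3 u_tint;                // characteristic colour       \n"
"varying vec2 v_texCoord;                                           \n"
"void main() {                                                      \n"
"    vec4 s = texture2D(u_diamondTexture, v_texCoord);              \n"
"    float v = mix(s.r, s.a, u_glowFactor); // blend the two images \n"
"    gl_FragColor = vec4(u_tint * v, 1.0);                          \n"
"}";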
it looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After having looked at this problem a little more, I can see several solutions
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
this has an obvious benefit that a graphic designer could construct the entire sequence and I could load it in as a bunch of PNG files.
another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on-the-fly
however, it has a potential drawback of a lot of sending data RAM->VRAM
this can be optimised by using glTexSubImage2D; several frames could be sent simultaneously and then unpacked from within GL... in fact, maybe the entire sequence. If so, it would make sense to use PVRTC texture compression.
iOS: playing a frame-by-frame greyscale animation in a custom colour
Solution B -- load glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend
this has the advantage that it is close to the wire and can be tweaked in all sorts of ways. Also, it is going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- set the GL blend mode (glBlendFunc) to perform additive blending.
then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex.
then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
this has an advantage that it can be achieved with a more generic code structure -- ie a very general ' draw textured quad / sprite ' class.
disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds, and at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render EVERYTHING at its base alpha, and then on a second pass I could draw the glowing sprites again with the appropriate alpha for everything that IS glowing. A sketch of the blending setup follows.
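In GL client code the additive pass could be as simple as the following, where the quad-drawing helper is hypothetical:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive: src * alpha is added to dst
[self drawQuadWithTexture:inertTexture alpha:1.0f - glow]; // hypothetical helper
[self drawQuadWithTexture:shinyTexture alpha:glow];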
it is worth noting that if I pursue solution A, this would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.

Using OpenGL on top of an MKMapView?

I've got some data that I would like to render on top of an MKMapView using OpenGL. Currently I can sort of achieve this by placing a transparent OpenGL layer on top of the MKMapView and drawing to it using OpenGL commands.
However, the problem becomes syncing the drawing of the OpenGL layer with the drawing that MKMapView does. I can kind of get around this by drawing on touch events; this works well until you "flick" the map, which causes a continuous series of draws for the animation that I don't detect.
Another idea was to use an MKOverlayView and hope that OpenGL drawing could be done with it, but I'm not sure exactly how to approach that.
I would recommend evaluating BA3's "Altus" mapping engine. It is built entirely in OpenGL, so in the worst case you may simply render to the same context. However, it would probably be better if you could take advantage of its support for geo-located raster, vector, and marker elements.
Full disclosure: I'm friends with the authors, but have no financial interest.
