TPlane versus TImage3D? (Delphi)

I have a Delphi XE4 application made with FMX (FireMonkey).
I want flat textured objects (no thickness) where I can zoom and move the whole scene on the screen.
(Hint: think of pictures on a wall.)
I started with a TForm3D hosting TImage3D components with bitmaps assigned. It works well!
But I've also tested the TPlane component, and I can do the same thing and achieve the same result.
The question is: what is the difference between these two components, TImage3D and TPlane?
This will help me choose the right one for my current and future needs.
The FMX documentation and wiki don't really help here!

The best way to think about these things is to look at the inheritance path.
TImage3D derives from TAbstractLayer and so it can generally be used like a 3D layer. TAbstractLayer derives from TControl3D but also implements IAlignableObject and IAlignRoot.
TPlane derives from TCustomMesh which derives from TShape3D which also comes from TControl3D.
In brief, I'd guess a TPlane is simply a specific 3D shape. I haven't pored through the FMX source code, but I would expect it to be more lightweight than TImage3D: a TPlane is just a flat surface that can be manipulated in 3D space.
A TImage3D, however, by means of IAlignableObject and IAlignRoot, has access to a number of built-in methods and features that allow it to interact as a 3D UI object: to align itself to other IAlignableObjects; to define margins, bounds, and anchors; and to define how it places itself in, or fills, space with respect to other IAlignableObjects.
Which to use depends on what you are doing. If you want the image to be part of a 3D scene, then TPlane probably makes the most sense. If you want it to be part of a 3D UI, that is, an image in a 3D space alongside other controls and user-interface elements, then TImage3D probably makes the most sense.
A TImage3D may not have access, at the same time, to certain methods that operate on TCustomMesh: a TImage3D probably doesn't expose its mesh (i.e. it can't be generically modified by 3D transforms and the like, where the input must be a TCustomMesh), while a TPlane, being a 3D primitive rather than a locked-down UI control, would be rather more malleable in that regard.

Related

How to change the appearance of ARSCNDebugOptions FeaturePoints?

Is there a way to change the appearance (size, color, etc.) of the feature points in ARKit easily? After setting the scene view's debugOptions to ARSCNDebugOptions.showFeaturePoints, I'm thinking I might have to iterate over the rawFeaturePoints and manually add custom objects into the scene at those points.
As its name suggests, ARSCNDebugOptions.showFeaturePoints is a tool to aid in debugging your app. Because the size and color of feature point indicators aren't essential to knowing where feature points are (for the sake of making sure your app is behaving correctly), Apple doesn't offer an API to change their appearance. (Any more than they offer APIs for changing the colors of bounding boxes, physics shapes, and other indicators available in SceneKit debug options.)
If you want to create your own visualization for feature points, you'll need to do exactly as you suggest: read the rawFeaturePoints from the current ARFrame and use those to position content in the SceneKit scene. You might do this by creating a bunch of nodes with geometry and setting their positions. You might also look into whether it's easy to pass the entire buffer of points to create an SCNGeometry that renders in point-cloud mode.
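As a rough illustration of the second idea, here's a minimal sketch (the function name and node name are my own invention; only rawFeaturePoints and the SceneKit calls come from the real APIs) that packs the current frame's raw feature points into an SCNGeometry rendered in point mode:

import ARKit
import SceneKit

func showCustomFeaturePoints(in sceneView: ARSCNView) {
    guard let points = sceneView.session.currentFrame?.rawFeaturePoints?.points else { return }

    // One vertex per feature point, in world space.
    let source = SCNGeometrySource(vertices: points.map { SCNVector3($0) })

    // A .point element renders each vertex as a dot; the size is tunable.
    let element = SCNGeometryElement(indices: Array(0..<Int32(points.count)),
                                     primitiveType: .point)
    element.pointSize = 4
    element.minimumPointScreenSpaceRadius = 2
    element.maximumPointScreenSpaceRadius = 8

    let geometry = SCNGeometry(sources: [source], elements: [element])
    geometry.firstMaterial?.diffuse.contents = UIColor.systemYellow
    geometry.firstMaterial?.lightingModel = .constant

    // Replace the previous frame's point cloud with this frame's.
    let name = "featurePointCloud"
    sceneView.scene.rootNode.childNode(withName: name, recursively: false)?.removeFromParentNode()
    let node = SCNNode(geometry: geometry)
    node.name = name
    sceneView.scene.rootNode.addChildNode(node)
}

You would call this from the session's frame-update callback to keep the cloud in sync with tracking.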

Design UISegmentControl

I have a segmented control design like this:
How can I design my segments so that the selected one looks like "ALL" in the image above? All I could think of was to use different images for the selected state, but if I do that, the part of the curve that extends into "Others" won't be visible. Any suggestions on designing a UISegmentedControl like this?
I have two suggestions:
Find an alternative approach that avoids this.
A lot of apps try to add delight by designing custom components and UI when it doesn't really add that much to the app. At worst you might frustrate people by using non-standard components that don't work the way they expect, or increase cognitive load as they're trying to use your app.
Go 100% with a custom subclass.
Don't just settle for setting a static background image, but invest in creating a component that not only looks like this in a selected state, but also provides animations as people change the selected item.
This is going to be a fair amount of work, but would be something like this:
subclassing UISegmentedControl -- it provides the base of the functionality that you're looking for, which is good;
adding a new CAShapeLayer to the control's background layer;
figuring out a dynamic bezier curve that can update for all your states (you'll probably end up with control points for each segment) -- you might be able to do this by hand, but I'd use a tool like PaintCode to generate the bezier curve code, and Illustrator to make the initial curve;
adding listeners for the segment-change events, and recalculating the curve's control points as needed.
The positive note is that the path property on CAShapeLayer is animatable, so when the segment changes and the curve updates, the animation will probably be the easiest part!
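To make the shape of this concrete, here's a minimal sketch of steps 1, 2, and 4 (the class name and the placeholder capsule path are my own; a real version would substitute the hand-tuned curve from step 3):

import UIKit

class CurvedSegmentedControl: UISegmentedControl {
    private let indicator = CAShapeLayer()

    override init(items: [Any]?) {
        super.init(items: items)
        indicator.fillColor = UIColor.systemBlue.cgColor
        layer.insertSublayer(indicator, at: 0)   // behind the segment labels
        addTarget(self, action: #selector(selectionChanged), for: .valueChanged)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        indicator.frame = bounds
        indicator.path = path(for: selectedSegmentIndex).cgPath
    }

    @objc private func selectionChanged() {
        // CAShapeLayer.path animates smoothly as long as the old and new
        // paths have the same number of control points.
        let newPath = path(for: selectedSegmentIndex).cgPath
        let animation = CABasicAnimation(keyPath: "path")
        animation.fromValue = indicator.path
        animation.toValue = newPath
        animation.duration = 0.25
        indicator.add(animation, forKey: "path")
        indicator.path = newPath
    }

    // Placeholder shape: a capsule over the selected segment. Swap in the
    // dynamic bezier curve once its control points are worked out.
    private func path(for segment: Int) -> UIBezierPath {
        guard numberOfSegments > 0, segment >= 0 else { return UIBezierPath() }
        let width = bounds.width / CGFloat(numberOfSegments)
        let rect = CGRect(x: CGFloat(segment) * width, y: 0, width: width, height: bounds.height)
        return UIBezierPath(roundedRect: rect, cornerRadius: bounds.height / 2)
    }
}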

Transform to create barrel effect seen in Apple's Camera app

I'm trying to recreate the barrel effect that can be seen on the camera mode picker below:
[Image: barrel-distorted camera mode picker; source: androidnova.org]
Do I have to use OpenGL in order to achieve this effect? What is the best approach?
I found a great library on GitHub that can be used to achieve this effect (https://github.com/Ciechan/BCMeshTransformView), but unfortunately it doesn't support animation and is therefore not usable.
I bet Apple used CAMeshTransform. It's just like BCMeshTransform, except it is a private API that fully integrates with Core Animation. BCMeshTransformView was born when a developer discovered this.
The only easy option I see is:
Use CALayer.transform, which is a CATransform3D. You can use this to simulate the barrel effect you want by adjusting the z position and y rotation of each item on the barrel. Also add a semitransparent dark gradient (CAGradientLayer) to the wheel to simulate the effect of choices getting darker towards the edges. This will be simple to do, but won't look as smooth and realistic as an actual 3D barrel. Maybe it will look good enough to create a convincing illusion, though? (To get 3D perspective, set the m34 component of the container's sublayerTransform, e.g. to -1.0/500.0 or similar.)
http://www.thinkandbuild.it/introduction-to-3d-drawing-in-core-animation-part-1/
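Here's a rough sketch of that idea (the function name and the layout parameters are my own; the radius and per-item angle would need tuning against the real design):

import UIKit

func applyBarrelEffect(to itemLayers: [CALayer], in container: CALayer,
                       radius: CGFloat = 150, anglePerItem: CGFloat = .pi / 8,
                       selectedIndex: Int = 0) {
    // Perspective: a small negative m34 on the container's sublayerTransform.
    var perspective = CATransform3DIdentity
    perspective.m34 = -1.0 / 500.0
    container.sublayerTransform = perspective

    for (index, layer) in itemLayers.enumerated() {
        // Angle of this item relative to the selected (front-facing) item.
        let angle = CGFloat(index - selectedIndex) * anglePerItem

        // Place the item on the barrel's surface, then rotate it to face
        // tangentially; the selected item stays at z = 0.
        var transform = CATransform3DMakeTranslation(radius * sin(angle), 0,
                                                     radius * cos(angle) - radius)
        transform = CATransform3DRotate(transform, angle, 0, 1, 0)
        layer.transform = transform

        // Cheap stand-in for the darkening gradient toward the edges.
        layer.opacity = Float(max(0.3, cos(angle)))
    }
}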
The hardest option is using a custom OpenGL view that makes a barrel shape and applies your contents on top of it as a texture. I would expect that you run into most of the complexities behind creating BCMeshTransformView, and have difficulty supporting animations just like BCMeshTransformView did.
You may still be able to use BCMeshTransformView though. BCMeshTransformView is slow at processing content animations such as color changes, but is very fast at processing geometry changes. So you could use it to do a barrel effect, as long as you define the barrel effect entirely in terms of mesh geometry changes (instead of as content changes like using a scroll view or adjusting subview positions). You would need to do gesture handling + scrolling yourself instead of using UIScrollView though, which is tricky and tedious to get right.
Considering the options, I would want to fudge it by using 3D transforms, then move to other options only if I can't create a convincing illusion using 3D transforms.

UIDynamicItem with non-rectangular bounds

I'm looking into UIKit Dynamics, and the one problem I've run into is that if I create a UIView with a custom drawRect: (for instance, let's say I want to draw a triangle), there seems to be no way to specify the path of the UIView (or rather the UIDynamicItem) to be used for a UICollisionBehavior.
My goal really is to have polygons on the screen that collide with one another exactly how one would expect.
I came up with a solution of stitching multiple views together, but this seems like overkill for what I want.
Is there some easy way to do this, or do I really have to stitch views together?
Watch the WWDC 2013 videos on this topic. They are very clear: for the sake of efficiency and speed, only the (rectangular) bounds of the view matter during collisions.
EDIT: In iOS 9, a dynamic item can have a customized collision boundary. You can have a rectangle dictated by the frame, an ellipse dictated by the frame, or a custom shape: a convex, counterclockwise, simple closed UIBezierPath. The relevant properties, collisionBoundsType and (for a custom shape) collisionBoundingPath, are read-only, so you will have to subclass in order to set them.
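For example, a minimal sketch of such a subclass (the triangle path is illustrative; any convex, counterclockwise, non-self-intersecting path centered on the item works):

import UIKit

class TriangleView: UIView {
    override var collisionBoundsType: UIDynamicItemCollisionBoundsType {
        return .path
    }

    override var collisionBoundingPath: UIBezierPath {
        // Coordinates are relative to the item's center; the path must be
        // convex, wound counterclockwise, and must not self-intersect.
        let w = bounds.width, h = bounds.height
        let path = UIBezierPath()
        path.move(to: CGPoint(x: 0, y: -h / 2))          // top vertex
        path.addLine(to: CGPoint(x: w / 2, y: h / 2))    // bottom right
        path.addLine(to: CGPoint(x: -w / 2, y: h / 2))   // bottom left
        path.close()
        return path
    }
}

Instances of such a view can then be added to a UICollisionBehavior as usual.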
If you really want to collide polygons, you might consider SpriteKit and its physics engine (it seems to have a lot in common with UIKit Dynamics). It can mix with UIKit, although maybe not as smoothly as you'd like.

XNA project - who is in charge of drawing?

I am just playing around with XNA, and I have several different models I need to draw in each frame.
At the moment, the Game object holds references to all my models and draws them one after the other, each with its own way of drawing: one has two separate textures, another might be mirrored to the other side, etc.
I was wondering if it is acceptable to just add a
public void Draw(SpriteBatch spriteBatch)
method to all my models (on the BaseModel, of course) and have each class be in charge of drawing itself, or whether I should stick to letting the classes set their data according to input (KeyboardState) in the Update method, and keep all graphics logic in the Game class.
Is there a preferred way to do this?
Generally, I have a base class that contains a BaseModel, texture data, rotation and scale data, etc. For each type of actor in the game, I create a derived class. The base class provides a Draw method that, by default, draws the model with the texture, rotation, and scale data given in the class. Derived classes can override it to draw the actor however they like.
Then, I have a DrawableGameComponent that acts as my scene graph. It contains a list of all active actor objects. In the component's Draw and Update methods, I iterate through the list of actors and call their Draw and Update methods.
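Sketched out, the shape of this approach looks something like the following (an illustrative Swift sketch with hypothetical names; in XNA, Scene would be the DrawableGameComponent and Actor would hold an actual Model):

struct ModelData { let name: String }   // placeholder for mesh/texture data

class Actor {
    var model: ModelData
    var rotation: Float = 0
    var scale: Float = 1

    init(model: ModelData) { self.model = model }

    func update(deltaTime: Float) {}

    // Default drawing uses the stored model/rotation/scale;
    // derived actors override to draw however they like.
    func draw() { print("drawing \(model.name)") }
}

// The scene graph: owns all active actors and forwards Update/Draw each frame.
final class Scene {
    private var actors: [Actor] = []
    func add(_ actor: Actor) { actors.append(actor) }
    func update(deltaTime: Float) { for a in actors { a.update(deltaTime: deltaTime) } }
    func draw() { for a in actors { a.draw() } }
}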
That's one way of approaching it ... for the sake of completeness in this post, I'll highlight the other approach. Basically, the opposing view states that no one entity should need (or have) custom knowledge of how to render itself. An entity is merely a collection of state ... and the renderer can simply look at that state, and draw it in the correct way.
An example: say you have a number of ships. Some go fast, some shoot rockets, and some have a satellite orbiting them that also shoots. Your "Entity" class can have the following properties:
Model VisualRepresentation
Matrix Position
Entity[] AttachedEntities
Your renderer can then iterate over your generic List<Entity>, and
draw the visual representation (i.e. the Model) of each entity using its position;
loop over the AttachedEntities and draw them recursively.
It's obviously a simplified example ... but this way the drawing logic is completely contained in the rendering code and only needs to concern itself with as little information as possible, while the ship class can focus on the game logic itself (i.e. how fast do I fly, what weapon am I using, how much energy do I have in my shields, etc.).
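Concretely, the state-plus-renderer split looks something like this (again an illustrative Swift sketch with hypothetical names; in XNA, VisualRepresentation would be a Model and Position a Matrix):

struct EntityModel { let name: String }   // placeholder for a Model
struct Transform { /* placeholder for a world Matrix */ }

// Pure state: the entity knows nothing about rendering.
final class Entity {
    var visualRepresentation: EntityModel
    var position: Transform
    var attachedEntities: [Entity] = []

    init(visualRepresentation: EntityModel, position: Transform) {
        self.visualRepresentation = visualRepresentation
        self.position = position
    }
}

// All drawing knowledge lives here.
final class Renderer {
    func draw(_ entities: [Entity]) {
        for entity in entities {
            drawModel(entity.visualRepresentation, at: entity.position)
            draw(entity.attachedEntities)   // recurse into attachments
        }
    }

    private func drawModel(_ model: EntityModel, at position: Transform) {
        // Issue the actual draw call here.
        print("drawing \(model.name)")
    }
}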
As far as which one is preferred, the answer really lies in your project's requirements and what you feel comfortable with. Don't try to make a game engine before making a game ... just do whatever it takes to make your game, and then maybe you can extract the components that worked after you ship it :-P
