So I'm looking into UIKit Dynamics, and the one problem I have run into is that if I want to create a UIView with a custom drawRect: (for instance, let's say I want to draw a triangle), there seems to be no way to specify the path of the UIView (or rather the UIDynamicItem) to be used for a UICollisionBehavior.
My goal really is to have polygons on the screen that collide with one another exactly how one would expect.
I came up with a solution of stitching multiple views together but this seems like overkill for what I want.
Is there some easy way to do this, or do I really have to stitch views together?
Dan
Watch the WWDC 2013 videos on this topic. They are very clear: for the sake of efficiency and speed, only the (rectangular) bounds of the view matter during collisions.
EDIT In iOS 9, a dynamic item can have a customized collision boundary. You can have a rectangle dictated by the frame, an ellipse dictated by the frame, or a custom shape — a convex counterclockwise simple closed UIBezierPath. The relevant properties, collisionBoundsType and (for a custom shape) collisionBoundingPath, are read-only, so you will have to subclass in order to set them.
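For instance, a minimal sketch of a triangle view that supplies its own collision boundary (the class name and proportions are mine; per the documentation the path must be a convex polygon, wound counterclockwise, and centered on the item's center):

#import <UIKit/UIKit.h>

@interface TriangleView : UIView
@end

@implementation TriangleView

- (UIDynamicItemCollisionBoundsType)collisionBoundsType {
    return UIDynamicItemCollisionBoundsTypePath;
}

- (UIBezierPath *)collisionBoundingPath {
    // The path is in the item's own coordinate system, centered on (0,0),
    // and should match whatever your drawRect: actually draws.
    CGFloat w = self.bounds.size.width;
    CGFloat h = self.bounds.size.height;
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:CGPointMake(0, -h / 2)];          // top
    [path addLineToPoint:CGPointMake(-w / 2, h / 2)];   // bottom left
    [path addLineToPoint:CGPointMake(w / 2, h / 2)];    // bottom right
    [path closePath];
    return path;
}

@end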
If you really want to collide polygons, you might consider SpriteKit and its physics engine (it seems to share a lot in common with UIDynamics). It can mix with UIKit, although maybe not as smoothly as you'd like.
Related
I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps on one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I just need to solve the problem; the 3D is NOT required (it would be really cool though). A representation that meets the functional needs will suffice.
The info about the parts needed to make the drawing (size, position, etc.) comes from an API.
I don't really need it to be realistic. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made up of small interactive rectangles.
So, as I mentioned, there are (as I see it) two opposite approaches: Realistic or Simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, or frameworks should I look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. This effect can be achieved with standard UIKit, UIView, and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but it involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection, meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt this to a perspective projection if needed). You could draw this with Core Graphics in a UIView's drawRect:.
A square slice represents an angular piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for the imprecise drawing).
The square slice you are interested in is the area outlined in solid red: outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice to be).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
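A sketch of that path with UIBezierPath (the centers, radius, and angles below are made-up values standing in for your real geometry):

CGPoint frontCenter = CGPointMake(100, 200);  // center of the near circle
CGPoint backCenter  = CGPointMake(160, 180);  // near center plus the cylinder's offset
CGFloat radius = 80;                          // shared radius of both circles
CGFloat a0 = 0.2, a1 = 0.6;                   // the slice's edge angles, in radians

UIBezierPath *slice = [UIBezierPath bezierPath];
// Arc along the near circle, a straight edge over to the far circle,
// back along the far circle's arc, then close up the outline.
[slice addArcWithCenter:frontCenter radius:radius startAngle:a0 endAngle:a1 clockwise:YES];
[slice addLineToPoint:CGPointMake(backCenter.x + radius * cos(a1),
                                  backCenter.y + radius * sin(a1))];
[slice addArcWithCenter:backCenter radius:radius startAngle:a1 endAngle:a0 clockwise:NO];
[slice closePath];
[[UIColor redColor] setStroke];
[slice stroke];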
To check if a touch is inside one of these square slices (a sketch follows this list):
Check that the angle of the touch point, measured from the first circle's center, lies between the slice's two edge angles.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
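Putting the three checks together, something like this (a sketch; the function and parameter names are illustrative, and it assumes the slice doesn't straddle the zero angle):

static BOOL PointInSlice(CGPoint p, CGPoint frontCenter, CGPoint backCenter,
                         CGFloat radius, CGFloat startAngle, CGFloat endAngle) {
    // 1. The touch's angle from the first circle's center must lie between the edges.
    CGFloat angle = atan2(p.y - frontCenter.y, p.x - frontCenter.x);
    if (angle < 0) angle += 2 * M_PI;                 // normalize to [0, 2π)
    if (angle < startAngle || angle > endAngle) return NO;
    // 2. Outside the radius of the inside circle...
    if (hypot(p.x - frontCenter.x, p.y - frontCenter.y) < radius) return NO;
    // 3. ...and inside the radius of the offset (outside) circle.
    return hypot(p.x - backCenter.x, p.y - backCenter.y) <= radius;
}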
To find a point at which to display the popover, you could average the endpoints of the slice, or find the middle angle between the two edges and offset it by half the slice's length.
Theoretically, doing this in Scene Kit with either SpriteKit or UIKit Popovers is ideal.
However, Scene Kit (and Sprite Kit) seem to be in a state of flux wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from a relatively stable and performant Sprite Kit in iOS 8.4 to a lot of lost performance in iOS 9 seems to be a common experience. Scene Kit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate that it's the currently selected one.
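A sketch of what that detection might look like, assuming a sceneView property holding an SCNView, and a cylinder whose geometry was built with one element per touchable section (geometryIndex reports which SCNGeometryElement was hit; showPopoverForSection: is a hypothetical helper):

- (void)handleTap:(UITapGestureRecognizer *)tap {
    CGPoint point = [tap locationInView:self.sceneView];
    NSArray *hits = [self.sceneView hitTest:point options:nil];
    SCNHitTestResult *hit = hits.firstObject;
    if (hit) {
        // Which element ("material ID") of the geometry was touched.
        NSInteger elementIndex = hit.geometryIndex;
        [self showPopoverForSection:elementIndex];
    }
}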
When you want a smooth, well-rounded cylinder as per your example, start with a cylinder made of only enough segments to describe/define the material IDs you need for your "rectangular" sections to be touched.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should be responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView. Each cell can have a defined size thanks to collectionView:layout:sizeForItemAtIndexPath:. Then each cell of the collection could be a small rectangle representing a touchable part of the cylinder, and you can use collectionView:didSelectItemAtIndexPath: to get the touch. This will help you display the popover in the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
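Roughly, the two delegate methods could look like this (a sketch; the parts array and its width/height keys are stand-ins for whatever your API returns, and showPopoverForPart:fromRect: is a hypothetical helper):

- (CGSize)collectionView:(UICollectionView *)collectionView
                  layout:(UICollectionViewLayout *)collectionViewLayout
  sizeForItemAtIndexPath:(NSIndexPath *)indexPath {
    NSDictionary *part = self.parts[indexPath.item];  // built from the API data
    return CGSizeMake([part[@"width"] floatValue], [part[@"height"] floatValue]);
}

- (void)collectionView:(UICollectionView *)collectionView
didSelectItemAtIndexPath:(NSIndexPath *)indexPath {
    CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
    [self showPopoverForPart:self.parts[indexPath.item] fromRect:rect];
}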
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you know the 2D coordinate on screen, so your only decision is whether or not to pop over a view; that would be true even if no 3D model existed.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
I think the 3D model, if I understand you correctly, is there to determine which piece of data to show. In that case, SCNView's hit-test method will do most of the work for you. Perform a hit test, take the hit node and the hit's local 3D coordinate on that node, and you can then calculate which segment was hit by the touch and make your decision.
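A sketch of that calculation, assuming the node's geometry is an SCNCylinder whose axis runs along its local y axis, split into kRows rows and kCols angular columns (the constants, the scnView property, and showPopoverForRow:column: are all illustrative):

static const NSInteger kRows = 4, kCols = 12;  // assumed segment grid

- (void)handleTap:(UITapGestureRecognizer *)tap {
    CGPoint point = [tap locationInView:self.scnView];
    SCNHitTestResult *hit = [[self.scnView hitTest:point options:nil] firstObject];
    if (!hit) return;
    SCNCylinder *cylinder = (SCNCylinder *)hit.node.geometry;
    SCNVector3 local = hit.localCoordinates;
    // Height along the axis (0..1) picks the row; angle around it picks the column.
    CGFloat t = (local.y + cylinder.height / 2) / cylinder.height;
    CGFloat angle = atan2(local.z, local.x) + M_PI;   // 0..2π around the axis
    NSInteger row = MIN((NSInteger)(t * kRows), kRows - 1);
    NSInteger col = MIN((NSInteger)(angle / (2 * M_PI) * kCols), kCols - 1);
    [self showPopoverForRow:row column:col];
}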
Now, how to draw the surface of the cylinder would be the only question left, right? There are various ways to do it: for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the cylinder's material...
I think the problem would be basically solved.
With UIDynamics it is simple to have the physics objects, called dynamic items, control custom views (or indeed anything you want) via the UIDynamicItem protocol and its three properties: center, transform, and bounds. I would like to do something similar with SpriteKit, but according to the manual:
Unlike views, you cannot create SKNode subclasses that perform custom drawing
Specifically, I would like to have the physics bodies control some vector graphics I currently have in a drawRect:. There are two things I am after here:
The first is to let the vector graphics move around like any other node.
The second is to let position, angle and other properties change the exact position of some of the control points.
I am fairly certain that I could achieve this with UIDynamics and dynamic items, but then I wouldn't be able to use actions and other nice SpriteKit features.
It would also seem that point 1 could be handled by converting the paths to CGPaths and using shape nodes, but that would be limiting, and it would not cover point 2.
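To illustrate, point 1 would presumably look something like this (a sketch; the hard-coded triangle is a stand-in for my real paths):

UIBezierPath *triangle = [UIBezierPath bezierPath];
[triangle moveToPoint:CGPointMake(0, 40)];
[triangle addLineToPoint:CGPointMake(-40, -40)];
[triangle addLineToPoint:CGPointMake(40, -40)];
[triangle closePath];

SKShapeNode *shape = [SKShapeNode shapeNodeWithPath:triangle.CGPath];
shape.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:triangle.CGPath];
[scene addChild:shape];  // 'scene' is the SKScene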
Any suggestions?
I have always used CoreGraphics and CoreAnimation, and I understand how each of them works on its own, but not those edge cases when one has to talk to the other. I also understand that UIView is a nice wrapper for CALayer, where CALayer does all the heavy lifting of rendering and the UIView adds the touch-based responsiveness.
But all the questions I have seen thus far attack the problem from one side or the other, not the interplay between them, especially between CoreGraphics and CALayer.
Anyway, my question is ...
How does CoreGraphics relate to CALayer?
My understanding is that a CALayer wraps the CoreGraphics methods to draw itself, but does so only once, and can live with that snapshot of itself until invalidated. But how do these drawing methods interact with the sublayers of that layer? Are they exclusive?
For example, what happens when I have a UIView that has sub-views, and I overload the drawRect method? How does that affect the drawing of its sublayers?
Is it even a good idea to intermix the two inside the same function?
Also, I'm asking only about iOS; I understand that the Mac is a different beast (and also has those fancy CIFilters, the bastards!).
Prior Research
Here are some related questions I've researched beforehand:
confusion regarding quartz2d, core graphics, core animation, core images. This question asks for the differences between them, and the chosen answer indeed delivers, but it answers for each individual library as if the others didn't exist.
To Drawrect or not to Drawrect. Another great question, but it addresses only the subject of drawing with CoreGraphics vs. handing the problem to UIKit; still, the chosen answer delivers parts of the puzzle.
Animating Pie Slices with Custom CALayer. This must be one of the most valuable tutorials I've seen on this subject; it's the only one that has guided me through drawing a CALayer.
What is different between CoreGraphics and CoreAnimation. I was absolutely disappointed at how quickly the asker accepted the answer; I feel that there's a whole lot more going on here.
Various WWDC videos, but I haven't seen one that explains in detail the general scope. If anyone replies with a WWDC video that does, I'll consider that a valid answer.
I'll try to answer your question at a conceptual, 20,000-foot level. I will try to disclaim my points where I'm over-generalizing, but I'll attempt to hit the common case.
Perhaps the easiest way to think about it is this: in the GPU's memory you have textures which, for the purposes of this discussion, are bitmap images. A CALayer might have a texture associated with it, or it might not. These cases correspond, respectively, to a layer with a -drawRect: method and a layer that exists solely to contain sublayers. Conceptually, each layer that has a texture associated with it has a different texture all its own (there are some details and optimizations that make this not strictly/universally true, but in the general, abstract case, it can help to think of it this way).
With that in mind, a superlayer's -drawRect: method has no effect on any of its sublayers' -drawRect: methods, and (again, in the general case) a sublayer's -drawRect: method has no effect on its superlayer's -drawRect: method. Each draws into its own texture (also called a "backing store") and then, based on the layer tree and the associated geometries and transforms, the GPU composites all these textures together into what you see on the screen.
When one of the layers is invalidated, directly or indirectly (via -setNeedsDisplayInRect:), then when CA goes to display the next frame on screen, the invalid layers will be redrawn by virtue of having their -drawRect: methods called. That will update the associated texture, and once all the invalid layers' textures are updated, the GPU will composite them, generating the final bitmap that you see on-screen.
So to answer your first question: In the general case, no, there is no interplay between the -drawRect: methods of distinct CALayers.
As to your second question: For the purposes of this discussion you can think of UIViews as being the same as CALayers. The interrelationship with respect to drawing and textures is largely unchanged from that of non-UIView CALayers.
To your third question: Is it a good idea to intermix UIViews and CALayers? Every UIView has a CALayer backing it (all views in UIKit are layer-backed, which is not normally the case on OSX.) So at some level, they're "intermixed" whether you want them to be or not. It is perfectly fine to add CALayer sublayers to the layer that backs a UIView, although that layer will not have all the added functionality that UIView brings to the party. If that layer's purpose is just to generate an image for display, then that's fine. If you want the sub-layer/view to be a first class participant in touch handling, or to be positioned/sized using AutoLayout, then it will need to be a UIView. It's probably best to think of a UIView as a CALayer with a bunch of extra functionality added to it. If you need that functionality, use a UIView. If you don't, use a CALayer.
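For instance, a display-only layer added to a view's backing layer might look like this (a sketch; someView is any existing UIView):

CALayer *badge = [CALayer layer];
badge.frame = CGRectMake(10, 10, 44, 44);
badge.cornerRadius = 22;
badge.backgroundColor = [UIColor redColor].CGColor;
// Purely visual: the layer is drawn and composited, but touches pass
// through to someView, and it takes no part in Auto Layout.
[someView.layer addSublayer:badge];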
In sum: CoreGraphics is an API for drawing 2D graphics. One common use of CG (but not the only one) is to draw bitmap images. CoreAnimation is an API providing an organized framework for manipulating bitmaps on-screen. You could meaningfully use CoreAnimation without ever calling a CoreGraphics drawing primitive, for example, if all your textures were backed by images that were compiled into your application at build time.
Hopefully this is helpful. Leave comments if you need clarification, and I'll edit to oblige.
I am looking into making an iOS app that has little creatures. I plan on having these creatures grow and change shapes based on user interaction. So the creatures could end up looking very different based off what the user does.
My problem is animating these creatures. I have dealt with simple animations in the past with cocos2d, but nothing like this.
How can I animate these creatures at different sizes and shapes without having my graphic designer draw every possible image that could be used? In the game Spore, a user can create an animal of whatever shape or size they want, and these animals animate. My question is how I can do something similar in 2D. I know this can't be a simple answer, but a point in the right direction is all I am looking for.
You could use some CGAffineTransforms to scale your drawing, and perhaps custom Core Image filters to change the color.
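A minimal sketch of the transform idea (creatureView is a stand-in for whatever view draws one creature):

// Grow the creature by animating an affine transform; the drawing code
// is untouched, only its on-screen mapping changes.
[UIView animateWithDuration:0.5 animations:^{
    creatureView.transform = CGAffineTransformMakeScale(1.5, 1.2);
}];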
I'm struggling to conceptualize animations with a CALayer as opposed to UIView's own animation methods. Throw "Core Animation" into the mix and, well, maybe someone can articulate these concepts from a high level so I can better visualize what's happening and why I'd want to migrate UIView animations (which I'm quite familiar with now) to CALayer animations on the iPhone. Every view in Cocoa Touch automatically gets a layer. And, it seems, you can animate one and/or the other?!? Even mix them together?!? But why? Where's the line? What's the pro/con of each?
The Core Animation Programming Guide immediately jumps into layer and timing classes, and I think I need to take a step back and understand why these varied pieces exist and how they relate to each other.
Use views for control and layers for eye candy. Layers don't receive events so it's easier to use a view for those cases, but when you want to animate a sprite or backgrounds, etc., layers make sense. Events pass right through layers to the backing view so you can have a pretty visual representation without messing up your events. Try to overlay a view that you're just using for visual representation and you'll have to pass tap events through to the underlying view yourself.
A UIView is always rendered to a CALayer. When you use UIView methods to animate a view, you are effectively manipulating the underlying CALayer.
If you need to do simple things, use the UIView methods. For more complex situations, or if you want layers not associated with any view in particular, use CALayers.
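For example, the same fade expressed both ways (a sketch; someView is any UIView):

// Simple: the UIView convenience API.
[UIView animateWithDuration:0.3 animations:^{
    someView.alpha = 0.0;
}];

// More control: an explicit animation on the underlying layer.
CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @1.0;
fade.toValue = @0.0;
fade.duration = 0.3;
[someView.layer addAnimation:fade forKey:@"fade"];
someView.layer.opacity = 0.0;  // keep the model value in sync with the animation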
I've done a bunch of apps in the past year. Here's my rule of thumb:
Use UIView until it doesn't do what you want.
Then move to CoreAnimation. But before you get into it too much...
If you write more than a few animations, use Cocos2D.
UIView transforms are only 2D and are restricted to that; layer transforms, however, can be 3D, and you should use those if you want to do 3D effects. UIView animation will work whether you change the UIView transform or the CALayer transform. So, at a basic level, you can do a lot more manipulation when you are working with a layer rather than the view.
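For example (a sketch; someView is any UIView):

// 2D only: the view-level transform.
someView.transform = CGAffineTransformMakeRotation(M_PI_4);

// 3D: the layer-level transform can rotate around any axis.
someView.layer.transform = CATransform3DMakeRotation(M_PI_4, 1, 0, 0);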
I am not sure if I am misunderstanding Chris' response to "What's Cocos2D doing better? Don't you have other problems then, regarding the touch event handling and many other stuff that misses in openGL ES?"
It sounds like the answer suggests Cocos2D is not based on the OpenGL ES framework when in fact it is. While it is a great 2D game engine, it does implement OpenGL for much of its rendering; paired with a physics library, it allows for a lot of very interesting possibilities for animation. And Chris is correct: it is a lot less coding indeed.