I'm trying to create the physics body for a custom image, but it's quite hard to do manually using CGPath. On iOS 8 we can call SKPhysicsBody's bodyWithTexture:size: method, but on iOS 7 I have to build the path myself.
Is there an easy way to make a physics body for a complex texture?
Dazchong's site is down, and I tried a few other tools that didn't help.
Have a look at Is there a way to create a CGPath matching outline of a SKSpriteNode? - it seems from that answer that the problem is much more complex than it first appears.
If your sprite is an SVG, a simpler solution may be to manually convert the SVG into a CGPath, and then use that CGPath in your game. A simple tool for doing that is available here: https://swiftvg.mike-engel.com/. Of course doing this manually won't scale well if you have many different sprites.
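Once you have a CGPath (whether converted from an SVG or built by hand), wiring it up does work on iOS 7 via bodyWithPolygonFromPath:. A minimal sketch, assuming sprite is your SKSpriteNode and using a hypothetical hand-built triangular outline:

    // The path must be a convex polygon with counterclockwise winding,
    // with points relative to the node's origin.
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 0, 50);
    CGPathAddLineToPoint(path, NULL, -50, -50);
    CGPathAddLineToPoint(path, NULL, 50, -50);
    CGPathCloseSubpath(path);

    sprite.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
    CGPathRelease(path);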
This may not be exactly the answer you are looking for, but it seems right now there is no simple way to accomplish what you need :)
I am not an iOS developer; I just started reimplementing a game I originally built in Flash, hoping it will be super fast in the native environment.
The project is a 2D game with a lot of dynamic Bezier drawing driven by the user's interaction. Basically I draw dynamic blobs (amoeba-type shapes).
First I tried Swift, which is very similar to ActionScript, but it turned out Apple won't accept apps built with Swift before the final release of Xcode 6, and as I want to release my game before September, I went the Objective-C route.
I wanted to use SpriteKit because of the integrated physics engine, sprite hierarchy, etc.
I tried to use SKShapeNode for drawing, but I realised very quickly that it is not suitable for my needs (it cannot draw a stroke thicker than 2 pixels, it has memory leaks, etc.).
So I rendered a UIBezierPath into a UIImage, but I am not happy with the performance, as I have to create a new UIImage every time the dynamically generated Bezier shape changes.
These are the options I found so far:
SKShapeNode - not suitable for my game
UIBezierPath → UIImage - I have to create a new UIImage every frame, so it is slow
OpenGL - I haven't tried it yet; I am not sure it can be mixed with SpriteKit
CALayer - I am trying to integrate it with SpriteKit at the moment, but I suspect I will hit the same problem I had with the UIImage approach
Does anybody have an idea or a tip as to which approach would be the best performance-wise?
Thank you for your help in advance!
I tried to create a lightning effect using SpriteKit; check out my article about that: https://andreygordeev.com/2014/11/01/lightning-with-srite-kit.html
Some of the approaches there use UIBezierPath, so maybe you'll find it useful.
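For reference, here is a rough sketch of the UIBezierPath → UIImage approach from the question, turned into a texture (spriteNode is a hypothetical, reused SKSpriteNode). The expensive part is regenerating the image, so redraw only when the shape actually changes rather than every frame:

    // Render the dynamic path into an image.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0.0);
    UIBezierPath *blob = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(10, 10, 180, 180)];
    blob.lineWidth = 6.0;
    [[UIColor greenColor] setStroke];
    [blob stroke];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Swap the texture on an existing node instead of recreating the node.
    spriteNode.texture = [SKTexture textureWithImage:image];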
So I'm looking into UIKit Dynamics, and the one problem I have run into is that if I want to create a UIView with a custom drawRect: (for instance, let's say I want to draw a triangle), there seems to be no way to specify the path of the UIView (or rather the UIDynamicItem) to be used for a UICollisionBehavior.
My goal really is to have polygons on the screen that collide with one another exactly how one would expect.
I came up with a solution of stitching multiple views together but this seems like overkill for what I want.
Is there some easy way to do this, or do I really have to stitch views together?
Dan
Watch the WWDC 2013 videos on this topic. They are very clear: for the sake of efficiency and speed, only the (rectangular) bounds of the view matter during collisions.
EDIT In iOS 9, a dynamic item can have a customized collision boundary. You can have a rectangle dictated by the frame, an ellipse dictated by the frame, or a custom shape — a convex counterclockwise simple closed UIBezierPath. The relevant properties, collisionBoundsType and (for a custom shape) collisionBoundingPath, are read-only, so you will have to subclass in order to set them.
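A minimal sketch of that subclassing approach (TriangleView is a hypothetical name):

    @interface TriangleView : UIView
    @end

    @implementation TriangleView

    - (UIDynamicItemCollisionBoundsType)collisionBoundsType {
        return UIDynamicItemCollisionBoundsTypePath;
    }

    - (UIBezierPath *)collisionBoundingPath {
        // The path must be convex, counterclockwise, simple, and closed,
        // with (0, 0) at the center of the item.
        UIBezierPath *path = [UIBezierPath bezierPath];
        [path moveToPoint:CGPointMake(0, -50)];
        [path addLineToPoint:CGPointMake(50, 50)];
        [path addLineToPoint:CGPointMake(-50, 50)];
        [path closePath];
        return path;
    }

    @end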
If you really want to collide polygons, you might consider SpriteKit and its physics engine (it seems to have a lot in common with UIKit Dynamics). It can mix with UIKit, although maybe not as smoothly as you'd like.
I want to make a sort of chalkboard for part of my app, and I was wondering how to accomplish this.
I was thinking I could create a sprite and have its image set to something very small (maybe a small point), and then add a new instance of that sprite everywhere the user touches to simulate a draw event. Something like [self addChild:someSprite]; for each touch location.
But it seems like that would be extremely memory-inefficient. There has to be a better way than that. Maybe drawing actual lines? I'm probably overlooking some method.
Thanks for any help.
You need to use CCRenderTexture for chalkboard-style painting. Check this article & project for a drawing example.
Your approach isn't as memory-inefficient as you think. No matter how many sprites you create with the same texture, the texture is placed in memory only once, and all the sprites just keep a pointer to it. One thing that prevents many unnecessary draw calls is CCSpriteBatchNode: it draws all of its children in a single draw call, whereas without it, draw is called on every child.
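A minimal sketch of the CCRenderTexture idea (cocos2d 2.x API; brush.png and touchLocation are hypothetical):

    // Create the canvas once, e.g. in the layer's init.
    CCRenderTexture *canvas = [CCRenderTexture renderTextureWithWidth:1024 height:768];
    canvas.position = ccp(512, 384);
    [self addChild:canvas];

    // On each touch, stamp a brush sprite into the texture
    // instead of adding a new node to the scene.
    CCSprite *brush = [CCSprite spriteWithFile:@"brush.png"];
    brush.position = touchLocation; // touch converted to node space
    [canvas begin];
    [brush visit];  // renders the brush into the texture, not the scene graph
    [canvas end];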
I'm struggling with conceptualizing animations with a CALayer as opposed to UIView's own animation methods. Throw "Core Animation" into this and, well, maybe someone can articulate these concepts from a high level so I can better visualize what's happening and why I'd want to migrate UIView animations (which I'm quite familiar with now) to CALayer animations on the iPhone. Every view in Cocoa Touch automatically gets a layer. And, it seems, you can animate one and/or the other?!? Even mix them together?!? But why? Where's the line? What's the pro/con to each?
The Core Animation Programming Guide immediately jumps into the Layer and Timing classes, and I think I need to take a step back and understand why these various pieces exist and how they relate to each other.
Use views for control and layers for eye candy. Layers don't receive events so it's easier to use a view for those cases, but when you want to animate a sprite or backgrounds, etc., layers make sense. Events pass right through layers to the backing view so you can have a pretty visual representation without messing up your events. Try to overlay a view that you're just using for visual representation and you'll have to pass tap events through to the underlying view yourself.
A UIView is always rendered into a CALayer. When you use UIView methods to animate a view, you are effectively manipulating the underlying CALayer.
If you need to do simple things, use the UIView methods. For more complex situations, or if you want layers not associated with any view in particular, use CALayers.
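For example, here is the same fade done both ways (myView is a hypothetical view; the second version needs QuartzCore):

    // Simple case: the UIView block API.
    [UIView animateWithDuration:0.3 animations:^{
        myView.alpha = 0.0;
    }];

    // More control: drive the backing layer with Core Animation directly.
    CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fade.fromValue = @1.0;
    fade.toValue = @0.0;
    fade.duration = 0.3;
    [myView.layer addAnimation:fade forKey:@"fade"];
    myView.layer.opacity = 0.0; // update the model value so it doesn't snap back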
I've done a bunch of apps in the past year. Here's my rule of thumb:
Use UIView until it doesn't do what you want.
Then move to CoreAnimation. But before you get into it too much...
If you write more than a few animations, use Cocos2D.
UIView transforms are 2D only; layer transforms, however, can be 3D, and you should use those if you want to do 3D effects. UIView animation will work whether you change the UIView transform or the CALayer transform. So at a basic level, you can do a lot more manipulation when you are working with a layer rather than a view.
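For instance, a perspective rotation around the y-axis is only possible through the layer (myView is again hypothetical):

    CATransform3D t = CATransform3DIdentity;
    t.m34 = -1.0 / 500.0;                        // add perspective; a common choice
    t = CATransform3DRotate(t, M_PI_4, 0, 1, 0); // rotate 45 degrees around the y-axis
    myView.layer.transform = t;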
I am not sure if I am misunderstanding Chris' response to "What's Cocos2D doing better? Don't you have other problems then, regarding the touch event handling and many other stuff that misses in openGL ES?"
It sounds like the answer suggests Cocos2D is not based on OpenGL ES, when in fact it is. While it is a great 2D game engine, it uses OpenGL ES for much of its rendering; paired with a physics library, it allows for a lot of very interesting animation possibilities. And Chris is correct: it is a lot less coding indeed.