Pixel collision detection? - iOS

In my app, I have a bunch of CCSprites and I want collision detection that triggers only when the non-transparent pixels of the CCSprites overlap. I don't want the detection restricted to particular colors in the colliding sprites. I think that's what the 'Pixel Perfect Collision Detection' thread on the Cocos2D forum does, but I want the actual collision to work with any color. This collision detection would run in my game loop, so it can't be too expensive. Anyway, does anyone have any ideas on how I can do this?
I am willing to use Cocos2D, Box2D or Chipmunk or even UIKit if it can do it.
Thanks!

When talking about hardware rendered graphics, "I want pixel perfect collisions" and "I don't want them to be too expensive" are pretty mutually exclusive.
Either write a simpler renderer that doesn't allow such complex transformations, anti-aliasing, or sub-pixel placement, or use the actual GPU to render some sort of collision mask. The problem with doing that on the GPU is that it's fast to send data to the GPU but expensive to read it back. There's a reason this technique is quite uncommon.
Chipmunk Pro's auto-geometry feature supports turning various kinds of images into collision shapes, but it isn't complete yet.

It's impossible to do without losing performance. Try a collision system based on circles; that's the best way to handle collisions here.
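A circle-based test is cheap enough to run every frame of a game loop. A minimal sketch of the idea (the type and function names are made up, not from any of the libraries mentioned):

import CoreGraphics

// Approximate each sprite by a center point and a radius, and test
// overlap by comparing squared distance against the sum of the radii.
struct CircleBody {
    var center: CGPoint
    var radius: CGFloat
}

func collides(_ a: CircleBody, _ b: CircleBody) -> Bool {
    let dx = a.center.x - b.center.x
    let dy = a.center.y - b.center.y
    let rSum = a.radius + b.radius
    return dx * dx + dy * dy <= rSum * rSum  // no sqrt needed per pair
}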

Related

Monogame - how to have draw layers while on SpriteSortMode.Texture

I have a problem: in my game I have to use SpriteSortMode.Texture because I have a lot of objects with few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is, this means I cannot draw by layers unless I call SpriteBatch.Begin with the exact same settings for each layer, which is what I'm currently doing.
I only have 3 draw layers I need - a Tileset surface, Objects like rocks or characters on the surface, and UI.
Other solutions I've found are using texture quads (which supposedly also improve tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.

what does bake do in SceneKit

What is the purpose of the "bake" option in the SceneKit editor? Does it have an impact on performance?
Type offers 2 options: Ambient Occlusion and Light Map
Destination offers: Texture and Vertex
For me, it crashes Xcode. It's supposed to render lighting (specifically shadows) into the textures on objects so you don't need static lights.
This should, theoretically, mean that all you need in your scene are the lights used to create dynamic lighting on objects that move, and you can save all the calculations required to fill the scene with static lights on static geometry.
In terms of performance, yes, baking in the lighting can create a HUGE jump in performance, because it saves you all the complex calculations that create ambient light, occlusion, and direct and soft shadows.
If you're using ambient occlusion and soft shadows in real-time you'll be seeing VERY low frame rates.
And the quality possible with baking is far beyond what you can achieve with a super computer in real time, particularly in terms of global illumination.
What's odd is that Scene Kit has a bake button. It has never worked for me, always crashing Xcode. But the thing is... to get the most from baking, you need to be a 3D artist, in which case you'll be much more inclined to do the baking in a 3D design app.
And 3D design apps have lighting solutions that are orders of magnitude better than the best Scene Kit lighting possible. I can't imagine that there's really a need for baking in Scene Kit. It's a strange thing for the development team to have spent time on as it simply could never come close to the quality afforded by even the cheapest 3D design app.
What I remember from college days:
Baking is a process used in 3D rendering and texturing. There are two kinds of baking: texture baking and physics baking.
Texture baking:
You calculate some data once and save it to a texture, then use that texture on your material. This reduces rendering time: without baking, everything is recalculated every single frame, which wastes a lot of time if you have animations.
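For example, applying a pre-baked light map might look like this (a minimal sketch; the asset names are made up):

import SceneKit
import UIKit

// Multiply the diffuse color by the baked light/AO map at render
// time, instead of computing the lighting per frame.
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "wallDiffuse")
material.multiply.contents = UIImage(named: "wallBakedLightMap")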
Physics baking:
You can precalculate physics simulations, exactly like the baking above, and reuse that data, for example for a rigid body.
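The idea, in a hedged sketch (the type is made up; nothing here is a SceneKit API): run the simulation once, record the result per frame, and play the recording back instead of stepping the physics engine.

import CoreGraphics

struct BakedBody {
    var frames: [CGPoint] = []  // one recorded position per simulated frame
    mutating func record(_ position: CGPoint) { frames.append(position) }
    func position(atFrame frame: Int) -> CGPoint? {
        return frames.indices.contains(frame) ? frames[frame] : nil
    }
}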

SKPhysicsBody slowing down program

I have a random maze generator that starts by building small mazes and then progresses into massive levels. The "C"s are collectables, the "T"s are tiles, and the "P" is the player's starting position. I've included a sample tile map below.
The performance issue doesn't show up with a small 6x12 pattern like the one here; it shows up when I've got a 20x20 pattern, for example.
Each character is a tile, and each tile has its own SKPhysicsBody. The tiles are not square; they are complex polygons, and they don't quite touch each other.
The "C"s need to be able to be removed one at a time, and the "T"s are permanent for the level and don't move. Also the maze only shows 6x4 section of tiles at a time and moves the background to the view centered on the player.
I've tried making the T's and C's rectangles, which drastically improves performance (though it's still slower than desired), but the user won't care for this; the shape of the tiles is just too different.
Are there any performance tricks you pros can muster up to fix this?
TTTTTT
TCTTCT
TCCCCT
TTCTCT
TCCTCT
TTCCTT
TTTCTT
TTCCCT
TCCTCT
TCTTCT
TTCCCT
TTPTTT
The tiles are not square, they are complex polygons
I think this is your problem. Also, if your bodies are dynamic, making them static will drastically improve performance. You can also try pooling. And be aware that performance on the simulator is drastically lower than on a real device.
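For the permanent tiles, that could look something like this (a minimal sketch; "tile" is a hypothetical texture name):

import SpriteKit

// A rectangular, static body is about the cheapest SKPhysicsBody a
// tile can have; static bodies are never simulated, only collided with.
let tile = SKSpriteNode(imageNamed: "tile")
tile.physicsBody = SKPhysicsBody(rectangleOf: tile.size)
tile.physicsBody?.isDynamic = false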
What kind of collision method are you using?
SpriteKit provides several ways to define the shape of an SKPhysicsBody. A rectangle or a circle gives the best performance:
myPhysicsBody = SKPhysicsBody(rectangleOf: mySprite.size)
You can also define more complex shapes such as a triangle, which performs worse.
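A hedged sketch of such a polygon body (the coordinates are made up):

// Build a triangular CGPath and hand it to the polygon initializer.
let trianglePath = CGMutablePath()
trianglePath.move(to: CGPoint(x: -16, y: -16))
trianglePath.addLine(to: CGPoint(x: 16, y: -16))
trianglePath.addLine(to: CGPoint(x: 0, y: 16))
trianglePath.closeSubpath()
myPhysicsBody = SKPhysicsBody(polygonFrom: trianglePath)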
Using the texture (SpriteKit detects the shape itself from all non-transparent pixels) has the worst performance:
myPhysicsBody = SKPhysicsBody(texture: mySprite.texture!, size: mySprite.size)
Activating usesPreciseCollisionDetection will also have a negative impact on performance.

SKSpriteNode - How do I write to the pixels that make up the image of a sprite?

I would like to be able to write pixels to the image of an SKSprite in iOS7. How do I do this?
Applications? Graphing for example. Random images. Applying damage effects perhaps to a sprite.
You can't directly write to an SKSpriteNode's pixel data in iOS 7. (This is called out explicitly in Apple's WWDC 2013 videos about Sprite Kit, which I highly recommend.) The only thing you can do is change its texture property. The Apple docs on sprites give a variety of techniques for doing that.
If you really need to programmatically create an image, you can always do so with a pixel buffer and then make it into an SKTexture with textureWithData:size: and related methods. For explosions and damage effects, though, there are probably better ways to do this, such as particle systems or masking out or combining the underlying sprite with other sprites.
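A minimal sketch of the pixel-buffer route (the buffer contents are illustrative; the initializer shown is the data-based one mentioned above):

import SpriteKit

// Fill an RGBA byte buffer by hand, wrap it in Data, and build an
// SKTexture from it.
let width = 64
let height = 64
var pixels = [UInt8](repeating: 0, count: width * height * 4)
for i in 0..<(width * height) {
    pixels[i * 4]     = 255  // R
    pixels[i * 4 + 3] = 255  // A: an opaque red image
}
let data = Data(pixels)
let texture = SKTexture(data: data, size: CGSize(width: width, height: height))
let sprite = SKSpriteNode(texture: texture)  // or assign to an existing node's texture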
How to draw a line in Sprite-kit
I didn't know you could draw lines and such directly onto nodes or scenes. This works for my purposes.
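For reference, drawing a line directly into a scene is typically done with SKShapeNode; a minimal sketch (the coordinates are made up):

import SpriteKit

// Build a path for the line, then wrap it in a stroked shape node.
let linePath = CGMutablePath()
linePath.move(to: CGPoint(x: 0, y: 0))
linePath.addLine(to: CGPoint(x: 100, y: 50))
let line = SKShapeNode(path: linePath)
line.strokeColor = .white
line.lineWidth = 2
// add `line` as a child of your SKScene or node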

Directional Lights

I'm working on a game idea (2D) that needs directional lights. Basically I want to add light sources that can be moved, where the light rays interact with the other bodies in the scene.
What I'm doing right now is a test where, using sensors (Box2D) and ccDrawLine, I can achieve something similar to what I want. Basically I cast a bunch of sensor rays from a certain point, detect collisions with raycasts, get the end points, and draw lines over the sensors.
I just want some opinions: is this a good way of doing it, or are there better options for building something like this?
I would also like to know how to render a light effect over this area (the sensor area) so it looks better. Any ideas?
I can think of one cool-looking effect you could apply: put some particles inside the area where light is visible, like sparks shining and falling down very slowly, something like in this picture.
Any approach to this problem needs collision detection anyway, so yours is pretty good provided you have a limited number of Box2D objects.
Another approach, for when you have a lot of Box2D objects, is to render your screen to a texture with just solid colors (which should be fast) and perform ray tracing on that generated texture to find the pixels that will be affected by light. That way you are limited by resolution, not by the number of Box2D objects.
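Either way, the ray-fan part might look like the following sketch (everything here is hypothetical; the raycast closure would wrap whatever nearest-hit query the physics engine provides, e.g. Box2D's world raycast):

import CoreGraphics
import Foundation

// Cast rayCount rays from the light source, keep the nearest hit per
// ray (or the ray's full length), and return the outline of the lit area.
func lightOutline(from origin: CGPoint,
                  rayCount: Int,
                  maxDistance: CGFloat,
                  raycast: (CGPoint, CGPoint) -> CGPoint?) -> [CGPoint] {
    var points: [CGPoint] = []
    for i in 0..<rayCount {
        let angle = Double(i) / Double(rayCount) * 2.0 * .pi
        let target = CGPoint(x: origin.x + CGFloat(cos(angle)) * maxDistance,
                             y: origin.y + CGFloat(sin(angle)) * maxDistance)
        points.append(raycast(origin, target) ?? target)
    }
    return points  // draw lines to these points, or fill them as a polygon
}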
There is some good source code here about dynamic and static lights in 2D space.
It's Ruby code, but it's easy to understand, so it shouldn't take long to port to Obj-C/Cocos2D/Box2D.
I really hope it will help you as it helped me.
Hm, interesting question. Cocos2D does provide some rather flexible masking effects. You could have a gradient mask that you lay over your objects, where its position depends on the position of the "light", thereby giving the effect that your objects were being coloured by the light.
