Image-based collision detection taking transparency into account - iOS

Basically this is my code to check if two UIImageViews collide:
if (CGRectIntersectsRect(self.fishImage.frame, self.minebomb.frame))
{
    // some code
}
The code above only detects when the two frames collide. How do I detect when non-transparent pixels collide?
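One common approach is a two-phase test: keep the cheap frame check as a first pass, then compare alpha values only inside the overlapping rectangle. Here is a minimal untested sketch, assuming both image views share the same superview coordinate space, that each image is drawn at its frame's size (scale-to-fill), and ignoring Retina scale; the method names and the 127 alpha threshold are just illustrative:

// Inside your view controller (UIKit imported).
- (BOOL)nonTransparentPixelsCollide:(UIImageView *)a with:(UIImageView *)b
{
    CGRect overlap = CGRectIntersection(a.frame, b.frame);
    if (CGRectIsEmpty(overlap)) return NO; // frames don't even touch

    size_t w = (size_t)ceil(CGRectGetWidth(overlap));
    size_t h = (size_t)ceil(CGRectGetHeight(overlap));

    // One byte per pixel: only the alpha channel is rendered and compared.
    uint8_t *alphaA = calloc(w * h, 1);
    uint8_t *alphaB = calloc(w * h, 1);
    [self renderAlphaOfView:a overlap:overlap into:alphaA width:w height:h];
    [self renderAlphaOfView:b overlap:overlap into:alphaB width:w height:h];

    BOOL hit = NO;
    for (size_t i = 0; i < w * h && !hit; i++) {
        hit = (alphaA[i] > 127 && alphaB[i] > 127); // both mostly opaque here
    }
    free(alphaA);
    free(alphaB);
    return hit;
}

- (void)renderAlphaOfView:(UIImageView *)view
                  overlap:(CGRect)overlap
                     into:(uint8_t *)buffer
                    width:(size_t)w
                   height:(size_t)h
{
    // Alpha-only bitmap context: 8 bits per pixel, no color space needed.
    CGContextRef ctx = CGBitmapContextCreate(buffer, w, h, 8, w, NULL,
                                             (CGBitmapInfo)kCGImageAlphaOnly);
    UIGraphicsPushContext(ctx);
    // Flip into UIKit's top-left coordinate system, then shift so the
    // overlap rect lands at the buffer's origin.
    CGContextTranslateCTM(ctx, 0, (CGFloat)h);
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, -overlap.origin.x, -overlap.origin.y);
    [view.image drawInRect:view.frame]; // draw in superview coordinates
    UIGraphicsPopContext();
    CGContextRelease(ctx);
}

With that in place, the original check becomes if ([self nonTransparentPixelsCollide:self.fishImage with:self.minebomb]) { ... } (the frame test is folded in via CGRectIntersection). Note this is expensive; if the images never change, consider extracting each image's alpha buffer once up front and just sampling it per frame.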

Related

How to change border of this image

I am making a game using this function:
if (CGRectIntersectsRect(object.frame, object2.frame)) {
    [self GameOver];
}
Both objects are square; however, the image of object #2 is not. Therefore the game ends when the borders collide, even when the actual pictures don't. Is there a way I can have the border "fit" the image, so that the game only ends when the actual pictures collide?
Thanks :)
** My image is a shark, and therefore a rectangle cannot be used.
A view's frame is always a CGRect, which is a rectangle. You can use UIKit Dynamics for collision detection with views, but it only supports rectangles too.
As @jammycoder mentioned, try SpriteKit or another game engine if you need collision bounds that follow a custom shape.
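If you do go the SpriteKit route, here is a hedged sketch of what that buys you (assumes iOS 8+ for bodyWithTexture:size:; the image name and bitmask are placeholders): a physics body can be traced from the texture's non-transparent pixels, so the collision boundary follows the shark outline rather than its bounding rectangle.

// SpriteKit sketch: the collision shape is derived from opaque pixels.
// "shark" and the bitmask value are illustrative placeholders.
SKSpriteNode *shark = [SKSpriteNode spriteNodeWithImageNamed:@"shark"];
shark.physicsBody = [SKPhysicsBody bodyWithTexture:shark.texture
                                              size:shark.size];
shark.physicsBody.affectedByGravity = NO;   // the game moves the shark itself
shark.physicsBody.contactTestBitMask = 0x1; // report contacts to the delegate
[scene addChild:shark];                     // scene: your SKScene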

iOS Triangular Image view

So I'm making a game, and pretty much when the player (which is a triangular shaped rocket) hits an object flying at you (a rock), the game ends. I have everything working well, but my problem is that the rocket is a triangle, yet the image view it's in is a rectangle. So if the edge of the image view touches the rock, the game will end even though the actual rocket didn't touch the object. So basically, how can I make the rock image view not recognize the parts of the rocket image view which are empty? Basically, a triangular shaped image view.
Thank you for your help. Let me know if you need more info or want to see the code I have for them to collide.
You can represent the triangle analytically with 3 points and the rock with a center and radius, then implement an algorithm that hit-tests those two shapes against each other. Or draw the two shapes onto some graphics context using an appropriate blend mode and check for overlapping pixels (for instance, draw one as red and the other as green, and look for a pixel that is both red and green). You could actually do that with two image views having those colors and 0.5 alpha, added to a third invisible view, but you would need to get the image from the view and then iterate through all of its pixels. In either case, do this check only after the corresponding view frames overlap.
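A minimal sketch of the analytic option, assuming the triangle's three corners and the rock's center/radius are tracked in the same coordinate space (all the names here are illustrative, and the functions are plain C):

#import <UIKit/UIKit.h>

// Squared distance from point p to segment ab (squared to avoid sqrt).
static CGFloat DistSquaredToSegment(CGPoint p, CGPoint a, CGPoint b) {
    CGFloat abx = b.x - a.x, aby = b.y - a.y;
    CGFloat len2 = abx * abx + aby * aby;
    CGFloat t = (len2 > 0) ? ((p.x - a.x) * abx + (p.y - a.y) * aby) / len2 : 0;
    t = MAX(0, MIN(1, t)); // clamp the projection onto the segment
    CGFloat cx = a.x + t * abx - p.x, cy = a.y + t * aby - p.y;
    return cx * cx + cy * cy;
}

// Same-side test: p is inside if the cross products against all three
// edges don't disagree in sign.
static BOOL TriangleContainsPoint(CGPoint p, CGPoint t0, CGPoint t1, CGPoint t2) {
    CGFloat d0 = (t1.x - t0.x) * (p.y - t0.y) - (t1.y - t0.y) * (p.x - t0.x);
    CGFloat d1 = (t2.x - t1.x) * (p.y - t1.y) - (t2.y - t1.y) * (p.x - t1.x);
    CGFloat d2 = (t0.x - t2.x) * (p.y - t2.y) - (t0.y - t2.y) * (p.x - t2.x);
    BOOL hasNeg = (d0 < 0) || (d1 < 0) || (d2 < 0);
    BOOL hasPos = (d0 > 0) || (d1 > 0) || (d2 > 0);
    return !(hasNeg && hasPos);
}

// Hit if the rock's center is inside the triangle, or the circle reaches
// any of the three edges.
static BOOL CircleHitsTriangle(CGPoint c, CGFloat r,
                               CGPoint t0, CGPoint t1, CGPoint t2) {
    if (TriangleContainsPoint(c, t0, t1, t2)) return YES;
    CGFloat r2 = r * r;
    return DistSquaredToSegment(c, t0, t1) <= r2 ||
           DistSquaredToSegment(c, t1, t2) <= r2 ||
           DistSquaredToSegment(c, t2, t0) <= r2;
}

Run this only after CGRectIntersectsRect on the two frames passes, exactly as suggested above.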

Fill image with different color by detecting the different parts

I have an image of a landscape which I need to fill with different colors.
When I select a color from the palette and start scrubbing on any particular part, only that part should get the color, even if by mistake I take my finger outside of that part of the image.
So basically I need to detect which part of the image I have tapped, so that only that part takes the color.
I am developing this app in Cocos2d-x, but any help with the logic would be a good starting point.
Note: I know I could achieve this by using separate images and then detecting touches, but that increases the app size by a lot of MBs.
I guess the user will be able to draw only on the white parts of the image.
If the above is true, what I want you to do is, in your touchesMoved method, check whether any black (non-white) pixel is present between the previous touch point and the current touch point.
If there is no such black pixel, then draw; otherwise, don't.
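A sketch of that check, assuming the image's pixels were read once into an RGBA byte buffer (e.g. via a CGBitmapContextCreate render) and the touch points are already in that buffer's pixel coordinates; the darkness and opacity thresholds are arbitrary and worth tuning:

// Samples along the segment between the previous and current touch points
// and reports whether any sample is a "black" outline pixel.
static BOOL SegmentCrossesOutline(CGPoint from, CGPoint to,
                                  const uint8_t *rgba,
                                  size_t width, size_t height) {
    CGFloat dx = to.x - from.x, dy = to.y - from.y;
    int steps = (int)MAX(fabs(dx), fabs(dy)) + 1; // roughly one sample per pixel
    for (int i = 0; i <= steps; i++) {
        CGFloat t = (CGFloat)i / steps;
        int x = (int)(from.x + t * dx), y = (int)(from.y + t * dy);
        if (x < 0 || y < 0 || x >= (int)width || y >= (int)height) continue;
        const uint8_t *px = rgba + 4 * (y * width + x);
        // Dark and opaque -> treat as part of the black outline.
        if (px[3] > 128 && px[0] < 64 && px[1] < 64 && px[2] < 64) return YES;
    }
    return NO;
}

In touchesMoved, draw the stroke only when SegmentCrossesOutline returns NO.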

How to apply full-screen SKEffectNode for post-processing in SpriteKit

I'm trying out SpriteKit with the following setup:
An SKScene with two child nodes used merely for grouping other nodes: foreground and background.
background is really empty as of now, but would eventually hold some type of background sprite / layers.
foreground is an SKEffectNode, and whenever the user taps on the screen, a new instance of an SKNode subclass which represents a game element is added as a child to it.
This SKNode subclass basically creates three SKShapeNodes and two labels: an outer circumference, an inner circumference, an inner quarter circumference, and the two labels. The inner quarter circumference has an SKAction that makes it rotate forever about its origin / center.
Now here's the issue: as long as foreground doesn't have any CIFilter or has shouldEnableEffects = NO, everything is fine. That is, I can tap on the screen and my game elements are instantiated and added to the main scene. But the minute I try adding a CIGaussianBlur or CIBloom to the foreground, I notice two things:
The framerate drops to about 2fps. Mind you, this happens even with as little as 6 nodes alive in the scene.
The effect seems to be constantly cropping its contents or adjusting its frame. That is, even with a single node, the "full screen" effect seems to constantly crop or adjust its bounds to the minimum area required to hold all the nodes (screenshots of the cropped result for one node and for two nodes omitted).
In OpenGL ES 2, one would do a post blur / bloom by basically rendering the whole framebuffer (all objects) to a texture, then doing at least one more pass to blur, etc., on that texture, and then either presenting that in the framebuffer attached to the display or compositing it with the original render back into the framebuffer. I'd expect SKEffectNode to work in a similar way. However, the cropping and the poor performance make me think I might be using the effect node the wrong way. Any thoughts?
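For reference, wiring the filter up presumably looks something like this sketch (the radius value is an arbitrary example; foreground is the SKEffectNode described above):

// Sketch: attaching a Core Image blur to the grouping node.
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:@8.0 forKey:kCIInputRadiusKey]; // example radius
foreground.filter = blur;
foreground.shouldEnableEffects = YES; // YES is the default, shown for clarity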
It seems to be a bug with SKEffectNode trying to apply a filter to child SKShapeNodes, as far as I can tell. I played around with this and reproduced your results, but when I switched out the SKShapeNodes for SKSpriteNodes (using a simple PNG of a circle), the cropping no longer appeared. It's a bug in that SKEffectNode doesn't handle the stroke of an SKShapeNode very well. If you take off the stroke (lineWidth = 0) and give it a fill color, you'll see that there is no cropping.
As for the frame rate, SKShapeNodes perform poorly. Doing the switch to SKSpriteNodes I mentioned earlier boosted my fps from 40 to 50 when I had 35 nodes on the screen (iPhone 5) with the filter applied.
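One way to do the suggested swap without shipping a PNG is to rasterize the shape once with SKView's textureFromNode: and reuse the texture for every sprite; a sketch (the 60-point size and the variable names are illustrative):

// Build the circle shape once, fill only (strokes trigger the cropping bug).
CGPathRef oval = CGPathCreateWithEllipseInRect(CGRectMake(0, 0, 60, 60), NULL);
SKShapeNode *circleShape = [SKShapeNode node];
circleShape.path = oval;
CGPathRelease(oval);
circleShape.lineWidth = 0;
circleShape.fillColor = [SKColor whiteColor];

// Rasterize it to a texture and add cheap sprites under the effect node.
SKTexture *circleTexture = [skView textureFromNode:circleShape]; // skView: your SKView
SKSpriteNode *circle = [SKSpriteNode spriteNodeWithTexture:circleTexture];
[foreground addChild:circle]; // foreground: the SKEffectNode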

Determine which sprite the mouse is over

I'm attempting to determine which sprite the mouse is over in an isometric 2D game. I think my best bet is to draw each sprite in a different color into a separate RenderTarget2D and turn it into a Texture2D, at which point I can get the color data at the mouse point and check it against the drawn sprites.
The problem I'm having with that method, though, is that I can't change the color of the individual sprites to a solid color. If I change the Color in the spriteBatch.Draw call, it only tints the sprite rather than drawing it in a solid color, so the data I retrieve from the texture doesn't help.
Any suggestions or help with drawing those sprites in a solid color?
Don't do it that way. Creating a new render target and copying its data back into memory sixty times per second, even for a mere hundred sprites, is far beyond what current systems can handle.
Simply use the Contains method of the Rectangle structure:
var destination = new Rectangle(100, 100, 50, 50);
bool mouseOver = destination.Contains(mouseX, mouseY);
