attach multiple collision shapes to a single rigid body - iOS

I have created a single-player game using iOS + Cocos2d + Chipmunk, and I'm looking for a solution that demonstrates how to attach multiple collision shapes to a single rigid body. I have a target with an irregular shape (a car) that I need to detect collisions for. The target (car) is seen by the player from a side view, and other objects may impact it from multiple directions, not just from the front or the rear. The shape is such that I am unable to use a single cpPolyShape and achieve a realistic collision effect. Two cpPolyShapes (rectangular) stacked on top of each other, with the bottom rectangle being larger, should do the trick.
Can someone provide an example of how this can be achieved?
I read the Chipmunk docs about cpShape (http://code.google.com/p/chipmunk-physics/wiki/cpShape), which state that 'You can attach multiple collision shapes to a rigid body' at the very bottom of the page in the notes section, but no example is provided.
I currently have a working, functional project and am trying to make some final adjustments to improve game play.

When you call cp*ShapeNew(), the first parameter is the body to attach it to. Simply make more than one shape that shares the same body. There is no trick.
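For the two stacked rectangles described above, a minimal sketch might look like this (Chipmunk 6-era C API assumed; the dimensions, mass, and the `space` variable are placeholders):

// One rigid body for the whole car.
cpFloat mass = 1.0f;
cpBody *carBody = cpSpaceAddBody(space,
    cpBodyNew(mass, cpMomentForBox(mass, 80.0f, 20.0f)));
cpBodySetPos(carBody, cpv(160.0f, 100.0f));

// Two box shapes attached to the same body: a larger lower rectangle
// (the chassis) and a smaller upper one (the cabin), offset via its bounding box.
cpShape *chassis = cpSpaceAddShape(space,
    cpBoxShapeNew2(carBody, cpBBNew(-40.0f, -10.0f, 40.0f, 10.0f)));
cpShape *cabin = cpSpaceAddShape(space,
    cpBoxShapeNew2(carBody, cpBBNew(-20.0f, 10.0f, 20.0f, 25.0f)));

Both shapes move and rotate with carBody, and each reports collisions on its own, which gives the two-rectangle silhouette a realistic impact response from any direction.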

You can add a collision handler. In the .h file, add the prototype:
static int FunctionName (cpArbiter *arb, cpSpace *space, void *unused);
Then in the .m file, register the handler:
cpSpaceAddCollisionHandler(<space>, <cpCollisionType of body a>, <cpCollisionType of body b>, <cpCollisionBeginFunc begin>, <cpCollisionPreSolveFunc preSolve>, <cpCollisionPostSolveFunc postSolve>, <cpCollisionSeparateFunc separate>, <void *data>);
static int FunctionName(cpArbiter *arb, cpSpace *space, void *unused)
{
    cpShape *a, *b;
    cpArbiterGetShapes(arb, &a, &b);
    printf("\n Collision Detected");
    return 1;
}
Note: don't forget to set the collision type of both bodies' shapes.
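For example, a hedged sketch of the wiring for the car scenario above (the collision-type constants and the shape/space variables are placeholders; cpShapeSetCollisionType is the Chipmunk 6 setter, on older versions assign shape->collision_type directly):

enum { kCarCollisionType = 1, kObstacleCollisionType = 2 };

// Shapes sharing one body can share a collision type.
cpShapeSetCollisionType(chassisShape, kCarCollisionType);
cpShapeSetCollisionType(cabinShape, kCarCollisionType);
cpShapeSetCollisionType(obstacleShape, kObstacleCollisionType);

// Register FunctionName as the begin callback; pass NULL for the rest.
cpSpaceAddCollisionHandler(space, kCarCollisionType, kObstacleCollisionType,
                           FunctionName, NULL, NULL, NULL, NULL);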

Related

Find where on a path a touch is (by percentage)

I have a CGPath, and on that path I detect touches. The path is basically a line or a curve, but it is not filled. The idea is that the user strokes the path, so wherever the user has stroked along the path I'd like to draw the path with a different stroke/color etc.
First, to find if the touches are inside the path, and to give the user some room for error, I'm using CGPathCreateCopyByStrokingPath and CGPathContainsPoint. This works fine.
Next, I need to find how far along the path the user has stroked. That is today's question.
When that is done, and I have an answer like 33%, it'd be a simple assignment to strokeEnd on the shape layer (two copies: one complete path, and one on top with strokeEnd set to whatever percentage of the complete path the user has stroked with his finger so far).
Any ideas how to find how far along the path the user has stroked his finger?
I have worked on this for quite a while and would like to share my answer (I'm the one who asked the question, btw).
I want to detect touches along a path (e.g. a Bézier curve). The problem is that CGPathContainsPoint can only tell you whether a touch hit the path, not where it hit.
To avoid requiring the user to hit the path exactly, create a fatter version with CGPathCreateCopyByStrokingPath; that's the one to check for touches on.
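For illustration, a minimal sketch of that hit-test (the 30-point stroke width is an arbitrary slop value; `path` and `touchPoint` are assumed to exist):

// Build a fattened copy of the path once, then hit-test against it.
CGPathRef fatPath = CGPathCreateCopyByStrokingPath(path, NULL, 30.0,
                                                   kCGLineCapRound,
                                                   kCGLineJoinRound, 0.0);
bool hit = CGPathContainsPoint(fatPath, NULL, touchPoint, false);
CGPathRelease(fatPath);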
The main problem remains: where along the path did the touch hit? What I did was create a dashed copy of the path with CGPathCreateCopyByDashingPath, using minimal spacing. This path is never actually displayed. Then I walked it with CGPathApply, saving every path element in an array.
When a touch is detected and falls within the "fat path", I just loop through the array of path-dash-elements, checking the distance (just regular Pythagoras) to the start point of each one.
If the touch point is closest to the 14th element, and there are e.g. 140 elements total, that means we're 10% along the way.
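In rough C terms, the closest-element lookup might look like this (a sketch only; `points` and `count` are assumed to hold the start points saved during the CGPathApply pass):

#include <CoreGraphics/CoreGraphics.h>

// Find the index of the saved dash-element start point nearest the touch.
size_t ClosestElementIndex(const CGPoint *points, size_t count, CGPoint touch) {
    size_t best = 0;
    CGFloat bestDistSq = CGFLOAT_MAX;
    for (size_t i = 0; i < count; i++) {
        CGFloat dx = points[i].x - touch.x;
        CGFloat dy = points[i].y - touch.y;
        CGFloat distSq = dx * dx + dy * dy;   // compare squared distances, no sqrt needed
        if (distSq < bestDistSq) {
            bestDistSq = distSq;
            best = i;
        }
    }
    return best;
}

// e.g. index 14 of 140 elements -> 10% along the path:
// CGFloat percentage = (CGFloat)ClosestElementIndex(points, count, touch) / count;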
Now we have the percentage. What I want next is to paint the path as far as the touch. I tried the suggested strokeEnd property; it didn't match my calculated percentage at all. Actually, it seemed quite inaccurate to me. Or maybe my calculation is.
Anyway, since we already have all the path elements and know which one to stop at (the 14th), what I did was build a new path out of those elements. Since I didn't want a dashed path (dashing was just a trick to find out where on the long continuous path the touch was), I just skipped all the moveTo path elements (except the very first). That creates a continuous path.
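A rough sketch of that rebuild step (the SavedElement struct is a hypothetical stand-in for however the CGPathApply pass stored its elements):

// Hypothetical storage for one saved path element.
typedef struct {
    CGPathElementType type;
    CGPoint point;            // the element's first point
} SavedElement;

// Rebuild a continuous path up to the element nearest the touch,
// skipping every moveTo except the first (those were only dash gaps).
CGMutablePathRef BuildPathUpTo(const SavedElement *elems, size_t stopIndex) {
    CGMutablePathRef out = CGPathCreateMutable();
    CGPathMoveToPoint(out, NULL, elems[0].point.x, elems[0].point.y);
    for (size_t i = 1; i <= stopIndex; i++) {
        if (elems[i].type == kCGPathElementMoveToPoint) continue;
        CGPathAddLineToPoint(out, NULL, elems[i].point.x, elems[i].point.y);
    }
    return out;   // caller releases with CGPathRelease
}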
CGPathApply - by request
// MARK: - Stuff for CGPathApply
typealias MyPathApplier = @convention(block) (UnsafePointer<CGPathElement>) -> Void
// http://stackoverflow.com/questions/24274913/equivalent-of-or-alternative-to-cgpathapply-in-swift
// Note: You must declare MyPathApplier as @convention(block), because
// if you don't, you get "fatal error: can't unsafeBitCast between
// types of different sizes" at runtime, on Mac OS X at least.
func myPathApply(path: CGPath!, block: MyPathApplier) {
    let callback: @convention(c) (UnsafeMutablePointer<Void>, UnsafePointer<CGPathElement>) -> Void = { (info, element) in
        let block = unsafeBitCast(info, MyPathApplier.self)
        block(element)
    }
    CGPathApply(path, unsafeBitCast(block, UnsafeMutablePointer<Void>.self), unsafeBitCast(callback, CGPathApplierFunction.self))
}
I have a utility class GHPathUtilies.m which includes a method:
+(CGFloat) totalLengthOfCGPath:(CGPathRef)pathRef
This is all written in Objective-C, but presumably you could use the same technique to walk along the path elements, using CGPathApply, until you are near enough to, or have passed, the target point. The general idea is to break the curves between elements into small line segments and keep a running sum of their lengths.
Of course if the path doubles back on itself, you'll have to figure out which point is the touch point.
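The running-sum idea as a hedged C sketch, assuming the curves have already been subdivided into a flat list of points:

#include <math.h>

// Total length of a flattened path: sum the lengths of its line segments.
// To find how far along a target point is, accumulate only up to the
// segment nearest that point and divide by the total.
CGFloat PolylineLength(const CGPoint *pts, size_t count) {
    CGFloat total = 0.0;
    for (size_t i = 1; i < count; i++) {
        CGFloat dx = pts[i].x - pts[i - 1].x;
        CGFloat dy = pts[i].y - pts[i - 1].y;
        total += sqrt(dx * dx + dy * dy);
    }
    return total;
}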

Create a class identified by variables

I've always had issues with classes, and I'm not sure if this is possible.
I'm trying to create a class with an identifiable name.
I realize that isn't clear, but my overall goal is to create a grid-like game, and each square in the grid would be an instance of the class.
So for example, I would have a class called square and say in my code
square(16,47).isdead = true;
Basically, I want to know if it is possible to create a class where I can differentiate between at least a hundred different squares.
Also, not sure if this matters, but I am using sprite kit.
There are several ways to do what you want; I would prefer the method below.
A C-style two-dimensional array:
Square * squares[10][10];
And you encapsulate that in a class called SquareManager with a method:
-(Square *)squareX:(short)x Y:(short)y;
If you specifically want the access pattern described in your question you can use (credit goes to arturgrigor):
#define square(x, y) [aSquareManager squareX:x Y:y]
Then you can access all of your squares this way:
if ([aSquareManager squareX:16 Y:47].isdead==true) [self showSkullForSquare:[aSquareManager squareX:16 Y:47]];
A different approach would be that a square has x and y properties:
Square.h
@property short y;
@property short x;
And then you put them all into an array, and when you need a square you search through the array.
for (Square *aSquare in squares) {
    if (aSquare.x == anXValue && aSquare.y == anYValue) {
        return aSquare;
    }
}
Searches like these are much quicker than you might think; a linear scan over a few hundred squares is negligible in practice.
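For a feel of the two access patterns, here is a plain-C sketch (the grid size and the Square fields are placeholders; the question's square(16,47) implies a correspondingly sized grid):

#include <stdbool.h>
#include <stddef.h>

typedef struct { short x, y; bool isdead; } Square;

#define GRID_W 20
#define GRID_H 50
static Square grid[GRID_W][GRID_H];

// Direct indexing, O(1): the preferred SquareManager approach.
static Square *squareAt(short x, short y) {
    return &grid[x][y];
}

// Linear search, O(n): the second approach; fine for a few hundred squares.
static Square *findSquare(Square *all, size_t count, short x, short y) {
    for (size_t i = 0; i < count; i++) {
        if (all[i].x == x && all[i].y == y) return &all[i];
    }
    return NULL;
}

Usage then mirrors the question: squareAt(16, 47)->isdead = true;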

Making two objects collide but move like kinematic objects

I'm working on a "Words in a pic" clone. I have different images representing each letter, and empty boxes where the letters should be placed.
When I drag the letters I want them to move like a static body, i.e. only up, down, left and right (no turning or spinning). When a letter is within its box it should stay within that box; otherwise it should go back to its original position.
The thing is that static objects can't collide with other static objects, nor can a kinematic object collide with another kinematic object, so I need to use dynamic bodies, if I have understood it correctly?
But how do I make the body (the letter image) move like a static or kinematic body (only up, down, left and right) while the drag event is active, and still detect collisions between a letter image and an empty box image?
Thanks for helping me with this, I have not been able to find any information on how to solve this problem!
This was easier than I thought: set the items as "dynamic", then set object.isSensor = true; to stop rotation, set object.isFixedRotation = true, and deactivate gravity with object.gravityScale = 0.

locating a change between two images

I have two images that are similar, but one has an additional change on it. I need to be able to locate the change between the two images. Both images have white backgrounds and the change is a line that has been drawn. I don't need anything as complex as OpenCV; I'm looking for a "simple" solution in C or C++.
If you just want to show the differences, you can use the code below.
FastBitmap original = new FastBitmap(bitmap);
FastBitmap overlay = new FastBitmap(processedBitmap);
//Subtract the overlay from the original to see just the differences.
Subtract sub = new Subtract(overlay);
sub.applyInPlace(original);
// Show the results
JOptionPane.showMessageDialog(null, original.toIcon());
To compare the two images, you can use the ObjectiveFidelity class in the Catalano Framework.
The Catalano Framework is in Java, so you can port this class to another LGPL project.
https://code.google.com/p/catalano-framework/
FastBitmap original = new FastBitmap(bitmap);
FastBitmap reconstructed = new FastBitmap(processedBitmap);
ObjectiveFidelity of = new ObjectiveFidelity(original, reconstructed);
int error = of.getTotalError();
double errorRMS = of.getErrorRMS();
double snr = of.getSignalToNoiseRatioRMS();
//Show the results
Disclaimer: I am the author of this framework, but I thought this would help.
Your description leaves me with a few unanswered questions. It would be good to see some example before/after images.
However, on the face of it, and assuming you just want to find the parameters of the added line, it may be enough to convert the frames to grey-scale, subtract them from one another, threshold the result to black & white, and then perform line segment detection.
If the resulting image only contains one straight line segment, then it might be enough to find the bounding box around the remaining pixels, with a simple check to determine which of the two possible line segments you have.
However it would probably be simpler to use one of the Hough Transform methods provided by OpenCV.
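Before reaching for OpenCV, the subtract-and-threshold step itself is small enough to hand-roll. A hedged C sketch, assuming both images are same-sized 8-bit grey-scale buffers:

#include <stddef.h>
#include <stdlib.h>

// Per-pixel absolute difference, thresholded to black & white.
// Pixels that changed between the two images come out white (255).
void DiffThreshold(const unsigned char *a, const unsigned char *b,
                   unsigned char *out, size_t count, unsigned char threshold) {
    for (size_t i = 0; i < count; i++) {
        int d = abs((int)a[i] - (int)b[i]);
        out[i] = (d > threshold) ? 255 : 0;
    }
}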
You can use memcmp(), the ANSI C function for comparing two memory blocks (much like strcmp()). Just call it on the arrays of pixels; it returns whether or not they are identical.
With a little tweak you can instead get back a pointer to the place in the memory block where the first change occurred. That gives you a pointer to the first differing pixel; you can then walk along its neighbours to find all the non-white pixels (representing your line).
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

bool AreImagesDifferent(const char *Im1, const char *Im2, const int size) {
    return memcmp(Im1, Im2, size) != 0;
}

const char *getFirstDifferentPixel(const char *Im1, const char *Im2, const int size) {
    const char *Im1end = Im1 + size;
    for (; Im1 < Im1end; Im1++, Im2++) {
        if ((*Im1) != (*Im2))
            return Im1;
    }
    return NULL; /* no difference found */
}
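Extending that, a hedged sketch that scans the whole buffer instead of stopping at the first difference, and reports the bounding box of every changed pixel (row-major layout assumed):

// Track the bounding box of all pixels that differ between two images.
// If *maxX is still -1 afterwards, the images are identical.
void DiffBoundingBox(const char *Im1, const char *Im2, int width, int height,
                     int *minX, int *minY, int *maxX, int *maxY) {
    *minX = width;  *minY = height;
    *maxX = -1;     *maxY = -1;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (Im1[y * width + x] != Im2[y * width + x]) {
                if (x < *minX) *minX = x;
                if (y < *minY) *minY = y;
                if (x > *maxX) *maxX = x;
                if (y > *maxY) *maxY = y;
            }
        }
    }
}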

Missing depth info after first mesh

I'm using SlimDX for a Direct3D 10 app. In the app I've loaded two or more meshes, with images loaded as textures, and I use an .fx file for the shaders. The code was modified from SlimDX's sample "SimpleModel10".
I moved the draw call and shader setup code into a class that manages one mesh, its shader (effect), and its draw call. Then I created two instances of this class and called their draw functions one after another.
In the output, no matter how I change the Z position of the meshes, the one drawn later always stays on top. When I used PIX to debug the draw calls, I found that the second mesh has no depth information while the first one does. I've tried with three meshes; the second and third ones have no depth either. The funny thing is that all of them are instantiated from the same class, using the same draw call.
What could have caused such a problem?
Following is part of the code in the draw function of the class. I've omitted the rest, as it's lengthy and involves a few classes. I kept the sample's existing OnRenderBegin() and OnRenderEnd():
PanelEffect.GetVariableByName("world").AsMatrix().SetMatrix(world);
lock (this)
{
    device.InputAssembler.SetInputLayout(layout);
    device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology.TriangleList);
    device.InputAssembler.SetIndexBuffer(indices, Format.R32_UInt, 0);
    device.InputAssembler.SetVertexBuffers(0, binding);
    PanelEffect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply();
    device.DrawIndexed(indexCount, 0, 0);
    device.InputAssembler.SetIndexBuffer(null, Format.Unknown, 0);
    device.InputAssembler.SetVertexBuffers(0, nullBinding);
}
Edit: After much debugging and code isolation, I found out the culprit is Font.Draw() in my DrawString() function
internal void DrawString(string text)
{
    sprite.Begin(SpriteFlags.None);
    string[] texts = text.Split(new string[] { "\r\n" }, StringSplitOptions.None);
    int y = PanelY;
    foreach (string t in texts)
    {
        font.Draw(sprite, t, new System.Drawing.Rectangle(PanelX, y, PanelSize.Width, PanelSize.Height), FontDrawFlags.SingleLine, new Color4(Color.Red));
        y += font.Description.Height;
    }
    sprite.End();
}
Commenting out Font.Draw solves the problem. Maybe it automatically sets some state which causes the next mesh draw to discard depth. Looking into SlimDX's source code now.
After much debugging in PIX, this is the conclusion:
Calling Font.Draw() automatically sets DepthEnable to false and DepthFunction to D3D10_COMPARISON_NEVER. I confirmed this by comparing the OutputMerger details in PIX before and after the Font.Draw() call.
Solution
Context10_1.Device.OutputMerger.DepthStencilState = depthStencilState;
Putting that line before the next mesh's draw call fixed the problem.
Previously I had only set the DepthStencilState in OnRenderBegin().
