box2d with custom polygon crashes - ios

I created a polygon which has 5 vertices, and all vertices are generated by VertexHelper.
Why does the program get SIGABRT at b2Assert(area > b2_epsilon) in ComputeCentroid() in b2PolygonShape.cpp?
The program runs well when I use shape.SetAsBox(.359375, 1.0) instead of shape.Set(vertices, count).
It seems that something goes wrong when calculating the centroid when shape.Set() is used, but I don't know how to deal with this problem.
Here's the code:
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.awake = NO;
bodyDef.position.Set(3.125, 3.125);
bodyDef.angle = -.785398163397;
spriteBody = world->CreateBody(&bodyDef);
spriteBody->SetUserData(sprite);
b2MassData massData = {2.0, b2Vec2(0.021875, 0.203125), 0.0};
spriteBody->SetMassData(&massData);
int32 count = 5;
b2Vec2 vertices[] = {
b2Vec2(-11.5f / PTM_RATIO, -16.0f / PTM_RATIO),
b2Vec2(-10.5f / PTM_RATIO, 15.0f / PTM_RATIO),
b2Vec2(10.5f / PTM_RATIO, 15.0f / PTM_RATIO),
b2Vec2(10.5f / PTM_RATIO, -5.0f / PTM_RATIO),
b2Vec2(-5.5f / PTM_RATIO, -16.0f / PTM_RATIO)
};
b2PolygonShape shape;
shape.Set(vertices, count);
b2FixtureDef fixtureDef;
fixtureDef.shape = &shape;
fixtureDef.density = 1.0f;
fixtureDef.friction = 0.2f;
fixtureDef.restitution = 0.7f;
spriteBody->CreateFixture(&fixtureDef);

It looks like you've wound your vertices the wrong way. They should be counter-clockwise in Box2D, at least by default.
Your assertion will be failing because the area calculation returns a negative value, far less than b2_epsilon.
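A quick way to check the winding before calling shape.Set() is to compute the polygon's signed area with the shoelace formula: ComputeCentroid() divides by this area, and a clockwise ring gives a negative value, which trips the b2Assert. A minimal sketch (a stand-in Vec2 struct replaces b2Vec2 so it runs outside Box2D):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };  // stand-in for b2Vec2

// Shoelace formula: positive for counter-clockwise rings,
// negative for clockwise ones (the order that crashes Box2D).
float signedArea(const std::vector<Vec2>& v) {
    float area = 0.0f;
    for (std::size_t i = 0; i < v.size(); ++i) {
        const Vec2& a = v[i];
        const Vec2& b = v[(i + 1) % v.size()];
        area += a.x * b.y - a.y * b.x;
    }
    return 0.5f * area;
}

bool isCounterClockwise(const std::vector<Vec2>& v) {
    return signedArea(v) > 0.0f;
}
```

Running this on the five vertices from the question (the PTM_RATIO division doesn't affect the sign) yields a negative area, confirming the clockwise winding; reversing the array fixes it.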

Related

Line Pixellating

Using Metal, I am drawing a line with Bezier curves built from four points, using nearly 1500 triangles per line. The line is pixelated. How can I reduce the pixelation?
vertex VertexOutBezier bezier_vertex(constant BezierParameters *allParams [[buffer(0)]],
                                     constant GlobalParameters& globalParams [[buffer(1)]],
                                     uint vertexId [[vertex_id]],
                                     uint instanceId [[instance_id]])
{
    float t = (float)vertexId / globalParams.elementsPerInstance;
    rint(t);
    BezierParameters params = allParams[instanceId];
    float lineWidth = (1 - (((float)(vertexId % 2)) * 2.0)) * params.lineThickness;
    float2 a = params.a;
    float2 b = params.b;
    float cx = distance(a, b);
    float2 p1 = params.p1 * 3.0;
    float2 p2 = params.p2 * 3.0;
    float nt = 1.0f - t;
    float nt_2 = nt * nt;
    float nt_3 = nt_2 * nt;
    float t_2 = t * t;
    float t_3 = t_2 * t;
    // Calculate a single point on this Bezier curve:
    float2 point = a * nt_3 + p1 * nt_2 * t + p2 * nt * t_2 + b * t_3;
    float2 tangent = -3.0 * a * nt_2 + p1 * (1.0 - 4.0 * t + 3.0 * t_2) + p2 * (2.0 * t - 3.0 * t_2) + 3 * b * t_2;
    tangent = float2(-tangent.y, tangent.x);
    VertexOutBezier vo;
    vo.pos.xy = point + (tangent * (lineWidth / 2.0f));
    vo.pos.zw = float2(0, 1);
    vo.color = params.color;
    return vo;
}
You need to enable MSAA (multisample anti-aliasing). How you do this depends on your exact Metal view configuration, but the easiest way is if you're using MTKView. To enable MSAA in an MTKView, all you have to do is:
metalView.sampleCount = 4
Then, when you configure your MTLRenderPipelineDescriptor before calling makeRenderPipelineState(), add the following:
pipelineDescriptor.sampleCount = 4
This should greatly improve the quality of your curves and reduce pixelation. It does come with a performance cost, however, as the GPU has to do substantially more work to render each frame.
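Independent of anti-aliasing, the Bezier evaluation in the vertex shader is easy to sanity-check on the CPU, since the polynomial must interpolate the endpoints exactly (t = 0 gives a, t = 1 gives b). A small sketch, mirroring the shader's convention that the control points are pre-multiplied by 3 (plain structs stand in for Metal's float2):

```cpp
#include <cassert>
#include <cmath>

struct Float2 { float x, y; };  // stand-in for Metal's float2

// Cubic Bezier with control points p1, p2 already scaled by 3,
// matching the shader: point = a*nt^3 + p1*nt^2*t + p2*nt*t^2 + b*t^3
Float2 bezierPoint(Float2 a, Float2 p1, Float2 p2, Float2 b, float t) {
    float nt = 1.0f - t;
    float nt2 = nt * nt, nt3 = nt2 * nt;
    float t2 = t * t, t3 = t2 * t;
    return { a.x * nt3 + p1.x * nt2 * t + p2.x * nt * t2 + b.x * t3,
             a.y * nt3 + p1.y * nt2 * t + p2.y * nt * t2 + b.y * t3 };
}
```

If the endpoints don't come back exactly, the pre-scaling of p1/p2 (or the polynomial itself) is off; if they do, the jaggedness is a rasterization issue, which is what MSAA addresses.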

OpenCV + OpenGL Using solvePnP camera pose - object is offset from detected marker

I have a problem in my iOS application where I attempt to obtain a view matrix using solvePnP and render a 3D cube using modern OpenGL. While my code attempts to render the cube directly on top of the detected marker, it renders with a certain offset from the marker (see video for example):
https://www.youtube.com/watch?v=HhP5Qr3YyGI&feature=youtu.be
(On the bottom right of the image you can see an OpenCV render of the homography around the tracked marker; the rest of the screen is an OpenGL render of the camera input frame and a 3D cube at location (0,0,0).)
The cube rotates and translates correctly whenever I move the marker, though it is telling that there is some difference in the scale of translations (i.e., if I move my marker 5 cm in the real world, it hardly moves by 1 cm on screen).
These are what I believe to be the relevant parts of the code where the error could come from:
Extracting view matrix from homography :
AVCaptureDevice *deviceInput = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *format = deviceInput.activeFormat;
CMFormatDescriptionRef fDesc = format.formatDescription;
CGSize dim = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true);
const float cx = float(dim.width) / 2.0;
const float cy = float(dim.height) / 2.0;
const float HFOV = format.videoFieldOfView;
const float VFOV = ((HFOV)/cx)*cy;
const float fx = abs(float(dim.width) / (2 * tan(HFOV / 180 * float(M_PI) / 2)));
const float fy = abs(float(dim.height) / (2 * tan(VFOV / 180 * float(M_PI) / 2)));
Mat camIntrinsic = Mat::zeros(3, 3, CV_64F);
camIntrinsic.at<double>(0, 0) = fx;
camIntrinsic.at<double>(0, 2) = cx;
camIntrinsic.at<double>(1, 1) = fy;
camIntrinsic.at<double>(1, 2) = cy;
camIntrinsic.at<double>(2, 2) = 1.0;
std::vector<cv::Point3f> object3dPoints;
object3dPoints.push_back(cv::Point3f(-0.5f,-0.5f,0));
object3dPoints.push_back(cv::Point3f(+0.5f,-0.5f,0));
object3dPoints.push_back(cv::Point3f(+0.5f,+0.5f,0));
object3dPoints.push_back(cv::Point3f(-0.5f,+0.5f,0));
cv::Mat raux,taux;
cv::Mat Rvec, Tvec;
cv::solvePnP(object3dPoints, mNewImageBounds, camIntrinsic, Mat(),raux,taux); //mNewImageBounds are the 4 corner of the homography detected by perspectiveTransform (the green outline seen in the image)
raux.convertTo(Rvec,CV_32F);
taux.convertTo(Tvec ,CV_64F);
Mat Rot(3,3,CV_32FC1);
Rodrigues(Rvec, Rot);
// [R | t] matrix
Mat_<double> para = Mat_<double>::eye(4,4);
Rot.convertTo(para(cv::Rect(0,0,3,3)),CV_64F);
Tvec.copyTo(para(cv::Rect(3,0,1,3)));
Mat cvToGl = Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0, 0) = 1.0f;
cvToGl.at<double>(1, 1) = -1.0f; // Invert the y axis
cvToGl.at<double>(2, 2) = -1.0f; // invert the z axis
cvToGl.at<double>(3, 3) = 1.0f;
para = cvToGl * para;
Mat_<double> modelview_matrix;
Mat(para.t()).copyTo(modelview_matrix); // transpose to col-major for OpenGL
glm::mat4 openGLViewMatrix;
for(int col = 0; col < modelview_matrix.cols; col++)
{
    for(int row = 0; row < modelview_matrix.rows; row++)
    {
        openGLViewMatrix[col][row] = modelview_matrix.at<double>(col,row);
    }
}
I made sure the camera intrinsic matrix contains correct values, and I believe the portion which converts the OpenCV Mat to an OpenGL view matrix to be correct, as the cube translates and rotates in the right directions.
Once the view matrix is calculated, I use it to draw the cube as follows:
_projectionMatrix = glm::perspective<float>(radians(60.0f), fabs(view.bounds.size.width / view.bounds.size.height), 0.1f, 100.0f);
_cube_ModelMatrix = glm::translate(glm::vec3(0,0,0));
const mat4 MVP = _projectionMatrix * openGLViewMatrix * _cube_ModelMatrix;
glUniformMatrix4fv(glGetUniformLocation(_cube_program, "ModelMatrix"), 1, GL_FALSE, value_ptr(MVP));
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
Is anyone able to spot my error?
You should create the perspective matrix as explained here: http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl
Here is the code:
const float fx = intrinsicParams(0, 0); // Focal length in x axis
const float fy = intrinsicParams(1, 1); // Focal length in y axis
const float cx = intrinsicParams(0, 2); // Principal point x
const float cy = intrinsicParams(1, 2); // Principal point y
projectionMatrix(0, 0) = 2.0f * fx;
projectionMatrix(0, 1) = 0.0f;
projectionMatrix(0, 2) = 0.0f;
projectionMatrix(0, 3) = 0.0f;
projectionMatrix(1, 0) = 0.0f;
projectionMatrix(1, 1) = 2.0f * fy;
projectionMatrix(1, 2) = 0.0f;
projectionMatrix(1, 3) = 0.0f;
projectionMatrix(2, 0) = 2.0f * cx - 1.0f;
projectionMatrix(2, 1) = 2.0f * cy - 1.0f;
projectionMatrix(2, 2) = -(far + near) / (far - near);
projectionMatrix(2, 3) = -1.0f;
projectionMatrix(3, 0) = 0.0f;
projectionMatrix(3, 1) = 0.0f;
projectionMatrix(3, 2) = -2.0f * far * near / (far - near);
projectionMatrix(3, 3) = 0.0f;
For more information about intrinsic matrix: http://ksimek.github.io/2013/08/13/intrinsic
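Note that this fill pattern assumes the intrinsics have been normalized by the image dimensions (fx, fy divided by width and height, cx, cy mapped to [0, 1]), per the linked article. A self-contained sketch of the same column-major fill, indexed m[col][row] like projectionMatrix(col, row) above (the function name and std::array layout are illustrative, not part of either library):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Build an OpenGL-style projection from normalized camera intrinsics.
// Column-major 4x4, indexed m[col][row]. Assumes fx, fy are divided
// by image width/height and cx, cy are mapped to [0, 1].
std::array<std::array<float, 4>, 4>
projectionFromIntrinsics(float fx, float fy, float cx, float cy,
                         float nearZ, float farZ) {
    std::array<std::array<float, 4>, 4> m{};  // zero-initialized
    m[0][0] = 2.0f * fx;
    m[1][1] = 2.0f * fy;
    m[2][0] = 2.0f * cx - 1.0f;
    m[2][1] = 2.0f * cy - 1.0f;
    m[2][2] = -(farZ + nearZ) / (farZ - nearZ);
    m[2][3] = -1.0f;
    m[3][2] = -2.0f * farZ * nearZ / (farZ - nearZ);
    return m;
}
```

A useful sanity check: with a centered principal point (cx = cy = 0.5), the off-diagonal terms in the third column vanish and the matrix reduces to a standard symmetric perspective frustum.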

Adding box2d Body on Sprite Cocos2d

I have a bucket to which I want to add a Box2D body: not on the whole bucket, but on the left, right, and bottom sides, so I can throw my ball inside it.
Here is my bucket. I have added a Box2D body on the left and right sides of the bucket, like this.
It works fine for me, but when I add a body on the bottom, my game crashes.
Here is my code for adding the two bodies to the bucket, with the bottom body commented out as well.
-(b2Body *) createBucket
{
    CCSprite* bucket = [CCSprite spriteWithFile:@"simple-bucket-md.png"];
    [self addChild:bucket z:3];
    b2Body* b_bucket;
    // set this to avoid updating this object in the tick schedule
    bucket.userData = (void *)YES;
    b2BodyDef bodyDef;
    bodyDef.type = b2_staticBody;
    CGPoint startPos = ccp(400,150);
    bodyDef.position = [self toMeters:startPos];
    bodyDef.userData = bucket;
    bodyDef.gravityScale = 0;
    b2PolygonShape dynamicBox;
    //----------------------------------
    // This is the body for the left side
    //----------------------------------
    int num = 5;
    b2Vec2 verts[] = {
        b2Vec2(-29.1f / PTM_RATIO, 25.0f / PTM_RATIO),
        b2Vec2(-25.4f / PTM_RATIO, -14.9f / PTM_RATIO),
        b2Vec2(-18.7f / PTM_RATIO, -14.9f / PTM_RATIO),
        b2Vec2(-21.8f / PTM_RATIO, 26.7f / PTM_RATIO),
        b2Vec2(-28.9f / PTM_RATIO, 25.1f / PTM_RATIO)
    };
    dynamicBox.Set(verts, num);
    b2FixtureDef fixtureDef;
    fixtureDef.shape = &dynamicBox;
    fixtureDef.friction = 0.7;
    fixtureDef.density = 10.0f;
    fixtureDef.restitution = 0.7;
    b_bucket = world->CreateBody(&bodyDef);
    b_bucket->CreateFixture(&fixtureDef);
    //----------------------------------
    // This is the body for the right side
    //----------------------------------
    int num1 = 5;
    b2Vec2 verts1[] = {
        b2Vec2(16.8f / PTM_RATIO, 27.0f / PTM_RATIO),
        b2Vec2(15.9f / PTM_RATIO, -11.5f / PTM_RATIO),
        b2Vec2(22.1f / PTM_RATIO, -10.7f / PTM_RATIO),
        b2Vec2(24.6f / PTM_RATIO, 26.9f / PTM_RATIO),
        b2Vec2(16.9f / PTM_RATIO, 26.7f / PTM_RATIO)
    };
    dynamicBox.Set(verts1, num1);
    fixtureDef.shape = &dynamicBox;
    b_bucket->CreateFixture(&fixtureDef);
    //----------------------------------
    // This is the body for the bottom
    //----------------------------------
    /*
    int num2 = 5;
    b2Vec2 verts2[] = {
        b2Vec2(-23.0f / PTM_RATIO, -21.6f / PTM_RATIO),
        b2Vec2(18.9f / PTM_RATIO, -21.0f / PTM_RATIO),
        b2Vec2(18.2f / PTM_RATIO, -26.1f / PTM_RATIO),
        b2Vec2(-22.8f / PTM_RATIO, -25.9f / PTM_RATIO),
        b2Vec2(-23.0f / PTM_RATIO, -21.7f / PTM_RATIO)
    };
    dynamicBox.Set(verts2, num2);
    fixtureDef.shape = &dynamicBox;
    b_bucket->CreateFixture(&fixtureDef);
    */
    return b_bucket;
}
Your vertices are ordered clockwise, while they should be counter-clockwise.
You must create polygons with a counter clockwise winding (CCW). We must be careful because the notion of CCW is with respect to a right-handed coordinate system with the z-axis pointing out of the plane. This might turn out to be clockwise on your screen, depending on your coordinate system conventions.
Also, to create a 4-sided polygon (which seems to be what you're trying to do), you only need 4 vertices. Try something like this:
int num2 = 4;
b2Vec2 verts2[] = {
b2Vec2(-22.8f / PTM_RATIO, -25.9f / PTM_RATIO),
b2Vec2(18.2f / PTM_RATIO, -26.1f / PTM_RATIO),
b2Vec2(18.9f / PTM_RATIO, -21.0f / PTM_RATIO),
b2Vec2(-23.0f / PTM_RATIO, -21.6f / PTM_RATIO)
};
dynamicBox.Set(verts2, num2);
fixtureDef.shape = &dynamicBox;
b_bucket-> CreateFixture(&fixtureDef);
Notice that you can also remove the fifth vertex from your lateral bodies.
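If you want to verify a vertex ring before handing it to dynamicBox.Set(), Box2D requires the polygon to be convex as well as counter-clockwise, and both properties fall out of the 2D cross products of consecutive edges. A stand-alone sketch (plain structs instead of b2Vec2; the function name is illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct V2 { float x, y; };  // stand-in for b2Vec2

// For a counter-clockwise convex ring, every pair of consecutive
// edges turns left, i.e. each 2D cross product is positive.
bool isConvexCCW(const std::vector<V2>& v) {
    std::size_t n = v.size();
    for (std::size_t i = 0; i < n; ++i) {
        const V2& a = v[i];
        const V2& b = v[(i + 1) % n];
        const V2& c = v[(i + 2) % n];
        float cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross <= 0.0f) return false;  // right turn or collinear
    }
    return true;
}
```

The rebuilt 4-vertex bottom polygon above passes this check; the original 5-vertex clockwise ring (or any ring with a repeated first/last vertex) does not.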

Screen space raycasting

I am using OpenGL ES 2.0 and the latest iOS.
I have a 3D model, and I wish the user to be able to select different parts of it by tapping on the screen. I have found this tutorial on converting a pixel-space screen coordinate to a world-space ray, and have implemented a ray-AABB intersection test to determine which part of the model was intersected.
I get some hits on the model at seemingly random sections of the model. So I need to debug this feature, but I don't really know where to start.
I can't exactly draw a line representing the ray (since it is coming out of the camera it will appear as a point), so I can see a couple of ways of debugging this:
Check the bounding boxes of the model sections. Is there an easy way in OpenGL ES to draw a bounding box given a min and max point?
Draw some 3D object along the path of the ray. This seems more complicated.
Actually debug the raycast and intersection code. This seems like the hardest to accomplish, since the algorithms are fairly well known (I took the intersection test straight out of my Real-Time Collision Detection book).
If anyone can help, or wants me to post some code, I could really use it.
Here is my code for converting to world space:
- (IBAction)tappedBody:(UITapGestureRecognizer *)sender
{
    if ( !editMode )
    {
        return;
    }
    CGPoint tapPoint = [sender locationOfTouch:0 inView:self.view];
    const float tanFOV = tanf(GLKMathDegreesToRadians(65.0f*0.5f));
    const float width = self.view.frame.size.width,
                height = self.view.frame.size.height,
                aspect = width/height,
                w_2 = width * 0.5,
                h_2 = height * 0.5;
    CGPoint screenPoint;
    screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) / aspect;
    screenPoint.y = tanFOV * ( 1.0 - tapPoint.y / h_2 );
    GLKVector3 nearPoint = GLKVector3Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, NEAR_PLANE );
    GLKVector3 farPoint = GLKVector3Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, FAR_PLANE );
    GLKVector3 nearWorldPoint = GLKMatrix4MultiplyVector3( _invViewMatrix, nearPoint );
    GLKVector3 farWorldPoint = GLKMatrix4MultiplyVector3( _invViewMatrix, farPoint );
    GLKVector3 worldRay = GLKVector3Subtract(farWorldPoint, nearWorldPoint);
    NSLog(@"Model matrix: %@", NSStringFromGLKMatrix4(_modelMatrix));
    worldRay = GLKVector3Normalize(worldRay);
    [male intersectWithRay:worldRay fromStartPoint:nearWorldPoint];
    for ( int i = 0; i < 3; ++i )
    {
        touchPoint[i] = nearWorldPoint.v[i];
    }
}
And here's how I get the matrices:
- (void)update
{
    // _rotation = 0;
    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, NEAR_PLANE, FAR_PLANE);
    self.effect.transform.projectionMatrix = projectionMatrix;
    GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -5.0f);
    // Compute the model view matrix for the object rendered with ES2
    _viewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
    _modelMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
    _modelViewMatrix = GLKMatrix4Rotate(_viewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
    _modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, _modelViewMatrix);
    _invViewMatrix = GLKMatrix4Invert(_viewMatrix, NULL);
    _invMVMatrix = GLKMatrix4Invert(_modelViewMatrix, NULL);
    _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(_modelViewMatrix), NULL);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, _modelViewMatrix);
    male.modelTransform = _modelMatrix;
    if ( !editMode )
    {
        _rotation += self.timeSinceLastUpdate * 0.5f;
    }
}
I can't exactly draw a line representing the ray (since it is coming
out of the camera it will appear as a point)
DonĀ“t discard this so soon. Isn't it that what you are trying to test? I mean, That would be so if you are unprojecting things right. But if it you have a bug, it won't.
I'll go with this first... if you see a point just under your finger the conversion is right, and you can start investigating the other options you pointed out, which are more complex.
I'm not sure why this fixed the problem I was having but changing
screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) / aspect;
to
screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) * aspect;
and
GLKVector4 nearPoint = GLKVector4Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, NEAR_PLANE, 1 );
GLKVector4 farPoint = GLKVector4Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, FAR_PLANE, 1 );
to
GLKVector4 nearPoint = GLKVector4Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, -NEAR_PLANE, 1 );
GLKVector4 farPoint = GLKVector4Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, -FAR_PLANE, 1 );
seems to have fixed the raycasting issue. I'm still not sure why my view matrix seems to indicate that the camera is looking down the positive Z axis, while my objects are translated along the negative Z axis.
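For debugging option 3 in isolation, the ray-AABB slab test itself is compact enough to verify with hand-picked rays before trusting it against the unprojection code. A minimal sketch of the standard slab method (plain float arrays instead of GLKVector3; the function name and layout are illustrative, not the book's exact code):

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <utility>

// Slab method: intersect the ray origin + t*dir (t >= 0) against the
// axis-aligned box [boxMin, boxMax]. On a hit, returns true and the
// nearest hit parameter in tHit.
bool rayAABB(const float origin[3], const float dir[3],
             const float boxMin[3], const float boxMax[3], float& tHit) {
    float tMin = 0.0f;
    float tMax = std::numeric_limits<float>::max();
    for (int i = 0; i < 3; ++i) {
        if (std::fabs(dir[i]) < 1e-8f) {
            // Ray parallel to this slab: must already be inside it.
            if (origin[i] < boxMin[i] || origin[i] > boxMax[i]) return false;
        } else {
            float inv = 1.0f / dir[i];
            float t1 = (boxMin[i] - origin[i]) * inv;
            float t2 = (boxMax[i] - origin[i]) * inv;
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;
        }
    }
    tHit = tMin;
    return true;
}
```

Feeding it axis-aligned rays with known answers (e.g. a ray from (0,0,5) straight down -Z against the unit cube must hit at t = 4) separates intersection bugs from unprojection bugs like the sign flip above.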

Different x and y speed (acceleration) in cocos2d?

I want to create a visible object whose trajectory, built from standard actions (CCMoveBy, etc.), is similar to:
x = sin(y)
My code:
CCMoveBy *moveAction1 = [CCMoveBy actionWithDuration:1.5 position:ccp(300, 0)];
CCEaseInOut *easeInOutAction1 = [CCEaseInOut actionWithAction:moveAction1 rate:2];
CCMoveBy *moveAction2 = [CCMoveBy actionWithDuration:1.5 position:ccp(-300, 0)];
CCEaseInOut *easeInOutAction2 = [CCEaseInOut actionWithAction:moveAction2 rate:2];
CCMoveBy *moveAction3 = [CCMoveBy actionWithDuration:1.5 position:ccp(0, -32)];
CCSpawn *moveActionRight = [CCSpawn actionOne:easeInOutAction1 two:moveAction3];
CCSpawn *moveActionLeft = [CCSpawn actionOne:easeInOutAction2 two:moveAction3];
CCSequence *sequenceOfActions = [CCSequence actionOne:moveActionRight two:moveActionLeft];
CCRepeatForever *finalMoveAction = [CCRepeatForever actionWithAction:sequenceOfActions];
[enemy runAction:finalMoveAction];
This code only moves the object down. The problem is that the object has different x and y accelerations, and I don't know how to combine them.
UPDATED
- (void)tick:(ccTime)dt
{
    CGPoint pos = self.position;
    pos.y -= 50 * dt;
    if (pos.y < activationDistance) {
        pos.x = 240 + sin(angle) * 140;
        angle += dt * 360 * 0.007;
        if (angle >= 360) {
            angle = ((int)angle) % 360;
        }
    }
    self.position = pos;
}
This is my current solution. I can increase activationDistance to adjust the object's trajectory, but I want to set an initial value for the angle variable.
I use literal numbers instead of variables because they are only used inside this function.
SOLVED
To change the initial angle:
angle = point.x < 240 ? -asin((240 - point.x) / 140) : asin((point.x - 240) / 140);
The main problem was that my tiled map has its own coordinate system and covers only a 320x320 portion of the screen.
I think it will be easier for you to just do it in your frame update method (the one I assume you schedule for updating your objects). So why not just do:
- (void)tick:(ccTime)dt {
    CGPoint pos = myObject.position;
    pos.x = <desired x> + sin(angle);
    pos.y = pos.y - y_acceleration * dt;
    angle += dt * 360 * x_acceleration;
    if (angle >= 360)
        angle = ((int)angle) % 360;
    myObject.position = pos;
}
And you can apply the same approach to the object's y axis.
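This kind of per-tick update is also easy to prototype outside cocos2d to tune the constants. A plain C++ sketch of the asker's tick (the 240/140 center and amplitude are the values from the question; the Enemy struct and default arguments are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Enemy { float x, y, angle; };  // illustrative stand-in for the sprite

// One tick of the sine-trajectory update: constant fall in y,
// x oscillating around centerX with the given amplitude.
// Note: sin() takes radians; the wrap at 360 mirrors the original code.
void tick(Enemy& e, float dt, float activationY,
          float centerX = 240.0f, float amplitude = 140.0f) {
    e.y -= 50.0f * dt;
    if (e.y < activationY) {
        e.x = centerX + std::sin(e.angle) * amplitude;
        e.angle += dt * 360.0f * 0.007f;
        if (e.angle >= 360.0f) e.angle = std::fmod(e.angle, 360.0f);
    }
}
```

Simulating a few hundred ticks confirms the invariants you care about: y decreases monotonically, and x never leaves the band centerX +/- amplitude, which is what makes the initial-angle formula (asin of the normalized offset) well defined.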