Currently I create a b2Fixture like this:
b2PolygonShape spriteShape;
int num = 3;
b2Vec2 verts[] = {
b2Vec2(-29.5f / PTM_RATIO, -49.0f / PTM_RATIO),
b2Vec2(10.0f / PTM_RATIO, -49.0f / PTM_RATIO),
b2Vec2(34.2f / PTM_RATIO, -9.2f / PTM_RATIO),
};
spriteShape.Set(verts, num);
b2FixtureDef bodyFixture;
bodyFixture.shape = &spriteShape;
bodyCharacter->CreateFixture(&bodyFixture);
The problem is that on a retina display the shape seems to double in size. Is there a reason for this, and if so, how do I keep it the same size so it doesn't get messed up on any device?
Thanks!
I faced the same issue. I solved it by multiplying PTM_RATIO by the content scale factor.
Try the code below and check it out; it worked for me:
int num = 3;
b2Vec2 verts[] = {
b2Vec2(-29.5f / ( PTM_RATIO * CC_CONTENT_SCALE_FACTOR() ), -49.0f / ( PTM_RATIO * CC_CONTENT_SCALE_FACTOR() )),
b2Vec2(10.0f / ( PTM_RATIO * CC_CONTENT_SCALE_FACTOR() ), -49.0f / ( PTM_RATIO * CC_CONTENT_SCALE_FACTOR() )),
b2Vec2(34.2f / ( PTM_RATIO * CC_CONTENT_SCALE_FACTOR() ), -9.2f / ( PTM_RATIO * CC_CONTENT_SCALE_FACTOR() )),
};
I imagine that PTM_RATIO is not multiplied by CC_CONTENT_SCALE_FACTOR(). See this article for a number of possible solutions.
Related
Using Metal, I am drawing lines as Bezier curves built from four control points, with nearly 1500 triangles per line. The lines come out pixelated. How can I reduce the pixelation?
vertex VertexOutBezier bezier_vertex(constant BezierParameters *allParams[[buffer(0)]],
constant GlobalParameters& globalParams[[buffer(1)]],
uint vertexId [[vertex_id]],
uint instanceId [[instance_id]])
{
float t = (float) vertexId / globalParams.elementsPerInstance;
BezierParameters params = allParams[instanceId];
float lineWidth = (1 - (((float) (vertexId % 2)) * 2.0)) * params.lineThickness;
float2 a = params.a;
float2 b = params.b;
float cx = distance(a , b);
float2 p1 = params.p1 * 3.0; // control points pre-scaled by the cubic Bernstein factor 3
float2 p2 = params.p2 * 3.0;
float nt = 1.0f - t;
float nt_2 = nt * nt;
float nt_3 = nt_2 * nt;
float t_2 = t * t;
float t_3 = t_2 * t;
// Calculate a single point in this Bezier curve:
float2 point = a * nt_3 + p1 * nt_2 * t + p2 * nt * t_2 + b * t_3;
float2 tangent = -3.0 * a * nt_2 + p1 * (1.0 - 4.0 * t + 3.0 * t_2) + p2 * (2.0 * t - 3.0 * t_2) + 3 * b * t_2;
tangent = (float2(-tangent.y , tangent.x ));
VertexOutBezier vo;
vo.pos.xy = point + (tangent * (lineWidth / 2.0f));
vo.pos.zw = float2(0, 1);
vo.color = params.color;
return vo;
}
You need to enable MSAA (multisample anti-aliasing). How you do this depends on your exact Metal view configuration, but the easiest way is if you're using MTKView. To enable MSAA in an MTKView, all you have to do is:
metalView.sampleCount = 4
Then, when you configure your MTLRenderPipelineDescriptor before calling makeRenderPipelineState(), add the following:
pipelineDescriptor.sampleCount = 4
This should greatly improve the quality of your curves and reduce pixelation. It does come with a performance cost, however, as the GPU has to do substantially more work to render your frame.
I have tried most of the possible methods (JavaScript / TouchAction) but am not able to scroll.
If anyone has a solution, please help me. Thanks!
Try below code:
Dimension size = driver.manage().window().getSize();
int x = size.width / 2;
int endy = (int) (size.height * 0.75);
int starty = (int) (size.height * 0.20);
driver.swipe(x, starty, x, endy, 1000);
I have a bucket to which I want to add a Box2D body, not on the whole bucket but on the left, right, and bottom, so I can throw my ball inside the bucket.
Here is my bucket. I have added Box2D bodies to the left and right sides of the bucket, and that works fine. But when I add a body on the bottom, my game crashes.
Here is my code adding the two side bodies, with the bottom body commented out:
-(b2Body *) createBucket
{
CCSprite* bucket = [CCSprite spriteWithFile:@"simple-bucket-md.png"];
[self addChild:bucket z:3];
b2Body* b_bucket;
//set this to avoid updating this object in the tick schedule
bucket.userData = (void *)YES;
b2BodyDef bodyDef;
bodyDef.type = b2_staticBody;
CGPoint startPos = ccp(400,150);
bodyDef.position = [self toMeters:startPos];
bodyDef.userData = bucket;
bodyDef.gravityScale = 0;
b2PolygonShape dynamicBox;
//----------------------------------
// This is the body for the left side
//----------------------------------
int num = 5;
b2Vec2 verts[] = {
b2Vec2(-29.1f / PTM_RATIO, 25.0f / PTM_RATIO),
b2Vec2(-25.4f / PTM_RATIO, -14.9f / PTM_RATIO),
b2Vec2(-18.7f / PTM_RATIO, -14.9f / PTM_RATIO),
b2Vec2(-21.8f / PTM_RATIO, 26.7f / PTM_RATIO),
b2Vec2(-28.9f / PTM_RATIO, 25.1f / PTM_RATIO)
};
dynamicBox.Set(verts, num);
b2FixtureDef fixtureDef;
fixtureDef.shape = &dynamicBox;
fixtureDef.friction = 0.7;
fixtureDef.density = 10.0f;
fixtureDef.restitution = 0.7;
b_bucket = world->CreateBody(&bodyDef);
b_bucket->CreateFixture(&fixtureDef);
//----------------------------------
// This is the body for the right side
//----------------------------------
int num1 = 5;
b2Vec2 verts1[] = {
b2Vec2(16.8f / PTM_RATIO, 27.0f / PTM_RATIO),
b2Vec2(15.9f / PTM_RATIO, -11.5f / PTM_RATIO),
b2Vec2(22.1f / PTM_RATIO, -10.7f / PTM_RATIO),
b2Vec2(24.6f / PTM_RATIO, 26.9f / PTM_RATIO),
b2Vec2(16.9f / PTM_RATIO, 26.7f / PTM_RATIO)
};
dynamicBox.Set(verts1, num1);
fixtureDef.shape = &dynamicBox;
b_bucket->CreateFixture(&fixtureDef);
//----------------------------------
// This is the body for the bottom
//----------------------------------
/*
int num2 = 5;
b2Vec2 verts2[] = {
b2Vec2(-23.0f / PTM_RATIO, -21.6f / PTM_RATIO),
b2Vec2(18.9f / PTM_RATIO, -21.0f / PTM_RATIO),
b2Vec2(18.2f / PTM_RATIO, -26.1f / PTM_RATIO),
b2Vec2(-22.8f / PTM_RATIO, -25.9f / PTM_RATIO),
b2Vec2(-23.0f / PTM_RATIO, -21.7f / PTM_RATIO)
};
dynamicBox.Set(verts2, num2);
fixtureDef.shape = &dynamicBox;
b_bucket->CreateFixture(&fixtureDef);
*/
return b_bucket;
}
Your vertices are ordered clockwise, while they should be counter-clockwise.
You must create polygons with a counter clockwise winding (CCW). We must be careful because the notion of CCW is with respect to a right-handed coordinate system with the z-axis pointing out of the plane. This might turn out to be clockwise on your screen, depending on your coordinate system conventions.
Also, to create a four-sided polygon (which seems to be what you're trying to do), you only need 4 vertices. Try something like this:
int num2 = 4;
b2Vec2 verts2[] = {
b2Vec2(-22.8f / PTM_RATIO, -25.9f / PTM_RATIO),
b2Vec2(18.2f / PTM_RATIO, -26.1f / PTM_RATIO),
b2Vec2(18.9f / PTM_RATIO, -21.0f / PTM_RATIO),
b2Vec2(-23.0f / PTM_RATIO, -21.6f / PTM_RATIO)
};
dynamicBox.Set(verts2, num2);
fixtureDef.shape = &dynamicBox;
b_bucket->CreateFixture(&fixtureDef);
Notice that you can also remove the fifth vertex from each of your side bodies.
I am using OpenGL ES 2.0 and the latest iOS.
I have a 3D model that I wish the user to be able to select different parts of the model by tapping on the screen. I have found this tutorial on converting a pixel-space screen coordinate to a world-space ray, and have implemented a ray-AABB intersection test to determine the intersection portion of the model.
I get some hits on the model at seemingly random sections of the model. So I need to debug this feature, but I don't really know where to start.
I can't exactly draw a line representing the ray (since it is coming out of the camera it will appear as a point), so I can see a couple of ways of debugging this:
Check the bounding boxes of the model sections. So is there an easy way with OGL ES to draw a bounding box given a min and max point?
draw some 3D object along the path of the ray. This seems more complicated.
Actually debug the raycast and intersection code. This seems like the hardest to accomplish since the algorithms are fairly well known (I took the intersection test straight out of my Real-Time Collision Detection book).
If anyone can help, or wants me to post some code, I could really use it.
Here is my code for converting to world space:
- (IBAction)tappedBody:(UITapGestureRecognizer *)sender
{
if ( !editMode )
{
return;
}
CGPoint tapPoint = [sender locationOfTouch:0 inView:self.view];
const float tanFOV = tanf(GLKMathDegreesToRadians(65.0f*0.5f));
const float width = self.view.frame.size.width,
height = self.view.frame.size.height,
aspect = width/height,
w_2 = width * 0.5,
h_2 = height * 0.5;
CGPoint screenPoint;
screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) / aspect;
screenPoint.y = tanFOV * ( 1.0 - tapPoint.y / h_2 );
GLKVector3 nearPoint = GLKVector3Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, NEAR_PLANE );
GLKVector3 farPoint = GLKVector3Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, FAR_PLANE );
GLKVector3 nearWorldPoint = GLKMatrix4MultiplyVector3( _invViewMatrix, nearPoint );
GLKVector3 farWorldPoint = GLKMatrix4MultiplyVector3( _invViewMatrix, farPoint );
GLKVector3 worldRay = GLKVector3Subtract(farWorldPoint, nearWorldPoint);
NSLog(@"Model matrix: %@", NSStringFromGLKMatrix4(_modelMatrix));
worldRay = GLKVector3Normalize(worldRay);
[male intersectWithRay:worldRay fromStartPoint:nearWorldPoint];
for ( int i =0; i < 3; ++i )
{
touchPoint[i] = nearWorldPoint.v[i];
}
}
And here's how I get the matrices:
- (void)update
{
// _rotation = 0;
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, NEAR_PLANE, FAR_PLANE);
self.effect.transform.projectionMatrix = projectionMatrix;
GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -5.0f);
// Compute the model view matrix for the object rendered with ES2
_viewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
_modelMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
_modelViewMatrix = GLKMatrix4Rotate(_viewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
_modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, _modelViewMatrix);
_invViewMatrix = GLKMatrix4Invert(_viewMatrix, NULL);
_invMVMatrix = GLKMatrix4Invert(_modelViewMatrix, NULL);
_normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(_modelViewMatrix), NULL);
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, _modelViewMatrix);
male.modelTransform = _modelMatrix;
if ( !editMode )
{
_rotation += self.timeSinceLastUpdate * 0.5f;
}
}
I can't exactly draw a line representing the ray (since it is coming
out of the camera it will appear as a point)
DonĀ“t discard this so soon. Isn't it that what you are trying to test? I mean, That would be so if you are unprojecting things right. But if it you have a bug, it won't.
I'll go with this first... if you see a point just under your finger the conversion is right, and you can start investigating the other options you pointed out, which are more complex.
I'm not sure why this fixed the problem I was having, but changing
screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) / aspect;
to
screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) * aspect;
and
GLKVector4 nearPoint = GLKVector4Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, NEAR_PLANE, 1 );
GLKVector4 farPoint = GLKVector4Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, FAR_PLANE, 1 );
to
GLKVector4 nearPoint = GLKVector4Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, -NEAR_PLANE, 1 );
GLKVector4 farPoint = GLKVector4Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, -FAR_PLANE, 1 );
seems to have fixed the raycasting issue. I'm still not sure why my view matrix indicates that the camera is looking down the positive Z-axis while my objects are translated along the negative Z-axis.
I created a polygon which has 5 vertices, and all vertices are generated by VertexHelper.
Why does the program get SIGABRT at b2Assert(area > b2_epsilon) in ComputeCentroid() in b2PolygonShape.cpp?
The program runs well when I use shape.SetAsBox(.359375, 1.0) instead of shape.Set(vertices, count).
It seems that something goes wrong when calculating the centroid with shape.Set(), but I don't know how to deal with this problem.
Here's the code:
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.awake = NO;
bodyDef.position.Set(3.125, 3.125);
bodyDef.angle = -.785398163397;
spriteBody = world->CreateBody(&bodyDef);
spriteBody->SetUserData(sprite);
b2MassData massData = {2.0, b2Vec2(0.021875, 0.203125), 0.0};
spriteBody->SetMassData(&massData);
int32 count = 5;
b2Vec2 vertices[] = {
b2Vec2(-11.5f / PTM_RATIO, -16.0f / PTM_RATIO),
b2Vec2(-10.5f / PTM_RATIO, 15.0f / PTM_RATIO),
b2Vec2(10.5f / PTM_RATIO, 15.0f / PTM_RATIO),
b2Vec2(10.5f / PTM_RATIO, -5.0f / PTM_RATIO),
b2Vec2(-5.5f / PTM_RATIO, -16.0f / PTM_RATIO)
};
b2PolygonShape shape;
shape.Set(vertices, count);
b2FixtureDef fixtureDef;
fixtureDef.shape = &shape;
fixtureDef.density = 1.0f;
fixtureDef.friction = 0.2f;
fixtureDef.restitution = 0.7f;
spriteBody->CreateFixture(&fixtureDef);
It looks like you've wound your vertices the wrong way. They should be anticlockwise in Box2D, at least by default.
Your assertion is failing because the area calculation returns a negative value, far less than b2_epsilon.