Cocos2d v3: How do you draw an arc? - ios

I want to draw a filled arc like this:
CCDrawNode only contains methods to draw a circle, a polygon, or a line – but no arc.
To be clear, I want to generate the arc at runtime, with an arbitrary radius and arc angle.
My Question:
Is there a way to have Cocos2d draw an arc, or would I have to do it myself with OpenGL?

I guess you can use CCProgressNode: set its sprite to an orange circle, and you can draw an arc by setting the progress property.
Whether you can set the sprite to a vectorized circle that you can simply scale, I am not really sure.
Personally, I'd suggest you add the arc-drawing code to CCDrawNode and submit a PR.
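If you go the progress-node route, here is a rough sketch of the idea in cocos2d-x terms (ProgressTimer is the C++ counterpart of CCProgressNode; "circle.png" is a placeholder full-circle sprite, not something from the question):
auto pie = cocos2d::ProgressTimer::create(cocos2d::Sprite::create("circle.png"));
pie->setType(cocos2d::ProgressTimer::Type::RADIAL);   // radial sweep, like a pie chart
pie->setMidpoint(cocos2d::Vec2(0.5f, 0.5f));          // sweep around the sprite's center
pie->setPercentage(25.0f);                            // 25% of 360 degrees = a 90-degree arc
this->addChild(pie);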

Tested with cocos2d-x v3.8; the code is adapted from DrawNode::drawSolidCircle:
DrawNode* Pie::drawPie(const Vec2& center, float radius, float startAngle, float endAngle, unsigned int segments, float scaleX, float scaleY, const Color4F &color){
    segments++;
    auto draw = DrawNode::create();
    const float coef = (endAngle - startAngle) / segments;
    Vec2 *vertices = new (std::nothrow) Vec2[segments];
    if (!vertices)
        return nullptr;
    // vertices along the arc
    for (unsigned int i = 0; i < segments - 1; i++)
    {
        float rads = i * coef;
        GLfloat j = radius * cosf(rads + startAngle) * scaleX + center.x;
        GLfloat k = radius * sinf(rads + startAngle) * scaleY + center.y;
        vertices[i].x = j;
        vertices[i].y = k;
    }
    // the last vertex is the center, which closes the pie
    vertices[segments - 1].x = center.x;
    vertices[segments - 1].y = center.y;
    draw->drawSolidPoly(vertices, segments, color);
    CC_SAFE_DELETE_ARRAY(vertices);
    return draw;
}
Call it like this (note that the angles are in radians):
auto pie = Pie::drawPie(_visibleSize / 2, _visibleSize.width / 4, 0, 1, 999, 1, 1, Color4F::BLUE);
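For example, a quarter pie from 0 to π/2 with a more modest segment count, added to the layer, might look like this (a hedged sketch, reusing the _visibleSize member from the call above):
auto quarter = Pie::drawPie(_visibleSize / 2, _visibleSize.width / 4, 0, M_PI_2, 64, 1, 1, Color4F::RED);
this->addChild(quarter);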

Related

How to call glPointSize() (or the SceneKit equivalent) when making custom geometries using SceneKit and SCNGeometryPrimitiveTypePoint

I'm writing an iOS app that renders a pointcloud in SceneKit using a custom geometry. This post was super helpful in getting me there (though I translated this to Objective-C), as was David Rönnqvist's book 3D Graphics with SceneKit (see chapter on custom geometries). The code works fine, but I'd like to make the points render at a larger point size - at the moment the points are super tiny.
According to the OpenGL docs, you can do this by calling glPointSize(). From what I understand, SceneKit is built on top of OpenGL so I'm hoping there is a way to access this function or do the equivalent using SceneKit. Any suggestions would be much appreciated!
My code is below. I've also posted a small example app on bitbucket accessible here.
// set the number of points
NSUInteger numPoints = 10000;
// set the max distance points
int randomPosUL = 2;
int scaleFactor = 10000; // because I want decimal points
// but am getting random values using arc4random_uniform
PointcloudVertex pointcloudVertices[numPoints];
for (NSUInteger i = 0; i < numPoints; i++) {
    PointcloudVertex vertex;
    float x = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    float y = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    float z = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    vertex.x = (x - randomPosUL * scaleFactor) / scaleFactor;
    vertex.y = (y - randomPosUL * scaleFactor) / scaleFactor;
    vertex.z = (z - randomPosUL * scaleFactor) / scaleFactor;
    vertex.r = arc4random_uniform(255) / 255.0;
    vertex.g = arc4random_uniform(255) / 255.0;
    vertex.b = arc4random_uniform(255) / 255.0;
    pointcloudVertices[i] = vertex;
    // NSLog(@"adding vertex #%lu with position - x: %.3f y: %.3f z: %.3f | color - r:%.3f g: %.3f b: %.3f",
    //       (long unsigned)i,
    //       vertex.x,
    //       vertex.y,
    //       vertex.z,
    //       vertex.r,
    //       vertex.g,
    //       vertex.b);
}
// convert array to point cloud data (position and color)
NSData *pointcloudData = [NSData dataWithBytes:&pointcloudVertices length:sizeof(pointcloudVertices)];
// create vertex source
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
                                                                    semantic:SCNGeometrySourceSemanticVertex
                                                                 vectorCount:numPoints
                                                             floatComponents:YES
                                                         componentsPerVector:3
                                                           bytesPerComponent:sizeof(float)
                                                                  dataOffset:0
                                                                  dataStride:sizeof(PointcloudVertex)];
// create color source
SCNGeometrySource *colorSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
                                                                   semantic:SCNGeometrySourceSemanticColor
                                                                vectorCount:numPoints
                                                            floatComponents:YES
                                                        componentsPerVector:3
                                                          bytesPerComponent:sizeof(float)
                                                                 dataOffset:sizeof(float) * 3
                                                                 dataStride:sizeof(PointcloudVertex)];
// create element
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:nil
                                                             primitiveType:SCNGeometryPrimitiveTypePoint
                                                            primitiveCount:numPoints
                                                             bytesPerIndex:sizeof(int)];
// create geometry
SCNGeometry *pointcloudGeometry = [SCNGeometry geometryWithSources:@[ vertexSource, colorSource ] elements:@[ element ]];
// add pointcloud to scene
SCNNode *pointcloudNode = [SCNNode nodeWithGeometry:pointcloudGeometry];
[self.myView.scene.rootNode addChildNode:pointcloudNode];
I was looking into rendering point clouds in iOS myself and found a solution on Twitter, posted by "vade", so I figured I'd post it here for others:
ProTip: SceneKit shader modifiers are useful:
mat.shaderModifiers = @{SCNShaderModifierEntryPointGeometry : @"gl_PointSize = 16.0;"};

Metal: nothing is rendered when using orthographic projection matrix

I am using the following code to create orthographic matrix:
Matrix4D Matrix4D::fromOrtho(double left, double right, double bottom, double top, double nearZ, double farZ)
{
    double ral = right + left;
    double rsl = right - left;
    double tab = top + bottom;
    double tsb = top - bottom;
    double fan = farZ + nearZ;
    double fsn = farZ - nearZ;
    return Matrix4D ( 2.0f / rsl,  0.0f,         0.0f,         0.0f,
                      0.0f,        2.0f / tsb,   0.0f,         0.0f,
                      0.0f,        0.0f,        -2.0f / fsn,   0.0f,
                      -ral / rsl,  -tab / tsb,  -fan / fsn,    1.0f);
}
and use the following parameters:
double widthToHeightRatio = screenWidth / screenHeight;
Matrix4D::fromOrtho(-10, 10, -7, 7 ,0.1, 5000);
The left, right, bottom, and top parameters are actually calculated as a function of the camera eye and center coordinates, but the values above are an example of the resulting parameters.
The same matrix works well with OpenGL but does not work with Metal. When the matrix is a perspective matrix, everything works fine in Metal as well.
What might be the problem?
Both perspective and ortho projection matrices from GL are invalid in Metal, because the z range is different. Some matrices MAY still work because your z-clip range is overly-deep in OpenGL so it happens to be deep enough to let fragments through in Metal as well, but this is a bad thing to bank on.
From the Metal Programming Guide, p. 51 “Working with Viewport and Pixel Space Coordinates”:
Metal defines its Normalized Device Coordinate (NDC) system as a
2x2x1 cube with its center at (0, 0, 0.5). The left and bottom for x
and y, respectively, of the NDC system are specified as -1. The right
and top for x and y, respectively, of the NDC system are specified as
+1.
This is different from OpenGL, which has its z go from -1 to 1, in a 2x2x2 cube.
See this blog post for more details: http://blog.athenstean.com/post/135771439196/from-opengl-to-metal-the-projection-matrix
Update: user da1 found an alternate blog post, since the one above is currently down: http://metashapes.com/blog/opengl-metal-projection-matrix-problem
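In practical terms, only the z (third) column of the question's fromOrtho needs to change so that z lands in Metal's [0, 1] range instead of OpenGL's [-1, 1]. A minimal, untested sketch using the same row layout and right-handed eye space as the original (fromOrthoMetal is a made-up name):
Matrix4D Matrix4D::fromOrthoMetal(double left, double right, double bottom, double top, double nearZ, double farZ)
{
    double ral = right + left;
    double rsl = right - left;
    double tab = top + bottom;
    double tsb = top - bottom;
    double fsn = farZ - nearZ;
    // Identical to fromOrtho except for the third column:
    // z_clip = (-z_eye - nearZ) / (farZ - nearZ), i.e. 0 at the near plane and 1 at the far plane.
    return Matrix4D ( 2.0f / rsl,  0.0f,         0.0f,           0.0f,
                      0.0f,        2.0f / tsb,   0.0f,           0.0f,
                      0.0f,        0.0f,        -1.0f / fsn,     0.0f,
                      -ral / rsl,  -tab / tsb,  -nearZ / fsn,    1.0f);
}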
From AAPLTransfomations.mm (in a Metal sample project), this works for me:
simd::float4x4 AAPL::ortho2d(const float& left,
                             const float& right,
                             const float& bottom,
                             const float& top,
                             const float& near,
                             const float& far)
{
    float sLength = 1.0f / (right - left);
    float sHeight = 1.0f / (top - bottom);
    float sDepth  = 1.0f / (far - near);

    simd::float4 P;
    simd::float4 Q;
    simd::float4 R;
    simd::float4 S;

    P.x = 2.0f * sLength;
    P.y = 0.0f;
    P.z = 0.0f;
    P.w = 0.0f;

    Q.x = 0.0f;
    Q.y = 2.0f * sHeight;
    Q.z = 0.0f;
    Q.w = 0.0f;

    R.x = 0.0f;
    R.y = 0.0f;
    R.z = sDepth;
    R.w = 0.0f;

    S.x = 0.0f;
    S.y = 0.0f;
    S.z = -near * sDepth;
    S.w = 1.0f;

    return simd::float4x4(P, Q, R, S);
} // ortho2d
and use it by passing the matrix to your shader:
constant_buffer[i].modelview_ortho_matrix = ortho2d(-2.0f, 2.0f, -2.0f, 2.0f, 0, 2); //_projectionMatrix * modelViewMatrix;
and then, in your vertex shader:
float4 in_position = float4(float3(vertex_array[vid].position), 1.0);
out.position = constants.modelview_ortho_matrix * in_position;
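For context, here is a minimal sketch of how those two lines might sit in a complete Metal vertex function; the struct layouts, buffer indices, and function name are assumptions, not taken from the sample:
#include <metal_stdlib>
using namespace metal;

struct VertexIn {
    packed_float3 position;
};

struct Uniforms {
    float4x4 modelview_ortho_matrix;
};

struct VertexOut {
    float4 position [[position]];
};

vertex VertexOut vertex_main(const device VertexIn* vertex_array [[buffer(0)]],
                             constant Uniforms&     constants    [[buffer(1)]],
                             uint                   vid          [[vertex_id]])
{
    VertexOut out;
    float4 in_position = float4(float3(vertex_array[vid].position), 1.0);
    out.position = constants.modelview_ortho_matrix * in_position;
    return out;
}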

Box2d and Cocos2d on iOS issues with sprites and positioning

I am creating new sprites/bodies when touching the screen:
-(void) addNewSpriteAtPosition:(CGPoint)pos
{
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position = [Helper toMeters:pos];
    b2Body* body = world->CreateBody(&bodyDef);

    b2CircleShape circle;
    circle.m_radius = 30/PTM_RATIO;

    // Define the dynamic body fixture.
    b2FixtureDef fixtureDef;
    fixtureDef.shape = &circle;
    fixtureDef.density = 0.7f;
    fixtureDef.friction = 0.3f;
    fixtureDef.restitution = 0.5;
    body->CreateFixture(&fixtureDef);

    PhysicsSprite* sprite = [PhysicsSprite spriteWithFile:@"circle.png"];
    [self addChild:sprite];
    [sprite setPhysicsBody:body];
    body->SetUserData((__bridge void*)sprite);
}
Here is my positioning helper:
+(b2Vec2) toMeters:(CGPoint)point
{
    return b2Vec2(point.x / PTM_RATIO, point.y / PTM_RATIO);
}
PhysicsSprite is the typical one used with Box2D, but I'll include the relevant method:
-(CGAffineTransform) nodeToParentTransform
{
    b2Vec2 pos = physicsBody->GetPosition();
    float x = pos.x * PTM_RATIO;
    float y = pos.y * PTM_RATIO;
    if (ignoreAnchorPointForPosition_)
    {
        x += anchorPointInPoints_.x;
        y += anchorPointInPoints_.y;
    }
    float radians = physicsBody->GetAngle();
    float c = cosf(radians);
    float s = sinf(radians);
    if (!CGPointEqualToPoint(anchorPointInPoints_, CGPointZero))
    {
        x += c * -anchorPointInPoints_.x + -s * -anchorPointInPoints_.y;
        y += s * -anchorPointInPoints_.x + c * -anchorPointInPoints_.y;
    }
    self.position = CGPointMake(x, y);
    // Rot, Translate Matrix
    transform_ = CGAffineTransformMake(c, s, -s, c, x, y);
    return transform_;
}
Now, I have two issues illustrated by the following two images, which show the debug draw with the sprites. Retina and non-retina versions:
Issue #1 - As you can see in both images, the farther the objects are from (0,0), the more the sprite is offset from its physics body.
Issue #2 - The red circle image files are 60x60 (retina), and the white circles are 30x30 (non-retina). Why are they sized differently on screen? Cocos2d should use points, not pixels, so shouldn't they be the same size on screen?
Instead of a hardcoded size, use contentSize:
circle.m_radius = sprite.contentSize.width*0.5f/PTM_RATIO;
This works in both SD and Retina modes.
You can use this style to sync between box2d and cocos2d positions:
body->SetTransform([self toB2Meters:sprite.position], 0.0f); // box2d <--- cocos2d
// OR
sprite.position = ccp(body->GetPosition().x * PTM_RATIO,
                      body->GetPosition().y * PTM_RATIO);    // box2d ---> cocos2d

OpenGL ES 2.0 Ray Picking, far point

Please help me with ray picking.
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(35.0f), aspect, 0.1f, 1000.0f);
GLKMatrix4 modelViewMatrix = _mainmodelViewMatrix;
// some transformations
_mainmodelViewMatrix = modelViewMatrix;
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
_normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
_modelViewProjectionMatrix and _normalMatrix are passed to the shader:
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
and then, on touch end:
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
                                             (2 * (self.view.bounds.size.height - position.y) / self.view.bounds.size.height - 1), // 1 - 2 * position.y / self.view.bounds.size.height
                                             -1,
                                             1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
How can I get the far point? And is my near_point correct?
Thanks!
It looks like you have
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
-1, 1);
(phew) to calculate the normalized device coordinates of the near point.
To get the far point, just swap the -1 z coordinate for a 1:
GLKVector4 normalisedFarVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
1, 1);
And apply the same inverse transform to that. That should do the trick.
Background: Under normal circumstances, the final coordinates received by the GL for turning a fragment into a pixel are what are called normalised device coordinates. These lie within a cube whose corners are at (-1,-1,-1) and (1,1,1). So the center of the screen is (0,0,z), the top left corner is (-1,1,z), and so on. The coordinates are transformed so that a point lying on the near plane will have a z coordinate of -1, and one lying on the far plane will have a z coordinate of 1. These are the numbers that are used for depth testing, if you have it turned on.
So, as you might guess, when you want to convert a screen location back to a point in 3D space, you actually have a number of points to choose from - a line, in fact, stretching from the near plane to the far plane. In normalised device coordinates, this is the line stretching from z=-1 to z=1. So the process goes like this:
convert the x and y coordinates into normalised device coordinates x' and y'
For each of z' = 1 and z' = -1:
convert the coordinates to normalised device coordinates (see here for the formula)
apply the inverse of the projection matrix
apply the inverse of the model/view matrix (as it is before any per-object transformations)
The results are the two coordinates of your line in 3D space.
We can draw a line from near_point to far_point.
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
                                             (2 * (self.view.bounds.size.height - position.y) / self.view.bounds.size.height - 1),
                                             -1,
                                             1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);

// unproject the near point (z = -1) and do the perspective divide
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
near_point.v[3] = 1.0/near_point.v[3];
near_point = GLKVector4Make(near_point.v[0]*near_point.v[3], near_point.v[1]*near_point.v[3], near_point.v[2]*near_point.v[3], 1);

// unproject the far point (z = 1) and do the perspective divide
normalisedVector.z = 1.0;
GLKVector4 far_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
far_point.v[3] = 1.0/far_point.v[3];
far_point = GLKVector4Make(far_point.v[0]*far_point.v[3], far_point.v[1]*far_point.v[3], far_point.v[2]*far_point.v[3], 1);
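From there, the pick ray itself is just the segment between the two unprojected points. A small follow-up sketch using the same GLKit math (not part of the original answer):
GLKVector3 rayOrigin    = GLKVector3Make(near_point.x, near_point.y, near_point.z);
GLKVector3 rayEnd       = GLKVector3Make(far_point.x, far_point.y, far_point.z);
GLKVector3 rayDirection = GLKVector3Normalize(GLKVector3Subtract(rayEnd, rayOrigin));
// any point on the ray is rayOrigin + t * rayDirection, for t >= 0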

How to rotate a triangle?

I'm struggling with rotating a triangle resulting from a UIRotationGestureRecognizer. If you could look over my approach and offer suggestions, I'd greatly appreciate it.
I ask the gesture recognizer object for the rotation, which the documentation says is returned in radians.
My strategy had been to think of each vertex as a point on a circle centered at the triangle's center and passing through that vertex, and then use the radians of rotation to find the new point on that circumference. I'm not totally sure this is a valid approach, but I wanted to at least try it. Visually I'd know whether or not it was working.
Here's the code I created in that attempt:
- (CGPoint)rotateVertex:(CGPoint)vertex byRadians:(float)radians
{
    float deltaX = center.x - vertex.x;
    float deltaY = center.y - vertex.y;
    float currentAngle = atanf( deltaX / deltaY );
    float newAngle = currentAngle + radians;
    float newX = cosf(newAngle) + vertex.x;
    float newY = sinf(newAngle) + vertex.y;
    return CGPointMake(newX, newY);
}
When executed, there's a slight rotation at the beginning, but then as I continue rotating my fingers the vertices just start getting farther away from the center point, indicating I'm confusing something here.
I looked at what the CGContextRotateCTM could do for me, but ultimately I need to know what the vertices are after the rotation, so just rotating the graphics context doesn't appear to leave me with those changed coordinates.
I also tried the technique described here, but that resulted in the triangle being flipped about the second vertex, which seems odd; then again, that technique uses p and q as the x and y coordinates of the second vertex.
Thanks for taking a look!
Solved: Here is the corrected function. It assumes you have calculated the center of the triangle. I used the ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3) method described in the Wikipedia article on centroids.
- (CGPoint)rotatePoint:(CGPoint)currentPoint byRadians:(float)radiansOfRotation
{
    float deltaX = currentPoint.x - center.x;
    float deltaY = currentPoint.y - center.y;
    float radius = sqrtf(powf(deltaX, 2.0) + powf(deltaY, 2.0));
    float currentAngle = atan2f( deltaY, deltaX );
    float newAngle = currentAngle + radiansOfRotation;
    float newRun = radius * cosf(newAngle);
    float newX = center.x + newRun;
    float newRise = radius * sinf(newAngle);
    float newY = center.y + newRise;
    return CGPointMake(newX, newY);
}
Worth noting about why the first code listing did not work: the arguments to the arctangent were swapped (deltaX/deltaY instead of deltaY/deltaX), and the delta values were computed backwards (center minus vertex instead of vertex minus center).
You're forgetting to multiply by the radius of the circle. Also, since the Y axis points down in the UIKit coordinate system, you have to subtract instead of add the radians and negate the y coordinate at the end. And you need to use atan2, because atan only gives output in the range -pi/2 to pi/2:
float currentAngle = atan2f(deltaY, deltaX);
float newAngle = currentAngle - radians;
float radius = sqrtf(powf(deltaX, 2.0) + powf(deltaY, 2.0));
float newX = radius * cosf(newAngle) + vertex.x;
float newY = -1.0 * radius * sinf(newAngle) + vertex.y;
The answer is now embedded in the original question. Gun shy about proper decorum ;-)
