iOS: transforming a view into cylindrical shape

With Quartz 2D we can transform our views on the x, y, and z axes.
In some cases we can even make them look 3D by changing the values of the transform matrices.
I was wondering whether it is possible to transform a view into a cylinder shape like in the following picture?
Please ignore the top part of the cylinder. I am more curious to know whether it would be possible to warp a UIView around the side of the cylinder, as in the image.
Is that possible using only Quartz 2D, layers, and transforms (not OpenGL)? If not, is it at least possible to draw into a CGContext so that a view appears like that?

You definitely can't do this with a transform. What you could do is create your UIView off-screen, get the context for the view, get an image from that, and then map the image to a new image, using a non-linear mapping.
So:
Create an image context with UIGraphicsBeginImageContext()
Render the view there, with view.layer.renderInContext()
Get an image of the result with CGBitmapContextCreateImage()
Write a mapping function that takes the x/y screen coordinates and maps them to coordinates on the cylinder.
Create a new image the size of the screen view, and call the mapping function to copy pixels from the source to the destination.
Draw the destination bitmap to the screen.
None of these steps is particularly difficult, and you might come up with various ways to simplify. For example, you can just render strips of the original view, offsetting the Y coordinate based on the coordinates of a circle, if you are okay with not doing perspective transformations.
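For example, a rough, hypothetical sketch of that strip shortcut, assuming sourceImage is a UIImage produced by steps 1-3 (rendered at scale 1 for simplicity); the bulge amount and angle mapping are made-up values:
CGFloat width = sourceImage.size.width;
CGFloat height = sourceImage.size.height;
CGFloat bulge = height * 0.25; // how far the curve rises; arbitrary for this sketch
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height + bulge), NO, 1.0);
for (CGFloat x = 0; x < width; x += 1) {
    CGFloat angle = (x / width) * M_PI;          // x mapped across half the cylinder
    CGFloat yOffset = bulge * (1 - sinf(angle)); // strips near the edges sit lower
    CGImageRef strip = CGImageCreateWithImageInRect(sourceImage.CGImage,
                                                    CGRectMake(x, 0, 1, height));
    [[UIImage imageWithCGImage:strip] drawInRect:CGRectMake(x, yOffset, 1, height)];
    CGImageRelease(strip);
}
UIImage *warped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// warped can now be shown in a UIImageView (no perspective, just the vertical offset).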
If you want the view to actually be interactive, then you'd need to do the transform in the opposite direction when handling touch events.

No, you can't bend a view using a transform.
A transform can only manipulate the four corners of the view, so no matter what you do, it will still be a plane.

I realize this goes beyond Quartz2D... You could try adding SceneKit.
Obtain the view's image via UIGraphicsBeginImageContext(), view.layer.renderInContext(), CGBitmapContextCreateImage().
Create an SCNMaterial with its diffuse property set to the image of your view.
Create an SCNCylinder and apply the material to it.
Add the cylinder to an SCNScene.
Create an SCNView and set its scene.
Add the SCNView to your view hierarchy.
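A minimal sketch of those steps, assuming sourceView is the view to capture and containerView is where the SCNView should go (the radius and height are placeholder values):
UIGraphicsBeginImageContextWithOptions(sourceView.bounds.size, NO, 0);
[sourceView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Wrap the captured image around a cylinder.
SCNCylinder *cylinder = [SCNCylinder cylinderWithRadius:2.0 height:5.0];
SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents = viewImage;
cylinder.materials = @[material];

SCNScene *scene = [SCNScene scene];
[scene.rootNode addChildNode:[SCNNode nodeWithGeometry:cylinder]];

SCNView *scnView = [[SCNView alloc] initWithFrame:containerView.bounds];
scnView.scene = scene;
scnView.allowsCameraControl = YES;
[containerView addSubview:scnView];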

Reference: Using OpenGL ES 2.0 with iOS, how do I draw a cylinder between two points?
I have also used the same code in one of my projects.
Check that question, where drawing a cone shape is described; it's dated, but after adapting the algorithm it works.
See the code below for the solution. Here, self represents the mesh and contains the vertices, indices, and such.
- (instancetype)initWithOriginRadius:(CGFloat)originRadius
                       atOriginPoint:(GLKVector3)originPoint
                        andEndRadius:(CGFloat)endRadius
                          atEndPoint:(GLKVector3)endPoint
                       withPrecision:(NSInteger)precision
                            andColor:(GLKVector4)color
{
    self = [super init];
    if (self) {
        // normal pointing from origin point to end point
        GLKVector3 normal = GLKVector3Make(originPoint.x - endPoint.x,
                                           originPoint.y - endPoint.y,
                                           originPoint.z - endPoint.z);

        // create two perpendicular vectors - perp and q
        GLKVector3 perp = normal;
        if (normal.x == 0 && normal.z == 0) {
            perp.x += 1;
        } else {
            perp.y += 1;
        }

        // cross product
        GLKVector3 q = GLKVector3CrossProduct(perp, normal);
        perp = GLKVector3CrossProduct(normal, q);

        // normalize vectors
        perp = GLKVector3Normalize(perp);
        q = GLKVector3Normalize(q);

        // calculate vertices
        CGFloat twoPi = 2 * M_PI;
        NSInteger index = 0;

        for (NSInteger i = 0; i < precision + 1; i++) {
            CGFloat theta = ((CGFloat)i) / precision * twoPi; // go around circle and get points

            // normals
            normal.x = cosf(theta) * perp.x + sinf(theta) * q.x;
            normal.y = cosf(theta) * perp.y + sinf(theta) * q.y;
            normal.z = cosf(theta) * perp.z + sinf(theta) * q.z;

            AGLKMeshVertex meshVertex;
            AGLKMeshVertexDynamic colorVertex;

            // top vertex
            meshVertex.position.x = endPoint.x + endRadius * normal.x;
            meshVertex.position.y = endPoint.y + endRadius * normal.y;
            meshVertex.position.z = endPoint.z + endRadius * normal.z;
            meshVertex.normal = normal;
            meshVertex.originalColor = color;
            // append vertex
            [self appendVertex:meshVertex];
            // append color vertex
            colorVertex.colors = color;
            [self appendColorVertex:colorVertex];
            // append index
            [self appendIndex:index++];

            // bottom vertex
            meshVertex.position.x = originPoint.x + originRadius * normal.x;
            meshVertex.position.y = originPoint.y + originRadius * normal.y;
            meshVertex.position.z = originPoint.z + originRadius * normal.z;
            meshVertex.normal = normal;
            meshVertex.originalColor = color;
            // append vertex
            [self appendVertex:meshVertex];
            // append color vertex
            [self appendColorVertex:colorVertex];
            // append index
            [self appendIndex:index++];
        }

        // draw command
        [self appendCommand:GL_TRIANGLE_STRIP firstIndex:0 numberOfIndices:self.numberOfIndices materialName:@""];
    }
    return self;
}

Related

SceneKit 3D Marker Augmented Reality iOS

For the last couple of weeks I've been working on a simple proof-of-concept application in which a 3D model is projected over a specific augmented reality marker (in my case, ArUco markers) on iOS (with Swift and Objective-C).
I calibrated an iPad camera with a specific fixed lens position and used that to estimate the pose of the AR marker (which, from my debug analysis, seems pretty accurate). The problem appears (surprise, surprise) when I try to use a SceneKit scene to project a model over the marker.
I am aware that the axes in OpenCV and SceneKit are different (Y and Z) and have already made this correction, as well as accounting for the row-order/column-order difference between the two libraries.
After constructing the projection matrix, I apply that same transform to the 3D model, and from my debug analysis the object seems to be translated to the desired position and with the desired rotation. The problem is that it never overlaps the specific image pixel position of the marker. I am using an AVCaptureVideoPreviewLayer to put the video in the background; it has the same bounds as my SceneKit view.
Does anyone have a clue why this happens? I tried playing with the camera's FOV, but with no real impact on the results.
Thank you all for your time.
EDIT 1: I will post some of the code here to reveal what I am currently doing.
I have two subviews inside the main view: one is a background AVCaptureVideoPreviewLayer and the other is an SCNView. Both have the same bounds as the main view.
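A hypothetical sketch of that setup (captureSession and the _sceneKitScene ivar are assumed to exist already):
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

SCNView *sceneKitView = [[SCNView alloc] initWithFrame:self.view.bounds];
sceneKitView.backgroundColor = [UIColor clearColor]; // so the video shows through
sceneKitView.scene = _sceneKitScene;
[self.view addSubview:sceneKitView];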
At each frame I use an OpenCV wrapper that outputs the pose of each marker:
std::vector<int> ids;
std::vector<std::vector<cv::Point2f>> corners, rejected;
cv::aruco::detectMarkers(frame, _dictionary, corners, ids, _detectorParams, rejected);

if (ids.size() > 0) {
    cv::aruco::drawDetectedMarkers(frame, corners, ids);
    cv::Mat rvecs, tvecs;
    cv::aruco::estimatePoseSingleMarkers(corners, 2.6, _intrinsicMatrix, _distCoeffs, rvecs, tvecs);

    // Let's protect ourselves against multiple markers
    if (rvecs.total() > 1)
        return;

    _markerFound = true;
    cv::Rodrigues(rvecs, _currentR);
    _currentT = tvecs;

    for (int row = 0; row < _currentR.rows; row++) {
        for (int col = 0; col < _currentR.cols; col++) {
            _currentExtrinsics.at<double>(row, col) = _currentR.at<double>(row, col);
        }
        _currentExtrinsics.at<double>(row, 3) = _currentT.at<double>(row);
    }
    _currentExtrinsics.at<double>(3, 3) = 1;
    std::cout << tvecs << std::endl;

    // Convert the OpenCV coordinate system to OpenGL (SceneKit).
    // Note that in OpenCV z points away from the camera (in OpenGL it points into the camera)
    // and y points down (in OpenGL it points up).
    // Another note: OpenCV has a column-order matrix representation, while SceneKit
    // has a row-order matrix, but we'll take care of that later.
    cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
    cvToGl.at<double>(0, 0) = 1.0f;
    cvToGl.at<double>(1, 1) = -1.0f; // invert the y axis
    cvToGl.at<double>(2, 2) = -1.0f; // invert the z axis
    cvToGl.at<double>(3, 3) = 1.0f;
    _currentExtrinsics = cvToGl * _currentExtrinsics;

    cv::aruco::drawAxis(frame, _intrinsicMatrix, _distCoeffs, rvecs, tvecs, 5);
}
Then in each frame I convert the OpenCV matrix to an SCNMatrix4:
- (SCNMatrix4)transformToSceneKit:(cv::Mat &)openCVTransformation {
    SCNMatrix4 mat = SCNMatrix4Identity;
    // Transpose
    openCVTransformation = openCVTransformation.t();

    // copy the rotation rows
    mat.m11 = (float)openCVTransformation.at<double>(0, 0);
    mat.m12 = (float)openCVTransformation.at<double>(0, 1);
    mat.m13 = (float)openCVTransformation.at<double>(0, 2);
    mat.m14 = (float)openCVTransformation.at<double>(0, 3);

    mat.m21 = (float)openCVTransformation.at<double>(1, 0);
    mat.m22 = (float)openCVTransformation.at<double>(1, 1);
    mat.m23 = (float)openCVTransformation.at<double>(1, 2);
    mat.m24 = (float)openCVTransformation.at<double>(1, 3);

    mat.m31 = (float)openCVTransformation.at<double>(2, 0);
    mat.m32 = (float)openCVTransformation.at<double>(2, 1);
    mat.m33 = (float)openCVTransformation.at<double>(2, 2);
    mat.m34 = (float)openCVTransformation.at<double>(2, 3);

    // copy the translation row
    mat.m41 = (float)openCVTransformation.at<double>(3, 0);
    mat.m42 = (float)openCVTransformation.at<double>(3, 1) + 2.5;
    mat.m43 = (float)openCVTransformation.at<double>(3, 2);
    mat.m44 = (float)openCVTransformation.at<double>(3, 3);

    return mat;
}
At each frame in which the AR marker is found I add a box to the scene and apply the transformation to the object node:
SCNBox *box = [SCNBox boxWithWidth:5.0 height:5.0 length:5.0 chamferRadius:0.0];
_boxNode = [SCNNode nodeWithGeometry:box];

if (found) {
    [self.delegate returnExtrinsicsMat:extrinsicMatrixOfTheMarker];
    Mat R, T;
    [self.delegate returnRotationMat:R];
    [self.delegate returnTranslationMat:T];

    SCNMatrix4 Transformation;
    Transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];

    //_cameraNode.transform = SCNMatrix4Invert(Transformation);
    [_sceneKitScene.rootNode addChildNode:_cameraNode];
    //_cameraNode.camera.projectionTransform = SCNMatrix4Identity;
    //_cameraNode.camera.zNear = 0.0;
    _sceneKitView.pointOfView = _cameraNode;

    _boxNode.transform = Transformation;
    [_sceneKitScene.rootNode addChildNode:_boxNode];
    //_boxNode.position = SCNVector3Make(Transformation.m41, Transformation.m42, Transformation.m43);

    std::cout << (_boxNode.position.x) << " " << (_boxNode.position.y) << " " << (_boxNode.position.z) << std::endl << std::endl;
}
For example, if the translation vector is (-1, 5, 20), the object appears in the scene at position (-1, -5, -20), and the rotation is correct as well. The problem is that it never appears in the correct position in the background image. I will add some images to show the result.
Does anyone know why this is happening?
I found out the solution. Instead of applying the transform to the node of the object, I applied the inverted transformation matrix to the camera node. Then, for the camera's projection transform matrix, I applied the following matrix:
projection = SCNMatrix4Identity
projection.m11 = (2 * Float(cameraMatrix[0])) / -(ImageWidth * 0.5)
projection.m12 = (-2 * Float(cameraMatrix[1])) / (ImageWidth * 0.5)
projection.m13 = (width - (2 * Float(cameraMatrix[2]))) / (ImageWidth * 0.5)
projection.m22 = (2 * Float(cameraMatrix[4])) / (ImageHeight * 0.5)
projection.m23 = (-height + (2 * Float(cameraMatrix[5]))) / (ImageHeight * 0.5)
projection.m33 = (-far - near) / (far - near)
projection.m34 = (-2 * far * near) / (far - near)
projection.m43 = -1
projection.m44 = 0
where far and near are the z clipping planes.
I also had to correct the box initial position to center it on the marker.
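For clarity, a hypothetical Objective-C sketch of that fix, reusing the transformToSceneKit: method above (projection stands for the custom projection matrix just shown):
SCNMatrix4 transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];
_cameraNode.transform = SCNMatrix4Invert(transformation); // move the camera, not the box
_cameraNode.camera.projectionTransform = projection;      // custom projection from above
_sceneKitView.pointOfView = _cameraNode;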

How to calculate camera ray position for use with XMVector3Unproject(), DirectX11?

I'm trying to create a ray-casting camera in DirectX11 using XMVector3Unproject(). From my understanding, I will be passing in the (Vector3) position of the pixel on the near plane, and in a separate call, a corresponding position on the far plane. Then I would subtract these vectors to get the direction of the ray. The origin would then be the unprojected coordinate on the near plane. My problem here is calculating the origin of the ray to be passed in.
Example
// assuming screenHeight and screenWidth are the number of pixels.
const uint32_t screenHeight = 768;
const uint32_t screenWidth = 1024;

struct Ray
{
    XMFLOAT3 origin;
    XMFLOAT3 direction;
};

Ray rays[screenWidth * screenHeight];

for (uint32_t i = 0; i < screenHeight; ++i)
{
    for (uint32_t j = 0; j < screenWidth; ++j)
    {
        // 1. ***calculate and store the current pixel position on the near plane***
        // 2. ***calculate the corresponding point on the far plane***
        // 3. ***pass both positions separately into XMVector3Unproject() (2 total calls to the function)***
        // 4. ***store the returned vectors' difference into rays[i * screenWidth + j].direction***
        // 5. ***store the near plane pixel position's returned vector into rays[i * screenWidth + j].origin***
    }
}
Hopefully I'm understanding this correctly. Any help in determining the ray origins, or corrections would be greatly appreciated.
According to the documentation, the XMVector3Unproject function gives you, in object space (given your model matrix), the coordinates of a ray you have provided in camera space (normalized device coordinates).
To generate your camera rays, consider your camera a pinhole (all the light passes through one point, which is your camera at (0, 0, 0)), then choose your ray direction. Let's say you want to generate W*H camera rays; your loop might look like this:
Vector3 ray_origin = Vector3(0, 0, 0);
for (float x = -1.f; x <= 1.f; x += 2.f / W) {
    for (float y = -1.f; y <= 1.f; y += 2.f / H) {
        Vector3 ray_direction = Normalize(Vector3(x, y, -1.f)) - ray_origin;
        Vector3 ray_in_model = Unproject(ray_direction, 0.f, 0.f,
                                         width, height, znear, zfar,
                                         proj, view, model);
    }
}
You might also want to have a look at this link, which sounds interesting.

How can I track a point on a texture in OpenGL ES1?

In my iOS application I have a texture applied to a sphere rendered in OpenGLES1. The sphere can be rotated by the user. How can I track where a given point on the texture is in 2D space at any given time?
For example, given point (200, 200) on a texture that's 1000px x 1000px, I'd like to place a UIButton on top of my OpenGL view that tracks the point as the sphere is manipulated.
What's the best way to do this?
On my first attempt, I tried to use a color-picking technique where I have a separate sphere in an off-screen framebuffer that uses a black texture with a red square at point (200, 200). Then, I used glReadPixels() to track the position of the red square and I moved my button accordingly. Unfortunately, grabbing all the pixel data and iterating it 60 times a second just isn't possible for obvious performance reasons. I tried a number of ways to optimize this hack (eg: iterating only the red pixels, iterating every 4th red pixel, etc), but it just didn't prove to be reliable.
I'm an OpenGL noob, so I'd appreciate any guidance. Is there a better solution? Thanks!
I think it's easier to keep track of where your ball is instead of searching for it with pixels. Then just have a couple of functions to translate your ball's coordinates to your view's coordinates (and back), then set your subview's center to the translated coordinates.
CGPoint translatePointFromGLCoordinatesToUIView(CGPoint coordinates, UIView *myGLView) {
    // if your drawing coordinates were between (horizontal {-1.0 -> 1.0} vertical {-1 -> 1})
    CGFloat leftMostGLCoord = -1;
    CGFloat rightMostGLCoord = 1;
    CGFloat bottomMostGLCoord = -1;
    CGFloat topMostGLCoord = 1;

    CGPoint scale;
    scale.x = (rightMostGLCoord - leftMostGLCoord) / myGLView.bounds.size.width;
    scale.y = (topMostGLCoord - bottomMostGLCoord) / myGLView.bounds.size.height;

    coordinates.x -= leftMostGLCoord;
    coordinates.y -= bottomMostGLCoord;

    CGPoint translatedPoint;
    translatedPoint.x = coordinates.x / scale.x;
    translatedPoint.y = coordinates.y / scale.y;

    // flip y for iOS coordinates
    translatedPoint.y = myGLView.bounds.size.height - translatedPoint.y;
    return translatedPoint;
}

CGPoint translatePointFromUIViewToGLCoordinates(CGPoint pointInView, UIView *myGLView) {
    // if your drawing coordinates were between (horizontal {-1.0 -> 1.0} vertical {-1 -> 1})
    CGFloat leftMostGLCoord = -1;
    CGFloat rightMostGLCoord = 1;
    CGFloat bottomMostGLCoord = -1;
    CGFloat topMostGLCoord = 1;

    CGPoint scale;
    scale.x = (rightMostGLCoord - leftMostGLCoord) / myGLView.bounds.size.width;
    scale.y = (topMostGLCoord - bottomMostGLCoord) / myGLView.bounds.size.height;

    // flip y for iOS coordinates
    pointInView.y = myGLView.bounds.size.height - pointInView.y;

    CGPoint translatedPoint;
    translatedPoint.x = leftMostGLCoord + (pointInView.x * scale.x);
    translatedPoint.y = bottomMostGLCoord + (pointInView.y * scale.y);
    return translatedPoint;
}
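A hypothetical usage sketch of the first helper: assuming glPoint is the tracked point already projected into your 2D GL drawing coordinates, and trackingButton is the UIButton sitting on top of the GL view:
CGPoint viewPoint = translatePointFromGLCoordinatesToUIView(glPoint, self.glView);
self.trackingButton.center = viewPoint; // update once per frame / after each rotation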
In my app I chose to use the iOS coordinate system for my drawing too. I just apply a projection matrix to my whole GLKView that reconciles the coordinate systems.
static GLKMatrix4 GLKMatrix4MakeIOSCoordsWithSize(CGSize screenSize) {
    GLKMatrix4 matrix4 = GLKMatrix4MakeScale(2.0 / screenSize.width,
                                             -2.0 / screenSize.height,
                                             1.0);
    matrix4 = GLKMatrix4Translate(matrix4, -screenSize.width / 2.0, -screenSize.height / 2.0, 0);
    return matrix4;
}
This way you don't have to translate anything.
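As a hypothetical usage note, if you render with a GLKBaseEffect (effect here is assumed), you could install that matrix once, e.g. in viewDidLoad:
// assumed GLKBaseEffect ivar; apply the iOS-style projection once
self.effect.transform.projectionMatrix =
    GLKMatrix4MakeIOSCoordsWithSize(self.view.bounds.size);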

Convert an image to a SceneKit Node

I have a bitmap image (however, this should work with any arbitrary image).
I want to take my image and make it a 3D SCNNode. I've accomplished that much with the code below, which takes each pixel in the image and creates an SCNNode with an SCNBox geometry.
static inline SCNNode *NodeFromSprite(const UIImage *image) {
    SCNNode *node = [SCNNode node];
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8 *data = CFDataGetBytePtr(pixelData);

    for (int x = 0; x < image.size.width; x++)
    {
        for (int y = 0; y < image.size.height; y++)
        {
            int pixelInfo = ((image.size.width * y) + x) * 4;
            UInt8 alpha = data[pixelInfo + 3];
            if (alpha > 3)
            {
                UInt8 red   = data[pixelInfo];
                UInt8 green = data[pixelInfo + 1];
                UInt8 blue  = data[pixelInfo + 2];
                UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];

                SCNNode *pixel = [SCNNode node];
                pixel.geometry = [SCNBox boxWithWidth:1.001 height:1.001 length:1.001 chamferRadius:0];
                pixel.geometry.firstMaterial.diffuse.contents = color;
                pixel.position = SCNVector3Make(x - image.size.width / 2.0,
                                                y - image.size.height / 2.0,
                                                0);
                [node addChildNode:pixel];
            }
        }
    }
    CFRelease(pixelData);

    node = [node flattenedClone];
    // The image is upside down and I have no idea why.
    node.rotation = SCNVector4Make(1, 0, 0, M_PI);
    return node;
}
But the problem is that this approach takes up way too much memory!
I'm trying to find a way to do this with less memory.
All Code and resources can be found at:
https://github.com/KonradWright/KNodeFromSprite
Right now you are drawing each pixel as an SCNBox of a certain color, which means:
one GL draw call per box
drawing two unnecessary invisible faces between adjacent boxes
drawing N identical 1x1x1 boxes in a row when a single 1x1xN box could be drawn
This looks like a common Minecraft-like optimization problem:
Treat your image as a 3-dimensional array (where depth is the wanted image extrusion depth), with each element representing a cube voxel of a certain color.
Use a greedy meshing algorithm (demo) and a custom SCNGeometry to create the mesh for the SceneKit node.
Pseudo-code for a meshing algorithm that skips faces of adjacent cubes (simpler, but less effective than greedy meshing):
#define SIZE_X 16 // image width
#define SIZE_Y 16 // image height

// pixel data, 0 = transparent pixel
int data[SIZE_X][SIZE_Y];

// check if there is a non-transparent neighbour at x, y
BOOL has_neighbour(x, y) {
    if (x < 0 || x >= SIZE_X || y < 0 || y >= SIZE_Y || data[x][y] == 0)
        return NO; // out of dimensions or transparent
    else
        return YES;
}

void add_face(x, y, orientation, color) {
    // add face at (x, y) with specified color and orientation = TOP, BOTTOM, LEFT, RIGHT, FRONT, BACK
    // can be (easier and slower) implemented with SCNPlane's: https://developer.apple.com/library/mac/documentation/SceneKit/Reference/SCNPlane_Class/index.html#//apple_ref/doc/uid/TP40012010-CLSCHSCNPlane-SW8
    // or (harder and faster) using Custom Geometry: https://github.com/d-ronnqvist/blogpost-codesample-CustomGeometry/blob/master/CustomGeometry/CustomGeometryView.m#L84
}

for (x = 0; x < SIZE_X; x++) {
    for (y = 0; y < SIZE_Y; y++) {
        int color = data[x][y];
        // skip if the current pixel is transparent
        if (color == 0)
            continue;
        // check neighbour at top
        if (!has_neighbour(x, y + 1))
            add_face(x, y, TOP, color);
        // check neighbour at bottom
        if (!has_neighbour(x, y - 1))
            add_face(x, y, BOTTOM, color);
        // check neighbour at left
        if (!has_neighbour(x - 1, y))
            add_face(x, y, LEFT, color);
        // check neighbour at right
        if (!has_neighbour(x + 1, y))
            add_face(x, y, RIGHT, color);
        // since the array is 2D, front and back faces are always visible for non-transparent pixels
        add_face(x, y, FRONT, color);
        add_face(x, y, BACK, color);
    }
}
A lot depends on the input image. If it is not big and doesn't have a wide variety of colors, I would go with an SCNNode, adding SCNPlanes for the visible faces and then flattenedClone()-ing the result.
An approach similar to the one proposed by Ef Dot:
To keep the number of draw calls as small as possible you want to keep the number of materials as small as possible. Here you will want one SCNMaterial per color.
To keep the number of draw calls as small as possible make sure that no two geometry elements (SCNGeometryElement) use the same material. In other words, use one geometry element per material (color).
So you will have to build a SCNGeometry that has N geometry elements and N materials where N is the number of distinct colors in your image.
For each color in your image, build a polygon (or group of disjoint polygons) from all the pixels of that color.
Triangulate each polygon (or group of polygons) and build a geometry element with that triangulation.
Build the geometry from the geometry elements.
If you don't feel comfortable triangulating the polygons yourself, you can leverage SCNShape.
For each polygon (or group of polygons), create a single UIBezierPath and build an SCNShape with that.
Merge all the geometry sources of your shapes in a single source, and reuse the geometry elements to create a custom SCNGeometry
Note that some vertices will be duplicated if you use a collection of SCNShapes to build the geometry. With little effort you can make sure that no two vertices in your final source have the same position. Update the indexes in the geometry elements accordingly.
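For illustration, a minimal sketch of the SCNShape route for a single color; the pathForColor() helper (tracing the outline of all pixels of that color) is hypothetical, and this skips the source-merging step described above by simply using one node per color:
UIBezierPath *outline = pathForColor(image, color); // hypothetical helper
SCNShape *shape = [SCNShape shapeWithPath:outline extrusionDepth:1.0];
shape.firstMaterial.diffuse.contents = color;       // one material per color
[parentNode addChildNode:[SCNNode nodeWithGeometry:shape]];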
I can also direct you to this excellent GitHub repo by Nick Lockwood:
https://github.com/nicklockwood/FPSControls
It will show you how to generate the meshes as planes (instead of cubes), which is a fast way to achieve what you need for simple scenes using a "neighboring" check.
If you need large complex scenes, then I suggest you go for the solution proposed by Ef Dot using a greedy meshing algorithm.

Using OpenGL ES 2.0 with iOS, how do I draw a cylinder between two points?

I am given two GLKVector3's representing the start and end points of the cylinder. Using these points and the radius, I need to build and render a cylinder. I can build a cylinder with the correct distance between the points, but in a fixed direction (currently always in the y (0, 1, 0) up direction). I am not sure what kind of calculations I need to make to get the cylinder onto the correct plane between the two points so that a line would run through the two end points. I am thinking there is some sort of calculation I can apply as I create my vertex data, using the direction vector or angle, that will create the cylinder pointing in the correct direction. Does anyone have an algorithm, or know of one, that will help?
Are you drawing more than one of these cylinders? Or ever drawing it in a different position? If so, using the algorithm from the awesome article is a not-so-awesome idea. Every time you upload geometry data to the GPU, you incur a performance cost.
A better approach is to calculate the geometry for a single basic cylinder once — say, one with unit radius and height — and stuff that vertex data into a VBO. Then, when you draw, use a model-to-world transformation matrix to scale (independently in radius and length if needed) and rotate the cylinder into place. This way, the only new data that gets sent to the GPU with each draw call is a 4x4 matrix instead of all the vertex data for whatever polycount of cylinder you're drawing.
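As a hedged sketch of that approach with GLKit math, assuming the cached VBO holds a unit cylinder of radius 1 running along +Y from the origin to y = 1, and start, end, and r describe the cylinder you actually want:
// Build a model matrix mapping the unit cylinder onto the segment start -> end with radius r.
GLKVector3 axis = GLKVector3Subtract(end, start);
float length = GLKVector3Length(axis);
GLKVector3 dir = GLKVector3Normalize(axis);
GLKVector3 yAxis = GLKVector3Make(0.0f, 1.0f, 0.0f);

// rotation that takes +Y onto the start -> end direction
GLKVector3 rotAxis = GLKVector3CrossProduct(yAxis, dir);
float cosAngle = fmaxf(-1.0f, fminf(1.0f, GLKVector3DotProduct(yAxis, dir)));
float angle = acosf(cosAngle);

GLKMatrix4 model = GLKMatrix4MakeTranslation(start.x, start.y, start.z);
if (GLKVector3Length(rotAxis) > 1e-6f) {
    model = GLKMatrix4RotateWithVector3(model, angle, GLKVector3Normalize(rotAxis));
} else if (dir.y < 0.0f) {
    model = GLKMatrix4Rotate(model, M_PI, 1.0f, 0.0f, 0.0f); // segment points straight down
}
model = GLKMatrix4Scale(model, r, length, r);
// Combine model with your view/projection matrices, upload it as the shader uniform,
// and draw the cached unit-cylinder VBO.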
Check this awesome article; it's dated, but after adapting the algorithm it works like a charm. One tip: OpenGL ES 2.0 only supports triangles, so instead of using GL_QUAD_STRIP as the article does, use GL_TRIANGLE_STRIP; the result is identical. The site also contains a bunch of other useful information regarding OpenGL geometries.
See the code below for the solution. Here, self represents the mesh and contains the vertices, indices, and such.
- (instancetype)initWithOriginRadius:(CGFloat)originRadius
                       atOriginPoint:(GLKVector3)originPoint
                        andEndRadius:(CGFloat)endRadius
                          atEndPoint:(GLKVector3)endPoint
                       withPrecision:(NSInteger)precision
                            andColor:(GLKVector4)color
{
    self = [super init];
    if (self) {
        // normal pointing from origin point to end point
        GLKVector3 normal = GLKVector3Make(originPoint.x - endPoint.x,
                                           originPoint.y - endPoint.y,
                                           originPoint.z - endPoint.z);

        // create two perpendicular vectors - perp and q
        GLKVector3 perp = normal;
        if (normal.x == 0 && normal.z == 0) {
            perp.x += 1;
        } else {
            perp.y += 1;
        }

        // cross product
        GLKVector3 q = GLKVector3CrossProduct(perp, normal);
        perp = GLKVector3CrossProduct(normal, q);

        // normalize vectors
        perp = GLKVector3Normalize(perp);
        q = GLKVector3Normalize(q);

        // calculate vertices
        CGFloat twoPi = 2 * M_PI;
        NSInteger index = 0;

        for (NSInteger i = 0; i < precision + 1; i++) {
            CGFloat theta = ((CGFloat)i) / precision * twoPi; // go around circle and get points

            // normals
            normal.x = cosf(theta) * perp.x + sinf(theta) * q.x;
            normal.y = cosf(theta) * perp.y + sinf(theta) * q.y;
            normal.z = cosf(theta) * perp.z + sinf(theta) * q.z;

            AGLKMeshVertex meshVertex;
            AGLKMeshVertexDynamic colorVertex;

            // top vertex
            meshVertex.position.x = endPoint.x + endRadius * normal.x;
            meshVertex.position.y = endPoint.y + endRadius * normal.y;
            meshVertex.position.z = endPoint.z + endRadius * normal.z;
            meshVertex.normal = normal;
            meshVertex.originalColor = color;
            // append vertex
            [self appendVertex:meshVertex];
            // append color vertex
            colorVertex.colors = color;
            [self appendColorVertex:colorVertex];
            // append index
            [self appendIndex:index++];

            // bottom vertex
            meshVertex.position.x = originPoint.x + originRadius * normal.x;
            meshVertex.position.y = originPoint.y + originRadius * normal.y;
            meshVertex.position.z = originPoint.z + originRadius * normal.z;
            meshVertex.normal = normal;
            meshVertex.originalColor = color;
            // append vertex
            [self appendVertex:meshVertex];
            // append color vertex
            [self appendColorVertex:colorVertex];
            // append index
            [self appendIndex:index++];
        }

        // draw command
        [self appendCommand:GL_TRIANGLE_STRIP firstIndex:0 numberOfIndices:self.numberOfIndices materialName:@""];
    }
    return self;
}
