Jagged ccDrawCircle - iOS

Using Cocos2d to draw a thicker circle:
glLineWidth(20);
ccDrawCircle(self.ripplePosition, _radius, 0, 50, NO);
But this is what shows up (notice how it looks like it's built from 4 different segments):
http://i.stack.imgur.com/jYW4s.png
I tried increasing the number of segments to larger values but the result is the same.
Is this a bug in Cocos2D? Any ideas on how to achieve a "perfect" circle?
Here is the implementation of ccDrawCircle from cocos2d 2.0rc2:
void ccDrawCircle( CGPoint center, float r, float a, NSUInteger segs, BOOL drawLineToCenter)
{
    lazy_init();

    int additionalSegment = 1;
    if (drawLineToCenter)
        additionalSegment++;

    const float coef = 2.0f * (float)M_PI/segs;

    GLfloat *vertices = calloc( sizeof(GLfloat)*2*(segs+2), 1);
    if( ! vertices )
        return;

    for(NSUInteger i = 0; i <= segs; i++) {
        float rads = i*coef;
        GLfloat j = r * cosf(rads + a) + center.x;
        GLfloat k = r * sinf(rads + a) + center.y;

        vertices[i*2] = j;
        vertices[i*2+1] = k;
    }
    vertices[(segs+1)*2] = center.x;
    vertices[(segs+1)*2+1] = center.y;

    [shader_ use];
    [shader_ setUniformForModelViewProjectionMatrix];
    [shader_ setUniformLocation:colorLocation_ with4fv:(GLfloat*) &color_.r count:1];

    ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position );
    glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_LINE_STRIP, 0, (GLsizei) segs+additionalSegment);

    free( vertices );

    CC_INCREMENT_GL_DRAWS(1);
}

I went with a slightly modified version of ccDrawCircle and it works pretty well (performs a lot better than using and resizing a sprite):
void ccDrawDonut( CGPoint center, float r1, float r2, NSUInteger segs)
{
    lazy_init();

    const float coef = 2.0f * (float)M_PI/segs;

    // The loop below writes segs+1 inner/outer pairs (4 floats each),
    // so the buffer needs sizeof(GLfloat)*4*(segs+1) bytes.
    GLfloat *vertices = calloc( sizeof(GLfloat)*4*(segs+1), 1);
    if( ! vertices )
        return;

    for(NSUInteger i = 0; i <= segs; i++) {
        float rads = i*coef;
        GLfloat j1 = r1 * cosf(rads) + center.x;
        GLfloat k1 = r1 * sinf(rads) + center.y;
        vertices[i*4] = j1;
        vertices[i*4+1] = k1;

        rads += coef/2;
        GLfloat j2 = r2 * cosf(rads) + center.x;
        GLfloat k2 = r2 * sinf(rads) + center.y;
        vertices[i*4+2] = j2;
        vertices[i*4+3] = k2;
    }

    [shader_ use];
    [shader_ setUniformForModelViewProjectionMatrix];
    [shader_ setUniformLocation:colorLocation_ with4fv:(GLfloat*) &color_.r count:1];

    ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position );
    glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei) 2*segs+2);

    free( vertices );

    CC_INCREMENT_GL_DRAWS(1);
}
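The vertex layout the donut builds (one inner-radius point and one half-step-offset outer-radius point per step, closed after segs+1 pairs) can be sketched in plain C++. `donutVertices` is a hypothetical stand-in for illustration, not a cocos2d API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Builds the GL_TRIANGLE_STRIP ring ccDrawDonut uploads: for each of the
// segs+1 steps, one point on the inner radius r1 and one on the outer
// radius r2 offset by half a step, so the strip zig-zags around the ring
// and closes on itself.
std::vector<float> donutVertices(float cx, float cy, float r1, float r2,
                                 unsigned segs) {
    const float coef = 2.0f * (float)M_PI / segs;
    std::vector<float> v;
    v.reserve(4 * (segs + 1));      // segs+1 pairs, 4 floats per pair
    for (unsigned i = 0; i <= segs; i++) {
        float rads = i * coef;
        v.push_back(r1 * cosf(rads) + cx);   // inner ring point
        v.push_back(r1 * sinf(rads) + cy);
        rads += coef / 2;
        v.push_back(r2 * cosf(rads) + cx);   // outer ring point
        v.push_back(r2 * sinf(rads) + cy);
    }
    return v;
}
```

Since the inner point at i = segs lands back at angle 2π, the last pair coincides with the first and the strip closes without a seam.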

Related

Shader Program not working in Cocos2d 2.2 Obj.C iOS 12

CODE:
[lightRender beginWithClear:0 g:0 b:0 a:0];
self.shaderProgram = [[CCShaderCache sharedShaderCache] programForKey:kCCShader_PositionColor];
CC_NODE_DRAW_SETUP();

CGPoint vertices[3];
ccColor4F colors[3];

CGPoint mid = BODY_POSITION(cursor.body);
float radius = LIGHT_RANGE * lightRadius; // - (CCRANDOM_0_1() * LIGHT_RANGE * .015);
float initialAlpha = .7f;

for (int i = 0; i < LIGHT_PRECISION; i++)
{
    int nVertices = 0;
    float angle = 2*M_PI/LIGHT_PRECISION * i;
    float nextAngle = 2*M_PI/LIGHT_PRECISION * (i+1);

    int n = i + 1;
    if (n == LIGHT_PRECISION)
        n = 0;

    vertices[nVertices] = ccpAdd(mid, ccp(cosf(angle) * radius * lightLength[i], sinf(angle) * radius * lightLength[i]));
    colors[nVertices++] = (ccColor4F){1, 1, .9, initialAlpha * (1 - lightLength[i])};
    vertices[nVertices] = ccpAdd(mid, ccp(cosf(nextAngle) * radius * lightLength[n], sinf(nextAngle) * radius * lightLength[n]));
    colors[nVertices++] = (ccColor4F){1, 1, .9, initialAlpha * (1 - lightLength[n])};
    vertices[nVertices] = mid;
    colors[nVertices++] = (ccColor4F){1, 1, .8, initialAlpha};

    ccGLEnableVertexAttribs(kCCVertexAttribFlag_Position | kCCVertexAttribFlag_Color);
    glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, 0, colors);
    glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)nVertices);
}
[lightRender end];
CC_NODE_DRAW_SETUP code:
#define CC_NODE_DRAW_SETUP() \
do { \
    ccGLEnable( _glServerState ); \
    NSAssert1(_shaderProgram, @"No shader program set for node: %@", self); \
    [_shaderProgram use]; \
    [_shaderProgram setUniformsForBuiltins]; \
} while(0)
How do I fix this shader code in iOS 12 with a Cocos2d 2.2 Obj-C project? If I run it in the iOS 7 simulator it works perfectly; the same code does not work on iOS 12.
Fixed the shader problem in Cocos2d.
In the above code, the main problem is CGFloat: I converted it to float for all vertices.
I also took the shader-related files from Cocos2d-x and converted them to Obj-C for Cocos2d.
Now all shaders work perfectly in Cocos2d v2.2 Obj-C on iOS 12, including iPhone X/XR/XS Max.
Happy coding! Cheers :)
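The CGFloat-to-float fix works because CGFloat is a typedef for double on 64-bit iOS, so CGFloat-based vertex data no longer matches the GL_FLOAT layout the attribute pointers declare. A minimal sketch of the size mismatch, with plain C++ typedefs standing in for the CoreGraphics ones:

```cpp
#include <cstddef>

// On 64-bit iOS, CGFloat is a typedef for double, so a CGPoint is 16 bytes,
// not the 8 bytes that glVertexAttribPointer(..., 2, GL_FLOAT, ...) expects
// for two packed floats. Modeled here with local typedefs for illustration.
typedef double CGFloat64;                 // CGFloat on a 64-bit target
struct CGPoint64 { CGFloat64 x, y; };     // what the broken code uploaded
struct GLPoint   { float x, y; };         // what GL_FLOAT vertices need

// The stride the GL call assumed vs. what was actually in memory.
constexpr size_t assumedStride = sizeof(GLPoint);
constexpr size_t actualStride  = sizeof(CGPoint64);
```

With every attribute read off by a factor of two, the vertex data is garbage on 64-bit devices while 32-bit targets (like the iOS 7 simulator, where CGFloat is float) happen to work.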

Convert camera intrinsic and extrinsic matrix to modelview and projection matrix on openGL

I'm trying to convert a camera extrinsic matrix and intrinsic matrix to OpenGL modelview and projection matrices for AR.
The camera intrinsic matrix was estimated by camera calibration, and I have already calculated the extrinsic matrix (I know the correspondences between world coordinates (the CAD model) and camera coordinates).
To check whether I calculated the extrinsic matrix properly, I augmented the CAD model onto the image using:
P = KE
(u, v, w) = P(X, Y, Z)
where P is the projection matrix, K the intrinsic matrix, E the extrinsic matrix,
(u/w, v/w) the image coordinates, and (X, Y, Z) the world coordinates of the CAD model.
The result is shown below and it is good, so I am confident the intrinsic and extrinsic matrices are correct.
However, I failed to carry these over to OpenGL (I want to draw the CAD model over the image using OpenGL). With the code below, the object ends up outside the window. (By substituting a different OpenGL projection matrix I checked that the code can draw the object, but it draws it at the wrong position.)
extern Matrix3f IntrinsicMatrix;
double fx = IntrinsicMatrix(0, 0);
double fy = IntrinsicMatrix(1, 1);
double cx = IntrinsicMatrix(0, 2);
double cy = IntrinsicMatrix(1, 2);
double alpha = IntrinsicMatrix(0, 1);

extern Mat InputImage;
double w = InputImage.cols;
double h = InputImage.rows;
double Znear = 0.1;
double Zfar = 500000;

extern MatrixXf ExtrinsicMatrix;
Matrix4f Extrinsic4f, Temp;
for (unsigned int i = 0; i < 3; i++)
    for (unsigned int j = 0; j < 4; j++)
        Extrinsic4f(i, j) = ExtrinsicMatrix(i, j);
Extrinsic4f(3, 0) = 0.0f;
Extrinsic4f(3, 1) = 0.0f;
Extrinsic4f(3, 2) = 0.0f;
Extrinsic4f(3, 3) = 1.0f;

// Flip the y and z axes to go from the vision convention to the OpenGL camera convention.
for (unsigned int i = 0; i < 4; i++)
    for (unsigned int j = 0; j < 4; j++)
        Temp(i, j) = 0.0f;
Temp(0, 0) = 1.0f;
Temp(1, 1) = -1.0f;
Temp(2, 2) = -1.0f;
Temp(3, 3) = 1.0f;
Extrinsic4f = Temp * Extrinsic4f;

Matrix4f glViewMatrix = Extrinsic4f.transpose();
GLfloat model[16] = {
    glViewMatrix(0, 0), glViewMatrix(0, 1), glViewMatrix(0, 2), glViewMatrix(0, 3),
    glViewMatrix(1, 0), glViewMatrix(1, 1), glViewMatrix(1, 2), glViewMatrix(1, 3),
    glViewMatrix(2, 0), glViewMatrix(2, 1), glViewMatrix(2, 2), glViewMatrix(2, 3),
    glViewMatrix(3, 0), glViewMatrix(3, 1), glViewMatrix(3, 2), glViewMatrix(3, 3),
};
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(model);

glMatrixMode(GL_PROJECTION);
GLfloat perspMatrix[16] = {
    2*fx/w, 0.0,     (w-2*cx)/w,                 0,
    0,      -2*fy/h, (-h+2*cy)/h,                0,
    0,      0,       (-Zfar-Znear)/(Zfar-Znear), -2*Zfar*Znear/(Zfar-Znear),
    0,      0,       -1,                         0};
glLoadMatrixf(perspMatrix);

glColor3f(1.f, 1.f, 1.f);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f(1.0, -1.0, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f(1.0, 1.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, 1.0, 0.0);
glEnd();
glDisable(GL_TEXTURE_2D);

double fov_y = 2 * atan(h / 2 / fy) * 180 / CV_PI;
gluPerspective(fov_y, (double)w / h, Znear, Zfar); // 39
glViewport(0, 0, w, h);

draw3Dobject();
glutSwapBuffers();
Is there anything I should change?

Converting cv::Mat to MTLTexture

An intermediate step of my current project requires converting OpenCV's cv::Mat to MTLTexture, Metal's texture container. I need to store the floats in the Mat as floats in the texture; my project cannot afford the loss of precision.
This is my attempt at such a conversion.
- (id<MTLTexture>)texForMat:(cv::Mat)image context:(MBEContext *)context
{
    id<MTLTexture> texture;
    int width = image.cols;
    int height = image.rows;

    Float32 *rawData = (Float32 *)calloc(height * width * 4, sizeof(float));
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * width;
    float r, g, b, a;

    for (int i = 0; i < height; i++)
    {
        Float32 *imageData = (Float32 *)(image.data + image.step * i);
        for (int j = 0; j < width; j++)
        {
            r = (Float32)(imageData[4 * j]);
            g = (Float32)(imageData[4 * j + 1]);
            b = (Float32)(imageData[4 * j + 2]);
            a = (Float32)(imageData[4 * j + 3]);
            rawData[image.step * (i) + (4 * j)] = r;
            rawData[image.step * (i) + (4 * j + 1)] = g;
            rawData[image.step * (i) + (4 * j + 2)] = b;
            rawData[image.step * (i) + (4 * j + 3)] = a;
        }
    }

    MTLTextureDescriptor *textureDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA16Float
                                                                                                 width:width
                                                                                                height:height
                                                                                             mipmapped:NO];
    texture = [context.device newTextureWithDescriptor:textureDescriptor];

    MTLRegion region = MTLRegionMake2D(0, 0, width, height);
    [texture replaceRegion:region mipmapLevel:0 withBytes:rawData bytesPerRow:bytesPerRow];
    free(rawData);
    return texture;
}
But it doesn't seem to be working. It reads zeroes from the Mat every time and throws EXC_BAD_ACCESS. I need the MTLTexture to be a float format to keep the precision.
Thanks for considering this issue.
One problem here is that you're loading rawData with Float32s but your texture is RGBA16Float, so the data will be corrupted (a 16-bit float is half the size of a Float32). This shouldn't cause your crash, but it's an issue you'll have to deal with.
Also, as chappjc noted, you're using image.step when writing your data out, but that destination buffer is contiguous and never has a step other than width * bytesPerPixel.

Angle and Scale Invariant template matching using OpenCV

The function rotates the template image from 0 to 180 (or up to 360) degrees to find all matches (at all angles) in the source image, even at different scales.
The function was written against the OpenCV C interface. When I tried to port it to the OpenCV C++ interface, I got a lot of errors. Could someone please help me port it to the OpenCV C++ interface?
void TemplateMatch()
{
    int i, j, x, y, key;
    double minVal;
    char windowNameSource[] = "Original Image";
    char windowNameDestination[] = "Result Image";
    char windowNameCoefficientOfCorrelation[] = "Coefficient of Correlation Image";
    CvPoint minLoc;
    CvPoint tempLoc;

    IplImage *sourceImage = cvLoadImage("template_source.jpg", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
    IplImage *templateImage = cvLoadImage("template.jpg", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
    IplImage *graySourceImage = cvCreateImage(cvGetSize(sourceImage), IPL_DEPTH_8U, 1);
    IplImage *grayTemplateImage = cvCreateImage(cvGetSize(templateImage), IPL_DEPTH_8U, 1);
    IplImage *binarySourceImage = cvCreateImage(cvGetSize(sourceImage), IPL_DEPTH_8U, 1);
    IplImage *binaryTemplateImage = cvCreateImage(cvGetSize(templateImage), IPL_DEPTH_8U, 1);
    IplImage *destinationImage = cvCreateImage(cvGetSize(sourceImage), IPL_DEPTH_8U, 3);
    cvCopy(sourceImage, destinationImage);

    cvCvtColor(sourceImage, graySourceImage, CV_RGB2GRAY);
    cvCvtColor(templateImage, grayTemplateImage, CV_RGB2GRAY);
    cvThreshold(graySourceImage, binarySourceImage, 200, 255, CV_THRESH_OTSU);
    cvThreshold(grayTemplateImage, binaryTemplateImage, 200, 255, CV_THRESH_OTSU);

    int templateHeight = templateImage->height;
    int templateWidth = templateImage->width;
    float templateScale = 0.5f;

    for (i = 2; i <= 3; i++)
    {
        int tempTemplateHeight = (int)(templateWidth * (i * templateScale));
        int tempTemplateWidth = (int)(templateHeight * (i * templateScale));
        IplImage *tempBinaryTemplateImage = cvCreateImage(cvSize(tempTemplateWidth, tempTemplateHeight), IPL_DEPTH_8U, 1);
        // W - w + 1, H - h + 1
        IplImage *result = cvCreateImage(cvSize(sourceImage->width - tempTemplateWidth + 1, sourceImage->height - tempTemplateHeight + 1), IPL_DEPTH_32F, 1);
        cvResize(binaryTemplateImage, tempBinaryTemplateImage, CV_INTER_LINEAR);

        float degree = 20.0f;
        for (j = 0; j <= 9; j++)
        {
            IplImage *rotateBinaryTemplateImage = cvCreateImage(cvSize(tempBinaryTemplateImage->width, tempBinaryTemplateImage->height), IPL_DEPTH_8U, 1);
            //cvShowImage(windowNameSource, tempBinaryTemplateImage);
            //cvWaitKey(0);

            for (y = 0; y < tempTemplateHeight; y++)
            {
                for (x = 0; x < tempTemplateWidth; x++)
                {
                    rotateBinaryTemplateImage->imageData[y * tempTemplateWidth + x] = 255;
                }
            }

            for (y = 0; y < tempTemplateHeight; y++)
            {
                for (x = 0; x < tempTemplateWidth; x++)
                {
                    float radian = (float)j * degree * CV_PI / 180.0f;
                    int scale = y * tempTemplateWidth + x;
                    int rotateY = -sin(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + cos(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateHeight / 2;
                    int rotateX = cos(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + sin(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateWidth / 2;
                    if (rotateY < tempTemplateHeight && rotateX < tempTemplateWidth && rotateY >= 0 && rotateX >= 0)
                        rotateBinaryTemplateImage->imageData[scale] = tempBinaryTemplateImage->imageData[rotateY * tempTemplateWidth + rotateX];
                }
            }

            //cvShowImage(windowNameSource, rotateBinaryTemplateImage);
            //cvWaitKey(0);
            cvMatchTemplate(binarySourceImage, rotateBinaryTemplateImage, result, CV_TM_SQDIFF_NORMED);
            //cvMatchTemplate(binarySourceImage, rotateBinaryTemplateImage, result, CV_TM_SQDIFF);
            cvMinMaxLoc(result, &minVal, NULL, &minLoc, NULL, NULL);
            printf("%d%% , %d deg : %f%%\n", (int)(i * 0.5 * 100), j * 20, (1 - minVal) * 100);

            if (minVal < 0.065) // 1 - 0.065 = 0.935 : 93.5%
            {
                tempLoc.x = minLoc.x + tempTemplateWidth;
                tempLoc.y = minLoc.y + tempTemplateHeight;
                cvRectangle(destinationImage, minLoc, tempLoc, CV_RGB(0, 255, 0), 1, 8, 0);
            }
        }
        //cvShowImage(windowNameSource, result);
        //cvWaitKey(0);
        cvReleaseImage(&tempBinaryTemplateImage);
        cvReleaseImage(&result);
    }

    // cvShowImage(windowNameSource, sourceImage);
    // cvShowImage(windowNameCoefficientOfCorrelation, result);
    cvShowImage(windowNameDestination, destinationImage);
    key = cvWaitKey(0);

    cvReleaseImage(&sourceImage);
    cvReleaseImage(&templateImage);
    cvReleaseImage(&graySourceImage);
    cvReleaseImage(&grayTemplateImage);
    cvReleaseImage(&binarySourceImage);
    cvReleaseImage(&binaryTemplateImage);
    cvReleaseImage(&destinationImage);
    cvDestroyWindow(windowNameSource);
    cvDestroyWindow(windowNameDestination);
    cvDestroyWindow(windowNameCoefficientOfCorrelation);
}
RESULT:
Template image:
Result image:
The function above puts rectangles around the matches (angle and scale invariant) in this image.
Now I have been trying to port the code to the C++ interface. If anyone needs more details, please let me know.
C++ port of the above code:
Mat TemplateMatch(Mat sourceImage, Mat templateImage) {
    double minVal;
    Point minLoc;
    Point tempLoc;

    Mat graySourceImage = Mat(sourceImage.size(), CV_8UC1);
    Mat grayTemplateImage = Mat(templateImage.size(), CV_8UC1);
    Mat binarySourceImage = Mat(sourceImage.size(), CV_8UC1);
    Mat binaryTemplateImage = Mat(templateImage.size(), CV_8UC1);
    Mat destinationImage = Mat(sourceImage.size(), CV_8UC3);
    sourceImage.copyTo(destinationImage);

    cvtColor(sourceImage, graySourceImage, CV_BGR2GRAY);
    cvtColor(templateImage, grayTemplateImage, CV_BGR2GRAY);
    threshold(graySourceImage, binarySourceImage, 200, 255, CV_THRESH_OTSU);
    threshold(grayTemplateImage, binaryTemplateImage, 200, 255, CV_THRESH_OTSU);

    int templateHeight = templateImage.rows;
    int templateWidth = templateImage.cols;
    float templateScale = 0.5f;

    for (int i = 2; i <= 3; i++) {
        int tempTemplateHeight = (int)(templateWidth * (i * templateScale));
        int tempTemplateWidth = (int)(templateHeight * (i * templateScale));
        Mat tempBinaryTemplateImage = Mat(Size(tempTemplateWidth, tempTemplateHeight), CV_8UC1);
        Mat result = Mat(Size(sourceImage.cols - tempTemplateWidth + 1, sourceImage.rows - tempTemplateHeight + 1), CV_32FC1);
        resize(binaryTemplateImage, tempBinaryTemplateImage, Size(tempBinaryTemplateImage.cols, tempBinaryTemplateImage.rows), 0, 0, INTER_LINEAR);

        float degree = 20.0f;
        for (int j = 0; j <= 9; j++) {
            Mat rotateBinaryTemplateImage = Mat(Size(tempBinaryTemplateImage.cols, tempBinaryTemplateImage.rows), CV_8UC1);

            for (int y = 0; y < tempTemplateHeight; y++) {
                for (int x = 0; x < tempTemplateWidth; x++) {
                    rotateBinaryTemplateImage.data[y * tempTemplateWidth + x] = 255;
                }
            }

            for (int y = 0; y < tempTemplateHeight; y++) {
                for (int x = 0; x < tempTemplateWidth; x++) {
                    float radian = (float)j * degree * CV_PI / 180.0f;
                    int scale = y * tempTemplateWidth + x;
                    int rotateY = -sin(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + cos(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateHeight / 2;
                    int rotateX = cos(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + sin(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateWidth / 2;
                    if (rotateY < tempTemplateHeight && rotateX < tempTemplateWidth && rotateY >= 0 && rotateX >= 0)
                        rotateBinaryTemplateImage.data[scale] = tempBinaryTemplateImage.data[rotateY * tempTemplateWidth + rotateX];
                }
            }

            matchTemplate(binarySourceImage, rotateBinaryTemplateImage, result, CV_TM_SQDIFF_NORMED);
            minMaxLoc(result, &minVal, 0, &minLoc, 0, Mat());
            cout << (int)(i * 0.5 * 100) << " , " << j * 20 << " , " << (1 - minVal) * 100 << endl;

            if (minVal < 0.065) { // 1 - 0.065 = 0.935 : 93.5%
                tempLoc.x = minLoc.x + tempTemplateWidth;
                tempLoc.y = minLoc.y + tempTemplateHeight;
                rectangle(destinationImage, minLoc, tempLoc, CV_RGB(0, 255, 0), 1, 8, 0);
            }
        }
    }
    return destinationImage;
}
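The per-pixel math in both versions is an inverse rotation about the template centre: destination pixel (x, y) samples source pixel (rotateX, rotateY). Pulled out as a standalone function (a hypothetical helper, same arithmetic as the loops above), it can be sanity-checked without OpenCV:

```cpp
#include <cmath>

// Inverse rotation about the centre of a w-by-h template: for a destination
// pixel (x, y), computes the source pixel (srcX, srcY) to sample so the
// output appears rotated by `radian`. Mirrors the rotateX/rotateY lines in
// the matching loops above.
void inverseRotate(float x, float y, float radian, int w, int h,
                   int *srcX, int *srcY) {
    *srcY = (int)(-sinf(radian) * (x - w / 2.0f)
                  + cosf(radian) * (y - h / 2.0f)) + h / 2;
    *srcX = (int)( cosf(radian) * (x - w / 2.0f)
                  + sinf(radian) * (y - h / 2.0f)) + w / 2;
}
```

Note the (int) truncation: this is nearest-pixel sampling with no interpolation, so rotated templates pick up aliasing, which is one reason SQDIFF_NORMED scores degrade at non-axis-aligned angles. In the C++ interface, cv::getRotationMatrix2D plus cv::warpAffine is the idiomatic replacement for this hand-rolled loop.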

OpenGL ES on iOS Adding Vertices to Buffers

In my app I am using OpenGL ES to render a file downloaded from the internet, which is then parsed into a vertex array, so I have to load vertex and normal data after launch. I am new to OpenGL ES, but I have been reading and learning. I have set up a vertex buffer and a normal buffer, and they seem to be working, but I think I am putting the vertex and normal data into the buffers incorrectly: when I load the view there is an object that vaguely resembles the shape I want, but with triangles veering off in various directions and parts of the shape missing. Here is my code for loading the data into the buffers:
for (int i = 0; i < triangle_cnt; i++) {
    int base = i * 18;
    GLfloat x1 = vertices[base];
    GLfloat y1 = vertices[base + 1];
    GLfloat z1 = vertices[base + 2];
    GLfloat x2 = vertices[base + 6];
    GLfloat y2 = vertices[base + 7];
    GLfloat z2 = vertices[base + 8];
    GLfloat x3 = vertices[base + 12];
    GLfloat y3 = vertices[base + 13];
    GLfloat z3 = vertices[base + 14];

    vector_t normal;
    vector_t U;
    vector_t V;
    GLfloat length;

    U.x = x2 - x1;
    U.y = y2 - y1;
    U.z = z2 - z1;
    V.x = x3 - x1;
    V.y = y3 - y1;
    V.z = z3 - z1;

    normal.x = U.y * V.z - U.z * V.y;
    normal.y = U.z * V.x - U.x * V.z;
    normal.z = U.x * V.y - U.y * V.x;
    length = normal.x * normal.x + normal.y * normal.y + normal.z * normal.z;
    length = sqrt(length);

    base = i * 9;
    verticesBuff[base] = x1;
    verticesBuff[base + 1] = y1;
    verticesBuff[base + 2] = z1;
    normalsBuff[base] = normal.x;
    normalsBuff[base + 1] = normal.y;
    normalsBuff[base + 2] = normal.z;

    verticesBuff[base + 3] = x2;
    verticesBuff[base + 4] = y2;
    verticesBuff[base + 5] = z2;
    normalsBuff[base + 3] = normal.x;
    normalsBuff[base + 4] = normal.y;
    normalsBuff[base + 5] = normal.z;

    verticesBuff[base + 6] = x3;
    verticesBuff[base + 7] = y3;
    verticesBuff[base + 8] = z3;
    normalsBuff[base + 6] = normal.x;
    normalsBuff[base + 7] = normal.y;
    normalsBuff[base + 8] = normal.z;

    fprintf(stderr, "%ff, %ff, %ff, %ff, %ff, %ff, \n", x1, y1, z1, normal.x, normal.y, normal.z);
    fprintf(stderr, "%ff, %ff, %ff, %ff, %ff, %ff, \n", x2, y2, z2, normal.x, normal.y, normal.z);
    fprintf(stderr, "%ff, %ff, %ff, %ff, %ff, %ff, \n", x3, y3, z3, normal.x, normal.y, normal.z);
}
And here is the code I use for using those buffers:
- (void)setupGL {
    [EAGLContext setCurrentContext:self.context];
    [self loadShaders];

    self.effect = [[GLKBaseEffect alloc] init];
    self.effect.light0.enabled = GL_TRUE;
    self.effect.light0.diffuseColor = GLKVector4Make(.05f, .55f, 1.0f, 1.0f);

    glEnable(GL_DEPTH_TEST);

    glGenVertexArraysOES(1, &_vertexArray);
    glBindVertexArrayOES(_vertexArray);

    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, vertCount*sizeof(verticesBuff)*3*2, NULL, GL_STATIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(verticesBuff) * vertCount * 3, verticesBuff);
    glBufferData(GL_ARRAY_BUFFER, vertCount*sizeof(normalsBuff)*3*2, NULL, GL_STATIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, sizeof(GLfloat) * vertCount * 3, sizeof(normalsBuff) * vertCount * 3, normalsBuff);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));

    glBindVertexArrayOES(0);

    _rotMatrix = GLKMatrix4Identity;
    _quat = GLKQuaternionMake(0, 0, 0, 1);
    _quatStart = GLKQuaternionMake(0, 0, 0, 1);
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.78f, 0.78f, 0.78f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArrayOES(_vertexArray);

    // Render the object with GLKit
    [self.effect prepareToDraw];
    glVertexPointer(3, GL_FLOAT, 0, verticesBuff);
    glNormalPointer(GL_FLOAT, 0, normalsBuff);
    glDrawArrays(GL_TRIANGLES, 0, vertCount); //*******************************

    // Render the object again with ES2
    glUseProgram(_program);
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
    glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
    glDrawArrays(GL_TRIANGLES, 0, vertCount);
}
If I take those logs and paste them into the vertex array of a sample app built from Apple's OpenGL ES template, the object renders beautifully, so I have deduced that I must be loading the vertex data incorrectly.
Can someone help me understand what I am doing wrong when entering the vertices and normals? Any help is appreciated.
Also, this is what my render looks like:
And this is, in shape at least, what it should look like:
It's a bit messy and there might be quite a few problems. The one you are currently facing seems to be:
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
The stride parameter (24) should be sizeof(GLfloat) * 3, and the same goes for the normals. This is because you no longer use an interleaved vertex structure; your position coordinates are now tightly packed (which also means you can pass 0 as the stride). The result of this bug is that only every second vertex is used and the others are dropped, and once the positions are exhausted, drawing continues into the normals. Also, with the normal pointer and offset set this way, you read beyond the buffer, which should either crash or read from some other resource.
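To make the stride arithmetic concrete (plain C++ constants, not GL calls): the 24 in the question is the stride of an interleaved {position, normal} vertex, which no longer matches the separate, tightly packed regions the buffer now contains.

```cpp
#include <cstddef>

// Stride bookkeeping for the two layouts discussed above. 24 bytes is the
// per-vertex stride of an interleaved {x,y,z, nx,ny,nz} record (6 floats).
// With positions and normals stored in separate tightly packed regions, the
// per-attribute stride is 3 floats (or 0, which tells GL "tightly packed").
constexpr size_t interleavedStride = 6 * sizeof(float);  // pos xyz + normal xyz
constexpr size_t packedStride      = 3 * sizeof(float);  // xyz only
```

Reading packed 12-byte vertices with a 24-byte stride explains the symptom exactly: every other vertex is skipped, halving the geometry and mangling the triangles.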