OpenGL in Dart not showing triangle

I've written a simple "hello world triangle" in OpenGL/C++, and it works perfectly. Then, once I was sure that the code worked, I decided to switch to Dart and convert all my code.
These are the plugins that I've used: OpenGL - GLFW (MinGW x64 DLL)
And this is the OpenGL code without all the GLFW abstraction:
int shader;
@override
void onCreate() {
List<double> pos = [
-0.5, -0.5,
0.0, 0.5,
0.5, -0.5,
];
Pointer<Float> pPos = allocate<Float>(count: pos.length);
for(int i = 0; i < pos.length; i++)
pPos.elementAt(i).value = pos[i];
Pointer<Uint32> buffer = allocate();
glGenBuffers(1, buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer.value);
glBufferData(GL_ARRAY_BUFFER, 6 * sizeOf<Float>(), pPos, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeOf<Float>(), 0);
glEnableVertexAttribArray(0);
String vs =
"""
#version 330 core
layout(location = 0) in vec4 position;
void main()
{
gl_Position = position;
}
""";
String fs =
"""
#version 330 core
layout(location = 0) out vec4 color;
void main()
{
color = vec4(1.0, 0.0, 0.0, 1.0);
}
""";
shader = _createShader(vs, fs);
}
@override
void onUpdate() {
glDrawArrays(GL_TRIANGLES, 0, 3);
}
These are the functions to compile the shaders:
int _compileShader(String source, int type) {
int id = glCreateShader(type);
Pointer<Pointer<Utf8>> ptr = allocate<Pointer<Utf8>>();
ptr.elementAt(0).value = NativeString.fromString(source).cast();
glShaderSource(id, 1, ptr, nullptr);
glCompileShader(id);
Pointer<Int32> result = allocate<Int32>();
glGetShaderiv(id, GL_COMPILE_STATUS, result);
if(result.value == GL_FALSE) {
Pointer<Int32> length = allocate<Int32>();
glGetShaderiv(id, GL_INFO_LOG_LENGTH, length);
Pointer<Utf8> message = allocate<Utf8>(count: length.value);
glGetShaderInfoLog(id, length.value, length, message);
glDeleteShader(id);
print("Failed to compile ${type == GL_VERTEX_SHADER ? "vertex" : "fragment"} shader");
print(message.ref);
return 0;
}
return id;
}
int _createShader(String vertexShader, String fragmentShader) {
int program = glCreateProgram();
int vs = _compileShader(vertexShader, GL_VERTEX_SHADER);
int fs = _compileShader(fragmentShader, GL_FRAGMENT_SHADER);
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);
glValidateProgram(program);
glDeleteShader(vs);
glDeleteShader(fs);
return program;
}
Since this exact code works perfectly in C++, the only thing I can think is that I've messed up some pointer with ffi. Can you find any mistake I don't see?
EDIT:
One might think that OpenGL cannot interact with GLFW, but glClear does work.
This is the GitHub repository if you need to investigate more code.

I've been doing something very similar and have the same problem. I thought I'd tracked it down to the call to glDrawArrays, which crashed with a blue screen of death on Windows and a "thread stuck in device driver" error message. However, with further investigation I believe it is caused by the attribute array:
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeOf<Float>(), 0);
glEnableVertexAttribArray(0);
Commenting out those lines lets it run without crashing but no triangle appears. Uncommenting results in the BSOD.
I also found an interesting post from 2013 which appears to be relevant: Link
That's as far as I know at the moment.
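Not something this thread confirms, but two standard things worth ruling out before blaming the FFI layer: check glGetError after the attribute setup, and make sure a vertex array object is bound if the context is a 3.3 core profile (core profile rejects glVertexAttribPointer/glDrawArrays without a bound VAO, typically with GL_INVALID_OPERATION). In plain C/C++ terms the check would look like the sketch below; the Dart bindings would need to expose the equivalent calls.
// Debugging sketch in C/C++ terms (assumes a current OpenGL 3.3 core context;
// glGenVertexArrays / glBindVertexArray would have to exist in the Dart bindings too).
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                 // core profile requires a bound VAO before attribute setup
glBindBuffer(GL_ARRAY_BUFFER, buffer);  // the existing vertex buffer from the question
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (const void*)0);
glEnableVertexAttribArray(0);
GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    // 0x0502 is GL_INVALID_OPERATION, the usual symptom of a missing VAO in core profile
    fprintf(stderr, "GL error after attribute setup: 0x%04x\n", err);
}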

Related

Can a layout(location=n) out skip an index for drawBuffers in WebGL?

I'm working on MRT in my graphics engine.
An interesting point I'm at (and aim to fix) is that my generated fragment shader spits out:
layout(location = 0) out vec4 thing1;
layout(location = 2) out vec4 thing2;
The drawBuffers call on the application side looks something like this:
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.NONE, gl.COLOR_ATTACHMENT1]);
However, I'm getting an error:
WebGL: INVALID_OPERATION: drawBuffers: COLOR_ATTACHMENTi_EXT or NONE
So obviously, this would appear not to be allowed. From the documentation I've read in the OpenGL wiki article discussing it:
https://www.khronos.org/opengl/wiki/Fragment_Shader
It states, roughly, that the layout location specified refers to the array index specified in the drawBuffers call. So, in theory, I would have thought this shader configuration would be valid.
What am I missing from my understanding that makes this not work?
I'm asking mostly for understanding, not to fix my program; my generator will correct the indices when I'm done so that they are 'correct' with no location index skipping.
Update: As noted below, you CAN skip layout locations in the shader. My issue was the improper formatting of the drawBuffers call, where I had COLOR_ATTACHMENT1 in the slot where only COLOR_ATTACHMENT2 is valid.
This is wrong
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.NONE, gl.COLOR_ATTACHMENT1]);
the i-th attachment must be gl.NONE or gl.COLOR_ATTACHMENTi
so it has to be this
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.NONE, gl.COLOR_ATTACHMENT2]);
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
const vs = `#version 300 es
void main() {
gl_Position = vec4(0, 0, 0, 1);
gl_PointSize = 100.0;
}
`;
const fs = `#version 300 es
precision highp float;
layout(location = 0) out vec4 thing1;
layout(location = 2) out vec4 thing2;
void main () {
thing1 = vec4(1, 0, 0, 1);
thing2 = vec4(0, 0, 1, 1);
}
`;
const prg = twgl.createProgram(gl, [vs, fs]);
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
createTextureAndAttach(gl, gl.COLOR_ATTACHMENT0);
createTextureAndAttach(gl, gl.COLOR_ATTACHMENT2);
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.NONE,
gl.COLOR_ATTACHMENT2,
]);
const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status !== gl.FRAMEBUFFER_COMPLETE) {
console.error("can't render to this framebuffer combo");
return;
}
gl.useProgram(prg);
gl.viewport(0, 0, 1, 1);
gl.drawArrays(gl.POINTS, 0, 1);
checkError();
read(gl.COLOR_ATTACHMENT0);
read(gl.COLOR_ATTACHMENT2);
checkError();
function checkError() {
const err = gl.getError();
if (err) {
console.error(twgl.glEnumToString(gl, err));
}
}
function read(attachmentPoint) {
gl.readBuffer(attachmentPoint);
const pixel = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
console.log(Array.from(pixel).join(','));
}
function createTextureAndAttach(gl, attachmentPoint) {
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.framebufferTexture2D(gl.FRAMEBUFFER, attachmentPoint, gl.TEXTURE_2D, tex, 0);
}
}
main();
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
Note: the OpenGL docs are often wrong and/or misleading as a reference for WebGL. For WebGL2 you need to reference the OpenGL ES 3.0 spec.

WebGL rendering outside of browser paint time

We are building a WebGL application that has some high render-load objects. Is there a way we can render those objects outside of browser paint time, i.e. in the background? We don't want our FPS going down, and breaking up our rendering process is possible (to split it between frames).
Three ideas come to mind.
You can render to a texture via a framebuffer over many frames; when you're done, you render that texture to the canvas.
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texcoord;
void main() {
gl_Position = position;
v_texcoord = texcoord;
}
`;
const fs = `
precision highp float;
uniform sampler2D tex;
varying vec2 v_texcoord;
void main() {
gl_FragColor = texture2D(tex, v_texcoord);
}
`;
// compile shader, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: [
-1, -1,
1, -1,
-1, 1,
-1, 1,
1, -1,
1, 1,
],
},
texcoord: {
numComponents: 2,
data: [
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1,
],
},
});
// create a framebuffer with a texture and depth buffer
// same size as canvas
// gl.createTexture, gl.texImage2D, gl.createFramebuffer
// gl.framebufferTexture2D
const framebufferInfo = twgl.createFramebufferInfo(gl);
const infoElem = document.querySelector('#info');
const numDrawSteps = 16;
let drawStep = 0;
let time = 0;
// draw over several frames. Return true when ready
function draw() {
// draw to texture
// gl.bindFrambuffer, gl.viewport
twgl.bindFramebufferInfo(gl, framebufferInfo);
if (drawStep == 0) {
// on the first step clear and record time
gl.disable(gl.SCISSOR_TEST);
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
time = performance.now() * 0.001;
}
// this represents drawing something.
gl.enable(gl.SCISSOR_TEST);
const halfWidth = framebufferInfo.width / 2;
const halfHeight = framebufferInfo.height / 2;
const a = time * 0.1 + drawStep
const x = Math.cos(a ) * halfWidth + halfWidth;
const y = Math.sin(a * 1.3) * halfHeight + halfHeight;
gl.scissor(x, y, 16, 16);
gl.clearColor(
drawStep / 16,
drawStep / 6 % 1,
drawStep / 3 % 1,
1);
gl.clear(gl.COLOR_BUFFER_BIT);
drawStep = (drawStep + 1) % numDrawSteps;
return drawStep === 0;
}
let frameCount = 0;
function render() {
++frameCount;
infoElem.textContent = frameCount;
if (draw()) {
// draw to canvas
// gl.bindFramebuffer, gl.viewport
twgl.bindFramebufferInfo(gl, null);
gl.disable(gl.DEPTH_TEST);
gl.disable(gl.BLEND);
gl.disable(gl.SCISSOR_TEST);
gl.useProgram(programInfo.program);
// gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// gl.uniform...
twgl.setUniformsAndBindTextures(programInfo, {
tex: framebufferInfo.attachments[0],
});
// draw the quad
gl.drawArrays(gl.TRIANGLES, 0, 6);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
<canvas></canvas>
<div id="info"></div>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
You can make 2 canvases: a WebGL canvas that is not in the DOM. You render to it over many frames, and when you're done you draw it to a 2D canvas with ctx.drawImage(webglCanvas, ...). This is basically the same as #1 except you're letting the browser handle the "render that texture to a canvas" part.
const ctx = document.querySelector('canvas').getContext('2d');
const gl = document.createElement('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texcoord;
void main() {
gl_Position = position;
v_texcoord = texcoord;
}
`;
const fs = `
precision highp float;
uniform sampler2D tex;
varying vec2 v_texcoord;
void main() {
gl_FragColor = texture2D(tex, v_texcoord);
}
`;
// compile shader, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
const infoElem = document.querySelector('#info');
const numDrawSteps = 16;
let drawStep = 0;
let time = 0;
// draw over several frames. Return true when ready
function draw() {
if (drawStep == 0) {
// on the first step clear and record time
gl.disable(gl.SCISSOR_TEST);
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
time = performance.now() * 0.001;
}
// this represents drawing something.
gl.enable(gl.SCISSOR_TEST);
const halfWidth = gl.canvas.width / 2;
const halfHeight = gl.canvas.height / 2;
const a = time * 0.1 + drawStep
const x = Math.cos(a ) * halfWidth + halfWidth;
const y = Math.sin(a * 1.3) * halfHeight + halfHeight;
gl.scissor(x, y, 16, 16);
gl.clearColor(
drawStep / 16,
drawStep / 6 % 1,
drawStep / 3 % 1,
1);
gl.clear(gl.COLOR_BUFFER_BIT);
drawStep = (drawStep + 1) % numDrawSteps;
return drawStep === 0;
}
let frameCount = 0;
function render() {
++frameCount;
infoElem.textContent = frameCount;
if (draw()) {
// draw to canvas
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.drawImage(gl.canvas, 0, 0);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
<canvas></canvas>
<div id="info"></div>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
You can use OffscreenCanvas and render in a worker. This has only shipped in Chrome though.
Note that if you DOS the GPU (give the GPU too much work) you can still affect the responsiveness of the main thread because most GPUs do not support pre-emptive multitasking. So, if you have a lot of really heavy work then split it up into smaller tasks.
As an example, if you took one of the heaviest shaders from shadertoy.com, one that runs at say 0.5 fps when rendered at 1920x1080, even offscreen it will force the entire machine to run at 0.5 fps. To fix that you'd need to render smaller portions over several frames. If it's running at 0.5 fps, that suggests you need to split it up into at least 120 smaller parts, maybe more, to keep the main thread responsive, and at 120 smaller parts you'd only see the results every 2 seconds.
In fact, trying it out shows some issues. Here's Iq's Happy Jumping example drawn over 960 frames. It still can't keep 60fps on my late 2018 MacBook Air even though it's rendering only 2160 pixels a frame (2 columns of a 1920x1080 canvas). The issue is likely that some parts of the scene have to recurse deeply and there is no way of knowing beforehand which parts of the scene those will be. That's one reason why shadertoy-style shaders using signed distance fields are more of a toy (hence shaderTOY) and not actually a production-style technique.
Anyway, the point of that is if you give the GPU too much work you'll still get an unresponsive machine.

GLSL strange IF behaviour

I am writing an iPad app running on an iPad Retina using OpenGL ES 3.0.
I am trying to use transform feedback for the first time and the vertex shader is acting really strangely. It seems that the boolean expression inside the if statement always returns true. For example, this shader:
#version 300 es
in ivec2 coords;
out highp uint vertexID;
uniform sampler2D depthTexture;
uniform int yResolution;
void main () {
ivec2 textCoords = coords;
textCoords.y = yResolution - 1 - coords.y;
bool val = true;
bool val2 = false;
if (val == val2) {
vertexID = uint(6);
return;
}
vertexID = uint(4);
return;
}
When I map the VBO and check the values I get 6!
I'll post the drawing and mapping code below:
glUseProgram(_ValidInputPixelsProg);
uniforms[UNIFORM_VALID_INPUT_PIXEL_DEPTH_TEXTURE] = glGetUniformLocation(_ValidInputPixelsProg, "depthTexture");
uniforms[UNIFORM_VALID_INPUT_PIXEL_Y_RESOLUTION] = glGetUniformLocation(_ValidInputPixelsProg, "yResolution");
glUniform1i(uniforms[UNIFORM_VALID_INPUT_PIXEL_DEPTH_TEXTURE], 10);
glUniform1i(uniforms[UNIFORM_VALID_INPUT_PIXEL_Y_RESOLUTION], RESOLUTION_Y);
glEnable(GL_RASTERIZER_DISCARD);
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, inputTFB);
glBindVertexArray(validPixelsVA);
glBindBuffer(GL_ARRAY_BUFFER, _inputTFBBuffer);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, RESOLUTION_INNER_XY);
glEndTransformFeedback();
glBindVertexArray(0);
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
glDisable(GL_RASTERIZER_DISCARD);
glFlush();
glBindBuffer(GL_ARRAY_BUFFER, indexValidInputPixels);
GLuint* ptr = (GLuint*)(glMapBufferRange(GL_ARRAY_BUFFER, 0, RESOLUTION_INNER_XY * sizeof(GLuint), GL_MAP_READ_BIT));
ptr += 320 + 120;
NSLog(@"\nshader value = %u \n", *ptr);
glUnmapBuffer(GL_ARRAY_BUFFER);
Anybody know what I am doing wrong?
Have the if statement check against a non-variable:
if (val == false)
{
vertexID = uint(6);
return;
}

OpenGL 3.3 rendering nothing

I'm trying to create my own lib to simplify code, so I'm rewriting the tutorials that can be found on the web using my lib, but I'm having some trouble and I don't know why it's rendering nothing.
So this is my main file:
#include "../../lib/OpenGLControl.h"
#include "../../lib/Object.h"
#include <iostream>
using namespace std;
using namespace sgl;
int main(){
OpenGLControl sglControl;
sglControl.initOpenGL("02 - My First Triangle",1024,768,3,3);
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
// triangle configuration
vector<glm::vec3> vertices;
vertices.push_back(glm::vec3(-1.0f,-1.0f,0.0f));
vertices.push_back(glm::vec3(1.0f,-1.0f,0.0f));
vertices.push_back(glm::vec3(0.0f,1.0f,0.0f));
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
Object triangle(vertices);
do{
glClear(GL_COLOR_BUFFER_BIT);
triangle.render(GL_TRIANGLES);
glfwSwapBuffers();
}
while( glfwGetKey( GLFW_KEY_ESC ) != GLFW_PRESS &&
glfwGetWindowParam( GLFW_OPENED ) );
glDeleteVertexArrays(1, &VertexArrayID);
glfwTerminate();
return 0;
}
And these are my Object class functions:
#include "../lib/Object.h"
sgl::Object::Object(){
this->hasColor = false;
}
sgl::Object::Object(std::vector<glm::vec3> vertices){
for(int i = 0; i < vertices.size(); i++)
this->vertices.push_back(vertices[i]);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, this->vertices.size()*sizeof(glm::vec3),&vertices[0], GL_STATIC_DRAW);
}
sgl::Object::~Object(){
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDeleteBuffers(1,&(this->vertexBuffer));
glDeleteBuffers(1,&(this->colorBuffer));
}
void sgl::Object::render(GLenum mode){
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glVertexAttribPointer(
0, // The attribute we want to configure
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
cout<<vertices.size()<<endl;
glDrawArrays(mode, 0, vertices.size());
glDisableVertexAttribArray(0);
}
void sgl::Object::setColor(std::vector<glm::vec3> color){
for(int i = 0; i < color.size(); i++)
this->color.push_back(color[i]);
glGenBuffers(1, &(this->colorBuffer));
glBindBuffer(GL_ARRAY_BUFFER, this->colorBuffer);
glBufferData(GL_ARRAY_BUFFER, color.size()*sizeof(glm::vec3),&color[0], GL_STATIC_DRAW);
this->hasColor = true;
}
void sgl::Object::setVertices(std::vector<glm::vec3> vertices){
for(int i = 0; i < vertices.size(); i++)
this->vertices.push_back(vertices[i]);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, vertices.size()*sizeof(glm::vec3),&vertices[0], GL_STATIC_DRAW);
}
The tutorial that I'm rewriting is this one:
/*
Copyright 2010 Etay Meiri
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Tutorial 03 - First triangle
*/
#include <stdio.h>
#include <GL/glew.h>
#include <GL/freeglut.h>
#include "math_3d.h"
GLuint VBO;
static void RenderSceneCB()
{
glClear(GL_COLOR_BUFFER_BIT);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
glutSwapBuffers();
}
static void InitializeGlutCallbacks()
{
glutDisplayFunc(RenderSceneCB);
}
static void CreateVertexBuffer()
{
Vector3f Vertices[3];
Vertices[0] = Vector3f(-1.0f, -1.0f, 0.0f);
Vertices[1] = Vector3f(1.0f, -1.0f, 0.0f);
Vertices[2] = Vector3f(0.0f, 1.0f, 0.0f);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
glutInitWindowSize(1024, 768);
glutInitWindowPosition(100, 100);
glutCreateWindow("Tutorial 03");
InitializeGlutCallbacks();
// Must be done after glut is initialized!
GLenum res = glewInit();
if (res != GLEW_OK) {
fprintf(stderr, "Error: '%s'\n", glewGetErrorString(res));
return 1;
}
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
CreateVertexBuffer();
glutMainLoop();
return 0;
}
If someone can find the error, please help me!
I found the problem: in the constructor, when I call the function glBufferData, I was sending the wrong data. The right way to do it is this:
sgl::Object::Object(std::vector<glm::vec3> vertices){
for(int i = 0; i < vertices.size(); i++)
this->vertices.push_back(vertices[i]);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, this->vertices.size()*sizeof(glm::vec3),&(this->vertices[0]), GL_STATIC_DRAW);
}

GLDrawElements Crashing Program

I am trying to render some vertices to an OpenGL ES window. My program keeps crashing on the glDrawElements command. I am trying to pass some VBOs for the vertices "bindPosition", "bindNormal" and "Index" of type GLfloat.
Here is my rendering method:
- (void)render:(CADisplayLink*)displayLink {
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
glClearColor(0.3, 0.5, 0.9, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
GLKMatrix4 modelView = GLKMatrix4Make(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, -10, -30, 1);
GLKMatrix4 projectionView = GLKMatrix4Make(3.6213202476501465, 0, 0, 0, 0, 2.4142136573791504, 0, 0, 0, 0, -1.0020020008087158, -1, 0, -24.142135620117188, 28.05805778503418, 30);
// Upload Transforms
glUniformMatrix4fv(_modelViewMatrixUniform, 1, 0, modelView.m);
glUniformMatrix4fv(_modelViewProjMatrixUniform, 1, 0, projectionView.m);
// Upload Bones
glUniformMatrix4fv(_bonesUniform, 1, 0, bones);
glBindBuffer(GL_ARRAY_BUFFER, _bindPositionBuffer);
glVertexAttribPointer(_VertexPositionAttribute, 3, GL_FLOAT, GL_FALSE, sizeof(bindPosition), 0);
glBindBuffer(GL_ARRAY_BUFFER, _bindNormalBuffer);
glVertexAttribPointer(_VertexNormalAttribute, 3, GL_FLOAT, GL_FALSE, sizeof(bindNormal), 0);
// 3
glBindBuffer(GL_ARRAY_BUFFER, _indexBuffer);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(Index)/sizeof(Index[0]), GL_UNSIGNED_SHORT, 0);
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
Setting up VBOs:
- (void)setupVBOs {
glGenBuffers(1, &_bindPositionBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _bindPositionBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(bindPosition), bindPosition, GL_STATIC_DRAW);
glGenBuffers(1, &_bindNormalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _bindNormalBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(bindNormal), bindNormal, GL_STATIC_DRAW);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Index), Index, GL_STATIC_DRAW);
}
Compiling Shaders:
- (void)compileShaders {
// 1
GLuint vertexShader = [self compileShader:@"SimpleVertex" withType:GL_VERTEX_SHADER];
GLuint fragmentShader = [self compileShader:@"SimpleFragment" withType:GL_FRAGMENT_SHADER];
// 2
GLuint programHandle = glCreateProgram();
glAttachShader(programHandle, vertexShader);
glAttachShader(programHandle, fragmentShader);
glLinkProgram(programHandle);
// 3
GLint linkSuccess;
glGetProgramiv(programHandle, GL_LINK_STATUS, &linkSuccess);
if (linkSuccess == GL_FALSE) {
GLchar messages[256];
glGetProgramInfoLog(programHandle, sizeof(messages), 0, &messages[0]);
NSString *messageString = [NSString stringWithUTF8String:messages];
NSLog(@"%@", messageString);
exit(1);
}
// 4
glUseProgram(programHandle);
// 5
// Uniform Locations
_bonesUniform = glGetUniformLocation(programHandle, "Bones[0]");
_modelViewMatrixUniform = glGetUniformLocation(programHandle, "ModelViewMatrix");
_modelViewProjMatrixUniform = glGetUniformLocation(programHandle, "ModelViewProjMatrix");
_textureUniform = glGetUniformLocation(programHandle, "Texture");
// Attribute Locations
_VertexBoneWeightAttribute = glGetAttribLocation(programHandle, "VertexBoneWeight");
_VertexBoneIDAttribute = glGetAttribLocation(programHandle, "VertexBoneID");
_VertexTexCoord0Attribute = glGetAttribLocation(programHandle, "VertexTexCoord0");
_VertexNormalAttribute = glGetAttribLocation(programHandle, "VertexNormal");
_VertexPositionAttribute = glGetAttribLocation(programHandle, "VertexPosition");
// Enable vertex pointers
glEnableVertexAttribArray(_VertexBoneWeightAttribute);
glEnableVertexAttribArray(_VertexBoneIDAttribute);
glEnableVertexAttribArray(_VertexTexCoord0Attribute);
glEnableVertexAttribArray(_VertexNormalAttribute);
glEnableVertexAttribArray(_VertexPositionAttribute);
}
Here are my shaders:
attribute vec3 VertexPosition;
attribute vec3 VertexNormal;
attribute vec2 VertexTexCoord0;
attribute vec4 VertexBoneID;
attribute vec4 VertexBoneWeight;
uniform mat4 ModelViewMatrix;
uniform mat4 ModelViewProjMatrix;
uniform vec4 Bones[222];
varying vec3 Normal;
varying vec2 TexCoord0;
void main(void)
{
TexCoord0 = VertexTexCoord0;
// Build 4x3 skinning matrix.
vec4 r0 = Bones[int(VertexBoneID.x) * 3 + 0] * VertexBoneWeight.x;
vec4 r1 = Bones[int(VertexBoneID.x) * 3 + 1] * VertexBoneWeight.x;
vec4 r2 = Bones[int(VertexBoneID.x) * 3 + 2] * VertexBoneWeight.x;
r0 += Bones[int(VertexBoneID.y) * 3 + 0] * VertexBoneWeight.y;
r1 += Bones[int(VertexBoneID.y) * 3 + 1] * VertexBoneWeight.y;
r2 += Bones[int(VertexBoneID.y) * 3 + 2] * VertexBoneWeight.y;
r0 += Bones[int(VertexBoneID.z) * 3 + 0] * VertexBoneWeight.z;
r1 += Bones[int(VertexBoneID.z) * 3 + 1] * VertexBoneWeight.z;
r2 += Bones[int(VertexBoneID.z) * 3 + 2] * VertexBoneWeight.z;
r0 += Bones[int(VertexBoneID.w) * 3 + 0] * VertexBoneWeight.w;
r1 += Bones[int(VertexBoneID.w) * 3 + 1] * VertexBoneWeight.w;
r2 += Bones[int(VertexBoneID.w) * 3 + 2] * VertexBoneWeight.w;
// Skin and transform position.
float px = dot(r0, vec4(VertexPosition, 1.0));
float py = dot(r1, vec4(VertexPosition, 1.0));
float pz = dot(r2, vec4(VertexPosition, 1.0));
gl_Position = ModelViewProjMatrix * vec4(px, py, pz, 1.0);
/* Skin and transform normal into view-space. We assume that the modelview matrix
doesn't contain a scale. Should really pass in the inverse-transpose. */
float nx = dot(r0, vec4(VertexNormal, 0.0));
float ny = dot(r1, vec4(VertexNormal, 0.0));
float nz = dot(r2, vec4(VertexNormal, 0.0));
Normal = normalize((ModelViewMatrix * vec4(nx, ny, nz, 0.0)).xyz);
}
Frag Shader:
#ifdef GL_ES
precision highp float;
#endif
uniform sampler2D Texture;
varying vec3 Normal;
varying vec2 TexCoord0;
void main(void)
{
// Ambient term.
vec3 lighting = vec3(0.5,0.5,0.5) * 0.7;
/* Very cheap lighting. Three directional lights, one shining slighting upwards to illuminate
underneath the chin, and then one each shining from the left and right. Light directions
are in view-space and follow the camera rotation by default. */
lighting += dot(Normal, normalize(vec3( 0.0, -0.2, 0.8))) * vec3(0.8, 0.8, 0.6) * 0.6; // Shines forwards and slightly upwards.
lighting += dot(Normal, normalize(vec3(-0.8, 0.4, 0.8))) * vec3(0.8, 0.8, 0.6) * 0.4; // Shines forwards and from left to right.
lighting += dot(Normal, normalize(vec3( 0.8, 0.4, 0.8))) * vec3(0.8, 0.8, 0.6) * 0.4; // Shines forwards and from right to left.
//gl_FragColor = vec4(Normal * 0.5 + vec3(0.5), 1.0);
gl_FragColor = vec4(texture2D(Texture, TexCoord0).xyz * lighting, 1.0);
}
Can anyone see anything in my render method that I have done wrong?
If you put GLfloat values in your GL_ELEMENT_ARRAY_BUFFER, that could be a problem. The index data should be unsigned bytes or unsigned shorts (GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT). http://www.khronos.org/opengles/sdk/docs/man/xhtml/glDrawElements.xml
When binding the index buffer in your render method, you should use GL_ELEMENT_ARRAY_BUFFER instead of GL_ARRAY_BUFFER.
Your call to glDrawElements doesn't look correct. Nor does your call to glBufferData.
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Index), Index, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(Index)/sizeof(Index[0]), GL_UNSIGNED_SHORT, 0);
It looks like you're passing a struct to glBufferData rather than an array of indices. Unless the variable called Index actually is an array of indices, but it doesn't seem that way from how you're using it. You need to build an array of vertex indices and pass that in to tell the GPU what you want to draw.
The second parameter to glDrawElements should be the number of elements you want to render. I had a similar problem in this question, where someone helpfully pointed out that I was making the same mistake. Hopefully it will be helpful to you too.
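Putting those suggestions together, the element-buffer bind and the draw call in the render method would look something like this minimal sketch. It assumes Index really is a plain GLushort array (which GL_UNSIGNED_SHORT implies, but the question doesn't show its declaration); if Index is a struct or a pointer, the sizeof arithmetic must be replaced with an explicit index count.
// Sketch of the suggested fix (assumes a declaration like: GLushort Index[] = { ... };)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);  // element array target, not GL_ARRAY_BUFFER
glDrawElements(GL_TRIANGLE_STRIP,
               sizeof(Index) / sizeof(Index[0]),      // number of indices; only valid for a real array
               GL_UNSIGNED_SHORT,                     // must match the type stored in the buffer
               0);                                    // byte offset into the bound element buffer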
