WebGL got weird drawing (loop not closed) with gl.TRIANGLES [closed]

Closed 2 years ago. This question was caused by a typo and is not accepting answers; it was resolved in a way less likely to help future readers.
I am trying to use gl.TRIANGLES to draw an "I" shape. I meshed it into 24 triangles: the first 22 give me the shape below, but whenever I add the vertices for the last two triangles I get the weird shape shown after it. I double-checked the coordinates of the last two triangles and they are correct, so I am not sure what is wrong in my code, which I have pasted here.
shape "I" with 22 triangles
weird shape "I" with 24 triangles
/**
 * @file A simple WebGL example drawing a triangle with colors
 * @author Eric Shaffer <shaffer1@eillinois.edu>
 */

/** @global The WebGL context */
var gl;
/** @global The HTML5 canvas we draw on */
var canvas;
/** @global A simple GLSL shader program */
var shaderProgram;
/** @global The WebGL buffer holding the triangle */
var vertexPositionBuffer;
/** @global The WebGL buffer holding the vertex colors */
var vertexColorBuffer;
/**
 * Creates a context for WebGL
 * @param {element} canvas WebGL canvas
 * @return {Object} WebGL context
 */
function createGLContext(canvas) {
  var context = null;
  context = canvas.getContext("webgl2");
  if (context) {
    context.viewportWidth = canvas.width;
    context.viewportHeight = canvas.height;
  } else {
    alert("Failed to create WebGL context!");
  }
  return context;
}
/**
 * Loads shaders
 * @param {string} id ID string for the shader to load (either the vertex or the fragment shader)
 */
function loadShaderFromDOM(id) {
  var shaderScript = document.getElementById(id);
  // If we don't find an element with the specified id
  // we do an early exit
  if (!shaderScript) {
    return null;
  }
  var shaderSource = shaderScript.text;
  var shader;
  if (shaderScript.type == "x-shader/x-fragment") {
    shader = gl.createShader(gl.FRAGMENT_SHADER);
  } else if (shaderScript.type == "x-shader/x-vertex") {
    shader = gl.createShader(gl.VERTEX_SHADER);
  } else {
    return null;
  }
  gl.shaderSource(shader, shaderSource);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    alert(gl.getShaderInfoLog(shader));
    return null;
  }
  return shader;
}
/**
 * Setup the fragment and vertex shaders
 */
function setupShaders() {
  var vertexShader = loadShaderFromDOM("shader-vs");
  var fragmentShader = loadShaderFromDOM("shader-fs");
  shaderProgram = gl.createProgram();
  gl.attachShader(shaderProgram, vertexShader);
  gl.attachShader(shaderProgram, fragmentShader);
  gl.linkProgram(shaderProgram);
  if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
    alert("Failed to setup shaders");
  }
  gl.useProgram(shaderProgram);
  shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
  shaderProgram.vertexColorAttribute = gl.getAttribLocation(shaderProgram, "aVertexColor");
  gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
  gl.enableVertexAttribArray(shaderProgram.vertexColorAttribute);
}
/**
 * Populate buffers with data
 */
function setupBuffers() {
  vertexPositionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexPositionBuffer);
  var triangleVertices = [
    /*triangle*/
    -0.7, -1.0, 0.0,
    -0.6, -0.9, 0.0,
    0.7, -1.0, 0.0,
    /*triangle*/
    -0.6, -0.9, 0.0,
    0.7, -1.0, 0.0,
    0.6, -0.9, 0.0,
    /*triangle*/
    0.7, -1.0, 0.0,
    0.7, -0.5, 0.0,
    0.6, -0.6, 0.0,
    /*triangle*/
    0.6, -0.9, 0.0,
    0.6, -0.6, 0.0,
    0.7, -1.0, 0.0,
    /*triangle*/
    0.6, -0.6, 0.0,
    0.7, -0.5, 0.0,
    0.3, -0.5, 0.0,
    /*triangle*/
    0.6, -0.6, 0.0,
    0.3, -0.5, 0.0,
    0.2, -0.6, 0.0,
    /*triangle*/
    0.3, -0.5, 0.0,
    0.2, -0.6, 0.0,
    0.3, 0.5, 0.0,
    /*triangle*/
    0.2, -0.6, 0.0,
    0.3, 0.5, 0.0,
    0.2, 0.6, 0.0,
    /*triangle*/
    0.3, 0.5, 0.0,
    0.2, 0.6, 0.0,
    0.6, 0.6, 0.0,
    /*triangle*/
    0.3, 0.5, 0.0,
    0.6, 0.6, 0.0,
    0.7, 0.5, 0.0,
    /*triangle*/
    0.6, 0.6, 0.0,
    0.7, 0.5, 0.0,
    0.7, 1.0, 0.0,
    /*triangle*/
    0.6, 0.6, 0.0,
    0.7, 1.0, 0.0,
    0.6, 0.9, 0.0,
    /*triangle*/
    0.7, 1.0, 0.0,
    0.6, 0.9, 0.0,
    -0.6, 0.9, 0.0,
    /*triangle*/
    0.7, 1.0, 0.0,
    -0.6, 0.9, 0.0,
    -0.7, 1.0, 0.0,
    /*triangle*/
    -0.7, 1.0, 0.0,
    -0.7, 0.5, 0.0,
    -0.6, 0.9, 0.0,
    /*triangle*/
    -0.7, 0.5, 0.0,
    -0.6, 0.6, 0.0,
    -0.6, 0.9, 0.0,
    /*triangle*/
    -0.7, 0.5, 0.0,
    -0.6, 0.6, 0.0,
    -0.3, 0.5, 0.0,
    /*triangle*/
    -0.6, 0.6, 0.0,
    -0.3, 0.5, 0.0,
    -0.2, 0.6, 0.0,
    /*triangle*/
    -0.3, 0.5, 0.0,
    -0.2, 0.6, 0.0,
    -0.3, -0.5, 0.0,
    /*triangle*/
    -0.2, 0.6, 0.0,
    -0.3, -0.5, 0.0,
    -0.2, -0.6, 0.0,
    /*triangle*/
    -0.3, -0.5, 0.0,
    -0.2, -0.6, 0.0,
    -0.7, -0.5, 0.0,
    /*triangle*/
    -0.2, -0.6, 0.0,
    -0.7, -0.5, 0.0,
    -0.6, -0.6, 0,0,
    /*triangle*/
    -0.7, -0.5, 0.0,
    -0.6, -0.6, 0,0,
    -0.7, -1.0, 0.0,
    /*triangle*/
    -0.6, -0.6, 0,0,
    -0.7, -1.0, 0.0,
    -0.6, -0.9, 0.0,
  ];
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(triangleVertices), gl.STATIC_DRAW);
  vertexPositionBuffer.itemSize = 3;
  vertexPositionBuffer.numberOfItems = 72;
  vertexColorBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexColorBuffer);
  var colors = [
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
  ];
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);
  vertexColorBuffer.itemSize = 4;
  vertexColorBuffer.numItems = 72;
}
/**
 * Draw model...render a frame
 */
function draw() {
  gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexPositionBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute,
                         vertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexColorBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexColorAttribute,
                         vertexColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, vertexPositionBuffer.numberOfItems);
}
/**
 * Startup function called from html code to start program.
 */
function startup() {
  console.log("No bugs so far...");
  canvas = document.getElementById("myGLCanvas");
  gl = createGLContext(canvas);
  setupShaders();
  setupBuffers();
  gl.clearColor(0.0, 0.0, 0.0, 1.0);
  draw();
}

You have 3 places in your vertex data where you have 0,0 instead of 0.0. In a JavaScript array literal, 0,0 is two elements (a 0 and another 0), so each typo inserts one extra value and shifts every coordinate after it by one slot, which is why the last triangles come out mangled.
var triangleVertices = [
  ...
  /*triangle*/
  -0.2, -0.6, 0.0,
  -0.7, -0.5, 0.0,
  -0.6, -0.6, 0,0,  // <==---
  /*triangle*/
  -0.7, -0.5, 0.0,
  -0.6, -0.6, 0,0,  // <==---
  -0.7, -1.0, 0.0,
  /*triangle*/
  -0.6, -0.6, 0,0,  // <==---
  -0.7, -1.0, 0.0,
  -0.6, -0.9, 0.0,
];
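
A quick way to catch this class of typo is to check, right after the array literal in setupBuffers(), that the flattened array holds exactly three floats per vertex. This is a minimal sketch added for illustration (not part of the original answer); the names mirror the question's code:

// Sanity check: 24 triangles * 3 vertices * 3 floats = 216 elements.
// With the three 0,0 typos the array is 219 elements long instead,
// because 0,0 contributes two array elements where 0.0 would be one.
var expectedVertexCount = 72;  // 24 triangles * 3 vertices
var itemSize = 3;              // x, y, z per vertex
if (triangleVertices.length !== expectedVertexCount * itemSize) {
  console.error("vertex array has " + triangleVertices.length +
                " elements, expected " + (expectedVertexCount * itemSize));
}

A check like this fails loudly instead of quietly feeding misaligned data to gl.bufferData, where every coordinate after the first stray comma is read one float off.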

Related

How to remove delay on start and end of animateKeyframes animation - Swift UIKit

Is there a way to remove the delay at the start and end of the animateKeyframes animation?
As you can see, there is a slight delay before the animation starts after tapping the Animate button, and also at the end of the animation. What I would like is for the animation to start as soon as the Animate button is tapped, since it is meant to provide feedback to the user.
Is this the normal behavior when using animateKeyframes with calculationModeCubic? Is there a way to make the animation start as soon as the button is tapped?
Sorry about the misspelling error (Aniamate).
Here is the code:
@IBAction func startAnimation(_ sender: Any) {
    addMyView()
    UIView.animateKeyframes(withDuration: 3.0, delay: 0.0, options: [.calculationModeCubic], animations: {
        UIView.addKeyframe(withRelativeStartTime: 0, relativeDuration: 0.2, animations: {
            self.myView!.center = CGPoint(x: self.pointA.center.x, y: self.pointA.center.y)
        })
        UIView.addKeyframe(withRelativeStartTime: 0.2, relativeDuration: 0.2, animations: {
            self.myView!.center = CGPoint(x: self.pointB.center.x + 55, y: self.pointB.center.y - 5)
            self.myView!.transform = CGAffineTransform(scaleX: 0.75, y: 0.75)
        })
        UIView.addKeyframe(withRelativeStartTime: 0.4, relativeDuration: 0.2, animations: {
            self.myView!.center = CGPoint(x: self.pointB.center.x, y: self.pointB.center.y)
            self.myView!.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
        })
    }, completion: { _ in
        self.myView?.removeFromSuperview()
    })
}

func addMyView() {
    myView = UIView(frame: CGRect(x: pointA.center.x - 25, y: pointA.center.y - 25, width: 50, height: 50))
    myView?.backgroundColor = .blue
    myView?.layer.cornerRadius = myView!.frame.height / 2
    view.addSubview(myView!)
}
The key is to keep tweaking the withRelativeStartTime: and the relativeDuration: parameters.
@IBAction func startAnimation(_ sender: Any) {
    addMyView()
    UIView.animateKeyframes(withDuration: 1.5, delay: 0.0, options: [.calculationModeCubic], animations: {
        UIView.addKeyframe(withRelativeStartTime: 0, relativeDuration: 0.2, animations: {
            self.myView!.center = CGPoint(x: self.pointA.center.x, y: self.pointA.center.y)
        })
        UIView.addKeyframe(withRelativeStartTime: 0.0, relativeDuration: 0.9, animations: {
            self.myView!.center = CGPoint(x: self.pointB.center.x + 75, y: self.pointB.center.y - 50)
            self.myView!.transform = CGAffineTransform(scaleX: 0.75, y: 0.75)
        })
        UIView.addKeyframe(withRelativeStartTime: 0.1, relativeDuration: 0.7, animations: {
            self.myView!.center = CGPoint(x: self.pointB.center.x, y: self.pointB.center.y)
            self.myView!.transform = CGAffineTransform(scaleX: 0.3, y: 0.3)
        })
    }, completion: { _ in
        self.myView?.removeFromSuperview()
    })
}

how to get wheel encoder count from turtlebot's /odom topic

turtlebot's odom topic gives:
header:
  seq: 406289
  stamp:
    secs: 4392
    nsecs: 160000000
  frame_id: odom
child_frame_id: base_footprint
pose:
  pose:
    position:
      x: 1.56701645246e-05
      y: -9.82132735628e-06
      z: 0.0
    orientation:
      x: 0.0
      y: 0.0
      z: -0.548275342929
      w: 0.836297882537
  covariance: [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1000000.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05]
twist:
  twist:
    linear:
      x: -2.67171244095e-06
      y: 0.0
      z: 0.0
    angular:
      x: 0.0
      y: 0.0
      z: -0.000185729678152
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
I'd like to find the wheel encoder count from this data. How does turtlebot use wheel encoder values to compute positions? Where can I find the code that does this?
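For context (not from the original thread): /odom only publishes the integrated pose and twist, so the raw encoder counts cannot be recovered from it; they come from the base driver, which integrates them into a pose. The usual differential-drive dead-reckoning looks roughly like the sketch below, written in JavaScript to match the rest of this page; TICKS_PER_METER and WHEEL_BASE are made-up placeholder constants, not TurtleBot's actual calibration values:

// Generic differential-drive odometry sketch (illustrative only).
var TICKS_PER_METER = 11724;  // hypothetical encoder calibration constant
var WHEEL_BASE = 0.23;        // hypothetical distance between wheels (meters)

var x = 0, y = 0, theta = 0;  // integrated pose

function updateOdometry(leftTicksDelta, rightTicksDelta) {
  var dLeft = leftTicksDelta / TICKS_PER_METER;    // left wheel travel
  var dRight = rightTicksDelta / TICKS_PER_METER;  // right wheel travel
  var dCenter = (dLeft + dRight) / 2;              // robot center travel
  var dTheta = (dRight - dLeft) / WHEEL_BASE;      // heading change
  // Integrate using the heading at the midpoint of the step.
  x += dCenter * Math.cos(theta + dTheta / 2);
  y += dCenter * Math.sin(theta + dTheta / 2);
  theta += dTheta;
}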

Why does a CGAffineTransform applied in steps behave differently than applied at once?

I'm seeing some unexpected, inconsistent behavior when applying transforms in steps as opposed to applying them at once and I'd like to know why.
Say we have a label that we'd like to translate to the right 100 and down 50 and then scale up to 1.5 times the original size. So there are two transformations:
Translation
Scale
And say that we are experimenting with two different animations:
Perform the translation and scale in parallel
Perform the translation, then perform the scale in sequence
In the first animation you might do something like this:
UIView.animate(withDuration: 5, animations: {
    label.transform = label.transform.translatedBy(x: 100, y: 50).scaledBy(x: 1.5, y: 1.5)
}, completion: nil)
And everything behaves how you'd expect. The label translates and scales smoothly at the same time.
In the second animation:
UIView.animate(withDuration: 5, animations: {
    label.transform = label.transform.translatedBy(x: 100, y: 50)
}, completion: { _ in
    UIView.animate(withDuration: 5, animations: {
        label.transform = label.transform.scaledBy(x: 1.5, y: 1.5)
    }, completion: nil)
})
The label translates correctly, and then boom, it jumps unexpectedly and then starts to scale.
What causes that sudden, unexpected jump? From inspecting the matrices for each transform (the parallelized and the sequential transforms) the values are the same, as would be expected.
Parallelized Animation
transform before translate and scale: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 0.0, ty: 0.0)
translate and scale transform: CGAffineTransform(a: 1.5, b: 0.0, c: 0.0, d: 1.5, tx: 100.0, ty: 50.0)
transform after translate and scale: CGAffineTransform(a: 1.5, b: 0.0, c: 0.0, d: 1.5, tx: 100.0, ty: 50.0)
Sequential Animation
transform before translation: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 0.0, ty: 0.0)
translation transform: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 100.0, ty: 50.0)
transform after translation: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 100.0, ty: 50.0)
transform before scale: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 100.0, ty: 50.0)
scale transform: CGAffineTransform(a: 1.5, b: 0.0, c: 0.0, d: 1.5, tx: 100.0, ty: 50.0)
transform after scale: CGAffineTransform(a: 1.5, b: 0.0, c: 0.0, d: 1.5, tx: 100.0, ty: 50.0)
So what is it that causes the sudden jump?
You need to understand how animation works in iOS. Your animation closure runs right away, and the final values are assigned to the object straight away (this is one of the most important points that a lot of people forget). All the animation block does is make the change appear to take that much time. Let me elaborate with an example.
let x = UIView()
x.alpha = 0
// At this point, alpha is 0
UIView.animate(withDuration: 5, animations: {
    x.alpha = 1
}, completion: nil)
// At this point, alpha is 1 right away, but the animation itself will take 5 seconds
With that in mind, let's look at the second example that you posted:
UIView.animate(withDuration: 5, animations: {
    label.transform = label.transform.translatedBy(x: 100, y: 50)
}, completion: { _ in
    UIView.animate(withDuration: 5, animations: {
        label.transform = label.transform.scaledBy(x: 1.5, y: 1.5)
    }, completion: nil)
})
The first animation runs and translates your view right away; it merely takes 5 seconds to show the move, but your view's x and y values have already changed. Upon completion you scale that already-translated transform, which results in the weird jump.

OpenGL Texture not displaying on iOS with Swift

I would like to show a texture with OpenGL ES 2 on iOS using Swift, but unfortunately the texture does not show up and I don't know what I am doing wrong. The test texture is definitely loaded correctly (and is POT). Maybe the call to glEnableVertexAttribArray is wrong?
func BUFFER_OFFSET(i: Int) -> UnsafePointer<Void> {
    let p: UnsafePointer<Void> = nil
    return p.advancedBy(i)
}

let vertices: [GLfloat] = [
    // Positions      // Colors        // Texture Coords
    0.5,  0.5, 0.0,   1.0, 0.0, 0.0,   1.0, 1.0, // Top Right
    0.5, -0.5, 0.0,   0.0, 1.0, 0.0,   1.0, 0.0, // Bottom Right
   -0.5, -0.5, 0.0,   0.0, 0.0, 1.0,   0.0, 0.0, // Bottom Left
   -0.5,  0.5, 0.0,   1.0, 1.0, 0.0,   0.0, 1.0  // Top Left
]

let indices: [GLuint] = [ // Note that we start from 0!
    0, 1, 3, // First Triangle
    1, 2, 3  // Second Triangle
]

func setupGL() {
    EAGLContext.setCurrentContext(self.context)
    self.loadShaders()
    self.effect = GLKBaseEffect()
    self.effect!.light0.enabled = GLboolean(GL_TRUE)
    self.effect!.light0.diffuseColor = GLKVector4Make(1.0, 0.4, 0.4, 1.0)
    do {
        testTexture = try GLKTextureLoader.textureWithContentsOfFile(NSBundle.mainBundle().pathForResource("MyTexture1024x1024", ofType: "png")!, options: [GLKTextureLoaderOriginBottomLeft: true, GLKTextureLoaderApplyPremultiplication: true])
        print("successfully loaded test texture")
    } catch let error as NSError {
        print("could not load test texture \(error)")
    }
    glEnable(GLenum(GL_DEPTH_TEST))
    glGenVertexArraysOES(1, &vertexArray)
    glBindVertexArrayOES(vertexArray)
    glGenBuffers(1, &vertexBuffer)
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), vertexBuffer)
    glBufferData(GLenum(GL_ARRAY_BUFFER), GLsizeiptr(sizeof(GLfloat) * vertices.count), vertices, GLenum(GL_STATIC_DRAW))
    glGenBuffers(1, &indexBuffer)
    glBindBuffer(GLenum(GL_ELEMENT_ARRAY_BUFFER), indexBuffer)
    glBufferData(GLenum(GL_ELEMENT_ARRAY_BUFFER), GLsizeiptr(sizeof(GLfloat) * indices.count), indices, GLenum(GL_STATIC_DRAW))
    glEnableVertexAttribArray(GLuint(GLKVertexAttrib.Position.rawValue))
    glVertexAttribPointer(GLuint(GLKVertexAttrib.Position.rawValue), 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(sizeof(GLfloat) * 8), BUFFER_OFFSET(0))
    glEnableVertexAttribArray(GLuint(GLKVertexAttrib.TexCoord0.rawValue))
    glVertexAttribPointer(GLuint(GLKVertexAttrib.TexCoord0.rawValue), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(sizeof(GLfloat) * 8), BUFFER_OFFSET(sizeof(GLfloat) * 6))
    glBindVertexArrayOES(0)
}

override func glkView(view: GLKView, drawInRect rect: CGRect) {
    glClearColor(0.65, 0.65, 0.65, 1.0)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))
    // use specified vertex buffer
    glBindVertexArrayOES(vertexArray)
    // use specified index buffer
    glBindBuffer(GLenum(GL_ELEMENT_ARRAY_BUFFER), indexBuffer)
    glEnable(GLenum(GL_BLEND))
    glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    self.effect!.texture2d0.name = testTexture.name
    self.effect!.texture2d0.enabled = 1
    // Render the object with GLKit
    self.effect!.prepareToDraw()
    glDrawElements(GLenum(GL_TRIANGLES), 6, GLenum(GL_UNSIGNED_INT), nil)
}
Finally, I found the cause of the problem, which was the following code in setupGL():
self.effect = GLKBaseEffect()
self.effect!.light0.enabled = GLboolean(GL_TRUE)
self.effect!.light0.diffuseColor = GLKVector4Make(1.0, 0.4, 0.4, 1.0)
So the reason was actually the light: when it is disabled, the texture shows up! (A plausible explanation: the vertex layout above never supplies normals, and GLKBaseEffect needs them to compute lighting, so the lit result hides the texture.)
self.effect = GLKBaseEffect()
self.effect!.light0.enabled = GLboolean(GL_FALSE)

How to do Picking via IdMapping in Scenejs?

We have ONE huge JSON mesh like this, which we render with SceneJS:
{"vertices":[
0.0, 0.0, 0.0,
0.0, 0.0, 2.0,
1.0, 0.0, 2.0,
0.0, 2.0, 2.0, //... next object
],
"normals":[
0.0, 0.0, 1.0,
0.0, 0.0, 1.0,
0.0, 0.0, 1.0,
0.0, 0.0, 1.0, //... next object
],
"colors":[
1.0, 0.0, 0.0, 1.0,
0.0, 1.0, 0.0, 1.0,
0.0, 0.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, //... next object
],
"idMapColors":[
0.0, 0.0, 0.0, 0.756,
0.0, 0.0, 0.0, 0.756,
0.0, 0.0, 0.0, 0.756,
0.0, 0.0, 0.0, 0.756, //... next object
]}
The idMapColors are unique for each "object" and can be converted into an id to provide additional information.
We now want to render the mesh with its normal colors on screen, and with the idMapColors in a second rendering pass. We then want to read out the color value at a specific point (the mouse position) from the second framebuffer (the one holding the idMapColors).
How do we do this in SceneJS? We could render the idMapColors to a framebuffer, but how do we access its data?
On the wiki at https://github.com/xeolabs/scenejs/wiki/frameBuf we found that picking is listed as future work; is there any way to do this at the moment?
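
For what it's worth, at the raw WebGL level (below whatever SceneJS exposes) reading the id color back is done with gl.readPixels on the framebuffer the id pass rendered into. A minimal sketch, assuming you can get hold of the WebGL context and the framebuffer object; both handles here are hypothetical, since SceneJS does not hand them to you directly:

// Read the 1x1 pixel under the mouse from the id-map framebuffer.
// gl and idFramebuffer are assumed handles, not SceneJS API.
function readIdColorAt(gl, idFramebuffer, mouseX, mouseY, canvasHeight) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, idFramebuffer);
  var pixel = new Uint8Array(4);
  // WebGL's y axis starts at the bottom, so flip the mouse y coordinate.
  gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return pixel; // [r, g, b, a] bytes in 0..255; decode back into your object id
}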
