I need to access a buffer from my shader. The buffer is created from an array. (In the real scenario, the array has 10k+ (variable) numbers.)
var myBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, myBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Uint8Array([1,2,3,4,5,6,7]), gl.STATIC_DRAW);
How do I send it so it's usable by the shader?
precision mediump float;
uniform uint[] myBuffer;//???
void main() {
gl_FragColor = vec4(myBuffer[0],myBuffer[1],0,1);
}
Normally, if it were an attribute, it'd be
gl.vertexAttribPointer(myBuffer, 2, gl.UNSIGNED_BYTE, false, 4, 0);
but I need to be able to access the whole array from any shader pixel, so it's not a vertex attribute.
Use a texture if you want random access to lots of data in a shader.
If you have 10,000 values you might make a texture that's 100x100 pixels. You can then get each value from the texture with something like
uniform sampler2D u_texture;
vec2 textureSize = vec2(100.0, 100.0);
vec4 getValueFromTexture(float index) {
float column = mod(index, textureSize.x);
float row = floor(index / textureSize.x);
vec2 uv = vec2(
(column + 0.5) / textureSize.x,
(row + 0.5) / textureSize.y);
return texture2D(u_texture, uv);
}
Make sure your texture filtering is set to gl.NEAREST.
Of course, if you make textureSize a uniform, you can pass in the actual size of the texture.
As for why the + 0.5 part, see this answer.
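In short: with a texture that is textureSize.x pixels wide, texel i covers the range from i / textureSize.x to (i + 1) / textureSize.x, so its center sits at (i + 0.5) / textureSize.x. For the 100-wide example above, texel 0's center is at 0.5 / 100 = 0.005; sampling there, rather than at the texel's edge, guarantees NEAREST filtering returns the texel you meant.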
You can use normal gl.RGBA, gl.UNSIGNED_BYTE textures and add/multiply the channels together to get a large range of values. Or, you could use floating point textures if you don't want to mess with that, though you'd need to enable the OES_texture_float extension first.
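On the JavaScript side, the setup might look roughly like the sketch below. It assumes your values fit in a 100x100 gl.RGBA/gl.UNSIGNED_BYTE texture (pad the data out to 100 * 100 * 4 bytes) and that program is your already-linked shader program; adjust the size and packing to your data.

// Pack the values into a byte array sized for a 100x100 RGBA texture.
var data = new Uint8Array(100 * 100 * 4);
// ... copy your 10k+ values into `data` here ...

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 100, 100, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, data);

// NEAREST so values come back exactly, not interpolated with their neighbors.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
// CLAMP_TO_EDGE so a non-power-of-2 size works in WebGL1.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// Bind it to texture unit 0 and point the sampler uniform at that unit.
gl.useProgram(program);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.uniform1i(gl.getUniformLocation(program, "u_texture"), 0);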
I am trying to use a VAO with a vertex shader. This works, but only if I set the length of the bufferData to 0. My understanding is that a VBO is not required here because my vertex shader generates the vertices of the quad. If I attempt to create the VAO without binding a buffer at all, it also crashes.
As I mentioned, this works; however, I am concerned because in Apple's Instruments the OpenGL Expert reports a severe error:
Draw Call Exceeded Array Buffer Bounds
No Buffer Data - DYFKNoBufferData
Here is the code for generating the VAO:
glGenVertexArrays(1, &vaoID); // Create our Vertex Array Object
glBindVertexArray(vaoID); // Bind VAO
GLfloat vertices[12]; // Vertices for our square
vertices[0] = -0.5; vertices[1] = 0.5; vertices[2] = 0.0; // Top left corner
vertices[3] = -0.5; vertices[4] = -0.5; vertices[5] = 0.0; // Bottom left corner
vertices[6] = 0.5; vertices[7] = 0.5; vertices[8] = 0.0; // Top Right corner
vertices[9] = 0.5; vertices[10] = -0.5; vertices[11] = 0.0; // Bottom right corner
glGenBuffers(1, &fboTextureVboID); // Create our Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, fboTextureVboID); // Bind VBO
// As long as I set the buffer data length to 0
// then my glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) call works
// otherwise I get EXC_BAD_ACCESS
glBufferData(GL_ARRAY_BUFFER, 0, vertices, GL_STATIC_DRAW);
// configure vertex attributes
glEnableVertexAttribArray (...
glVertexAttribPointer(...
...
glEnableVertexAttribArray(0); // Enable vertex attribute 0
glBindVertexArray(0); // Unbind the VAO (make it inactive)
Drawing with:
glUseProgram(vertexShaderProgram);
glBindVertexArray(vaoID);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can I safely ignore Apple's errors? I am trying to use a VAO for the vertex shader because I would like to eliminate all the vertex attribute bindings in my drawing code. Or is there a better way to do this with a shader with or without a VAO?
EDIT:
Here is my vertex shader source:
#version 300 es
uniform lowp mat4 uProjectionMatrix;
in lowp vec4 a_position;
in lowp vec2 a_texCoord;
out lowp vec2 v_texCoord;
void main()
{
gl_Position = uProjectionMatrix * a_position;
v_texCoord = a_texCoord;
}
And fragment shader source:
#version 300 es
precision mediump float;
uniform lowp sampler2D uTexture;
in lowp vec2 v_texCoord;
out lowp vec4 fragmentColor;
void main()
{
fragmentColor = texture( uTexture, v_texCoord );
}
You can pick one of two things.
It is perfectly legal to have a VAO that has no attached buffer objects. However, this does not mean "create a buffer object, but don't put anything in it". It means not to attach buffer objects to the VAO. You just call glGenVertexArrays to generate the vertex array, and you're done.
No calls to glEnableVertexAttribArray. No calls to glVertexAttribPointer. If you're not using vertex arrays at all, you should not be making these calls at all.
It is also perfectly legal to have a VAO that contains buffer objects. These work like normal.
What you can't do is create a buffer object that has no allocation, then try to use it for vertex data. That's what happens if you remove just glBufferData.
So you have to pick one side of the road or the other. Either your VAO uses one or more buffers, or it doesn't. If it uses a buffer, those buffers have to have storage. If it doesn't, then it won't care.
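If it helps to see the first option spelled out, here is a minimal sketch of the same idea in WebGL2 terms (not Apple's desktop GL, so take it as an illustration of the concept only). It assumes program is a linked program whose vertex shader builds its own quad, e.g. from gl_VertexID, so no attributes are needed at all.

// GLSL ES 3.00 vertex shader assumed by this sketch:
//   #version 300 es
//   void main() {
//     // expand gl_VertexID 0..3 into a clip-space quad
//     vec2 p = vec2(gl_VertexID & 1, gl_VertexID >> 1) * 2.0 - 1.0;
//     gl_Position = vec4(p, 0.0, 1.0);
//   }

var vao = gl.createVertexArray();
gl.bindVertexArray(vao);
// No bindBuffer, no enableVertexAttribArray, no vertexAttribPointer:
// the VAO simply has no attribute arrays attached.
gl.bindVertexArray(null);

// At draw time:
gl.useProgram(program);
gl.bindVertexArray(vao);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);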
I'm reading a z-stack of 16-bit images into JavaScript (i.e. an array of images). I'm new to WebGL shaders and I'm having trouble working out how to loop over the array.
I hope to put the whole variable-length list of images, up to hundreds of them, onto the GPU. Then, with each change in z, the GPU will render the corresponding image. Currently, I have the following:
<script id="shader-vertex" type="x-shader/x-vertex">
attribute float a_fluxes;
varying float v_flux;
attribute vec2 a_position;
uniform vec2 u_resolution;
void main() {
vec2 zeroToOne = a_position / u_resolution;
gl_Position = vec4((zeroToOne * 2.0 - 1.0) * vec2(1, -1), 0, 1);
v_flux = a_fluxes/256.;
}
</script>
<script id="shader-fragment" type="x-shader/x-fragment">
precision highp float;
varying float v_flux;
void main() {
gl_FragColor = vec4(v_flux, v_flux, v_flux, 1);
}
</script>
The 3-dimensional array of images (x, y, and z) is supposed to be flattened into a 1D list going into a_fluxes.
How do I iterate over the x and y dimensions of one of the z images? Do I use a loop in the vertex shader, or am I required to pass an array of all of the possible x,y pixel coordinates to the vertex shader? Should I really be doing these calculations in the fragment shader?
I think you would be well served to pack the images together into an atlas (a 2D grid of images stored as one big image) if you plan to jump between them on the GPU. WebGL doesn't support volume textures; if it did, I'd recommend using one slice per image. Instead, you'll probably be best off slicing out windows from the UV space of a larger composite image, where each window corresponds to one of your sub-images.
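As a rough sketch of what that could look like (all names and sizes here are made up for illustration): lay the z-slices out in a grid inside one texture, then tell the shader which window of UV space belongs to the current slice, e.g. by sampling texture2D(u_atlas, u_offset + v_texCoord * u_scale).

// Compute the UV offset/scale of slice z in a grid-layout atlas.
function sliceWindow(z, slicesPerRow, sliceWidth, sliceHeight, atlasWidth, atlasHeight) {
  var col = z % slicesPerRow;
  var row = Math.floor(z / slicesPerRow);
  return {
    offset: [col * sliceWidth / atlasWidth, row * sliceHeight / atlasHeight],
    scale:  [sliceWidth / atlasWidth, sliceHeight / atlasHeight],
  };
}

// Hypothetical usage: 8 slices per row, 256x256 slices, 2048x2048 atlas.
var win = sliceWindow(currentZ, 8, 256, 256, 2048, 2048);
gl.uniform2fv(offsetLocation, win.offset);
gl.uniform2fv(scaleLocation, win.scale);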
I was reading tutorials from here.
<script class = "WebGL">
var gl;
function initGL() {
// Get A WebGL context
var canvas = document.getElementById("canvas");
gl = getWebGLContext(canvas);
if (!gl) {
return;
}
}
var positionLocation;
var resolutionLocation;
var colorLocation;
var translationLocation;
var rotationLocation;
var translation = [50,50];
var rotation = [0, 1];
var angle = 0;
function initShaders() {
// setup GLSL program
vertexShader = createShaderFromScriptElement(gl, "2d-vertex-shader");
fragmentShader = createShaderFromScriptElement(gl, "2d-fragment-shader");
program = createProgram(gl, [vertexShader, fragmentShader]);
gl.useProgram(program);
// look up where the vertex data needs to go.
positionLocation = gl.getAttribLocation(program, "a_position");
// lookup uniforms
resolutionLocation = gl.getUniformLocation(program, "u_resolution");
colorLocation = gl.getUniformLocation(program, "u_color");
translationLocation = gl.getUniformLocation(program, "u_translation");
rotationLocation = gl.getUniformLocation(program, "u_rotation");
// set the resolution
gl.uniform2f(resolutionLocation, canvas.width, canvas.height);
}
function initBuffers() {
// Create a buffer.
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
// Set Geometry.
setGeometry(gl);
}
function setColor(red, green, blue) {
gl.uniform4f(colorLocation, red, green, blue, 1);
}
// Draw the scene.
function drawScene() {
// Clear the canvas.
gl.clear(gl.COLOR_BUFFER_BIT);
// Set the translation.
gl.uniform2fv(translationLocation, translation);
// Set the rotation.
gl.uniform2fv(rotationLocation, rotation);
// Draw the geometry.
gl.drawArrays(gl.TRIANGLES, 0, 6);
}
// Fill the buffer with the values that define a letter 'F'.
function setGeometry(gl) {
/*Assume size1 is declared*/
var vertices = [
-size1/2, -size1/2,
-size1/2, size1/2,
size1/2, size1/2,
size1/2, size1/2,
size1/2, -size1/2,
-size1/2, -size1/2 ];
gl.bufferData(
gl.ARRAY_BUFFER,
new Float32Array(vertices),
gl.STATIC_DRAW);
}
function animate() {
translation[0] += 0.01;
translation[1] += 0.01;
angle += 0.01;
rotation[0] = Math.cos(angle);
rotation[1] = Math.sin(angle);
}
function tick() {
requestAnimFrame(tick);
drawScene();
animate();
}
function start() {
initGL();
initShaders();
initBuffers();
setColor(0.2, 0.5, 0.5);
tick();
}
</script>
<!-- vertex shader -->
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
void main() {
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the position from pixels to 0.0 to 1.0
vec2 zeroToOne = position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace, 0, 1);
}
</script>
<!-- fragment shader -->
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
</script>
My WebGL program for 1 shape works something like this:
Get a context (gl) from the canvas element.
Initialize buffers with the shape of my object.
drawScene(): a call to gl.drawArrays().
If there is animation, an update function updates my shape's angles and positions;
both it and drawScene() are called from tick(), so they run repeatedly.
Now, when I need more than one shape, should I fill a single buffer with all the objects at once and then have drawScene() draw them all in one call,
[OR]
should I repeatedly call initBuffers() and drawScene() from requestAnimFrame()?
In pseudo code
At init time
Get a context (gl) from the canvas element.
for each shader
create shader
look up attribute and uniform locations
for each shape
initialize buffers with the shape
for each texture
create textures and/or fill them with data.
At draw time
for each shape
if the last shader used is different from the shader needed for this shape, call gl.useProgram
For each attribute needed by shader
call gl.enableVertexAttribArray, gl.bindBuffer and gl.vertexAttribPointer for each attribute needed by shape with the attribute locations for the current shader.
For each uniform needed by shader
call gl.uniformXXX with the desired values using the locations for the current shader
call gl.drawArrays, or, if the data is indexed, call gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, bufferOfIndicesForCurrentShape) followed by gl.drawElements
Common Optimizations
1) Often you don't need to set every uniform. For example, if you are drawing 10 shapes with the same shader and that shader takes a viewMatrix or cameraMatrix, that uniform is likely the same for every shape, so just set it once.
2) You can often move the calls to gl.enableVertexAttribArray to initialization time.
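For concreteness, the per-shape draw loop described above (with optimization 1 folded in) might look roughly like this; the shape fields and uniform/attribute locations are illustrative, not taken from your code:

var lastProgram = null;
shapes.forEach(function(shape) {
  if (shape.program !== lastProgram) {
    gl.useProgram(shape.program);
    lastProgram = shape.program;
    // Uniforms shared by every shape that uses this shader (e.g. a view
    // matrix) only need to be set here, once per program switch.
  }
  // Attributes for this shape.
  gl.bindBuffer(gl.ARRAY_BUFFER, shape.positionBuffer);
  gl.enableVertexAttribArray(shape.positionLocation);
  gl.vertexAttribPointer(shape.positionLocation, 2, gl.FLOAT, false, 0, 0);
  // Per-shape uniforms.
  gl.uniform2fv(shape.translationLocation, shape.translation);
  gl.uniform2fv(shape.rotationLocation, shape.rotation);
  // Draw.
  gl.drawArrays(gl.TRIANGLES, 0, shape.numVertices);
});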
Having multiple meshes in one buffer (and rendering them with a single gl.drawArrays() as you're suggesting) yields better performance in complex scenes but obviously at that point you're not able to change shader uniforms (such as transformations) per mesh.
If you want to have the meshes running around independently, you'll have to render each one separately. You could still keep all the meshes in one buffer to avoid some overhead from gl.bindBuffer() calls but imho that won't help that much, at least not in simple scenes.
Create your buffers separately for each object you want in the scene; otherwise they won't be able to move and use shader effects independently.
But that's only if your objects are actually different. From what I gather, you just want to draw the same shape more than once at different positions, right?
The way you go about that is to set that translation uniform (the one at translationLocation) to a different value after drawing the shape the first time. That way, when you draw the shape again, it ends up somewhere else and not on top of the first one, so you can see it. You can set all those transformation uniforms differently and then just call gl.drawArrays (or gl.drawElements) again, since you're drawing the same buffers that are already in use.
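In code that might look something like this, reusing the buffer and uniform locations already set up above (the second position is made up):

// Draw the same geometry twice, at two different positions.
gl.uniform2fv(translationLocation, [50, 50]);
gl.drawArrays(gl.TRIANGLES, 0, 6);

gl.uniform2fv(translationLocation, [200, 120]); // hypothetical second position
gl.drawArrays(gl.TRIANGLES, 0, 6);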
I'm trying to port some OpenGL rendering code I wrote for iOS to a Windows app. The code runs fine on iOS, but on Windows it doesn't draw anything. I've narrowed the problem down to this bit of code, since fixed-function stuff (such as glutSolidTorus) draws fine, but when shaders are enabled, nothing works.
Here's the rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
// Set the vertex buffer as current
this->vertexBuffer->MakeActive();
// Get a reference to the vertex description to save copying
const AT::Model::VertexDescription & vd = this->vertexBuffer->GetVertexDescription();
std::vector<GLuint> handles;
// Loop over the vertex descriptions
for (int i = 0, stride = 0; i < vd.size(); ++i)
{
// Get a handle to the vertex attribute on the shader object using the name of the current vertex description
GLint handle = shader.GetAttributeHandle(vd[i].first);
// If the handle is not an OpenGL 'Does not exist' handle
if (handle != -1)
{
glEnableVertexAttribArray(handle);
handles.push_back(handle);
// Set the pointer to the vertex attribute, with the vertex's element count,
// the size of a single vertex and the start position of the first attribute in the array
glVertexAttribPointer(handle, vd[i].second, GL_FLOAT, GL_FALSE,
sizeof(GLfloat) * (this->vertexBuffer->GetSingleVertexLength()),
(GLvoid *)stride);
}
// Add to the stride value with the size of the number of floats the vertex attr uses
stride += sizeof(GLfloat) * (vd[i].second);
}
// Draw the indexed elements using the current vertex buffer
glDrawElements(GL_TRIANGLES,
this->vertexBuffer->GetIndexArrayLength(),
GL_UNSIGNED_SHORT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);
// Disable the vertexattributearrays
for (int i = 0, stride = 0; i < handles.size(); ++i)
{
glDisableVertexAttribArray(handles[i]);
}
It's inside a function that takes a shader as a parameter, and the vertex description is a list of pairs mapping attribute names to element counts. Uniforms are being set outside this function. I'm enabling the shader for use before it's passed into the function. Here are the two shader sources:
Vertex:
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
// Uniforms
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
/// OUTPUTS
varying vec2 o_texCoords;
varying vec3 o_normals;
// Vertex Shader
void main()
{
// Do the normal position transform
gl_Position = Projection * View * Model * vec4(position, 1.0);
// Transform the normals to world space
o_normals = NormalMatrix * normal;
// Pass texture coords on for interpolation
o_texCoords = texCoord;
}
Fragment:
varying vec2 o_texCoords;
varying vec3 o_normals;
/// Fragment Shader
void main()
{
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
I'm running OpenGL 2.1 with shading language 1.20. I'd be most appreciative of any help anyone can give me.
I see that you are assigning black as the output color in your fragment shader. Try changing that to something like
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
and see whether the objects in the scene are colored green.
I came back to this recently and it turned out I wasn't checking for errors during rendering; glDrawElements() was producing error 1285 (GL_OUT_OF_MEMORY). That led me to check the vertex buffer objects to see if they contained any data, and it turns out I wasn't properly deep copying them in a wrapper class, so they were being deleted before any rendering happened. Fixing this sorted the issue.
Thank you for your suggestions.
I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture, however, winds up all black. My camera is configured as such:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
and I'm passing the pixel data to my texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
My fragment shader is:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
lowp vec4 color;
color = texture2D(videoFrame, textureCoordinate);
lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
convertedColor += 1.164 * color.g; // Y
convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
gl_FragColor = vec4(convertedColor, 1.0);
}
and my vertex shader is
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
This works just fine when I'm working with a BGRA image, and my fragment shader only does
gl_FragColor = texture2D(videoFrame, textureCoordinate);
What, if anything, am I missing here? Thanks!
OK, we have it working now. The key was passing the Y and the UV data as two separate textures to the fragment shader. Here is the final shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;
const mat3 yuv2rgb = mat3(
1, 0, 1.2802,
1, -0.214821, -0.380589,
1, 2.127982, 0
);
void main() {
vec3 yuv = vec3(
1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
texture2D(videoFrameUV, textureCoordinate).r - 0.5,
texture2D(videoFrameUV, textureCoordinate).a - 0.5
);
vec3 rgb = yuv * yuv2rgb;
gl_FragColor = vec4(rgb, 1.0);
}
You'll need to create your textures like this:
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
and then pass them like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
Boy am I relieved!
P.S. The values for the yuv2rgb matrix are from here http://en.wikipedia.org/wiki/YUV and I copied code from here http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
Your code appears to attempt to convert a 32-bit colour in 444-plus-unused-byte to RGBA. That's not going to work too well. I don't know of anything that outputs "YUVA", for one.
Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).
The 420 "bi planar" output is structued something like this:
A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
The Y plane: height × bytes_per_row_1 × 1 bytes
The Cb,Cr plane: height/2 × bytes_per_row_2 × 2 bytes (2 bytes per 2x2 block).
bytes_per_row_1 is approximately width and bytes_per_row_2 is approximately width/2, but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of ..GetHeightOfPlane and ...GetWidthOfPlane).
You might have luck treating it as a 1-component width*height texture and a 2-component width/2*height/2 texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width*number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().
Alternatively, you can copy it all to memory into your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing memory after you've unlocked the pixel buffer.