Direct3D 11 not rasterizing any vertices - directx

I'm trying to render a simple triangle on screen using Direct3D 11, but nothing shows up. Here are my vertices:
SimpleVertex vertices[ 3 ] =
{
{ XMFLOAT3( -1.0f, -1.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, 0.0f ) },
{ XMFLOAT3( -1.0f, 1.0f, 0.0f ) },
};
The expected output is a triangle with one point in the bottom left corner of the screen, one point in the bottom right corner, and one point in the top left corner. However, nothing is being rendered anywhere.
I'm not performing any matrix transformations, and the vertex shader just passes the input directly to the output. Everything seems to be set up correctly, and when I use the graphics debugger in Visual Studio 2012, the correct vertex positions are being passed to the vertex shader. However, it skips directly from the vertex shader stage to the output merger stage in the pipeline. I assume this means that nothing is being sent to the pixel shader, which would in turn mean that the vertices are being discarded in the rasterizer stage. Why is this happening?
Here is my rasterizer state:
D3D11_RASTERIZER_DESC rasterizerDesc;
rasterizerDesc.AntialiasedLineEnable = false;
rasterizerDesc.CullMode = D3D11_CULL_NONE;
rasterizerDesc.DepthBias = 0;
rasterizerDesc.DepthBiasClamp = 0.0f;
rasterizerDesc.DepthClipEnable = true;
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.FrontCounterClockwise = false;
rasterizerDesc.MultisampleEnable = false;
rasterizerDesc.ScissorEnable = false;
rasterizerDesc.SlopeScaledDepthBias = 0.0f;
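This description is then turned into a state object and bound, roughly like this (a sketch with placeholder names; the exact calls aren't in the snippet above):
ID3D11RasterizerState* rasterizerState = nullptr;
device->CreateRasterizerState( &rasterizerDesc, &rasterizerState );
context->RSSetState( rasterizerState );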
And my viewport (width/height are the window client area matching my back buffer, which are set to 1024x576 in my test setup):
D3D11_VIEWPORT viewport;
viewport.Height = static_cast< float >( height );
viewport.MaxDepth = 1.0f;
viewport.MinDepth = 0.0f;
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
viewport.Width = static_cast< float >( width );
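The viewport is then bound to the rasterizer stage, roughly like this (again a sketch, not the exact code):
context->RSSetViewports( 1, &viewport );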
Can anyone see what is making the rasterizer stage drop my vertices? Or are there any other parts of my D3D setup that could be causing this?

I found this on the internet. It took absolutely ages to load, so I copied and pasted it; I have highlighted an interesting point in bold.
The D3D_OVERLOADS constructors defined in row 11 offer a convenient way for C++ programmers to create transformed and lit vertices with D3DTLVERTEX.
_D3DTLVERTEX(const D3DVECTOR& v, float _rhw, D3DCOLOR _color,
D3DCOLOR _specular, float _tu, float _tv)
{
sx = v.x;
sy = v.y;
sz = v.z;
rhw = _rhw;
color = _color;
specular = _specular;
tu = _tu;
tv = _tv;
}
The system requires a vertex position that has already been transformed. So the x and y values must be in screen coordinates, and z must be the depth value of the pixel, which could be used in a z-buffer (we won't use a z-buffer here). Z values can range from 0.0 to 1.0, where 0.0 is the closest possible position to the viewer, and 1.0 is the farthest position still visible within the viewing area. Immediately following the position, transformed and lit vertices must include an RHW (reciprocal of homogeneous W) value.
Before rasterizing the vertices, they have to be converted from homogeneous vertices to non-homogeneous vertices, because the rasterizer expects them this way. Direct3D converts the homogeneous vertices to non-homogeneous vertices by dividing the x-, y-, and z-coordinates by the w-coordinate, and produces an RHW value by inverting the w-coordinate. This is only done for vertices which are transformed and lit by Direct3D.
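As a small illustration of that conversion (an illustrative sketch, not part of the quoted article):
// A homogeneous clip-space vertex (x, y, z, w) becomes a non-homogeneous
// vertex plus an RHW value: divide x, y and z by w, and store 1/w as RHW.
struct HomogeneousVertex { float x, y, z, w; };

void ToNonHomogeneous( const HomogeneousVertex& v,
                       float& outX, float& outY, float& outZ, float& outRhw )
{
    outRhw = 1.0f / v.w;    // RHW: reciprocal of the homogeneous w
    outX   = v.x * outRhw;
    outY   = v.y * outRhw;
    outZ   = v.z * outRhw;
}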
The RHW value is used in multiple ways: for calculating fog, for performing perspective-correct texture mapping, and for w-buffering (an alternate form of depth buffering).
With D3D_OVERLOADS defined, D3DVECTOR is declared as
_D3DVECTOR(D3DVALUE _x, D3DVALUE _y, D3DVALUE _z);
D3DVALUE is the fundamental Direct3D fractional data type. It's declared in d3dtypes.h as
typedef float D3DVALUE, *LPD3DVALUE;
The source shows that the x and y values for the D3DVECTOR are always 0.0f (this will be changed in InitDeviceObjects()). rhw is always 0.5f, color is 0xffffffff and specular is set to 0. Only the tu1 and tv1 values differ between the four vertices. These are the coordinates of the background texture.
In order to map texels onto primitives, Direct3D requires a uniform address range for all texels in all textures. Therefore, it uses a generic addressing scheme in which all texel addresses are in the range of 0.0 to 1.0 inclusive.
If, instead, you decide to assign texture coordinates to make Direct3D use the bottom half of the texture, the texture coordinates your application would assign to the vertices of the primitive in this example are (0.0,0.0), (1.0,0.0), (1.0,0.5), and (0.0,0.5). Direct3D will apply the bottom half of the texture as the background.
Note: By assigning texture coordinates outside that range, you can create certain special texturing effects.
You will find the declaration of D3DTextr_CreateTextureFromFile() in the Framework source in d3dtextr.cpp. It creates a local bitmap from a passed file. Textures can be created from *.bmp and *.tga files. Textures are managed in the framework in a linked list which holds the per-texture info, called a texture container.
struct TextureContainer
{
TextureContainer* m_pNext; // Linked list ptr
TCHAR m_strName[80]; // Name of texture (doubles as image filename)
DWORD m_dwWidth;
DWORD m_dwHeight;
DWORD m_dwStage; // Texture stage (for multitexture devices)
DWORD m_dwBPP;
DWORD m_dwFlags;
BOOL m_bHasAlpha;
LPDIRECTDRAWSURFACE7 m_pddsSurface; // Surface of the texture
HBITMAP m_hbmBitmap; // Bitmap containing texture image
DWORD* m_pRGBAData;
public:
HRESULT LoadImageData();
HRESULT LoadBitmapFile( TCHAR* strPathname );
HRESULT LoadTargaFile( TCHAR* strPathname );
HRESULT Restore( LPDIRECT3DDEVICE7 pd3dDevice );
HRESULT CopyBitmapToSurface();
HRESULT CopyRGBADataToSurface();
TextureContainer( TCHAR* strName, DWORD dwStage, DWORD dwFlags );
~TextureContainer();
};

The problem was actually in my rendering logic. I set the stride of the vertex buffer to 0 instead of the size of my vertex struct. Changed that, and it renders just fine!
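For reference, the corrected call looks roughly like this (variable names are placeholders, not my exact code):
// The stride must be the size of one vertex, not 0.
UINT stride = sizeof( SimpleVertex );
UINT offset = 0;
context->IASetVertexBuffers( 0, 1, &vertexBuffer, &stride, &offset );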

Related

Integrating a metal depth buffer with scenekit rendering

I'm using Metal to render a scene with a z-buffer and now need to integrate this z-buffer into SceneKit's rendering. However I can't figure out how to get SceneKit to use this depth buffer correctly, and I'm not even 100% sure what format SceneKit expects its z-buffers to be in.
Based on this question, my understanding was that SceneKit uses a reverse logarithmic z-buffer in the range of 1 (near) to 0 (far). However I can't get this working, and objects I draw with SceneKit don't properly respect the depth buffer: they are either always showing or always hidden.
First, here's how I generate a z-buffer texture in a Metal render pass:
struct FragmentOut {
float4 color [[color(0)]];
float depth [[depth(any)]];
};
fragment FragmentOut metalRenderFragment(const InOut in [[ stage_in ]]) {
FragmentOut out;
out.depth = 0; // 0 is far with reverse z buffer
...
float cameraSpaceZ = ...; // Computed in shader
// These constants are taken from SceneKit's camera and inlined here
const float zNear = 0.0010000000474974513;
const float zFar = 1000.0;
float logDepth = log(cameraSpaceZ / zNear) / log(zFar / zNear);
out.depth = 1.0 - logDepth; // Reverse the depth for SceneKit
return out;
}
Then, to integrate the depth buffer into SceneKit, I render a full screen quad in SceneKit with an SCNProgram that uses the depth texture generated in the previous step:
fragment FragmentOut sceneKitFullScreenQuadFragment(const InOut in [[ stage_in ]],
depth2d<float, access::sample> depthTexture [[texture(1)]])
{
constexpr sampler sampler(filter::linear);
const float depth = depthTexture.sample(sampler, in.uv);
return {
.color = float4(0),
.depth = depth,
};
}
So two questions:
What format does SceneKit use for its z-buffer? Is it a reversed logarithmic z-buffer?
What am I doing wrong in generating the z-buffer values for SceneKit?
SceneKit uses a reverse logarithmic Z-Buffer. This post and this post show you how to get a normalized linear mapping space [0...1]. You need the opposite formula.
Also, you can toggle the value from reverseZ to directZ this way:
let sceneView = self.view as! SCNView
sceneView.usesReverseZ = true // default
Andy Jazz's answer helped but I still found the links confusing. Here's what ultimately worked for me (although there are possibly other ways to do this):
When generating the depth map (this would be inside the Metal shader in my original example), pass in SceneKit's projection transform matrix and use it to transform the depth value:
// In a metal shader generating the depth map
// The z distance from the camera, e.g. if the object
// at the current position is 5 units away, this would be 5.
const float z = ...;
// The camera points along the -z axis, so transform the -z position
// with SceneKit's projection matrix (you can get this from SCNCamera)
const float4 depthPos = (sceneKitState.projectionTransform * float4(0, 0, -z, 1));
// Then do perspective division to get the final depth value
out.depth = depthPos.z / depthPos.w;
Then inside of the SceneKit shader, simply write out the depth, taking into account usesReverseZ:
// In a scenekit, full screen quad shader
const float depth = depthTexture.sample(sampler, in.uv);
return {
.color = float4(0),
.depth = 1.0 - depth,
};
❗️ The above assumes you are using sceneView.usesReverseZ = true (the default). If you are using usesReverseZ = false, simply do .depth = depth instead

Shadow Mapping - Space Transformations are going bad

I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. This is my current working theory/steps.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use texture coordinate to sample camera's depth buffer at current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate by the invProj matrix (also doing a perspective divide).
Take that view position and multiply by invV (camera's inverse view) to get a world space position
Multiply world space position by light's viewProjection matrix.
Perspective divide that projection-space coordinate, and manipulate into [0..1] to sample from light depth buffer.
Get the current depth from the light and the closest (sampled) depth; if current depth > closest depth, it's in shadow.
Shader Code
Pass1:
PS_INPUT vs(VS_INPUT input) {
output.pos = mul(input.vPos, mvp);
output.cameraDepth = output.pos.zw;
..
float4 vPosInLight = mul(input.vPos, m);
vPosInLight = mul(vPosInLight, light.viewProj);
output.lightDepth = vPosInLight.zw;
}
PS_OUTPUT ps(PS_INPUT input){
float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
//Bundle cameraDepth in alpha channel of a normal map.
output.normal = float4(input.normal, cameraDepth);
//4 Lights in total -- although only 1 is active right now. Going to use r/g/b/a for each light depth.
output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET{
float4 pixelPosView = depthToViewSpace(input.texCoord);
..
float4 pixelPosWorld = mul(pixelPosView, invV);
float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);
float shadow = shadowCalc(pixelPosLight);
//For testing / visualisation
return float4(shadow,shadow,shadow,1);
}
float4 depthToViewSpace(float2 xy) {
//Get pixel depth from camera by sampling current texcoord.
//Extract the alpha channel as this holds the depth value.
//Then, transform from [0..1] to [-1..1]
float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
float x = xy.x * 2 - 1;
float y = (1 - xy.y) * 2 - 1;
float4 vProjPos = float4(x, y, z, 1.0f);
float4 vPositionVS = mul(vProjPos, invP);
vPositionVS = float4(vPositionVS.xyz / vPositionVS.w,1);
return vPositionVS;
}
float shadowCalc(float4 pixelPosL) {
//Transform pixelPosLight from [-1..1] to [0..1]
float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
float currentDepth = projCoords.z;
return currentDepth > closestDepth; //Supposed to have bias, but for now I just want shadows working haha
}
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
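// Transposed below because DirectXMath matrices are row-major while HLSL constant buffers default to column-major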
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with the invV. But I really don't know for sure. I've debugged the light's viewProj matrix, and the values seem correct going from CPU to GPU. It's also entirely possible I've misunderstood some theory along the way, because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was based on the view of the camera. The following image should explain my mistake better than I can with words.
For future reference, the solution I decided on was to scrap my idea of storing light depths in texture channels. Instead, I basically make a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do the light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0); //Draw to regular render target
_sunlight->use(1, _context); //Use sunlight shader (basically just runs a Vertex Shader & Null Pixel shader so depth can be written to depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);
_context->DrawIndexed(indexCount, 0, 0); //Draw to sunlight depth target
void bindDSVSetNullRenderTarget(ID3D11DeviceContext* ctx) {
    ID3D11RenderTargetView* nullrv = nullptr;
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}
//The purpose of setting a null render target before doing the draw call is
//that a draw call with only a depth target bound is much faster.
//(At least I believe so, from my reading online)
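For completeness, here is a minimal sketch (my own, untested; names other than _sunlightDepthStencilView are assumed) of creating a depth texture that can be written as the light's depth-stencil view and later sampled as a shader resource:
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = shadowMapWidth;
texDesc.Height = shadowMapHeight;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;   // typeless so both a DSV and an SRV can be created
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
_device->CreateTexture2D(&texDesc, nullptr, &_sunlightDepthTexture);

D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;      // depth view of the same memory
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
_device->CreateDepthStencilView(_sunlightDepthTexture, &dsvDesc, &_sunlightDepthStencilView);

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R32_FLOAT;      // sampled as a single float in the lighting pass
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
_device->CreateShaderResourceView(_sunlightDepthTexture, &srvDesc, &_sunlightDepthSRV);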

using pointSize to trigger the fragment shader to draw pixels

I queried the point size range with gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE) and got [1, 1024]. This means that, using a single point to cover a texture (so it triggers the fragment shader for all the pixels spanned by the pointSize),
at best I cannot render images larger than 1024x1024 with this method, right?
I guess I have to bind 2 triangles (6 points) so the fragment shader covers all of clip space, and then gl.viewport(x, y, width, height) will map this entire area to the output texture (framebuffer object or canvas)?
Is there any other way (maybe something new in WebGL2) other than using an attribute in the fragment shader?
Correct, the largest area you can render with a single point is whatever is returned by gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE).
The spec does not require any size larger than 1. The fact that your GPU/Driver/Browser returned 1024 does not mean that your users' machines will also return 1024.
note: Answering based on your history of questions
The normal thing to do in WebGL for 99% of all cases is to submit vertices. Want to draw a quad, submit 4 vertices and 6 indices, or 6 vertices. Want to draw a triangle, submit 3 vertices. Want to draw a circle, submit the vertices for a circle. Want to draw a car, submit the vertices for a car, or more likely submit the vertices for a wheel, draw 4 wheels with those vertices, then submit the vertices for the other parts of the car and draw each part.
You multiply those vertices by some matrices to move, scale, rotate, and project them into 2D or 3D space. All your favorite games do this. The canvas 2D api does this via OpenGL ES internally. Chrome itself does this to render all the parts of this webpage. That's the norm. Anything else is an exception and will likely lead to limitations.
For fun, in WebGL2, there are some other things you can do. They are not the normal thing to do and they are not recommended to actually solve real world problems. They can be fun though just for the challenge.
In WebGL2 there is a global variable in the vertex shader called gl_VertexID, which is the index of the vertex currently being processed. You can use that with clever math to generate vertices in the vertex shader with no other data.
Here's some code that draws a quad that covers the canvas
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
const vs = `#version 300 es
void main() {
int x = gl_VertexID % 2;
int y = (gl_VertexID / 2 + gl_VertexID / 3) % 2;
gl_Position = vec4(ivec2(x, y) * 2 - 1, 0, 1);
}
`;
const fs = `#version 300 es
precision mediump float;
out vec4 outColor;
void main() {
outColor = vec4(1, 0, 0, 1);
}
`;
// compile shaders, link program
const prg = twgl.createProgram(gl, [vs, fs]);
gl.useProgram(prg);
const count = 6;
gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
And here's one that draws a circle:
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
const vs = `#version 300 es
#define PI radians(180.0)
void main() {
const int TRIANGLES_AROUND_CIRCLE = 100;
int triangleId = gl_VertexID / 3;
int pointId = gl_VertexID % 3;
int pointIdOffset = pointId % 2;
float angle = float((triangleId + pointIdOffset) * 2) * PI /
float(TRIANGLES_AROUND_CIRCLE);
float radius = 1. - step(1.5, float(pointId));
float x = sin(angle) * radius;
float y = cos(angle) * radius;
gl_Position = vec4(x, y, 0, 1);
}
`;
const fs = `#version 300 es
precision mediump float;
out vec4 outColor;
void main() {
outColor = vec4(1, 0, 0, 1);
}
`;
// compile shaders, link program
const prg = twgl.createProgram(gl, [vs, fs]);
gl.useProgram(prg);
const count = 300; // 100 triangles, 3 points each
gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
There is an entire website based on this idea. The site is based on the puzzle of making pretty pictures given only an id for each vertex. It's the vertex shader equivalent of shadertoy.com. On shadertoy.com the puzzle is basically: given only gl_FragCoord as input to a fragment shader, write a function that draws something interesting.
Both sites are toys/puzzles. Doing things this way is not recommended for solving real issues like drawing a 3D world in a game, doing image processing, rendering the contents of a browser window, etc. They are cute puzzles: given only minimal inputs, draw something interesting.
Why is this technique not advised? The most obvious reason is that it's hard coded and inflexible, whereas the standard techniques are super flexible. For example, above, drawing a fullscreen quad required one shader and drawing a circle required a different shader, whereas standard vertex-buffer-based attributes multiplied by matrices can be used for any shape, 2D or 3D. Not just any shape: with a single matrix multiply in the shader those shapes can be translated, rotated, scaled, projected into 3D, their rotation centers and scale centers can be set independently, etc.
Note: you are free to do whatever you want. If you like these techniques then by all means use them. The reason I'm trying to steer you away from them is that, based on your previous questions, you're new to WebGL, and I feel you'll end up making WebGL much harder for yourself if you use obscure and hard-coded techniques like these instead of the traditional, more common, flexible techniques that experienced devs use to get real work done. But again, it's up to you; do whatever you want.

What's the effect of geometry on the final texture output in WebGL?

Updated with more explanation around my confusion
(This is how a non-graphics developer imagines the rendering process!)
I specify a 2x2 square to be drawn by way of two triangles. I'm not going to talk about the triangles anymore; the square is a lot better. Let's say the square gets drawn in one piece.
I have not specified any units for my drawing. The only places in my code that I do something like that is: canvas size (set to 1x1 in my case) and the viewport (i always set this to the dimensions of my output texture).
Then I call draw().
What happens is this: regardless of the size of my texture (be it 1x1 or 10000x10000), all my texels are filled with the data (color) that I returned from my fragment shader. This works perfectly each time.
So now I'm trying to explain this to myself:
The GPU is only concerned with coloring the pixels.
Pixel is the smallest unit that the GPU deals with (colors).
Depending on how many pixels my 2x2 square is mapped to, I should be running into one of the following 3 cases:
The number of pixels (to be colored) and my output texture dims match one to one: In this ideal case, for each pixel, there would be one value assigned to my output texture. Very clear to me.
The number of pixels is fewer than my output texture dims. In this case, I should expect some of the output texels to have exactly the same value (the color of the pixel they fall under). For instance, if the GPU ends up drawing 16x16 pixels and my texture is 64x64, then I'll have blocks of 4x4 texels which get the same value. I have not observed such a case regardless of the size of my texture, which means we never end up with fewer pixels (really hard to imagine, but let's keep going).
The number of pixels ends up being more than the number of texels. In this case, the GPU would have to decide which value to assign to each texel. Would it average out the pixel colors? If the GPU is coloring 64x64 pixels and my output texture is 16x16, then I should expect each texel to get the average color of the 4x4 pixels it contains. Anyway, in this case my texture should be filled with values I didn't specifically intend for it (averaged out, for example), but this has not been the case either.
I didn't even talk about how many times my frag shader gets called because it didn't matter. The results would be deterministic anyway.
So considering that I have never run into the 2nd and 3rd cases, where the values in my texels are not what I expected, the only conclusion I can come up with is that the whole assumption of the GPU trying to render pixels is actually wrong. When I assign an output texture (which is supposed to stretch over my 2x2 square all the time), the GPU will happily oblige and call my fragment shader once for each texel. Somewhere along the line the pixels get colored too.
But the above lunatistic explanation also fails to answer why I end up with no values, or incorrect values, in my texels if I stretch my geometry to 1x1 or 4x4 instead of 2x2.
Hopefully the above fantastic narration of the GPU coloring process has given you clues as to where I'm getting this wrong.
Original Post:
We're using WebGL for general computation. As such we create a rectangle and draw 2 triangles in it. Ultimately what we want is the data inside the texture mapped to this geometry.
What I don't understand is if I change the rectangle from (-1,-1):(1,1) to say (-0.5,-0.5):(0.5,0.5) suddenly data is dropped from the texture bound to the framebuffer.
I'd appreciate if someone makes me understand the correlations. The only places that real dimensions of the output texture come into play are the call to viewPort() and readPixels().
Below are relevant pieces of code for you to see what I'm doing:
... // canvas is created with size: 1x1
... // context attributes passed to canvas.getContext()
contextAttributes = {
alpha: false,
depth: false,
antialias: false,
stencil: false,
preserveDrawingBuffer: false,
premultipliedAlpha: false,
failIfMajorPerformanceCaveat: true
};
... // default geometry
// Sets of x,y,z (for rectangle) and s,t coordinates (for texture)
return new Float32Array([
-1.0, 1.0, 0.0, 0.0, 1.0, // upper left
-1.0, -1.0, 0.0, 0.0, 0.0, // lower left
1.0, 1.0, 0.0, 1.0, 1.0, // upper right
1.0, -1.0, 0.0, 1.0, 0.0 // lower right
]);
...
const geometry = this.createDefaultGeometry();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, geometry, gl.STATIC_DRAW);
... // binding to the vertex shader attribs
gl.vertexAttribPointer(positionHandle, 3, gl.FLOAT, false, 20, 0);
gl.vertexAttribPointer(textureCoordHandle, 2, gl.FLOAT, false, 20, 12);
gl.enableVertexAttribArray(positionHandle);
gl.enableVertexAttribArray(textureCoordHandle);
... // setting up framebuffer; I set the viewport to output texture dimensions (I think this is absolutely needed but not sure)
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer);
gl.framebufferTexture2D(
gl.FRAMEBUFFER, // The target is always a FRAMEBUFFER.
gl.COLOR_ATTACHMENT0, // We are providing the color buffer.
gl.TEXTURE_2D, // This is a 2D image texture.
texture, // The texture.
0); // 0, we aren't using MIPMAPs
gl.viewport(0, 0, width, height);
... // reading from output texture
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.framebufferTexture2D(
gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture,
0);
gl.readPixels(0, 0, width, height, gl.RED, gl.FLOAT, buffer);
new answer
I'm just saying the same thing yet again (3rd time?)
Copied from below
WebGL is destination based. That means it's going to iterate over the pixels of the line/point/triangle it's drawing and for each point call the fragment shader and ask "what value should I store here?"
It's destination based. It's going to draw each pixel exactly once. For that pixel it's going to ask "what color should I make this?"
destination based loop
for (let i = start; i < end; ++i) {
  fragmentShaderFunction(); // must set gl_FragColor
  destinationTextureOrCanvas[i] = gl_FragColor;
}
You can see in the loop above there is no setting any random destination. There is no setting any part of destination twice. It's just going to run from start to end and exactly once for each pixel in the destination between start and end ask what color it should make that pixel.
How do you set start and end? Again, to make it simple, let's assume a 200x1 texture so we can ignore Y. It works like this:
vertexShaderFunction(); // must set gl_Position
const start = clipspaceToArrayspaceViaViewport(viewport, gl_Position.x);
vertexShaderFunction(); // must set gl_Position
const end = clipspaceToArrayspaceViaViewport(viewport, gl_Position.x);
for (let i = start; i < end; ++i) {
fragmentShaderFunction(); // must set gl_FragColor
texture[i] = gl_FragColor;
}
see below for clipspaceToArrayspaceViaViewport
What is viewport? viewport is what you set when you called gl.viewport(x, y, width, height).
So, set gl_Position.x to -1 and +1, viewport.x to 0 and viewport.width to 200 (the width of the texture); then start will be 0 and end will be 200.
Set gl_Position.x to .25 and .75, viewport.x to 0 and viewport.width to 200; then start will be 125 and end will be 175.
I honestly feel like this answer is leading you down the wrong path. It's not remotely this complicated. You don't have to understand any of this to use WebGL IMO.
The simple answer is
You set gl.viewport to the sub rectangle you want to affect in your destination (canvas or texture it doesn't matter)
You make a vertex shader that somehow sets gl_Position to clip space coordinates (they go from -1 to +1) across the texture
Those clip space coordinates get converted to viewport space. It's basic math to map one range to another range, but it's mostly not important. It seems intuitive that -1 will draw to the viewport.x pixel and +1 will draw to the viewport.x + viewport.width - 1 pixel. That's what "maps from clip space to the viewport settings" means.
It's most common for the viewport settings to be (x = 0, y = 0, width = width of destination texture or canvas, height = height of destination texture or canvas)
So that just leaves what you set gl_Position to. Those values are in clip space just like it explains in this article.
You can make it simple, if you want, by converting from pixel space to clip space just like it explains in this article:
zeroToOne = someValueInPixels / destinationDimensions;
zeroToTwo = zeroToOne * 2.0;
clipspace = zeroToTwo - 1.0;
gl_Position = clipspace;
If you continue the articles they'll also show adding a value (translation) and multiplying by a value (scale)
Using just those 2 things and a unit square (0 to 1) you can choose any rectangle on the screen. Want to affect pixels 123 to 127? That's 5 units, so scale = 5, translation = 123. Then apply the math above to convert from pixels to clip space and you'll get the rectangle you want.
If you continue further through those articles you'll eventually get to the point where that math is done with matrices, but you can do that math however you want. It's like asking "how do I compute the value 3". Well, 1 + 1 + 1, or 3 + 0, or 9 / 3, or (100 - 50 + 20 * 2) / 30, or (7^2 - 19) / 10, or ????
I can't tell you how to set gl_Position. I can only tell you to make up whatever math you want that ends up in *clip space*, and then give an example of converting from pixels to clip space (see above) as just one example of some possible math.
old answer
I get that this might not be clear; I don't know how else to help. WebGL draws lines, points, or triangles to a 2D array. That 2D array is either the canvas, a texture (as a framebuffer attachment), or a renderbuffer (as a framebuffer attachment).
The size of the area is defined by the size of the canvas, texture, renderbuffer.
You write a vertex shader. When you call gl.drawArrays(primitiveType, offset, count) you're telling WebGL to call your vertex shader count times. Assuming primitiveType is gl.TRIANGLES then for every 3 vertices generated by your vertex shader WebGL will draw a triangle. You specify that triangle by setting gl_Position in clip space.
Assuming gl_Position.w is 1, Clip space goes from -1 to +1 in X and Y across the destination canvas/texture/renderbuffer. (gl_Position.x and gl_Position.y are divided by gl_Position.w) which is not really important for your case.
To convert back to actual pixels, your X and Y are converted based on the settings of gl.viewport. Let's just do X:
pixelX = ((clipspace.x / clipspace.w) * .5 + .5) * viewport.width + viewport.x
WebGL is destination based. That means it's going to iterate over the pixels of the line/point/triangle it's drawing and for each point call the fragment shader and ask "what value should I store here?"
Let's translate that to JavaScript in 1D. Let's assume you have an 1D array
const dst = new Array(100);
Let's make a function that takes a start and end and sets values between
function setRange(dst, start, end, value) {
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
You can fill the entire 100 element array with 123:
const dst = new Array(100);
setRange(dst, 0, 100, 123);
To set the last half of the array to 456:
const dst = new Array(100);
setRange(dst, 50, 100, 456);
Let's change that to use clip space like coordinates
function setClipspaceRange(dst, clipStart, clipEnd, value) {
const start = clipspaceToArrayspace(dst, clipStart);
const end = clipspaceToArrayspace(dst, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
function clipspaceToArrayspace(array, clipspaceValue) {
// convert clipspace value (-1 to +1) to (0 to 1)
const zeroToOne = clipspaceValue * .5 + .5;
// convert zeroToOne value to array space
return Math.floor(zeroToOne * array.length);
}
This function now works just like the previous one except it takes clip space values instead of array indices.
// fill entire array with 123
const dst = new Array(100);
setClipspaceRange(dst, -1, +1, 123);
Set the last half of the array to 456
setClipspaceRange(dst, 0, +1, 456);
Now abstract one more time. Instead of using the array's length, use a setting:
// viewport looks like `{ x: number, width: number }`
function setClipspaceRangeViaViewport(dst, viewport, clipStart, clipEnd, value) {
const start = clipspaceToArrayspaceViaViewport(viewport, clipStart);
const end = clipspaceToArrayspaceViaViewport(viewport, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
function clipspaceToArrayspaceViaViewport(viewport, clipspaceValue) {
// convert clipspace value (-1 to +1) to (0 to 1)
const zeroToOne = clipspaceValue * .5 + .5;
// convert zeroToOne value to array space
return Math.floor(zeroToOne * viewport.width) + viewport.x;
}
Now to fill the entire array with 123
const dst = new Array(100);
const viewport = { x: 0, width: 100 };
setClipspaceRangeViaViewport(dst, viewport, -1, 1, 123);
To set the last half of the array to 456 there are now 2 ways. Way one is just like before, using 0 to +1:
setClipspaceRangeViaViewport(dst, viewport, 0, 1, 456);
You can also set the viewport to start half way through the array
const halfViewport = { x: 50, width: 50 };
setClipspaceRangeViaViewport(dst, halfViewport, -1, +1, 456);
I don't know if that was helpful or not.
The only other thing to add: instead of a fixed value, pass a function that gets called every iteration to supply the value.
function setClipspaceRangeViaViewport(dst, viewport, clipStart, clipEnd, fragmentShaderFunction) {
const start = clipspaceToArrayspaceViaViewport(viewport, clipStart);
const end = clipspaceToArrayspaceViaViewport(viewport, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = fragmentShaderFunction();
}
}
Note this is the exact same thing that is said in this article and clarified somewhat in this article.

Centering all of the points in iOS OpenGL ES app

I have an OpenGL view that displays a set of 3D points with some basic shaders:
// Fragment Shader
static const char* PointFS = STRINGIFY
(
void main(void)
{
gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
}
);
// Vertex Shader
static const char* PointVS = STRINGIFY
(
uniform mediump mat4 uProjectionMatrix;
attribute mediump vec4 position;
void main( void )
{
gl_Position = uProjectionMatrix * position;
gl_PointSize = 3.0;
}
);
And the MVP matrix is calculated as:
- (void)setMatrices
{
// ModelView Matrix
GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, 2, 2, 2);
// Projection Matrix
const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
const GLfloat fieldView = GLKMathDegreesToRadians(90.0f);
const GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(fieldView, aspectRatio, 0.1f, 10.0f);
glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0, GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}
This works fine, but I have a set of 500 points and I see only a few.
How do I scale/translate the MVP matrix to display all of them (they are a dynamic set)? Ideally the "centroid" should be at the origin, and all of the points visible. It should be able to adapt to rotations of the view (gestures are the next step I want to implement).
Seeing how you present this, you might need quite a lot... I guess the best approach might be using "look at": the point you are looking at is (0,0,0) as you stated, the camera position should probably be (0,0,Z) and up (0,1,0). So the only issue here is the Z component of the camera position.
If you start Z with, for instance, -.1 and then iterate through all the points, then sin(fieldView*.5f) * (point.z-Z) >= point.y must hold for a point to be visible. So you can compute Z1 = point.z - (point.y / sin(fieldView*.5f)), and if Z1 < Z then Z = Z1. This check is only for the positive Y direction; you also need the same for negative Y and for +-X. The equations are very similar, though when checking X you should also take the screen ratio into account.
This procedure should give you the smallest field possible to see all the points (with the given limitations, such as looking towards (0,0,0)) but it is far from the simplest. You also need to consider whether the equation will still work if point.z < -Z.
Another, somewhat easier, approach is to generate the smallest cube around the centre which holds all the points: iterate through the points and take the coordinate with the largest absolute value (any of X, Y or Z). When you have it, use it with a frustum instead of a perspective projection, so that all the rect parameters (top, bottom, left and right) are generated from this value as +-largest. Then you need to compute the translation, which for a 90 degree field of view is Z = (largest*.5). Z is the zNear for the frustum; then also translate the matrix by -(Z+largest). Again, one of the frustum coordinates must be multiplied by the screen ratio, as in the sketch below.
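A rough sketch of that second approach, written directly from the description above (untested; points, aspectRatio and the surrounding setup are placeholders):
#include <GLKit/GLKMath.h>
#include <math.h>

// 1. Find the largest absolute coordinate over all points (the half-size of the bounding cube).
float largest = 0.0f;
for (const GLKVector3 &p : points) {
    largest = fmaxf(largest, fmaxf(fabsf(p.x), fmaxf(fabsf(p.y), fabsf(p.z))));
}
// 2. Build a symmetric frustum from +-largest, scaling one axis by the screen ratio.
const float zNear = largest * 0.5f;          // the "Z" from the text above (90 degree field)
const float zFar  = zNear + 2.0f * largest;  // just enough to contain the whole cube
const GLKMatrix4 projectionMatrix = GLKMatrix4MakeFrustum(
    -largest * aspectRatio, largest * aspectRatio,  // left, right
    -largest, largest,                              // bottom, top
    zNear, zFar);
// 3. Translate the whole point set back so it sits in front of zNear.
const GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -(zNear + largest));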
In any case, do watch what your zFar is; having it at only 10.0f might be a bit too short in your case. Until you need the depth buffer you should not worry about that value being too large.

Resources