I am playing around with WebGL and can render a triangle in this Plunker. Now I am trying to make it a full square by changing...
// This...
this.triangleVertexPositionBuffer.numItems = 3;
// ...
this.gl.drawArrays(this.gl.TRIANGLE_STRIP, 0, this.triangleVertexPositionBuffer.numItems);
// ...to this:
this.triangleVertexPositionBuffer.numItems = 4;
To me, this should just make a square; however, the triangle completely disappears. I also get the following warning:
GL ERROR :GL_INVALID_OPERATION : glDrawArrays: attempt to access out of range vertices in attribute 1
What am I missing here?
You need to add a color attribute for your fourth vertex as well. With numItems = 4 the position buffer has four vertices, but the color buffer still only has three, so drawing the fourth vertex reads out of range in attribute 1 (the colors), which is exactly what the warning says.
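For example, a minimal sketch, assuming a color buffer set up like the position buffer (triangleVertexColorBuffer and the RGBA layout are guesses at how your Plunker is organized, so adapt the names):

// Four RGBA colors, one per vertex, to match the four positions.
var colors = [
    1.0, 0.0, 0.0, 1.0, // vertex 0
    0.0, 1.0, 0.0, 1.0, // vertex 1
    0.0, 0.0, 1.0, 1.0, // vertex 2
    1.0, 1.0, 1.0, 1.0  // vertex 3 -- the one that was missing
];
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.triangleVertexColorBuffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, new Float32Array(colors), this.gl.STATIC_DRAW);
this.triangleVertexColorBuffer.numItems = 4; // keep in sync with the position buffer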
I am trying to implement shadows in my WebGL 2.0 project using this tutorial:
https://webgl2fundamentals.org/webgl/lessons/webgl-shadows.html
Currently I am getting really bad results like this:
Basically, a ton of the terrain is drawn in shadow that shouldn't be. The light is projected from the camera position along the view direction, so hypothetically you shouldn't be able to see any shadows at all, because the light projection is the same as the camera's (I am just doing this for testing until I can get it working properly).

I believe I have everything the same as the tutorial, except that I am using glMatrix instead of their matrix math library (which shouldn't matter, I would assume). Here's the thing, though: I don't use a model-view matrix for anything I am rendering, so none of my points are in a -1 to 1 range; they can go out as far as -3200, etc. It's all just one big terrain mesh, chunked out.

I think the issue lies with how I am creating the texture matrix:
textureMatrix = glMatrix.mat4.create();
glMatrix.mat4.translate(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
glMatrix.mat4.scale(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
glMatrix.mat4.multiply(textureMatrix, textureMatrix, projectionMatrix);
glMatrix.mat4.invert(lightMatrix, lightMatrix);
glMatrix.mat4.multiply(textureMatrix, textureMatrix, lightMatrix);
I am using the same matrix for the light projection as for the normal projection; is that an issue? If anyone could help, it would be greatly appreciated.
That's probably because the Y position of your light (in your example it is really the distance between the eye and the scene) is too big for the Z size of your shadow volume (the size of the shadow volume along the view direction). Here, posY is inside the wireframe box:

But if you increase posY too much (i.e. your shapes get outside the shadow volume), they disappear:

So you should increase the size of your shadow volume (or shrink your scene; either works). You cannot simulate that with the sliders, because they only give you control over two dimensions: projWidth and projHeight.

E.g. in the last code sample on your tutorial page, change the last parameter ("far") from 10 to 100:
const lightProjectionMatrix = settings.perspective
    ? m4.perspective(
        degToRad(settings.fieldOfView),
        settings.projWidth / settings.projHeight,
        0.5,  // near
        10)   // far
    : m4.orthographic(
        -settings.projWidth / 2,   // left
        settings.projWidth / 2,    // right
        -settings.projHeight / 2,  // bottom
        settings.projHeight / 2,   // top
        0.5,   // near
        100);  // far
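Since you are using glMatrix rather than the tutorial's m4 helpers, the equivalent change might look like this (a sketch only; the settings.* names mirror the tutorial and stand in for wherever you keep these values):

const lightProjectionMatrix = glMatrix.mat4.create();
if (settings.perspective) {
    glMatrix.mat4.perspective(lightProjectionMatrix,
        glMatrix.glMatrix.toRadian(settings.fieldOfView),
        settings.projWidth / settings.projHeight,
        0.5,   // near
        100);  // far, enlarged so the whole terrain fits in the shadow volume
} else {
    glMatrix.mat4.ortho(lightProjectionMatrix,
        -settings.projWidth / 2,  // left
        settings.projWidth / 2,   // right
        -settings.projHeight / 2, // bottom
        settings.projHeight / 2,  // top
        0.5,   // near
        100);  // far
}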
Then you can increase posY far more:

Without your full code it is hard to reproduce this and help you. Could you try to just inject your scene into the tutorial code? You can bind the viewpoint to the position and orientation of the light by using the same inputs (just add 0.5 to X so you see a bit of shadow and can make sure it is computed properly):
/*const cameraPosition = [settings.cameraX, settings.cameraY, 15];*/
const cameraPosition = [settings.posX+0.5, settings.posY, settings.posZ];
/*const target = [0, 0, 0]; */
const target = [settings.targetX, settings.targetY, settings.targetZ];
I am doing a block matrix inversion of a 6x6 matrix, split into a 4x4, a 2x4, a 4x2, and a 2x2 block, but somewhere along the way something goes wrong, and attempting to access one of the values causes a crash. I thought I would try using isnan() or isinf() to detect the bad value, but those appear to cause a crash as well.
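For reference, the standard 2x2-block inversion identity these pieces come from is (A is the 4x4 block, D the 2x2 block, B and C the off-diagonal blocks; note the inverses of the Schur complements in the off-diagonal terms):

$$
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
=
\begin{pmatrix}
\mathrm{schurA}^{-1} & -A^{-1} B\,\mathrm{schurD}^{-1} \\
-D^{-1} C\,\mathrm{schurA}^{-1} & \mathrm{schurD}^{-1}
\end{pmatrix},
\qquad
\mathrm{schurA} = A - B D^{-1} C,\quad
\mathrm{schurD} = D - C A^{-1} B.
$$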
// Pieces of a block matrix inversion:
mat4 invA = inverse(A);
mat2 invD = inverse(D);
mat4 schurA = A - B*invD*C; // Schur complement paired with the upper-left block
mat2 schurD = D - C*invA*B; // Schur complement paired with the lower-right block
// Note: the off-diagonal blocks of the inverse need the inverses of the Schur complements.
mat2x4 upperR = -invA*B*inverse(schurD);
mat4x2 lowerL = -invD*C*inverse(schurA);
// Set outgoing color for the vertex to yellowish:
v_pColorMarker = vec3(0.90, 1.00, 0.60);
// Does not crash, so it seems I can do a multiply with these matrix values:
vec2 p45 = vec2(0.0);
p45 += lowerL*b0123 + schurD*b45;
float temp = schurD[0][1];
// Checking matrix entry for NaN causes crash ????!!!!
if (isnan(temp))
{
// Set the vertex color to something I’ll be able to see and detect:
v_pColorMarker = vec3(0.0, 1.0, 0.0);
}
Any ideas? I am not sure how to debug this, since the crash happens in the vertex shader and I do not have a good way to inspect the values inside the matrix. Is it possible for the value at [0][1] in the matrix to be some non-numeric value, different from NaN or INF, that would crash both isnan() and isinf()?
This ended up being an unrelated problem: the "instabilities" were coming from using too many addresses, causing the GPU/compiler to either recycle or lose track of the values at particular addresses. Refactoring the code to be more address-efficient made the instability go away.
I'm using SlimDX for a Direct3D 10 app. In the app I've loaded two or more meshes, with images loaded as textures, and I use .fx code for the shaders. The code was modified from SlimDX's sample "SimpleModel10".

I moved the draw call and shader setup code into a class that manages one mesh, its shader (effect), and its draw call. Then I initialized two instances of this class and called their draw functions one after another.

In the output, no matter how I change the Z positions of the meshes, the one drawn later always stays on top. When I used PIX to debug the draw calls, I found that the 2nd mesh has no depth while the first one does. I've tried with 3 meshes; the 2nd and 3rd ones have no depth either. The funny thing is that all of them are instantiated from the same class, using the same draw call.

What could have caused such a problem?

Following is part of the code in the draw function of the class; I've omitted the rest, as it's lengthy and involves a few classes. I kept the sample's existing OnRenderBegin() and OnRenderEnd():
PanelEffect.GetVariableByName("world").AsMatrix().SetMatrix(world);
lock (this)
{
    device.InputAssembler.SetInputLayout(layout);
    device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology.TriangleList);
    device.InputAssembler.SetIndexBuffer(indices, Format.R32_UInt, 0);
    device.InputAssembler.SetVertexBuffers(0, binding);
    PanelEffect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply();
    device.DrawIndexed(indexCount, 0, 0);
    device.InputAssembler.SetIndexBuffer(null, Format.Unknown, 0);
    device.InputAssembler.SetVertexBuffers(0, nullBinding);
}
Edit: After much debugging and code isolation, I found that the culprit is Font.Draw() in my DrawString() function:
internal void DrawString(string text)
{
    sprite.Begin(SpriteFlags.None);
    string[] texts = text.Split(new string[] { "\r\n" }, StringSplitOptions.None);
    int y = PanelY;
    foreach (string t in texts)
    {
        font.Draw(sprite, t, new System.Drawing.Rectangle(PanelX, y, PanelSize.Width, PanelSize.Height), FontDrawFlags.SingleLine, new Color4(Color.Red));
        y += font.Description.Height;
    }
    sprite.End();
}
Commenting out Font.Draw() solves the problem. Maybe it automatically sets some states that cause the next mesh draw to discard depth. Looking into SlimDX's source code now.
After much debugging in PIX, this is the conclusion.
Calling Font.Draw() automatically sets DepthEnable to false and the depth function to D3D10_COMPARISON_NEVER; this shows up when comparing PIX's OutputMerger details before and after the Font.Draw() call.
Solution
Context10_1.Device.OutputMerger.DepthStencilState = depthStencilState;
Putting that before the next mesh draw call fixed the problem. Previously I had only set the DepthStencilState in OnRenderBegin().
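For reference, a depthStencilState like the one above can be created once at startup. A minimal sketch with SlimDX's Direct3D 10 types (the exact property names are from memory, so verify them against your SlimDX version):

var desc = new DepthStencilStateDescription
{
    IsDepthEnabled = true,               // re-enable the depth test Font.Draw() turned off
    DepthWriteMask = DepthWriteMask.All, // and depth writes
    DepthComparison = Comparison.Less
};
depthStencilState = DepthStencilState.FromDescription(Context10_1.Device, desc);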
I've created a simple DirectX app that renders a field of vertices. The vertices are rendered like this (viewed from the top):
|\|\|\|\|
|\|\|\|\|
Each triangle is rendered like this:
1
|\
2 3
Which should mean that the polygon is counter-clockwise and should not be rendered, but it is. Anyway, when viewed from the top the plane is perfect.

However, when viewed from another angle, some polygons are sort of transparent and you can see geometry behind them. I've highlighted some of the places where this is happening.

I am thinking this is one of those basic beginner problems. What am I missing? My rasterizer description is this:
new RasterizerStateDescription
{
    CullMode = CullMode.Front,
    IsAntialiasedLineEnabled = true,
    IsMultisampleEnabled = true,
    IsDepthClipEnabled = true,
    IsFrontCounterclockwise = false,
    IsScissorEnabled = true,
    DepthBias = 1,
    DepthBiasClamp = 1000.0f,
    FillMode = FillMode.Wireframe,
    SlopeScaledDepthBias = 1.0f
};
This is by design. FillMode.Wireframe only draws the edges of each triangle as lines. That's all.
Do a first pass with a solid fill mode, depth writes on, and a color mask (RenderTargetWriteMask in D3D11 terminology), then a second pass with the depth test on (but depth writes off) and wireframe mode on. You will probably need some depth bias too, since lines and triangles are not rasterized the same way (their Z can differ at the same fragment position).
BTW, this technique is known as hidden line removal. You can check this presentation for more details.
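A sketch of the two passes' states in SlimDX terms (illustrative only: the names are mine, and the pass-1 color mask lives in the blend state's RenderTargetWriteMask, which is omitted here):

// Pass 1: solid fill with depth writes ON; color writes masked off via the blend state.
var solidFill = RasterizerState.FromDescription(device, new RasterizerStateDescription
{
    CullMode = CullMode.Back,
    FillMode = FillMode.Solid,
    IsDepthClipEnabled = true
});
var depthWriteOn = DepthStencilState.FromDescription(device, new DepthStencilStateDescription
{
    IsDepthEnabled = true,
    DepthWriteMask = DepthWriteMask.All,
    DepthComparison = Comparison.Less
});

// Pass 2: wireframe with the depth test ON but depth writes OFF,
// plus a bias pulling the lines toward the eye so they pass the test.
var wireFrame = RasterizerState.FromDescription(device, new RasterizerStateDescription
{
    CullMode = CullMode.Back,
    FillMode = FillMode.Wireframe,
    IsDepthClipEnabled = true,
    DepthBias = -1,
    SlopeScaledDepthBias = -1.0f
});
var depthTestOnly = DepthStencilState.FromDescription(device, new DepthStencilStateDescription
{
    IsDepthEnabled = true,
    DepthWriteMask = DepthWriteMask.Zero,
    DepthComparison = Comparison.LessEqual
});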
Turned out I just had no depth-stencil buffer set up. Oh well.
I have a tex2D sampler that I want to return only exactly those colours present in my texture. I am using Shader Model 3, so I cannot use load.
In the event of a texel overlapping multiple colours, I want it to pick one and have the whole texel be that colour.
I think to do this I want to disable mipmapping, or at least trilinear filtering of mips.
sampler2D gColourmapSampler : register(s0) = sampler_state {
    Texture = <gColourmapTexture>; // Defined above
    MinFilter = None; // Minification sampling: None, Linear, or Point.
    MagFilter = None; // Magnification sampling: None, Linear, or Point.
    MipFilter = None; // Filtering between mip levels: None, Linear, or Point.
    //...
};
My problem is that I don't really understand min/mag/mip filtering, so I am not sure what combination I need to set these to, or if this is even what I am after.
What a portion of my source texture looks like;
Screenshot of what the relevant area looks like after the full texture is mapped to my sphere;
The anti-aliasing/blending/filtering artefacts are clearly visible; I don't want these.
MSDN has this to say:
D3DSAMP_MAGFILTER: Magnification filter of type D3DTEXTUREFILTERTYPE
D3DSAMP_MINFILTER: Minification filter of type D3DTEXTUREFILTERTYPE.
D3DSAMP_MIPFILTER: Mipmap filter to use during minification. See D3DTEXTUREFILTERTYPE.
D3DTEXF_NONE: When used with D3DSAMP_MIPFILTER, disables mipmapping.
Another good link for understanding HLSL intrinsics.
RESOLVED
Not an HLSL issue at all! Sorry, all; I seem to ask a lot of questions that are impossible to answer. Ogre was overriding the above settings. This was fixed with:
Ogre::MaterialManager::getSingleton().setDefaultTextureFiltering(Ogre::FO_NONE, Ogre::FO_NONE, Ogre::FO_NONE);
It looks to me like you're getting the values from a lower (unfiltered) mip level than the highest-detail one you're showing.
MipFilter = None
should prevent that, unless something in the code overrides it. So look for calls to SetSamplerState.
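That is, look for fixed-function calls like these (D3D9, C++), which silently override whatever the .fx file's sampler_state block set:

device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT); // stage 0 minification
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT); // stage 0 magnification
device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);  // disables mipmapping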
What you have done should turn off filtering. There are two potential issues I can think of, though:
1) The driver just ignores you and filters anyway (if this is happening, there is nothing you can do).
2) You have some form of edge anti-aliasing enabled.
Looking at your resulting image, that doesn't look much like bilinear filtering to me, so I'd think you are suffering from anti-aliasing being turned on somewhere. Have you set the anti-aliasing flag when creating the device/render texture?
If you really want just one texel, use load instead of sample. load takes integer texel array coordinates (for a Texture2D an int3: x, y, and mip level) and looks up the entry in your texture at those coordinates.

So just scale your float2 accordingly, e.g. using floor(float2(texCoord.x * textureWidth, texCoord.y * textureHeight)) (floor rather than ceil, since texel indices are zero-based).
MSDN for load: http://msdn.microsoft.com/en-us/library/bb509694(VS.85).aspx
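For reference, a minimal SM4 sketch of the load approach (not usable in SM3; textureWidth and textureHeight are assumed to be supplied by the application):

Texture2D gColourmapTexture;
float textureWidth, textureHeight; // set from the application

float4 LoadExactTexel(float2 texCoord)
{
    // Integer texel coordinates; the third component selects mip level 0.
    int3 coords = int3(floor(texCoord.x * textureWidth),
                       floor(texCoord.y * textureHeight), 0);
    return gColourmapTexture.Load(coords); // exact texel, no filtering
}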
When using just Shader Model 3, you can use a little hack to achieve this. Again, let's assume that you know textureWidth and textureHeight:
// compute the floating-point stride between texel centers
float step_x = 1.0 / textureWidth;
float step_y = 1.0 / textureHeight;
// compute integer texel array coordinates (the float-to-int assignment truncates)
int target_x = texCoord.x * textureWidth;
int target_y = texCoord.y * textureHeight;
// snap to the centre of that texel, so filtering cannot blend in neighbours
float2 texCoordNew;
texCoordNew.x = (target_x + 0.5) * step_x;
texCoordNew.y = (target_y + 0.5) * step_y;
float4 colour = tex2D(gColourmapSampler, texCoordNew);
I did not test it, but I think it could work.