DirectX 11 wireframe z-fighting help (or why is D3D11_RASTERIZER_DESC.DepthBias an INT?)

I'm trying to use the DepthBias property on the rasterizer state in DirectX 11 (D3D11_RASTERIZER_DESC) to help with the z-fighting that occurs when I render in wireframe mode over solid polygons (wireframe overlay), but setting it to any value doesn't seem to change the result at all. I also noticed something strange... the value is defined as an INT rather than a FLOAT. That doesn't make sense to me, and either way it doesn't work as expected. How do we properly set that value if it is an INT that needs to be interpreted as a UNORM in the shader pipeline?
Here's what I do:
Render all geometry
Set the rasterizer to render in wireframe
Render all geometry again
I can clearly see the wireframe overlay, but the z-fighting is horrible. I tried setting the DepthBias to a lot of different values, such as 0.000001, 0.1, 1, 10, 1000 and all their negative equivalents, still no results... obviously I'm aware that when the float is cast to an integer, all the decimals get cut off... meh?
D3D11_RASTERIZER_DESC RasterizerDesc;
ZeroMemory(&RasterizerDesc, sizeof(RasterizerDesc));
RasterizerDesc.FillMode = D3D11_FILL_WIREFRAME;
RasterizerDesc.CullMode = D3D11_CULL_BACK;
RasterizerDesc.FrontCounterClockwise = FALSE;
RasterizerDesc.DepthBias = ???
RasterizerDesc.SlopeScaledDepthBias = 0.0f;
RasterizerDesc.DepthBiasClamp = 0.0f;
RasterizerDesc.DepthClipEnable = TRUE;
RasterizerDesc.ScissorEnable = FALSE;
RasterizerDesc.MultisampleEnable = FALSE;
RasterizerDesc.AntialiasedLineEnable = FALSE;
Has anyone figured out how to set the DepthBias properly? Or perhaps it is a bug in DirectX (which I doubt), or maybe there's a better way to achieve this than using DepthBias?
Thank you!

http://msdn.microsoft.com/en-us/library/windows/desktop/cc308048(v=vs.85).aspx
The meaning of the number depends on whether your depth buffer is UNORM or floating point. In most cases you're just looking for the smallest possible value that gets rid of your z-fighting rather than any specific value. Small values are a small bias, large values are a large bias, but how that equates to a numerical shift depends on the format of your depth buffer.
As for the values you've tried, anything less than 1 would have rounded to zero and had no effect. 1, 10 and 1000 may simply not have been enough to fix the issue. In the case of a D24 UNORM depth buffer, the formula suggests a depth bias of 1000 would offset depth by 1000 * (1 / 2^24) ≈ 0.0000596, which is not a very significant shift in z-buffering terms.
Does a large value of 100,000 or 1,000,000 fix the z-fighting?
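As a concrete sketch of that advice (assuming a DXGI_FORMAT_D24_UNORM_S8_UINT depth buffer; pDevice and pWireframeRS are placeholder names, not from the question), the rasterizer state for the wireframe pass could look like this, with a negative bias so the wireframe is pulled slightly towards the camera (a positive bias of the same magnitude on the solid pass would also work):
// One unit of DepthBias shifts depth by 1/2^24 on a D24 UNORM buffer,
// so the magnitude usually needs to be in the thousands or more.
D3D11_RASTERIZER_DESC WireframeDesc = {};
WireframeDesc.FillMode = D3D11_FILL_WIREFRAME;
WireframeDesc.CullMode = D3D11_CULL_BACK;
WireframeDesc.DepthClipEnable = TRUE;
WireframeDesc.DepthBias = -100000;       // tune the magnitude until the z-fighting disappears
WireframeDesc.SlopeScaledDepthBias = 0.0f;
WireframeDesc.DepthBiasClamp = 0.0f;
ID3D11RasterizerState* pWireframeRS = nullptr;
HRESULT hr = pDevice->CreateRasterizerState(&WireframeDesc, &pWireframeRS);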

If anyone cares, I made myself a macro to make it easier. Note that this macro will only work if you are using a 32-bit float depth buffer format; a different macro is needed for other depth buffer formats.
#define DEPTH_BIAS_D32_FLOAT(d) ((d) / (1.0 / pow(2, 23)))
That way you can simply set your depth bias using standard values, such as:
RasterizerDesc.DepthBias = DEPTH_BIAS_D32_FLOAT(-0.00001);
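If your depth buffer is the common 24-bit UNORM format instead, the same idea applies with the 1/2^24 step from the answer above; a companion macro (the name and exact form are just a sketch, not from the original post) would be:
// Assumes a D24 UNORM depth buffer: one bias unit equals 1/2^24 in depth
#define DEPTH_BIAS_D24_UNORM(d) ((d) * (1 << 24))
RasterizerDesc.DepthBias = DEPTH_BIAS_D24_UNORM(-0.00001);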

Related

In ImageJ, how to set pixel value to zero for any pixel intensity greater than some arbitrary value in the whole image?

I have Z-stacks of fluorescently labelled cells.
The samples have an artefact that causes very bright regions inside the cells which are not based on my signal of interest.
Since the intensity (brightness) of these artefacts is far above my signal of interest's intensity, I want to simply zero all pixels that are above some arbitrary value I will choose.
So I want a macro that logically does something like:
For each slice:
For each pixel:
if pixel intensity>150 then set pixel=0
I am coding in the ImageJ macro language. I want to avoid using ROIs for this part because I already have ROIs representing each cell and am looping through them in my script.
I think this should be really simple, but right now my attempted solution is super cumbersome: going through thresholding, Analyze Particles, generating ROIs, selecting each ROI, and subtracting the value (e.g. 150) from each ROI.
Any idea how this can be done in a simple way?
The problem is resolved using selection and thresholding:
HotPix = 150;
Stack.getStatistics(voxelCount, mean, min, StackMax, stdDev);
setThreshold(HotPix, StackMax); // your thresholds here
for (i = 1; i <= nSlices; i++) {
    setSlice(i);
    run("Create Selection");       // select the thresholded (hot) pixels on this slice
    if (selectionType() != -1) {   // only if the slice actually contains hot pixels
        run("Set...", "value=0");  // zero them
    }
    run("Select None");
}
resetThreshold;
The solution comes from @antonis on the ImageJ forum: https://forum.image.sc/t/how-to-delete-all-pixels-or-set-to-zero-in-a-roi-which-are-above-a-certain-value/51173/5

MonoGame - Unit Conversion

I want to use my own unit "system".
Something like: 1 pixel equals 0.01 units.
Now when I want to draw something with my own unit system, I always have to multiply/divide the value by 100.
I've found some answers that mention using a matrix in SpriteBatch.Begin, but I don't know how.
Could someone help me?
SpriteBatch.Begin()'s last parameter can be a transform matrix.
Matrix TransformMatrix = Matrix.CreateScale(0.01f);
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null, TransformMatrix);
Farseer Physics provides a ConvertUnits class for this kind of thing. From memory the methods you're interested in are ToSimUnits and ToDisplayUnits. The Farseer documentation describes it like this:
// 1 meter = 64 pixels
ConvertUnits.SetDisplayUnitToSimUnitRatio(64f);
// If your object is 512 pixels wide and 128 pixels high:
float width = ConvertUnits.ToSimUnits(512f);  // 8 meters
float height = ConvertUnits.ToSimUnits(128f); // 2 meters
So the rules are:
Whenever you need to input pixels to the engine, you need to convert the number to meters first.
Whenever you take a value from the engine, and draw it on the screen, you need to convert it to pixels first.
If you follow those simple rules, you will have a stable physics simulation.
On a related topic, you should look into resolution independence.

Using shader modifiers to animate texture in SceneKit leads to jittery textures over time

I'm writing a scene in SceneKit for iOS.
I'm trying to apply a texture to an object using a sprite sheet. I iterate through the images in that sheet with this code:
happyMaterial = [SCNMaterial new];
happyMaterial.diffuse.contents = happyImage;
happyMaterial.diffuse.wrapS = SCNWrapModeRepeat;
happyMaterial.diffuse.wrapT = SCNWrapModeRepeat;
happyMaterial.shaderModifiers = @{ SCNShaderModifierEntryPointGeometry : @"_geometry.texcoords[0] = vec2((_geometry.texcoords[0].x+floor(u_time*30.0))/10.0, (_geometry.texcoords[0].y+floor(u_time*30.0/10.0))/7.0);" };
All is good. Except over time, the texture starts to get random jitteriness in it, especially along the x-axis.
Someone mentioned it could be because of "floating-point precision issues," but I'm not sure how to diagnose or fix this.
Also: I'm not sure how to log data from the shader code. Would be awesome to be able to look into variables like "u_time" and see exactly what's going on.
It's definitely a floating point precision issue. You should probably do a modulo on (u_time * 30.0) so that it loops within a reasonable range.
If you want to iterate over images, your texture coordinate must stay the same for a short period of time (1 second, for instance).
u_time is similar to CACurrentMediaTime(), it's a time in seconds.
Now let's say you have N textures. Then mod(u_time, N) will increase every second from 0 to N-1 and then go back to 0. If you divide this by N you've got your texture coordinate, and you don't need SCNWrapModeRepeat.
If you want your image to change every 0.04 second (25 times per second), then use mod(25 * u_time, N) / N.

How to use instancing offsets

Suppose I have a single buffer of instance data for 2 different groups of instances (i.e., I want to draw them in separate draw calls because they use different textures). How do I set up the offsets to accomplish this? IASetVertexBuffers and DrawIndexedInstanced both have offset parameters, and it's not clear to me which ones I need to use. Also, the DrawIndexedInstanced documentation isn't exactly clear about whether the offset value is in bytes or not.
Offsets
Those offsets work independently. You can offset either in ID3D11DeviceContext::IASetVertexBuffers or in ID3D11DeviceContext::DrawIndexedInstanced, or in both (then they combine).
ID3D11DeviceContext::IASetVertexBuffers accepts offset in bytes:
bindedData = (uint8_t*)data + sizeof(uint8_t) * offset
ID3D11DeviceContext::DrawIndexedInstanced accepts all of its offsets in elements (indices, vertices, instances), and they work independently: StartIndexLocation offsets where indices are read from the index buffer, BaseVertexLocation is added to each index before the vertex data is fetched, and StartInstanceLocation is added to the instance number before the per-instance data is fetched:
index     = indexBuffer[i + StartIndexLocation]
indexVert = index + BaseVertexLocation
indexInst = instanceID + StartInstanceLocation
I prefer offsetting in the draw call (see the sketch after this list):
no byte (pointer) arithmetic needed -- fewer chances to make a mistake
no need to re-bind the buffer if you are just changing the offset -- fewer visible state changes (and, hopefully, fewer invisible ones too)
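To make that concrete, here is a rough sketch of two draw calls sharing one instance buffer, where only StartInstanceLocation changes between them (pContext, vertexBuffer, instanceBuffer, textureA/textureB, indexCount, the group counts and the Vertex/InstanceData structs are placeholder names, not from the question; slot 1 is assumed to be declared as per-instance data in the input layout):
// Slot 0: per-vertex data, slot 1: per-instance data.
// Strides and offsets passed to IASetVertexBuffers are in bytes.
ID3D11Buffer* buffers[] = { vertexBuffer, instanceBuffer };
UINT strides[] = { sizeof(Vertex), sizeof(InstanceData) };
UINT offsets[] = { 0, 0 };  // no byte offset needed when offsetting in the draw call
pContext->IASetVertexBuffers(0, 2, buffers, strides, offsets);
// Group A: instances [0, groupACount)
pContext->PSSetShaderResources(0, 1, &textureA);
pContext->DrawIndexedInstanced(indexCount, groupACount, 0, 0, 0);
// Group B: instances [groupACount, groupACount + groupBCount),
// selected via StartInstanceLocation (counted in elements, not bytes)
pContext->PSSetShaderResources(0, 1, &textureB);
pContext->DrawIndexedInstanced(indexCount, groupBCount, 0, 0, groupACount);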
Alternative solution
Instead of splitting the rendering into two draw calls, you can merge your texture resources and draw everything in one draw call:
both textures bound in the same draw call, branching in the shader (if/else) on an integer passed via a constant buffer (simplest solution)
texture array (if target hardware supports it)
texture atlas (will need some coding, but always useful)
Hope it helps!

HLSL - How can I set sampler Min/Mag/Mip filters to disable all filtering/anti-aliasing?

I have a tex2D sampler that I want to return only precisely those colours that are present in my texture. I am using Shader Model 3, so I cannot use Load.
In the event of a texel overlapping multiple colours, I want it to pick one and have the whole texel be that colour.
I think to do this I want to disable mipmapping, or at least trilinear filtering of mips.
sampler2D gColourmapSampler : register(s0) = sampler_state {
Texture = <gColourmapTexture>; //Defined above
MinFilter = None; //Controls sampling. None, Linear, or Point.
MagFilter = None; //Controls sampling. None, Linear, or Point.
MipFilter = None; //Controls how the mips are generated. None, Linear, or Point.
//...
};
My problem is I don't really understand Min/Mag/Mip filtering, so am not sure what combination I need to set these in, or if this is even what I am after.
What a portion of my source texture looks like:
Screenshot of what the relevant area looks like after the full texture is mapped to my sphere:
The anti-aliasing/blending/filtering artefacts are clearly visible; I don't want these.
MSDN has this to say:
D3DSAMP_MAGFILTER: Magnification filter of type D3DTEXTUREFILTERTYPE
D3DSAMP_MINFILTER: Minification filter of type D3DTEXTUREFILTERTYPE.
D3DSAMP_MIPFILTER: Mipmap filter to use during minification. See D3DTEXTUREFILTERTYPE.
D3DTEXF_NONE: When used with D3DSAMP_MIPFILTER, disables mipmapping.
Another good link on understanding hlsl intrinsics.
RESOLVED
Not an HLSL issue at all! Sorry all, I seem to ask a lot of questions that are impossible to answer. Ogre was overriding the above settings. This was fixed with:
Ogre::MaterialManager::getSingleton().setDefaultTextureFiltering(Ogre::FO_NONE , Ogre::FO_NONE, Ogre::FO_NONE);
What it looks like to me is that you're getting the values from a lower-level mip-map (unfiltered) than the highest detail level you're showing.
MipFilter = None
should prevent that, unless something in the code overrides it. So look for calls to SetSamplerState.
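If it turns out something is indeed overriding you (as happened with Ogre in the resolution above), forcing the states through the D3D9 device directly would look roughly like this (a sketch for sampler stage 0; pd3dDevice is a placeholder device pointer):
// Point sampling for minification/magnification, no mip filtering at all
pd3dDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
pd3dDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
pd3dDevice->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE); // disables mipmapping per the MSDN note above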
What you have done should turn off filtering. There are two potential issues that I can think of, though:
1) The driver just ignores you and filters anyway (if this is happening there is nothing you can do).
2) You have some form of edge anti-aliasing enabled.
Looking at your resulting image, that doesn't look much like bilinear filtering to me, so I'd think you are suffering from having anti-aliasing turned on somewhere. Have you set the anti-aliasing flag when you create the device/render-texture?
If you really want just one texel, use Load instead of Sample. Load takes (as far as I know) an int2 as an argument, which specifies the actual array coordinates in the texture. Load then looks up the entry in your texture at the given array coordinates.
So just scale your float2, e.g. by using floor(float2(texCoord.x * textureWidth, texCoord.y * textureHeight)).
MSDN for load: http://msdn.microsoft.com/en-us/library/bb509694(VS.85).aspx
When using just Shader Model 3, you could use a little hack to achieve this. Again, let's assume that you know textureWidth and textureHeight.
// compute the floating point stride of one texel
float step_x = 1.0 / textureWidth;
float step_y = 1.0 / textureHeight;
// compute integer texel coordinates (the float-to-int assignment truncates)
int target_x = texCoord.x * textureWidth;
int target_y = texCoord.y * textureHeight;
// snap the coordinate to the centre of that texel, so the sampler
// returns that texel's colour rather than a blend at a texel border
float2 texCoordNew;
texCoordNew.x = (target_x + 0.5) * step_x;
texCoordNew.y = (target_y + 0.5) * step_y;
I did not test it, but I think it could work.
