Changing texture in HLSL - xna

float4x4 WVP;
texture cubeTexture;
sampler TextureSampler = sampler_state
{
    texture = <cubeTexture>;
    MipFilter = Point;
    MagFilter = Point;
    MinFilter = Point;
    AddressU = Wrap;
    AddressV = Wrap;
    MaxAnisotropy = 16;
};
So, if I'm not mistaken, this tells the sampler state which texture I'm using.
I'm using one effect file for many, many sprites, and this setup only gives me one texture to work with (an atlas).
I could combine all my texture atlases into one granddaddy atlas, but I fear the complications.
Is there a way to tell the pixel shader to use a certain texture via a parameter?
I'm new to HLSL and it's very confusing to me.

If you bind your sampler to a specific register, like so:
// HLSL
sampler TextureSampler : register(s1);
Then you can assign the texture from your game code using the GraphicsDevice.Textures property:
// C#
Texture2D texture2D = Content.Load<Texture2D>("contentfile");
graphicsDevice.Textures[1] = texture2D;
I used register 1 rather than 0 because the texture argument in SpriteBatch.Draw() uses register 0. If you aren't spritebatching, feel free to use register 0.

You can write a shader that contains several textures and samplers, but that is not very efficient. A texture atlas is much preferable, as it makes it easier to avoid branching.
If you want to use 4 different textures, I would suggest adding an extra Color to each vertex, where each channel (r, g, b, a) acts as a blend weight for one texture. You would then set the value of this extra color in C# and use it in the shader.
This is not optimal, as it does 4× the number of texture samples, but it might do the trick, depending on the situation.
float4 c1 = tex2D(texture1, texcoords) * extracolor.r;
float4 c2 = tex2D(texture2, texcoords) * extracolor.g;
float4 c3 = tex2D(texture3, texcoords) * extracolor.b;
float4 c4 = tex2D(texture4, texcoords) * extracolor.a;
output.color = c1 + c2 + c3 + c4;

Related

Integrating a metal depth buffer with scenekit rendering

I'm using Metal to render a scene with a z-buffer and now need to integrate this z-buffer into SceneKit's rendering. However, I can't figure out how to get SceneKit to use this depth buffer correctly, and I'm not even 100% sure what format SceneKit expects its z-buffers to be in.
Based on this question, my understanding was that SceneKit uses a reverse logarithmic z-buffer in the range of 1 (near) to 0 (far). However, I can't get this working, and objects I draw with SceneKit don't properly respect the depth buffer: they are either always showing or always hidden.
First, here's how I generate a z-buffer texture in a Metal render pass:
struct FragmentOut {
    float4 color [[color(0)]];
    float depth [[depth(any)]];
};

fragment FragmentOut metalRenderFragment(const InOut in [[ stage_in ]]) {
    FragmentOut out;
    out.depth = 0; // 0 is far with a reverse z-buffer
    ...
    float cameraSpaceZ = ...; // Computed in the shader
    // These constants are taken from SceneKit's camera and inlined here
    const float zNear = 0.0010000000474974513;
    const float zFar = 1000.0;
    float logDepth = log(cameraSpaceZ / zNear) / log(zFar / zNear);
    out.depth = 1.0 - logDepth; // Reverse the depth for SceneKit
    return out;
}
Then, to integrate the depth buffer into SceneKit, I render a full-screen quad in SceneKit with an SCNProgram that uses the depth texture generated in the previous step:
fragment FragmentOut sceneKitFullScreenQuadFragment(const InOut in [[ stage_in ]],
                                                    depth2d<float, access::sample> depthTexture [[texture(1)]])
{
    constexpr sampler sampler(filter::linear);
    const float depth = depthTexture.sample(sampler, in.uv);
    return {
        .color = float4(0),
        .depth = depth,
    };
}
So two questions:
What format does SceneKit use for its z-buffer? Is it a reversed logarithmic z-buffer?
What am I doing wrong in generating the z-buffer values for SceneKit?
SceneKit uses a reverse logarithmic Z-Buffer. This post and this post show you how to get a normalized linear mapping space [0...1]. You need the opposite formula.
Also, you can toggle the value from reverseZ to directZ this way:
let sceneView = self.view as! SCNView
sceneView.usesReverseZ = true // default
Andy Jazz's answer helped but I still found the links confusing. Here's what ultimately worked for me (although there are possibly other ways to do this):
When generating the depth map (this would be inside the Metal shader in my original example), pass in SceneKit's projection transform matrix and use this to transform the depth value:
// In a metal shader generating the depth map
// The z distance from the camera, e.g. if the object
// at the current position is 5 units away, this would be 5.
const float z = ...;
// The camera points along the -z axis, so transform the -z position
// with SceneKit's projection matrix (you can get this from SCNCamera)
const float4 depthPos = (sceneKitState.projectionTransform * float4(0, 0, -z, 1));
// Then do perspective division to get the final depth value
out.depth = depthPos.z / depthPos.w;
Then inside of the SceneKit shader, simply write out the depth, taking into account usesReverseZ:
// In the SceneKit full-screen quad shader
const float depth = depthTexture.sample(sampler, in.uv);
return {
    .color = float4(0),
    .depth = 1.0 - depth,
};
❗️ The above assumes you are using sceneView.usesReverseZ = true (the default). If you are using usesReverseZ = false, simply do .depth = depth instead

How to get a normal map to work using DirectX, in the pixel shader

I have the tangents and the bitangents, etc., and I think I have the math correct up until the final part in the pixel shader.
I have been looking at tutorials but can't seem to get it to work in my own program.
My normal map code looks like this so far:
float3 normalMap = (nTex.Sample(mySampler, input.UV).rgb);
normalMap = (normalMap * 2) - 1.0f;
input.tangents = normalize(input.tangents - dot(input.tangents, input.normal) * input.normal);
float3 biTang = cross(input.normal, input.tangents);
float3x3 texSpace = float3x3(input.tangents, biTang, input.normal);
//float3 normalWW = float3(normalMap.r, normalMap.g, normalMap.b);
input.normal = normalize(mul(normalMap, texSpace));
float3 plDir = float3(pointLightDir.rgb);
//float final = saturate(dot(-plDir, input.normal));
//float4 finalW = saturate(float4(combined * final));
output.Color = color * combined;
output.Color += saturate((dot(-plDir, input.normal) * combined) * color);
output.Color.a = 1;
I haven't named things in a good way yet.
"combined" is my phong shading, I have diffuse, specular and ambient added together. "color" is just the sampled diffuse texture.
The result is no different than if I only had the first texture. if I only use the normal map it's displayed so that works.
I'm in directx in c++.

Varying Line Width with OpenGL using GL_POINTS (iOS)

I'm making a drawing application using Swift (based on GLPaint) and OpenGL. Now I would like to improve the curve so that its width varies with stroke speed (e.g. thicker if drawing fast).
However, since my knowledge of OpenGL is quite limited, I need some guidance. What I want to do is vary the size of my texture/point for each CGPoint I calculate and add to the screen. Is this possible?
func addQuadBezier(var from: CGPoint, var ctrl: CGPoint, var to: CGPoint, startTime: CGFloat, endTime: CGFloat) {
    scalePoints(from: from, ctrl: ctrl, to: to)
    let pointCount = calculatePointsNeeded(from: from, to: to, min: 16.0, max: 256.0)
    var vertexBuffer: [GLfloat] = [GLfloat](count: Int(pointCount), repeatedValue: 0.0)
    var t: CGFloat = startTime + 0.0002
    for i in 0..<Int(pointCount) {
        let p = calculatePoint(from: from, ctrl: ctrl, to: to)
        vertexBuffer.insert(p.x.f, atIndex: i*2)
        vertexBuffer.insert(p.y.f, atIndex: i*2+1)
        t += (CGFloat(1)/CGFloat(pointCount))
    }
    glBufferData(GL_ARRAY_BUFFER.ui, Int(pointCount)*2*sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW.ui)
    glDrawArrays(GL_POINTS.ui, 0, Int(pointCount).i)
}

func render()
{
    context.presentRenderbuffer(GL_RENDERBUFFER.l)
}
where render() is called every 1/60 s.
shader
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
void main()
{
    gl_Position = MVP * inVertex;
    gl_PointSize = pointSize;
    color = vertexColor;
}
Thanks in advance!
In your vertex shader, set gl_PointSize to the width you want. That measurement is in framebuffer pixels, so if the size of your framebuffer changes with the device's scale factor, you'll need to adjust your point size appropriately.
If you can find a way to control the line width in the vertex shader, that would most likely be the best solution. Not only could different lines have different widths, but even a single line could have a varying (interpolated) width between the points. I am not sure you will be able to achieve this on your platform, though.
So if you do find a way, you would add the point size to your buffer and use it through a new attribute in the vertex shader.
If not, you will need to use triangles to draw the line, which is generally better practice anyway. To define the vertices between points A and B, take the direction W = (B - A).normalized() and the 2D normal N = (W.y, -W.x). Then, with k = lineWidth/2.0, the four positions are t1 = A + N*k, t2 = A - N*k, t3 = B + N*k, t4 = B - N*k. This is what you add into your buffer and draw as a triangle strip or as triangles, depending on what you are looking for.
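For illustration, here is that construction as a small standalone sketch. It's plain C++ purely to show the math (the Vec2 type and helper names are placeholders, not from your project); the same few lines translate directly to Swift:
#include <cmath>

struct Vec2 { float x, y; };

static Vec2 add(Vec2 a, Vec2 b)  { return { a.x + b.x, a.y + b.y }; }
static Vec2 sub(Vec2 a, Vec2 b)  { return { a.x - b.x, a.y - b.y }; }
static Vec2 mul(Vec2 a, float s) { return { a.x * s, a.y * s }; }

static Vec2 normalized(Vec2 a)
{
    float len = std::sqrt(a.x * a.x + a.y * a.y);
    return { a.x / len, a.y / len };
}

// Expands the segment A -> B into the four corners of a quad of width lineWidth.
// Draw the result (t1, t2, t3, t4) as a triangle strip.
void segmentToQuad(Vec2 A, Vec2 B, float lineWidth, Vec2 out[4])
{
    Vec2 W = normalized(sub(B, A)); // direction along the segment
    Vec2 N = { W.y, -W.x };         // perpendicular to it
    float k = lineWidth / 2.0f;

    out[0] = add(A, mul(N, k)); // t1 = A + N*k
    out[1] = sub(A, mul(N, k)); // t2 = A - N*k
    out[2] = add(B, mul(N, k)); // t3 = B + N*k
    out[3] = sub(B, mul(N, k)); // t4 = B - N*k
}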

How can I repeat my texture in DX

There is a handy feature in the three.js 3D library where you can set the sampler to repeat mode and set the repeat attribute to values you like; for example, (3, 5) means the texture will repeat 3 times horizontally and 5 times vertically. But now I'm using DirectX and I cannot find a good solution for this problem. Note that the UV coordinates of the vertices still range from 0 to 1, and I don't want to change my HLSL code because I want a programmable solution, thanks very much!
Edit: Presume I already have a cube model, and the texture coordinates of its vertices are between 0 and 1. Using wrap mode or clamp mode for sampling textures works fine. But if I want to repeat a texture on one of its faces, I first need to switch to wrap mode; that much I already know. Then I would have to edit my model so that the texture coordinates range from 0 to 3. What if I don't want to change my model? So far I've come up with one way: add a variable to the pixel shader representing how many times the map repeats, and multiply the coordinate by this factor when sampling. Not a graceful solution, I think…
Since you've edited your question, there is another answer to your problem:
From what I understood, you have a face with UVs like so:
0,1           1,1
 -------------
 |           |
 |           |
 |           |
 -------------
0,0           1,0
But you want the texture repeated 3 times (for example) instead of 1 time.
(Without changing the original model)
Multiple solutions here:
You could do it when updating your buffers (if you do that anyway):
D3D11_MAPPED_SUBRESOURCE resource;
HRESULT hResult = D3DDeviceContext->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
if (hResult != S_OK) return false;

YourVertexFormat* ptr = (YourVertexFormat*)resource.pData;
for (int i = 0; i < vertexCount; i++)
{
    ptr[i] = vertices[i];
    ptr[i].uv.x *= multiplyX; // in your case 3
    ptr[i].uv.y *= multiplyY; // in your case 5
}
D3DDeviceContext->Unmap(vertexBuffer, 0);
But if you don't need to update the buffer anyway, I wouldn't recommend this approach, because it is terribly slow.
A faster way is to use the vertex shader:
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

struct VertexInputType
{
    float4 position : POSITION0;
    float2 uv : TEXCOORD0;
    // ...
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 uv : TEXCOORD0;
    // ...
};

PixelInputType main(VertexInputType input)
{
    input.position.w = 1.0f;

    PixelInputType output;
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);
This is what you basically need:
    output.uv = input.uv * 3; // repeat 3x3
Or more advanced:
    output.uv = float2(input.uv.x * 3, input.uv.y * 5); // repeat 3x horizontally, 5x vertically
    // ...
    return output;
}
I would recommend the vertex shader solution, because it's fast, and in DirectX you use vertex shaders anyway, so it's not as expensive as the buffer update solution...
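If you also want the repeat factors configurable from C++ instead of hard-coding the 3 and 5, you could feed them in through a small constant buffer. A rough sketch of the C++ side (RepeatBuffer, device and context are placeholder names, not from the code above):
// CPU-side struct mirroring the constant buffer that holds the repeat factors
struct RepeatBuffer
{
    float repeatX;
    float repeatY;
    float padding[2]; // constant buffers must be a multiple of 16 bytes
};

RepeatBuffer repeatData = { 3.0f, 5.0f, { 0.0f, 0.0f } };

D3D11_BUFFER_DESC cbDesc = {};
cbDesc.Usage = D3D11_USAGE_DEFAULT;
cbDesc.ByteWidth = sizeof(RepeatBuffer);
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = &repeatData;

ID3D11Buffer* repeatCB = nullptr;
HRESULT hr = device->CreateBuffer(&cbDesc, &initData, &repeatCB);
if (FAILED(hr)) { /* handle the error */ }

// Slot 1 so it doesn't collide with the MatrixBuffer above (slot 0)
context->VSSetConstantBuffers(1, 1, &repeatCB);
In the vertex shader you would then declare a matching cbuffer (in register b1) and write output.uv = input.uv * float2(repeatX, repeatY); instead of the hard-coded factor.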
Hope that helped solving your problems :)
You basically want to create a sampler state like so:
ID3D11SamplerState* m_sampleState;
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
result = ifDEVICE->ifDX11->getD3DDevice()->CreateSamplerState(&samplerDesc, &m_sampleState);
And when you are setting your shader constants, call this:
ifDEVICE->ifDX11->getD3DDeviceContext()->PSSetSamplers(0, 1, &m_sampleState);
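The sampler alone isn't enough, by the way: the pixel shader below also needs the texture's shader resource view bound. Assuming m_textureSRV is the ID3D11ShaderResourceView* you created for your texture (a placeholder name here), that would look like:
ifDEVICE->ifDX11->getD3DDeviceContext()->PSSetShaderResources(0, 1, &m_textureSRV);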
Then you can write your pixel shaders like this:
Texture2D shaderTexture;
SamplerState SampleType;
...
float4 main(PixelInputType input) : SV_TARGET
{
    float4 textureColor = shaderTexture.Sample(SampleType, input.uv);
    ...
}
Hope that helps...

Direct3D 11 not rasterizing any vertices

I'm trying to render a simple triangle on screen using Direct3D 11, but nothing shows up. Here are my vertices:
SimpleVertex vertices[3] =
{
    { XMFLOAT3( -1.0f, -1.0f, 0.0f ) },
    { XMFLOAT3(  1.0f, -1.0f, 0.0f ) },
    { XMFLOAT3( -1.0f,  1.0f, 0.0f ) },
};
The expected output is a triangle with one point in the top left corner of the screen, one point in the top right corner of the screen, and one point in the bottom left corner of the screen. However, nothing is being rendered anywhere.
I'm not performing any matrix transformations, and the vertex shader just passes the input directly to the output. Everything seems to be set up correctly, and when I use the graphics debugger in Visual Studio 2012, the correct vertex position is being passed to the vertex shader. However, it skips directly from the vertex shader stage to the output merger stage in the pipeline. I assume this means that nothing is being sent to the pixel shader, which would again mean that the vertices are being discarded in the rasterizer stage. Why is this happening?
Here is my rasterizer state:
D3D11_RASTERIZER_DESC rasterizerDesc;
rasterizerDesc.AntialiasedLineEnable = false;
rasterizerDesc.CullMode = D3D11_CULL_NONE;
rasterizerDesc.DepthBias = 0;
rasterizerDesc.DepthBiasClamp = 0.0f;
rasterizerDesc.DepthClipEnable = true;
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.FrontCounterClockwise = false;
rasterizerDesc.MultisampleEnable = false;
rasterizerDesc.ScissorEnable = false;
rasterizerDesc.SlopeScaledDepthBias = 0.0f;
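For reference, the descriptor only affects the pipeline once it is created as a state object and bound; a minimal sketch, assuming device and context are the ID3D11Device and immediate ID3D11DeviceContext:
ID3D11RasterizerState* rasterizerState = nullptr;
HRESULT hr = device->CreateRasterizerState(&rasterizerDesc, &rasterizerState);
if (SUCCEEDED(hr))
{
    context->RSSetState(rasterizerState); // bind before drawing
}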
And my viewport (width/height are the window client area matching my back buffer, which are set to 1024x576 in my test setup):
D3D11_VIEWPORT viewport;
viewport.Height = static_cast< float >( height );
viewport.MaxDepth = 1.0f;
viewport.MinDepth = 0.0f;
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
viewport.Width = static_cast< float >( width );
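Likewise, the viewport only takes effect once it is handed to the rasterizer stage; a one-line sketch, again assuming context is the immediate context:
context->RSSetViewports(1, &viewport);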
Can anyone see what is making the rasterizer stage drop my vertices? Or are there any other parts of my D3D setup that could be causing this?
I found this on the internet. It took absolutely ages to load, so I copied and pasted it; I have highlighted an interesting point in bold.
The D3D_OVERLOADS constructors defined in row 11 offer a convenient way for C++ programmers to create transformed and lit vertices with D3DTLVERTEX.
_D3DTLVERTEX(const D3DVECTOR& v, float _rhw, D3DCOLOR _color,
             D3DCOLOR _specular, float _tu, float _tv)
{
    sx = v.x;
    sy = v.y;
    sz = v.z;
    rhw = _rhw;
    color = _color;
    specular = _specular;
    tu = _tu;
    tv = _tv;
}
The system requires a vertex position that has already been transformed. So the x and y values must be in screen coordinates, and z must be the depth value of the pixel, which could be used in a z-buffer (we won't use a z-buffer here). Z values can range from 0.0 to 1.0, where 0.0 is the closest possible position to the viewer, and 1.0 is the farthest position still visible within the viewing area. Immediately following the position, transformed and lit vertices must include an RHW (reciprocal of homogeneous W) value.
Before rasterizing the vertices, they have to be converted from homogeneous vertices to non-homogeneous vertices, because the rasterizer expects them this way. Direct3D converts the homogeneous vertices to non-homogeneous vertices by dividing the x-, y-, and z-coordinates by the w-coordinate, and produces an RHW value by inverting the w-coordinate. This is only done for vertices which are transformed and lit by Direct3D.
The RHW value is used in multiple ways: for calculating fog, for performing perspective-correct texture mapping, and for w-buffering (an alternate form of depth buffering).
With D3D_OVERLOADS defined, D3DVECTOR is declared as
_D3DVECTOR(D3DVALUE _x, D3DVALUE _y, D3DVALUE _z);
D3DVALUE is the fundamental Direct3D fractional data type. It's declared in d3dtypes.h as
typedef float D3DVALUE, *LPD3DVALUE;
The source shows that the x and y values for the D3DVECTOR are always 0.0f (this will be changed in InitDeviceObjects()). rhw is always 0.5f, color is 0xfffffff and specular is set to 0. Only the tu1 and tv1 values differ between the four vertices. These are the coordinates of the background texture.
In order to map texels onto primitives, Direct3D requires a uniform address range for all texels in all textures. Therefore, it uses a generic addressing scheme in which all texel addresses are in the range of 0.0 to 1.0 inclusive.
If, instead, you decide to assign texture coordinates to make Direct3D use the bottom half of the texture, the texture coordinates your application would assign to the vertices of the primitive in this example are (0.0,0.0), (1.0,0.0), (1.0,0.5), and (0.0,0.5). Direct3D will apply the bottom half of the texture as the background.
Note: By assigning texture coordinates outside that range, you can create certain special texturing effects.
You will find the declaration of D3DTextr_CreateTextureFromFile() in the Framework source in d3dtextr.cpp. It creates a local bitmap from a passed file. Textures could be created from *.bmp and *.tga files. Textures are managed in the framework in a linked list, which holds the info per texture, called texture container.
struct TextureContainer
{
    TextureContainer* m_pNext; // Linked list ptr
    TCHAR m_strName[80];       // Name of texture (doubles as image filename)
    DWORD m_dwWidth;
    DWORD m_dwHeight;
    DWORD m_dwStage;           // Texture stage (for multitexture devices)
    DWORD m_dwBPP;
    DWORD m_dwFlags;
    BOOL  m_bHasAlpha;
    LPDIRECTDRAWSURFACE7 m_pddsSurface; // Surface of the texture
    HBITMAP m_hbmBitmap;                // Bitmap containing texture image
    DWORD* m_pRGBAData;

public:
    HRESULT LoadImageData();
    HRESULT LoadBitmapFile( TCHAR* strPathname );
    HRESULT LoadTargaFile( TCHAR* strPathname );
    HRESULT Restore( LPDIRECT3DDEVICE7 pd3dDevice );
    HRESULT CopyBitmapToSurface();
    HRESULT CopyRGBADataToSurface();

    TextureContainer( TCHAR* strName, DWORD dwStage, DWORD dwFlags );
    ~TextureContainer();
};
The problem was actually in my rendering logic. I set the stride of the vertex buffer to 0 instead of the size of my vertex struct. Changed that, and it renders just fine!
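For anyone hitting the same thing: the stride in question is the one passed to IASetVertexBuffers. A minimal sketch of the corrected call (deviceContext and vertexBuffer are placeholder names):
UINT stride = sizeof(SimpleVertex); // was 0 before the fix
UINT offset = 0;
deviceContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);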
