How to implement a "render-to-texture" pipeline in Direct3D 11?

I am trying to do some GPGPU calculations using HLSL and Direct3D 11.
My goal is to calculate several images sequentially (one rendering per image), then sum them.
To achieve this I created three textures: one (texture) is the input, and two (texture2 and texture3) accumulate the results. The method I am trying: first I render to texture2 (using texture and texture3 as inputs), then I render to texture3 with a slightly modified pixel shader (using texture and texture2 as inputs). After this, I dump the textures to PNG files. My problem is that somehow the second pixel shader doesn't get the result of the first shader (texture2), but the original, empty texture. I created texture, texture2, and texture3 with usage D3D11_USAGE_DEFAULT and the bind flags D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE.
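(For reference, a texture with those flags is created roughly like this; the width, height, and float format below are placeholders rather than my actual values.)
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = texWidth;                 // placeholder
desc.Height = texHeight;               // placeholder
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;  // assumed float format for accumulating sums
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D *texture2 = NULL;
device->CreateTexture2D(&desc, NULL, &texture2);
// One view to render into the texture, one to sample from it.
ID3D11RenderTargetView   *targetViewTexture2 = NULL;
ID3D11ShaderResourceView *srvTexture2 = NULL;
device->CreateRenderTargetView(texture2, NULL, &targetViewTexture2);
device->CreateShaderResourceView(texture2, NULL, &srvTexture2);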
My HLSL shader (shader1):
float4 main(PS_INPUT input) : SV_TARGET
{
...
float4 val5 = 0.25F * (val + val2 + val3 + val4) + tex3.Sample(sam, input.Tex4);
return val5;
}
The other shader (shader2):
float4 main(PS_INPUT input) : SV_TARGET
{
...
float4 val5 = 0.25F * (val + val2 + val3 + val4) + tex2.Sample(sam, input.Tex4);
return val5;
}
The code that performs the rendering and the dumping of the textures:
// First pass: render into texture2 with ps (reads texture and texture3)
context->OMSetRenderTargets(1, &targetViewTexture2, NULL);
float z = 280.0F;
GenMatrix(z);
context->VSSetShader(vs, NULL, 0);
context->PSSetShader(ps, NULL, 0);
context->PSSetSamplers(0, 1, &state);
context->VSSetConstantBuffers(0, 1, &cbuf);
context->Draw(6, 0);
swapChain->Present(0, 0);
// Second pass: render into texture3 with ps2 (reads texture and texture2)
context->OMSetRenderTargets(1, &targetViewTexture3, NULL);
z = 0.0F;
GenMatrix(z);
context->VSSetShader(vs, NULL, 0);
context->PSSetShader(ps2, NULL, 0);
context->PSSetSamplers(0, 1, &state);
context->VSSetConstantBuffers(0, 1, &cbuf);
context->Draw(6, 0);
swapChain->Present(0, 0);
// Dump both result textures to PNG via DirectXTex
DirectX::ScratchImage im;
DirectX::CaptureTexture(device, context, texture3, im);
const DirectX::Image *realImage = im.GetImage(0, 0, 0);
HRESULT hr2 = DirectX::SaveToWICFile(*realImage, DirectX::WIC_FLAGS_NONE, GUID_ContainerFormatPng, L"output_tex3.png", NULL);
DirectX::CaptureTexture(device, context, texture2, im);
realImage = im.GetImage(0, 0, 0);
hr2 = DirectX::SaveToWICFile(*realImage, DirectX::WIC_FLAGS_NONE, GUID_ContainerFormatPng, L"output_tex2.png", NULL);
The output of the first pass (ps) is correct, but the second is not. If I reduce the second shader to only:
float4 val5 = tex2.Sample(sam, input.Tex4);
I get an empty image. Am I missing something? Do I have to call some methods to use the texture in the second pixel shader? I think I pinpointed the relevant sections of my large code base, but if you need more information just ask for it in the comment section.

The problem was that I used the same texture as both input and output. A texture can be bound either as a shader input or as a render target within a single pass, but not as both at once.
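A sketch of how the two passes can be arranged so that no texture is bound as a shader resource and a render target at the same time (the SRV names and the t0/t1 slot assignments below are illustrative, not from my actual code):
// Pass 1: read texture (t0) and texture3 (t1), write texture2.
ID3D11ShaderResourceView *pass1Inputs[2] = { srvTexture, srvTexture3 };
context->PSSetShaderResources(0, 2, pass1Inputs);
context->OMSetRenderTargets(1, &targetViewTexture2, NULL);
context->PSSetShader(ps, NULL, 0);
context->Draw(6, 0);
// Unbind the SRVs before texture2 becomes an input and texture3 a render target;
// otherwise D3D11 silently nulls out the conflicting binding.
ID3D11ShaderResourceView *nullSRVs[2] = { NULL, NULL };
context->PSSetShaderResources(0, 2, nullSRVs);
// Pass 2: read texture (t0) and texture2 (t1), write texture3.
ID3D11ShaderResourceView *pass2Inputs[2] = { srvTexture, srvTexture2 };
context->PSSetShaderResources(0, 2, pass2Inputs);
context->OMSetRenderTargets(1, &targetViewTexture3, NULL);
context->PSSetShader(ps2, NULL, 0);
context->Draw(6, 0);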

Related

Metal Shader Function that can deal with both RGB and YUV textures

I'm trying to teach myself the basics of computer graphics on the iPhone and Apple's Metal API. I'm trying to do something pretty basic, but I'm getting a little stuck.
What I want to do is just "texture a quad". Basically, I make a rectangle and I have an image texture that covers the rectangle. I can make that work for the basic case where the image texture just comes from an image of a known format, but I'm having trouble figuring out how to make my code a little more generic and able to handle different formats.
For example, sometimes the image texture comes from an image file which, after decoding, gives pixel data in the RGB format. Sometimes, my image texture actually comes from a video frame where the data is stored in the YUV format.
Ideally, I'd want to create some sort of "sampler" object or function that can just hand me back an RGB color for a particular texture coordinate. In the code where I prepare for rendering, that's the part with context on which format is getting used, and so it would have enough information to figure out which type of sampler should get used. For example, in the video frame case, it knows that it's working with a video frame and so it creates a YUV sampler and passes it the relevant data. And then from my shader code that just wants to read colors, it can just ask for the color at some particular coordinates, and the YUV sampler would do the proper work to compute the right RGB color. If I passed in an RGB sampler instead, it would just read the RGB data without doing any sort of calculations.
I thought this would be really simple to do? I feel like this has to be a common problem for graphics code that deals with textures in different formats, or colorspaces, or whatever? Am I missing something obvious?
How do you do this without writing a bunch of versions of all of your shaders?
Here are functions for transforming RGBA to YUVA and vice versa on the fly.
float4 rgba2yuva(float4 rgba)
{
float4 yuva = float4(0.0);
yuva.x = rgba.r * 0.299 + rgba.g * 0.587 + rgba.b * 0.114;
yuva.y = rgba.r * -0.169 + rgba.g * -0.331 + rgba.b * 0.5 + 0.5;
yuva.z = rgba.r * 0.5 + rgba.g * -0.419 + rgba.b * -0.081 + 0.5;
yuva.w = rgba.a;
return yuva;
}
float4 yuva2rgba(float4 yuva)
{
float4 rgba = float4(0.0);
rgba.r = yuva.x * 1.0 + yuva.y * 0.0 + yuva.z * 1.4;
rgba.g = yuva.x * 1.0 + yuva.y * -0.343 + yuva.z * -0.711;
rgba.b = yuva.x * 1.0 + yuva.y * 1.765 + yuva.z * 0.0;
rgba.a = yuva.a;
return rgba;
}
I adapted the code from here: https://github.com/libretro/glsl-shaders/blob/master/nnedi3/shaders/
Simple OpenGL shaders are quite straightforward to port to Metal. I pretty much just changed the datatype vec4 to float4. If you want a half-precision version, just replace float4 with half4.
Metal shader function for ARKit; now you can use @Jeshua Lacock's functions above to convert between the two.
// tweak your color offsets as desired
#include <metal_stdlib>
using namespace metal;
kernel void YUVColorConversion(texture2d<uint, access::read> yTexture [[texture(0)]],
texture2d<uint, access::read> uTexture [[texture(1)]],
texture2d<uint, access::read> vTexture [[texture(2)]],
texture2d<float, access::write> outTexture [[texture(3)]],
uint2 gid [[thread_position_in_grid]])
{
float3 colorOffset = float3(0, -0.5, -0.5);
float3x3 colorMatrix = float3x3(
float3(1, 1, 1),
float3(0, -0.344, 1.770),
float3(1.403, -0.714, 0)
);
uint2 uvCoords = uint2(gid.x / 2, gid.y / 2);
float y = yTexture.read(gid).r / 255.0;
float u = uTexture.read(uvCoords).r / 255.0;
float v = vTexture.read(uvCoords).r / 255.0;
float3 yuv = float3(y, u, v);
float3 rgb = colorMatrix * (yuv + colorOffset);
outTexture.write(float4(float3(rgb), 1.0), gid);
}
Good reference here, and then you can build pipelines or variants for processing specifically what you need, like here:
#include <metal_stdlib>
#include <simd/simd.h>
#include <metal_texture>
#include <metal_matrix>
#include <metal_geometric>
#include <metal_math>
#include <metal_graphics>
#include "AAPLShaderTypes.h"
using namespace metal;
// Variables in constant address space.
constant float3 lightPosition = float3(0.0, 1.0, -1.0);
// Per-vertex input structure
struct VertexInput {
float3 position [[attribute(AAPLVertexAttributePosition)]];
float3 normal [[attribute(AAPLVertexAttributeNormal)]];
half2 texcoord [[attribute(AAPLVertexAttributeTexcoord)]];
};
// Per-vertex output and per-fragment input
typedef struct {
float4 position [[position]];
half2 texcoord;
half4 color;
} ShaderInOut;
// Vertex shader function
vertex ShaderInOut vertexLight(VertexInput in [[stage_in]],
constant AAPLFrameUniforms& frameUniforms [[ buffer(AAPLFrameUniformBuffer) ]],
constant AAPLMaterialUniforms& materialUniforms [[ buffer(AAPLMaterialUniformBuffer) ]]) {
ShaderInOut out;
// Vertex projection and translation
float4 in_position = float4(in.position, 1.0);
out.position = frameUniforms.projectionView * in_position;
// Per vertex lighting calculations
float4 eye_normal = normalize(frameUniforms.normal * float4(in.normal, 0.0));
float n_dot_l = dot(eye_normal.rgb, normalize(lightPosition));
n_dot_l = fmax(0.0, n_dot_l);
out.color = half4(materialUniforms.emissiveColor + n_dot_l);
// Pass through texture coordinate
out.texcoord = in.texcoord;
return out;
}
// Fragment shader function
fragment half4 fragmentLight(ShaderInOut in [[stage_in]],
texture2d<half> diffuseTexture [[ texture(AAPLDiffuseTextureIndex) ]]) {
constexpr sampler defaultSampler;
// Blend texture color with input color and output to framebuffer
half4 color = diffuseTexture.sample(defaultSampler, float2(in.texcoord)) * in.color;
return color;
}

Texture sampler in HLSL does not interpolate

I am currently working on a multi-textured terrain and I have problems with the Sample function of Texture2DArray.
In my example, I use a Texture2DArray to store a set of different terrain textures, e.g. grass, sand, asphalt, etc. Each of my vertices stores a texture coordinate (UV coordinate) and an index of the texture I want to use. So, if my index is 0, I use the first texture; if the index is 1, I use the second texture, and so on. This works fine as long as my index is a whole number (0, 1, ...). However, it fails if the index is a fractional number (like 1.5f).
In order to look for the problem, I reduced my entire pixel shader to this:
Texture2DArray DiffuseTextures : register(t0);
Texture2DArray NormalTextures : register(t1);
Texture2DArray EmissiveTextures : register(t2);
Texture2DArray SpecularTextures : register(t3);
SamplerState Sampler : register(s0);
struct PS_IN
{
float4 pos : SV_POSITION;
float3 nor : NORMAL;
float3 tan : TANGENT;
float3 bin : BINORMAL;
float4 col : COLOR;
float4 TextureIndices : COLOR1;
float4 tra : COLOR2;
float2 TextureUV : TEXCOORD0;
};
float4 PS(PS_IN input) : SV_Target
{
float4 texCol = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, input.TextureIndices.r));
return texCol;
}
The following image shows the result of a sample scene on the left side. As you can see, there is a hard border between the used textures. There is no form of interpolation.
In order to check my texture indices, I changed my pixel shader from above by returning the texture indices as a color:
return float4(input.TextureIndices.r, input.TextureIndices.r, input.TextureIndices.r, 1.0f);
The result can be seen on the right side of the image. The texture indices are correct, since they range in the interval [0, 1] and you can clearly see the interpolation at the border of the area. However, my sampled texture does not show any form of interpolation.
Since my pixel shader is pretty simple, I wonder what causes this behaviour? Is there any setting in DirectX responsible for this?
I use DirectX 11, pixel shader ps_5_0 (I also tested with ps_4_0) and I use DDS textures (BC3 compression).
Edit
This is the sampler I am using:
SharpDX.Direct3D11.SamplerStateDescription samplerStateDescription = new SharpDX.Direct3D11.SamplerStateDescription()
{
AddressU = SharpDX.Direct3D11.TextureAddressMode.Wrap,
AddressV = SharpDX.Direct3D11.TextureAddressMode.Wrap,
AddressW = SharpDX.Direct3D11.TextureAddressMode.Wrap,
Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear
};
SharpDX.Direct3D11.SamplerState samplerState = new SharpDX.Direct3D11.SamplerState(_device, samplerStateDescription);
_deviceContext.PixelShader.SetSampler(0, samplerState);
Solution
I made a function using the code presented by catflier for getting a texture color:
float4 GetTextureColor(Texture2DArray textureArray, float2 textureUV, float textureIndex)
{
float tid = textureIndex;
int id = (int)tid;
float l = frac(tid);
float4 texCol1 = textureArray.Sample(Sampler, float3(textureUV, id));
float4 texCol2 = textureArray.Sample(Sampler, float3(textureUV, id + 1));
return lerp(texCol1, texCol2, l);
}
This way, I can get the desired texture color for all texture types (diffuse, specular, emissive, ...) with a simple function call:
float4 texCol = GetTextureColor(DiffuseTextures, input.TextureUV, input.TextureIndices.r);
float4 bumpMap = GetTextureColor(NormalTextures, input.TextureUV, input.TextureIndices.g);
float4 emiCol = GetTextureColor(EmissiveTextures, input.TextureUV, input.TextureIndices.b);
float4 speCol = GetTextureColor(SpecularTextures, input.TextureUV, input.TextureIndices.a);
The result is as smooth as I wanted it to be. :-)
Texture arrays do not sample across slices, so technically this is the expected result.
If you want to interpolate between slices (e.g. 1.5f gives you "half" of the second texture and "half" of the third texture), you can use a Texture3D instead, which allows this (but it will cost a bit more, as it performs trilinear filtering).
Otherwise, you can perform your sampling this way:
float4 PS(PS_IN input) : SV_Target
{
float tid = input.TextureIndices.r;
int id = (int)tid;
float l = frac(tid); //lerp amount
float4 texCol1 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV,id));
float4 texCol2 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV,id+1));
return lerp(texCol1,texCol2, l);
}
Please note that this technique is also more flexible, since you can provide non-adjacent slices as input (so you can lerp between slice 2 and slice 23, for example), and even use a different blend mode by replacing lerp with some other function.
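If you go the Texture3D route mentioned above, the host-side resource could be created roughly like this in native D3D11 terms (a sketch; the sizes, the uncompressed format, and initData are placeholders, and note the question actually uses SharpDX and BC3-compressed DDS textures):
// Sketch: a Texture3D whose depth slices take the place of the array slices,
// so the hardware can also filter along the W (slice) axis.
D3D11_TEXTURE3D_DESC desc = {};
desc.Width = width;            // placeholder
desc.Height = height;          // placeholder
desc.Depth = sliceCount;       // one depth slice per terrain texture
desc.MipLevels = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;   // uncompressed format, for simplicity
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture3D *terrainVolume = NULL;
device->CreateTexture3D(&desc, initData, &terrainVolume);
When sampling such a volume, the slice index maps to a W coordinate of roughly (index + 0.5) / sliceCount, so neighbouring slices get blended by the trilinear filter.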

Sampling a texture within vertex shader?

I'm using DirectX 11 targeting Shader Model 5 (well, actually using SharpDX for the DirectX 11.2 build) and I'm at a loss as to what is wrong with this simple shader I'm writing.
The shader is applied to a flat, high-poly plane (so there are plenty of vertices to displace); the texture is sampled without issues (the pixel shader displays it fine), however the displacement in the vertex shader just doesn't work.
It's not an issue with the displacement itself: replacing the += height.SampleLevel with += 0.5 shows all vertices displaced. It's not an issue with the sampling either, since the very same code works in the pixel shader. And as far as I understand it's not an API usage issue, since I hear SampleLevel (unlike Sample) is perfectly usable in the VS (because you provide the LOD level explicitly).
Using a noise function to displace instead of a texture also works just fine, which leads me to think the issue is with the texture sampling, and only inside the VS, which is odd since I'm using a function that is supposedly VS-compatible.
I've been trying random things for the past hour and I'm really clueless as to where to look; I also find almost no information about displacement mapping in HLSL, and even less for DX11+, to use as a reference.
struct VS_IN
{
float4 pos : POSITION;
float2 tex : TEXCOORD;
};
struct PS_IN
{
float4 pos : SV_POSITION;
float2 tex : TEXCOORD;
};
float4x4 worldViewProj;
Texture2D<float4> diffuse: register(t0);
Texture2D<float4> height: register(t1);
Texture2D<float4> lightmap: register(t2);
SamplerState pictureSampler;
PS_IN VS( VS_IN input )
{
PS_IN output = (PS_IN) 0;
input.pos.z += height.SampleLevel(pictureSampler, input.tex, 0).r;
output.pos = mul(input.pos, worldViewProj);
output.tex = input.tex;
return output;
}
float4 PS( PS_IN input ) : SV_Target
{
return height.SampleLevel(pictureSampler, input.tex, 0).rrrr;
}
For reference in case it matters:
Texture description:
new SharpDX.Direct3D11.Texture2DDescription()
{
Width = bitmapSource.Size.Width,
Height = bitmapSource.Size.Height,
ArraySize = 1,
BindFlags = SharpDX.Direct3D11.BindFlags.ShaderResource ,
Usage = SharpDX.Direct3D11.ResourceUsage.Immutable,
CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None,
Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
MipLevels = 1,
OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None,
SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
};
Sampler description:
var sampler = new SamplerState(device, new SamplerStateDescription()
{
Filter = Filter.MinMagMipLinear,
AddressU = TextureAddressMode.Clamp,
AddressV = TextureAddressMode.Clamp,
AddressW = TextureAddressMode.Clamp,
BorderColor = Color.Pink,
ComparisonFunction = Comparison.Never,
MaximumAnisotropy = 16,
MipLodBias = 0,
MinimumLod = 0,
MaximumLod = 16
});
So the answer was that Ronan wasn't binding the shader resource view to the vertex shader stage, so the vertex shader couldn't access the texture and read the sampled height values.
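In native D3D11 terms the missing piece is binding the height map's SRV to the vertex shader stage (the SRV name below is illustrative; the question's own code uses SharpDX, where the equivalent call is context.VertexShader.SetShaderResource(1, heightSRV)):
// The height texture lives in register(t1), so its SRV must be bound to slot 1 of the VS stage.
context->VSSetShaderResources(1, 1, &heightSRV);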

Pixel Shader can access Structured Buffer while Vertex Shader can't - is that a DirectX spec?

==================== EDIT: the solution =====================
I have finally found the problem, and since the answer might be important for beginners who are learning DirectX, I'm posting it here. (I'm using F#, with SharpDX as the .NET wrapper for DirectX.)
In my program the resources (buffers, views etc.) only change when the swap chain is resized. So I put all resource allocations (IA, OM, VS, PS) into a function switchTo2DLayout. switchTo2DLayout returns immediately (without doing anything) if the swap chain was not resized. This is controlled by a flag.
Later on I noticed that this flag was never reset, so the resource assignments were done before every draw call. I corrected this mistake, but then the image was rendered only on the first call to renderPixels. It turned out that I have to set the ShaderResourceView every time before the draw call.
let renderPixels () =
    switchTo2DLayout()
    // this line was missing:
    context.PixelShader.SetShaderResource(COUNT_COLOR_SLOT, countColorSRV_2D)
    context.ClearRenderTargetView (renderTargetView2D, Color.White.ToColor4())
    context.Draw(4, 0)
    swapChain.Present(1, PresentFlags.None)
This was completely unexpected for me. The books about DirectX that I use never state explicitly which resources can be set once (as long as the setting is not changed), and which have to be set on every draw call.
For the mesh rendering I use a similar setup (here without the bug I mentioned), and again the equivalent line was missing:
let draw3D() =
    switchTo3DLayout()
    // this line was missing:
    context.VertexShader.SetShaderResource(TRIANGLE_SLOT, triangleSRV)
    context.ClearDepthStencilView(depthView3D, DepthStencilClearFlags.Depth, 1.0f, 0uy)
    context.ClearRenderTargetView(renderTargetView2D, Color4.Black)
    context.Draw(triangleCount, 0)
    swapChain.Present(1, PresentFlags.None)
This explains why the 2D rendering happened to work because of the bug (the pixel shader reads from the buffer), while the 3D rendering did not (the vertex shader reads from the buffer).
======================= My original post: =================
A few days ago I posted a problem, "How can I feed compute shader results into vertex shader w/o using a vertex buffer?", that was probably too complicated to be answered. Meanwhile I have stripped the setup down to a much simpler case:
struct PS_IN
{
float4 pos : SV_POSITION;
float4 col : COLOR;
};
RWStructuredBuffer<float4> colorOutputTable : register (u5);
StructuredBuffer<float4> output2 : register (t5);
Case A: pixel shader sets color (works)
// vertex shader A
PS_IN VS_A ( uint vid : SV_VertexID )
{
PS_IN output = (PS_IN)0;
if (vid == 0) output.pos = float4(-1, -1, 0, 1);
if (vid == 1) output.pos = float4( 1, -1, 0, 1);
if (vid == 2) output.pos = float4(-1, 1, 0, 1);
if (vid == 3) output.pos = float4( 1, 1, 0, 1);
return output;
}
// pixel shader A
float4 PS_A ( float4 input : SV_Position) : SV_Target
{
uint2 pixel = uint2(input.x, input.y);
return output2[ pixel.y * width + pixel.x]; // PS accesses buffer (works)
}
Case B: vertex shader sets color (doesn't work)
// vertex shader B
PS_IN VS_B ( uint vid : SV_VertexID )
{
PS_IN output = (PS_IN)0;
if (vid == 0) output.pos = float4(-1, -1, 0, 1);
if (vid == 1) output.pos = float4( 1, -1, 0, 1);
if (vid == 2) output.pos = float4(-1, 1, 0, 1);
if (vid == 3) output.pos = float4( 1, 1, 0, 1);
output.col = output2[vid]; // VS accesses buffer (does not work)
return output;
}
// pixel shader B
float4 PS_B (PS_IN input ) : SV_Target
{
return input.col;
}
Obviously the pixel shader can access the "output2" buffer while the vertex shader can't (it always reads zeros).
Searching the Internet I could not find any explanation for this behavior. In my “real” application a compute shader calculates a triangle list and stores it in a RWStructuredBuffer, so I need access to this table from the vertex shader (via the mapped slot).
I guess that many people who work with compute shaders might stumble over this problem. Any idea how to solve this? (I currently can't use feature level 11.1 or 11.2; I have to find a solution based on 11.0.)
Just tried it here; your shader seems to work (I've never had issues with a StructuredBuffer not being accessible at any stage at feature level 11).
The sample is just the SharpDX MiniCube sample (I just replaced the shader code inside and added a buffer).
The only thing I had to do was reverse the winding for your triangle strip (I could have modified the rasterizer state instead).
Since I know you have issues with the debug pipeline, another useful way to see if something is wrong is to use queries (mostly PipelineStatistics and Occlusion in your case).
Some wrong state can easily create issues, so you can immediately see whether something is wrong by looking at the number of pixels written / primitives rendered.
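For reference, a rough sketch of such a pipeline-statistics query in native D3D11 (the code in this thread uses SharpDX/F#; the names below are illustrative):
// Wrap the draw call in a pipeline-statistics query to see how many
// primitives and shader invocations actually went through.
D3D11_QUERY_DESC qdesc = {};
qdesc.Query = D3D11_QUERY_PIPELINE_STATISTICS;
ID3D11Query *query = NULL;
device->CreateQuery(&qdesc, &query);
context->Begin(query);
context->Draw(4, 0);
context->End(query);
// Spin until the result is available (fine for debugging, not for production).
D3D11_QUERY_DATA_PIPELINE_STATISTICS stats = {};
while (context->GetData(query, &stats, sizeof(stats), 0) == S_FALSE) {}
// Inspect stats.IAPrimitives, stats.VSInvocations, stats.PSInvocations, etc.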
Shader code here:
struct PS_IN
{
float4 pos : SV_POSITION;
float4 col : COLOR;
};
RWStructuredBuffer<float4> colorOutputTable : register (u5);
StructuredBuffer<float4> output2 : register (t5);
PS_IN VS_B(uint vid : SV_VertexID)
{
PS_IN output = (PS_IN)0;
if (vid == 1) output.pos = float4(-1, -1, 0, 1);
if (vid == 0) output.pos = float4(1, -1, 0, 1);
if (vid == 3) output.pos = float4(-1, 1, 0, 1);
if (vid == 2) output.pos = float4(1, 1, 0, 1);
output.col = output2[vid]; // VS accesses buffer (does not work)
return output;
}
// pixel shader B
float4 PS_B(PS_IN input) : SV_Target
{
return input.col;
}
Code is here (it's quite bulky so I've put it on pastebin)

How can I repeat my texture in DX

The three.js 3D library has a handy feature where you can set the sampler to repeat mode and set the repeat attribute to whatever values you like; for example, (3, 5) means the texture will repeat 3 times horizontally and 5 times vertically. But now I'm using DirectX and I cannot find a good solution for this problem. Note that the UV coordinates of the vertices still range from 0 to 1, and I don't want to change my HLSL code because I want a programmable solution for this. Thanks very much!
Edit: Assume I already have a cube model, and the texture coordinates of its vertices are between 0 and 1. If I use wrap mode or clamp mode for sampling textures, it's all OK so far. But I want to repeat a texture on one of its faces, and I first need to change to wrap mode; that much I already know. Then I would have to edit my model so that the texture coordinates range from 0 to 3. What if I don't change my model? So far I have come up with one way: add a variable to the pixel shader that represents how many times the map repeats, and multiply the coordinates by this factor when sampling. Not a graceful solution, I think...
Since you've edited your Question, there is another Answer to your problem:
From what I understood, you have a face with uv's like so:
0,1 1,1
-------------
| |
| |
| |
-------------
0,0 1,0
But you want the texture repeated 3 times (for example) instead of just once.
(Without changing the original model)
Multiple solutions here:
You could do it when updating your buffers (if you update them anyway):
D3D11_MAPPED_SUBRESOURCE resource;
HRESULT hResult = D3DDeviceContext->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
if(hResult != S_OK) return false;
YourVertexFormat *ptr=(YourVertexFormat*)resource.pData;
for(int i=0;i<vertexCount;i++)
{
ptr[i] = vertices[i];
ptr[i].uv.x *= multiplyX; //in your case 3
ptr[i].uv.y *= multiplyY; //in your case 5
}
D3DDeviceContext->Unmap(vertexBuffer, 0);
But if you don't need to update the buffer anyway, I wouldn't recommend it, because it is terribly slow.
A faster way is to use the vertex shader:
cbuffer MatrixBuffer
{
matrix worldMatrix;
matrix viewMatrix;
matrix projectionMatrix;
};
struct VertexInputType
{
float4 position : POSITION0;
float2 uv : TEXCOORD0;
// ...
};
struct PixelInputType
{
float4 position : SV_POSITION;
float2 uv : TEXCOORD0;
// ...
};
PixelInputType main(VertexInputType input)
{
input.position.w = 1.0f;
PixelInputType output;
output.position = mul(input.position, worldMatrix);
output.position = mul(output.position, viewMatrix);
output.position = mul(output.position, projectionMatrix);
This is what you basically need:
output.uv = input.uv * 3; // 3x3
Or more advanced:
output.uv = float2(input.uv.x * 3, input.uv.y * 5);
// ...
return output;
}
I would recommend the vertex shader solution, because it's fast, and in DirectX you use vertex shaders anyway, so it's not as expensive as the buffer-update solution...
Hope that helps solve your problem :)
You basically want to create a sampler state like so:
ID3D11SamplerState* m_sampleState;
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
result = ifDEVICE->ifDX11->getD3DDevice()->CreateSamplerState(&samplerDesc, &m_sampleState);
And when you are setting your shader constants, call this:
ifDEVICE->ifDX11->getD3DDeviceContext()->PSSetSamplers(0, 1, &m_sampleState);
Then you can write your pixel shaders like this:
Texture2D shaderTexture;
SamplerState SampleType;
...
float4 main(PixelInputType input) : SV_TARGET
{
float4 textureColor = shaderTexture.Sample(SampleType, input.uv);
...
}
Hope that helps...
