DirectX/HLSL: what are input and output semantics for?

I was wondering: what are the input and output semantics in HLSL for?
I.e., why do I have to write the : TEXCOORD0 in
struct VS_OUTPUT
{
float2 tc : TEXCOORD0;
};
when the type and the name are already given?

Semantics tell the pipeline where data should be read from or written to. They correspond to parts of the vertex structure or to certain pipeline values.
In your example above, the value of tc comes from the first set of texture coordinates.
For info on semantics and what they mean, check here: http://msdn.microsoft.com/en-us/library/bb509647(v=vs.85).aspx
In the vertex shader, the data will come from the FVF or vertex declaration.
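As an illustrative sketch (the structure and variable names below are made up, not from the question), here is how semantics wire a vertex shader's input and output together:
struct VS_INPUT
{
    float4 pos : POSITION;  // filled from the position element of the vertex declaration
    float2 tc  : TEXCOORD0; // filled from the first texture coordinate element
};
struct VS_OUTPUT
{
    float4 pos : POSITION;  // consumed by the rasterizer (SV_POSITION in D3D10+)
    float2 tc  : TEXCOORD0; // matched to the pixel shader input by semantic, not by name
};
VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.pos = input.pos; // a real shader would transform to clip space here
    output.tc  = input.tc;
    return output;
}
The names pos and tc never leave the shader; it is the semantics that tell the runtime and the next pipeline stage how to connect the data.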

Related

iOS: how does Metal convert the fragment shader return value to MTKView.colorPixelFormat?

I have a question about the relationship between the fragment shader return value and MTKView.colorPixelFormat.
My fragment shader returns a float4, i.e. a vector of four 32-bit floats, but MTKView.colorPixelFormat is .bgr10_xr.
How do I convert the float4 to .bgr10_xr? Or does this conversion happen automatically?
It should just work; Metal will do the conversion for you. Refer to section 7.7 of the Metal Shading Language Specification: there's an entry about 10-bit formats, 7.7.4 "Conversion for 10- and 11-bit Floating-Point Pixel Data Type".
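To illustrate: nothing format-specific is needed in the shader itself. A minimal sketch (function name and return value are made up):
#include <metal_stdlib>
using namespace metal;

// The fragment function just returns float4; on write, Metal converts the
// value to the render target's pixel format (.bgr10_xr in this case).
fragment float4 passthroughFragment()
{
    return float4(0.25, 0.5, 0.75, 1.0);
}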

What is the correct usage of sampler lod_options (MSL) in Metal shader?

I'm trying to learn MSL through the Metal Shading Language Specification, and saw that you can set LOD options when sampling a texture by specifying the options in the sample function. This is one of the examples given in the spec:
Tv sample(sampler s, float2 coord, lod_options options, int2 offset = int2(0)) const
lod_options include bias, level, gradient2d, etc.
I've looked all over but cannot find the usage syntax for this. Are these named arguments? Is lod_options a struct? For example, if I want to specify the LOD level, what is the correct way to do it? I know these options can also be specified in the sampler object itself, but if I want to do it here, what would be the right syntax to do so?
There is no lod_options type as such; you can think of it as a placeholder for one of the bias, level, gradient2d, etc. types. Each of these types is a different struct, which allows the Metal standard library to have an overloaded variant of the sample function for each such option.
To specify, for example, the mipmap level to sample, you'd provide a parameter of level type:
float4 color = myTexture.sample(mySampler, coords, level(1));
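The other overloads follow the same pattern; for instance (the values here are illustrative):
// Add an LOD bias of 0.5 to the level the hardware would otherwise pick:
float4 biased = myTexture.sample(mySampler, coords, bias(0.5));

// Provide explicit screen-space gradients instead of implicit derivatives:
float4 graded = myTexture.sample(mySampler, coords,
                                 gradient2d(dfdx(coords), dfdy(coords)));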

How to specify LOD bias in Metal?

I'm rewriting an OpenGL filter from the Android version of the app I'm currently working at in Metal. It uses the following texture lookup function:
vec4 texture2D(sampler2D sampler, vec2 coord, float bias)
Assuming my filter kernel function looks like this:
float4 fname(sampler src) {
...
}
The texture lookup call would be the following:
src.sample(coord)
But how can I pass the bias parameter? (the sample function takes only 1 argument)
I'm afraid Core Image only supports 2D textures – no mipmapping or LOD selection; only bilinear sampling is available.
If you need different LODs, you need to pass different samplers to your kernel and do the interpolation yourself.
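As a sketch of that workaround, a Core Image Metal kernel could take two pre-filtered versions of the image and blend them manually (the kernel and parameter names are made up):
#include <CoreImage/CoreImage.h>
using namespace metal;

// 'fine' and 'coarse' are the same image pre-filtered to two resolutions;
// 't' emulates the fractional LOD between them.
extern "C" float4 lodBlend(coreimage::sampler fine,
                           coreimage::sampler coarse,
                           float t)
{
    float4 a = fine.sample(fine.coord());
    float4 b = coarse.sample(coarse.coord());
    return mix(a, b, t);
}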

Is it better to use one large buffer with all related data or several smaller buffers in HLSL?

I'm interested, from both a code-design and a performance standpoint, in whether it is better to use separate buffers when sending data to the GPU in HLSL (or another high-level shading language).
This is for the case where a shader needs a lot of variable data that changes at runtime and therefore must be passed in via buffers.
Here is a very basic example:
cbuffer SomeLargeBuffer : register(b0)
{
float3 data;
float someData;
float4 largeArray[2500];
float moreData;
...
...
...
...
...
}
or to have
cbuffer SmallerBuffer : register(b0)
{
float3 data;
float someRelatedData;
}
cbuffer SecondSmallerBuffer : register(b1)
{
float4 largeArray[2500];
float moreData;
}
cbuffer ThirdBuffer : register(b2)
{
...
...
...
}
In terms of efficiency, the documentation on shader constants in HLSL gives the following advice:
The best way to efficiently use constant buffers is to organize shader
variables into constant buffers based on their frequency of update.
This allows an application to minimize the bandwidth required for
updating shader constants. For example, a shader might declare two
constant buffers and organize the data in each based on their
frequency of update: data that needs to be updated on a per-object
basis (like a world matrix) is grouped into a constant buffer which
could be updated for each object. This is separate from data that
characterizes a scene and is therefore likely to be updated much less
often (when the scene changes).
So, if your data updates at different rates, it is best to group data that is updated at the same frequency into the same constant buffer. Generally, data is updated either a) every frame, b) sporadically, or c) never (once at startup). Reducing the total number of constant buffers also helps performance, because it reduces the number of binding calls and the amount of resource tracking required.
In terms of code design it's difficult to give a general rule, although the frequency-of-update grouping usually fits the code structure naturally; see the sketch below.
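As a sketch of that grouping (the buffer names and members are illustrative):
// Updated and bound once per frame.
cbuffer PerFrame : register(b0)
{
    float4x4 viewProjection;
    float3   cameraPosition;
    float    time;
};

// Updated and rebound for each object drawn.
cbuffer PerObject : register(b1)
{
    float4x4 world;
    float4   materialColor;
};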

Sampling from single channel textures in DirectX

I have a 2D texture formatted as DXGI_FORMAT_R32_FLOAT. In my pixel shader I sample from it like this:
float sample = texture.Sample(sampler, coordinates);
This results in the following compiler warning:
warning X3206: implicit truncation of vector type
I'm confused by this. Shouldn't Sample return a single channel, and therefore a scalar value, as opposed to a vector?
I'm using shader model 4 level 9_1.
Either declare your texture as having one channel, or specify which channel you want. Without the <float> part, the compiler assumes it's a four-channel texture, so Sample returns a float4.
Texture2D<float> texture;
or
float sample = texture.Sample(sampler, coordinates).r;
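Putting it together, a minimal pixel shader using the single-channel declaration might look like this (the resource names are placeholders):
// Declaring the texture as Texture2D<float> matches the
// DXGI_FORMAT_R32_FLOAT resource, so Sample returns a scalar.
Texture2D<float> heightMap   : register(t0);
SamplerState     linearClamp : register(s0);

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    float height = heightMap.Sample(linearClamp, uv); // no truncation warning
    return float4(height, height, height, 1.0f);
}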
