I found this example of implementing Phong lighting in HLSL. It is the first snippet in which I have seen this strange syntax for declaring variables in HLSL:
float3 materialEmissive : EMISSIVE;
float3 materialAmbient : AMBIENT;
Usually, instead of EMISSIVE or AMBIENT, I would declare the register position, like:
float3 materialEmissive : register(c0);
float3 materialAmbient : register(c1);
Why would I declare variables as in the linked example? I checked the DirectX documentation, but could not find whether EMISSIVE or AMBIENT are keywords in HLSL.
In this case, EMISSIVE and AMBIENT are so-called semantics. They describe what the variable should contain (not where it is stored).
The shader compiler can access these semantics to create a handle for the variable. E.g. the Effect Framework and older versions of DirectX let you specify global variables by their semantic name. This decouples the actual shader implementation (i.e. the variable name) from its interface to the outside world (i.e. the semantics).
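For example, with the D3DX Effects framework mentioned above you can look up a parameter by its semantic instead of by its variable name. A minimal sketch (the function name, effect pointer, and values are made up for illustration):

// Sketch only: fetch the HLSL variable tagged ": EMISSIVE" by its semantic
// rather than by its name (D3DX Effects, DirectX 9).
#include <d3dx9effect.h>

void SetMaterial(ID3DXEffect* effect)
{
    // NULL = search the global scope for a parameter with this semantic
    D3DXHANDLE hEmissive = effect->GetParameterBySemantic(NULL, "EMISSIVE");

    const float emissive[3] = { 0.1f, 0.1f, 0.0f };
    effect->SetFloatArray(hEmissive, emissive, 3);
}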
The permute instruction from AVX2 needs a parameter of type imm8. This parameter controls how the permutation is performed. Unfortunately, I do not understand how this imm8 parameter is "created". What value do I have to set, or how can I determine the value I need for a specific permutation?
Example:
_mm256_permute_pd(vec2, 0x5);
Here the parameter 0x5 swaps the first and second doubles in vec2, and the third and fourth doubles in vec2. But how do I know that 0x5 does that?
It's 4x 1-bit indices that select one of the two elements from the corresponding lane of the source vector, for each destination element. Read the Operation section of the docs for the asm instruction: http://felixcloutier.com/x86/VPERMILPD.html.
Or look it up in Intel's intrinsics guide, which has similar pseudo-code that shows exactly how each bit selects the source for an element of the result.
It's not the lane-crossing vpermpd, so it doesn't use the 2-bit indices that the _MM_SHUFFLE helper macro builds, and it's not quite like "Convert _mm_shuffle_epi32 to C expression for the permutation?".
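As a concrete sketch (the variable names are made up), this is how 0x5 maps to the result, bit by bit:

#include <immintrin.h>
#include <cstdio>

int main() {
    // Two 128-bit lanes of two doubles each: {1.0, 2.0 | 3.0, 4.0}
    __m256d vec = _mm256_setr_pd(1.0, 2.0, 3.0, 4.0);

    // imm8 = 0x5 = 0b0101, one selector bit per destination element:
    // bit 0 = 1 -> dst[0] = element 1 of the low lane  (2.0)
    // bit 1 = 0 -> dst[1] = element 0 of the low lane  (1.0)
    // bit 2 = 1 -> dst[2] = element 1 of the high lane (4.0)
    // bit 3 = 0 -> dst[3] = element 0 of the high lane (3.0)
    __m256d r = _mm256_permute_pd(vec, 0x5);

    double out[4];
    _mm256_storeu_pd(out, r);
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); // 2 1 4 3
}

Compile with AVX enabled (e.g. -mavx).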
I want to use some of the vector data types defined in Metal (such as uint2) in Objective-C for my iOS app. However, I haven't been able to find any information about whether this is possible.
Those types are in <simd/SIMD.h> for C and C++ (and by extension, for Objective-C and Objective-C++).
They're actually the same types, with the same data layout, and the same associated functions, as those that you use from a Metal shader. So using them in CPU-side code where you expect to interface with Metal is a great idea. For example, you can define your own struct for vertex shader input in a C++ header file, then import that header and use the same struct definition in both your CPU code and the shader.
Note that the names differ a bit: e.g. uint2 is vector_uint2 in C, but simd::uint2 in C++.
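For example, a minimal sketch of such a shared definition (the header and struct names are made up):

// Hypothetical shared header, e.g. "ShaderTypes.h", imported by both the
// Objective-C/C++ app code and the .metal shader source.
#include <simd/simd.h>

typedef struct {
    vector_float4 position;   // appears as float4 inside the Metal shader
    vector_uint2  tileCoord;  // appears as uint2 inside the Metal shader
} VertexUniforms;

Because the layout is identical on both sides, a buffer filled with this struct from the app can be read directly by the shader.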
I am trying to make simple use of typeclasses in Nim. Please keep in mind that I have only been using Nim since this morning, so I may be doing something stupid.
Anyway, I would like to define a pseudorandom generator that produces a stream of values of type T. Sometimes T is numeric, hence it makes sense to know something about the minimum and maximum attainable values, say to rescale the values. Here are my types:
type
  Generator*[T] = generic x
    next(var x) is T

  BoundedGenerator*[T] = generic x
    x is Generator[T]
    min(x) is T
    max(x) is T
I also have such an instance, say LinearCongruentialGenerator.
Say I want to use this to define a Uniform generator that produces float values in an interval. I have tried:
type Uniform* = object
  gen: BoundedGenerator[int]
  min_p: float
  max_p: float

proc create*(gen: BoundedGenerator[int], min: float, max: float): Uniform =
  return Uniform(gen: gen, min_p: min, max_p: max)
I omit the obvious definitions of next, min and max.
The above, however, does not compile, failing with Error: 'BoundedGenerator' is not a concrete type.
If I explicitly put LinearCongruentialGenerator in place of BoundedGenerator[int], everything compiles, but of course I want to be able to switch to more sophisticated generators.
Can anyone help me understand the compiler error?
Type classes in Nim are not used to create abstract polymorphic types, as is the case with Haskell's type classes and C++'s interfaces. Instead, they are much more similar to the concepts proposal for C++. They define a set of arbitrary type requirements that can be used as overload-resolution criteria for generic functions.
If you want to work with abstract types, you can either define a type hierarchy with a common base type and use methods (which use multiple dispatch) or you can roll your own vtable-based solution. In the future, the user defined type classes will gain the ability to automatically convert the matched values to a different type (during overload resolution). This will make the vtable approach very easy to use as values of types with compatible interfaces will be convertible to a "fat pointer" carrying the vtable externally to the object (with the benefit that many pointers with different abstract types can be created for the same object). I'll be implementing these mechanisms in the next few months, hopefully before the 1.0 release.
Araq (the primary author of Nim) also has some plans for optimizing a certain kind of group of closures, bundled together, into a cheaper representation where the closure environment is shared between them; the end result is quite close to the traditional C++-like vtable-carrying object.
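For the first route (a common base type plus methods), a minimal sketch in Nim (the names, the base-method body, and the LCG constants are made up for illustration; min/max are omitted) might look like this:

type
  Generator* = ref object of RootObj   # common abstract base type

  LinearCongruentialGenerator* = ref object of Generator
    state: int

method next*(g: Generator): int {.base.} =
  raise newException(ValueError, "next must be overridden")

method next*(g: LinearCongruentialGenerator): int =
  # illustrative update rule only, not a carefully chosen LCG parameterization
  g.state = (g.state * 1103515245 + 12345) mod 2147483648
  result = g.state

type
  Uniform* = object
    gen: Generator          # any subtype of Generator can be stored here
    min_p, max_p: float

proc create*(gen: Generator, min, max: float): Uniform =
  Uniform(gen: gen, min_p: min, max_p: max)

Here Uniform stores the abstract Generator, so any concrete generator can be plugged in, at the cost of dynamic dispatch.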
Is there any important difference between using parentheses and angle brackets for texture sampler parameters? I have used them interchangeably before without noticing any difference.
For instance
Sampler TexSceneSampler {
Texture = <TexScene>;
}
Versus
Sampler TexSceneSampler {
Texture = (TexScene);
}
The angle brackets are the correct format according to the docs. However, AFAIK, there is no difference between the two. I'd stick with the angle brackets, though, in case the compiler changes. Mind, it's a DX9-only thing, so you'll probably be OK either way.
I am using #defines, which I pass at runtime to my shader sources based on program state, to reduce the complexity of my huge shaders. I would like to write the optimized shader to a file so that the next time I run my program, I do not have to pass the #defines again; instead, I can compile the optimized shaders directly during program startup, because I now know what kind of shaders my program needs.
Is there a way to get the result of the shader preprocessor? I could of course store the #define values in a file and compile the shaders based on that during program startup, but that would not be as elegant.
Preprocess the shader source using a C preprocessor.
For example, GCC has an option to preprocess the source only and save the intermediate result to another file; combined with the option for defining preprocessor symbols, you get the desired result.
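For instance, something along these lines would do it (the file and macro names are made up):

# -E: stop after preprocessing, -P: omit #line markers,
# -x c: treat the shader file as C so gcc accepts its extension,
# -D: define the same symbols you would otherwise pass at runtime
gcc -E -P -x c -DUSE_SHADOW_MAP=1 -DNUM_LIGHTS=4 big_shader.src -o big_shader_pre.src

The preprocessed file can then be compiled directly at the next program startup.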