HLSL sampler inside struct possible?

I'm using the DirectX 9 effect framework.
I'd like to create a struct which contains a sampler like so:
struct Test
{
    texture tex;
    sampler texSamp = sampler_state
    {
        Texture = <tex>;
    };
};
However, the shader compiler fails with:
internal error: this-relative Test::tex 'tex' found outsideof function scope
It seems the compiler partially understands the idea of the this-relative reference, but wants it declared inside a function. I'm not sure how that could work, since declaring samplers inside functions doesn't work. Does anyone have any ideas?

I thought that in HLSL everything is a value type. Do you know what implication that would have? Each time you assigned this struct to some other variable, you would make a copy of the sampler. Shading languages also place limits on many things, such as the number of sampling instructions, not only the number of samplers.

It seems that non-numeric types are not supported inside HLSL structs, which is a crying shame for my application.
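For reference, the usual workaround is to declare the texture/sampler pair at global (effect) scope, where sampler_state initializers are allowed, and pass the sampler to functions as a parameter instead of packing it into a struct. A minimal sketch, with illustrative names:
// Workaround sketch: globals instead of struct members.
texture gTex;
sampler gTexSamp = sampler_state
{
    Texture = <gTex>;
};
// Samplers can be passed as function parameters; the compiler
// resolves them statically when it inlines the function.
float4 SampleIt(sampler s, float2 uv)
{
    return tex2D(s, uv);
}
float4 PS(float2 uv : TEXCOORD0) : COLOR
{
    return SampleIt(gTexSamp, uv);
}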

Related

Xcode Metal template: Where is VertexAttribute.position defined?

In Xcode 9's Metal template, there is one part where it sets attributes and layouts on an MTLVertexDescriptor.
let mtlVertexDescriptor = MTLVertexDescriptor()
mtlVertexDescriptor.attributes[VertexAttribute.position.rawValue].format = ...
mtlVertexDescriptor.attributes[VertexAttribute.texcoord.rawValue].format = ...
I tried hard, but cannot figure out where the magic keywords position, texcoord, and later meshPositions and meshGenerics come from.
I guess they're not from the source code, but I couldn't find any documentation where they are specified. For VertexAttribute, all I found was the reference page, with no mention of position or texcoord.
Xcode points me to ShaderTypes.h, namely this section:
typedef NS_ENUM(NSInteger, VertexAttribute) {
    VertexAttributePosition = 0,
    VertexAttributeTexcoord = 1,
};
I feel this is the key to understanding it, but I have several problems:
As a developer who started with Swift, this Obj-C part in a Swift project template is a bit confusing. Can you explain clearly what it does?
How does VertexAttributePosition in an NS_ENUM add a magical property .position, and VertexAttributeTexcoord -> .texcoord?
Why are none of these documented (Google finds mostly OpenGL-related pages), and why can't Xcode offer any help or jump-to-definition for them?
The clue to understanding what's going on here can be found on this line in the ShaderTypes.h file:
// Header containing types and enum constants shared between Metal shaders and Swift/ObjC source
The express intent of ShaderTypes.h is to declare types that can be used by both the shaders (which are written in Metal Shading Language, a dialect of C++) and the renderer class (which is written in Objective-C or Swift, depending on which you select when opening the template). The way this is achieved is by constructing a header file that can be included in both. The twist comes when using Swift, because Swift lacks a notion of header files except for one special case: bridging headers.
When incorporating C or Objective-C code into a Swift app, you provide a bridging header that imports or declares types and methods you want to use in Swift. In Xcode, you configure this with the "Objective-C Bridging Header" setting in the "Swift Compiler - General" portion of your project's Build Settings. It's kinda buried, but if you go there, you'll see that it's populated with the "ShaderTypes.h" header from the template. That's how your Swift code knows about the VertexAttribute enum type.
In Objective-C, to get the attribute index, you'd use one of the defined enum values directly: VertexAttributePosition, which is functionally equivalent to a literal 0. When that gets imported into Swift, the name of the enum (VertexAttribute) gets stripped off the front, and the values get transformed into lower camel case, per Swift style, e.g. position. You can read more about the particulars here.
The upshot of this is that even though there is no enum value named "position" anywhere in the code, that name gets synthesized for you when you use the Objective-C enumeration from Swift. The rawValue property is the integer value associated with that particular enum value, which can then be used as an index on an attribute or vertex descriptor (again, in this case, it's equal to 0).
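To make the renaming concrete, here is roughly the Swift interface the compiler synthesizes for that NS_ENUM; this declaration doesn't exist as code anywhere in the template:
// Approximate generated Swift interface for the NS_ENUM above.
public enum VertexAttribute: Int {
    case position = 0   // from VertexAttributePosition: enum-name prefix stripped, lowerCamelCased
    case texcoord = 1   // from VertexAttributeTexcoord
}
// Which is why the template's descriptor line compiles:
let index = VertexAttribute.position.rawValue   // == 0
mtlVertexDescriptor.attributes[index].format = MTLVertexFormat.float3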
None of these are documented because they're defined exclusively in this template. They're not part of the Metal API or really any API; they're just names that provide a convenient label for the underlying constants, in order to make the shader and app code consistent with one another.

Is making WebGL context object a global/semi-global variable a bad idea?

So, my idea is to do something like that (the code is simplified of course):
var gl;
function Renderer(canvas) {
    gl = this.gl = canvas.getContext('experimental-webgl');
}
function Object() {
}
Object.prototype.render = function() {
    ...
    gl.drawElements(...);
}
The gl variable itself can be placed in a namespace for better consistency; it can also be encapsulated by wrapping all the code in an anonymous function, to make sure it won't clash with anything.
I can see one obvious tradeoff here: problems with running multiple WebGL canvases on the same page. But I'm totally fine with that.
Why do this? Because otherwise it's more painful to call any WebGL functions; you have to pass your renderer as a parameter here and there. That's actually the thing I don't like about Three.js: all the graphics stuff is handled inside a Renderer object, which makes the whole Renderer object huge and complicated.
With a globally visible context, you don't have to bother with OpenGL constants, you don't have to worry about your renderer object's visibility, and so on.
So, my question is: should I expect any traps with this approach? Aside from potential emptiness of the gl variable, of course.
Define bad
Lots of WebGL programs do this. OpenGL does this by default, since its functions are global in scope. In normal OpenGL you have to call eglMakeCurrent (or the platform's equivalent) to switch contexts, which is effectively just a hidden gl = contextToMakeCurrent under the hood.
So, basically, it's up to you. If you think you'll someday need multiple WebGL contexts, then it might be wise not to have your code use a global context variable. But you can always fall back to the eglMakeCurrent style of coding. Both have their pluses and minuses.
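A minimal sketch of that fallback style, with illustrative names:
// A module-level "current context", mirroring the eglMakeCurrent pattern.
var gl = null;
function makeCurrent(context) {
    gl = context;
}
function Renderer(canvas) {
    this.gl = canvas.getContext('webgl');
    makeCurrent(this.gl); // this renderer's context becomes current
}
Renderer.prototype.render = function(mesh) {
    // Drawing code just uses the module-level gl.
    gl.drawElements(gl.TRIANGLES, mesh.indexCount, gl.UNSIGNED_SHORT, 0);
};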

Can DirectX11 Dynamic Shader Linkage be used without Shader Reflections?

I tried to implement dynamic shader linkage from what I saw in the DirectX11 SDK, but the samples use the Effects11 framework and shader reflection. I'm trying for a cleaner, more low-level implementation. For instance, for constant buffers I just set a struct instead of using reflection. I couldn't find a clean tutorial anywhere on how to implement dynamic shader linkage in DirectX; everyone uses huge pieces of Effects11 code.
It is possible to use dynamic shader linkage in DirectX 11 without using shader reflection; however, it means that you need to know the names of the classes and interfaces at compile time.
I have achieved this myself by using a combination of shader preprocessor macros that declare all of my shader classes, and a common header file that I include in both my shader and my .cpp file.
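A minimal sketch of the reflection-free path, assuming a Shader Model 5 shader with a single interface slot and a class instance named gLambertLight declared in the HLSL (all names here are illustrative, and device, context, bytecode, and bytecodeSize are assumed to exist):
// HLSL side (sketch):
//   interface ILight { float4 Shade(float3 n); };
//   class LambertLight : ILight { float4 Shade(float3 n) { /* ... */ } };
//   cbuffer cbLights { LambertLight gLambertLight; }  // concrete instance
//   ILight gAbstractLight;                            // shader calls through this
// C++ side: create the shader with a class linkage, then bind the
// concrete instance by its name, which is known at compile time.
ID3D11ClassLinkage* linkage = nullptr;
device->CreateClassLinkage(&linkage);
ID3D11PixelShader* ps = nullptr;
device->CreatePixelShader(bytecode, bytecodeSize, linkage, &ps);
// Look up the instance declared in the shader by name; no reflection needed.
ID3D11ClassInstance* lightInstance = nullptr;
linkage->GetClassInstance("gLambertLight", 0, &lightInstance);
// The instance array is indexed by the shader's interface slots; with a
// single interface there is exactly one slot.
context->PSSetShader(ps, &lightInstance, 1);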
I've been searching for an answer to this problem too.
Check this out:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff471421(v=vs.85).aspx
Maybe this would help. :)

If I create/assign shared memory in one function, I can use it inside the function I call after can't I?

So, if I have a device (or global) function that creates/copies some data into shared memory and I later call another device function, like so:
__global__ void a() {
    __shared__ int blah = 0;
    fun();
}
__device__ void fun() {
    blah = 1; // perform some operations
    // do whatever
}
I'm a bit rusty with my CUDA. I seem to recall you had to "redeclare" the shared variable inside the called function (I assume that declaration checked whether a shared variable of that name already existed and, if so, referred to it), which had the effect of establishing the context, so the variable didn't just come out of nowhere. Alternatively, if it works like a global variable in standard C/C++ and I can just reference it, as I did above, that would be great.
I am familiar with memory hierarchy, I'm just rusty on the semantics of creating/referencing memory.
Please advise on whether the above sketch would work. Thanks.
No, that won't work in CUDA, any more than it would work in standard C99. Currently, the preferred method of __device__ function compilation is inline expansion (they are also compiled as standalone code objects for the Fermi architecture), but even so, __device__ functions must still obey the standard syntax and scope conventions of C99. So you need to pass arguments which don't have compilation-unit scope to __device__ functions by reference.
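A corrected sketch of the original example, passing the shared variable by reference (note also that __shared__ variables cannot have initializers, so the assignment happens separately):
__device__ void fun(int& blah) // takes the caller's shared variable by reference
{
    blah = 1; // operates directly on shared memory
}
__global__ void a()
{
    __shared__ int blah;   // no initializer allowed on __shared__
    if (threadIdx.x == 0)
        blah = 0;          // initialize from a single thread
    __syncthreads();       // make the value visible to the whole block
    fun(blah);             // pass the shared variable explicitly
}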

Managed Direct3D: Lock entire Vertex Buffer

I have a Mesh object returned from Mesh::TextFromFont and I am trying to set the color of each vertex. I am calling the vertex buffer's Lock function like this:
mesh->VertexBuffer->Lock(0, LockFlags::None);
However, this call throws an exception. Another overload of Lock seems to work fine; however, it requires me to pass the rank of the returned vertex array. What is the solution here? How do I lock the vertex buffer of a mesh returned from TextFromFont?
The answer probably lies here, in the MSDN note for Lock:
"When using this method to retrieve an array from a resource that was not created with a type, always use the overload that accepts a type."
In true MSDN fashion, there is no further explanation.
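For what it's worth, a sketch of the typed overload in C++/CLI — the vertex type is an assumption (check mesh->VertexFormat to see what TextFromFont actually produced):
// Sketch: use the Lock overload that takes a Type plus the rank(s) of the
// returned array, since the buffer wasn't created with a type.
// CustomVertex::PositionNormal is an assumption; verify against mesh->VertexFormat.
int vertexCount = mesh->NumberVertices;
array<CustomVertex::PositionNormal>^ verts =
    safe_cast<array<CustomVertex::PositionNormal>^>(
        mesh->VertexBuffer->Lock(0,
                                 CustomVertex::PositionNormal::typeid,
                                 LockFlags::None,
                                 vertexCount)); // rank of the returned array
// ... modify the vertices here ...
mesh->VertexBuffer->Unlock();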
