Can I share a Texture2D with ArraySize > 1 between two D3D11 devices? - directx

I need to share a 2D texture with multiple array slices. The documentation says you cannot share mipmapped textures, but it says nothing about array size. When I try with the GetSharedHandle and OpenSharedResource methods, only the slice at index 0 works; the other slices are corrupted. So is it possible?

Related

How to handle a 3D texture in WebGL2

I am trying to work with 3D textures in WebGL2 and came across
gl.texImage3D();
I have experience with 2D textures and found it very convenient, but there is another approach that people are using on the internet:
gl.texStorage3D()
and then,
gl.texSubImage3D() // with the x, y, and z offsets all set to 0
I just want to know the difference between the two approaches. I learned that an equivalent of the second option is available for 2D textures as well, but I have never used it to provide data to the target. I thought subImage was for providing a sub-image of a texture to the fragment shader, but I still don't understand the difference between the two approaches.
The short answer is texStorage2D and texStorage3D allocate all of the texture memory up front, whereas texImage2D and texImage3D allocate one mip level at a time.
texSubImage2D and texSubImage3D do not allocate anything. They just copy data into a texture mip level that was previously allocated with one of the functions above.
As for why you'd use one or the other: texStorage2D and texStorage3D can immediately allocate memory on the GPU. texImage2D and texImage3D cannot, since they don't know the complete texture (all the mips) until you actually try to draw something with the texture. To put it another way, texStorage2D/3D might be more efficient, whereas texImage2D/3D is more flexible.
In order for a texture to actually be renderable, all the mip levels you are going to use need to have the same internal format and the correct sizes.
When you call texStorage2D/3D you tell it the size of mip level 0 (the largest level) and how many mip levels in total to allocate. So let's say you give it an internal format of gl.RGBA8, a width and height of 8, and 4 mip levels.
gl.texStorage2D(gl.TEXTURE_2D,
                4,         // 4 levels
                gl.RGBA8,  // internal format
                8,         // width
                8);        // height
It will allocate all 4 mip levels: 8×8, 4×4, 2×2, and 1×1 (each at 4 bytes per pixel for RGBA8). It knows they are all RGBA8. It knows they are all the correct size. Textures allocated with texStorage2D can't be changed in size or internal format. If you try to call texImage2D on a texture created with texStorage2D you'll get an error.
If you instead use texImage2D, you would first specify mip level 0:
gl.texImage2D(gl.TEXTURE_2D,
              0,                 // mip level
              gl.RGBA8,          // internal format
              8,                 // width
              8,                 // height
              0,                 // border
              gl.RGBA,           // data format
              gl.UNSIGNED_BYTE,  // data type
              data);
So now you have just one mip level, level #0. Will you add the other 3 mips? Will they be the correct size? Will those other 3 mips have the same internal format? Will you change mip level #0 to something else, a different size, or a different internal format? WebGL has no idea what your next command will be; it has to wait until you actually try to draw with the texture before it can check. With texStorage you decide the sizes and formats of all the mips up front, so it only has to check once. With texImage you don't tell it everything up front, so it has to check again at draw time in case things changed.
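For the 3D-texture case the question asks about, here is a minimal sketch of the texStorage3D + texSubImage3D approach. It assumes a WebGL2 context named gl and an 8×8×8 RGBA8 volume; the data array is just a placeholder for your own voxel data.
// Allocate every mip level of the 3D texture up front...
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_3D, tex);
gl.texStorage3D(gl.TEXTURE_3D, 4, gl.RGBA8, 8, 8, 8);   // 4 levels: 8, 4, 2, 1
// ...then copy data into level 0 with all offsets at 0.
const data = new Uint8Array(8 * 8 * 8 * 4);              // placeholder voxel data
gl.texSubImage3D(gl.TEXTURE_3D,
                 0,                                       // mip level
                 0, 0, 0,                                 // x, y, z offsets
                 8, 8, 8,                                 // width, height, depth
                 gl.RGBA, gl.UNSIGNED_BYTE, data);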

Write pixel data to a certain mipmap level of a texture2d

As you might know, the Metal Shading Language allows a few ways to read pixel data from a texture2d in a kernel function. It can be either a simple read(short2 coord) or sample(float2 coord, [various additional parameters]). But I noticed that when it comes to writing something into a texture, there's only the write method.
The problem here is that the sample method allows sampling from a specific mipmap level, which is very convenient: the developer just needs to create a sampler with a mipFilter and use normalized coordinates.
But what if I want to write into a specific mipmap level of the texture? The write method doesn't have a mipmap parameter the way sample does, and I cannot find any alternative.
I'm pretty sure there should be a way to choose the mipmap level for writing data to a texture, because the Metal Performance Shaders framework has solutions that populate texture mipmaps.
Thanks in advance!
You can do this with texture views.
The purpose of texture views is to reinterpret the contents of a base texture by selecting a subset of its levels and slices and potentially reading/writing its pixels in a different (but compatible) pixel format.
The -newTextureViewWithPixelFormat:textureType:levels:slices: method on the MTLTexture protocol returns a new instance of id<MTLTexture> that has the first level specified in the levels range as its base mip level. By creating one view per mip level you wish to write to, you can "target" each level in the original texture.
For example, to create a texture view on the second mip level of a 2D texture, you might call the method like this:
id<MTLTexture> viewTexture =
    [baseTexture newTextureViewWithPixelFormat:baseTexture.pixelFormat
                                    textureType:baseTexture.textureType
                                         levels:NSMakeRange(1, 1)
                                         slices:NSMakeRange(0, 1)];
When binding this new texture as an argument, its mip level 0 will correspond to mip level 1 of its base texture. You can therefore use the ordinary texture write function in a shader to write to the selected mip level:
myShaderTexture.write(color, coords);

Live AR data capture of ARFaceGeometry to a custom Metal pipeline

I've got a Metal pipeline working. I'm rendering the live face geometry captured from the TrueDepth camera on an iPhone X. I grab the ARFaceGeometry from the ARSessionDelegate every frame and pass the data through my framework into the Metal pipeline. Though the render runs at only 2.5 fps...
Here's the render pipeline: PixelsRender.swift
Data: xyz positions, uvs, and an index array.
The ARFaceGeometry consists of 2304 triangles.
I timed the render pipeline:
[1.086ms] Command Buffer
[0.006ms] Input Texture
[0.054ms] Drawable
[0.110ms] Command Encoder
[0.123ms] Uniforms
[0.006ms] Uniform Arrays
[0.009ms] Fragment Texture
[68.015ms] Vertices
[0.002ms] Vertex Uniforms
[0.000ms] Custom Vertex Texture
[0.027ms] Draw
[0.036ms] Encode
[80.207ms] All CPU
[346.936ms] GPU
[431.035ms] All CPU + GPU
[434.100ms] Total
It's the vertices that take such a long time to render.
Is there a way to cache the memory space on the GPU or something?
I've got a lot to optimise, I'm sure, though is there anything obvious I'm missing?
Update (Solved)
I was mistaking instances for triangles!
It was in the main draw func. (Thanks Ken Thomases for catching this)
commandEncoder.drawPrimitives(type: vertices.type,
                              vertexStart: 0,
                              vertexCount: vertices.vertexCount,
                              instanceCount: 1 /* previously the triangle count of 2304 */)
The new GPU time:
[2.769ms] GPU
You were unintentionally using instanced drawing, by passing a value greater than 1 for the instanceCount: parameter. That basically multiplies the amount of rendering work the GPU has to do. So, if you don't actually need/want instanced drawing, pass 1 there.

WebGL: Are there cases where gl.MAX_TEXTURE_IMAGE_UNITS == 1

Sorry to ask such a strange question, but I'm working on some logic for a WebGL visualization and would like to know, are there cases where:
gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS)
equals 1?
I ask because I'm trying to figure out how many vertices I can draw in each draw call, and each vertex needs some content from one of several textures. The minimal case I'm wanting to support is one in which I load two textures for each draw call, but if there are cards that don't support multiple textures per draw call I'll need to rethink my life.
The minimum value of MAX_TEXTURE_IMAGE_UNITS that WebGL is required to support is 8. You can look up the limits in the spec, section 6.2. Note: search for "MAX TEXTURE IMAGE UNITS" (with spaces, not underscores).
That said, WebGL has a different limit for textures used in a fragment shader vs textures used in a vertex shader.
For a vertex shader the required minimum is 0 in WebGL1. You can check the number of textures supported in a vertex shader by looking at MAX_VERTEX_TEXTURE_IMAGE_UNITS.
Fortunately, most machines support at least 4 in the vertex shader.
There is also yet another limit, MAX_COMBINED_TEXTURE_IMAGE_UNITS, which is how many textures total you can use combined. In other words, if MAX_COMBINED_TEXTURE_IMAGE_UNITS is 8, MAX_TEXTURE_IMAGE_UNITS is 8, and MAX_VERTEX_TEXTURE_IMAGE_UNITS is 4, that means you could use 8 textures at once, of which up to 4 could be used in the vertex shader. You could not use 12 textures at once.
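For example, you can query all three limits at runtime (assuming a WebGL rendering context named gl):
const maxFragmentUnits = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);          // fragment shader
const maxVertexUnits   = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);   // vertex shader
const maxCombinedUnits = gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS); // both combined
console.log(maxFragmentUnits, maxVertexUnits, maxCombinedUnits);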
Other minimums
MAX VERTEX ATTRIBS 8
MAX VERTEX UNIFORM VECTORS 128
MAX VARYING VECTORS 8
MAX COMBINED TEXTURE IMAGE UNITS 8
MAX VERTEX TEXTURE IMAGE UNITS 0
MAX TEXTURE IMAGE UNITS 8
MAX FRAGMENT UNIFORM VECTORS 16

Drawing a variable number of textures

For some scientific data visualization, I am drawing a large float array using WebGL. The dataset is two-dimensional and typically hundreds or a few thousand values in height and several tens of thousands of values in width.
To fit this dataset into video memory, I cut it up into several non-square textures (depending on MAX_TEXTURE_SIZE) and display them next to one another. I use the same shader with a single sampler2D to draw all the textures. This means that I have to iterate over all the textures for drawing:
for (var i = 0; i < dataTextures.length; i++) {
    gl.activeTexture(gl.TEXTURE0 + i);
    gl.bindTexture(gl.TEXTURE_2D, dataTextures[i]);
    gl.uniform1i(samplerUniform, i);
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexPositionBuffers[i]);
    gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
However, if the number of textures gets larger than half a dozen, performance becomes quite bad. Now I know that games use quite a few more textures than that, so this can't be expected behavior. I also read that you can bind arrays of samplers, but as far as I can tell, the total number of textures has to be known ahead of time. For me, the number of textures depends on the dataset, so I can't know it before loading the data.
Also, I suspect that I am doing unnecessary things in this render loop. Any hints would be welcome.
How would you normally draw a variable number of textures in WebGL?
Here are a few previous answers that will help:
How to bind an array of textures to a WebGL shader uniform?
How to send multiple textures to a fragment shader in WebGL?
How many textures can I use in a webgl fragment shader?
Some ways off the top of my head:
Create a shader that loops over N textures. Set the textures you're not using to some 1x1 pixel texture with 0,0,0,0 in it, or something else that doesn't affect your calculations.
Create a shader that loops over N textures. Create a uniform boolean array; in the loop, skip any texture whose corresponding boolean value is false.
Generate a shader on the fly that has exactly the number of textures you need. It shouldn't be that hard to concatenate a few strings, etc. (a rough sketch follows below).
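As a sketch of that last option, here is one way a fragment shader could be generated at runtime with exactly numTextures samplers. The names makeFragmentShaderSource, uTexture*, uTextureIndex, and vUV are illustrative, not from the question; you would still compile and link the result with your own shader setup.
// Build a fragment shader with exactly numTextures samplers; the
// uTextureIndex uniform selects which one to sample.
function makeFragmentShaderSource(numTextures) {
  let src = 'precision mediump float;\n' +
            'varying vec2 vUV;\n' +
            'uniform float uTextureIndex;\n';
  for (let i = 0; i < numTextures; i++) {
    src += 'uniform sampler2D uTexture' + i + ';\n';
  }
  src += 'void main() {\n' +
         '  gl_FragColor = vec4(0.0);\n';  // fallback if no index matches
  for (let i = 0; i < numTextures; i++) {
    src += '  if (uTextureIndex == ' + i + '.0) {\n' +
           '    gl_FragColor = texture2D(uTexture' + i + ', vUV);\n' +
           '  }\n';
  }
  src += '}\n';
  return src;
}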
