Confused about the generation of mipmaps in Metal?

I set mipmapped to YES in texture2DDescriptorWithPixelFormat and call the generateMipmapsForTexture method of MTLBlitCommandEncoder on the given texture to generate mipmaps automatically.
The question is: if I have set mipmapped to YES, doesn't that mean the resulting image should already be mipmapped? Why do I need MTLBlitCommandEncoder to explicitly generate mipmaps?

It's a bit confusing, so let's walk through it.
texture2DDescriptorWithPixelFormat takes format, width, height, and mipmapped as parameters. The mipmapped parameter tells Metal to calculate the number of mip levels the resulting image will have, since there is no parameter for passing a mip level count. Here's how it's described in the documentation:
mipmapped
A Boolean indicating whether the resulting image should be mipmapped. If YES, then the mipmapLevelCount property in the returned descriptor is computed from width and height. If NO, then mipmapLevelCount is 1.
If you instead use newTextureWithDescriptor with a texture descriptor you created yourself, there is no mipmapped parameter, because you explicitly set the number of mip levels in the mipmapLevelCount property of MTLTextureDescriptor.
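To make the two creation paths concrete, here is a minimal Swift sketch (the 512x512 size and pixel format are arbitrary):
import Metal

// Convenience initializer: mipmapped == true makes Metal compute
// mipmapLevelCount from width and height (floor(log2(512)) + 1 = 10).
let convenient = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                          width: 512,
                                                          height: 512,
                                                          mipmapped: true)

// Explicit descriptor: there is no mipmapped flag; you set the
// level count yourself instead.
let explicit = MTLTextureDescriptor()
explicit.pixelFormat = .rgba8Unorm
explicit.width = 512
explicit.height = 512
explicit.mipmapLevelCount = 10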
A freshly created texture is empty, so there is nothing for Metal to generate mipmaps from at creation time.
The generateMipmapsForTexture method is for a texture that already has mip levels allocated: once you have filled the base level with image data, it populates the remaining levels automatically.
So, to get this straight: the mipmapped parameter just tells Metal to create a texture that has mip levels, which you can later populate (if you want) with generateMipmapsForTexture (or in some other way, such as rendering into the texture as a color attachment with a specific level selected).
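Putting it together, a minimal Swift sketch of the usual flow (assuming you already have a device, a commandQueue, and a texture created as above whose base level has been filled, e.g. with replace(region:...)):
let commandBuffer = commandQueue.makeCommandBuffer()!
let blit = commandBuffer.makeBlitCommandEncoder()!
blit.generateMipmaps(for: texture) // fills levels 1...N by downscaling level 0
blit.endEncoding()
commandBuffer.commit()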

Related

How to handle 3D textures in WebGL2

I am trying to work with 3D textures in WebGL2, and I came across
gl.texImage3D();
I have experience with 2D textures and found this very convenient, but there is another approach that people use on the internet:
gl.texStorage3D()
and then,
gl.texSubImage3D() // with the x, y, and z offsets all set to 0
I just want to know the difference between the two approaches. I learned that an equivalent of the second option is available for 2D textures as well, but I have never used it to provide data to the target. I thought the sub-image functions were for updating a sub-region of a texture, but I still don't understand the difference between the two approaches.
The short answer is that texStorage2D and texStorage3D allocate all of the texture memory up front, whereas texImage2D and texImage3D allocate one mip level at a time.
texSubImage2D and texSubImage3D do not allocate anything. They just copy data into a texture mip level that was previously allocated with one of the functions above.
As for why you would choose one or the other: texStorage2D and texStorage3D can immediately allocate memory on the GPU. texImage2D and texImage3D cannot, since they don't know the complete texture (all of the mips) until you actually try to draw something with it. To put it another way, texStorage2D/3D might be more efficient, whereas texImage2D/3D is more flexible.
In order for a texture to actually be renderable, all the mip levels you are going to use need to be the same internal format and the correct sizes.
When you call texStorage2D/3D you tell it the size of mip level 0 (the largest level) and how many mip levels to allocate in total. So let's say you tell it an internal format of gl.RGBA8, a width and height of 8, and 4 mip levels.
gl.texStorage2D(gl.TEXTURE_2D,
                4,        // 4 levels
                gl.RGBA8, // internal format
                8,        // width
                8);       // height
It will allocate all 4 mip levels: 8x8, 4x4, 2x2, and 1x1 pixels (times 4 bytes per pixel for RGBA8). It knows they are all RGBA8. It knows they are all the correct size. Textures allocated with texStorage2D can't be changed in size or internal format; if you try to call texImage2D on a texture created with texStorage2D you'll get an error.
If you instead use texImage2D, you first specify mip level 0:
gl.texImage2D(gl.TEXTURE_2D,
              0,                // mip level
              gl.RGBA8,         // internal format
              8,                // width
              8,                // height
              0,                // border
              gl.RGBA,          // data format
              gl.UNSIGNED_BYTE, // data type
              data);
So now you have just one mip level, level #0. Will you add the other 3 mips? Will they be the correct size? Will those other 3 mips have the same internal format? Will you change mip level #0 to something else, with a different size or internal format? WebGL has no idea what your next command will be; it has to wait until you actually try to draw with the texture before it can check. With texStorage you decide the sizes and formats of all the mips up front, so it only has to check once. With texImage you don't tell it everything up front, so it has to check again at draw time whether things have changed.

Write pixel data to a certain mipmap level of a texture2d

As you might know, Metal Shading Language allows a few ways to read pixel data from a texture2d in a kernel function: either a simple read(short2 coord) or sample(float2 coord, [various additional parameters]). But I noticed that when it comes to writing to a texture, there is only the write method.
The problem is that the sample method lets you sample from a specific mipmap level, which is very convenient: the developer just needs to create a sampler with a mipFilter and use normalized coordinates.
But what if I want to write into a specific mipmap level of the texture? The write method doesn't have a mip level parameter the way sample does, and I cannot find any alternative.
I'm pretty sure there should be a way to choose the mipmap level when writing data to a texture, because the Metal Performance Shaders framework has solutions that populate texture mipmaps.
Thanks in advance!
You can do this with texture views.
The purpose of texture views is to reinterpret the contents of a base texture by selecting a subset of its levels and slices and potentially reading/writing its pixels in a different (but compatible) pixel format.
The -newTextureViewWithPixelFormat:textureType:levels:slices: method on the MTLTexture protocol returns a new instance of id<MTLTexture> that has the first level specified in the levels range as its base mip level. By creating one view per mip level you wish to write to, you can "target" each level in the original texture.
For example, to create a texture view on the second mip level of a 2D texture, you might call the method like this:
id<MTLTexture> viewTexture =
    [baseTexture newTextureViewWithPixelFormat:baseTexture.pixelFormat
                                   textureType:baseTexture.textureType
                                        levels:NSMakeRange(1, 1)
                                        slices:NSMakeRange(0, 1)];
When binding this new texture as an argument, its mip level 0 will correspond to mip level 1 of its base texture. You can therefore use the ordinary texture write function in a shader to write to the selected mip level:
myShaderTexture.write(color, coords);
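In Swift terms, a sketch of that per-level pattern might look like this (makeTextureView is the Swift spelling of the method above; baseTexture and the kernel that consumes each view are assumed to exist):
// One view per mip level; each view's level 0 aliases level `level`
// of the base texture.
var levelViews: [MTLTexture] = []
for level in 0..<baseTexture.mipmapLevelCount {
    let view = baseTexture.makeTextureView(pixelFormat: baseTexture.pixelFormat,
                                           textureType: baseTexture.textureType,
                                           levels: level..<(level + 1),
                                           slices: 0..<1)!
    levelViews.append(view)
}
// Binding levelViews[n] as the kernel's output texture writes to mip n.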

Metal fragment shader get size

Is there a way to retrieve the size of the framebuffer and the sizes of buffers passed to a Metal fragment shader, or do we need to pass them manually as arguments? I wish to retrieve the width and height of the framebuffer texture to which results are being written, as well as the lengths of the other MTLBuffers ([[buffer(0)]], [[buffer(1)]], ...) passed to the fragment shader.
That information is not automatically available. You have to pass it in as arguments.
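For example, a minimal Swift sketch of passing those sizes yourself (SizeUniforms, renderTarget, particleCount, and the buffer index are all hypothetical names for illustration):
// Hypothetical struct; the fragment shader declares a matching layout.
struct SizeUniforms {
    var framebufferWidth: Float
    var framebufferHeight: Float
    var bufferLength: UInt32
}

var uniforms = SizeUniforms(framebufferWidth: Float(renderTarget.width),
                            framebufferHeight: Float(renderTarget.height),
                            bufferLength: UInt32(particleCount))
// setFragmentBytes is convenient for small, transient data like this.
encoder.setFragmentBytes(&uniforms,
                         length: MemoryLayout<SizeUniforms>.stride,
                         index: 2)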

iOS Metal Shader - Texture read and write access?

I'm using a Metal shader to draw many particles onto the screen. Each particle has its own position (which can change), and often two particles have the same position. How can I check whether the texture2d I write into already has a pixel at a certain position? (I want to make sure that I only draw a particle at a position if no particle has been drawn there yet, because I get ugly flickering when many particles are drawn at the same position.)
I've tried outTexture.read(particlePosition), but this obviously doesn't work, because of the texture access qualifier, which is access::write.
Is there a way I can have read and write access to a texture2d at the same time? (If there isn't, how could I still solve my problem?)
There are several approaches that could work here. In concurrent systems programming, what you're talking about is termed first-write wins.
1) If the particles only need to preclude other particles from being drawn (and aren't potentially obscured by other elements in the scene in the same render pass), you can write a special value to the depth buffer to signify that a fragment has already been written to a particular coordinate. For example, you'd turn on depth testing (using the Less depth compare function), clear the depth buffer to some distant value (like 1.0), and then write a value of 0.0 to the depth buffer in the fragment function. Any subsequent write to a given pixel will fail the depth test and will not be drawn. A sketch of this setup follows these approaches.
2) Use framebuffer read-back. On iOS, Metal allows you to read from the currently-bound primary renderbuffer by attributing a parameter to your fragment function with [[color(0)]]. This parameter will contain the current color value in the renderbuffer, which you can test against to determine whether it has been written to. This does require you to clear the texture to a predetermined color that will never otherwise be produced by your fragment function, so it is more limited than the above approach, and possibly less performant.
All of the above applies whether you're rendering to a drawable's texture for direct presentation to the screen, or to some offscreen texture.
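As a Swift sketch of the first approach (assuming a render pass whose depth attachment is cleared to 1.0, and a fragment function that returns 0.0 through a [[depth(any)]] output):
// First-write wins: the first fragment at a pixel passes (0.0 < 1.0)
// and writes 0.0; later fragments fail the Less test (0.0 < 0.0 is false).
let depthDesc = MTLDepthStencilDescriptor()
depthDesc.depthCompareFunction = .less
depthDesc.isDepthWriteEnabled = true
let depthState = device.makeDepthStencilState(descriptor: depthDesc)!
encoder.setDepthStencilState(depthState)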
To answer the read-and-write part: you can specify read/write access for the output texture like this:
texture2d<float, access::read_write> outTexture [[texture(1)]],
Also, your texture descriptor must specify the usage:
textureDescriptor?.usage = [.shaderRead, .shaderWrite]
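On the host side, the two pieces together might look like this in Swift (the size and format are placeholders; note that which pixel formats support access::read_write depends on the GPU's read-write texture tier):
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                    width: 1024,
                                                    height: 1024,
                                                    mipmapped: false)
// Both flags are required for access::read_write in the kernel.
desc.usage = [.shaderRead, .shaderWrite]
let outTexture = device.makeTexture(descriptor: desc)!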

Difference between Texture2D and Texture2DMS in DirectX11

I'm using SharpDX and I want to do antialiasing of the depth buffer. I need to store the depth buffer as a texture to use later. So is it a good idea to make this texture a Texture2DMS? Or should I take another approach?
What I really want to achieve is:
1) Depth buffer scaling
2) Depth test supersampling
(terms I found in section 3.2 of this paper: http://gfx.cs.princeton.edu/pubs/Cole_2010_TFM/cole_tfm_preprint.pdf)
The paper calls for a depth pre-pass. Since this pass requires no color, you should leave the render target unbound, and use an "empty" pixel shader. For depth, you should create a Texture2D (not MS) at 2x or 4x (or some other 2Nx) the width and height of the final render target that you're going to use. This isn't really "supersampling" (since the pre-pass is an independent phase with no actual pixel output) but it's similar.
For the second phase, the paper calls for doing multiple samples of the high-resolution depth buffer from the pre-pass. If you followed the sizing above, every pixel will correspond to some (2N)^2 depth values. You'll need to read these values and average them. Fortunately, there's a hardware-accelerated way to do this (called PCF) using SampleCmp with a COMPARISON sampler type. This samples a 2x2 stamp, compares each value to a specified value (pass in the second-phase calculated depth here, and don't forget to add some epsilon value (e.g. 1e-5)), and returns the averaged result. Do 2x2 stamps to cover the entire area of the first-phase depth buffer associated with this pixel, and average the results. The final result represents how much of the current line's spine corresponds to the foremost depth of the pre-pass. Because of the PCF's smooth filtering behavior, as lines become visible, they will slowly fade in, as opposed to the aliased "dotted" line effect described in the paper.
