In the compute shader headers, I can see that bicubic is an option, but only if __HAVE_BICUBIC_FILTERING__ is defined. When I set bicubic as the filtering option, I get a syntax error; linear and nearest compile fine. I'm building for Metal 2 on macOS.
Another approach would be to define the sampler on the CPU with a sampler descriptor, but there is no bicubic option there either.
The comments in this thread discuss their inability to set bicubic.
I've searched the Metal Shading Language Specification, and bicubic is not mentioned there at all. Any help would be appreciated.
It depends on the device, the operating system (version and type), and the GPU architecture.
The code below compiles without issue on the following configuration: iOS 15, iPhone 12 / 13, Xcode 13.
#if defined(__HAVE_BICUBIC_FILTERING__)
    constexpr sampler textureSampler(mag_filter::bicubic, min_filter::bicubic);
    const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
    return float4(colorSample);
#endif
When using GLSL, it's easy to write into a specific mipmap level.
But that capability seems to be missing from the Metal shading language.
Well, I might be wrong. Maybe there is some workaround.
You have two options here:
If you are using Metal 2.3 or higher, you can use the void write(Tv color, uint2 coord, uint lod = 0) or void write(Tv color, ushort2 coord, ushort lod = 0) methods on metal::texture2d. The catch is that, even with Metal 2.3, lod must be 0 on Intel and AMD GPUs.
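A minimal compute-kernel sketch of that first option; the kernel and texture names are made up for illustration:

```metal
#include <metal_stdlib>
using namespace metal;

// Requires Metal 2.3+; on Intel and AMD GPUs lod must still be 0.
kernel void writeToMip(texture2d<float, access::write> outTexture [[texture(0)]],
                       uint2 gid [[thread_position_in_grid]])
{
    // Write solid red into mip level 1 (half the resolution of the base level).
    if (gid.x < outTexture.get_width(1) && gid.y < outTexture.get_height(1)) {
        outTexture.write(float4(1.0, 0.0, 0.0, 1.0), gid, 1);
    }
}
```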
To work around that limitation, you can create an MTLTexture view of the level you want to write to using newTextureViewWithPixelFormat:textureType:levels:slices: (https://developer.apple.com/documentation/metal/mtltexture/1515409-newtextureviewwithpixelformat?language=objc).
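On the CPU side, the texture-view workaround looks roughly like this (Swift; the helper name is made up, and the parent texture is assumed to already have mip levels):

```swift
import Metal

// Create a single-level view onto the given mip level of an existing texture.
// Writing to this view at lod 0 writes into that level of the parent texture.
func makeLevelView(of mipmappedTexture: MTLTexture, level: Int) -> MTLTexture? {
    return mipmappedTexture.makeTextureView(pixelFormat: mipmappedTexture.pixelFormat,
                                            textureType: .type2D,
                                            levels: level..<(level + 1),
                                            slices: 0..<1)
}
```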
I'm trying to learn MSL through the Metal Shading Language Specification, and saw that you can set LOD options when sampling a texture by specifying the options in the sample function. This is one of the examples given in the spec:
Tv sample(sampler s, float2 coord, lod_options options, int2 offset = int2(0)) const
lod_options include bias, level, gradient2d, etc.
I've looked all over but cannot find the usage syntax for this. Are these named arguments? Is lod_options a struct? For example, if I want to specify the LOD level, what is the correct way to do it? I know these options can also be specified in the sampler object itself, but if I want to do it here, what would be the right syntax to do so?
There is no lod_options type as such; you can think of it as a placeholder for one of the bias, level, gradient2d, etc. types. Each of these types is a different struct, which allows the Metal standard library to have an overloaded variant of the sample function for each such option.
To specify, for example, the mipmap level to sample, you'd provide a parameter of level type:
float4 color = myTexture.sample(mySampler, coords, level(1));
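For comparison, here is the same sample call with each of the common lod_options types side by side (a sketch with assumed texture and sampler names; dfdx/dfdy are only available in fragment functions):

```metal
#include <metal_stdlib>
using namespace metal;

// Illustrates the overloaded sample() variants; call from a fragment function.
static float4 lodDemo(texture2d<float> tex, sampler s, float2 coords)
{
    float4 a = tex.sample(s, coords, bias(0.5));   // offset the automatically computed LOD
    float4 b = tex.sample(s, coords, level(2));    // force sampling from mip level 2
    float4 c = tex.sample(s, coords,
                          gradient2d(dfdx(coords), dfdy(coords))); // explicit derivatives
    return (a + b + c) / 3.0;
}
```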
I'm rewriting an OpenGL filter from the Android version of the app I'm currently working at in Metal. It uses the following texture lookup function:
vec4 texture2D(sampler2D sampler, vec2 coord, float bias)
Assuming my filter kernel function looks like this:
float4 fname(sampler src) {
...
}
The texture lookup call would be the following:
src.sample(coord)
But how can I pass the bias parameter? (the sample function takes only 1 argument)
I'm afraid Core Image only supports 2D textures – no mipmapping or LOD selection. Only bilinear sampling is available.
If you need different LODs, you need to pass different samplers to your kernel and do the interpolation yourself.
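A rough sketch of that idea as a Core Image Metal kernel. The kernel name and the lodFraction parameter are assumptions, and src0/src1 stand in for two pre-filtered versions of the image acting as different LODs:

```metal
#include <CoreImage/CoreImage.h>
using namespace metal;

extern "C" { namespace coreimage {

float4 biasedLookup(sampler src0, sampler src1, float lodFraction)
{
    // Sample both stand-in "LOD" images at the same position...
    float4 c0 = src0.sample(src0.coord());
    float4 c1 = src1.sample(src1.coord());
    // ...and blend between them manually, mimicking trilinear filtering.
    return mix(c0, c1, saturate(lodFraction));
}

}}
```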
I'm porting a directx hlsl script to webgl 2, but I cannot find the equivalent of a structuredbuffer.
I can only see constant buffers, which are limited to 64 KB and require alignment. Should I split the structured buffers into constant buffers?
The more-or-less equivalent in OpenGL land of D3D's StructuredBuffers are Shader Storage Buffer Objects. However, WebGL 2.0 is based on OpenGL ES 3.0, which does not include SSBOs. The usual workarounds are to pack the data into a texture and read it with texelFetch, or to split it across multiple uniform buffers.
In OpenGL ES it is possible to set the precision of uniforms and attributes using lowp/mediump/highp. Is there something like this in Metal?
The Metal shading language supports the half data type (see section 2.1 of the spec). It's defined there as:
A 16-bit floating-point. The half data type must conform to the IEEE 754 binary16 storage format.
This makes it pretty much equivalent to mediump.
There isn't really an equivalent to lowp in Metal. However, that's no real loss, because I believe Metal-capable iOS GPUs don't benefit from lowp anyway and just run lowp operations at mediump precision.
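As a small sketch (function and variable names made up), reduced precision in MSL is simply a matter of using half types in the shader:

```metal
#include <metal_stdlib>
using namespace metal;

// Half-precision math throughout, roughly equivalent to mediump in GLSL ES.
static half4 desaturate(half4 color, half amount)
{
    const half3 lumaWeights = half3(0.2126h, 0.7152h, 0.0722h);
    half luma = dot(color.rgb, lumaWeights);
    return half4(mix(color.rgb, half3(luma), amount), color.a);
}
```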