How to bind a variable number of textures to Metal shader? - ios

On the CPU I'm gathering an array of MTLTexture objects that I want to send to the fragment shader. There can be any number of these textures at any given moment. How can I send a variable-length array of MTLTextures to a fragment shader?
Example:
CPU:
var txrs: [MTLTexture] = []
for ... {
    txrs.append(...)
}
// Send array of textures to fragment shader.
GPU:
fragment half4 my_fragment(Vertex v [[stage_in]], <array of textures>, ...) {
    ...
    for (int i = 0; i < num_textures; i++) {
        texture2d<half> txr = array_of_textures[i];
    }
    ...
}

The array the other person suggested won't work, because the textures will take up all the bind points up to the limit of 31, at which point you will run out.
Instead, you need to use argument buffers.
So, for this to work, you need Tier 2 argument buffer support. You can check it with the argumentBuffersSupport property on MTLDevice.
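For example, a minimal check could look like this (a sketch; device is assumed to be your existing id<MTLDevice>):
if (device.argumentBuffersSupport != MTLArgumentBuffersTier2) {
    // Fall back to a path that does not rely on Tier 2 argument buffers.
}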
You can read more about argument buffers here or watch this talk about bindless rendering.
The basic idea is to use MTLArgumentEncoder to encode the textures you need into an argument buffer. Unfortunately, I don't think there's a direct way to just encode a bunch of MTLTextures, so instead you would create a struct in your shaders like this:
struct SingleTexture
{
    texture2d<half> texture;
};
The texture in this struct has an implicit id of 0. To learn more about id, read the Argument Buffers section in the Metal Shading Language specification; it's basically a unique index for each entry in the argument buffer.
Then, change your function signature to:
fragment half4 my_fragment(Vertex v [[stage_in]], device ushort& textureCount [[ buffer(0) ]], device SingleTexture* textures [[ buffer(1) ]])
You will then need to bind the count (ushort/uint16_t is enough in most cases, rather than uint32_t) as a small 2- or 4-byte buffer; you can use the set<...>Bytes functions on the encoder for that.
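For example, a sketch of that (renderEncoder and textureCount stand in for your own render command encoder and count):
uint16_t count = (uint16_t)textureCount;
[renderEncoder setFragmentBytes:&count length:sizeof(count) atIndex:0];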
Then you will need to compile that function to an MTLFunction, and from it you can create an MTLArgumentEncoder using the newArgumentEncoderWithBufferIndex: method. You will use buffer index 1 in this case, because that's where your argument buffer is bound in the function.
From the MTLArgumentEncoder you can get encodedLength, which is basically the size of one SingleTexture struct in the argument buffer. Multiply it by the number of textures to get a buffer of the proper size to encode your argument buffer into.
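A sketch of that setup, assuming fragmentFunction and device come from your own pipeline creation code:
id<MTLArgumentEncoder> argumentEncoder = [fragmentFunction newArgumentEncoderWithBufferIndex:1];
id<MTLBuffer> argumentBuffer = [device newBufferWithLength:argumentEncoder.encodedLength * textureCount
                                                   options:MTLResourceStorageModeShared];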
After that, in your setup code, you can just do this
for (size_t i = 0; i < textureCount; i++)
{
    // We basically just offset into an array of SingleTexture
    [argumentEncoder setArgumentBuffer:<your buffer you just created> offset:argumentEncoder.encodedLength * i];
    [argumentEncoder setTexture:textures[i] atIndex:0];
}
And then, when you are done encoding the buffer, you can hold on to it until your texture array changes (you don't need to re-encode it every frame).
Then, you need to bind the argument buffer to buffer binding point 1, just as you would bind any other buffer.
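For a fragment function, that is something like this (renderEncoder again being your render command encoder):
[renderEncoder setFragmentBuffer:argumentBuffer offset:0 atIndex:1];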
The last thing you need to do is make sure all the resources referenced indirectly are resident on the GPU. Since you encoded your textures into the argument buffer, the driver has no way of knowing whether you used them or not, because you are not binding them directly.
To do that, use the useResource or useResources variants on the encoder you are using, kind of like this:
[encoder useResources:&textures[0] count:textureCount usage:MTLResourceUsageRead];
This is kind of a mouthful, but it is the proper way to bind anything you want to your shaders.

Related

Why are NaN values being produced when assigning values in a cube?

I'm having this weird issue with Armadillo and RcppArmadillo. I'm creating a cube filled with zero values, and I want specific elements to be turned into ones. However, when I use an assignment to do that, the values of other elements change slightly and often become equal to NaN. Does anyone have any idea what could be causing that?
Example:
#include <RcppArmadillo.h>
using namespace arma;
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
cube testc() {
  cube tester = cube(10,10,2);
  uvec indexes = {25,125};
  for(unsigned int i=0; i<indexes.n_elem; i++) {
    tester(indexes(i))=1.0;
  };
  cout<< tester;
  return(tester);
}
This error does not happen when I assign each element individually (tester(25)=1.0 followed by tester(125)=1.0), but this is impractical if I have a larger number of elements to replace. The NaN values show up in cout and in the R object, which makes me think the issue is independent of Rcpp.
Your cube object is not initialized with zeros, so it's possible to get NaN values.
From the documentation:
Constructors:
cube()
cube(n_rows, n_cols, n_slices)     (memory is not initialised)
cube(n_rows, n_cols, n_slices, fill_type)     (memory is initialised)
...
When using the cube(n_rows, n_cols, n_slices) or cube(size(X)) constructors, by default the memory is uninitialised (ie. may contain garbage); memory can be explicitly initialised by specifying the fill_type, as per the Mat class (except for fill::eye)
Examples of explicit initialization with zeros:
cube A(10,10,2,fill::zeros);
cube B(10,10,2);
B.zeros();
cube C;
C.zeros(10,10,2);
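Applied to the function from the question, a minimal fix is to pass fill::zeros to the constructor (a sketch based on the original code):
#include <RcppArmadillo.h>
using namespace arma;
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
cube testc() {
  // fill::zeros guarantees the memory starts out as 0.0 instead of garbage
  cube tester(10, 10, 2, fill::zeros);
  uvec indexes = {25, 125};
  for (unsigned int i = 0; i < indexes.n_elem; i++) {
    tester(indexes(i)) = 1.0;  // flat (linear) indexing into the cube
  }
  return tester;
}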

no operator [] match these operands

I am adapting some old code which uses cvMat. I use the constructor from cvMat:
Mat A(B); // B is a cvMat
When I write A[i][j], I get the error no operator [] match these operands.
Why? For information: B is a single-channel float matrix (from an MLData object read from a CSV file).
The documentation lists the at member function as the way to access an element.
A.at<int>(i,j); //Or whatever type you are storing.
First, you should have a look at the most basic OpenCV tutorials.
So, if you have a 3-channel BGR image (the most common case), you will have to access it like this:
Vec3b & pixel = A.at<Vec3b>(y,x); // we're in row,col world, here !
pixel = Vec3b(17,18,19); // at() returns a reference, so you can *set* that, too.
The 1-channel (grayscale) version would look like this:
uchar & pixel = A.at<uchar>(y,x);
Since you mention float images:
float & pixel = A.at<float>(y,x);
You can't choose the type at will; you have to use whatever type is actually inside the Mat, so query A.type() first.
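For example, a small sketch of guarding the access with a type check (the CV_32FC1 case matches the single-channel float matrix described in the question; readPixel is just an illustrative helper name):
#include <opencv2/core.hpp>
using namespace cv;

float readPixel(const Mat& A, int y, int x)
{
    // Make sure the element type really is single-channel float
    // before calling at<float>(); a mismatched type is undefined behaviour.
    CV_Assert(A.type() == CV_32FC1);
    return A.at<float>(y, x);  // note: row, col order
}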

How to read vertices from vertex buffer in Direct3d11

I have a question regarding vertex buffers. How does one read the vertices from the vertex buffer in D3D11? I want to get a particular vertex's position for calculations, if this approach is wrong, how would one do it? The following code does not (obviously) work.
VERTEX* vert;
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBufferSphere, NULL, D3D11_MAP_READ, NULL, &ms);
vert = (VERTEX*) ms.pData;
devcon->Unmap(pVBufferSphere, NULL);
Thanks.
Where your code is wrong:
You are asking the GPU to give you an address into its memory (Map()),
storing this address (operator=()),
and then saying: "Thanks, I don't need it anymore" (Unmap()).
After Unmap(), you can't really say where your pointer points any more. It can point to a memory location where something else has already been allocated, or to memory on your girlfriend's laptop (just kidding =) ).
You must copy the data (all of it or a part), not the pointer, between Map() and Unmap(): use memcpy, a for loop, anything. Put it in an array, a std::vector, a BST, whatever.
Typical mistakes newcomers make here:
Not checking the HRESULT return value of the ID3D11DeviceContext::Map method. If Map fails, it can return whatever pointer it likes, and dereferencing such a pointer leads to undefined behavior. So it is better to check the return value of every DirectX function.
Not checking the D3D11 debug output. It can clearly say what's wrong and what to do in plain, good English (clearly better than my English =) ), so you can often fix a bug almost instantly.
You can only read from an ID3D11Buffer if it was created with the D3D11_CPU_ACCESS_READ CPU access flag, which means you must also set the D3D11_USAGE_STAGING usage flag.
How we usually read from a buffer:
We don't use staging buffers for rendering/calculations: it's slow.
Instead, we copy from the main buffer (non-staging and not CPU-readable) to a staging one (ID3D11DeviceContext::CopyResource() or ID3D11DeviceContext::CopySubresourceRegion()), and then copy the data to system memory (memcpy()).
We don't do this much in release builds; it will harm performance.
There are two main real-life uses of staging buffers: debugging (seeing whether a buffer contains wrong data so you can fix a bug in your algorithm) and reading back final non-pixel data (for example, if you are calculating scientific data in a compute shader).
In most cases you can avoid staging buffers entirely by designing your code well. Think of CPU<->GPU as if it were connected only one way: CPU->GPU.
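For illustration, here is a sketch of that staging-buffer readback path (ReadBuffer is a hypothetical helper; error handling is kept minimal on purpose):
#include <d3d11.h>
#include <vector>

// Copy the contents of a GPU-only buffer back to system memory via a staging buffer.
// byteWidth is the size of sourceBuffer in bytes.
std::vector<unsigned char> ReadBuffer(ID3D11Device* device, ID3D11DeviceContext* context,
                                      ID3D11Buffer* sourceBuffer, UINT byteWidth)
{
    std::vector<unsigned char> result;

    // Staging buffer: CPU-readable, cannot be bound to the pipeline.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = byteWidth;
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

    ID3D11Buffer* staging = nullptr;
    if (FAILED(device->CreateBuffer(&desc, nullptr, &staging)))
        return result;

    // GPU-side copy from the main buffer into the staging buffer.
    context->CopyResource(staging, sourceBuffer);

    // Map the staging buffer and copy the bytes out while it is still mapped.
    D3D11_MAPPED_SUBRESOURCE ms = {};
    if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &ms)))
    {
        const unsigned char* data = static_cast<const unsigned char*>(ms.pData);
        result.assign(data, data + byteWidth);
        context->Unmap(staging, 0);
    }

    staging->Release();
    return result;
}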
The following code only gets the address of the mapped resource; you didn't read anything before Unmap.
vert = (VERTEX*) ms.pData;
If you want to read data from the mapped resource, first allocate enough memory, then use memcpy to copy the data. I don't know your VERTEX structure, so I suppose vert is a void*; you can convert it yourself:
vert = new BYTE[ms.DepthPitch];
memcpy(vert, ms.pData, ms.DepthPitch);
Drop's answer was helpful. I figured that the reason I wasn't able to read the buffer was that I didn't have the CPU access flag set to D3D11_CPU_ACCESS_READ before. Here is the buffer description:
D3D11_BUFFER_DESC bufferDesc;
ZeroMemory(&bufferDesc, sizeof(bufferDesc));
bufferDesc.ByteWidth = iNumElements * sizeof(T);
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
bufferDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE ;
bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
bufferDesc.StructureByteStride = sizeof(T);
And then to read the data I did:
const ID3D11Device& device = *DXUTGetD3D11Device();
ID3D11DeviceContext& deviceContext = *DXUTGetD3D11DeviceContext();
D3D11_MAPPED_SUBRESOURCE ms;
HRESULT hr = deviceContext.Map(g_pParticles, 0, D3D11_MAP_READ, 0, &ms);
Particle* p = (Particle*)malloc(sizeof(Particle) * g_iNumParticles);
ZeroMemory(p, sizeof(Particle) * g_iNumParticles);
memcpy(p, ms.pData, sizeof(Particle) * g_iNumParticles);
deviceContext.Unmap(g_pParticles, 0);
free(p); // allocated with malloc, so release with free, not delete[]
I agree it's a performance hit; I only wanted to do this to be able to debug the values!
Thanks anyway! =)

How to declare local memory in OpenCL?

I'm running the OpenCL kernel below with a two-dimensional global work size of 1000000 x 100 and a local work size of 1 x 100.
__kernel void myKernel(
const int length,
const int height,
and a bunch of other parameters) {
// declare some local arrays to be shared by all 100 work items in this group
__local float LP [length];
__local float LT [height];
__local int bitErrors = 0;
__local bool failed = false;
//here come my actual computations which utilize the space in LP and LT
}
This, however, refuses to compile, since the parameters length and height are not known at compile time. But it is not clear to me at all how to do this correctly. Should I use pointers with memalloc? How do I handle this in a way that the memory is only allocated once for the entire workgroup and not once per work item?
All that I need is 2 arrays of floats, 1 int and 1 boolean that are shared among the entire workgroup (so all 100 work items). But I fail to find any method that does this correctly...
It's relatively simple: you can pass the local arrays as arguments to your kernel:
kernel void myKernel(const int length, const int height, local float* LP,
local float* LT, a bunch of other parameters)
You then set the kernel argument with a value of NULL and a size equal to the size you want to allocate for the argument (in bytes). Therefore it should be:
clSetKernelArg(kernel, 2, length * sizeof(cl_float), NULL);
clSetKernelArg(kernel, 3, height* sizeof(cl_float), NULL);
Local memory is always shared by the workgroup (as opposed to private memory), so I think the bool and int should be fine, but if not, you can always pass those as arguments too.
Not really related to your problem (and not necessarily relevant, since I do not know what hardware you plan to run this on), but GPUs in particular don't like work sizes which are not a multiple of a particular power of two (I think it was 32 for NVIDIA and 64 for AMD), meaning the hardware will probably create workgroups with 128 items, of which the last 28 are basically wasted. So if you are running OpenCL on a GPU, it might help performance if you directly use workgroups of size 128 (and change the global work size appropriately).
As a side note: I never understood why everyone uses the underscore variants for kernel, local and global; they seem much uglier to me.
You could also declare your arrays like this:
__local float LP[LENGTH];
And pass the LENGTH as a define in your kernel compile.
int lp_size = 128; // this is an example; could be dynamically calculated
char compileArgs[64];
sprintf(compileArgs, "-DLENGTH=%d", lp_size);
clBuildProgram(program, 0, NULL, compileArgs, NULL, NULL);
You do not have to allocate all your local memory outside the kernel, especially when it is a simple variable instead of an array.
The reason your code cannot compile is that OpenCL does not support initializers on local memory. This is specified in the documentation (https://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/local.html). It is also not feasible in CUDA ("Is there a way of setting default value for shared memory array?").
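If you keep the int and bool as __local variables declared inside the kernel, a common pattern is to let one work item write the initial values at runtime and then synchronize, roughly like this (a sketch, not your full kernel):
__kernel void myKernel(const int length, const int height,
                       __local float* LP, __local float* LT /* , other parameters */)
{
    __local int bitErrors;
    __local bool failed;

    // One work item writes the initial values...
    if (get_local_id(0) == 0 && get_local_id(1) == 0) {
        bitErrors = 0;
        failed = false;
    }
    // ...and the whole workgroup waits until they are visible.
    barrier(CLK_LOCAL_MEM_FENCE);

    // actual computations using LP, LT, bitErrors and failed go here
}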
PS: The answer from Grizzly is good enough, and it would be better if I could post this as a comment, but I am restricted by the reputation policy. Sorry.

arm asm/neon optimisation for image processing

I'm currently working on a painting app on iOS.
I draw directly into an NSMutableData buffer and apply blending with my brush like this:
- (void) combineColorDestination:(unsigned char*) dest source:(unsigned char*) src
{
    const unsigned char sra = ((unsigned char *)src)[3];
    const float oneminusalpha = 1.0f - (sra / 255.f);
    int d[4];
    for (int i = 0; i < 4; i++)
    {
        d[i] = oneminusalpha * ((unsigned char *)dest)[i] + ((unsigned char *)src)[i];
        if (d[i] > 255)
            d[i] = 255;
        ((unsigned char *)dest)[i] = (unsigned char)d[i];
    }
}
Any suggestions for optimisations?
I previously tried to use NEON, but I had a bug I wasn't able to fix (the bordering pixels were buggy).
I was iterating over pixels 2 by 2 like this:
uint8x8_t va = vld1_u8(dest);
uint8x8_t vb = vld1_u8(src);
uint8x8_t res = vqadd_u8(va,vb);
vst1_u8(dest, res);
Suggestions? Alright. Note that these are valid for whatever multimedia manipulation you are doing and are hardly restricted to your case.
First, before you even do NEON, you should change your code to have one function that changes a bunch of pixels (at least a row, a rectangle if you can) at once, instead of a function (or method - even worse) that changes one pixel and is called a bunch of times: somehow I doubt the brush is only 1x1 pixel.
Second, except for the column loop (and the eventual row loop), there should be no branches (that is, no flow control structures). No for (i=0;i<4;i++); just write the code for the four channels in sequence (use a macro if necessary). No if (d[i]>255); express that as a conditional expression: dest[i] = (temp>255?255:temp), at the very least, if not replacing it by a more efficient way to do saturation (tricks using subtractions, shifts, and masks exist).
Third, avoid any conversion between floating-point and integer; this is always valid advice, but float->int conversions are particularly devastating on ARM. Since you're manipulating integers, this means foregoing floating-point here.
And once you've done that, surprise, besides making your code faster you have in fact done the preparation work for NEON: NEON is only remotely useful if you process a bunch of pixels at once, if there is no branch, and if you don't convert between floating-point and integer all over the place. So only then will we talk about NEON, if it is even necessary at this point.
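To make that concrete, here is a rough scalar sketch of what the blend might look like after those changes (pure integer math, one row at a time, no inner channel loop; the function name and the exact /255 handling are my assumptions, not the original code):
#include <stddef.h>
#include <stdint.h>

// Integer-only version of the blend for one row of interleaved 8-bit RGBA/BGRA pixels.
static void combine_row(uint8_t *dest, const uint8_t *src, size_t pixelCount)
{
    for (size_t p = 0; p < pixelCount; p++, dest += 4, src += 4)
    {
        const uint32_t inv = 255u - src[3];   // (1 - alpha), scaled to 0..255

        // One statement per channel, no inner loop; the clamp is a conditional
        // expression rather than an if, as suggested above.
        uint32_t d0 = (dest[0] * inv) / 255u + src[0];
        uint32_t d1 = (dest[1] * inv) / 255u + src[1];
        uint32_t d2 = (dest[2] * inv) / 255u + src[2];
        uint32_t d3 = (dest[3] * inv) / 255u + src[3];

        dest[0] = (uint8_t)(d0 > 255u ? 255u : d0);
        dest[1] = (uint8_t)(d1 > 255u ? 255u : d1);
        dest[2] = (uint8_t)(d2 > 255u ? 255u : d2);
        dest[3] = (uint8_t)(d3 > 255u ? 255u : d3);
    }
}
A NEON version would then process several pixels per iteration with the same structure, using saturating instructions instead of the explicit clamps.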
