Manually set a 1D Texture in Metal - iOS

I'm trying to fill a 1D texture with values manually and pass that texture to a compute shader (these are 2 pixels that I want to set via code, they don't represent any image).
Because there are still relatively few Metal examples, all the ones I could find deal with 2D textures and load them by converting a UIImage to raw byte data; creating a dummy UIImage felt like a hack to me.
This is the "naive" way I started with -
...
var manualTextureData: [Float] = [ 1.0, 0.0, 0.0, 1.0,
                                   0.0, 0.0, 1.0, 1.0 ];
let region: MTLRegion = MTLRegionMake1D(0, textureDescriptor.width);
myTexture.replaceRegion(region, mipmapLevel: 0, withBytes: &manualTextureData, bytesPerRow: 0);
but Metal doesn't recognize those values in the shader (it gets an empty texture, except for the first value).
I quickly realized that the Float array probably has to be converted into a byte array (e.g. UInt8), but I couldn't find a way to convert from [Float] to [UInt8] either.
Another possible option I consider is using a CVPixelBuffer object, but that also felt like a workaround to the problem.
So what's the right way to tackle this?
Thanks in advance.
Please note that I'm not familiar with Objective-C, hence I'm not sure whether using CVPixelBuffer / UIImage is overkill for something that should be straightforward.

Please forgive the terse reply, but you may find it useful to take a look at my experiments with Swift and Metal. I've created a particle system in Swift which is passed to a Metal compute shader as a one-dimensional array of Particle structs. By using posix_memalign, I'm able to eliminate the bottleneck caused by passing the array between Metal and Swift.
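The gist, in current Swift, is to allocate page-aligned memory with posix_memalign and hand that same memory to Metal via makeBuffer(bytesNoCopy:), so nothing is copied between Swift and the GPU. A minimal sketch, assuming a made-up Particle layout and particle count (the real code is in the posts linked below):

import Foundation
import Metal

// Hypothetical particle layout, just for illustration.
struct Particle {
    var position: SIMD2<Float>
    var velocity: SIMD2<Float>
}

let device = MTLCreateSystemDefaultDevice()!
let particleCount = 4096
let pageSize = 4096
let length = particleCount * MemoryLayout<Particle>.stride
let alignedLength = ((length + pageSize - 1) / pageSize) * pageSize // bytesNoCopy wants page-aligned lengths

// Allocate page-aligned memory that Swift and the GPU can both touch directly.
var memory: UnsafeMutableRawPointer?
posix_memalign(&memory, pageSize, alignedLength)
let particles = memory!.bindMemory(to: Particle.self, capacity: particleCount)
particles[0] = Particle(position: .zero, velocity: SIMD2<Float>(1, 0))

// Wrap the same memory in an MTLBuffer without copying it.
let particleBuffer = device.makeBuffer(bytesNoCopy: memory!,
                                       length: alignedLength,
                                       options: .storageModeShared,
                                       deallocator: { pointer, _ in free(pointer) })!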
I've blogged extensively about this: http://flexmonkey.blogspot.co.uk/search/label/Metal
I hope this helps.
Simon

I don't see any reason for you to pass data using a 1D texture. Instead I would go with just passing a buffer, like this:
var dataBuffer: MTLBuffer? = device.newBufferWithBytes(&manualTextureData, length: manualTextureData.count * sizeof(Float), options: MTLResourceOptions.OptionCPUCacheModeDefault)
Then you hook it to your renderCommandEncoder like this:
renderCommandEncoder.setFragmentBuffer(dataBuffer, offset: 0, atIndex: 1) // Note that if you want this buffer to be passed to your vertex shader, you should use setVertexBuffer instead
Then in your shader, add a parameter like this: const device float* bufferPassed [[ buffer(1) ]]
And then use it like this, inside your shader implementation:
float firstFloat = bufferPassed[0];
This will get the job done.
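For a compute pass, which is what the question is about, the same buffer is bound through a compute command encoder instead. A rough sketch using the current Swift API (the kernel name "myKernel" and the dispatch sizes are placeholders):

import Metal

let manualTextureData: [Float] = [1.0, 0.0, 0.0, 1.0,
                                  0.0, 0.0, 1.0, 1.0]

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!

// Build a compute pipeline around a kernel (placeholder name) that declares
// `const device float* bufferPassed [[ buffer(1) ]]`.
let library = device.makeDefaultLibrary()!
let pipelineState = try! device.makeComputePipelineState(function: library.makeFunction(name: "myKernel")!)

// Wrap the Float array in a plain MTLBuffer; no texture involved.
let dataBuffer = device.makeBuffer(bytes: manualTextureData,
                                   length: manualTextureData.count * MemoryLayout<Float>.stride,
                                   options: [])!

let commandBuffer = commandQueue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipelineState)
encoder.setBuffer(dataBuffer, offset: 0, index: 1)
encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 8, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()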

Not really answering your question, but you could just define an array in your Metal shader instead of passing the values as a texture.
Something like:
constant float manualData[8] = { 1.0, 0.0, 0.0, 1.0,
                                 0.0, 0.0, 1.0, 1.0 };

vertex float4 world_vertex(unsigned int vid [[vertex_id]], ...) {
    int manualIndex = vid % 8;
    float manualValue = manualData[manualIndex];
    // something deep and meaningful here...
    return float4(manualValue);
}

If you want a float texture, bytesPerRow should be 4 times the width, because a float has a size of 4 bytes. Metal copies the memory and doesn't care about the values; that is your task ;-)
Something like:
myTexture.replaceRegion(region, mipmapLevel: 0, withBytes: &manualTextureData, bytesPerRow: manualTextureData.count * sizeof(Float));
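In current Swift, the whole 1D-texture path might look roughly like this: a sketch assuming an .r32Float pixel format, so each Float becomes one texel (use .rgba32Float and a quarter of the width if you want RGBA texels), with bytesPerRow computed as suggested above:

import Metal

let device = MTLCreateSystemDefaultDevice()!

let manualTextureData: [Float] = [1.0, 0.0, 0.0, 1.0,
                                  0.0, 0.0, 1.0, 1.0]

// One r32Float texel per Float value.
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = .type1D
textureDescriptor.pixelFormat = .r32Float
textureDescriptor.width = manualTextureData.count

let myTexture = device.makeTexture(descriptor: textureDescriptor)!

// bytesPerRow is the full byte width of the data, as described above.
let region = MTLRegionMake1D(0, textureDescriptor.width)
myTexture.replaceRegion(region,
                        mipmapLevel: 0,
                        withBytes: manualTextureData,
                        bytesPerRow: manualTextureData.count * MemoryLayout<Float>.stride)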

Related

Metal Shading language for Core Image color kernel, how to pass an array of float3

I'm trying to port some CIFilters from this source by using the Metal shading language for Core Image.
I have a color palette composed of an array of RGB structs, and I want to pass it as an argument to a custom CI color image kernel.
The RGB struct is converted into an array of SIMD3<Float>.
static func SIMD3Palette(_ palette: [RGB]) -> [SIMD3<Float>] {
    return palette.map { $0.toFloat3() }
}
The kernel should take an array of simd_float3 values; the problem is that when I launch the filter, it tells me that the argument at index 1 is expecting an NSData.
override var outputImage: CIImage? {
    guard let inputImage = inputImage else {
        return nil
    }
    let palette = EightBitColorFilter.palettes[Int(inputPaletteIndex)]
    let extent = inputImage.extent
    let arguments = [inputImage, palette, Float(palette.count)] as [Any]
    let final = colorKernel.apply(extent: extent, arguments: arguments)
    return final
}
This is the kernel:
float4 eight_bit(sample_t image, simd_float3 palette[], float paletteSize, destination dest) {
    float dist = distance(image.rgb, palette[0]);
    float3 returnColor = palette[0];
    for (int i = 1; i < floor(paletteSize); ++i) {
        float tempDist = distance(image.rgb, palette[i]);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = palette[i];
        }
    }
    return float4(returnColor, 1);
}
I'm wondering how I can pass a data buffer to the kernel, since converting it into an NSData doesn't seem to be enough. I saw some examples, but they use the "full" shading language, which isn't available for Core Image; Core Image only exposes a subset that deals with fragments.
Update
We have now figured out how to pass data buffers directly into Core Image kernels. Using a CIImage as described below is not needed, but still possible.
Assuming that you have your raw data as an NSData, you can just pass it to the kernel on invocation:
kernel.apply(..., arguments: [data, ...])
Note: Data might also work, but I know that NSData is an argument type that allows Core Image to cache filter results based on input arguments. So when in doubt, it's better to cast to NSData.
Then in the kernel function, you only need to declare the parameter with an appropriate constant type:
extern "C" float4 myKernel(constant float3 data[], ...) {
float3 data0 = data[0];
// ...
}
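Applied to the palette case from the question, the Swift side might look roughly like this (a sketch; the function name is made up, and the kernel and input image are assumed to come from the filter as in the question):

import CoreImage

func applyPalette(kernel: CIColorKernel, to inputImage: CIImage, palette: [SIMD3<Float>]) -> CIImage? {
    // SIMD3<Float> has a 16-byte stride, matching float3 in the kernel's
    // constant address space, so the array's raw bytes can be handed over as-is.
    let paletteData = palette.withUnsafeBufferPointer { Data(buffer: $0) } as NSData
    return kernel.apply(extent: inputImage.extent,
                        arguments: [inputImage, paletteData, Float(palette.count)])
}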
Previous Answer
Core Image kernels don't seem to support pointer or array parameter types, though there seems to be something coming with iOS 13. From the Release Notes:
Metal CIKernel instances support arguments with arbitrarily structured data.
But, as so often with Core Image, there seems to be no further documentation for it…
However, you can still use the "old way" of passing buffer data by wrapping it in a CIImage and sampling it in the kernel. For example:
let array: [Float] = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
let data = array.withUnsafeBufferPointer { Data(buffer: $0) }
let dataImage = CIImage(bitmapData: data, bytesPerRow: data.count, size: CGSize(width: array.count/4, height: 1), format: .RGBAf, colorSpace: nil)
Note that there is no CIFormat for 3-channel images since the GPU doesn't support those. So you either have to use single-channel .Rf and re-pack the values inside your kernel to float3 again, or add some strides to your data and use .RGBAf and float4 respectively (which I'd recommend since it reduces texture fetches).
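For example, a three-channel palette could be padded out to four channels before building the data image, something like this (a sketch; the palette values are placeholders):

import CoreImage

let palette: [SIMD3<Float>] = [SIMD3<Float>(1, 0, 0), SIMD3<Float>(0, 1, 0), SIMD3<Float>(0, 0, 1)]

// Pad each 3-channel color to 4 channels to match the .RGBAf layout;
// the fourth component is just filler.
let padded = palette.map { SIMD4<Float>($0.x, $0.y, $0.z, 1) }
let data = padded.withUnsafeBufferPointer { Data(buffer: $0) }

let paletteImage = CIImage(bitmapData: data,
                           bytesPerRow: palette.count * MemoryLayout<SIMD4<Float>>.stride,
                           size: CGSize(width: palette.count, height: 1),
                           format: .RGBAf,
                           colorSpace: nil)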
When you pass that image into your kernel, you probably want to set the sampling mode to nearest, otherwise you might get interpolated values when sampling between two pixels:
kernel.apply(..., arguments: [dataImage.samplingNearest(), ...])
In your (Metal) kernel, you can access the data as you would a normal input image, via a sampler:
extern "C" float4 myKernel(coreimage::sampler data, ...) {
float4 data0 = data.sample(data.transform(float2(0.5, 0.5))); // data[0]
float4 data1 = data.sample(data.transform(float2(1.5, 0.5))); // data[1]
// ...
}
Note that I added 0.5 to the coordinates so that they point in the middle of a pixel in the data image to avoid ambiguity and interpolation.
Also note that pixel values you get from a sampler always have 4 channels. So even when you create your data image with format .Rf, you'll get a float4 when sampling it (the other values are filled with 0.0 for G and B and 1.0 for alpha). In this case, you can just do
float data0 = data.sample(data.transform(float2(0.5, 0.5))).x;
Edit
I previously forgot to transform the sample coordinate from absolute pixel space (where (0.5, 0.5) would be the middle of the first pixel) to relative sampler space (where (0.5, 0.5) would be the middle of the whole buffer). It's fixed now.
I got it working. Even though the answer above is good and also deploys to a lower target, the result wasn't exactly what I was expecting: the difference between the original kernel written as a string and the above method of creating an image to use as a data source was fairly big.
I never found the exact reason, but the image I was passing as the palette source differed from the created one in size and color (probably due to color spaces).
Since there was no documentation about this statement:
Metal CIKernel instances support arguments with arbitrarily structured data.
I tried a lot in my spare time and came up with this.
First the shader:
float4 eight_bit_buffer(sampler image, constant simd_float3 palette[], float paletteSize, destination dest) {
    float4 color = image.sample(image.transform(dest.coord()));
    float dist = distance(color.rgb, palette[0]);
    float3 returnColor = palette[0];
    for (int i = 1; i < floor(paletteSize); ++i) {
        float tempDist = distance(color.rgb, palette[i]);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = palette[i];
        }
    }
    return float4(returnColor, 1);
}
Second, the palette transformation into SIMD3<Float>:
static func toSIMD3Buffer(from palette: [RGB]) -> Data {
    var simd3Palette = SIMD3Palette(palette)
    // Size the allocation with .stride so the spacing matches an array of SIMD3<Float>.
    let byteCount = simd3Palette.count * MemoryLayout<SIMD3<Float>>.stride
    let palettePointer = UnsafeMutableRawPointer.allocate(
        byteCount: byteCount,
        alignment: MemoryLayout<SIMD3<Float>>.alignment)
    // Copy the palette into the correctly aligned memory.
    let simd3Pointer = simd3Palette.withUnsafeMutableBufferPointer { buffer -> UnsafeMutablePointer<SIMD3<Float>> in
        return palettePointer.initializeMemory(as: SIMD3<Float>.self,
                                               from: buffer.baseAddress!,
                                               count: buffer.count)
    }
    // Data takes ownership of the memory and frees it when it is deallocated.
    let data = Data(bytesNoCopy: UnsafeMutableRawPointer(simd3Pointer), count: byteCount, deallocator: .free)
    return data
}
The first time I tried appending the SIMD3 values directly to the Data object, but that didn't work, probably due to memory alignment.
Remember to deallocate the memory you created once you're done with it.
Hope this helps someone else.

How to load an existing texture using MTLRenderPassDescriptor

I have an image in the asset catalog, which I convert to an MTLTexture. I want to pass this texture to the shader function, draw into it, and add a smudge feature to the passed texture using shader functions.
Currently I am passing the texture in an MTLRenderPassDescriptor like below:
let renderPassWC = MTLRenderPassDescriptor()
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .load
renderPassWC.colorAttachments[0].storeAction = .store
When I edit the texture in the shader function, for example moving pixels of ssTexture to adjacent pixels, the movement never stops, because the operations I am doing in the shader functions keep being applied to the accumulated texture every draw cycle.
So rather than a loadAction of .load, I think .clear would be an option, but the passed texture becomes cleared when I change the code as below:
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .clear
renderPassWC.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
Is there any possible way to pass the image texture while using .clear?

Metal compute shader with 1D data buffer in and out?

I understand it is possible to pass a 1D array buffer to a Metal shader, but is it possible to have it output to a 1D array buffer? I don't want it to write to a texture - I just need an array of processed values.
I can get values out of the shader with the following code, but only one value at a time. Ideally I could get a whole array out (in the same order as the input 1D array buffer).
Any examples or pointers would be greatly appreciated!
var resultdata = [Float](repeating: 0, count: 3)
let outVectorBuffer = device.makeBuffer(bytes: &resultdata, length: MemoryLayout<float3>.size, options: [])
commandEncoder!.setBuffer(outVectorBuffer, offset: 0, index: 6)

commandBuffer!.addCompletedHandler { commandBuffer in
    let data = NSData(bytes: outVectorBuffer!.contents(), length: MemoryLayout<float3>.size)
    var out: float3 = float3(0, 0, 0)
    data.getBytes(&out, length: MemoryLayout<float3>.size)
    print("data: \(out)")
}
//In the Shader:
kernel void compute1d(...,
                      device float3 &outBuffer [[buffer(6)]])
{
    outBuffer = float3(1.0, 2.0, 3.0);
}
Two things:
You need to create the buffer large enough to hold however many float3 elements you want. You really need to use .stride and not .size when calculating the buffer size, though. In particular, float3 has 16-byte alignment, so there's padding between elements in an array. So, you would use something like MemoryLayout<float3>.stride * desiredNumberOfElements.
Then, in the shader, you need to change the declaration of outBuffer from a reference to a pointer. So, device float3 *outBuffer [[buffer(6)]]. Then you can index into it to access the elements (e.g. outBuffer[2] = ...;).
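Putting both points together on the Swift side, a sketch might look like this (the helper names and the element count are made up; SIMD3<Float> is the Swift-side counterpart of float3):

import Metal

func makeOutputBuffer(device: MTLDevice, elementCount: Int) -> MTLBuffer {
    // Size with .stride, not .size, so the padding between float3 elements is included.
    return device.makeBuffer(length: MemoryLayout<SIMD3<Float>>.stride * elementCount,
                             options: .storageModeShared)!
}

func readBack(_ buffer: MTLBuffer, elementCount: Int) -> [SIMD3<Float>] {
    // Reinterpret the buffer contents as an array of SIMD3<Float>.
    let pointer = buffer.contents().bindMemory(to: SIMD3<Float>.self, capacity: elementCount)
    return Array(UnsafeBufferPointer(start: pointer, count: elementCount))
}

The buffer is bound exactly as in the question (commandEncoder.setBuffer(outVectorBuffer, offset: 0, index: 6)), and readBack can be called from the command buffer's completion handler once the GPU has finished.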

Pass cuda texture variable as an argument

I have set up the cudaArray and bound it to a texture:
texture<float, 2, cudaReadModeElementType> tex;

cudaChannelFormatDesc channelDesc =
    cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);
cudaArray *cuArray;
checkCudaErrors(cudaMallocArray(&cuArray,
                                &channelDesc,
                                width,
                                height));
checkCudaErrors(cudaMemcpyToArray(cuArray,
                                  0,
                                  0,
                                  hData,
                                  size,
                                  cudaMemcpyHostToDevice));
Now I am wondering, if the content within the cuArray and tex remains the same all the time during the calculation, can I pass tex and/or cuArray to another function so that I don't have to do the binding every time?
Something like this:
void DoJobUsingTex(float* output, float* input, int size, texture tex)
{
    // do something here
}
CUDA introduced texture objects when CUDA 5 and Kepler hardware were released. These are so-called "bindless" textures which can be passed by value to kernels, so there isn't a need to rebind memory every time you want to run a kernel on different texture data.
You can read more about their use here.

How to pass from a Farseer vertices list to VertexPositionColorTexture vertex data

My issue started when I was working through the texture-to-vertices example (https://gamedev.stackexchange.com/questions/30050/building-a-shape-out-of-an-texture-with-farseer). Then I wondered whether it's possible to pass these Farseer vertices into vertex data that can be used with DrawUserIndexedPrimitives, in order to have the vertices ready for modification on alpha textures.
Example:
You draw your texture (with transparency in some places) over the triangle-strip vertex data so you can manipulate the points in order to distort the image, like this:
http://www.tutsps.com/images/Water_Design_avec_Photoshop/Water_Design_avec_Photoshop_20.jpg
As you can see, the A letter was just a normal image in a PNG file, but after the conversion I'm looking for, it can be used to distort the image.
Please share some code or a link to a tutorial that can help me figure this out.
Thanks all!!
P.S. I think the main issue is how to build the index data and the texture coordinates from just the vertices that PolygonTools.CreatePolygon produces.
TexturedFixture polygon = fixture.UserData as TexturedFixture;
effect.Texture = polygon.Texture;
effect.CurrentTechnique.Passes[0].Apply();
VertexPositionColorTexture[] points;
int vertexCount;
int[] indices;
int triangleCount;
polygon.Polygon.GetTriangleList(fixture.Body.Position, fixture.Body.Rotation, out points, out vertexCount, out indices, out triangleCount);
GraphicsDevice.SamplerStates[0] = SamplerState.AnisotropicClamp;
GraphicsDevice.RasterizerState = new RasterizerState() { FillMode = FillMode.Solid, CullMode = CullMode.None, };
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList, points, 0, vertexCount, indices, 0, triangleCount);
This will do the trick
