What is UnsafeMutablePointer<Void>? How to modify the underlying memory?

I am trying to work with SpriteKit's SKMutableTexture class, but I don't know how to work with UnsafeMutablePointer<Void>. I have a vague idea that it is a pointer to a succession of byte data in memory. But how can I update it? What would this actually look like in code?
Edit
Here is a basic code sample to work with. How would I get this to do something as simple as create a red square on the screen?
let tex = SKMutableTexture(size: CGSize(width: 10, height: 10))
tex.modifyPixelDataWithBlock { (ptr: UnsafeMutablePointer<Void>, n: UInt) -> Void in
    /* ??? */
}

From the docs for SKMutableTexture.modifyPixelDataWithBlock:
The texture bytes are assumed to be stored as tightly packed 32 bpp, 8bpc (unsigned integer) RGBA pixel data. The color components you provide should have already been multiplied by the alpha value.
So, while you’re given a void*, the underlying data is in the form of a stream of 4×8-bit RGBA values.
You could manipulate such a structure like so:
// struct of 4 bytes
struct RGBA {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}
let tex = SKMutableTexture(size: CGSize(width: 10, height: 10))
tex.modifyPixelDataWithBlock { voidptr, len in
    // convert the void pointer into a pointer to your struct
    let rgbaptr = UnsafeMutablePointer<RGBA>(voidptr)
    // next, create a collection-like structure from that pointer
    // (this second part isn’t necessary but can be nicer to work with)
    // note the count you supply to create the buffer is the number of
    // RGBA structs, so you need to convert the supplied byte length accordingly...
    let pixels = UnsafeMutableBufferPointer(start: rgbaptr, count: Int(len) / sizeof(RGBA))
    // now, you can manipulate the pixels buffer like any other mutable collection type
    for i in indices(pixels) {
        pixels[i].r = 0x00
        pixels[i].g = 0xff
        pixels[i].b = 0x00
        pixels[i].a = 0x20
    }
}

UnsafeMutablePointer<Void> is the Swift equivalent of void* - a pointer to anything at all. You can access the underlying memory as its memory property. Typically, if you know what the underlying type is, you'll coerce to a pointer to that type first. You can then use subscripting to reach a particular "slot" in memory.
For example, if the data is really a sequence of UInt8 values, you could say:
let buffer = UnsafeMutablePointer<UInt8>(ptr)
You can now access the individual UInt8 values as buffer[0], buffer[1], and so forth.
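To answer the original question, here is a minimal sketch that fills the 10×10 texture with an opaque red square, assuming the tightly packed premultiplied RGBA layout described in the docs:

let tex = SKMutableTexture(size: CGSize(width: 10, height: 10))
tex.modifyPixelDataWithBlock { ptr, len in
    // view the raw bytes as UInt8 values
    let buffer = UnsafeMutablePointer<UInt8>(ptr)
    // each pixel occupies 4 consecutive bytes: R, G, B, A
    for pixel in 0..<(Int(len) / 4) {
        buffer[pixel * 4 + 0] = 0xff // red
        buffer[pixel * 4 + 1] = 0x00 // green
        buffer[pixel * 4 + 2] = 0x00 // blue
        buffer[pixel * 4 + 3] = 0xff // alpha (opaque)
    }
}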

Related

Metal Shading language for Core Image color kernel, how to pass an array of float3

I'm trying to port some CIFilters from this source using the Metal shading language for Core Image.
I have a palette of color composed by an array of RGB struct and I want to pass them as an argument to a custom CI color image kernel.
The RGB struct is converted into an array of SIMD3<Float>.
static func SIMD3Palette(_ palette: [RGB]) -> [SIMD3<Float>] {
    return palette.map { $0.toFloat3() }
}
The kernel should take an array of simd_float3 values; the problem is that when I launch the filter, it tells me that the argument at index 1 is expecting an NSData.
override var outputImage: CIImage? {
    guard let inputImage = inputImage else {
        return nil
    }
    let palette = EightBitColorFilter.palettes[Int(inputPaletteIndex)]
    let extent = inputImage.extent
    let arguments = [inputImage, palette, Float(palette.count)] as [Any]
    let final = colorKernel.apply(extent: extent, arguments: arguments)
    return final
}
This is the kernel:
float4 eight_bit(sample_t image, simd_float3 palette[], float paletteSize, destination dest) {
    float dist = distance(image.rgb, palette[0]);
    float3 returnColor = palette[0];
    for (int i = 1; i < floor(paletteSize); ++i) {
        float tempDist = distance(image.rgb, palette[i]);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = palette[i];
        }
    }
    return float4(returnColor, 1);
}
I'm wondering how I can pass a data buffer to the kernel, since converting it into an NSData doesn't seem to be enough. I saw some examples, but they use the "full" shading language, which is not available for Core Image; Core Image only offers a subset for dealing with fragments.
Update
We have now figured out how to pass data buffers directly into Core Image kernels. Using a CIImage as described below is not needed, but still possible.
Assuming that you have your raw data as an NSData, you can just pass it to the kernel on invocation:
kernel.apply(..., arguments: [data, ...])
Note: Data might also work, but I know that NSData is an argument type that allows Core Image to cache filter results based on input arguments. So when in doubt, better to cast to NSData.
Then in the kernel function, you only need to declare the parameter with an appropriate constant type:
extern "C" float4 myKernel(constant float3 data[], ...) {
float3 data0 = data[0];
// ...
}
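For example, packing a palette into NSData could look like this (a sketch; it relies on SIMD3<Float> having the same 16-byte stride as Metal's float3, and mirrors the outputImage code from the question):

let palette: [SIMD3<Float>] = [SIMD3(1, 0, 0), SIMD3(0, 1, 0)]
// Data(buffer:) copies count * stride bytes, padding included
let data = palette.withUnsafeBufferPointer { Data(buffer: $0) } as NSData
let final = colorKernel.apply(extent: extent, arguments: [inputImage, data, Float(palette.count)])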
Previous Answer
Core Image kernels don't seem to support pointer or array parameter types, though there seems to be something coming with iOS 13. From the release notes:
Metal CIKernel instances support arguments with arbitrarily structured data.
But, as so often with Core Image, there seems to be no further documentation for that…
However, you can still use the "old way" of passing buffer data by wrapping it in a CIImage and sampling it in the kernel. For example:
let array: [Float] = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
let data = array.withUnsafeBufferPointer { Data(buffer: $0) }
let dataImage = CIImage(bitmapData: data, bytesPerRow: data.count, size: CGSize(width: array.count/4, height: 1), format: .RGBAf, colorSpace: nil)
Note that there is no CIFormat for 3-channel images since GPUs don't support those. So you either have to use single-channel .Rf and re-pack the values inside your kernel into float3 again, or add some padding to your data and use .RGBAf and float4 respectively (which I'd recommend, since it reduces texture fetches), as sketched below.
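A sketch of adding that padding, assuming the raw data is a flat array of RGB triplets:

let rgb: [Float] = [1, 0, 0,  0, 1, 0] // two RGB triplets
var padded: [Float] = []
for i in stride(from: 0, to: rgb.count, by: 3) {
    padded += [rgb[i], rgb[i + 1], rgb[i + 2], 1.0] // pad each triplet to RGBA
}
let paddedData = padded.withUnsafeBufferPointer { Data(buffer: $0) }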
When you pass that image into your kernel, you probably want to set the sampling mode to nearest, otherwise you might get interpolated values when sampling between two pixels:
kernel.apply(..., arguments: [dataImage.samplingNearest(), ...])
In your (Metal) kernel, you can access the data as you would a normal input image, via a sampler:
extern "C" float4 myKernel(coreimage::sampler data, ...) {
float4 data0 = data.sample(data.transform(float2(0.5, 0.5))); // data[0]
float4 data1 = data.sample(data.transform(float2(1.5, 0.5))); // data[1]
// ...
}
Note that I added 0.5 to the coordinates so that they point in the middle of a pixel in the data image to avoid ambiguity and interpolation.
Also note that pixel values you get from a sampler always have 4 channels. So even when you create your data image with format .Rf, you'll get a float4 when sampling it (the other values are filled with 0.0 for G and B, and 1.0 for alpha). In this case, you can just do
float data0 = data.sample(data.transform(float2(0.5, 0.5))).x;
Edit
I previously forgot to transform the sample coordinate from absolute pixel space (where (0.5, 0.5) would be the middle of the first pixel) to relative sampler space (where (0.5, 0.5) would be the middle of the whole buffer). It's fixed now.
I got it working. Even though the answer above is good and also deploys to a lower target, the result wasn't exactly what I was expecting: the difference between the original kernel written as a string and the above method of creating an image to be used as a source of data was fairly big.
I didn't figure out exactly why, but the image I was passing as the source of the palette differed from the created one in size and color (probably due to color spaces).
Since there was no documentation about this statement:
Metal CIKernel instances support arguments with arbitrarily structured data.
I experimented a lot in my spare time and came up with this.
First, the shader:
float4 eight_bit_buffer(sampler image, constant simd_float3 palette[], float paletteSize, destination dest) {
    float4 color = image.sample(image.transform(dest.coord()));
    float dist = distance(color.rgb, palette[0]);
    float3 returnColor = palette[0];
    for (int i = 1; i < floor(paletteSize); ++i) {
        float tempDist = distance(color.rgb, palette[i]);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = palette[i];
        }
    }
    return float4(returnColor, 1);
}
Second, the palette transformation into SIMD3<Float>:
static func toSIMD3Buffer(from palette: [RGB]) -> Data {
    var simd3Palette = SIMD3Palette(palette)
    // byte count is element count * stride (16 bytes per SIMD3<Float>)
    let byteCount = simd3Palette.count * MemoryLayout<SIMD3<Float>>.stride
    let palettePointer = UnsafeMutableRawPointer.allocate(
        byteCount: byteCount,
        alignment: MemoryLayout<SIMD3<Float>>.alignment)
    simd3Palette.withUnsafeMutableBufferPointer { buffer in
        palettePointer.initializeMemory(as: SIMD3<Float>.self,
                                        from: buffer.baseAddress!,
                                        count: buffer.count)
    }
    // the .free deallocator hands ownership of the allocation to the Data
    let data = Data(bytesNoCopy: palettePointer, count: byteCount, deallocator: .free)
    return data
}
The first time I tried appending each SIMD3<Float> to the Data object directly, but that didn't work, probably due to memory alignment.
Remember to deallocate the memory you create once you're done with it (here the .free deallocator lets the Data take care of that).
Hope this helps someone else.
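For completeness, invoking the kernel with that buffer might look like this (a sketch mirroring the outputImage code from the question; since the shader samples its input, it is a general CIKernel, which also takes an roiCallback):

let data = EightBitColorFilter.toSIMD3Buffer(from: palette)
let final = kernel.apply(extent: extent,
                         roiCallback: { _, rect in rect },
                         arguments: [inputImage, data, Float(palette.count)])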

Fatal error when using withMemoryRebound in iOS/Swift

I have the following code to create a table for sampling an image in iOS using the Swift Accelerate functions.
When I rebind the memory to UInt16 (which the table creation expects) from Float (the original element type), I get a fatal error.
var arr = Array<Float>(repeating: 0, count: 163840)
arr.withUnsafeBufferPointer { arr_pointer in
    arr_pointer.withMemoryRebound(to: UInt16.self) { arr_r_pointer in // This causes a FATAL ERROR
        let table = vImageMultidimensionalTable_Create(arr_r_pointer.baseAddress!,
            3, 3, dims_r_pointer.baseAddress!, kvImageMDTableHint_Float,
            vImage_Flags(kvImageNoFlags), nil)
        vImageMultiDimensionalInterpolatedLookupTable_PlanarF(&srcBuffer,
            &destBuffer, nil, table!,
            kvImageFullInterpolation,
            vImage_Flags(kvImageNoFlags))
    }
}
Could anyone point out my mistake here?
You should have read the note for the withMemoryRebound function:
Note
Only use this method to rebind the buffer’s memory to a type with the same size and stride as the currently bound Element type. To bind a region of memory to a type that is a different size, convert the buffer to a raw buffer and use the bindMemory(to:) method.
The size of Float is 32 bits and the size of UInt16 is 16 bits, so they don't have the same size and cannot be rebound.
So you should do something like this:
arr.withUnsafeBufferPointer { pointer in
    let raw = UnsafeRawBufferPointer(pointer)
    let uints = raw.bindMemory(to: UInt16.self)
    // use `uints`, a buffer pointer to UInt16s, here
}
But also note that each Float from the initial array will be split into two UInt16 values that way. I don't know if that's what you need.
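If what you actually want is one UInt16 value per Float (a numeric conversion rather than a reinterpretation of the raw bytes), a plain map may be closer to your intent (note that UInt16(_:) traps if a value doesn't fit):

let uint16s = arr.map { UInt16($0) }
uint16s.withUnsafeBufferPointer { pointer in
    // pass pointer.baseAddress! to the vImage call
}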
Your original array arr is an array of Floats
var arr = Array<Float>(repeating: 0, count: 163840)
but you're trying to rebind the pointer to UInt16:
arr_pointer.withMemoryRebound(to: UInt16.self)

Metal compute shader with 1D data buffer in and out?

I understand it is possible to pass a 1D array buffer to a Metal shader, but is it possible to have it output to a 1D array buffer? I don't want it to write to a texture - I just need an array of processed values.
I can get values out of the shader with the following code, but only one value at a time. Ideally I could get a whole array out (in the same order as the input 1D array buffer).
Any examples or pointers would be greatly appreciated!
var resultdata = [Float](repeating: 0, count: 3)
let outVectorBuffer = device.makeBuffer(bytes: &resultdata, length: MemoryLayout<float3>.size, options: [])
commandEncoder!.setBuffer(outVectorBuffer, offset: 0, index: 6)
commandBuffer!.addCompletedHandler { commandBuffer in
    let data = NSData(bytes: outVectorBuffer!.contents(), length: MemoryLayout<float3>.size)
    var out: float3 = float3(0, 0, 0)
    data.getBytes(&out, length: MemoryLayout<float3>.size)
    print("data: \(out)")
}

// In the shader:
kernel void compute1d(
    ...
    device float3 &outBuffer [[buffer(6)]])
{
    outBuffer = float3(1.0, 2.0, 3.0);
}
Two things:
You need to create a buffer large enough to hold however many float3 elements you want. You really need to use .stride and not .size when calculating the buffer size, though. In particular, float3 has 16-byte alignment, so there's padding between elements in an array. So, you would use something like MemoryLayout<float3>.stride * desiredNumberOfElements.
Then, in the shader, you need to change the declaration of outBuffer from a reference to a pointer. So, device float3 *outBuffer [[buffer(6)]]. Then you can index into it to access the elements (e.g. outBuffer[2] = ...;).
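Putting both together, a minimal sketch of the Swift side (the names mirror the question's; count is whatever element count you need):

let count = 1024
let outVectorBuffer = device.makeBuffer(length: MemoryLayout<float3>.stride * count, options: [])
commandEncoder!.setBuffer(outVectorBuffer, offset: 0, index: 6)
commandBuffer!.addCompletedHandler { _ in
    // reinterpret the raw buffer contents as float3 elements and copy them out
    let ptr = outVectorBuffer!.contents().bindMemory(to: float3.self, capacity: count)
    let results = Array(UnsafeBufferPointer(start: ptr, count: count))
    print("data: \(results)")
}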

Spectrogram from AVAudioPCMBuffer using Accelerate framework in Swift

I'm trying to generate a spectrogram from an AVAudioPCMBuffer in Swift. I install a tap on an AVAudioMixerNode and receive a callback with the audio buffer. I'd like to convert the signal in the buffer to a [Float: Float] dictionary, where the key represents the frequency and the value represents the magnitude of the audio at the corresponding frequency.
I tried using Apple's Accelerate framework but the results I get seem dubious. I'm sure it's just in the way I'm converting the signal.
I looked at this blog post amongst other things for a reference.
Here is what I have:
self.audioEngine.mainMixerNode.installTapOnBus(0, bufferSize: 1024, format: nil, block: { buffer, when in
    let bufferSize: Int = Int(buffer.frameLength)
    // Set up the transform
    let log2n = UInt(round(log2(Double(bufferSize))))
    let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))
    // Create the complex split value to hold the output of the transform
    var realp = [Float](count: bufferSize/2, repeatedValue: 0)
    var imagp = [Float](count: bufferSize/2, repeatedValue: 0)
    var output = DSPSplitComplex(realp: &realp, imagp: &imagp)
    // Now I need to convert the signal from the buffer to complex value, this is what I'm struggling to grasp.
    // The complexValue should be UnsafePointer<DSPComplex>. How do I generate it from the buffer's floatChannelData?
    vDSP_ctoz(complexValue, 2, &output, 1, UInt(bufferSize / 2))
    // Do the fast Fourier forward transform
    vDSP_fft_zrip(fftSetup, &output, 1, log2n, Int32(FFT_FORWARD))
    // Convert the complex output to magnitude
    var fft = [Float](count: Int(bufferSize / 2), repeatedValue: 0.0)
    vDSP_zvmags(&output, 1, &fft, 1, vDSP_Length(bufferSize / 2))
    // Release the setup
    vDSP_destroy_fftsetup(fftSetup)
    // TODO: Convert fft to [Float:Float] dictionary of frequency vs magnitude. How?
})
My questions are:
1. How do I convert buffer.floatChannelData to UnsafePointer<DSPComplex> to pass to the vDSP_ctoz function? Is there a different/better way to do it, maybe even bypassing vDSP_ctoz?
2. Is this different if the buffer contains audio from multiple channels? How is it different when the buffer audio channel data is or isn't interleaved?
3. How do I convert the indices in the fft array to frequencies in Hz?
4. Anything else I may be doing wrong?
Update
Thanks everyone for suggestions. I ended up filling the complex array as suggested in the accepted answer. When I plot the values and play a 440 Hz tone on a tuning fork it registers exactly where it should.
Here is the code to fill the array:
var channelSamples: [[DSPComplex]] = []
for var i = 0; i < channelCount; ++i {
    channelSamples.append([])
    let firstSample = buffer.format.interleaved ? i : i * bufferSize
    for var j = firstSample; j < bufferSize; j += buffer.stride * 2 {
        channelSamples[i].append(DSPComplex(real: buffer.floatChannelData.memory[j], imag: buffer.floatChannelData.memory[j + buffer.stride]))
    }
}
The channelSamples array then holds separate array of samples for each channel.
To calculate the magnitude I used this:
var spectrum = [Float]()
for var i = 0; i < bufferSize / 2; ++i {
    let imag = out.imagp[i]
    let real = out.realp[i]
    let magnitude = sqrt(pow(real, 2) + pow(imag, 2))
    spectrum.append(magnitude)
}
1: Hacky way: you can just cast the float array, since the real and imaginary values go one after another.
2: It depends on whether the audio is interleaved or not. If it is interleaved (most cases), the left and right channels are in the array with a stride of 2.
3: The lowest frequency in your case is the frequency of a period of 1024 samples. At a sample rate of 44100 Hz that period is ~23 ms, so the lowest frequency of the spectrum is 1/(1024/44100) ≈ 43 Hz. Subsequent bins are integer multiples of that (~86 Hz, ~129 Hz, and so on); see the sketch after point 4.
4: You have installed a callback handler on an audio bus. This is likely run frequently and with real-time thread priority. You should not do anything that has the potential for blocking (it will likely result in priority inversion and glitchy audio), such as:
Allocating memory (realp and imagp - [Float](...) is shorthand for Array<Float> and is likely allocated on the heap). Pre-allocate these.
Calling lengthy operations such as vDSP_create_fftsetup(), which also allocates memory and initialises it. Again, you can allocate this once, outside of your function (see the sketch below).
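For point 3, converting an FFT bin index to a frequency in Hz is just a multiplication (a sketch, assuming a 44100 Hz sample rate):

let sampleRate: Float = 44100
let frequency = Float(i) * sampleRate / Float(bufferSize) // frequency of fft[i]

For point 4, a sketch of hoisting the allocations out of the tap, in the same Swift version as the question and assuming a fixed 1024-frame buffer:

// allocate once, outside the tap block
let bufferSize = 1024
let log2n = UInt(round(log2(Double(bufferSize))))
let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))
var realp = [Float](count: bufferSize / 2, repeatedValue: 0)
var imagp = [Float](count: bufferSize / 2, repeatedValue: 0)

self.audioEngine.mainMixerNode.installTapOnBus(0, bufferSize: 1024, format: nil) { buffer, when in
    // reuse fftSetup, realp and imagp here instead of re-creating them per callback
}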

Swift pointer arithmetic and dereferencing; converting some C-like map code to Swift

I have this little bit of Swift code that does not seem to be working...
// earlier, in Obj C...
typedef struct _Room {
    uint8_t *map;
    int width;
    int height;
} Room;
A Room is part of a stab at a roguelike game, if you're curious. I'm trying to rewrite a couple of parts in Swift. Here's the code that appears broken, with comments describing what I hope I'm doing:
let ptr = UnsafePointer<UInt8>(room.map) // grab a pointer to the map out of the room struct
let offset = (Int(room.width) * Int(point.y)) + Int(point.x) // calculate an int offset to the location I am interested in examining
let locationPointer = ptr + offset // pointer advances to point to the offset I want
var pointValue = ptr.memory // What I used to get with *ptr
Something is wrong here, because simple tests show the value of pointValue is not what I know I am looking at on the map, having set a very simple location (1, 1) to a known value. It seems pretty obvious that Swift isn't really meant for this kind of thing, but this is a conversion aimed at learning the Way of Swift using syntax I already know well.
I expect the error is in the Swift code, since this was all working in the Objective-C version. Where's the error?
You are assigning locationPointer to point to the new location, but you are still using ptr in the next line, and the value of ptr has not been changed. Change your last line to:
var pointValue = locationPointer.memory
or you could change your pointer to a var and advance it:
var ptr = UnsafePointer<UInt8>(room.map) // grab a pointer to the map out of the room struct
let offset = (Int(room.width) * Int(point.y)) + Int(point.x) // calculate an int offset to the location I am interested in examining
ptr = ptr + offset // pointer advances to point to the offset I want
var pointValue = ptr.memory // What I used to get with *ptr
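Alternatively, leaving ptr where it is, subscripting performs the pointer arithmetic and dereference in one step (a small sketch):

let pointValue = ptr[offset] // equivalent to *(ptr + offset) in C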
