Swift: EXC_BAD_ACCESS when accessing an external library - ios

I'm using an external library written in C that applies filters to an image. It receives the original image's pixels as an array of float values and writes the new image's float values into another array. One specific filter creates a mask to be used by a sharpen filter, and I don't know why, but it only works with smaller images; bigger images (a million pixels, more or less) cause the application to crash with an EXC_BAD_ACCESS error right after the wrapper that calls the external lib function executes. Is there anything wrong with my code, which creates the parameters that will be passed to the external lib, or is the problem likely in the external library?
func allocateMaskArgs() { // allocates the mask parameters in memory, to be used by the sharpen filter
    let size = originalImageMatrix.params[0] * originalImageMatrix.params[1] // height of the image multiplied by width
    if maskBuffer != nil {
        self.maskBuffer.deallocate()
    }
    maskBuffer = UnsafeMutablePointer<UnsafeMutablePointer<Float>?>.allocate(capacity: 2)

    let constantPointer: UnsafeMutablePointer<Float>?
    constantPointer = UnsafeMutablePointer<Float>.allocate(capacity: 1)
    constantPointer!.advanced(by: 0).pointee = 4.0 // this is the intensity value of the mask, it should always be 4
    maskBuffer.advanced(by: 0).pointee = constantPointer

    let maskArrayPointer: UnsafeMutablePointer<Float>? // this is where the mask created by createMask() should be stored by the external lib function
    maskArrayPointer = UnsafeMutablePointer<Float>.allocate(capacity: size)
    maskBuffer.advanced(by: 1).pointee = maskArrayPointer
}
func createMask() { // creates the sharpen mask and stores it in maskBuffer
    var input_params: [Int] = [self.originalImageMatrix.params[0], self.originalImageMatrix.params[1]]
    var output_params: [Int] = [self.newImageMatrix.params[0], self.newImageMatrix.params[1]]
    self.imagingAPI.applyFilters(self.originalImageMatrix.v!, input_params: &input_params, output_image: self.newImageMatrix.v!, output_params: &output_params, filter_id: 11, args: self.maskBuffer)
}
The external library function is accessed through this wrapper function:
- (void) applyFilters: (float *) input_image input_params: (long *) input_params output_image : (float *) output_image output_params : (long *) output_params filter_id : (int) filter_id args : (float**) args;
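For completeness, this is roughly what a matching deallocation of these buffers would look like (an illustrative sketch based on the allocation code above, not the exact project code):
func deallocateMaskArgs() { // illustrative counterpart to allocateMaskArgs()
    guard maskBuffer != nil else { return }
    maskBuffer.advanced(by: 0).pointee?.deallocate() // the constant (intensity) value
    maskBuffer.advanced(by: 1).pointee?.deallocate() // the mask array of `size` floats
    maskBuffer.deallocate()
    maskBuffer = nil
}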

Related

Metal Shading language for Core Image color kernel, how to pass an array of float3

I'm trying to port some CIFilter from this source by using metal shading language for Core Image.
I have a palette of color composed by an array of RGB struct and I want to pass them as an argument to a custom CI color image kernel.
The RGB struct is converted into an array of SIMD3<Float>.
static func SIMD3Palette(_ palette: [RGB]) -> [SIMD3<Float>] {
    return palette.map { $0.toFloat3() }
}
The kernel should take an array of simd_float3 values; the problem is that when I launch the filter it tells me that the argument at index 1 is expecting an NSData.
override var outputImage: CIImage? {
    guard let inputImage = inputImage else {
        return nil
    }
    let palette = EightBitColorFilter.palettes[Int(inputPaletteIndex)]
    let extent = inputImage.extent
    let arguments = [inputImage, palette, Float(palette.count)] as [Any]
    let final = colorKernel.apply(extent: extent, arguments: arguments)
    return final
}
This is the kernel:
float4 eight_bit(sample_t image, simd_float3 palette[], float paletteSize, destination dest) {
    float dist = distance(image.rgb, palette[0]);
    float3 returnColor = palette[0];
    for (int i = 1; i < floor(paletteSize); ++i) {
        float tempDist = distance(image.rgb, palette[i]);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = palette[i];
        }
    }
    return float4(returnColor, 1);
}
I'm wondering how I can pass a data buffer to the kernel, since converting it into an NSData doesn't seem to be enough. I saw some examples, but they use the "full" shading language, which is not available for Core Image; Core Image uses a sort of subset that deals only with fragments.
Update
We have now figured out how to pass data buffers directly into Core Image kernels. Using a CIImage as described below is not needed, but still possible.
Assuming that you have your raw data as an NSData, you can just pass it to the kernel on invocation:
kernel.apply(..., arguments: [data, ...])
Note: Data might also work, but I know that NSData is an argument type that allows Core Image to cache filter results based on input arguments. So when in doubt, better cast to NSData.
Then in the kernel function, you only need to declare the parameter with an appropriate constant type:
extern "C" float4 myKernel(constant float3 data[], ...) {
    float3 data0 = data[0];
    // ...
}
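Applied to the question's palette, the invocation could look roughly like this (a sketch; the NSData packing via withUnsafeBufferPointer is an assumption, not something spelled out in the Core Image documentation):
// assuming `palette` is the [SIMD3<Float>] produced by SIMD3Palette(_:)
let paletteData = palette.withUnsafeBufferPointer {
    NSData(bytes: $0.baseAddress!, length: $0.count * MemoryLayout<SIMD3<Float>>.stride)
}
let final = colorKernel.apply(extent: inputImage.extent,
                              arguments: [inputImage, paletteData, Float(palette.count)])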
Previous Answer
Core Image kernels don't seem to support pointer or array parameter types. Though there seems to be something coming with iOS 13. From the Release Notes:
Metal CIKernel instances support arguments with arbitrarily structured data.
But, as so often with Core Image, there seems to be no further documentation for that…
However, you can still use the "old way" of passing buffer data by wrapping it in a CIImage and sampling it in the kernel. For example:
let array: [Float] = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
let data = array.withUnsafeBufferPointer { Data(buffer: $0) }
let dataImage = CIImage(bitmapData: data, bytesPerRow: data.count, size: CGSize(width: array.count/4, height: 1), format: .RGBAf, colorSpace: nil)
Note that there is no CIFormat for 3-channel images since the GPU doesn't support those. So you either have to use single-channel .Rf and re-pack the values inside your kernel to float3 again, or add some strides to your data and use .RGBAf and float4 respectively (which I'd recommend since it reduces texture fetches).
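A rough sketch of the second option, padding each 3-component entry to 4 components before wrapping it in the data image (the padding value itself is arbitrary):
let rgbValues: [Float] = [1.0, 0.0, 0.0,  0.0, 1.0, 0.0] // two float3 entries
var padded: [Float] = []
for i in stride(from: 0, to: rgbValues.count, by: 3) {
    // append R, G, B plus one padding value so every entry is 4 floats wide
    padded += [rgbValues[i], rgbValues[i + 1], rgbValues[i + 2], 0.0]
}
let data = padded.withUnsafeBufferPointer { Data(buffer: $0) }
let dataImage = CIImage(bitmapData: data, bytesPerRow: data.count,
                        size: CGSize(width: padded.count / 4, height: 1),
                        format: .RGBAf, colorSpace: nil)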
When you pass that image into your kernel, you probably want to set the sampling mode to nearest, otherwise you might get interpolated values when sampling between two pixels:
kernel.apply(..., arguments: [dataImage.samplingNearest(), ...])
In your (Metal) kernel, you can access the data as you would a normal input image, via a sampler:
extern "C" float4 myKernel(coreimage::sampler data, ...) {
    float4 data0 = data.sample(data.transform(float2(0.5, 0.5))); // data[0]
    float4 data1 = data.sample(data.transform(float2(1.5, 0.5))); // data[1]
    // ...
}
Note that I added 0.5 to the coordinates so that they point in the middle of a pixel in the data image to avoid ambiguity and interpolation.
Also note that pixel values you get from a sampler always have 4 channels. So even when you are creating your data image with format .Rf, you'll get a float4 when sampling it (the other values are filled with 0.0 for G and B and 1.0 for alpha). In this case, you can just do
float data0 = data.sample(data.transform(float2(0.5, 0.5))).x;
Edit
I previously forgot to transform the sample coordinate from absolute pixel space (where (0.5, 0.5) would be the middle of the first pixel) to relative sampler space (where (0.5, 0.5) would be the middle of the whole buffer). It's fixed now.
I made it work. Even if the answer was good and also deploys to a lower target, the result wasn't exactly what I was expecting: the difference between the original kernel written as a string and the above method of creating an image to be used as a data source was kind of big.
I didn't figure out the exact reason, but the image I was passing as the palette source was kind of different from the created one in size and color (probably due to color spaces).
Since there was no documentation about this statement:
Metal CIKernel instances support arguments with arbitrarily structured
data.
I tried a lot in my spare time and came up with this.
First the shader:
float4 eight_bit_buffer(sampler image, constant simd_float3 palette[], float paletteSize, destination dest) {
    float4 color = image.sample(image.transform(dest.coord()));
    float dist = distance(color.rgb, palette[0]);
    float3 returnColor = palette[0];
    for (int i = 1; i < floor(paletteSize); ++i) {
        float tempDist = distance(color.rgb, palette[i]);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = palette[i];
        }
    }
    return float4(returnColor, 1);
}
Second the palette transformation into SIMD3<Float>:
static func toSIMD3Buffer(from palette: [RGB]) -> Data {
    var simd3Palette = SIMD3Palette(palette)
    // total byte count: one stride (16 bytes) per SIMD3<Float> entry
    let byteCount = simd3Palette.count * MemoryLayout<SIMD3<Float>>.stride
    let palettePointer = UnsafeMutableRawPointer.allocate(
        byteCount: byteCount,
        alignment: MemoryLayout<SIMD3<Float>>.alignment)
    // copy the palette values into the newly allocated buffer
    let simd3Pointer = simd3Palette.withUnsafeMutableBufferPointer { (buffer) -> UnsafeMutablePointer<SIMD3<Float>> in
        let p = palettePointer.initializeMemory(as: SIMD3<Float>.self,
                                                from: buffer.baseAddress!,
                                                count: buffer.count)
        return p
    }
    // wrap the buffer without copying; the .free deallocator releases it when the Data goes away
    let data = Data(bytesNoCopy: UnsafeMutableRawPointer(simd3Pointer), count: byteCount, deallocator: .free)
    return data
}
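A possible call site for this (hypothetical; the roiCallback and the argument order are assumptions based on the shader signature above):
let paletteData = EightBitColorFilter.toSIMD3Buffer(from: palette)
let final = kernel.apply(extent: inputImage.extent,
                         roiCallback: { _, rect in rect },
                         arguments: [inputImage, paletteData, Float(palette.count)])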
The first time I tried appending the SIMD3 values to the Data object directly, but it wasn't working, probably due to memory alignment.
Remember to deallocate the memory you created once you are done with it.
Hope this helps someone else.

What is UnsafeMutablePointer<Void>? How to modify the underlying memory?

I am trying to work with SpriteKit's SKMutableTexture class but I don't know how to work with UnsafeMutablePointer<Void>. I have a vague idea that it is a pointer to a succession of byte data in memory. But how can I update it? What would this actually look like in code?
Edit
Here is a basic code sample to work with. How would I get this to do something as simple as create a red square on the screen?
let tex = SKMutableTexture(size: CGSize(width: 10, height: 10))
tex.modifyPixelDataWithBlock { (ptr: UnsafeMutablePointer<Void>, n: UInt) -> Void in
    /* ??? */
}
From the docs for SKMutableTexture.modifyPixelDataWithBlock:
The texture bytes are assumed to be stored as tightly packed 32 bpp, 8bpc (unsigned integer) RGBA pixel data. The color components you provide should have already been multiplied by the alpha value.
So, while you’re given a void*, the underlying data is in the form of a stream of 4x8 bits.
You could manipulate such a structure like so:
// struct of 4 bytes
struct RGBA {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

let tex = SKMutableTexture(size: CGSize(width: 10, height: 10))
tex.modifyPixelDataWithBlock { voidptr, len in
    // convert the void pointer into a pointer to your struct
    let rgbaptr = UnsafeMutablePointer<RGBA>(voidptr)
    // next, create a collection-like structure from that pointer
    // (this second part isn’t necessary but can be nicer to work with)
    // note the length you supply to create the buffer is the number of
    // RGBA structs, so you need to convert the supplied length accordingly...
    let pixels = UnsafeMutableBufferPointer(start: rgbaptr, count: Int(len) / sizeof(RGBA))
    // now, you can manipulate the pixels buffer like any other mutable collection type
    for i in indices(pixels) {
        pixels[i].r = 0x00
        pixels[i].g = 0xff
        pixels[i].b = 0x00
        pixels[i].a = 0x20
    }
}
UnsafeMutablePointer<Void> is the Swift equivalent of void* - a pointer to anything at all. You can access the underlying memory as its memory property. Typically, if you know what the underlying type is, you'll coerce to a pointer to that type first. You can then use subscripting to reach a particular "slot" in memory.
For example, if the data is really a sequence of UInt8 values, you could say:
let buffer = UnsafeMutablePointer<UInt8>(ptr)
You can now access the individual UInt8 values as buffer[0], buffer[1], and so forth.
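For the red square from the question, the same raw UInt8 view could be used like this (a rough sketch in the same Swift-1-era syntax as the rest of this answer; premultiplied, fully opaque red):
let tex = SKMutableTexture(size: CGSize(width: 10, height: 10))
tex.modifyPixelDataWithBlock { ptr, len in
    let buffer = UnsafeMutablePointer<UInt8>(ptr)
    // walk the tightly packed RGBA bytes four at a time
    for i in stride(from: 0, to: Int(len), by: 4) {
        buffer[i]     = 0xff // R
        buffer[i + 1] = 0x00 // G
        buffer[i + 2] = 0x00 // B
        buffer[i + 3] = 0xff // A
    }
}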

Swift function that takes in array giving error: '#lvalue $T24' is not identical to 'CGFloat'

So I'm writing a lowpass accelerometer function to moderate the jitters of the accelerometer. I have a CGFloat array to represent the data and I want to damp it with this function:
// Damps the jittery motion with a lowpass filter.
func lowPass(vector: [CGFloat]) -> [CGFloat]
{
    let blend: CGFloat = 0.2
    // Smoothens out the data input.
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    // Sets the last vector to be the current one.
    lastVector = vector
    // Returns the lowpass vector.
    return vector
}
In this case, lastVector is defined as follows up at the top of my program:
var lastVector:[CGFloat] = [0.0, 0.0, 0.0]
The three lines of the form vector[a] = ... give me the errors. Any ideas as to why I am getting this error?
That code seems to compile if you pass the array with the inout modifier:
func lowPass(inout vector: [CGFloat]) -> [CGFloat] {
    ...
}
I'm not sure whether that's a bug or not. Instinctively, if I pass an array to a function I expect to be able to modify it. If I pass it with the inout modifier, I'd expect to be able to make the original variable point to a new array - similar to what the & modifier does in C and C++.
Maybe the reason behind it is that in Swift there are mutable and immutable arrays (and dictionaries). Without the inout it's considered immutable, hence it cannot be modified.
Addendum 1 - It's not a bug
#newacct says that's the intended behavior. After some research I agree with him. But even if it's not a bug, I originally considered it wrong (read up to the end for conclusions).
If I have a class like this:
class WithProp {
    var x: Int = 1
    func SetX(newVal: Int) {
        self.x = newVal
    }
}
I can pass an instance of that class to a function, and the function can modify its internal state
var a = WithProp()

func Do1(p: WithProp) {
    p.x = 5 // This works
    p.SetX(10) // This works too
}
without having to pass the instance as inout.
I can use inout instead to make the a variable to point to another instance:
func Do2(inout p: WithProp) {
    p = WithProp()
}
Do2(&a)
With that code, from within Do2 I make the p parameter (i.e. the a variable) point to a newly created instance of WithProp.
The same cannot be done with an array (and I presume a dictionary as well). To change its internal state (modify, add or remove an element) the inout modifier must be used. That was counterintuitive.
But everything gets clarified after reading this excerpt from the swift book:
Swift’s String, Array, and Dictionary types are implemented as structures. This means that strings, arrays, and dictionaries are copied when they are assigned to a new constant or variable, or when they are passed to a function or method.
So when passed to a func, it's not the original array but a copy of it. Hence any change made to it (even if possible) wouldn't be done on the original array.
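A tiny demonstration of that copy-on-assignment behavior (illustrative, not taken from the book):
var original: [CGFloat] = [1.0, 2.0, 3.0]
var copy = original
copy[0] = 99.0
// original is still [1.0, 2.0, 3.0]; the assignment copied the array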
So, in the end, my original answer above is correct and the observed behavior is not a bug.
Many thanks to #newacct :)
Since Xcode 6 beta 3, modifying the contents of an Array is a mutating operation. You cannot modify a constant (i.e. let) Array; you can only modify a non-constant (i.e. var) Array.
Parameters to a function are constants by default. Therefore, you cannot modify the contents of vector since it is a constant. Like other parameters, there are two ways to be able to change a parameter:
Declare it var, in which case you can assign to it, but it is still passed by value, so any changes to the parameter have no effect on the calling scope.
Declare it inout, in which case the parameter is passed by reference, and any changes to the parameter are just like changes made to the variable in the calling scope.
You can see in the Swift standard library that all the functions that take an Array and mutate it, like sort(), take the Array as inout.
P.S. this is just like how arrays work in PHP by the way
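For comparison, a rough sketch of the inout variant of the question's function (mutating the caller's array in place):
func lowPass(inout vector: [CGFloat]) {
    let blend: CGFloat = 0.2
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    lastVector = vector
}

var values: [CGFloat] = [1.0, 2.0, 3.0]
lowPass(&values) // values is modified in place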
Edit: The following worked for Xcode Beta 2. Apparently, the syntax and behavior of arrays have changed in Beta 3. You can no longer modify the contents of an array with subscripts if it is immutable (a parameter not declared inout or var):
Not valid with the most recent changes to the language
The only way I could get it to work in the playground was to change how you are declaring the arrays. I suggest trying this (works in the playground):
import Cocoa

let lastVector: CGFloat[] = [0.0, 0.0, 0.0]

func lowPass(vector: CGFloat[]) -> CGFloat[] {
    let blend: CGFloat = 0.2
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    return vector
}

var test = lowPass([1.0, 2.0, 3.0]);
Mainly as a followup for future reference, #newacct's answer is the correct one. Since the original post showed a function that returns an array, the correct answer to this question is to tag the parameter with var:
func lowPass(var vector: [CGFloat]) -> [CGFloat] {
    let blend: CGFloat = 0.2
    // Smoothens out the data input.
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    // Sets the last vector to be the current one.
    lastVector = vector
    // Returns the lowpass vector.
    return vector
}

Problem assigning values to Mat array in OpenCV 2.3 - seems simple

Using the new API for OpenCV 2.3, I am having trouble assigning values to a Mat array (or, say, image) inside a loop. Here is the code snippet I am using:
int paddedHeight = 256 + 2*padSize;
int paddedWidth  = 256 + 2*padSize;
int n = 266; // padded height or width

cv::Mat fx = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);
cv::Mat fy = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);

float value = -n/2.0f;
for(int i=0; i<n; i++)
{
    for(int j=0; j<n; j++)
        fx.at<cv::Vec2d>(i,j) = value++;
    value = -n/2.0f;
}

meshElement = -n/2.0f;
for(int i=0; i<n; i++)
{
    for(int j=0; j<n; j++)
        fy.at<cv::Vec2d>(i,j) = value;
    value++;
}
Now in the first loop, as soon as j = 133, I get an exception which seems to be related to the depth of the image. I can't figure out what I am doing wrong here.
Please advise! Thanks!
You are accessing the data as 2-component double vectors (using .at<cv::Vec2d>()), but you created the matrices to contain only 1-component doubles (using CV_64FC1). Either create the matrices to contain two components per element (with CV_64FC2) or, what seems more appropriate to your code, access the values as simple doubles, using .at<double>(). This explodes exactly at j=133 because that is half the width of your image: when a matrix holding 1 component per element is treated as if it held 2-component vectors, it is effectively only half as wide.
Or maybe you can merge these two matrices into one, containing two components per element, but this depends on the way you are going to use these matrices in the future. In this case you can also merge the two loops together and really set a 2-component vector:
cv::Mat f = cv::Mat(paddedHeight, paddedWidth, CV_64FC2);
float yValue = -n/2.0f;
for(int i=0; i<n; i++)
{
    float xValue = -n/2.0f;
    for(int j=0; j<n; j++)
    {
        f.at<cv::Vec2d>(i,j)[0] = xValue++;
        f.at<cv::Vec2d>(i,j)[1] = yValue;
    }
    ++yValue;
}
This might produce a better memory accessing scheme if you always need both values, the one from fx and the one from fy, for the same element.

F# lazy pixels reading

I want to lazily load image pixels into a 3-dimensional array of integers.
For example, done in a simple (non-lazy) way it looks like this:
for i = 0 to Width do
    for j = 0 to Height do
        let point = image.GetPixel(i,j)
        pixels.[0,i,j] <- point.R
        pixels.[1,i,j] <- point.G
        pixels.[2,i,j] <- point.B
How can it be made lazy?
What would be slow is the call to GetPixel. If you want to call it only as needed, you could use something like this:
open System.Drawing

let lazyPixels (image:Bitmap) =
    let Width = image.Width
    let Height = image.Height
    let pixels : Lazy<byte>[,,] = Array3D.zeroCreate 3 Width Height
    for i = 0 to Width-1 do
        for j = 0 to Height-1 do
            let point = lazy image.GetPixel(i,j)
            pixels.[0,i,j] <- lazy point.Value.R
            pixels.[1,i,j] <- lazy point.Value.G
            pixels.[2,i,j] <- lazy point.Value.B
    pixels
GetPixel will be called at most once for every pixel, and then reused for the other components.
Another way of approaching this problem would be to do a bulk-load of the entire image. This will be a lot quicker than calling GetPixel over and over again.
open System.Drawing
open System.Drawing.Imaging

let pixels (image:Bitmap) =
    let Width = image.Width
    let Height = image.Height
    let rect = new Rectangle(0, 0, Width, Height)
    // Lock the image for access
    let data = image.LockBits(rect, ImageLockMode.ReadOnly, image.PixelFormat)
    // Copy the data
    let ptr = data.Scan0
    let stride = data.Stride
    let bytes = stride * data.Height
    let values : byte[] = Array.zeroCreate bytes
    System.Runtime.InteropServices.Marshal.Copy(ptr, values, 0, bytes)
    // Unlock the image
    image.UnlockBits(data)
    let pixelSize = 4 // <-- calculate this from the PixelFormat
    // Create and return a 3D-array with the copied data
    Array3D.init 3 Width Height (fun i x y ->
        values.[stride * y + x * pixelSize + i])
(adapted from the C# sample on Bitmap.LockBits)
What do you mean by lazy?
An array is not a lazy data type, which means that if you want to use arrays, you need to load all pixels during the initialization. If we were using a single-dimensional array, an alternative would be to use seq<_>, which is lazy (but you can access elements only sequentially). There is nothing like seq<_> for multi-dimensional arrays, so you'll need to use something else.
Probably the closest option would be to use three-dimensional array of lazy values (Lazy<int>[,,]). This is an array of delayed thunks that access pixels and are evaluated only when you actually read the value at the location. You could initialize it like this:
for i = 0 to Width do
    for j = 0 to Height do
        let point = lazy image.GetPixel(i,j)
        pixels.[0,i,j] <- lazy point.Value.R
        pixels.[1,i,j] <- lazy point.Value.G
        pixels.[2,i,j] <- lazy point.Value.B
The snippet creates a lazy value that reads the pixel (point) and then three lazy values to get the individual color components. When a color component is accessed, the point value is evaluated (by accessing Value).
The only difference in the rest of your code is that you'll need to call Value (e.g. pixels.[0,10,10].Value) to get the actual color component of the pixel.
You could define more complex data structures (such as your own type that supports indexing and is lazy), but I think that array of lazy values should be a good starting point.
As already mentioned in other answers, you can use lazy pixel loading in the 3D array, but that only makes the GetPixel operation lazy, not the memory allocation of the 3D array, since the array is already allocated when you call the create method of Array3D.
If you want to make the memory allocation as well as GetPixel lazy, then you can use sequences, as shown in the code below:
let getPixels (bmp:Bitmap) =
    seq {
        for i = 0 to bmp.Height-1 do
            yield seq {
                for j = 0 to bmp.Width-1 do
                    let pixel = bmp.GetPixel(j,i)
                    yield (pixel.R, pixel.G, pixel.B)
            }
    }
