I am running into some issues trying to use the Accelerate framework's vDSP API from Swift. Obviously I am doing something wrong, since the compiler gives me all sorts of errors:
var srcAsFloat:CConstPointer<CFloat> = CFloat[](count: Int(width*height), repeatedValue: 0)
var dstAsFloat = CFloat[](count: Int(width*height), repeatedValue: 0)
if shouldClip {
    var min:CFloat = 0.0
    var max:CFloat = 255.0
    var l:vDSP_Stride = Int(width*height)
    vDSP_vclip(CConstPointer<CFloat>(&dstAsFloat), vDSP_Stride(1), CConstPointer<CFloat>(&min), CConstPointer<CFloat>(&max), CMutablePointer<CFloat>(&dstAsFloat), vDSP_Stride(1), l)
}
The error:
error: could not find an overload for 'init' that accepts the supplied arguments
vDSP_vclip(CConstPointer<CFloat>(&dstAsFloat),
           vDSP_Stride(1),
           CConstPointer<CFloat>(&min),
           CConstPointer<CFloat>(&max),
           CMutablePointer<CFloat>(&dstAsFloat),
           vDSP_Stride(1),
           l)
I've tried to cast the heck out of it but so far, no luck.
Okay, I figured out a (probably sub-optimal) solution. I decided to bridge to Objective-C (and used a slightly different function).
main.swift
import Foundation
func abs(x: matrix) -> matrix {
    let N = x.count
    var arg1 = NSArray(array: x)
    var yy = abs_objc(arg1, CInt(N))
    var y = zeros(N)
    for i in 0..N {
        y[i] = Double(yy[i])
    }
    return y
}
abs_objc.m
#import <Accelerate/Accelerate.h>
double* abs_objc(NSArray * x, int N){
    // convert the input to a plain double *
    double * xx = (double *)malloc(sizeof(double) * N);
    for (int i = 0; i < [x count]; i++) {
        xx[i] = [[x objectAtIndex:i] doubleValue];
    }
    // init'ing the output to zeros
    double * y = (double *)malloc(sizeof(double) * N);
    for (int i = 0; i < N; i++){
        y[i] = 0;
    }
    vDSP_vabsD(xx, 1, y, 1, N);
    free(xx); // the input copy is no longer needed (the caller takes ownership of y)
    return y;
}
swix-Bridging-Header.h
#import <Foundation/Foundation.h>
double* abs_objc(NSArray * x, int N);
I've been working with vDSP to create a general matrix language, and I have found this way to be the easiest approach to getting the correct pointer types.
Hopefully you find it useful (this might not be optimal either, but it's a start).
import Accelerate
func myFunction(width: Float, height: Float) {
    var dstAsFloat = CFloat[](count: Int(width*height), repeatedValue: 0)
    var min: CFloat = 0.0
    var max: CFloat = 255.0
    func vclipPointerConversion(dstAsFloat: CMutablePointer<CFloat>) {
        vDSP_vclip(CConstPointer<CFloat>(nil, dstAsFloat.value), vDSP_Stride(1),
                   &min, &max, dstAsFloat, vDSP_Stride(1), CUnsignedLong(width*height))
    }
    vclipPointerConversion(&dstAsFloat)
    //...
    // return whatever you wanted
}
I got it to work with a few tweaks. Notice that the first parameter (the source) doesn't have an ampersand, since it only needs a CConstPointer, while the destination parameter does, even though I use the same array for src and dst. I also fixed the necessary casts: the clip limits need to be CFloats. Here is the code:
let width: UInt = CGBitmapContextGetWidth(context)
let height: UInt = CGBitmapContextGetHeight(context)
var dstAsFloat = CFloat[](count: Int(width*height), repeatedValue: 0)
if shouldClip {
    var min: CFloat = 0.0
    var max: CFloat = 255.0
    vDSP_vclip(dstAsFloat, CLong(1), &min, &max, &dstAsFloat, CLong(1), UInt(width*height))
}
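For anyone hitting this with a current toolchain: the CConstPointer/CMutablePointer wrappers disappeared after the betas, and Swift arrays now bridge to C pointers automatically. A minimal sketch of the same clip, assuming Swift 5 and the vDSP.clip convenience from iOS 13/macOS 10.15 (width, height, and shouldClip as above):

import Accelerate

var dstAsFloat = [Float](repeating: 0, count: Int(width * height))
// ... fill dstAsFloat with pixel data ...
if shouldClip {
    // Clamp every element into the range 0...255.
    dstAsFloat = vDSP.clip(dstAsFloat, to: 0 ... 255)
}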
Related
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments but the program crashes. Here is my shader code which compiles successfully.
extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
float2 coord1 = t1.coord();
float2 coord2 = t2.coord();
float4 innerRect = t2.extent();
float minX = innerRect.x + time*innerRect.z;
float minY = innerRect.y + time*innerRect.w;
float cropWidth = (1 - time) * innerRect.w;
float cropHeight = (1 - time) * innerRect.z;
float4 s1 = t1.sample(coord1);
float4 s2 = t2.sample(coord2);
if ( coord1.x > minX && coord1.x < minX + cropWidth && coord1.y > minY && coord1.y <= minY + cropHeight) {
return s1;
} else {
return s2;
}
}
And it crashes on initialization.
class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else {
            return nil
        }
        guard let foregroundImage = foregroundImage else {
            return nil
        }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
    }
}
It crashes in the try line with the following error:
Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError
If I replace the kernel code with the following, it works like a charm:
extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time)
{
return mix(s1, s2, time);
}
So there are no obvious errors in the code, such as passing an incorrect function name.
For your use case, you actually can use a CIColorKernel. You just have to pass the extent of your render destination to the kernel as well, then you don't need the sampler to access it.
The kernel would look like this:
extern "C" float4 wipeLinear(coreimage::sample_t t1, coreimage::sample_t t2, float4 destinationExtent, float time, coreimage::destination destination) {
float minX = destinationExtent.x + time * destinationExtent.z;
float minY = destinationExtent.y + time * destinationExtent.w;
float cropWidth = (1.0 - time) * destinationExtent.w;
float cropHeight = (1.0 - time) * destinationExtent.z;
float2 destCoord = destination.coord();
if ( destCoord.x > minX && destCoord.x < minX + cropWidth && destCoord.y > minY && destCoord.y <= minY + cropHeight) {
return t1;
} else {
return t2;
}
}
And you call it like this:
let destinationExtent = CIVector(cgRect: backgroundImage.extent)
return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, destinationExtent, inputTime])
Note that the last destination parameter in the kernel is passed automatically by Core Image. You don't need to pass it with the arguments.
Yes, you can't use samplers in CIColorKernel or CIBlendKernel. Those kernels are optimized for the use case where you have a 1:1 mapping from input pixel to output pixel. This allows Core Image to execute multiple of these kernels in one command buffer since they don't require any intermediate buffer writes.
A sampler would allow you to sample the input at arbitrary coordinates, which is not allowed in this case.
You can simply use a CIKernel instead. It's meant to be used when you need to sample the input more freely.
To initialize the kernel, you need to adapt the code like this:
static var kernel: CIKernel = {
    let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
    let data = try! Data(contentsOf: url)
    return try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
}()
When calling the kernel, you now need to also provide a ROI callback, like this:
let roiCallback: CIKernelROICallback = { (index, rect) -> CGRect in
    return rect // you need the same region from the input as the output
}

// or even shorter
let roiCallback: CIKernelROICallback = { $1 }
return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, roiCallback: roiCallback, arguments: [backgroundImage, foregroundImage, inputTime])
Bonus answer:
For this blending effect, you actually don't need any kernel at all. You can achieve all that with simple cropping and compositing:
class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: CGFloat = 0.0

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }

        // crop the foreground based on time
        var foregroundCrop = foregroundImage.extent
        foregroundCrop.size.width *= inputTime
        foregroundCrop.size.height *= inputTime

        return foregroundImage.cropped(to: foregroundCrop).composited(over: backgroundImage)
    }
}
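For reference, using the filter is then just a matter of setting the properties (bg and fg here stand for whatever CIImages you already have):

let renderer = CIWipeRenderer()
renderer.backgroundImage = bg
renderer.foregroundImage = fg
renderer.inputTime = 0.5 // halfway through the wipe
let result = renderer.outputImage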
Is there any way to round up a double value?
I want the result always rounded up.
int offSet = (totalRecords / 10).round();
It's ceil:
Returns the least integer no smaller than this.
int offSet = (totalRecords / 10).ceil();
Here I'm rounding up to the next whole number or to the next 0.5.
For example: if it's 6.6, round to 7.0; if it's 6.2, round to 6.5. See the code below:
String arredonde(String n) {
  final List x = n.split('.'); // break the string into [integer part, fractional part]
  if (x.length > 1) { // length 1 means no '.', i.e. already a whole number
    int fstNmbr = int.parse(x[0]);
    final int lstNmbrs = int.parse(x[1]);
    if (lstNmbrs > 5) {
      fstNmbr = fstNmbr + 1;
      final String finalNumber = fstNmbr.toStringAsFixed(1);
      return finalNumber;
    } else {
      if (lstNmbrs != 0) {
        final double finalNumber = fstNmbr + 0.5;
        return finalNumber.toStringAsFixed(1);
      } else {
        return n;
      }
    }
  } else {
    return n;
  }
}
Try
num.parse((totalRecords / 10).toStringAsFixed(3))
if you want 3 decimals.
Now you have something like what you want. I chose "greater than 5" as the threshold to round up; you can change that if you want:
num offSet = (totalRecords / 10);
var eval = offSet.toStringAsFixed(1).split('.');
var res = int.parse(eval[1]) > 5 ? int.parse(eval[0]) + 1 : int.parse(eval[0]);
print(res);
I'm not sure if there is an issue or not, so I'm just going to write it down.
I'm developing in Swift with Xcode 7.2, on an iPhone 5s, and calculating execution time using
NSDate.timeIntervalSinceReferenceDate()
I created two arrays, one with 200,000 elements and one with 20, and tried random access to their elements. Accessing elements of the big one is almost 55 times slower! I know it's bigger, but isn't this O(1)?
I also tried the same thing in Java, and the access speed is the same for the big and small arrays.
From the CFArray header in Apple's documentation, I found this:
Accessing any value at a particular index in an array is at worst O(log n), but should usually be O(1).
But I think this can't be true, based on the numbers I've tested.
I know I didn't run a big test or anything special, but the fact that it's not working is really messing with my head!
I kind of need this for what I'm working on, and the algorithm works in Java on Android but not in Swift on iOS.
let bigSize: Int = 200000
var bigArray = [Int](count: bigSize, repeatedValue: 0)
let smallSize: Int = 20
var smallArray = [Int](count: smallSize, repeatedValue: 0)

for i in 0..<bigSize {
    bigArray[i] = i + 8 * i
}
for i in 0..<smallSize {
    smallArray[i] = i + 9 * i
}

let indexBig = Int(arc4random_uniform(UInt32(bigSize)) % UInt32(bigSize))
let indexSmall = Int(arc4random_uniform(UInt32(smallSize)) % UInt32(smallSize))

var a = NSDate.timeIntervalSinceReferenceDate()
print(bigArray[indexBig])
var b = NSDate.timeIntervalSinceReferenceDate()
print(b-a) // prints 0.000888049602508545

a = NSDate.timeIntervalSinceReferenceDate()
print(smallArray[indexSmall])
b = NSDate.timeIntervalSinceReferenceDate()
print(b-a) // prints 6.90221786499023e-05
Java:
(Accessing one element is so fast in Java on a PC that I access more elements, but the same number for both arrays.)
int bigSize = 200000;
int[] bigArray = new int[bigSize];
Random rand = new Random();
int smallSize = 20;
int[] smallArray = new int[smallSize];

for (int i = 0; i < bigSize; i++)
    bigArray[i] = i + i * 8;
for (int i = 0; i < smallSize; i++)
    smallArray[i] = i + i * 8;

int smallIndex = rand.nextInt(smallSize);
int bigIndex = rand.nextInt(bigSize);

int sum = 0;
long a = System.currentTimeMillis();
for (int i = 0; i < 10000; i++) {
    sum += bigArray[rand.nextInt(bigSize)];
}
System.out.println(sum);
long b = System.currentTimeMillis();
System.out.println(b - a); // prints 2

a = System.currentTimeMillis();
sum = 0;
for (int i = 0; i < 10000; i++) {
    sum += smallArray[rand.nextInt(smallSize)];
}
System.out.println(sum);
b = System.currentTimeMillis();
System.out.println(b - a); // prints 1
If you change the order of your two tests, you'll find that the performance is flipped. In short, the first test runs more slowly than the second one, regardless of whether it's the small array or the big one. This is a result of some dynamics of print. If you do a print before you perform the tests, the delay resulting from the first print is eliminated.
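For example, a minimal sketch of that warm-up (the same measurement code as in your question, just with one print issued first):

print("warm up") // absorb the one-time print setup cost before timing anything
var a = NSDate.timeIntervalSinceReferenceDate()
print(bigArray[indexBig])
var b = NSDate.timeIntervalSinceReferenceDate()
print(b - a) // now in the same ballpark as the small-array timing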
A better way to test this would be to create a unit test, which (a) repeats the subscript operator many times; and (b) uses measureBlock to repeat the test a few times to check for standard deviation and the like.
When I do that, I find the access times are indistinguishable, consistent with O(1). These were my unit tests:
let bigSize: Int = 200_000
let smallSize: Int = 20

func testBigArrayPerformance() {
    let size = bigSize
    let array = Array(0 ..< size).map { $0 + 8 * $0 }
    var value = 0
    measureBlock {
        let baseIndex = Int(arc4random_uniform(UInt32(size)))
        for index in 0 ..< 1_000_000 {
            value += array[(baseIndex + index) % size]
        }
    }
    print(value)
    print(array.count)
}

func testSmallArrayPerformance() {
    let size = smallSize
    let array = Array(0 ..< size).map { $0 + 8 * $0 }
    var value = 0
    measureBlock {
        let baseIndex = Int(arc4random_uniform(UInt32(size)))
        for index in 0 ..< 1_000_000 {
            value += array[(baseIndex + index) % size]
        }
    }
    print(value)
    print(array.count)
}
Admittedly, I've added some mathematical operations that change the index (my intent was to make sure the compiler didn't do some radical optimization that removed my attempt to repeat the subscript operation), and the overhead of that mathematical operation will dilute the subscript operator performance difference. But, even when I simplified the index operator, the performance between the two renditions was indistinguishable.
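If you'd rather not spin up XCTest, the same point can be checked with a standalone snippet. Here is a minimal sketch in modern Swift (the timeRandomAccess helper is hypothetical, written for this answer); the idea is to amortize timer and print noise over many accesses instead of timing a single subscript:

import Dispatch

func timeRandomAccess(_ array: [Int], iterations: Int = 1_000_000) -> Double {
    var sum = 0
    let start = DispatchTime.now()
    for i in 0 ..< iterations {
        sum &+= array[(i &* 31) % array.count] // cheap deterministic index scatter
    }
    let elapsed = DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds
    print(sum) // use the result so the loop isn't optimized away
    return Double(elapsed) / Double(iterations) // average nanoseconds per access
}

let big = Array(0 ..< 200_000)
let small = Array(0 ..< 20)
print(timeRandomAccess(big), timeRandomAccess(small)) // the two should be comparable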
I have the following for loop in Objective-C code and am trying to translate it to Swift.
double lastAx[4], lastAy[4], lastAz[4];
for (int i = 0; i < 4; ++i) {
    lastAx[i] = lastAy[i] = lastAz[i] = 0;
}
My code so far gives me the error "Type 'Double' has no subscript members":
var lastAx: Double = 4
var lastAy: Double = 4
var lastAz: Double = 4
for i: Int32 in 0 ..< 4 {
    lastAx[i] = lastAy[i] = lastAz[i] = 0
}
What am I missing? Help is very appreciated.
Declare lastAx, lastAy, lastAz as arrays with init(count:repeatedValue:) initializer:
var lastAx = [Double](count:4, repeatedValue: 0)
var lastAy = [Double](count:4, repeatedValue: 0)
var lastAz = [Double](count:4, repeatedValue: 0)
Also, you won't have to zero them, because this initializer sets all of the values to zero. You won't need the loop from the original code, so just delete it.
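In Swift 3 and later the initializer was renamed to init(repeating:count:), so the modern spelling is:

var lastAx = [Double](repeating: 0, count: 4)
var lastAy = [Double](repeating: 0, count: 4)
var lastAz = [Double](repeating: 0, count: 4)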
I am using Accord.NET in F# for the first time and I am having problems creating the function to calculate the distance for KNN.
Here is my code
static member RunKNN =
    let inputs = MachineLearningEngine.TrainingInputClass
    let outputs = MachineLearningEngine.TrainingOutputClass
    let knn = new KNearestNeighbors<int*int*int*int>(1, inputs, outputs, null)
    let input = 1, 1, 1, 1
    knn.Compute(input)
When I swap out the null for a function like this
let distanceFunction = fun (a:int,b:int,c:int,d:int)
                           (e:int,f:int,g:int,h:int)
                           (i:float) ->
    0
I get an exception like this:
Error 1 This expression was expected to have type
    System.Func<(int * int * int * int),(int * int * int * int),float> but here has type
    int * int * int * int -> int * int * int * int -> float -> int
So far, the only article I found close to my problem is this one. Apparently, there is a problem with how F# and C# handle delegates?
I posted this same question on the Google group for Accord.NET here.
Thanks in advance
Declare the distance function like this:
let distanceFunction (a:int,b:int,c:int,d:int) (e:int,f:int,g:int,h:int) =
    0.0
(it takes two tuples in input and returns a float), and then create a delegate from it:
let distanceDelegate =
    System.Func<(int * int * int * int),(int * int * int * int),float>(distanceFunction)
Passing this delegate to Accord.NET should do the trick.
I would guess you should use the tuple form, like so:
let distanceFunction = fun ((a:int,b:int,c:int,d:int),
                            (e:int,f:int,g:int,h:int),
                            (i:float)) ->
    0