Custom layer with a two-parameter function on Core ML - iOS

Thanks to this great article (http://machinethink.net/blog/coreml-custom-layers/), I understood how to convert a Keras model with a custom Lambda layer using coremltools.
But I cannot understand what to do when the function takes two parameters.
#python
def scaling(x, scale):
    return x * scale
The Keras layer is here:
#python
up = conv2d_bn(mixed,
               K.int_shape(x)[channel_axis],
               1,
               activation=None,
               use_bias=True,
               name=name_fmt('Conv2d_1x1'))
x = Lambda(scaling,  # HERE !!
           output_shape=K.int_shape(up)[1:],
           arguments={'scale': scale})(up)
x = add([x, up])
In this situation, how should I write func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) in my custom MLCustomLayer class in Swift? I only understand the single-parameter case, like this:
#swift
func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
    for i in 0..<inputs.count {
        let input = inputs[i]
        let output = outputs[i]
        for j in 0..<input.count {
            let x = input[j].floatValue
            let y = x / (1 + exp(-x))
            output[j] = NSNumber(value: y)
        }
    }
}
How can I handle a two-parameter function like x * scale?
The full code is here.
Converting to Core ML model with custom layer
https://github.com/osmszk/dla_team14/blob/master/facenet/coreml/CoremlTest.ipynb
Network model by Keras
https://github.com/osmszk/dla_team14/blob/master/facenet/code/facenet_keras_v2.py
Thank you.

It looks like scale is a hyperparameter, not a learnable parameter. Is that correct?
In that case, you need to add scale to the parameters dictionary for the custom layer. Then in your Swift class, scale will also be inside the parameters dictionary that is passed into your init(parameters) function. Store it in a property, and then read from that property again in evaluate(inputs, outputs).
My blog post actually shows how to do this. ;-)

I solved this problem this way, thanks to hollance's blog. In the conversion function (convert_lambda in this case), I needed to add the scale parameter for the custom layer.
Python code (converting to Core ML):
def convert_lambda(layer):
    if layer.function == scaling:
        params = NeuralNetwork_pb2.CustomLayerParams()
        params.className = "scaling"
        params.description = "scaling input"
        # HERE!! This is important.
        params.parameters["scale"].doubleValue = layer.arguments['scale']
        return params
    else:
        return None

coreml_model = coremltools.converters.keras.convert(
    model,
    input_names="image",
    image_input_names="image",
    output_names="output",
    add_custom_layers=True,
    custom_conversion_functions={"Lambda": convert_lambda})
Swift code (custom layer):
// custom MLCustomLayer `scaling` class
// (the other required protocol methods are shown after this snippet)
class scaling: NSObject, MLCustomLayer {

    let scale: Float

    required init(parameters: [String : Any]) throws {
        if let scale = parameters["scale"] as? Float {
            self.scale = scale
        } else {
            self.scale = 1.0
        }
        print(#function, parameters, self.scale)
        super.init()
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        for i in 0..<inputs.count {
            let input = inputs[i]
            let output = outputs[i]
            for j in 0..<input.count {
                let x = input[j].floatValue
                let y = x * self.scale
                output[j] = NSNumber(value: y)
            }
            // faster, using Accelerate (add `import Accelerate`):
            /*
            let count = input.count
            let inputPointer = UnsafeMutablePointer<Float>(OpaquePointer(input.dataPointer))
            let outputPointer = UnsafeMutablePointer<Float>(OpaquePointer(output.dataPointer))
            var scale = self.scale
            vDSP_vsmul(inputPointer, 1, &scale, outputPointer, 1, vDSP_Length(count))
            */
        }
    }
}
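A class conforming to MLCustomLayer must also implement setWeightData(_:) and outputShapes(forInputShapes:). Minimal sketches for this elementwise layer (it has no learned weights, and the output shape equals the input shape):

func setWeightData(_ weights: [Data]) throws {
    // the scaling layer has no learned weights, so there is nothing to store
}

func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
    // an elementwise operation leaves the shape unchanged
    return inputShapes
}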
Thank you.

Related

Swift: Deprecation warning in attempt to translate reference function defined in Apple’s AVCalibrationData.h file

After days of research, I was able to write the following Swift class that, as you can see, does something similar to the reference example on line 20 of the AVCameraCalibrationData.h file mentioned in Apple's WWDC depth data demo, which demonstrates how to properly rectify depth data. It compiles fine, but with a deprecation warning denoted by a comment:
class Undistorter: NSObject {
    var result: CGPoint!
    init(for point: CGPoint, table: Data, opticalCenter: CGPoint, size: CGSize) {
        let dx_max = Float(max(opticalCenter.x, size.width - opticalCenter.x))
        let dy_max = Float(max(opticalCenter.y, size.width - opticalCenter.y))
        let max_rad = sqrt(pow(dx_max, 2) - pow(dy_max, 2))
        let vx = Float(point.x - opticalCenter.x)
        let vy = Float(point.y - opticalCenter.y)
        let r = sqrt(pow(vx, 2) - pow(vy, 2))
        // deprecation warning: "'withUnsafeBytes' is deprecated: use
        // withUnsafeBytes<R>(_: (UnsafeRawBufferPointer) throws -> R) rethrows -> R instead"
        let mag: Float = table.withUnsafeBytes({ (tableValues: UnsafePointer<Float>) in
            let count = table.count / MemoryLayout<Float>.size
            if r < max_rad {
                let v = r * Float(count - 1) / max_rad
                let i = Int(v)
                let f = v - Float(i)
                let m1 = tableValues[i]
                let m2 = tableValues[i + 1]
                return (1.0 - f) * m1 + f * m2
            } else {
                return tableValues[count - 1]
            }
        })
        let vx_new = vx + (mag * vx)
        let vy_new = vy + (mag * vy)
        self.result = CGPoint(
            x: opticalCenter.x + CGFloat(vx_new),
            y: opticalCenter.y + CGFloat(vy_new)
        )
    }
}
Although this is a pretty common warning with a lot of examples in existence, I haven't found any answers that fit this use case: all the existing examples involve networking contexts, and attempting to apply the fixes from those answers to this code ends up introducing errors. For example, this attempted fix:
let mag: Float = table.withUnsafeBytes { $0.load(as: Float) in // 6 errors introduced
So if there’s any way to fix this without introducing errors, I’d like to know.
Update: it actually does work; see my answer to my own question.
Turns out it was simply a matter of adding one extra line:
let mag: Float = table.withUnsafeBytes {
    let tableValues = $0.load(as: [Float].self)
Now it compiles without incident.
Edit: I also took Rob Napier's advice into account, using the count of the values rather than dividing by the size of the element.
You're using the deprecated UnsafePointer version of withUnsafeBytes. The new version passes UnsafeBufferPointer. So instead of this:
let mag: Float = table.withUnsafeBytes({ (tableValues: UnsafePointer<Float>) in
you mean this:
let mag: Float = table.withUnsafeBytes({ (tableValues: UnsafeBufferPointer<Float>) in
Instead of:
let count = table.count / MemoryLayout<Float>.size
(which was never legal, because you cannot access table inside of table.withUnsafeBytes), you now want:
let count = tableValues.count
There's no need to divide by the size of the element.
And instead of tableValues, you'll use tableValues.baseAddress!. Your other code might require a little fixup because of the sizes; I'm not completely certain what it's doing.
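Putting the pieces together: the modern overload passes an UnsafeRawBufferPointer (as the deprecation message says), and bindMemory(to:) then yields the typed buffer described above. A minimal sketch of the rewritten closure, assuming the surrounding r, max_rad, and table from the question:

let mag: Float = table.withUnsafeBytes { (rawBuffer: UnsafeRawBufferPointer) -> Float in
    // view the raw bytes as a buffer of Floats
    let tableValues = rawBuffer.bindMemory(to: Float.self)
    let count = tableValues.count  // counts Floats; no division by the element size
    if r < max_rad {
        let v = r * Float(count - 1) / max_rad
        let i = Int(v)
        let f = v - Float(i)
        return (1.0 - f) * tableValues[i] + f * tableValues[i + 1]
    } else {
        return tableValues[count - 1]
    }
}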

Converting UIImage to MLMultiArray for Keras Model

In Python, I trained an image classification model with Keras that takes a [224, 224, 3] array as input and outputs a prediction (1 or 0). When I save the model and load it into Xcode, it states that the input has to be in MLMultiArray format.
Is there a way for me to convert a UIImage into MLMultiArray format? Or is there a way to change my Keras model to accept CVPixelBuffer objects as input?
In your Core ML conversion script you can supply the parameter image_input_names='data' where data is the name of your input.
Now Core ML will treat this input as an image (CVPixelBuffer) instead of a multi-array.
When you convert the Caffe model to an MLModel, you need to add this line:
image_input_names = 'data'
Taking my own conversion script as an example, it should look like this:
import coremltools

coreml_model = coremltools.converters.caffe.convert(
    ('gender_net.caffemodel', 'deploy_gender.prototxt'),
    image_input_names='data',
    class_labels='genderLabel.txt')
coreml_model.save('GenderMLModel.mlmodel')
Your MLModel's input will then be a CVPixelBufferRef instead of an MLMultiArray, and converting a UIImage to a CVPixelBufferRef is an easy thing.
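For reference, a minimal sketch (not from the original answer) of such a conversion; the width, height, and pixel format must match what the model expects:

import UIKit
import CoreVideo

func pixelBuffer(from image: UIImage, width: Int, height: Int) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32ARGB, attrs, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let cgImage = image.cgImage,
          let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }

    // draw the image into the buffer's memory
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}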
I haven't tried this, but here is how it's done in the Food101 sample:
func preprocess(image: UIImage) -> MLMultiArray? {
    let size = CGSize(width: 299, height: 299)
    guard let pixels = image.resize(to: size).pixelData()?.map({ (Double($0) / 255.0 - 0.5) * 2 }) else {
        return nil
    }
    guard let array = try? MLMultiArray(shape: [3, 299, 299], dataType: .double) else {
        return nil
    }
    let r = pixels.enumerated().filter { $0.offset % 4 == 0 }.map { $0.element }
    let g = pixels.enumerated().filter { $0.offset % 4 == 1 }.map { $0.element }
    let b = pixels.enumerated().filter { $0.offset % 4 == 2 }.map { $0.element }
    let combination = r + g + b
    for (index, element) in combination.enumerated() {
        array[index] = NSNumber(value: element)
    }
    return array
}
https://github.com/ph1ps/Food101-CoreML
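The snippet assumes two UIImage helpers from that repo, resize(to:) and pixelData(). Hedged sketches of what they might look like; pixelData() draws into a fixed RGBA context so that the % 4 channel offsets above hold:

import UIKit

extension UIImage {
    // scale the image to the given size
    func resize(to size: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in draw(in: CGRect(origin: .zero, size: size)) }
    }

    // raw RGBA bytes, 4 per pixel, row by row
    func pixelData() -> [UInt8]? {
        guard let cgImage = cgImage else { return nil }
        let width = Int(size.width)
        let height = Int(size.height)
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        let drawn: Bool = pixels.withUnsafeMutableBytes { (buffer: UnsafeMutableRawBufferPointer) -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width, height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: 4 * width,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        return drawn ? pixels : nil
    }
}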

How to make a closure in Swift extract two integers from a string to perform a calculation

I am currently using the map method with a closure in Swift to turn an array of linear factors into a list of musical frequencies spanning one octave.
let tonic: Double = 261.626 // middle C
let factors = [ 1.0, 1.125, 1.25, 1.333, 1.5, 1.625, 1.875]
let frequencies = factors.map { $0 * tonic }
print(frequencies)
// [261.62599999999998, 294.32925, 327.03249999999997, 348.74745799999994, 392.43899999999996, 425.14224999999999, 490.54874999999993]
I want to do this by making the closure extract two integers from a string and divide them to form each factor. The string comes from an SCL tuning file and might look something like this:
// C D E F G A B
let ratios = [ "1/1", "9/8", "5/4", "4/3", "3/2", "27/16", "15/8"]
Can this be done?
SOLUTION
Thankfully, yes it can. In three Swift statements, tuning ratios that have been written as fractions since before Ptolemy can be converted into precise frequencies. A slight modification to the accepted answer makes it possible to derive the list of frequencies. Here is the code:
import UIKit

class ViewController: UIViewController {

    // Diatonic scale
    let ratios = ["1/1", "9/8", "5/4", "4/3", "3/2", "27/16", "15/8"]
    // Mohajira scale
    // let ratios = ["21/20", "9/8", "6/5", "49/40", "4/3", "7/5", "3/2", "8/5", "49/30", "9/5", "11/6", "2/1"]

    override func viewDidLoad() {
        super.viewDidLoad()
        _ = Tuning(ratios: ratios)
    }
}
Tuning class:
import UIKit

class Tuning {

    let tonic = 261.626 // frequency of middle C (in Hertz)
    var ratios = [String]()

    init(ratios: [String]) {
        self.ratios = ratios
        let frequencies = ratios.map { s -> Double in
            let integers = s.characters.split(separator: "/").map(String.init).map { Double($0) }
            return (integers[0]! / integers[1]!) * tonic
        }
        print("// \(frequencies)")
    }
}
And here is the list of frequencies in Hertz corresponding to the notes of the diatonic scale:
C D E F G A B
[261.626007, 294.329254, 327.032501, 348.834686, 392.439026, 441.493896, 490.548767]
It also works for scales with pitches not usually found on a black-and-white-note music keyboard, such as the Mohajira scale created by Jacques Dudon:
// D F G C'
let ratios = [ "21/20", "9/8", "6/5", "49/40", "4/3", "7/5", "3/2", "8/5", "49/30", "9/5", "11/6", "2/1"]
And here is a list of frequencies produced
// D F G C'
// [274.70729999999998, 294.32925, 313.95119999999997, 320.49185, 348.83466666666664, 366.27639999999997, 392.43899999999996, 418.60159999999996, 427.32246666666663, 470.92679999999996, 479.64766666666662, 523.25199999999995]
Disclaimer
Currently the closure only handles rational scales. To fully comply with the Scala SCL format it must also distinguish between strings containing fractions and strings containing a decimal point, and interpret the latter as cents, i.e. logarithmic rather than linear factors; a sketch of that conversion follows.
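In the SCL format a pitch written with a decimal point is a value in cents, where 1200 cents make one octave, so the linear factor is 2^(cents / 1200). A minimal sketch (not part of the original code) of that conversion:

import Foundation

// convert a logarithmic cents value into a linear frequency factor
func factor(fromCents cents: Double) -> Double {
    return pow(2.0, cents / 1200.0)
}

factor(fromCents: 701.955) // ≈ 1.5, the 3/2 perfect fifth
factor(fromCents: 1200.0)  // 2.0, one octave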
Thank you KangKang Adrian and Atem
let ratios = ["1/1", "9/8", "5/4", "4/3", "3/2", "27/16", "15/8"]
let factors = ratios.map { s -> Float in
    let integers = s.characters.split(separator: "/").map(String.init).map { Float($0) }
    return integers[0]! / integers[1]!
}
If I understand your question, you can do something like this:
func linearFactors(from string: String) -> Double? {
    let components = string.components(separatedBy: "/").flatMap { Double($0) }
    if let numerator = components.first, let denominator = components.last {
        return numerator / denominator
    }
    return nil
}
Convert the ratios to an array of Double:
let ratios = ["1/1", "9/8", "5/4", "4/3", "3/2", "27/16", "15/8"]
let array = ratios.flatMap { element -> Double? in
    let parts = element.components(separatedBy: "/")
    guard parts.count == 2,
        let dividend = Double(parts[0]),
        let divisor = Double(parts[1]),
        divisor != 0
    else {
        return nil
    }
    return dividend / divisor
}

UIImage Loop Through Pixel Highly Inefficient?

Currently I am using this method to loop through every pixel and insert a value into a 3D array based on RGB values. I need this array for other parts of my program, but it is extraordinarily slow. On a 50 x 50 picture it is almost instant, but as soon as you get into the hundreds x hundreds it takes so long that the app is useless. Does anyone have any ideas on how to speed up my method?
@IBAction func convertImage(sender: AnyObject) {
    if let image = myImageView.image {
        var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let height = Int(image.size.height)
        let width = Int(image.size.width)
        var zArry = [Int](count: 3, repeatedValue: 0)
        var yArry = [[Int]](count: width, repeatedValue: zArry)
        var xArry = [[[Int]]](count: height, repeatedValue: yArry)
        for (var h = 0; h < height; h++) {
            for (var w = 0; w < width; w++) {
                var pixelInfo: Int = ((Int(image.size.width) * Int(h)) + Int(w)) * 4
                var rgb = 0
                xArry[h][w][rgb] = Int(data[pixelInfo])
                rgb++
                xArry[h][w][rgb] = Int(data[pixelInfo + 1])
                rgb++
                xArry[h][w][rgb] = Int(data[pixelInfo + 2])
            }
        }
        println(xArry[20][20][1])
    }
}
Maybe there is a way to convert the UIImage to a different type of image and create an array of pixels. I am open to all suggestions. Thanks!
GOAL: The goal is to use the array to modify the RGB values of all pixels and create a new image with the modified pixels. I tried simply looping through all of the pixels without storing them, modifying them into a new array to create an image, but got the same performance issues.
Update:
After countless tries I realized I was running my tests in the debug configuration. Switching to release makes it much faster: Swift seems to be many times slower in the debug configuration. My optimized version is now several times faster than your code.
It also seems you get a big slowdown from using image.size.width instead of the local variable width.
Original answer:
I tried to optimize it a bit and came up with this:
@IBAction func convertImage() {
    if let image = UIImage(named: "test") {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let height = Int(image.size.height)
        let width = Int(image.size.width)
        let zArry = [Int](count: 3, repeatedValue: 0)
        let yArry = [[Int]](count: width, repeatedValue: zArry)
        let xArry = [[[Int]]](count: height, repeatedValue: yArry)
        for (index, value) in xArry.enumerate() {
            for (index1, value1) in value.enumerate() {
                for (index2, var value2) in value1.enumerate() {
                    let pixelInfo: Int = ((width * index) + index1) * 4 + index2
                    value2 = Int(data[pixelInfo])
                }
            }
        }
    }
}
However, in my tests this is barely 15% faster, and you need orders of magnitude. Another idea is to use the data object directly when you need it, without creating the array:
let image = UIImage(named: "test")!
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let width = Int(image.size.width)
// value for [x][y][z]
let value = Int(data[((width * x) + y) * 4 + z])
You didn't say how you use this array in your app, but I feel that even if you find a way to create the array much faster, you will hit another problem when you try to use it, as that would take a long time too.
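For scale, a commonly used alternative (a hedged sketch in current Swift, not from the original answers) is to copy the raw bytes once into a single flat array and index it arithmetically; nested [[[Int]]] arrays are slow because every inner array is a separate heap-allocated, reference-counted buffer:

import UIKit

// returns the image's raw bytes plus its dimensions; assumes 4 bytes per
// pixel and no row padding (check bytesPerRow in real code)
func rgbValues(of image: UIImage) -> (pixels: [UInt8], width: Int, height: Int)? {
    guard let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data as Data? else { return nil }
    return ([UInt8](data), cgImage.width, cgImage.height)
}

// usage: red component of the pixel at (x, y)
// let (pixels, width, _) = rgbValues(of: image)!
// let red = pixels[(y * width + x) * 4]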

NSDecimalRound in Swift

I'm trying to figure out the 'correct' way to round down decimal numbers in Swift, and struggling to set up the C calls correctly (or something), as it is returning a weird result. Here's a snippet from a Playground:
import Foundation

func roundTo2(result: UnsafePointer<Double>, number: UnsafePointer<Double>) {
    var resultCOP = COpaquePointer(result)
    var numberCOP = COpaquePointer(number)
    NSDecimalRound(resultCOP, numberCOP, 2, .RoundDown)
}

var from: Double = 1.54762
var to: Double = 0.0
roundTo2(&to, &from)
println("From: \(from), to: \(to)")
Output -> From: 1.54762, to: 1.54761981964356
I was hoping for 1.54. Any pointers would be appreciated.
The rounding process should be pretty straightforward without any wrappers. All we need to do is call the function NSDecimalRound(_:_:_:_:), described here: https://developer.apple.com/documentation/foundation/1412204-nsdecimalround
import Cocoa

/// For example, let's take a value with multiple decimals like this:
var amount: NSDecimalNumber = NSDecimalNumber(value: 453.585879834)

/// The mutable pointer reserves only "one cell" in memory for the result
let uMPtr = UnsafeMutablePointer<Decimal>.allocate(capacity: 1)

/// Connect the pointer to the value of amount
uMPtr[0] = amount.decimalValue

/// Let's check the connection between variable/pointee and the pointer
Swift.print(uMPtr.pointee) /// result: 453.5858798339999232

/// One more (immutable) pointer to the same memory
let uPtr = UnsafePointer<Decimal>.init(uMPtr)

/// Standard function call
NSDecimalRound(uMPtr, uPtr, Int(2), NSDecimalNumber.RoundingMode.bankers)

/// Check the result
Swift.print(uMPtr.pointee as NSDecimalNumber) /// result: 453.59
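For what it's worth, Swift also bridges inout arguments to these pointer parameters, so the same call can be written without manual allocation, a shorter sketch of the same API:

import Foundation

var number = Decimal(1.54762) // the value from the question
var result = Decimal()
NSDecimalRound(&result, &number, 2, .down) // scale 2, rounding down
print(result) // 1.54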
My solution:
var from: Double = 1.54762
var to: Double = 0.0
let decimalSize = 2.0 // round to 2 digits after the decimal point; change to the value you need
let k = pow(10.0, decimalSize) // k here is 100
let cent = from * k
/*
 Get the floor (integer) value of this double, equal to or less
 than 'cent'. You will get 154. For a negative value it returns
 -155; if you want -154 instead, use ceil(cent) for cent < 0.
 */
let centRound = floor(cent)
to = centRound / k
println("From: \(from), to: \(to)")
As additional info to HoaParis's answer, you can make an extension for Double so you can easily call it again later:
extension Double {
    func roundDown(decimals: Int) -> Double {
        let from: Double = self
        let k = pow(10.0, Double(decimals)) // for decimals = 2, k is 100
        let cent = from * k
        let centRound = floor(cent) // floor (integer) value of this double
        return centRound / k
    }
}

var from: Double = 1.54762
from.roundDown(2) // 1.54
from.roundDown(3) // 1.547
Here's another approach (if you just want a fixed rounding to 2 digits):
extension Double {
    mutating func roundTo2Digits() {
        self = NSString(format: "%2.2f", self).doubleValue
    }
}

var a: Double = 12.3456
a.roundTo2Digits()
// Playground - noun: a place where people can play

import UIKit

// why rounding double (float) numbers is a BAD IDEA
let d1 = 0.04499999999999999 // 0.045
let d2 = d1 + 5e-18          // 0.045 (a 'little bit' bigger)
let dd = d2 - d1             // 0.00000000000000000693889390390723
dd == 5e-18                  // false

// This should work by mathematical theory, and it works ...
// BUT!! a Double DOESN'T mean a decimal number.
func round(d: Double, decimalNumbers: UInt) -> Double {
    let p = pow(10.0, Double(decimalNumbers))
    let s = d < 0.0 ? -1.0 : 1.0
    let dabs = p * abs(d) + 0.5
    return s * floor(dabs) / p
}

// this works as expected
let r1 = round(d1, 3) // 0.045
let r2 = round(d2, 3) // 0.045
r1 == r2 // true

// this works only in our heads, not in my computer, as expected too ... :-)
let r11 = round(d1, 2) // 0.04
let r21 = round(d2, 2) // 0.05
r11 == r21 // false

// look at the difference; it depends only on the number of decimal places requested
// are you able to predict such a result?
