Language:
Swift 3
What I am using:
Hardware: an iPhone 6 and a FLIR One thermal camera
Software: Xcode 8.1 and the locked-down FLIROne SDK (written and documented for Objective-C, and sparsely documented at that)
What I am trying to do:
Get the temperature of the center pixel from the returned camera stream
The problem:
I have everything in the app I am building set up and working as expected, except for getting this thermal data. I have been able to get a temperature from the stream returned by the camera, but only for the 0,0 (top-left) pixel. I have crosshairs in the center of the image view that the stream feeds into, and I would like to read the pixel at this center point (the exact center of the image). I have been working on this for two days and I can't seem to figure out how to do it. The SDK does not allow you to specify which pixel to read, and from what I have read online, you have to loop through each pixel and stop at the center one.
The code below gets the temperature from pixel 0,0 and displays it correctly; I want temperature to read the center pixel instead. temperature MUST be UInt16 to provide a correct Kelvin readout (from what I understand at least), but I receive the radiometric data from Data! as UInt8. NOTE: NSData! does not work with these delegates. Attempting to use it (even in Swift 2.3) causes the delegate to never fire; I have to use Data! for the delegate to run at all. I found it very odd that I couldn't use Swift 2.3, because it only has NSData. If this is because I screwed something up, please let me know.
func flirOneSDKDelegateManager(_ delegateManager: FLIROneSDKDelegateManager!, didReceiveRadiometricData radiometricData: Data!, imageSize size: CGSize) {
    let byteArray = radiometricData?.withUnsafeBytes {
        [UInt8](UnsafeBufferPointer(start: $0, count: (radiometricData?.count)!))
    }
    let temperature = UnsafePointer(byteArray!).withMemoryRebound(to: UInt16.self, capacity: 1) {
        $0.pointee
    }
    debugPrint(temperature)
    DispatchQueue.main.async {
        self.tempLabel.text = NSString(format: "%.1f", self.convertToFarenheit(kelvin: temperature)) as String
    }
}
I am new to Swift, so I am not sure this is the best approach; if you have a better way, please advise.
Current solution:
Gets the center pixel, but not dynamically (which is what I want)
let byteArray = radiometricData?.withUnsafeBytes {
    [UInt8](UnsafeBufferPointer(start: $0, count: (radiometricData?.count)!))
}
let bytes: [UInt8] = [(byteArray?[74170])!, (byteArray?[74171])!]
let temperature = UnsafePointer(bytes).withMemoryRebound(to: UInt16.self, capacity: 1) {
    $0.pointee
}
Assuming you are trying to read some UInt16 values from Swift's Data:
import Foundation

var data = Data([1, 0, 2, 0, 255, 255, 1]) // 7 bytes
var arr: [UInt16] = []
// Only 6 of the bytes can be interpreted as UInt16 values, so the last one has to be ignored ...
for i in stride(from: 0, to: 2 * (data.count / 2), by: MemoryLayout<UInt16>.stride) {
    arr.append(data.subdata(in: i..<(i + MemoryLayout<UInt16>.stride)).withUnsafeBytes { $0.pointee })
}
print(arr) // [1, 2, 65535]
or you could use something like
let arr0: [UInt16] = data.withUnsafeBytes { (p: UnsafePointer<UInt8>) -> [UInt16] in
    let capacity = data.count / MemoryLayout<UInt16>.stride
    return p.withMemoryRebound(to: UInt16.self, capacity: capacity) {
        var arr = [UInt16]()
        for i in 0..<capacity {
            arr.append(($0 + i).pointee)
        }
        return arr
    }
}
print(arr0) // [1, 2, 65535]
or
extension Data {
    func scan<T>(offset: Int, bytes: Int) -> T {
        return self.subdata(in: offset..<(offset + bytes)).withUnsafeBytes { $0.pointee }
    }
}
let k: UInt16 = data.scan(offset: 2, bytes: MemoryLayout<UInt16>.stride)
print(k) // 2
or even better
extension Data {
    func scan<T>(from: Int) -> T {
        return self.withUnsafeBytes { (p: UnsafePointer<UInt8>) -> T in
            p.advanced(by: from).withMemoryRebound(to: T.self, capacity: 1) {
                $0.pointee
            }
        }
    }
}
let u0: UInt16 = data.scan(from: 2)
print(u0) // 2
or
extension Data {
    func scan2<T>(from: Int) -> T {
        return self.withUnsafeBytes {
            // Precondition: the underlying pointer plus offset is properly aligned for accessing T.
            // Precondition: the memory is initialized to a value of some type, U, such that T is layout compatible with U.
            UnsafeRawPointer($0).load(fromByteOffset: from, as: T.self)
        }
    }
}
let u2: UInt16 = data.scan2(from: 2)
print(u2) // 2
What is the right offset in your Data to read the value? That is hard to say from the information you provided in your question.
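That said, if the FLIR stream is a tightly packed, row-major buffer of 16-bit values whose dimensions match the size parameter of your delegate callback (an assumption on my part; the SDK documentation does not spell out the layout), you could derive the center offset from size instead of hardcoding it, using the scan(from:) extension above:

let width = Int(size.width)
let height = Int(size.height)
// Pixel index of the exact center, then its byte offset in the buffer.
let centerPixelIndex = (height / 2) * width + (width / 2)
let byteOffset = centerPixelIndex * MemoryLayout<UInt16>.stride
let centerKelvin: UInt16 = radiometricData.scan(from: byteOffset)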
In my application, I used VNImageRequestHandler with a custom MLModel for object detection.
The app works fine with iOS versions before 14.5.
When iOS 14.5 arrived, it broke everything.
Whenever try handler.perform([visionRequest]) throws an error (Error Domain=com.apple.vis Code=11 "encountered unknown exception" UserInfo={NSLocalizedDescription=encountered unknown exception}), the pixelBuffer memory is held and never released. This fills the AVCaptureOutput buffer pool, so no new frames arrive.
I had to change the code as shown below: by copying the pixelBuffer to another variable I solved the problem of new frames not arriving, but the memory leak still happens.
Because of the memory leak, the app crashes after some time.
Note that before iOS 14.5, detection worked perfectly and try handler.perform([visionRequest]) never threw any error.
Here is my code:
private func predictWithPixelBuffer(sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    // Get additional info from the camera.
    var options: [VNImageOption: Any] = [:]
    if let cameraIntrinsicMatrix = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        options[.cameraIntrinsics] = cameraIntrinsicMatrix
    }
    autoreleasepool {
        // iOS 14.5 bug: when the Vision request fails, the pixel buffer is leaked and
        // the AVCaptureOutput buffer pool fills up, so no new frames are delivered.
        // Temporary workaround: copy the pixel buffer to a new buffer. This currently
        // increases memory use a lot as well. Need to find a better way.
        // (copy() here is a custom CVPixelBuffer deep-copy extension, not shown.)
        var clonePixelBuffer: CVPixelBuffer? = pixelBuffer.copy()
        let handler = VNImageRequestHandler(cvPixelBuffer: clonePixelBuffer!, orientation: orientation, options: options)
        print("[DEBUG] detecting...")
        do {
            try handler.perform([visionRequest])
        } catch {
            delegate?.detector(didOutputBoundingBox: [])
            failedCount += 1
            print("[DEBUG] detect failed \(failedCount)")
            print("Failed to perform Vision request: \(error)")
        }
        clonePixelBuffer = nil
    }
}
Has anyone experienced the same problem? If so, how did you fix it?
iOS 14.7 Beta available on the developer portal seems to have fixed this issue.
I have a partial fix for this using Matthijs Hollemans' CoreMLHelpers library.
The model I use has 300 classes and 2363 anchors. I used a lot of the code Matthijs provided here to convert the model to MLModel.
In the last step a pipeline is built using the 3 sub-models: raw_ssd_output, decoder, and nms. For this workaround you need to remove the nms model from the pipeline and output raw_confidence and raw_coordinates instead.
In your app you need to add the code from CoreMLHelpers.
Then add this function to decode the output from your MLModel:
func decodeResults(results: [VNCoreMLFeatureValueObservation]) -> [BoundingBox] {
    let raw_confidence: MLMultiArray = results[0].featureValue.multiArrayValue!
    let raw_coordinates: MLMultiArray = results[1].featureValue.multiArrayValue!
    print(raw_confidence.shape, raw_coordinates.shape)
    var boxes = [BoundingBox]()
    let startDecoding = Date()
    for anchor in 0..<raw_confidence.shape[0].int32Value {
        var maxInd: Int = 0
        var maxConf: Float = 0
        for score in 0..<raw_confidence.shape[1].int32Value {
            let key = [anchor, score] as [NSNumber]
            let prob = raw_confidence[key].floatValue
            if prob > maxConf {
                maxInd = Int(score)
                maxConf = prob
            }
        }
        let y0 = raw_coordinates[[anchor, 0] as [NSNumber]].doubleValue
        let x0 = raw_coordinates[[anchor, 1] as [NSNumber]].doubleValue
        let y1 = raw_coordinates[[anchor, 2] as [NSNumber]].doubleValue
        let x1 = raw_coordinates[[anchor, 3] as [NSNumber]].doubleValue
        let width = x1 - x0
        let height = y1 - y0
        let x = x0 + width / 2
        let y = y0 + height / 2
        let rect = CGRect(x: x, y: y, width: width, height: height)
        let box = BoundingBox(classIndex: maxInd, score: maxConf, rect: rect)
        boxes.append(box)
    }
    let finishDecoding = Date()
    let keepIndices = nonMaxSuppressionMultiClass(numClasses: raw_confidence.shape[1].intValue, boundingBoxes: boxes, scoreThreshold: 0.5, iouThreshold: 0.6, maxPerClass: 5, maxTotal: 10)
    let finishNMS = Date()
    var keepBoxes = [BoundingBox]()
    for index in keepIndices {
        keepBoxes.append(boxes[index])
    }
    print("Time Decoding", finishDecoding.timeIntervalSince(startDecoding))
    print("Time Performing NMS", finishNMS.timeIntervalSince(finishDecoding))
    return keepBoxes
}
Then when you receive the results from Vision, you call the function like this:
if let rawResults = vnRequest.results as? [VNCoreMLFeatureValueObservation] {
    let boxes = self.decodeResults(results: rawResults)
    print(boxes)
}
This solution is slow because of the way I move the data around and build my list of BoundingBox types. It would be much more efficient to process the MLMultiArray data through its underlying pointers, and perhaps use Accelerate to find the maximum score and best class for each anchor box.
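As a sketch of that idea (assuming raw_confidence is a contiguous Float32 MLMultiArray of shape [anchors, classes]; both assumptions are worth verifying against your model), the per-anchor argmax could be done with vDSP:

import Accelerate
import CoreML

func bestClassPerAnchor(_ rawConfidence: MLMultiArray) -> [(classIndex: Int, score: Float)] {
    let numAnchors = rawConfidence.shape[0].intValue
    let numClasses = rawConfidence.shape[1].intValue
    // Assumes .float32 data laid out contiguously, one row of scores per anchor.
    let ptr = rawConfidence.dataPointer.bindMemory(to: Float.self, capacity: numAnchors * numClasses)
    var best = [(classIndex: Int, score: Float)]()
    best.reserveCapacity(numAnchors)
    for anchor in 0..<numAnchors {
        var maxValue: Float = 0
        var maxIndex: vDSP_Length = 0
        // vDSP_maxvi finds the maximum value and its index in one pass.
        vDSP_maxvi(ptr + anchor * numClasses, 1, &maxValue, &maxIndex, vDSP_Length(numClasses))
        best.append((classIndex: Int(maxIndex), score: maxValue))
    }
    return best
}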
In my case it helped to disable the neural engine by forcing Core ML to run on CPU and GPU only. This is often slower but doesn't throw the exception (at least in our case). In the end we implemented a policy to force some of our models off the neural engine for certain iOS devices.
See MLModelConfiguration.computeUnits to constrain the hardware a Core ML model can use.
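A minimal sketch of that configuration (YourModel stands in for the Xcode-generated class of your own model):

import CoreML

let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU // keep the model off the Apple Neural Engine
let model = try YourModel(configuration: config)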
I'm starting with my first app for iOS and I am trying to use gyro data to play a whip sound when you flick your phone.
From what I can tell, I should be using CoreMotion to get the state of the gyro, then do some math to work out when a whip-like gesture is made, and then run my function?
This is what I have so far - this is my ContentView.swift file.
import SwiftUI
import AVFoundation
import CoreMotion

let popSound = Bundle.main.url(forResource: "whip", withExtension: "mp3")
var audioPlayer = AVAudioPlayer()
var motionManager: CMMotionManager!

func audioPlayback() {
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: popSound!)
        audioPlayer.play()
    } catch {
        print("couldn't load sound file")
    }
}

struct ContentView: View {
    var body: some View {
        Text("Press button!")
        Button(action: {
            audioPlayback()
        }, label: {
            Text("Press me!")
        })
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
Currently it's set to a button. Can someone link me to a resource, or walk me through this?
Usually when dealing with such devices you can either get a current snapshot or request a feed of snapshot changes. In your case the promising methods seem to be startAccelerometerUpdates and startDeviceMotionUpdates on CMMotionManager. I am pretty sure that somewhere in there is sufficient information to detect the gesture you are describing.
If you dig into these methods you will see that you get a feed of "frames", where each frame describes the situation at a certain time.
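For instance, a device-motion feed could be set up like this (a minimal sketch; the 60 Hz rate is just a starting point to tune):

import CoreMotion

let motionManager = CMMotionManager()

func startMotionFeed() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let motion = motion else { return }
        // One "frame": attitude, rotation rate and user acceleration at a point in time.
        print(motion.attitude, motion.rotationRate, motion.userAcceleration)
    }
}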
Since you are detecting a gesture you are not interested in a single frame but rather a series of frames. So probably the first thing you need is some object to which you can append frames, and which will evaluate whether the current set of frames corresponds to your gesture or not. It should also be able to discard data that is no longer interesting; for instance, frames older than 3 seconds can be discarded, as this gesture should never take more than 3 seconds.
So now your problem is split into 2 parts. The first part is creating an object that is able to collect frames. I would give it a public method like appendFrame or appendSnapshot, and keep collecting frames on it. The object also needs to be able to report back that it has detected the required gesture, so that you can play the sound at that point. Without the detection in place you could mock this: for instance, after 100 frames the buffer is cleared and the notification is reported back, which then triggers the sound. So no detection at this point, but everything else.
The second part is the detection itself. You now have a pool of samples, frames or snapshots, and you can aggregate the data any way you want at any time. You would probably use a secondary thread to process the data, so the UI is not laggy and you can throttle how much CPU power you put into it. As for the detection itself, I would say you should record some samples and try to figure out the "math" part. When you have some idea, or can at least present the community with some recordings, you could ask another, more specific question about that. It does look like a textbook case for machine learning, for instance.
From a mathematical point of view there may be some shortcuts. A very simple example would be just looking at the direction of your device as a normalized direction (x, y, z). I think you can already get that very easily from native components. In a "chopping" motion we expect that the rotation suddenly (nearly) stops and was recently (nearly) 90 degrees offset from the current direction.
Speed:
Assuming you have an array of directions such as let directions: [(x: CGFloat, y: CGFloat, z: CGFloat)], then you could identify rotation speed changes with the length of the cross product:
let rotationSpeed = length(cross(directions[index], directions[index+1]))
The speed should always be a value between 0 and 1, where a maximum of 1 would mean a 90-degree change between samples. Hopefully it never comes to that and you stay at values between 0 and 0.3. If you DO get values larger than 0.5, the frame rate of your device is too low and such samples are best discarded.
Using this approach you can map your array of direction vectors to an array of speeds, rotationSpeeds: [Float], which is more convenient to work with. You are now looking for a part of this array where the rotation speed suddenly drops from a high value to a low value. What those values are you will need to test and tweak yourself. But a "sudden drop" may not occur across only 2 sequential samples; you might need to find, for instance, 5 high-speed frames followed by 2 low-speed frames, or even more than that.
Now that you have found such a point, you have a candidate for the end of your chop. From there you can go backwards and check all frames going back in time, up to somewhere between 0.5 and 1.0 seconds before the candidate (again, a value you will need to try out yourself). If any of these frames is nearly 90 degrees away from the candidate, you have your gesture. Something like the following should do:
length(cross(directions[index], directions[candidateIndex])) > 0.5
where the 0.5 is again something you will need to test. The closer to 1.0 the more precise the gesture needs to be. I think 0.5 should be pretty good to begin with.
Perhaps you can play with the following and see if you can get satisfying results:
struct Direction {
    let x: Float
    let y: Float
    let z: Float

    static func cross(_ a: Direction, _ b: Direction) -> Direction {
        // Standard right-handed cross product
        Direction(x: a.y*b.z - a.z*b.y, y: a.z*b.x - a.x*b.z, z: a.x*b.y - a.y*b.x)
    }

    var length: Float { (x*x + y*y + z*z).squareRoot() }
}

class Recording<Type> {
    private(set) var samples: [Type] = [Type]()

    func appendSample(_ sample: Type) { samples.append(sample) }
}

class DirectionRecording: Recording<Direction> {
    func convertToSpeedRecording() -> SpeedRecording {
        let recording = SpeedRecording()
        if samples.count > 1 { // Need at least 2 samples
            for index in 0..<samples.count-1 {
                recording.appendSample(Direction.cross(samples[index], samples[index+1]).length)
            }
        }
        return recording
    }
}

class SpeedRecording: Recording<Float> {
    func detectSuddenDrops(minimumFastSampleCount: Int = 4, minimumSlowSampleCount: Int = 2, maximumThresholdSampleCount: Int = 2, minimumSpeedTreatedAsHigh: Float = 0.1, maximumSpeedThresholdTreatedAsLow: Float = 0.05) -> [Int] { // Returns an array of indices where a sudden drop occurred
        var result: [Int] = [Int]()
        // Using states to identify where in the sequence we currently are.
        // The state should go none -> highSpeed -> lowSpeed
        // Or the state should go none -> highSpeed -> thresholdSpeed -> lowSpeed
        enum State {
            case none
            case highSpeed(sequenceLength: Int)
            case thresholdSpeed(sequenceLength: Int)
            case lowSpeed(sequenceLength: Int)
        }
        var currentState: State = .none
        samples.enumerated().forEach { index, sample in
            if sample > minimumSpeedTreatedAsHigh {
                // Found a high speed sample
                switch currentState {
                case .none: currentState = .highSpeed(sequenceLength: 1) // Found a first high speed sample
                case .lowSpeed: currentState = .highSpeed(sequenceLength: 1) // From low speed to high speed resets it back to high speed step
                case .thresholdSpeed: currentState = .highSpeed(sequenceLength: 1) // From threshold speed to high speed resets it back to high speed step
                case .highSpeed(let sequenceLength): currentState = .highSpeed(sequenceLength: sequenceLength+1) // Append another high speed sample
                }
            } else if sample > maximumSpeedThresholdTreatedAsLow {
                // Found a sample somewhere between fast and slow
                switch currentState {
                case .none: break // Needs to go to high speed first
                case .lowSpeed: currentState = .none // Low speed back to threshold resets to beginning
                case .thresholdSpeed(let sequenceLength):
                    if sequenceLength < maximumThresholdSampleCount { currentState = .thresholdSpeed(sequenceLength: sequenceLength+1) } // Can still stay inside threshold
                    else { currentState = .none } // In threshold for too long. Resetting back to start
                case .highSpeed: currentState = .thresholdSpeed(sequenceLength: 1) // A first transition from high speed to threshold
                }
            } else {
                // A low speed sample found
                switch currentState {
                case .none: break // Waiting for high speed sample sequence
                case .lowSpeed(let sequenceLength):
                    if sequenceLength < minimumSlowSampleCount { currentState = .lowSpeed(sequenceLength: sequenceLength+1) } // Not enough low speed samples yet
                    else { result.append(index); currentState = .none } // Got everything we need. This is a HIT
                case .thresholdSpeed: currentState = .lowSpeed(sequenceLength: 1) // Threshold can always go to low speed
                case .highSpeed: currentState = .lowSpeed(sequenceLength: 1) // High speed can always go to low speed
                }
            }
        }
        return result
    }
}

func recordingContainsAChoppingGesture(recording: DirectionRecording, minimumAngleOffset: Float = 0.5, maximumSampleCount: Int = 50) -> Bool {
    let speedRecording = recording.convertToSpeedRecording()
    return speedRecording.detectSuddenDrops().contains { index in
        for offset in 1..<maximumSampleCount {
            let sampleIndex = index-offset
            guard sampleIndex >= 0 else { return false } // Cannot go back any further than that
            if Direction.cross(recording.samples[index], recording.samples[sampleIndex]).length > minimumAngleOffset {
                return true // Got it
            }
        }
        return false // Sample count drained
    }
}
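To tie it together, here is a hypothetical wiring of the pieces, reusing the motion feed sketched earlier and using the gravity vector as the normalized device direction (an assumption on my part):

let recording = DirectionRecording()
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let g = motion?.gravity else { return }
    recording.appendSample(Direction(x: Float(g.x), y: Float(g.y), z: Float(g.z)))
    if recordingContainsAChoppingGesture(recording: recording) {
        audioPlayback()
        // A real implementation would clear the recording here to avoid retriggering.
    }
}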
I have a set of Metal textures that are stored in an Xcode Assets Catalog as Texture Sets. I'm loading these using MTKTextureLoader.newTexture(name:scaleFactor:bundle:options).
I then use a MTLArgumentEncoder to encode all of the textures into a Metal 2 argument buffer.
This works great. However, the Introducing Metal 2 WWDC 2017 session recommends combining argument buffers with resource heaps for even better performance, and I'm quite keen to try this. According to the Argument Buffer documentation, instead of having to call MTLRenderCommandEncoder.useResource on each texture in the argument buffer, you just call useHeap on the heap that the textures were allocated from.
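In code, the difference is roughly this (a sketch of the two residency calls, not code from the session):

// Without a heap: each texture referenced by the argument buffer
// must be made resident individually.
for texture in textures {
    renderEncoder.useResource(texture, usage: .read)
}
// With a heap: one call covers every resource allocated from it.
renderEncoder.useHeap(heap)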
However, I haven't found a straightforward way to use MTKTextureLoader together with MTLHeap. It doesn't seem to have a loading option to allocate the texture from a heap.
I'm guessing that the approach would be:
1. Load the textures with MTKTextureLoader.
2. Reverse-engineer a set of MTLTextureDescriptor objects for each texture.
3. Use the texture descriptors to create an appropriately sized MTLHeap.
4. Assign a new set of textures from the MTLHeap.
5. Use some method to copy the textures across, perhaps replaceBytes or maybe even a MTLBlitCommandEncoder.
6. Deallocate the original textures loaded with the MTKTextureLoader.
It seems like a fairly long-winded approach, and I've not seen any examples of this, so I thought I'd ask here first in case I'm missing something obvious.
Should I abandon MTKTextureLoader, and search out some pre-MetalKit art on loading textures from asset catalogs?
I'm using Swift, but happy to accept Objective-C answers.
Well, the method I outlined above seems to work. As predicted, it's pretty long-winded. I'd be very interested to know if anyone has anything more elegant.
enum MetalError: Error {
    case anErrorOccured
}

extension MTLTexture {
    var descriptor: MTLTextureDescriptor {
        let descriptor = MTLTextureDescriptor()
        descriptor.width = width
        descriptor.height = height
        descriptor.depth = depth
        descriptor.textureType = textureType
        descriptor.cpuCacheMode = cpuCacheMode
        descriptor.storageMode = storageMode
        descriptor.pixelFormat = pixelFormat
        descriptor.arrayLength = arrayLength
        descriptor.mipmapLevelCount = mipmapLevelCount
        descriptor.sampleCount = sampleCount
        descriptor.usage = usage
        return descriptor
    }

    var size: MTLSize {
        return MTLSize(width: width, height: height, depth: depth)
    }
}

extension MTKTextureLoader {
    func newHeap(withTexturesNamed names: [String], queue: MTLCommandQueue, scaleFactor: CGFloat, bundle: Bundle?, options: [MTKTextureLoader.Option: Any]?, onCompletion: (([MTLTexture]) -> Void)?) throws -> MTLHeap {
        let device = queue.device
        let sourceTextures = try names.map { name in
            return try newTexture(name: name, scaleFactor: scaleFactor, bundle: bundle, options: options)
        }
        let storageMode: MTLStorageMode = .private
        let descriptors: [MTLTextureDescriptor] = sourceTextures.map { source in
            let desc = source.descriptor
            desc.storageMode = storageMode
            return desc
        }
        let sizeAligns = descriptors.map { device.heapTextureSizeAndAlign(descriptor: $0) }
        let heapDescriptor = MTLHeapDescriptor()
        // Note: this sums the raw sizes; strictly, each allocation's offset
        // should also be rounded up to its required alignment.
        heapDescriptor.size = sizeAligns.reduce(0) { $0 + $1.size }
        heapDescriptor.cpuCacheMode = descriptors[0].cpuCacheMode
        heapDescriptor.storageMode = storageMode
        guard let heap = device.makeHeap(descriptor: heapDescriptor),
            let buffer = queue.makeCommandBuffer(),
            let blit = buffer.makeBlitCommandEncoder()
        else {
            throw MetalError.anErrorOccured
        }
        let destTextures = descriptors.map { descriptor in
            return heap.makeTexture(descriptor: descriptor)
        }
        let origin = MTLOrigin()
        zip(sourceTextures, destTextures).forEach { (source, dest) in
            blit.copy(from: source, sourceSlice: 0, sourceLevel: 0, sourceOrigin: origin, sourceSize: source.size, to: dest, destinationSlice: 0, destinationLevel: 0, destinationOrigin: origin)
            blit.generateMipmaps(for: dest)
        }
        blit.endEncoding()
        buffer.addCompletedHandler { _ in
            onCompletion?(destTextures)
        }
        buffer.commit()
        return heap
    }
}
I'm doing loops on big arrays (images) and through Instruments I found out that the major bottleneck was Array.subscript.nativePinningMutableAddressor, so I made these unit tests to compare:
// average: 0.461 seconds (iPhone 6, iOS 10.2) ~5.8 times slower than native arrays
func testArrayPerformance() {
    self.measure {
        var array = [Float](repeating: 1, count: 2048 * 2048)
        for i in 0..<array.count {
            array[(i+1) % array.count] = Float(i)
        }
    }
}

// average: 0.079 seconds
func testNativeArrayPerformance() {
    self.measure {
        let count = 2048 * 2048
        let array = UnsafeMutablePointer<Float>.allocate(capacity: count)
        for i in 0..<count {
            array[(i+1) % count] = Float(i)
        }
        array.deallocate(capacity: count)
    }
}
As you can see, the native array is much faster. Is there any other way to access the array faster? "Unsafe" doesn't sound "safe", but what would you guys do in this situation? Is there any other type of array that wraps a native one?
For a more complex example, you can follow the comments in this article: Rendering Text in Metal with Signed-Distance Fields
I re-implemented that example in Swift, and the original implementation took 52 seconds to start up, https://github.com/endavid/VidEngine/tree/textprimitive-fail
After switching to native arrays, I went down to 10 seconds, https://github.com/endavid/VidEngine/tree/fontatlas-array-optimization
Tested on Xcode 8.3.3.
Edit 1:
The timings for this test are in Debug configuration, but the timings for the Signed Distance Fields example are in Release configuration. Thanks for the micro-optimizations (count caching, initialization) suggested in the comments for the unit tests, but in the real-world example those are negligible and the memory buffer solution is still 5 times faster on iOS.
Edit 2:
Here are the timings (Instruments session on an iPhone 6) of the most expensive functions in the Signed Distance Fields example, using Swift arrays and using memory buffers.
Edit 3: apart from performance issues, I had severe memory problems using Swift arrays. NSKeyedArchiver would run out of memory and crash the app. I had to use the byte buffer instead and store it in an NSData. Ref. commit: https://github.com/endavid/VidEngine/commit/6c1822523a2b18759f294def3188755eaaf98b41
So I guess the answer to my question is: for big arrays of numeric data (e.g. images), it is better to use memory buffers.
Simply caching the count improved the speed from 0.2 s to 0.14 s, which is twice the time the pointer-based code takes. This is entirely expected, given that the array-based code pre-initializes all elements to 1.
I decided to test the uninitialized Array performance on my 2014 MacBook Pro:
// average: 0.315 seconds (macOS Sierra 10.12.5)
func testInitializedArrayPerformance() {
    self.measure {
        var array = [Float](repeating: 1, count: 2048 * 2048)
        for i in 0..<array.count {
            array[(i+1) % array.count] = Float(i)
        }
    }
}

// average: 0.043 seconds (macOS Sierra 10.12.5)
func testUninitializedArrayPerformance() {
    self.measure {
        var array: [Float] = []
        array.reserveCapacity(2048 * 2048)
        array.append(0)
        for i in 0..<(2048 * 2048) {
            array.append(Float(i))
        }
        array[0] = Float(2048 * 2048 - 1)
    }
}

// average: 0.077 seconds (macOS Sierra 10.12.5)
func testNativeArrayPerformance() {
    self.measure {
        let count = 2048 * 2048
        let array = UnsafeMutablePointer<Float>.allocate(capacity: count)
        for i in 0..<count {
            array[(i+1) % count] = Float(i)
        }
        array.deallocate(capacity: count)
    }
}
This confirms that the array initialization is causing a big performance hit.
As mentioned by Alexander, UnsafeMutablePointer is not a native array, it's just a pointer operation.
Testing on an iPhone 7+/iOS 10.3.2, under equivalent conditions (both initialized) with a Release build:
// 0.030, 0.027, 0.017, 0.027, 0.024 -> avg 0.025
func testArrayPerformance2() {
    self.measure {
        let count = 2048 * 2048
        var array = [Float](repeating: 1, count: count)
        for i in 0..<count {
            array[(i+1) % count] = Float(i)
        }
    }
}

// 0.021, 0.022, 0.011, 0.021, 0.021 -> avg 0.0192
func testPointerOpPerformance2() {
    self.measure {
        let count = 2048 * 2048
        let array = UnsafeMutablePointer<Float>.allocate(capacity: count)
        array.initialize(to: 1, count: count)
        for i in 0..<count {
            array[(i+1) % count] = Float(i)
        }
        array.deinitialize(count: count)
        array.deallocate(capacity: count)
    }
}
Not a big difference. Less than 2 times. (About 1.3 times.)
Generally, the Swift optimizer works well for Arrays that are:
Block-local variables
Private properties
Whole Module Optimization may also have an effect, but I have not tested it.
If your more complex example takes 5 times as long to start up, it may be written in a hard-to-optimize manner. (Please pick out the core parts affecting performance and include them in your question.)
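One more middle ground worth trying (my suggestion, not something from the original post): keep the [Float], but run the hot loop through withUnsafeMutableBufferPointer, which avoids the per-subscript overhead showing up in the Instruments trace while keeping the array type:

var array = [Float](repeating: 1, count: 2048 * 2048)
array.withUnsafeMutableBufferPointer { buffer in
    let count = buffer.count
    for i in 0..<count {
        buffer[(i + 1) % count] = Float(i)
    }
}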
I want to write an algorithm that rescales numbers to between 0 and 1. This means if I pass 25, 100, 500 then it should generate a new scale and represent those numbers on a scale of 0 to 1.
Here is what I have which is incorrect and does not make sense.
height: item.height/item.height * 20
1. Pass in the numbers in an array.
2. Loop through the numbers and find the max.
3. Map the array of integers to an array of Doubles, each one being the value from the source array divided by the max.
Try to write that code. If you have trouble, update your question with your attempt and tell us what's going wrong.
EDIT:
Your answer shows how to print your resulting scaled values, but you implied that you actually want to create a new array containing the scaled values. For that you could use a function like this:
func scaleArray(_ sourceArray: [Int]) -> [Double] {
    guard let max = sourceArray.max() else {
        return [Double]()
    }
    return sourceArray.map {
        return Double($0) / Double(max)
    }
}
Edit #2:
Here is code that would let you test the above:
func scaleAndPrintArray(_ sourceArray: [Int]) {
    let scaledArray = scaleArray(sourceArray)
    for index in 0..<sourceArray.count {
        print(String(format: "%3d", sourceArray[index]), String(format: "%0.5f", scaledArray[index]))
    }
}

for arrayCount in 1...5 {
    let arraySize = Int(arc4random_uniform(15)) + 5
    var array = [Int]()
    for _ in 1..<arraySize {
        array.append(Int(arc4random_uniform(500)))
    }
    scaleAndPrintArray(array)
    if arrayCount < 5 {
        print("-----------")
    }
}
(Sorry, but I don't know Swift.)
If you want to create a linear scale, a linear equation is y(x) = m*x + c. You want the output to range from 0 to 1 when the input ranges from the minimum value to the maximum (your question is ambiguous; maybe you wish to lock y(0) to 0).
y(0) = min
y(1) = max
therefore
c = min
m = max - min
and to find any intervening value
y = m*x + c
or, solving for x, to rescale an input value y onto the 0-to-1 scale:
x = (y - c) / m = (y - min) / (max - min)
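A sketch of that inverted mapping in Swift (note this subtracts the minimum, unlike the earlier scaleArray answer that only divides by the max; assuming mapping min to 0 and max to 1 is the intended behavior):

func rescale(_ values: [Double]) -> [Double] {
    // Degenerate cases: empty input, or all values equal.
    guard let min = values.min(), let max = values.max(), max > min else {
        return values.map { _ in 0 }
    }
    return values.map { ($0 - min) / (max - min) }
}

print(rescale([25, 100, 500])) // [0.0, 0.15789..., 1.0]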