I'm using these Swift compilation flags to identify code that slows down compile time:
-Xfrontend -warn-long-function-bodies=100
-Xfrontend -warn-long-expression-type-checking=100
Then after building, I get warnings like this:
Instance method 'startFadePositionTitle()' took 2702ms to type-check (limit: 500ms)
for this part of the code:
func startFadePositionTitle() -> CGFloat {
    let value: CGFloat = ((backgroundImage.frame.height/2 - contentTitle.frame.height/2) - navbarView.frame.height)/2
    return value
}
Can someone explain to me what is wrong with this method and what I could improve?
You should break it into smaller chunks so Swift can type-check more easily. Also, the more you tell the compiler, the less it has to infer, so help it out and annotate anything you already know:
func beginFadePositionTitle() -> CGFloat {
    let n: CGFloat = 2
    let a: CGFloat = self.backgroundImage.frame.height/n
    let b: CGFloat = self.contentTitle.frame.height/n
    let ab: CGFloat = a - b
    let c: CGFloat = self.navbarView.frame.height
    let abc: CGFloat = ab - c
    return abc/n
}
Instance method 'beginFadePositionTitle()' took 1ms to type-check (limit: 1ms)
This is the result when you tell the compiler everything. See the difference?
I recommend trying this one:
func startFadePositionTitle() -> CGFloat {
    return ((backgroundImage.frame.height - contentTitle.frame.height)/2.0 -
        navbarView.frame.height)/2.0
}
With my Xcode 11.2 / Catalina (tested assuming all those frames are CGRects), there is no such warning with those compiler flags. In a corner case you could use CGFloat(2.0) in the corresponding places, but I think that is superfluous.
You are calculating value every time you call the startFadePositionTitle method. You can compute it once and then use it freely:
let value: CGFloat = ((backgroundImage.frame.height/2 - contentTitle.frame.height/2) - navbarView.frame.height)/2
Compute that in the viewDidLoad method, then simplify your code like this:
func startFadePositionTitle() -> CGFloat {
    return value
}
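Alternatively, a lazy property gives you the same compute-once behavior without touching viewDidLoad (a sketch; fadePositionTitle is an illustrative name, and it assumes the relevant frames already have their final values the first time the property is read):

lazy var fadePositionTitle: CGFloat =
    ((self.backgroundImage.frame.height/2 - self.contentTitle.frame.height/2)
        - self.navbarView.frame.height)/2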
Floating point division and square root take considerably longer to compute than addition and multiplication. The latter two are computed directly while the former are usually computed with an iterative algorithm. The most common approach is to use a division-free Newton-Raphson iteration to get an approximation to the reciprocal of the denominator (division) or the reciprocal square root, and then multiply by the numerator (division) or input argument (square root).
Source :
Why is float division slow?
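For illustration, a toy Swift sketch of that division-free Newton-Raphson reciprocal iteration (the fixed seed below is an assumption for demonstration only; real hardware seeds from a lookup table, and this seed only converges for roughly 0 < d < 20, since it needs |1 - d * x| < 1):

func approximateReciprocal(_ d: Double) -> Double {
    var x = 0.1                 // crude initial guess
    for _ in 0..<6 {
        x = x * (2 - d * x)     // division-free step; error roughly squares each time
    }
    return x
}

let quarter = approximateReciprocal(4.0)   // ~0.25, computed without a division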
So I have reduced it to one division:
((x - y)/2 - z)/2 = (x - y - 2z)/4
i.e. you can just write something like this:
(backgroundImage.frame.height - contentTitle.frame.height - 2.0*navbarView.frame.height)/4.0
So a small change to the function would be something like this:
func startFadePositionTitle() -> CGFloat {
    return (backgroundImage.frame.height - contentTitle.frame.height - 2.0*navbarView.frame.height)/4.0
}
Note that I'm not trying to set the value in a CGRect. I'm mystified as to why the compiler is issuing this claim:
let widthFactor = 0.8
let oldWidth = wholeFrameView.frame.width
let newWidth = wholeFrameView.frame.width * widthFactor // Value of type '(CGRect) -> CGRect' has no member 'width'
let newWidth2 = wholeFrameView.frame.width * 0.8 // This is fine.
width is a CGFloat while your multiplier is a Double. Explicitly declare the type of your multiplier:
let widthFactor: CGFloat = 0.8
All the dimensions of a CGRect are of type CGFloat, not Double, and because Swift is especially strict about types, you can't multiply a CGFloat by a Double.
The interesting thing though, is that both CGFloat and Double implement ExpressibleByFloatLiteral. This means that 0.8, a "float literal", can be interpreted as either a Double or a CGFloat. Without context, it's always a Double, because of how the compiler is designed. Note that this only applies to float literals like 3.14, 3e8 etc, and not to identifiers of variables.
So the expression wholeFrameView.frame.width * 0.8 is valid because the compiler sees that width is a CGFloat, so it treats 0.8 as a CGFloat as well. No problems.
On the other hand, when you declare the variable widthFactor, it is automatically given the type Double, because there isn't any more context on that line to suggest to the compiler that you want it to be any other type.
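A quick sketch of that literal-inference behavior (illustrative names; note that Swift 5.5 later added implicit Double/CGFloat conversion, so the commented-out line only fails on older compilers):

import CoreGraphics

let factor = 0.8                   // no other context on the line: inferred as Double
let width: CGFloat = 320
let a = width * 0.8                // fine: here the literal is inferred as CGFloat
// let b = width * factor          // error before Swift 5.5: CGFloat * Double
let b = width * CGFloat(factor)    // explicit conversion always works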
This can be fixed by directly telling the compiler that you want widthFactor to be a CGFloat:
let widthFactor: CGFloat = 0.8
Because, as others have noted, you can't multiply a Double and a CGFloat, the compiler doesn't know what you're intending.
So, instead of giving you an error about the frame property, which you currently think it's doing, it's actually making its best guess* and giving you an error related to the frame method. No method has a width property, so what it tells you is true.
*Of course, its best guess is not good enough, hence a human being coming here to ask a question about it. So please file a bug!
Stepping onto my soapbox: This confusion would be avoided if Apple hadn't named the method the thing it returns. The convention to prefix all such methods with get solves the problem. Some convention is important in any language with first-class functions, to disambiguate between properties and methods.
wholeFrameView.frame has no member width. Also, you need widthFactor to be of type CGFloat. Try:
let newWidth = wholeFrameView.frame.size.width * CGFloat(widthFactor)
According to this question, using == and != should let you check for equality between two CGPoint objects.
However, the code below fails to consider two CGPoint objects as equal even though they output the same value.
What is the right way to check equality among CGPoint objects?
Code:
let boardTilePos = boardLayer.convert(boardTile.position, from: boardTile.parent!)
let shapeTilePos = boardLayer.convert(tile.position, from: tile.parent!)
print("board tile pos: \(boardTilePos). active tile pos: \(shapeTilePos). true/false: \(shapeTilePos == boardTilePos)")
Output:
board tile pos: (175.0, 70.0). active tile pos: (175.0, 70.0). true/false: false
Unfortunately, what you see in the console is not the real value.
import UIKit
var x = CGPoint(x:175.0,y:70.0)
var y = CGPoint(x:175.0,y:70.00000000000001)
print("\(x.equalTo(y)), \(x == y),\(x),\(y)")
The problem is that the console only shows precision down to about 10^-16, but in reality your CGFloat can go even lower than that, because on 64-bit architectures CGFloat is a Double.
This means you have to cast your CGPoint values to Float if you want the equality that appears in the console, so you need to do something like:
if Float(boxA.x) == Float(boxB.x) && Float(boxA.y) == Float(boxB.y)
{
    // We have equality
}
Now I like to take it one step further.
In most cases, we are using CGPoint to determine points on the scene. Rarely do we ever want to deal with half points; they just make our lives confusing.
So instead of Float, I like to cast to Int. This will guarantee that two points are lying on the same whole-number CGPoint in scene space:
if Int(boxA.x) == Int(boxB.x) && Int(boxA.y) == Int(boxB.y)
{
    // We have equality
}
I'm providing an alternate answer since I don't agree with Knight0fDragon's implementation. This is only if you want to deal with fractions of a point. If you only care about points in whole numbers, see Knight0fDragon's answer.
You don't always have the luxury of logging points to the console, or of knowing whether you're comparing points that are victims of floating-point math, like comparing (175.0, 70.0) to (175.0, 70.00001) (which both log as (175.0, 70.0) in the console). Yes, truncating to Int is a great way of understanding why two points that appear to print as equal aren't, but it's not a catch-all solution for comparing every point. Depending on the level of precision you need, take the absolute value of the difference of both x and y for each point, and see whether it falls within a delta you specify.
var boxA = CGPoint(x: 175.0, y: 70.0)
var boxB = CGPoint(x: 175.0, y: 70.00000000000001)
let delta: CGFloat = 0.01
if (fabs(boxA.x - boxB.x) < delta) &&
   (fabs(boxA.y - boxB.y) < delta) {
    // equal enough for our needs
}
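If you compare points this way in several places, the delta check could be wrapped in a small helper (an illustrative extension, not a standard API):

import CoreGraphics

extension CGPoint {
    // Approximate equality within a caller-chosen tolerance.
    func isClose(to other: CGPoint, tolerance: CGFloat = 0.01) -> Bool {
        return abs(x - other.x) < tolerance && abs(y - other.y) < tolerance
    }
}

let close = CGPoint(x: 175, y: 70).isClose(to: CGPoint(x: 175, y: 70.00000000000001)) // true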
The answer to the question "What is the right way to check equality among CGPoint objects?" really depends on the way you compare floating point numbers.
CGPoint provides its own comparison method: equalTo(_ point2: CGPoint)
Try this:
shapeTilePos.equalTo(boardTilePos)
I am trying to implement RGB histogram computation for images in Swift (I am new to iOS).
However, the computation time for a 1500x1000 image is about 66 seconds, which I consider too slow.
Are there any ways to speed up image traversal?
P.S. current code is the following:
func calcHistogram(image: UIImage) {
    let bins: Int = 20;
    let width = Int(image.size.width);
    let height = Int(image.size.height);
    let binStep: Double = Double(bins-1)/255.0

    var hist = Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Int())))
    for i in 0..<bins {
        for j in 0..<bins {
            for k in 0..<bins {
                hist[i][j][k] = 0;
            }
        }
    }

    var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
    var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

    for x in 0..<width {
        for y in 0..<height {
            var pixelInfo: Int = ((width * y) + x) * 4
            var r = Double(data[pixelInfo])
            var g = Double(data[pixelInfo+1])
            var b = Double(data[pixelInfo+2])
            let r_bin: Int = Int(floor(r*binStep));
            let g_bin: Int = Int(floor(g*binStep));
            let b_bin: Int = Int(floor(b*binStep));
            hist[r_bin][g_bin][b_bin] += 1;
        }
    }
}
As noted in my comment on the question, there are some things you might rethink before you even try to optimize this code.
But even if you do move to a better overall solution (GPU-based histogramming, a library, or both), there are some Swift pitfalls you're falling into here that are worth talking about so you don't run into them elsewhere.
First, this code:
var hist = Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Array(count:bins, repeatedValue:Int())))
for i in 0..<bins {
    for j in 0..<bins {
        for k in 0..<bins {
            hist[i][j][k] = 0;
        }
    }
}
... is initializing every member of your 3D array twice, with the same result. Int() produces a value of zero, so you could leave out the triple for loop. (And possibly change Int() to 0 in your innermost repeatedValue: parameter to make it more readable.)
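In current Swift syntax, the loop-free equivalent is a single expression (repeating:count: replaced the old repeatedValue: label):

var hist = [[[Int]]](repeating: [[Int]](repeating: [Int](repeating: 0, count: bins),
                                        count: bins),
                     count: bins)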
Second, arrays in Swift are copy-on-write, but this optimization can break down in multidimensional arrays: changing an element of a nested array can cause the entire nested array to be rewritten instead of just the one element. Multiply that by the depth of nested arrays and number of element writes you have going on in a double for loop and... it's not pretty.
Unless there's a reason your bins need to be organized this way, I'd recommend finding a different data structure for them. Three separate arrays? One Int array where index i is red, i + 1 is green, and i + 2 is blue? One array of a custom struct you define that has separate r, g, and b members? See what conceptually fits with your tastes or the rest of your app, and profile to make sure it works well.
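As one example of those alternatives, a single flat array sidesteps the nested copy-on-write problem entirely (a sketch; histIndex is an illustrative helper, not part of the original code):

let bins = 20
var hist = [Int](repeating: 0, count: bins * bins * bins)

func histIndex(_ r: Int, _ g: Int, _ b: Int) -> Int {
    return (r * bins + g) * bins + b    // row-major packing of the three channels
}

hist[histIndex(3, 7, 12)] += 1          // one element write, no nested arrays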
Finally, some Swift style points:
pixelInfo, r, g, and b in your second loop don't change. Use let, not var, and the optimizer will thank you.
Declaring and initializing something like let foo: Int = Int(whatever) is redundant. Some people like having all their variables/constants explicitly typed, but it does make your code a tad less readable and harder to refactor.
Int(floor(x)) is redundant here: converting to Int truncates toward zero, which is the same as taking the floor for the non-negative values in this code.
If you have performance issues in your code, first of all use Time Profiler from Instruments. You can start it via the Xcode menu Product -> Profile; when the Instruments app opens, choose Time Profiler.
Start recording and perform all the interactions in your app.
Stop recording and analyse where the "hottest" places in your code are.
Also check the options "Invert call tree", "Hide missing symbols" and "Hide system libraries" for better viewing of the profile results.
You can also double-click any listed function to view it in code and see the usage percentages.
I am trying to write this in Swift (I am at step 54). In a UICollectionViewLayout class I have a setup function:
func setup() {
    var percentage = 0.0
    for i in 0...RotationCount - 1 {
        var newPercentage = 0.0
        do {
            newPercentage = Double((arc4random() % 220) - 110) * 0.0001
            println(newPercentage)
        } while (fabs(percentage - newPercentage) < 0.006)
        percentage = newPercentage
        var angle = 2 * M_PI * (1 + percentage)
        var transform = CATransform3DMakeRotation(CGFloat(angle), 0, 0, 1)
        rotations.append(transform)
    }
}
Here is how the setup function is described in the tutorial
First we create a temporary mutable array that we add objects to. Then we run through our loop, creating a rotation each time. We create a random percentage between -1.1% and 1.1% and then use that to create a tweaked CATransform3D. I geeked out a bit and added some logic to ensure that the percentage of rotation we randomly generate is at least 0.6% different than the one generated beforehand. This ensures that photos in a stack don't have the misfortune of all being rotated the same way. Once we have our transform, we add it to the temporary array by wrapping it in an NSValue and then rinse and repeat. After all 32 rotations are added we set our private array property. Now we just need to put it to use.
When I run the app, I get a runtime error on the while (fabs(percentage - newPercentage) < 0.006) line.
The setup function is called in prepareLayout():
override func prepareLayout() {
    super.prepareLayout()
    setup()
    ...
}
Without the do..while loop, the app runs fine. So I am wondering, why?
Turns out I had to be more type-safe:
newPercentage = Double(Int((arc4random() % 220)) - 110) * 0.0001
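The crash makes sense once you see the types: arc4random() returns UInt32, so the whole expression (arc4random() % 220) - 110 is evaluated in UInt32, and Swift traps on the unsigned underflow whenever the remainder is less than 110. A minimal sketch of the difference:

import Foundation

// let bad = (arc4random() % 220) - 110       // UInt32 math: traps for remainders < 110
let offset = Int(arc4random() % 220) - 110    // convert first: safe, gives -110...109
let percentage = Double(offset) * 0.0001      // -0.011...0.0109, i.e. -1.1%...+1.1%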
This must be a Swift bug. That code should NOT crash at runtime. It should either give a compiler error on the newPercentage = expression or it should correctly promote the types as C does.
So I'm writing a lowpass accelerometer function to moderate the jitters of the accelerometer. I have a CGFloat array to represent the data and I want to damp it with this function:
// Damps the jittery motion with a lowpass filter.
func lowPass(vector:[CGFloat]) -> [CGFloat]
{
    let blend:CGFloat = 0.2
    // Smooths out the data input.
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    // Sets the last vector to be the current one.
    lastVector = vector
    // Returns the lowpass vector.
    return vector
}
In this case, lastVector is defined as follows at the top of my program:
var lastVector:[CGFloat] = [0.0, 0.0, 0.0]
The three lines of the form vector[a] = ... give me the errors. Any ideas as to why I am getting them?
That code seems to compile if you pass the array with the inout modifier:
func lowPass(inout vector:[CGFloat]) -> [CGFloat] {
    ...
}
I'm not sure whether that's a bug or not. Instinctively, if I pass an array to a function I expect to be able to modify it. If I pass it with the inout modifier, I'd expect to be able to make the original variable point to a new array - similar to what the & modifier does in C and C++.
Maybe the reason behind it is that in Swift there are mutable and immutable arrays (and dictionaries). Without the inout modifier the array is considered immutable, hence the reason why it cannot be modified.
Addendum 1 - It's not a bug
@newacct says that's the intended behavior. After some research I agree with him. But even if it's not a bug, I originally considered it wrong (read to the end for conclusions).
If I have a class like this:
class WithProp {
    var x : Int = 1
    func SetX(newVal : Int) {
        self.x = newVal
    }
}
I can pass an instance of that class to a function, and the function can modify its internal state
var a = WithProp()

func Do1(p : WithProp) {
    p.x = 5 // This works
    p.SetX(10) // This works too
}
without having to pass the instance as inout.
I can use inout instead to make the a variable to point to another instance:
func Do2(inout p : WithProp) {
    p = WithProp()
}

Do2(&a)
With that code, from within Do2 I make the p parameter (i.e. the a variable) point to a newly created instance of WithProp.
The same cannot be done with an array (and I presume a dictionary as well). To change its internal state (modify, add or remove an element) the inout modifier must be used. That was counterintuitive.
But everything gets clarified after reading this excerpt from the swift book:
Swift’s String, Array, and Dictionary types are implemented as structures. This means that strings, arrays, and dictionaries are copied when they are assigned to a new constant or variable, or when they are passed to a function or method.
So when passed to a func, it's not the original array, but a copy of it - Hence any change made to it (even if possible) wouldn't be done on the original array.
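A tiny demonstration of that copy-on-assignment behavior (current Swift syntax; mutate is an illustrative name):

func mutate(_ numbers: [Int]) -> [Int] {
    var copy = numbers      // parameters are constants; work on a local copy
    copy[0] = 99
    return copy
}

var original = [1, 2, 3]
let changed = mutate(original)
// original is still [1, 2, 3]; only the copy was modified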
So, in the end, my original answer above is correct and the behavior I experienced is not a bug.
Many thanks to @newacct :)
Since Xcode 6 beta 3, modifying the contents of an Array is a mutating operation. You cannot modify a constant (i.e. let) Array; you can only modify a non-constant (i.e. var) Array.
Parameters to a function are constants by default. Therefore, you cannot modify the contents of vector since it is a constant. Like other parameters, there are two ways to be able to change a parameter:
Declare it var, in which case you can assign to it, but it is still passed by value, so any changes to the parameter has no effect on the calling scope.
Declare it inout, in which case the parameter is passed by reference, and any changes to the parameter is just like you made the changes on the variable in the calling scope.
You can see in the Swift standard library that all the functions that take an Array and mutate it, like sort(), take the Array as inout.
P.S. this is just like how arrays work in PHP by the way
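For completeness, here is what the mutating option looks like in current Swift, where var parameters were removed and inout is written after the parameter label (a sketch, not the Beta-era code discussed above):

import CoreGraphics

var lastVector: [CGFloat] = [0.0, 0.0, 0.0]

// Damps the jittery motion with a lowpass filter, mutating in place.
func lowPass(_ vector: inout [CGFloat]) {
    let blend: CGFloat = 0.2
    for i in vector.indices {
        vector[i] = vector[i] * blend + lastVector[i] * (1 - blend)
    }
    lastVector = vector
}

var reading: [CGFloat] = [1.0, 2.0, 3.0]
lowPass(&reading)   // reading now holds the damped values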
Edit: The following worked for Xcode Beta 2. Apparently, the syntax and behavior of arrays has changed in Beta 3. You can no longer modify the contents of an array with subscripts if it is immutable (a parameter not declared inout or var):
Not valid with the most recent changes to the language
The only way I could get it to work in the play ground was change how you are declaring the arrays. I suggest trying this (works in playground):
import Cocoa

let lastVector: CGFloat[] = [0.0, 0.0, 0.0]

func lowPass(vector: CGFloat[]) -> CGFloat[] {
    let blend: CGFloat = 0.2
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    return vector
}

var test = lowPass([1.0, 2.0, 3.0]);
Mainly as a follow-up for future reference, @newacct's answer is the correct one. Since the original post showed a function that returns an array, the correct answer to this question is to tag the parameter with var:
func lowPass(var vector:[CGFloat]) -> [CGFloat] {
    let blend:CGFloat = 0.2
    // Smooths out the data input.
    vector[0] = vector[0] * blend + lastVector[0] * (1 - blend)
    vector[1] = vector[1] * blend + lastVector[1] * (1 - blend)
    vector[2] = vector[2] * blend + lastVector[2] * (1 - blend)
    // Sets the last vector to be the current one.
    lastVector = vector
    // Returns the lowpass vector.
    return vector
}