How to Typecast in Swift? - ios

I'm working on a custom path for UIBezierPath, and I'm getting the error "Cannot assign a value of type (CGFloat) to a value of type CGFloat." I think it's because there is some issue with typecasting? How can I fix this? The warnings pop up for the "X = (b2 - b1)/(m1 - m2)" line and the "Y = m1 * X + b1" line.
func lineIntersection(m1: CGFloat, b1: CGFloat, m2: CGFloat, b2: CGFloat, inout X: CGFloat, inout Y: CGFloat) -> Bool {
    if m1 == m2 {
        return false
    } else if X = (b2 - b1)/(m1 - m2) {
        return true
    } else if Y = m1 * X + b1 {
        return true
    } else {
        return false
    }
}
I'll be calling this function from another function later on:
func xxx() {
    var outX = CGFloat()
    var outY = CGFloat()
    lineIntersection(oldSlope, b1: oldIntercept, m2: newSlope, b2: newIntercept, X: &outX, Y: &outY)
}

Swift doesn't treat zero as a "false" value; you need to compare explicitly with != 0 in the ifs. Also, move the assignments to separate lines instead of doing them as part of the condition.
Floating point equality comparisons are not reliable except in a few special cases, and this is not likely to be one of them.
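For example, a quick sketch of why exact floating point comparisons are generally untrustworthy:

let sum: Double = 0.1 + 0.2
print(sum == 0.3)              // false, even though mathematically it "should" be true
print(abs(sum - 0.3) < 1e-9)   // true: comparing against a small tolerance is the usual workaround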
I would remove the inout parameters and return an optional CGPoint, i.e., -> CGPoint?. Return nil for false, and the CGPoint intersection for true.
Resulting function (I've removed the equality checks altogether; floating point division by zero is not dangerous here, since we can check for Inf/NaN afterwards):
func lineIntersection(m1 m1: CGFloat, b1: CGFloat, m2: CGFloat, b2: CGFloat) -> CGPoint? {
    let x = (b2 - b1) / (m1 - m2)
    let y = m1 * x + b1
    return y.isFinite ? CGPoint(x: x, y: y) : nil
}
Usage at the calling location would look something like this:
if let intersection = lineIntersection(m1: m1, b1: b1, m2: m2, b2: b2) {
    // use intersection.x and intersection.y
}
As for the literal question of how to typecast (although this was not the issue here), you can init even "basic types" with something like CGFloat(value), which results in a type conversion by "creating" a new CGFloat from value. Several types also have alternative ways to specify how exactly the conversion happens, e.g., Int32(truncatingBitPattern: largerInteger) takes the lowest 32 bits as they are, whereas just Int32(largerInteger) would crash if it over- or underflows. Casting as such is done with as (known to succeed at compile time), as? (try casting at runtime, nil on failure), as! (try casting at runtime, assert success).
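A minimal sketch of both styles; note that in current Swift versions the bit-truncating initializer is spelled Int32(truncatingIfNeeded:) rather than Int32(truncatingBitPattern:):

import CoreGraphics

// Conversion: a new value is created from the old one.
let d: Double = 3.25
let f = CGFloat(d)                                    // Double -> CGFloat
let low32 = Int32(truncatingIfNeeded: 4_294_967_298)  // keeps the low 32 bits -> 2

// Casting: as / as? / as!
let any: Any = "hello"
let maybe = any as? String    // Optional("hello"); nil if the value weren't a String
let forced = any as! String   // "hello"; traps if the cast fails
let g = 42 as CGFloat         // `as` with a literal is a compile-time coercion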

Related

Swift: initializer 'init(_:)' requires that 'Decimal' conform to 'BinaryInteger'

I'm trying to create a function that calculates and returns compound interest. The variables have different data types. Whenever I run the program I get the error initializer 'init(_:)' requires that 'Decimal' conform to 'BinaryInteger'. The following is my code:
import Foundation

class Compound {
    var p: Double
    var t: Int
    var r: Double
    var n: Int
    var interest: Double
    var amount: Double

    init(p: Double, t: Int, r: Double, n: Int) {
        self.p = p
        self.t = t
        self.r = r
        self.n = n
    }

    func calculateAmount() -> Double {
        amount = p * Double(pow(Decimal(1 + (r / Double(n))), n * t))
        return amount
    }
}
The Error:
error: initializer 'init(_:)' requires that 'Decimal' conform to 'BinaryInteger'
amount = p * Double(pow(Decimal(1 + (r / Double(n))),n * t))
^
After looking at a similar problem, I've also tried the following technique, but I'm still getting the same error:
func calculateAmount() -> Double {
    let gg: Int = n * t
    amount = p * Double(pow(Decimal(1 + (r / Double(n))), Int(truncating: gg as NSNumber)))
    return amount
}
How to solve this?
It would be easier to use the Double overload func pow(_: Double, _: Double) -> Double instead of the Decimal overload func pow(_ x: Decimal, _ y: Int) -> Decimal, considering that you want to return a Double:
@discardableResult
func calculateAmount() -> Double {
    amount = p * pow(1 + (r / Double(n)), Double(n) * Double(t))
    return amount
}
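A quick usage sketch with made-up sample values (note that the stored properties amount and interest also need default values, e.g. var amount = 0.0, for the class as posted to compile):

// Hypothetical example: 5% nominal annual rate, compounded monthly, for 2 years.
let loan = Compound(p: 1000.0, t: 2, r: 0.05, n: 12)
print(loan.calculateAmount())   // ≈ 1104.94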

Σ Calculation in iOS SDK

I am working on a financial app where I have to perform a calculation. The finance team provided me with the formula below for the calculation:
But I am unable to find out how to perform the operation below in Swift or Objective-C.
Σ_{t=1}^{n} (1 + i)^(t-1)
Can you please help me by providing an idea of how to do the above calculation?
I also read this link, but I'm not interested in using a third-party library.
Swift's reduce function was made for this:
(1...n).reduce(0) { (currentResult, t) -> Decimal in
    currentResult + pow(1 + i, t - 1)
}
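Wrapped in a function it might look like the sketch below; this assumes i is a Decimal (that is the pow overload the closure relies on), and the function name is just illustrative:

import Foundation

func sumOfFactors(n: Int, i: Decimal) -> Decimal {
    (1...n).reduce(Decimal(0)) { currentResult, t in
        currentResult + pow(1 + i, t - 1)
    }
}

// 1 + 1.05 + 1.05^2 + 1.05^3 = 4.310125
let total = sumOfFactors(n: 4, i: Decimal(string: "0.05")!)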
If we break it down, you have to find the summation of (1 + i)^(t - 1), where t varies from 1 to n.
An equivalent function would be:
func evaluateSummation(n: Int, i: Double) -> Double {
    var sum = 0.0
    for t in 1...n {
        sum += pow(1 + i, Double(t - 1))
    }
    return sum
}
You can use code like the one below.
func calculatePMT(targetValue: Double, interestRate: Double, time: Int) -> Double {
    let term = 1.0 + interestRate
    var denominator = 0.0
    for t in 0..<time {
        let p = pow(term, Double(t))
        denominator += p
    }
    return targetValue / denominator
}

let targetValue = 100000.0
let interestRate = 5.0
let time = 4
let pmt = calculatePMT(targetValue: targetValue, interestRate: interestRate, time: time)
print(pmt)

Why is my code slow when finding the Fibonacci sum?

I'm writing answers for Project Euler questions in this repo, but I'm having some performance issues with my solution.
Question 2:
Each new term in the Fibonacci sequence is generated by adding the previous two terms.
By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
My solution is:
func solution2()
{
    func fibonacci(number: Int) -> (Int)
    {
        if number <= 1
        {
            return number
        }
        else
        {
            return fibonacci(number - 1) + fibonacci(number - 2)
        }
    }
    var sum = 0
    print("calculating...")
    for index in 2..<50
    {
        print(index)
        if (fibonacci(index) % 2 == 0)
        {
            sum += fibonacci(index)
        }
    }
    print(sum)
}
My question is: why does it get super slow after iteration 42? I want to do it for 4,000,000 as the question says. Any help?
Solution 2:
func solution2_fast()
{
    var phiOne: Double = (1.0 + sqrt(5.0)) / 2.0
    var phiTwo: Double = (1.0 - sqrt(5.0)) / 2.0
    func findFibonacciNumber(nthNumber: Double) -> Int64
    {
        let nthNumber: Double = (pow(phiOne, nthNumber) - (pow(phiTwo, nthNumber))) / sqrt(5.0)
        return Int64(nthNumber)
    }
    var sum: Int64 = 0
    print("calculating...")
    for index in 2..<4000000
    {
        print(index)
        let f = findFibonacciNumber(Double(index))
        if (f % 2 == 0)
        {
            sum += f
        }
    }
    print(sum)
}
The most important thing about Project Euler questions is to think about what is actually being asked.
This is not asking you to produce all Fibonacci numbers F(n) less than 4000000. It is asking for the sum of all even F(n) less than 4000000.
Think about the sum of all F(n) where F(n) < 10.
1 + 2 + 3 + 5 + 8
I could do this by calculating F(1), then F(2), then F(3), and so on... and then checking they are less than 10 before adding them up.
Or I could store two variables...
F1 = 1
F2 = 2
And a running total...
Total = 3
Each new Fibonacci number is just F1 + F2, so I can keep sliding the pair forward and adding each new number to the total while it stays under the limit.
Now I can turn this into a while loop and lose the recursion altogether. In fact, the most complex thing I'm doing is adding two numbers together...
I came up with this...
func sumEvenFibonacci(lessThan limit: Int) -> Int {
    // store the first two Fibonacci numbers
    var n1 = 1
    var n2 = 2
    // and a cumulative total
    var total = 0
    // repeat until you hit the limit
    while n2 < limit {
        // if the current Fibonacci is even then add to total
        if n2 % 2 == 0 {
            total += n2
        }
        // move the stored Fibonacci numbers up by one.
        let temp = n2
        n2 = n2 + n1
        n1 = temp
    }
    return total
}
It runs in a fraction of a second.
sumEvenFibonacci(lessThan: 4000000)
Finds the correct answer.
In fact this... sumEvenFibonacci(lessThan: 1000000000000000000) runs in about half a second.
The second solution seems to be fast(er), although an Int64 will not be sufficient to store the result for a loop that goes anywhere near 4,000,000: the sum of the Fibonacci numbers with indices 2...91 is already 7,527,100,471,027,205,936, and the largest number an Int64 can hold is 9,223,372,036,854,775,807, so the very next terms overflow. Beyond that you need to use some other type, like a BigInteger implementation.
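If you want to detect the point where the running total stops fitting, Swift's overflow-reporting arithmetic on fixed-width integers can be used instead of a plain +=; a minimal sketch:

var sum: Int64 = 0
let next: Int64 = 9_000_000_000_000_000_000   // some value near Int64.max

let (partial, didOverflow) = sum.addingReportingOverflow(next)
if didOverflow {
    print("Int64 overflow: switch to an arbitrary-precision type here")
} else {
    sum = partial
}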
It's because you use recursion without caching the results. By iteration 42 there is an enormous number of recursive fibonacci calls, since each call spawns two more. Plain recursion isn't suitable here; you can store already-computed results in an array (or dictionary) instead. It's not a problem with Swift itself.
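A sketch of that idea, memoizing the already-computed values (the dictionary name is just illustrative):

var memo: [Int: Int] = [:]

func fibonacci(_ number: Int) -> Int {
    if number <= 1 { return number }
    if let cached = memo[number] { return cached }
    let value = fibonacci(number - 1) + fibonacci(number - 2)
    memo[number] = value
    return value
}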
This is the answer, done in two different ways.
func solution2_recursive()
{
    func fibonacci(number: Int) -> (Int)
    {
        if number <= 1
        {
            return number
        }
        else
        {
            return fibonacci(number - 1) + fibonacci(number - 2)
        }
    }
    var sum = 0
    print("calculating...")
    for index in 2..<50
    {
        print(index)
        let f = fibonacci(index)
        if (f < 4000000)
        {
            if (f % 2 == 0)
            {
                sum += f
            }
        }
        else
        {
            print(sum)
            return
        }
    }
}
Solution 2:
func solution2()
{
    var phiOne: Double = (1.0 + sqrt(5.0)) / 2.0
    var phiTwo: Double = (1.0 - sqrt(5.0)) / 2.0
    func findFibonacciNumber(nthNumber: Double) -> Int64
    {
        let nthNumber: Double = (pow(phiOne, nthNumber) - (pow(phiTwo, nthNumber))) / sqrt(5.0)
        return Int64(nthNumber)
    }
    var sum: Int64 = 0
    print("calculating...")
    for index in 2..<50
    {
        let f = findFibonacciNumber(Double(index))
        if (f < 4000000)
        {
            if (f % 2 == 0)
            {
                sum += f
            }
        }
        else
        {
            print(sum)
            return
        }
    }
}

Swift map: error: cannot invoke 'map' with an argument list of type '((_) -> _)'

I can't understand why this one works:
var arr = [4, 5, 6, 7]
arr.map() {
    x in
    return x + 2
}
while this one does not:
arr.map() {
    x in
    var y = x + 2
    return y
}
with this error:
Playground execution failed: MyPlayground.playground:13:5: error:
cannot invoke 'map' with an argument list of type '((_) -> _)'
arr.map() {
The problem here is the error message. In general, when you see something like cannot invoke ... with ..., it means that the compiler's type inference has just not worked.
In this case, you've run up against one of the limitations of inference within closures. Swift can infer the type of single-statement closures only, not multiple-statement ones. In your first example:
arr.map() {
    x in
    return x + 2
}
There's actually only one statement: return x + 2. However, in the second:
arr.map() {
    x in
    var y = x + 2
    return y
}
There's an assignment statement (var y = x + 2) and then the return. So the error is a little misleading: it doesn't mean you "can't invoke map() with this type of argument"; what it means to say is "I can't figure out what type x or y is".
By the way, in single-statement closures, there are two other things that can be inferred. The return statement:
arr.map() {
    x in
    x + 2
}
And the variable name itself:
arr.map() { $0 + 2 }
It all produces the same compiled code, though. So it's really a matter of taste which one you choose. (For instance, while I think the inferred return looks clean and easier to read, I don't like the $0, so I generally always put x in or something, even for very short closures. It's up to you, though, obviously.)
One final thing: since this is all really just syntax stuff, it's worth noting that the () isn't needed either:
arr.map { x in x + 2 }
As #MartinR pointed out, the compiler can infer some types from outer context as well:
let b: [Int] = arr.map { x in
    var y = x + 2
    return y
}
Which is worth bearing in mind (it seems that the "one-statement" rule only applies when there's no other type information available).
Swift can't infer the type every time, even though it should see that y = x + 2 means y is an Int too. My guess is that Swift parses the closure in a certain order that makes it unaware of the return type ahead of time in your case.
This works:
arr.map() {
    x -> Int in
    var y = x + 2
    return y
}
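Equivalently, annotating the parameter type in full also gives the compiler enough information to infer the rest (a small sketch of the same fix):

let mapped = arr.map { (x: Int) -> Int in
    var y = x + 2
    return y
}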

Get RandomPlusMinus in Swift

I want to get a random number that is either + or -. But what's wrong here?
func randomPlusMinus(value: Float) -> Float {
    return value * (arc4random() % 2 ? 1 : -1)
}
Error: Could not find an overload for '*' that accepts the supplied arguments
Try:
func randomPlusMinus(value: Float) -> Float {
    let invert: Bool = arc4random_uniform(2) == 1
    return value * (invert ? -1.0 : 1.0)
}
I don't think you can use 0 or 1 directly as a condition; both if and the ternary operator (cond ? v1 : v2) need an actual Bool value.
Then there's the Swift numerics thing (which is really annoying; they need to add/implement more convertible protocols in the standard library :/ )
PS - I don't have an interpreter handy, but I will double check later
Having an explicit test for the result of the modulo operation works for me:
func randomPlusMinus(value: Float) -> Float {
    return 0 == (arc4random() % 2) ? value : -value
}
I'm a little late answering this, but I feel the simplest solution would be:
func randomPlusMinus(value: Float) -> Float {
    return value * Float(Int(arc4random_uniform(2)) * 2 - 1)
}
The arc4random_uniform call will (supposedly) return 0 50% of the time and 1 50% of the time. So multiplying by 2 gives 0 or 2, then subtracting 1 gives -1 or 1 (the conversions to Int and then Float are there so the arithmetic and the final multiplication type-check). So the function returns value * -1 50% of the time and value * 1 the other 50% of the time.
I think this is what you are after if you want to randomize the +/- of the original value:
func randomPlusMinus(value: Float) -> Float {
    let x = arc4random_uniform(2)
    switch x {
    case 0:
        return value * -1
    default:
        return value
    }
}
