How to take binary string input from user in Swift - iOS

I want to take input from the user in binary, something like:
10101
11110
Then I need to perform a bitwise OR on these values. I know how to take input and how to perform a bitwise OR; what I don't know is how to convert the strings, because what I am currently using does not give the right result. Here is what I tried:
let aBits: Int16 = Int16(a)! //a is String "10101"
let bBits: Int16 = Int16(b)! //b is String "11110"
let combinedbits = aBits | bBits
Edit: I don't need decimal-to-binary conversion with radix, as my strings already contain only 0s and 1s.
A string can have up to 500 characters, like:
1001101111101011011100101100100110111011111011000100111100111110111101011011011100111001100011111010
This is beyond the Int limit - how do I handle that in Swift?
Edit 2: As per vacawama's answer, the code below works great:
let maxAB = max(a.count, b.count)
let paddedA = String(repeating: "0", count: maxAB - a.count) + a
let paddedB = String(repeating: "0", count: maxAB - b.count) + b
let Str = String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
I can have an array of up to 500 strings, and each string can have up to 500 characters. Then I have to take all possible pairs, perform a bitwise OR on each, and count the maximum number of 1s. Any idea to make the above solution more efficient? Thank you.

Since you need arbitrarily long binary numbers, do everything with strings.
This function first pads the two inputs to the same length, and then uses zip to pair the digits and map to compute the OR for each pair of characters. The resulting array of characters is converted back into a String with String().
func binaryOR(_ a: String, _ b: String) -> String {
let maxAB = max(a.count, b.count)
let paddedA = String(repeating: "0", count: maxAB - a.count) + a
let paddedB = String(repeating: "0", count: maxAB - b.count) + b
return String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
}
print(binaryOR("11", "1100")) // "1111"
print(binaryOR("1000", "0001")) // "1001"
I can have an array of up to 500 strings and each string can have up to 500 characters. Then I have to take all possible pairs, perform a bitwise OR on each, and count the maximum number of 1s. Any idea to make the above solution more efficient?
You will have to do 500 * 499 / 2 (which is 124,750 comparisons). It is important to avoid unnecessary and/or repeated work.
I would recommend:
Do an initial pass over your strings to find the length of the longest one. Then pad all of your strings to this length. I would keep track of the original length of each string in a tiny struct (a possible preprocessing pass is sketched after the struct):
struct BinaryNumber {
var string: String // padded string
var length: Int // original length before padding
}
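A possible preprocessing pass might look like this (only a sketch; rawStrings stands in for however you receive the input, and numbers is the array used in the loops below):
let rawStrings = ["11", "1100", "0001"]   // placeholder input
let maxLength = rawStrings.map { $0.count }.max() ?? 0
let numbers: [BinaryNumber] = rawStrings.map { s in
    BinaryNumber(string: String(repeating: "0", count: maxLength - s.count) + s,
                 length: s.count)
}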
Modify the binaryOR function to take BinaryNumbers and return Int, the count of "1"s in the OR.
func binaryORcountOnes(_ a: BinaryNumber, _ b: BinaryNumber) -> Int {
let maxAB = max(a.length, b.length)
return zip(a.string.suffix(maxAB), b.string.suffix(maxAB)).reduce(0) { total, pair in return total + (pair == ("0", "0") ? 0 : 1) }
}
Note: The use of suffix helps the efficiency by only checking the digits that matter. If the original strings had length 2 and 3, then only the last 3 digits will be OR-ed even if they're padded to length 500.
Loop and compare all pairs of BinaryNumbers to find largest count of ones:
var numbers: [BinaryNumber] // This array was created in step 1
var maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
for j in (i + 1) ..< numbers.count {
let ones = binaryORcountOnes(numbers[i], numbers[j])
if ones > maxOnes {
maxOnes = ones
}
}
}
print("maxOnes = \(maxOnes)")
Additional idea for speedup
OR can't create more ones than were in the original two numbers, and the number of ones can't exceed the maximum length of either of the original two numbers. So, if you count the ones in each number when you are padding them and store that in your struct in a var ones: Int property, you can use that to see if you should even bother calling binaryORcountOnes:
var maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
for j in (i + 1) ..< numbers.count {
if maxOnes < min(numbers[i].ones + numbers[j].ones, max(numbers[i].length, numbers[j].length)) {
let ones = binaryORcountOnes(numbers[i], numbers[j])
if ones > maxOnes {
maxOnes = ones
}
}
}
}
By the way, the length of the original string should really just be the minimum length that includes the highest order 1. So if the original string was "00101", then the length should be 3 because that is all you need to store "101".
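For example, one way to compute that minimal length (the variable names are mine):
let raw = "00101"
let effectiveLength = max(1, raw.drop(while: { $0 == "0" }).count)  // 3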

let number = Int(a, radix: 2)
The radix parameter makes Int parse the string as binary instead of decimal.

You can use radix for converting your string. Once converted, you can do a bitwise OR and then check nonzeroBitCount to count the number of 1s:
let a = Int("10101", radix: 2)!
let b = Int("11110", radix: 2)!
let bitwiseOR = a | b
let nonZero = bitwiseOR.nonzeroBitCount

As I already commented above, "10101" is actually a String, not a binary number, so "10101" | "11110" will not calculate what you actually need.
So what you need to do is convert both values to Int, apply the bitwise OR, and convert the result back to a binary String (the format in which you have the data: "11111", not 11111).
let a1 = Int("10101", radix: 2)!
let b1 = Int("11110", radix: 2)!
var result = a1 | b1 // 21 | 30
print(result)
Output: 31
Now convert it back to binary string
let binaryString = String(result, radix: 2)
print(binaryString)
Output: 11111
--: EDIT :--
I'm going to give a basic example of how to calculate the bitwise OR by hand, since the question specifically asks not to use the built-in conversion because the string is too large to be converted into an Int.
Algorithm: 1|0 = 1, 1|1 = 1, 0|0 = 0, 0|1 = 1
So what we do is fetch the characters from each String one by one, perform the | operation on them, and append the result to another String.
var str1 = "100101" // 37
var str2 = "10111" // 23
/// Result should be "110111" -> "55"
// #1. Make both string equal
let length1 = str1.characters.count
let length2 = str2.characters.count
if length1 != length2 {
let maxLength = max(length1, length2)
for index in 0..<maxLength {
if str1.characters.count < maxLength {
str1 = "0" + str1
}
if str2.characters.count < maxLength {
str2 = "0" + str2
}
}
}
// #2. Get the index and compare one by one in bitwise OR
// a) 1 - 0 = 1,
// b) 0 - 1 = 1,
// c) 1 - 1 = 1,
// d) 0 - 0 = 0
let length = max(str1.characters.count, str2.characters.count)
var newStr = ""
for index in 0..<length {
let charOf1 = Int(String(str1[str1.index(str1.startIndex, offsetBy: index)]))!
let charOf2 = Int(String(str2[str2.index(str2.startIndex, offsetBy: index)]))!
let orResult = charOf1 | charOf2
newStr.append("\(orResult)")
}
print(newStr)
Output: 110111 // 55
I would refer you to Understanding Bitwise Operators for more detail.

// Adds two binary strings digit by digit with carry (note: this is binary addition, not bitwise OR)
func addBinary(_ a: String, _ b: String) {
var result = ""
let arrA = Array(a)
let arrB = Array(b)
var lengthA = arrA.count - 1
var lengthB = arrB.count - 1
var sum = 0
while lengthA >= 0 || lengthB >= 0 || sum == 1 {
sum += (lengthA >= 0) ? Int(String(arrA[lengthA]))! : 0
sum += (lengthB >= 0) ? Int(String(arrB[lengthB]))! : 0
result = String((sum % 2)) + result
sum /= 2
lengthA -= 1
lengthB -= 1
}
print(result)
}
addBinary("11", "1") // prints "100"

Related

Mathematic calculation using array of numbers and array of arithmetic operators in Swift [duplicate]

I am writing a simple calculator, but my code doesn't give multiplication and division priority over plus and minus.
When doing 2 + 2 * 4, the result is 16 instead of 10.
How can I make my switch statement follow the usual math rules?
mutating func calculateTotal() -> Double {
var total: Double = 0
for (i, stringNumber) in stringNumbers.enumerated() {
if let number = Double(stringNumber) {
switch operators[i] {
case "+":
total += number
case "-":
total -= number
case "÷":
total /= number
case "×":
total *= number
default:
break
}
}
}
clear()
return total
}
Assuming you want a generalised and perhaps extensible algorithm for any arithmetic expression, the right way to do this is to use the Shunting Yard algorithm.
You have an input stream, which is the numbers and operators as the user typed them in and you have an output stream, which is the same numbers and operators but rearranged into reverse Polish notation. So, for example 2 + 2 * 4 would be transformed into 2 2 4 * + which is easily calculated by putting the numbers on a stack as you read them and applying the operators to the top items on the stack as you read them.
To do this the algorithm has an operator stack which can be visualised as a siding (hence "shunting yard") into which low priority operators are shunted until they are needed.
The general algorithm is
read an item from the input
if it is a number send it to the output
if it is an operator then
while the operator on the top of the stack has higher precedence than the operator you just read, pop it off the stack and send it to the output
push the operator you read from input onto the stack
repeat the above until the input is empty
pop all the operators on the stack into the output
So if you have 2 + 2 * 4 (NB top of the stack is on the left, bottom of the stack is on the right)
start:
input: 2 + 2 * 4
output: <empty>
stack: <empty>
step 1: send the 2 to output
input: + 2 * 4
output: 2
stack: <empty>
step 2: stack is empty so put + on the stack
input: 2 * 4
output: 2
stack: +
step 3: send the 2 to output
input: * 4
output: 2 2
stack: +
step 4: + is lower priority than * so just put * on the stack
input: 4
output: 2 2
stack: * +
step 5: Send 4 to output
input:
output: 2 2 4
stack: * +
step 6: Input is empty so pop the stack to output
input:
output: 2 2 4 * +
stack:
The Wikipedia entry I linked above has a more detailed description and an algorithm that can handle parentheses and function calls and is much more extensible.
For completeness, here is an implementation of my simplified version of the algorithm
enum Token: CustomStringConvertible
{
var description: String
{
switch self
{
case .number(let num):
return "\(num)"
case .op(let symbol):
return "\(symbol)"
}
}
case op(String)
case number(Int)
var precedence: Int
{
switch self
{
case .op(let symbol):
return Token.precedences[symbol] ?? -1
default:
return -1
}
}
var operation: (inout Stack<Int>) -> ()
{
switch self
{
case .op(let symbol):
return Token.operations[symbol]!
case .number(let value):
return { $0.push(value) }
}
}
static let precedences = [ "+" : 10, "-" : 10, "*" : 20, "/" : 20]
static let operations: [String : (inout Stack<Int>) -> ()] =
[
"+" : { $0.push($0.pop() + $0.pop()) },
"-" : { $0.push($0.pop() - $0.pop()) },
"*" : { $0.push($0.pop() * $0.pop()) },
"/" : { $0.push($0.pop() / $0.pop()) }
]
}
struct Stack<T>
{
var values: [T] = []
var isEmpty: Bool { return values.isEmpty }
mutating func push(_ n: T)
{
values.append(n)
}
mutating func pop() -> T
{
return values.removeLast()
}
func peek() -> T
{
return values.last!
}
}
func shuntingYard(input: [Token]) -> [Token]
{
var operatorStack = Stack<Token>()
var output: [Token] = []
for token in input
{
switch token
{
case .number:
output.append(token)
case .op:
while !operatorStack.isEmpty && operatorStack.peek().precedence >= token.precedence
{
output.append(operatorStack.pop())
}
operatorStack.push(token)
}
}
while !operatorStack.isEmpty
{
output.append(operatorStack.pop())
}
return output
}
let input: [Token] = [ .number(2), .op("+"), .number(2), .op("*"), .number(4)]
let output = shuntingYard(input: input)
print("\(output)")
var dataStack = Stack<Int>()
for token in output
{
token.operation(&dataStack)
}
print(dataStack.pop())
If you only have the four operations +, -, ×, and ÷, you can do this by keeping track of a pendingOperand and pendingOperation whenever you encounter a + or -.
Then compute the pending operation when you encounter another + or -, or at the end of the calculation. Note that + or - computes the pending operation, but then immediately starts a new one.
I have modified your function to take the stringNumbers, operators, and initial values as input so that it could be tested independently in a Playground.
func calculateTotal(stringNumbers: [String], operators: [String], initial: Double) -> Double {
func performPendingOperation(operand: Double, operation: String, total: Double) -> Double {
switch operation {
case "+":
return operand + total
case "-":
return operand - total
default:
return total
}
}
var total = initial
var pendingOperand = 0.0
var pendingOperation = ""
for (i, stringNumber) in stringNumbers.enumerated() {
if let number = Double(stringNumber) {
switch operators[i] {
case "+":
total = performPendingOperation(operand: pendingOperand, operation: pendingOperation, total: total)
pendingOperand = total
pendingOperation = "+"
total = number
case "-":
total = performPendingOperation(operand: pendingOperand, operation: pendingOperation, total: total)
pendingOperand = total
pendingOperation = "-"
total = number
case "÷":
total /= number
case "×":
total *= number
default:
break
}
}
}
// Perform final pending operation if needed
total = performPendingOperation(operand: pendingOperand, operation: pendingOperation, total: total)
// clear()
return total
}
Tests:
// 4 + 3
calculateTotal(stringNumbers: ["3"], operators: ["+"], initial: 4)
7
// 4 × 3
calculateTotal(stringNumbers: ["3"], operators: ["×"], initial: 4)
12
// 2 + 2 × 4
calculateTotal(stringNumbers: ["2", "4"], operators: ["+", "×"], initial: 2)
10
// 2 × 2 + 4
calculateTotal(stringNumbers: ["2", "4"], operators: ["×", "+"], initial: 2)
8
// 17 - 2 × 3 + 10 + 7 ÷ 7
calculateTotal(stringNumbers: ["2", "3", "10", "7", "7"], operators: ["-", "×", "+", "+", "÷"], initial: 17)
22
First you have to search the array for ÷ and × signs and handle those.
Then you can just sum or subtract.
mutating func calculateTotal() -> Double {
var total: Double = 0
for (i, stringNumber) in stringNumbers.enumerated() {
if let number = Double(stringNumber) {
switch operators[i] {
case "÷":
total /= number
case "×":
total *= number
default:
break
}
//Remove the number from the array and make another for loop with the sum and subtract operations.
}
}
clear()
return total
}
This will work if you are not using complex numbers.
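For illustration, here is a minimal sketch of that two-pass idea as a free function (the function name and parameter list are mine, modelled on the inputs used elsewhere in this question): the first pass folds × and ÷ into the number on their left, the second applies + and - from left to right.
func calculateWithPrecedence(initial: Double, stringNumbers: [String], operators: [String]) -> Double {
    var numbers = [initial] + stringNumbers.compactMap { Double($0) }
    var ops = operators
    // Pass 1: fold × and ÷ into their left operand
    var i = 0
    while i < ops.count {
        if ops[i] == "×" || ops[i] == "÷" {
            numbers[i] = ops[i] == "×" ? numbers[i] * numbers[i + 1] : numbers[i] / numbers[i + 1]
            numbers.remove(at: i + 1)
            ops.remove(at: i)
        } else {
            i += 1
        }
    }
    // Pass 2: apply + and - left to right
    var total = numbers[0]
    for (j, op) in ops.enumerated() {
        total = (op == "+") ? total + numbers[j + 1] : total - numbers[j + 1]
    }
    return total
}
calculateWithPrecedence(initial: 2, stringNumbers: ["2", "4"], operators: ["+", "×"]) // 10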
If you don't care about speed, you can handle it the way a machine would: just pick any one sub-expression that can be evaluated, evaluate it, and repeat until everything has been calculated.
Just for fun. I use some silly variable and function names.
func evaluate(_ values: [String]) -> String{
switch values[1] {
case "+": return String(Int(values[0])! + Int(values[2])!)
case "-": return String(Int(values[0])! - Int(values[2])!)
case "×": return String(Int(values[0])! * Int(values[2])!)
case "÷": return String(Int(values[0])! / Int(values[2])!)
default: break;
}
return "";
}
func oneTime(_ string: inout String, _ strings: [String]) throws{
if let first = try NSRegularExpression(pattern: "(\\d+)\\s*(\(strings.map{"\\\($0)"}.joined(separator: "|")))\\s*(\\d+)", options: []).firstMatch(in: string , options: [], range: NSMakeRange(0, string.count)) {
let tempResult = evaluate((1...3).map{ (string as NSString).substring(with: first.range(at: $0))})
string.replaceSubrange( Range(first.range(at: 0), in: string)! , with: tempResult)
}
}
func recursive(_ string: inout String, _ strings: [String]) throws{
var count : Int!
repeat{ count = string.count ; try oneTime(&string, strings)
} while (count != string.count)
}
func final(_ string: inout String, _ strings: [[String]]) throws -> String{
return try strings.reduce(into: string) { (result, signs) in
try recursive(&string, signs)
}}
var string = "17 - 23 + 10 + 7 ÷ 7"
try final(&string, [["×","÷"],["+","-"]])
print("result:" + string)
Using JeremyP's method and the Shunting Yard algorithm is what worked for me, but I had some differences to do with operator associativity (left or right priority), so I adapted it and wrote the code below, which is based on JeremyP's answer but uses arrays.
First we have the array with the calculation in Strings, e.g.:
let testArray = ["10","+", "5", "*" , "4", "+" , "10", "+", "20", "/", "2"]
We use the function below to get the RPN version using the Shunting Yard algorithm.
func getRPNArray(calculationArray: [String]) -> [String]{
let c = calculationArray
var myRPNArray = [String]()
var operandArray = [String]()
for i in 0...c.count - 1 {
if c[i] != "+" && c[i] != "-" && c[i] != "*" && c[i] != "/" {
//push number
let number = c[i]
myRPNArray.append(number)
} else {
//if this is the first operand put it on the opStack
if operandArray.count == 0 {
let firstOperand = c[i]
operandArray.append(firstOperand)
} else {
if c[i] == "+" || c[i] == "-" {
operandArray.reverse()
myRPNArray.append(contentsOf: operandArray)
operandArray = []
let uniqOperand = c[i]
operandArray.append(uniqOperand)
} else if c[i] == "*" || c[i] == "/" {
let strongOperand = c[i]
//If I want my mult./div. from right(eg because of parenthesis) the line below is all I need
//--------------------------------
// operandArray.append(strongOperand)
//----------------------------------
//If I want my mult./div. from left
let lastOperand = operandArray[operandArray.count - 1]
if lastOperand == "+" || lastOperand == "-" {
operandArray.append(strongOperand)
} else {
myRPNArray.append(lastOperand)
operandArray.removeLast()
operandArray.append(strongOperand)
}
}
}
}
}
//when I have no more numbers I append the reversed operant array
operandArray.reverse()
myRPNArray.append(contentsOf: operandArray)
operandArray = []
print("RPN: \(myRPNArray)")
return myRPNArray
}
Then we feed the RPN array into the function below to calculate the result. On every pass we remove the two numbers and the operator that were just used, insert the intermediate result, and pad the front of the array with two "p" placeholders, so at the end we are left with the solution and an array of "p"s.
func getResultFromRPNarray(myArray: [String]) -> Double {
var a = [String]()
a = myArray
print("a: \(a)")
var result = Double()
let n = a.count
for i in 0...n - 1 {
if n < 2 {
result = Double(a[0])!
} else {
if a[i] == "p" {
//Do nothing else. Calculations are over and the result is in your hands!!!
} else {
if a[i] == "+" {
result = Double(a[i-2])! + Double(a[i-1])!
a.insert(String(result), at: i-2)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.insert("p", at: 0)
a.insert("p", at: 0)
} else if a[i] == "-" {
result = Double(a[i-2])! - Double(a[i-1])!
a.insert(String(result), at: i-2)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.insert("p", at: 0)
a.insert("p", at: 0)
} else if a[i] == "*" {
result = Double(a[i-2])! * Double(a[i-1])!
a.insert(String(result), at: i-2)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.insert("p", at: 0)
a.insert("p", at: 0)
} else if a[i] == "/" {
result = Double(a[i-2])! / Double(a[i-1])!
a.insert(String(result), at: i-2)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.remove(at: i - 1)
a.insert("p", at: 0)
a.insert("p", at: 0)
} else {
// it is a number so do nothing and go the next one
}
}//not over yet
}//n>2
}//iterating
return result
}//Func
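For example, chaining the two functions with the testArray defined above (tracing the code by hand, this should print 50.0):
let rpnArray = getRPNArray(calculationArray: testArray)
let total = getResultFromRPNarray(myArray: rpnArray)
print(total) // 50.0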

Find longest common substring of array of Strings

In my Swift 3.0 app, I want to determine the best name for something by finding the longest common substring of 6 to 12 strings.
Example strings:
ON/OFF office lights
DIM office lights
VALUE office lights
FB office lights
FB VALUE office lights
Desired output:
office lights
I've come across multiple StackOverflow answers for the longest common subsequence but haven't been able to adapt any of them to my needs.
Any help would be greatly appreciated!
I converted Java & C++ code into Swift 3 , collected from GeeksForGeeks Longest Common Subsequence & Longest Common Substring.
It works !
class LongestCommon
{
// Returns the longest common subsequence of X[0..m-1] and Y[0..n-1]
private static func lcSubsequence(_ X : String , _ Y : String ) -> String
{
let m = X.characters.count
let n = Y.characters.count
var L = Array(repeating: Array(repeating: 0, count: n + 1 ) , count: m + 1)
// Following steps build L[m+1][n+1] in bottom up fashion. Note
// that L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1]
for i in stride(from: 0, through: m, by: 1)
{
for j in stride(from: 0, through: n, by: 1)
{
if i == 0 || j == 0
{
L[i][j] = 0;
}
else if X[X.index( X.startIndex , offsetBy: (i - 1) )] == Y[Y.index( Y.startIndex , offsetBy: (j - 1) )]
{
L[i][j] = L[i-1][j-1] + 1
}
else
{
L[i][j] = max(L[i-1][j], L[i][j-1])
}
}
}
// Following code is used to print LCS
var index = L[m][n]
// Create a character array to store the lcs string
var lcs = ""
// Start from the right-most-bottom-most corner and
// one by one store characters in lcs[]
var i = m
var j = n
while (i > 0 && j > 0)
{
// If current character in X[] and Y are same, then
// current character is part of LCS
if X[X.index( X.startIndex , offsetBy: (i - 1) )] == Y[Y.index( Y.startIndex , offsetBy: (j - 1) )]
{
lcs.append(X[X.index( X.startIndex , offsetBy: (i - 1) )])
i-=1
j-=1
index-=1
}
// If not same, then find the larger of two and
// go in the direction of larger value
else if (L[i-1][j] > L[i][j-1])
{
i-=1
}
else
{
j-=1
}
}
// return the lcs
return String(lcs.characters.reversed())
}
// Returns the longest common substring of X[0..m-1] and Y[0..n-1]
private static func lcSubstring(_ X : String , _ Y : String ) -> String
{
let m = X.characters.count
let n = Y.characters.count
var L = Array(repeating: Array(repeating: 0, count: n + 1 ) , count: m + 1)
var result : (length : Int, iEnd : Int, jEnd : Int) = (0,0,0)
// Following steps build L[m+1][n+1] in bottom up fashion. Note
// that L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1]
for i in stride(from: 0, through: m, by: 1)
{
for j in stride(from: 0, through: n, by: 1)
{
if i == 0 || j == 0
{
L[i][j] = 0;
}
else if X[X.index( X.startIndex , offsetBy: (i - 1) )] == Y[Y.index( Y.startIndex , offsetBy: (j - 1) )]
{
L[i][j] = L[i-1][j-1] + 1
if result.0 < L[i][j]
{
result.length = L[i][j]
result.iEnd = i
result.jEnd = j
}
}
else
{
L[i][j] = 0 //max(L[i-1][j], L[i][j-1])
}
}
}
// Following code is used to print LCS
let lcs = X.substring(with: X.index(X.startIndex, offsetBy: result.iEnd-result.length)..<X.index(X.startIndex, offsetBy: result.iEnd))
// return the lcs
return lcs
}
// driver program
class func subsequenceOf(_ strings : [String] ) -> String
{
var answer = strings[0] // For one string the answer is the string itself
for i in stride(from: 1, to: strings.count, by: 1)
{
answer = lcSubsequence(answer,strings[i])
}
return answer
}
class func substringOf(_ strings : [String] ) -> String
{
var answer = strings[0] // For one string the answer is the string itself
for i in stride(from: 1, to: strings.count, by: 1)
{
answer = lcSubstring(answer,strings[i])
}
return answer
}
}
Usage:
let strings = ["ON/OFF office lights",
"DIM office lights",
"VALUE office lights",
"FB office lights",
"FB VALUE office lights"]
print(LongestCommon.subsequenceOf(strings))
print(LongestCommon.substringOf(strings))

Why is my code slow when finding the Fibonacci sum?

I'm writing answers for Project Euler questions in this repo, but I'm having some performance issues in my solution.
Question 2:
Each new term in the Fibonacci sequence is generated by adding the previous two terms.
By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
My solution is:
func solution2()
{
func fibonacci(number: Int) -> (Int)
{
if number <= 1
{
return number
}
else
{
return fibonacci(number - 1) + fibonacci(number - 2)
}
}
var sum = 0
print("calculating...")
for index in 2..<50
{
print (index)
if (fibonacci(index) % 2 == 0)
{
sum += fibonacci(index)
}
}
print(sum)
}
My question is: why does it get super slow after iteration 42? I want to go up to 4000000 as the question says. Any help?
solution 2
func solution2_fast()
{
var phiOne : Double = (1.0 + sqrt(5.0)) / 2.0
var phiTwo : Double = (1.0 - sqrt(5.0)) / 2.0
func findFibonacciNumber (nthNumber : Double) -> Int64
{
let nthNumber : Double = (pow(phiOne, nthNumber) - (pow(phiTwo, nthNumber))) / sqrt(5.0)
return Int64(nthNumber)
}
var sum : Int64 = 0
print("calculating...")
for index in 2..<4000000
{
print (index)
let f = findFibonacciNumber(Double(index))
if (f % 2 == 0)
{
sum += f
}
}
print(sum)
}
The most important thing about PE questions is to think about what it is asking.
This is not asking you to produce all Fibonacci numbers F(n) less than 4000000. It is asking for the sum of all even F(n) less than 4000000.
Think about the sum of all F(n) where F(n) < 10.
1 + 2 + 3 + 5 + 8
I could do this by calculating F(1), then F(2), then F(3), and so on... and then checking they are less than 10 before adding them up.
Or I could store two variables...
F1 = 1
F2 = 2
And a total...
Total = 3
Now I can turn this into a while loop and lose the recursion altogether. In fact, the most complex thing I'm doing is adding two numbers together...
I came up with this...
func sumEvenFibonacci(lessThan limit: Int) -> Int {
// store the first two Fibonacci numbers
var n1 = 1
var n2 = 2
// and a cumulative total
var total = 0
// repeat until you hit the limit
while n2 < limit {
// if the current Fibonacci is even then add to total
if n2 % 2 == 0 {
total += n2
}
// move the stored Fibonacci numbers up by one.
let temp = n2
n2 = n2 + n1
n1 = temp
}
return total
}
It runs in a fraction of a second.
sumEvenFibonacci(lessThan: 4000000)
Finds the correct answer.
In fact this... sumEvenFibonacci(lessThan: 1000000000000000000) runs in about half a second.
The second solution seems to be fast(er), but an Int64 will not be sufficient to store the result for long. The sum of Fibonacci numbers from 2..91 is already 7,527,100,471,027,205,936, and the largest number you can store in an Int64 is 9,223,372,036,854,775,807. Beyond that you need some other type, like a BigInteger.
Because you use naive recursion, every call spawns two more recursive calls, so the amount of work (and the recursion piling up in memory) grows exponentially; by the time you reach index 42 you are making hundreds of millions of calls. Plain recursion isn't suitable here - store the already-computed results in an array (memoize) instead. It is not a problem with Swift itself.
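For example, a minimal memoized sketch of that "store the results" idea (the cache and function name are mine; a dictionary plays the role of the array):
var fibCache: [Int: Int] = [:]
func fibonacciMemoized(_ n: Int) -> Int {
    if n <= 1 { return n }
    if let cached = fibCache[n] { return cached }
    let value = fibonacciMemoized(n - 1) + fibonacciMemoized(n - 2)
    fibCache[n] = value
    return value
}
fibonacciMemoized(42) // 267914296, without the exponential blow-up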
Here is the answer done in two different ways:
func solution2_recursive()
{
func fibonacci(number: Int) -> (Int)
{
if number <= 1
{
return number
}
else
{
return fibonacci(number - 1) + fibonacci(number - 2)
}
}
var sum = 0
print("calculating...")
for index in 2..<50
{
print (index)
let f = fibonacci(index)
if( f < 4000000)
{
if (f % 2 == 0)
{
sum += f
}
}
else
{
print(sum)
return
}
}
}
solution 2
func solution2()
{
var phiOne : Double = (1.0 + sqrt(5.0)) / 2.0
var phiTwo : Double = (1.0 - sqrt(5.0)) / 2.0
func findFibonacciNumber (nthNumber : Double) -> Int64
{
let nthNumber : Double = (pow(phiOne, nthNumber) - (pow(phiTwo, nthNumber))) / sqrt(5.0)
return Int64(nthNumber)
}
var sum : Int64 = 0
print("calculating...")
for index in 2..<50
{
let f = findFibonacciNumber(Double(index))
if(f < 4000000)
{
if (f % 2 == 0)
{
sum += f
}
}
else
{
print(sum)
return
}
}
}

Approach for reading an arbitrary number of bits in Swift

I'm trying to do some binary file parsing in Swift, and although I have things working, I have a situation where I have variable fields.
I have all my parsing working in the default case. I grab:
1-bit field
1-bit field
1-bit field
11-bits field
1-bit field
(optional) 4-bit field
(optional) 4-bit field
1-bit field
2-bit field
(optional) 4-bit field
5-bit field
6-bit field
(optional) 6-bit field
(optional) 24-bit field
(junk data - padding up to the next byte boundary, 0 to 7 bits as needed)
Most of the data uses only a certain set of optionals so I've gone ahead and started writing classes to handle that data. My general approach is to create a pointer structure and then construct a byte array from that:
let rawData: NSMutableData = NSMutableData(data: input_nsdata)
var ptr: UnsafeMutablePointer<UInt8> = UnsafeMutablePointer<UInt8>(rawData.mutableBytes)
bytes = UnsafeMutableBufferPointer<UInt8>(start: ptr, count: rawData.length - offset)
So I end up working with an array of [UInt8] and I can do my parsing in a way similar to:
let b1 = (bytes[3] & 0x01) << 5
let b2 = (bytes[4] & 0xF8) >> 3
return Int(b1 | b2)
Where I run into trouble is with the optional fields: because my data does not lie exactly on byte boundaries, everything gets complicated. In an ideal world I would probably just work directly with the pointer and advance it by bytes as needed; however, there is no way that I'm aware of to advance a pointer by 3 bits - which brings me to my question.
What is the best approach to handle my situation?
One idea I had was to come up with various structures that reflect the optional fields, except I'm not sure how to create bit-aligned packed structures in Swift.
What is my best approach here? For clarification - the initial 1-bit fields determine which of the optional fields are set.
If the fields do not lie on byte boundaries then you'll have to keep
track of both the current byte and the current bit position within a byte.
Here is a possible solution which allows you to read an arbitrary number
of bits from a data array and does all the bookkeeping. The only
restriction is that the result of nextBits() must fit into a UInt
(32 or 64 bits, depending on the platform).
struct BitReader {
private let data : [UInt8]
private var byteOffset : Int
private var bitOffset : Int
init(data : [UInt8]) {
self.data = data
self.byteOffset = 0
self.bitOffset = 0
}
func remainingBits() -> Int {
return 8 * (data.count - byteOffset) - bitOffset
}
mutating func nextBits(numBits : Int) -> UInt {
precondition(numBits <= remainingBits(), "attempt to read more bits than available")
var bits = numBits // remaining bits to read
var result : UInt = 0 // result accumulator
// Read remaining bits from current byte:
if bitOffset > 0 {
if bitOffset + bits < 8 {
result = (UInt(data[byteOffset]) & UInt(0xFF >> bitOffset)) >> UInt(8 - bitOffset - bits)
bitOffset += bits
return result
} else {
result = UInt(data[byteOffset]) & UInt(0xFF >> bitOffset)
bits = bits - (8 - bitOffset)
bitOffset = 0
byteOffset = byteOffset + 1
}
}
// Read entire bytes:
while bits >= 8 {
result = (result << UInt(8)) + UInt(data[byteOffset])
byteOffset = byteOffset + 1
bits = bits - 8
}
// Read remaining bits:
if bits > 0 {
result = (result << UInt(bits)) + (UInt(data[byteOffset]) >> UInt(8 - bits))
bitOffset = bits
}
return result
}
}
Example usage:
let data : [UInt8] = ... your data ...
var bitReader = BitReader(data: data)
let b1 = bitReader.nextBits(1)
let b2 = bitReader.nextBits(1)
let b3 = bitReader.nextBits(1)
let b4 = bitReader.nextBits(11)
let b5 = bitReader.nextBits(1)
if b1 > 0 {
let b6 = bitReader.nextBits(4)
let b7 = bitReader.nextBits(4)
}
// ... and so on ...
And here is another possible implementation, which is a bit simpler
and perhaps more efficient. It collects bytes into a UInt and
then extracts the result in a single step.
Here the restriction is that numBits + 7 must be less than or equal
to the number of bits in a UInt (32 or 64). (Of course UInt
can be replaced by UInt64 to make it platform independent.)
struct BitReader {
private let data : [UInt8]
private var byteOffset = 0
private var currentValue : UInt = 0 // Bits which still have to be consumed
private var currentBits = 0 // Number of valid bits in `currentValue`
init(data : [UInt8]) {
self.data = data
}
func remainingBits() -> Int {
return 8 * (data.count - byteOffset) + currentBits
}
mutating func nextBits(numBits : Int) -> UInt {
precondition(numBits <= remainingBits(), "attempt to read more bits than available")
// Collect bytes until we have enough bits:
while currentBits < numBits {
currentValue = (currentValue << 8) + UInt(data[byteOffset])
currentBits = currentBits + 8
byteOffset = byteOffset + 1
}
// Extract result:
let remaining = currentBits - numBits
let result = currentValue >> UInt(remaining)
// Update remaining bits:
currentValue = currentValue & UInt(1 << remaining - 1)
currentBits = remaining
return result
}
}
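Usage is the same as for the first version, for example (a small sketch with made-up sample bytes):
var bitReader = BitReader(data: [0b1011_0100, 0b0110_1001])
let flag = bitReader.nextBits(1)    // the first bit
let field = bitReader.nextBits(11)  // the next 11 bits as a UInt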

Swift convert decimal String to UInt8-Array

I have a very long String (600+ characters) holding a big decimal value (yes, I know - sounds like a BigInteger) and need the byte representation of this value.
Is there any easy way to achieve this with Swift?
static func decimalStringToUInt8Array(decimalString:String) -> [UInt8] {
...
}
Edit: Updated for Swift 5
I wrote you a function to convert your number string. This is written in Swift 5 (originally Swift 1.2).
func decimalStringToUInt8Array(_ decimalString: String) -> [UInt8] {
// Convert input string into array of Int digits
let digits = Array(decimalString).compactMap { Int(String($0)) }
// Nothing to process? Return an empty array.
guard digits.count > 0 else { return [] }
let numdigits = digits.count
// Array to hold the result, in reverse order
var bytes = [UInt8]()
// Convert array of digits into array of Int values each
// representing 6 digits of the original number. Six digits
// was chosen to work on 32-bit and 64-bit systems.
// Compute length of first number. It will be less than 6 if
// there isn't a multiple of 6 digits in the number.
var ints = Array(repeating: 0, count: (numdigits + 5)/6)
var rem = numdigits % 6
if rem == 0 {
rem = 6
}
var index = 0
var accum = 0
for digit in digits {
accum = accum * 10 + digit
rem -= 1
if rem == 0 {
rem = 6
ints[index] = accum
index += 1
accum = 0
}
}
// Repeatedly divide value by 256, accumulating the remainders.
// Repeat until original number is zero
while ints.count > 0 {
var carry = 0
for (index, value) in ints.enumerated() {
var total = carry * 1000000 + value
carry = total % 256
total /= 256
ints[index] = total
}
bytes.append(UInt8(truncatingIfNeeded: carry))
// Remove leading Ints that have become zero.
while ints.count > 0 && ints[0] == 0 {
ints.remove(at: 0)
}
}
// Reverse the array and return it
return bytes.reversed()
}
print(decimalStringToUInt8Array("0")) // prints "[0]"
print(decimalStringToUInt8Array("255")) // prints "[255]"
print(decimalStringToUInt8Array("256")) // prints "[1,0]"
print(decimalStringToUInt8Array("1024")) // prints "[4,0]"
print(decimalStringToUInt8Array("16777216")) // prints "[1,0,0,0]"
Here's the reverse function. You'll notice it is very similar:
func uInt8ArrayToDecimalString(_ uint8array: [UInt8]) -> String {
// Nothing to process? Return an empty string.
guard uint8array.count > 0 else { return "" }
// For efficiency in calculation, combine 3 bytes into one Int.
let numvalues = uint8array.count
var ints = Array(repeating: 0, count: (numvalues + 2)/3)
var rem = numvalues % 3
if rem == 0 {
rem = 3
}
var index = 0
var accum = 0
for value in uint8array {
accum = accum * 256 + Int(value)
rem -= 1
if rem == 0 {
rem = 3
ints[index] = accum
index += 1
accum = 0
}
}
// Array to hold the result, in reverse order
var digits = [Int]()
// Repeatedly divide value by 10, accumulating the remainders.
// Repeat until original number is zero
while ints.count > 0 {
var carry = 0
for (index, value) in ints.enumerated() {
var total = carry * 256 * 256 * 256 + value
carry = total % 10
total /= 10
ints[index] = total
}
digits.append(carry)
// Remove leading Ints that have become zero.
while ints.count > 0 && ints[0] == 0 {
ints.remove(at: 0)
}
}
// Reverse the digits array, convert them to String, and join them
return digits.reversed().map(String.init).joined()
}
Doing a round trip test to make sure we get back to where we started:
let a = "1234567890987654321333555777999888666444222000111"
let b = decimalStringToUInt8Array(a)
let c = uInt8ArrayToDecimalString(b)
if a == c {
print("success")
} else {
print("failure")
}
success
Check that eight 255 bytes is the same as UInt64.max:
print(uInt8ArrayToDecimalString([255, 255, 255, 255, 255, 255, 255, 255]))
print(UInt64.max)
18446744073709551615
18446744073709551615
You can use NSData(bytes:length:) to wrap an Int's bytes in NSData, and then copy the bytes from the NSData into a [UInt8] array.
Once you know that, the only thing left is to work out the size of your array from the value itself. Here is an example (note that this only works while the value still fits in an Int, so not for a 600-digit number):
func stringToUInt8(string: String) -> [UInt8] {
guard var int = Int(string) else { return [] }
// Number of bytes needed to hold the value (at least 1)
let size = max(1, (int.bitWidth - int.leadingZeroBitCount + 7) / 8)
let data = NSData(bytes: &int, length: size)
var b = [UInt8](repeating: 0, count: size)
data.getBytes(&b, length: size)
return b // least significant byte first (native little-endian order)
}
You can always do:
let bytes = [UInt8](decimalString.utf8) // e.g. "255" -> [50, 53, 53], the ASCII codes of the characters
If you want the UTF-8 bytes.
Provided you had division implemented on your decimal string, you could divide by 256 repeatedly. The remainder of the first division is your least significant byte.
Here's an example of division by a scalar in C (it assumes the length of the number is stored in A[0] and writes the result back into the same array):
void div(int A[], int B)
{
int i, t = 0;
for (i = A[0]; i > 0; i--, t %= B)
A[i] = (t = t * 10 + A[i]) / B;
for (; A[0] > 1 && !A[A[0]]; A[0]--);
}
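For reference, a rough Swift sketch of the same idea (the function name is mine): repeatedly divide the array of decimal digits by 256 and collect the remainders as bytes.
func decimalDigitsToBytes(_ decimalString: String) -> [UInt8] {
    var digits = decimalString.compactMap { $0.wholeNumberValue } // most significant digit first
    var bytes: [UInt8] = []
    while !digits.isEmpty {
        var quotient: [Int] = []
        var remainder = 0
        for digit in digits {
            let value = remainder * 10 + digit
            quotient.append(value / 256)
            remainder = value % 256
        }
        bytes.append(UInt8(remainder))                    // least significant byte first
        digits = Array(quotient.drop(while: { $0 == 0 })) // strip leading zero digits
    }
    return bytes.reversed()                               // most significant byte first
}
print(decimalDigitsToBytes("16777216")) // [1, 0, 0, 0]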
