Dividing massive numbers in Swift - ios

I have a UInt128 holding a massive number like 2000009100000000000000 and I want to divide it by 10^30.
How do I do that?

Possibly by using NSDecimalNumber. For example,
let num1 = NSDecimalNumber(string: "2000009100000000000000")
let num2 = NSDecimalNumber(mantissa: 1, exponent: 30, isNegative: false) // 10^30
let result = num1.dividing(by: num2)
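Printing the quotient from the code above gives the exact decimal result (assuming the divisor really is 10^30, as intended):
print(result) // 0.0000000020000091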

Related

how to take binary string input from user in swift

I want to take input from the user in binary. What I want is something like:
10101
11110
Then I need to perform a bitwise OR on these. I know how to take input and how to perform a bitwise OR; what I want to know is how to convert the strings, because what I am currently using is not giving the right result. What I tried is below:
let aBits: Int16 = Int16(a)! //a is String "10101"
let bBits: Int16 = Int16(b)! //b is String "11110"
let combinedbits = aBits | bBits
Edit: I don't need decimal-to-binary conversion with radix, as my string already has only 0s and 1s.
The string can have up to 500 characters, like:
1001101111101011011100101100100110111011111011000100111100111110111101011011011100111001100011111010
This is beyond the Int limit; how do I handle that in Swift?
Edit 2: As per vacawama's answer, the code below works great:
let maxAB = max(a.count, b.count)
let paddedA = String(repeating: "0", count: maxAB - a.count) + a
let paddedB = String(repeating: "0", count: maxAB - b.count) + b
let Str = String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
I can have an array of up to 500 strings, and each string can have up to 500 characters. Then I have to get all possible pairs, perform a bitwise OR on each pair, and count the maximum number of 1's. Any idea how to make the above solution more efficient? Thank you.
Since you need arbitrarily long binary numbers, do everything with strings.
This function first pads the two inputs to the same length, and then uses zip to pair the digits and map to compute the OR for each pair of characters. The resulting array of characters is converted back into a String with String().
func binaryOR(_ a: String, _ b: String) -> String {
    let maxAB = max(a.count, b.count)
    let paddedA = String(repeating: "0", count: maxAB - a.count) + a
    let paddedB = String(repeating: "0", count: maxAB - b.count) + b
    return String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
}
print(binaryOR("11", "1100")) // "1111"
print(binaryOR("1000", "0001")) // "1001"
I can have an array of up to 500 strings, and each string can have up to 500 characters. Then I have to get all possible pairs, perform a bitwise OR on each pair, and count the maximum number of 1's. Any idea how to make the above solution more efficient?
You will have to do 500 * 499 / 2 = 124,750 comparisons, so it is important to avoid unnecessary and/or repeated work.
I would recommend:
Do an initial pass to loop through your strings to find the length of the longest one. Then pad all of your strings to this length. I would keep track of the original length of each string in a tiny struct:
struct BinaryNumber {
    var string: String  // padded string
    var length: Int     // original length before padding
}
Modify the binaryOR function to take BinaryNumbers and return Int, the count of "1"s in the OR.
func binaryORcountOnes(_ a: BinaryNumber, _ b: BinaryNumber) -> Int {
    let maxAB = max(a.length, b.length)
    return zip(a.string.suffix(maxAB), b.string.suffix(maxAB)).reduce(0) { total, pair in
        return total + (pair == ("0", "0") ? 0 : 1)
    }
}
Note: The use of suffix helps the efficiency by only checking the digits that matter. If the original strings had length 2 and 3, then only the last 3 digits will be OR-ed even if they're padded to length 500.
Loop and compare all pairs of BinaryNumbers to find largest count of ones:
var numbers: [BinaryNumber]  // This array was created in step 1
var maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
    for j in (i + 1) ..< numbers.count {
        let ones = binaryORcountOnes(numbers[i], numbers[j])
        if ones > maxOnes {
            maxOnes = ones
        }
    }
}
print("maxOnes = \(maxOnes)")
Additional idea for speedup
OR can't create more ones than were in the original two numbers combined, and the number of ones can't exceed the length of the longer of the two original numbers. So, if you count the ones in each number while you are padding them and store that in your struct in a var ones: Int property, you can use it to decide whether it is even worth calling binaryORcountOnes:
var maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
    for j in (i + 1) ..< numbers.count {
        if maxOnes < min(numbers[i].ones + numbers[j].ones, max(numbers[i].length, numbers[j].length)) {
            let ones = binaryORcountOnes(numbers[i], numbers[j])
            if ones > maxOnes {
                maxOnes = ones
            }
        }
    }
}
By the way, the length of the original string should really just be the minimum length that includes the highest order 1. So if the original string was "00101", then the length should be 3 because that is all you need to store "101".
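A minimal sketch of that preprocessing pass; the ones property, the leading-zero trimming, and the makeNumbers helper are assumptions added here (the struct is restated with the extra field), not part of the original answer:

struct BinaryNumber {
    var string: String   // padded string
    var length: Int      // length once leading zeros are dropped
    var ones: Int        // count of "1" characters
}

func makeNumbers(_ rawStrings: [String], paddedLength: Int) -> [BinaryNumber] {
    return rawStrings.map { raw in
        let significant = raw.drop(while: { $0 == "0" })   // e.g. "00101" -> "101"
        let padded = String(repeating: "0", count: paddedLength - raw.count) + raw
        return BinaryNumber(string: padded,
                            length: significant.count,
                            ones: significant.filter { $0 == "1" }.count)
    }
}

With that in place, something like numbers = makeNumbers(inputStrings, paddedLength: maxLength) (names illustrative) feeds straight into the loops above.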
let number = Int(a, radix: 2)
The radix parameter lets you parse the string as binary instead of decimal.
You can use radix to convert your strings. Once converted, you can do a bitwise OR and then check nonzeroBitCount to count the number of 1's:
let a = Int("10101", radix: 2)!
let b = Int("11110", radix: 2)!
let bitwiseOR = a | b
let nonZero = bitwiseOR.nonzeroBitCount
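For the sample inputs above, a quick print confirms the result (an illustrative check only):

print(String(bitwiseOR, radix: 2), nonZero) // 11111 5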
As I already commented above, "10101" is actually a String, not a binary number, so "10101" | "11110" will not calculate what you actually need.
So what you need to do is convert both values to integers, then use bitwise OR, and then convert the result back into a binary String (the format your data is in: "11111", not 11111).
let a1 = Int("10101", radix: 2)!
let b1 = Int("11110", radix: 2)!
var result = 21 | 30
print(result)
Output: 31
Now convert it back to binary string
let binaryString = String(result, radix: 2)
print(binaryString)
Output: 11111
--: EDIT :--
I'm going to give a basic example of how to calculate the bitwise OR manually, since the question specifically rules out the built-in conversion: the string is too large to be converted to an Int.
Algorithm: 1|0 = 1, 1|1 = 1, 0|0 = 0, 0|1 = 1
So what we do is fetch the characters from each String one by one, perform the | operation on each pair, and append the result to another String.
var str1 = "100101" // 37
var str2 = "10111" // 23
/// Result should be "110111" -> "55"
// #1. Make both string equal
let length1 = str1.characters.count
let length2 = str2.characters.count
if length1 != length2 {
let maxLength = max(length1, length2)
for index in 0..<maxLength {
if str1.characters.count < maxLength {
str1 = "0" + str1
}
if str2.characters.count < maxLength {
str2 = "0" + str2
}
}
}
// #2. Get each index and compare character by character with bitwise OR:
// a) 1 | 0 = 1
// b) 0 | 1 = 1
// c) 1 | 1 = 1
// d) 0 | 0 = 0
let length = max(str1.count, str2.count)
var newStr = ""
for index in 0..<length {
    let charOf1 = Int(String(str1[str1.index(str1.startIndex, offsetBy: index)]))!
    let charOf2 = Int(String(str2[str2.index(str2.startIndex, offsetBy: index)]))!
    let orResult = charOf1 | charOf2
    newStr.append("\(orResult)")
}
print(newStr)
Output: 110111 // 55
I would refer you to Understanding Bitwise Operators for more detail.
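For reference, here is a more compact (hedged) variant of the same character-by-character loop; it assumes str1 and str2 have already been zero-padded to equal length as in step #1, and it avoids re-walking the string with index(_:offsetBy:) on every iteration:

// Assumes str1 and str2 are already zero-padded to the same length.
let orString = String(zip(str1, str2).map { $0.0 == "1" || $0.1 == "1" ? Character("1") : "0" })
print(orString) // 110111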
// Note: this performs binary addition (with carry), not a bitwise OR.
func addBinary(_ a: String, _ b: String) {
    var result = ""
    let arrA = Array(a)
    let arrB = Array(b)
    var lengthA = arrA.count - 1
    var lengthB = arrB.count - 1
    var sum = 0
    while lengthA >= 0 || lengthB >= 0 || sum == 1 {
        sum += (lengthA >= 0) ? Int(String(arrA[lengthA]))! : 0
        sum += (lengthB >= 0) ? Int(String(arrB[lengthB]))! : 0
        result = String((sum % 2)) + result
        sum /= 2
        lengthA -= 1
        lengthB -= 1
    }
    print(result)
}

addBinary("11", "1") // prints "100"

Mathematical integrity of NSDecimalNumber

I'm using numbers divided by 10^30
I may be adding values like 1000000000000000 and 5000000000000000 stored in NSDecimalNumbers.
My concern is that I think I have seen, a few times, incorrect math being done when adding or subtracting these values.
Is that a possibility, or is NSDecimalNumber pretty sound in terms of the integrity of its math?
In answer to your question, the math offered by Decimal/NSDecimalNumber is sound, and the problem probably rests in either:
The calculations might exceed the capacity of these decimal formats (as outlined by rob mayoff). For example, this works because we're within the 38 digit mantissa:
let x = Decimal(sign: .plus, exponent: 60, significand: 1)
let y = Decimal(sign: .plus, exponent: 30, significand: 1)
let z = x + y
1,000,000,000,000,000,000,000,000,000,001,000,000,000,000,000,000,000,000,000,000
But this will not:
let x = Decimal(sign: .plus, exponent: 60, significand: 1)
let y = Decimal(sign: .plus, exponent: 10, significand: 1)
let z = x + y
1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
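One way to observe that loss directly, reusing x and y from the snippet above (just a sketch, not part of the original answer):

print((x + y) - x == y) // false: the 10^10 term was lost to the 38-digit mantissa limit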
Or, it could just be how you are instantiating these decimal values, e.g. supplying a floating point number rather than using the Decimal(sign:exponent:significand:) or NSDecimalNumber(mantissa:exponent:isNegative:) initializers:
For example, this works fine:
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
let x = Decimal(sign: .plus, exponent: 30, significand: 1)
print(formatter.string(for: x)!)
That results in:
1,000,000,000,000,000,000,000,000,000,000
But these won't, because you're supplying a floating-point number, which has lower limits on precision:
let y = Decimal(1.0e30)
print(formatter.string(for: y)!)
let z = Decimal(1_000_000_000_000_000_000_000_000_000_000.0)
print(formatter.string(for: z)!)
These both result in:
1,000,000,000,000,000,409,600,000,000,000
For more information on floating-point arithmetic (and why certain decimal numbers cannot be perfectly captured in floating-point types), see floating-point arithmetic.
In your other question, you ask why the following:
let foo = NSDecimalNumber(value: 334.99999).multiplying(byPowerOf10: 30)
produced:
334999990000000051200000000000000
This is the same underlying issue that I outlined above in point 2. Floating point numbers cannot accurately represent certain decimal values.
Note, your question is the same as the following Decimal rendition:
let adjustment = Decimal(sign: .plus, exponent: 30, significand: 1)
let foo = Decimal(334.99999) * adjustment
This also produces:
334999990000000051200000000000000
But you will get the desired result if you supply either a string or an exponent and mantissa/significand, because these will be accurately represented as a Decimal/NSDecimalNumber:
let bar = Decimal(string: "334.99999")! * adjustment
let baz = Decimal(sign: .plus, exponent: -5, significand: 33499999) * adjustment
Those both produce:
334999990000000000000000000000000
Bottom line, do not supply floating point numbers to Decimal or NSDecimalNumber. Use string representations or use the exponent and mantissa/significand representation and you will not see these strange deviations introduced when using floating point numbers.
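The same advice applies to the NSDecimalNumber form of the original expression; here is a sketch using the string initializer instead of a Double literal:

import Foundation

let adjusted = NSDecimalNumber(string: "334.99999").multiplying(byPowerOf10: 30)
print(adjusted) // 334999990000000000000000000000000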
I'm using numbers divided by 1^30
Good news, then, because 1^30 = 1. Perhaps you meant 10^30?
Anyway, according to the NSDecimalNumber class reference:
An instance can represent any number that can be expressed as mantissa x 10^exponent where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.

Swift rounding to X decimal places issue when .999999

I tried different round-to-N-decimal-places methods and they all share the same problem. When I use a number, let's say 0.99999, and I want to round it to 2 decimal places, my expected result would be 0.99, but instead I get 1.00.
I did try
let divisor = pow(10.0, Double(decimals))
let roundedVal = round(value * divisor) / divisor
Also did try
String(format:"%.2f",decimals)
And
let behavior = NSDecimalNumberHandler(roundingMode: .plain, scale: decimals, raiseOnExactness: false, raiseOnOverflow: false, raiseOnUnderflow: false, raiseOnDivideByZero: true)
NSDecimalNumber(value: value).rounding(accordingToBehavior: behavior)
let roundedValue2 = NSDecimalNumber(value: 0.6849).rounding(accordingToBehavior: behavior)
All methods give me the same issue.
Some ideas?
Thanks for the help!
EDIT:
The idea is that rounding is okay for all cases but not okay for that 0.9999 case. The displayed numbers are always small (ranging from 0.000 to 1) and the number of decimals to show is a parameter, so 0.348 should be 0.35 and not 0.34 (as it would be if truncated).
let amount = 0.99999999999999
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.maximumFractionDigits = 2
formatter.roundingMode = .floor // rounding mode floor is the key here
let formattedAmount = formatter.string(from: amount as NSNumber)!
print(formattedAmount) // 0.99
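If you prefer to stay with NSDecimalNumber, as in the question's own attempt, the same idea works by switching the rounding mode from .plain to .down (a sketch; the value and scale are illustrative):

import Foundation

let value = 0.99999999999999
let behavior = NSDecimalNumberHandler(roundingMode: .down, scale: 2,
                                      raiseOnExactness: false, raiseOnOverflow: false,
                                      raiseOnUnderflow: false, raiseOnDivideByZero: true)
let rounded = NSDecimalNumber(value: value).rounding(accordingToBehavior: behavior)
print(rounded) // 0.99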

Why does Xcode say "Expression was too complex to be solved in reasonable time..." for my Swift code?

I am writing an app, and I have a block of code that reads like this:
let DestViewController: ftrViewController = segue.destinationViewController as! ftrViewController
let weightInt: Int? = Int(weightInKilos.text!)
let dehydrationInt: Int? = Int(percentOfDehydration.text!)
let lossesInt: Int? = Int(ongoingLosses.text!)
let factorFloat: Float? = Float(Factor.text!)
let lrs24Int = (30 * weightInt! + 70) * factorFloat! + weightInt! * dehydrationInt! * 10 + lossesInt!
However, Xcode says that Expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions.
My equation looks right to me, and I do not believe that my equation is too complex, because I had the same error when the problem was simply that I wasn't declaring the integers correctly (the deal with the ?s and the !s).
Does anybody see a problem in my code that is leading to this error, or is the expression truly too hard for the computer to solve in reasonable time? Thanks!
PS- I think the problem might be the float, because before I added the float, it was working fine.
Rewritten Code
let weightInt: Float? = Float(weightInKilos.text!)
let dehydrationInt: Float? = Float(percentOfDehydration.text!)
let lossesInt: Float? = Float(ongoingLosses.text!)
let factorFloat: Float? = Float(Factor.text!)
let step1 = (30 * weightInt! + 70) * factorFloat!
let lrs24Int = step1 + weightInt! * dehydrationInt! * 10 + lossesInt!
It was not only a Float problem. I used Float for everything to avoid type conversions, since all of the values are Float. The main problem was the complexity of the expression.
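A further hedged variant, not from the original answer: unwrapping the optionals once with guard keeps every sub-expression simple and removes the force unwraps. The helper name and parameters below are hypothetical; pass it the text-field strings from the question (weightInKilos.text, percentOfDehydration.text, ongoingLosses.text, Factor.text):

// Sketch only: computeLRS24 is a hypothetical helper, not part of the original answer.
func computeLRS24(weightText: String?, dehydrationText: String?,
                  lossesText: String?, factorText: String?) -> Float? {
    guard let weight = Float(weightText ?? ""),
          let dehydration = Float(dehydrationText ?? ""),
          let losses = Float(lossesText ?? ""),
          let factor = Float(factorText ?? "") else { return nil }

    let maintenance = (30 * weight + 70) * factor   // sub-expression names are illustrative
    let deficit = weight * dehydration * 10
    return maintenance + deficit + losses
}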

How to convert an Int to Hex String in Swift

In Obj-C I used to convert an unsigned integer n to a hex string with
NSString *st = [NSString stringWithFormat:@"%2X", n];
I tried for a long time to translate this into Swift language, but unsuccessfully.
You can now do:
let n = 14
var st = String(format:"%02X", n)
st += " is the hexadecimal representation of \(n)"
print(st)
0E is the hexadecimal representation of 14
Note: The 2 in this example is the field width and represents the minimum length desired. The 0 tells it to pad the result with leading 0's if necessary. (Without the 0, the result would be padded with leading spaces). Of course, if the result is larger than two characters, the field length will not be clipped to a width of 2; it will expand to whatever length is necessary to display the full result.
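A quick illustration of those width and padding rules (the values here are just examples):

import Foundation

print(String(format: "%02X", 14))  // "0E"  (zero-padded to a width of 2)
print(String(format: "%2X", 14))   // " E"  (space-padded to a width of 2)
print(String(format: "%02X", 300)) // "12C" (wider than 2, so printed in full)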
This only works if you have Foundation imported (this includes the import of Cocoa or UIKit). This isn't a problem if you're doing iOS or macOS programming.
Use uppercase X if you want A...F and lowercase x if you want a...f:
String(format: "%x %X", 64206, 64206) // "face FACE"
If you want to print integer values larger than UInt32.max, add ll (el-el, not eleven) to the format string:
let n = UInt64.max
print(String(format: "%llX is hexadecimal for \(n)", n))
FFFFFFFFFFFFFFFF is hexadecimal for 18446744073709551615
Original Answer
You can still use NSString to do this. The format is:
var st = NSString(format:"%2X", n)
This makes st an NSString, so then things like += do not work. If you want to be able to append to the string with += make st into a String like this:
var st = NSString(format:"%2X", n) as String
or
var st = String(NSString(format:"%2X", n))
or
var st: String = NSString(format:"%2X", n)
Then you can do:
let n = 123
var st = NSString(format:"%2X", n) as String
st += " is the hexadecimal representation of \(n)"
// "7B is the hexadecimal representation of 123"
In Swift there is a specific init method on String for exactly this:
let hex = String(0xF, radix: 16, uppercase: false)
println("hex=\(hex)") // Output: f
With Swift 5, according to your needs, you may choose one of the three following methods in order to solve your problem.
#1. Using String's init(_:radix:uppercase:) initializer
Swift String has a init(_:radix:uppercase:) initializer with the following declaration:
init<T>(_ value: T, radix: Int = 10, uppercase: Bool = false) where T : BinaryInteger
Creates a string representing the given value in base 10, or some other specified base.
The Playground code below shows how to create a String instance that represents an integer value in hexadecimal format by using init(_:radix:uppercase:) and without having to import Foundation:
let string1 = String(2, radix: 16)
print(string1) // prints: "2"
let string2 = String(211, radix: 16)
print(string2) // prints: "d3"
let string3 = String(211, radix: 16, uppercase: true)
print(string3) // prints: "D3"
#2. Using String's init(format:_:) initializer
Foundation provides String an init(format:_:) initializer. init(format:_:) has the following declaration:
init(format: String, _ arguments: CVarArg...)
Returns a String object initialized by using a given format string as a template into which the remaining argument values are substituted.
Apple's String Programming Guide gives a list of the format specifiers that are supported by String and NSString. Among those format specifiers, %X has the following description:
Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and uppercase A–F.
The Playground code below shows how to create a String instance that represents an integer value in hexadecimal format with init(format:_:):
import Foundation
let string1 = String(format:"%X", 2)
print(string1) // prints: "2"
let string2 = String(format:"%02X", 1)
print(string2) // prints: "01"
let string3 = String(format:"%02X", 211)
print(string3) // prints: "D3"
let string4 = String(format: "%02X, %02X, %02X", 12, 121, 255)
print(string4) // prints: "0C, 79, FF"
#3. Using String's init(format:arguments:) initializer
Foundation provides String an init(format:arguments:) initializer. init(format:arguments:) has the following declaration:
init(format: String, arguments: [CVarArg])
Returns a String object initialized by using a given format string as a template into which the remaining argument values are substituted according to the user’s default locale.
The Playground code below shows how to create a String instance that represents an integer value in hexadecimal format with init(format:arguments:):
import Foundation
let string1 = String(format:"%X", arguments: [2])
print(string1) // prints: "2"
let string2 = String(format:"%02X", arguments: [1])
print(string2) // prints: "01"
let string3 = String(format:"%02X", arguments: [211])
print(string3) // prints: "D3"
let string4 = String(format: "%02X, %02X, %02X", arguments: [12, 121, 255])
print(string4) // prints: "0C, 79, FF"
Swift 5.2.4
let value = 200
let hexString = String(format: "%02X", value)
The answers above work fine for values in the range of a 32-bit Int, but values over this won't work, as the value will roll over.
You need to use the length modifier for values greater than a 32-bit Int:
%x = Unsigned 32-bit integer (unsigned int)
ll = Length modifiers specifying that a following d, o, u, x, or X conversion specifier applies to a long long or unsigned long long argument.
let hexString = String(format:"%llX", decimalValue)
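For example (the value below is just an illustration, one past UInt32.max):

import Foundation

let value: UInt64 = 4_294_967_296     // UInt32.max + 1
print(String(format: "%llX", value))  // "100000000"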
To use
let string2 = String(format:"%02X", 1)
print(string2) // prints: "01"
In Swift 3, importing Foundation is not required, at least not in a project.
String should have all the same functionality as NSString.
