As stated in Apple Docs:
An arbitrary Unicode scalar, written as \u{n}, where n is a 1–8 digit hexadecimal number with a value equal to a valid Unicode code point
let dollarSign = "\u{24}" // $, Unicode scalar U+0024
My question is: if I have the hexadecimal value, how can I turn it into a string? So if I have the following:
let dollarSignHex = 24
How can I map it to let dollarSignString = ????
24 is a decimal integer constant. If you want the Unicode code point
with the hexadecimal number 24 then you have to start with
let dollarCode: UInt32 = 0x24
or
let dollarCode: UInt32 = 36
Then you can create a string from that integer value with
let dollarSignString = String(UnicodeScalar(dollarCode)!) // $
(The initializer is failable because not every UInt32 value is a valid Unicode scalar, hence the forced unwrap.)
Alternatively, start with a string containing the hexadecimal
representation of the code point, and convert that to a number
and then to a string:
let dollarSignHex = "24"
let dollarCode = UInt32(dollarSignHex, radix: 16)! // 36
let dollarSignString = String(UnicodeScalar(dollarCode)!) // $
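If you need this in more than one place you could wrap it in a small helper. This is just a sketch (the function name is mine); the optional return covers values that are not valid Unicode scalars, such as surrogate code points:

func string(fromCodePoint code: UInt32) -> String? {
    guard let scalar = UnicodeScalar(code) else { return nil } // nil for invalid scalars
    return String(scalar)
}

string(fromCodePoint: 0x24) // Optional("$")
string(fromCodePoint: 36)   // Optional("$")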
What I'm looking for:
Given a Double (doubleNumber) and an Int (n), I wish to iterate through the 1st decimal, 2nd decimal, 3rd decimal, 4th decimal... up to the nth decimal.
My first approach was converting the Double to a String so that I could iterate over it like an array, but the problem is that when I convert to a String I lose many decimal digits:
let doubleNumber = 1.00/98 //0.010204081632653061224489795918367346938775510204081632653...
var stringFromDouble = String(doubleNumber) //0.010204081632653
stringFromDouble.removeFirst() //.010204081632653
stringFromDouble.removeFirst() //010204081632653
for letter in stringFromDouble {
    // cycle to iterate over the decimals
}
If the intention is to get many decimal digits of 1.0/98.0 then you must not store that number in a Double in the first place, because that has a precision of approximately 16 decimal digits only. You could use Decimal which has a precision of 38 decimal digits.
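For illustration, here is a minimal sketch of the precision difference (the exact digits printed may vary slightly between Swift versions):

import Foundation

let asDouble = 1.0 / 98.0
let asDecimal = Decimal(1) / Decimal(98)
print(asDouble)  // roughly 16 significant digits: 0.010204081632653061
print(asDecimal) // roughly 38 significant digits before the precision runs out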
But for more decimal digits you'll have to do “rational arithmetic,” i.e. work with numerator and denominator of the fraction as integers.
Here is how you can print arbitrarily many decimal digits of a rational number. For simplicity I have assumed that the number is positive and less than one.
func printDecimalDigits(of numerator: Int, dividedBy denominator: Int, count: Int) {
    var numerator = numerator
    for _ in 1...count {
        // Multiply by 10 to get the next digit:
        numerator *= 10
        // Print the integer part of `numerator/denominator`:
        print(numerator / denominator, terminator: "")
        // Reduce `numerator/denominator` to its fractional part:
        numerator %= denominator
    }
    print()
}
Example:
printDecimalDigits(of: 1, dividedBy: 98, count: 100)
// 0102040816326530612244897959183673469387755102040816326530612244897959183673469387755102040816326530
Or as a function which returns the digits as a (lazily evaluated) sequence:
func decimalDigits(of numerator: Int, dividedBy denominator: Int) -> AnySequence<Int> {
    // `sequence(state:next:)` passes the state (the running numerator) as `inout`,
    // so each call produces the next digit and keeps the remainder for the next one.
    return AnySequence(sequence(state: numerator) { num -> Int in
        num *= 10
        let d = num / denominator
        num %= denominator
        return d
    })
}
Example:
let first1000Digits = decimalDigits(of: 1, dividedBy: 98).prefix(1000)
for d in first1000Digits { print(d) }
I have this code:
var weightSum: Float = 3.14159
let weightPerPortionGrams: Decimal = 0.999999
weightSum = weightSum + (weightPerPortionGrams)
The numbers are examples.
I get an error:
Binary operator '+=' cannot be applied to operands of type 'Float' and 'Decimal'.
Does anyone know how to fix it?
To convert a Decimal to a float, you can either do this:
weightSum += Float(truncating: weightPerPortionGrams as NSNumber)
or this:
weightSum += (weightPerPortionGrams as NSNumber).floatValue
Swift needs both variables to be of the same type to make use of the "+" operator, so you would need to either convert your Decimal to type Float or the other way around before summing them:
let weightPerPortionGramsFloat = (weightPerPortionGrams as NSNumber).floatValue
or
let weightSumDecimal = (weightSum as NSNumber).decimalValue
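For reference, a minimal compilable version of the two options, using the names from the question (both are shown in one snippet purely for illustration; which direction you pick depends on whether you can change the type of your running total):

import Foundation

var weightSum: Float = 3.14159
let weightPerPortionGrams: Decimal = 0.999999

// Option 1: convert the Decimal down to Float and keep summing in Float.
let weightPerPortionGramsFloat = (weightPerPortionGrams as NSNumber).floatValue
weightSum += weightPerPortionGramsFloat

// Option 2: convert the Float up to Decimal and keep summing in Decimal.
var weightSumDecimal = (weightSum as NSNumber).decimalValue
weightSumDecimal += weightPerPortionGrams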
Greetings!
func + (left: Float, right: Decimal) -> Float {
    return left + Float(right.description)!
}
var weightSum: Float = 3.14159
let weightPerPortionGrams: Decimal = 0.999999
weightSum = weightSum + weightPerPortionGrams
print(weightSum)
// prints 4.141589
Hope it helps!
This is because the variables weightSum and weightPerPortionGrams have different types. Luckily you can convert these variables in Swift.
So, to make this work, you should convert weightPerPortionGrams to the type of weightSum, so that both variables have the same type:
weightSum = weightSum + NSDecimalNumber(decimal: weightPerPortionGrams).floatValue
Note that converting between Decimal and binary floating point is not exact, so small rounding differences can appear.
You can also do an extra cast, to make sure that we are using the methods of NSDecimalNumber and not of NSNumber:
let double = NSDecimalNumber(decimal: weightPerPortionGrams).doubleValue
weightSum = weightSum + Float(double)
I am having a hard time getting the correct value that I need.
I get my characteristic values from:
func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor ...
I can read and print off the values with:
let values = characteristic.value
for val in values! {
    print("Value", val)
}
This gets me:
"Value 0" // probe state not important
"Value 46" // temp
"Value 2" // see below
The problem is that the temp is not 46.
Below is a snippet of instructions on how I need to convert the byte to get the actual temp.
The actual temp was around 558 ºF.
Here are a part of the instructions:
Description: temperature data that is valid only if the temperature stat is normal
byte[1] = (unsigned char)temp;
byte[2] = (unsigned char)(temp>>8);
byte[3] = (unsigned char)(temp>>16);
byte[4] = (unsigned char)(temp>>24);
I can't seem to get the correct temp. Please let me know what I am doing wrong.
According to the description, value[1] ... value[4] are the least significant to most significant bytes of the (32-bit integer) temperature, so this is how you would recreate
that value from the bytes:
if let value = characteristic.value, value.count >= 5 {
    let tmp = UInt32(value[1]) + UInt32(value[2]) << 8 + UInt32(value[3]) << 16 + UInt32(value[4]) << 24
    let temperature = Int32(bitPattern: tmp)
}
The bit-fiddling is done in unsigned integer arithmetic to avoid
an overflow. Assuming that the temperature is a signed value,
this value is then converted to a signed integer with the same
bit representation.
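To illustrate that last step with a toy value of my own (not from the device):

let raw: UInt32 = 0xFFFF_FFFF       // all 32 bits set
let signed = Int32(bitPattern: raw) // -1: same bits, reinterpreted as signed
print(signed)                       // -1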
The instructions tell you the answer. You are getting 46 in byte 1 and 2 in byte 2. The instructions say to leave byte 1 alone, but byte 2 was shifted down as temp>>8, which means we must multiply it by 256 (because 2^8 is 256) to put it back. Well, what is
46 + 256 × 2?
It is 558, just the result we're looking for.
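The same arithmetic in Swift, using the two bytes from the example (note that << binds tighter than +):

let byte1 = 46                 // low byte
let byte2 = 2                  // next byte, weighted by 2^8 = 256
let temp = byte1 + byte2 << 8  // 46 + 2 * 256 = 558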
I want to take input from the user in binary. What I want is something like:
10101
11110
Then I need to perform a bitwise OR on these. I know how to take input and how to perform a bitwise OR; I only want to know how to do the conversion, because what I am currently using is not giving the right result. What I tried is below:
let aBits: Int16 = Int16(a)! //a is String "10101"
let bBits: Int16 = Int16(b)! //b is String "11110"
let combinedbits = aBits | bBits
Edit: I don't need decimal to binary conversion with radix, as my string already has only 0s and 1s.
The string can have up to 500 characters, like:
1001101111101011011100101100100110111011111011000100111100111110111101011011011100111001100011111010
This is beyond the Int limit; how do I handle that in Swift?
Edit 2: As per vacawama's answer, the code below works great:
let maxAB = max(a.count, b.count)
let paddedA = String(repeating: "0", count: maxAB - a.count) + a
let paddedB = String(repeating: "0", count: maxAB - b.count) + b
let Str = String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
I can have an array of up to 500 strings, and each string can have up to 500 characters. Then I have to get all possible pairs, perform a bitwise OR, and count the maximum number of 1s. Any idea how to make the above solution more efficient? Thank you.
Since you need arbitrarily long binary numbers, do everything with strings.
This function first pads the two inputs to the same length, and then uses zip to pair the digits and map to compute the OR for each pair of characters. The resulting array of characters is converted back into a String with String().
func binaryOR(_ a: String, _ b: String) -> String {
    let maxAB = max(a.count, b.count)
    let paddedA = String(repeating: "0", count: maxAB - a.count) + a
    let paddedB = String(repeating: "0", count: maxAB - b.count) + b
    return String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
}
print(binaryOR("11", "1100")) // "1111"
print(binaryOR("1000", "0001")) // "1001"
I can have an array of up to 500 strings and each string can have up to 500 characters. Then I have to get all possible pairs and perform bitwise OR and count the maximum number of 1s. Any idea to make the above solution more efficient?
You will have to do 500 * 499 / 2 = 124,750 comparisons, so it is important to avoid unnecessary and/or repeated work.
I would recommend:
Do an initial pass over your strings to find the length of the longest one, then pad all of your strings to this length. I would keep track of the original length of each string in a tiny struct, and build the padded array as sketched below:
struct BinaryNumber {
    var string: String  // padded string
    var length: Int     // original length before padding
}
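A sketch of that first pass, assuming the raw inputs arrive in an array I'm calling rawStrings:

let rawStrings = ["11", "1100", "0001"]  // stand-in input

let maxLength = rawStrings.map { $0.count }.max() ?? 0

let numbers: [BinaryNumber] = rawStrings.map {
    BinaryNumber(string: String(repeating: "0", count: maxLength - $0.count) + $0,
                 length: $0.count)
}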
Modify the binaryOR function to take BinaryNumbers and return Int, the count of "1"s in the OR.
func binaryORcountOnes(_ a: BinaryNumber, _ b: BinaryNumber) -> Int {
    let maxAB = max(a.length, b.length)
    return zip(a.string.suffix(maxAB), b.string.suffix(maxAB)).reduce(0) { total, pair in
        return total + (pair == ("0", "0") ? 0 : 1)
    }
}
Note: The use of suffix helps the efficiency by only checking the digits that matter. If the original strings had length 2 and 3, then only the last 3 digits will be OR-ed even if they're padded to length 500.
Loop and compare all pairs of BinaryNumbers to find largest count of ones:
var numbers: [BinaryNumber]  // This array was created in step 1

var maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
    for j in (i + 1) ..< numbers.count {
        let ones = binaryORcountOnes(numbers[i], numbers[j])
        if ones > maxOnes {
            maxOnes = ones
        }
    }
}
print("maxOnes = \(maxOnes)")
Additional idea for speedup
OR can't create more ones than were in the original two numbers, and the number of ones can't exceed the larger of the two original lengths. So if you count the ones in each number while padding and store that count in your struct in a var ones: Int property, you can use it to decide whether it is even worth calling binaryORcountOnes:
maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
    for j in (i + 1) ..< numbers.count {
        if maxOnes < min(numbers[i].ones + numbers[j].ones, max(numbers[i].length, numbers[j].length)) {
            let ones = binaryORcountOnes(numbers[i], numbers[j])
            if ones > maxOnes {
                maxOnes = ones
            }
        }
    }
}
By the way, the length of the original string should really just be the minimum length that includes the highest order 1. So if the original string was "00101", then the length should be 3 because that is all you need to store "101".
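For the speedup, the per-string bookkeeping could look like this; the ones count and the effective length would be stored in BinaryNumber alongside the padded string (this is my sketch, not code from the answer above):

let s = "00101"                                // example input
let trimmed = s.drop(while: { $0 == "0" })     // "101": leading zeros carry no ones
let effectiveLength = trimmed.count            // 3, the length up to the highest-order 1
let ones = trimmed.filter { $0 == "1" }.count  // 2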
let number = Int(a, radix: 2)
The radix parameter lets the string be interpreted as binary instead of decimal.
You can use radix for converting your string. Once converted, you can do a bitwise OR and then check the nonzeroBitCount to count the number of 1's
let a = Int("10101", radix: 2)!
let b = Int("11110", radix: 2)!
let bitwiseOR = a | b
let nonZero = bitwiseOR.nonzeroBitCount
As I already commented above, "10101" is actually a String, not a binary number, so "10101" | "11110" will not calculate what you actually need.
So what you need to do is convert both values to integers, apply the bitwise OR, and then convert the result back to a binary string (since you have the data in the form "11111", not 11111):
let a1 = Int("10101", radix: 2)!
let b1 = Int("11110", radix: 2)!
let result = a1 | b1 // 21 | 30
print(result)
Output: 31
Now convert it back to binary string
let binaryString = String(result, radix: 2)
print(binaryString)
Output: 11111
--: EDIT :--
I'm going to give a basic example of how to calculate the bitwise OR by hand, since the question specifically asks not to use the built-in functions because the string is too large to be converted to an Int.
Algorithm: 1|0 = 1, 1|1 = 1, 0|0 = 0, 0|1 = 1
So, what we do is fetch the characters of both strings one by one, perform the | operation on each pair, and append the result to another string.
var str1 = "100101" // 37
var str2 = "10111" // 23
/// Result should be "110111" -> "55"
// #1. Make both strings the same length by prepending zeros to the shorter one
let length1 = str1.count
let length2 = str2.count
if length1 != length2 {
    let maxLength = max(length1, length2)
    for _ in 0..<maxLength {
        if str1.count < maxLength {
            str1 = "0" + str1
        }
        if str2.count < maxLength {
            str2 = "0" + str2
        }
    }
}
// #2. Walk the strings index by index and apply bitwise OR:
// a) 1 | 0 = 1
// b) 0 | 1 = 1
// c) 1 | 1 = 1
// d) 0 | 0 = 0
let length = max(str1.count, str2.count)
var newStr = ""
for index in 0..<length {
    let charOf1 = Int(String(str1[str1.index(str1.startIndex, offsetBy: index)]))!
    let charOf2 = Int(String(str2[str2.index(str2.startIndex, offsetBy: index)]))!
    let orResult = charOf1 | charOf2
    newStr.append("\(orResult)")
}
print(newStr)
Output: 110111 // 55
I would refer you to Understanding Bitwise Operators for more detail.
func addBinary(_ a: String, _ b: String) {
    var result = ""
    let arrA = Array(a)
    let arrB = Array(b)
    var lengthA = arrA.count - 1
    var lengthB = arrB.count - 1
    var sum = 0
    while lengthA >= 0 || lengthB >= 0 || sum == 1 {
        sum += (lengthA >= 0) ? Int(String(arrA[lengthA]))! : 0
        sum += (lengthB >= 0) ? Int(String(arrB[lengthB]))! : 0
        result = String((sum % 2)) + result
        sum /= 2
        lengthA -= 1
        lengthB -= 1
    }
    print(result)
}
addBinary("11", "1")
In Obj-C I used to convert an unsigned integer n to a hex string with
NSString *st = [NSString stringWithFormat:@"%2X", n];
I tried for a long time to translate this into Swift language, but unsuccessfully.
You can now do:
let n = 14
var st = String(format:"%02X", n)
st += " is the hexadecimal representation of \(n)"
print(st)
0E is the hexadecimal representation of 14
Note: The 2 in this example is the field width and represents the minimum length desired. The 0 tells it to pad the result with leading 0's if necessary. (Without the 0, the result would be padded with leading spaces). Of course, if the result is larger than two characters, the field length will not be clipped to a width of 2; it will expand to whatever length is necessary to display the full result.
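A couple of quick checks of that behavior (arbitrary values):

import Foundation

String(format: "%2X", 10)      // " A"   width 2, padded with a space
String(format: "%02X", 10)     // "0A"   width 2, padded with a zero
String(format: "%02X", 0x1234) // "1234" wider than 2, so nothing is clipped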
This only works if you have Foundation imported (this includes the import of Cocoa or UIKit). This isn't a problem if you're doing iOS or macOS programming.
Use uppercase X if you want A...F and lowercase x if you want a...f:
String(format: "%x %X", 64206, 64206) // "face FACE"
If you want to print integer values larger than UInt32.max, add ll (el-el, not eleven) to the format string:
let n = UInt64.max
print(String(format: "%llX is hexadecimal for \(n)", n))
FFFFFFFFFFFFFFFF is hexadecimal for 18446744073709551615
Original Answer
You can still use NSString to do this. The format is:
var st = NSString(format:"%2X", n)
This makes st an NSString, so then things like += do not work. If you want to be able to append to the string with += make st into a String like this:
var st = NSString(format:"%2X", n) as String
or
var st = String(NSString(format:"%2X", n))
or
var st: String = NSString(format:"%2X", n)
Then you can do:
let n = 123
var st = NSString(format:"%2X", n) as String
st += " is the hexadecimal representation of \(n)"
// "7B is the hexadecimal representation of 123"
In Swift there is a specific init method on String for exactly this:
let hex = String(0xF, radix: 16, uppercase: false)
print("hex=\(hex)") // Output: f
With Swift 5, according to your needs, you may choose one of the three following methods in order to solve your problem.
#1. Using String's init(_:radix:uppercase:) initializer
Swift String has a init(_:radix:uppercase:) initializer with the following declaration:
init<T>(_ value: T, radix: Int = 10, uppercase: Bool = false) where T : BinaryInteger
Creates a string representing the given value in base 10, or some other specified base.
The Playground code below shows how to create a String instance that represents an integer value in hexadecimal format by using init(_:radix:uppercase:) and without having to import Foundation:
let string1 = String(2, radix: 16)
print(string1) // prints: "2"
let string2 = String(211, radix: 16)
print(string2) // prints: "d3"
let string3 = String(211, radix: 16, uppercase: true)
print(string3) // prints: "D3"
#2. Using String's init(format:_:) initializer
Foundation provides String a init(format:_:) initializer. init(format:_:) has the following declaration:
init(format: String, _ arguments: CVarArg...)
Returns a String object initialized by using a given format string as a template into which the remaining argument values are substituted.
Apple's String Programming Guide gives a list of the format specifiers that are supported by String and NSString. Among those format specifiers, %X has the following description:
Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and uppercase A–F.
The Playground code below shows how to create a String instance that represents an integer value in hexadecimal format with init(format:_:):
import Foundation
let string1 = String(format:"%X", 2)
print(string1) // prints: "2"
let string2 = String(format:"%02X", 1)
print(string2) // prints: "01"
let string3 = String(format:"%02X", 211)
print(string3) // prints: "D3"
let string4 = String(format: "%02X, %02X, %02X", 12, 121, 255)
print(string4) // prints: "0C, 79, FF"
#3. Using String's init(format:arguments:) initializer
Foundation provides String a init(format:arguments:) initializer. init(format:arguments:) has the following declaration:
init(format: String, arguments: [CVarArg])
Returns a String object initialized by using a given format string as a template into which the remaining argument values are substituted according to the user’s default locale.
The Playground code below shows how to create a String instance that represents an integer value in hexadecimal format with init(format:arguments:):
import Foundation
let string1 = String(format:"%X", arguments: [2])
print(string1) // prints: "2"
let string2 = String(format:"%02X", arguments: [1])
print(string2) // prints: "01"
let string3 = String(format:"%02X", arguments: [211])
print(string3) // prints: "D3"
let string4 = String(format: "%02X, %02X, %02X", arguments: [12, 121, 255])
print(string4) // prints: "0C, 79, FF"
Swift 5.2.4
let value = 200
let hexString = String(format: "%02X", value)
The answers above work fine for values in the range of a 32-bit Int, but larger values won't work, as the value will roll over.
You need to use the length modifier for values greater than a 32-bit Int:
%x = Unsigned 32-bit integer (unsigned int)
ll = Length modifiers specifying that a following d, o, u, x, or X conversion specifier applies to a long long or unsigned long long argument.
let hexString = String(format:"%llX", decimalValue)
To use it:
let string2 = String(format:"%02X", 1)
print(string2) // prints: "01"
In Swift 3, an explicit import Foundation is not required, at least not in a project where UIKit or Cocoa is already imported. String has all the same formatting functionality as NSString.