How to convert IEEE-11073 16-bit SFLOAT to mantissa and exp in Swift? - ios

I want to split the two bytes into a mantissa and an exponent, and then combine them (mantissa * 10^exponent) to get the value as an Int.
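Not part of the question, but here is a minimal Swift sketch of that split, assuming the usual IEEE-11073 16-bit SFLOAT layout (a 4-bit signed base-10 exponent in the high nibble and a 12-bit signed mantissa in the low 12 bits); the function name and sample values are made up for illustration:

import Foundation

func sfloatValue(_ raw: UInt16) -> Double {
    var exponent = Int32(raw >> 12)      // top 4 bits
    var mantissa = Int32(raw & 0x0FFF)   // bottom 12 bits

    // Both fields are two's complement, so sign-extend them.
    if exponent >= 0x8 { exponent -= 0x10 }
    if mantissa >= 0x800 { mantissa -= 0x1000 }

    // The SFLOAT value is mantissa * 10^exponent.
    return Double(mantissa) * pow(10.0, Double(exponent))
}

print(sfloatValue(0x0072))   // 114.0 (mantissa 114, exponent 0)
print(sfloatValue(0xF072))   // 11.4  (mantissa 114, exponent -1)

If you only need an Int and you know the exponent is non-negative, you can round or truncate the resulting Double.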

Related

Foundation: Why do Int64.max and Double(Int64.max) print two entirely different values in the Swift iOS SDK?

Here is my Swift Code
print("\(Int64.max)")
print("\(Double(Int64.max))")
It produces the following output:
9223372036854775807
9.223372036854776e+18
Why are the two values entirely different?
FYI: 9.223372036854776e+18 - 9223372036854775807 = 193
The Double value you are seeing in the output is only an approximation to some number of significant figures. We can see more significant figures by using String(format:):
print(String(format: "%.1f", Double(Int64.max)))
This prints:
9223372036854775808.0
So the difference is not as big as you claimed (193); it's just a difference of 1.
Why is there a difference?
Double stores values using a floating point representation. It can represent a wide range of numbers, but not every number in that range. Double uses 53 bits of mantissa, 1 sign bit, and 11 bits to store the exponent. The mantissa represents the significant digits of the number, and the exponent tells you where to put the decimal point. Everything on one side of the decimal point represents positive powers of 2 and everything on the other side represents negative powers of 2. For example:
0.1010000   0010
mantissa    exponent
The exponent says to move the decimal point to the right 2 times, so the mantissa becomes 010.10000. The 1 on the left represents 2, and the 1 on the right represents a half (2^-1), so this floating point number represents the number 2.5.
To represent Int64.max (2^63-1), you need 63 bits of mantissa to all be 1s and the value in the exponent to be 63. But Double does not have that many bits of mantissa! So it can only approximate. Int64.max + 1 is actually representable by a Double, because it is equal to 2^63. You just need one 1 followed by 52 0s in the mantissa and the exponent can store 64. And that's what Double did.
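As a quick check (my own addition, not part of the original answer), Double(sign:exponent:significand:) can build 2^63 exactly, so you can confirm in Swift that Double(Int64.max) rounds to it:

import Foundation

let converted = Double(Int64.max)                                // nearest representable Double
let twoTo63 = Double(sign: .plus, exponent: 63, significand: 1)  // exactly 2^63
print(converted == twoTo63)               // true
print(String(format: "%.1f", converted))  // 9223372036854775808.0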

Why is the accepted range for a signed 8-bit integer from -128 to 127? Why not have the range from -127 to 128?

For a signed 8-bit integer, why is the accepted range from -128 to 127? Why not have the range from -127 to 128?
Is it a convention?
Because the math plays out and so does the bit-string pattern: in two's complement, zero takes up one of the 128 bit patterns whose sign bit is clear, leaving 127 positive values, while the pattern with only the sign bit set (1000 0000) has no positive counterpart and represents -128.
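A small Swift illustration of this (my addition; it assumes Int8's two's-complement representation, which is what Swift uses):

print(Int8.min, Int8.max)               // -128 127
print(Int8(bitPattern: 0b0111_1111))    // 127: the largest pattern with the sign bit clear
print(Int8(bitPattern: 0b1000_0000))    // -128: the sign-bit-only pattern, which has no positive counterpart
// There is no +128: negating -128 would overflow an 8-bit two's-complement integer.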

Why does Lua display 4.0 instead of 4 when dividing 8 by 2?

If I run print(8 / 2), the Lua demo outputs 4.0. If I then run print(math.floor(8 / 2)), it prints 4. Why does Lua print the .0 in the first example? All of 8, 2, and 4 should be quite easy to represent accurately in binary, so surely there shouldn't be any rounding issues?
Lua 5.3 distinguishes floats and integers: 5 and 5.0 are of different numeric types.
tostring for a float always includes a decimal point, and tostring for an integer never does.
The / operator always performs float division; the // operator performs floor division, which yields an integer when both operands are integers.
https://www.lua.org/manual/5.3/manual.html#3.4.1
Exponentiation and float division (/) always convert their operands to floats and the result is always a float. Exponentiation uses the ISO C function pow, so that it works for non-integer exponents too.
Floor division (//) is a division that rounds the quotient towards minus infinity, that is, the floor of the division of its operands.

Why float value is rounded in playground but not in project in Swift?

I'm using a Float value in my project. When I access it in the Xcode project, it expands to extra digits out to the billionths place, but in a playground it works perfectly.
In xcodeproj:
let sampleFloat: Float = 0.025
print(sampleFloat) // It prints 0.0250000004
In Playground:
let sampleFloat: Float = 0.025
print(sampleFloat) // It prints 0.025
Any clue what's happening here? How can I avoid the expansion in the Xcode project?
Lots of comments, but nobody's posted all the info as an answer yet.
The answer is that internally, floating point numbers are represented with binary powers of 2.
In base 10, the tenths digit represents how many 1/10ths are in the value. The hundredths digit represents how many 1/100ths are in the value, the thousandths digit represents how many 1/1000ths are in the value, and so on. In base 10, you can't represent 1/3 exactly. That is 0.33333333333333333...
In binary floating point, the first fractional binary digit represents how many 1/2s are in the value. The second digit represents how many 1/4ths are in the value, the next digit represents how many 1/8ths are in the value, and so on. There are some (lots of) decimal values that can't be represented exactly in binary floating point. The value 0.1 (1/10) is one such value. It will be approximated by something like 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + 1/8192.
The value 0.025 is another value that can't be represented exactly in binary floating point.
There is an alternate number format, NSDecimalNumber (Decimal in Swift 3), that uses decimal digits to represent numbers, so it CAN express any decimal value exactly. (Note that it still can't express a fraction like 1/3 exactly.)
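As a short illustration (my addition, not from the original answer, assuming Foundation is imported), you can print the Float's binary approximation and the exact Decimal side by side:

import Foundation

let sampleFloat: Float = 0.025
print(String(format: "%.10f", sampleFloat))   // shows the binary approximation, e.g. 0.0250000004

let sampleDecimal = Decimal(string: "0.025")! // stored in decimal digits
print(sampleDecimal)                          // 0.025 exactly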

RGBA decimal notation to arithmetic notation

I have to customize an iOS app and the guideline says:
Please don’t use RGBA values in 0 to 255 decimal notation, but use 0.0 to 1.0 arithmetic notation instead!
For example, the default app color #70C7C6 in the guideline is converted to (0.298, 0.792, 0.784, 1.000).
How can I convert other colors? I never knew this arithmetic notation before.
Convert the hex string values into integers, then divide each value by 255 to get the arithmetic notation.
For example "C7" -> 199 -> 199/255 ≈ 0.78.
The last value is opacity, which in your case sounds like it would always be 1.
A color component is a number over a specified range. When working with hex or integer values you (usually) have a number in 0-255 (0x00-0xFF in hexadecimal), but you can express the same value by normalizing it to the range 0.0-1.0: divide each component by the maximum allowed value. For example, you have 0xC7 in hex, which is 199 in decimal; divide it by 255.0f and you obtain 0.780f.
In practice UIColor already provides methods to obtain the normalized values; you just need to convert the number from hex notation, which can be done by hand or by using a simple library:
// colorWithCSS: comes from the hex-color helper library mentioned above, not from UIKit itself
UIColor *color = [UIColor colorWithCSS:@"70c7c6"];
CGFloat r, g, b, a;
[color getRed:&r green:&g blue:&b alpha:&a];
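If you are working in Swift rather than Objective-C, here is a minimal sketch of the same division-by-255 conversion; the helper name is mine and not from the answer:

func rgbaComponents(fromHex hex: String) -> (r: Double, g: Double, b: Double, a: Double)? {
    guard hex.count == 6, let value = UInt32(hex, radix: 16) else { return nil }
    return (r: Double((value >> 16) & 0xFF) / 255.0,   // red byte
            g: Double((value >> 8) & 0xFF) / 255.0,    // green byte
            b: Double(value & 0xFF) / 255.0,           // blue byte
            a: 1.0)                                    // opacity, always 1 here
}

print(rgbaComponents(fromHex: "70C7C6")!)  // ≈ (r: 0.439, g: 0.780, b: 0.776, a: 1.0)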
