Why is the accepted range for a signed 8-bit integer from -128 to 127? Why not from -127 to 128?

For a signed 8-bit integer, why is the accepted range from -128 to 127? Why not have the range from -127 to 128?
Is it a convention?
Because the math plays out that way, and so does the bit-string pattern. An 8-bit value has 256 distinct bit patterns; one of them must represent zero, leaving 255 patterns to split between positive and negative values. Two's complement defines negation as "flip all bits, then add 1", which makes the pattern 1000 0000 its own negation; that pattern is assigned to -128, giving 128 negative values (-128 to -1) and 127 positive ones (1 to 127).
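A quick way to see this (a sketch in Python, using struct to reinterpret raw bytes):
import struct

# Interpret each of the 256 possible byte patterns as a signed 8-bit integer
values = [struct.unpack('b', bytes([p]))[0] for p in range(256)]
print(min(values), max(values))  # -128 127

# The pattern 1000 0000 decodes to -128: two's-complement negation
# (flip the bits, add 1) maps it back to itself, so +128 has no encoding
print(struct.unpack('b', bytes([0b10000000]))[0])  # -128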

Related

How are floating-point pixel values converted to integer values?

How do image libraries (such as PIL, OpenCV, etc.) convert floating-point values to integer pixel values?
For example:
import numpy as np
from PIL import Image

# Creates a random image and saves it to a file
def get_random_img(m=0, s=1, fname='temp.png'):
    im = m + s * np.random.randn(60, 60, 3)  # e.g. min: -3.8947058634971179, max: 3.6822041760496904
    print(im[0, 0])                           # e.g. array([ 0.36234732, 0.96987366, 0.08343])
    imp = Image.fromarray(im, 'RGB')          # (*)
    print(np.array(imp)[0, 0])                # [140, 74, 217]
    imp.save(fname)
    return im, imp
For the above method, an example is provided in the comments (randomly produced). My question is: how does (*) convert an ndarray (whose values can range from minus infinity to plus infinity) to pixel values between 0 and 255?
I tried to investigate the PIL.Image.fromarray method and eventually ended up at line #798, d.decode(data), within the PIL.Image.Image().frombytes method. I could not find the implementation of the decode method, so I am unable to tell what computation goes on behind the conversion.
My initial thought was that maybe the method takes the minimum value (mapping it to 0) and the maximum value (mapping it to 255) from the array and scales all the other values in between accordingly. But upon investigation, I found out that's not what is happening. Moreover, how does it handle the case where the values of the array range between 0 and 1, or any other range of values?
Some libraries assume that floating-point pixel values are between 0 and 1, and will linearly map that range to 0 and 255 when casting to 8-bit unsigned integer. Some others will find the minimum and maximum values and map those to 0 and 255. You should always explicitly do this conversion if you want to be sure of what happened to your data.
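As a minimal sketch of doing that conversion explicitly (assuming a NumPy float array; the helper name to_uint8 is mine, not a library function):
import numpy as np

def to_uint8(im, assume_unit_range=True):
    # Explicitly map a float image to uint8 so no library guesses for you
    im = np.asarray(im, dtype=np.float64)
    if assume_unit_range:
        # Treat values as lying in [0, 1]: clip strays, scale to [0, 255]
        im = np.clip(im, 0.0, 1.0) * 255.0
    else:
        # Min-max normalize whatever range the data actually spans
        im = (im - im.min()) / (im.max() - im.min()) * 255.0
    return np.round(im).astype(np.uint8)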
In general, a pixel does not need to be 8-bit unsigned integer. A pixel can have any numerical type. Usually a pixel intensity represents an amount of light, or a density of some sort, but this is not always the case. Any physical quantity can be sampled in 2 or more dimensions. The range of meaningful values thus depends on what is imaged. Negative values are often also meaningful.
Many cameras have 8-bit precision when converting light intensity to a digital number. Likewise, displays typically have an 8-bit intensity range. This is the reason many image file formats store only 8-bit unsigned integer data. However, some cameras have 12 bits or more, and some processes derive pixel data at a higher precision that one does not want to quantize. Therefore formats such as TIFF and ICS will allow you to save images in just about any numeric format you can think of.
I'm afraid it has done nothing anywhere near as clever as you hoped! It has merely interpreted the first byte of the first float as a uint8, then the second byte as another uint8...
import numpy as np
from PIL import Image
# Generate repeatable random data, so other folks get the same results
np.random.seed(42)
# Make a single RGB pixel
im = np.random.randn(1, 1, 3)
# Print the floating point values - not that we are interested in them
print(im)
# OUTPUT: [[[ 0.49671415 -0.1382643 0.64768854]]]
# Save that pixel to a file so we can dump it
im.tofile('array.bin')
# Now make a PIL Image from it and print the uint8 RGB values
imp = Image.fromarray(im, 'RGB')
print(imp.getpixel((0,0)))
# OUTPUT: (124, 48, 169)
So, PIL has interpreted our data as RGB=124/48/169
Now look at the hex we dumped. It is 24 bytes long, i.e. 3 float64 (8-byte) values, one for red, one for green and one for blue for the 1 pixel in our image:
xxd array.bin
Output
00000000: 7c30 a928 2aca df3f 2a05 de05 a5b2 c1bf  |0.(*..?*.......
00000010: 685e 2450 ddb9 e43f                      h^$P...?
And the first byte (7c) has become 124, the second byte (30) has become 48 and the third byte (a9) has become 169.
TLDR; PIL has merely taken the first byte of the first float as the Red uint8 channel of the first pixel, then the second byte of the first float as the Green uint8 channel of the first pixel and the third byte of the first float as the Blue uint8 channel of the first pixel.
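You can verify the byte reinterpretation yourself in NumPy, using the same seed as above:
import numpy as np

np.random.seed(42)
im = np.random.randn(1, 1, 3)
# View the float64 buffer as raw bytes: the first three bytes are exactly
# the R, G, B values PIL reported for the first pixel
raw = np.frombuffer(im.tobytes(), dtype=np.uint8)
print(raw[:3])  # [124  48 169]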

Foundation: Why do Int64.max and Double(Int64.max) print two entirely different values in the Swift iOS SDK?

Here is my Swift Code
print("\(Int64.max)")
print("\(Double(Int64.max))")
It produces the following output:
9223372036854775807
9.223372036854776e+18
Why are the two values entirely different?
FYI: 9.223372036854776e+18 - 9223372036854775807 = 193.
The value of the Double you are seeing in the output is only an approximation to some number of significant figures. We can see more significant figures by using String(format:):
print(String(format: "%.1f", Double(Int64.max)))
This prints:
9223372036854775808.0
So actually, the difference is not as big as you claimed (193). It's just a difference of 1.
Why is there a difference?
Double stores its value using a floating-point representation. It can represent a wide range of numbers, but not every number in that range. Double uses 53 bits of mantissa, 1 sign bit, and 11 bits to store the exponent. The mantissa holds the significant digits of the number, and the exponent tells you where to put the binary point. Everything on one side of the binary point represents non-negative powers of 2, and everything on the other side represents negative powers of 2. For example:
0.1010000   0010
 mantissa   exponent
The exponent (0010 = 2) says to move the binary point to the right 2 places, so the mantissa becomes 010.10000. The 1 on the left of the point represents 2 (2^1), and the 1 on the right represents a half (2^-1), so this floating-point number represents 2.5.
To represent Int64.max (2^63-1), you need 63 bits of mantissa to all be 1s and the value in the exponent to be 63. But Double does not have that many bits of mantissa! So it can only approximate. Int64.max + 1 is actually representable by a Double, because it is equal to 2^63. You just need one 1 followed by 52 0s in the mantissa and the exponent can store 64. And that's what Double did.
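The same rounding can be demonstrated in Python, whose float is also an IEEE-754 double (a quick sanity check, not Swift-specific):
# 2**63 - 1 (Int64.max) is not exactly representable as a double;
# it rounds to the nearest representable value, which is 2**63
print(float(2**63 - 1) == 2**63)  # True
print('%.1f' % float(2**63 - 1))  # 9223372036854775808.0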

How to convert IEEE-11073 16-bit SFLOAT to mantissa and exp in Swift?

I want to split the two bytes into their mantissa and exponent and then combine them (mantissa × 10^exponent) to recover the numeric value.
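For reference, IEEE-11073 SFLOAT packs a 4-bit signed base-10 exponent and a 12-bit signed mantissa into 16 bits. A hedged sketch of the decoding, shown in Python for brevity (the bit operations translate directly to Swift; the function name is illustrative, and special values such as NaN are omitted):
def sfloat_to_float(raw):
    # Decode a 16-bit IEEE-11073 SFLOAT: value = mantissa * 10**exponent
    exponent = raw >> 12     # upper 4 bits
    mantissa = raw & 0x0FFF  # lower 12 bits
    # Both fields are two's-complement signed
    if exponent >= 0x8:
        exponent -= 0x10
    if mantissa >= 0x800:
        mantissa -= 0x1000
    return mantissa * (10.0 ** exponent)

print(sfloat_to_float(0x0048))  # 72.0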

RGBA decimal notation to arithmetic notation

I have to customize an iOS app, and the guideline says:
Please don’t use RGBA values in 0 to 255 decimal notation, but use 0.0
to 1.0 arithmetic notation instead!
For example, the default app color #70C7C6 in the guideline is converted to (0.298, 0.792, 0.784, 1.000).
How can I convert other colors? I had never come across this arithmetic notation before.
Convert the hex string values into integers, then divide each value by 255 to get the arithmetic notation.
For example: "C7" -> 199 -> 199/255.0 -> 0.78.
The last value is opacity, which sounds like in your case would always be 1.
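A quick conversion helper (sketched in Python for brevity; the hex parsing works the same way in any language, and the function name is mine):
def hex_to_arithmetic(hex_color, alpha=1.0):
    # Convert '#RRGGBB' to a normalized (r, g, b, a) tuple in 0.0-1.0
    h = hex_color.lstrip('#')
    r, g, b = (int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    return (round(r, 3), round(g, 3), round(b, 3), alpha)

print(hex_to_arithmetic('#70C7C6'))  # (0.439, 0.78, 0.776, 1.0)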
A color component is a number over a specified range. When working with hex or integer values you (usually) have a number in 0-255 (represented by 0x00-0xFF in hexadecimal). You can express the same value by normalizing it to the range 0.0-1.0; you do so by dividing each component by the maximum allowed value. For example, 0xC7 in hex is 199 in decimal; dividing it by 255.0f gives 0.780f.
In practice UIColor already provides methods to obtain normalized values, you just need to convert a number from hex notation, which can be done easily or by using a simple library:
UIColor *color = [UIColor colorWithCSS:@"70c7c6"];
CGFloat r, g, b, a;
[color getRed:&r green:&g blue:&b alpha:&a];

Understanding the STANAG 4609 KLV format

I am trying to parse a STANAG 4609 KLV stream from an external camera.
To begin with, I am trying to figure out the altitude value received in the stream.
According to the STANAG 4609 documentation, the value is 2 bytes long, in feet, represented as a float.
I know that the camera altitude is approximately 39.8 meters, but I can't reconcile the 2 bytes I receive with that value (in feet).
The 2 bytes I received are {12, 23}.
How can I interpret them correctly?
In STANAG 4609 KLV, floating point values are encoded as integers. You can check MISB ST0601 for the particular data element you're interested in. It will give you the conversion formula to convert the 2-byte integer into the correct floating point value.
Assuming you're referring to the Sensor True Altitude (tag 15), the conversion formula is (19900/65535) * int_value - 900.
Applying this to your data:
Interpret the bytes [12, 23] ([0x0C, 0x17] in hexadecimal) as a big-endian integer: 0x0C17 = 3095.
Apply the formula: (19900/65535) * 3095 - 900 ≈ 39.81 meters, which matches your known camera altitude. (Note that ST0601 specifies this altitude in meters, not feet.)
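A minimal decoding sketch of that arithmetic (assuming tag 15, Sensor True Altitude; the function name is illustrative):
def decode_sensor_true_altitude(data):
    # Map the 2-byte unsigned int to the -900..19000 m range per MISB ST0601
    raw = int.from_bytes(data, byteorder='big')  # KLV integers are big-endian
    return (19900.0 / 65535.0) * raw - 900.0

print(decode_sensor_true_altitude(bytes([12, 23])))  # ~39.81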
