Understanding STANAG 4609 KLV format - stream

I am trying to parse a STANAG 4609 KLV stream from an external camera.
To start, I am trying to figure out the altitude value received in the stream.
According to the STANAG 4609 documentation, the value is 2 bytes long, in feet, represented as a float.
I know that the camera altitude is approximately 39.8 meters, but I can't interpret the 2 bytes I receive as that value (in feet).
The 2 bytes I received are {12,23}.
How can I interpret it in the correct way?

In STANAG 4609 KLV, floating-point values are encoded as scaled integers. You can check MISB ST0601 for the particular data element you're interested in; it will give you the conversion formula to turn the 2-byte integer into the correct floating-point value.
Assuming you're referring to the Sensor True Altitude (tag 15), the conversion formula is (19900/65535) * int_value - 900.
Applying this to your data:
Interpret the bytes [12, 23] ([0x0C, 0x17] in hexadecimal) as a big-endian 16-bit unsigned integer: 0x0C17 equals 3095.
Apply the formula: (19900/65535) * 3095 - 900 ≈ 39.81, which matches your expected altitude of about 39.8 meters. (Note that ST0601 specifies Sensor True Altitude in meters, not feet.)
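For reference, here is a minimal Python sketch of that decoding (assuming the two bytes arrive most-significant byte first, and that this really is the ST0601 Sensor True Altitude element):
import struct

def decode_sensor_true_altitude(payload):
    # Tag 15 payload: unsigned 16-bit big-endian integer in 0..65535,
    # mapped linearly onto -900..19000 meters.
    raw = struct.unpack(">H", payload)[0]
    return (19900.0 / 65535.0) * raw - 900.0

print(decode_sensor_true_altitude(bytes([12, 23])))  # ~39.81 (meters)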

Related

How are floating-point pixel values converted to integer values?

How does an image library (such as PIL, OpenCV, etc.) convert floating-point values to integer pixel values?
For example
import numpy as np
from PIL import Image
# Creates a random image and saves in a file
def get_random_img(m=0, s=1, fname='temp.png'):
    im = m + s * np.random.randn(60, 60, 3)  # For eg. min: -3.8947058634971179, max: 3.6822041760496904
    print(im[0, 0])  # for eg. array([ 0.36234732, 0.96987366, 0.08343])
    imp = Image.fromarray(im, 'RGB')  # (*)
    print(np.array(imp)[0, 0])  # [140, 74, 217]
    imp.save(fname)
    return im, imp
For the above method, an example is provided in the comments (randomly produced). My question is: how does (*) convert an ndarray (whose values can range from minus infinity to plus infinity) to pixel values between 0 and 255?
I tried to investigate the PIL.Image.fromarray method and eventually ended up at line #798, d.decode(data), within the PIL.Image.Image().frombytes method. I could not find the implementation of the decode method, so I am unable to know what computation goes on behind the conversion.
My initial thought was that maybe the method uses the minimum (mapped to 0) and maximum (mapped to 255) values from the array and then maps all the other values accordingly between 0 and 255. But upon investigation, I found out that's not what is happening. Moreover, how does it handle the case when the values of the array range between 0 and 1, or any other range of values?
Some libraries assume that floating-point pixel values are between 0 and 1, and will linearly map that range to 0 and 255 when casting to 8-bit unsigned integer. Some others will find the minimum and maximum values and map those to 0 and 255. You should always explicitly do this conversion if you want to be sure of what happened to your data.
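For example, a minimal NumPy sketch of doing the conversion explicitly, showing both conventions (the array here is just a stand-in for your data):
import numpy as np

im = np.random.randn(60, 60, 3)   # arbitrary floating-point "image"

# Convention 1: assume values lie in [0, 1]; clip and scale to [0, 255].
u8_unit = (np.clip(im, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# Convention 2: map the array's own minimum/maximum onto [0, 255].
lo, hi = im.min(), im.max()
u8_minmax = ((im - lo) / (hi - lo) * 255.0).round().astype(np.uint8)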
In general, a pixel does not need to be 8-bit unsigned integer. A pixel can have any numerical type. Usually a pixel intensity represents an amount of light, or a density of some sort, but this is not always the case. Any physical quantity can be sampled in 2 or more dimensions. The range of meaningful values thus depends on what is imaged. Negative values are often also meaningful.
Many cameras have 8-bit precision when converting light intensity to a digital number. Likewise, displays typically have an 8-bit intensity range. This is the reason many image file formats store only 8-bit unsigned integer data. However, some cameras have 12 bits or more, and some processes derive pixel data with a higher precision that one does not want to quantize. Therefore formats such as TIFF and ICS will allow you to save images in just about any numeric format you can think of.
I'm afraid it has done nothing anywhere near as clever as you hoped! It has merely interpreted the first byte of the first float as a uint8, then the second byte as another uint8...
from random import random, seed
import numpy as np
from PIL import Image
# Generate repeatable random data, so other folks get the same results
np.random.seed(42)
# Make a single RGB pixel
im = np.random.randn(1, 1, 3)
# Print the floating point values - not that we are interested in them
print(im)
# OUTPUT: [[[ 0.49671415 -0.1382643 0.64768854]]]
# Save that pixel to a file so we can dump it
im.tofile('array.bin')
# Now make a PIL Image from it and print the uint8 RGB values
imp = Image.fromarray(im, 'RGB')
print(imp.getpixel((0,0)))
# OUTPUT: (124, 48, 169)
So, PIL has interpreted our data as RGB=124/48/169
Now look at the hex we dumped. It is 24 bytes long, i.e. 3 float64 (8-byte) values, one for red, one for green and one for blue for the 1 pixel in our image:
xxd array.bin
Output
00000000: 7c30 a928 2aca df3f 2a05 de05 a5b2 c1bf |0.(*..?*.......
00000010: 685e 2450 ddb9 e43f h^$P...?
And the first byte (7c) has become 124, the second byte (30) has become 48 and the third byte (a9) has become 169.
TLDR; PIL has merely taken the first byte of the first float as the Red uint8 channel of the first pixel, then the second byte of the first float as the Green uint8 channel of the first pixel and the third byte of the first float as the Blue uint8 channel of the first pixel.
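One way to double-check that explanation (a small sketch, not part of the original answer, and assuming a little-endian machine) is to view the raw float64 bytes as uint8 directly with NumPy; the first three bytes match the (124, 48, 169) that PIL reported above:
import numpy as np

np.random.seed(42)
im = np.random.randn(1, 1, 3)

# Reinterpret the 24 bytes of the three float64 values as unsigned bytes.
raw = im.view(np.uint8).ravel()
print(raw[:3].tolist())   # [124, 48, 169] -- the "RGB" values PIL produced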

Foundation: Why do Int64.max and Double(Int64.max) print two entirely different values in the Swift iOS SDK?

Here is my Swift Code
print("\(Int64.max)")
print("\(Double(Int64.max))")
It produces the following output:
9223372036854775807
9.223372036854776e+18
Why are the two values entirely different?
9.223372036854776e+18 - 9223372036854775807 = 193 FYI
The Double value you are seeing in the output is only an approximation to some number of significant figures. We can see more significant figures by using String(format:):
print(String(format: "%.1f", Double(Int64.max)))
This prints:
9223372036854775808.0
So actually, the difference is not as big as what you claimed it was (193). It's just a difference of 1.
Why is there a difference?
Double stores values using a floating-point representation. It can represent a wide range of numbers, but not every number in that range. Double uses 53 bits of mantissa, 1 sign bit and 11 bits to store the exponent. The mantissa represents the significant digits of the number, and the exponent tells you where to put the binary point. Everything on one side of the binary point represents positive powers of 2 and everything on the other side represents negative powers of 2. For example:
0.1010000 0010
mantissa exponent
The exponent says to move the binary point to the right 2 places, so the mantissa becomes 010.10000. The 1 on the left of the point represents 2, and the 1 on the right represents a half (2^-1), so this floating-point number represents the value 2.5.
To represent Int64.max (2^63-1), you need 63 bits of mantissa to all be 1s and the value in the exponent to be 63. But Double does not have that many bits of mantissa! So it can only approximate. Int64.max + 1 is actually representable by a Double, because it is equal to 2^63. You just need one 1 followed by 52 0s in the mantissa and the exponent can store 64. And that's what Double did.
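The same rounding can be reproduced outside Swift; here is a small Python illustration of the underlying IEEE 754 double behaviour (not Swift-specific code):
# 2^63 - 1 needs 63 significant bits, but a double only has 53,
# so the conversion rounds to the nearest representable value: 2^63.
i64_max = 2**63 - 1
print(float(i64_max) == 2.0**63)   # True
print(f"{float(i64_max):.1f}")     # 9223372036854775808.0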

Why is a Float value rounded in a playground but not in a project in Swift?

I'm using a Float value in my project. When I access it in the Xcode project, it expands to extra digits down at the billionths place, but in a playground it prints as expected.
In xcodeproj:
let sampleFloat: Float = 0.025
print(sampleFloat) // It prints 0.0250000004
In Playground:
let sampleFloat: Float = 0.025
print(sampleFloat) // It prints 0.025
Any clue what's happening here? How can I avoid the expansion in the xcodeproj?
Lots of comments, but nobody's posted all the info as an answer yet.
The answer is that internally, floating point numbers are represented with binary powers of 2.
In base 10, the tenths digit represents how many 1/10ths are in the value. The hundredths digit represents how many 1/100ths are in the value, the thousandths digit represents how many 1/1000ths are in the value, and so on. In base 10, you can't represent 1/3 exactly. That is 0.33333333333333333...
In binary floating point, the first fractional binary digit represents how many 1/2s are in the value. The second digit represents how many 1/4ths are in the value, the next digit represents how many 1/8ths are in the value, and so on. There are some (lots of) decimal values that can't be represented exactly in binary floating point. The value 0.1 (1/10) is one such value. It will be approximated by something like 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + 1/8192.
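As a quick check of that partial expansion (just the terms listed above; the true binary expansion continues forever):
approx = 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + 1/8192
print(approx)   # 0.0999755859375 -- already close to 0.1, but not exact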
The value 0.025 is another value that can't be represented exactly in binary floating point.
There is an alternate number format, NSDecimalNumber (Decimal in Swift 3) that uses decimal digits to represent numbers, so it CAN express any decimal value exactly. (Note that it still can't express a fraction like 1/3 exactly.)
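To see the approximation concretely, here is a short Python illustration (the IEEE 754 behaviour is the same as in Swift; it is shown here only to make the stored values visible):
from decimal import Decimal
import struct

# Decimal(float) prints the exact value the 64-bit double actually stores
# for the literal 0.025 -- it is slightly above 0.025.
print(Decimal(0.025))

# Round-tripping through a 32-bit float (Swift's Float) keeps even less precision,
# which is why Swift prints roughly 0.0250000004 when enough digits are shown.
f32 = struct.unpack('f', struct.pack('f', 0.025))[0]
print(f"{f32:.10f}")   # ~0.0250000004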

What does this CRC implementation mean by having a seed value?

I am trying to implement a CRC algorithm in Verilog for the SENT sensor protocol.
In a document put out by the SAE, they say their CRC uses the generator polynomial
x^4 + x^3 + x^2 + 1 and a seed value of 0101. I understand the basic concept of calculating a CRC using XOR division and saving the remainder, but every time I try to compute a CRC I get the wrong answer.
I know this because in the same document they have a list of examples with data bits and the corresponding checksum.
For example, the series of hex values x"73E73E" has checksum 15 and the series x"748748" has checksum 3. Is there anyone who can arrive at these values using the information above? If so, how did you do it?
This is a couple of sentences copied from the document: "The CRC checksum can be implemented as a series of shift left by 4 (multiply by 16) followed by a 256 element array lookup. The checksum is determined by using all data nibbles in sequence and then checksumming the result with an extra zero value."
Take a look at RevEng, which can determine the CRC parameters from examples (it would need more examples than you have provided).
The seed is simply the initial value of your CRC calculation. It is usual to have a non-zero seed to avoid the CRC result being zero in the case of all-zero data.
I just had to find out the same thing. I was checking a CRC implementation of this algorithm that was cryptic, albeit working. So I wanted to get the "normal" CRC algorithm to give me the same numbers so I could refactor without problems.
For the numbers you gave I get 0x73E73E => 12, 0x748748 => 3.
As you can read in Koopman, the seed value "Prevents all-zero data word from resulting in all-zero check sequence".
I wrote my standard implementation using the algorithm from Wikipedia in Python:
def nCRCcalc(poly, data, crc, n):
    crctemp = (data << n) | crc
    # data width assumed to be 32 bits
    shift = 32
    while shift > n:
        shift = shift - 1
        mask = 1 << shift
        if mask & crctemp:
            crctemp = crctemp ^ (poly << (shift - n))
    return crctemp
poly is the polynomial, data is the data, crc is the seed value, and n is the width of the CRC in bits. So in this case poly is 29, crc is 5 and n is 4.
You might need to reverse nibble order, depending on in which format you receive your data. Also this is obviously not the implementation with the table, just for checking.
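For reference, calling the function above with the values from the question (poly 0b11101 = 29, seed 5, n = 4) reproduces the numbers quoted earlier:
# x^4 + x^3 + x^2 + 1 -> binary 11101 -> 29
print(nCRCcalc(29, 0x73E73E, 5, 4))   # 12
print(nCRCcalc(29, 0x748748, 5, 4))   # 3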

What is fastest way to group array elements into buckets in iOS?

So I have an array of 500,000 elements:
float* arrayToBucketize=(float*) malloc(sizeof(float)*500000);
and an array that represents the buckets:
int buckets[5]={0,25,50,75,100};
What is the fastest way to go through the first array, look at each float value, compare it to the "buckets" array and replace that float value with the nearest bucket value. So if the float value was 11.25, it would be replaced with 0. On the other hand, 90.10 would be replaced with 100.
Also, I would need any values outside of that range (<0 and >100) to remain unchanged.
I know I can do this with for loops and if conditions; but in the bad habit of optimizing, I am trying to find a more efficient (faster) way of doing this. I am hoping that there is a C function(s) or an iOS function in the Accelerate framework that can do this. Or possibly a series of Accelerate framework matrix functions.
Thanks
For each value inside the bucket range, divide by the bucket spacing (the common difference between bucket values, 25 here). Round the result to the nearest integer, and then multiply again by the bucket spacing.
Using the example numbers:
11.25 / 25 = 0.45
0.45 -> 0
0 * 25 = 0
90.10 / 25 = 3.604
3.604 -> 4
4 * 25 = 100
The Accelerate framework has vectorized divide, round, and multiply functions, so these should run fairly quickly.
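The same divide/round/multiply idea can be sketched with NumPy (shown here only as an illustration of the approach, not of the Accelerate calls; it also leaves out-of-range values unchanged, as the question requires):
import numpy as np

values = np.random.uniform(-10.0, 110.0, 500000).astype(np.float32)

spacing = 25.0                                    # common distance between buckets
in_range = (values >= 0.0) & (values <= 100.0)
bucketed = np.where(in_range, np.round(values / spacing) * spacing, values)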
