Bug in ARCore acquireDepthImage? - arcore

The documentation states that each depth value is 16 bits with the top 3 bits set to 0, meaning the range should be about 8,192 (2^13). My code is calling
depthImage.getPlanes()[0].getBuffer().asShortBuffer().get(x)
The range of these numbers is about the full signed short range and seemingly random. To debug, I tried printing the following values:
depthImage.getPlanes()[0].getBuffer().get(0);
depthImage.getPlanes()[0].getBuffer().get(1);
The first one oscillates randomly and the second one is almost always in the low numbers such as 0-6, but I've seen as high as 21.
It seems like the 2nd byte is the most significant byte of the number and the 1st byte is the least significant byte (i.e. they are reversed).

It's little-endian encoding; either set the buffer's byte order to ByteOrder.LITTLE_ENDIAN before reading, or swap the bytes with Short.reverseBytes():
depthMm = java.lang.Short.reverseBytes(depthImage.planes[0].buffer.getShort(offset))
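A minimal, self-contained Java sketch of both fixes, using two simulated bytes (0xE8 0x03, i.e. 1000 mm stored little-endian) rather than a real ARCore image plane:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DepthDecode {
    public static void main(String[] args) {
        // Simulated plane data: depth 1000 mm stored little-endian (0xE8, 0x03).
        ByteBuffer buffer = ByteBuffer.wrap(new byte[] {(byte) 0xE8, 0x03});

        // Java's ByteBuffer defaults to big-endian, so getShort() misreads the value...
        short bigEndianRead = buffer.getShort(0);          // 0xE803 = -6141

        // ...unless the byte order is set to match the image data:
        buffer.order(ByteOrder.LITTLE_ENDIAN);
        short depthMm = buffer.getShort(0);                // 0x03E8 = 1000

        // Equivalent fix: keep the default order and swap the bytes afterwards.
        short swapped = Short.reverseBytes(bigEndianRead); // also 1000

        System.out.println(bigEndianRead + " " + depthMm + " " + swapped);
    }
}
```

Setting the buffer's order once is usually tidier than calling reverseBytes on every sample.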

Related

Does a stuffing bit in CAN count towards the next stuffing group

If you have a sequence of bits in CAN data:
011111000001
There will need to be a stuffed 0 after the ones, and a stuffed 1 after the 0s. But I'm not sure where the 1 should go.
The standard seems ambiguous to me because sometimes it talks about "5 consecutive bits during normal operation", but sometimes it says "5 consecutive bits of data". Does a stuffing bit count as data?
i.e.
should it be:
01111100000011
Or
01111100000101
Bit stuffing only applies from the start of the CAN frame through the CRC sequence; the CRC delimiter, ACK field, End-Of-Frame and Intermission fields carry no stuffing.
It does not matter what is transmitted.
It is simply "after 5 consecutive bits of the same value" one complementary bit is inserted, and the stuffed bit counts toward the next run.
The second of your examples is correct. 6 consecutive bits of the same value make the message invalid.
From the old Bosch CAN2.0B spec, chapter 5:
The frame segments START OF FRAME, ARBITRATION FIELD, CONTROL FIELD, DATA FIELD and CRC SEQUENCE are coded by the method of bit stuffing.
Meaning everything from the start of the frame to the 15 bit CRC can have bit stuffing, but not the 1 bit CRC delimiter and the rest of the frame.
Whenever a transmitter detects five consecutive bits in the bit stream to be transmitted
This "bit stream" refers to all the fields mentioned in the previously quoted sentence.
...in the actual transmitted bit stream
The actual transmitted bit stream is the original data + appended stuffing bit(s).
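As an illustration (not production code), here is a small Java sketch of the stuffing rule, showing that the stuffed bit itself starts the next run:

```java
public class BitStuff {
    // Insert a complementary bit after every run of five identical bits.
    // The stuffed bit itself counts toward the next run, as in CAN.
    static String stuff(String bits) {
        StringBuilder out = new StringBuilder();
        char last = ' ';
        int run = 0;
        for (char b : bits.toCharArray()) {
            out.append(b);
            run = (b == last) ? run + 1 : 1;
            last = b;
            if (run == 5) {
                char comp = (b == '0') ? '1' : '0';
                out.append(comp);   // stuffed bit
                last = comp;        // it begins a new run of length 1
                run = 1;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(stuff("011111000001")); // prints 01111100000101
    }
}
```

Applied to the question's sequence this yields the second example, 01111100000101.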

Is information stored in registers/memory structured as binary?

Looking at this question on Quora ("Are data stored in registers and memory in hex or binary?"), I think the top answer is saying that data persistence is achieved through physical properties of hardware and is not directly relatable to either binary or hex.
I've always thought of computers as 'binary', but have just realized that that only applies to the usage of components (magnetic up/down or an on/off transistor) and not necessarily the organisation of, for example, memory contents.
i.e. you could, theoretically, create an abstraction in memory that used 'binary components' but that wasn't binary, like this:
100000110001010001100
100001001001010010010
111101111101010100001
100101000001010010010
100100111001010101100
And then recognize that as the (badly-drawn) image of 'hello', rather than the ASCII encoding of 'hello'.
An answer on SO (What's the difference between a word and byte?) mentions that processors can handle 'words', i.e. several bytes at a time, so while information representation has to be binary I don't see why information processing has to be.
Can computers do arithmetic on hex directly? In this case, would the internal representation of information in memory/registers be in binary or hex?
Perhaps "digital computer" would be a good starting term and then from there "binary digit" ("bit"). Electronically, the terms for the values are sometimes "high" and "low". You are right, everything after that depends on the operation. Most of the time, groups of bits are operated on together. Commonly groups are 1, 8, 16, 32 and 64 bits. The meaning of the bits depends on the program but some operations go hand-in-hand with some level of meaning.
When the meaning of a group of bits is not known or important, humans like to be able to discern the value of each bit. Binary could be used, but more than 8 bits is hard to read. Although it is rare to operate on groups of 4 bits, hexadecimal is much more readable and is generally used regardless of the number of bits. Sometimes octal is used, but that's based on contexts where there is some meaning to a subgrouping of 3 bits or an avoidance of digits beyond 9.
Integers can be stored in two's complement format, and CPUs often have instructions for such integers. One such operation is negation. For a group of 8 bits, it would map 1 to -1, … 127 to -127, and -1 to 1, … -127 to 127, and 0 to 0 and -128 to -128. Decimal is likely the most valuable to humans here, not base 256, base 2 or base 16. In unsigned hexadecimal, that would be 01 to FF, …, 00 to 00, 80 to 80.
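A small Java sketch of that negation mapping on 8-bit values, using the sample values listed above:

```java
public class Negate8 {
    public static void main(String[] args) {
        // Two's-complement negation on 8 bits: invert all bits, then add 1.
        byte[] samples = {1, 127, -1, -127, 0, -128};
        for (byte b : samples) {
            byte neg = (byte) -b; // note: -128 negates to itself, +128 doesn't fit in 8 bits
            System.out.printf("%4d (0x%02X) -> %4d (0x%02X)%n",
                    b, b & 0xFF, neg, neg & 0xFF);
        }
    }
}
```

The `& 0xFF` masks show the unsigned hexadecimal view of the same bit pattern (e.g. -1 prints as 0xFF).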
For an intro to how a CPU might do integer addition on a group of bits, see adder circuits.
Other number formats include IEEE-754 floating point and binary-coded decimal.
I think you understand that digital circuits are binary. So, based on the above, yes, operations do operate on a higher conceptual level despite the actual storage.

Working on 16 bit unsigned integer (uint16_t)

I want to generate a 16 bit unsigned integer (uint16_t) which could represent following:
First 2 digits representing some version, like 1, 2, 3 etc.
Next 3 digits representing another number, maybe 123, 345, 071 etc.
And the last 11 digits representing a number like T234, T566 etc.
How can we do this using Objective-C? I would like to parse this data later on to get these components back. Please advise.
I think you are misunderstanding just what uint16_t means. It doesn't mean a 16 digit decimal number (which would be any number between 0 and 9,999,999,999,999,999). It means an unsigned number that can be expressed using 16 bits. The range of such a value is 0 to 65535 in decimal. If you really wanted to store the numbers you are talking about you would need 52 bits. You would also be making things very difficult for yourself, since you wouldn't easily be able to extract the first two decimal digits from that 52 bit sequence; you'd have to treat the number as a decimal value and take it modulo 100, you couldn't just say it's bits 1 to 8.
An alternative is to pack each field into its own bit range of a larger integer. You would take a 64 bit value (uint64_t) and you'd say that within this value the bits 1-7 are the version (which could be a value up to 127), bits 8-17 are the second number (which could be a value up to 1023) and bits 18-63 could be your third number (those 46 bits would be able to store a number up to 70,368,744,177,663).
All this is technically possible, but you are really going to be making things hard for yourself. It looks like you are storing a version, minor version and build number, and most people do that using strings, not packed integers.
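A hedged Java sketch of that packing scheme (field widths 7/10/46 bits as described above; the helper names pack/version/minor/build are made up for illustration):

```java
public class VersionPack {
    // Pack version (7 bits, <=127), minor (10 bits, <=1023) and
    // build (46 bits) into one 64-bit value.
    static long pack(long version, long minor, long build) {
        return (version & 0x7FL)
             | ((minor & 0x3FFL) << 7)
             | ((build & 0x3FFFFFFFFFFFL) << 17);
    }

    // Unpacking shifts each field back down and masks off its width.
    static long version(long packed) { return packed & 0x7FL; }
    static long minor(long packed)   { return (packed >>> 7) & 0x3FFL; }
    static long build(long packed)   { return (packed >>> 17) & 0x3FFFFFFFFFFFL; }

    public static void main(String[] args) {
        long p = pack(2, 345, 70123);
        System.out.println(version(p) + "." + minor(p) + "." + build(p)); // 2.345.70123
    }
}
```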

Separating decimal value to least & most significant byte

I'm working on some 65802 code (don't ask :P) and I need to separate a 16-bit value into two 8-bit bytes to store it in memory. How would I go about this?
EDIT:
Also, how would I take two similar bytes and combine them into one 16-bit value?
EDIT:
To clarify, many of the solutions available on the internet are not possible with the programming language I'm using (a version of MS-BASIC). I can't take modulo, and I can't left or rightshift. I've figured out that I can put the two bytes together by multiplying the high byte by 256 and adding it to the low byte, but how would I reverse the process?
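The reverse uses only division and subtraction, both available in MS-BASIC (with INT() standing in for integer division). A Java sketch of the same arithmetic:

```java
public class SplitBytes {
    public static void main(String[] args) {
        int value = 0xABCD;             // 43981

        // Split without modulo or shifts (BASIC: HI = INT(V / 256)).
        int high = value / 256;         // 171 (0xAB)
        int low  = value - high * 256;  // 205 (0xCD)

        // Combine, as already worked out in the question.
        int combined = high * 256 + low;

        System.out.println(high + " " + low + " " + combined); // 171 205 43981
    }
}
```

Subtracting `high * 256` from the value is exactly `value MOD 256`, just spelled without a modulo operator.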

Why is the smallest value that can be stored a Byte (8 bit) & not a Bit (1 bit)?

Why is the smallest value that can be stored in memory a byte (8 bits) and not a bit (1 bit)?
Even booleans are stored as bytes. Will we ever bump the smallest unit to 32 or 64 bits, like registers on the CPU?
EDIT: To clarify, as many answers seemed confused about the nature of the question: this question is about why a byte isn't 7-bit, 1-bit, 32-bit, etc. (not why lower-bit primitives must fit within the hardware's byte at minimum). Is the 8-bit byte simply historical, given that some hardware has 10-bit bytes for example? Or is there a mathematical reason 8 bits is ideal versus, say, 10 bits for general processing?
The hardware is built to read data in blocks (bytes, later words and dwords). This provides greater efficiency than accessing individual bits, and also offers more addressing range. So most data is aligned to at least a byte boundary. There are encodings that operate on bit sequences rather than bytes, but they are quite rare.
Nowadays data is most often aligned to a dword (32-bit) boundary anyway. Moreover, some hardware (older ARM cores, for example) can't access misaligned multibyte variables, i.e. a 16-bit word can't "cross" a dword boundary - an exception will be thrown.
Because computers address memory at the byte level, so anything smaller than a byte is not addressable.
The underlying methods of processor access are limited to the size of the smallest usable register. On most architectures, that size is 8 bits. You can use smaller portions of these; for instance, C has the bitfield feature in structs that will allow combining fields that only need to be certain bit lengths. Access will still require that the whole byte be read.
Some older exotic architectures actually did have a different word size. In these machines, 10 bits might be the common size.
Lastly, processors are almost always backwards compatible. Intel, for instance, has maintained complete instruction compatibility from the 386 on up. If you take a program compiled for the 386, it will still run on an i7 processor. Changing the word size would break compatibility. So while it is possible, no manufacturer will ever do it.
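To illustrate the bitfield idea mentioned above: Java has no bitfields, but the same packing can be sketched with masks and shifts (the layout here is invented for the example), and as noted, access still reads the whole byte:

```java
public class Flags {
    // Pack three 1-bit flags and a 5-bit counter into a single byte,
    // the way a C bitfield struct would.
    static final int FLAG_A = 1 << 0;
    static final int FLAG_B = 1 << 1;
    static final int FLAG_C = 1 << 2;

    public static void main(String[] args) {
        int b = 0;
        b |= FLAG_A | FLAG_C;      // set two 1-bit flags
        b |= (13 & 0x1F) << 3;     // 5-bit counter in bits 3..7

        boolean aSet = (b & FLAG_A) != 0;  // test one flag
        int counter = (b >> 3) & 0x1F;     // extract the counter
        System.out.println(aSet + " " + counter); // true 13
    }
}
```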
Assume we have a language that consists of 2 characters, such as a and b.
To distinguish the two characters we need at least 1 bit, for example 0 to represent a and 1 to represent b.
If we count letters, special characters and symbols, there are 128 characters, and to distinguish one character from another you need log2(128) = 7 bits, with an 8th bit for transmission (historically used for parity).
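A tiny Java sketch of that counting argument (bits needed to distinguish n symbols = ceil(log2(n))):

```java
public class BitsNeeded {
    // Smallest number of bits whose 2^bits patterns cover n symbols.
    static int bitsNeeded(int n) {
        int bits = 0;
        while ((1 << bits) < n) bits++;
        return bits;
    }

    public static void main(String[] args) {
        System.out.println(bitsNeeded(2));   // 1 (a, b)
        System.out.println(bitsNeeded(128)); // 7 (ASCII)
    }
}
```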
