I am new to LoRa. I am trying to connect my LoRa gateway and GPS tracker LW001-BG to The Things Network, and it connected to TTN successfully, but how do I convert or decode the data from the GPS into lat/long format?
Here is the documentation: http://doc.mokotechnology.com/index.php?s=/2&page_id=143
I receive data that looks like this: 02 01 56 F8 0B 45 F4 29 32 46, and I need to convert/decode it into a readable format.
Thanks, I hope someone can help me.
The payload of the message is in bytes 3-6 (for the latitude) and 7-10 (for the longitude). The first two bytes indicate how many packages there are (two) and which the current one is (the first).
The four bytes represent a 32-bit floating point value; in your example this is 2239.5210 for the latitude. This means 22 degrees, 39 minutes, and 31.26 seconds (the fractional part of the minutes times sixty).
You can see this in an online converter: as the byte order is lowest byte first, you need to reverse it, convert it to binary, and then check the corresponding checkboxes in the binary representation:
56 F8 0B 45 becomes 45 0B F8 56, or in binary:
01000101000010111111100001010110
Here the first bit is the sign, followed by 8 bits of exponent and 23 bits of mantissa. The decimal representation is 2239.52099609, and you discard all digits after the fourth decimal place to get 2239.5210 (with rounding).
Depending on how you process this data, you might be able to simply reinterpret the bytes as a float variable, as floats generally follow the 32-bit IEEE 754 standard.
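As an illustration only, here is a minimal C sketch of that decoding, assuming the four latitude bytes (56 F8 0B 45) have already been extracted from the payload; the longitude bytes work the same way:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Convert four little-endian payload bytes to an IEEE 754 float,
   then split the ddmm.mmmm value into degrees, minutes and seconds. */
int main(void)
{
    /* Bytes 3-6 of the example payload: 56 F8 0B 45 (latitude) */
    uint8_t raw[4] = {0x56, 0xF8, 0x0B, 0x45};

    uint32_t bits = (uint32_t)raw[0]
                  | (uint32_t)raw[1] << 8
                  | (uint32_t)raw[2] << 16
                  | (uint32_t)raw[3] << 24;

    float value;
    memcpy(&value, &bits, sizeof value);             /* reinterpret as float */

    int    degrees = (int)(value / 100);             /* 22 */
    double minutes = value - degrees * 100;          /* 39.5210 */
    double seconds = (minutes - (int)minutes) * 60;  /* 31.26 */

    printf("raw: %.4f -> %d deg %d min %.2f sec\n",
           value, degrees, (int)minutes, seconds);
    return 0;
}
```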
I have connected to OBD2 and I am getting the CAN data (11-bit, 500 kbps CAN) using an Atmel CAN controller.
I get data.
Now, how do I get the mode and PIDs from this data?
For example, my data looks like this:
15164A8A-FF088B52 -- Data: 00,00,00,86,9C,FE,9C,FE,
I can see the RPM changing, ignition on/off, etc. in the data fields.
I don't want to use ELM chips. I need to handle the raw data directly.
HINT: All of my numbers are in HEX.
The OBD2 protocol sends you responses in bytes (8 bits). Responses are subdivided into a header (also called the ID) and data.
The ID is the address of the ECU, and the data is the response data from the ECU; in the CAN bus protocol it is always 8 bytes.
The 8 bytes of data are divided into the PCI (which can be one or two bytes) and the values. The PCI tells you the frame type (single, first, consecutive, or flow control frame) and how many bytes are incoming.
To keep it simple, here is an example for a single frame only.
You might send an OBD request to the main ECU like this:
7DF 02 01 0C 00 00 00 00 00
7DF is the address a diagnostic tester device sends its requests to.
02 is the number of data bytes being sent.
01 is the mode (which you might be interested in!): 01 is current data, 02 is freeze frame data, and so on.
0C is the RPM PID.
The response from the ECU would be something like this (single frame):
7E8 04 41 0C 12 13 00 00 00
7E8 is the ID of the ECU that is responding.
04 is the number of incoming data bytes.
41 indicates the data is a response to a mode 01 request.
0C is the PID being answered.
12 13 are the two bytes in response to PID 0C. Keep in mind that you have to decode these two bytes according to the OBD-II ISO protocol; you can also find some of the conversion formulas on Wikipedia.
The other bytes are unused.
To make it short: you have to parse each response from the ECU and convert the useful bytes into readable decimal values. How you do that depends on which programming language you are using: in C/C++ the best choice in my opinion is unsigned char, which is guaranteed to be at least 8 bits, and in Java it can be byte. Also, use bitwise operators to make your life a bit easier.
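As an illustration only (not tied to any particular CAN driver), here is a minimal C sketch that parses the single-frame response above and converts the two PID 0C bytes to RPM with the standard formula (256 * A + B) / 4:

```c
#include <stdio.h>
#include <stdint.h>

/* Parse the single-frame OBD2 response from the example above:
   data[0] = number of data bytes, data[1] = 0x40 + mode,
   data[2] = PID, data[3..] = the value bytes. */
int main(void)
{
    uint8_t data[8] = {0x04, 0x41, 0x0C, 0x12, 0x13, 0x00, 0x00, 0x00};

    if (data[1] == 0x41 && data[2] == 0x0C) {      /* mode 01, PID 0C (RPM) */
        /* Standard conversion for PID 0C: RPM = (256 * A + B) / 4 */
        unsigned rpm = (256u * data[3] + data[4]) / 4u;
        printf("RPM: %u\n", rpm);                  /* 0x1213 -> 1156 RPM */
    }
    return 0;
}
```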
If you have more questions, do not hesitate to ask.
I have one CAN standard 2.0A frame which contains 8 bytes of data, e.g.:
CAN Frame Data "00 CA 22 FF 55 66 AA DF" (8 Bytes)
Now I want to check how many stuff bits would be added to this CAN frame (bit stuffing). The standard formula for the worst-case bit stuffing scenario is the following:
64 + 47 + floor((34 + 64 - 1) / 4), where 64 is the number of data bits and 47 the number of overhead bits of a 2.0A frame.
How do I calculate the real number of stuffed bits in this sample CAN message?
Any comment or suggestion would be warmly welcome.
There is no way to mathematically "calculate" the stuffed bits. You need to construct the frame (on bit level), traverse the bits, and count.
You can read more about bit stuffing at the link below.
https://en.wikipedia.org/wiki/CAN_bus#Bit_stuffing
Basic principle:
1. Construct the CAN frame on bit level.
2. Start at the frame's start bit. Whenever 5 consecutive bits of the same polarity are found, insert a bit of opposite polarity.
3. Continue up to the CRC delimiter (the CRC delimiter itself is excluded).
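For illustration, here is a minimal C sketch of that counting loop. It assumes you have already constructed the frame bits (from the start-of-frame bit up to, but not including, the CRC delimiter) in an array; the toy input below is not a real frame:

```c
#include <stdio.h>

/* Count how many stuff bits the transmitter would insert into a bit
   sequence, following the CAN rule: after 5 consecutive bits of equal
   polarity (previously inserted stuff bits count too), one bit of
   opposite polarity is inserted. */
static int count_stuff_bits(const unsigned char *bits, int n)
{
    int stuffed = 0;
    int run = 1;                   /* length of the current run */
    unsigned char level = bits[0];

    for (int i = 1; i < n; i++) {
        if (bits[i] == level) {
            run++;
        } else {
            level = bits[i];
            run = 1;
        }
        if (run == 5) {
            stuffed++;             /* transmitter inserts an opposite bit */
            level = !level;        /* the stuff bit starts a new run */
            run = 1;
        }
    }
    return stuffed;
}

int main(void)
{
    /* Toy example: 12 bits containing two runs of five identical bits. */
    unsigned char bits[] = {0,0,0,0,0, 1,1,1,1,1, 0,1};
    printf("stuff bits: %d\n", count_stuff_bits(bits, 12));  /* prints 2 */
    return 0;
}
```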
For example, if I want to store the number one, I can use an integer type, taking up 32 bits, or a long type, taking up 64 bits; however, there will be the same amount of information (from a symbolic perspective) in both data types.
The variable occupies space based on its type, not on the value it actually contains.
The type determines the totality of possible values, of which the current actual value is just one. So it is this set of possible values that requires a certain amount of space, not the value itself.
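A small C illustration of the point: sizeof reports the same size for a variable whatever value it currently holds:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t small = 1;            /* fits easily */
    int32_t large = 2000000000;   /* near the type's maximum */

    /* Both variables occupy the same number of bytes: the size
       depends on the type, not on the value stored in it. */
    printf("%zu %zu\n", sizeof small, sizeof large);   /* 4 4 */
    return 0;
}
```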
EDIT:
I sense confusion :)
Let's say we have 2 bits which can be combined in 4 ways:
00
01
10
11
Now these are all possible combinations of 2 bits.
What those represent is completely indifferent. We just have 4 different states. We can map those to whatever we want:
00 white
01 black
10 red
11 blue
or
00 A
01 B
10 C
11 D
or
00 0
01 1
10 2
11 3
The fact that we can encode those 4 states is bound to the type. Whatever value we store in a variable of that type will always occupy all 2 bits that are necessary to encode all 4 possible values.
A remarkable exception is strings. They can be seen as a modern implementation of Turing's finite tape on which to inscribe characters from an alphabet. Remarkably, we can store all human knowledge with that type (e.g. the totality of all written books could be stored in one single string).
OK, this question sounds simple but I am taken by surprise. In the ancient days when 1 Megabyte was a huge amount of memory, Intel was trying to figure out how to use 16 bits to access 1 Megabyte of memory. They came up with the idea of using segment and offset address values to generate a 20 bit address.
Now, 20 bits gives 2^20 = 1,048,576 locations that can be addressed. Now assuming that we access 1 byte per address location we get 1,048,576/(1024*1024) = 2^20/2^20 Megabytes = 1 Megabyte. Ok understood.
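For concreteness, this is the address calculation I am referring to (a small C sketch; the segment is shifted left by four bits and added to the offset):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 8086 real-mode address calculation: two 16-bit values
       (segment and offset) combine into one 20-bit physical address. */
    uint16_t segment = 0xF000;
    uint16_t offset  = 0xFFF0;

    uint32_t physical = ((uint32_t)segment << 4) + offset;  /* 0xFFFF0 */

    printf("physical address: 0x%05lX (max 0xFFFFF = 2^20 - 1)\n",
           (unsigned long)physical);
    return 0;
}
```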
Here is where the confusion comes in: we have a 16-bit data bus in the ancient 8086 and can access 2 bytes at a time rather than 1. Doesn't that mean a 20-bit address can access a total of 2 megabytes of data? Why do we assume that each address holds only 1 byte when the data bus is 2 bytes wide? I am confused here.
It is very important to consider the bus when trying to understand this. This is probably more of an electrical question than a software one, but here is the answer:
For the 8086, when reading from ROM, the least significant address line (A0) is not used, reducing the number of address lines to 19 right then and there.
In the case where the CPU needs to read 16 bits from an odd address, say, bytes at 0x3 and 0x4, it will actually do two 16-bit reads: One from 0x2 and one from 0x4, and discard bytes 0x2 and 0x5.
For 8-bit ROM reads, the read on the bus is still 16-bits but the unneeded byte is discarded.
But for RAM there is sometimes a need to write just a single byte, and this gets a little more complex. There is an extra output signal on the processor called BHE# (Bus High Enable). The combination of A0 and BHE# is used to determine whether the write is 8 or 16 bits wide, and whether it is at an odd or even address.
Understanding these two signals is key to answering your question. Stating it as simply as possible:
8-bit even access: A0 OFF, BHE# OFF
8-bit odd access: A0 ON, BHE# ON
16-bit access (must be even): A0 OFF, BHE# ON
And we cannot have a bus cycle with A0 ON and BHE# OFF because an odd access to the even byte of the bus is meaningless.
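As a rough illustration only (reading "ON" as "the signal is asserted", as in the table above), here is a small C sketch of how the two signals follow from the address and the access width:

```c
#include <stdio.h>
#include <stdbool.h>

/* Derive the two bus signals from an access, following the table above.
   "true" here means the signal is asserted ("ON"). */
static void bus_signals(unsigned address, int width_bits,
                        bool *a0, bool *bhe)
{
    bool odd = address & 1;

    if (width_bits == 16) {      /* 16-bit access: must start on an even address */
        *a0  = false;            /* A0 OFF  */
        *bhe = true;             /* BHE# ON */
    } else if (odd) {            /* 8-bit access to an odd address */
        *a0  = true;             /* A0 ON   */
        *bhe = true;             /* BHE# ON */
    } else {                     /* 8-bit access to an even address */
        *a0  = false;            /* A0 OFF  */
        *bhe = false;            /* BHE# OFF */
    }
}

int main(void)
{
    bool a0, bhe;
    bus_signals(0x0002, 8, &a0, &bhe);
    printf("8-bit even:  A0=%s BHE#=%s\n", a0 ? "ON" : "OFF", bhe ? "ON" : "OFF");
    bus_signals(0x0003, 8, &a0, &bhe);
    printf("8-bit odd:   A0=%s BHE#=%s\n", a0 ? "ON" : "OFF", bhe ? "ON" : "OFF");
    bus_signals(0x0002, 16, &a0, &bhe);
    printf("16-bit even: A0=%s BHE#=%s\n", a0 ? "ON" : "OFF", bhe ? "ON" : "OFF");
    return 0;
}
```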
Relating this back to your original understanding: you are completely correct in the case of memory devices. A 1-megabyte, 16-bit memory chip will indeed have only 19 address lines; to that chip the addressable unit is 16 bits, and in effect it does not physically have an A0 address input.
... almost. 16-bit writable memory devices have two extra signals (BHE# and BLE#) which are connected to the CPU's BHE# and A0 respectively. This is so they know to ignore part of the bus when an 8-bit access is under way, making them hybrid 8/16-bit devices. ROM chips do not have these signals.
For the hardware unenlightened, this is a fairly complex area we're touching on here, and it does get very complex indeed in terms of performance considerations and in large systems with mixed 8 and 16 bit hardware.
It's all explained in fantastic detail in the 8086 datasheet.
It's because a byte is the 'atom' of memory addressing, and the code must be able to access every individual byte in the address space. It was really a matter of software and compatibility with existing 8-bit software back then.
This too may interest you: How a single byte of memory is accessed by CPU in a 32-bit memory and 32-bit processor
While debugging on 32-bit Windows XP using Immunity Debugger, I see the following on the stack:
Address    Value
00ff2254 ff090045
00ff2258 00000002
My understanding is that every address location contains 8 bits.
Is this correct?
If I'm understanding your question correctly, the answer is yes, every individual memory location contains 8 bits.
The debugger is showing you 4 bytes (32 bits) at a time, to make the display more compact (and because many data types take up 32 bits, so it's often useful to see 32-bit values). That's why the addresses in the left column are 4 locations apart.
If the debugger showed one byte (8 bits) at a time, the display would look like this:
Address    Value
00ff2254 45
00ff2255 00
00ff2256 09
00ff2257 ff
00ff2258 02
00ff2259 00
00ff225a 00
00ff225b 00
(assuming you're on a "little-endian" machine, which most modern desktop PCs are.)
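For illustration, here is a small C sketch showing how those four bytes, read in increasing address order, combine into the 32-bit value the debugger displays:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The four bytes the debugger shows at 00ff2254..00ff2257,
       in increasing address order (little-endian). */
    uint8_t byte[4] = {0x45, 0x00, 0x09, 0xff};

    /* Combine them into the 32-bit value the debugger displays. */
    uint32_t value = (uint32_t)byte[0]
                   | (uint32_t)byte[1] << 8
                   | (uint32_t)byte[2] << 16
                   | (uint32_t)byte[3] << 24;

    printf("0x%08x\n", (unsigned)value);   /* prints 0xff090045 */
    return 0;
}
```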
I think the main problem with your question is that you ask for one thing, but I detect a different question lurking in the shadows.
First and foremost, the addressable entities in the memory of a computer are organized as bytes, which are 8 bits each, so yes, each address can be said to refer to 8 bits, or one byte.
However, you can easily group more bytes together to form bigger and more complex data structures.
If your question is really "Why am I seeing an 8-digit value as the contents at an address in my stack dump", then the reason for that is that it dumps 32-bit (4 bytes) values.
In other words, you can take the address, the address+1, the address+2, and the address+3, grab the bytes from each of those, and combine to a 32-bit value.
Is that really your question?
To complete RH's answer: you may be surprised to see so many digits shown for a single address.
You should think of it as:
Address Byte (8 bits)
00ff2254 45
00ff2255 00
00ff2256 09
00ff2257 ff
00ff2258 02
...
(on a CPU architecture used by XP)
A memory location refers to a single location in memory, and each consecutive memory location refers to the next byte in memory. So you can only address memory on one-byte boundaries, and, as everyone should know, a byte is 8 bits wide.