Calculate LoRa message length with coding rate

I'm looking for a formula with which I can calculate the message length of my payload as sent over the LoRa modulation.
Example:
message length without parity bits: 8 bits,
coding rate: 4/5
Is there a formula to calculate the message length with the added parity bits?
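For what it's worth, a common reading of LoRa's coding rates is that a rate of 4/(4+CR), with CR between 1 and 4, expands every 4 data bits into 4+CR coded bits. Below is a minimal sketch under that assumption; the function name lora_coded_bits is ours, and the full time-on-air formula (spreading factor, interleaving, header) is not modeled:

```c
#include <stdio.h>

/* Hypothetical helper: assuming a coding rate of 4/(4+cr), cr in 1..4,
 * every 4 data bits become 4+cr coded bits. Spreading factor and packet
 * header overhead of the full time-on-air formula are not modeled. */
static unsigned lora_coded_bits(unsigned data_bits, unsigned cr)
{
    unsigned blocks = (data_bits + 3) / 4;  /* round up to whole 4-bit blocks */
    return blocks * (4 + cr);
}

int main(void)
{
    /* The example from the question: 8 data bits at coding rate 4/5 (cr = 1). */
    printf("%u coded bits\n", lora_coded_bits(8, 1));  /* prints "10 coded bits" */
    return 0;
}
```

With the example above, 8 data bits at coding rate 4/5 come out as 10 coded bits.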

Related

Time needed to generate all of the bits in a packet... why is the packet size divided by the data rate?

I'm learning about packet-switching systems and trying to understand this problem
from a textbook. It concerns the time needed to generate all of the bits in a packet. What we have learned so far is how to calculate the delays that occur after a packet has been made, so the time for making a packet is new to me. Can anyone help me understand why they divided the packet size by the data rate in the solution?
Information:
"Host A converts analog voice to a digital 64 kbps bit stream on the
fly.
Host A then groups the bits into 56-byte packets."
Answer: (56 * 8) / (64 * 1000) = 0.007 s = 7 ms
They are calculating the time needed to generate all of bits in a packet.
Each new bit is added to the packet until the packet is full. The full packet
will then be sent on its way, and a new empty packet will be created to hold the
next set of bits. Once it fills up, it will be sent also.
This means that each packet will contain bits that range from brand new ones, to
bits that have been waiting around for up to 7ms. (The age of the oldest bit in
the packet is important, because it contributes to the observed latency of the
application.)
Your bits are being created in a stream at a fixed rate of 64*1000 bits per
second. In one second, 64,000 bits are generated. Therefore, one bit is
generated every 1/64,000 s ≈ 0.016 milliseconds.
Those bits are being assembled into packets where each packet contains exactly
56*8 bits. If each bit took 0.016 milliseconds to create, then all 56*8 bits
will be created in approximately 7 milliseconds.
I like to sanity-check this kind of formula by looking at the units: BITS / SECOND.
56*8 BITS / 0.007 SECONDS = 64,000 BITS/SECOND, which is exactly your bit rate.
If BITRATE = BITS / SECONDS then by simple algebra, SECONDS = BITS / BITRATE
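As a quick sanity check in code, here is a minimal C sketch of the same arithmetic, using the example's numbers (56-byte packets filled from a 64 kbps stream):

```c
#include <stdio.h>

/* Sketch of the textbook calculation: the time to fill a packet is the
 * packet size in bits divided by the bit-generation rate. */
int main(void)
{
    const double packet_bits = 56 * 8;     /* 448 bits per packet */
    const double bitrate     = 64 * 1000;  /* 64 kbps voice stream */

    double fill_time = packet_bits / bitrate;                      /* seconds */
    printf("packetization delay: %.1f ms\n", fill_time * 1000.0);  /* 7.0 ms */
    return 0;
}
```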

How to use CRC32 generator for an effective CRC16?

I am writing in C for an embedded STM32F105 microcontroller. I need to implement a CRC routine to validate a message sent over the air.
The microcontroller has a CRC32 generator built into its hardware. You feed it 4 bytes at a time and it calculates the CRC without additional processor overhead. It is non-configurable and uses the Ethernet CRC32 polynomial.
I want to use this hardware CRC generator, but I only want to add two bytes (not four) to each data packet. The packet will vary in size between 4 and 1022 bytes.
Can I simply use the two high (or low) bytes of the CRC32? Or can I always feed the CRC module 2 bytes at a time, with the high bytes being zero?
Is there some other way to get what I'm looking for?
For most applications, sure, you can just use the low two bytes of the CRC-32 as a 16-bit check value. However, that will not be a 16-bit CRC. It will be as good as any other hash value for detecting gross errors in a message, but it will not have certain desirable properties for small numbers of bit errors in short packets that true CRCs provide.
There's no point in feeding the CRC generator zeros. Go ahead and give it four bytes of data for each instruction.
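To illustrate the "low two bytes" idea, here is a hedged software sketch. It models the common byte-wise, reflected Ethernet CRC-32 (as in zlib), not the STM32F105's word-oriented hardware unit, whose bit ordering differs, so the raw hardware output will not match this routine without bit/byte reversal; the function names are ours:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Software model of the Ethernet CRC-32 (reflected, polynomial 0xEDB88320,
 * initial value all ones, final complement), one byte at a time. */
static uint32_t crc32_eth(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Truncate the 32-bit result to its low two bytes, as the answer suggests.
 * This is a decent hash for gross errors but not a true 16-bit CRC. */
static uint16_t check16(const uint8_t *data, size_t len)
{
    return (uint16_t)(crc32_eth(data, len) & 0xFFFFu);
}

int main(void)
{
    const uint8_t msg[] = "123456789";
    /* CRC-32 of "123456789" is the well-known check value 0xCBF43926,
     * so the truncated 16-bit check is 0x3926. */
    printf("check16 = 0x%04X\n", check16(msg, 9));
    return 0;
}
```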

Unit of digital numbers?

What is the unit of digital numbers (https://en.wikipedia.org/wiki/Numerical_digit)? For example, what is the unit of the difference of two ADC values:
10 - 2 = 8 digits
10 - 2 = 8 units
10 - 2 = 8 symbols
10 - 2 = 8 ???
Or for example I want to describe a slope:
Temperature example: 2 °C per second = 2 °C/sec
ADC example: 2 ??? per second = 2 ???/sec
What is correct?
Best regards
Zlatan
Numbers don't have units by default. A unit is simply a multiplied symbol that represents the "nature" of the quantity.
First of all, figure out the LSB (least significant bit) voltage of the ADC.
Example: the ADC uses a Vref of 1.2 V and has 8 bits => LSB = 1.2 V / (2^8 - 1) = 4.7 mV.
A typical temperature sensor using a bipolar junction has a sensitivity of about -2 mV/K. The example ADC with LSB = 4.7 mV will therefore see a change of 1 LSB per 2.35 K of temperature decrease.
A change of 1 LSB/second thus means a change of -2.35 K per second.
If this isn't accurate enough for your application, you can use an ADC with more bits or stack several diodes acting as temperature sensors.
If you use something other than a bipolar junction, the sensitivity of the temperature sensor will differ. Just check the specs of the sensor and of the ADC (and its reference) to calculate the LSB.
Parameters you need:
Reference voltage of the ADC
Number of bits of the ADC (to calculate the LSB)
Temperature coefficient of the sensor
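Here is a small sketch tying these parameters together, using the answer's example numbers (8-bit ADC, 1.2 V reference, a sensor with roughly -2 mV/K sensitivity):

```c
#include <stdio.h>

/* Sketch with the answer's numbers: an 8-bit ADC with a 1.2 V reference
 * reading a bipolar-junction sensor of roughly -2 mV/K sensitivity.
 * It converts a slope in ADC counts per second into K per second. */
int main(void)
{
    const double vref        = 1.2;                       /* ADC reference, V */
    const int    bits        = 8;                         /* ADC resolution */
    const double lsb         = vref / ((1 << bits) - 1);  /* ~4.7 mV per count */
    const double sensitivity = -2e-3;                     /* sensor slope, V/K */

    double counts_per_second = 1.0;  /* measured slope in LSB/s */
    double volts_per_second  = counts_per_second * lsb;
    double kelvin_per_second = volts_per_second / sensitivity;

    printf("LSB = %.2f mV\n", lsb * 1000.0);          /* 4.71 mV */
    printf("slope = %.2f K/s\n", kelvin_per_second);  /* about -2.35 K/s */
    return 0;
}
```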

How long can it take to send a message of 200 Byte in an IEEE 802.15.4 beacon enabled network?

I am not sure how to calculate this time. I have tried to find articles about IEEE 802.15.4.
Thank you.
IEEE 802.15.4 defines three data rates: 250 kbps, 40 kbps, and 20 kbps. The transmission time varies with the rate. The calculation formula is
Time(s) = Data(bits) / Rate(bps)
For example, if the rate is 20 kbps and the message is 200 bytes, the time is
(200 * 8) / (20 * 1000) = 0.08 s = 80 ms
If you use 250 kbps, the time is 6.4 ms.
Note: the time calculated here is only the time the message spends on the air. The actual time is generally longer, because processing time and protocol overhead (headers, channel access) are not taken into account here.
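A minimal sketch of this formula in C, computing the on-air time of a 200-byte message at all three rates (overhead and CSMA-CA are deliberately ignored, as in the answer):

```c
#include <stdio.h>

/* Sketch of the answer's formula: time(s) = data(bits) / rate(bps).
 * Raw transmission time only; headers, backoff, and beacon scheduling
 * in a beacon-enabled network are not modeled. */
static double airtime_ms(unsigned payload_bytes, unsigned rate_bps)
{
    return (payload_bytes * 8.0) / rate_bps * 1000.0;
}

int main(void)
{
    const unsigned rates[] = { 250000, 40000, 20000 };  /* the three PHY rates */
    for (int i = 0; i < 3; i++)
        printf("%6u bps: %5.1f ms\n", rates[i], airtime_ms(200, rates[i]));
    /* prints 6.4 ms, 40.0 ms, and 80.0 ms for a 200-byte message */
    return 0;
}
```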

How to calculate IEEE 802.11 CRC-32 FCS?

This is from IEEE Std 802.11-2012 Clause 8.2.4.8 FCS field:
I cannot understand the last two paragraphs:
What is meant by "the initial remainder of the division is preset to all ones", and why do we need to do that?
What is meant by "... the serial incoming bits of the calculation fields and FCS ..."?
Initializing the CRC to all ones avoids the problem of a string of zeros of any length giving a zero CRC: with a zero initial remainder, leading zero bits shift through the register without changing it, so added or dropped leading zeros would go undetected.
Read Ross Williams' CRC tutorial, "A Painless Guide to CRC Error Detection Algorithms".
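To see the effect of the all-ones preset, here is a small software sketch using the same reflected CRC-32 polynomial as the 802.11 FCS (byte-wise, so it abstracts away the spec's serial bit ordering). With a zero preset, one zero byte and four zero bytes yield the same CRC; the all-ones preset distinguishes them:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise reflected CRC-32 (polynomial 0xEDB88320) with a selectable
 * initial remainder, so the effect of the preset can be compared. */
static uint32_t crc32_init(const uint8_t *p, size_t n, uint32_t init)
{
    uint32_t crc = init;
    while (n--) {
        crc ^= *p++;
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return crc;
}

int main(void)
{
    const uint8_t one_zero[1]   = { 0 };
    const uint8_t four_zeros[4] = { 0, 0, 0, 0 };

    /* Zero preset: both all-zero messages produce 0, regardless of length. */
    printf("init 0:  %08X %08X\n",
           crc32_init(one_zero, 1, 0), crc32_init(four_zeros, 4, 0));

    /* All-ones preset: the two lengths now give different CRCs. */
    printf("init ~0: %08X %08X\n",
           crc32_init(one_zero, 1, 0xFFFFFFFFu),
           crc32_init(four_zeros, 4, 0xFFFFFFFFu));
    return 0;
}
```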
