Time needed to generate all of the bits in a packet... why is the packet size divided by the data rate? - network-programming

I'm learning about packet switching systems and trying to understand this problem
from a textbook. It's about the time needed to generate all of the bits in a packet. What we have learned so far is how to calculate the delays that happen after a packet has been made, so the time for making a packet feels new to me. Can anyone help me understand why they divided the packet size by the data rate in the solution?
Information)
"Host A converts analog voice to a digital 64 kpbs bit stream on the
fly.
Host A then groups the bits into 56-byte packets."
Answer) 56*8 / (64*1000) = 0.007 s = 7 msec

They are calculating the time needed to generate all of bits in a packet.
Each new bit is added to the packet until the packet is full. The full packet
will then be sent on its way, and a new empty packet will be created to hold the
next set of bits. Once it fills up, it will be sent also.
This means that each packet will contain bits that range from brand new ones to
bits that have been waiting around for up to 7 ms. (The age of the oldest bit in
the packet is important, because it contributes to the observed latency of the
application.)
Your bits are being created in a stream, at a fixed rate of 64*1000 bits per
second. In one second, 64,000 bits would be generated. Therefore, one bit is
generated every 1/64,000 s = 0.015625 milliseconds.
Those bits are being assembled into packets where each packet contains exactly
56*8 = 448 bits. If each bit takes 0.015625 milliseconds to create, then all 448 bits
will be created in 448 * 0.015625 = 7 milliseconds.
I like to sanity-check this kind of formula by looking at the units: BITS / SECOND.
56*8 BITS / 0.007 SECONDS = 64,000 BITS/SECOND, which is exactly your bit rate.
If BITRATE = BITS / SECONDS then by simple algebra, SECONDS = BITS / BITRATE
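A minimal C sketch of that algebra, using the 56-byte packet and 64 kbps rate from the question (everything else here is just illustrative):

    /* Packetization delay: how long it takes a 64 kbps stream to fill
     * one 56-byte packet.  Time = packet bits / bit rate. */
    #include <stdio.h>

    int main(void)
    {
        const double bit_rate_bps = 64 * 1000;   /* 64 kbps voice stream */
        const double packet_bits  = 56 * 8;      /* 56-byte packets      */

        double time_per_bit_ms = 1000.0 / bit_rate_bps;
        double fill_time_ms    = packet_bits / bit_rate_bps * 1000.0;

        printf("one bit every %.6f ms\n", time_per_bit_ms);  /* 0.015625 ms */
        printf("packet fills in %.3f ms\n", fill_time_ms);   /* 7.000 ms    */
        return 0;
    }

Running it prints 0.015625 ms per bit and 7.000 ms per packet, matching the numbers above.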

Related

Diff between bit and byte, and exact meaning of byte

This is just a basic theoretical question. I read that a bit is either a 0 or a 1, and a byte consists of 8 bits, and in 8 bits we can store 2^8 numbers.
Similarly, in 10 bits we can store 2^10 (1024) numbers. But then why do we say that 1024 is 1 kilobyte? It's actually 10 bits, which is just 1.25 bytes to be exact.
Please share some knowledge on it,
just a concrete explanation.
A bit is the smallest unit of storage in a system, and 8 bits sum up to 1 byte.
A bit, short for binary digit, is the smallest unit of measurement used in computers for information storage. A bit is represented by a 1 or a 0 with the value true or false, also known as on or off. A single byte of information, also known as an octet, is made up of eight bits. The size, or amount of information stored, distinguishes a bit from a byte.
A kilobit is nominally 1,000 bits, though it is often treated as 1,024 bits because common operating systems and storage schemes are built around powers of two. Most people, however, think of kilo as referring to 1,000. A kilobyte, then, is 1,000 (or 1,024) bytes. Note that the 2^10 = 1,024 distinct values that 10 bits can represent is not the same thing as 1,024 bytes: a kilobyte of storage is 1,024 bytes, which is 8,192 bits.
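To make the distinction concrete, here is a purely illustrative C snippet contrasting how many values n bits can represent (2^n) with how many bytes n bits occupy (n/8); none of this comes from the original answer, it just restates the arithmetic:

    /* Number of representable values vs. storage size, for a few bit widths. */
    #include <stdio.h>

    int main(void)
    {
        for (int bits = 8; bits <= 16; bits += 2) {
            unsigned long values = 1UL << bits;    /* 2^bits distinct values */
            double bytes = bits / 8.0;             /* storage size in bytes  */
            printf("%2d bits -> %6lu values, %.2f bytes of storage\n",
                   bits, values, bytes);
        }
        /* A kilobyte is 1024 (or 1000) bytes, i.e. 8192 bits, not 10 bits. */
        return 0;
    }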

UART transfer speed

I want to check if my understanding is correct, however I cannot find any precise explanation or examples. Let's say I have UART communication set to 57600 bits/second and I am transferring 8 bit chars. Let's say I choose to have no parity and since I need one start bit and one stop bit, that means that essentially for transferring one char I would need to transfer 10 bits. Does that mean that the transfer speed would be 5760 chars/second?
Your calculation is essentially correct.
But the 5760 chars/second would be the maximum transfer rate. Since it's an asynchronous link, the UART transmitter is allowed to idle the line between character frames.
In other words, the baud rate only applies to the bits of the character frame.
The rate that characters are transmitted depends on whether there is data available to keep the transmitter busy/saturated.
For instance, if a microcontroller used programmed I/O (with either polling or interrupts) instead of DMA for UART transmitting, high-priority interrupts could stall the transmissions and introduce delays between frames.
Baud rate = 57600
Time for 1 bit: 1 / 57600 = 17.36 us
Time for a frame with 10 bits = 173.6 us
This means a maximum of 1 / 173.6 us = 5760 frames (characters) / s
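A rough C sketch of the same best-case calculation, assuming the frame format from the question (1 start bit, 8 data bits, no parity, 1 stop bit at 57600 baud):

    /* Best-case UART character rate for a given baud rate and frame format.
     * Real throughput can be lower if the line idles between frames. */
    #include <stdio.h>

    int main(void)
    {
        const double baud        = 57600.0;  /* bits per second on the wire */
        const int    start_bits  = 1;
        const int    data_bits   = 8;
        const int    parity_bits = 0;        /* no parity                   */
        const int    stop_bits   = 1;

        int    frame_bits    = start_bits + data_bits + parity_bits + stop_bits;
        double bit_time_us   = 1e6 / baud;                /* ~17.36 us      */
        double frame_time_us = frame_bits * bit_time_us;  /* ~173.6 us      */
        double max_chars_s   = baud / frame_bits;         /* 5760 chars/s   */

        printf("bit time   : %.2f us\n", bit_time_us);
        printf("frame time : %.1f us\n", frame_time_us);
        printf("max rate   : %.0f characters/second\n", max_chars_s);
        return 0;
    }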

How to use CRC32 generator for an effective CRC16?

I am writing in C for an embedded STM32F105 microcontroller. I need to implement a CRC routine to validate a message sent over the air.
The microcontroller has a CRC32 generator built into its hardware. You feed it 4 bytes at a time and it calculates the CRC without additional processor overhead. It's non-configurable and uses the Ethernet CRC32 polynomial.
I want to use this hardware CRC generator, but I only want to add two bytes (not four) to each data packet. The packet will vary in size between 4 and 1022 bytes.
Can I simply use the two high (or low) bytes of the CRC32? Or can I always feed the CRC module 2 bytes at a time, with the high bytes being zero?
Is there some other way to get what I'm looking for?
For most applications, sure, you can just use the low two bytes of the CRC-32 as a 16-bit check value. However, that will not be a 16-bit CRC; it will be as good as any other hash value for checking for gross errors in a message.
It will not have certain desirable properties for small numbers of bit errors in short packet lengths that are afforded by CRCs.
There's no point in feeding the CRC generator zeros. Go ahead and give it four bytes of data for each instruction.
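For reference, a plain software model of the Ethernet CRC-32 (the usual reflected polynomial 0xEDB88320 with 0xFFFFFFFF init and final XOR, as in zlib), taking the low 16 bits as the check value, might look like the sketch below. This is only a host-side model, not the STM32 peripheral itself; the hardware unit consumes 32-bit words and its bit ordering differs from this byte-wise loop, so verify the two against each other before relying on either.

    /* Software CRC-32 (Ethernet/zlib convention); low 16 bits used as a
     * check value for packets of 4..1022 bytes. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint32_t crc32_ethernet(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
        }
        return crc ^ 0xFFFFFFFFu;
    }

    static uint16_t check16(const uint8_t *buf, size_t len)
    {
        return (uint16_t)(crc32_ethernet(buf, len) & 0xFFFFu);
    }

    int main(void)
    {
        const uint8_t msg[] = "123456789";
        /* The standard CRC-32 check value for "123456789" is 0xCBF43926. */
        printf("crc32 = 0x%08lX, check16 = 0x%04X\n",
               (unsigned long)crc32_ethernet(msg, 9), (unsigned)check16(msg, 9));
        return 0;
    }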

How long can it take to send a message of 200 Byte in an IEEE 802.15.4 beacon enabled network?

How long can it take to send a message of 200 Byte in an IEEE 802.15.4 beacon enabled network?
I am not clear on this question or how to calculate this time.
I have tried to find articles about IEEE 802.15.4.
Thank you.
IEEE 802.15.4 defines three data rates: 250 kbps, 40 kbps, and 20 kbps. The time varies with the rate. The calculation formula is
Time(s) = Data(bits) / Rate(bps)
For example, if the rate is 20 kbps and the data (message) is 200 bytes, the time is
(200*8) / (20*1000) = 0.08 s = 80 ms
If you use 250 kbps, the time is 6.4 ms.
Note: the time calculated here is only the time to transmit the message over the air. In practice the actual time is longer, because processing time is not taken into account here.
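A tiny C illustration of that formula, looping over the three nominal rates for a 200-byte message (processing time and protocol overhead are ignored, as noted above):

    /* Raw air time for a 200-byte message at each IEEE 802.15.4 data rate. */
    #include <stdio.h>

    int main(void)
    {
        const double rates_bps[]  = { 250e3, 40e3, 20e3 };
        const double message_bits = 200 * 8;

        for (int i = 0; i < 3; i++) {
            double t_ms = message_bits / rates_bps[i] * 1000.0;
            printf("%3.0f kbps -> %5.1f ms\n", rates_bps[i] / 1000.0, t_ms);
        }
        return 0;
    }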

Does CRC have the following feature?

When the transmitted data has 1 or 2 bits tampered with, can the receiver correct it automatically?
No, CRC is an error-detecting code, not an error-correcting code.
CRC is primarily used as an error-detecting code. If the total number of bits (including those in the CRC) is smaller than the CRC's period, though, it is possible to correct single-bit errors by computing the syndrome (the XOR of the calculated and received CRCs). Each bit, if flipped individually, generates a unique syndrome. One can iterate the CRC algorithm to find the syndrome associated with each bit position; if the observed syndrome matches the one associated with a particular bit, flipping that bit corrects a single-bit error.
One major danger with doing this, though, is that the CRC will be much less useful for rejecting bogus data. If one uses an 8-bit CRC on a packet with 15 bytes of data, only one in 256 random packets would pass the validity check, but half of all random packets could be "corrected" by flipping a single bit.
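Below is a hypothetical C sketch of that syndrome-lookup idea. The CRC-8 polynomial (0x07), the 12-byte frame size, and the collision check are all arbitrary illustrative choices, not taken from any particular protocol; the point is only to show the mechanism described above.

    /* Single-bit error correction via CRC syndrome lookup (illustrative). */
    #include <stdint.h>
    #include <stdio.h>

    #define DATA_LEN   11                     /* data bytes per frame       */
    #define FRAME_LEN  (DATA_LEN + 1)         /* data + 1 CRC byte          */
    #define FRAME_BITS (FRAME_LEN * 8)

    /* MSB-first CRC-8, init 0x00, no final XOR (keeps the mapping linear). */
    static uint8_t crc8(const uint8_t *buf, int len)
    {
        uint8_t crc = 0;
        for (int i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
        }
        return crc;
    }

    /* syndrome_of_bit[i] = syndrome produced by flipping bit i of a frame. */
    static uint8_t syndrome_of_bit[FRAME_BITS];

    static void build_syndrome_table(void)
    {
        for (int i = 0; i < FRAME_BITS; i++) {
            uint8_t frame[FRAME_LEN] = {0};
            frame[i / 8] ^= (uint8_t)(0x80 >> (i % 8));
            syndrome_of_bit[i] = crc8(frame, DATA_LEN) ^ frame[DATA_LEN];
        }
    }

    /* Returns the corrected bit index, or -1 if there is nothing
     * (or no unique thing) to correct. */
    static int correct_single_bit(uint8_t *frame)
    {
        uint8_t syndrome = crc8(frame, DATA_LEN) ^ frame[DATA_LEN];
        if (syndrome == 0)
            return -1;                        /* frame already consistent   */

        int hit = -1;
        for (int i = 0; i < FRAME_BITS; i++) {
            if (syndrome_of_bit[i] == syndrome) {
                if (hit >= 0)
                    return -1;                /* ambiguous: don't "correct" */
                hit = i;
            }
        }
        if (hit >= 0)
            frame[hit / 8] ^= (uint8_t)(0x80 >> (hit % 8));
        return hit;
    }

    int main(void)
    {
        build_syndrome_table();

        uint8_t frame[FRAME_LEN] = "hello world";  /* 11 data bytes         */
        frame[DATA_LEN] = crc8(frame, DATA_LEN);   /* append the CRC        */

        frame[3] ^= 0x10;                          /* inject a 1-bit error  */
        int fixed = correct_single_bit(frame);
        printf("corrected bit %d, data: %.11s\n", fixed, (const char *)frame);
        return 0;
    }

As warned above, accepting such "corrections" makes the check much weaker at rejecting genuinely bogus frames, so it is usually only worthwhile on short frames where single-bit errors dominate.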
