UART transfer speed - communication

I want to check if my understanding is correct, however I cannot find any precise explanation or examples. Let's say I have UART communication set to 57600 bits/second and I am transferring 8-bit chars. Let's say I choose to have no parity; since I need one start bit and one stop bit, that means that for transferring one char I essentially need to transfer 10 bits. Does that mean that the transfer speed would be 5760 chars/second?

Your calculation is essentially correct.
But the 5760 chars/second would be the maximum transfer rate. Since it's an asynchronous link, the UART transmitter is allowed to idle the line between character frames.
In other words, the baud rate only applies to the bits within a character frame.
The rate at which characters are actually transmitted depends on whether there is data available to keep the transmitter busy/saturated.
For instance, if a microcontroller used programmed I/O (either polled or interrupt-driven) instead of DMA for UART transmitting, high-priority interrupts could stall the transmissions and introduce delays between frames.

Baud rate = 57600
Time for 1 bit: 1 / 57600 = 17.36 us
Time for a frame with 10 bits: 10 * 17.36 us = 173.6 us
This means a maximum of 1 / 173.6 us = 5760 frames (characters) per second.
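The same arithmetic as a quick sketch (assuming a 10-bit frame: 1 start bit, 8 data bits, no parity, 1 stop bit):

```python
# UART character throughput: baud rate divided by bits per frame.
# Frame here: 1 start + 8 data + 0 parity + 1 stop = 10 bits.
baud_rate = 57600
bits_per_frame = 1 + 8 + 0 + 1

bit_time_us = 1e6 / baud_rate                  # ~17.36 us per bit
frame_time_us = bits_per_frame * bit_time_us   # ~173.6 us per character
max_chars_per_s = baud_rate / bits_per_frame   # 5760 characters/s

print(f"bit time      : {bit_time_us:.2f} us")
print(f"frame time    : {frame_time_us:.1f} us")
print(f"max char rate : {max_chars_per_s:.0f} chars/s")
```

As noted above, 5760 chars/s is an upper bound; any idle time between frames lowers the effective rate.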

Related

CK (tCK, nCK) unit ambiguity in DDR3 standard/datasheets?

I am designing a simplistic memory controller and PHY on an Artix-7 FPGA but am having problems reading the datasheet. The timings in the memory part's datasheet (and in the JEDEC JESD79-3F doc) are expressed in CK/tCK/nCK units, which are in my opinion ambiguous when the memory is not running at its nominal frequency (e.g. a clock lower than 666 MHz for a 1333 MT/s module).
If I run a 1333 MT/s module at a frequency of 300 MHz (still allowed with the DLL on, as per the datasheet speed bins), is the CK/tCK/nCK unit equal to 1.5 ns (from the module's native 666 MHz), or 3.33 ns (from the frequency it is actually run at)? On one hand it makes sense that certain delays are constant, but then again some delays are expressed relative to the clock edges on the CK/CK# pins (like CL or CWL).
That is to say, some timing parameters in the datasheet only change when changing speed bins. E.g. tRP is 13.5 ns for a 1333 part, which is also backwards compatible with the tRP of 13.125 ns of a 1066 part -- no matter the chosen operating frequency of the physical clock pins of the device.
But then, running a DDR3 module at 300 MHz only allows usage of CL = CWL = 5, which is again expressed in "CK" units. To my understanding, this means 5 periods of the input clock, i.e. 5 * 3.33 ns.
I suppose all I am asking is whether the "CK" (or nCK or tCK) unit is tied to the chosen speed bin (tCK = 1.5 ns when choosing DDR3-1333) or the actual frequency of the clock signal provided to the memory module by the controlling hardware (e.g. 3.3 ns for the 600 MT/s mode)?
This is the response from u/Allan-H on Reddit, which helped me reach a conclusion:
When you set the CL in the mode register, that's the number of clocks that the chip will wait before putting the data on the pins. That clock is the clock that your controller is providing to the chip (it's SDRAM, after all).
It's your responsibility to ensure that the number of clocks you program (e.g. CL=5) when multiplied by the clock period (e.g. 1.875ns) is at least as long as the access time of the RAM. Note that you program a number of clocks, but the important parameter is actually time. The RAM must have the data ready before it can send it to the output buffers.
Now let's run the RAM at a lower speed, say 312.5MHz (3.2ns period). We now have the option of programming CL to be as low as 3, since 3 x 3.2ns > 5 x 1.875ns.
BTW, since we are dealing with fractions of a ns, we also need to take the clock jitter into account.
Counterintuitively, the DRAM chip doesn't know how fast it is; it must be programmed with that information by the DRAM controller. That information might be hard-coded into the controller (e.g. for an FPGA implementation) or set by software, which would typically read the SPD EEPROM on the DIMM to work out the speed grade and then write the appropriate values into the DRAM controller.
This also explains timing values defined as e.g. "Greater of 3CK or 5ns". In this case, the memory chip cannot respond faster than 5 ns, but the internal logic also needs 3 positive clock edges on the input CK pins to complete the action defined by this example parameter.
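As a rough illustration of that reasoning (a sketch only: min_cas_latency is a made-up helper, the access time is simply the CL=5 at 1.875 ns example from the answer, and jitter and the per-speed-bin minimum CL are ignored):

```python
import math

def min_cas_latency(required_access_time_ns: float, clock_period_ns: float) -> int:
    """Smallest CL (in clocks) whose duration at the actual clock period
    still covers the part's required access time."""
    return math.ceil(required_access_time_ns / clock_period_ns)

required_ns = 5 * 1.875                      # 9.375 ns, i.e. CL=5 at a 1.875 ns clock
print(min_cas_latency(required_ns, 1.875))   # 5 at the 533 MHz clock
print(min_cas_latency(required_ns, 3.2))     # 3 at 312.5 MHz: 3 * 3.2 ns = 9.6 ns >= 9.375 ns
```

In other words, you program a clock count, but what the RAM actually needs is time; slower clocks let you use a smaller CL.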

Time needed to generate all of the bits in a packet... why is the packet size divided by the data rate?

I'm learning about packet switching and trying to understand this problem
from a textbook. It's about the time needed to generate all of the bits in a packet. What we have learned so far is how to calculate the delays that happen after a packet has been made, so the time needed to make a packet is new to me. Can anyone help me understand why they divided the packet size by the data rate in the solution?
Information)
"Host A converts analog voice to a digital 64 kpbs bit stream on the
fly.
Host A then groups the bits into 56 byte packets."
Answer) (56*8) / (64*1000) = 7 msec
They are calculating the time needed to generate all of the bits in a packet.
Each new bit is added to the packet until the packet is full. The full packet
will then be sent on its way, and a new empty packet will be created to hold the
next set of bits. Once it fills up, it will be sent also.
This means that each packet will contain bits that range from brand new ones, to
bits that have been waiting around for up to 7ms. (The age of the oldest bit in
the packet is important, because it contributes to the observed latency of the
application.)
Your bits are being created in a stream, at a fixed rate of 64*1000 bits per
second. In one second, 64,000 bits are generated, so one bit is generated
every 1/64,000 s = 0.015625 ms.
Those bits are being assembled into packets where each packet contains exactly
56*8 = 448 bits. If each bit takes 0.015625 ms to create, then all 448 bits
are created in 448 * 0.015625 ms = 7 ms.
I like to sanity-check this kind of formula by looking at the units: BITS / SECOND.
56*8 BITS / 0.007 SECONDS = 64,000 BITS/SECOND, which is exactly your bitrate.
If BITRATE = BITS / SECONDS then by simple algebra, SECONDS = BITS / BITRATE
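The same check as a small sketch:

```python
# Packetization delay: how long it takes to fill one packet from a constant
# bit stream. packet_bits / bitrate gives seconds.
bitrate_bps = 64 * 1000     # 64 kbps voice stream
packet_bits = 56 * 8        # 56-byte packets

bit_time_ms = 1000 / bitrate_bps              # 0.015625 ms per bit
fill_time_ms = packet_bits * 1000 / bitrate_bps   # 7.0 ms per packet

print(f"one bit every {bit_time_ms} ms")
print(f"packet filled in {fill_time_ms:.3f} ms")
```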

What is the reason of frameloss in case of high throughput in wlan?

Background:
I've implemented QoS with four queues having strict priorities in a wireless device. Queue-0 has the highest priority, Queue-1 the second highest, and so on. My wireless device was set to 20 MHz and MCS: -1, which gives a throughput of around 40-45 Mbps. I tested this with a JDSU tester sending 8 streams of 10 Mbps, which means a total JDSU TX rate of 80 Mbps. In my overnight test, I found that frame loss happened in Queue-0 and Queue-1, which was not expected with the device placed in an RF chamber (lab environment). However, if I limit the TX rate of the JDSU to within 45 Mbps, I don't see any frame loss. Is there any relationship between throughput and frame loss? My topology is:
jdsu<---->wifi master<------air i/f------>wifi slave >loopback
Just my two cents. Have you considered that the rate you are transmitting at is nearly double the supported data rate of your receiving devices? Wireless nodes broadcast their supported data rate for good reason. This is so other devices on the network can speak at a rate that the other device can understand. So I would say the answer to your question is an emphatic yes. Imagine if I were only capable of comprehending 1000 words per minute but you spoke at a rate of 2500 words per minute. You can safely expect that at some point I am going to be unable to comprehend every word that you are saying.
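To put rough numbers on that (a back-of-the-envelope sketch only, assuming the air interface sustains about 45 Mbps, the offered load stays at 80 Mbps, and there is no rate adaptation or flow control):

```python
# If offered load exceeds air-interface capacity, the excess must be dropped
# once the queues are full, regardless of queue priority order.
offered_mbps = 8 * 10      # 8 JDSU streams of 10 Mbps each
capacity_mbps = 45         # roughly what the link sustains at this setting

excess = max(0.0, offered_mbps - capacity_mbps)
min_loss_fraction = excess / offered_mbps
print(f"at least {min_loss_fraction:.0%} of offered frames must be dropped")  # ~44%
```

Strict priority only decides which frames are dropped first, not whether drops happen; once the aggregate offered rate exceeds capacity, something has to give.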

frequency sampling limit for beaglebone adc

I intend to use the BeagleBone to sample a shaped signal on the order of 1 microsecond. I need to fit the signal afterwards and would therefore like a sampling rate of, let's say, 10 MHz. Something that seems feasible with the PRU and libpruio. The point is, looking at the ADC specifications, it seems there is a limit at 200 kHz. Is my reasoning correct?
thanks
You'll need additional hardware for a sampling rate of 10 MHz! Neither libpruio nor the BBB hardware is designed to work at that speed.
The ADC subsystem in the AM335x CPU is clocked at 24 MHz and needs 15 cycles per sample (14 in continuous mode). This leads to a maximum sample rate of 1.6 (1.71) MSamples/s. See the SRM, chapter 12, for details.
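The arithmetic behind those numbers, as a quick sketch:

```python
# AM335x ADC: 24 MHz ADC clock, 15 clock cycles per sample
# (14 in continuous mode), so the upper bound is clock / cycles.
adc_clock_hz = 24_000_000

for cycles_per_sample in (15, 14):
    max_rate = adc_clock_hz / cycles_per_sample
    print(f"{cycles_per_sample} cycles/sample -> {max_rate / 1e6:.2f} MSamples/s")
# 15 cycles/sample -> 1.60 MSamples/s
# 14 cycles/sample -> 1.71 MSamples/s
```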
The problem is getting the samples into host memory. I couldn't get this working faster than ~250 kSamples/s (by CPU access; I didn't try DMA).
As long as you don't need more values than the FIFO can hold, you can sample a single line at a maximum of 1.7 MHz.
BR

DMA Transfer Data Rate

I'm trying to learn about DMA transfer rates and I don't understand this question. I have the answers but don't know how to get there.
This question concerns the use of DMA to handle the input and storage in memory of data arriving at an input interface, the data rates that can be achieved using this mechanism, and the bus bandwidth (capacity) used for particular data rates.
You are given details of the number of clock cycles executed for each DMA transfer, and the clock cycles needed to acquire and release the busses.
Below you are given: the number of clock cycles required for the DMA device to transfer a single data item between the input interface and memory, the number of clock cycles to acquire and release the system busses, the size (in bits) of each data item, and the clock frequency.
number of clock cycles for each data transfer 8
number of clock cycles to acquire and release busses 4
number of bits per data item = 8
clock frequency = 20MHz
A) What is the maximum achievable data rate in Kbits/second?
B) What percentage of the bus clocks are used by the DMA device if the data rate is 267Kbits/sec?
Answers
A)20000.0
B)2.0
Thanks in advance.
There are two modes of transferring data using DMA:
1. Burst mode
Once the DMA controller is granted access to the system bus by the CPU, it transfers all bytes of the data block before releasing control of the system buses back to the CPU. The CPU is blocked from using the memory bus for a long time; the DMA controller will not release the bus until the entire block of data has been transferred.
2. Cycle-stealing mode
Once the DMA controller is granted access to the system bus by the CPU, it transfers one byte of data and then releases the bus back to the CPU. For the next byte it has to acquire the bus again via the BR and BG signals (BUS REQUEST and BUS GRANT). It acquires and releases the bus for every byte until the entire block of data has been transferred.
In the above example:
The clock frequency is 20 MHz (Hz means cycles per second), i.e. 20 million clock cycles per second (20 x 10^6 cycles/second).
Each byte transferred between the I/O interface and memory takes 8 clock cycles. In cycle-stealing mode, every byte also needs another 4 clock cycles for bus grant and release, so transferring one byte between the I/O interface and memory requires 12 clock cycles. That means 2/3 of the clock cycles are used for data transfer and 1/3 for bus access. One clock cycle moves one bit of data, so 2/3 of the 20 million clock cycles carry data, i.e. about 13,333 kbits/s are transferred between the I/O interface and memory. Taking 2% of 13,333 kbits/s gives approximately 267 kbits/s. The maximum achievable data rate in this mode is about 13,333 kbits/s.
In burst mode, once the DMA controller has acquired the bus it releases it only after the entire transfer. 20,000 x 10^3 clock cycles transfer 20,000 x 10^3 bits, which is 20,000 kbits/s; only 4 clock cycles are spent acquiring and releasing the bus, so the maximum achievable data rate is approximately 20,000 kbits/s.
To find the maximum achievable data rate, you divide the number of bits transferred by the time the transfer takes, which works out to:
(number of bits per data item / number of clock cycles for each data transfer) * clock frequency = (8 / 8) * 20 MHz = 20,000 kbits/s
I'm stuck on the second part so if you managed to find the answer, please share :)
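For what it's worth, here is a small sketch of both parts, assuming the burst-mode reading for part A (the 4 acquire/release cycles amortized over a large block) and 12 cycles per byte in cycle-stealing mode for part B:

```python
# Part A: maximum data rate in burst mode (bus acquired once, so the
# 4 cycles for acquire/release are negligible over a large block).
clock_hz = 20_000_000
cycles_per_item = 8
bits_per_item = 8

max_rate_bps = clock_hz * bits_per_item / cycles_per_item
print(f"A) max data rate = {max_rate_bps / 1000:.1f} kbits/s")   # 20000.0

# Part B: bus utilization in cycle-stealing mode at 267 kbits/s.
# Each item costs 8 transfer cycles + 4 acquire/release cycles = 12 cycles.
data_rate_bps = 267_000
cycles_per_item_total = cycles_per_item + 4
items_per_s = data_rate_bps / bits_per_item
cycles_used = items_per_s * cycles_per_item_total

print(f"B) bus clocks used = {100 * cycles_used / clock_hz:.1f} %")  # ~2.0
```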
