What is the reason for frame loss at high throughput in WLAN? - wifi

Background:
I've implemented QoS with four strict-priority queues in a wireless device: Queue-0 has the highest priority, Queue-1 the second highest, and so on. The wireless device was set to a 20 MHz channel and MCS: -1, which gives a throughput of around 40-45 Mbps. I tested this with a JDSU generating 8 streams of 10 Mbps each, i.e. a total JDSU TX rate of 80 Mbps. In my overnight test I found frame loss on Queue-0 and Queue-1, which I did not expect with the device placed in an RF chamber (lab environment). However, if I limit the JDSU TX rate to within 45 Mbps, I see no frame loss. Is there a relationship between throughput and frame loss? My topology is:
jdsu<---->wifi master<------air i/f------>wifi slave >loopback

Just my two cents. Have you considered that you are transmitting at nearly double the supported data rate of your receiving devices? Wireless nodes broadcast their supported data rates for a good reason: so other devices on the network can speak at a rate the receiver can understand. So I would say the answer to your question is an emphatic yes. Imagine if I were only capable of comprehending 1000 words per minute but you spoke at a rate of 2500 words per minute. You can safely expect that at some point I will be unable to comprehend every word you are saying.
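As a rough illustration of why loss becomes unavoidable once the offered load exceeds the air-interface capacity, here is a minimal back-of-the-envelope sketch in Python. The 45 Mbps capacity figure is the throughput quoted above; how the drops are distributed across the four queues depends on the scheduler and buffer sizes, which this sketch does not model:

# Back-of-the-envelope sketch, not a model of the actual QoS scheduler:
# once every buffer is full, any traffic above the air-interface capacity
# has to be dropped, regardless of queue priority.
link_capacity_mbps = 45.0          # measured WLAN throughput (20 MHz channel)
offered_mbps = 8 * 10.0            # JDSU: 8 streams of 10 Mbps each

excess_mbps = max(0.0, offered_mbps - link_capacity_mbps)
loss_fraction = excess_mbps / offered_mbps
print(f"offered {offered_mbps:.0f} Mbps, capacity {link_capacity_mbps:.0f} Mbps")
print(f"excess {excess_mbps:.0f} Mbps -> roughly {loss_fraction:.0%} of frames dropped overall")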

Related

CK (tCK, nCK) unit ambiguity in DDR3 standard/datasheets?

I am designing a simplistic memory controller and PHY on an Artix-7 FPGA but am having problems reading the datasheet. The timings in the memory part's datasheet (and in the JEDEC JESD79-3F doc) are expressed in CK/tCK/nCK units, which are in my opinion ambiguous if the memory is not running at its nominal frequency (e.g. a clock lower than 666 MHz for a 1333 MT/s module).
If I run a 1333 MT/s module at a frequency of 300 MHz -- still allowed with DLL on, as per the datasheet speed bins, -- is the CK/tCK/nCK unit equal to 1.5 ns (from the module's native 666 MHz), or 3.33 ns (from the frequency it is actually run at)? On one hand it makes sense that certain delays are constant, but then again some delays are expressed relative to the clock edges on the CK/CK# pins (like CL or CWL).
That is to say, some timing parameters in the datasheet only change when changing speed bins. E.g. tRP is 13.5 ns for a 1333 part, which is also backwards compatible with the tRP of 13.125 ns of a 1066 part -- no matter the chosen operating frequency of the physical clock pins of the device.
But then, running a DDR3 module at 300 MHz only allows usage of CL = CWL = 5, which is again expressed in "CK" units. To my understanding, this means 5 periods of the input clock, i.e. 5 * 3.33 ns.
I suppose all I am asking is whether the "CK" (or nCK or tCK) unit is tied to the chosen speed bin (tCK = 1.5 ns when choosing DDR3-1333) or the actual frequency of the clock signal provided to the memory module by the controlling hardware (e.g. 3.3 ns for the 600 MT/s mode)?
This is the response of u/Allan-H on reddit who has helped me reach a conclusion:
When you set the CL in the mode register, that's the number of clocks that the chip will wait before putting the data on the pins. That clock is the clock that your controller is providing to the chip (it's SDRAM, after all).
It's your responsibility to ensure that the number of clocks you program (e.g. CL=5) when multiplied by the clock period (e.g. 1.875ns) is at least as long as the access time of the RAM. Note that you program a number of clocks, but the important parameter is actually time. The RAM must have the data ready before it can send it to the output buffers.
Now let's run the RAM at a lower speed, say 312.5MHz (3.2ns period). We now have the option of programming CL to be as low as 3, since 3 x 3.2ns > 5 x 1.875ns.
BTW, since we are dealing with fractions of a ns, we also need to take the clock jitter into account.
Counterintuitively, the DRAM chip doesn't know how fast it is; it must be programmed with that information by the DRAM controller. That information might be hard coded into the controller (e.g. for an FPGA implementation) or written by software, which would typically read the SPD EEPROM on the DIMM to work out the speed grade and then write the appropriate values into the DRAM controller.
This also explains timing values defined as e.g. "Greater of 3CK or 5ns". In this case, the memory chip cannot respond faster than 5 ns, but the internal logic also needs 3 positive clock edges on the input CK pins to complete the action defined by this example parameter.
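To make the arithmetic above concrete, here is a small Python sketch that derives the minimum programmable CL from an access time and a clock period, and evaluates a "greater of 3 CK or 5 ns" style parameter. The access time below is just the CL=5 x 1.875 ns example from the answer, not a value from any particular datasheet:

import math

def min_cas_latency(t_aa_ns, t_ck_ns):
    # smallest CL whose duration (CL * tCK) covers the access time tAA
    return math.ceil(t_aa_ns / t_ck_ns)

def greater_of(n_ck, t_ns, t_ck_ns):
    # "greater of n CK or t ns" timing parameter, expressed in clock cycles
    return max(n_ck, math.ceil(t_ns / t_ck_ns))

t_aa_ns = 5 * 1.875                          # assumed access time (illustration only)
print(min_cas_latency(t_aa_ns, 1.875))       # -> 5 at 533 MHz (1.875 ns period)
print(min_cas_latency(t_aa_ns, 3.2))         # -> 3 at 312.5 MHz (3.2 ns period)
print(greater_of(3, 5.0, 3.33))              # -> 3: at 300 MHz, 3 CK dominates over 5 ns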

UART transfer speed

I want to check if my understanding is correct, however I cannot find any precise explanation or examples. Let's say I have UART communication set to 57600 bits/second and I am transferring 8 bit chars. Let's say I choose to have no parity and since I need one start bit and one stop bit, that means that essentially for transferring one char I would need to transfer 10 bits. Does that mean that the transfer speed would be 5760 chars/second?
Your calculation is essentially correct.
But the 5760 chars/second would be the maximum transfer rate. Since it's an asynchronous link, the UART transmitter is allowed to idle the line between character frames.
IOW the baud rate only applies to the bits of the character frame.
The rate that characters are transmitted depends on whether there is data available to keep the transmitter busy/saturated.
For instance, if a microcontroller used programmed I/O (with either polling or interrupts) instead of DMA for UART transmitting, high-priority interrupts could stall the transmissions and introduce delays between frames.
Baud rate = 57600
Time for 1 bit: 1 / 57600 s ≈ 17.36 µs
Time for a 10-bit frame ≈ 173.6 µs
This means a maximum of 1 / 173.6 µs ≈ 5760 frames (characters) per second.
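The same calculation, parameterised as a small Python sketch (the defaults match the 8N1 frame from the question):

def uart_char_rate(baud, data_bits=8, parity_bits=0, start_bits=1, stop_bits=1):
    # maximum characters per second on a saturated asynchronous link
    frame_bits = start_bits + data_bits + parity_bits + stop_bits
    bit_time_us = 1e6 / baud
    return bit_time_us, frame_bits * bit_time_us, baud / frame_bits

bit_us, frame_us, chars_per_s = uart_char_rate(57600)
print(f"bit time {bit_us:.2f} us, frame time {frame_us:.1f} us, max {chars_per_s:.0f} chars/s")
# -> bit time 17.36 us, frame time 173.6 us, max 5760 chars/s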

How long can it take to send a message of 200 Byte in an IEEE 802.15.4 beacon enabled network?

How long can it take to send a message of 200 Byte in an IEEE 802.15.4 beacon enabled network?
I am not clear about this question or how to calculate the time.
I have tried to find articles about IEEE 802.15.4.
Thank you.
IEEE 802.15.4 has three data rates: 250 kbps, 40 kbps, and 20 kbps. The transmission time varies with the rate. The calculation formula is
Time(s) = Data(bits) / Rate(bps)
For example, if the rate is 20 kbps and the data (message) is 200 bytes, the time is
(200 * 8) / (20 * 1000) = 0.08 s = 80 ms
If you use 250 kbps, the time is 6.4 ms.
Note: the time calculated here is only the on-air transmission time of the message. The actual time is generally longer because processing time is not taken into account here.
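The same formula as a short Python sketch; it computes only the raw on-air time of the payload, so beacon/superframe scheduling, CSMA-CA backoff, headers and ACKs are deliberately ignored:

def airtime_ms(payload_bytes, rate_kbps):
    # raw on-air transmission time of the payload, nothing else
    return (payload_bytes * 8) / (rate_kbps * 1000) * 1000

for rate_kbps in (20, 40, 250):              # the three IEEE 802.15.4 data rates
    print(f"{rate_kbps:>3} kbps -> {airtime_ms(200, rate_kbps):.1f} ms")
# -> 20 kbps: 80.0 ms, 40 kbps: 40.0 ms, 250 kbps: 6.4 ms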

frequency sampling limit for beaglebone adc

I intend to use the BeagleBone to sample a shaped signal on the order of 1 microsecond. I need to fit the signal afterwards, and therefore I would like a sampling rate of, let's say, 10 MHz, something that seems feasible with the PRU and libpruio. The point is, looking at the ADC specifications, there seems to be a limit at 200 kHz. Is my reasoning correct?
Thanks
You'll need additional hardware for a sampling rate of 10 MHz! libpruio isn't designed to work at that speed, and neither is the BBB hardware.
The ADC subsystem in the AM335x CPU is clocked at 24 MHz and needs 15 cycles per sample (14 in continuous mode). This leads to a maximum sample rate of 1.6 (1.71) MSamples/s. See the SRM, chapter 12, for details.
The problem is getting the samples into host memory. I couldn't get this working faster than ~250 kSamples/s (by CPU access; I didn't try DMA).
As long as you don't need more values than the FIFO can hold, you can sample a single line at up to 1.7 MHz.
BR
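A small Python sketch of the limits described in this answer; the 24 MHz ADC clock and the cycles-per-sample figures come from the answer itself, while the FIFO depth is only an assumed placeholder (check the reference manual for the real value):

adc_clock_hz = 24_000_000
cycles_per_sample = 15                     # 14 in continuous mode
max_rate = adc_clock_hz / cycles_per_sample
print(f"max hardware rate: {max_rate / 1e6:.2f} MS/s")      # ~1.60 MS/s (1.71 continuous)

fifo_depth = 64                            # assumed FIFO depth, illustration only
print(f"a full-speed burst fills the FIFO in ~{fifo_depth / max_rate * 1e6:.0f} us")

pulse_us = 1.0                             # the ~1 us signal from the question
print(f"samples across the pulse at max rate: {pulse_us * 1e-6 * max_rate:.1f}")
# -> only ~1.6 samples per 1 us pulse, so 10 MHz really needs external hardware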

What is RSSI value in 802.11 packet

I see two values of SSI Signal in an 802.11 packet when viewed in Wireshark. I would like to know which value is the correct RSSI value.
Information from wireshark:
SSI Signal: -40 dBm
SSI Noise: -100 dBm
Signal Quality: 64
Antenna: 0
SSI Signal: 60 dB
Also note that the second SSI Signal value is (SSI Signal) - (SSI Noise).
I am just confused about which one is correct. The Wikipedia entry also says that these implementations can be vendor dependent, so I am totally confused about which is the correct value.
Take my answer with a pinch of salt: this is what makes sense to me and need not be correct; if it makes sense to you, use it.
The first SSI Signal is a measurement of the Rx signal strength at/after the Rx antenna (the calculation is done at the ADC stage).
The SSI Noise is the noise at the ADC stage (probably measured noise).
The second SSI Signal is the SNR: SSI Signal - SSI Noise = -40 dBm - (-100 dBm) = 60 dB. Note that the difference of two dBm values is a ratio, so its unit is dB, not dBm.
Neither of them is actually RSSI as per the IEEE definition. RSSI is defined as a number between two limits; it does not have a dBm unit, although a lot of popular apps now give it a dBm value, which has led to significant confusion. Cisco uses values between 0 and 100, Atheros 0 to 127, etc. So by that logic, the RSSI in this case would probably be the Signal Quality value, 64.
I see two values of SSI signal in 802.11 packet when viewed in wireshark
It sounds as if the driver for the 802.11 adapter used to capture the packet is being weird and supplying both the antenna signal strength in dBm and the antenna signal strength in dB. What type of adapter was it, and what operating system is the machine that did the capture running?
"dBm", as the link above indicates, is decibels relative to 1 milliwatt of power; "dB", as the other link above indicates, is decibels relative to some unspecified arbitrary reference. dBm tells you the actual signal power at the antenna; dB doesn't - you can only compare dB values with other dB values.
Neither of those are "RSSI" as defined by 802.11; that RSSI value is also arbitrary, but it's even more arbitrary - 802.11 doesn't even say what it measures, just that larger values correspond to stronger signals, and those values are vendor-dependent.
Also note that SSI signal(second time) is the ((SSI signal) - (SSI Noise))
The writer of the driver for your adapter might not have properly read the Radiotap page about the "dB antenna signal" value (linked above), and might have thought it was supposed to be a signal-to-noise ratio, and calculated that by subtracting the noise value from the signal value (decibels are a logarithmic scale, and the quotient of two values is the difference between the logarithms of those values). I would ignore that value, and use "SSI signal" as an indication of signal strength in milliwatts (-40 dBm = 100 nanowatts, at least as per the table in the Wikipedia article on dBm).
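To make the unit discussion concrete, here is a tiny Python sketch that converts the dBm figures from the capture to absolute power and shows why the signal-minus-noise difference comes out in dB rather than dBm:

def dbm_to_mw(dbm):
    # convert a power level in dBm to milliwatts
    return 10 ** (dbm / 10)

signal_dbm, noise_dbm = -40.0, -100.0
print(f"signal power: {dbm_to_mw(signal_dbm) * 1e6:.0f} nW")    # -40 dBm  = 100 nW
print(f"noise power : {dbm_to_mw(noise_dbm) * 1e9:.1f} pW")     # -100 dBm = 0.1 pW
# subtracting two dBm values divides the underlying powers, so the result is a ratio in dB
print(f"SNR: {signal_dbm - noise_dbm:.0f} dB")                  # 60 dB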
As per the link (as of 06/10/2016) http://www.radiotap.org/suggested-fields/RSSI,
RSSI is still a "suggested" field, only usable with the OpenBSD OS.
(I was trying to get the same info with an AirPcap and a Windows machine)
