Why is the maximum data rate of 802.11p lower than 802.11a although 802.11p has a higher frequency band? - wifi

In vehicle-to-vehicle communication, the high speed of the nodes demands low latency. (Latency is the time interval between a sender node transmitting a message and a receiver node receiving it.) It should be very low. To meet this low-latency requirement, the data transmission rate must be high, and high-speed data transmission generally needs a high-frequency band.
The frequency band of IEEE 802.11p is 5.9 GHz and the frequency band of IEEE 802.11a is 5 GHz.
Why is the maximum data rate of 802.11p lower than 802.11a although 802.11p has a higher frequency band?

The main reason is the lower channel bandwidth of 802.11p (10 MHz) compared to 802.11a (20 MHz). The Shannon–Hartley theorem states that the maximum data rate that can be transmitted over a channel depends on its bandwidth (and its signal-to-noise ratio), not on the carrier frequency.
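To see the effect of halving the channel bandwidth, here is a minimal sketch of the Shannon–Hartley capacity for the two channel widths. The 20 dB SNR is an illustrative assumption, not a figure from the question:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity C = B * log2(1 + SNR) in bit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative SNR of 20 dB (an assumption for comparison purposes)
for name, bw in [("802.11p (10 MHz)", 10e6), ("802.11a (20 MHz)", 20e6)]:
    print(f"{name}: {shannon_capacity(bw, 20.0) / 1e6:.1f} Mbit/s")
```

At any fixed SNR, capacity is linear in bandwidth, so the 10 MHz channel of 802.11p caps out at half the rate of the 20 MHz channel of 802.11a regardless of the carrier frequency.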

Related

Why is a sampling frequency of 8000 Hz or below more commonly used than above such as 44100 Hz in heart sound analysis?

I have a heart sound dataset that contains recordings at different sampling frequencies: 4000 Hz and 44100 Hz. For now, I use 4000 Hz because many journals use the same fs.
My question: why are 4000 Hz or 8000 Hz used more commonly than 44100 Hz for heart sound analysis?
Sampling at 4000Hz is sufficient to capture all frequencies < 2000Hz.
This is what 2000Hz sounds like (warning, it's loud): https://www.youtube.com/watch?v=0voTVFmpVjY
As you can hear, 2000Hz has a higher pitch than just about anything you hear in heart sounds, except for some high frequency parts of swishy blood noise.
A 4000Hz sampling rate is therefore sufficient to capture everything of interest in most heart sound analyses.
Note, however, that if you're going to sample at 4000Hz, you MUST filter out everything above 2000Hz first. Otherwise all the higher-frequency noise will be aliased (folded) down into the 0-2000Hz region and will interfere with your signal.
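The folding effect is easy to demonstrate numerically. This sketch (frequencies chosen for illustration) shows that a 3000 Hz tone sampled at 4000 Hz produces exactly the same samples as a 1000 Hz tone of opposite phase, so without an anti-aliasing filter the two are indistinguishable:

```python
import math

fs = 4000       # sampling rate (Hz)
f_tone = 3000   # tone above the 2000 Hz Nyquist limit
n = 16

# Samples of the out-of-band tone
samples = [math.sin(2 * math.pi * f_tone * k / fs) for k in range(n)]

# Samples of its in-band alias at fs - f_tone = 1000 Hz (phase-inverted)
alias = [math.sin(2 * math.pi * (fs - f_tone) * k / fs) for k in range(n)]

# The two sequences cancel sample-for-sample: the 3000 Hz tone has
# folded down to 1000 Hz and corrupted the in-band region.
assert all(abs(s + a) < 1e-9 for s, a in zip(samples, alias))
print("3000 Hz at fs=4000 Hz aliases onto 1000 Hz")
```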

Why the BER for 16QAM is better than that of 32QAM

I am a little confused about the BER. I found that the BER of 16QAM is better than that of 32QAM. Is this right? If so, why do we go to higher QAM orders (32, 64, etc.)?
Thank you in advance.
If your only target were the best BER, you wouldn't even go up to 16QAM; you would stick with 4QAM/QPSK. You'd have a robust transmission, with the downside of low spectral efficiency.
16QAM can achieve a spectral efficiency of 4 bit/s/Hz, whereas 64QAM already reaches 6 bit/s/Hz. This means you can increase the bit rate by 50% compared to the previous setting. This is especially important when you have limited resources such as channels or bandwidth. In wireless transmission you typically have a bandwidth of a few MHz and no parallel channel for other users, so spectral efficiency is the key to increasing data throughput. (In fact there is something like a parallel channel, called MIMO, but you get the idea.)
See the table here for an overview of wireless transmission systems and their spectral efficiency. Spectral Efficiency
Even for more robust transmission systems (in terms of BER), you can pick a relatively high modulation order and use the additional bits for redundant information. In case of a bit error, the receiver is then able to repair the original content. This is called Forward Error Correction.
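The BER-versus-order trade-off can be made concrete with the standard textbook nearest-neighbour approximation for Gray-coded square M-QAM in AWGN. A rough sketch (the 10 dB Eb/N0 operating point is an illustrative assumption):

```python
import math

def qam_ber(m: int, ebn0_db: float) -> float:
    """Approximate BER of Gray-coded square M-QAM in AWGN
    (nearest-neighbour textbook approximation)."""
    k = math.log2(m)                       # bits per symbol
    ebn0 = 10 ** (ebn0_db / 10)
    arg = math.sqrt(3 * k / (m - 1) * ebn0)
    q = 0.5 * math.erfc(arg / math.sqrt(2))  # Gaussian Q-function
    return (4 / k) * (1 - 1 / math.sqrt(m)) * q

for m in (4, 16, 64):
    print(f"{m}-QAM @ 10 dB Eb/N0: BER ~ {qam_ber(m, 10.0):.2e}")
```

Running this shows BER rising monotonically with the modulation order at a fixed Eb/N0, which is exactly the penalty that the extra spectral efficiency (and FEC) has to pay for.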

How many nodes a typical CAN/LIN/MOST Networks will contain?

I would like to know how many nodes a typical network (CAN/LIN/MOST) contains, and on what basis we decide that.
There's no fixed number but it depends on multiple factors:
Baud rate: the lower the baud rate, the more nodes you can attach. Signals take time to propagate along the bus, and a higher baud rate cannot tolerate that delay.
Wiring: every node adds capacitance to the bus, so your wiring scheme also affects the node count.
Signal strength weakens as the bus length and node count increase, so repeaters may be required.
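The baud-rate/propagation trade-off for CAN can be sketched as a back-of-the-envelope calculation. The propagation speed and the allowed delay fraction below are assumptions for illustration, not values from any CAN specification:

```python
PROPAGATION_SPEED = 2e8  # m/s, roughly 2/3 of c in copper (assumption)

def max_bus_length(bit_rate: float, delay_fraction: float = 0.4) -> float:
    """Very rough upper bound on CAN bus length: the round-trip
    propagation delay must stay a small fraction of one bit time
    so that in-bit arbitration and acknowledgement still work.
    delay_fraction = 0.4 is a hand-picked illustrative budget."""
    bit_time = 1.0 / bit_rate
    return delay_fraction * bit_time * PROPAGATION_SPEED / 2

for rate in (1e6, 500e3, 125e3):
    print(f"{rate / 1e3:.0f} kbit/s -> ~{max_bus_length(rate):.0f} m")
```

With these assumptions, 1 Mbit/s allows on the order of 40 m of bus, consistent with the commonly quoted rule of thumb; halving the bit rate roughly doubles the allowable length.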

How can I find process noise and measurement noise in a Kalman filter if I have a set of RSSI readings?

I have RSSI readings but no idea how to find the measurement and process noise. What is the way to find those values?
Not at all. RSSI stands for "Received Signal Strength Indicator" and says absolutely nothing about the signal-to-noise ratio relevant to your Kalman filter. RSSI is not a well-defined thing; it can mean a million things:
Defining the "strength" of a signal is a tricky thing. Imagine you're sitting in a car with an FM radio. What does the RSSI bars on that radio's display mean? Maybe:
The amount of Energy passing through the antenna port (including noise, because at this point no one knows what noise and signal are)?
The amount of Energy passing through the selected bandpass for the whole ultra shortwave band (78-108 MHz, depending on region) (incl. noise)?
Energy coming out of the preamplifier (incl. Noise and noise generated by the amplifier)?
Energy passing through the IF filter, which selects your individual station (is that already the signal strength as you want to define it?)?
RMS of the voltage observed by the ADC (the ADC probably samples much higher than your channel bandwidth) (is that the signal strength as you want to define it?)?
RMS of the digital values after a digital channel selection filter (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation and audio frequency filtering for a mono mix (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of digital values in a stereo audio signal (i.t.t.s.s.a.y.w.t.d.i?) ?
...
as you can imagine, for systems like FM radios, this is still relatively easy. For things like mobile phones, multichannel GPS receivers, WiFi cards, digital beamforming radars etc., RSSI really can mean everything or nothing at all.
You will have to mathematically define a way to describe what your noise is. Then you will need to find the formula that describes your exact implementation of "RSSI", and only then can you deduce whether knowing the RSSI says anything about the process noise.
A Kalman Filter is a mathematical construct for computing the expected state of a system that is changing over time, given an initial state and noisy measurements of that system. The key to the "process noise" component of this is the fact that the system is changing. The way that the system changes is the process.
Your state might change due to manual control or due to the nature of the system. For example, if you have a car on a hill, it can roll down the hill naturally (described by the state transition matrix), or you might drive it down the hill manually (described by the control input matrix). Any noise that might affect these inputs - wind, bumps, twitches - can be described with the process noise.
You can measure the process noise the way you would measure variance in any system - take the expected dynamics and compare them with the true dynamics to generate a covariance matrix.
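As a concrete sketch of the measurement side: if you can log RSSI with the receiver held still, all the variation in that log is measurement noise, so its sample variance is a reasonable estimate of R. The synthetic log and the hand-tuned Q below are assumptions for illustration:

```python
import random
import statistics

# Hypothetical static RSSI log (dBm); in practice, record with the
# receiver stationary so all variation is measurement noise.
random.seed(0)
readings = [-70 + random.gauss(0, 2.0) for _ in range(500)]

# Measurement noise R: sample variance of the static log.
R = statistics.variance(readings)

# Process noise Q: how much the true RSSI may drift per step.
# This is a hand-tuned assumption; tune it for your setup.
Q = 0.01

# Minimal 1-D Kalman filter over the readings.
x, P = readings[0], 1.0            # initial state and covariance
for z in readings[1:]:
    P += Q                          # predict: state may have drifted
    K = P / (P + R)                 # Kalman gain
    x += K * (z - x)                # update with the new measurement
    P *= (1 - K)

print(f"R ~ {R:.2f}, filtered RSSI ~ {x:.1f} dBm")
```

The same variance-of-residuals idea extends to the process noise: compare the predicted dynamics against the observed ones and take the covariance of the differences, as described above.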

What limits the data rate through a medium from increasing indefinitely?

We know data rate is bits per second. It can also be considered as the baud rate (symbols per second) times the number of bits per symbol. So to increase the data rate, we can increase the baud rate or increase the number of bits per symbol. Why can't we keep increasing these two? Can someone explain what happens in these two cases separately?
This is essentially a physics question. We can play all sorts of games with how to physically represent a signal (hence, getting more bits per baud), but at the end of the day you can only physically convey so much information for any given rate of change of a signal. If you want to communicate faster, you have to up the frequency, which means having signals that change faster in time -- and nature ultimately limits how fast you can change the signal.
See:
http://en.wikipedia.org/wiki/Nyquist_rate
This gets even worse when you add noise:
http://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem
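Putting the two limits side by side makes the point numerically. The noise-free Nyquist rate (2B symbols/s times bits per symbol) grows without bound as you pack more bits into each symbol, but the Shannon capacity at any finite SNR is a hard ceiling. The 1 MHz bandwidth and 20 dB SNR here are illustrative assumptions:

```python
import math

B = 1e6                                          # 1 MHz channel (assumption)
snr_db = 20.0
shannon = B * math.log2(1 + 10 ** (snr_db / 10))  # hard capacity ceiling

# Nyquist rate keeps growing with bits per symbol...
for bits_per_symbol in (1, 2, 4, 6, 8):
    nyquist = 2 * B * bits_per_symbol
    status = "ok" if nyquist <= shannon else "exceeds Shannon limit"
    print(f"{bits_per_symbol} bit/symbol: {nyquist / 1e6:.0f} Mbit/s ({status})")

print(f"Shannon capacity at {snr_db} dB SNR: {shannon / 1e6:.2f} Mbit/s")
```

Beyond the crossover point, no choice of constellation can be decoded reliably: the extra bits per symbol are simply swallowed by the noise.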
