Why is a sampling frequency of 8000 Hz or below more commonly used than higher rates such as 44100 Hz in heart sound analysis? - signal-processing

I have a heart sound dataset that contains recordings at different sampling frequencies: 4000 Hz and 44100 Hz. For now, I am using the 4000 Hz recordings because many journal papers use the same fs.
My question: why are 4000 Hz or 8000 Hz used for heart sound analysis more commonly than 44100 Hz?

Sampling at 4000 Hz is sufficient to capture all frequencies below 2000 Hz (the Nyquist frequency).
This is what 2000 Hz sounds like (warning, it's loud): https://www.youtube.com/watch?v=0voTVFmpVjY
As you can hear, 2000 Hz has a higher pitch than just about anything you hear in heart sounds, except for some high-frequency parts of swishy blood noise.
A 4000 Hz sampling rate is therefore sufficient to capture everything of interest in most heart sound analyses.
Note, however, that if you're going to sample at 4000 Hz, you MUST filter out everything above 2000 Hz first. Otherwise all the higher-frequency noise will be aliased (folded) down into the 0-2000 Hz band and will interfere with your signal.
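If your 44100 Hz recordings need to be brought down to 4000 Hz in software, a polyphase resampler applies that anti-aliasing low-pass for you. A minimal sketch in Python/SciPy, where the array x_44k stands in for one of your recordings:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 4000
x_44k = np.random.randn(10 * fs_in)   # placeholder for a 10 s heart-sound recording

# resample_poly low-pass filters before decimating, so content above the new
# Nyquist frequency (2000 Hz) is removed instead of being aliased into the
# band of interest. 44100 * 40 / 441 = 4000 exactly.
x_4k = resample_poly(x_44k, up=40, down=441)

print(len(x_44k), len(x_4k))          # 441000 -> 40000 samples
```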

Related

Why is the Number of Frequency Bins Determined by the Frame Size in the DFT?

I'm currently working with Fourier transforms and I notice that the output of an FFT usually has the dimensions (n_fft, ) where n_fft is the number of samples to consider in the FFT, though some implementations discard frequency bins above the Nyquist frequency. What I can't wrap my head around is why the frequency resolution is dependent on the number of samples considered in the FFT.
Could someone please explain the intuition behind this?
The DFT is a change of basis from the time domain to the frequency domain. As Cris put it, "If you have N input samples, you need at least N numbers to fully describe the input."
"some implementations discard frequency bins above the Nyquist frequency."
Yes, that's right. When the input signal is real-valued, its spectrum is Hermitian symmetric. The components above the Nyquist frequency are complex conjugates of the lower-frequency components, so it is common practice to use real-to-complex FFTs to skip explicitly computing and storing them. In other words, if you have N real-valued samples, roughly N/2 complex numbers (N/2 + 1 bins, with the DC and Nyquist bins purely real) are enough to represent them.
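As a small illustration (a NumPy sketch, not part of the original answer), compare the full complex FFT with the real-to-complex FFT of the same real signal:

```python
import numpy as np

N = 8
x = np.random.randn(N)        # real-valued input

X_full = np.fft.fft(x)        # N complex bins
X_half = np.fft.rfft(x)       # N//2 + 1 complex bins (DC up to Nyquist)

print(X_full.shape, X_half.shape)    # (8,) (5,)
# The bins above Nyquist are conjugates of the lower bins, so nothing is lost:
print(np.allclose(X_full[5:], np.conj(X_full[1:4])[::-1]))   # True
```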

Why is the maximum data rate of 802.11p lower than 802.11a although 802.11p has a higher frequency band?

In vehicle-to-vehicle communication, the high speeds of the nodes require low latency. (Latency is the time interval between a sender node sending a message and a receiver node receiving it.) It should be very low. To achieve this low-latency requirement in vehicle-to-vehicle communication, the data transmission speed should be high, and high-speed data transmission needs high frequency ranges.
The frequency band of IEEE 802.11p is 5.9 GHz and the frequency band of IEEE 802.11a is 5 GHz.
Why is the maximum data rate of 802.11p lower than 802.11a although 802.11p has a higher frequency band?
The main reason is the lower channel bandwidth of 802.11p (10 MHz) compared to 802.11a (20 MHz). The Shannon-Hartley theorem states that the maximum data rate that can be transmitted over a channel depends on its bandwidth (for a given signal-to-noise ratio).
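To see the effect of the channel bandwidth on that bound, here is a small illustrative calculation; the 20 dB SNR is an assumed figure, chosen only so the two channels are compared under identical link conditions:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_db = 20   # assumed, identical for both channels
for name, bw in [("802.11p (10 MHz)", 10e6), ("802.11a (20 MHz)", 20e6)]:
    print(f"{name}: {shannon_capacity(bw, snr_db) / 1e6:.1f} Mbit/s upper bound")
```

Halving the bandwidth halves the capacity bound regardless of the carrier frequency, which is why operating at 5.9 GHz does not buy 802.11p a higher maximum data rate.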

Implementing a phase demodulator in software

I'm currently attempting to send and receive some BPSK-modulated data through sound. Currently, I'm using Goertzel's algorithm as a bandpass filter for demodulation. I have no formal training in signal processing.
Given a sample rate of 44100 Hz and a bucket size of 100, my intuition says that generating a wave at a frequency that is a multiple of 441 Hz should result in me picking up a relatively constant phase. At other frequencies, the phase I detect should drift.
However, my current implementation shows a drift in phase when detecting a generated sound wave over the course of a second (around 90 degrees). Is this to be expected, or a sign of a flaw in my implementation of Goertzel's algorithm?
Furthermore, is there a better, perhaps obvious way to detect the phase of a wave at a specific frequency than using Goertzel's algorithm?
A slow phase drift can be the result of a small difference in the clock frequencies of the transmitter and receiver. This is to be expected.
Usually BPSK data is differentially encoded so you only need to detect the moments when the phase shifts by 180 degrees, and any slow phase drift or offset can be easily ignored.
You will need to perform some form of carrier recovery and symbol recovery to track and correct offsets between the transmitter and receiver clocks.
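For reference, here is a minimal sketch (not the asker's code) of a Goertzel recursion that returns both magnitude and phase for a single frequency; the function name goertzel_phase and the test signal are illustrative. With fs = 44100 Hz and 100-sample blocks, a 441 Hz tone lands exactly on a bin and the per-block phase stays constant; a tone generated with a slightly mismatched clock would show the slow drift described above.

```python
import numpy as np

def goertzel_phase(samples, fs, target_freq):
    """Goertzel recursion for a single DFT bin; returns (magnitude, phase)."""
    n = len(samples)
    k = int(round(n * target_freq / fs))    # nearest DFT bin to the target frequency
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    X = np.exp(1j * w) * s_prev - s_prev2   # complex DFT coefficient X(k), phase included
    return np.abs(X), np.angle(X)

# A 441 Hz tone sampled at 44100 Hz, analysed in 100-sample blocks:
fs, f0, N = 44100, 441.0, 100
t = np.arange(10 * N) / fs
tone = np.cos(2 * np.pi * f0 * t + 0.3)
phases = [goertzel_phase(tone[i:i + N], fs, f0)[1] for i in range(0, len(tone), N)]
print(np.round(phases, 3))   # roughly constant (about 0.3 rad) because f0 is a multiple of fs/N
```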

How can I find process noise and measurement noise in a Kalman filter if I have a set of RSSI readings?

I have RSSI readings but no idea how to find the measurement and process noise. What is the way to find those values?
Not at all. RSSI stands for "Received Signal Strength Indicator" and says absolutely nothing about the signal-to-noise ratio related to your Kalman filter. RSSI is not a "well-defined" thing; it can mean a million things:
Defining the "strength" of a signal is a tricky thing. Imagine you're sitting in a car with an FM radio. What do the RSSI bars on that radio's display mean? Maybe:
The amount of energy passing through the antenna port (including noise, because at this point no one knows what noise and signal are)?
The amount of energy passing through the selected bandpass for the whole ultra shortwave band (78-108 MHz, depending on region) (incl. noise)?
Energy coming out of the preamplifier (incl. noise and noise generated by the amplifier)?
Energy passing through the IF filter, which selects your individual station (is that already the signal strength as you want to define it?)?
RMS of the voltage observed by the ADC (the ADC probably samples much higher than your channel bandwidth) (is that the signal strength as you want to define it?)?
RMS of the digital values after a digital channel selection filter (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of the digital values after FM demodulation and audio frequency filtering for a mono mix (i.t.t.s.s.a.y.w.t.d.i?)?
RMS of digital values in a stereo audio signal (i.t.t.s.s.a.y.w.t.d.i?) ?
...
As you can imagine, for systems like FM radios this is still relatively easy. For things like mobile phones, multichannel GPS receivers, WiFi cards, digital beamforming radars etc., RSSI really can mean everything or nothing at all.
You will have to mathematically define a way to describe what your noise is. Then you will need to find the formula that describes your exact implementation of what "RSSI" is, and only then can you deduce whether knowing RSSI says anything about process noise.
A Kalman Filter is a mathematical construct for computing the expected state of a system that is changing over time, given an initial state and noisy measurements of that system. The key to the "process noise" component of this is the fact that the system is changing. The way that the system changes is the process.
Your state might change due to manual control or due to the nature of the system. For example, if you have a car on a hill, it can roll down the hill naturally (described by the state transition matrix), or you might drive it down the hill manually (described by the control input matrix). Any noise that might affect these inputs - wind, bumps, twitches - can be described with the process noise.
You can measure the process noise the way you would measure variance in any system - take the expected dynamics and compare them with the true dynamics to generate a covariance matrix.
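A rough sketch of that idea for scalar RSSI data; the file name, the static interval, and the random-walk process model are all assumptions made purely for illustration:

```python
import numpy as np

# Hypothetical RSSI log in dBm, one reading per line; assume nothing moved
# during the first 200 readings.
rssi = np.loadtxt("rssi_log.txt")

# Measurement noise R: during a static interval the true state is constant,
# so all variation in the readings is measurement noise.
R = np.var(rssi[:200], ddof=1)

# Process noise Q: compare the expected dynamics (a random walk, i.e. "the
# value stays where it was") with what actually happened. The first difference
# of the readings contains the process noise once and the measurement noise
# twice, hence the 2*R correction.
diffs = np.diff(rssi)
Q = max(np.var(diffs, ddof=1) - 2.0 * R, 1e-6)

print(f"R ≈ {R:.2f} dB², Q ≈ {Q:.2f} dB²")
```

In practice Q and R are often just tuned by hand until the filter tracks well; an estimate like this only gives a starting point.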

Recover the original analog signal (time-varying voltage) from its digitized version?

I have been looking into how to convert my digital data into analog.
So, I have a two-column ASCII data file (x: time, y: voltage amplitude) which I would like to convert into an analog signal (varying voltage with time). There are digital-to-analog converters, but the good ones are quite expensive. There should be a simpler way to achieve this.
Ultimately what I'd like to do is reconstruct the original time-varying voltage, which was sampled every nanosecond and recorded as an ASCII data file.
I thought I might feed the data into my laptop's sound card and re-generate the time-varying voltage, which I can then feed into the analyzer via the audio jack. Does this sound feasible?
I am not looking into recovering the "shape" but the signal (voltage) itself.
Puzzled on several counts.
You say you want to convert into an analog signal (varying voltage with time). But what you already have, the discrete signal, is indeed a "varying voltage with time", only that both the values (voltages) and the times are discrete. That's the way computers (digital equipment in general) work.
Only when the signal goes out to some non-discrete medium (e.g. a classical audio cable and plug) do we have an analog signal. In fact, the sound card of your computer is at its core a "digital-to-analog converter".
So, it appears you are not trying to do some digital processing of your signal (interpolation, or whatever); you are not dealing with computer programming, but with a hardware problem: getting the signal onto a cable. If so, SO is not the proper place; you might try https://electronics.stackexchange.com/ ...
But, on another note, you say that your data was "sampled every nano-second". That means 1 billion samples per second, or a sampling frequency of 1 GHz. That's a ridiculously high frequency, at least in the audio world. You can't output that to a sound card, which is limited to the audio range (about 48 kHz = 48000 samples per second).
You just want to fit a curve to the data. Assuming the sampling rate is sufficient, a third-order polynomial would be plenty. At each point N, you fit a cubic polynomial to points N-1, N, N+1, and N+2, and then you have an analytic expression for the data values between points N and N+1. Shift over one point, and repeat. You can average the values from multiple successive curves, if you want.
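A minimal sketch of that per-segment cubic fit in Python/NumPy; the file name samples.txt is illustrative, and the fit is done in a normalised local time variable so that nanosecond-scale sample spacing does not make it ill-conditioned:

```python
import numpy as np

# Two-column ASCII file: time [s], voltage [V].
t, v = np.loadtxt("samples.txt", unpack=True)

def local_cubic(t_query):
    """Evaluate the signal between samples by fitting a cubic to the four
    samples surrounding the query point, as described above."""
    n = int(np.clip(np.searchsorted(t, t_query) - 1, 1, len(t) - 3))
    idx = np.arange(n - 1, n + 3)                 # points N-1, N, N+1, N+2
    dt = t[n + 1] - t[n]
    coeffs = np.polyfit((t[idx] - t[n]) / dt, v[idx], 3)
    return np.polyval(coeffs, (t_query - t[n]) / dt)

# Value 30% of the way between the 11th and 12th samples:
print(local_cubic(t[10] + 0.3 * (t[11] - t[10])))
```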
